• treefrog@lemm.ee
    5 months ago

    I thought the main issue was that AI doesn't really know how to say "I don't know" or second-guess itself; that would take a much more robust architecture with multiple feedback loops. Like a brain.

    Anyway, LLMs aren't the only AI that do this, so being trained on Facebook data certainly isn't the whole issue.

    • dan1101@lemm.ee
      5 months ago

      Yeah, it's the old garbage-in, garbage-out problem; the AI algorithms don't really understand what they're outputting.

      I think at this point voice recognition and text generation AI would be more useful as something like a phone assistant. You could tell it complex things like "Mute my phone for the next 2 hours" or "Notify me if I receive an email from John Smith." Those sorts of things could easily be done by AI algorithms that A) understand your voice and B) are programmed to know all the features of the OS. Hopefully, with a known dataset like a phone OS, there wouldn't be hallucination problems; the AI could just act as an OS concierge, along the lines of the sketch below.
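
      Roughly what I mean (the intent names, parameters, and handlers here are all made up and just stand in for whatever hooks the OS actually exposes): the voice/LLM front end is only allowed to map what it hears onto a fixed whitelist of actions, so it can never invent a feature the phone doesn't have.

      ```python
      # Hypothetical "OS concierge": the assistant must emit one of a fixed set
      # of intents; anything outside the whitelist is refused, not guessed.
      from dataclasses import dataclass
      from typing import Callable


      @dataclass
      class Intent:
          name: str
          params: dict


      # Whitelist of actions the phone actually supports (stand-ins for real OS calls).
      SUPPORTED_ACTIONS: dict[str, Callable[[dict], str]] = {
          "mute_phone": lambda p: f"Muting phone for {p['minutes']} minutes",
          "watch_email": lambda p: f"Will notify you about email from {p['sender']}",
      }


      def handle(intent: Intent) -> str:
          action = SUPPORTED_ACTIONS.get(intent.name)
          if action is None:
              # Unknown request: say so instead of hallucinating a feature.
              return "Sorry, I can't do that on this phone."
          return action(intent.params)


      # "Mute my phone for the next 2 hours"
      print(handle(Intent("mute_phone", {"minutes": 120})))
      # "Notify me if I receive an email from John Smith"
      print(handle(Intent("watch_email", {"sender": "John Smith"})))
      ```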