• Buffalox@lemmy.world · 3 months ago

    It’s kind of funny how AI has the exact same problems some humans have.
    I always thought AIs wouldn’t have those kinds of problems, because they’d be carefully fed accurate information.
    Instead they’re taught from things like Facebook and the thing formerly known as Twitter.
    What an idiotic timeline we’re in. LOL

    • treefrog@lemm.ee · edited · 3 months ago

      I thought the main issue was that AIs don’t really know how to say “I don’t know” or second-guess themselves; that would take a much more robust architecture with multiple feedback loops. Like a brain.

      Anyway, LLMs aren’t the only AI that do this, so being trained on Facebook data certainly isn’t the whole issue.

      • dan1101@lemm.ee · 3 months ago

        Yeah, it’s the old garbage-in, garbage-out problem: the AI algorithms don’t really understand what they’re outputting.

        I think at this point voice-recognition and text-generation AI would be more useful as something like a phone assistant. You could tell it complex things like “Mute my phone for the next 2 hours” or “Notify me if I receive an email from John Smith.” Those sorts of things could easily be done by AI algorithms that (a) understand your voice and (b) are programmed to know all the features of the OS. Hopefully, with a known dataset like a phone OS, there shouldn’t be hallucination problems; the AI could just act as an OS concierge.
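        That kind of closed-vocabulary “OS concierge” can be sketched as a simple intent matcher. Everything below (the command patterns, the action names) is made up for illustration; the point is that the assistant can only map speech onto a fixed menu of OS actions, so there is nothing for it to hallucinate:

```python
import re

# Toy intent matcher: maps a transcribed utterance onto a fixed set of
# OS actions. Unknown requests fall through to None instead of being
# improvised, which is where hallucination would otherwise creep in.
PATTERNS = [
    (re.compile(r"mute my phone for the next (\d+) hours?", re.I),
     lambda m: {"action": "mute", "duration_hours": int(m.group(1))}),
    (re.compile(r"notify me if i receive an email from (.+)", re.I),
     lambda m: {"action": "email_alert", "sender": m.group(1).strip()}),
]

def parse_command(utterance: str):
    """Return a structured intent dict, or None if the command is unsupported."""
    for pattern, build in PATTERNS:
        match = pattern.search(utterance)
        if match:
            return build(match)
    return None  # unknown command -> the assistant can honestly say "I can't do that"

print(parse_command("Mute my phone for the next 2 hours"))
# {'action': 'mute', 'duration_hours': 2}
```

        A real assistant would use a speech model for step (a) and a much richer grammar for step (b), but the safety property is the same: the output vocabulary is bounded by the OS features, not by whatever the model feels like generating.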

  • AdrianTheFrog@lemmy.world · 3 months ago

    They can’t. AI has hallucinations, and Google has shown that AI can’t reliably lean on external sources either.

    • FiniteBanjo@lemmy.today · edited · 3 months ago

      At least LLMs will. The only real fix we’ve seen is running the output through additional specialized LLMs to try to massage out the errors, but that just increases cost and scale for marginal gains.
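      That “extra LLM to massage out errors” pattern is basically a generate-then-critique loop. The functions below are stand-in stubs, not any real model API; they just show the shape of the loop and why each pass multiplies the cost:

```python
MAX_PASSES = 3  # every pass is another full model call, so cost grows per loop

def generate(prompt: str) -> str:
    """Stand-in stub for the primary LLM (returns a deliberately flawed draft)."""
    return "draft answer [with an error]"

def critique_and_fix(draft: str) -> tuple[str, bool]:
    """Stand-in stub for the specialized checker LLM.

    Returns the revised draft and whether it now looks clean.
    """
    revised = draft.replace(" [with an error]", "")
    return revised, "[" not in revised

def answer(prompt: str) -> str:
    draft = generate(prompt)
    for _ in range(MAX_PASSES):  # bounded: more passes = more model calls = more cost
        draft, clean = critique_and_fix(draft)
        if clean:
            break
    return draft

print(answer("any prompt"))  # "draft answer"
```

      In practice the checker is itself an LLM with the same failure modes, which is why this only massages error rates down instead of eliminating them.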

  • Deconceptualist@lemm.ee · 3 months ago

    As others are saying, it’s 100% not possible, because LLMs are (as Google optimistically describes them) “creative writing aids”, or more accurately, predictive word engines. They run on mathematical probability models. They have zero concept of what the words actually mean, what humans are, or even what they themselves are. There’s no “intelligence” present except for filters that have been hand-coded in (which of course is human intelligence, not AI).

    “Hallucination” is a total misnomer, because the text generation isn’t tied to reality in the first place; it’s just mathematically “which next word is most likely”.

    https://arstechnica.com/science/2023/07/a-jargon-free-explanation-of-how-ai-large-language-models-work/
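    That “which next word is most likely” mechanism can be shown with a toy bigram model. The corpus here is made up, but the shape is the same as (vastly simplified) greedy LLM decoding: count which word follows which, then always emit the most frequent follower. No meaning is consulted at any point:

```python
from collections import Counter, defaultdict

# Toy next-word predictor built purely from co-occurrence counts.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows: defaultdict = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1  # tally: how often does `nxt` follow `prev`?

def next_word(word: str) -> str:
    """Emit the statistically most likely next word (greedy decoding)."""
    return follows[word].most_common(1)[0][0]

print(next_word("the"))
# "cat" — it follows "the" twice in the corpus; "mat" and "fish" only once
```

    A real LLM replaces the count table with a neural network over huge contexts, but the output is still a probability distribution over next tokens, which is why “hallucination” is just the mechanism working as designed on a claim that happens to be false.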

    • _number8_@lemmy.world · 3 months ago

      all we know about ourselves is what’s in our memories. the way normal writing or talking works is just picking what words sound best in order