• MataVatnik@lemmy.world
    6 months ago

    AI did boom, but people don’t realize the peak happened a year ago. Now all we have is latecomers with FOMO. It’s gonna be all incremental gains from here on.

    • raspberriesareyummy@lemmy.world
      6 months ago

      AI did boom, but people don’t realize the peak happened a year ago.

      A simple control algorithm “if temperature > LIMIT turnOffHeater” is AI, albeit an incredibly limited one.

      LLMs are not AI. Please don’t parrot marketing bullshit.

      The former has an intrinsic understanding of a relationship grounded in reality; the latter has nothing of the sort.
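      A minimal sketch of that thermostat control law (the constant and function name are my own illustration, not from any real system):

```python
LIMIT = 21.0  # hypothetical temperature limit in °C

def heater_should_run(temperature: float) -> bool:
    """The entire 'intelligence': one rule tied to a real-world quantity."""
    if temperature > LIMIT:
        return False  # turnOffHeater
    return True       # keep heating

print(heater_should_run(25.0))  # False: too warm, heater off
print(heater_should_run(18.0))  # True: too cold, heater on
```

      Trivial as it is, the rule encodes a causal relationship (heat raises temperature), which is the point of calling it a very limited AI.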

      • MataVatnik@lemmy.world
        6 months ago

        I see what you’re getting at: LLMs don’t necessarily solve a problem, they just mimic patterns in data.

        • raspberriesareyummy@lemmy.world
          6 months ago

          That is indeed exactly my point. LLMs are just a language-tailored expression of deep learning, which can be incredibly useful, but should never be confused with any kind of intelligence (i.e. drawing logical conclusions).

          I appreciate that you see my point and admit that it makes some sense :)

          Example where I think pattern recognition by deep learning can be extremely useful:

          • recheck medical imaging data of patients who have already been screened by a doctor, and flag some of it for a re-check by a second doctor. This could improve the chances of e.g. early cancer detection, without any real risk from a false detection, because again, a real doctor looks at the flagged results in detail before a patient is even alerted to a potential diagnosis
          • pre-filter large amounts of data for potential matches -> e.g. exoplanet search by certain patterns (planet hunters lets humans do this as crowdsourcing)
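          The first use case could be sketched roughly like this (the scoring function is a hypothetical stand-in for a trained deep-learning model; the field names are my own):

```python
def anomaly_score(scan: dict) -> float:
    # Hypothetical stand-in for a trained deep-learning model;
    # returns a probability-like score in [0, 1].
    return scan["model_score"]

def flag_for_second_review(scans: list, threshold: float = 0.8) -> list:
    """Flag already-screened scans whose score exceeds the threshold.

    A flag only triggers a second human review; it never reaches the
    patient directly, so a false positive costs a doctor's time, not
    a false diagnosis.
    """
    return [s["id"] for s in scans if anomaly_score(s) >= threshold]

scans = [
    {"id": "A", "model_score": 0.95},
    {"id": "B", "model_score": 0.40},
    {"id": "C", "model_score": 0.85},
]
print(flag_for_second_review(scans))  # ['A', 'C']
```

          The design point is that the model only ever narrows down what humans look at; it never makes a decision on its own.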

          But I am afraid that people who do not see why a very simple algorithm already counts as AI, yet do consider LLMs to be AI, have mentally decided to reserve the label “AI” for whatever seems “AGI”-like / “human-like”. They mistake the patterns of LLMs for a conscious being, and that is incredibly dangerous in terms of trusting the answers LLMs give.

          Why do I think they subconsciously imply (self-)awareness / consciousness? Because refusing to count a control mechanism like a simple room thermostat as (very limited) AI means viewing it as “too simple” to be AI. A person with such a view is making a qualitative distinction between control laws and “AI”, where a quantitative distinction between “simple AI” and “advanced AI” would be appropriate.

          And such a qualitative distinction, one that elevates a complex word-guessing machine to “intelligence”, can only be made by people who actually believe there is understanding behind those word predictions.

          That’s my take on this.