• tal@lemmy.today · 3 months ago (edited)

    The bill, passed by the state’s Senate last month and set for a vote from its general assembly in August, requires AI groups in California to guarantee to a newly created state body that they will not develop models with “a hazardous capability,” such as creating biological or nuclear weapons or aiding cyber security attacks.

    I don’t see how you could realistically provide that guarantee.

    I mean, you could put together some kind of best-effort scheme to make misuse more difficult, maybe.

    If we knew how to make AI (and this goes beyond just LLMs and the like) reliably avoid doing hazardous things, we'd have solved the Friendly AI problem. Like, that's a good goal to work toward, maybe. But the point is, we're not there.

    Like, I'd be willing to see the state fund research on that problem, maybe. But I don't see how just mandating that models conform to it is going to be implementable.