Should we trust an LLM to write legal definitions, in this case the definition of a deepfake? It seems the state rep. is unable to proofread the model's output, since he is “really struggling with the technical aspects of how to define what a deepfake was.”
Tasks like this are exactly what generative AI models are good for, as much as people on the Internet don’t want to hear it.
Things that are largely repeated from previous versions (like legislation, contracts, etc.) are pretty much perfect for it. These are just tools for already-competent people. So in theory you have GenAI crank out the boring stuff and have an expert “fill in the blanks,” so to speak.
Ideally it would be a generative AI trained specifically on legal textbooks.
I don’t know why there seem to be no LLMs trained specifically on expert subject matter.
There are, just not available publicly. Tons of enterprises (law firms included) are paying to have models trained on their data.
There are, just not available publicly.
I meant publicly available ones.
Someone should run all the law books through ChatGPT so we can have a free, open-source lawyer on our phones.
During a traffic stop: “Hold on, officer, I gotta ask my lawyer. It says to shut the hell up.”
Cop still shoots him in the head so he can learn his lesson. He pulled out his phone!
Honestly, I think this is the inevitable future. There are lots of jobs where what you’re paying for is the knowledge. And while LLMs likely won’t be as good as an actual expert, most “professionals” I’ve dealt with, both in my own professional work and when contracting out “professional” work, are not even remotely experts, and a properly trained LLM will run circles around them.
You won’t be able to buy such a thing, because machines, for some reason, aren’t allowed to be fallible the way humans are. But I can certainly see a scenario where someone takes an open-source LLM, trains it on professional materials (obtained both legally and illegally), and releases it for free, and it does a better job than 70% of “professionals”.
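For what it’s worth, that last part is already technically straightforward. Here’s a rough sketch of what it could look like: LoRA fine-tuning an open-weights model on a plain-text pile of domain material with Hugging Face transformers, datasets, and peft. The model name, corpus path, and hyperparameters below are illustrative placeholders, not a claim about how any existing product is built:

```python
# Sketch: LoRA fine-tuning an open-weights causal LM on a domain corpus.
# Model name, file path, and hyperparameters are illustrative placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_model = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # any open-weights causal LM
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Train small low-rank adapters instead of updating all base weights.
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32,
                                         lora_dropout=0.05, task_type="CAUSAL_LM"))

# One document (or chunk) per line of plain text; the path is hypothetical.
corpus = load_dataset("text", data_files={"train": "professional_corpus.txt"})["train"]
corpus = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=1024),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-lora", per_device_train_batch_size=1,
                           gradient_accumulation_steps=8, num_train_epochs=1,
                           logging_steps=50),
    train_dataset=corpus,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("domain-lora")  # saves only the small adapter weights
```

The point of the LoRA part is that only the adapter weights get trained, so a run like this fits on a single GPU and the result can be shared as a small file layered on top of the base model, which is exactly the “train it and release it for free” scenario.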