Each conversation lasted a total of five minutes. According to the paper, which was published in May, the participants judged GPT-4 to be human a shocking 54 percent of the time. Because of this, the researchers claim that the large language model has indeed passed the Turing test.
That's no better than flipping a coin, and we have no idea what the questions were. This is clickbait.
While I agree it's a relatively low percentage, the fact that judges couldn't tell and effectively picked at random is still an interesting result.
If GPT-4 were easy to distinguish, the alternative outcome would be judges almost never calling it human, not calling it human half the time.
Participants said other humans were human only 67% of the time.
That narrows the gap between the AI and actual humans to about 13 percentage points, which arguably makes the result more significant, not less.
Aye, I'd wager Claude would be closer to 58-60%. And with the model-probing work Anthropic's been publishing, we could get to ~63% on average in the next couple of years? Those last few percentage points will be difficult for an indeterminate amount of time, I imagine. But who knows. We've already blown past a ton of "limitations" that I thought I might not live long enough to see fall.