Analysis of information sources cited in the references of the English-language Wikipedia article "List of open letters by academics".
But Timnit Gebru, whose academic paper was cited to support that claim, wrote on Twitter on Thursday that her paper actually warned against making such inflated claims about AI.
"They basically say the opposite of what we say and cite our paper," she wrote.
Her co-author Emily Bender said the letter was a "hot mess" and was "just dripping with AI hype".
Rather than fear-mongering, the letter is careful to highlight both the positive and negative effects of artificial intelligence.
Among the research cited was "On the Dangers of Stochastic Parrots", a well-known paper co-authored by Margaret Mitchell, who previously oversaw ethical AI research at Google. Mitchell, now chief ethical scientist at AI firm Hugging Face, criticised the letter, telling Reuters it was unclear what counted as "more powerful than GPT-4".
"By treating a lot of questionable ideas as a given, the letter asserts a set of priorities and a narrative on AI that benefits the supporters of FLI," she said. "Ignoring active harms right now is a privilege that some of us don't have."