Eliezer Yudkowsky (English Wikipedia)

Analysis of the information sources cited in the references of the English-language Wikipedia article "Eliezer Yudkowsky".

Ranking table of cited source websites (number of references, global rank, and English-language rank for each site).

aaai.org

  • Soares, Nate; Fallenstein, Benja; Yudkowsky, Eliezer (2015). "Corrigibility". AAAI Workshops: Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence, Austin, TX, January 25–26, 2015. AAAI Publications. Archived from the original on January 15, 2016. Retrieved October 16, 2015.

archive.org

bloomberg.com

businessinsider.com

cnbc.com

datacenterdynamics.com

fivethirtyeight.com

intelligence.org

jta.org

lesswrong.com

newramblerreview.com

newyorker.com

  • Hutson, Matthew (May 16, 2023). "Can We Stop Runaway A.I.?". The New Yorker. ISSN 0028-792X. Archived from the original on May 19, 2023. Retrieved May 19, 2023. Eliezer Yudkowsky, a researcher at the Machine Intelligence Research Institute, in the Bay Area, has likened A.I.-safety recommendations to a fire-alarm system. A classic experiment found that, when smoky mist began filling a room containing multiple people, most didn't report it. They saw others remaining stoic and downplayed the danger. An official alarm may signal that it's legitimate to take action. But, in A.I., there's no one with the clear authority to sound such an alarm, and people will always disagree about which advances count as evidence of a conflagration. "There will be no fire alarm that is not an actual running AGI," Yudkowsky has written. Even if everyone agrees on the threat, no company or country will want to pause on its own, for fear of being passed by competitors. ... That may require quitting A.I. cold turkey before we feel it's time to stop, rather than getting closer and closer to the edge, tempting fate. But shutting it all down would call for draconian measures—perhaps even steps as extreme as those espoused by Yudkowsky, who recently wrote, in an editorial for Time, that we should "be willing to destroy a rogue datacenter by airstrike," even at the risk of sparking "a full nuclear exchange."
  • Packer, George (2011). "No Death, No Taxes: The Libertarian Futurism of a Silicon Valley Billionaire". The New Yorker. p. 54. Archived from the original on December 14, 2016. Retrieved October 12, 2015.

technologyreview.com

theatlantic.com

theconversation.com

vice.com

vox.com

web.archive.org

worldcat.org

  • Hutson, Matthew (May 16, 2023). "Can We Stop Runaway A.I.?". The New Yorker. ISSN 0028-792X. Archived from the original on May 19, 2023. Retrieved May 19, 2023.

youtube.com