Risque de catastrophe planétaire lié à l'intelligence artificielle générale (French Wikipedia)

Analysis of the information sources cited in the references of the French-language Wikipedia article « Risque de catastrophe planétaire lié à l'intelligence artificielle générale ».


01net.com

aeon.co

  • (en) « True AI is both logically possible and utterly implausible | Aeon Essays », sur Aeon (consulté le ) : « Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion’, and the intelligence of man would be left far behind. Thus the first ultra-intelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. It is curious that this point is made so seldom outside of science fiction. It is sometimes worthwhile to take science fiction seriously. »

aiimpacts.org

analyticsindiamag.com

arxiv.org

axrp.net

bbc.co.uk

bbc.com

  • (en-GB) « Stephen Hawking warns artificial intelligence could end mankind », BBC News,‎ (lire en ligne, consulté le )
  • (en) Richard Fisher, « The intelligent monster that you should let eat you », sur www.bbc.com (consulté le )
  • Jane Wakefield, « Why is Facebook investing in AI? », BBC News,‎ (lire en ligne [archive du ], consulté le ).
  • (en-GB) « Intelligent Machines: What does Facebook want with AI? », BBC News,‎ (lire en ligne, consulté le ) :

    « Humans have all kinds of drives that make them do bad things to each other, like the self-preservation instinct... Those drives are programmed into our brain but there is absolutely no reason to build robots that have the same kind of drives. »

  • (en-GB) « Microsoft's Bill Gates insists AI is a threat », BBC News,‎ (lire en ligne, consulté le )

  • (en-GB) « How are humans going to become extinct? », BBC News,‎ (lire en ligne, consulté le )

bfmtv.com

brianchristian.org

businessinsider.com

cam.ac.uk

turingarchive.kings.cam.ac.uk

cbsnews.com

chicagotribune.com

cnbc.com

cnn.com

datafranca.org

decrypt.co

doi.org

dx.doi.org

doi.org

  • (en) Yoshija Walter, « The rapid competitive economy of machine learning development: a discussion on the social risks and benefits », AI and Ethics,‎ (ISSN 2730-5961, DOI 10.1007/s43681-023-00276-7, lire en ligne, consulté le )
  • (en) Nick Bostrom, « The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents », Minds and Machines, vol. 22, no 2,‎ , p. 71–85 (ISSN 1572-8641, DOI 10.1007/s11023-012-9281-3, lire en ligne, consulté le ).

economist.com

  • « Clever cogs », The Economist,‎ (ISSN 0013-0613, lire en ligne, consulté le ).
  • (en) « Clever cogs », The Economist,‎ (ISSN 0013-0613, lire en ligne, consulté le ) :

    « an A.I. will need to desire certain states and dislike others. Today's software lacks that ability—and computer scientists have not a clue how to get it there. Without wanting, there's no impetus to do anything. Today's computers can't even want to keep existing, let alone tile the world in solar panels. »

  • (en) « Clever cogs », The Economist,‎ (ISSN 0013-0613, lire en ligne, consulté le ) :

    « the implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking, even if the prospect seems remote »

edge.org

euronews.com

fastcompany.com

forbes.com

fortune.com

fusion.net

futureoflife.org

harvard.edu

ui.adsabs.harvard.edu

huffpost.com

  • (en) « Transcending Complacency On Superintelligent Machines », sur HuffPost, (consulté le ) : « So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilization sent us a text message saying, "We'll arrive in a few decades," would we just reply, "OK, call us when you get here -- we'll leave the lights on"? Probably not -- but this is more or less what is happening with AI. »

humanetech.com

  • (en) « The AI Dilemma », sur www.humanetech.com (consulté le )

independent.co.uk

intelligence.org

irishtimes.com

issn.org

portal.issn.org

  • Turchin et Denkenberger, « Classification of global catastrophic risks connected with artificial intelligence », AI & Society, vol. 35, no 1,‎ , p. 147–163 (ISSN 0951-5666, DOI 10.1007/s00146-018-0845-5, S2CID 19208453, lire en ligne)
  • (en-US) Gerrit De Vynck, « The debate over whether AI will destroy us is dividing Silicon Valley », Washington Post,‎ (ISSN 0190-8286, lire en ligne, consulté le )
  • (en-GB) Simon Parkin, « Science fiction no more? Channel 4’s Humans and our rogue AI obsessions », The Guardian,‎ (ISSN 0261-3077, lire en ligne, consulté le )
  • Hans-Peter Breuer, « Samuel Butler's "The Book of the Machines" and the Argument from Design », Modern Philology, vol. 72, no 4,‎ , p. 365–383 (ISSN 0026-8232, lire en ligne, consulté le )
  • (en-US) Cade Metz, « Mark Zuckerberg, Elon Musk and the Feud Over Killer Robots », The New York Times,‎ (ISSN 0362-4331, lire en ligne, consulté le )
  • (en) « Anticipating artificial intelligence », Nature, vol. 532, no 7600,‎ , p. 413 (ISSN 1476-4687, DOI 10.1038/532413a, lire en ligne, consulté le ) :

    « Machines and robots that outperform humans across the board could self-improve beyond our control—and their interests might not align with ours. »

  • « Clever cogs », The Economist,‎ (ISSN 0013-0613, lire en ligne, consulté le ).
  • (en-GB) Josh Taylor et Alex Hern, « ‘Godfather of AI’ Geoffrey Hinton quits Google and warns over dangers of misinformation », The Guardian,‎ (ISSN 0261-3077, lire en ligne, consulté le )
  • (en) Fabio Urbina, Filippa Lentzos, Cédric Invernizzi et Sean Ekins, « Dual use of artificial-intelligence-powered drug discovery », Nature Machine Intelligence, vol. 4, no 3,‎ , p. 189–191 (ISSN 2522-5839, DOI 10.1038/s42256-022-00465-9, lire en ligne, consulté le )
  • (en) Yoshija Walter, « The rapid competitive economy of machine learning development: a discussion on the social risks and benefits », AI and Ethics,‎ (ISSN 2730-5961, DOI 10.1007/s43681-023-00276-7, lire en ligne, consulté le )
  • (en-GB) Ben Doherty, « Climate change an 'existential security risk' to Australia, Senate inquiry says », The Guardian,‎ (ISSN 0261-3077, lire en ligne, consulté le )
  • (en) Nick Bostrom, « The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents », Minds and Machines, vol. 22, no 2,‎ , p. 71–85 (ISSN 1572-8641, DOI 10.1007/s11023-012-9281-3, lire en ligne, consulté le ).
  • (en) « Clever cogs », The Economist,‎ (ISSN 0013-0613, lire en ligne, consulté le ) :

    « an A.I. will need to desire certain states and dislike others. Today's software lacks that ability—and computer scientists have not a clue how to get it there. Without wanting, there's no impetus to do anything. Today's computers can't even want to keep existing, let alone tile the world in solar panels. »

  • (en) Sotala et Yampolskiy, « Responses to catastrophic AGI risk: a survey », Physica Scripta, vol. 90, no 1,‎ , p. 12 (ISSN 0031-8949, DOI 10.1088/0031-8949/90/1/018001, Bibcode 2015PhyS...90a8001S).
  • (en) Haney, « The Perils & Promises of Artificial General Intelligence », SSRN Working Paper Series,‎ (ISSN 1556-5068, DOI 10.2139/ssrn.3261254, S2CID 86743553).
  • (en) Michael Shermer, « Apocalypse AI », Scientific American, vol. 316, no 3,‎ , p. 77 (ISSN 0036-8733, PMID 28207698, DOI 10.1038/scientificamerican0317-77, lire en ligne, consulté le ) :

    « AI dystopias project a parochial alpha-male psychology onto the concept of intelligence. They assume that superhumanly intelligent robots would develop goals like deposing their masters or taking over the world »

  • (en) Michael Shermer, « Apocalypse AI », Scientific American, vol. 316, no 3,‎ , p. 77 (ISSN 0036-8733, PMID 28207698, DOI 10.1038/scientificamerican0317-77, lire en ligne, consulté le ) :

    « artificial intelligence will naturally develop along female lines: fully capable of solving problems, but with no desire to annihilate innocents or dominate the civilization. »

  • (en) Baum, « Countering Superintelligence Misinformation », Information, vol. 9, no 10,‎ , p. 244 (ISSN 2078-2489, DOI 10.3390/info9100244).
  • (en) « Clever cogs », The Economist,‎ (ISSN 0013-0613, lire en ligne, consulté le ) :

    « the implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking, even if the prospect seems remote »

  • (en) Lippens, « Imachinations of Peace: Scientifictions of Peace in Iain M. Banks's The Player of Games », Utopian Studies, vol. 13, no 1,‎ , p. 135–147 (ISSN 1045-991X, OCLC 5542757341)
  • (en-GB) Alex Hern, « Elon Musk says he invested in DeepMind over 'Terminator' fears », The Guardian,‎ (ISSN 0261-3077, lire en ligne, consulté le ) :

    « just keep an eye on what's going on with artificial intelligence. I think there is potentially a dangerous outcome there. »

  • (en-US) Caleb Garling, « Andrew Ng: Why ‘Deep Learning’ Is a Mandate for Humans, Not Just Machines », Wired,‎ (ISSN 1059-1028, lire en ligne, consulté le )
  • (en-US) Kevin Kelly, « The Myth of a Superhuman AI | Backchannel », Wired,‎ (ISSN 1059-1028, lire en ligne, consulté le )
  • (en-US) « Barack Obama Talks AI, Robo Cars, and the Future of the World », Wired,‎ (ISSN 1059-1028, lire en ligne, consulté le ) :

    « there are a few people who believe that there is a fairly high-percentage chance that a generalized AI will happen in the next 10 years. But the way I look at it is that in order for that to happen, we’re going to need a dozen or two different breakthroughs. So you can monitor when you think these breakthroughs will happen. »

  • (en-US) « Barack Obama Talks AI, Robo Cars, and the Future of the World », Wired,‎ (ISSN 1059-1028, lire en ligne, consulté le ) :

    « And you just have to have somebody close to the power cord. [Laughs.] Right when you see it about to happen, you gotta yank that electricity out of the wall, man. »

  • Barrett et Baum, « A model of pathways to artificial superintelligence catastrophe for risk and decision analysis », Journal of Experimental & Theoretical Artificial Intelligence, vol. 29, no 2,‎ , p. 397–414 (ISSN 0952-813X, DOI 10.1080/0952813x.2016.1186228, arXiv 1607.07730, S2CID 928824, lire en ligne [archive du ], consulté le )
  • (en) Carayannis et Draper, « Optimising peace through a Universal Global Peace Treaty to constrain the risk of war from a militarised artificial superintelligence », AI & Society,‎ , p. 1–14 (ISSN 0951-5666, PMID 35035113, PMCID 8748529, DOI 10.1007/s00146-021-01382-y, S2CID 245877737)
  • (en-GB) Condé Nast, « AI uprising: humans will be outsourced, not obliterated », Wired UK,‎ (ISSN 1357-0978, lire en ligne, consulté le )
  • (en) Mark Bridge, « Making robots less confident could prevent them taking over », The Times,‎ (ISSN 0140-0460, lire en ligne, consulté le )
  • (en-GB) Kari Paul, « Letter signed by Elon Musk demanding AI research pause sparks controversy », The Guardian,‎ (ISSN 0261-3077, lire en ligne, consulté le ) :

    « By treating a lot of questionable ideas as a given, the letter asserts a set of priorities and a narrative on AI that benefits the supporters of FLI. Ignoring active harms right now is a privilege that some of us don’t have. »

  • Geist, « It's already too late to stop the AI arms race—We must manage it instead », Bulletin of the Atomic Scientists, vol. 72, no 5,‎ , p. 318–321 (ISSN 0096-3402, DOI 10.1080/00963402.2016.1216672, Bibcode 2016BuAtS..72e.318G, S2CID 151967826)
  • Maas, « How viable is international arms control for military artificial intelligence? Three lessons from nuclear weapons of mass destruction », Contemporary Security Policy, vol. 40, no 3,‎ , p. 285–311 (ISSN 1352-3260, DOI 10.1080/13523260.2019.1576464, S2CID 159310223)

itu.int

  • Ramamoorthy et Yampolskiy, « Beyond MAD? The race for artificial general intelligence », ICT Discoveries, ITU, vol. 1, no Special Issue 1,‎ , p. 1–8 (lire en ligne [archive du ], consulté le )

jstor.org

  • Hans-Peter Breuer, « Samuel Butler's "The Book of the Machines" and the Argument from Design », Modern Philology, vol. 72, no 4,‎ , p. 365–383 (ISSN 0026-8232, lire en ligne, consulté le )

lemonde.fr

  • « Intelligence artificielle : la course à la régulation entre grandes puissances », Le Monde.fr,‎ (lire en ligne, consulté le )

lesnumeriques.com

lesswrong.com

  • (en) « Instrumental Convergence », sur LessWrong (consulté le )
  • (en) Eliezer Yudkowsky, « Coherent decisions imply consistent utilities », LessWrong,‎ (lire en ligne, consulté le )
  • (en) Stuart Armstrong, « General purpose intelligence: arguing the Orthogonality thesis », LessWrong,‎ (lire en ligne, consulté le ).
  • (en) « Treacherous Turn - LessWrong », sur lesswrong.com (consulté le ).

lukemuehlhauser.com

  • (en) « Hillary Clinton on AI risk », sur lukemuehlhauser.com (consulté le ) : « Technologists like Elon Musk, Sam Altman, and Bill Gates, and physicists like Stephen Hawking have warned that artificial intelligence could one day pose an existential security threat. Musk has called it “the greatest risk we face as a civilization.” Think about it: Have you ever seen a movie where the machines start thinking for themselves that ends well? Every time I went out to Silicon Valley during the campaign, I came home more alarmed about this. My staff lived in fear that I’d start talking about “the rise of the robots” in some Iowa town hall. Maybe I should have. In any case, policy makers need to keep up with technology as it races ahead, instead of always playing catch-up. »

mambapost.com

microsoft.com

research.microsoft.com

nature.com

  • (en) « Anticipating artificial intelligence », Nature, vol. 532, no 7600,‎ , p. 413 (ISSN 1476-4687, DOI 10.1038/532413a, lire en ligne, consulté le ) :

    « Machines and robots that outperform humans across the board could self-improve beyond our control—and their interests might not align with ours. »

  • (en) Dignum, « AI — the people and places that make, use and manage it », Nature, vol. 593, no 7860,‎ , p. 499–500 (DOI 10.1038/d41586-021-01397-x, lire en ligne)
  • (en) Fabio Urbina, Filippa Lentzos, Cédric Invernizzi et Sean Ekins, « Dual use of artificial-intelligence-powered drug discovery », Nature Machine Intelligence, vol. 4, no 3,‎ , p. 189–191 (ISSN 2522-5839, DOI 10.1038/s42256-022-00465-9, lire en ligne, consulté le )

nbcnews.com

newsweek.com

newyorker.com

  • (en-US) Condé Nast, « The Doomsday Invention », sur The New Yorker, (consulté le ) : « there is not a good track record of less intelligent things controlling things of greater intelligence »
  • (en-US) Condé Nast, « The Doomsday Invention », sur The New Yorker, (consulté le ) : « the prospect of discovery is too sweet »

nickbostrom.com

nih.gov

ncbi.nlm.nih.gov

pubmed.ncbi.nlm.nih.gov

  • (en) Michael Shermer, « Apocalypse AI », Scientific American, vol. 316, no 3,‎ , p. 77 (ISSN 0036-8733, PMID 28207698, DOI 10.1038/scientificamerican0317-77, lire en ligne, consulté le ) :

    « AI dystopias project a parochial alpha-male psychology onto the concept of intelligence. They assume that superhumanly intelligent robots would develop goals like deposing their masters or taking over the world »

  • (en) Michael Shermer, « Apocalypse AI », Scientific American, vol. 316, no 3,‎ , p. 77 (ISSN 0036-8733, PMID 28207698, DOI 10.1038/scientificamerican0317-77, lire en ligne, consulté le ) :

    « artificial intelligence will naturally develop along female lines: fully capable of solving problems, but with no desire to annihilate innocents or dominate the civilization. »

norighttobelieve.wordpress.com

  • (en) « Alan Turing », sur No Right to Believe (consulté le ) : « Let us now assume, for the sake of argument, that [intelligent] machines are a genuine possibility, and look at the consequences of constructing them. [...] There would be no question of the machines dying, and they would be able to converse with each other to sharpen their wits. At some stage therefore we should have to expect the machines to take control… »

npr.org

nytimes.com

  • (en) Cade Metz, « How Could A.I. Destroy Humanity? », The New York Times,‎ (lire en ligne, consulté le )
  • (en-US) Cade Metz, « Mark Zuckerberg, Elon Musk and the Feud Over Killer Robots », The New York Times,‎ (ISSN 0362-4331, lire en ligne, consulté le )
  • (en) Farnaz Fassihi, « U.N. Officials Urge Regulation of Artificial Intelligence », The New York Times,‎ (lire en ligne, consulté le )

openai.com

openphilanthropy.org

ourworldindata.org

overcomingbias.com

ox.ac.uk

fhi.ox.ac.uk

  • (en-GB) Future of Humanity Institute- FHI, « Future of Humanity Institute », sur The Future of Humanity Institute, (consulté le )

philarchive.org

philosophynow.org

rethinkrobotics.com

  • Brooks, « artificial intelligence is a tool, not a threat » [archive du ],  : « I think it is a mistake to be worrying about us developing malevolent AI anytime in the next few hundred years. I think the worry stems from a fundamental error in not distinguishing the difference between the very real recent advances in a particular aspect of AI and the enormity and complexity of building sentient volitional intelligence. »

scientificamerican.com

semanticscholar.org

api.semanticscholar.org

skeptic.com

slate.com

spiceworks.com

substack.com

maxmore.substack.com

techcrunch.com

ted.com

telegraph.co.uk

tf1info.fr

theatlantic.com

thebulletin.org

theguardian.com

  • (en-GB) Simon Parkin, « Science fiction no more? Channel 4’s Humans and our rogue AI obsessions », The Guardian,‎ (ISSN 0261-3077, lire en ligne, consulté le )
  • (en-GB) Josh Taylor et Alex Hern, « ‘Godfather of AI’ Geoffrey Hinton quits Google and warns over dangers of misinformation », The Guardian,‎ (ISSN 0261-3077, lire en ligne, consulté le )
  • (en-GB) Ben Doherty, « Climate change an 'existential security risk' to Australia, Senate inquiry says », The Guardian,‎ (ISSN 0261-3077, lire en ligne, consulté le )
  • (en-GB) Alex Hern, « Elon Musk says he invested in DeepMind over 'Terminator' fears », The Guardian,‎ (ISSN 0261-3077, lire en ligne, consulté le ) :

    « just keep an eye on what's going on with artificial intelligence. I think there is potentially a dangerous outcome there. »

  • (en-GB) Kari Paul, « Letter signed by Elon Musk demanding AI research pause sparks controversy », The Guardian,‎ (ISSN 0261-3077, lire en ligne, consulté le ) :

    « By treating a lot of questionable ideas as a given, the letter asserts a set of priorities and a narrative on AI that benefits the supporters of FLI. Ignoring active harms right now is a privilege that some of us don’t have. »

  • Samuel Gibbs, « Elon Musk: regulate AI to combat 'existential threat' before it's too late », The Guardian,‎ (lire en ligne [archive du ], consulté le )

thehill.com

  • (en-US) Ali Breland, « Elon Musk: We need to regulate AI before ‘it’s too late’ », sur The Hill, (consulté le ) : « Normally the way regulations are set up is when a bunch of bad things happen, there's a public outcry, and after many years a regulatory agency is set up to regulate that industry [...] It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilisation. »

thetimes.co.uk

  • (en) Mark Bridge, « Making robots less confident could prevent them taking over », The Times,‎ (ISSN 0140-0460, lire en ligne, consulté le )

theverge.com

  • (en) James Vincent, « Google's AI researchers say these are the five key problems for robot safety », The Verge,‎ (lire en ligne [archive du ], consulté le )

time.com

un.org

news.un.org

usatoday.com

usine-digitale.fr

  • Léna Corot, « L'ONU ne parvient toujours pas à se mettre d'accord sur l'interdiction des armes létales autonomes », L'Usine Digitale,‎ (lire en ligne, consulté le )

vanityfair.com

venturebeat.com

vice.com

vox.com

washingtonpost.com

  • (en-US) Gerrit De Vynck, « The debate over whether AI will destroy us is dividing Silicon Valley », Washington Post,‎ (ISSN 0190-8286, lire en ligne, consulté le )
  • (en) Peter Holley, « Bill Gates on dangers of artificial intelligence: ‘I don’t understand why some people are not concerned’ », The Washington Post,‎ (lire en ligne)

web.archive.org

wired.co.uk

  • (en-GB) Condé Nast, « AI uprising: humans will be outsourced, not obliterated », Wired UK,‎ (ISSN 1357-0978, lire en ligne, consulté le )

wired.com

  • (en-US) Caleb Garling, « Andrew Ng: Why ‘Deep Learning’ Is a Mandate for Humans, Not Just Machines », Wired,‎ (ISSN 1059-1028, lire en ligne, consulté le )
  • (en-US) Kevin Kelly, « The Myth of a Superhuman AI | Backchannel », Wired,‎ (ISSN 1059-1028, lire en ligne, consulté le )
  • (en-US) « Barack Obama Talks AI, Robo Cars, and the Future of the World », Wired,‎ (ISSN 1059-1028, lire en ligne, consulté le ) :

    « there are a few people who believe that there is a fairly high-percentage chance that a generalized AI will happen in the next 10 years. But the way I look at it is that in order for that to happen, we’re going to need a dozen or two different breakthroughs. So you can monitor when you think these breakthroughs will happen. »

  • (en-US) « Barack Obama Talks AI, Robo Cars, and the Future of the World », Wired,‎ (ISSN 1059-1028, lire en ligne, consulté le ) :

    « And you just have to have somebody close to the power cord. [Laughs.] Right when you see it about to happen, you gotta yank that electricity out of the wall, man. »

worldcat.org

  • (en) Stuart Russell et Peter Norvig, Artificial Intelligence: A Modern Approach, (ISBN 0-13-604259-7 et 978-0-13-604259-4, OCLC 359890490, lire en ligne) :

    « Almost any technology has the potential to cause harm in the wrong hands, but with [superintelligence], we have the new problem that the wrong hands might belong to the technology itself. »

  • (en) Federico Pistono et Roman V. Yampolskiy, Unethical Research: How to Create a Malevolent Artificial Intelligence, (OCLC 1106238048).
  • (en) Nick Bostrom, Superintelligence : paths, dangers, strategies, (ISBN 978-0-19-166682-7, 0-19-166682-3 et 978-1-306-96473-9, OCLC 889267826, lire en ligne) :

    « It can't be much fun to have aspersions cast on one's academic discipline, one's professional community, one's life work ... I call on all sides to practice patience and restraint, and to engage in direct dialogue and collaboration as much as possible. »

  • (en) Lippens, « Imachinations of Peace: Scientifictions of Peace in Iain M. Banks's The Player of Games », Utopian Studies, vol. 13, no 1,‎ , p. 135–147 (ISSN 1045-991X, OCLC 5542757341)

zdnet.com