Existential risk from artificial general intelligence (English Wikipedia)

Analysis of the information sources cited in the references of the English-language Wikipedia article "Existential risk from artificial general intelligence".

[Table: cited websites with their global and English-language popularity ranks; the rank columns were separated from the website names in extraction and cannot be re-paired.]

abc.net.au

acceleratingfuture.com

aiimpacts.org

analyticsindiamag.com

apnews.com

arstechnica.com

arxiv.org

axrp.net

  • "19 – Mechanistic Interpretability with Neel Nanda". AXRP – the AI X-risk Research Podcast. 4 February 2023. Retrieved 13 July 2023. it's plausible to me that the main thing we need to get done is noticing specific circuits to do with deception and specific dangerous capabilities like that and situational awareness and internally-represented goals.

bbc.co.uk

bbc.com

bloomberg.com

brianchristian.org

businessinsider.com

cam.ac.uk

turingarchive.kings.cam.ac.uk

cbsnews.com

chicagotribune.com

cnbc.com

cnn.com

commonsenseatheism.com

dair-institute.org

deepmind.com

doi.org

economist.com

edge.org

elgaronline.com

euronews.com

existential-risk.org

fastcompany.com

firstmonday.org

forbes.com

fortune.com

fusion.net

futureoflife.org

ghostarchive.org

harvard.edu

ui.adsabs.harvard.edu

humanetech.com

  • "The AI Dilemma". www.humanetech.com. Retrieved 10 April 2023. 50% of AI researchers believe there's a 10% or greater chance that humans go extinct from our inability to control AI.

independent.co.uk

intelligence.org

irishtimes.com

itu.int

itworld.com

jstor.org

lukemuehlhauser.com

mambapost.com

microsoft.com

research.microsoft.com

nbcnews.com

newsweek.com

newyorker.com

nickbostrom.com

nih.gov

pubmed.ncbi.nlm.nih.gov

ncbi.nlm.nih.gov

northwestern.edu

scholarlycommons.law.northwestern.edu

  • McGinnis, John (Summer 2010). "Accelerating AI". Northwestern University Law Review. 104 (3): 1253–1270. Archived from the original on 15 February 2016. Retrieved 16 July 2014. For all these reasons, verifying a global relinquishment treaty, or even one limited to AI-related weapons development, is a nonstarter... (For different reasons from ours, the Machine Intelligence Research Institute) considers (AGI) relinquishment infeasible...

npr.org

nymag.com

nytimes.com

openai.com

openphilanthropy.org

ourworldindata.org

overcomingbias.com

ox.ac.uk

fhi.ox.ac.uk

pewresearch.org

philarchive.org

philosophynow.org

questia.com

redditchadvertiser.co.uk

safe.ai

scientificamerican.com

semanticscholar.org

api.semanticscholar.org

skeptic.com

slate.com

spiceworks.com

springer.com

link.springer.com

substack.com

maxmore.substack.com

techcrunch.com

techinsider.io

ted.com

telegraph.co.uk

thebulletin.org

theconversation.com

theguardian.com

thetimes.co.uk

theverge.com

time.com

tor.com

un.org

press.un.org

usatoday.com

vanityfair.com

venturebeat.com

vox.com

washingtonpost.com

web.archive.org

whitehouse.gov

wired.co.uk

wired.com

worldcat.org

yoshuabengio.org

yougov.com

today.yougov.com

zdnet.com