Calculations on this page were done using the uncertain.py Python script, which assumes independence of sub-estimates in order to tighten otherwise very broad ranges.
For estimates of when advanced AI might come to exist, see for example Forecasting AI.
(existential threat probability) = (chance of advanced AI 20-100 years out) x (chance hostile or neutral) x (chance this represents an existential risk to humanity) = 60-80% x 50-80% x 80-100% = 29-53%
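The naive product of the endpoints gives 24-64%; assuming independence, the extremes of all three factors are unlikely to coincide, which narrows the result to roughly 29-53%. Here is a minimal Monte Carlo sketch of this combination in the style of uncertain.py (the uniform distributions and the central 90% interval are our assumptions for illustration, not necessarily the script's actual method):

```python
# Combine independent sub-estimates by sampling; assumes each range is a
# uniform distribution and reports a central 90% interval (illustrative
# assumptions; the actual uncertain.py script may differ).
import random

def product_interval(ranges, samples=100_000, central=0.90):
    """Multiply independent uniform ranges; return a central interval."""
    products = []
    for _ in range(samples):
        p = 1.0
        for lo, hi in ranges:
            p *= random.uniform(lo, hi)
        products.append(p)
    products.sort()
    tail = int((1.0 - central) / 2.0 * samples)
    return products[tail], products[-tail - 1]

# (advanced AI 20-100 years out) x (hostile or neutral) x (existential risk)
lo, hi = product_interval([(0.60, 0.80), (0.50, 0.80), (0.80, 1.00)])
print(f"{lo:.0%}-{hi:.0%}")  # roughly 29%-53%, versus a naive 24%-64%
```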
We guesstimate that the world has 2,000-5,000 researchers focused on improving AI over the long term, to the point where it might become competitive with human intelligence. The majority of AI research focuses on applying AI or making short-term improvements to existing techniques. We estimate there are 40,000-50,000 AI researchers in total, but guess that only 5-10% perform the sort of long-term work that might be of concern. The total is inferred from publication counts: around 40,000-50,000 artificial intelligence scientific papers were published per year over the period 2011-2015, and the ratio of papers published per year to researchers is about 1.0 for technology fields.
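For this estimate the naive endpoints match the quoted range, so the arithmetic is a simple product of the figures above:

```python
# Worked check of the long-term researcher estimate (figures from the text).
total_researchers = (40_000, 50_000)  # estimated AI researchers worldwide
long_term_share = (0.05, 0.10)        # guessed share doing long-term work

low = total_researchers[0] * long_term_share[0]    # 2,000
high = total_researchers[1] * long_term_share[1]   # 5,000
print(f"{low:,.0f}-{high:,.0f} long-term AI researchers")
```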
We guesstimate the cost of successfully influencing legislation focused on smarter-than-human AI at $10m-50m/year, based on the estimates made on our Lobbying and Advocacy page. This figure is about a quarter of that for some of the broad policy areas surveyed there, on account of this being a narrowly focused issue. We would consider reducing the estimate further were it not for the novelty of the issue. Such efforts are likely only to delay, not eliminate, any threat from smarter-than-human AI; other approaches would be necessary to eliminate the threat.
What might such legislation contain? It is too early to say, and a lot of policy work is required first, but the best course of action might be to have no legislation at all: that is, to influence the legislative process to ensure that smarter-than-human AI isn't suddenly declared a national goal. Just as making smarter-than-human AI a national goal might speed up its emergence, having no policy would slow it down, hopefully providing more time to ensure we get it right.
We will consider four possible approaches to dealing with the threat posed by hostile or neutral smarter-than-human AI. A worked sketch of the first row's arithmetic follows the table.
Project | Cost | Real world outcome | Outcome estimates | Economic value in Western terms |
---|---|---|---|---|
Prevent hostile AI through research to build a friendly AI control system | $200m | chance of eliminating a 29-53% chance of an existential threat | Guess the effort to develop a friendly AI control system (MIRI's traditional approach) costs 100 people x 20 years x $100k/person/year (with no uncertainty; uncertainty shows up in the results); guess the chance of success at 10-30% (the problem appears hard); chance that AI researchers will want to adopt it 30-80%; chance that they will technically be able to do so 20-30%; save 10-30% x 30-80% x 20-30% x 29-53% x 8b lives; value placed on a life $2m | $60t-300t (leverage factor 300,000 - 1,500,000) |
Prevent hostile AI through co-opting researchers to build a friendly AI control system | $40m-80m | chance of eliminating a 29-53% chance of an existential threat | Reduce the cost of the previous approach to 20-40% of its original value if it is possible to co-opt universities and other institutions into performing much of the research | $60t-300t (leverage factor 900,000 - 6,000,000) |
Prevent hostile AI through influencing long-term AI research | $40m-100m/year x 30 years | chance of eliminating a 29-53% chance of an existential threat that occurs 30 years out | Consider attempting to influence researchers and research institutions to focus on friendly AI through conferences, prizes, awards, advertising, and grants costing $20,000 per long-term AI researcher per year; total cost $40m-100m/year for the 2,000-5,000 long-term researchers estimated above; assume smarter-than-human AI occurs 30 years out; guess the odds that co-opting the researchers makes a difference at 10-30%; save 10-30% x 29-53% x 8b lives; value placed on a life $2m | $580t-2000t (leverage factor 250,000 - 1,200,000) |
Delay hostile AI through the legislative process | $10m-50m/year x 35 years | delay by 2-5 years a 29-53% chance of an existential threat | Guess legislation delays the possible emergence of hostile AI by 2-5 years; assume smarter-than-human AI still occurs 35 years out; save 2-5 x 29-53% x 8b life-years; we choose not to discount life, so the value placed on a life-year is $2,000,000 / (80 / 2) = $50k, a $2m life spread over an average 40 remaining life-years; assume any loss from delaying beneficial AI technologies is small compared to the existential threat | $290t-860t (leverage factor 250,000 - 1,500,000) |
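To make the table's arithmetic concrete, here is a sketch of the first row's outcome estimate, using the same assumptions as the earlier snippet (uniform distributions, central 90% interval). The exact endpoints depend on uncertain.py's actual method, so treat the output as ballpark rather than a reproduction of the table:

```python
# Sketch of row 1 (friendly AI control system research, $200m programme).
import random

def product_interval(ranges, samples=100_000, central=0.90):
    """Product of independent uniform ranges as a central interval."""
    draws = []
    for _ in range(samples):
        p = 1.0
        for lo, hi in ranges:
            p *= random.uniform(lo, hi)
        draws.append(p)
    draws.sort()
    tail = int((1.0 - central) / 2.0 * samples)
    return draws[tail], draws[-tail - 1]

p_lo, p_hi = product_interval([
    (0.10, 0.30),  # chance the research succeeds
    (0.30, 0.80),  # chance AI researchers want to adopt the result
    (0.20, 0.30),  # chance they are technically able to adopt it
    (0.29, 0.53),  # existential threat probability from above
])
lives, per_life, cost = 8e9, 2e6, 200e6
value_lo, value_hi = p_lo * lives * per_life, p_hi * lives * per_life
print(f"value ${value_lo / 1e12:,.0f}t-${value_hi / 1e12:,.0f}t")  # ballpark of the table's $60t-300t
print(f"leverage {value_lo / cost:,.0f}-{value_hi / cost:,.0f}")   # ballpark of 300,000-1,500,000
```

The other three rows follow the same pattern with their respective factors.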