Decisive Strategic Advantage without a Hard Takeoff (part 1)
A common question when discussing the social implications of AI is whether to expect a soft takeoff or a hard takeoff. In a hard takeoff, an AI will, within a relatively short time, grow to superhuman levels of intelligence and become impossible for mere humans to control. Essentially, a hard takeoff will allow the AI to achieve what is called a decisive strategic...
Simplifying the environment: a new convergent instrumental goal
Convergent instrumental goals (also called basic AI drives) are goals that are useful for pursuing almost any other goal, and are thus likely to be pursued by any agent that is intelligent enough to understand why they're useful. They are interesting because they may allow us to roughly predict the behavior of even AI systems that are much more intelligent than we are. Instrumental goals are...
AI risk model: single or multiple AIs?
EDIT April 20th: Replaced original graph with a clearer one. My previous posts have basically been discussing a scenario where a single AI becomes powerful enough to threaten humanity. However, there is no reason to only focus on the scenario with a single AI. Depending on our assumptions, a number of AIs could also emerge at the same time. Here are some considerations. A single AI The classic...
Disjunctive AI risk scenarios: AIs gaining the power to act autonomously
Previous post in series: AIs gaining a decisive advantage Series summary: Arguments for risks from general AI are sometimes criticized on the grounds that they rely on a series of linear events, each of which has to occur for the proposed scenario to go through. For example, that a sufficiently intelligent AI could escape from containment, that it could then go on to become powerful enough to...
Disjunctive AI risk scenarios: AIs gaining a decisive advantage
Arguments for risks from general AI are sometimes criticized on the grounds that they rely on a series of linear events, each of which has to occur for the proposed scenario to go through. For example, that a sufficiently intelligent AI could escape from containment, that it could then go on to become powerful enough to take over the world, that it could do this quickly enough without being...