Birthday present - quite a lot to take in. A wide-ranging and deep look at Artificial Intelligence, the future and its implications, from someone who is a physicist rather than a computer scientist.
Starts with a lot of definitions: what life is, what intelligence is.
- Life 1.0 (biological stage): evolves its hardware and software e.g. bacteria
- Life 2.0 (cultural stage): evolves its hardware, designs much of its software e.g. humans
- Life 3.0 (technological stage): designs its hardware and software e.g. future computer systems
- Intelligence = Ability to accomplish complex goals
- Artificial Intelligence (AI) = Non-biological intelligence
- Narrow intelligence = Ability to accomplish a narrow set of goals, e.g., play chess or drive a car
- General intelligence = Ability to accomplish virtually any goal, including learning
- Universal intelligence = Ability to acquire general intelligence given access to data and resources
- Artificial General Intelligence (AGI) = Ability to accomplish any cognitive task at least as well as humans
- Superintelligence = General intelligence far beyond human level
- Civilization = Interacting group of intelligent life forms
- Consciousness = Subjective experience
- Qualia = Individual instances of subjective experience
- Ethics = Principles that govern how we should behave
- Teleology = Explanation of things in terms of their goals or purposes rather than their causes
- Goal-oriented behavior = Behavior more easily explained via its effect than via its cause
- Having a goal = Exhibiting goal-oriented behavior
- Having purpose = Serving goals of one’s own or of another entity
- Friendly AI = Superintelligence whose goals are aligned with ours
- Cyborg = Human-machine hybrid
- Intelligence explosion = Recursive self-improvement rapidly leading to superintelligence
- Singularity = Intelligence explosion
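The "intelligence explosion" definition above can be made concrete with a toy numeric model (my own sketch, not from the book; all numbers are illustrative): if a system's rate of self-improvement grows with its capability, capability diverges in finite time, which is qualitatively different from ordinary exponential growth.

```python
# Toy comparison: steady exponential growth (dC/dt = C) vs. recursive
# self-improvement where the growth rate itself scales with capability
# (dC/dt = C^2). The second diverges in finite time - a "singularity".

def time_to_reach(target, rate_fn, dt=0.001, t_max=20.0):
    """Euler-integrate dC/dt = rate_fn(C) from C = 1 and return the time
    at which capability first exceeds `target` (None if it never does)."""
    c, t = 1.0, 0.0
    while t < t_max:
        c += rate_fn(c) * dt
        t += dt
        if c >= target:
            return t
    return None

exponential = time_to_reach(1e6, lambda c: c)      # dC/dt = C
explosive = time_to_reach(1e6, lambda c: c * c)    # dC/dt = C^2

print(f"exponential growth reaches 10^6 at t = {exponential:.2f}")
print(f"recursive self-improvement reaches 10^6 at t = {explosive:.2f}")
```

The exponential curve takes roughly ln(10^6) ≈ 14 time units to reach a million, while the self-reinforcing curve gets there shortly after t = 1: past a threshold, almost all of the growth happens in a vanishingly short interval, which is the intuition behind a "fast takeoff".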
Explanation of neural networks, machine learning etc.
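As an illustration of the kind of machine learning the book surveys (my own minimal sketch, not code from the book): a single artificial neuron, trained by gradient descent, learning the logical AND function.

```python
import math
import random

random.seed(0)  # fixed seed so the run is reproducible

def sigmoid(z):
    """Squash any real number into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

# Training data for the AND function: (inputs, target output).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

# A single neuron: two weights and a bias, initialized randomly.
w1, w2, b = random.random(), random.random(), random.random()
lr = 0.5  # learning rate

for epoch in range(5000):
    for (x1, x2), target in data:
        y = sigmoid(w1 * x1 + w2 * x2 + b)  # forward pass: prediction
        error = y - target                  # cross-entropy gradient factor
        w1 -= lr * error * x1               # nudge each parameter downhill
        w2 -= lr * error * x2
        b -= lr * error

for (x1, x2), _ in data:
    print((x1, x2), "->", round(sigmoid(w1 * x1 + w2 * x2 + b)))
```

A real neural network stacks many such neurons in layers, but the core idea is the same: adjust numeric parameters a little at a time so the outputs drift toward the training targets.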
Career advice: choose professions that seem unlikely to be automated in the near future. Ask:
- Does it require interacting with people and using social intelligence?
- Does it involve creativity and coming up with clever solutions?
- Does it require working in an unpredictable environment?
The main trend on the job market isn’t that we’re moving into entirely new professions. Rather, we’re crowding into those pieces of terrain that haven’t yet been submerged by the rising tide of technology!
A fast AI takeoff makes world takeover easier, while a slow one makes an outcome with many competing players more likely.
It’s a mistake to passively ask “what will happen,” as if it were somehow predestined! Instead ask: “What should happen? What future do we want?” If we don’t know what we want, we’re unlikely to get it.
Consciousness is by far the most remarkable trait of life.
A human-extinction scenario that some people may feel better about: viewing the AI as our descendants.
The only viable path to broad relinquishment of technology is to enforce it through a global totalitarian state. If some but not all relinquish a transformative technology, then the nations or groups that defect will gradually gain enough wealth and power to take over.
We’ve dramatically underestimated life’s future potential.
We’re not limited to century-long life spans marred by disease.
Life has the potential to flourish for billions of years, throughout the cosmos.
Unambitious civilizations simply become cosmically irrelevant. Almost all life that exists will be ambitious life.
A blue whale is rearranged krill.
The real risk with Artificial General Intelligence isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.
Three tough subproblems:
- Making AI learn our goals
- Making AI adopt our goals
- Making AI retain our goals
The time window during which you can load your goals into an AI may be quite short: the brief period between when it’s too dumb to get you and too smart to let you.
A superintelligent AI will resist being shut down if you give it any goal that it needs to remain operational to accomplish - and this covers almost all goals! If you give a superintelligence the sole goal of minimizing harm to humanity, for example, it will defend itself against shutdown attempts because it knows we’ll harm one another much more in its absence through future wars and other follies.
The propensity to change goals in response to new experiences and insights increases rather than decreases with intelligence.
The ethical views of many thinkers can be distilled into four principles:
- Utilitarianism: Positive conscious experiences should be maximized and suffering should be minimized.
- Diversity: A diverse set of positive experiences is better than many repetitions of the same experience, even if the latter has been identified as the most positive experience possible.
- Autonomy: Conscious entities/societies should have the freedom to pursue their own goals unless this conflicts with an overriding principle.
- Legacy: Compatibility with scenarios that most humans today would view as happy, incompatibility with scenarios that essentially all humans today would view as terrible.
Would we really want people from 1,500 years ago to have a lot of influence over how today’s world is run? If not, why should we try to impose our ethics on future beings that may be dramatically smarter than us?
If some sophisticated future computer programs turn out to be conscious, should it be illegal to terminate them? If there are rules against terminating digital life forms, then need there also be restrictions on creating them to avoid a digital population explosion?
How should we strive to shape the future of our Universe? If we cede control to a superintelligence before answering these questions rigorously, the answer it comes up with is unlikely to involve us. This makes it timely to rekindle the classic debates of philosophy and ethics, and adds a new urgency to the conversation!
Philosophy with a deadline.
What particle arrangements are conscious? Consciousness is an emergent phenomenon: the way that information feels when it’s processed in certain ways.