Artificial Intelligence: How well does Nick Bostrom understand AI?
Nick Bostrom is a speculative philosopher, not an AI practitioner. His work concerns the risks and ethics of AI rather than the technology itself.
Bostrom’s major work on AI consists of studying the risks of superintelligence. He argues that once an AI achieves human-level intelligence, it would take much less time for it to achieve superintelligence. He then explores the possibility that an AI with non-human motivations might slip out of human control and become dangerous.
There are many ways that Bostrom’s predictions might fail, such as:
- Superintelligence might not get out of control, instead developing smoothly and continuously from lower levels of intelligence.
- Superintelligence might be benign and develop an above-human level of ethics.
- Given that superintelligence would develop out of human society, it is likely to continue developing in ways that benefit that society.
- Superintelligence might be more specialized than general, and so not interested in competing with or replacing humans.
- The nature of superintelligence may be more predictable than we imagine.
- Superintelligence might be more limited than we imagine, given how much of the world is mathematically intractable and hard to observe.
- There will likely be a whole world of separate AIs rather than a single superintelligence, forcing the AIs to cooperate with one another.
Bostrom is sketching out the possible dangers of AI from general notions, much as you or I would. He has no special knowledge of the future and has not experienced a superintelligence any more than the rest of us. So you need not accept his guesses about the development of AI as a factual or authoritative picture of the future.
We find it hard to imagine how future generations will control the superintelligence they develop. But our ancestors often found it hard to imagine how we would manage the complexity of our present world. This is a failure of imagination, not a failure of the future.