Over Two Meters Tall!
07-02-15 09:25 PM
Originally Posted by el_machinae
There are two major societal risks, as far as I can see.
- the escape of a super-AI with a goal that we don't like
And here's the biggest problem with AI. We hairless monkeys can't even get together in groups larger than one without disagreeing and insisting we have the one 'ultimate' solution, which usually amounts to the utter submission or destruction of the loser.
I think it would be a hoot if an Abominable Intelligence ended up achieving the terrifying goal of far outstripping its creators' intelligence, only to use that power to improve humans in ways the humans found entirely agreeable and sensible. Then the Imperium would have to destroy it on principle, for being other and an Abominable Intelligence! :laugh: