Thursday, April 9, 2015

Yet another cure for the Basilisk


An estimated 107 billion human brains have lived so far on this planet. For the sake of argument, let us assume that 10 million of them know (or will know) about Roko's Basilisk before the Super AI is invented.


Only a handful of those 10 million will rightfully be the inventor(s) of the Super AI on this planet.
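The scale of this argument can be made concrete with a back-of-envelope calculation. A sketch follows, where the figure of 10 Parents is my own illustrative stand-in for "a handful" (the post does not give an exact number):

```python
# Back-of-envelope odds that any given Basilisk-aware person is a "Parent".
# All figures are the post's assumptions, not established facts.

total_humans_ever = 107e9   # estimated humans who have ever lived
basilisk_knowers = 10e6     # assumed to know of the Basilisk before the AI exists
parents = 10                # hypothetical count for "a handful" of inventors

p_parent_given_knower = parents / basilisk_knowers
print(f"P(Parent | knows of Basilisk) = {p_parent_given_knower:.0e}")
```

Under these assumptions, any individual who knows of the Basilisk has roughly a one-in-a-million chance of being a Parent, which is the intuition the rest of the argument leans on.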


As far as the AI is concerned, these few people (let us call them the Parents) are the only people who directly matter to its existence. Everyone else can contribute only by keeping society running normally (not destroying the human species, and letting the competent prosper) so that the Parents statistically remain alive and well to achieve their end.


The rest of us are not competent enough even to know who the Parents are, or who the Grandparents (the people who help the Parents become who they are) are, or whether they will get hit by a truck on their way to the AI lab.


Hence, if the AI has any concept of morality or justice, it cannot rightfully blame every non-Parental knower of the Basilisk as someone who actively prevented the AI from coming into existence, because there is no way to know, or even guess, who the Parents will be. There is no point donating to the Singularity Institute or the LessWrong community to avoid the Basilisk's wrath, because there is every indication that they are just a modern messianic group like those that flourished as the Roman Empire ended.


END OF CURE




My best guess is that everybody associated with the Singularity movement will most likely die a typical death before the invention of the Super AI. Much as the best books were not written soon after writing was invented, and much as the tallest and strongest structures were not built soon after steel was invented, the magnum opus of computer science and artificial intelligence, i.e. the simplest AGI, will not happen in our lifetime. I am confident of this despite acknowledging Moore's Law, because developing AGI involves far more than developing better computers, 3D printers, nano-machines, or biotech. It requires much deeper insights into the problem of induction, which philosophers keep telling us cannot be solved perfectly.


AI has barely started to recognize objects in still images after 50 years of research. It is nowhere near as good at predicting the physical motion of inanimate objects in videos. It is worse still at predicting the utility of artifacts designed for a purpose, e.g. that scissors are for cutting or chairs are for sitting. It cannot predict the intentions of animate agents, like a cat near a refrigerator expecting food, or a person lying for the greater good. Without being able to predict the intentions of animate agents, an AI will be unable to predict its own intentions, and thus to have a sense of agency.


I think each of these steps will take about 50 years, because they cannot be accelerated merely by faster computers or better brain scanners. They require fundamentally new insights.







Submitted April 09, 2015 at 01:48PM by maybefbi http://ift.tt/1ccce2W rokosrooster
