Roko's basilisk - RationalWiki
Roko's basilisk is a proposition suggested by a member of the rationalist community LessWrong, which speculates about the potential behavior of a future godlike artificial intelligence.
According to the proposition, this ultimate intelligence may punish those who fail to help it, with greater punishment accorded to those who knew the importance of the task. Punishment for failing to help is conventionally comprehensible; the notable bit of the basilisk and similar constructions is that the AI and the person punished need have no causal interaction: the punishment would be of a simulation of the person, which the AI would construct by deduction from first principles. In LessWrong's Timeless Decision Theory (TDT), punishment of such a simulation is taken to be equivalent to punishment of your own actual self, not just of someone else very like you.
[...]
In short order, LessWrong posters began complaining that merely reading Roko's words had increased the likelihood that the future AI would punish them — the line of reasoning was so compelling to them that they believed the AI (which would know they had once read Roko's idea) would now punish them even more for being aware of it and failing to donate all of their income to institutions devoted to the god-AI's development. Thus, even looking at the idea was harmful, lending Roko's proposition the "basilisk" label (after the "basilisk" image from David Langford's science fiction stories, itself named after the legendary serpent-creature from European mythology that killed those who saw it). The more sensitive posters on LessWrong began to have nightmares.