Monday, July 21, 2014
Is quantum mechanics relevant to the philosophy of mind (and the other way around)?
from Scientia Salon
Thursday, June 5, 2014
One of the projects I've been highly absorbed in lately is the new podcast and video series, SpaceTimeMind, that I'm co-hosting with Richard Brown. There's a lot of overlap in themes between SpaceTimeMind and the Alternate Minds project. See, for instance, our 5th episode, Transhumanism and Existentialism. Especially pertinent is our latest installment, our interview with Roger Williams, author of The Metamorphosis of Prime Intellect (discussed previously here and here).
Monday, January 6, 2014
Time travel has captured the public imagination for much of the past century, but little has been done to actually search for time travelers. Here, three implementations of Internet searches for time travelers are described, all seeking a prescient mention of information not previously available. The first search covered prescient content placed on the Internet, highlighted by a comprehensive search for specific terms in tweets on Twitter. The second search examined prescient inquiries submitted to a search engine, highlighted by a comprehensive search for specific search terms submitted to a popular astronomy web site. The third search involved a request for a direct Internet communication, either by email or tweet, pre-dating the time of the inquiry. Given practical verifiability concerns, only time travelers from the future were investigated. No time travelers were discovered. Although these negative results do not disprove time travel, given the great reach of the Internet, this search is perhaps the most comprehensive to date.
(ht: Maureen Eckert)
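The core idea of the first two searches can be sketched in a few lines: look for any mention of a term *before* the date on which that term first became publicly knowable. The sketch below uses "Comet ISON" (discovered September 21, 2012, and one of the terms the study actually searched for); the post data and function name are illustrative, not from the paper.

```python
from datetime import datetime, timezone

def find_prescient_mentions(posts, term, announcement_date):
    """Return posts mentioning `term` before it was publicly announced.

    posts: iterable of (timestamp, text) pairs with timezone-aware datetimes.
    A genuine hit would be "prescient": the term appears on the Internet
    before anyone (other than a time traveler) could have known it.
    """
    term_lower = term.lower()
    return [
        (ts, text)
        for ts, text in posts
        if ts < announcement_date and term_lower in text.lower()
    ]

# Illustrative post data, not from the study.
posts = [
    (datetime(2012, 1, 15, tzinfo=timezone.utc), "Nice clear skies tonight."),
    (datetime(2013, 11, 28, tzinfo=timezone.utc), "Watching Comet ISON graze the Sun!"),
]
announced = datetime(2012, 9, 21, tzinfo=timezone.utc)
hits = find_prescient_mentions(posts, "Comet ISON", announced)
print(len(hits))  # no prescient mentions, matching the study's negative result
```

The hard part in practice is not this filter but the choice of term: it must be specific enough that no one could plausibly have used it by coincidence before the announcement date, which is why the study favored newly coined proper names.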
Saturday, February 23, 2013
Roko's basilisk - RationalWiki
Roko's basilisk is a proposition suggested by a member of the rationalist community LessWrong, which speculates about the potential behavior of a future godlike artificial intelligence. According to the proposition, it is possible that this ultimate intelligence may punish those who fail to help it, with greater punishment accorded to those who knew the importance of the task. So far, this is comprehensible in conventional terms, but the notable feature of the basilisk and similar constructions is that the AI and the person punished have no causal interaction: the punishment would be of a simulation of the person, which the AI would construct by deduction from first principles. In LessWrong's Timeless Decision Theory (TDT), this is taken to be equivalent to punishment of your own actual self, not just someone else very like you.
In short order, LessWrong posters began complaining that merely reading Roko's words had increased the likelihood that the future AI would punish them — the line of reasoning was so compelling to them that they believed the AI (which would know they'd once read Roko's idea) would now punish them even more for being aware of it and failing to donate all of their income to institutions devoted to the god-AI's development. Thus, even looking at this idea was harmful, lending Roko's proposition the "basilisk" label (after the "basilisk" image from David Langford's science fiction stories, which was in turn named after the legendary serpent-creature from European mythology that killed those who saw it). The more sensitive posters on LessWrong began to have nightmares.