Showing posts with label Anti-minds. Show all posts

Tuesday, August 26, 2014

ECHOPRAXIA, Peter Watts's sequel to BLINDSIGHT

One of my all-time favorite cogsci-fi novels, Peter Watts's Blindsight (previously here and here), has a sequel now. It's Echopraxia, and this review makes it sound pretty terrific.

I'm stoked! Anyone else read it?

Explanation of the title from the review:

As for zombies, they are simply people whose higher thought processes have been turned off. This is done either by surgery, or as a side-effect of bioengineered viral plagues. Zombies function autonomically, without conscious awareness. Their mental apparatus is “reduced to fight/flight/fuck” basic responses. They make great soldiers and sex slaves, because they follow orders unquestioningly. They are in fact subject to the malady that gives the novel its title:  “echopraxia”  is the condition in which a person compulsively imitates someone else’s actions and behavior. When they are not under hierarchical control, they simply imitate one another, and go on rampages like in the movies.

Saturday, February 23, 2013

Roko's basilisk - RationalWiki


Roko's basilisk is a proposition suggested by a member of the rationalist community LessWrong, which speculates about the potential behavior of a future godlike artificial intelligence. According to the proposition, it is possible that this ultimate intelligence may punish those who fail to help it, with greater punishment accorded those who knew the importance of the task. This is conventionally comprehensible, but the notable bit of the basilisk and similar constructions is that the AI and the person punished have no causal interaction: the punishment would be of a simulation of the person, which the AI would construct by deduction from first principles. In LessWrong's Timeless Decision Theory (TDT), this is taken to be equivalent to punishment of your own actual self, not just someone else very like you.

[...]

In short order, LessWrong posters began complaining that merely reading Roko's words had increased the likelihood that the future AI would punish them — the line of reasoning was so compelling to them that they believed the AI (who would know they'd once read Roko's idea) would now punish them even more for being aware of it and failing to donate all of their income to institutions devoted to the god-AI's development. Thus, even looking at this idea was harmful, lending Roko's proposition the "basilisk" label (after the "basilisk" image from David Langford's science fiction stories, which was in turn named after the legendary serpent-creature from European mythology that killed those who saw it). The more sensitive on LessWrong began to have nightmares.

Tuesday, September 11, 2012

Reading this post will destroy your soul | MetaFilter

Reading this post will destroy your soul | MetaFilter:
The Motif of Harmful Sensation (or as TV Tropes calls it, the Brown Note) is a recurring idea in literature: physical or mental damage that a person suffers merely by experiencing what should normally be a benign sensation. The phenomenon appears in both traditional and modern stories.

Monday, April 2, 2012

Why our minds have probably evolved as far as they can go

Why our minds have probably evolved as far as they can go (io9.com):
Our brains have reached an evolutionary "sweet spot", and we can't get much smarter without making major trade-offs. That's the finding of psychologists Thomas Hills of the University of Warwick and Ralph Hertwig of the University of Basel. They have examined a number of studies, and they have come to one inescapable conclusion: there's a steep price to pay for enhanced brainpower, and it's almost certainly not a good deal from an evolutionary perspective.

Wednesday, February 22, 2012

Bookslut | Blindsight by Peter Watts

Nice discussion of Watts's Blindsight, including comparisons to other works on similar themes: Bookslut | Blindsight by Peter Watts:
As in Bruce Sterling's "Swarm," and some of Stephen Baxter's work (such as the grand Evolution), the value of consciousness itself is questioned. Control is an illusion, after all: think about moving your arm, and your arm will already be in motion. We exist after the fact -- or, as Siri's friend Pag puts it, "We're not thinking machines, we're -- we're feeling machines that happen to think."
We are observers, not agents, and where's the survival advantage in that? Watts's aliens, certainly, think rings around his humans and posthumans. They can detect the electromagnetic fluctuations of a human brain, and rewire them in real time. They can time their movements so precisely as to hide in the saccades of our eyes. And they can do it, in part, because they are not conscious, because consciousness is expensive: "It wastes energy and processing power, self-obsesses to the point of psychosis [...] They turn your own cognition against itself. They travel between the stars. This is what intelligence can do, unhampered by self-awareness," is Sarasti's blunt assessment. We are a fluke, a mistake; in evolutionary terms, a dead end. Once we get beyond the surface of our planet we are not fit.

Saturday, June 25, 2011

Can there be a Singularity without superintelligence (or vice versa)?


Mitchell Howe:
Strictly speaking, it is possible for there to be both a Singularity that does not entail the creation of superintelligence, and for superintelligence to not trigger the onset of a Singularity. Both are improbable, regardless of the specific criteria used to define Singularity or superintelligence, but some of the potential "loopholes" are worth discussing.

The potential for Singularity without superintelligence depends largely on which variant of the Singularity is being used. A predictive horizon, for example, can be reached if it is anchored at some particular date (which it never really is, in my experience). In fact, if this date is sufficiently far back in our past, one could argue (uselessly) that we are living in a Singularity now. Also, if a Singularity is considered reached when the distance to the predictive horizon becomes sufficiently small, our own lack of foresight, not the arrival of superintelligence, may turn out to be the cause. If the idea of a developmental Singularity is used, it is possible that existing trends in automation will result in sharply spiking productivity without the need for any greater intelligence. Finally, even the "greater intelligence" definition of Singularity need not necessarily mean the arrival of superintelligence -- which implies minds vastly more intelligent than we are now. In each of these cases, however, one must wonder how long exponentially spiking rates of progress, foreseeable or otherwise, could continue before superintelligence appeared as one of the many new products of such an age -- or before slightly greater intelligence helped design superintelligent successors. So, the Singularity has a very reasonable chance of preceding superintelligence, but probably not by much. As other parts of this Q&A discuss, it would be very surprising if greater intelligence proved to be impossible or limited.

On the flip side of this question, that of superintelligence without Singularity, the salient concern is just how "super" and involved superintelligence would be in our own affairs. If superintelligence were surprisingly unimpressive, malicious, or apathetic, its creation would not do much to initiate a Singularity for the rest of us. There are, in fact, a host of such concerns people tend to have about superintelligence, and the most important of these have their own extended responses in this Q&A. For now, let it be said that most of the common concerns are groundless -- based on flawed, if understandable, ideas about intelligence -- and that the rest can probably be dealt with through responsible approaches to research and design.

Friday, June 17, 2011

Dennettian Zimboes


From Cosma Shalizi's review of Daniel Dennett, Brainchildren

The standard objection to Dennett's view of the mind is that it makes no allowance for the difference between creatures with inner lives, namely us, and those without, namely zombies. Zombies might have quite sophisticated dispositions and sensitivities to their external environment, and even to their own information-processing (so the objection goes), but they'd have no inner experience --- they might be able to discriminate red roses from yellow roses, but they'd have no experience of redness, no red qualia. Dennett's quite characteristic response to this objection is to argue that there is no defensible difference between sufficiently nuanced sensitivities and qualia. Consider the case, he asks us, not of zombies per se but of zimboes, who are behaviorally just like us conscious human beings, but have no inner lives. Zombies are the mindless malevolent minions in a Boris Karloff movie; zimboes, when villainous, are more in the Sidney Greenstreet line but, by hypothesis, they show just the same range of heroism, vice, and moral muddle that we do. They'd certainly talk and act as though they thought they had qualia. Maybe brain damage can make people into zimboes --- only they'd insist nothing was wrong! Maybe lots of people (all, of course, normal-seeming) are zimboes --- John Searle, for instance, or this reviewer, or your landlord. They could be everywhere. Consciousness could be a genetic abnormality. Even your best-beloved could be a mere zimbo. In fact, how do you know that you are not a zimbo?

Dennett's answer is that you don't, because, as it happens, you are. Turned around: zimboes, creatures with sophisticated sensitivities to the external world and their inner environment, enjoy just as much consciousness as there is to be had.

Monday, November 29, 2010

Peter Watts (author)




I've just started reading Peter Watts's freely available novel, Blindsight. So far, it's terrific. And, as Charles Stross blurbs:


"Imagine a neurobiology-obsessed version of Greg Egan writing a first contact with aliens story from the point of view of a zombie posthuman crewman aboard a starship captained by a vampire, with not dying as the booby prize."

http://en.m.wikipedia.org/wiki/Peter_Watts_(author)

Wednesday, May 19, 2010

Blindsight (science fiction novel) - Wikipedia, the free encyclopedia


Quotes:

Blindsight (science fiction novel) - Wikipedia, the free encyclopedia

Eighty years in the future, Earth becomes aware of an alien presence when thousands of micro-satellites surveil the Earth; through good luck, the incoming alien vessel is detected, and the ship Theseus, with its artificial intelligence captain and crew of five, is sent out to engage in first contact with the huge alien vessel called Rorschach. As they explore the vessel and attempt to analyze it and its inhabitants, the narrator reflects on his life, strives to understand himself, and ponders the nature of intelligence and consciousness, their utility, and what an alien mind might be like. Eventually the crew realizes that they are greatly outmatched by the vessel and its unconscious but extremely capable inhabitants.




    Thursday, July 16, 2009

    Accelerando


    I just finished reading Accelerando by Charles Stross today. My two-word mini-review is: Wow, damn. Slightly longer: I'd have to rank this up there with Sterling's Schismatrix, Stephenson's Snow Crash, Vinge's A Fire Upon the Deep, and Egan's Diaspora as far as mind-bending idea-saturation goes. This is a depiction of a post-singularity/post-human future thoroughly informed by contemporary experience with the internet and related technologies. Stross also demonstrates quite a bit of familiarity with philosophy of mind, especially Dennett's (the Dennettian notions of zimboes and Cartesian theaters get put to work). (There's quite a bit of Clark/Chalmers extended mind stuff, too.)

    It will take me a while to fully digest all of Stross's ideas relevant to the Alternate Minds project (e.g. his treatment of group minds and virtual minds), but I'm especially impressed right now with his depiction of transcendent intelligences and the threat they pose to the enhanced-but-still-human post-humans (and the development of what Stross calls "cognitive anti-bodies" and what I call "anti-minds").

    Monday, July 6, 2009

    ZALGO!

    The "ZALGO!" meme, explained and compiled at the following links:


    Some examples:




    Thursday, June 25, 2009

    Of Anti-minds and the Swarm


    Does it take a mind to detect a mind? If there could be a principled answer to this question the implications would be huge for the philosophy and science of mind.


    Consider that so much of science depends on the unintelligent detection of unintelligents. Hydrogen samples are not particularly intelligent. Further, mechanisms capable of detecting the presence of hydrogen need not themselves be intelligent.


    Maybe part of being a natural kind is that the unintelligent detection of instances of that kind is possible. Jerry Fodor has suggested that non-natural kinds like crumpled shirts or doorknobs can only be detected by minds. You have to be the sort of thing that knows a bunch of stuff in order to "light up" in the presence of a doorknob.


    In the Sterling short story "Swarm" (excerpts here), the Nest is an asteroid that is mostly just a big super-organism that wanders the universe and, whenever it is "invaded," assimilates the invaders. Most of the diverse species in the asteroid were once representatives of vast space-faring technological cultures that, when they encountered the Nest, got taken over, reduced to unintelligent animals, and integrated into the Nest ecology inside the asteroid. Swarm is an intelligent organism activated in certain circumstances for the protection of the Nest. Swarm explains how ultimately useless intelligence and consciousness are and suggests that the Nest is entirely unintelligent, and that the Nest grows a new Swarm whenever an intelligent invader needs to be dealt with. Once the intelligent invader is dealt with (rendered into a dumb slave animal), Swarm self-destructs, being no longer needed.


    It occurred to me that Swarm was to minds what antibodies are to germs, so I coined "anti-mind". It also occurred to me that if Swarm was right that, prior to the activation of Swarm, the Nest group organism was truly non-cognitive, then whatever mechanism activates the growth of a new Swarm must itself be an unintelligent mechanism. So, the idea of an anti-mind is the idea of a thing that is not a mind but is capable of detecting minds. But this leads to what strike me as some pretty interesting philosophical questions: Is there any way a dumb mechanism can detect the presence of intelligence? Can an unconscious mechanism detect the presence of consciousness?


    If Dennett is right, intentional systems are detectable only from the intentional stance, which I take to entail that only minds can detect minds. If a lot of qualia-freaks are right, the only way to detect the presence of qualia is to have some yourself, and thus only consciousness can detect consciousness.


    If these remarks are correct, the implications for science fiction are obvious: the "anti-mind" in the Sterling story is impossible. But enough about fiction: what about science? If the impossibility of unintelligent detection entails that the kinds that are intelligently detected are non-natural, then is a full-blown science of such kinds thereby doomed?


    Excerpts from Sterling's Swarm





    åņŧį-mįņd


    Originally uploaded by Pete Mandik.



    Excerpts from Bruce Sterling's Swarm



    "You are a young race and lay great stock by your own cleverness," Swarm said. "As usual, you fail to see that intelligence is not a survival trait."


    Afriel wiped sweat from his face. "We've done well," he said. "We came to you, and peacefully. You didn't come to us."


    "I refer to exactly that," Swarm said urbanely. "This urge to expand, to explore, to develop, is just what will make you extinct. You naively suppose that you can continue to feed your curiosity indefinitely. It is an old story, pursued by countless races before you. Within a thousand years, perhaps a little longer, your species will vanish."



    "Intelligence is very much a two-edged sword, Captain-Doctor. It is useful only up to a point. It interferes with the business of living. Life, and intelligence, do not mix very well. They are not at all closely related, as you childishly assume."


    "But you, then you are a rational being--"


    "I am a tool, as I said. When you began your pheromonal experiments, the chemical imbalance became apparent to the Queen. It triggered certain genetic patterns within her body, and I was reborn. Chemical sabotage is a problem that can best be dealt with by intelligence. I am a brain replete, you see, specially designed to be far more intelligent than any young race. Within three days I was fully self-conscious. Within five days I had deciphered these markings on my body. They are the genetically encoded history of my race. Within five days and two hours I recognized the problem at hand and knew what to do. I am now doing it. I am six days old."



    "Technology, though I am capable of it, is painful to me. I am a genetic artifact; there are fail-safes within me that prevent me from taking over the Nest for my own uses. That would mean falling into the same trap of progress as other intelligent races."




