
Archive for the ‘philosophy’ Category

Prosthetic Technology Presentation

March 4, 2013

This is a presentation I gave at the William S. Boyd School of Law in preparation for the Southern Association for the History of Medicine and Science conference in Charleston, South Carolina.

It covers the history of prosthetic devices, current prosthetics, future trends, and ethical questions.

 

Slides:

https://docs.google.com/file/d/0B2QzBzjmgxN5RktFOHlpaWQ5VFk/edit?usp=sharing


Musings On Robot Sex Dolls and Companions

May 13, 2012

The currents of the internet work in odd ways; this past week the theme seems to be robot sex. Since I have had it on the brain, I figure I will contribute to the trendiness and throw my own 2c in. (Just as a note, I will indicate any link that is explicitly Not Safe For Work).  I am going to blur the line a bit between just discussing robot sex and discussing robot companionship, a somewhat more involved relationship than the purely physical.

It seems to me there are essentially three main questions when it comes to human-robot sex. First, can we build a machine that anyone would want to have sex with? Second, how “intelligent” should that machine be? Third, is this just a fetish for weirdoes?

Technical Feasibility:

Not only can we build robots that people want to have sex with; we already have.

Certainly, there are all manner of devices people use for sexual pleasure, but I want to focus on machines more sophisticated than your average vibrator.

The aptly titled fuckingmachines.com (NSFW) is a pornographic site founded in 2000 that features videos and pictures of women having sex with robots that are not particularly technically advanced, and certainly not on the level of a sophisticated android sex-bot. Think BattleBots for the bedroom. Despite the lack of sophistication, these are industrial pieces of hardware. For the home user, somewhat tamer versions of machines built for pleasure are available from mainstream websites, like this “Love Glider Sex Machine” from Amazon.com (NSFW).

Andydroids.com (NSFW) has a number of both male and female android dolls for purchase. Although the website is not well constructed, this page (NSFW) seems to show various servos, circuit boards, and otherwise fairly advanced robotics working together to create a somewhat lifelike robot. Less sophisticated, but perhaps more lifelike, are Real Dolls (NSFW), in production since 1996. Real Dolls are as close as I have seen to human-looking sex bots, but are still a long way from being indistinguishable from a human.

The most realistic robot that I have yet seen (though it is not designed specifically for sex) is Geminoid F from Osaka University’s Professor Hiroshi Ishiguro. This robot can smile, talk, move, and appears very lifelike. According to this video, she even has “basic emotions and behaviors” programmed in. The biggest problems that I can see from the demonstration videos are that (1) the robot might be firmly entrenched in the “uncanny valley,” (2) her movements are still a little jerky, and (3) her software is highly advanced, but hardly lifelike.

The uncanny valley is a hypothesis that as robots become more human-like, a human observer’s emotional response becomes more positive and empathetic. At some point, however, the robot is *too* lifelike, and a feeling of revulsion quickly replaces the positive and empathetic emotional response. If the robot becomes yet more lifelike, to the point of being indistinguishable from a human, the human observer’s emotional response again becomes positive and empathetic. Thus, to have a sex bot that anyone would actually want to have sex with, the robot is going to have to be on one side or the other of the uncanny valley: either not particularly lifelike, or extremely lifelike. For a robot that is expected to be more than a sex toy (say, one that a human might want to be partnered with), the robot would have to be extremely advanced and nearly indistinguishable from a human being.

Jerky movements can be compensated for by ever-better servos and other methods of movement. Popular Science, for instance, recently reported on Nobuhiro Takahashi and the University of Electro-Communications’ new robotic butt that responds to “slaps, caresses, and finger pokes.”

The video is a little creepy, but shows the sort of fine ‘muscle’ movement that Geminoid F lacks; movement that could be very useful in other parts of the robot as well.

ExtremeTech posted an article about Kissenger, a telepresence robot designed to allow two humans to kiss across great distances through a robot. Although this is hardly more advanced than previous robots, it does suggest that humans are willing to at least attempt to transmit an emotional connection through a robot. In addition, as ET points out, how much of a stretch is it from kissing a robot with another human on the other side to kissing a robot controlled by an A.I.?

This ScienceDaily article highlights synthetic skin that could, one day, allow a robot to feel. Even if we assume that there are no qualia (roughly: experiential consciousness) behind a robot’s feeling, all the data streams involved in transmitting some kind of feeling could be very useful for triggering micro-movements in various parts of the skin, perhaps even including subtle changes like goose bumps.

Technically, I think we are about there. Some more materials development (in particular, a temperature-regulation system and a lubrication system would be two huge upgrades that I have not seen), some finer muscle control, and some more realistic design, and robots might just climb out of the uncanny valley. However, what about the software side of the robot?

A.I. and Sex-Bots:

The next question is how much artificial intelligence a robot companion ought to have.

On one end of the scale, we have Real Dolls: essentially human-looking mannequins without any sort of robotics or artificial intelligence. These sorts of sex-bots are fine as far as they go for purely physical entertainment, but most people probably will not develop any emotional connection to their toys (especially if they hang their Real Doll by the “removable neck bolt,” as the FAQ suggests).

Towards the middle of the scale, and likely right at the edge of our current capability, we have Geminoid F: a robot with basic emotions programmed in that can spontaneously generate new reactions to situations. The jerky physical movement is mirrored by jerky emotional reactions; they are broadly appropriate, but not finely tuned enough to seem human.

Ideally, it seems like the perfect robot companion ought to have emotions that at least mimic human emotions very well; the ability to smile, wink, and bite their lip at just the right time and have something that at least seems plausibly like a twinkle in their eye. Perhaps complex human-based personality profiles could be uploaded that allow the robot to seem very much like a human being, albeit with customizable settings for each individual user to account for differing tastes. Maybe the robot could exhibit this personality outside of the bedroom as well; transforming a sex robot into something more like a personal companion or even a partner.

However, it seems important to limit both sex robots and companion robots to non-conscious levels of intelligence, most importantly because I think that cognitive criteria are the defining hallmarks of a “person,” and that a robot with actual consciousness ought to be considered a person. If we think it is wrong to keep people as sex toys (and we certainly do), then I cannot see the same behavior being justifiable for conscious robots.

However, even outside of the moral personhood angle, a conscious robot would have something like free will, or at least clearly articulable preferences. If the goal of a sex-robot or companion robot is to have the ideal partner, then we certainly don’t want our robot telling us ‘no’ or ‘I’m not in the mood’ (unless we program that in for the sake of more realistic behavior). We want to be able to program in the individual desires and preferences that make the robot ideal for each of us, and a robot with free will would presumably override our preferences with its own fairly often. A robot with true artificial intelligence would not have many advantages over a human partner.

In short, much like the physical problem of the uncanny valley, we want a robot intelligent enough to seem human-like without actually being conscious enough to be a person.

Who Would Want A Sex Robot?

We can dispense with the obvious fairly quickly: people with intimacy issues, people with various kinks and fetishes, and those who just want sex without everything else that often comes with it would probably be first in line for a very realistic sex-bot. ExtremeTech recently wrote an article about robot prostitutes that argues that robots could take over the prostitution industry (wouldn’t a sex-bot be cheaper over the long run, after all?) while also reducing human trafficking, pedophilia, and other sex crimes.

I think, however, a compelling case can be made that more than just the socially awkward and the sexually deviant (in the clinical sense) would appreciate a sex-robot. Dick Pelletier recently wrote a piece for IEET highlighting a number of authors who have argued just that, including tech luminary Ray Kurzweil: “Author Ray Kurzweil says tomorrow’s ‘droids could quickly learn to flesh out our positive feelings, providing an addictive allure almost impossible for us to resist.” Indeed, with ruthless, cunning efficiency, a robot with sophisticated enough software could read the various biometric signals that humans give off, allowing it to tailor its personality to preferences its owner may not even know they have. Moreover, like any good device, the robot would presumably become more accurate over time, and change as its owner does. This sort of adaptive learning is an ingenious alternative to forcing the operator to think of all of their own preferences and program them into their robot companion; something humans have a difficult enough time expressing to each other.
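
To make that concrete, here is a minimal sketch of one way such adaptive preference learning could work; the class, traits, and feedback scale are all invented for illustration, not drawn from any real companion-robot system:

```python
# Hypothetical sketch of adaptive preference learning for a companion
# robot. All names and numbers here are invented for illustration.

class AdaptivePersonality:
    """Drifts personality traits toward whatever the owner responds to."""

    def __init__(self, traits, learning_rate=0.05):
        # traits: dict mapping trait name -> intensity in [0, 1]
        self.traits = dict(traits)
        self.learning_rate = learning_rate

    def update(self, trait, feedback):
        """Nudge one trait using a signed feedback score in [-1, 1],
        as might be inferred from biometric cues (smiles, heart rate)."""
        nudged = self.traits[trait] + self.learning_rate * feedback
        # Clamp so repeated signals converge rather than run away.
        self.traits[trait] = max(0.0, min(1.0, nudged))


robot = AdaptivePersonality({"playfulness": 0.5, "formality": 0.5})
robot.update("playfulness", +0.8)  # owner smiled: be slightly more playful
robot.update("formality", -0.6)    # owner seemed bored: dial formality down
print(robot.traits)  # {'playfulness': 0.54, 'formality': 0.47}
```

The small learning rate is the point: no single signal swings the personality, but weeks of signals would, which is roughly what “becoming more accurate over time” would have to mean in practice.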

The allure of the perfect seducer or seductress is vast, and not to be underestimated. No matter how fabulous your human partner is, there is bound to be *something* about him or her that is not 100% ideal. Maybe they snore. Maybe they like to cut you off while you are talking. Maybe they just forget to put the toilet seat down. Whatever it is, trivial or serious, there is some way (and, likely, a number of ways) in which they are not ideal. Of course, humans overlook these qualities in each other all the time; coping with each other’s idiosyncrasies and quirks (which might even become endearing after a while) is largely what human relationships are about, and provides an extra level of intimacy. Nevertheless, even if your human partner *is* wonderful and you cannot think of a single thing you would change about them, they are still only one personality.

An interesting implication of robot companions is that there is little reason why multiple personalities could not be installed within one physical frame, and those personalities could be changeable at will. Maybe you want a sultry professional for an office meeting, a wild party girl for a Halloween party, a tomboy for a Super Bowl party, and a quiet intellectual for a lazy Sunday afternoon. Perhaps you want a nice gentleman for dinner, a jock for the pool, and a real alpha-male for bed later. A robot companion can switch effortlessly into different personalities, each tailored to your specific desires. These personalities could even be ported into different physical frames for those who desire a different physical appearance every now and again.
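
In software terms, a minimal sketch of what those portable personalities might look like (every name here is hypothetical; this reflects no real product):

```python
# Hypothetical sketch: personality profiles as portable bundles of settings
# that any compatible robot frame could load, swap, or carry to a new body.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PersonalityProfile:
    name: str
    traits: dict = field(default_factory=dict)  # e.g. {"energy": 0.9}

@dataclass
class RobotFrame:
    frame_id: str
    active: Optional[PersonalityProfile] = None

    def load(self, profile: PersonalityProfile) -> None:
        # Swapping personalities is just swapping which profile is active;
        # the same profile object can be loaded into a different frame.
        self.active = profile
        print(f"{self.frame_id} is now running {profile.name!r}")

party_girl = PersonalityProfile("wild party girl", {"energy": 0.9, "formality": 0.1})
intellectual = PersonalityProfile("quiet intellectual", {"energy": 0.3, "depth": 0.9})

frame_a = RobotFrame("frame-A")
frame_a.load(party_girl)                  # Halloween party
frame_a.load(intellectual)                # lazy Sunday afternoon
RobotFrame("frame-B").load(intellectual)  # same personality, different body
```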

Beyond the physical and personality advantages, there could be greater emotional security from a companion bot as well. From Dick’s IEET article: “A robot partner would be the perfect mate, never showing boredom or being inattentive, Levy says. You will always be the focus and centerpiece of their existence and you never need worry about their being unfaithful or going astray, because loyalty and being faithful are embedded in their programming.” With a divorce rate hovering somewhere around 50% in the United States, human relationships seem to be the emotional equivalent of a coin flip (and subsequent relationships fare even worse). Never mind the cost of alimony and child support.

In short, I think that with advanced enough A.I. (but not too advanced, per the above) sex or companion robots could very well become the ideal mates for humans. Human-robot relationships could be purely sexual, or they could become more like true companions. Either way, such human-robot interactions do not necessarily mean the end of human-human interactions, or inevitable extinction for lack of reproduction. There are, after all, plenty of children to adopt, and there is little reason to think that the technology involved in creating children will fail to advance as rapidly as other technologies.

We are still a long way from this sort of interaction, but the upsides seem considerable.

Gadgets, Brains, and Healthcare

January 5, 2012

Only five days into 2012, and mind-blowing articles are already dropping.

According to Pentagon scientists (reported by Physorg.com and others), Cornell researchers have created a device that splits beams of light, hiding an event from sight. They’re calling it a time cloak. For around 40 picoseconds (trillionths of a second), the scientists are able to create a gap in the light by using a time-lens to split the light into slower red and faster blue components. This makes anything occurring in the gap invisible. In theory, scientists could make the device effective for a few millionths of a second, or perhaps even a few thousandths of a second, but a device large enough to erase a whole second would need to be approximately 18,600 miles long. Even for someone like me, who envisions mechanical implants for humans and perhaps even brain uploading into a computer, this article is fantastic. I’d love to see some confirmation of this technology and a better explanation of how, exactly, it works. Still, it seems it won’t be a very effective Ring of Gyges anytime soon, if at all.

Researchers in Japan, meanwhile, have created super sensitive sensors out of carbon nanotubes. The sensor is flexible enough to be woven into clothing, and can be stretched to three times its normal size. In addition to rehabilitation uses, this sort of sensor seems great for the blossoming world of controllerless video game systems like the Xbox Kinect. Such sensors are also implantable into people receiving organs (biological or otherwise) or could just be used to record biometrics in your everyday clothing.

Finally, Klaus Stadlmann gives a TED Talk about inventing the world’s smallest 3-D printer. It seems to be about the size of a PlayStation 2, and can print in incredible detail. I thought the talk was a little dry, but still interesting.

There have been several interesting brain articles in the last few days. Forbes ticks down their top-10 brain articles from 2011, including memory-assisting chips, using magnetism to affect moral judgments, potential treatments for people suffering from Alzheimer’s disease, and thought-controlled apps for your cell phone. Although the brain is still largely mysterious, scientists are making massive amounts of progress on all fronts yearly.

Discover Magazine reports that anesthesia might be the key to better understanding how consciousness works. Apparently it’s not unusual for patients under anesthesia to wake up, then go back under and never remember that they woke up. I’ve talked a bit about the problem of recognizing consciousness before (one essentially has to rely on reports of consciousness, but consciousness itself cannot be directly tested for) and this article does a good job of reiterating the problem. The researchers hope that by putting people under and eliciting subjective reports of consciousness after the fact, they will be better able to pin down just what it is that makes a person conscious.

Medicalxpress.com posted an article in December asking Why Aren’t We Smarter Already? The authors suggest that there is an upper limit to various brain functions, and that while drugs and other interventions could potentially bring low-scoring individuals up, those already at or near peak performance would see little or no gain from the same drugs. If this is right, then there is reason to doubt that mind-enhancing drugs (say, Adderall) could make the smartest people even smarter. Yet the article only talks about improving the mind that we have, and not about whether it is possible to create an artificial brain (or introduce artificial implants into a biological brain) that *could* break past these natural barriers. It’s no secret that the body is well, but not optimally, designed; that the same is true of the brain shouldn’t really be surprising.

TechCrunch offers a predictive list of technologies coming in 2012 in an article penned by tech luminary and SingularityU professor Daniel Kraft. According to Daniel, A.I. will become increasingly helpful in diagnosing diseases, from cheap phone apps that detect cancer with their cameras to A.I.-assisted diagnoses in remote villages. 3-D printing will continue to advance, massive amounts of patient data will be shared on social network sites like patientslikeme.com, and videoconferencing technology like Skype will increasingly allow doctors to examine patients without an office visit. All good things.

Last, but not least, a team of scientists at USC has recently mapped an entire human genome in 3-D. The researchers hope to be able to evaluate genomes not just based on their genetic make-up, but also their physical structure. Because genomes take up three dimensions in the body, a 3-D map should be a lot more accurate than the standard model.

 

Human Exceptionalism and Personhood

August 12, 2011

I run a general ‘transhumanism’ RSS stream through a Google News widget, and occasionally it brings me stories that I would never have found on my own. This last week I came across a blog post from Wesley J. Smith called The Trouble With Transhumanism. Mr. Smith was responding to an article by Kyle Munkittrick, but I was primarily interested in Mr. Smith’s argument against expanding the definition of personhood to include some animals and advanced A.I., because I’m in the process of researching a paper that will argue exactly the opposite.

I’ve gone back and forth with a few people in the comments section under Mr. Smith’s article, and there are great points being made in every post (maybe mine are mundane). If you have time, it’s worth checking out. The conversation tended toward transhumanism more generally, however, so I want to address some thoughts here about personhood specifically. I’ll try to outline Mr. Smith’s argument against personhood rights for animals and A.I.s first, but he’s such a prolific writer that I’ve only been able to absorb a smattering of his writing. I asked him to double-check my understanding, but he hasn’t gotten back to me yet (busy writing, I’m sure), so I’ll edit this, or he can clarify in the comments, if I’m misrepresenting his arguments.

Mr. Smith advocates for Human Exceptionalism, saying essentially that because humans are unique, they are exceptional and have unique moral worth. Specifically, he says “Because we are unquestionably a unique species–the only species capable of even contemplating ethical issues and assuming responsibilities–we uniquely are capable of apprehending the difference between right and wrong, good and evil, proper and improper conduct toward animals.” He says further that “[a]n embrace of human exceptionalism does not depend on religious belief” and “[a]fter all, what other species in the known history of life has attained the wondrous capacities of human beings? What other species has transcended the tooth-and-claw world of naked natural selection to the point that, at least to some degree, we now control nature instead of being controlled by it?” He seems also to believe that human exceptionalism is the only acceptable basis for ethical behavior: “Or to put it more succinctly if being human isn’t what requires us to treat animals humanely, what in the world does?” and “Without the conviction that humankind has unique worth based on our nature rather than our individual capacities, universal human rights are impossible to sustain philosophically.” He also seems to favor including the unborn in the class of people: “Those individuals who happen to lack those attributes have either not developed them yet (embryos, fetuses, infants), or have illnesses or disabilities that impede their expression. But those attributes are unique to the human species, they are uniquely part of our natures.”

If this is Mr. Smith’s argument, it seems to me there are several problems with it.

First, he seems to indicate that mankind has unique worth just by virtue of being human. I take this to be a biological argument, because he rejects the idea that we ought to assign personhood based on specific attributes. Plenty of studies have compared human DNA to chimp DNA and have concluded that we share somewhere between 95% and 98% of the same genetic code. If our exceptional nature is biological, then it seems like either chimps are 95-98% exceptional (and should thus have nearly all of the same rights; an argument I assume Mr. Smith rejects) or something in the 2%-5% of human DNA that differs contains the exceptional parts, while chimps lack those parts and are thus not exceptional and undeserving of personhood-related rights. There are also minor genetic differences between human beings: somewhere between .1% and .5%. Presumably (if ‘universal human rights’ are based on DNA) none of the ‘exceptional’ traits are contained in the variation between human beings.
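
Spelling out the arithmetic I rely on below (the percentages are the ones just cited; the subtraction is my own back-of-the-envelope):

```latex
% Chimp-human difference: roughly 2%-5%; human-human variation: 0.1%-0.5%.
% Netting out the human-human variation, which presumably carries no
% "exceptional" genes:
\[
2\% - 0.5\% = 1.5\%
\qquad\text{and}\qquad
5\% - 0.5\% = 4.5\%
\]
% hence the 1.5%-4.5% band of candidate "exceptional" DNA discussed below.
```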

So, assuming the relevant difference is that 1.5%-4.5% between chimps and humans (I imagine that the difference between chimps and humans is relevant, but the difference between individual people is not), what about the DNA could account for the exceptionalness of humans? Taking Mr. Smith’s statement that exceptionalism doesn’t depend on religious belief at face value, there must be something about the chemistry of the DNA that makes us exceptional. Now that we’ve mapped human DNA, it’s evident that there is no purely exceptional gene: the building blocks for that 1.5%-4.5% are the same as the building blocks for the other 95.5%+ of genes. That indicates to me that there is nothing exceptional about the genes themselves, although those genes might combine in such a way as to allow for exceptional traits. After all, the DNA difference has allowed humans to send rockets to the moon while chimps are still struggling with sign language. If that’s right, then the biological argument is wrong and gives way to the attribute-based arguments that Mr. Smith rejects. Even if the biological argument were right, it’s unclear what about a DNA sequence carries with it inherent moral worth.

If, instead, we look to attributes: our (at the moment, so far as we know) unique human attributes could well be exceptional and could serve as a demarcation point between persons and non-persons. This makes sense, though it does ring a bit of speciesism on its face (we get to pick the attributes that make us special, and they just so happen to be the traits that we have). That doesn’t make the demarcation wrong, just suspect. What it seems to me we need to do is create a list of those attributes that make us special, so that we have an objective set of criteria for admitting other beings into personhood if they meet those criteria. Plenty of other people have done this, though I’m not yet at the point in my research where I’m prepared to illustrate concrete examples (I’ll probably readdress the question over the next few months as my research continues).

Whatever those criteria are, it doesn’t seem that we ought to cling to the idea that the uniqueness of humans alone makes us special. It also doesn’t seem to make much sense to suggest that without our unique status, morality and animal rights cease to exist. It seems perfectly coherent to suggest that animals, plants, humans, and computers that don’t meet the criteria of personhood are still special and worth protecting, even if we don’t consider them equals and people. A dog isn’t a person, but it’s also not worthless. If chimps suddenly became people (and there is research to suggest that the smartest chimps are at least as smart as 3-4-year-old humans, and we certainly consider young children people), dogs still wouldn’t be worthless; chimps would just become a little more special. We already have standards of decency when it comes to animals (animal cruelty laws, anti-poaching statutes, research standards, etc.) and the inclusion of chimps into the ‘people’ category doesn’t seem to affect those standards directly. Also, just because chimps are people doesn’t mean that they get the full gamut of ‘rights’ that (some) humans currently have: they won’t be driving any time soon, or voting, or buying houses. We don’t, after all, let 3-4-year-old humans do those things. Even among people, some have rights that others do not: felons can’t run for office or own firearms, 15-year-olds can’t (legally) drink, smoke, or drive, and the clearly insane can’t have contracts enforced against them except in limited circumstances. So, even among humans, there are gradations of personhood rights that we (largely) deem acceptable.

Mr. Smith seems to reject transhumanism more generally because he sees it as detracting from the essence of being human. But, to the extent that we can create a list of what attributes make people special, and to the extent that transhumanism can enhance (or at least not detract from) those traits, it seems like transhumanism could make people more, rather than less, special. Indeed, if transcending “the tooth-and-claw world of naked natural selection” makes us special, then it seems like transhumanism, in particular, ought to make us really special, precisely because it allows us to overcome even more. And to the extent that advanced A.I.s and animals can meet the criteria we create, they too should be people.

The main thrust of my paper’s argument (as currently conceived) is this: A ‘normal’ adult, 100% biological human being is certainly a person. If that person wears a pair of night-vision goggles, extending their vision beyond what humans normally have, they are still a person. If they lose both their legs and get prostheses, they are still a person. If we replace their arms, they are still a person. Indeed, it seems like we can continue to replace and replace and replace biological parts with mechanical parts and, with the possible exception of the brain, the now-augmented human remains a person. Even in the brain, humans who have anti-seizure implants installed, or chunks of brain removed, remain people. So it seems like we could start with a biological human, turn that human into a machine piece by piece, and the personhood never goes away. But Mr. Smith argues that if we start with the machine, even if it has all the same attributes as the person-turned-into-a-machine, the machine is not a person.

I just can’t see why.

Why Technology Is Not Bad For The Brain

June 3, 2011
The Mark News recently published a piece by Susan Greenfield of Oxford University’s Institute for the Future of the Mind. She argues that technology is having adverse effects on the human mind for a number of reasons. First, computers are replacing full-on five-sense experience, like the kind we get from the “real world,” with two-sense experience (namely, sight and hearing), because that’s all computer monitors are capable of producing. Because monitors only have two senses to work with, they overload those two senses in order to keep us interested, and in the process we move from understanding to simple information processing as we try to make sense of the rapidly flashing sights and sounds emitted by our monitors. This information processing leads to shallower understanding, and “infantilizes” our brains. She also tacks on a complaint about Twitter status updates being trivial and attention-seeking. She ends by arguing that by spending more time in front of the computer, people are learning fewer “real life” skills, like how to make eye contact and socialize with our peers.

 

With due respect, I think she’s completely missed the mark (no pun intended). I’ll address each of her points in turn.

 

She is undeniably correct that monitors are limited to two-sense output. But this isn’t unique to monitors: televisions, movie screens, etc. are also limited to two senses. Books, before that, were also limited to either two senses (if you count the tactile sensation of holding the book, although then it seems like computer keyboards ought to give computers three-sense input) or a single sense: sight. If less-than-full sensory input negatively affects the brain, then the problem Greenfield is describing has been going on for quite some time; at least as long as the general populace has had ready access to books.

 

On the other hand, she’s only right as a matter of contingency. Even granting that monitors are limited to two senses, she isn’t taking into account tactile feedback devices that allow for more sensory input (as a simple example, vibrating “rumble” controllers have been out for 25 years or so), and she’s discounting the relatively short period of time for which this limitation will hold. Sight and sound are easily programmable. Basic tactile feedback is easy enough, and more complicated feedback is starting to come around: see the “kissing machine” I wrote about previously, and teledildonics in general. See also the Wii and the PlayStation Move, which require users to hold motion controllers and actually move about in an approximation of the actual activity. See also Guitar Hero. Taste and smell are the most complicated to program, because we would need either something like a direct neural connection to simulate smells and tastes, or some kind of taste-and-smell mixing packet integrated with the machine, such that it can produce appropriate smells and tastes on demand. With full-immersion VR we will probably get there (and I imagine smell will come first, because it is easier to pump a gas than to layer on a taste), but one wonders if expecting taste and smell from a computer is any more sensible than expecting taste and smell from a car.

 

Greenfield is also mostly right that because monitors only have two senses to work with, they (potentially) maximize their output to those senses. However, the extent to which this is true depends on what the computer is being used for. If you’re playing Call of Duty, with twenty or thirty players running around a chaotic map firing guns, tossing exploding grenades, and calling in attack choppers to rain death down on every enemy on the map, then yes, sight and sound are being overloaded to a point where information processing is more important than cognitive understanding. These are “twitch-based” games for a reason: they rely more on instinct than understanding, although some tactical knowledge never hurts in warfare games. Further down the spectrum are turn-based strategy games, where the game pauses to allow the player to set up their next move and then resumes to show the outcome of the player’s choices. These have periods of downtime where cognitive processing is very important; they are strategy games, and so understanding and appropriately reacting to whatever situation the game presents is vital. Like chess. Then there are moments of flashing sounds and lights, where the decision is simulated and pays off in sensory experience. See games like Civilization or the turn-based Final Fantasy games (which are more role-playing games than turn-based strategy games, but still require considered thought on a pause screen during battle). At the far end of the spectrum is a simply laid-out webpage, which seems, to me, no more complicated than a book. How “overloaded” are your senses as you read this? How much thinking are you doing about the content of what you read, and how much twitch-based information processing is expected? Computers (and video game machines, which I consider essentially the same) offer a variety of output options, some more information-process-y and some more cognitive.

 

If information processing infantilizes our brains, then it seems we have bigger problems than computers. How does driving (or riding a horse) differ from a twitch-based game in terms of the amount of sensory input that needs to be dealt with immediately? Indeed, if computers are limited to two senses, then ought we not be glad that our brain has less information to process? Shouldn’t we worry more about full-on experience than the admittedly limited output offered by computers? Each of us is constantly trying to make sense of a “fast and bright” world consisting of whatever sensory inputs are crashing through our eyes, nose, and ears and washing over our tongues and skin. If this torrent of information is a bad thing, as Greenfield implies, then ought we not be glad of the comparatively restful stream of information offered by even the most chaotic two-sense games? Ought games not be restful to the brain, as many gamers think, instead of hyperactive?

 

The understanding we gain from computers depends entirely on what the computer is being used for. If one spends their entire time playing Farmville, there will be some cognitive development, but I expect that there is only so much knowledge to be gained from the game. If, however, one spends all their time reading the Stanford Encyclopedia of Philosophy, or even Fark.com, then much more knowledge-based cognitive development is possible. The computer is no more of a brain-sucking device than a television, and the information derived from either depends on what the device is being used for: watching the Science Channel is going to lead to more cognitive development than Jersey Shore.

 

Greenfield’s final argument, that interacting via computer detracts from “real world” skills, is well taken but ultimately, I suspect, mistaken.

 

First, like her sensory argument, her argument is at best only contingently true. As video calling and holographic representation become more popular, eye contact and traditional “real world” skills become more and more useful online. I’ll argue below that useful social skills are being learned through the computer medium already. Finally, her argument depends on the “real world” remaining the primary method of communication between people. Just as someone who is socially awkward (perhaps, as Greenfield implies, because of time spent on the computer) has difficulty maneuvering through social situations in the real world, so too does a computer ‘noob’ have difficulty maneuvering through social situations in the virtual world. How many of us have chastised (or at least seen chastised) someone for formatting their emails poorly, or for typing in all caps? How difficult is it to ‘get’ the internet if one doesn’t understand basic internet lingo like emoticons ( 🙂 ) and acronyms (LOL)? Even a basic understanding is not enough to skate by effortlessly if one doesn’t pick up on more subtle internet cues that evidence, if not (to borrow an old term) internet leetness (itself an example of internet culture), then at least familiarity and proficiency. Does anyone who spends all their time in the “real world” understand the importance of Anonymous? If one constantly runs across acronyms they don’t understand (omgwtfbbqpwn), or memes they don’t get (lolcats have become pretty mainstream, but what about technologically impaired duck, or Joseph Ducreux?), or anything on (sorry to bring them into it) 4chan, then they are just as lost in the virtual world as the socially awkward are in the real one. Technologically impaired duck is, as a whole, about people who don’t ‘get’ internet culture.

 

Beyond pure information, I’ve seen that people who have spent their entire lives offline still have difficulty separating good sources from bad, or valid emails from spam and scams. In short, there is a different culture on the internet, one that could, if the internet becomes the primary method of communication, be just as rich as the “real world,” and where picking up subtle cues through emoticons or acronyms is just as important as making eye contact or cracking a joke in the “real world.”

 

Finally, I don’t think that “real world” social skills are being lost simply by being online, as Greenfield claims. Certainly, for all its downsides, Facebook has taught people to network better. Prior to the internet, one was limited to communicating with people in roughly the same geographic area, plus perhaps a few others if one had traveled extensively for work or school. It was hard to keep in touch, and people had to make an effort to call or write to one another. Now, it’s much easier to keep in touch with far-flung friends, to know without much trouble what is going on in their lives (to the extent that they make such information available), and to drop a note to say hello. Games like World of Warcraft force groups of people to work together, teaching leadership skills and group dynamics along the way. The internet is about connecting people with each other, and that is a skill in vogue both online and offline.

 

It seems to me that Greenfield is upset that the method of communication is changing, but I don’t think her (admittedly short) article really explains how technology is damaging the human brain. I have no doubt she’s a smart woman and probably has a lot of academic work to back up her claims; I’d like to see that work and see if it, too, is based on what I consider to be faulty assumptions.

Genetic Coding, Synthetic Brains, and Publicity

April 25, 2011

First, a brilliant (aren’t they all) TED talk by Dr. Fineberg, who explains some of the potential right around the corner for genetic rewriting. Interestingly, I think the question that he repeatedly asks is also the answer to critics who suggest that there will be an enormous (and bloody) opposition to superhumans (or transhumans, or whatever your preferred term is): “If the technology existed to allow you to live another 100 years, wouldn’t you want to?”

For me, the answer is clearly yes. Moreover, I don’t see many other people saying no. In a modern context, there are groups who refuse medical treatment (and other technological progress) for moral or religious reasons: the Amish are the most explicit example, but many other people refuse blood transfusions or medical treatment, or choose not to get an abortion, because the science doesn’t align with their morals or religious views. Except for abortion, the protests are largely silent and don’t get in the way of the rest of us getting our transfusions or treatment. Hardly anyone protests people wearing eyeglasses, or taking insulin, or even getting brain-to-computer implants that allow victims of various full-body paralysis disorders to communicate. It seems to me that the main reason for that is that everyone (or substantially everyone) at the end of the day *wants* to live longer, healthier, more fulfilling lives. And science, through genetic coding and mechanical augmentation, lets us do that.

Note, however, the eugenics undertones in his talk. In this sense, I think the ideas in my earlier post remain plausible when genetic coding is restricted to treating diseases in the unborn (though Dr. Fineberg doesn’t necessarily agree that we should do that), or at least when adults can rewrite their own code ‘on the fly’ while alive (that is, when code changes aren’t limited to the developing human but can be performed on people who are already alive). It is, to be fair, a fine ethical line, and one upon which reasonable people can disagree.

Dr. Fineberg’s talk is below, though I encourage you to visit TED to see other brilliant (but not necessarily futurist) talks.

Second, Science Daily comes through with an article explaining how carbon nanotubes might be used to create synthetic synapses, the building blocks of brains. Two things about this article jump out at me. First, the timeline. In 2006, people at USC started wondering whether we might create a synthetic brain. Five years later, they’ve created artificial synapses. That’s a pretty quick turnaround, even by today’s standards. Second, the numbers. According to the article, the human brain has something on the order of 100 billion neurons, and each neuron has some 10,000 synapses. By my fuzzy math, that means we’d need something on the order of 1 quadrillion (10^15) carbon nanotubes to recreate the structure of the human brain. I hope we can make these in big batches!
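
For the record, the fuzzy math is just the article’s two figures multiplied together:

```latex
% 100 billion neurons, ~10,000 synapses each:
\[
\underbrace{10^{11}}_{\text{neurons}} \times
\underbrace{10^{4}}_{\text{synapses per neuron}}
= 10^{15}\ \text{synapses (one quadrillion)}
\]
```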

Finally, a little shameless self-promotion. Although I have a ‘going public’ post, as of today I’m officially branching out through social media groups. So, if you’re new here, I encourage you to leave your thoughts and, if you enjoy what you read, share with your friends.

Thank you.

What is Experience?

April 19, 2011

Given upcoming law finals, I’m going to punt on writing another long article (at least for today) and instead link to another fascinating H+ article, this time by Eray Özkural.

Join Eray on a fascinating journey through the mind of a bat, a person, an upload, and a machine, via philosophers like Searle and Nagel. But not at the same time: An uploaded bat-person cyborg philosopher is just too much.

Transhumanism and Eugenics

April 15, 2011

I came across an opinion piece in the Catholic San Francisco Online Edition this week, written by Sandro Magister. He was, according to the head notes, summarizing part of a talk by French philosopher Fabrice Hadjadj. Fabrice argues that the term “transhumanism” was coined by Julian Huxley (brother of Aldous Huxley, of Brave New World fame), the first director of the United Nations Educational, Scientific and Cultural Organization (UNESCO) and a supporter of eugenics. This seems to be roughly correct (there is some disagreement, but most sources I can find verify the basic information), and I’ll take it as true for the purposes of this article.

Fabrice argues that Huxley (Julian, for the remainder of the article) coined the term to talk about eugenics without using the dirty ‘E’ word so tarnished by Nazi atrocities. He then goes on to say: “Nonetheless, the same thing [eugenics] is intended: the redemption of man through technology” and “It is precisely a matter of improving the “quality” of individuals, as one improves the “quality” of products, and therefore, probably, of eliminating or preventing the birth of everything that would appear as abnormal or deficient.”

There are, it seems to me, two major problems with Fabrice’s argument.

First, as the wiki linked above indicates, eugenics was a respectable idea up until the end of WWII, when the Nazis perverted it and applied it to traits that most people don’t see as defects. While the Nazis were interested in a so-called ‘master race’ and took eugenics to be the elimination of ‘inferior racial and other undesirable groups’, eugenics prior to WWII focused on a much tamer idea of ‘undesirable traits’, including traits like hemophilia and Huntington’s disease. Even the tamer idea of eugenics carries significant ethical questions concerning people’s right to reproduce, and entails difficult determinations about which traits are ‘desirable’ and which are ‘undesirable’, to whom, and why.

Today, many people would probably agree that traits like autism are largely undesirable (that is, few people if any would choose to have an autistic child, given the choice to have a child without autism) but would balk at the idea of labeling traits like homosexuality undesirable (Fred Phelps and others aside). At its core, however, eugenics seems to be about the right to reproduce: who should, and who shouldn’t, given the genetic code that will be passed on. No less than US President Theodore Roosevelt and Supreme Court Justice Oliver Wendell Holmes supported the general idea, though some of the policies set forth by these otherwise great men might today be considered unethical. See, for instance, Buck v. Bell, a US Supreme Court case from 1927 upholding a Virginia forced-sterilization law for the mentally ill. The point, however, is that the general idea of eugenics might be defensible if one doesn’t immediately point to the Nazis as the primary supporters of the idea; a move known on the internet as Godwinning the argument.

Second, and more importantly, whatever the ethical status of eugenics, it seems to me that most transhumanist ideas represent something different. A large part of transhumanism revolves around individual choice: a particular person might choose to implant a piece of technology, or replace a biological limb with another, or even ingest a pill that rewrites some part of their genetic code such that their own traits are changed. Changing one’s own traits, however, is fundamentally different from telling others that they are not allowed to reproduce for the good of the species. The ethical problems inherent in a eugenic ‘master plan’, one that tells others whether they may reproduce because of the traits they would pass on to their children, are simply not present when an individual, already fully formed, rational, and competent, chooses to change their own appearance or genetic code.

There are non-transhumanist ideas with some similarity to eugenics to which people already subscribe, ideas which don’t involve personal choice but instead choice over another (usually a fetus). For instance, people largely seem to be morally OK with screening their children for ‘undesirable’ traits like autism and Huntington’s disease. Today, a positive result on such a test largely informs the parents that their child might be afflicted with some disease, but there is little that can be done about the disease itself. If the screening is completed early enough, many women are comfortable with the idea of aborting a would-be child, but many others, uncomfortable with the ethical implications, are not. As the traits identified become less and less serious (say, Down syndrome or other life-affecting traits on the ‘serious’ extreme, and hair color on the ‘trivial’ extreme), the corresponding number of people willing to accept abortion as an option drops. Rightfully so, as abortion really is an all-or-nothing scenario, with no middle ground.

Transhumanism presents the possibility of that middle ground and, when transhumanism is viewed as an extension of current medical technology, likely leads to an extension of current medical attitudes. Thus, it might be morally acceptable to rewrite the genetic code of an infant (or fetus, or embryo, or whatever) to remove serious genetic defects, but less acceptable to rewrite the genetic code of an infant to change their hair color. See, for instance, the ‘designer baby‘ controversy. Because genetic rewrites are not necessarily an all-or-nothing affair like abortion is, they give a parent the ability to have a child without a life-altering disease like Down syndrome, without forcing the parents to choose between having a baby with the genetic disease on the one hand and aborting what would become their child on the other.

Further, depending on how many times (and how successfully, cheaply, and safely) genetic rewrites can be accomplished, the moral status of even trivial decisions might become a non-issue. Where traits like hair and eye color, skin tone, and maybe even memory and intelligence can be rewritten at will, particularly through cheap, safe drugs that can be taken at any time, a parent’s preference for a (say) brown-haired, green-eyed child might be but a default setting, freely changeable when that child reaches the age of majority (or when the parents are willing to sign the permission forms). The more traits that can be changed (perhaps sexual preference, gender itself, height, weight, whatever), the less moral stigma attaches to parents choosing any particular traits for their child.

If traits like these are freely, cheaply, and safely changeable, then ‘eugenics’ both ceases to mean what it previously meant (controlling traits through restrictive reproductive permissions) and ceases to carry the attached social stigma (because having an ‘undesirable’ trait, or not, is wholly a matter of personal preference). Far from being code for eugenics, transhumanism might be the idea that makes eugenics itself irrelevant.

Philosophers and Transhumanism

April 9, 2011

Transhumanism is the most revolutionary transformation in the history of mankind; if even a small amount of what leading futurists predict comes true, we will need to redefine the meaning of the word ‘human.’

Below is a paper written for a contemporary philosophy class wherein I urge philosophers to seriously consider the ramifications of transhumanism. Since I wrote this paper, IEET (and others) have published similar pieces indicating that some philosophers are doing just that.

Why Philosophers Should Get Serious About Transhumanism.