Archive

Posts Tagged ‘brains’

Biotic Hands and Programmable Brains

May 20, 2011 2 comments

Every so often I come across an article that really illustrates how near the future is. This week, I came across two of them.

The first article, from Singularity Hub, is about Milo and Patrick, two men who have chosen to have their hands removed and replaced with biotic hands. Both went through extensive surgery (Milo for around 10 years), but eventually each decided that, because the surgeries were ineffective, replacing his hand with a biotic hand was a sensible alternative. These stories are important for at least three reasons.

First, both surgeries were elective procedures in the sense that neither man had his hand replaced to save his life, or as part of a traumatic incident. Both men had biological hands, although they were damaged beyond reasonable use. Elective replacement for limbs is on tricky ethical ground because, for many people, replacing limbs is largely a procedure of last resort. Previously, limbs were removed to prevent the spread of gangrene, to save the person from further infection, or for reasons otherwise necessary to protect the person’s life. Here, two men, each left with one hand far less functional than normal, decided that replacing that damaged human hand with a biotic hand more functional than the damaged one (but, seemingly, less functional than a ‘normal’ hand) made sense.

Second, if two men are able to choose to replace a biological hand with a more functional biotic hand, then others should be allowed to make the same decision. Despite the amazing progress made by Otto Bock (creator of the biotic hand), the hand still doesn’t provide all of the benefits of a normal human hand, and offers only one benefit that a human hand does not: 360-degree range of motion at the wrist. However, the limitations of the hand are technological, and with a sensory feedback system like the one Otto Bock is currently working on, those limitations ought to be overcome quickly. Once a biotic hand is as functional as a biological hand, scientists ought to be able to craft improvements for it: functional upgrades (increased grip strength, additional sensory inputs like the ability to sense electric currents, greater range of motion, enhanced durability) and more cosmetic ones (perhaps a small storage space, or wifi, an OLED screen, or other patient-specific enhancements). Very quickly, a biotic hand will be superior to a normal hand, and not just to a severely damaged one.

Finally, one cannot escape the naked truth that this is what people have had in mind when they used the word “cyborg” for decades. Although it’s true that eye glasses, pacemakers, and seizure-reducing brain implants are all mechanical augmentations to a biological person such that the term cyborg is properly applied, few people tend to think of their uncle with the pacemaker as a cyborg. In part, that’s true because pacemakers are not visible, and even hearing aids are more like eye glasses than biotic hands because they are removable and not an integral part of the human body. These hands, however, are replacing a major part of the body with a clearly mechanical device. The article is unclear whether these hands come with some sort of synthetic skin that masks their metal servos and pivot points, but from the pictures there is just no mistaking that these men are now part robot.

We have reached a point where we can program mechanical devices so that they can communicate with the brain through the nervous system. But what about programming the brain itself?

Ed Boyden thinks he has created a solution for that too. I highly recommend watching the video yourself. The gist is that neurons in our brains communicate via electrical signals. By using engineered viruses to deliver DNA encoding light-sensitive photo-receptor proteins taken from algae into brain cells, Boyden can then shine a light onto parts of the brain, and only those cells carrying the photo-receptors activate, for as long as the light shines. By activating particular groups of neurons, Boyden can stop seizures, or overcome fear responses that would otherwise cripple an animal. Using fiber optics and genetic encoding, Boyden has found a way to direct the brain to act just as he wants: He has, in essence, figured out how to program a brain, or at least how to hack the brain to add or remove particular functionality.

Further, when photo-receptors are implanted and neural activity can be triggered with light, the human brain begins to look even more like a computer. By regulating light inputs, the implanted cells activate and produce particular effects depending on the kind of cell they are. With a basic on-off activation scheme, neurons become a lot like the chips in our computers, which we activate by turning electricity on or off. That on-off sequence is represented by 0’s and 1’s in binary code and scaled up into more complicated programming languages. With an implanted light array, programmers ought to be able to create flashing light sequences that affect the brain in preset ways, essentially writing code that controls parts of the brain. Even if scientists simply read the signals of the neurons, all of human experience ought to be reducible to groups of neurons firing or remaining dormant in complicated patterns. If that is so, then there is no reason why we couldn’t download a stream of our experiences in complete detail, and perhaps eventually upload them as well.
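To make the analogy concrete, here is a purely hypothetical sketch of the idea. Nothing here comes from Boyden’s actual work; the function name, the 10-millisecond pulse width, and the on/off schedule are all my own inventions, meant only to illustrate how a binary string could, in principle, become a light-stimulation program:

```python
# Illustrative only: translating a bit string into a schedule of light
# pulses for photo-receptor-tagged neurons. All names and timings are
# hypothetical, not drawn from any real optogenetics system.

def pulse_train(bits, pulse_ms=10):
    """Map each '1' to a light-on interval and each '0' to light-off.

    Returns a list of (start_time_ms, state) tuples -- in effect, a
    tiny "program" for the tagged neurons.
    """
    schedule = []
    for i, bit in enumerate(bits):
        state = "on" if bit == "1" else "off"
        schedule.append((i * pulse_ms, state))
    return schedule

# A 4-bit "instruction": stimulate, rest, stimulate, stimulate.
print(pulse_train("1011"))
```

The point of the sketch is just that once activation is binary, the familiar machinery of programming (sequences, timing, encodings) applies directly.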

The viral DNA delivery method has also been used to restore sight to mice with particular forms of blindness, apparently at the same level of functionality as mice who had normal sight their entire lives. This delivery system ought to be able to introduce whatever bits of DNA seem useful, essentially taking parts of the DNA of other animals and implanting them into human cells to augment our own biology. The color-shifting ability of a chameleon, the ultra-sensitive scent detection of a snake, or the incredible eyesight of a hawk are certainly products of their DNA, and conceptually ought to be transferable with the right encoding. Boyden is quick to point out that the technology is just getting started, but given the exponential increase in technological progress I suspect that we will see vast progress in these fields, and perhaps even human testing, in the next five to ten years.

Despite the exciting prospects of viral DNA introduction, I can’t help but flash back to the beginning of movies like Resident Evil and I Am Legend. Even for technophiles like myself, some of this technology is a little unnerving. That’s all the more reason to start taking a hard look at what seems like science fiction now and figure out what ethical lines we are prepared to draw, and what the legal consequences for stepping outside of those lines ought to be. Much of this technology, if used correctly, is a powerful enabler of humanity for overcoming the frailties of our haphazard evolutionary path. The very same technology used incorrectly, however, could have dramatic and catastrophic consequences for individual patients and for humanity as a whole.

These two stories indicate the dual tracks of transhumanism: The mechanical augmentation side replaces biological hands with mechanically superior components while the biological enhancement side introduces bits of foreign DNA into our own cells to provide additional functionality. If the rate of progress continues, both of these tracks ought to be commonplace within the next 20 years or so. At the point where we can reprogram the human brain and replace limbs with mechanically superior prosthetics, Kurzweil’s Singularity will be here.

I, for one, am very excited.

Genetic Coding, Synthetic Brains, and Publicity

April 25, 2011 1 comment

First, a brilliant (aren’t they all) TED talk by Dr. Fineberg, who explains some of the potential right around the corner for genetic rewriting. Interestingly, I think the question that he repeatedly asks is also the answer to critics who suggest that there will be an enormous (and bloody) opposition to superhumans (or transhumans, or whatever your preferred term is): “If the technology existed to allow you to live another 100 years, wouldn’t you want to?”

For me, the answer is clearly yes. Moreover, I don’t see many other people saying no. In a modern context, there are groups who refuse medical treatment (and other technological progress) for moral or religious reasons: the Amish are the most explicit example, but many other people refuse blood transfusions, medical treatment, or choose not to get an abortion because the science doesn’t align with their morals or religious views. Except for abortion, the protests are largely silent and don’t get in the way of the rest of us getting our transfusions or treatment. Hardly anyone protests people wearing eyeglasses, or taking insulin, or even getting brain-to-computer implants that allow victims of various full-body paralysis disorders to communicate. It seems to me that the main reason for that is that everyone (or substantially everyone) at the end of the day -wants- to live longer, healthier, more fulfilling lives. And science, through genetic coding and mechanical augmentation, lets us do that.

Note, however, the eugenics undertones in his talk. In this sense, I think the ideas in my earlier post remain plausible when genetic coding is restricted to curing diseases in the unborn (though Dr. Fineberg doesn’t necessarily agree that we should limit it that way), or at least when adults can rewrite their own code ‘on the fly’ (that is, when code changes aren’t limited to the developing human but can be performed on people already alive). It is, to be fair, a fine ethical line and one upon which reasonable people can disagree.

Dr. Fineberg’s talk is below, though I encourage you to visit TED to see other brilliant (but not necessarily futurist) talks.

Second, Science Daily comes through with an article explaining how carbon nanotubes might be used to create synthetic synapses, the building blocks of brains. Two things about this article jump out at me. First, the timeline. In 2006, people at USC started wondering whether we might create a synthetic brain. Five years later, they’ve created artificial synapses. That’s a pretty quick turnaround, even by today’s standards. Second, the numbers. According to the article, the human brain has something on the order of 100 billion neurons, and each neuron forms some 10,000 synapses. By my fuzzy math, that means we’d need something on the order of 1 quadrillion (10^15) carbon nanotubes to recreate the structure of the human brain. I hope we can make these in big batches!
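The fuzzy math checks out. Using the article’s two figures, the back-of-envelope product is:

```python
# Back-of-envelope check of the synapse count from the article.
neurons = 100_000_000_000      # ~100 billion neurons in a human brain
synapses_per_neuron = 10_000   # ~10,000 synapses per neuron

total_synapses = neurons * synapses_per_neuron
print(f"{total_synapses:.0e}")  # prints 1e+15 -- about one quadrillion
```

One nanotube per synapse is, of course, the simplest possible assumption; the real hardware requirement could easily be higher.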

Finally, a little shameless self-promotion. Although I have a ‘going public’ post, as of today I’m officially branching out through social media groups. So, if you’re new here, I encourage you to leave your thoughts and, if you enjoy what you read, share with your friends.

Thank you.

Free Will and Legal Culpability

April 11, 2011 1 comment

In the April 2011 issue of Scientific American (subscription required), Michael Gazzaniga presents a survey of emerging neuroscience research and analyzes the legal issues that those breakthroughs might lead to. Most of the article discusses the legal view of new technologies like fMRI (functional magnetic resonance imaging) as complicated polygraphs. Most judges currently exclude fMRI evidence intended to show (for instance) that a witness is lying or that an accused murderer has a brain defect likely to cause insanity. The reason is that reliable predictions of behavior based on fMRI currently require samples of scans of other brains, and researchers can describe the results of a scan only in terms of what it is likely to mean based on those groups of similar scans. Put another way, X-brain-activity likely means that the witness is lying, based on the brain activity of other people who have been scanned, but the presence of X-brain-activity does not conclusively prove that the witness is lying (and in any case could only show that the witness believes they are lying, or intends to tell a lie, not that the proposition spoken by the witness is, in fact and devoid of belief, false). Correlation, as my undergraduate professors liked to reiterate, is not causation.

That said, however, even in criminal trials the law does not require absolute certainty. Thank goodness for that, because skeptical philosophers continue to point out that about the only thing we can be certain of is that “I exist” (for whichever I happens to be thinking the proposition at the time) and little else. Thanks, Descartes. We’ve decided, as a society, that punishment is proper where some defendant likely committed some wrong (or really, really, likely committed some wrong in the criminal context) and that as a result we accept that some (actually) innocent people are going to be punished when we think it likely that they’re guilty. Given that we don’t require absolute certainty, should brain scans be excluded just because they, too, are not 100% accurate? If the accuracy of a brain scan is tied to the sample size that the results are being compared to, shouldn’t we get larger sample sizes rather than wait on the technology to get better? If a researcher could testify that a sample size of 10,000 subjects reliably detected lies in 85% of cases, it seems (to my underdeveloped understanding of burdens of proof) that the civil ‘preponderance of the evidence’ standard is certainly met, and that the criminal ‘reasonable doubt’ standard is also likely met. If that’s right, then on what basis are these scans being excluded?
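One wrinkle worth flagging in my own hypothetical: a scan that “reliably detected lies in 85% of cases” is not the same as an 85% chance that this particular witness is lying, because the answer also depends on how often witnesses lie in the first place. A quick Bayes’ rule sketch shows the gap. The 85% figure comes from my hypothetical above; the assumption that one statement in five is a lie is likewise invented purely for illustration:

```python
# Illustrative only: converting test accuracy into a probability that a
# flagged witness is actually lying. All numbers are assumptions.

def posterior_lying(sensitivity, specificity, base_rate):
    """P(lying | scan flags a lie), via Bayes' rule."""
    true_pos = sensitivity * base_rate                 # liars, correctly flagged
    false_pos = (1 - specificity) * (1 - base_rate)    # truth-tellers, wrongly flagged
    return true_pos / (true_pos + false_pos)

# Assume the scan catches 85% of lies and clears 85% of true statements,
# and that 1 in 5 statements on the stand is a lie.
print(round(posterior_lying(0.85, 0.85, 0.20), 2))  # prints 0.59
```

On those assumed numbers, a flagged statement is a lie only about 59% of the time: comfortably past a civil preponderance standard, but nowhere near beyond a reasonable doubt. The base rate, not just the scan’s accuracy, does much of the work.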

Gazzaniga also asks “Would it erode notions of free will and personal responsibility more broadly if all antisocial decisions could seemingly be attributed to some kind of neurological deviation?” He states, as a follow-up: “People, not brains, commit crimes.” What could that second statement possibly mean, given the current scientific understanding of how the universe works, and, based on that understanding, isn’t the answer to his question likely to be yes anyway? To vastly oversimplify a raging philosophical debate, most philosophers and (presumably) scientists subscribe to a roughly physicalist view of the universe. That is: Science tells us that everything is made of physical stuff (quantum mechanics aside) and that everything is a relation between that physical stuff. If we accept the premise that the universe is made of all and only physical stuff, then “we” can be little more than our brains. Our brains, made of physical stuff as they are, are themselves controlled by complicated chemical interactions that present to each of us, individually, as personality and consciousness, and to the scientists studying them as brainwaves and chemical activity. Put more simply, we just are our brains, and our brains just are a long chain of chemical reactions. If a deviant chemical reaction causes (what we consider to be) aberrant behavior, then doesn’t it make more sense to fix the chemical reaction than to assign notions of blameworthiness based on outdated information?

I oversimplify, of course, and gloss over interesting ethical questions to make my point. The practice, however, works in the real world. We have little difficulty reserving moral judgment where John accidentally trips and injures Jane, so long as John was being reasonably careful and didn’t intend to hurt anyone. We currently (though not frequently) send murderers who are clinically insane to treatment facilities rather than jail when they kill someone while insane. We accept that diabetics require insulin because their chemical reactions are deficient, not because they are morally deficient. Why should our brain be treated any differently?

My questions are not condemnations of Gazzaniga, or of his fascinating article. I just wonder at the implications of our judicial policy and a lingering belief that we are something other than our brains when we’ve abandoned such pretenses for less personal organs.