Archive for the ‘neuroscience’ Category

Rebuilding The Human Body

May 28, 2012

I often post articles relating to medical technology, but today I’m going to focus on just those technologies that are available or being researched now that can be implanted into (or onto) humans. Specifically, I am going to talk about several new technologies that promise to restore (and one day replace) faulty biological systems. We will start at the top.


Eyes:

Scientific American reports that scientists have created a retinal implant that can restore sight to some of the blind. Some forms of blindness are caused by the malfunction of the eye's light-detecting cells (photoreceptors). A tiny 3mm x 3mm chip implanted at the back of the eye acts as a set of artificial photoreceptors, converting light into the electrical signals that the failing biological ones no longer provide. This implant has been tested on humans (a continuing trial has expanded to five additional cities), but it is still not perfect. For one, it requires an external power supply (which, in this model, sits behind the ear). For another, only a "narrow field of vision" is restored. Several other companies are also working on solutions.
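
To make the basic idea a bit more concrete, here is a minimal sketch (my own illustration, not the device's actual firmware) of how a grid of photodiode readings might be mapped to per-electrode stimulation currents. The grid size, current ceiling, and logarithmic brightness mapping are all assumptions.

```python
import numpy as np

# Hypothetical illustration: map a grid of photodiode readings (0.0-1.0) to
# per-electrode stimulation currents. The grid size, current ceiling, and the
# logarithmic brightness mapping are assumptions, not the real device's spec.

GRID = 38                 # electrodes per side (assumed)
MAX_CURRENT_UA = 10.0     # maximum stimulation current in microamps (assumed)

def light_to_current(frame: np.ndarray) -> np.ndarray:
    """Convert normalized light intensities into stimulation currents."""
    frame = np.clip(frame, 0.0, 1.0)
    # Perceived brightness is roughly logarithmic, so compress the range.
    compressed = np.log1p(9.0 * frame) / np.log(10.0)
    return compressed * MAX_CURRENT_UA

if __name__ == "__main__":
    scene = np.random.rand(GRID, GRID)      # stand-in for incoming light
    currents = light_to_current(scene)
    print(currents.shape, currents.min(), currents.max())
```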

Already, however, upgrades are in the works. Technology Review reports on a new light-powered implant that promises to do away with the external power supply while also granting a much clearer and wider field of vision. However, this device requires relatively bulky external glasses to function (as does another device set for testing next year). Based on Kurzweil's exponential predictions, we can expect these devices to double in power and shrink by half in size roughly every two years. By 2020, these devices may very well be fully implantable into a skull, completely replacing the faulty eye.
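
As a quick back-of-the-envelope illustration of that doubling claim (the 2012 starting values below are placeholders, not the real device's specs):

```python
# Rough projection of the "double in power, halve in size every two years" claim.
# The starting size and resolution are placeholders, not actual device specs.

start_year = 2012
size_mm = 3.0          # implant edge length today (assumed)
pixels = 1500          # resolution today (assumed)

for year in range(start_year, 2021, 2):
    print(f"{year}: ~{size_mm:.2f} mm chip, ~{pixels} pixels")
    size_mm /= 2
    pixels *= 2
# By 2020 (four doublings) that is a 16x gain in resolution in 1/16 the area.
```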

Not only could such an implant replace the eye, it could also add functionality. For instance, Fox News reports on new contact lenses in development for the Department of Defense that offer heads-up display (HUD) technology in addition to other virtual reality and augmented reality features. Other devices, including drones, could transmit real-time battlefield information to each soldier's implants, giving a true bird's-eye view of the battlefield and increasing situational awareness immensely. Since this technology is already quite small, integrating it into a false eye instead of a contact lens ought not to be a very difficult prospect. Of course, this technology is valuable outside of the military as well, so regular folks ought to get a more immersive video game experience and be able to access technology-enhanced vision for their own uses too.

 

Chest:

The eyes are hardly the only organs we can replace; scientists recently implanted an artificial, and pulseless, heart inside a man. Instead of the pulsing supply of blood a regular heart provides, the new heart supplies a continuous stream of circulating blood. Although arteries, veins, capillaries, and the other blood-carrying structures still limit the force with which blood can be circulated, presumably a device like this could increase blood flow on command when a person is engaged in strenuous activity (and without having to get one's 'heart racing' beforehand).

The function of the heart is not the only thing being improved; scientists have also recently created the lightest artificial heart yet and implanted it into a baby. Doctors in Rome replaced a 16-month-old baby's failing heart with a device that weighs a mere 11 grams (the normal adult heart weighs more than 300 grams). Further, this device has already gained FDA approval, and so is ready for transplantation into other patients.

Perhaps one way to use these continuous-flow devices is to propel tiny devices for surgery or, later, for delivering drugs and maintaining the general health of our bodies. Scientists at Stanford have invented one such device: a tiny machine that can move through blood vessels and other parts of the body at a doctor's direction, cleaning out blood clots and the like. It has some way to go before it is ready for clinical trials, but it does provide a proof of concept.


Arms:

Prosthetic arms have been progressing nicely for several years now (see my Deus Ex article, for instance). Just recently, however, there have been a number of really exciting improvements.

Traditionally (if such a word is appropriate in this fast-moving field), prosthetic arms operate through sensors on the inside of the prosthetic that monitor electrical signals traveling to whatever is left of the patient's limb. An arm severed above the elbow, for instance, still has muscles that run to the point of amputation, and the sensors detect their electrical signals through the skin at the end of the arm. This allowed crude movement at first, and finer motor control later as the number of sensors increased. There was, however, no sense of feeling in the limb. Now scientists in Vienna have found a way to reroute some of the nerves that originally controlled hand and arm movement, relocating them to the chest; the patient, British soldier Andrew Garthwaite, will be able to 'feel' through his prosthetic arm when sensors in the arm transmit data to the relocated nerves in his chest. The rerouting of nerves ought to also make control of his new arm more fluid and natural. See this video from the BBC:

DARPA (the Defense Advanced Research Projects Agency) and other agencies are trying to improve on this technology even further by promoting nerve growth and more feeling in limbs. Wired magazine also has an article describing the technical challenges: the need for polymers that are inoffensive to biological tissue, yet conductive and strong.
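
For readers curious what the 'traditional' surface-sensor control scheme described above amounts to in practice, here is a minimal sketch of myoelectric control: rectify and smooth the muscle signal, then open or close the hand when activity crosses a threshold. The sampling rate, threshold, and the simulated signal are all assumptions of mine, not any manufacturer's algorithm.

```python
import numpy as np

# Illustrative sketch of myoelectric (surface-EMG) prosthetic control:
# rectify and smooth the muscle signal, then issue a grip command when the
# smoothed activity crosses a threshold. All numbers here are made up.

FS = 1000          # samples per second (assumed)
WINDOW = 100       # smoothing window, 100 ms
THRESHOLD = 0.3    # normalized activation level that triggers the grip

def envelope(emg: np.ndarray) -> np.ndarray:
    """Rectify the raw EMG and take a moving average to get its envelope."""
    rectified = np.abs(emg)
    kernel = np.ones(WINDOW) / WINDOW
    return np.convolve(rectified, kernel, mode="same")

def grip_commands(emg: np.ndarray) -> list:
    env = envelope(emg)
    return ["close" if level > THRESHOLD else "open" for level in env[::WINDOW]]

if __name__ == "__main__":
    t = np.arange(0, 2.0, 1.0 / FS)
    burst = (t > 1.0) & (t < 1.5)             # pretend muscle contraction
    fake_emg = 0.05 * np.random.randn(t.size) + 0.6 * burst * np.sin(2 * np.pi * 80 * t)
    print(grip_commands(fake_emg))            # "close" only during the burst
```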

Finally, whether fitted with a prosthetic limb or not, a direct user interface can be implanted under your skin to control other implants (perhaps the controls for bionic eyes, say). Controlling other implants will have to be done manually, though there are several ways to do that. Some, for bionic eyes, could rely on tracking the user's gaze to select icons superimposed on the field of vision simply by focusing on them for a moment. Others, like the controls on current bionic limbs, must be pressed manually. The circuit board in the article provides additional functionality, including audio, touch input, and vibration feedback. Of course, ideally limbs could be controlled by thought alone, though we are some way from that. Researcher Albrecht Schmidt says, for instance: "You can also extend social networks into your body — be connected with others with implants, feel pulses of vibration from others," he added. "This can get very personal… it's a way of letting someone under your skin."
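
The 'focus on an icon for a moment' idea is essentially dwell-time selection, which is simple enough to sketch. The icon layout, selection radius, and 800 ms dwell threshold below are assumptions for illustration only.

```python
import math

# Illustrative dwell-time selection: an icon in the visual overlay is "clicked"
# when the user's gaze stays within its radius long enough. The icon layout,
# radius, and 800 ms dwell threshold are assumptions.

DWELL_MS = 800
RADIUS = 50  # pixels

ICONS = {"zoom": (100, 100), "night_vision": (300, 100)}

def select_icon(gaze_samples, sample_ms=20):
    """gaze_samples is a list of (x, y) points sampled every sample_ms."""
    dwell = {name: 0 for name in ICONS}
    for x, y in gaze_samples:
        for name, (ix, iy) in ICONS.items():
            if math.hypot(x - ix, y - iy) <= RADIUS:
                dwell[name] += sample_ms
                if dwell[name] >= DWELL_MS:
                    return name
            else:
                dwell[name] = 0          # looked away: reset the timer
    return None

if __name__ == "__main__":
    stare_at_zoom = [(102, 98)] * 50     # ~1 second of gaze near the zoom icon
    print(select_icon(stare_at_zoom))    # -> "zoom"
```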

 

Legs:

Prosthetic legs have also been making progress, though there currently seems to be a divide between prosthetics that replace legs (like those mentioned in the previously linked Deus Ex article) and bionic systems that attach to the legs from the outside and provide more functionality.

Ekso Bionics recently sold its first pair of the latter type. This system is not an implant but an exoskeleton, and it moves the limbs of patients whose spinal injuries no longer allow them to control their own legs. This is important because not all leg conditions can be solved by an implant; there is nothing wrong with a paraplegic's legs, per se, but rather with the nerves that control them. A bionic apparatus like this, then, provides benefits where simply replacing the legs would not.

KurzweilAI.net reports on a similar exoskeleton that uses a skullcap to read electrical signals from the brain and then translates those signals into exoskeleton movements for the patient's legs. The system bypasses the faulty nerves and directly controls the legs, much as an undamaged nervous system would. However, the signals that reach the scalp and are captured by the cap are not very precise; certainly not as detailed as the original signals. A direct brain-machine interface (BMI) would be better, but involves what is currently risky surgery and tinkering with the brain, always a dangerous endeavor.
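
To give a rough sense of what 'translating scalp signals into exoskeleton movements' can mean in the simplest case, here is a sketch that estimates the power of one EEG channel in the 8-12 Hz band and issues a step command when that power drops well below a resting baseline (a crude stand-in for the motor-imagery signatures real systems look for). The sampling rate, band, and threshold are my assumptions, not the actual system's parameters.

```python
import numpy as np

# Crude illustration of an EEG-driven trigger: estimate power in the 8-12 Hz
# (mu) band over a one-second window and issue a "step" command when that
# power drops well below its resting baseline. Real BMIs are far more
# sophisticated; the sampling rate, band, and threshold here are assumptions.

FS = 256            # samples per second (assumed)
WINDOW = FS         # one-second analysis window

def band_power(window: np.ndarray, low=8.0, high=12.0) -> float:
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(window.size, d=1.0 / FS)
    mask = (freqs >= low) & (freqs <= high)
    return float(spectrum[mask].mean())

def step_commands(eeg: np.ndarray, baseline: float) -> list:
    commands = []
    for start in range(0, eeg.size - WINDOW + 1, WINDOW):
        power = band_power(eeg[start:start + WINDOW])
        commands.append("step" if power < 0.5 * baseline else "hold")
    return commands

if __name__ == "__main__":
    rest = np.sin(2 * np.pi * 10 * np.arange(WINDOW) / FS)   # strong mu rhythm
    intent = 0.2 * np.random.randn(WINDOW)                    # rhythm suppressed
    baseline = band_power(rest)
    print(step_commands(np.concatenate([rest, intent]), baseline))
```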

See the following inspirational video about a paraplegic who completed a marathon (though a number of improvements would be ideal):

For some more implant news, check out this BBC feature that highlights a couple of additional technologies: Can You Build A Human Body?

I will conclude with one last quote from Albrecht Schmidt, who I think captures the moment perfectly: “We’re at a point where implants may become something quite normal,” Schmidt said. “This work will open up discussion as to whether we get implants not for a medical reason, but for convenience.”

The new and the new-old: Computers and Woolly Mammoths

January 16, 2012

Today on boydfuturist: advances in computing (and one genetic bombshell).

I'll begin with some technical advances in computing from the last week or so. First, KurzweilAI.net links to an article reporting that computer component maker NEC has demonstrated 1.15Tb/s optical transmission speeds over 10,000km (about 6,200mi). Although it's probably too soon to hope for internet upgrades for consumers (and might be for the foreseeable future, given the United States' abysmal internet infrastructure), I can at least dream of upgrading my 300kb/s connection eventually. This sort of high-speed capability will be vital for the growing mounds of data being sent and received thanks to mobile phones, embedded movies, video conferencing, video gaming, and other high-bandwidth applications.

To handle the increasing amounts of data, both locally generated and transmitted over the internet, computers are going to need more memory. I suppose I'm dating myself to say that I remember when memory was measured in megabytes, and that 16GB of RAM seems outrageous to me even as I installed it for less than $100 in my buddy's computer. But we're going to need more, and it's going to have to fit in increasingly smaller spaces as we miniaturize computers down to the nano scale. Fortunately, researchers at IBM have stored a bit of memory in a mere 12 atoms (a byte in 96), roughly 100 times as dense as current materials. Until we can safely and cheaply cool home computers to near absolute zero this won't be much use at home, but it shows that there is potential to pack a lot of memory into very tiny spaces.

What sort of applications could use such vast amounts of data? Lots, it turns out. Erin Rapacki argues that we ought to begin scanning the "real" world. By scanning every object in the real world in 3-D, we could give computers a vast data set that lets them recognize virtually any object they pick up. This sort of project is already being crowdsourced, with websites set up for people to upload 3-D scans made with their Xbox Kinect to form one enormous database. I have to wonder if increasing reliance on 3-D printers will speed up this process, since any object we want to print on a 3-D printer needs to be scanned (or built as a file) in 3-D to begin with. Imagine one day being able to download the "Sears Hardware Collection" file and printing whatever tool you need at will.
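
As a toy version of the recognition idea: reduce each scanned point cloud to a simple shape descriptor (here, a histogram of point distances from the centroid) and match a new scan against the closest entry in the database. Real systems use far richer features; the descriptor and the synthetic 'scans' below are purely illustrative.

```python
import numpy as np

# Toy illustration of matching a new 3-D scan against a database of scanned
# objects. Each point cloud is reduced to a histogram of distances from its
# centroid; recognition is nearest-neighbor search over those histograms.

def descriptor(points: np.ndarray, bins: int = 16) -> np.ndarray:
    centered = points - points.mean(axis=0)
    dists = np.linalg.norm(centered, axis=1)
    hist, _ = np.histogram(dists / dists.max(), bins=bins, range=(0, 1))
    return hist / hist.sum()

def recognize(scan: np.ndarray, database: dict) -> str:
    query = descriptor(scan)
    return min(database, key=lambda name: np.linalg.norm(database[name] - query))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sphere = rng.normal(size=(2000, 3))
    sphere /= np.linalg.norm(sphere, axis=1, keepdims=True)   # shell of a ball
    box = rng.uniform(-1, 1, size=(2000, 3))                  # solid cube
    db = {"ball": descriptor(sphere), "box": descriptor(box)}
    noisy_ball = sphere + rng.normal(scale=0.02, size=sphere.shape)
    print(recognize(noisy_ball, db))                          # -> "ball"
```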

Vinod Khosla argues that computers will take over many of the jobs in the healthcare field, resulting in staggering amounts of data transmitted across the world in the blink of an eye. Whether or not he's right, there's no question that healthcare is becoming more automated, scans are taking up more data, and genomics is coming along right behind to fill up whatever empty HDDs are left. Devices are also being created that allow people to control robots with their minds. Given the amount of data the brain creates, finely tuned devices that give the same or better performance as natural limbs and organs will no doubt need to transmit and/or receive large amounts of information as well. Even as technology becomes more omnipresent, people are already asking questions about its impact on our biological brains. Just as we start to wonder, however, comes news from MIT that researchers have succeeded in creating a synthetic version of a biological neuron. While we're hardly ready to build a brain from scratch, this suggests that doing so is not out of the question.

Finally, some non-computing news that is huge. Biologists are now saying that they have the ability to sequence woolly mammoth DNA, replace the relevant bits in an elephant egg, and implant what will be a woolly mammoth into a female elephant. Scientists have said this before, but haven’t had a complete genome to work with because their samples were damaged.  Difficulty: Woolly mammoths have been extinct for thousands of years. Isn’t this how Jurassic Park started and, if so, can it be long before Paris Hilton is walking around with a mini-T-Rex in her purse?

I, for one, certainly hope not.

Gadgets, Brains, and Healthcare

January 5, 2012

Only five days into 2012, and mind-blowing articles are already dropping.

Physorg.com and others report that Pentagon-funded scientists at Cornell have created a device that splits beams of light, hiding an event from sight. They're calling it a time cloak. For around 40 picoseconds (trillionths of a second), the scientists are able to create a gap in the light by using a time-lens to split it into slower red and faster blue components. This makes anything occurring in the gap invisible. In theory the device could be made effective for a few millionths of a second, or perhaps even a few thousandths of a second, but a device large enough to erase a whole second would need to be approximately 18,600mi long. Even for someone like me who envisions mechanical implants for humans and perhaps even brain uploading into a computer, this article is fantastic. I'd love to see some confirmation of this technology and a better explanation of how, exactly, it works. Still, it seems it won't be a very effective Ring of Gyges anytime soon, if at all.

Researchers in Japan, meanwhile, have created super-sensitive sensors out of carbon nanotubes. The sensor is flexible enough to be woven into clothing and can be stretched to three times its normal size. In addition to rehabilitation uses, this sort of sensor seems great for the blossoming world of controllerless video game systems like the Xbox Kinect. Such sensors could also be implanted into people receiving organs (biological or otherwise), or simply used to record biometrics in everyday clothing.

Finally, Klaus Stadlmann gives a TED Talk about inventing the world’s smallest 3-D printer. It seems to be about the size of a Playstation 2, and can print in incredible detail. I thought the talk was a little dry, but still interesting.

There have been several interesting brain articles in the last few days. Forbes ticks down their top-10 brain articles from 2011, including memory-assisting chips, using magnetism to affect moral judgments, potential treatments for people suffering from Alzheimer’s disease, and thought-controlled apps for your cell phone. Although the brain is still largely mysterious, scientists are making massive amounts of progress on all fronts yearly.

Discover Magazine reports that anesthesia might be the key to better understanding how consciousness works. Apparently it’s not unusual for patients under anesthesia to wake up, then go back under and never remember that they woke up. I’ve talked a bit about the problem of recognizing consciousness before (one essentially has to rely on reports of consciousness, but consciousness itself cannot be directly tested for) and this article does a good job of reiterating the problem. The researchers hope that by putting people under and eliciting subjective reports of consciousness after the fact, they will be better able to pin down just what it is that makes a person conscious.

Medicalxpress.com posted an article in December asking Why Aren't We Smarter Already? The authors suggest that there is an upper limit to various brain functions, and that while drugs and other interventions could potentially bring low-scoring individuals up, those already at or near peak performance would see little or no gain from the same drugs. If this is right, then there is reason to doubt that mind-enhancing drugs (say, Adderall) could make the smartest people even smarter. Yet the article only talks about improving the mind that we have, not about whether it is possible to create an artificial brain (or introduce artificial implants into a biological brain) that could break past these natural barriers. It's no secret that the body is well, but not optimally, designed, and that the same is true of the brain shouldn't really be surprising.

TechCrunch offers a predictive list of technologies coming in 2012 in an article penned by tech luminary and SingularityU professor Daniel Kraft. According to Kraft, A.I. will become increasingly helpful in detecting diseases, from cheap phone apps that detect cancer with their cameras to A.I.-assisted diagnoses in remote villages. 3-D printing will continue to advance, massive amounts of patient data will be shared on social network sites like patientslikeme.com, and videoconferencing technology like Skype will increasingly allow doctors to examine patients without an office visit. All good things.

Last, but not least, a team of scientists at USC has recently mapped an entire human genome in 3-D. They hope to be able to evaluate genomes not just on their genetic make-up but also on their physical structure. Because genomes occupy three dimensions in the body, a 3-D map should be considerably more accurate than the standard linear representation.

 

Year In Review, Mapping The Human Brain, Measuring Machine Intelligence, and Biopiracy

December 27, 2011

NewScientist blogs a nice year in review: the top 10 tech stories of the year. Conspicuously absent is some of the work that's been done on brain-machine interfaces, driverless cars, prosthetic technology, and almost anything relating to human genomes or other medical technology. Still, it's a nice list.

Also from NewScientist, a few quick thoughts on the Human Connectome Project. If you haven't yet heard of it, the HCP aims to map the human brain. As the article states, the really interesting things will start happening once the data starts coming in around late 2012.

One last article from NewScientist quickly reviews the problems with using the Turing Test to measure machine intelligence; the test is both too narrow (because it only tests linguistic aptitude) and too hard (because linguistics is difficult to begin with). Author Paul Marks links to a couple of alternatives to the Turing Test that might be more appropriate.

Finally, the Indian government is suing Monsanto for "biopiracy," claiming that Monsanto is stealing local crops to produce genetically modified versions for sale in other areas of the world. I think genetically modified foods are going to be necessary in the not-too-distant future to continue to feed (or enable the feeding of) the world's massive population, but the strong-arm tactics Monsanto seems to apply really turn people off of GM foods in general. Still, biopiracy is an odd claim: aren't these crops available at a local market, and if so, how could one patent a naturally occurring food? And without a patent, how could anyone, even Monsanto, pirate it? I'll keep an eye on the story as it develops.

Why Technology Is Not Bad For The Brain

June 3, 2011
The Mark News recently published a piece by Susan Greenfield of Oxford University's Institute for the Future of the Mind. She argues that technology is having adverse effects on the human mind for a number of reasons. First, computers are replacing the full five-sense experience we get from the "real world" with a two-sense experience (namely, sight and hearing), because that is all computer monitors are capable of producing. Because monitors only have two senses to work with, they overload those two senses in order to keep us interested, and in the process we move from understanding to simple information processing as we try to make sense of the rapidly flashing sights and sounds our monitors emit. This information processing leads to shallower understanding and "infantilizes" our brains. She also tacks on a complaint about Twitter status updates being trivial and attention-seeking. She ends by arguing that by spending more time in front of the computer, people are learning fewer "real life" skills, like how to make eye contact and socialize with peers.

 

With due respect, I think she’s completely missed the mark (no pun intended). I’ll address each of her points in turn.

 

She is undeniably correct that monitors are limited to two-sense output. But this isn't limited to monitors – televisions, movie screens, and the like are also limited to two senses. Books, before that, were also limited to either two senses (if you count the tactile sensation of holding the book, though then it seems like keyboards ought to give computers three-sense input) or a single sense: sight. If less-than-full sensory input negatively affects the brain, then the problem Greenfield is describing has been going on for quite some time – at least as long as the general populace has had ready access to books.

 

On the other hand, she's only right as a matter of contingency. If monitors are limited to two senses today, then first, she isn't taking into account tactile feedback devices that already allow for more sensory input (as a simple example, vibrating "rumble" controllers that have been around since the late 1990s), and second, she's discounting how short a period of time this limitation is likely to last. Sight and sound are easily programmable. Basic tactile feedback is easy enough, and more complicated feedback is starting to come around – see the "kissing machine" I wrote about previously, and teledildonics in general. See also the Nintendo Wii and PlayStation Move, which require users to hold motion controllers and actually move about in an approximation of the real activity, and the Kinect, which tracks the body directly. See also Guitar Hero. Taste and smell are the most complicated to program, because we would need either something like a direct neural connection to simulate them or some kind of taste and smell mixing packet integrated with the machine, such that it can produce appropriate smells and tastes on demand. With full-immersion VR we will probably get there (and I imagine smell will come first, because it is easier to pump a gas than to layer on a taste), but one wonders if expecting taste and smell from a computer is any more sensible than expecting taste and smell from a car.

 

Greenfield is also mostly right that, because monitors only have two senses to work with, they (potentially) maximize their output to those senses. However, the extent to which this is true depends on what the computer is being used for. If you're playing Call of Duty, with twenty or thirty players running around a chaotic map firing guns, tossing grenades, and calling in attack choppers to rain death down on every enemy on the map, then yes, sight and sound are being overloaded to a point where information processing is more important than cognitive understanding. These are "twitch-based" games for a reason – they rely more on instinct than understanding, although some tactical knowledge never hurts in warfare games. Further down the spectrum are turn-based strategy games, where the game pauses to allow the player to set up their next move and then resumes to show the outcome of the player's choices. These have periods of downtime where cognitive processing is very important; they are strategy games, so understanding and appropriately reacting to whatever situation the game presents is vital – like chess. Then there are moments of flashing sounds and lights where the decision plays out and pays off in sensory experience. See games like Civilization or the turn-based Final Fantasy games (which are more role-playing games than turn-based strategy games, but still require considered thought on a pause screen during battle). At the far end of the spectrum is a simply laid out webpage, which seems, to me, no more complicated than a book. How 'overloaded' are your senses as you read this? How much thinking are you doing about the content of what you read, and how much twitch-based information processing is expected? Computers (and video game consoles, which I consider essentially the same) offer a variety of output options, some more information-process-y and some more cognitive.

 

If information processing infantilizes our brains, then it seems we have bigger problems than computers. How does driving (or riding a horse) differ from a twitch-based game in terms of the amount of sensory input that needs to be dealt with immediately? Indeed, if computers are limited to two senses, then ought we not be glad that our brain has less information to process? Shouldn't we worry more about full-on experience than the admittedly limited output offered by computers? Each of us is constantly trying to make sense of a "fast and bright" world consisting of whatever sensory inputs are crashing through our eyes, nose, and ears and washing over our tongues and skin. If this torrent of information is a bad thing, as Greenfield implies, then ought we not be glad to have the comparatively restful trickle of information offered by even the most chaotic two-sense games? Ought games not be restful to the brain, as many gamers think, instead of hyper-active?

 

The understanding we gain from computers depends entirely on what the computer is being used for. If one spends their entire time playing Farmville there will be some cognitive development, but I expect that there is only so much knowledge to be gained from the game. If, however, one spends all their time reading the Stanford Encyclopedia of Philosophy, or even Fark.com, then much more knowledge-based cognitive development is possible. The computer is no more of a brain-sucking device than a television, and the information derived from a television, like a computer, depends on what the device is being used for: Watching the Science channel is going to lead to more cognitive development than Jersey Shore.

 

Greenfield's final argument, that interacting via computer detracts from "real world" skills, is well taken, but ultimately, I think, mistaken.

 

First, like her sensory argument, this argument is at best contingently true. As video calling and holographic representation become more popular, eye contact and traditional "real world" skills become useful online too. I'll argue below that useful social skills are already being learned through the computer medium. Finally, her argument depends on the "real world" remaining the primary method of communication between people. Just as someone who is socially awkward (perhaps, as Greenfield implies, because of time spent on the computer) has difficulty maneuvering through social situations in the real world, so too does a computer 'noob' have difficulty maneuvering through social situations in the virtual world. How many of us have chastised (or at least seen chastised) someone for formatting their emails poorly, or for typing in all caps? How difficult is it to 'get' the internet if one doesn't understand basic internet lingo like emoticons ( 🙂 ) and acronyms (LOL)? Even a basic understanding is not enough to skate by effortlessly if one doesn't pick up on more subtle internet cues that evidence, if not (to borrow an old term) internet leetness (itself an example of internet culture), then at least familiarity and proficiency. Does anyone who spends all their time in the "real world" understand the importance of Anonymous? If one constantly runs across acronyms they don't understand (omgwtfbbqpwn), memes they don't get (lolcats have become pretty mainstream, but what about technologically impaired duck, or Joseph Ducreux?), or anything on (sorry to bring them into it) 4chan, the social world of the internet will be just as opaque to them. Technologically impaired duck is, as a whole, about people who don't 'get' internet culture.

 

Beyond pure information, I've seen that people who have spent their entire lives offline still have difficulty separating good sources from bad, or valid emails from spam and scams. In short, there is a different culture on the internet that could, if the internet becomes the primary method of communication, be just as rich as the "real world," and where picking up subtle cues through emoticons or acronyms is just as important as making eye contact or cracking a joke in the "real world."

 

Finally, I don't think that "real world" social skills are being lost simply by being online, as Greenfield claims. Certainly, for all its downsides, Facebook has taught people to network better. Prior to the internet, one was limited to communicating with people in roughly the same geographic area, or perhaps a few others if one had traveled extensively for work or school. It was hard to keep in touch, and people had to make an effort to call or write to one another. Now it's much easier to keep up with far-flung friends, to know without much trouble what is going on in their lives (to the extent that they make such information available), and to drop a note to say hello. Games like World of Warcraft force groups of people to work together, teaching leadership skills and group dynamics along the way. The internet is about connecting people with each other, and that is a skill in vogue both online and offline.

 

It seems to me that Greenfield is upset that the method of communication is changing, but I don’t think her (admittedly short) article really explains how technology is damaging the human brain. I have no doubt she’s a smart woman and probably has a lot of academic work to back up her claims; I’d like to see that work and see if it, too, is based on what I consider to be faulty assumptions.

Bionic Hands and Programmable Brains

May 20, 2011

Every so often I come across an article that really illustrates how near the future is. This week, I came across two of them.

The first article, from Singularity Hub, is about Milo and Patrick, two men who chose to have their hands removed and replaced with bionic hands. Both went through extensive surgery (Milo for around 10 years), but because the surgeries were ineffective, they eventually decided that replacing their hands with bionic hands was a sensible alternative. These stories are important for at least three reasons.

First, both surgeries were elective procedures in the sense that neither man had his hand replaced to save his life or as part of a traumatic incident. Both men had biological hands, although they were damaged beyond reasonable use. Elective replacement of limbs is on tricky ethical ground because, for many people, replacing a limb is a procedure of last resort. Previously, limbs were removed to prevent the spread of gangrene or other infection, or for reasons otherwise necessary to protect the person's life. Here, for at least these two men, each left with one hand far less functional than normal, replacing that damaged hand with a bionic hand more functional than the damaged one (though, seemingly, less functional than a 'normal' hand) made sense.

Second, if two men are able to choose to replace a biological hand with a more functional bionic hand, then others should be allowed to make the same decision. Despite the amazing progress made by Otto Bock (creator of the bionic hand), the hand still doesn't provide all of the benefits of a normal human hand, and it offers only one benefit that a human hand does not: 360-degree rotation at the wrist. However, the limitations of the hand are technological, and with a sensory feedback system like the one Otto Bock is currently working on, those limitations ought to be overcome quickly. Once a bionic hand is as functional as a biological hand, scientists ought to be able to craft improvements for it that include all sorts of functional upgrades (increased grip strength, additional sensory inputs like the ability to sense electric currents, more range of motion, enhanced durability) and some more cosmetic ones (perhaps a small storage space, or wifi, an OLED screen, or other patient-specific enhancements). Very quickly, a bionic hand will be superior to a normal hand, and not just to a severely damaged one.

Finally, one cannot escape the naked truth that this is what people have had in mind when using the word "cyborg" for decades. Although it's true that eyeglasses, pacemakers, and seizure-reducing brain implants are all mechanical augmentations of a biological person, such that the term cyborg properly applies, few people tend to think of their uncle with the pacemaker as a cyborg. In part, that's because pacemakers are not visible, and even hearing aids are more like eyeglasses than bionic hands because they are removable and not an integral part of the human body. These hands, however, replace a major part of the body with a clearly mechanical device. The article is unclear on whether these hands come with some sort of synthetic skin that masks their metal servos and pivot points, but from the pictures there is just no mistaking that these men are now part robot.

We have reached a point where we can program mechanical devices so that they communicate with the brain through the nervous system. But what about programming the brain itself?

Ed Boyden thinks he has created a solution for that too. I highly recommend watching the video yourself. The gist is that neurons in our brains communicate via electrical signals. By using engineered viruses to deliver DNA encoding light-sensitive proteins (borrowed from algae) into brain cells, Boyden can then shine light onto parts of the brain, and only the cells carrying those photoreceptor proteins activate, for as long as the light shines. By activating particular groups of neurons, Boyden can stop seizures or overcome fear responses that would otherwise cripple an animal. Using fiber optics and genetic encoding, Boyden has found a way to direct the brain to act just as he wants: he has, in essence, figured out how to program a brain, or at least how to hack it to add or remove particular functionality.

Further, once photoreceptor proteins are in place and neural activity can be triggered by light, the human brain begins to look even more like a computer. By regulating light inputs, the cells carrying photoreceptors activate and produce particular effects depending on the type of cell they are. With a basic on-off activation scheme, neurons become a lot like the transistors in our computers, which we activate by switching electricity on or off. That on-off sequence is represented by 0s and 1s in binary code and scaled up into more complicated programming languages. With an implanted light array, programmers ought to be able to create flashing light sequences that affect the brain in preset ways, essentially writing code that controls parts of the brain. Even if scientists simply read the signals of the neurons, all of human experience ought to be reducible to groups of neurons firing or remaining dormant in complicated patterns. If that is so, then there is no reason why we couldn't download a stream of our experiences in complete detail, and perhaps eventually upload them as well.
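
Purely as an analogy for that on-off point, here is a sketch that turns a string of bits into an on/off schedule of light pulses for a hypothetical implanted light source; the 10 ms pulse width and the one-bit-per-interval scheme are my own illustration, not Boyden's protocol.

```python
# Analogy only: turn a bit string into an on/off light-pulse schedule for a
# hypothetical implanted light source. The 10 ms pulse width and the idea of
# one bit per interval are illustrative, not Boyden's actual protocol.

PULSE_MS = 10  # assumed duration of each light pulse

def pulse_schedule(bits: str):
    """Return (start_ms, end_ms) intervals during which the light is on."""
    schedule = []
    for i, bit in enumerate(bits):
        if bit == "1":
            start = i * PULSE_MS
            schedule.append((start, start + PULSE_MS))
    return schedule

if __name__ == "__main__":
    pattern = format(ord("A"), "08b")      # the byte for "A" -> "01000001"
    print(pattern, pulse_schedule(pattern))
```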

The viral DNA delivery method has also been used to restore sight to mice with particular forms of blindness, apparently at the same level of functionality as mice that had normal sight their entire lives. This delivery system ought to be able to introduce whatever bits of DNA seem useful, essentially taking parts of the DNA of other animals and implanting them into human cells to augment our own biology. The color-shifting ability of a chameleon, the ultra-sensitive chemical sense of a snake, or the incredible eyesight of a hawk are certainly products of their DNA, and conceptually ought to be transferable with the right encoding. Boyden is quick to point out that the technology is just getting started, but given the exponential increase in technological progress, I suspect we will see vast progress in these fields, and perhaps even human testing, in the next five to ten years.

Despite the exciting prospects of viral DNA introduction, I can’t help but flash back to the beginning of movies like Resident Evil and I Am Legend. Even for technophiles like myself, some of this technology is a little unnerving. That’s all the more reason to start taking a hard look at what seems like science fiction now and figure out what ethical lines we are prepared to draw, and what the legal consequences for stepping outside of those lines ought to be. Much of this technology, if used correctly, is a powerful enabler of humanity for overcoming the frailties of our haphazard evolutionary path. The very same technology used incorrectly, however, could have dramatic and catastrophic consequences for individual patients and for humanity as a whole.

These two stories indicate the dual tracks of transhumanism: The mechanical augmentation side replaces biological hands with mechanically superior components while the biological enhancement side introduces bits of foreign DNA into our own cells to provide additional functionality. If the rate of progress continues, both of these tracks ought to be commonplace within the next 20 years or so. At the point where we can reprogram the human brain and replace limbs with mechanically superior prosthetics, Kurzweil’s Singularity will be here.

I, for one, am very excited.

That’s What She Said!

May 3, 2011

Physorg.com writes that University of Washington researchers have created a computer program capable of making double entendre jokes based on words with "high sexiness value," including "hot" and "meat." Despite the serious language analysis involved in such a silly exercise, I can't help but think this just means computers are a little closer to being able to ice their bros once they attain sentience.
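
For fun, here is a toy version of the idea: score a sentence by how 'sexy' its words are according to a small hand-made lexicon and flag it when the score crosses a threshold. The real system learned its sexiness values from large corpora; the lexicon and threshold below are invented purely for illustration.

```python
# Toy "that's what she said" detector. The real system learned word "sexiness"
# from large corpora; this tiny lexicon and threshold are invented for
# illustration only.

SEXINESS = {"hot": 0.9, "meat": 0.8, "wet": 0.7, "hard": 0.6,
            "banana": 0.5, "report": 0.05, "meeting": 0.05}

THRESHOLD = 0.5

def twss_score(sentence: str) -> float:
    words = [w.strip(".,!?").lower() for w in sentence.split()]
    scores = [SEXINESS.get(w, 0.1) for w in words]
    return max(scores) if scores else 0.0

def is_twss(sentence: str) -> bool:
    return twss_score(sentence) >= THRESHOLD

if __name__ == "__main__":
    for line in ["That meat is too hot to handle!", "Please file the report."]:
        print(line, "->", is_twss(line))
```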

In other news, researchers at the University of Electro-Communications in Japan have created a device that lets you simulate a kiss with your partner of choice over the internet, provided you routinely kiss with a straw in your mouth. With better technology and a less pencil-sharpener-looking device, however, users in long-distance relationships (of the serious or more casual kind) could build some level of intimacy despite miles of separation. One of the inventors suggests that if a major pop star were to program their kiss into the device, it might be a new and powerful way of connecting with fans; provided the technology gets better, that seems like a great point. And it's not too difficult to imagine other remote-tactile applications. I think remote-tactile interfaces are going to become immensely popular expansions of the general cyber-sex phenomenon that already exists, but the devices are going to have to be more realistic than a straw on spin cycle. Certainly the adult entertainment industry is throwing money at the idea, and has even created a racy term for the technology: teledildonics.

Finally, German researchers have created an eye-computer interface in which a subdermal power supply connects to a chip implanted under the retina to restore some vision to the blind. No longer the stuff of miracles, restoring sight to the blind is both important in its own right (for obvious reasons) and a great step toward understanding how the brain processes visual information. With a little more understanding, and a little better tech, it should be possible to enhance the visual range of people with perfectly normal vision, including such nifty (and useful) additions as zoom, night vision, and wirelessly updated heads-up displays. After all, basic augmented reality already exists in goggles, the military is working on more advanced versions, and it seems just a hop, skip, and a jump from augmented reality as a heads-up display to a display superimposed on our field of vision by bionic or cybernetic eyes.

Exciting stuff, from the silly to the useful.