
Archive for June, 2011

Now Hacking The Genetic Code

June 28, 2011

A group of scientists at the Children’s Hospital of Philadelphia has recently used genome editing to make hemophilia B in mice less severe. Hemophilia B is an inherited genetic disorder that prevents blood from clotting correctly. In the most severe forms of hemophilia, where less than 1% of the normal clotting factor is present in the blood, patients can suffer spontaneous internal bleeding around the joints even without any traumatic injury. They may also bleed spontaneously from the nose or gums; a severe problem when the blood fails to clot and the bleeding doesn’t stop. Traditional treatments for hemophilia include injections of harvested or synthetic clotting factors, or pills containing synthetic factors.

By using a virus to deliver an enzyme into the mice, Dr. Katherine High and her team have rewritten the DNA of the mice, causing them to produce more of the needed clotting factor. After injection, the virus makes its way to the mouse’s liver, where the enzyme cuts out the faulty gene in the cellular DNA. Healthy copies of the gene are also injected, and during the body’s normal repair process the faulty DNA is replaced with them, so the mouse begins to produce more of the clotting factor. Because the DNA itself has been rewritten, those cells continue to produce the clotting factor, reducing or eliminating the need for repeated injections or pills. Although this is only the first time genetic rewriting has been successfully used in a living animal, Dr. High and her team have already increased the clotting factor in these mice from 1% to 5% of normal, enough to change the severity of the mice’s hemophilia from “severe” to “moderate” or “mild” (5% is the accepted line between mild and moderate hemophilia). The additional clotting factor lessens the likelihood of spontaneous bleeding at the joints; at those levels, patients generally only bleed excessively during surgery or after injury. Dr. High and her team hope to increase the efficiency of the genetic treatment in the future, and to apply it to other genetic diseases and to HIV.

I’m no geneticist, so I’m unsure why only 4%-5% of the faulty DNA gets successfully rewritten. Maybe this is Hollywood talking (I’m looking at you, Resident Evil and I Am Legend), but I’m also a little leery about rewriting DNA. Unlike pharmaceuticals, which work their way through the body and are then expelled, one of the main benefits (and potential dangers) of genetic rewriting is that the alterations are permanent. I’m not suggesting that scientists ought not to use viral vectors to alter DNA, or denying the efficacy and benefits of editing genomes. Indeed, I think that DNA rewriting is our best shot at curing many of the diseases that currently plague people, and our most immediate route to meaningful enhancement of longevity and abilities. Ray Kurzweil, in The Singularity Is Near, called this type of procedure “the holy grail of genetic bio-engineering,” and I’m certainly not going to argue with Ray. Still, even though this on-the-fly rewriting of DNA in living organisms isn’t yet ready for human trials, I’d like to see scientists begin to educate the public about the relative safety of this sort of procedure. At the very least, some reassurance that an accidental zombie apocalypse only happens in the movies would be nice.

Assuming that this sort of treatment -is- safe, however, genetic rewriting offers a number of exciting opportunities. While ‘coding out’ genetic disorders probably ought to be at the top of the to-do list, by rewriting DNA we could also ‘code in’ any number of beneficial enhancements, from more robust immune systems that can fight off the bacterial and viral infections this sort of DNA rewriting can’t cure directly, to more frivolous applications like changing eye and hair color. Because so much of the human body is dictated by our DNA, rewriting that DNA offers us the ability to enhance ourselves in any way we know how to encode. We could not only tweak our own genetic code by copying parts of it from other humans with better genes, but potentially integrate superior attributes of other animals that are, themselves, determined by their DNA. Limited only by our own ingenuity and understanding, we ought to be able to enhance ourselves in any way that nature allows.

Ultimately, I still think that mechanical implants and engineering will surpass even the vast opportunity afforded by genetic rewriting, because mechanical engineering is not limited to the best of nature’s creativity. Very strong bones are certainly beneficial, but calcium cannot hope to compete with titanium and other engineered alloys. Immune systems do a great job of maintaining our bodies, but eventually nanobots will overcome even the most robust solutions nature allows. All that said, we are rewriting DNA now, while we are still in the very early stages of nano-scale engineering. Until engineering catches up with genetics, genetic rewriting seems to be our best chance at meaningful enhancement in the near future.

Medical Information and Privacy

June 17, 2011

In a recent TED talk, Daniel Kraft offered some insight into the future of healthcare technology. He ought to know: Daniel is an oncologist, and also a chair of the medicine and neuroscience division of SingularityU.

Although Kraft talks about many coming advances in medical technology, a large number of those advances depend on (or at least become more effective with) crowd-sourced medical information; that is, medical information shared by a large number of users so that the data can be analyzed for patterns. Given the large number of topics Kraft was lecturing about, and the short amount of time that TED allows its speakers, it’s unsurprising that he was unable to say more about how, and to what extent, consumers are willing to share their information. Yet when so much hinges on patients’ willingness to share that information, it seems reasonable to ask whether consumers will participate enough to allow the technologies Kraft describes to become fully effective.

Privacy and technology often seem at odds with one another, and the conflict is not limited to medical issues. Anecdotally, a few of my friends (still in their 20s or early 30s) are unwilling to create even a Facebook profile. They often say that they just don’t want their information so publicly accessible, or that they don’t see the benefit in being so easily discoverable. Although an unwillingness to share basic information seems (to me, anyway) unusual for people my age, older people in their 60s and 70s often seem more interested in maintaining their privacy than in being connected through Facebook-like sites (although I don’t want to discount the many people that age, or older, who -are- willing to share that sort of information and do participate in the online community).

The Facebook privacy concerns are an interesting case study in willingness to share information, for several reasons. First, Facebook has some 500 million users, indicating that a whole lot of people are willing to share personally identifiable information in at least a limited way. Second, Facebook has faced a lot of backlash over its information privacy policies, most recently for its new facial recognition technology. PCWorld recently declared the facial recognition feature “creepy” and “terrifying,” but later noted that “Facebook’s new facial recognition feature doesn’t yet work well enough to pose a significant threat to your privacy.” Nonetheless, privacy rights groups, the state of Connecticut, and EU regulators are questioning the legitimacy of the technology.

Although the facial recognition debate indicates that people are concerned about their privacy, do those concerns attach to medical information as well? One major gripe about Facebook’s new technology (and many of its previous privacy debacles) is that such features are often enabled by default, without any explicit opt-in by users (although users can opt out by hunting through the appropriate privacy settings). Healthcare laws in the US would probably require that patients specifically opt in to sharing their information, suggesting that at least one issue in the Facebook cases does not apply to medical information.

Another gripe is that Facebook’s settings generally, and its facial recognition technology specifically, tie a person to the data: when Facebook automatically tags a picture of you with your name, it is explicitly connecting information (the picture) and identity (the name, usually with a link to your profile). With medical information, whether this is true seems to be determined on a case-by-case basis. The website Kraft discusses at the start of his lecture attached a specific bit of genetic information to Kraft, then matched it up with another person and allowed the two users to contact each other, much like Facebook’s auto-tagging software. Other information, like the data Kraft suggests could be continuously uploaded through wireless nanobots or wearable sensors, could either be tied to a specific person (usefully, if one wants their doctor to be able to monitor their health) or be made anonymous for general use.

Assuming the information is not personally identifiable, some studies have found that people are willing to share their medical information anonymously. It is less clear how willing people are to share personally identifiable information. Because the viability of much future medical technology depends on uniquely identifiable information being available in at least limited contexts, I’d like to see more studies on whether the public is, in essence, willing to let that information flow. Although I hadn’t thought much about sharing medical information before, after some consideration I don’t think I would mind posting mine to the internet, so long as there were some controls in place over who could access it. Many of my friends would choose otherwise. Would you be willing to share your medical information if doing so could enable some of the technologies Kraft talks about?


Why Technology Is Not Bad For The Brain

June 3, 2011

The Mark News recently published a piece by Susan Greenfield of Oxford University’s Institute for the Future of the Mind. She argues that technology is having adverse effects on the human mind, for a number of reasons. First, computers are replacing the full, five-sense experience we get from the “real world” with a two-sense experience (namely, sight and hearing), because that is all computer monitors are capable of producing. Because monitors only have two senses to work with, they overload those two senses in order to keep us interested, and in the process we move from understanding to simple information processing as we try to make sense of the rapidly flashing sights and sounds our monitors emit. This information processing leads to shallower understanding and “infantilizes” our brains. She also tacks on a complaint about Twitter status updates being trivial and attention-seeking. She ends by arguing that by spending more time in front of the computer, people are learning fewer “real life” skills, like how to make eye contact and socialize with their peers.

With due respect, I think she’s completely missed the mark (no pun intended). I’ll address each of her points in turn.

She is undeniably correct that monitors are limited to two-sense output. But this limitation isn’t unique to monitors – televisions, movie screens, etc. are also limited to two senses. Books, before that, were limited to either two senses (if you count the tactile sensation of holding the book, although then it seems like keyboards ought to give computers three-sense input) or a single sense – sight. If less-than-full sensory input negatively affects the brain, then the problem Greenfield is describing has been going on for quite some time – at least as long as the general populace has had ready access to books.

On the other hand, she’s only right as a matter of contingency. Even granting that monitors are limited to two senses, she isn’t taking into account tactile feedback devices that already allow for more sensory input (as a simple example, vibrating “rumble” controllers have been around for well over a decade), and she’s discounting how short a period this limitation is likely to last. Sight and sound are easily programmable. Basic tactile feedback is easy enough, and more complicated feedback is starting to come around – see the “kissing machine” I wrote about previously, and teledildonics in general. See also the Wii remote, Microsoft’s Kinect, and the PlayStation Move, which require users to actually move about in an approximation of the activity being simulated. See also Guitar Hero. Taste and smell are the most complicated to program, because we would need either something like a direct neural connection to simulate them, or some kind of taste-and-smell mixing packet integrated with the machine so that it can produce appropriate smells and tastes on demand. With full-immersion VR we will probably get there (and I imagine smell will come first, because it is easier to pump a gas than to layer on a taste), but one wonders if expecting taste and smell from a computer is any more sensible than expecting taste and smell from a car.

Greenfield is also mostly right that because monitors only have two senses to work with, they (potentially) maximize their output to those senses. However, the extent to which this is true depends on what the computer is being used for. If you’re playing Call of Duty, with twenty or thirty players running around a chaotic map firing guns, tossing exploding grenades, and calling in attack choppers to rain death down on every enemy on the map, then yes, sight and sound are being overloaded to the point where information processing is more important than cognitive understanding. These are “twitch-based” games for a reason – they rely more on instinct than understanding, although some tactical knowledge never hurts in warfare games. Further down the spectrum are turn-based strategy games, where the game pauses to allow the player to set up their next move and then resumes to show the outcome of the player’s choices. These have periods of downtime where cognitive processing is very important; they are strategy games, so understanding and appropriately reacting to whatever situation the game presents is vital – like chess. There are then moments of flashing sounds and lights when the decision plays out and pays off in sensory experience. See games like Civilization, or the turn-based Final Fantasy games (which are more role-playing games than turn-based strategy games, but still require considered thought on a pause screen during battle). At the far end of the spectrum is a simply laid-out webpage, which seems, to me, no more complicated than a book. How ‘overloaded’ are your senses as you read this? How much thinking are you doing about the content of what you read, and how much twitch-based information processing is expected? Computers (and video game consoles, which I consider essentially the same) offer a variety of output options, some more information-process-y and some more cognitive.

If information processing infantilizes our brains, then it seems we have bigger problems than computers. How does driving (or riding a horse) differ from a twitch-based game in terms of the amount of sensory input that needs to be dealt with immediately? Indeed, if computers are limited to two senses, then ought we not be glad that our brains have less information to process? Shouldn’t we worry more about full-on experience than about the admittedly limited output offered by computers? Each of us is constantly trying to make sense of a “fast and bright” world consisting of whatever sensory inputs are crashing through our eyes, noses, and ears and washing over our tongues and skin. If this torrent of information is a bad thing, as Greenfield implies, then ought we not be glad to have the comparatively restful stream of information offered by even the most chaotic two-sense games? Ought games not be restful to the brain, as many gamers think, rather than hyper-stimulating?

The understanding we gain from computers depends entirely on what the computer is being used for. If one spends all their time playing Farmville there will be some cognitive development, but I expect there is only so much knowledge to be gained from the game. If, however, one spends all their time reading the Stanford Encyclopedia of Philosophy, or even Fark.com, then much more knowledge-based cognitive development is possible. The computer is no more of a brain-sucking device than a television, and the information derived from a television, like that from a computer, depends on what the device is being used for: watching the Science Channel is going to lead to more cognitive development than Jersey Shore.

Greenfield’s final argument, that interacting via computer detracts from “real world” skills, is well taken but, I think, ultimately mistaken.

First, like her sensory argument, this argument is at best only contingently true. As video calling and holographic representation become more popular, eye contact and other traditional “real world” skills become more and more useful online as well. Second, as I’ll argue below, useful social skills are already being learned through the computer medium. Finally, her argument depends on the “real world” remaining the primary method of communication between people. Just as someone who is socially awkward (perhaps, as Greenfield implies, because of time spent on the computer) has difficulty maneuvering through social situations in the real world, so too does a computer ‘noob’ have difficulty maneuvering through social situations in the virtual world. How many of us have chastised (or at least seen chastised) someone for formatting their emails poorly, or for typing in all caps? How difficult is it to ‘get’ the internet if one doesn’t understand basic internet lingo like emoticons ( 🙂 ) and acronyms (LOL)? Even a basic understanding is not enough to skate by effortlessly if one doesn’t pick up on the subtler internet cues that evidence, if not (to borrow an old term) internet leetness (itself an example of internet culture), then at least familiarity and proficiency. Does anyone who spends all their time in the “real world” understand the importance of Anonymous? If one constantly runs across acronyms they don’t understand (omgwtfbbqpwn), memes they don’t get (lolcats have become pretty mainstream, but what about technologically impaired duck, or Joseph Ducreux?), or anything on (sorry to bring them into it) 4chan, one is just as lost in the virtual world as the socially awkward are in the real one. Technologically impaired duck is, as a whole, about people who don’t ‘get’ internet culture.

Beyond pure information, I’ve seen that people who have spent their entire lives offline still have difficulty separating good sources from bad, or valid emails from spam and scams. In short, there is a distinct culture on the internet that could, if the internet becomes the primary method of communication, be just as rich as the “real world” – one where picking up subtle cues through emoticons or acronyms is just as important as making eye contact or cracking a joke in person.

Finally, I don’t think that “real world” social skills are being lost simply by being online, as Greenfield claims. Certainly, for all its downsides, Facebook has taught people to network better. Prior to the internet, one was largely limited to communicating with people in roughly the same geographic area, plus perhaps a few farther-flung contacts if one had traveled extensively for work or school. It was hard to keep in touch, and people had to make an effort to call or write to one another. Now it’s much easier to keep in touch with far-flung friends, to know without much trouble what is going on in their lives (to the extent that they make such information available), and to drop a note to say hello. Games like World of Warcraft force groups of people to work together, teaching leadership skills and group dynamics along the way. The internet is about connecting people with each other, and that is a skill in vogue both online and offline.

It seems to me that Greenfield is upset that the method of communication is changing, but I don’t think her (admittedly short) article really explains how technology is damaging the human brain. I have no doubt she’s a smart woman and probably has a lot of academic work to back up her claims; I’d like to see that work and find out whether it, too, rests on what I consider to be faulty assumptions.