A group of scientists at the Children’s Hospital of Philadelphia has recently used genome editing to make hemophilia B in mice less severe. Hemophilia B is an inherited genetic disorder that prevents blood from clotting correctly. In the most severe forms of hemophilia, where less than 1% of the normal clotting factors are present in the blood, spontaneous internal bleeding can occur around the joints even without a traumatic injury. Patients may also bleed spontaneously from the nose or gums, a serious problem when the blood fails to clot and the bleeding doesn’t stop. Traditional treatments for hemophilia include injections of harvested or synthetic clotting factors, or pills containing synthetic factors.
In a recent TED talk, Daniel Kraft offered some insight into the future of healthcare technology. He ought to know: Kraft is an oncologist and chair of the medicine and neuroscience division of SingularityU.
Although Kraft talks about many coming advances in medical technology, a large number of them depend on (or at least become more effective with) crowd-sourced medical information; that is, medical information shared by enough users that the data can be analyzed for patterns. Given the large number of topics Kraft covered, and the short amount of time TED allows its speakers, it’s unsurprising that he couldn’t say more about how, and to what extent, consumers are willing to share their information. Yet when so much hinges on patients’ willingness to share that information, it seems reasonable to ask whether consumers will participate enough for the technologies Kraft lectures about to become fully effective.
Privacy and technology often seem at odds with one another, and the conflict is not limited to medical issues. Anecdotally, a few of my friends (still in their 20s or early 30s) are unwilling to create even a Facebook profile. They often say that they just don’t want their information so publicly accessible, or that they don’t see the benefit in being so easily discoverable. Although an unwillingness to share basic information seems (to me, anyway) unusual for people my age, older people in their 60s and 70s often seem more interested in maintaining their privacy than in being connected through Facebook-like sites (although I don’t want to discount the many people that age, or older, who *are* willing to share that sort of information and do participate in the online community).
Although the facial recognition technology debate indicates that people are concerned about their privacy, do these concerns attach to medical information as well? One major gripe about Facebook’s new technology (and many of its previous privacy debacles) is that new features are often enabled by default, without any explicit opt-in by users (although users can opt out by hunting through the appropriate privacy settings). Healthcare laws in the US would probably require that patients specifically opt in to sharing their information, suggesting that at least one issue in the Facebook cases does not apply to medical information.
Another gripe is that Facebook’s settings generally, and its facial recognition technology specifically, tie a person to the data: when Facebook automatically tags a picture of you with your name, it is explicitly connecting information (the picture) and identity (the name, usually with a link to your profile). Whether the same is true of medical information seems to be determined case by case. The website Kraft discusses at the start of his lecture attached a specific bit of genetic information to Kraft, matched it with another person, and allowed the two users to contact each other, much like Facebook’s auto-tag software. Other information, like the data Kraft suggests could be continuously uploaded through wireless nanobots or wearable sensors, could either be tied to a specific person (usefully, if one wants one’s doctor to be able to monitor one’s health) or be anonymized for general use.
Assuming the information is not personally identifiable, some studies have found that people are willing to share their medical information anonymously; it is less clear how willing people are to share personally identifiable information. Because the viability of much future medical technology depends on uniquely identifiable information being available in at least limited contexts, I’d like to see more studies researching whether the public is, in essence, willing to let this information flow. Although I hadn’t thought much about sharing medical information before, after some consideration I don’t think I would mind posting mine to the internet, so long as there were some controls on who could access it. Many of my friends would choose otherwise. Would you be willing to share your medical information if doing so could enable some of the technologies Kraft talks about?
With due respect, I think Susan Greenfield has completely missed the mark (no pun intended). I’ll address each of her points in turn.
She is undeniably correct that monitors are limited to two-sense output. But this limitation isn’t unique to monitors – televisions, movie screens, and the like are also limited to two senses. Books, before that, were limited to either two senses (if you count the tactile sensation of holding the book, although then it seems like keyboards ought to give computers three-sense input) or a single sense: sight. If less-than-full sensory input negatively affects the brain, then the problem Greenfield describes has been going on for quite some time – at least as long as the general populace has had ready access to books.
On the other hand, she’s only right as a matter of contingency. If monitors are limited to two senses, then first, she isn’t taking into account tactile feedback devices that already allow for more sensory input (as a simple example, vibrating “rumble” controllers have been around for well over a decade), and second, she’s discounting how short a time this limitation is likely to last. Sight and sound are easily programmable. Basic tactile feedback is easy enough, and more complicated feedback is starting to come around – see the “kissing machine” I wrote about previously, and teledildonics in general. See also the Xbox Kinect and the PlayStation Move, which require users to actually move about in an approximation of the real activity (the Kinect by tracking the body directly, the Move via a handheld controller). See also Guitar Hero. Taste and smell are the most complicated to program, because we would need either something like a direct neural connection to simulate them or some kind of taste-and-smell mixing packet integrated with the machine, so that it could produce appropriate smells and tastes on demand. With full-immersion VR we will probably get there (and I imagine smell will come first, because it is easier to pump a gas than to layer on a taste), but one wonders if expecting taste and smell from a computer is any more sensible than expecting taste and smell from a car.
Greenfield is also mostly right that, because monitors only have two senses to work with, they (potentially) maximize their output to those senses. However, the extent to which this is true depends on what the computer is being used for. If you’re playing Call of Duty, with twenty or thirty players running around a chaotic map firing guns, tossing exploding grenades, and calling in attack choppers to rain death down on every enemy on the map, then yes, sight and sound are being overloaded to a point where information processing matters more than cognitive understanding. These are called “twitch-based” games for a reason – they rely more on instinct than understanding, although some tactical knowledge never hurts in a warfare game. Further down the spectrum are turn-based strategy games, where the game pauses to let the player set up their next move and then resumes to show the outcome of their choices. These have periods of downtime where cognitive processing is very important; they are strategy games, and understanding and appropriately reacting to whatever situation the game presents is vital – like chess. Then there are moments of flashing sounds and lights, where the decision plays out and pays off in sensory experience. See games like Civilization or the turn-based Final Fantasy titles (which are more role-playing games than turn-based strategy games, but still require considered thought on a pause screen during battle). At the far end of the spectrum is a simply laid-out webpage, which seems, to me, no more complicated than a book. How “overloaded” are your senses as you read this? How much thinking are you doing about the content of what you read, and how much twitch-based information processing is expected? Computers (and video game machines, which I consider essentially the same) offer a variety of output options, some heavier on information processing and some more cognitive.
If information processing infantilizes our brains, then it seems we have bigger problems than computers. How does driving (or riding a horse) differ from a twitch-based game in the amount of sensory input that must be dealt with immediately? Indeed, if computers are limited to two senses, then ought we not be glad that our brains have less information to process? Shouldn’t we worry more about full-on experience than about the admittedly limited output offered by computers? Each of us is constantly trying to make sense of a “fast and bright” world consisting of whatever sensory inputs are crashing through our eyes, noses, and ears and washing over our tongues and skin. If this torrent of information is a bad thing, as Greenfield implies, then ought we not be glad to have the comparatively restful trickle of information offered by even the most chaotic two-sense games? Ought games not be restful to the brain, as many gamers believe, rather than hyper-active?
The understanding we gain from computers depends entirely on what the computer is being used for. If one spends all their time playing Farmville there will be some cognitive development, but I expect there is only so much knowledge to be gained from the game. If, however, one spends all their time reading the Stanford Encyclopedia of Philosophy, or even Fark.com, then much more knowledge-based cognitive development is possible. The computer is no more a brain-sucking device than a television, and the information derived from either depends on what the device is used for: watching the Science Channel is going to lead to more cognitive development than watching Jersey Shore.
Greenfield’s final argument, that interacting via computer detracts from “real world” skills, is well taken but, I think, ultimately mistaken.
First, like her sensory argument, this one is at best contingently true: as video calling and holographic representation become more popular, eye contact and other traditional “real world” skills become more and more useful even online. Second, I’ll argue below that useful social skills are already being learned through the computer medium. Finally, her argument depends on the “real world” remaining the primary arena of communication between people. Just as someone who is socially awkward (perhaps, as Greenfield implies, because of time spent on the computer) has difficulty maneuvering through social situations in the real world, so too does a computer ‘noob’ have difficulty maneuvering through social situations in the virtual world. How many of us have chastised (or at least seen chastised) someone for formatting their emails poorly, or for typing in all caps? How difficult is it to ‘get’ the internet if one doesn’t understand basic internet lingo like emoticons ( 🙂 ) and acronyms (LOL)? Even a basic understanding is not enough to skate by effortlessly if one doesn’t grasp the subtler internet cues that evidence, if not (to borrow an old term) internet leetness (itself an example of internet culture), then at least familiarity and proficiency. Does anyone who spends all their time in the “real world” understand the importance of Anonymous? Someone who constantly runs across acronyms they don’t understand (omgwtfbbqpwn), memes they don’t get (lolcats have become pretty mainstream, but what about Technologically Impaired Duck, or Joseph Ducreux?), or anything on (sorry to bring them into it) 4chan will be just as lost online as the socially awkward are offline. Technologically Impaired Duck is, as a whole, about people who don’t ‘get’ internet culture.
Beyond pure information, I’ve seen that people who have spent their entire lives offline still have difficulty separating good sources from bad, or valid emails from spam and scams. In short, there is a distinct culture on the internet that could, if the internet becomes the primary method of communication, be just as rich as the “real world” – one where picking up subtle cues through emoticons or acronyms is just as important as making eye contact or cracking a joke in person.
Finally, I don’t think that “real world” social skills are being lost simply by being online, as Greenfield claims. Certainly, for all its downsides, Facebook has taught people to network better. Prior to the internet, one was limited to communicating with people in roughly the same geographic area, plus perhaps a few others if one had traveled extensively for work or school. It was hard to keep in touch, and people had to make an effort to call or write to one another. Now it’s much easier to keep in touch with far-flung friends, to know without much trouble what is going on in their lives (to the extent that they make such information available), and to drop a note to say hello. Games like World of Warcraft force groups of people to work together, teaching leadership skills and group dynamics along the way. The internet is about connecting people with each other, and that is a skill in vogue both online and offline.
It seems to me that Greenfield is upset that the method of communication is changing, but I don’t think her (admittedly short) article really explains how technology is damaging the human brain. I have no doubt she’s a smart woman who probably has a lot of academic work to back up her claims; I’d like to see that work and find out whether it, too, rests on what I consider to be faulty assumptions.