A couple of interesting things happened this week in the legal world regarding consumer technology.
AT&T won its case at the Supreme Court. The court held in a 5-4 decision (along predictable lines) that subscribers could not bring a class action lawsuit against AT&T (and more specifically, that another company couldn’t bring one on their behalf) when the cell phone contract has an arbitration clause. For the last 40 years or so, the Supreme Court has consistently upheld arbitration agreements for individual consumers; most credit card agreements, car purchases, cable contracts, and other contracts for goods and services contain a clause that says (in more or less plain English) that any dispute you have with the company is subject to arbitration. In effect, the Supreme Court has given its blessing to contract clauses that force consumers to give up their legal rights in favor of arbitration. It’s worth tempering that concern with two points.
First, arbitration is supposed to be neutral, though some clauses are written better than others. The best (from a consumer perspective) allow for some negotiation as to who the arbitrator is, but in all cases it should be a third party that is (in theory) unaffiliated with the company (or the consumer, for that matter.) In that sense, arbitration might be about as good as going before a judge, though there’s some question whether arbitrators have the same experience as a judge; the clauses can also dictate where the arbitration is to take place, and that location might not be as convenient (by several states, perhaps) as your local courthouse.
Second, there are legitimate reasons for not going to court. Court is expensive, it takes a long time, it’s complicated, and it doesn’t necessarily end in a decision that is ‘more fair’. Also, the judiciary really is backed up; in 2010 there were something like half a million federal cases filed or pending, with another 1.5 million bankruptcy cases. State figures are much worse. So, there is some incentive to keep as much work out of the court system as possible (or to increase the number of courts and judges, but that would take money, which means taxes, which Glenn Beck would just rant about.)
So, with those caveats, the Supreme Court has moved toward arbitration. This case, however, is unique because California (despite having the most lawsuits in the country) decided that companies like AT&T couldn’t bind consumers via their arbitration agreements if those agreements said that consumers couldn’t sue as a class (that’s a whole bunch of people in similar situations suing together, in one suit.) Because an entire class sues together in a single suit (albeit a very complicated suit), the judicial efficiency concerns are vastly muted, and it seemed unfair to California that its citizens might have to travel to other states (individually, no less) to arbitrate what is, in all likelihood, a very small claim. Class actions exist because no sane person would spend thousands litigating a small wrong done by a company (say, $50 in overage fees over a year) but might sue as a class of thousands or millions for a large amount of money and then divvy it up between them.
The Supreme Court said no. If the contract you sign says you must arbitrate individually, then you can’t go to court or arbitrate as a class, even if a state law says otherwise. Now, each part of that is established by judicial precedent. Folks who sign arbitration agreements are largely bound to them unless there is something really strange about that particular agreement. And federal law -does- trump state law; pretty much always. But by ruling that classes of people can’t sue instead of arbitrate (and it’s unclear that they can even -arbitrate- as a class) the Supreme Court has given the go-ahead to companies that want to nickel and dime consumers with potentially illegal fees, banking on the unlikeliness of an individual being willing to spend (in all likelihood) more than they would recover to arbitrate their claim with the company. While one can (theoretically) negotiate a contract with a company and have objectionable terms removed … try that the next time you get cell phone service. It just doesn’t happen in the real world, which means that if you want a car, or a cell phone, or a credit card, or a thousand other goods, you’ll have to give up your rights to sue, even as a class.
So, now companies have a little more license to abuse their customers. Thanks, SCOTUS.
Read the Court’s opinion here.
In similar news, some 77 million Playstation Network customers have filed a class action suit against Sony for failing to stop hackers from retrieving their names, login information, email addresses, birth dates, purchase history, and possibly credit card information from Sony’s (supposedly) secure servers. These folks want some damages, and at least want Sony to pay for credit monitoring services for the customers whose information was taken from Sony’s servers. Read the complaint here.
I’m not a PSN customer, so I’m unsure whether their contract specifically states that class action lawsuits or arbitration are disallowed. Given Sony’s massive size, and the sophistication of their company, I’d bet that it does. So, given the new Supreme Court ruling, what can we expect? If I were arguing the case for Sony, I’d argue that each customer is going to have to arbitrate with Sony individually, via the terms of the agreement, for whatever they want. Of the 77 million, I’d expect fewer than 3 million people to actually arbitrate, and I’d expect to win some of those cases (what arbitration firm wants to rule against a company that uses its services repeatedly?) and the others would get small payments. It’s certainly feasible to think that a class action worth hundreds of millions of dollars (at least) could get paid out for less than 20 million dollars. In essence, Sony asked for their customers’ information, didn’t keep it secure, exposed their customers to potential identity theft and now (if the case works out as I imagine it will) will tell their customers “Sorry guys, but if you want us to do anything about it, you’ll have to come to arbitration.”
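The back-of-envelope arithmetic behind that guess can be sketched quickly. To be clear, every figure below is my own hypothetical from the paragraph above; none of this is real settlement data.

```python
# Sketch of the hypothetical Sony arbitration scenario above.
# All figures are the post's own guesses, not real settlement numbers.
subscribers = 77_000_000       # PSN customers in the would-be class
claimants = 3_000_000          # "fewer than 3 million" actually arbitrate
total_payout = 20_000_000      # "paid out for less than 20 million dollars"

participation = claimants / subscribers
avg_recovery = total_payout / claimants

print(f"share of customers who arbitrate: {participation:.1%}")  # ~3.9%
print(f"average recovery per claimant:    ${avg_recovery:.2f}")  # ~$6.67
```

The point of the sketch is just that individual arbitration turns a nine-figure class claim into a few dollars per customer, which is exactly the incentive problem class actions were meant to solve.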
Perhaps, however, this PSN case will work its way through the system and turn out just fine. I’ll be keeping an eye on it to see if the new AT&T ruling shuts down the PSN case before it really gets started.
Two interesting robot articles hit my RSS feed today. Though they’re not ‘intelligent’ robots, they do show some interesting characteristics.
First, a robot will throw out the first pitch today at the Phillies game. Now, a robot throwing pitches is not that strange; batting cages have been automated for as long as I can remember (not to date myself too much) and throw at a variety of speeds. Two things seem interesting about this robot, however. First, it’s mobile. Granted, it’s not bipedal (like some baseball-hurling Terminator, thank goodness) but that might be a feature instead of a defect; bipeds are inherently unstable, and a bipedal robot would seem more human but otherwise wouldn’t make sense structurally. Considering the trouble most companies have teaching bipedal robots to walk, it’s unsurprising that a robot ‘slapped together’ over a few months has wheels (not to say that robots haven’t gotten much better at walking over the last few years.) The second interesting characteristic is that it’s throwing a pitch in the MLB. Batting cages are unexciting enough that we’ve never (to my knowledge) had a ‘pitching robot’ throw out the first pitch, and so this is a first of sorts. After seeing Watson on Jeopardy!, and now a robot throwing the first pitch in an MLB game, I wonder what other robots we’ll see in the media. Not too shabby for some engineers from Penn on short notice. I almost wish the umps would let this robot unleash its best pitch once before the game (or during the 7th, whatever.)
Second, and more traditionally robotic, a robot sculpts an aluminum (showpiece) motorcycle helmet out of a block of metal. Although this robot won’t be confused for human any time soon, it does showcase some amazing skills that robots have, even if they aren’t conscious. Here, a robot is crafting far more quickly, far more precisely, and far more intricately than any human would have been able to (particularly in the time that it does.) With a little conscious creativity, it seems like it could put out some artwork that shames its human counterparts.
Given upcoming law finals, I’m going to punt on writing another long article (at least for today) and instead link to another fascinating H+ article, this time by Eray Özkural.
Join Eray on a fascinating journey through the mind of a bat, a person, an upload, and a machine, via philosophers like Searle and Nagel. But not at the same time: An uploaded bat-person cyborg philosopher is just too much.
I came across an opinion piece in the Catholic San Francisco Online Edition this week written by Sandro Magister. He was, according to the head notes, summarizing part of a talk by French philosopher Fabrice Hadjadj. Fabrice argues that the term “transhumanism” was coined by Julian Huxley (brother of Aldous Huxley, of Brave New World fame), the first director of the United Nations Educational, Scientific and Cultural Organization (UNESCO) and a supporter of eugenics. This seems to be roughly correct (there is some disagreement, but most sources I can find verify the basic information) and I’ll take it as true for the purposes of this article.
Fabrice argues that Huxley (Julian, for the remainder of the article) coined the term to talk about eugenics without using the dirty ‘E’ word so tarnished by Nazi atrocities. He then goes on to say: “Nonetheless, the same thing [eugenics] is intended: the redemption of man through technology” and “It is precisely a matter of improving the “quality” of individuals, as one improves the “quality” of products, and therefore, probably, of eliminating or preventing the birth of everything that would appear as abnormal or deficient.”
There are, it seems to me, two major problems with Fabrice’s argument.
First, as the wiki linked above indicates, eugenics was a respectable idea up until the end of WWII, when the Nazis perverted the idea and applied it to traits that most people don’t see as defects. While the Nazis were interested in a so-called ‘master race’ and took eugenics to be the elimination of ‘inferior racial and other undesirable groups’, eugenics prior to WWII focused on a much tamer idea of ‘undesirable traits’, including traits like hemophilia and Huntington’s disease. Even the tamer idea of eugenics carries significant ethical questions concerning people’s right to reproduce, and entails difficult determinations about which traits are ‘desirable’ and which are ‘undesirable’, to whom, and why.
Today, many people would probably agree that traits like autism are largely undesirable (that is, few people if any would choose to have an autistic child, given the choice to have a child without autism) but would balk at the idea of labeling traits like homosexuality undesirable (Fred Phelps and others aside.) At the core, however, eugenics seems to be about the right to reproduce: Who should, and who shouldn’t, given the genetic code that will be passed on. No less than US President Theodore Roosevelt and Supreme Court Justice Oliver Wendell Holmes supported the general idea, though some of the policies set forth by these otherwise great men might today be considered unethical. See, for instance, Buck v. Bell, a US Supreme Court case from 1927 upholding a Virginia forced-sterilization law for the mentally ill. The point, however, is that the general idea of eugenics might be defensible if one doesn’t immediately point to the Nazis as the primary supporters of the idea; a move known on the internet as Godwining the argument.
Second, and more importantly, whatever the ethical status of eugenics, it seems to me that most transhumanist ideas represent something different. A large part of transhumanism revolves around individual choice: A particular person might choose to implant a piece of technology, replace a biological limb with another, or even ingest a pill that rewrites some part of their genetic code such that their own traits are changed. Changing one’s own traits, however, is fundamentally different from telling others that they are not allowed to reproduce for the good of the species. The ethical problems inherent in a eugenic ‘master plan’ that tells others they cannot reproduce because of the traits they would pass on to a child are simply not present when an individual, already fully formed, rational, and competent, chooses to change their own appearance or genetic code.
There are non-transhumanist practices with some similarity to eugenics to which people already subscribe; these involve not personal choice but choice over another (usually a fetus). For instance, people largely seem to be morally OK with screening for ‘undesirable’ traits in their children, like autism and Huntington’s disease. Today, a positive result on such a test largely informs the parents that their child might be afflicted with some disease, but there is little that can be done about the disease itself. If the screening is completed early enough, many women are comfortable with the idea of aborting a would-be child, but many others, uncomfortable with the ethical implications, are not. As the traits identified become less and less serious (say, Down syndrome or other life-affecting traits on the ‘serious’ extreme, and hair color on the ‘trivial’ extreme), the corresponding number of people willing to accept abortion as an option drops. Rightfully so, as abortion really is an ‘all-or-nothing’ scenario, with no middle ground.
Transhumanism presents the possibility of that middle ground and, when transhumanism is viewed as an extension of current medical technology, likely leads to an extension of current medical attitudes. Thus, it might be morally acceptable to rewrite the genetic code of an infant (or fetus, or embryo, or whatever) to remove serious genetic defects, but less acceptable to rewrite the genetic code of an infant to change their hair color. See, for instance, the ‘designer baby’ controversy. Because genetic rewrites are not necessarily an all-or-nothing affair like abortion, they give parents the ability to have a child without a life-altering disease like Down syndrome, without forcing them to choose between having a baby with the genetic disease on the one hand and aborting what would become their child on the other.
Further, depending on how many times (and how successfully, and how cheaply, and safely, etc) genetic rewrites can be accomplished, the moral status of even trivial decisions might become a non-issue. Where traits like hair and eye color, skin tone, and maybe even memory and intelligence can be rewritten at will, particularly through cheap, safe drugs that can be taken at any time, a parent’s preference for a (say) brown haired, green eyed child might be but a default setting, freely changeable when that child reaches the age of majority (or the parents are willing to sign the permission forms.) The more traits that can be changed (perhaps sexual preference, gender itself, height, weight, whatever) the less moral stigma is attached to parents choosing any particular traits for their child.
If traits like these are freely, cheaply, and safely changeable, then ‘eugenics’ both ceases to mean what it previously meant (controlling traits through restrictive reproductive permissions) and ceases to carry the attached social stigma (because having an ‘undesirable’ trait, or not, is wholly a matter of personal preference.) Far from code for eugenics, transhumanism might be the idea that makes eugenics itself irrelevant.
Last night, Ray Kurzweil appeared on the Colbert Report to discuss his new film, Transcendent Man. Because Transcendent Man covers transhumanism broadly, the segment opened with a film clip and then gave Kurzweil a chance to discuss transhumanism generally.
I’ve seen the entire Transcendent Man film, and the snippet shown on the Colbert Report is a fair summary. The film itself takes (to my mind) a strange and slightly creepy turn toward the religious; I’m not sure the film really gets across a message about the virtues of transhumanism (though it is discussed thoroughly) and instead puts the focus on Kurzweil and makes him seem like a creepy prophet. Other interviews suggest that Kurzweil is not nearly so obsessed (with reviving his father, anyway) as Transcendent Man makes him seem.
The Colbert interview, on the other hand, struck more of a techno-idealist tone with the usual Colbert humorous undertone. As Singularity Hub points out, this interview on the Colbert Report will probably reach a younger (and larger) audience than Kurzweil’s longer interviews on shows like Charlie Rose. Because of the Colbert format, Kurzweil wasn’t able to get too in depth with his research and arguments for The Singularity, but Kurzweil was still able to get the high points across and build some excitement. Given the much vaunted ‘Colbert Bump’ I’d expect to see articles in the next few days saying that downloads of Transcendent Man skyrocketed.
While I’m all in favor of transhumanism gaining more mainstream support, I worry that people who are excited by the Colbert interview will get introduced to transhumanism via Transcendent Man, write off Kurzweil and his associated ideas because of the tone the film takes, and (at least without a little information as prep) associate believers with the fringe beliefs that seem to take center stage in the film. I wish there were a better film to introduce the mainstream to transhumanism, or that the film had mentioned other sources of information like H+.
In the April 2011 issue of Scientific American (subscription required), Michael Gazzaniga presents a survey of emerging neuroscience research and analyzes the legal issues those breakthroughs might raise. Most of the article discusses the legal view of new technologies like fMRI (functional magnetic resonance imaging) as complicated polygraphs. Most judges currently exclude fMRI evidence intended to (for instance) show that a witness is lying or that an accused murderer has a brain defect likely to cause insanity. The reason is that reliable predictions of behavior based on fMRI currently require samples of scans of other brains, and researchers can describe the results of a given scan only in terms of what similar scans are likely to mean. Put another way, X-brain-activity is likely to mean that the witness is lying based on the brain activity of other people who have been scanned, but the presence of X-brain-activity does not conclusively prove that the witness is lying (and anyway could only show that the witness believes they are lying, or intends to tell a lie, not that the proposition spoken by the witness is, in fact and devoid of belief, false.) Correlation, as my undergraduate professors liked to reiterate, is not causation.
That said, however, even in criminal trials the law does not require absolute certainty. Thank goodness for that, because skeptical philosophers continue to point out that about the only thing we can be certain of is that “I exist” (for whichever I happens to be thinking the proposition at the time) and little else. Thanks, Descartes. We’ve decided, as a society, that punishment is proper where some defendant likely committed some wrong (or really, really, likely committed some wrong in the criminal context) and that as a result we accept that some (actually) innocent people are going to be punished when we think it likely that they’re guilty. Given that we don’t require absolute certainty, should brain scans be excluded just because they, too, are not 100% accurate? If the accuracy of a brain scan is tied to the sample size that the results are being compared to, shouldn’t we get larger sample sizes rather than wait on the technology to get better? If a researcher could testify that a sample size of 10,000 subjects reliably detected lies in 85% of cases, it seems (to my underdeveloped understanding of burdens of proof) that the civil ‘preponderance of the evidence’ standard is certainly met, and that the criminal ‘reasonable doubt’ standard is also likely met. If that’s right, then on what basis are these scans being excluded?
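One wrinkle those percentages gloss over: an ‘85% accurate’ scan doesn’t directly give the probability that a particular witness is lying; that depends on how likely a lie was before the scan. A quick Bayesian sketch (my own illustration, not from Gazzaniga’s article, with sensitivity and specificity both assumed to be 85% and the prior chosen for the example):

```python
def posterior_lying(sensitivity, specificity, prior):
    """Bayes' rule: P(lying | scan flags a lie).

    sensitivity: P(flag | lying); specificity: P(no flag | truthful);
    prior: P(lying) before the scan is considered.
    """
    true_pos = sensitivity * prior
    false_pos = (1 - specificity) * (1 - prior)
    return true_pos / (true_pos + false_pos)

# With an even prior, an 85%-accurate scan leaves an 85% chance of a lie,
# comfortably past the ~50% 'preponderance of the evidence' threshold.
print(round(posterior_lying(0.85, 0.85, 0.5), 2))  # 0.85
# With a skeptical 20% prior, the posterior falls to about 59% --
# past preponderance, but hardly 'beyond a reasonable doubt'.
print(round(posterior_lying(0.85, 0.85, 0.2), 2))  # 0.59
```

So whether the civil or criminal standard is met turns as much on the rest of the evidence (the prior) as on the scan’s headline accuracy, which may be part of what makes judges nervous.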
Gazzaniga also asks “Would it erode notions of free will and personal responsibility more broadly if all antisocial decisions could seemingly be attributed to some kind of neurological deviation?” He states, as a follow up: “People, not brains, commit crimes.” What could that second statement possibly mean, given the current scientific understanding of how the universe works? And, based on that understanding, isn’t the answer to the first question likely to be yes anyway? To vastly oversimplify a raging philosophical debate, most philosophers and (presumably) scientists subscribe to a roughly physicalist view of the universe. That is: Science tells us that everything is made of physical stuff (quantum mechanics aside) and that everything else is a relation between that physical stuff. If we accept the premise that the universe is made of all and only physical stuff, then “we” can be little more than our brains. Our brains, made of physical stuff as they are, are themselves controlled by complicated chemical interactions that present to each of us, individually, as personality and consciousness, and to the scientists studying them as brainwaves and chemical activity. Put more simply, we just are our brains, and our brains just are a long chain of chemical reactions. If a deviant chemical reaction causes (what we consider to be) aberrant behavior, then doesn’t it make more sense to fix the chemical reaction than to assign blameworthiness based on outdated notions?
I oversimplify, of course, and gloss over interesting ethical questions to make my point. The practice, however, works in the real world. We have little difficulty reserving moral judgment where John accidentally trips and injures Jane, so long as John was being reasonably careful and didn’t intend to hurt anyone. We currently (though not frequently) send murderers who are clinically insane to treatment facilities rather than jail when they kill someone while insane. We accept that diabetics require insulin because their chemical reactions are deficient, not because they are morally deficient. Why should our brains be treated any differently?
My questions are not condemnations of Gazzaniga, or of his fascinating article. I just wonder at the implications of our judicial policy and a lingering belief that we are something other than our brains when we’ve abandoned such pretenses for less personal organs.