Biotic Hands and Programmable Brains
Every so often I come across an article that really illustrates how near the future is. This week, I found two of them.
Free Will and Legal Culpability
In the April 2011 issue of Scientific American (subscription required), Michael Gazzaniga surveys emerging neuroscience research and analyzes the legal issues those breakthroughs might raise. Much of the article discusses the legal view of new technologies like fMRI (functional magnetic resonance imaging) as complicated polygraphs. Most judges currently exclude fMRI evidence offered to show (for instance) that a witness is lying or that an accused murderer has a brain defect likely to cause insanity. The reason is that reliable predictions of behavior from fMRI currently depend on samples of scans of other brains, and researchers can describe a given scan only in terms of what similar scans have tended to mean. Put another way, X-brain-activity probably means the witness is lying, based on the brain activity of other people who have been scanned, but the presence of X-brain-activity does not conclusively prove that the witness is lying (and at best it could show only that the witness believes she is lying, or intends to tell a lie, not that the proposition she utters is, in fact and apart from her belief, false). Correlation, as my undergraduate professors liked to reiterate, is not causation.
That said, even in criminal trials the law does not require absolute certainty. Thank goodness for that, because skeptical philosophers continue to point out that about the only thing we can be certain of is that “I exist” (for whichever I happens to be thinking the proposition at the time). Thanks, Descartes. We’ve decided, as a society, that punishment is proper where some defendant likely committed some wrong (or really, really likely committed some wrong, in the criminal context), and we accept as a result that some actually innocent people will be punished when we think it likely that they’re guilty. Given that we don’t require absolute certainty, should brain scans be excluded just because they, too, are not 100% accurate? If the accuracy of a brain scan is tied to the size of the sample its results are compared against, shouldn’t we gather larger samples rather than wait for the technology to improve? If a researcher could testify that, against a sample of 10,000 subjects, the scan reliably detected lies in 85% of cases, it seems (to my underdeveloped understanding of burdens of proof) that the civil “preponderance of the evidence” standard is certainly met, and that the criminal “reasonable doubt” standard is also likely met. If that’s right, then on what basis are these scans being excluded?
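To make the arithmetic behind that hunch concrete, here is a back-of-the-envelope sketch of my own (nothing like it appears in Gazzaniga’s article). Whether an “85% reliable” scan makes a lie more likely than not depends on how probable the lie seemed before the scan. The 15% false-positive rate and the prior probabilities below are assumptions chosen purely for illustration:

# Hypothetical illustration, not from the article: Bayes' rule applied
# to a lie-detecting scan. The 85% hit rate, the 15% false-positive
# rate, and the priors are all assumed numbers.

def posterior_lying(prior, sensitivity, false_positive_rate):
    """P(lying | scan flags a lie), by Bayes' rule."""
    p_flag = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_flag

# Suppose the scan catches 85% of lies but also flags 15% of truthful
# statements, and vary how likely we thought the lie was beforehand.
for prior in (0.10, 0.30, 0.50):
    p = posterior_lying(prior, sensitivity=0.85, false_positive_rate=0.15)
    print(f"prior P(lying) = {prior:.0%} -> posterior = {p:.0%}")

Under those toy numbers the scan yields posteriors of 39%, 71%, and 85%: it clears the 50% preponderance line only when the lie already seemed reasonably probable going in, which may be part of the answer to my own question.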
Gazzaniga also asks, “Would it erode notions of free will and personal responsibility more broadly if all antisocial decisions could seemingly be attributed to some kind of neurological deviation?” As a follow-up, he states: “People, not brains, commit crimes.” What could that second claim possibly mean, given the current scientific understanding of how the universe works? And, on that understanding, isn’t the answer to the first question likely to be yes anyway? To vastly oversimplify a raging philosophical debate, most philosophers and (presumably) scientists subscribe to a roughly physicalist view of the universe. That is: science tells us that everything is made of physical stuff (quantum mechanics aside) and that everything else is a relation between bits of that physical stuff. If we accept the premise that the universe is made of all and only physical stuff, then “we” can be little more than our brains. Our brains, made of physical stuff as they are, are themselves governed by complicated chemical interactions that present to each of us, individually, as personality and consciousness, and to the scientists studying them as brainwaves and chemical activity. Put more simply, we just are our brains, and our brains just are a long chain of chemical reactions. If a deviant chemical reaction causes (what we consider to be) aberrant behavior, then doesn’t it make more sense to fix the chemical reaction than to assign blameworthiness based on outdated information?
I oversimplify, of course, and gloss over interesting ethical questions to make my point. The practice, however, works in the real world. We have little difficulty reserving moral judgment when John accidentally trips and injures Jane, so long as John was being reasonably careful and didn’t intend to hurt anyone. We already (though not frequently) send murderers who are clinically insane to treatment facilities rather than jail when they kill while insane. We accept that diabetics require insulin because their chemical reactions are deficient, not because they are morally deficient. Why should our brains be treated any differently?
My questions are not condemnations of Gazzaniga or of his fascinating article. I just wonder at what our judicial policy implies, and at the lingering belief that we are something other than our brains when we’ve abandoned such pretenses for less personal organs.