I haven’t talked about general breakthroughs in medicine recently, and I have a considerable backlog of cool stories on the topic, so let’s explore that a bit today. These breakthroughs aren’t necessarily transhumanist in nature, but they certainly all contribute to the general goal of better healthcare and are but a few steps away from some truly amazing technologies. So, without further ado, the Top 10 Medical Stories in My Queue:
10) The explosion of personalized medicine. The Wall Street Journal ran a piece earlier this year about the idea of a “doctor in your pocket”: cheap, quick, accurate medical screening on a device people can carry with them. Daniel Kraft has been very vocal about the explosion of this sort of technology in a limited sense, and Peter Diamandis recently announced an X-Prize for essentially the same concept: a lab on a chip that can diagnose a patient better than a group of board-certified physicians. While diagnosis itself is not necessarily transhuman, other transhumanists are clearly interested in the idea, and if successful this sort of technology ought to keep people healthier for longer. With a $10 million prize pool, there ought to be plenty of incentive to stuff Watson into a cell phone and make med students everywhere cry crocodile tears. The next step: an AI doctor in every house.
9) Likely to be included in these new personalized-medicine machines: a cancer-sniffing sensor for early detection. Gizmodo has a great article detailing what NASA’s been up to recently, including sensors that can detect airborne toxins and cancer. The really cool thing about this sensor is that it’s virtually production-ready and can be attached to a cell phone right out of the box; that is, this isn’t a theoretical device, it’s all but here. Despite all the grumping about the government that frequently goes on, two federal agencies (the Department of Homeland Security being the other) are pushing hard to get this technology into circulation. DHS, however, is mostly interested in detecting toxins and bomb-making chemicals while, for our purposes, the cancer-detecting ability is probably more interesting (although if bomb-making chemicals -are- detected near my cell phone, I sure wouldn’t mind a heads-up!). I don’t know exactly what technology this sensor uses, but it seems reasonable to include the sensor that MIT recently reported on, which detects lung cancer through a breath test at an astounding 83% accuracy. There is room for improvement, as trained dogs can detect lung cancer with 98% accuracy, but as Gizmodo quipped, at least you won’t have to carry a Labrador around in your pocket.
8) Intelligent pills. Nature ran a story about them earlier this year, and both Scientific American and Pop Bioethics picked up the story long before I got around to it. Along with the medicine, the pills will include a placebo that houses a small sensor. When the sensor interacts with stomach acid, the resulting current transmits information a very short distance, ideally to a Band-Aid-like device worn on the skin that measures heart rate, respiration, temperature and the like. This skin sensor, in turn, could wirelessly transmit health information to your doctor, allowing them to check in on your vitals without you needing to visit the office. If you frequently forget to take your pills (a “problem area” in current medicine), your doctor (or, more likely, a digital secretary) can give you a quick call to remind you. Other pills that include cameras and other sensors are also in development, though those probably wouldn’t be used outside of particular contexts.
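The pill-and-patch loop is, at bottom, a scheduling problem: compare the ingestion events the skin patch reports against the prescribed dose times, and nudge the patient when one goes missing. Here is a minimal sketch of that comparison; every name in it is my own invention for illustration, since the article doesn’t describe the actual software:

```python
from datetime import datetime, timedelta

def missed_doses(schedule, ingestion_events, grace=timedelta(hours=1)):
    """Return the scheduled dose times that have no reported ingestion
    event within the grace window (i.e., doses the patient likely missed)."""
    missed = []
    for due in schedule:
        taken = any(abs(event - due) <= grace for event in ingestion_events)
        if not taken:
            missed.append(due)
    return missed
```

A digital secretary would run something like this against the patch’s event feed and place a reminder call for each dose that comes back missed.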
7) Wirelessly controlled ‘pills’. Science Daily posted an article about a month ago claiming that MicroCHIPS Inc. successfully tested a device that, once implanted into a patient, can deliver medication when it receives a wireless signal to dispense the drug. The device delivered amounts of the drug comparable to an injection (without needles, which I’m a fan of). Like the pills that alert the doctor when they’ve been taken, these devices help ensure patient compliance with the doctor’s prescribed drug regimen. Unlike those pills, these devices can automatically deliver the proper drug dose, requiring the patient to neither swallow pills nor suffer injections. Although the initial device carried only 20 doses, the final product ought to hold hundreds. Further clinical trials will follow.
6) Continuing the pill trend, scientists are working on a special type of aspirin that doesn’t cause ulcers. Oh yeah, and it seems to fight cancer, too. In mice, at least, the drug fights colon, lung, breast, prostate, pancreatic and blood cancers. Although the drug seems years away from human clinical trials, the lack of side effects and the bolstered cancer-fighting properties show a lot of promise for the future.
5) Rounding out the pill discussion, researchers at Oxford University have discovered something curious about common beta-blockers, normally used to treat heart disease: they seem to also lessen racist associations. Although the sample size was small (18 white college students), the results occurred at a “statistically significant” rate as compared to those who received the placebo. The current theory is that the beta-blockers affect the portion of the central nervous system that regulates fear and emotional responses. If the study is repeatable, interesting ethical questions arise about medicating away racist beliefs.
4) Chips for humans. MSNBC reports that the FDA recently approved an RFID chip for humans. Although similar technology has been used for years to help reunite lost pets and their owners, human trials proceed rather a lot more slowly. There are also some security issues to work out; RFID isn’t the most secure technology in the world (though it does take someone with some know-how to get to the data, and they need to be in fairly close proximity). Ultimately, I imagine these sorts of chips will carry a lot more than patient information; credit card info, for instance, has been tried elsewhere in the world and offers a unique way to pay. Either the cost will have to come down from its $150-$200 price point, or the functionality will have to drastically increase to justify the expense.
3) Someday we might be able to replace limbs with bionic equivalents that don’t break; in the meantime, we’re stuck with the bones we have. Well, that and some new fracture putty, if the Department of Defense has its way (those pesky government types again!). The DoD has commissioned research that is already beginning to bear fruit: the fractured bones of rats healed enough within two weeks that they were up and running around again. Next step: pigs and sheep (sheep bones have already been restored within a month). Following those trials, the putty may be farmed out to university veterinary clinics and, if that goes well, to human trials afterwards. It’s not quite the nutrient bath from the movie Wanted, but it’s better than a cast for months.
2) While your bones are being healed by putty, the doctors might as well patch up your internal organs with some new gummy adhesive. Researchers from UCSD have developed a self-healing gummy (meaning it repairs itself when torn or broken) that becomes super-adhesive when it comes into contact with acid; acid like, say, the kind that resides within the stomach. Perhaps the gummy will prove useful for organs other than the stomach, but it already seems well suited to treating stomach wounds.
1) Finally, for the poor sap that’s -really- undergone some punishment, there’s one last piece of tech that’ll help put Humpty Dumpty back together again. San Diego start-up Organovo is printing muscle, layer by layer, and then placing it into a mold that allows it to grow into tissue just like that which comes out of a human being. So far, Organovo has printed cardiac muscle, blood vessels, and lung tissue. Right now the company is focusing on creating tissue identical to natural human tissue so that scientists can experiment on the tissue without having to experiment on attached humans, but it hopes to use the same technology to print whole organs for transplant in the future. Between these last three technologies, then, even the worst accident victim ought to be in pretty good shape in the near(ish) future.
Thanks for sticking with me through the hiatus; the next post ought to come more quickly.
It’s been a busy few days in health technology news.
First, CTV (via Fight Aging!) reports that Canadian researchers have discovered stem cells within the eyes of adults that can be used to help cure age-related macular degeneration (AMD), the leading cause of vision loss in people over 60. Apparently these cells form within the eye during the embryonic stage and remain dormant (sometimes up to 100 years) in our eyes. By removing the cells and growing them in a culture, scientists can (in theory) restore vision by replacing dysfunctional cells. Further, these stem cells seem to be pluripotent, meaning that scientists can turn them into other types of cells and thus into treatments for other diseases. Here’s a quote from the article:
“In culture dishes in the lab, the researchers were able to coax about 10 per cent of the RPE-derived stem cells to grow in the lab. Further prodding caused the cells to differentiate into, or give rise to, a variety of cell types — those that make bone, fat or cartilage.
Temple said her team also generated a progenitor cell that carries some characteristics of one type of nervous system cell, although it was not fully differentiated.
‘But the fact that we could make these cells that were part-way, that were immature, indicates to us that if we keep on manipulating them, going forward in the future, we should be able to find ways to create other types of central nervous system cells,’ she said.
One goal would be to produce neurons, the electrical-signalling cells in the brain and other parts of the central nervous system. That would mark a major step towards the holy grail of regenerative medicine: the ability to repair spinal cord injuries and brain damage caused by such diseases as Alzheimer’s or Parkinson’s.
‘And a really important cell type that we’d love to see if we can make would be the retinal cells, the neural retinal cells like the photoreceptors that are in the eye,” said Temple. “So if we could help make new photoreceptors as well as the RPE — which we’ve already shown we can make — then we would be making two really valuable cell types for age-related macular degeneration.'”
Second, USA Today (via Transhumanic) reports that yet another artificial organ, this time the pancreas, has entered clinical trials. Unfortunately, this organ isn’t exactly like the rest of your organs; it’s a small machine worn outside the body rather than being implanted inside the body where the old pancreas used to go. Nevertheless, it’s seemingly effective at monitoring glucose levels in the blood and calculating how much insulin needs to be injected to bring the blood levels back to normal, and then it injects that amount of insulin. Approval for the device is expected in the next three to five years.
Speaking of clinical trials, however, all is not rosy in the world of academic publishing. Discover Magazine reports on a study showing that 30 months after clinical trials had been completed, better than half had not been published. After more than four years, one-third of the results from clinical trials remained unpublished. This is problematic for two reasons. First, publishing is a condition of receiving a grant from the National Institutes of Health (NIH); thus, better than half of funded groups breach their funding agreement. Second, and perhaps more importantly, by not publishing their results, these scientists deprive the rest of the scientific community of valuable information; information that the authors of this study argue could change the conclusions researchers draw from the published record.
“’Overall, addition of unpublished FDA trial data caused 46% (19/41) of the summary estimates from the meta-analyses to show lower efficacy of the drug, 7% (3/41) to show identical efficacy, and 46% (19/41) to show greater efficacy.’ That means that when scientists try to study those FDA-approved drugs, they may not realize that they work less well than published papers indicate (or better, as the case may be).”
This is a trend that needs to stop, especially given how quickly medical technology advances from year to year; up-to-date results are a must.
Going back to diabetes for a moment, a new study reported by EurekAlert! shows that poor maternal diet can increase the odds of diabetes in the child. Scientists from Cambridge and Leicester have linked poor maternal diet during pregnancy to the fetus’s inability to correctly manage fat cells later in life. “Storing fats in the right areas of the body is important because otherwise they can accumulate in places like the liver and muscle where they are more likely to lead to disease.” The pregnant rats in the study were fed low-protein diets, which left their offspring unable to process fat correctly and increased their chances of developing type-2 diabetes. This deficiency made the young rats look slimmer (because they stored less fat) but nevertheless more likely to develop diabetes. Similar results were shown in humans with low birth weights.
In a world of increasing medical apps and patient-driven medical data, technologyreview.com reports on the thoughts of cardiologist Eric Topol, who seems to agree with SingularityU chair Daniel Kraft that this flood of data will revolutionize medicine. The article indicates, however, that there is reason to question whether all this additional data is really helpful. In no case does the additional information seem to have hurt (that is, patients did not receive worse care for the abundance of information), but neither did outcomes always improve. What the article does not seem to question is that quite soon there will be a deluge of additional patient information available, first through cell phone apps and the federally funded switch to electronic patient records, and later through more advanced sensors like nanobots swimming around in the bloodstream. For my money, if the patient data isn’t helping to improve patient care, then the data is not being used correctly. Certainly no doctor can keep track of hundreds or thousands of patients whose information is updated daily or even weekly, but a hospital computer running correctly coded software (or perhaps even a Watson-style supercomputer) easily could, and could then alert the doctor to only the most important cases.
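To make that concrete, here’s a minimal sketch of the kind of triage loop I have in mind: score each patient’s latest vitals against normal ranges and surface only the worst few to the doctor. Everything here (the field names, the ranges, the scoring) is my own invention for illustration, not anything from the article:

```python
def triage(patients, limits, top_n=3):
    """Rank patients by how far their vitals stray outside normal ranges,
    returning up to top_n patient ids that actually need attention."""
    def score(vitals):
        total = 0.0
        for name, value in vitals.items():
            low, high = limits[name]
            if value < low:
                total += (low - value) / (high - low)   # distance below range
            elif value > high:
                total += (value - high) / (high - low)  # distance above range
        return total

    ranked = sorted(patients, key=lambda p: score(p["vitals"]), reverse=True)
    return [p["id"] for p in ranked[:top_n] if score(p["vitals"]) > 0]
```

Run daily over every patient’s feed, something like this would keep the doctor’s inbox to a handful of genuinely abnormal cases instead of thousands of routine readings.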
Finally, my law school pal Micah linked me to an article from the BBC reporting on the first chimera monkeys: monkeys created from several different embryos. Essentially, the scientists took cells from up to six different embryos, mixed them together into three monkey embryos, and out came apparently healthy monkeys Chimero, Hex, and Roku. The study also found (somewhat unsurprisingly) that stem cells didn’t work the same way in mice as they did in primates, which suggests that the reverse-engineering we’re doing to revert normal cells to a pluripotent state might not be as effective in humans as it is in mice. That is, there may still be a need for embryonic stem cells. Micah asked whether this experiment might have an impact on our notions of family, in addition to our ideas about personhood.
For a couple of reasons, I think this experiment in particular probably won’t. The only thing different about these monkeys and any other monkeys of the same type is that these were artificially created and carry a mixture of several strands of DNA. On one hand, that probably means there is no clear mother or father; when the DNA of six monkeys is mixed together, who’s the biological parent? On the other hand, a monkey (or a human, for that matter) who receives a transplanted organ now has DNA from at least three different sources (both biological parents, plus the donor), and arguably four (if you count the two parental DNA strands that make up the donor’s DNA). With more transplants comes more DNA; it’s not inconceivable that a human could have a kidney from one donor, a lung from another, and a heart from yet a third, making at least five distinct DNA strands within the same human. Also, in the sense that ‘chimera’ just means composed of different DNA strands, anyone who already has a transplant is a chimera. So for that reason, I don’t think that a human created this way (as unlikely as that is, given human-experimentation laws) would be any less of a person than a more traditionally created human.
But speaking of created humans: through various fertility treatments, including even surrogate mothers (or fathers) and whatnot, our notions of family are becoming less tied to the make-up of our DNA. Even simple adoption shows that a family unit can include members with different DNA without trouble. So the fact that these monkeys are made up of several DNA strands probably shouldn’t start affecting our ideas about family, though in humans it could lead to some hilarious Maury Povich episodes. Also, the fact that a human is created through artificial means hasn’t yet stopped them from being a person in the traditional sense, and so I don’t think it would have any effect on monkeys (though they’re not legally persons, and this is unlikely to change that).
Something that might make us reconsider our notions of personhood and family is a chimera made of different species; part-monkey, part-reptile combinations, for example. There, a whole new species is being created, and the being becomes further removed from its parents. Because family is more of a social construct now than a DNA-matched set (consider how many people seriously consider their dog / cat / goldfish to be part of their family), even this radical form of chimera might not shake our notions of family. But personhood … that’s something I’ll have to think more about.
Stay tuned for some news about robotics tomorrow; I wanted to make separate posts to keep this one from becoming even more unwieldy than it already is.
Only five days in to 2012, and mind-blowing articles are already dropping.
According to reports from Physorg.com and others, Cornell researchers (funded by the Pentagon) have created a device that splits beams of light, hiding an event from sight. They’re calling it a time cloak. For around 40 picoseconds (trillionths of a second), the scientists are able to create a gap in the light by using a time-lens to split it into slower red and faster blue components; anything occurring in the gap is invisible. In theory scientists could make the device effective for a few millionths of a second, or perhaps even a few thousandths, but a device large enough to erase a whole second would need to be approximately 18,600 miles long. Even for someone like me who envisions mechanical implants for humans and perhaps even brain uploading into a computer, this article is fantastic. I’d love to see some confirmations of this technology and a better explanation of how, exactly, it works. Still, it seems it won’t be a very effective Ring of Gyges anytime soon, if at all.
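The reported numbers invite a quick back-of-envelope check. If device length scales roughly linearly with the duration of the hidden gap (my own simplifying assumption; the actual optics are surely subtler), the 18,600-miles-per-second figure implies the 40-picosecond demo needs only a tabletop-scale device:

```python
MILES_PER_CLOAKED_SECOND = 18_600  # reported: ~18,600 mi of device to hide one second

def device_length_miles(gap_seconds):
    """Naive linear extrapolation of device length from the reported figure."""
    return MILES_PER_CLOAKED_SECOND * gap_seconds

demo_length = device_length_miles(40e-12)  # the 40-picosecond demonstration
```

On this naive scaling, the 40-picosecond demo comes out to on the order of a millimeter of ‘device’, which squares with the experiment fitting on an optical bench.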
Researchers in Japan, meanwhile, have created super sensitive sensors out of carbon nanotubes. The sensor is flexible enough to be woven into clothing, and can be stretched to three times its normal size. In addition to rehabilitation uses, this sort of sensor seems great for the blossoming world of controllerless video game systems like the Xbox Kinect. Such sensors are also implantable into people receiving organs (biological or otherwise) or could just be used to record biometrics in your everyday clothing.
Finally, Klaus Stadlmann gives a TED Talk about inventing the world’s smallest 3-D printer. It seems to be about the size of a Playstation 2, and can print in incredible detail. I thought the talk was a little dry, but still interesting.
There have been several interesting brain articles in the last few days. Forbes ticks down their top-10 brain articles from 2011, including memory-assisting chips, using magnetism to affect moral judgments, potential treatments for people suffering from Alzheimer’s disease, and thought-controlled apps for your cell phone. Although the brain is still largely mysterious, scientists are making massive amounts of progress on all fronts yearly.
Discover Magazine reports that anesthesia might be the key to better understanding how consciousness works. Apparently it’s not unusual for patients under anesthesia to wake up, then go back under and never remember that they woke up. I’ve talked a bit about the problem of recognizing consciousness before (one essentially has to rely on reports of consciousness, but consciousness itself cannot be directly tested for) and this article does a good job of reiterating the problem. The researchers hope that by putting people under and eliciting subjective reports of consciousness after the fact, they will be better able to pin down just what it is that makes a person conscious.
Medicalxpress.com posted an article in December asking Why Aren’t We Smarter Already? The authors suggest that there is an upper limit to various brain functions, and that while drugs and other interventions could potentially bring low-scoring individuals up, those already at or near peak performance would see little or no gain from the same drugs. If this is right, then there is reason to doubt that mind-enhancing drugs (say, Adderall) could make the smartest people even smarter. Yet the article only talks about improving the mind we have, and not about whether it is possible to create an artificial brain (or introduce artificial implants into a biological brain) that -could- break past these natural barriers. It’s no secret that the body is well, but not optimally, designed; that the same is true of the brain shouldn’t really be surprising.
TechCrunch offers a predictive list of technologies coming in 2012 in an article penned by tech luminary and SingularityU professor Daniel Kraft. According to Daniel, A.I. will become increasingly helpful in diagnosing diseases, from cheap phone apps that detect cancer with their cameras to A.I.-assisted diagnoses in remote villages. 3-D printing will continue to advance, massive amounts of patient data will be shared on social network sites like patientslikeme.com, and videoconferencing technology like Skype will increasingly allow doctors to examine patients without an office visit. All good things.
Last, but not least, a team of scientists at USC has recently mapped an entire human genome in 3-D. They hope to be able to evaluate genomes not just on their genetic make-up but also on their physical structure. Because the genome folds up in three dimensions inside the cell, a 3-D map should be a lot more accurate than the standard linear model.