
Posts Tagged ‘singularityu’

Weekend Edition: Healthcare News

January 7, 2012

It’s been a busy few days in health technology news.

First, CTV (via Fight Aging!) reports that Canadian researchers have discovered stem cells within the eyes of adults that could be used to help cure age-related macular degeneration (AMD), the leading cause of vision loss in people over 60. Apparently these cells form within the eye during the embryonic stage and remain dormant (sometimes for up to 100 years). By removing the cells and growing them in culture, scientists can (in theory) restore vision by replacing dysfunctional cells. Further, these stem cells seem to be pluripotent, meaning that scientists can turn them into other types of cells and thus into treatments for other diseases. Here’s a quote from the article:

“In culture dishes in the lab, the researchers were able to coax about 10 per cent of the RPE-derived stem cells to grow in the lab. Further prodding caused the cells to differentiate into, or give rise to, a variety of cell types — those that make bone, fat or cartilage.

Temple said her team also generated a progenitor cell that carries some characteristics of one type of nervous system cell, although it was not fully differentiated.

‘But the fact that we could make these cells that were part-way, that were immature, indicates to us that if we keep on manipulating them, going forward in the future, we should be able to find ways to create other types of central nervous system cells,’ she said.

One goal would be to produce neurons, the electrical-signalling cells in the brain and other parts of the central nervous system. That would mark a major step towards the holy grail of regenerative medicine: the ability to repair spinal cord injuries and brain damage caused by such diseases as Alzheimer’s or Parkinson’s.

“‘And a really important cell type that we’d love to see if we can make would be the retinal cells, the neural retinal cells like the photoreceptors that are in the eye,’ said Temple. ‘So if we could help make new photoreceptors as well as the RPE — which we’ve already shown we can make — then we would be making two really valuable cell types for age-related macular degeneration.’”

Second, USA Today (via Transhumanic) reports that yet another artificial organ, this time the pancreas, has entered clinical trials. Unlike your other organs, though, this one is a small machine worn outside the body rather than implanted where the old pancreas used to be. It is nevertheless effective: it monitors glucose levels in the blood, calculates how much insulin is needed to bring those levels back to normal, and then injects that amount of insulin. Approval for the device is expected in the next three to five years.
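The closed-loop idea behind the device is simple enough to sketch in a few lines: read glucose, compute a dose, deliver it. To be clear, this is a toy illustration only; the target level and correction factor below are invented, and a real artificial pancreas uses far more sophisticated control algorithms and safety checks.

```python
# Toy sketch of the closed-loop ("artificial pancreas") idea:
# read glucose, compute a correction dose, deliver it.
# TARGET and CORRECTION_FACTOR are hypothetical illustration values.

TARGET_MG_DL = 110        # hypothetical target blood glucose (mg/dL)
CORRECTION_FACTOR = 50    # hypothetical: 1 unit of insulin lowers glucose ~50 mg/dL

def insulin_dose(glucose_mg_dl: float) -> float:
    """Return a correction dose in units; zero if at or below target."""
    excess = glucose_mg_dl - TARGET_MG_DL
    if excess <= 0:
        return 0.0
    return round(excess / CORRECTION_FACTOR, 1)

print(insulin_dose(180))  # 1.4 units to correct a high reading
print(insulin_dose(100))  # 0.0 -- no insulin when at or below target
```

The actual trial device runs a loop like this continuously, which is exactly what makes it a functional (if external) replacement for the organ.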

Speaking of clinical trials, however, all is not rosy in the world of academic publishing. Discover Magazine reports on a study showing that 30 months after clinical trials had been completed, more than half remained unpublished; even after more than four years, one-third of the results were still unpublished. This is problematic for two reasons. First, publishing is a condition of receiving a grant from the National Institutes of Health (NIH), so more than half of funded groups are breaching their funding agreements. Second, and perhaps more importantly, by not publishing their results these scientists deprive the rest of the scientific community of valuable information: information that, the study’s authors argue, could change the conclusions researchers draw from the published work alone.

“’Overall, addition of unpublished FDA trial data caused 46% (19/41) of the summary estimates from the meta-analyses to show lower efficacy of the drug, 7% (3/41) to show identical efficacy, and 46% (19/41) to show greater efficacy.’ That means that when scientists try to study those FDA-approved drugs, they may not realize that they work less well than published papers indicate (or better, as the case may be).”
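To see mechanically how missing trials can skew a summary estimate, here is a toy version of a fixed-effect meta-analysis: the pooled estimate is just an inverse-variance weighted average of each trial’s effect size, so adding an unpublished trial with a weak effect drags the summary downward. Every number below is invented for illustration.

```python
# Toy fixed-effect meta-analysis. The pooled estimate is an
# inverse-variance weighted average of per-trial effect sizes.
# All effect sizes and variances are invented for illustration.

def pooled_effect(trials):
    """trials: list of (effect_size, variance) pairs."""
    weights = [1.0 / var for _, var in trials]
    total = sum(w * eff for w, (eff, _) in zip(weights, trials))
    return total / sum(weights)

published = [(0.50, 0.04), (0.45, 0.09)]   # two published trials
unpublished = [(0.10, 0.05)]               # one hypothetical unpublished, near-null trial

print(round(pooled_effect(published), 3))                # 0.485
print(round(pooled_effect(published + unpublished), 3))  # 0.348
```

The drug looks markedly less effective once the unpublished trial is counted, which is precisely the pattern the Discover study found in 46% of the meta-analyses it re-ran.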

This is a trend that needs to stop; given the exponential pace of technological advancement, up-to-date results are a must.

Going back to diabetes for a moment, a new study reported by EurekAlert! shows that a poor maternal diet can increase the odds of diabetes in the child. Scientists from Cambridge and Leicester have linked poor maternal diet during pregnancy to the offspring’s inability to correctly manage fat cells later in life. “Storing fats in the right areas of the body is important because otherwise they can accumulate in places like the liver and muscle where they are more likely to lead to disease.” The pregnant rats in the study were fed low-protein diets, and their pups were later unable to store fat correctly. The pups looked slimmer as a result (because they stored less fat) but were nevertheless more likely to develop type-2 diabetes. Similar results were shown in humans with low birth weights.

In a world of increasing medical apps and patient-driven medical data, technologyreview.com reports on the thoughts of cardiologist Eric Topol, who seems to agree with SingularityU chair Daniel Kraft that this growing body of data will revolutionize medicine. The article notes, however, that there is reason to question whether all this additional data is really helpful. In no case did the additional information seem to hurt (that is, patients did not receive worse care because of it), but neither did outcomes always improve. What the article does not question is that quite soon there will be a deluge of additional patient information available, first through cell phone apps and the federally funded switch to electronic patient records, and later through more advanced sensors like nanobots swimming around in the bloodstream. For my money, if the patient data isn’t improving patient care, it’s because the data is not being used correctly. Certainly no doctor can keep track of hundreds or thousands of patients whose information is updated daily or even weekly, but a hospital computer running correctly coded software (or perhaps even a Watson-style supercomputer) easily could, and it could then alert the doctor to only the most important cases.
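The kind of hospital-side triage software I have in mind could be as simple as scoring each patient’s latest readings and surfacing only the outliers to a doctor. Everything here (field names, vitals chosen, thresholds) is hypothetical, purely to illustrate the idea:

```python
# Sketch of automated patient triage: score incoming readings and
# surface only the most urgent cases. Thresholds are invented.

from dataclasses import dataclass

@dataclass
class Reading:
    patient_id: str
    heart_rate: int    # beats per minute
    systolic_bp: int   # mmHg

def urgency(r: Reading) -> int:
    """Crude urgency score: count of out-of-range vitals."""
    score = 0
    if r.heart_rate > 120 or r.heart_rate < 45:
        score += 1
    if r.systolic_bp > 180 or r.systolic_bp < 90:
        score += 1
    return score

def top_alerts(readings, limit=2):
    """Return the most urgent readings, ignoring unremarkable ones."""
    flagged = [r for r in readings if urgency(r) > 0]
    return sorted(flagged, key=urgency, reverse=True)[:limit]

readings = [
    Reading("A", 72, 120),    # normal -- never shown to the doctor
    Reading("B", 130, 85),    # two abnormal vitals
    Reading("C", 125, 140),   # one abnormal vital
]
for r in top_alerts(readings):
    print(r.patient_id)       # prints B, then C
```

A real system would obviously need clinically validated scoring, but the point stands: software can watch every patient every day, and the doctor only sees the cases that matter.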

Finally, my law school pal Micah linked me to an article from the BBC reporting on the first chimera monkeys: monkeys created from several different embryos. Essentially, the scientists took DNA from up to six different embryos, mixed them together into three monkey embryos, and out came the apparently healthy monkeys Chimero, Hex, and Roku. The study also found (somewhat unsurprisingly) that stem cells didn’t work the same way in primates as they did in mice, which suggests that the reverse engineering we’re doing to revert normal cells to a pluripotent stage might not be as effective in humans as it is in mice. That is, there may still be a need for embryonic stem cells. Micah asked whether this experiment might have an impact on our notions of family, in addition to our ideas about personhood.

For a couple of reasons, I think this experiment in particular probably won’t. The only difference between these monkeys and any other monkeys of the same type is that these were artificially created and carry a mixture of several strands of DNA. On one hand, that probably means there is no clear mother or father; when the DNA of six monkeys is mixed together, who’s the biological parent? On the other hand, a monkey (or a human, for that matter) who receives a transplanted organ already carries DNA from at least three different people (both biological parents plus the donor), and arguably four sources if you count the two DNA strands that make up the donor’s own DNA. With more transplants comes more DNA; it’s not inconceivable that a human could have a kidney from one donor, a lung from another, and a heart from yet a third, making at least five distinct DNA strands within the same human. And in the sense that ‘chimera’ just means composed of different DNA strands, anyone who already has a transplant is a chimera. So for that reason, I don’t think a human created this way (as unlikely as that is, given human-experimentation laws) would be any less of a person than a more traditionally created human.

But speaking of created humans: through various fertility treatments, surrogate mothers (or fathers), and the like, our notions of family are becoming less tied to the makeup of our DNA. Even simple adoption shows that a family unit can include members with different DNA without trouble. So the fact that these monkeys are made up of several DNA strands probably shouldn’t start affecting our ideas about family, though in humans it could lead to some hilarious Maury Povich episodes. Likewise, the fact that a human was created through artificial means hasn’t yet stopped anyone from being a person in the traditional sense, so I don’t think it would have any effect on monkeys (though they’re not legally persons, and this is unlikely to change that).

Something that might make us reconsider our notions of personhood and family is a chimera made of different species: part-monkey, part-reptile combinations, for example. There, a whole new species is being created, and the being becomes further removed from its parents. Even so, because family is now more of a social construct than a DNA-matched set (consider how many people seriously consider their dog, cat, or goldfish part of their family), even this radical form of chimera might not shake our notions of family. But personhood … that’s something I’ll have to think more about.

Stay tuned for some news about robotics tomorrow; I wanted to make separate posts to keep this one from becoming even more unwieldy than it already is.


Gadgets, Brains, and Healthcare

January 5, 2012

Only five days into 2012, and mind-blowing articles are already dropping.

Physorg.com and others report that Pentagon-funded Cornell scientists have created a device that splits beams of light, hiding an event from sight; they’re calling it a time cloak. For around 40 picoseconds (trillionths of a second), the scientists create a gap in the light by using a time-lens to split it into slower red and faster blue components, making anything occurring in the gap invisible. In theory the device could be made effective for a few millionths of a second, or perhaps even a few thousandths, but a device large enough to erase a whole second would need to be approximately 18,600 mi long. Even for someone like me, who envisions mechanical implants for humans and perhaps even uploading brains into computers, this article is fantastic. I’d love to see some confirmation of this technology and a better explanation of how, exactly, it works. Still, it seems it won’t be a very effective Ring of Gyges anytime soon, if at all.

Researchers in Japan, meanwhile, have created super-sensitive sensors out of carbon nanotubes. The sensor is flexible enough to be woven into clothing and can be stretched to three times its normal size. In addition to rehabilitation uses, this sort of sensor seems great for the blossoming world of controllerless video game systems like the Xbox Kinect. Such sensors could also be implanted in people receiving organs (biological or otherwise), or simply used to record biometrics through your everyday clothing.

Finally, Klaus Stadlmann gives a TED Talk about inventing the world’s smallest 3-D printer. It seems to be about the size of a Playstation 2, and can print in incredible detail. I thought the talk was a little dry, but still interesting.

There have been several interesting brain articles in the last few days. Forbes ticks down its top-10 brain articles from 2011, including memory-assisting chips, using magnetism to affect moral judgments, potential treatments for people suffering from Alzheimer’s disease, and thought-controlled apps for your cell phone. Although the brain is still largely mysterious, scientists are making massive progress on all fronts every year.

Discover Magazine reports that anesthesia might be the key to better understanding how consciousness works. Apparently it’s not unusual for patients under anesthesia to wake up, then go back under and never remember that they woke up. I’ve talked a bit about the problem of recognizing consciousness before (one essentially has to rely on reports of consciousness, but consciousness itself cannot be directly tested for) and this article does a good job of reiterating the problem. The researchers hope that by putting people under and eliciting subjective reports of consciousness after the fact, they will be better able to pin down just what it is that makes a person conscious.

Medicalxpress.com posted an article in December asking Why Aren’t We Smarter Already? The authors suggest that there is an upper limit to various brain functions, and that while drugs and other interventions could potentially bring low-scoring individuals up, those already at or near peak performance would see little or no gain from the same drugs. If this is right, then there is reason to doubt that mind-enhancing drugs (say, Adderall) could make the smartest people even smarter. Yet the article only talks about improving the mind we have, not about whether it is possible to create an artificial brain (or introduce artificial implants into a biological brain) that could break past these natural barriers. It’s no secret that the body is well, but not optimally, designed; that the same is true of the brain shouldn’t really be surprising.

TechCrunch offers a predictive list of technologies coming in 2012, penned by tech luminary and SingularityU professor Daniel Kraft. According to Daniel, A.I. will become increasingly helpful in diagnosing disease, from cheap phone apps that detect cancer with their cameras to A.I.-assisted diagnoses in remote villages. 3-D printing will continue to advance, massive amounts of patient data will be shared on social networking sites like patientslikeme.com, and videoconferencing technology like Skype will increasingly allow doctors to examine patients without an office visit. All good things.

Last, but not least, a team of scientists at USC has recently mapped an entire human genome in 3-D. They hope to evaluate genomes not just by their genetic makeup but also by their physical structure. Because genomes occupy three dimensions in the body, a 3-D map should be much more accurate than the standard model.