Weekend Edition: Healthcare News

January 7, 2012

It’s been a busy few days in health technology news.

First, CTV (via Fight Aging!) reports that Canadian researchers have discovered stem cells within the eyes of adults that could be used to help treat age-related macular degeneration (AMD), the leading cause of vision loss in people over 60. These cells apparently form during the embryonic stage and can remain dormant in our eyes, sometimes for up to 100 years. By removing the cells and growing them in culture, scientists can (in theory) restore vision by replacing dysfunctional cells. Further, these stem cells seem to be pluripotent, meaning scientists can turn them into other types of cells and, potentially, into treatments for other diseases. Here’s a quote from the article:

“In culture dishes in the lab, the researchers were able to coax about 10 per cent of the RPE-derived stem cells to grow in the lab. Further prodding caused the cells to differentiate into, or give rise to, a variety of cell types — those that make bone, fat or cartilage.

Temple said her team also generated a progenitor cell that carries some characteristics of one type of nervous system cell, although it was not fully differentiated.

‘But the fact that we could make these cells that were part-way, that were immature, indicates to us that if we keep on manipulating them, going forward in the future, we should be able to find ways to create other types of central nervous system cells,’ she said.

One goal would be to produce neurons, the electrical-signalling cells in the brain and other parts of the central nervous system. That would mark a major step towards the holy grail of regenerative medicine: the ability to repair spinal cord injuries and brain damage caused by such diseases as Alzheimer’s or Parkinson’s.

‘And a really important cell type that we’d love to see if we can make would be the retinal cells, the neural retinal cells like the photoreceptors that are in the eye,” said Temple. “So if we could help make new photoreceptors as well as the RPE — which we’ve already shown we can make — then we would be making two really valuable cell types for age-related macular degeneration.'”

Second, USA Today (via Transhumanic) reports that yet another artificial organ, this time the pancreas, has entered clinical trials. Unfortunately, this organ isn’t exactly like the rest of your organs: it’s a small machine worn outside the body rather than implanted where the old pancreas used to go. Nevertheless, it seems effective: it monitors glucose levels in the blood, calculates how much insulin is needed to bring those levels back to normal, and then injects that amount. Approval for the device is expected in the next three to five years.

Speaking of clinical trials, however, all is not rosy in the world of academic publishing. Discover Magazine reports on a study showing that 30 months after clinical trials had been completed, more than half had not been published; after more than four years, one-third of the results remained unpublished. This is problematic for two reasons. First, publishing is a condition of receiving a grant from the National Institutes of Health (NIH), so more than half of funded groups are breaching their funding agreements. Second, and perhaps more importantly, by not publishing their results these scientists deprive the rest of the scientific community of valuable information, information that the study’s authors argue could change the conclusions researchers draw from the published literature.

“’Overall, addition of unpublished FDA trial data caused 46% (19/41) of the summary estimates from the meta-analyses to show lower efficacy of the drug, 7% (3/41) to show identical efficacy, and 46% (19/41) to show greater efficacy.’ That means that when scientists try to study those FDA-approved drugs, they may not realize that they work less well than published papers indicate (or better, as the case may be).”

This is a trend that needs to stop, especially given how quickly technology is advancing and how much new work appears every year; up-to-date results are a must.

Going back to diabetes for a moment, a new study reported by EurekAlert! shows that poor maternal diet can increase the odds of diabetes in the child. Scientists from Cambridge and Leicester have linked poor maternal diet during pregnancy to the offspring’s inability to correctly manage fat cells later in life. “Storing fats in the right areas of the body is important because otherwise they can accumulate in places like the liver and muscle where they are more likely to lead to disease.” The pregnant rats in the study were fed low-protein diets, and their offspring were later unable to process fat correctly: they looked slimmer (because they stored less fat) but were nevertheless more likely to develop type 2 diabetes. Similar results were seen in humans with low birth weights.

In a world of increasing medical apps and patient-driven medical data, technologyreview.com reports on the thoughts of cardiologist Eric Topol, who seems to agree with SingularityU chair Daniel Kraft that this growing flood of data will revolutionize medicine. The article indicates, however, that there is reason to question whether all this additional data is really helpful. In no case does the additional information seem to have hurt (that is, patients did not receive worse care because of the abundance of information), but neither did outcomes always improve. What the article does not question, however, is that quite soon there will be a deluge of additional patient information available, first through cell phone apps and the federally funded switch to electronic patient records, and later through more advanced sensors like nanobots swimming around in the bloodstream. For my money, if the patient data isn’t helping to improve patient care, it’s because the data is not being used correctly. Certainly no doctor can keep track of hundreds or thousands of patients whose information is updated daily or even weekly, but a hospital computer running well-designed software (or perhaps even a Watson-style supercomputer) easily could, and could then alert the doctor to only the most important cases.
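
Purely as an illustration of what I mean, here is a minimal sketch of the kind of triage logic such software could run. Everything in it, the PatientReading structure, the field names, and the alert thresholds, is hypothetical and chosen only for the example; a real system would rely on clinically validated rules and far richer data.

```python
from dataclasses import dataclass

@dataclass
class PatientReading:
    patient_id: str
    systolic_bp: int      # mmHg
    fasting_glucose: int  # mg/dL

# Hypothetical alert thresholds, for illustration only;
# a real system would use clinically validated criteria.
BP_ALERT = 180
GLUCOSE_ALERT = 250

def flag_urgent(readings):
    """Return readings a doctor should see first, most severe at the top."""
    urgent = []
    for r in readings:
        severity = max(r.systolic_bp - BP_ALERT,
                       r.fasting_glucose - GLUCOSE_ALERT)
        if severity > 0:
            urgent.append((severity, r))
    return [r for _, r in sorted(urgent, key=lambda pair: pair[0], reverse=True)]

readings = [
    PatientReading("A", systolic_bp=125, fasting_glucose=95),
    PatientReading("B", systolic_bp=190, fasting_glucose=110),
    PatientReading("C", systolic_bp=130, fasting_glucose=310),
]
for r in flag_urgent(readings):
    print(f"Alert doctor: patient {r.patient_id}")
```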

Finally, my law school pal Micah linked me to an article from the BBC reporting on the first chimeric monkeys: monkeys created from several different embryos. Essentially, the scientists took cells from up to six different embryos, combined them into three monkey embryos, and out came the apparently healthy monkeys Chimero, Hex, and Roku. The study also found (somewhat unsurprisingly) that stem cells don’t behave the same way in mice as they do in primates, which suggests that the reverse engineering we’re doing to revert normal cells to a pluripotent stage might not be as effective in humans as it is in mice. That is, there may still be a need for embryonic stem cells. Micah asked whether this experiment might have an impact on our notions of family, in addition to our ideas about personhood.

For a couple of reasons, I think this experiment in particular probably won’t. The only difference between these monkeys and any other monkeys of the same type is that these were created artificially and carry a mixture of several strands of DNA. On one hand, that probably means there is no clear mother or father; when the DNA of six monkeys is mixed together, who’s the biological parent? On the other hand, a monkey (or a human, for that matter) who receives a transplanted organ now carries DNA from at least three different people (both biological parents, plus the donor), and arguably four sources if you count the two parental DNA strands that make up the donor’s own genome. With more transplants comes more DNA; it’s not inconceivable that a human could have a kidney from one donor, a lung from another, and a heart from yet a third, making at least five distinct DNA strands within the same human. Also, if ‘chimera’ just means being composed of different DNA strands, then anyone who already has a transplant is a chimera. So for that reason, I don’t think a human created this way (as unlikely as that is, given human-experimentation laws) would be any less of a person than a more traditionally created human.

But speaking of created humans: through various fertility treatments, surrogate mothers (or fathers), and the like, our notions of family are becoming less tied to the makeup of our DNA. Even simple adoption shows that a family unit can include members with different DNA without trouble. So the fact that these monkeys are made up of several DNA strands probably shouldn’t start affecting our ideas about family, though in humans it could lead to some hilarious Maury Povich episodes. Also, the fact that a human is created through artificial means hasn’t yet stopped anyone from being a person in the traditional sense, so I don’t think it would have any effect on monkeys (though they’re not legally persons, and this is unlikely to change that).

Something that might make us reconsider our notions of personhood and family is a chimera made of different species: part-monkey, part-reptile combinations, for example. There, a whole new species is being created and the being becomes further removed from its parents. But because family is now more of a social construct than a DNA-matched set (consider how many people seriously consider their dog, cat, or goldfish to be part of their family), even this radical form of chimera might not shake our notions of family. But personhood … that’s something I’ll have to think more about.

Stay tuned for some news about robotics tomorrow; I wanted to make separate posts to keep this one from becoming even more unwieldy than it already is.

Mastering DNA: The Pace Of Genetic Innovation

September 17, 2011

I’ve said before that I think mechanical augmentation will ultimately surpass biological engineering in its ability to enhance the human body. In the short term, however, we’re so much further ahead in biological engineering that I have to think biological manipulation and engineering (growing new organs, limbs, etc.) will provide the greater impact early on. It may even help mechanical augmentation catch up, perhaps by tweaking the body’s rejection response or making cells more receptive to mechanically generated electrical currents, before mechanical augmentation really begins to sweep biological manipulation out of the way.

If nothing else, people like Aubrey de Grey are working on biological engineering that could lengthen our lives to the point where superior mechanical augmentations become available. I doubt we’ll ever see 100% species-wide adoption of mechanical augmentation anyway; some segment of the population will probably always want to remain at least partially biological, and some will want to remain completely biological (some may even refuse biological engineering, just as some people reject vaccines now). Some people tie being biological to being human, which is not an absurd belief considering that the two have been correlated for pretty much the entire time our species has existed. So, for a number of good reasons, it’s important to advance biological engineering and make the most of the materials evolution has given us.

Those advances are coming quickly, perhaps as quickly as the advances in mechanical augmentation. For instance, researchers at the Wellcome Trust Sanger Institute and the University of Oxford have recently developed blueprints of mouse genetics, much like the Human Genome Project. By comparing the genetic code of 17 different strains of mice, the researchers discovered 700 differences, including some that seem to account for diseases like heart disease and diabetes. By studying these genetic differences, researchers can better understand how human genes control disease, and can thus test and offer new cures for human heart disease and diabetes, among other conditions. Although this will be a long project, researcher Dr. Thomas Keane had this to say about the rate of progress in researching genetic impact:

“In some cases it has taken 40 years – an entire working life – to pin down a gene in a mouse model that is associated with a human disease, looking for the cause. Now with our catalogue of variants the analysis of these mice is breathtakingly fast and can be completed in the time it takes to make a cup of coffee.”

With this sort of research, what used to take one scientist an entire career can now be done dozens of times (or more) per day. Even if the number of scientists stays the same, the ones we have are becoming more effective because of technology: they can better identify important genetic traits and will ultimately be able to deliver life-saving (or life-enhancing) knowledge at a quicker rate.

Richard Resnick recently spoke at TEDxBoston about the impact technology is having on biological engineering. The human genome, according to Resnick, consists of approximately 3 billion base pairs and was mapped from 1988 to 2003 at a cost of $3.8 billion (or about a buck twenty-six per base pair). Modern machines can sequence approximately 200 billion base pairs per run, and each run takes about one week. Resnick expects that these machines will soon be able to sequence about 600 billion base pairs per run.

While we can now sequence more base pairs simultaneously, we can also sequence genomes far more cheaply: about 100 million times cheaper. That means the original human genome project that cost $3.8 billion can today be sequenced for about $38 in less than a week. This dramatic improvement has happened within my lifetime and shows no signs of slowing down. This year, Resnick expects about 50,000-100,000 human genomes to be mapped, and he expects this number to double, triple, or quadruple each year for several years. This is the sort of progress we’ve observed in computing power (and called Moore’s Law), but at an even more rapid pace.
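
As a quick sanity check, here is a back-of-the-envelope calculation using only the figures quoted above; the rounding is mine.

```python
# Back-of-the-envelope check of the sequencing numbers above.
genome_size_bp = 3_000_000_000   # ~3 billion base pairs
original_cost = 3_800_000_000    # Human Genome Project, ~$3.8 billion

# Cost per base pair for the original project (~$1.27).
print(f"Original cost per base pair: ${original_cost / genome_size_bp:.2f}")

# A ~100-million-fold cost reduction brings a whole genome to ~$38.
print(f"Whole genome today: ${original_cost / 100_000_000:.0f}")

# A 200-billion-bp run covers a single genome many times over in one week.
print(f"Genomes per run: ~{200_000_000_000 // genome_size_bp}")
```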

What does this mean for us? Resnick relates the story of Rick Wilson at Washington University who, over a couple of weeks (weeks!), sequenced the genome of a woman who died of cancer and compared it to a healthy human genome. In the comparison he found a 2,000 base pair deletion in the cancerous cells (out of 3 billion base pairs, or roughly 0.00007% of the genome) that led to the discovery of a gene which, if present, indicates a 90% chance that the carrier will develop the particular type of cancer this woman had. That means we can screen people for this gene (or any of the many others being discovered) and give those who carry it an extremely powerful incentive to get screened frequently for cancer.
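
To put that deletion in perspective, a quick calculation of how small it is relative to the whole genome:

```python
# How small is a 2,000 base pair deletion relative to a ~3-billion-bp genome?
deletion_bp = 2_000
genome_bp = 3_000_000_000
fraction = deletion_bp / genome_bp
print(f"{fraction:.7f} of the genome, or about {fraction * 100:.5f}%")
# -> 0.0000007 of the genome, or about 0.00007%
```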

This sort of targeted screening (and, therefore, targeted treatment) is only possible because of our ability to sequence the human genome. Originally, sequencing the human genome cost $3.8 billion. A few years ago, it cost about $100,000. Today, most companies charge about $10,000 for a genome sequence. Resnick predicts a sequence will cost about $1,000 next year and about $100 the year after, give or take a year. Because human genomes can be sequenced quickly and cheaply, and because the cost is minimal (roughly twice the cost of a pre-employment drug test in a few years), treatments targeted to very specific portions of the genetic code could enable people to live an average of 5, 10, even 20 years longer than they otherwise would. By aggregating millions or billions of human genomes, computers can discover further disease-causing mutations, enabling more targeted treatments and longer lifespans. The amount of information is overwhelming, almost unimaginable, but it could have profound implications for the future of humanity.
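
From today’s roughly $10,000 to a predicted $1,000 next year and $100 the year after, that trajectory implies roughly a tenfold price drop per year, far steeper than the doubling every 18-24 months we associate with Moore’s Law. A rough illustration using only the figures quoted above:

```python
# Price trajectory quoted above: ~$100,000 -> $10,000 -> $1,000 -> $100
prices = [100_000, 10_000, 1_000, 100]
for before, after in zip(prices, prices[1:]):
    print(f"${before:,} -> ${after:,}: {before / after:.0f}x cheaper")
# Each step is ~10x; compare Moore's Law at ~2x every 18-24 months.
```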

What sort of implications? Geneticist George Church suggests that we are only years away from being able to screen our genomes, revert some of our cells to a pluripotent stem-cell stage, edit those cells with desirable mutations discovered in other genomes (mutations linked to long life, better immune systems, better eyesight, etc.), and then reintroduce the edited cells into the body so that they replace the original genetic code with one that confers additional benefits. Today, a sick person who needs a bone marrow transplant has to hope for a genetic match with someone willing to donate. In a few years, scientists should be able to create disease-free bone marrow from the patient’s own cells, and perhaps bone marrow based on the patient’s own cells with additional beneficial mutations.

We are just now approaching the point where we can upgrade the very core of our beings, our DNA, with helpful mutations discovered in other genomes. We can cheaply aggregate massive amounts of genomic information, rapidly sort beneficial mutations from detrimental ones, introduce the former into DNA, and screen for and treat the latter far more precisely than any current treatment allows. Compared to that, mechanical augmentations look downright crude. So, while mechanical augmentation might win the day over the long haul, in the next 25-30 years I expect truly amazing discoveries and treatments from biological engineering.

Edit to add: An interesting take on the ethical implications of this sort of genetic engineering.

Robot Articles

April 20, 2011

Two interesting robot articles hit my RSS feed today. Though they’re not ‘intelligent’ robots, they do show some noteworthy capabilities.

First, a robot will throw out the first pitch today at the Phillies game. Now, a robot throwing pitches is not that strange; batting cages have been automated for as long as I can remember (not to date myself too much) and throw at a variety of speeds. Two things seem interesting about this robot, however. First, it’s mobile. Granted, it’s not bipedal (like some baseball-hurling Terminator, thank goodness), but that might be a feature rather than a defect; bipeds are inherently unstable, and while a bipedal design would make the robot seem more human, it otherwise wouldn’t make structural sense. Considering the trouble most companies have teaching bipedal robots to walk, it’s unsurprising that a robot ‘slapped together’ over a few months has wheels (which is not to say robots haven’t gotten much better at walking in the last few years). Second, it’s throwing a pitch in the MLB. Batting cages are unexciting enough that we’ve never (to my knowledge) had a ‘pitching robot’ throw out a first pitch, so this is a first of sorts. After seeing Watson on Jeopardy!, and now a robot throwing the first pitch at an MLB game, I wonder what other robots we’ll see in the media. Not too shabby for some engineers from Penn on short notice. I almost wish the umps would let this robot unleash its best pitch once before the game (or during the 7th, whatever).

Second, and more traditionally robotic, a robot sculpts an aluminum (showpiece) motorcycle helmet out of a block of metal. Although this robot won’t be confused for a human any time soon, it does showcase some amazing skills that robots have, even without consciousness. Here, a robot crafts far more quickly, precisely, and intricately than any human could, particularly in so little time. With a little conscious creativity, it seems like it could put out artwork that shames its human counterparts.

Categories: banter, Singularity Hub