Archive for January, 2012

Stem Cell Advances

January 28, 2012

Let’s talk about stem cells for a bit.

Few technologies are as contested as stem cell treatments, particularly embryonic stem cells, harvested from embryos specifically for research purposes. The National Institutes of Health has a helpful FAQ here that explains more about stem cells generally, including the embryonic variety and the newer induced pluripotent form (where adult cells are regressed into an embryonic-stem-cell-like state). Despite the controversy around embryonic stem cells, stem cells generally (including, but not limited to, the embryonic kind) offer enormous potential for radical treatments.

Still, many of these treatments are in the research phase, and the Food and Drug Administration (FDA) recently issued some warnings about stem cells. Essentially, the FDA warns that most stem cell treatments have not yet been approved, and consumers ought to be wary when approached with information about radical stem cell cures; these pitches are often just scams designed to bilk desperate patients and their families out of money. On the other hand, some tech companies prefer to operate overseas because of the regulatory morass imposed by the FDA here in the United States. While the regulations are well intentioned (to give the FDA the benefit of the doubt, they exist to ensure patient safety), they frustrate companies that want to get cures to patients as quickly as possible. I’ll outline some of the proposed treatments below:

Researchers in London are conducting clinical trials in which stem cells are introduced into the retinas of patients who suffer from blindness-causing diseases. Concurrently, researchers at UCLA are reporting early success with the procedure. Through the first four months, the treatment appears successful and safe, though “many more years” of trials are needed to confirm the initial results. Additionally, the trial offered only slight improvement to vision, but curing the disease doesn’t seem to have been the point of the trial.

An article from NPR offers more information about the same experiment, including the claim that this was the first time humans were helped by stem cells. The article also describes in more detail the improvement shown by the test subjects, which seems to be substantial. Scientists in both articles are careful to reiterate that this should not be touted as a cure for blindness, only as encouraging initial results. Thus far, however, the results seem very impressive indeed.

I’m a little confused by the previous article’s implication that this research was the first time stem cells were used to help humans, because this video claims that scientists have already used stem cells to grow new tracheas (windpipes) for patients. The video is a little dry, but I include it below in its entirety for you to judge:

In other experiments, scientists are researching ways to cure Alzheimer’s Disease and to regenerate the muscle tissue that forms vascular walls. The first line of research ought to eventually allow doctors to introduce healthy neurons into the brains of people afflicted with Alzheimer’s Disease, essentially curing the disease (especially if they can also remove the defective or dead cells; another project Aubrey de Grey and crew are making headway on). Stem cells have also been introduced into mice and seem to help them live longer, healthier lives. Aubrey de Grey generally predicts that 10-15 years after scientists are able to double the lifespan of mice, they ought to be able to do the same for humans. This research suggests that the mouse-lifespan threshold may soon be met.

In sum, the FDA is right that much of the work currently being done with stem cells is research-based or otherwise preliminary, and patients should be wary of supposed stem cell treatments for almost any condition. However, just because the treatments aren’t here yet doesn’t mean that they won’t be soon, and current research suggests that within the next several years stem cell treatments will move from hoax to fact.

Artificially intelligent cars have received a lot of press lately, so look out for my article about them in the next few days.


Friday Edition: Augmenting Humans

January 20, 2012

This week has brought several interesting articles about human enhancement. Although much of this technology is still in the research phase, some tech is beginning to make it to the market.

Via IEEE Spectrum, bionic eyes are beginning to help those with some forms of blindness see again. Although the glasses are not elegant, the technology not perfected, and the solution not universal, this represents a serious step forward in biotechnology. I wrote about a chip that did substantially similar things as part of a post on Deus Ex: Human Revolution, and now it seems prepared to come to market. Even more exciting is how quickly the remaining problems should get solved; probably within just a few years. Because the chip-glasses combination already works and is limited largely by resolution, improving the image ought to be relatively easy. After all, from video game systems to cell phones and digital cameras, resolution has time and again proven to be a straightforward fix; the tech is there, it’s just a matter of integrating it.

Other emerging technologies are not as easily remedied. CNN reports on a new bionic hand, another technology highlighted in the Deus Ex video linked above. Matt’s new hand appears close to human-level functionality, with an opposable thumb, connecting fingers, and universal application. The CNN video is a little lighter on details than I would like, but I imagine that if the hand had touch-sense capability or other upgrades, it would have been mentioned. Still, these hands seem to have a quick learning curve and are becoming available for those who need them.

In biological tech, the New York Times reports that doctors in Sweden have replaced a cancer-ridden trachea with one grown in a lab. Although the surgery was very expensive (around $450,000), because the windpipe was grown from the patient’s own cells, no anti-rejection medication is needed. However, the article notes, the body might well ultimately encapsulate the trachea with scar tissue; a process that isn’t exactly rejection but leads to similar results. Other organs are also on the way.

Some people, unwilling to wait for FDA approval or a medical need, are beginning to upgrade themselves. Do-it-yourself biohackers use basic techniques, coupled with a whole lot of courage and determination, to upgrade their senses; for example, implanting magnets into their fingertips to sense electromagnetic currents, magnetic north, or even the locations of their friends. As the basic technology becomes more widespread, and people become more familiar with how to use it, I imagine this sort of garage augmentation will only increase. However, I still expect most people will go to a doctor if they can. The posted videos are amazing.

If the private sector isn’t moving technology along rapidly enough, Discover Magazine reports that the US military, through DARPA (the Defense Advanced Research Projects Agency) and other agencies, has been quietly funding projects along many of the same lines. Although the ultimate purposes to which DARPA and these other agencies may put this technology could be questionable, research unhampered by the usual regulations, backed by solid funding, is almost sure to push these technologies forward very rapidly. Most of the scientists involved, apparently, don’t find the military’s involvement objectionable.

By far the most interesting thing to augment, however, is the brain. Many people suggest that we know too little about the brain to effectively augment it. However, experiments like this one on rat brains are pushing forward our knowledge. Scientists are having luck replacing parts of the rat brain with engineered components that perform the same functions. Once they understand how to replace parts of a rat’s brain with engineered analogues, they hope to be able to help restore functionality to humans with damaged brains.

One possible application that I’ve written about at some length is increasing human intelligence. Science writer Natalie Wolchover considers what might happen if average intelligence doubled. However, increasing intelligence might not be an entirely good thing; George Dvorsky comments on a recent article in the journal Current Directions in Psychological Science arguing that heightened intelligence comes with many side effects, and that any advances in human intelligence will need to account for them.

Finally, Sarah Wanenchak offers an insightful article about how disabled people who have limbs replaced with prostheses still face discrimination. It’s worth a read, especially for anyone considering voluntarily adopting these technologies in the future (although I imagine that as more people adopt them, the stigma will subside).

Stay tuned next week for more updates. Now that school has started again, I have a nice backlog of articles to share!


The new and the new-old: Computers and Woolly Mammoths

January 16, 2012

Today on boydfuturist: Advances in computing (and one genetic bombshell).

I’ll begin with some technical advances in computing from the last week or so. First, Kurzweilai.net links to an article reporting that computer component maker NEC has demonstrated 1.15 Tb/s optical transmission speeds over 10,000 km (about 6,200 mi). Although it’s probably too soon to hope for internet upgrades for consumers (and might be for the foreseeable future, given the United States’ abysmal internet infrastructure), I can at least dream of upgrading my 300 kb/s connection eventually. This sort of high-speed capability will be vital for the increasing mounds of data being sent and received thanks to mobile phones, embedded movies, video conferencing, video gaming, and other high-bandwidth applications.
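To make those numbers concrete, here’s a quick back-of-the-envelope comparison (a sketch; the 50 GB file size is my own illustrative choice, not a figure from the article):

```python
# Rough download-time comparison: NEC's demonstrated 1.15 Tb/s link versus
# a 300 kb/s home connection. The 50 GB file is a hypothetical example.
FILE_BITS = 50 * 10**9 * 8      # 50 GB expressed in bits
NEC_RATE = 1.15 * 10**12        # 1.15 terabits per second
HOME_RATE = 300 * 10**3         # 300 kilobits per second

print(f"At 1.15 Tb/s: {FILE_BITS / NEC_RATE:.2f} seconds")
print(f"At 300 kb/s:  {FILE_BITS / HOME_RATE / 86400:.1f} days")
# Prints roughly 0.35 seconds versus about 15 days.
```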

To handle the increasing amounts of data, both locally generated and transmitted over the internet, computers are going to need more memory. I suppose I’m dating myself to say that I remember when memory was measured in megabytes, and that 16 GB of RAM seems outrageous to me even as I installed it for less than $100 in my buddy’s computer. But we’re going to need more, and it’s going to have to fit into increasingly smaller spaces as we miniaturize computers down to the nano scale. Fortunately, researchers at IBM have stored a bit of memory in a mere 12 atoms, making their medium roughly 100 times as dense as current materials. Until we can safely and cheaply cool home computers to near absolute zero this won’t be much use at home, but it shows that there is potential to pack a lot of memory into very tiny spaces.
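For a sense of scale, a little arithmetic using the figures above (taking the “100 times as dense” claim at face value; this is my own illustration, not from the article):

```python
# How many atoms would 16 GB of RAM take at IBM's 12-atoms-per-bit density,
# versus conventional media at ~100x more atoms per bit (per the article)?
ATOMS_PER_BIT_IBM = 12
ATOMS_PER_BIT_CONVENTIONAL = 12 * 100

ram_bits = 16 * 2**30 * 8       # 16 GB in bits
print(f"IBM scheme:   {ram_bits * ATOMS_PER_BIT_IBM:.2e} atoms")
print(f"Conventional: {ram_bits * ATOMS_PER_BIT_CONVENTIONAL:.2e} atoms")
# ~1.6e12 atoms versus ~1.6e14 -- either way, an invisibly small speck.
```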

What sort of applications could use such vast amounts of data? Lots, it turns out. Erin Rapacki argues that we ought to begin scanning the “real” world. By scanning every object in the real world in 3-D, we could give computers a vast data set that allows them to recognize virtually any object they pick up. It turns out that this sort of project is being crowdsourced, with websites being set up for people to upload 3-D scans made with their Xbox Kinect to form one enormous database. I have to wonder whether increasing reliance on 3-D printers will speed up this process, since any object we want to print on a 3-D printer needs to be scanned (or built as a file) in 3-D to begin with. Imagine one day being able to download the “Sears Hardware Collection” file and printing whatever tool you need at will.

Vinod Khosla argues that computers will take over many of the jobs in the healthcare field, resulting in staggering amounts of data transmitted across the world in the blink of an eye. Whether or not he’s right, there’s no question that healthcare is becoming more automated, scans are taking up more data, and genomics is coming along right behind to fill up whatever empty HDDs are left. Devices are also being created that allow people to control robots with their minds; given the amount of data the brain creates, finely tuned devices that perform as well as or better than natural limbs and organs will no doubt need to transmit and/or receive large amounts of information as well. Even as technology becomes more omnipresent, people are already asking questions about its impact on our biological brains. Just as we start to wonder, however, comes news from MIT that researchers have succeeded in creating a synthetic version of a biological neuron. While we’re hardly ready to build a brain from scratch, this suggests that doing so is not out of the question.

Finally, some non-computing news that is huge. Biologists are now saying that they have the ability to sequence woolly mammoth DNA, replace the relevant bits in an elephant egg, and implant what will become a woolly mammoth into a female elephant. Scientists have said this before, but haven’t had a complete genome to work with because their samples were damaged. Difficulty: woolly mammoths have been extinct for thousands of years. Isn’t this how Jurassic Park started and, if so, can it be long before Paris Hilton is walking around with a mini-T-Rex in her purse?

I, for one, certainly hope not.

Tuesday Edition: CES Tech, Robots, and Power

January 10, 2012

There’s no shortage of articles about new technology throughout most of the year, but during the first couple weeks of January many of the big players come to the Consumer Electronics Show to debut their newest and coolest gadgets right here in Las Vegas. While consumer tech isn’t generally what I’m interested in, sometimes something really interesting comes along. I’ll share a few of those stories here, followed by a few interesting articles about robots and artificial intelligence, and finally an article discussing one way we could generate vast amounts of sustainable power without excessive pollution or nuclear waste.

First, the Washington Post reports on the top trends to watch from CES. More powerful smartphones, lighter and more powerful notebooks, and green energy (especially coupled with automotive technology) are the big stories identified. In general, we’re seeing what we should expect: Moore’s Law playing out in ever more powerful devices, with a corresponding increase in data production. Google is working to augment TV, 4G phone service continues to roll out (and, hopefully, improve), and cars are continuing to transition off of gasoline-fueled engines.

Samsung unveiled a new TV at CES with an interesting feature: upgradeability. As technology advances more rapidly, products become obsolete (or at least outdated) more quickly. It’s virtually impossible to buy a phone at the beginning of the year and have it still be cutting edge at the end of the year, and computers are outpaced almost quarterly. TVs are no exception, and Samsung seems to be targeting consumers who want the newest functionality without having to buy a new TV every year or two. But I wonder whether this is more of a gimmick than a practical help. TVs tend to upgrade in two main ways: they get bigger for the same price, and they get new functionality. The move from standard definition to high definition was the biggest jump, as the move from 2-D to 3-D hasn’t really caught on. There have also been lighting upgrades, from bulbs to LEDs and plasma to OLEDs. Most of these upgrades couldn’t be accomplished with an expansion slot; nothing but a new TV would make the screen bigger or the lighting source different. Arguably an upgrade slot could convert a TV from 2-D to 3-D (a new video card can do that in a computer) and could add new functionality like Netflix / GoogleTV / Gamefly capability, but the slot probably isn’t big enough to give a TV an integrated Blu-ray player, or whatever comes next. Because so many things change so rapidly, the expansion slot probably won’t be able to keep up with multiple developments at the same time, and even if it could, it’s unlikely to change the underlying hardware, leading to a TV that uses the new features about as well as a five-year-old computer would play Crysis: that is, poorly. But … it is an interesting idea, and I look forward to seeing how it plays out.

Other consumer tech articles, for products not debuting at CES, have circulated around the net. The new Windows 8 smartphones and computers, with the help of a company named Tobii, can be controlled just by looking at what you want to select. Assuming this sort of technology catches on (and, if only to improve access for disabled people, I imagine it will), we can expect more natural input and selection from our devices soon. I imagine it wouldn’t be too hard to introduce this into a car with a heads-up display (HUD) to allow for truly hands-free control over the onboard electronics. Durdu Guney, out of Michigan Technological University, has created an extremely high-resolution camera that can be attached to a cell phone. While the camera won’t be used to shoot photos of friends, it could be used to take a look at cells in your blood or check for infection through an on-board app. Finally, scientists in Tokyo have created touchable holograms, adding a new dimension of reality to projected displays. In addition to the uses they discuss in the video, it’s difficult to imagine this not leading directly to the most awesome video games yet, let alone holodeck-style simulations.

All these new devices are transmitting, receiving, and using vast amounts of data. Indeed, CNN Money projects that within 10 years entire new types of jobs will focus just on sorting through all that data. While that would be nice if it were true, I have to imagine A.I. will take over most of that responsibility, as even the NSA is likely to have trouble sorting through the constant stream of data coming from upwards of 50 billion digital devices. According to the New York Times, if these trends continue, it is increasingly likely that much of this data will come from a relatively small number of users.

Exciting things are afoot in the world of robotics, too. Researchers in Munich have succeeded in creating a more human-like face for a robot. If spontaneous facial movement can be coupled with excellent A.I., robots might start to seem to have more emotions and become less distinguishable from humans. This will likely increase the attachment people feel toward their robots, and might make them more pleasant to deal with in person. Other robots, however, are already ready for prime time. In South Korea, for instance, robots are about to be used to help secure prisons during the evening. South Korea has been very aggressive in integrating robots into society, pledging a robot in every kindergarten classroom by next year. They’ve also been aggressive about militarizing robots, integrating them into the front lines.

Finally, in the biggest story of the day, the Mother Nature Network reports that Japanese scientists have created a new wind turbine design that promises to triple the power output of wind turbines. Although the article is short on details, if its financial data is right, then with a decent grid the US could satisfy almost all of its power needs much more cheaply than by other means of energy production. No doubt it would be an expensive national project, but we haven’t had a good reconstruction project for a long time, and lots of people need work. This might be a way to solve two problems at the same time.

Sunday Edition: Abundance, Computing, Animal Communication, and Ethics

January 8, 2012

Any sufficiently advanced technology is indistinguishable from magic. – Arthur C. Clarke, scientist and writer

With that in mind, let’s talk about magic for a minute. Not so long ago (and in some circles still today), people talked about alchemy; turning lead into gold was the usual goal. Without knowledge of elements, atoms, and other basic chemistry, the idea was that one substance could be transmuted into another using the philosopher’s stone, which, despite its name, was not always a stone but sometimes an elixir or other substance.

Today, we don’t talk about philosopher’s stones, and rarely talk about turning lead into gold. We could plate lead with gold, of course, but that’s not the same. In theory, one could turn lead into gold by reconfiguring the atoms of lead (82 protons and 82 electrons in six shells, with 126 neutrons in the nucleus) into atoms of gold (79 protons and 79 electrons in six shells, with 118 neutrons in the nucleus). It looks so simple, and indeed we have transmuted lead into gold, but, unfortunately, it takes massive amounts of energy to swipe a few subatomic particles and turn one element into another.
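On paper, the bookkeeping looks deceptively tidy (a schematic using the most common isotopes of each element; this is not a real reaction pathway):

```latex
% Lead-208 to gold-197: shed 3 protons and 8 neutrons (schematic only).
^{208}_{82}\mathrm{Pb} \;\longrightarrow\; ^{197}_{79}\mathrm{Au} \;+\; 3\,p \;+\; 8\,n
```

Eleven nucleons are all that separate the two elements on paper; the energy cost of moving them is the catch.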

That notwithstanding, transhumanists hope to convert not just lead into gold, but any element into any other. Like Star Trek’s replicator, scientists hope to take some basic bag of material (it really doesn’t matter what), destroy it by tearing apart its subatomic particles, and then reassemble them into whatever configuration one wants. Bales of hay could be transmuted into a Ferrari, in theory. The widespread use of that sort of technology leads to what some transhumanists call abundance: the utter irrelevance of ‘(personal) property’ as such, because anything can be turned into anything else. I recently ran across the Foresight Institute’s page on molecular assemblers and I’m fascinated. By all accounts the technology is many years away (but would probably represent the most important invention … ever).

In the meantime, how is abundance looking? The Huffington Post recently ran an article by Peter Diamandis, who argues that technology has already vastly improved the world as a whole. Global per-capita incomes (inflation adjusted) have tripled, lifespans have doubled, and childhood mortality has decreased by 99%. His fascinating article goes on to explain why, despite living in vastly better times (as a world community, not just Americans), we’re still focused on the negative.

To power abundance, of either the molecular assembler or the more recognized variety, we’ll need a lot of computing power. Moore’s Law has predicted, accurately, that the number of transistors on a chip will double every couple of years and, as a corollary, that processing power will double about every 18 months. Every few years people predict the end of Moore’s Law, but it’s remained accurate since 1965 (and, more generally, for technology since essentially forever, according to Kurzweil). Researchers from the University of New South Wales and Purdue have recently created wires in silicon a stunning one atom tall by four atoms wide. Such small wires could enable quantum computing in silicon, a stunning feat that would continue Moore’s Law into the foreseeable future. Additionally, it makes nano-scale engineering more feasible.
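As a toy illustration of those two doubling rates (the seed values below are my own assumptions for the sketch, not figures from the article):

```python
# Moore's Law as compound doubling. The 1971 Intel 4004 (~2,300 transistors)
# is used as an assumed starting point for illustration.
def transistors(year, base_year=1971, base_count=2300, doubling_years=2.0):
    """Projected transistor count if counts double every `doubling_years`."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

def speedup(years_elapsed, doubling_years=1.5):
    """Relative processing power if it doubles every 18 months."""
    return 2 ** (years_elapsed / doubling_years)

print(f"Projected transistors in 2012: {transistors(2012):.2e}")   # ~3e9
print(f"Processing-power gain over a decade: {speedup(10):.0f}x")  # ~100x
```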

What could we do with all that computing power? Patrick Tucker of the World Future Society recently offered some thoughts. Artificial intelligence is already being used to replace workers in China, but even professionals like doctors and lawyers are being helped (or replaced) by automation. Managing all the information being created is vital, so A.I. is being used to search speeches on TV the way one searches the web with Google, and to sift through human genomes looking for similarities. Google is creating self-driving cars. Researchers in China are identifying the causes of traffic jams based on two years’ worth of GPS data collected from 33,000 cabs. There will be, in short, a need for all the computing power we’re inventing.

I’m going to switch gears for a moment to some random new discoveries. Technology Review reports on new advances in carbon nanotubes that are leading to materials that are more conductive and weigh much less than traditional materials. Meanwhile, technology company Lumus has created a pair of see-through augmented reality glasses that are lightweight and project an HD (720p), 3-D, 87″ screen into the wearer’s field of vision. They’re not the most stylish thing in the world, but who wouldn’t love to throw an 87″ TV into their backpack and set it up in the library? Better yet, let’s put these in a bionic eye. Additionally, scientists are trying to use robots to figure out how language evolves in the natural world, including among animals.

In the realm of ethics, Vinton Cerf argues that internet access is neither a human right nor a civil right in the New York Times opinion pages. This is in response, of course, to the argument that internet access -is- a human right, including a UN Report to that effect. Unsurprisingly, the blogosphere (I’ve wanted to use that word for a while) has lit up with responses on both sides. Here’s one example, from JD Rucker.

Finally, if you’re still feeling down about the world, check out Jason Silva’s videos on techno-optimism. The pattern video at the beginning is particularly good.

Weekend Edition: Healthcare News

January 7, 2012

It’s been a busy few days in health technology news.

First, CTV (via Fight Aging!) reports that Canadian researchers have discovered stem cells within the eyes of adults that can be used to help cure age-related macular degeneration (AMD), the leading cause of vision loss in people over 60. Apparently these cells form within the eye during the embryonic stage and remain dormant (sometimes for up to 100 years) in our eyes. By removing the cells and growing them in a culture, scientists can (in theory) restore vision by replacing dysfunctional cells. Further, these stem cells seem to be pluripotent, meaning that scientists can turn them into other types of cells and thus into treatments for other diseases. Here’s a quote from the article:

“In culture dishes in the lab, the researchers were able to coax about 10 per cent of the RPE-derived stem cells to grow in the lab. Further prodding caused the cells to differentiate into, or give rise to, a variety of cell types — those that make bone, fat or cartilage.

Temple said her team also generated a progenitor cell that carries some characteristics of one type of nervous system cell, although it was not fully differentiated.

‘But the fact that we could make these cells that were part-way, that were immature, indicates to us that if we keep on manipulating them, going forward in the future, we should be able to find ways to create other types of central nervous system cells,’ she said.

One goal would be to produce neurons, the electrical-signalling cells in the brain and other parts of the central nervous system. That would mark a major step towards the holy grail of regenerative medicine: the ability to repair spinal cord injuries and brain damage caused by such diseases as Alzheimer’s or Parkinson’s.

‘And a really important cell type that we’d love to see if we can make would be the retinal cells, the neural retinal cells like the photoreceptors that are in the eye,” said Temple. “So if we could help make new photoreceptors as well as the RPE — which we’ve already shown we can make — then we would be making two really valuable cell types for age-related macular degeneration.'”

Second, USA Today (via Transhumanic) reports that yet another artificial organ, this time the pancreas, has entered clinical trials. Unfortunately, this organ isn’t exactly like the rest of your organs; it’s a small machine worn outside the body rather than implanted where the old pancreas used to go. Nevertheless, it seems effective at monitoring glucose levels in the blood, calculating how much insulin is needed to bring those levels back to normal, and then injecting that amount of insulin. Approval for the device is expected in the next three to five years.

Speaking of clinical trials, however, all is not rosy in the world of academic publishing. Discover Magazine reports on a study showing that 30 months after clinical trials had been completed, more than half had not been published. Even after four years, one-third of the results from clinical trials remained unpublished. This is problematic for two reasons. First, publishing is a condition of receiving a grant from the National Institutes of Health (NIH), so more than half of funded groups are in breach of their funding agreements. Second, and perhaps more importantly, by not publishing their results, these scientists deprive the rest of the scientific community of valuable information; information that the authors of this study argue could change the conclusions researchers draw from the published work.

“’Overall, addition of unpublished FDA trial data caused 46% (19/41) of the summary estimates from the meta-analyses to show lower efficacy of the drug, 7% (3/41) to show identical efficacy, and 46% (19/41) to show greater efficacy.’ That means that when scientists try to study those FDA-approved drugs, they may not realize that they work less well than published papers indicate (or better, as the case may be).”

This is a trend that needs to stop, especially given the exponential increases in technology and the vast amount of advancement coming yearly; up-to-date results are a must.

Going back to diabetes for a moment, a new study reported by EurekAlert! shows that poor maternal diet can increase the odds of diabetes in the child. Scientists from Cambridge and Leicester have linked poor maternal diet during pregnancy to the fetus’s inability to correctly manage fat cells later in life. “Storing fats in the right areas of the body is important because otherwise they can accumulate in places like the liver and muscle where they are more likely to lead to disease.” The pregnant rats in the study were fed low-protein diets, which left their offspring unable to process fat correctly and increased their chances of developing type-2 diabetes. This deficiency made the young rats look slimmer (because they stored less fat) but nevertheless more likely to develop diabetes. Similar results were shown in humans with low birth weights.

In a world of increasing medical apps and patient-driven medical data, technologyreview.com reports on the thoughts of cardiologist Eric Topol, who seems to agree with SingularityU chair Daniel Kraft that this increasing data will revolutionize medicine. The article indicates, however, that there is reason to question whether all this additional data is really helpful. In no case does the additional information seem to have hurt (that is, patients did not receive worse care because of the abundance of information), but neither did outcomes always improve. What the article does not seem to question is that quite soon there will be a deluge of additional patient information available, first through cell phone apps and the federally funded switch to electronic patient records, and later through more advanced sensors like nanobots swimming around in the bloodstream. For my money, if the patient data isn’t improving patient care, it’s because the data is not being used correctly. Certainly no doctor can keep track of hundreds or thousands of patients whose information is updated daily or even weekly, but a hospital computer running correctly coded software (or perhaps even a Watson-style supercomputer) easily could, and could then alert the doctor to only the most important cases.
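To sketch what I mean (a minimal, entirely hypothetical example; the field names and thresholds are mine, not from any real hospital system):

```python
# A minimal sketch of software triage: scan a stream of patient readings
# and surface only the out-of-range cases for the doctor to review.
NORMAL_RANGES = {
    "glucose_mg_dl": (70, 140),     # hypothetical acceptable range
    "heart_rate_bpm": (50, 100),    # hypothetical acceptable range
}

def flag_abnormal(readings):
    """Yield (patient_id, metric, value) for readings outside their range."""
    for patient_id, metrics in readings.items():
        for metric, value in metrics.items():
            low, high = NORMAL_RANGES.get(metric, (float("-inf"), float("inf")))
            if not low <= value <= high:
                yield patient_id, metric, value

# One day's (made-up) feed: only patient-001's glucose gets flagged.
daily_feed = {
    "patient-001": {"glucose_mg_dl": 210, "heart_rate_bpm": 72},
    "patient-002": {"glucose_mg_dl": 95, "heart_rate_bpm": 88},
}
for alert in flag_abnormal(daily_feed):
    print("Alert for doctor review:", alert)
```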

Finally, my law school pal Micah linked me to an article from the BBC reporting on the first chimera monkeys: monkeys created from several different embryos. Essentially, the scientists took cells from up to six different embryos, mixed them together into three monkey embryos, and out came the apparently healthy monkeys Chimero, Hex, and Roku. The study also found (somewhat unsurprisingly) that stem cells didn’t work the same way in mice as they did in primates, which suggests that the backward-engineering we’re doing to revert normal cells to a pluripotent stage might not be as effective in humans as it is in mice. That is, there may still be a need for embryonic stem cells. Micah asked whether this experiment might have an impact on our notions of family, in addition to our ideas about personhood.

For a couple of reasons, I think this experiment in particular probably won’t. The only thing different about these monkeys and any other monkeys of the same type is that these were artificially created and carry a mixture of several strands of DNA. On one hand, that probably means there is no clear mother or father; when the DNA of six monkeys is mixed together, who’s the biological parent? On the other hand, a monkey (or a human, for that matter) who receives a transplanted organ now has DNA from at least three different people (both biological parents, plus the donor), and arguably four sources (if you count the two DNA strands that make up the donor’s own DNA). With more transplants comes more DNA; it’s not inconceivable that a human could have a kidney from one donor, a lung from another, and a heart from yet a third, making at least five distinct DNA strands within the same human. Also, in the sense that ‘chimera’ just means composed of different DNA strands, anyone who already has a transplant is a chimera. So for that reason, I don’t think a human created this way (as unlikely as that is, given human-experimentation laws) would be any less of a person than a more traditionally created human.

But speaking of created humans: through various fertility treatments, including even surrogate mothers (or fathers) and whatnot, our notions of family are becoming less tied to the make-up of our DNA. Even simple adoption shows that a family unit can include members with different DNA without trouble. So the fact that these monkeys are made up of several DNA strands probably shouldn’t start affecting our ideas about family, though in humans it could lead to some hilarious Maury Povich episodes. Also, the fact that a human is created through artificial means hasn’t yet stopped them from being a person in the traditional sense, and so I don’t think it would have any effect on monkeys (though they’re not legally persons, and this is unlikely to change that).

Something that might make us reconsider our notions of personhood and family is a chimera made of different species; part-monkey, part-reptile combinations, for example. There, a whole new species is being created, and the being becomes further removed from its parents. Because family is more of a social construct now than a DNA-matched set (consider how many people seriously consider their dog / cat / goldfish part of the family), even this radical form of chimera might not shake our notions of family. But personhood … that’s something I’ll have to think more about.

Stay tuned for some news about robotics tomorrow; I wanted to make separate posts to keep this one from becoming even more unwieldy than it already is.

Gadgets, Brains, and Healthcare

January 5, 2012

Only five days into 2012, and mind-blowing articles are already dropping.

Cornell researchers, in Pentagon-funded work (reported by Physorg.com and others), have created a device that splits beams of light, hiding an event from sight. They’re calling it a time cloak. For around 40 picoseconds (trillionths of a second), the scientists are able to create a gap in the light by using a time-lens to split it into slower red and faster blue components. This makes anything occurring in the gap invisible. In theory scientists could make the device effective for a few millionths of a second, or perhaps even a few thousandths of a second, but a device large enough to erase a whole second would need to be approximately 18,600 mi long. Even for someone like me who envisions mechanical implants for humans and perhaps even brain uploading into a computer, this article is fantastic. I’d love to see some confirmation of this technology and a better explanation of how, exactly, it works. Still, it seems it won’t be a very effective Ring of Gyges anytime soon, if at all.
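To get a feel for how small that window is, consider the distance light covers in 40 picoseconds (simple c × t arithmetic of my own, not a figure from the article):

```python
# Distance light travels during the 40-picosecond cloaking gap.
C = 299_792_458        # speed of light in m/s
GAP = 40e-12           # 40 picoseconds in seconds
print(f"Light travels {C * GAP * 1000:.0f} mm in 40 ps")  # about 12 mm
```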

Researchers in Japan, meanwhile, have created super-sensitive sensors out of carbon nanotubes. The sensor is flexible enough to be woven into clothing and can be stretched to three times its normal size. In addition to rehabilitation uses, this sort of sensor seems great for the blossoming world of controllerless video game systems like the Xbox Kinect. Such sensors could also be implanted in people receiving organs (biological or otherwise), or simply used to record biometrics in your everyday clothing.

Finally, Klaus Stadlmann gives a TED Talk about inventing the world’s smallest 3-D printer. It seems to be about the size of a Playstation 2, and can print in incredible detail. I thought the talk was a little dry, but still interesting.

There have been several interesting brain articles in the last few days. Forbes ticks down their top-10 brain articles from 2011, including memory-assisting chips, using magnetism to affect moral judgments, potential treatments for people suffering from Alzheimer’s disease, and thought-controlled apps for your cell phone. Although the brain is still largely mysterious, scientists are making massive amounts of progress on all fronts yearly.

Discover Magazine reports that anesthesia might be the key to better understanding how consciousness works. Apparently it’s not unusual for patients under anesthesia to wake up, then go back under and never remember that they woke up. I’ve talked a bit about the problem of recognizing consciousness before (one essentially has to rely on reports of consciousness, but consciousness itself cannot be directly tested for) and this article does a good job of reiterating the problem. The researchers hope that by putting people under and eliciting subjective reports of consciousness after the fact, they will be better able to pin down just what it is that makes a person conscious.

Medicalxpress.com posted an article in December asking Why Aren’t We Smarter Already? The authors suggest that there is an upper limit to various brain functions, and that while drugs and other interventions could potentially bring low-scoring individuals up, those already at or near peak performance would see little or no gain from the same drugs. If this is right, then there is reason to doubt that mind-enhancing drugs (say, Adderall) could make the smartest people even smarter. Yet the article only talks about improving the mind that we have, and not about whether it is possible to create an artificial brain (or introduce artificial implants into a biological brain) that -could- break past these natural barriers. It’s no secret that the body is well, but not optimally, designed; it shouldn’t really be surprising that the same is true of the brain.

TechCrunch offers a predictive list of technologies coming in 2012 in an article penned by tech luminary and SingularityU professor Daniel Kraft. According to Daniel, A.I. will become increasingly helpful in diagnosing diseases, from cheap phone apps that detect cancer with their cameras to A.I.-assisted diagnoses in remote villages. 3-D printing will continue to advance, massive increases in patient data will be shared on social network sites like patientslikeme.com, and videoconferencing technology like Skype will increasingly allow doctors to examine patients without an office visit. All good things.

Last, but not least, a team of scientists at USC has recently mapped an entire human genome in 3-D. They hope to be able to evaluate genomes not just on their genetic make-up, but also on their physical structure. Because genomes take up three dimensions in the body, a 3-D map should be a lot more accurate than the standard model.