
Posts Tagged ‘augmented reality’

Sunday Edition: Abundance, Computing, Animal Communication, and Ethics

January 8, 2012

Any sufficiently advanced technology is indistinguishable from magic. – Arthur C. Clarke, scientist and writer.

With that in mind, let’s talk about magic for a minute. Not so long ago (and in some circles still today) people used to talk about alchemy; turning lead into gold was the usual desire. Without knowledge of elements, atoms, and other basic chemistry, the idea was that one substance could be transmuted into another using the philosopher’s stone which, despite its name, was not always a stone but sometimes an elixir or other substance.

Today, we don’t talk about philosopher’s stones, and we rarely talk about turning lead into gold. We could plate lead with gold, of course, but that’s not the same. In theory, one could turn lead into gold by reconfiguring the atoms of lead (82 protons and 82 electrons in six shells, with 126 neutrons in the middle) into atoms of gold (79 protons and 79 electrons in six shells, with 118 neutrons in the middle). It looks so simple, and indeed we have transmuted lead into gold, but, unfortunately, it takes massive amounts of energy to swap out a few subatomic particles and turn one element into another.
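
For the literal-minded, here’s a toy sketch (in Python) of the bookkeeping described above: atoms reduced to particle counts, with the lead and gold numbers taken straight from the paragraph. The Atom type and the print-out are purely illustrative; the hard part, of course, is the energy bill, which the code cheerfully ignores.

```python
from collections import namedtuple

# A toy representation of an atom as a count of its particles.
Atom = namedtuple("Atom", ["protons", "neutrons", "electrons"])

lead = Atom(protons=82, neutrons=126, electrons=82)   # lead, as described above
gold = Atom(protons=79, neutrons=118, electrons=79)   # gold, as described above

# The "recipe" for transmutation is just the difference in particle counts...
diff = Atom(*(g - l for g, l in zip(gold, lead)))
print(diff)  # Atom(protons=-3, neutrons=-8, electrons=-3)
# ...but actually removing those particles takes enormous amounts of energy.
```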

That notwithstanding, transhumanists hope to convert not just lead into gold, but any element into any other. The hope is for something like Star Trek’s replicator: take some basic stock of material (it really doesn’t matter what), tear it apart into its subatomic particles, and then reassemble them into whatever configuration one wants. Bales of hay could be transmuted into a Ferrari, in theory. The widespread use of that sort of technology leads to what some transhumanists call abundance: the utter irrelevance of ‘(personal) property’ as such, because anything can be turned into anything else. I recently ran across the Foresight Institute’s page on molecular assemblers and I’m fascinated. By all accounts, though, the technology is many years away (but it would probably represent the most important invention … ever).

In the meantime, how is abundance looking? The Huffington Post recently ran an article by Peter Diamandis, who argues that technology has already vastly improved the world as a whole. Global per-capita incomes (inflation adjusted) have tripled, lifespans have doubled, and childhood mortality has decreased by 99%. His fascinating article goes on to explain why, despite living in vastly better times (as a world community, not just Americans), we’re still focused on the negative.

To power abundance, of either the molecular assembler or the more recognizable variety, we’ll need a lot of computing power. Moore’s Law has predicted, accurately, that the number of transistors on a chip would double every couple of years and, as a corollary, that processing power would double about every 18 months. Every few years people predict the end of Moore’s Law, but it has held since 1965 (and, more generally, for technology since essentially forever, according to Kurzweil). Researchers from the University of New South Wales and Purdue have recently created wires in silicon a stunning one atom tall by four atoms wide. Such small wires could enable quantum computing in silicon, a feat that would carry Moore’s Law into the foreseeable future. They also make nano-scale engineering more feasible.
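
As a back-of-the-envelope illustration of the doubling described above, here’s a small sketch. The Intel 4004’s roughly 2,300 transistors (1971) are used as a convenient starting point, and the 18-month figure is the processing-power corollary mentioned in the paragraph; the exact outputs are projections, not measurements.

```python
def moores_law(start_value, years, doubling_period_years):
    """Project exponential growth: one doubling every `doubling_period_years`."""
    return start_value * 2 ** (years / doubling_period_years)

# Transistor counts doubling roughly every two years,
# starting from the ~2,300 transistors of the 1971 Intel 4004:
print(moores_law(2_300, years=2012 - 1971, doubling_period_years=2))  # ~3 billion

# Processing power doubling roughly every 18 months:
print(moores_law(1.0, years=10, doubling_period_years=1.5))  # ~100x in a decade
```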

What could we do with all that computing power? Patrick Tucker of the World Future Society recently offered some thoughts. Artificial intelligence is already being used to replace workers in China, and even professionals like doctors and lawyers are being assisted (or replaced) by automated systems. Managing all the information being created is vital, so AI is being used to search speeches on TV the way one searches the web with Google, and to sift through human genomes looking for similarities. Google is building self-driving cars. Researchers in China are identifying the causes of traffic jams from two years’ worth of GPS data collected from 33,000 cabs. In short, there will be a need for all the computing power we’re inventing.
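
None of the systems mentioned above publish their internals here, but the basic trick behind “searching speeches like the web” is an inverted index. Here’s a minimal, hypothetical sketch with made-up transcript snippets, just to show the idea:

```python
from collections import defaultdict

def build_index(transcripts):
    """Map each word to the set of transcript ids containing it (an inverted index)."""
    index = defaultdict(set)
    for doc_id, text in transcripts.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

transcripts = {  # hypothetical snippets of TV speeches
    "speech_1": "computing power will transform medicine",
    "speech_2": "traffic data from taxi fleets",
    "speech_3": "medicine and the human genome",
}
index = build_index(transcripts)
print(index["medicine"] & index["computing"])  # transcripts mentioning both -> {'speech_1'}
```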

I’m going to switch gears for a moment to some random new discoveries. Technology Review reports on new advances in carbon nanotubes that are yielding materials that conduct better and weigh much less than traditional materials. Meanwhile, technology company Lumus has created a pair of see-through augmented reality glasses that are lightweight and project an HD (720p), 3-D, 87″ screen into the wearer’s field of vision. They’re not the most stylish thing in the world, but who wouldn’t love to throw an 87″ TV into their backpack and set it up in the library? Better yet, let’s put these in a bionic eye. Additionally, scientists are trying to use robots to figure out how language evolves in the natural world, including among animals.
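
Out of curiosity, what does an “87-inch screen in your field of vision” mean in angular terms? The viewing distance Lumus assumes isn’t given here, so the sketch below just picks a hypothetical 3 meters to show the arithmetic:

```python
import math

def apparent_diagonal_angle(diagonal_inches, distance_m):
    """Angle (in degrees) subtended by a screen diagonal at a given viewing distance."""
    diagonal_m = diagonal_inches * 0.0254
    return math.degrees(2 * math.atan(diagonal_m / (2 * distance_m)))

# Assuming (hypothetically) the 87-inch image is meant to look like a TV about 3 m away:
print(round(apparent_diagonal_angle(87, 3.0), 1))  # ~40 degrees of visual field
```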

In the realm of ethics, Vinton Cerf argues in the New York Times opinion pages that internet access is neither a human right nor a civil right. This is in response, of course, to the argument that internet access is a human right, including a UN report to that effect. Unsurprisingly, the blogosphere (I’ve wanted to use that word for a while) has lit up with responses on both sides. Here’s one example, from JD Rucker.

Finally, if you’re still feeling down about the world, check out Jason Silva’s videos on techno-optimism. The pattern video at the beginning is particularly good.


The Eyeborg Documentary: Bridging Reality and Sci-Fi

September 3, 2011

As I continue to talk with people about transhumanism, I’m often asked whether the sort of hyper-advanced technology I’m so excited about is really possible. After all, we’ve been expecting flying cars since the 1990s, and those still aren’t here. We don’t have robots cleaning up our homes, à la The Jetsons (Roombas don’t quite make the cut). We’re not even close to interstellar travel à la Star Trek, and we don’t have the luxury of mind vacations à la Total Recall. In short, technology often seems to fall short of people’s (probably unrealistic) expectations, and so those same people are understandably skeptical about claims of advanced cybernetic limbs, mind uploading (or substrate-independent minds, as Randal Koene is now calling it), and artificial intelligence.

A few months ago, I posted my thoughts on a few men who chose to have their hands replaced with cybernetic arms. When talking with people, I try to point them in the direction of stories like these; stories that illustrate that we already have the limb replacement part down, and that suggest we’re not so many engineering breakthroughs away from human-level functionality in our prosthetics. There are a lot of stories like this out there, but it’s hard to remember where all of them are when I’m a few drinks deep at the bar. Fortunately, Rob Spence and Deus Ex teamed up to make a short summary of cybernetic technology as compared to hyper-advanced technology still (barely) in the realm of sci-fi. They call it Deus Ex: The Eyeborg Documentary.

Rob Spence is the aforementioned Eyeborg, a man who lost his eye in a shooting accident and replaced it with a prosthetic housing a small wireless camera that transmits video to a screen a few feet away. Rob’s prosthetic doesn’t connect to his optic nerve, so he doesn’t actually see the video captured by the camera unless he looks at the player with his ‘good’ eye. Miika Terho (1:28), on the other hand, had a small chip implanted into his retina that does connect to his optic nerve, allowing his brain to process the incoming visual signal. The resolution is still … crude (to put it mildly) BUT: the blind can see again, in some sense. That has to count for something. The procedure is still in experimental phases, and probably won’t be approved for the public for several years yet, but much like most of the technologies highlighted in this video, it seems only a few engineering obstacles away from offering excellent solutions to people struggling with blindness and other eye problems. Joseph Junke (2:35) rounds out the Eyeborg tour of the eyes with his heads-up display for firefighters, a system that augments reality with information gathered from sensors and other technology. Augmented reality has captured a lot of interest recently because it seems like something we already know how to do, and indeed Junke thinks we’ll have a sellable product within two years or so. Combining these building blocks, it seems we could put together the video input of a mini-camera, the chip attached to the optic nerve, and the augmented reality display to produce an implant that allows for vision that meets or exceeds human level while offering a few nice extras. If cell phones are any indication, we’ll have lots of other small technologies piggybacking on the basic one, letting us take pictures of what we’re seeing, transmit them wirelessly, and alter coloration at whim. Just like the Deus Ex implant.
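
To make the “combine the building blocks” idea a little more concrete, here’s a minimal, hypothetical sketch of the HUD half of the equation: overlaying made-up sensor readings on a camera frame with OpenCV. It is not the Lumus or firefighter system’s actual software, just an illustration of how little code the display layer needs once you have a video feed.

```python
import cv2  # pip install opencv-python

def annotate_frame(frame, readings):
    """Draw hypothetical sensor readings onto a camera frame, HUD-style."""
    for i, (label, value) in enumerate(readings.items()):
        text = f"{label}: {value}"
        cv2.putText(frame, text, (10, 30 + 30 * i),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    return frame

cap = cv2.VideoCapture(0)          # a webcam stands in for an eye-mounted camera
ok, frame = cap.read()
if ok:
    frame = annotate_frame(frame, {"temp_C": 212, "O2_pct": 19.4})  # made-up sensor values
    cv2.imwrite("hud_frame.png", frame)
cap.release()
```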

At 4:00 we meet two people who have had their arms replaced: Jason Henderson from West Virginia and Keiron McCammon from California. Both have prosthetic hands that approximate the human hand; they offer fine motor control, wrist rotation, and grip strength. They also have a few bonuses in the form of attachments at the wrist; Jason can put on fin-shaped scoops for more powerful swimming, for instance. The hands could certainly use somewhat better control, but because the prosthetics work by reading, via sensors, the electrical signals traveling down the natural muscles remaining in the arm, very fine motor skills (of the sort needed to type quickly on a keyboard as opposed to hunting and pecking) are somewhat limited. A direct neural connection would work best, but we’re not quite there yet.
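
For a sense of why surface-sensor control is coarse, here’s a toy sketch of the kind of mapping involved: average a window of (made-up) normalized EMG readings and turn it into a grip command. The thresholds and the three-command vocabulary are invented for illustration; real controllers are far more sophisticated.

```python
def grip_command(emg_samples, close_threshold=0.6, open_threshold=0.2):
    """Map a window of (made-up) normalized muscle-signal readings to a coarse grip command.

    Averaging a signal level from surface sensors is why control is coarse:
    the signal says roughly "how hard", not which finger to move.
    """
    level = sum(emg_samples) / len(emg_samples)
    if level > close_threshold:
        return "close_hand"
    if level < open_threshold:
        return "open_hand"
    return "hold"

print(grip_command([0.7, 0.8, 0.75]))   # close_hand
print(grip_command([0.1, 0.05, 0.12]))  # open_hand
```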

At 7:20 we meet Staff Sergeant Heath Calhoun, who lost his legs during service in Iraq in 2003. Both of his legs were amputated above the knee and replaced with prosthetics that monitor his movement some 50 times a second, automatically adjusting the hydraulic pressure at the knee and helping Heath keep his balance. Heath skis for the men’s U.S. Disabled Ski Team, and is able to attach a snowboard more directly to his remaining legs. He’s also into running, swimming, and biking. Despite the impressive array of attachments, Heath has a problem: his knees don’t provide power (as needed for, say, climbing stairs), and he isn’t able to use his thigh muscles the way people with natural legs do. This is an everyday hindrance that takes away from the enhanced ability to swap his prosthetic limbs for attachments that fit whatever activity he’s doing. David Jonsson (8:59) at Ossur Prosthetics out of Iceland has addressed this problem by creating the Power Knee, a prosthetic that does just what it sounds like: it provides power at the knee, allowing the user to walk up stairs and stand up more easily. Combining the two technologies, the array of attachments Heath has access to, coupled with the Power Knee, leads to prosthetic legs nearly as functional as natural legs, with the added advantage that they can be tailored to particular activities as needed.
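
The “50 times a second” control loop is easy to caricature in code. The sketch below is purely hypothetical: made-up damping rules and stand-in sensor and actuator functions, just to show the shape of a fixed-rate loop like the one the prosthetic runs.

```python
import time

def damping_for(knee_angle_deg, swing_phase):
    """Hypothetical rule: stiffer when the knee is loaded, freer during swing."""
    return 0.2 if swing_phase else min(1.0, 0.4 + knee_angle_deg / 90.0)

def control_loop(read_sensors, set_damping, hz=50, cycles=5):
    """Poll sensors and adjust hydraulic damping `hz` times per second."""
    period = 1.0 / hz
    for _ in range(cycles):
        angle, swing = read_sensors()
        set_damping(damping_for(angle, swing))
        time.sleep(period)

# Stand-ins for the real hardware interfaces:
control_loop(read_sensors=lambda: (15.0, False),
             set_damping=lambda d: print(f"damping set to {d:.2f}"))
```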

The Eyeborg Documentary doesn’t cover every possible prosthetic on the market, and it isn’t supposed to. What it does, and does very well, is show how technology as it exists today is already quite close to what we currently consider sci-fi, and indicate some of the technical challenges that must be overcome to bring prosthetic technology the rest of the way. The video bridges the gap between fantasy and reality, showing why it is reasonable to expect the technology to keep advancing. It also gives me a single place to direct anyone who wonders whether prosthetics will continue to grow more sophisticated, or who wants to see how close to a truly transhumanist future we currently are. So, the next time someone asks me, I’ll smile, sip my drink, and say “Google the Eyeborg Documentary; it’ll blow your mind.”

That’s What She Said!

May 3, 2011

Physorg.com writes that University of Washington researchers have created a computer program capable of making double entendre jokes based on words with a high “sexiness value,” including “hot” and “meat.” Despite the serious language analysis involved in such a silly exercise, I can’t help but think that this just means computers are a little closer to being able to ice their bros once they attain sentience.
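
The actual UW system relies on serious statistical language analysis; the toy version below only captures the cartoon of the idea, scoring a sentence by how many high-“sexiness” words it contains. The word list (beyond “hot” and “meat”) and the threshold are invented.

```python
# A toy caricature of the idea: count high-"sexiness" words and threshold.
# The real research used statistical language analysis; this list is invented.
SEXY_WORDS = {"hot", "meat", "banana", "wet"}

def thats_what_she_said(sentence, threshold=1):
    words = set(sentence.lower().strip(".!?").split())
    return len(words & SEXY_WORDS) >= threshold

print(thats_what_she_said("This meat is so hot"))   # True
print(thats_what_she_said("The meeting ran long"))  # False
```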

In other news, researchers at the University of Electro-Communications in Japan have created a device that lets you simulate a kiss with your partner of choice over the internet: as long as you routinely kiss with a straw in your mouth, it seems. However, with better technology and a device that looks less like a pencil sharpener, users in long-distance relationships (of the serious or more casual kind) could build some level of intimacy despite miles of separation. One of the inventors suggests that if a major pop star were to program their kiss into the device, it might be a new and powerful way of connecting with fans; subject to the technology getting better, that seems like a great point. And it’s not too difficult to imagine other remote-tactile applications. I think remote-tactile interfaces are going to become immensely popular expansions of the cyber-sex phenomenon that already exists, but the devices are going to have to be more realistic than a straw on spin cycle. Certainly the adult entertainment industry is throwing money at the idea, and has even coined a racy term for the technology: teledildonics.
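
The networking side of a remote-tactile gadget is the easy part. Here’s a minimal, hypothetical sketch of streaming a straw’s rotation angle to a partner’s device over UDP; the host name and message format are made up, and the “actuator” just prints.

```python
import json
import socket

# Sender: sample the local straw's rotation angle and stream it to the partner's device.
def send_angles(angles, host="partner.example.com", port=9999):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for angle in angles:
        sock.sendto(json.dumps({"angle_deg": angle}).encode(), (host, port))
    sock.close()

# Receiver: apply each incoming angle to the local actuator (here, just print it).
def receive_angles(port=9999):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    while True:
        data, _ = sock.recvfrom(1024)
        print("rotate straw to", json.loads(data)["angle_deg"], "degrees")

# Usage: send_angles([10, 25, 40]) on one machine; receive_angles() on the other.
```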

Finally, German researchers have created an eye-computer interface in which a sub-dermal power supply connects to a chip implanted under the retina, restoring some vision to the blind. No longer the stuff of miracles, restoring sight to the blind is both important in its own right (for obvious reasons) and a great step toward understanding how the brain processes visual information. With a little more understanding, and a little better tech, it should be possible to enhance the visual range of people with perfectly normal vision, with nifty (and useful) additions such as zoom, night vision, and wirelessly updated heads-up displays. After all, basic augmented reality already exists in goggles, the military is working on more advanced versions, and it seems just a hop, skip, and a jump from there to augmented reality that isn’t merely a heads-up display but a display superimposed on our field of vision by our biotic or cybernetic eyes.

Exciting stuff, from the silly to the useful.