Archive for May, 2011

Robot Language and Data Transmission Speeds

One demarcating line between sentient and non-sentient creatures is the ability to use language, or otherwise communicate, with other members of the same species. Dolphins, ants, bats, and dogs (among others) all seem to be able to communicate within their own species, and they obviously don't need any human language to do it. Thus far, only Koko the gorilla has been able to communicate directly with humans using a language that both understand (with apologies to owners of expressive pets.)

Now, researchers at the University of Queensland have created robots that are independently creating a unique language of their own. Finding human words too complicated for the efficient transfer of information, the robots generate a random sequence of sounds to name a new place, then share that sequence with other robots so that those robots can find the same place. Because each robot travels independently, each makes up its own random name for a place, and when two robots have both visited the same place but given it different names, they essentially negotiate with each other until they agree on a common one. Robot A, having traveled to some place, can tell Robot B (who has not been to that place) about it; Robot B can then tell Robot C about the place Robot A went to, even though Robot B has never been there itself. The ability of a robot to talk about places it has not visited (indeed, the robots seem able to speculate about places no robot has visited) seems like an important step toward more robust consciousness.
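
For the programmers in the audience, here is a minimal sketch of how such a naming game might work. The class and method names (Robot, visit, negotiate, hearsay) and the syllable list are my own inventions for illustration, not the Queensland team's actual algorithm:

```python
import random

SYLLABLES = ["ku", "zo", "pi", "re", "ja", "mo", "fe"]

def random_name():
    """Invent a pronounceable name from random syllables."""
    return "".join(random.choice(SYLLABLES) for _ in range(2))

class Robot:
    def __init__(self, label):
        self.label = label
        self.lexicon = {}  # place_id -> invented name

    def visit(self, place_id):
        """Name a place on first visit; reuse the name afterwards."""
        if place_id not in self.lexicon:
            self.lexicon[place_id] = random_name()
        return self.lexicon[place_id]

    def negotiate(self, other, place_id):
        """If two robots name the same place differently, settle on one name."""
        mine, theirs = self.visit(place_id), other.visit(place_id)
        if mine != theirs:
            agreed = random.choice([mine, theirs])
            self.lexicon[place_id] = other.lexicon[place_id] = agreed

    def hearsay(self, other, place_id):
        """Learn a place name second-hand, without ever visiting it."""
        self.lexicon[place_id] = other.lexicon[place_id]

a, b, c = Robot("A"), Robot("B"), Robot("C")
a.visit("kitchen"); b.visit("kitchen")
a.negotiate(b, "kitchen")   # A and B converge on one word for the kitchen
c.hearsay(b, "kitchen")     # C can now "talk about" a place it has never seen
print(a.lexicon, b.lexicon, c.lexicon)
```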

Considering that we don't have a perfect explanation of how language maps onto representations of the world for humans, recreating a similar experience within robots also allows for realistic hope that we could create a sentient robot without fully understanding how our own sentience works. There is danger in that as well: because we don't have a proven method of recognizing consciousness, researchers might inadvertently destroy a sentient creature (or a great many) without realizing what they've done. Hopefully we can create such a reliable test quickly or, barring that, hope that sentient robots are more forgiving than people tend to be.

If robots can create and share their own unique language, they will soon be able to transfer that information across vast distances much more quickly. Scientists in the UK have broken the single-laser data transmission record by transferring data at an astounding 26 terabits per second. That's roughly 700 DVDs, or 400 million phone calls, every second. Japanese researchers have used a multi-laser setup to transfer data at an even more amazing 109 terabits per second. Confirming some of what Kurzweil has been saying, data transmission rates, according to researchers, are nearly doubling every 18 months. That means in about six years we should be able to transfer nearly 400 terabits per second on a single laser using mathematical algorithms to encode and decode the data. My fuzzy math says that's about the data encoded in 10,000 DVDs transferred over a single laser every second – at that rate, much more complicated things like virtual reality ought to be transferable quickly enough to create decent representations of the 'real world' without too much lag (or stuttering, as the computer attempts to display the graphical information.)
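
For the curious, the fuzzy math can be checked in a few lines, assuming an 18-month doubling period and a 4.7 GB single-layer DVD (both of which are assumptions on my part):

```python
# Back-of-the-envelope projection of single-laser transmission rates.
current_rate_tbps = 26            # terabits per second (2011 record)
doubling_period_years = 1.5       # assumed ~18-month doubling time
years_ahead = 6

projected_tbps = current_rate_tbps * 2 ** (years_ahead / doubling_period_years)
dvd_size_bits = 4.7e9 * 8         # a single-layer DVD, ~4.7 GB

print(f"Projected rate: ~{projected_tbps:.0f} Tbit/s")                    # ~416 Tbit/s
print(f"DVDs per second: ~{projected_tbps * 1e12 / dvd_size_bits:,.0f}")  # ~11,000
```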

Both discoveries are impressive in their own right, but the combination of technologies means that robots can potentially begin to create their own language and then transfer that information to other robots at incredible speeds. Moreover, there is little reason why this language needs to be tied exclusively to location – robots should be able to use their language to describe anything in the world, and then transfer that information to other robots, who can speak about the thing described without ever having directly experienced it – not at all unlike what people can do.

Better still, as computer components miniaturize and become integrated with people, we too ought to be able to take advantage of these data transmission rates. Of course, a biological brain could never process that much information that quickly, but with some mechanical augmentation we might well be able to translate the optical signals into chemical reactions that the brain can understand.

As a formatting note, I’m going to move to a once-a-week publishing schedule. Although today is Saturday, I expect to publish on “Futurist Fridays” from now on.


Biotic Hands and Programmable Brains

May 20, 2011

Every so often I come across an article that really illustrates how near the future is. This week, I came across two of them.

The first article, by Singularity Hub, is about Milo and Patrick, two men who chose to have their hands removed and replaced with biotic hands. Both went through extensive surgery (Milo over a period of about 10 years), but because the surgeries were ineffective, they eventually decided that replacing their hands with biotic hands was a sensible alternative. These stories are important for at least three reasons.

First, both surgeries were elective procedures in the sense that neither man had his hand replaced to save his life or as part of a traumatic incident. Both men had biological hands, although they were damaged beyond reasonable use. Elective replacement of limbs is on tricky ethical ground because, for many people, replacing a limb is largely a procedure of last resort. Previously, limbs were removed to prevent the spread of gangrene, to stop an infection from spreading further, or for reasons otherwise necessary to protect the person's life. Here, for at least two men, each left with one hand far less functional than normal, replacing that damaged hand with a biotic hand more functional than the damaged one (but, seemingly, less functional than a 'normal' hand) made sense.

Second, if two men are able to choose to replace a biological hand with a more functional biotic hand, then others should be allowed to make the same decision. Despite the amazing progress made by Otto Bock (creator of the biotic hand), the hand still doesn't provide all of the benefits of a normal human hand, and offers only one benefit that a human hand does not: 360-degree range of motion at the wrist. However, the limitations on the hand are technological, and with a sensory feedback system like the one Otto Bock is currently working on, those limitations ought to be overcome quickly. Once a biotic hand is as functional as a biological hand, scientists ought to be able to craft improvements for it that include all sorts of functional upgrades (increased grip strength, additional sensory inputs like the ability to sense electric currents, more range of motion, enhanced durability) and some more cosmetic ones (perhaps a small storage space, or wifi, an OLED screen, or other patient-specific enhancements.) Very quickly, a biotic hand will be superior to a normal hand, and not just to a severely damaged one.

Finally, one cannot escape the naked truth that this is what people have had in mind when using the word "cyborg" for decades. Although it's true that eyeglasses, pacemakers, and seizure-reducing brain implants are all mechanical augmentations to a biological person, such that the term cyborg properly applies, few people tend to think of their uncle with the pacemaker as a cyborg. In part, that's because pacemakers are not visible, and even hearing aids are more like eyeglasses than biotic hands because they are removable and not an integral part of the human body. These hands, however, replace a major part of the body with a clearly mechanical device. The article is unclear about whether the hands come with some sort of synthetic skin that masks their metal servos and pivot points, but from the pictures there is just no mistaking that these men are now part robot.

We have reached a point where we can program mechanical devices so that they can communicate with the brain through the nervous system. But what about programming the brain itself?

Ed Boyden thinks he has created a solution for that too. I highly recommend watching the video yourself. The gist is that neurons in our brains communicate via electrical signals. By using engineered viruses to deliver DNA encoding light-sensitive photo-receptor proteins from algae into brain cells, Boyden can shine a light onto parts of the brain and only the cells carrying the photo-receptors activate, for as long as the light shines. By activating particular groups of neurons, Boyden can stop seizures, or overcome fear responses that would otherwise cripple an animal. Using fiber optics and genetic encoding, Boyden has found a way to direct the brain to act just as he wants: He has, in essence, figured out how to program a brain, or at least how to hack the brain to add or remove particular functionality.

Further, once photo-receptors are implanted and neurons can be activated via light, the human brain begins to look even more like a computer. By regulating light inputs, the cells carrying photo-receptors activate and produce particular effects depending on the type of cell they are. With a basic on-off activation scheme, neurons become a lot like the chips in our computers, which we activate with electricity turned either on or off. That on-off sequence is represented by 0's and 1's in binary code and scaled up into more complicated programming languages. With an implanted light array, programmers ought to be able to create flashing light sequences that affect the brain in preset ways, essentially writing code that controls parts of the brain. Even if scientists simply read the signals of the neurons, all of human experience ought to be reducible to groups of neurons firing or remaining dormant in complicated patterns. If that is so, then there is no reason why we couldn't download a stream of our experiences in complete detail, and perhaps eventually upload them as well.
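
To make the binary analogy concrete, here is a toy sketch of treating a light pulse train as bits that drive only the engineered cells. The neuron labels and pulse pattern are made up for illustration; this is not how an actual optogenetic interface is programmed:

```python
# Toy illustration of the on/off analogy: treat a light pulse train as bits
# and "activate" only the neurons engineered to carry photo-receptors.
# All names and patterns here are invented for illustration.

photo_sensitive = {"fear_circuit", "motor_neuron_7"}   # cells given photo-receptors
all_neurons = photo_sensitive | {"visual_cortex_3", "hippocampus_12"}

pulse_pattern = [1, 0, 1, 1, 0]   # light on (1) / off (0) over successive time steps

for step, light_on in enumerate(pulse_pattern):
    active = photo_sensitive if light_on else set()
    # Non-engineered neurons ignore the light entirely.
    print(f"t={step}: light={'on' if light_on else 'off'}, firing={sorted(active)}")
```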

The viral DNA distribution method has also been used to restore sight to mice with particular forms of blindness, apparently at the same level of functionality as mice who had normal sight their entire lives. This distribution system ought to be able to introduce whatever bits of DNA seem useful, essentially taking parts of the DNA from other animals and implanting them into human cells to augment our own biology. The color-shifting ability of a chameleon, the ultra-sensitive scent detection of a snake, or the incredible eyesight of a hawk are certainly products of their DNA, and conceptually ought to be transferable with the right encoding. Boyden is quick to point out that the technology is just getting started, but given the exponential increase in technological progress I suspect that we will see vast progress in these fields, and perhaps even human testing, in the next five to ten years.

Despite the exciting prospects of viral DNA introduction, I can’t help but flash back to the beginning of movies like Resident Evil and I Am Legend. Even for technophiles like myself, some of this technology is a little unnerving. That’s all the more reason to start taking a hard look at what seems like science fiction now and figure out what ethical lines we are prepared to draw, and what the legal consequences for stepping outside of those lines ought to be. Much of this technology, if used correctly, is a powerful enabler of humanity for overcoming the frailties of our haphazard evolutionary path. The very same technology used incorrectly, however, could have dramatic and catastrophic consequences for individual patients and for humanity as a whole.

These two stories indicate the dual tracks of transhumanism: The mechanical augmentation side replaces biological hands with mechanically superior components while the biological enhancement side introduces bits of foreign DNA into our own cells to provide additional functionality. If the rate of progress continues, both of these tracks ought to be commonplace within the next 20 years or so. At the point where we can reprogram the human brain and replace limbs with mechanically superior prosthetics, Kurzweil’s Singularity will be here.

I, for one, am very excited.

3-D Printing and Piracy

The New York Times recently wrote an article about MakerBot, a consumer-grade 3-D printer that uses plastic to create physical objects like piggy banks and Darth Vader heads. Although the technology for 3-D printing has been around for quite some time, MakerBot is one of the first 3-D printers affordable enough for a home user (although, at $1,300 or so, it is an expensive home purchase.) As the article states, MakerBot's creators encourage its users to share their designs, and one user's creation can be printed by another by transferring a data file, just like a picture or sound clip is transferred now.

One of the well-known side effects of the easy transferability of information created by the internet is piracy. Entire websites are dedicated to allowing users to share information, some of it legal, much of it copyrighted, and the users of such sites are often well ahead of attempts by the music, video, and game industries to stop illegal sharing. Complicated digital rights management software, verification codes, periodic authenticity checks, and other creative methods of ensuring that only legitimate users (read: purchasers) of software, games, and movies can use those products have had virtually no impact on the ability of relatively unsophisticated users to share and use unpaid-for software. Even lawsuits have not deterred the majority of illicit downloaders. Despite the best efforts of the various distribution agencies, software is often "cracked" (its protection scheme disabled) within days of release – sometimes the software is available on sharing websites even before its release to the general public. Worse, the protection schemes employed by the distribution companies sometimes cause legitimate users unforeseen trouble, from an inability to use the software they've purchased to the failure of other components of their systems, and, in some cases, digital rights management (DRM) software has compromised users' computers and left them vulnerable to hacks and viruses.

The distribution companies insist that piracy has cost their industries hundreds of millions of dollars. Pirates often point out record profits and suggest that the distribution companies are overstating their damages (particularly when pirates have been sued in court for hundreds of thousands of dollars in damages stemming from their sharing a few dozen songs.) Whichever group you sympathize with, there is no disputing that there has been -some- impact on the distribution agencies. However, because only information can take advantage of the internet's transferability, the impact of piracy has so far been limited to information goods: Music, movies, books, and the like are sharable, whereas physical objects like coffee mugs and televisions have been outside the scope of pirates.

MakerBot, however, introduces the potential for piracy of physical objects, and the existing permissive (if illegal) information-sharing culture suggests that users will readily take advantage of MakerBot's ability to create physical objects on demand. Whereas the CD-RW drive took the traditionally difficult task of pressing CDs for sale and allowed users to burn their own music discs for mere pennies, MakerBot and its ilk could put an end to the endless trinkets purchased by people for lack of any other way to acquire those objects. Certainly plastic cups, ping pong balls, and army men are printable by MakerBot, but if those are printable then why not Legos, Tupperware, and ice trays? If plastic can be printed now (and think of all the things made of plastic that you use), how long will it be before steel and glass are likewise printable? What about hybrid objects?

MakerBot introduces the possibility that, before too long, schematics for PlayStations will be shared just like the information for PlayStation games is now. When hybrid products are printable at home, interesting possibilities arise, including the potential to print a newer, more efficient printer. Technology, by creating the next product that will replace it, feeds on its own momentum, and, aside from the cost of materials and the capabilities of the printer, there is no reason why people could not print whatever television, home appliance, or vehicle suits their fancy. After all, if you could print your own Lamborghini for pennies on the dollar compared to the MSRP, wouldn't you at least consider the possibility?

If information piracy has taught us anything, it’s that distribution companies cannot stop the sharing of information. MakerBot’s creators have tried to encourage a culture of sharing, and their users seem to be buying into the idea. I doubt that Lego is shaking in their corporate boots just now, but perhaps they ought to consider making something other than a plastic block soon.

Robot Cars? In My Nevada?

May 16, 2011

Singularity Hub wrote an article last week about Nevada's Assembly Bill 511: A proposed law setting down some guidelines for robot cars in the state. Actually, most of the bill seems to be directed at plug-in electric cars, giving them a distinctive decal and access to HOV lanes, along with free parking at some public areas around Nevada. Section 8 covers autonomous cars, and directs the DMV to come up with licensing and operation guidelines for them. Singularity Hub does a great job breaking the bill down (although it's only a few pages, as written, and not too chock-full of legalese).

Instead of rehashing what Singularity Hub said, I’ll approach the bill from a different direction: Will people actually want these cars?

The benefits are clear. If the car is driving itself, then there is little reason to worry whether the person in the car is distracted by conversation, or texting, or drunk. People can read the paper on the way to work, or watch a movie on a road trip across the country. There are efficiency gains to be had as well. If owners can upload a list of locations, the car can optimize the route between them to save time and fuel. With a network managing all the autonomous cars, each individual car can be routed along the most efficient roads, bypassing accidents. With complete control, computers can drive each car more quickly, and more safely, than humans can: NASCAR drivers already know about drafting (driving closely behind another car to take advantage of the reduced wind resistance), but Grandma Smith certainly doesn't, and couldn't perform the maneuver safely if she did. As cars become less individualized and more integrated through central hubs, they begin to act more like a swarm, and can operate in tandem because the computer always knows what the car next to you is doing and is about to do. There are safety advantages too. Computers can fly airplanes so aerodynamically unstable that the plane would fall out of the sky without computer control. Given that, managing a blowout or an oil slick should be no problem for a computer-controlled car.
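
As a rough illustration of the trip-optimization idea, here is a toy nearest-neighbor ordering of a day's stops. It is only a sketch of the concept, not whatever routing a real autonomous-car network would actually use:

```python
import math

def nearest_neighbor_route(start, stops):
    """Order a day's errands greedily by distance -- a toy stand-in for
    the kind of trip optimization an autonomous car's network might do."""
    route, remaining, here = [start], list(stops), start
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(here, p))
        remaining.remove(nxt)
        route.append(nxt)
        here = nxt
    return route

home = (0.0, 0.0)
errands = [(5.0, 2.0), (1.0, 1.0), (4.0, 6.0)]   # grocery store, bank, office, say
print(nearest_neighbor_route(home, errands))
```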

In short, with a smart network, cars should be able to make the most of the roads and provide each person a better driving experience.

On the other hand, there won’t be much of a driving experience, will there?

I’m far from a driving junkie: Most times I just want to get where I’m going. All of my cars had automatic transmissions, including my newest Mustang. I promised myself that if I ever got a real sports car I’d get a manual transmission, but I hadn’t really felt the ‘thrill’ of a manual transmission. It just seemed like unnecessary work. Then I got my first motorcycle, and before too long I started to ‘get it.’ While I still enjoy driving my car, compared to a sportbike, driving a car is boring. Some of the experience has to do with the environment: The engine roaring, the wind enveloping you, the agility and acceleration of sheer horsepower available at a flick of the wrist. A computer ought to be able to control a motorcycle and the person riding it ought to have many of those same experiences.

The overall riding experience, however, will be very different just because the rider is no longer controlling the bike (or car.) It’s not -just- that a bike is fast and agile and that the engine roars: It’s that, for a short time, a rider really feels in control of the machine. The two merge, and the thoughts of the rider through small wrist movements and balance shifts translate into a crisp, quick turn. The rider gets a sense of control, and perhaps a little pride, from mastery over the machine. Computerized cars (or motorcycles) lose that sense of control because, by definition, autonomous cars will have a computer in control, and the passenger is literally just along for the ride.

I can imagine wanting an autonomous car for day-to-day driving. There is no excitement (hopefully), and no pride, in managing stop-and-go traffic. For about three weeks when I was 16, I took some pride in navigating stop-and-go traffic without incident, but since then driving in the city has mostly been a chore. I certainly don't need (or expect) much excitement on the way to school or the office, and so a car that just gets me there works out nicely. But sometimes I want to get out and ride. Sometimes I want to be the one in control, and hit some mountain roads for a burst of adrenaline.

Certainly some people will want to continue driving their cars, and that presents a new level of difficulty for autonomous cars. All of the integrated swarm analysis above depends on the computer being able to predict what the vehicles around each car will do: If a human is driving one of those cars, the computer will not be able to predict that car's actions. Drafting at 150 mph is (perhaps) efficient if computers are controlling the entire chain, but dangerous if a person slides into the middle of that chain and loses control of their car. There are, I think, only two ways of handling the computer-human mix: Either people will be required to buy autonomous cars, or human drivers will have to be separated from autonomous cars.

The first proposition seems unlikely. First, plenty of people (including, but not limited to, criminals) will want to drive themselves sometimes. They bought their vehicles, and for as long as those vehicles run, people will want to drive them. While Congress could probably pass a law that forbids the sale of human-controlled vehicles, enterprising folks will continue to make them (they should be substantially the same as autonomous cars except for a few electronic components.)

The second proposition is more likely, but expensive. At minimum, a barricade in the middle of every road would need to be built, along with separate merging ramps. Some way of identifying that each car on that side of the barricade is autonomous would be needed as well, though a short transmission from each car should keep the honest people honest. None of that construction is going to be cheap, although it might provide a national recovery effort, New Deal style, as tens or hundreds of thousands of people are hired to modernize the roads in the US.

Barring either of those propositions, people will need to accept that accidents will happen at a more frequent rate than if computers were controlling all of the cars, but still probably a lesser rate than now when humans are driving (almost) everything.

As with most technology, there seems to be a spectrum of ideals, with the major outliers marking the end points. At one extreme, people fear and dislike change, and are hesitant to adopt any new technology, especially one that involves a personalized activity like driving a vehicle. At the other extreme are the technophiles, eager to use the next new thing, undeterred by early bugs. Because I imagine the vast majority of people often just want to get to their destination safely, I suspect that the technophiles will jump on automated cars, the bugs will get worked out and their safety proven over the next five or ten years, and then the majority will begin to replace their cars with automated vehicles. Some subset will never get them, including the elderly (who, generally, are resistant to new technology), despite how useful automated cars would be to people reaching the end of their safe-driving years.

Autonomous cars are coming, and I hope the human driving experience remains after they get here.

The Appeal of Stem Cells

Okay, that was a terrible pun.

A few weeks ago, an appellate court overturned the ban on the testing of some human embryonic stem cells. The appellate court reasoned that the trial court incorrectly analyzed one factor related to the permissibility of an injunction when it decided that a stay on stem cell research would not seriously affect researchers. The appellate court instead held that enjoining research on stem cells caused "certain and substantial" harm through the researchers' loss of staff jobs and loss of investments in research and equipment. In addition, the appellate court found that the trial court misapplied an amendment to the 2009 federal budget. The Dickey-Wicker Amendment, in addition to being tremendously fun to say, prevents funds allotted for fiscal year 2009 from being used to support either the creation or (somewhat more nuanced) the destruction of human embryos for research. The Amendment does not, according to the appellate court, bar the allotment of fiscal year 2009 funds for research using embryonic stem cells obtained in other ways.

Now that the appellate court has ruled, the case will move back to the trial court for more arguments and a consistent verdict, which is not to say that the defendant researchers' victory is assured, only that the verdict cannot contradict the findings of the appellate court. Whatever the ruling, I don't think the issue will be particularly controversial going forward, for two reasons.

First, the Dickey-Wicker Amendment only constrained funds for FY2009, and money from 2009 is unlikely to still be lurking about. However, any future budget could contain a similar amendment, in which case more suits would almost certainly challenge the use of that money for stem cell research.

Second, and more importantly, the need for embryonic stem cells seems to be declining, and the moral bickering that accompanies embryonic stem cell research seems largely absent from adult stem cell research.

Embryonic stem cells are valuable because they are "pluripotent," which means that they can become any type of cell. At conception, two cells merge, and from that comes the entire complexity of a human being. As people age, however, cells become more specialized and reproduce only the same kind of cell: this helps to ensure that we don't grow an eyeball in our esophagus. Using some scientific magic, however, researchers have found a way to regress an adult cell back into a pluripotent cell that can then, with some additional tinkering, become whatever sort of cell scientists desire. Just yesterday, GEN reported that human liver cells derived from other adult cells suffered no loss of functionality, as compared to embryonic stem cells, when grafted onto mice. Importantly, the research showed no signs of tumors, and thus cancer, over the life of the experiment. Although more research needs to be done, the initial indications suggest that embryonic stem cells will not be needed much longer.

Why use stem cells, embryonic or otherwise, at all? There are several excellent reasons. Because stem cells are pluripotent, they (and other cells regressed into that state) can rejuvenate already existing organs. Where an individual with severe heart problems now needs to wait for a donor and hope that the heart doesn’t trigger an adverse immune reaction, stem cells and created pluripotent cells offer the same individual the opportunity to get a heart made from their own cells. Because the heart will be created with the patient’s own cells, the patient’s body will not reject the new organ. Further, when adult cells can be reliably turned into whatever sort of cell is needed, the patient need only have enough cells excised from a convenient part of their body to create the new pluripotent cells – no one else need give up their heart, and the wait list experience could be more like ordering a custom car and less like hoping some poor sap with a compatible heart kicks the bucket. Plainly, stem cells save lives and reduce scarcity – two very important goals.

Potentially, the same effect could be obtained mechanically, through an artificial heart rather than a heart created biologically from other cells. Despite that, there are excellent reasons for continuing stem cell research. For large organs, such mechanical replacement probably works out fine, although if the choice is available some people would probably prefer to remain (quasi) au naturel. Even for a large organ like skin, however, mechanical replacement can pose some problems. First, there are few synthetic skins available, and none of them are as good as human skin or have a proven ability to integrate with human skin. The closest that I've found is a synthetic skin designed for use with robots. This seems to be an all-or-nothing proposition, however: either all of the skin is replaced, or none of it is (if it can even work well on a human at all.) Even if a synthetic skin is invented that can integrate with human skin, it will need to look very much like human skin (and, particularly, the skin of the human to whom it is attached) to gain widespread acceptance: No one wants to look like a patchwork quilt if they can avoid it. For other types of cells, neurons and nervous system cells included, there is, as yet, no mechanical substitute.

Eventually, mechanical engineering will overtake biological engineering. Cells have been created over millions of years through a painstakingly slow process with only a few dozen raw materials to choose from. Everything is carbon-based because carbon is extremely adaptable. However, synthetic polymers, alloys, and substrates ought to provide many more choices in crafting future organs, and human experimentation means that changes evolution could not deliver – either because they are not conducive to survival (say, the purely cosmetic) or because they are not made of materials to which evolution has access – can be used to create organs with features designed by and for the user. It's very nice (and much appreciated) that our skin holds our insides in and provides us with tactile sensory information, but wouldn't it be nice if it were more resistant to wear (with, say, properties similar to Kevlar), or could maintain its shape without suffering from the cellular decay and loss of elasticity that leads to wrinkles? Perhaps we could imbue skin biologically with chameleon-like qualities, but wouldn't it be better if skin could displace light entirely, or if we could moderate pain sensations in a modular way, such that a cut that has been noticed could stop hurting while we repair it? People will largely choose mechanical engineering over biology for the same reasons that people use hearing aids and contact lenses: more functionality is better.

The benefits of mechanical engineering aside, biological engineering is largely more attainable at the moment. And even if mechanical engineering were up to par, some people would choose to remain quasi-natural (i.e., biological) instead of becoming a hybrid or a fully mechanical being. For that reason alone, stem cell research is worth pursuing: more options are always better.

On deck for Monday: Mechanical Cars.

That’s What She Said!

May 3, 2011

Physorg.com writes that University of Washington researchers have created a computer program capable of making double entendre jokes based on words with "high sexiness value," including "hot" and "meat." Despite the serious language analysis involved in such a silly exercise, I can't help but think that this just means that computers are a little closer to being able to ice their bros once they attain sentience.
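
The underlying idea – scoring words for "sexiness" and deciding whether the joke lands – can be sketched in a few lines. The word list and threshold below are invented for illustration; the real UW system relies on much more serious corpus statistics:

```python
# Toy "that's what she said" detector: score a sentence by how many of its
# words carry a high "sexiness value."  Word scores and threshold are made up.
SEXINESS = {"hot": 0.9, "meat": 0.8, "wet": 0.8, "hard": 0.7, "big": 0.6}
THRESHOLD = 1.2

def thats_what_she_said(sentence):
    score = sum(SEXINESS.get(word.strip(".,!?"), 0.0)
                for word in sentence.lower().split())
    return score >= THRESHOLD

print(thats_what_she_said("This meat is so hot"))   # True
print(thats_what_she_said("The meeting ran long"))  # False
```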

In other news, researchers at the University of Electro-Communications in Japan have created a device that lets you simulate a kiss with your partner of choice over the internet – as long as you routinely kiss with a straw in your mouth, it seems. However, with better technology and a less pencil-sharpener-looking device, users in long-distance relationships (of the serious or more casual kind) could build some level of intimacy despite miles of separation. One of the inventors suggests that if a major pop star were to program their kiss into the device, it might be a new and powerful way of connecting with fans; subject to the technology getting better, that seems like a great point. And it's not too difficult to imagine other remote-tactile applications. I think that remote-tactile interfaces are going to become immensely popular expansions of the general cyber-sex phenomenon that currently exists, but the devices are going to have to be more realistic than a straw on spin cycle. Certainly the adult entertainment industry is throwing money into the idea, and has even created a racy term for the technology: teledildonics.

Finally, German researchers have created an eye-computer interface in which a subdermal power supply connects to a chip implanted under the retina to restore some vision to the blind. No longer the stuff of miracles, restoring sight to the blind is both important in its own right (for obvious reasons) and a great step toward understanding how the brain processes visual information. With a little more understanding, and a little better tech, it should be possible to enhance the visual range of people with perfectly normal vision, including such nifty (and useful) additions as zoom, night vision, and wirelessly updated heads-up displays. After all, basic augmented reality already exists in goggles, the military is working on more advanced technology, and it seems just a hop, skip, and a jump to augmented reality that is not just a heads-up display but a display superimposed onto our field of vision by our biotic or cybernetic eyes.

Exciting stuff, from the silly to the useful.