Thursday, June 24, 2010

How the Avatar Machine Works

Introduction to How the Avatar Machine Works

Unless you've had an out-of-body experience, you've spent your whole life experiencing everything from the first-person perspective. But what if you could step outside your body temporarily and experience life from a more remote point of view? That's a question that interested a design student named Marc Owens, who invented a device called the Avatar Machine in an attempt to create this kind of experience.

In terms of computer games and virtual environments, an avatar is a digital representation of a real human being. Avatars can be almost anything as long as an actual person controls them. By contrast, digital characters or creatures under the control of a computer are called bots.

If you've ever played a video game like World of Warcraft (WOW), Tomb Raider or Grand Theft Auto, you know about games that use a third-person perspective. In a third-person perspective game, you can see the character you're playing. In most games like this, the standard camera position is above and behind the character, as if you were floating a few feet behind it. Owens designed the Avatar Machine to let users see themselves from this perspective as if they were characters within a virtual environment.

Owens has said that the MMORPG WOW specifically inspired his invention [source: Régine Debatty]. The device only really requires a harness and helmet, but Owens didn't stop there -- he designed a full costume. He added long spikes on the helmet as well as a red padded section on the upper back of the torso, brown arm bands, black gloves and puffy white pants. The costume resembles the sort of fantasy characters you'd encounter in WOW.

About the Inventor
Marc Owens graduated from the University of Brighton with a B.A. in 3-D Design before continuing his education at the Royal College of Art. In his studies, he concentrates on projects related to concepts like identity and ethics. He has designed several inventions that help him observe how human interaction with objects can affect emotions. These include an obsessive-compulsive disorder (OCD) light switch (a digital device that counts the number of times someone turns a light on or off) and a silicon surgical light (part of a visceral, interactive experience that tests how people react to cutting into an inanimate object).


Avatar Machine Hardware

Apart from the costume elements of the invention, which were included to give the Avatar Machine the appearance of a video game character, the device includes:

* A harness that straps to the user's torso
* Three two-meter-long aluminum rods that attach to the user's harness at the lower back and shoulders, forming a tripod
* A wide-angle pinhole video camera at the end of the tripod, not to be confused with a simple pinhole camera used in photography
* A head-mounted display (HMD) contained in a helmet
* A power supply attached to the harness

Owens' wide-angle pinhole video camera was an excellent choice for this type of application. Not only are pinhole video cameras small -- they are often used in covert surveillance operations because they're hard to spot -- but they also have infinite fixed focus with the proper lens. You can't adjust the focus on a fixed-focus camera, but the camera is designed so that all objects beyond a certain point remain in focus no matter how far away they might be.
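To put rough numbers on that "certain point," photographers use the hyperfocal distance: focus a lens there, and everything from about half that distance to infinity looks acceptably sharp. Here is a quick worked example; the focal length, aperture and circle-of-confusion values are illustrative guesses for a small board camera, not the specs of Owens' actual hardware.

```python
# Hyperfocal distance for a small fixed-focus lens (illustrative values).
f = 3.6      # focal length in mm, typical of a tiny board camera (assumed)
N = 2.8      # aperture f-number (assumed)
c = 0.005    # acceptable circle of confusion in mm for a small sensor (assumed)

H = f**2 / (N * c) + f  # hyperfocal distance in mm
print(f"hyperfocal: {H / 1000:.2f} m; sharp from {H / 2000:.2f} m to infinity")
# -> hyperfocal: 0.93 m; sharp from 0.46 m to infinity
```

With numbers like these, everything from roughly half a meter out stays in focus, which is exactly the behavior the Avatar Machine needs.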

The wide-angle lens on the camera is necessary to make sure that the user can see enough of him or herself in the frame. Wide-angle lenses have shorter focal lengths than normal lenses. The focal length is the distance between the lens and the charge-coupled device (CCD), a semiconductor image sensor in the camera that creates video images by interpreting the intensity of light coming through the lens.

The shorter focal length allows Owens to set his camera in a fixed position only three feet behind the user. Without the wide-angle lens, he would have had to make the tripod longer to get the same angle of view, making the Avatar Machine more difficult to use.
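As a rough sanity check on that trade-off, the horizontal angle of view of a simple rectilinear lens follows directly from the focal length and the sensor width. This sketch uses assumed numbers (a 4.8 mm-wide sensor, typical of a 1/3-inch CCD), not Owens' actual specifications.

```python
import math

def angle_of_view_deg(sensor_width_mm, focal_length_mm):
    """Horizontal angle of view of an ideal rectilinear lens."""
    return 2 * math.degrees(math.atan(sensor_width_mm / (2 * focal_length_mm)))

SENSOR_WIDTH = 4.8  # mm, assumed 1/3-inch CCD
for focal_length in (8.0, 4.0, 2.0):
    print(f"{focal_length} mm lens -> "
          f"{angle_of_view_deg(SENSOR_WIDTH, focal_length):.0f} degree view")
# 8.0 mm lens -> 33 degree view
# 4.0 mm lens -> 62 degree view
# 2.0 mm lens -> 100 degree view
```

Halving the focal length roughly doubles the angle of view, which is why a wide-angle lens lets the camera sit close behind the user and still keep his or her whole body in the frame.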

Owens' head-mounted display is a helmet that contains a monitor. A user wearing the Avatar Machine sees a video image of his or her back. The helmet doesn't have any integrated tracking systems, which means that if the user turns his or her head, the point of view doesn't change. He or she can only change the point of view by moving forward, backward or turning his or her torso in another direction.

While the padded suit isn't necessary for the invention to work, it provides a helpful layer of protection for clumsy users. Owens observed that most people took a few minutes to get used to the Avatar Machine's interface before moving around comfortably in it.

Avatar Machine Applications

Owens' fascination regarding the boundaries between virtual and physical environments was instrumental in his design of the Avatar Machine. Much of his research focuses on how people behave differently within a virtual space from how they do in real life. The Avatar Machine blurs the lines between the digital and physical worlds, particularly for people who are familiar with third-person video games.

Sociologists and psychologists might find the Avatar Machine useful when studying human behavior. Owens theorizes that people might feel a diminished sense of social responsibility when wearing the Avatar Machine. He believes users might also feel a sense of invincibility when viewing themselves from a third-person perspective. Through the Avatar Machine, users might experience a sense of disconnection from their physical presence.

Owens thinks that the disconnected feeling users experience could lead to interesting behaviors, many of which are rare in real life but common in the world of video games. Users might become less self-conscious and behave in ways they normally wouldn't in public. This behavior could range from dancing and striking silly poses to acting like a bully toward other people.

Another application for the Avatar Machine is as a form of entertainment. The Avatar Machine has the potential to become a high-tech game or toy. When users tested the Avatar Machine, Owens observed that they seemed to enjoy possessing the physical characteristics and traits of a creature much larger and more powerful than themselves. He also saw that many of them had fun observing the reactions of passers-by. It's not hard to imagine the Avatar Machine becoming a curiosity at a high-tech amusement park.

Out of Body, Back in Five Minutes
Scientists are using techniques similar to the Avatar Machine to induce out-of-body experiences in normal, healthy subjects. At École Polytechnique Fédérale, an education, research and technology institution in Lausanne, Switzerland, scientists used virtual reality gear to give people the sensation of being in another body. In one version of the experiment, each subject entered a room and put on a pair of virtual reality goggles. A camera behind the subject projected a virtual image of the person as if he or she were standing six feet in front of him or herself. A scientist would gently touch the subject using a stick while the subject viewed a video feed of a virtual stick stroking the illusory image of his or her body. Many subjects reported a sense of being inside the illusory bodies several feet away. Similar experiments have shown that when different sensory data -- like sight and touch -- don't match up, the sense of being within a physical body can disappear.

Sunday, June 20, 2010

How can doctors use virtual reality to treat phobias?

How Virtual Medicine Works

Millions of people suffer from phobias that limit their activities and negatively impact their lives. Many seek psychological treatment in order to manage or conquer their fears. For years, a popular form of treatment was exposure therapy, in which a therapist would expose a patient to stimuli related to his fear in a controlled environment. In many cases, patients would learn to manage their anxiety through repeated exposure coupled with encouragement from a therapist.

Exposure therapy is time consuming. Often it's also expensive and inconvenient, and it can compromise patient confidentiality. For example, treating a patient with aerophobia, or the fear of flying, usually involves a trip to the airport. It might take several visits for a therapist and patient to make their way through security to a gate. Eventually both have to get on a plane and fly to a destination. Now that you have to be a ticketed passenger to pass through security at airports, it can be prohibitively expensive to treat a patient with exposure therapy. And because patients and therapists travel together, the public has the opportunity to see the therapy in action, which compromises the patient's confidentiality.



Virtually Better, Inc. uses virtual therapy to treat a patient's fear of flying.

One alternative to traditional exposure therapy is virtual reality exposure therapy. This kind of therapy uses a virtual reality unit to simulate situations that cause anxiety in phobia patients. It has several advantages over traditional therapy. Doctors don't have to leave their offices. Scheduling treatment is easier. It's less expensive in the long run. And patients are often more willing to participate in a program they know will allow them to deal with their fears in a nonphysical setting. Since patients can undergo therapy inside the doctor's office, confidentiality isn't an issue.

Dr. Larry Hodges, a virtual reality computer scientist at the University of North Carolina at Charlotte, became interested in a possible therapeutic application of VR technology in the early 1990s. He approached Dr. Barbara Rothbaum, a professor of psychiatry at Emory University, and together they collaborated on a project that would test VR technology's efficacy in recreating patients' fears. They decided to design a simulation for patients suffering from acrophobia, or a fear of heights. Dr. Hodges felt that it would be relatively easy to create a program giving the illusion of height compared to other, more complex fears.

Dr. Hodges and his team worked with Dr. Rothbaum and volunteer patients to determine what stimuli were particularly powerful. Volunteers would wear a head-mounted display (HMD) that would create the illusion that they were on a tall ledge. Going into the project, Hodges and Rothbaum weren't certain that they would get the same reactions from volunteers in a virtual environment as they would in a real one, nor were they sure that progress made in a virtual environment would translate into the real world.

Virtual Medicine Treatments

Very early in the simulation, Dr. Rothbaum observed that the volunteer patients were exhibiting classic signs of anxiety, including an accelerated heart rate and shortness of breath. Rothbaum and Hodges had successfully demonstrated that a virtual environment could evoke real physical reactions from users. Dr. Rothbaum began to use the simulations to work with patients as if they were undergoing regular exposure therapy. Before long, several of the volunteers reported they had purposefully sought out experiences in real situations that tested their fear. These were patients who normally would have avoided these situations at all costs before trying the virtual therapy.

After some additional research, Hodges created the company Virtually Better, Inc. The company designs and sells virtual reality systems that accurately recreate several different classic phobia situations, including social phobias involving crowds of people. Now a therapist can take a patient on a virtual flight, go on a virtual elevator ride or have the patient give a speech in front of a virtual crowd, all without leaving the office or compromising patient confidentiality.



Virtually Better may create an environment such as this to help patients overcome their fear of heights.

The company also offers scenarios to treat veterans with post-traumatic stress disorder. A program designed for Vietnam veterans can recreate a couple of different situations that were common for most soldiers in the conflict. One simulates a helicopter ride over jungle landscape, while the other puts the patient in the middle of a virtual clearing. Engineers created both scenarios based on veterans' descriptions of situations that triggered their anxiety.

In addition to treating fears and anxieties, Virtually Better uses VR technology to help treat addiction. These scenarios put the user in a situation where characters within the virtual environment are indulging in alcohol or drugs. While it might seem strange to think a virtual character can trigger addiction cravings, Dr. Hodges says their research shows that once someone is habituated to a virtual environment (meaning the user feels as if he's inside and a part of the virtual world), he reacts as if it were the real world. In fact, according to some research projects, virtual characters can impact a real person as if they were actually real. Dr. Hodges says that the gender of a character seems to make a bigger difference in users' reactions than whether the character they see is virtual or real.

Virtually Better has sold units to therapists around the world and continues to develop new therapy applications of VR technology. Dr. Hodges is also continuing his research in the VR field, studying how virtual persons and environments can impact human users.

How 3-D Glasses Work

Although the 1950s are most often considered the 3-D movie decade, the first feature length 3-D film, "The Power of Love," was made in 1922. Since that time the use of 3-D technology in theaters and on television has drifted in and out of mainstream popularity. But, whether you've used them for the big screen or at home in front of your television, you have to admit 3-D glasses are incredibly cool.

They make the movie or television show you're watching look like a 3-D scene that's happening right in front of you. With objects flying off the screen and careening in your direction, and creepy characters reaching out to grab you, wearing 3-D glasses makes you feel like you're a part of the action -- not just someone sitting there watching a movie. Considering they have such high entertainment value, you'll be surprised at how amazingly simple 3-D glasses are.

Saturday, June 19, 2010

Teleportation

Introduction to How Teleportation Will Work


Ever since the wheel was invented more than 5,000 years ago, people have been inventing new ways to travel faster from one point to another. The chariot, bicycle, automobile, airplane and rocket have all been invented to decrease the amount of time we spend getting to our desired destinations. Yet each of these forms of transportation shares the same flaw: They require us to cross a physical distance, which can take anywhere from minutes to many hours depending on the starting and ending points.

But what if there were a way to get you from your home to the supermarket without having to use your car, or from your backyard to the International Space Station without having to board a spacecraft? There are scientists working right now on such a method of travel, combining properties of telecommunications and transportation to achieve a system called teleportation. In this article, you will learn about experiments that have actually achieved teleportation with photons, and how we might be able to use teleportation to travel anywhere, at any time.

Teleportation involves dematerializing an object at one point, and sending the details of that object's precise atomic configuration to another location, where it will be reconstructed. What this means is that time and space could be eliminated from travel -- we could be transported to any location instantly, without actually crossing a physical distance.

Many of us were introduced to the idea of teleportation, and other futuristic technologies, by the short-lived Star Trek television series (1966-69) created by Gene Roddenberry. Viewers watched in amazement as Captain Kirk, Spock, Dr. McCoy and others beamed down to the planets they encountered on their journeys through the universe.

In 1993, the idea of teleportation moved out of the realm of science fiction and into the world of theoretical possibility. It was then that physicist Charles Bennett and a team of researchers at IBM confirmed that quantum teleportation was possible, but only if the original object being teleported was destroyed. This revelation, first announced by Bennett at an annual meeting of the American Physical Society in March 1993, was followed by a report on his findings in the March 29, 1993 issue of Physical Review Letters. Since that time, experiments using photons have proven that quantum teleportation is in fact possible.

Teleportation: Recent Experiments

In 1998, physicists at the California Institute of Technology (Caltech), along with two European groups, turned the IBM ideas into reality by successfully teleporting a photon, a particle of light. The Caltech group was able to read the quantum state of a photon, send this information across 3.28 feet (about 1 meter) of coaxial cable and create a replica of the photon. As predicted, the original photon no longer existed once the replica was made.

In performing the experiment, the Caltech group was able to get around the Heisenberg Uncertainty Principle, the main barrier for teleportation of objects larger than a photon. This principle states that you cannot simultaneously know the exact position and momentum of a particle. But if you can't know the position of a particle, then how can you teleport it? In order to teleport a photon without violating the Heisenberg Principle, the Caltech physicists used a phenomenon known as entanglement. At least three photons are needed to achieve quantum teleportation:

* Photon A: The photon to be teleported
* Photon B: The transporting photon
* Photon C: The photon that is entangled with photon B

If researchers tried to look too closely at photon A without entanglement, they would bump it, and thereby change it. By entangling photons B and C, researchers can extract some information about photon A, and the remaining information would be passed on to B by way of entanglement, and then on to photon C. When researchers apply the information from photon A to photon C, they can create an exact replica of photon A. However, photon A no longer exists as it did before the information was sent to photon C.
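For readers who want the math behind that handoff, here is the standard textbook form of the protocol (a general result, not anything specific to the Caltech setup). Write photon A's unknown state as $|\psi\rangle_A = \alpha|0\rangle + \beta|1\rangle$, and let B and C share the entangled pair $|\Phi^+\rangle_{BC} = (|00\rangle + |11\rangle)/\sqrt{2}$, where $|\Phi^\pm\rangle = (|00\rangle \pm |11\rangle)/\sqrt{2}$ and $|\Psi^\pm\rangle = (|01\rangle \pm |10\rangle)/\sqrt{2}$ are the four Bell states. The three-photon state can then be rewritten in the Bell basis of A and B:

$$
|\psi\rangle_A \otimes |\Phi^+\rangle_{BC}
= \tfrac{1}{2}\Big[\,
|\Phi^+\rangle_{AB}\,(\alpha|0\rangle + \beta|1\rangle)_C
+ |\Phi^-\rangle_{AB}\,(\alpha|0\rangle - \beta|1\rangle)_C
+ |\Psi^+\rangle_{AB}\,(\beta|0\rangle + \alpha|1\rangle)_C
+ |\Psi^-\rangle_{AB}\,(\alpha|1\rangle - \beta|0\rangle)_C
\,\Big]
$$

Measuring photons A and B in the Bell basis collapses C into one of the four bracketed states. Each is a simple, known twist of the original $|\psi\rangle$, so two classical bits identifying the measurement outcome tell the receiver which correction turns photon C into an exact copy, while the measurement itself scrambles the state of photon A -- which is why the original must be destroyed.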

In other words, when Captain Kirk beams down to an alien planet, an analysis of his atomic structure is passed through the transporter room to his desired location, where a replica of Kirk is created and the original is destroyed.

In 2002, researchers at the Australian National University successfully teleported a laser beam.

The most recent successful teleportation experiment took place on October 4, 2006 at the Niels Bohr Institute in Copenhagen, Denmark. Dr. Eugene Polzik and his team teleported information stored in a laser beam into a cloud of atoms. According to Polzik, "It is one step further because for the first time it involves teleportation between light and matter, two different objects. One is the carrier of information and the other one is the storage medium" [source: CBC]. The information was teleported about 1.6 feet (half a meter).

Quantum teleportation holds promise for quantum computing. These experiments are important in developing networks that can distribute quantum information. Professor Samuel Braunstein, of the University of Wales, Bangor, called such a network a "quantum Internet." This technology may be used one day to build a quantum computer that has data transmission rates many times faster than today's most powerful computers.

Human Teleportation

We are years away from the development of a teleportation machine like the transporter room on Star Trek's Enterprise spaceship. The laws of physics may even make it impossible to create a transporter that enables a person to be sent instantaneously to another location, which would require travel at the speed of light.

For a person to be transported, a machine would have to be built that can pinpoint and analyze all of the 10^28 atoms that make up the human body. That's more than a trillion trillion atoms. This machine would then have to send this information to another location, where the person's body would be reconstructed with exact precision. Molecules couldn't be even slightly out of place, lest the person arrive with some severe neurological or physiological defect.
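A back-of-the-envelope calculation (my own illustration, with deliberately generous assumptions) hints at the scale of the problem. Even if a single bit were enough to describe each atom, and the machine could transmit a terabit ($10^{12}$ bits) every second:

$$
\frac{10^{28}\ \text{bits}}{10^{12}\ \text{bits/s}} = 10^{16}\ \text{s} \approx 300\ \text{million years}
$$

Real descriptions would need far more than one bit per atom, so any practical transporter would demand data rates almost unimaginably beyond today's networks.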

In the Star Trek episodes, and the spin-off series that followed, teleportation was performed by a machine called a transporter. This was basically a platform that the characters stood on, while Scotty adjusted switches on the transporter room control boards. The transporter machine then locked onto each atom of each person on the platform, and used a transporter carrier wave to transmit that matter to wherever the crew wanted to go. Viewers watching at home witnessed Captain Kirk and his crew dissolving into a shiny glitter before disappearing, rematerializing instantly on some distant planet.

If such a machine were possible, it's unlikely that the person being transported would actually be "transported." It would work more like a fax machine -- a duplicate of the person would be made at the receiving end, but with much greater precision than a fax machine. But what would happen to the original? One theory suggests that teleportation would combine genetic cloning with digitization.

In this biodigital cloning, tele-travelers would have to die, in a sense. Their original mind and body would no longer exist. Instead, their atomic structure would be recreated in another location, and digitization would recreate the travelers' memories, emotions, hopes and dreams. So the travelers would still exist, but they would do so in a new body, of the same atomic structure as the original body, programmed with the same information.

But as with all technologies, scientists are sure to continue improving on the ideas of teleportation, perhaps to the point that we may one day be able to avoid such harsh methods. One day, one of your descendants could finish up a work day at a space office above some faraway planet in a galaxy many light-years from Earth, tell his or her wristwatch that it's time to beam home for dinner on planet X below and sit down at the dinner table as soon as the words leave his or her mouth.

How Light Works

We see things every day, from the moment we get up in the morning until we go to sleep at night. We look at everything around us using light.

We appreciate kids' crayon drawings, fine oil paintings, swirling computer graphics, gorgeous sunsets, a blue sky, shooting stars and rainbows. We rely on mirrors to make ourselves presentable, and sparkling gemstones to show affection.



But did you ever stop to think that when we see any of these things, we are not directly connected to it? We are, in fact, seeing light -- light that somehow left objects far or near and reached our eyes. Light is all our eyes can really see.

The other way that we encounter light is in devices that produce light -- incandescent bulbs, fluorescent bulbs, lasers, lightning bugs, the sun. Each one uses a different technique to generate photons.

Ways of Thinking About Light

You have probably heard two different ways of talking about light:

* There is the "particle" theory, expressed in part by the word photon.
* There is the "wave" theory, expressed by the term light wave.

From the time of the ancient Greeks, people have thought of light as a stream of tiny particles. After all, light travels in straight lines and bounces off a mirror much like a ball bouncing off a wall. No one had actually seen particles of light, but even now it's easy to explain why that might be. The particles could be too small, or moving too fast, to be seen, or perhaps our eyes see right through them.

The idea of the light wave came from Christiaan Huygens, who proposed in the late 1600s that light acted like a wave instead of a stream of particles. In 1807, Thomas Young backed up Huygens' theory by showing that when light passes through a very narrow opening, it can spread out and interfere with light passing through another opening. Young shined a light through a very narrow slit. What he saw was a bright bar of light that corresponded to the slit. But that was not all he saw. Young also perceived additional light, not as bright, in the areas around the bar. If light were a stream of particles, this additional light would not have been there. This experiment suggested that light spread out like a wave. In fact, a beam of light radiates outward at all times.
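The textbook condition behind Young's bright and dark bars (a standard result, added here for reference rather than taken from the original article): for two openings separated by a distance $d$, light leaving at angle $\theta$ interferes constructively, producing a bright band, when the path difference is a whole number of wavelengths,

$$
d\sin\theta = m\lambda \qquad (m = 0, 1, 2, \ldots)
$$

and destructively, producing a dark band, when $d\sin\theta = (m + \tfrac{1}{2})\lambda$. A stream of simple particles would offer no such alternating pattern.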

Albert Einstein advanced the theory of light further in 1905. Einstein considered the photoelectric effect, in which ultraviolet light hits a surface and causes electrons to be emitted from the surface. Einstein's explanation for this was that light was made up of a stream of energy packets called photons.
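In Einstein's picture (again, the standard relations, included for reference), each photon carries an energy proportional to the light's frequency $f$, and an ejected electron leaves the surface with at most that energy minus the work function $\phi$, the energy needed to free it:

$$
E = hf, \qquad K_{\max} = hf - \phi
$$

where $h$ is Planck's constant. This explains the puzzle of the photoelectric effect: dim ultraviolet light (high $f$) can eject electrons while bright red light (low $f$) cannot, because what matters is the energy per photon, not the total amount of light.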

Modern physicists believe that light can behave as both a particle and a wave, but they also recognize that either view is a simple explanation for something more complex. In this article, we will talk about light as waves, because this provides the best explanation for most of the phenomena our eyes can see.

What is Light?

Why is it that a beam of light radiates outward, as Young proved? What is really going on? To understand light waves, it helps to start by discussing a more familiar kind of wave -- the one we see in the water. One key point to keep in mind about the water wave is that it is not made up of water: The wave is made up of energy traveling through the water. If a wave moves across a pool from left to right, this does not mean that the water on the left side of the pool is moving to the right side of the pool. The water has actually stayed about where it was. It is the wave that has moved. When you move your hand through a filled bathtub, you make a wave, because you are putting your energy into the water. The energy travels through the water in the form of the wave.

All waves are traveling energy, and they are usually moving through some medium, such as water. You can see a diagram of a water wave in Figure 1. A water wave consists of water molecules that vibrate up and down at right angles to the direction of motion of the wave. This type of wave is called a transverse wave.

Light waves are a little more complicated, and they do not need a medium to travel through. They can travel through a vacuum. A light wave consists of energy in the form of electric and magnetic fields. The fields vibrate at right angles to the direction of movement of the wave, and at right angles to each other. Because light has both electric and magnetic fields, it is also referred to as electromagnetic radiation.

Light waves come in many sizes. The size of a wave is measured as its wavelength, which is the distance between any two corresponding points on successive waves, usually peak-to-peak or trough-to-trough (Figure 1). The wavelengths of the light we can see range from 400 to 700 billionths of a meter. But the full range of wavelengths included in the definition of electromagnetic radiation extends from less than one billionth of a meter, as in gamma rays, to centimeters and meters, as in radio waves. Light is one small part of the spectrum.
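Wavelength and frequency are tied together by the speed of light, $c = f\lambda$. A small worked example (my own, using the visible-range endpoints quoted above):

```python
# Convert the edges of the visible spectrum from wavelength to frequency.
C = 299_792_458  # speed of light in a vacuum, m/s

for name, wavelength_nm in (("violet edge", 400), ("red edge", 700)):
    frequency_hz = C / (wavelength_nm * 1e-9)
    print(f"{name}: {wavelength_nm} nm -> {frequency_hz:.2e} Hz")
# violet edge: 400 nm -> 7.49e+14 Hz
# red edge: 700 nm -> 4.28e+14 Hz
```

Visible light, in other words, oscillates hundreds of trillions of times per second; radio waves oscillate far more slowly and gamma rays far faster.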

Holographic Environments

If you've ever watched "Star Trek," you may remember seeing the crew of the Starship Enterprise live out their fantasies in a room called the holodeck. The holodeck was a giant, holographic projection room that allowed the crew to touch and interact with projections as if they were in a big video game. Scientists today are developing a new communications technology that will allow you and your friends to interact inside a simulated environment even if you are thousands of miles apart.

Tele-immersion is the scientific community's answer to the holodeck, and most of the basic network components needed to develop it are already in place.


Introduction to How Holographic Environments Will Work

By combining cameras and Internet telephony, videoconferencing has allowed the real-time exchange of more information than ever without physically bringing each person into one central room. Tele-immersion takes videoconferencing to the next level. It will create a central, simulated environment that will allow everyone to come together in one virtual room, and no one will have to leave their physical location to do so.

Tele-immersion: Beyond Videoconferencing

In the office of the future, you won't call people with a phone. You will command your computer to call a particular person. "Computer, call Joe at the London office." Suddenly, one of your office walls will flicker and Joe will be sitting across from you as if he were right there. On the other end, Joe will experience the same immersive connection. That's the idea behind tele-immersion: bringing two or more distant people together in a single, simulated office setting. Business travel will be practically unnecessary.

This type of communication will take us beyond the limits of videoconferencing. Have you ever used a Webcam to teleconference with another person? If you have, then you've seen that videoconferencing via the Internet is not a perfect form of communication. The image is close to real-time, but there are delays that cause jerky video. Also, if someone walks out of view of that one camera, you can no longer see the person. With tele-immersion, people can't walk out of the view of the camera. In fact, you can peer around their office just by looking at the display screen from different angles. It's like having a window to look through.

Holographic environments will be generated by computers with computing speeds thousands of times faster than your PC. There are several steps to constructing a holographic environment:

1. The computer recognizes the presence and movements of people and objects.
2. The computer tracks those images.
3. The computer projects those images on a stereo-immersive surface.

There are several groups working together on the National Tele-Immersion Initiative (NTII) to bring us all together in these virtual environments. In May 2000, researchers at the University of North Carolina (UNC), the University of Pennsylvania and Advanced Networks and Services reached a milestone in developing this technology. A user sitting in an office at UNC in Chapel Hill, NC, was able to see lifelike, three-dimensional images of colleagues hundreds of miles away, one in Philadelphia and the other in New York.

Tele-immersion may sound similar to virtual reality, but there are key differences between the two technologies. Virtual reality allows you to move in a computer-generated 3-D environment, but tele-immersion, for now, can only create a 3-D environment that you can see -- you can't interact in it. The next step is to merge tele-immersion and virtual reality to allow users to alter the scenes that they see.

The applications for immersive holographic environments are endless. Imagine a video game free of joysticks, in which you become a participant in the game, fighting monsters or scoring touchdowns. Instead of traveling hundreds of miles to visit your relatives for the holidays, you'll simply call them up and join them in a shared holographic room. Eventually, you will even be able to hug other people using this technology. Doctors and soldiers could use tele-immersion to train in a simulated environment.

Holographic Displays

The early prototypes of tele-immersive displays require users to wear special goggles and a head device that tracks the viewpoints of users looking at the screen. On the other end, the people who appear as 3-D images are tracked with an array of seven ordinary video cameras, while two other video cameras capture real light patterns projected in each room to calculate distances. This enables the proper depth to be re-created on the screen. So, if a viewer moves his head to the right, he can see the corresponding images that would be seen if he were actually in the room with the person on the screen.



Images on the screen are split and polarized to create a different image for each eye. The goggles then combine these images so that the brain recognizes only one 3-D image, much the way old 3-D movie glasses work. (A short sketch of this two-eye geometry follows the list below.)

Early experiments, like the one at UNC in May 2000, have experienced some glitches, similar to those of normal videoconferencing. The scenes being projected are refreshed only three times per second, which creates a jerky image. If that rate could be improved to 10 frames per second, it would create a seamless projected image that would be like looking through a window at another person. Scientists are developing new technologies to support this type of communication, including:

* Internet2 - This would replace the current Internet infrastructure with a network offering higher bandwidth and speeds 1,000 times faster than today's Internet. A high-bandwidth, high-speed network is necessary to transfer the large amounts of data that tele-immersion will produce.
* Display technologies - Stereo-immersive displays would have to present a clear view of the scenes being transmitted.
* Haptic sensors - These would allow people to touch projections as if they were real.
* Desktop supercomputers - These would perform the trillions of calculations needed to create a holographic environment. Another possibility would be a network of computers that share power.
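As promised above, here is a toy calculation (my own illustration; the viewing distances are made up) showing why each eye needs its own image. A point in front of or behind the screen plane projects to different horizontal positions for the two eyes, and the brain reads that offset, called parallax, as depth.

```python
# Project a point onto the screen plane as seen from each eye.
EYE_SEPARATION = 0.065  # meters, roughly the human interocular distance
SCREEN_Z = 2.0          # viewer-to-screen distance in meters (assumed)

def screen_x(point_x, point_z, eye_x):
    """Where the ray from one eye through the point crosses the screen plane."""
    # Similar triangles along the line from (eye_x, 0) to (point_x, point_z).
    return eye_x + (point_x - eye_x) * SCREEN_Z / point_z

for z in (1.0, 2.0, 8.0):  # point in front of, on, and behind the screen
    left = screen_x(0.0, z, -EYE_SEPARATION / 2)
    right = screen_x(0.0, z, +EYE_SEPARATION / 2)
    print(f"depth {z} m: parallax {right - left:+.4f} m")
```

A point on the screen plane has zero parallax, a nearer point has negative (crossed) parallax and a farther point has positive parallax; delivering the correct member of each image pair to the correct eye is the goggles' whole job.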

Tele-immersion will blur the lines between real and computer-generated images. It will be the ultimate telecommuting technology, almost entirely eliminating the rush-hour drive to work. Instead of commuting, people could attend board meetings by projecting themselves into the company's conference room. And if your job requires you to travel, you could still be home for dinner by tele-immersing yourself into the family kitchen. Because this technology is still in the early stages of development, the possibilities are truly endless.

Tuesday, June 15, 2010

Augmented Reality

Introduction to How Augmented Reality Works

Video games have been entertaining us for nearly 40 years, ever since Pong was introduced to arcades in the early 1970s. Computer graphics have become much more sophisticated since then, and game graphics are pushing the barriers of photorealism. Now, researchers and engineers are pulling graphics out of your television screen or computer display and integrating them into real-world environments. This new technology, called augmented reality, blurs the line between what's real and what's computer-generated by enhancing what we see, hear, feel and smell.

On the spectrum between virtual reality, which creates immersive, computer-generated environments, and the real world, augmented reality is closer to the real world. Augmented reality adds graphics, sounds, haptic feedback and smell to the natural world as it exists. Both video games and cell phones are driving the development of augmented reality. Everyone from tourists to soldiers to someone looking for the closest subway stop can now benefit from the ability to place computer-generated graphics in their field of vision.



Augmented reality is changing the way we view the world -- or at least the way its users see the world. Picture yourself walking or driving down the street. With augmented-reality displays, which will eventually look much like a normal pair of glasses, informative graphics will appear in your field of view, and audio will coincide with whatever you see. These enhancements will be refreshed continually to reflect the movements of your head. Similar devices and applications already exist, particularly on smartphones like the iPhone.

Augmenting Our World

The basic idea of augmented reality is to superimpose graphics, audio and other sensory enhancements over a real-world environment in real time. Sounds pretty simple. After all, haven't television networks been doing that with graphics for decades? However, augmented reality is more advanced than any technology you've seen in television broadcasts, although some new TV effects come close, such as RACEf/x and the superimposed first-down line on televised U.S. football games, both created by Sportvision. But these systems display graphics for only one point of view. Next-generation augmented-reality systems will display graphics for each viewer's perspective.

Some of the most exciting augmented-reality work is taking place in research labs at universities around the world. In February 2009, at the TED conference, Pattie Maes and Pranav Mistry presented their augmented-reality system, which they developed as part of MIT Media Lab's Fluid Interfaces Group. They call it SixthSense, and it relies on some basic components that are found in many augmented reality systems:
* Camera
* Small projector
* Smartphone
* Mirror



These components are strung together in a lanyard-like apparatus that the user wears around his neck. The user also wears four colored caps on his fingers, and these caps are used to manipulate the images that the projector emits.

SixthSense is remarkable because it uses these simple, off-the-shelf components that cost around $350. It is also notable because the projector essentially turns any surface into an interactive screen. The device works by using the camera and mirror to examine the surrounding world, feeding that image to the phone (which processes the image, gathers GPS coordinates and pulls data from the Internet), and then projecting information from the projector onto the surface in front of the user, whether it's a wrist, a wall, or even a person. Because the user is wearing the camera on his chest, SixthSense will augment whatever he looks at; for example, if he picks up a can of soup in a grocery store, SixthSense can find and project onto the soup information about its ingredients, price, nutritional value -- even customer reviews.

By using his capped fingers -- Pattie Maes says even fingers with different colors of nail polish would work -- a user can perform actions on the projected information, which are then picked up by the camera and processed by the phone. If he wants to know more about that can of soup than is projected on it, he can use his fingers to interact with the projected image and learn about, say, competing brands. SixthSense can also recognize complex gestures -- draw a circle on your wrist and SixthSense projects a watch with the current time.
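To make the fingertip-tracking idea concrete, here is a minimal sketch of the kind of color-marker tracking a SixthSense-style system might use. This is not the project's actual code: the HSV threshold values are illustrative guesses, and the sketch assumes OpenCV (the opencv-python package) and an ordinary webcam.

```python
import cv2
import numpy as np

# Hypothetical HSV range for a red fingertip cap; a real system would
# calibrate one range per colored marker.
LOWER_RED = np.array([0, 120, 70])
UPPER_RED = np.array([10, 255, 255])

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_RED, UPPER_RED)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        # Treat the largest red blob as the fingertip position.
        blob = max(contours, key=cv2.contourArea)
        m = cv2.moments(blob)
        if m["m00"] > 0:
            x, y = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
            cv2.circle(frame, (x, y), 8, (0, 255, 0), 2)
    cv2.imshow("marker tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

Once each fingertip's position is known frame by frame, recognizing a pinch, a frame or a drawn circle becomes a matter of interpreting those tracked paths over time.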


Augmented Reality on Cell Phones

While it may be some time before you buy a device like SixthSense, more primitive versions of augmented reality are already here on some cell phones, particularly in applications for the iPhone and phones with the Android operating system. In the Netherlands, cell phone owners can download an application called Layar that uses the phone's camera and GPS capabilities to gather information about the surrounding area. Layar then shows information about restaurants or other sites in the area, overlaying this information on the phone's screen. You can even point the phone at a building, and Layar will tell you if any companies in that building are hiring, or it might be able to find photos of the building on Flickr.



Layar isn't the only application of its type. In August 2009, some iPhone users were surprised to find an augmented-reality "easter egg" hidden within the Yelp application. Yelp is known for its user reviews of restaurants and other businesses, but its hidden augmented-reality component, called Monocle, takes things one step further. Just start up the Yelp app, shake your iPhone 3GS three times and Monocle activates. Using your phone's GPS and compass, Monocle will display information about local restaurants, including ratings and reviews, on your cell phone screen. You can touch one of the listings to find out more about a particular restaurant.

There are other augmented reality apps out there for the iPhone and other similar phones -- and many more in development. Urbanspoon has much of the same functionality as Yelp's Monocle. Then there's Wikitude, which finds information from Wikipedia about sites in the area. Underlying most of these applications are a phone's GPS and compass; by knowing where you are, these applications can make sure to offer information relevant to you. We're still not quite at the stage of full-on image recognition, but trust us, people are working on it.
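Under the hood, apps like Layar and Monocle come down to a bearing-and-distance calculation: where does a point of interest sit relative to where the phone is pointing? Here is a minimal sketch of that math (my own illustration; the coordinates, heading and field of view are made-up values, not anything from these apps).

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360

def distance_m(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance in meters."""
    r = 6_371_000  # mean Earth radius
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

me = (40.7580, -73.9855)   # user's GPS fix (Times Square, say)
poi = (40.7614, -73.9776)  # hypothetical restaurant from a POI database
heading = 45.0             # compass heading, degrees clockwise from north
fov = 60.0                 # assumed horizontal field of view of the camera

# How far off the center of the camera view is the POI (-180..180 degrees)?
off_center = (bearing_deg(*me, *poi) - heading + 180) % 360 - 180
if abs(off_center) < fov / 2:
    print(f"POI on screen: {off_center:+.1f} degrees off-center, "
          f"{distance_m(*me, *poi):.0f} m away")
else:
    print("POI is outside the camera view")
```

The off-center angle tells the app where to draw the label horizontally, and the distance can scale the label or sort competing listings, all without any image recognition.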

Augmented Reality in Video Games and the Military

Video game companies are quickly hopping aboard the augmented-reality locomotive. A company called Total Immersion makes software that applies augmented reality to baseball cards. Simply go online, download the Total Immersion software and then hold up your baseball card to a webcam. The software recognizes the card (and the player on it) and then displays related video on your computer screen. Move the card in your hands -- make sure to keep it in view of the camera -- and the 3-D figure on your screen will perform actions, such as throwing a ball at a target.



Total Immersion's efforts are just the beginning. In the next couple of years, we'll see games that take augmented reality out into the streets. Consider a scavenger-hunt game that uses virtual objects. You could use your phone to "place" tokens around town, and participants would then use their phones (or augmented-reality enabled goggles) to find these invisible objects.

Demos of several games like this already exist. There's a "human Pac-Man" game that allows users to chase after each other in real life while wearing goggles that make them look like characters in Pac-Man.

Arcane Technologies, a Canadian company, has sold augmented-reality devices to the U.S. military. The company produces a head-mounted display -- the sort of device that was supposed to bring us virtual reality -- that superimposes information on your world. Consider a squad of soldiers in Afghanistan, performing reconnaissance on an opposition hideout. An AR-enabled head-mounted display could overlay blueprints or a view from a satellite or overhead drone directly onto the soldiers' field of vision.

Limitations and the Future of Augmented Reality

Augmented reality still has some challenges to overcome. For example, GPS is only accurate to within 30 feet (9 meters) and doesn't work as well indoors, although improved image recognition technology may be able to help [source: Metz].

People may not want to rely on their cell phones, which have small screens on which to superimpose information. For that reason, wearable devices like SixthSense or augmented-reality-capable contact lenses and glasses will provide users with more convenient, expansive views of the world around them. Screen real estate will no longer be an issue. In the near future, you may be able to play a real-time strategy game on your computer, or you can invite a friend over, put on your AR glasses, and play on the tabletop in front of you.

There is such a thing as too much information. Just as the "CrackBerry" phenomenon and Internet addiction are concerns, an overreliance on augmented reality could mean that people are missing out on what's right in front of them. Some people may prefer to use their AR iPhone applications rather than an experienced tour guide, even though a tour guide may be able to offer a level of interaction, an experience and a personal touch unavailable in a computer program. And there are times when a real plaque on a building is preferable to a virtual one, which would be accessible only by people with certain technologies.

There are also privacy concerns. Image-recognition software coupled with AR will, quite soon, allow us to point our phones at people, even strangers, and instantly see information from their Facebook, Twitter, Amazon, LinkedIn or other online profiles. With most of these services, people willingly put information about themselves online, but it may be an unwelcome shock to meet a stranger who instantly knows so much about your life and background.

Despite these concerns, imagine the possibilities: you may learn things about the city you've lived in for years just by pointing your AR-enabled phone at a nearby park or building. If you work in construction, you can save on materials by using virtual markers to designate where a beam should go or which structural support to inspect. Paleontologists working in shifts to assemble a dinosaur skeleton could leave virtual "notes" to team members on the bones themselves, artists could produce virtual graffiti and doctors could overlay a digital image of a patient's X-rays onto a mannequin for added realism.

The future of augmented reality is clearly bright, even as it already has found its way into our cell phones and video game systems.