Friday, April 30, 2010

Virtual Reality

Introduction to How Virtual Reality Works

What do you think of when you hear the words virtual reality (VR)? Do you imagine someone wearing a clunky helmet attached to a computer with a thick cable? Do visions of crudely rendered pterodactyls haunt you? Do you think of Neo and Morpheus traipsing about the Matrix? Or do you wince at the term, wishing it would just go away?

If the last applies to you, you're likely a computer scientist or engineer, many of whom now avoid the words virtual reality even while they work on technologies most of us associate with VR. Today, you're more likely to hear someone use the words virtual environment (VE) to refer to what the public knows as virtual reality.



Naming discrepancies aside, the concept remains the same - using computer technology to create a simulated, three-dimensional world that a user can manipulate and explore while feeling as if he were in that world. Scientists, theorists and engineers have designed dozens of devices and applications to achieve this goal. Opinions differ on what exactly constitutes a true VR experience, but in general it should include:

* Three-dimensional images that appear to be life-sized from the perspective of the user
* The ability to track a user's motions, particularly his head and eye movements, and correspondingly adjust the images on the user's display to reflect the change in perspective (see the sketch below)
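That second requirement is, at heart, a per-frame loop: read the tracker, recompute the view, redraw. Here is a minimal Python sketch of the idea; the read_tracker and render calls are hypothetical placeholders, not a real VR API.

```python
# A minimal sketch of head-tracked rendering: each frame, read the
# tracker's reported head orientation and recompute the view direction
# the renderer should use. read_tracker() and render() are hypothetical.
import math

def view_direction(yaw_deg, pitch_deg):
    """Convert tracked head yaw/pitch (degrees) into a unit gaze vector."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    return (math.cos(pitch) * math.sin(yaw),
            math.sin(pitch),
            math.cos(pitch) * math.cos(yaw))

# Per-frame loop: the scene is redrawn from wherever the user is looking.
# for frame in frames:
#     yaw, pitch = read_tracker()          # hypothetical tracker API
#     render(scene, gaze=view_direction(yaw, pitch))

print(view_direction(90, 0))   # looking along +x: ~(1.0, 0.0, 0.0)
```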

Virtual Reality Immersion

In a virtual reality environment, a user experiences immersion, or the feeling of being inside and a part of that world. He is also able to interact with his environment in meaningful ways. The combination of a sense of immersion and interactivity is called telepresence. Computer scientist Jonathan Steuer defined it as “the extent to which one feels present in the mediated environment, rather than in the immediate physical environment.” In other words, an effective VR experience causes you to become unaware of your real surroundings and focus on your existence inside the virtual environment.



Jonathan Steuer proposed two main components of immersion: depth of information and breadth of information. Depth of information refers to the amount and quality of data in the signals a user receives when interacting in a virtual environment. For the user, this could refer to a display’s resolution, the complexity of the environment’s graphics, the sophistication of the system’s audio output, et cetera. Steuer defines breadth of information as the “number of sensory dimensions simultaneously presented.” A virtual environment experience has a wide breadth of information if it stimulates all your senses. Most virtual environment experiences prioritize visual and audio components over other sensory-stimulating factors, but a growing number of scientists and engineers are looking into ways to incorporate a user’s sense of touch. Systems that give a user force feedback and touch interaction are called haptic systems.

For immersion to be effective, a user must be able to explore what appears to be a life-sized virtual environment and be able to change perspectives seamlessly. If the virtual environment consists of a single pedestal in the middle of a room, a user should be able to view the pedestal from any angle, and the point of view should shift according to where the user is looking. Dr. Frederick Brooks, a pioneer in VR technology and theory, says that displays must maintain a frame rate of at least 20 to 30 frames per second in order to create a convincing user experience.

The Virtual Reality Environment


Other sensory output from the VE system should adjust in real time as a user explores the environment. If the environment incorporates 3-D sound, the user must be convinced that the sound’s orientation shifts in a natural way as he maneuvers through the environment. Sensory stimulation must be consistent if a user is to feel immersed within a VE. If the VE shows a perfectly still scene, you wouldn’t expect to feel gale-force winds. Likewise, if the VE puts you in the middle of a hurricane, you wouldn’t expect to feel a gentle breeze or detect the scent of roses.
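To make that 3-D sound requirement concrete, here is a small Python sketch assuming a simple equal-power panning model (a deliberate simplification of real spatial audio): as the listener's head yaw changes, a world-fixed source slides across the stereo field.

```python
# A sketch of why 3-D audio must track the listener: the same world-fixed
# sound source should move across the stereo field as the head turns.
# Equal-power panning is one simple model; all names here are illustrative.
import math

def stereo_gains(source_azimuth_deg, head_yaw_deg):
    """Left/right channel gains for a source, given the listener's head yaw."""
    relative = math.radians(source_azimuth_deg - head_yaw_deg)
    pan = max(-1.0, min(1.0, math.sin(relative)))   # -1 = left, +1 = right
    angle = (pan + 1) * math.pi / 4                  # map pan to 0..pi/2
    return math.cos(angle), math.sin(angle)          # equal-power law

print(stereo_gains(0, 0))    # facing the source: equal gain in both ears
print(stereo_gains(0, -90))  # head turned left: source now fully on the right
```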

Lag time between when a user acts and when the virtual environment reflects that action is called latency. Latency usually refers to the delay between the time a user turns his head or moves his eyes and the change in the point of view, though the term can also be used for a lag in other sensory outputs. Studies with flight simulators show that humans can detect a latency of more than 50 milliseconds. When a user detects latency, it causes him to become aware of being in an artificial environment and destroys the sense of immersion.
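The budget that paragraph implies is easy to state in code. A minimal sketch, with illustrative timestamps rather than a real tracker, comparing motion-to-photon delay against the roughly 50-millisecond threshold:

```python
# A sketch of the motion-to-photon budget described above: the time from
# a head movement to the matching change on the display should stay under
# roughly 50 ms. The timestamps here are illustrative.
import time

DETECTABLE_LATENCY_S = 0.050   # ~50 ms threshold cited above

def check_latency(t_head_moved, t_frame_displayed):
    latency = t_frame_displayed - t_head_moved
    if latency > DETECTABLE_LATENCY_S:
        print(f"{latency*1000:.0f} ms lag: user may notice, immersion breaks")
    else:
        print(f"{latency*1000:.0f} ms lag: below the detectable threshold")

t0 = time.monotonic()
time.sleep(0.03)               # pretend tracking + rendering took 30 ms
check_latency(t0, time.monotonic())
```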

An immersive experience suffers if a user becomes aware of the real world around him. Truly immersive experiences make the user forget his real surroundings, effectively causing the computer to become a nonentity. In order to reach the goal of true immersion, developers have to come up with input methods that are more natural for users. As long as a user is aware of the interaction device, he is not truly immersed.


Virtual Reality Interactivity

Immersion within a virtual environment is one thing, but for a user to feel truly involved there must also be an element of interaction. Early applications using the technology common in VE systems today allowed the user to have a relatively passive experience. Users could watch a pre-recorded film while wearing a head-mounted display (HMD). They would sit in a motion chair and watch the film as the system subjected them to various stimuli, such as blowing air on them to simulate wind. While users felt a sense of immersion, interactivity was limited to shifting their point of view by looking around. Their path was pre-determined and unalterable.



Today, you can find virtual roller coasters that use the same sort of technology. DisneyQuest in Orlando, Florida features CyberSpace Mountain, where patrons can design their own roller coaster, then enter a simulator to ride their virtual creation. The system is very immersive, but apart from the initial design phase there isn't any interaction, so it's not an example of a true virtual environment.

Interactivity depends on many factors. Steuer suggests that three of these factors are speed, range and mapping. Steuer defines speed as the rate at which a user's actions are incorporated into the computer model and reflected in a way the user can perceive. Range refers to how many possible outcomes could result from any particular user action. Mapping is the system's ability to produce natural results in response to a user's actions.

Navigation within a virtual environment is one kind of interactivity. If a user can direct his own movement within the environment, it can be called an interactive experience. Most virtual environments include other forms of interaction, since users can easily become bored after just a few minutes of exploration. Computer scientist Mary Whitton points out that poorly designed interaction can drastically reduce the sense of immersion, while finding ways to engage users can increase it. When a virtual environment is interesting and engaging, users are more willing to suspend disbelief and become immersed.

True interactivity also includes being able to modify the environment. A good virtual environment will respond to the user's actions in a way that makes sense, even if it only makes sense within the realm of the virtual environment. If a virtual environment changes in outlandish and unpredictable ways, it risks disrupting the user's sense of telepresence.

The Virtual Reality Headset

Today, most VE systems are powered by normal personal computers. PCs are sophisticated enough to develop and run the software necessary to create virtual environments. Graphics are usually handled by powerful graphics cards originally designed with the video gaming community in mind. The same video card that lets you play World of Warcraft is probably powering the graphics for an advanced virtual environment.



VE systems need a way to display images to a user. Many systems use HMDs, which are headsets that contain two monitors, one for each eye. The images create a stereoscopic effect, giving the illusion of depth. Early HMDs used cathode ray tube (CRT) monitors, which were bulky but provided good resolution and quality, or liquid crystal display (LCD) monitors, which were much cheaper but were unable to compete with the quality of CRT displays. Today, LCD displays are much more advanced, with improved resolution and color saturation, and have become more common than CRT monitors.
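The stereoscopic effect comes from rendering the scene from two slightly different camera positions, one per eye. Below is a minimal numpy sketch of the offset arithmetic; the 63 mm interpupillary distance is a common average, and render() is a hypothetical placeholder:

```python
# A sketch of how an HMD's two screens get slightly different images:
# render the scene twice from camera positions separated by the
# interpupillary distance (IPD).
import numpy as np

IPD_M = 0.063                      # average interpupillary distance, ~63 mm

def eye_positions(head_pos, right_axis):
    """Offset the head position half an IPD along the head's right axis."""
    head_pos, right_axis = np.asarray(head_pos), np.asarray(right_axis)
    left  = head_pos - right_axis * (IPD_M / 2)
    right = head_pos + right_axis * (IPD_M / 2)
    return left, right

left_eye, right_eye = eye_positions([0.0, 1.7, 0.0], [1.0, 0.0, 0.0])
print(left_eye, right_eye)
# The renderer would then draw the scene once per eye:
# left_image  = render(scene, camera_at=left_eye)    # hypothetical render()
# right_image = render(scene, camera_at=right_eye)
```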

Other VE systems project images on the walls, floor and ceiling of a room and are called Cave Automatic Virtual Environments (CAVE). The University of Illinois at Chicago designed the first CAVE display, using a rear-projection technique to display images on the walls, floor and ceiling of a small room. Users can move around in a CAVE display, wearing special glasses to complete the illusion of moving through a virtual environment. CAVE displays give users a much wider field of view, which helps in immersion. They also allow a group of people to share the experience at the same time (though the display would track only one user’s point of view, meaning others in the room would be passive observers). CAVE displays are very expensive and require more space than other systems.



Closely related to display technology are tracking systems. Tracking systems analyze the orientation of a user’s point of view so that the computer system sends the right images to the visual display. Most systems require a user to be tethered with cables to a processing unit, limiting the range of motions available to him. Tracker technology developments tend to lag behind other VR technologies because the market for such technology is mainly VR-focused. Without the demands of other disciplines or applications, there isn’t as much interest in developing new ways to track user movements and point of view.

Input devices are also important in VR systems. Currently, input devices range from controllers with two or three buttons to electronic gloves and voice recognition software. There is no standard control system across the discipline. VR scientists and engineers are continuously exploring ways to make user input as natural as possible to increase the sense of telepresence. Some of the more common forms of input devices are:

* Joysticks
* Force balls/tracking balls
* Controller wands
* Datagloves
* Voice recognition
* Motion trackers/bodysuits
* Treadmills

Virtual Reality Games

Scientists are also exploring the possibility of developing biosensors for VR use. A biosensor can detect and interpret nerve and muscle activity. With a properly calibrated biosensor, a computer can interpret how a user is moving in physical space and translate that into the corresponding motions in virtual space. Biosensors may be attached directly to the skin of a user, or may be incorporated into gloves or bodysuits. One limitation of biosensor suits is that they must be custom made for each user or the sensors will not line up properly on the user’s body.



Mary Whitton, of UNC-Chapel Hill, believes that the entertainment industry will drive the development of most VR technology going forward. The video game industry in particular has contributed advancements in graphics and sound capabilities that engineers can incorporate into virtual reality systems’ designs. One advance that Whitton finds particularly interesting is the Nintendo Wii’s wand controller. The controller is not only a commercially available device with some tracking capabilities; it’s also affordable and appeals to people who don’t normally play video games. Since tracking and input devices are two areas that traditionally have fallen behind other VR technologies, this controller could be the first of a new wave of technological advances useful to VR systems.

­Some programmers envision the Internet developing into a three-dimensional virtual space, where you navigate through virtual landscapes to access information and entertainment. Web sites could take form as a three-dimensional location, allowing users to explore in a much more literal way than before. Programmers have developed several different computer languages and Web browsers to achieve this vision. Some of these include:

* Virtual Reality Modeling Language (VRML) - the earliest three-dimensional modeling language for the Web.
* 3DML - a three-dimensional modeling language where a user can visit a spot (or Web site) through most Internet browsers after installing a plug-in.
* X3D - the language that replaced VRML as the standard for creating virtual environments on the Web.
* Collaborative Design Activity (COLLADA) - a format used to allow file interchanges within three-dimensional programs.

Of course, many VE experts would argue that without an HMD, Internet-based systems are not true virtual environments. They lack critical elements of immersion, particularly tracking and displaying images as life-sized.

Virtual Reality Applications

In the early 1990s, the public's exposure to virtual reality rarely went beyond a relatively primitive demonstration of a few blocky figures being chased around a chessboard by a crude pterodactyl. While the entertainment industry is still interested in virtual reality applications in games and theatre experiences, the really interesting uses for VR systems are in other fields.

Some architects create virtual models of their building plans so that people can walk through the structure before the foundation is even laid. Clients can move around exteriors and interiors and ask questions, or even suggest alterations to the design. Virtual models can give you a much more accurate idea of how moving through a building will feel than a miniature model can.

Car companies have used VR technology to build virtual prototypes of new vehicles, testing them thoroughly before producing a single physical part. Designers can make alterations without having to scrap the entire model, as they often would with physical ones. The development process becomes more efficient and less expensive as a result.



Virtual environments are used in training programs for the military, the space program and even medical students. The military has long been a supporter of VR technology and development. Training programs can include everything from vehicle simulations to squad combat. On the whole, VR systems are much safer and, in the long run, less expensive than alternative training methods. Soldiers who have gone through extensive VR training have proven to be as effective as those who trained under traditional conditions.

In medicine, staff can use virtual environments to train in everything from surgical procedures to diagnosing a patient. Surgeons have used virtual reality technology not only to train and educate, but also to perform surgery remotely by using robotic devices. The first robotic surgery was performed in 1998 at a hospital in Paris. The biggest challenge in using VR technology to perform robotic surgery is latency, since any delay in such a delicate procedure can feel unnatural to the surgeon. Such systems also need to provide finely tuned sensory feedback to the surgeon.

Another medical use of VR technology is psychological therapy. Dr. Barbara Rothbaum of Emory University and Dr. Larry Hodges of the Georgia Institute of Technology pioneered the use of virtual environments in treating people with phobias and other psychological conditions. They use virtual environments as a form of exposure therapy, where a patient is exposed -- under controlled conditions -- to stimuli that cause him distress. The application has two big advantages over real exposure therapy: it is much more convenient and patients are more willing to try the therapy because they know it isn't the real world. Their research led to the founding of the company Virtually Better, which sells VR therapy systems to doctors in 14 countries.

Virtual Reality Challenges and Concerns


The big challenges in the field of virtual reality are developing better tracking systems, finding more natural ways to allow users to interact within a virtual environment and decreasing the time it takes to build virtual spaces. While there are a few tracking system companies that have been around since the earliest days of virtual reality, most companies are small and don’t last very long. Likewise, there aren’t many companies that are working on input devices specifically for VR applications. Most VR developers have to rely on and adapt technology originally meant for another discipline, and they have to hope that the company producing the technology stays in business. As for creating virtual worlds, it can take a long time to create a convincing virtual environment - the more realistic the environment, the longer it takes to make it. It could take a team of programmers more than a year to duplicate a real room accurately in virtual space.

Another challenge for VE system developers is creating a system that avoids bad ergonomics. Many systems rely on hardware that encumbers a user or limits his options through physical tethers. Without well-designed hardware, a user could have trouble with his sense of balance or inertia, with a corresponding decrease in his sense of telepresence, or he could experience cybersickness, with symptoms that can include disorientation and nausea. Not all users seem to be at risk for cybersickness -- some people can explore a virtual environment for hours with no ill effects, while others may feel queasy after just a few minutes.



Some psychologists are concerned that immersion in virtual environments could psychologically affect a user. They suggest that VE systems that place a user in violent situations, particularly as the perpetrator of violence, could result in the user becoming desensitized. In effect, there’s a fear that VE entertainment systems could breed a generation of sociopaths. Others aren’t as worried about desensitization, but do warn that convincing VE experiences could lead to a kind of cyber addiction. There have been several news stories of gamers neglecting their real lives for their online, in-game presence. Engaging virtual environments could potentially be more addictive.

Another emerging concern involves criminal acts. In the virtual world, defining acts such as murder or sex crimes has been problematic. At what point can authorities charge a person with a real crime for actions within a virtual environment? Studies indicate that people can have real physical and emotional reactions to stimuli within a virtual environment, and so it’s quite possible that a victim of a virtual attack could feel real emotional trauma. Can the attacker be punished for causing real-life distress? We don’t yet have answers to these questions.

Virtual Reality History

The concept of virtual reality has been around for decades, even though the public really only became aware of it in the early 1990s. In the mid-1950s, a cinematographer named Morton Heilig envisioned a theatre experience that would stimulate all of his audience’s senses, drawing them into the stories more effectively. In 1960 he built a single-user console called the Sensorama that included a stereoscopic display, fans, odor emitters, stereo speakers and a moving chair. He also invented a head-mounted television display designed to let a user watch television in 3-D. Users were passive audiences for the films, but many of Heilig’s concepts would find their way into the VR field.

Philco Corporation engineers developed the first HMD in 1961, called the Headsight. The helmet included a video screen and tracking system, which the engineers linked to a closed circuit camera system. They intended the HMD for use in dangerous situations -- a user could observe a real environment remotely, adjusting the camera angle by turning his head. Bell Laboratories used a similar HMD for helicopter pilots. They linked HMDs to infrared cameras attached to the bottom of helicopters, which allowed pilots to have a clear field of view while flying in the dark.



In 1965, a computer scientist named Ivan Sutherland envisioned what he called the “Ultimate Display.” Using this display, a person could look into a virtual world that would appear as real as the physical world the user lived in. This vision guided almost all the developments within the field of virtual reality. Sutherland’s concept included:

* A virtual world that appears real to any observer, seen through an HMD and augmented through three-dimensional sound and tactile stimuli
* A computer that maintains the world model in real time
* The ability for users to manipulate virtual objects in a realistic, intuitive way

In 1966, Sutherland built an HMD that was tethered to a computer system. The computer provided all the graphics for the display (up to this point, HMDs had only been linked to cameras). He used a suspension system to hold the HMD, as it was far too heavy for a user to support comfortably. The HMD could display images in stereo, giving the illusion of depth, and it could also track the user’s head movements so that the field of view would change appropriately as the user looked around.

Virtual Reality Development

NASA, the Department of Defense and the National Science Foundation funded much of the research and development for virtual reality projects. The CIA contributed $80,000 in research money to Sutherland. Early applications mainly fell into the vehicle simulator category and were used in training exercises. Because the flight experiences in simulators were similar but not identical to real flights, the military, NASA, and airlines instituted policies that required pilots to have a significant lag time (at least one day) between a simulated flight and a real flight in case their real performance suffered.



For years, VR technology remained out of the public eye. Almost all development focused on vehicle simulations until the 1980s. Then, in 1984, a computer scientist named Michael McGreevy began to experiment with VR technology as a way to advance human-computer interface (HCI) designs. HCI still plays a big role in VR research, and it also led the media to pick up on the idea of VR a few years later.

Jaron Lanier coined the term virtual reality in 1987. In the 1990s, the media latched on to the concept of virtual reality and ran with it. The resulting hype gave many people an unrealistic expectation of what virtual reality technologies could do. As the public realized that virtual reality was not yet as sophisticated as they had been led to believe, interest waned. The term virtual reality began to fade away along with the public’s expectations. Today, VE developers try not to exaggerate the capabilities or applications of VE systems, and they also tend to avoid the term virtual reality.

Quantum Computers

Introduction to How Quantum Computers Will Work

The massive amount of processing power generated by computer manufacturers has not yet been able to quench our thirst for speed and computing capacity. In 1947, American computer engineer Howard Aiken said that just six electronic digital computers would satisfy the computing needs of the United States. Others have made similarly erroneous predictions about the amount of computing power that would support our growing technological needs. Of course, Aiken didn't count on the large amounts of data generated by scientific research, the proliferation of personal computers or the emergence of the Internet, which have only fueled our need for more, more and more computing power.



Will we ever have the amount of computing power we need or want? If, as Moore's Law states, the number of transistors on a microprocessor continues to double every 18 months, the year 2020 or 2030 will find the circuits on a microprocessor measured on an atomic scale. And the logical next step will be to create quantum computers, which will harness the power of atoms and molecules to perform memory and processing tasks. Quantum computers have the potential to perform certain calculations significantly faster than any silicon-based computer.
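As a back-of-the-envelope check on that claim, here is the doubling arithmetic in Python; the baseline transistor count is a round illustrative figure, not exact industry data:

```python
# A back-of-the-envelope version of the Moore's Law claim above:
# transistor counts doubling every 18 months. The 2010 baseline of
# one billion transistors is a round illustrative number.
def transistors(year, base_year=2010, base_count=1e9, doubling_years=1.5):
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (2010, 2020, 2030):
    print(year, f"{transistors(year):.2e} transistors")
# By 2030 the projection passes 10^13 transistors on a chip, which is
# why circuit features would have to shrink toward atomic scale.
```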

Scientists have already built basic quantum computers that can perform certain calculations; but a practical quantum computer is still years away. In this article, you'll learn what a quantum computer is and just what it'll be used for in the next era of computing.

You don't have to go back too far to find the origins of quantum computing. While computers have been around for the majority of the 20th century, quantum computing was first theorized less than 30 years ago, by a physicist at the Argonne National Laboratory. Paul Benioff is credited with first applying quantum theory to computers in 1981. Benioff theorized about creating a quantum Turing machine.

Defining the Quantum Computer

The Turing machine, developed by Alan Turing in the 1930s, is a theoretical device that consists of a tape of unlimited length divided into little squares. Each square can either hold a symbol (1 or 0) or be left blank. A read-write device reads these symbols and blanks, which gives the machine its instructions to perform a certain program. Does this sound familiar? Well, in a quantum Turing machine, the difference is that the tape exists in a quantum state, as does the read-write head. This means that the symbols on the tape can be either 0 or 1 or a superposition of 0 and 1; in other words, the symbols are both 0 and 1 (and all points in between) at the same time. While a normal Turing machine can only perform one calculation at a time, a quantum Turing machine can perform many calculations at once.
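For contrast with the quantum version, here is a tiny classical Turing-machine-style loop in Python: the tape holds definite 0s and 1s, and the head does exactly one thing per step. The "flip every bit" program is just an illustration; a quantum Turing machine would instead hold a superposition over tape contents.

```python
# A tiny classical Turing machine sketch: the tape holds definite 0/1
# symbols and the machine is in exactly one state at a time. This
# example just flips bits, moving right until it hits a blank square.
def run(tape):
    tape = dict(enumerate(tape))       # position -> symbol
    pos, state = 0, "flip"
    while state != "halt":
        symbol = tape.get(pos, None)   # None represents a blank square
        if symbol is None:
            state = "halt"
        else:
            tape[pos] = 1 - symbol     # write: flip the bit
            pos += 1                   # move the head right
    return [tape[i] for i in sorted(tape)]

print(run([0, 1, 1, 0]))   # -> [1, 0, 0, 1], one square at a time
```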



Today's computers, like a Turing machine, work by manipulating bits that exist in one of two states: a 0 or a 1. Quantum computers aren't limited to two states; they encode information as quantum bits, or qubits, which can exist in superposition. Qubits represent atoms, ions, photons or electrons and their respective control devices that are working together to act as computer memory and a processor. Because a quantum computer can contain these multiple states simultaneously, it has the potential to be millions of times more powerful than today's most powerful supercomputers.
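As a concrete picture of a qubit, here is a minimal numpy sketch: a classical bit is one of two basis vectors, while a Hadamard gate puts a qubit into an equal superposition of both.

```python
# A sketch of a qubit as a state vector. A classical bit is either
# [1, 0] (meaning 0) or [0, 1] (meaning 1); the Hadamard gate puts a
# qubit into an equal superposition of both values.
import numpy as np

zero = np.array([1.0, 0.0])                    # the state |0>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

superposed = H @ zero
print(superposed)                  # [0.707 0.707]: amplitudes for 0 and 1
print(np.abs(superposed) ** 2)     # [0.5 0.5]: 50/50 measurement odds
```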

This superposition of qubits is what gives quantum computers their inherent parallelism. According to physicist David Deutsch, this parallelism allows a quantum computer to work on a million computations at once, while your desktop PC works on one. A 30-qubit quantum computer would equal the processing power of a conventional computer that could run at 10 teraflops (trillions of floating-point operations per second). Today's typical desktop computers run at speeds measured in gigaflops (billions of floating-point operations per second).
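The arithmetic behind that parallelism claim is worth seeing: an n-qubit register is described by 2^n amplitudes that all evolve together, which is also why simulating even 30 qubits classically is expensive.

```python
# The scaling behind quantum parallelism: an n-qubit register is
# described by 2**n complex amplitudes, all evolving at once.
# Simulating 30 qubits classically already means about a billion.
for n in (1, 10, 20, 30):
    amplitudes = 2 ** n
    print(f"{n:>2} qubits -> {amplitudes:>13,} simultaneous amplitudes")
```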

Quantum computers also utilize another aspect of quantum mechanics known as entanglement. One problem with the idea of quantum computers is that if you try to look at the subatomic particles, you could bump them, and thereby change their value. If you look at a qubit in superposition to determine its value, the qubit will assume the value of either 0 or 1, but not both (effectively turning your spiffy quantum computer into a mundane digital computer). To make a practical quantum computer, scientists have to devise ways of making measurements indirectly to preserve the system's integrity. Entanglement provides a potential answer. In quantum physics, if you apply an outside force to two atoms, it can cause them to become entangled, and the second atom can take on the properties of the first atom. Left alone, an atom will spin in all directions, but the instant it is disturbed it chooses one spin, or one value; at the same time, the second entangled atom will choose the opposite spin, or value. This allows scientists to know the value of the qubits without actually looking at them.
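Here is a small numpy simulation of the entangled pair described above. The article's "opposite spin" behavior corresponds to the Bell state (|01> + |10>)/sqrt(2); sampling measurements from it always yields opposite values for the two qubits.

```python
# A numpy sketch of entanglement: measuring one half of a Bell pair
# fixes the other half's outcome. The "opposite values" pair is the
# state (|01> + |10>) / sqrt(2).
import numpy as np

# Amplitudes over the two-qubit basis states 00, 01, 10, 11:
bell = np.array([0, 1, 0, 1]) / np.sqrt(2)

rng = np.random.default_rng(0)
for _ in range(5):
    outcome = rng.choice(4, p=np.abs(bell) ** 2)   # sample a measurement
    a, b = outcome >> 1, outcome & 1               # first and second qubit
    print(f"qubit A = {a}, qubit B = {b}")         # always opposite values
```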

Today's Quantum Computers

Quantum computers could one day replace silicon chips, just like the transistor once replaced the vacuum tube. But for now, the technology required to develop such a quantum computer is beyond our reach. Most research in quantum computing is still very theoretical.

The most advanced quantum computers have not managed to manipulate more than 16 qubits, meaning that they are a far cry from practical application. However, the potential remains that quantum computers one day could perform, quickly and easily, calculations that are incredibly time-consuming on conventional computers. Several key advancements have been made in quantum computing in the last few years. Let's look at a few of the quantum computers that have been developed.

1998
Los Alamos and MIT researchers managed to spread a single qubit across three nuclear spins in each molecule of a liquid solution of alanine (an amino acid used to analyze quantum state decay) or trichloroethylene (a chlorinated hydrocarbon used for quantum error correction) molecules. Spreading out the qubit made it harder to corrupt, allowing researchers to use entanglement to study interactions between states as an indirect method for analyzing the quantum information.

2000
In March, scientists at Los Alamos National Laboratory announced the development of a 7-qubit quantum computer within a single drop of liquid. The quantum computer uses nuclear magnetic resonance (NMR) to manipulate particles in the atomic nuclei of molecules of trans-crotonic acid, a simple fluid consisting of molecules made up of six hydrogen and four carbon atoms. The NMR is used to apply electromagnetic pulses, which force the particles to line up. These particles in positions parallel or counter to the magnetic field allow the quantum computer to mimic the information-encoding of bits in digital computers.

In August, researchers at IBM's Almaden Research Center developed what they claimed was the most advanced quantum computer to date. The 5-qubit quantum computer was designed to allow the nuclei of five fluorine atoms to interact with each other as qubits, be programmed by radio frequency pulses and be detected by NMR instruments similar to those used in hospitals (see How Magnetic Resonance Imaging Works for details). Led by Dr. Isaac Chuang, the IBM team was able to solve in one step a mathematical problem that would take conventional computers repeated cycles. The problem, called order-finding, involves finding the period of a particular function, a typical aspect of many mathematical problems involved in cryptography.

2001
Scientists from IBM and Stanford University successfully demonstrated Shor's Algorithm on a quantum computer. Shor's Algorithm is a method for finding the prime factors of numbers (which plays an intrinsic role in cryptography). They used a 7-qubit computer to find the factors of 15. The computer correctly deduced that the prime factors were 3 and 5.
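The number theory behind that experiment can be run classically for N = 15. Shor's algorithm reduces factoring to finding the period r of f(x) = a^x mod N; the quantum computer's job is the period finding, and the rest is the arithmetic below (a = 7 is one valid choice of base for this sketch).

```python
# The reduction at the heart of Shor's algorithm, run classically for
# N = 15 with base a = 7: find the period r of f(x) = a**x mod N, then
# gcd(a**(r//2) +/- 1, N) yields the factors. The quantum speedup is
# in finding r; everything else is ordinary arithmetic.
from math import gcd

N, a = 15, 7
r = 1
while pow(a, r, N) != 1:         # classical (slow) period finding
    r += 1
print("period r =", r)           # r = 4 for a = 7, N = 15

p = gcd(pow(a, r // 2) - 1, N)
q = gcd(pow(a, r // 2) + 1, N)
print("factors:", p, q)          # 3 and 5, matching the experiment
```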

2005
The Institute of Quantum Optics and Quantum Information at the University of Innsbruck announced that scientists had created the first qubyte, or series of 8 qubits, using ion traps.



2006
Scientists in Waterloo and Massachusetts devised methods for quantum control on a 12-qubit system. Quantum control becomes more complex as systems employ more qubits.

2007
Canadian startup company D-Wave demonstrated a 16-qubit quantum computer. The computer solved a sudoku puzzle and other pattern-matching problems. The company claims it will produce practical systems by 2008. Skeptics believe practical quantum computers are still decades away, that the system D-Wave has created isn't scalable, and that many of the claims on D-Wave's Web site are simply impossible (or at least impossible to know for certain given our understanding of quantum mechanics).

If functional quantum computers can be built, they will be valuable in factoring large numbers, and therefore extremely useful for decoding and encoding secret information. If one were to be built today, no information on the Internet would be safe. Our current methods of encryption are simple compared to the complicated methods possible in quantum computers. Quantum computers could also be used to search large databases in a fraction of the time that it would take a conventional computer. Other applications could include using quantum computers to study quantum mechanics, or even to design other quantum computers.

But quantum computing is still in its early stages of development, and many computer scientists believe the technology needed to create a practical quantum computer is years away. Quantum computers must have at least several dozen qubits to be able to solve real-world problems, and thus serve as a viable computing method.