Friday, April 30, 2010

Virtual Reality

Introduction to How Virtual Reality Works

What do you think of when you hear the words virtual reality (VR)? Do you imagine someone wearing a clunky helmet attached to a computer with a thick cable? Do visions of crudely rendered pterodactyls haunt you? Do you think of Neo and Morpheus traipsing about the Matrix? Or do you wince at the term, wishing it would just go away?

If the last applies to you, you're likely a computer scientist or engineer, many of whom now avoid the words virtual reality even while they work on technologies most of us associate with VR. Today, you're more likely to hear someone use the words virtual environment (VE) to refer to what the public knows as virtual reality.



Naming discrepancies aside, the concept remains the same - using computer technology to create a simulated, three-dimensional world that a user can manipulate and explore while feeling as if he were in that world. Scientists, theorists and engineers have designed dozens of devices and applications to achieve this goal. Opinions differ on what exactly constitutes a true VR experience, but in general it should include:

* Three-dimensional images that appear to be life-sized from the perspective of the user
* The ability to track a user's motions, particularly his head and eye movements, and correspondingly adjust the images on the user's display to reflect the change in perspective.

Virtual Reality Immersion

In a virtual reality environment, a user experiences immersion, or the feeling of being inside and a part of that world. He is also able to interact with his environment in meaningful ways. The combination of a sense of immersion and interactivity is called telepresence. Computer scientist Jonathan Steuer defined it as “the extent to which one feels present in the mediated environment, rather than in the immediate physical environment.” In other words, an effective VR experience causes you to become unaware of your real surroundings and focus on your existence inside the virtual environment.



Jonathan Steuer proposed two main components of immersion: depth of information and breadth of information. Depth of information refers to the amount and quality of data in the signals a user receives when interacting in a virtual environment. For the user, this could refer to a display’s resolution, the complexity of the environment’s graphics, the sophistication of the system’s audio output, et cetera. Steuer defines breadth of information as the “number of sensory dimensions simultaneously presented.” A virtual environment experience has a wide breadth of information if it stimulates all your senses. Most virtual environment experiences prioritize visual and audio components over other sensory-stimulating factors, but a growing number of scientists and engineers are looking into ways to incorporate a user’s sense of touch. Systems that give a user force feedback and touch interaction are called haptic systems.

For immersion to be effective, a user must be able to explore what appears to be a life-sized virtual environment and be able to change perspectives seamlessly. If the virtual environment consists of a single pedestal in the middle of a room, a user should be able to view the pedestal from any angle and the point of view should shift according to where the user is looking. Dr. Frederick Brooks, a pioneer in VR technology and theory, says that displays must project a frame rate of at least 20 - 30 frames per second in order to create a convincing user experience.
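
To get a feel for what that frame-rate floor means in practice, here's a rough Python sketch (illustrative only, not from the article) that converts a target frame rate into the time budget a rendering loop has for each frame:

```python
def frame_budget_ms(frames_per_second):
    """Milliseconds available to render one frame at a given frame rate."""
    return 1000.0 / frames_per_second

for fps in (20, 30, 60):
    print(f"{fps} fps -> {frame_budget_ms(fps):.1f} ms per frame")
# 20 fps -> 50.0 ms, 30 fps -> 33.3 ms, 60 fps -> 16.7 ms: everything the
# system does for a frame (tracking, simulation, drawing) must fit in that window.
```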

The Virtual Reality Environment


Other sensory output from the VE system should adjust in real time as a user explores the environment. If the environment incorporates 3-D sound, the user must be convinced that the sound’s orientation shifts in a natural way as he maneuvers through the environment. Sensory stimulation must be consistent if a user is to feel immersed within a VE. If the VE shows a perfectly still scene, you wouldn’t expect to feel gale-force winds. Likewise, if the VE puts you in the middle of a hurricane, you wouldn’t expect to feel a gentle breeze or detect the scent of roses.

Lag time between when a user acts and when the virtual environment reflects that action is called latency. Latency usually refers to the delay between the time a user turns his head or moves his eyes and the change in the point of view, though the term can also be used for a lag in other sensory outputs. Studies with flight simulators show that humans can detect a latency of more than 50 milliseconds. When a user detects latency, it causes him to become aware of being in an artificial environment and destroys the sense of immersion.
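
Here's a small, hypothetical sketch of how a developer might check a tracker log against that 50-millisecond threshold; the timestamps and function name are made up for illustration:

```python
DETECTABLE_LATENCY_MS = 50  # threshold reported in the flight-simulator studies

def motion_to_photon_ms(t_head_moved, t_display_updated):
    """Delay between a head movement and the matching change on the display."""
    return (t_display_updated - t_head_moved) * 1000.0

# Hypothetical timestamps (in seconds) from a tracker log.
latency = motion_to_photon_ms(10.000, 10.062)
if latency > DETECTABLE_LATENCY_MS:
    print(f"{latency:.0f} ms of lag -- enough for the user to notice")
```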

An immersive experience suffers if a user becomes aware of the real world around him. Truly immersive experiences make the user forget his real surroundings, effectively causing the computer to become a nonentity. In order to reach the goal of true immersion, developers have to come up with input methods that are more natural for users. As long as a user is aware of the interaction device, he is not truly immersed.


Virtual Reality Interactivity

Immersion within a virtual environment is one thing, but for a user to feel truly involved there must also be an element of interaction. Early applications using the technology common in VE systems today allowed the user to have a relatively passive experience. Users could watch a pre-recorded film while wearing a head-mounted display (HMD). They would sit in a motion chair and watch the film as the system subjected them to various stimuli, such as blowing air on them to simulate wind. While users felt a sense of immersion, interactivity was limited to shifting their point of view by looking around. Their path was pre-determined and unalterable.



Today, you can find virtual roller coasters that use the same sort of technology. DisneyQuest in Orlando, Florida features CyberSpace Mountain, where patrons can design their own roller coaster, then enter a simulator to ride their virtual creation. The system is very immersive, but apart from the initial design phase there isn't any interaction, so it's not an example of a true virtual environment.

Interactivity depends on many factors. Steuer suggests that three of these factors are speed, range and mapping. Steuer defines speed as the rate that a user's actions are incorporated into the computer model and reflected in a way the user can perceive. Range refers to how many possible outcomes could result from any particular user action. Mapping is the system's ability to produce natural results in response to a user's actions.

Navigation within a virtual environment is one kind of interactivity. If a user can direct his own movement within the environment, it can be called an interactive experience. Most virtual environments include other forms of interaction, since users can easily become bored after just a few minutes of exploration. Computer scientist Mary Whitton points out that poorly designed interaction can drastically reduce the sense of immersion, while finding ways to engage users can increase it. When a virtual environment is interesting and engaging, users are more willing to suspend disbelief and become immersed.

True interactivity also includes being able to modify the environment. A good virtual environment will respond to the user's actions in a way that makes sense, even if it only makes sense within the realm of the virtual environment. If a virtual environment changes in outlandish and unpredictable ways, it risks disrupting the user's sense of telepresence.

The Virtual Reality Headset

Today, most VE systems are powered by normal personal computers. PCs are sophisticated enough to develop and run the software necessary to create virtual environments. Graphics are usually handled by powerful graphics cards originally designed with the video gaming community in mind. The same video card that lets you play World of Warcraft is probably powering the graphics for an advanced virtual environment.



VE systems need a way to display images to a user. Many systems use HMDs, which are headsets that contain two monitors, one for each eye. The images create a stereoscopic effect, giving the illusion of depth. Early HMDs used cathode ray tube (CRT) monitors, which were bulky but provided good resolution and quality, or liquid crystal display (LCD) monitors, which were much cheaper but were unable to compete with the quality of CRT displays. Today, LCD displays are much more advanced, with improved resolution and color saturation, and have become more common than CRT monitors.
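
The stereoscopic trick is easy to sketch in code. The snippet below is an illustration rather than any headset's actual API, and the 64 mm interpupillary distance is just a typical assumed value; it computes the two slightly offset eye positions a system would render from:

```python
IPD_METERS = 0.064  # a typical adult interpupillary distance (assumed value)

def eye_positions(head_position, right_vector, ipd=IPD_METERS):
    """Return (left_eye, right_eye) world positions for a given head pose."""
    half = ipd / 2.0
    left = [h - half * r for h, r in zip(head_position, right_vector)]
    right = [h + half * r for h, r in zip(head_position, right_vector)]
    return left, right

left_eye, right_eye = eye_positions([0.0, 1.7, 0.0], [1.0, 0.0, 0.0])
print(left_eye, right_eye)
# The scene is rendered once from each position; the small horizontal offset
# between the two images is what creates the illusion of depth.
```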

Other VE systems project images on the walls, floor and ceiling of a room and are called Cave Automatic Virtual Environments (CAVE). The University of Illinois-Chicago designed the first CAVE display, using a rear projection technique to display images on the walls, floor and ceiling of a small room. Users can move around in a CAVE display, wearing special glasses to complete the illusion of moving through a virtual environment. CAVE displays give users a much wider field of view, which helps in immersion. They also allow a group of people to share the experience at the same time (though the display would track only one user’s point of view, meaning others in the room would be passive observers). CAVE displays are very expensive and require more space than other systems.



Closely related to display technology are tracking systems. Tracking systems analyze the orientation of a user’s point of view so that the computer system sends the right images to the visual display. Most systems require a user to be tethered with cables to a processing unit, limiting the range of motions available to him. Tracker technology developments tend to lag behind other VR technologies because the market for such technology is mainly VR-focused. Without the demands of other disciplines or applications, there isn’t as much interest in developing new ways to track user movements and point of view.

Input devices are also important in VR systems. Currently, input devices range from controllers with two or three buttons to electronic gloves and voice recognition software. There is no standard control system across the discipline. VR scientists and engineers are continuously exploring ways to make user input as natural as possible to increase the sense of telepresence. Some of the more common forms of input devices are:

* Joysticks
* Force balls/tracking balls
* Controller wands
* Datagloves
* Voice recognition
* Motion trackers/bodysuits
* Treadmills

Virtual Reality Games

Scientists are also exploring the possibility of developing biosensors for VR use. A biosensor can detect and interpret nerve and muscle activity. With a properly calibrated biosensor, a computer can interpret how a user is moving in physical space and translate that into the corresponding motions in virtual space. Biosensors may be attached directly to the skin of a user, or may be incorporated into gloves or bodysuits. One limitation to biosensor suits is that they must be custom made for each user or the sensors will not line up properly on the user’s body.



Mary Whitton, of UNC-Chapel Hill, believes that the entertainment industry will drive the development of most VR technology going forward. The video game industry in particular has contributed advancements in graphics and sound capabilities that engineers can incorporate into virtual reality systems’ designs. One advance that Whitton finds particularly interesting is the Nintendo Wii’s wand controller. The controller is not only a commercially available device with some tracking capabilities; it’s also affordable and appeals to people who don’t normally play video games. Since tracking and input devices are two areas that traditionally have fallen behind other VR technologies, this controller could be the first of a new wave of technological advances useful to VR systems.

Some programmers envision the Internet developing into a three-dimensional virtual space, where you navigate through virtual landscapes to access information and entertainment. Web sites could take the form of three-dimensional locations, allowing users to explore in a much more literal way than before. Programmers have developed several different computer languages and Web browsers to achieve this vision. Some of these include:

* Virtual Reality Modeling Language (VRML) - the earliest three-dimensional modeling language for the Web.
* 3DML - a three-dimensional modeling language where a user can visit a spot (or Web site) through most Internet browsers after installing a plug-in.
* X3D - the language that replaced VRML as the standard for creating virtual environments on the Internet.
* Collaborative Design Activity (COLLADA) - a format used to allow file interchanges within three-dimensional programs.

Of course, many VE experts would argue that without an HMD, Internet-based systems are not true virtual environments. They lack critical elements of immersion, particularly tracking and displaying images as life-sized.

Virtual Reality Applications

In the early 1990s, the public's exposure to virtual reality rarely went beyond a relatively primitive demonstration of a few blocky figures being chased around a chessboard by a crude pterodactyl. While the entertainment industry is still interested in virtual reality applications in games and theatre experiences, the really interesting uses for VR systems are in other fields.

Some architects create virtual models of their building plans so that people can walk through the structure before the foundation is even laid. Clients can move around exteriors and interiors and ask questions, or even suggest alterations to the design. Virtual models can give you a much more accurate idea of how moving through a building will feel than a miniature model can.

Car companies have used VR technology to build virtual prototypes of new vehicles, testing them thoroughly before producing a single physical part. Designers can make alterations without having to scrap the entire model, as they often would with physical ones. The development process becomes more efficient and less expensive as a result.



Virtual environments are used in training programs for the military, the space program and even medical students. The military has long been a supporter of VR technology and development. Training programs can include everything from vehicle simulations to squad combat. On the whole, VR systems are much safer and, in the long run, less expensive than alternative training methods. Soldiers who have gone through extensive VR training have proven to be as effective as those who trained under traditional conditions.

In medicine, staff can use virtual environments to train in everything from surgical procedures to diagnosing a patient. Surgeons have used virtual reality technology to not only train and educate, but also to perform surgery remotely by using robotic devices. The first robotic surgery was performed in 1998 at a hospital in Paris. The biggest challenge in using VR technology to perform robotic surgery is latency, since any delay in such a delicate procedure can feel unnatural to the surgeon. Such systems also need to provide finely-tuned sensory feedback to the surgeon.

Another medical use of VR technology is psychological therapy. Dr. Barbara Rothbaum of Emory University and Dr. Larry Hodges of the Georgia Institute of Technology pioneered the use of virtual environments in treating people with phobias and other psychological conditions. They use virtual environments as a form of exposure therapy, where a patient is exposed -- under controlled conditions -- to stimuli that cause him distress. The application has two big advantages over real exposure therapy: it is much more convenient and patients are more willing to try the therapy because they know it isn't the real world. Their research led to the founding of the company Virtually Better, which sells VR therapy systems to doctors in 14 countries.

Virtual Reality Challenges and Concerns


The big challenges in the field of virtual reality are developing better tracking systems, finding more natural ways to allow users to interact within a virtual environment and decreasing the time it takes to build virtual spaces. While there are a few tracking system companies that have been around since the earliest days of virtual reality, most companies are small and don’t last very long. Likewise, there aren’t many companies that are working on input devices specifically for VR applications. Most VR developers have to rely on and adapt technology originally meant for another discipline, and they have to hope that the company producing the technology stays in business. As for creating virtual worlds, it can take a long time to create a convincing virtual environment - the more realistic the environment, the longer it takes to make it. It could take a team of programmers more than a year to duplicate a real room accurately in virtual space.

Another challenge for VE system developers is creating a system that avoids bad ergonomics. Many systems rely on hardware that encumbers a user or limits his options through physical tethers. Without well-designed hardware, a user could have trouble with his sense of balance or inertia, which decreases the sense of telepresence, or he could experience cybersickness, with symptoms that can include disorientation and nausea. Not all users seem to be at risk for cybersickness -- some people can explore a virtual environment for hours with no ill effects, while others may feel queasy after just a few minutes.



Some psychologists are concerned that immersion in virtual environments could psychologically affect a user. They suggest that VE systems that place a user in violent situations, particularly as the perpetrator of violence, could result in the user becoming desensitized. In effect, there’s a fear that VE entertainment systems could breed a generation of sociopaths. Others aren’t as worried about desensitization, but do warn that convincing VE experiences could lead to a kind of cyber addiction. There have been several news stories of gamers neglecting their real lives for their online, in-game presence. Engaging virtual environments could potentially be more addictive.

Another emerging concern involves criminal acts. In the virtual world, defining acts such as murder or sex crimes has been problematic. At what point can authorities charge a person with a real crime for actions within a virtual environment? Studies indicate that people can have real physical and emotional reactions to stimuli within a virtual environment, and so it’s quite possible that a victim of a virtual attack could feel real emotional trauma. Can the attacker be punished for causing real-life distress? We don’t yet have answers to these questions.

Virtual Reality History

The concept of virtual reality has been around for decades, even though the public really only became aware of it in the early 1990s. In the mid 1950s, a cinematographer named Morton Heilig envisioned a theatre experience that would stimulate all of his audience’s senses, drawing them into the stories more effectively. He built a single-user console in 1960 called the Sensorama that included a stereoscopic display, fans, odor emitters, stereo speakers and a moving chair. He also invented a head-mounted television display designed to let a user watch television in 3-D. Users were passive audiences for the films, but many of Heilig’s concepts would find their way into the VR field.

Philco Corporation engineers developed the first HMD in 1961, called the Headsight. The helmet included a video screen and tracking system, which the engineers linked to a closed circuit camera system. They intended the HMD for use in dangerous situations -- a user could observe a real environment remotely, adjusting the camera angle by turning his head. Bell Laboratories used a similar HMD for helicopter pilots. They linked HMDs to infrared cameras attached to the bottom of helicopters, which allowed pilots to have a clear field of view while flying in the dark.



In 1965, a computer scientist named Ivan Sutherland envisioned what he called the “Ultimate Display.” Using this display, a person could look into a virtual world that would appear as real as the physical world the user lived in. This vision guided almost all the developments within the field of virtual reality. Sutherland’s concept included:

* A virtual world that appears real to any observer, seen through an HMD and augmented through three-dimensional sound and tactile stimuli
* A computer that maintains the world model in real time
* The ability for users to manipulate virtual objects in a realistic, intuitive way

In 1966, Sutherland built an HMD that was tethered to a computer system. The computer provided all the graphics for the display (up to this point, HMDs had only been linked to cameras). He used a suspension system to hold the HMD, as it was far too heavy for a user to support comfortably. The HMD could display images in stereo, giving the illusion of depth, and it could also track the user’s head movements so that the field of view would change appropriately as the user looked around.

Virtual Reality Development

NASA, the Department of Defense and the National Science Foundation funded much of the research and development for virtual reality projects. The CIA contributed $80,000 in research money to Sutherland. Early applications mainly fell into the vehicle simulator category and were used in training exercises. Because the flight experiences in simulators were similar but not identical to real flights, the military, NASA, and airlines instituted policies that required pilots to have a significant lag time (at least one day) between a simulated flight and a real flight in case their real performance suffered.



For years, VR technology remained out of the public eye. Almost all development focused on vehicle simulations until the 1980s. Then, in 1984, a computer scientist named Michael McGreevy began to experiment with VR technology as a way to advance human-computer interface (HCI) designs. HCI still plays a big role in VR research, and moreover it led to the media picking up on the idea of VR a few years later.

Jaron Lanier coined the term Virtual Reality in 1987. In the 1990s, the media latched on to the concept of virtual reality and ran with it. The resulting hype gave many people an unrealistic expectation of what virtual reality technologies could do. As the public realized that virtual reality was not yet as sophisticated as they had been led to believe, interest waned. The term virtual reality began to fade away with the public’s expectations. Today, VE developers try not to exaggerate the capabilities or applications of VE systems, and they also tend to avoid the term virtual reality.

Quantum Computers

Introduction to How Quantum Computers Will Work

The massive amount of processing power generated by computer manufacturers has not yet been able to quench our thirst for speed and computing capacity. In 1947, American computer engineer Howard Aiken said that just six electronic digital computers would satisfy the computing needs of the United States. Others have made similar errant predictions about the amount of computing power that would support our growing technological needs. Of course, Aiken didn't count on the large amounts of data generated by scientific research, the proliferation of personal computers or the emergence of the Internet, which have only fueled our need for more, more and more computing power.



Will we ever have the amount of computing power we need or want? If, as Moore's Law states, the number of transistors on a microprocessor continues to double every 18 months, the year 2020 or 2030 will find the circuits on a microprocessor measured on an atomic scale. And the logical next step will be to create quantum computers, which will harness the power of atoms and molecules to perform memory and processing tasks. Quantum computers have the potential to perform certain calculations significantly faster than any silicon-based computer.
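
As a rough illustration of that extrapolation, the sketch below projects a transistor count forward under an 18-month doubling period; the starting figure is only a ballpark for a circa-2000 desktop processor, not a number from the article:

```python
def projected_transistors(start_count, start_year, target_year, doubling_months=18):
    """Project a transistor count forward under a fixed doubling period."""
    doublings = (target_year - start_year) * 12 / doubling_months
    return start_count * 2 ** doublings

start = 42_000_000  # roughly a circa-2000 desktop processor (assumed figure)
for year in (2010, 2020, 2030):
    print(year, f"{projected_transistors(start, 2000, year):.2e}")
# Each 18-month doubling multiplies the count again; by 2020-2030 the
# projection pushes circuit features toward atomic-scale dimensions.
```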

Scientists have already built basic quantum computers that can perform certain calculations, but a practical quantum computer is still years away. In this article, you'll learn what a quantum computer is and just what it'll be used for in the next era of computing.

You don't have to go back too far to find the origins of quantum computing. While computers have been around for the majority of the 20th century, quantum computing was first theorized less than 30 years ago, by a physicist at the Argonne National Laboratory. Paul Benioff is credited with first applying quantum theory to computers in 1981. Benioff theorized about creating a quantum Turing machine.

Defining the Quantum Computer

The Turing machine, developed by Alan Turing in the 1930s, is a theoretical device that consists of a tape of unlimited length that is divided into little squares. Each square can either hold a symbol (1 or 0) or be left blank. A read-write device reads these symbols and blanks, which gives the machine its instructions to perform a certain program. Does this sound familiar? Well, in a quantum Turing machine, the difference is that the tape exists in a quantum state, as does the read-write head. This means that the symbols on the tape can be either 0 or 1 or a superposition of 0 and 1; in other words, the symbols are both 0 and 1 (and all points in between) at the same time. While a normal Turing machine can only perform one calculation at a time, a quantum Turing machine can perform many calculations at once.



Today's computers, like a Turing machine, work by manipulating bits that exist in one of two states: a 0 or a 1. Quantum computers aren't limited to two states; they encode information as quantum bits, or qubits, which can exist in superposition. Qubits represent atoms, ions, photons or electrons and their respective control devices that are working together to act as computer memory and a processor. Because a quantum computer can contain these multiple states simultaneously, it has the potential to be millions of times more powerful than today's most powerful supercomputers.

This superposition of qubits is what gives quantum computers their inherent parallelism. According to physicist David Deutsch, this parallelism allows a quantum computer to work on a million computations at once, while your desktop PC works on one. A 30-qubit quantum computer would equal the processing power of a conventional computer that could run at 10 teraflops (trillions of floating-point operations per second). Today's typical desktop computers run at speeds measured in gigaflops (billions of floating-point operations per second).
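
One way to see where that power comes from: an n-qubit register is described by 2^n complex amplitudes, so even storing the state classically doubles in size with every added qubit. A toy bookkeeping sketch (this is not a quantum computer, just the arithmetic):

```python
def amplitudes(n_qubits):
    """An n-qubit register is described by 2**n complex amplitudes."""
    return 2 ** n_qubits

BYTES_PER_AMPLITUDE = 16  # one complex number at double precision

for n in (10, 20, 30):
    print(f"{n} qubits: {amplitudes(n):,} amplitudes, "
          f"{amplitudes(n) * BYTES_PER_AMPLITUDE:,} bytes to store classically")
# 30 qubits already needs about 16 GiB just to hold the state vector, which
# hints at the parallelism a quantum machine gets from superposition.
```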

Quantum computers also utilize another aspect of quantum mechanics known as entanglement. One problem with the idea of quantum computers is that if you try to look at the subatomic particles, you could bump them, and thereby change their value. If you look at a qubit in superposition to determine its value, the qubit will assume the value of either 0 or 1, but not both (effectively turning your spiffy quantum computer into a mundane digital computer). To make a practical quantum computer, scientists have to devise ways of making measurements indirectly to preserve the system's integrity. Entanglement provides a potential answer. In quantum physics, if you apply an outside force to two atoms, it can cause them to become entangled, and the second atom can take on the properties of the first atom. So if left alone, an atom will spin in all directions. The instant it is disturbed it chooses one spin, or one value; and at the same time, the second entangled atom will choose an opposite spin, or value. This allows scientists to know the value of the qubits without actually looking at them.
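
As a toy illustration of that correlation (not the article's own example), the snippet below builds a two-qubit state in which the pair always measures to opposite values, then samples a few joint measurements:

```python
import numpy as np

# Prepare the entangled pair (|01> + |10>) / sqrt(2), in which the two qubits
# always take opposite values, then sample a few joint measurements.
state = np.zeros(4, dtype=complex)
state[0b01] = 1 / np.sqrt(2)
state[0b10] = 1 / np.sqrt(2)

probabilities = np.abs(state) ** 2
rng = np.random.default_rng(0)
for outcome in rng.choice(4, size=5, p=probabilities):
    print(f"atom A spin = {outcome >> 1}, atom B spin = {outcome & 1}")
# Measuring one atom fixes the other's value to the opposite one, which is
# the correlation the paragraph above describes.
```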

Today's Quantum Computers

Quantum computers could one day replace silicon chips, just like the transistor once replaced the vacuum tube. But for now, the technology required to develop such a quantum computer is beyond our reach. Most research in quantum computing is still very theoretical.

The most advanced quantum computers have not gone beyond manipulating 16 qubits, meaning that they are a far cry from practical application. However, the potential remains that quantum computers one day could perform, quickly and easily, calculations that are incredibly time-consuming on conventional computers. Several key advancements have been made in quantum computing in the last few years. Let's look at a few of the quantum computers that have been developed.

1998
Los Alamos and MIT researchers managed to spread a single qubit across three nuclear spins in each molecule of a liquid solution of alanine (an amino acid used to analyze quantum state decay) or trichloroethylene (a chlorinated hydrocarbon used for quantum error correction) molecules. Spreading out the qubit made it harder to corrupt, allowing researchers to use entanglement to study interactions between states as an indirect method for analyzing the quantum information.

2000
In March, scientists at Los Alamos National Laboratory announced the development of a 7-qubit quantum computer within a single drop of liquid. The quantum computer uses nuclear magnetic resonance (NMR) to manipulate particles in the atomic nuclei of molecules of trans-crotonic acid, a simple fluid consisting of molecules made up of six hydrogen and four carbon atoms. The NMR is used to apply electromagnetic pulses, which force the particles to line up. These particles in positions parallel or counter to the magnetic field allow the quantum computer to mimic the information-encoding of bits in digital computers.

Researchers at IBM-Almaden Research Center developed what they claimed was the most advanced quantum computer to date in August. The 5-qubit quantum computer was designed to allow the nuclei of five fluorine atoms to interact with each other as qubits, be programmed by radio frequency pulses and be detected by NMR instruments similar to those used in hospitals (see How Magnetic Resonance Imaging Works for details). Led by Dr. Isaac Chuang, the IBM team was able to solve in one step a mathematical problem that would take conventional computers repeated cycles. The problem, called order-finding, involves finding the period of a particular function, a typical aspect of many mathematical problems involved in cryptography.

2001
Scientists from IBM and Stanford University successfully demonstrated Shor's Algorithm on a quantum computer. Shor's Algorithm is a method for finding the prime factors of numbers (which plays an intrinsic role in cryptography). They used a 7-qubit computer to find the factors of 15. The computer correctly deduced that the prime factors were 3 and 5.
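
The arithmetic behind that result can be sketched classically. The quantum computer's job is to find the period r of a^x mod N quickly; once r is known, the factors fall out of a greatest-common-divisor calculation. Here the period is found by brute force purely for illustration:

```python
from math import gcd

N, a = 15, 7        # a is any base that shares no factor with N; 7 works nicely here
r = 1
while pow(a, r, N) != 1:    # find the period r of a^x mod N
    r += 1
# For a = 7, the powers 7, 4, 13, 1 repeat with period r = 4.
print(gcd(pow(a, r // 2) - 1, N), gcd(pow(a, r // 2) + 1, N))   # prints: 3 5
```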

2005
The Institute of Quantum Optics and Quantum Information at the University of Innsbruck announced that scientists had created the first qubyte, or series of 8 qubits, using ion traps.



2006
Scientists in Waterloo and Massachusetts devised methods for quantum control on a 12-qubit system. Quantum control becomes more complex as systems employ more qubits.

2007
Canadian startup company D-Wave demonstrated a 16-qubit quantum computer. The computer solved a sudoku puzzle and other pattern matching problems. The company claims it will produce practical systems by 2008. Skeptics believe practical quantum computers are still decades away, that the system D-Wave has created isn't scalable, and that many of the claims on D-Wave's Web site are simply impossible (or at least impossible to know for certain given our understanding of quantum mechanics).

If functional quantum computers can be built, they will be valuable in factoring large numbers, and therefore extremely useful for decoding and encoding secret information. If one were to be built today, no information on the Internet would be safe. Our current methods of encryption are simple compared to the complicated methods possible in quantum computers. Quantum computers could also be used to search large databases in a fraction of the time that it would take a conventional computer. Other applications could include using quantum computers to study quantum mechanics, or even to design other quantum computers.

But quantum computing is still in its early stages of development, and many computer scientists believe the technology needed to create a practical quantum computer is years away. Quantum computers must have at least several dozen qubits to be able to solve real-world problems, and thus serve as a viable computing method.

Saturday, March 27, 2010

Nanotechnology

Introduction to How Nanotechnology Works

There's an unprecedented multidisciplinary convergence of scientists dedicated to the study of a world so small, we can't see it -- even with a light microscope. That world is the field of nanotechnology, the realm of atoms and nanostructures. Nanotechnology is so new, no one is really sure what will come of it. Even so, predictions range from the ability to reproduce things like diamonds and food to the world being devoured by self-replicating nanorobots.



In order to understand the unusual world of nanotechnology, we need to get an idea of the units of measure involved. A centimeter is one-hundredth of a meter, a millimeter is one-thousandth of a meter, and a micrometer is one-millionth of a meter, but all of these are still huge compared to the nanoscale. A nanometer (nm) is one-billionth of a meter, smaller than the wavelength of visible light and a hundred-thousandth the width of a human hair.
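
A quick back-of-the-envelope check of those scales (the 100-micrometer hair width is just a commonly quoted rough figure):

```python
nanometer = 1e-9    # meters
micrometer = 1e-6
hair_width = 100 * micrometer   # a commonly quoted rough figure

print(hair_width / nanometer)   # ~100,000 nm across a single human hair
print(400e-9 / nanometer)       # violet light is about 400 nm -- still far larger than 1 nm
```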

As small as a nanometer is, it's still large compared to the atomic scale. An atom has a diameter of about 0.1 nm. An atom's nucleus is much smaller -- about 0.00001 nm. Atoms are the building blocks for all matter in our universe. You and everything around you are made of atoms. Nature has perfected the science of manufacturing matter molecularly. For instance, our bodies are assembled in a specific manner from millions of living cells. Cells are nature's nanomachines. At the atomic scale, elements are at their most basic level. On the nanoscale, we can potentially put these atoms together to make almost anything.

In a lecture called "Small Wonders: The World of Nanoscience," Nobel Prize winner Dr. Horst Störmer said that the nanoscale is more interesting than the atomic scale because the nanoscale is the first point where we can assemble something -- it's not until we start putting atoms together that we can make anything useful.

The World of Nanotechnology

Experts sometimes disagree about what constitutes the nanoscale, but in general, you can think of nanotechnology dealing with anything measuring between 1 and 100 nm. Larger than that is the microscale, and smaller than that is the atomic scale.



Nanotechnology is rapidly becoming an interdisciplinary field. Biologists, chemists, physicists and engineers are all involved in the study of substances at the nanoscale. Dr. Störmer hopes that the different disciplines develop a common language and communicate with one another. Only then, he says, can we effectively teach nanoscience since you can't understand the world of nanotechnology without a solid background in multiple sciences.

One of the exciting and challenging aspects of the nanoscale is the role that quantum mechanics plays in it. The rules of quantum mechanics are very different from classical physics, which means that the behavior of substances at the nanoscale can sometimes contradict common sense by behaving erratically. You can't walk up to a wall and immediately teleport to the other side of it, but at the nanoscale an electron can -- it's called electron tunneling. Substances that are insulators in bulk form, meaning they can't carry an electric charge, might become semiconductors when reduced to the nanoscale. Melting points can change due to an increase in surface area. Much of nanoscience requires that you forget what you know and start learning all over again.

So what does this all mean? Right now, it means that scientists are experimenting with substances at the nanoscale to learn about their properties and how we might be able to take advantage of them in various applications. Engineers are trying to use nano-size wires to create smaller, more powerful microprocessors. Doctors are searching for ways to use nanoparticles in medical applications. Still, we've got a long way to go before nanotechnology dominates the technology and medical markets.

Nanowires and Carbon Nanotubes

Currently, scientists find two nano-size structures of particular interest: nanowires and carbon nanotubes. Nanowires are wires with a very small diameter, sometimes as small as 1 nanometer. Scientists hope to use them to build tiny transistors for computer chips and other electronic devices. In the last couple of years, carbon nanotubes have overshadowed nanowires. We're still learning about these structures, but what we've learned so far is very exciting.



A carbon nanotube is a nano-size cylinder of carbon atoms. Imagine a sheet of carbon atoms, which would look like a sheet of hexagons. If you roll that sheet into a tube, you'd have a carbon nanotube. Carbon nanotube properties depend on how you roll the sheet. In other words, even though all carbon nanotubes are made of carbon, they can be very different from one another based on how you align the individual atoms.
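
The usual way to write down "how you roll the sheet" is a pair of chiral indices (n, m); a standard geometric formula then gives the tube's diameter. A small sketch (illustrative, not from the article):

```python
from math import pi, sqrt

GRAPHENE_LATTICE_NM = 0.246   # lattice constant of the carbon sheet

def nanotube_diameter_nm(n, m, a=GRAPHENE_LATTICE_NM):
    """Diameter of a nanotube rolled up along the chiral vector (n, m)."""
    return a * sqrt(n**2 + n * m + m**2) / pi

for n, m in [(10, 10), (17, 0), (12, 6)]:
    print(f"({n},{m}) roll-up: {nanotube_diameter_nm(n, m):.2f} nm across")
# Different (n, m) rollings give tubes of different diameter and, as the
# article notes, very different behavior even though they're all pure carbon.
```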

With the right arrangement of atoms, you can create a carbon nanotube that's hundreds of times stronger than steel, but six times lighter. Engineers plan to make building material out of carbon nanotubes, particularly for things like cars and airplanes. Lighter vehicles would mean better fuel efficiency, and the added strength translates to increased passenger safety.

Carbon nanotubes can also be effective semiconductors with the right arrangement of atoms. Scientists are still working on finding ways to make carbon nanotubes a realistic option for transistors in microprocessors and other electronics.

Products with Nanotechnology

You might be surprised to find out how many products on the market are already benefiting from nanotechnology.

# Sunscreen - Many sunscreens contain nanoparticles of zinc oxide or titanium oxide. Older sunscreen formulas use larger particles, which is what gives most sunscreens their whitish color. Smaller particles are less visible, meaning that when you rub the sunscreen into your skin, it doesn't give you a whitish tinge.

# Self-cleaning glass - A company called Pilkington offers a product they call Activ Glass, which uses nanoparticles to make the glass photocatalytic and hydrophilic. The photocatalytic effect means that when UV radiation from light hits the glass, nanoparticles become energized and begin to break down and loosen organic molecules on the glass (in other words, dirt). Hydrophilic means that when water makes contact with the glass, it spreads across the glass evenly, which helps wash the glass clean.

# Clothing - Scientists are using nanoparticles to enhance your clothing. By coating fabrics with a thin layer of zinc oxide nanoparticles, manufacturers can create clothes that give better protection from UV radiation. Some clothes have nanoparticles in the form of little hairs or whiskers that help repel water and other materials, making the clothing stain-resistant.

# Scratch-resistant coatings - Engineers discovered that adding aluminum silicate nanoparticles to scratch-resistant polymer coatings made the coatings more effective, increasing resistance to chipping and scratching. Scratch-resistant coatings are common on everything from cars to eyeglass lenses.

# Antimicrobial bandages - Scientist Robert Burrell created a process to manufacture antibacterial bandages using nanoparticles of silver. Silver ions block microbes' cellular respiration. In other words, silver smothers harmful cells, killing them.

# Swimming pool cleaners and disinfectants - EnviroSystems, Inc. developed a mixture (called a nanoemulsion) of nano-sized oil drops mixed with a bactericide. The oil particles adhere to bacteria, making the delivery of the bactericide more efficient and effective.

New products incorporating nanotechnology are coming out every day. Wrinkle-resistant fabrics, deep-penetrating cosmetics, liquid crystal displays (LCD) and other conveniences using nanotechnology are on the market. Before long, we'll see dozens of other products that take advantage of nanotechnology ranging from Intel microprocessors to bio-nanobatteries, capacitors only a few nanometers thick. While this is exciting, it's only the tip of the iceberg as far as how nanotechnology may impact us in the future.

The Future of Nanotechnology

In the world of "Star Trek," machines called replicators can produce practically any physical object, from weapons to a steaming cup of Earl Grey tea. Long considered to be exclusively the product of science fiction, today some people believe replicators are a very real possibility. They call it molecular manufacturing, and if it ever does become a reality, it could drastically change the world.



Atoms and molecules stick together because they have complementary shapes that lock together, or charges that attract. Just like with magnets, a positively charged atom will stick to a negatively charged atom. As millions of these atoms are pieced together by nanomachines, a specific product will begin to take shape. The goal of molecular manufacturing is to manipulate atoms individually and place them in a pattern to produce a desired structure.

The first step would be to develop nanoscopic machines, called assemblers, that scientists can program to manipulate atoms and molecules at will. Rice University Professor Richard Smalley points out that it would take a single nanoscopic machine millions of years to assemble a meaningful amount of material. In order for molecular manufacturing to be practical, you would need trillions of assemblers working together simultaneously. Eric Drexler believes that assemblers could first replicate themselves, building other assemblers. Each generation would build another, resulting in exponential growth until there are enough assemblers to produce objects.

Trillions of assemblers and replicators could fill an area smaller than a cubic millimeter, and could still be too small for us to see with the naked eye. Assemblers and replicators could work together to automatically construct products, and could eventually replace all traditional labor methods. This could vastly decrease manufacturing costs, thereby making consumer goods plentiful, cheaper and stronger. Eventually, we could be able to replicate anything, including diamonds, water and food. Famine could be eradicated by machines that fabricate foods to feed the hungry.

Nanotechnology may have its biggest impact on the medical industry. Patients will drink fluids containing nanorobots programmed to attack and reconstruct the molecular structure of cancer cells and viruses. There's even speculation that nanorobots could slow or reverse the aging process, and life expectancy could increase significantly. Nanorobots could also be programmed to perform delicate surgeries -- such nanosurgeons could work at a level a thousand times more precise than the sharpest scalpel. By working on such a small scale, a nanorobot could operate without leaving the scars that conventional surgery does. Additionally, nanorobots could change your physical appearance. They could be programmed to perform cosmetic surgery, rearranging your atoms to change your ears, nose, eye color or any other physical feature you wish to alter.

Nanotechnology has the potential to have a positive effect on the environment. For instance, scientists could program airborne nanorobots to rebuild the thinning ozone layer. Nanorobots could remove contaminants from water sources and clean up oil spills. Manufacturing materials using the bottom-up method of nanotechnology also creates less pollution than conventional manufacturing processes. Our dependence on non-renewable resources would diminish with nanotechnology. Cutting down trees, mining coal or drilling for oil may no longer be necessary -- nanomachines could produce those resources.

Many nanotechnology experts feel that these applications are well outside the realm of possibility, at least for the foreseeable future. They caution that the more exotic applications are only theoretical. Some worry that nanotechnology will end up like virtual reality -- in other words, the hype surrounding nanotechnology will continue to build until the limitations of the field become public knowledge, and then interest (and funding) will quickly dissipate.

Nanotechnology Challenges, Risks and Ethics

The most immediate challenge in nanotechnology is that we need to learn more about materials and their properties at the nanoscale. Universities and corporations across the world are rigorously studying how atoms fit together to form larger structures. We're still learning about how quantum mechanics impacts substances at the nanoscale.



Because elements at the nanoscale behave differently than they do in their bulk form, there's a concern that some nanoparticles could be toxic. Some doctors worry that nanoparticles are so small that they could easily cross the blood-brain barrier, a membrane that protects the brain from harmful chemicals in the bloodstream. If we plan on using nanoparticles to coat everything from our clothing to our highways, we need to be sure that they won't poison us.

Closely related to the knowledge barrier is the technical barrier. In order for the incredible predictions regarding nanotechnology to come true, we have to find ways to mass produce nano-size products like transistors and nanowires. While we can use nanoparticles to build things like tennis rackets and make wrinkle-free fabrics, we can't make really complex microprocessor chips with nanowires yet.

There are some hefty social concerns about nanotechnology too. Nanotechnology may also allow us to create more powerful weapons, both lethal and non-lethal. Some organizations are concerned that we'll only get around to examining the ethical implications of nanotechnology in weaponry after these devices are built. They urge scientists and politicians to examine carefully all the possibilities of nanotechnology before designing increasingly powerful weapons.

If nanotechnology in medicine makes it possible for us to enhance ourselves physically, is that ethical? In theory, medical nanotechnology could make us smarter, stronger and give us other abilities ranging from rapid healing to night vision. Should we pursue such goals? Could we continue to call ourselves human, or would we become transhuman -- the next step on man's evolutionary path? Since almost every technology starts off as very expensive, would this mean we'd create two races of people -- a wealthy race of modified humans and a poorer population of unaltered people? We don't have answers to these questions, but several organizations are urging nanoscientists to consider these implications now, before it becomes too late.

Not all questions involve altering the human body -- some deal with the world of finance and economics. If molecular manufacturing becomes a reality, how will that impact the world's economy? Assuming we can build anything we need with the click of a button, what happens to all the manufacturing jobs? If you can create anything using a replicator, what happens to currency? Would we move to a completely electronic economy? Would we even need money?

Whether we'll actually need to answer all of these questions is a matter of debate. Many experts think that concerns like grey goo and transhumans are at best premature, and probably unnecessary. Even so, nanotechnology will definitely continue to impact us as we learn more about the enormous potential of the nanoscale.

Saturday, February 6, 2010

How Outsourcing Works

Introduction to How Outsourcing Works

A manager for the fictitious Smith & Co. Manufacturing faced a dilemma. The company's newest product in development showed great promise, but also represented a departure from the company's in-house expertise. To design and engineer the product, Smith & Co. would require an entirely new line of engineers with specialized skills and equipment. However, the cost would take a chunk out of the company's projected profits.

That wasn't the manager's only issue. The product included a component that Smith & Co. didn't manufacture. Adding that equipment, and training workers to use it, would take time -- too much time. The market was ready for their product now. The delay would cost the company.



What could the manager do? He might turn to outsourcing to solve these problems and help his company succeed.

Outsourcing is when a company hires an outside individual or company to perform a specialized task, whether it's making a product or providing a service such as human resources or information technology.

Think of an individual home owner, for instance, who needs his house painted. He could go out and buy paint brushes, rollers, scaffolding, ladders and insurance and then, take the risk that he can do a good enough job -- and not fall off the scaffolding! He'll also be stuck with the expense of purchasing all that equipment for a task that he only needs to do once every few years.

Or, he could just hire a painting contractor. The decision to outsource works in the same way.

Companies in many countries, including the United States, outsource frequently. As of 2004, U.S. companies had outsourced anywhere from 300,000 to almost 1 million jobs. Goldman Sachs projected that number would grow to 6 million by 2014.

What exactly is outsourcing, and how does it apply in the business world? Is it more economical to outsource locally versus overseas? And, which business functions are most likely to be outsourced? Go to the next page to find out.

What is Outsourcing?

When businesses need expertise or skills that they don't have within their organization, they often turn to outsourcing to solve their problems.

Outsourcing means just what it says -- going "out" to find the "source" of what you need. These days many businesses outsource for what they need to serve their customers, both internal and external. An external customer is the entity that ultimately purchases a company's products or services, while an internal customer is the company's own employees or shareholders. Businesses can obtain both products, like machine parts, and services, like payroll, through outsourcing.

Outsourcing probably can trace its roots to large manufacturing companies, which hired outside companies to produce specialized components that they needed for their products. Automakers, for instance, hired companies to make components for air conditioning units, sound systems and sunroofs. In some cases, they moved entire factories to foreign countries.

The big shift in recent years, however, is service outsourcing, which refers to companies hiring outside businesses to provide specialized work and expertise.



Outsourcing offers many advantages. For instance, it allows companies to seek out and hire the best experts for specialized work. Outsourcing also helps companies keep more cash on hand, freeing resources for other purposes, such as capital improvements. It's often cheaper in terms of salaries and benefits, and it reduces risk.

Outsourcing can also help a business focus on its core components without distractions from ancillary and support functions. Another advantage -- illustrated by the fictitious Smith & Co. -- involves speed and nimbleness. It's sometimes quicker and more efficient to hire a specialist to do something than it is to bring your company up to speed.

Many large companies use outsourcing to fill roles in their organization that would be too expensive or inefficient to create themselves. Smaller companies also turn to outsourcing, though the cost savings is sometimes diminished.

Outsourced manufacturing can include building components for aircraft, computer networks or automobiles.

Outsourced service functions can include:

* Call centers
* Payroll and bookkeeping
* Advertising and public relations
* Building maintenance
* Consulting and engineering
* Records
* Supply and inventory
* Field service dispatch
* Purchasing
* Food and cafeteria services
* Security
* Fleet services

This list makes it easy to see why outsourcing has impacted practically all sectors of the business world. Nearly every business has at least one of these functions.

However, outsourcing has some inherent disadvantages. The company often has less direct oversight and control of the product or service it's purchasing, which can threaten the relationship between the company and its customers.

Communication can also cause problems: outsourcing overseas can lead to language barriers. Outsourcing, especially offshore, is sometimes criticized as well, which can mean bad public relations for a company. Security issues, such as keeping proprietary information private, can also arise. All of these challenges fall to the hiring company to manage.

The Economics of Outsourcing

As with most business trends, outsourcing has its roots in simple economics.

Take the case of the fictitious Smith & Co. Manufacturing we discussed earlier. To hire and equip a new line of engineers, the company would have to spend considerable resources planning the venture.

Where would they work?
How many engineers would be needed?
Where would the company find them?

Companies would have to address those questions before hiring, training, housing and equipping the engineers. And once hired, the engineers must be paid. Benefits like medical insurance and retirement contributions typically cost another 30 percent to 40 percent of an individual's salary. If the venture collapses under its own weight, the company's unemployment insurance also will take a hit.
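
A back-of-the-envelope version of that benefits math, using the 30 to 40 percent figure cited above and a placeholder salary (the dollar amounts are illustrative, not from the article):

```python
def fully_loaded_cost(salary, benefits_rate):
    """Salary plus benefits, before training, equipment and office space."""
    return salary * (1 + benefits_rate)

salary = 80_000   # placeholder figure
for rate in (0.30, 0.40):
    print(f"benefits at {rate:.0%}: ${fully_loaded_cost(salary, rate):,.0f} per year")
# An $80,000 engineer actually costs the company roughly $104,000-$112,000
# a year before any of the other start-up expenses are counted.
```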



Putting all this together takes time, and time, as the old saying goes, is money. Using this approach, Smith & Co. would have to be well assured that such a massive investment would pay off steadily over many years to justify it. In many cases, companies would rather be less adventurous with their capital.

On the other hand, if Smith & Co. can find an existing engineering firm with the precise expertise their new product requires, they can save time, money, risk and resources. The new product can launch faster and cheaper, meaning more profits for the company and its shareholders.

Smith & Co. is one example of how outsourcing can impact a company and its workers favorably. The practice is criticized, however, when existing company employees lose their jobs when companies choose instead to outsource them. Outsourcing jobs to companies located overseas is typically singled out for particular condemnation, pitting workers against companies and prompting political debate over regulation of such ventures.

Some critics have said outsourcing overseas results in Americans losing their jobs. There are plenty of examples, however, of overseas outsourcing saving domestic jobs, as well.

According to The Heritage Foundation, a conservative think tank, the most dire predictions estimate that 3.3 million service jobs will be outsourced to foreign countries by 2015. However, over the past decade Americans lost an average of 7.71 million jobs each quarter. Outsourcing, therefore, amounts to a tiny fraction of the jobs lost in the United States. The group also reported that studies show countries with policies that encourage economic freedom strongly correlate with high per-capita production, and that the very nature of outsourcing -- getting more production output from lower production input -- leads to a higher standard of living and more economic growth.

Also, the group reports that companies in other countries are also outsourcing to the United States. The Organization for International Investment reports the number of jobs sent to the United States has grown by 82 percent while the number the United States outsourced overseas grew by just 23 percent. The jobs coming into the United States often pay more than those it outsources overseas, the group reported.

Outsourcing Locally vs. Overseas

We said earlier that one of the main advantages of outsourcing is that it allows companies to access world-class talent -- that is, a company could hire the best widget engineer, not just settle for whoever happens to be available.

But when does it make sense to hire someone overseas, as opposed to someone across the street or in a nearby city? It's a balance of several factors.



Companies typically cite factors such as quality commitment, price, reputation, business-culture compatibility and location when selecting an outsourcing vendor. Additional factors like clearly understood goals, constant management of the business relationship, well-written contracts and strong communication are also important to the ongoing success of an outsourcing relationship.

The choice to go offshore or hire locally can hinge on these factors as well as language compatibility, the labor pool's skill and size, security and privacy, the local education system's ability to support the labor pool, and the legal culture and stability of a country.

People who live and work near one another or at least within the same state or national boundaries will have more in common in terms of language, business culture and background. This can make communications and management easier. Also, it's simply easier to conduct on-site meetings and inspections when one doesn't have to travel overseas to do so. If the particular expertise a company is seeking can be found nearby at a good price, it seems logical most companies would select that option.

As always in business, however, price can be a major concern. Outsourcing overseas is often less expensive, and lower cost is just one of the advantages of going offshore. It holds true for several reasons.

Wages in many countries are lower than in the United States. Labor is one of the prime costs in manufacturing and the service industry, so that alone can mean substantial savings for a company. In the case of manufacturing, raw materials may also be less costly in certain countries. Foreign governments may also offer a more business-friendly regulatory environment, lower corporate taxes, tax shelters and financial incentives for American businesses to invest in their countries.

Often when a company opens a factory overseas it ends up selling the products made there in that country. This allows the company to access a foreign market more economically.

Overseas outsourcing shows no signs of slowing down. NetworkWorld.com reported in January 2008 that research firms expected more overseas outsourcing companies to open that year, giving companies in both the United States and Europe more selection. The research firm Gartner predicted offshore spending would increase by 60 percent for European companies and 40 percent for U.S. companies.

India is a leader among countries that receive outsourcing work from overseas, but other countries, including Russia, China, Ireland and South Africa, are also elbowing their way up the list.

Animation Outsourcing

If you or your children watch cartoons, you're likely benefiting from outsourcing. An industry group recently estimated the worldwide animation production market at between $50 billion and $70 billion.

Many entertainment companies, from Disney to MTV to Cartoon Network to Warner Bros., outsource many of their animated features to studios offshore. Shows like "The Simpsons" and "Futurama," along with feature-length animated movies, may have been conceived in the United States, but they were brought to life overseas.



This isn't a new trend. It started decades ago when Hollywood studios began sending pre-production work to lower-priced overseas studios, where the work was polished and completed. India, South Korea and the Philippines were often destinations for this work.

Studios in these and other countries perform all kinds of animation work, including traditional hand-drawn cel, 2-D, 3-D, special effects, modeling, caricatures, medical animation and the recently popular CG format. The extended boom has spawned hundreds of animation companies employing thousands of artists, animators and technicians using state-of-the-art equipment and techniques, from SGI workstations and special-effects tools to motion-capture software and facilities.

India has been a consistent leader in U.S. animation outsourcing. In 2004, the National Association of Software and Services Companies estimated the country's total animation production sector's revenues at about $250 million.

India's population includes a large number of English-speaking workers, an important advantage when working on English-language animation, and one that helps prevent language-barrier issues. The country also has a large domestic animation market, so it has a high number of top-notch studios coupled with a large pool of talented but reasonably priced technicians and engineers.

Price is another Indian selling point. One survey placed the cost of producing an hour of animation in India at about $60,000, while the same hour produced in the United States would cost about $250,000 to $300,000.

At one point, a $100 million full-length animated film made in America could be made in India for about $20 million.
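A quick calculation with the figures cited above shows why those numbers get studios' attention; this is only arithmetic on the survey estimates, not a claim about any particular production.

```python
# Savings implied by the animation cost figures cited above.
india_per_hour = 60_000
us_per_hour_low, us_per_hour_high = 250_000, 300_000

savings_low = 1 - india_per_hour / us_per_hour_low    # vs. cheaper U.S. estimate
savings_high = 1 - india_per_hour / us_per_hour_high  # vs. pricier U.S. estimate
print(f"Per-hour savings: {savings_low:.0%} to {savings_high:.0%}")

us_film_budget = 100_000_000
india_film_budget = 20_000_000
print(f"Feature-film savings: {1 - india_film_budget / us_film_budget:.0%}")
```

Both comparisons land in the same neighborhood: roughly 75 to 80 percent off the U.S. price.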

Some overseas studios that once relied on their pricing to gain a toehold in the U.S. animation market are now instead pitching top quality work as their main selling point. An October 2007 article in Variety told how a studio called Rough Draft Korea is thriving by "consistently producing the highest-quality animation and on schedule."

Still, price remains an important point. Some industry watchers think China will become a strong player in the animation business through its large labor pool and high number of artists.

Outsourcing Information Technology

As anyone who works with computers knows, having a good support person on hand is a valuable asset. Companies also are turning to outsourcing for their information technology service needs.

Information Technology, or IT, involves creating and supporting computer-based information systems, including networking, software and hardware. The field is in high demand as more businesses come to rely heavily on IT systems.



The demand for outsourcing IT services is growing worldwide. CNET News reported in 2005 that the global market for IT outsourcing topped $84 billion in 2004 and predicted it would grow by about 6 percent each year through 2010. The U.S. market, placed at $33.8 billion in 2004, was expected to grow at 4.2 percent over that period.
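For a sense of what a roughly 6 percent annual growth rate implies, here is a simple compounding sketch from the 2004 global figure. The year-by-year projections it prints are illustrative arithmetic only, not numbers from the CNET report.

```python
# Compounding the roughly 6 percent annual growth rate cited above
# from the 2004 global base of about $84 billion.
market = 84e9          # global IT outsourcing market in 2004 (USD)
annual_growth = 0.06   # predicted annual growth rate

for year in range(2005, 2011):
    market *= 1 + annual_growth
    print(f"{year}: ${market / 1e9:.1f} billion (projected)")
```

At that rate, the global market would pass roughly $119 billion by 2010.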

The research firm Gartner predicted India's domestic IT services market would reach almost $11 billion by 2011. Such growth prompted industry giant IBM to announce in February 2008 that it would open a new IT services center in India by mid-year. Reuters reported that the center would employ about 3,000 people.

The growing demand for outsourcing IT services stems from the costs companies incur by keeping such functions in-house.

There are many advantages to outsourcing IT services. By shopping around, a company can often find a good match for the exact services it needs. It's often cheaper than keeping the function in-house in terms of staffing, training and lost time, and the savings allow companies to concentrate on competing in their core business.

However, there are some disadvantages. Outsourcing IT services doesn't always bring the instant 30 percent savings many studies cite, especially among smaller companies where economies of scale are not present.

Also, entering into an agreement with a poor-quality vendor might trap the hiring company in a cycle of dependence from which it's difficult to escape. The cost of extricating itself from such an agreement can be substantial in terms of lost time and employee morale. A business may also lose some control over this area, and communicating with an outside organization can be a hassle. And in the end, some IT services vendors may simply oversell their true capabilities.

Human Resources Outsourcing

For most businesses, personnel is the largest cost. People are talented, dedicated and valuable, but they're also expensive and sometimes complicated to manage. It's no wonder more businesses are turning to outsourcing for their human resources needs.

A human resources vendor can recruit and screen potential employees, manage their benefits and payroll after they're hired, navigate government regulations and employment laws and even handle disciplinary actions, all at an employer's direction.



A human resources outsourcing firm can improve efficiency and help align a company's workforce with its long-term goals. When a company chooses to outsource human resources, it purchases not only the services but the vendor's expertise and experience in selecting top-quality employees who are well suited for their role.

Outsourcing human resources allows a company to shed the paperwork and business-processing side of managing its workforce. It can also turn recruiting, an expensive and time-consuming chore, over to the vendor. Human resources vendors will also manage payroll and employee benefits, including benefits customer service, Family and Medical Leave Act administration, defined-benefit administration, performance reviews and retirement-contribution administration.

It's up to the company to choose a service level. On one end of the spectrum, a company may outsource a handful of selected repetitive services, such as managing annual open benefits enrollment, administering flexible spending accounts or conducting background checks on prospective employees. This is sometimes called "discrete services."

On the opposite end is total human resources outsourcing, in which a company turns over virtually all human resources functions to an outside vendor. This can include recruiting, payroll, in-house communications and customer relations. The vendor also might consult with the company on long-range workforce recruitment planning and training.

Disadvantages of outsourcing human resources can include a sense of distance between the company and its employees, employee dissatisfaction with the vendor, and the risk that a poorly matched vendor will alienate a company's employees through poor practices.

As markets continue toward globalization and technology improves, outsourcing will continue to grow. Companies continue to re-evaluate and re-invent the way they do business, and outsourcing goods and services is often a consideration. Despite some political opposition, the practice continues to offer cost-benefit incentives, which can help companies all over the world be more competitive.

5 Telecommuting Careers That Can Make You Rich

What's the definition of the perfect commute? From the kitchen table to the couch. Working from home, or telecommuting, is no longer the stuff of cubicle daydreams. In 2007, 2.4 million corporate employees worked from home full time. With improvements in networking technology and changes in corporate philosophy, the right career might be closer than you think.



Of course, you have to be careful. The Internet is flooded with "work at home" scams promising thousands of dollars a month for 20 hours a week, no college degree required. Yes, it's possible to make good money working from home, but it takes planning and patience.

The first tip is to ignore any job that advertises itself as a "telecommute" or "work at home" job. These are most likely pyramid schemes or outright scams. Remember that telecommuting is a benefit of a job, not the job itself. The best way to land a well-paying telecommute job is to look for jobs that offer telecommuting as an option. If you're qualified for the position, then you might be able to negotiate a part-time or full-time telecommute.

1: Programmers

Experienced software and Web programmers can easily pick up freelance projects and work from home. If you have proven skills in Java, C++, SQL and other development languages, you'll have no shortage of work. There are even special Web sites, like Programming from Home, that specialize in freelance programming job listings.



2: Transcription and Translation

Are you a fast typist? Do you speak a foreign language? Do you have a mind for legal or medical terminology? Welcome to the world of transcription and translation.



Translation gigs offer the best pay. Businesses, book publishers, Web sites -- just about anyone who produces marketing or editorial content -- need the help of experienced translators to push their products into new global markets.

In the medical field, hospitals and doctors' offices need trained transcriptionists and coders to document procedures for insurance and record-keeping purposes. Law offices also need fast, accurate typists who can transcribe an audio or video recording of a deposition.

3: Freelance Writing/Editing

For experienced journalists, editors and copywriters, freelance writing and editing can be an excellent career move. Many large newspapers and magazines have taken on more freelancers to save money on full-time employees. And Web content is in high demand. For editors, there are opportunities to manage a team of freelancers, help an executive write his memoirs or do freelance editing for fiction and non-fiction authors.




4: Call Center

Outsourcing isn't just for India. Nearly 700,000 Americans and Canadians work as call-center agents from their homes. With IP (Internet Protocol)-based phone networks, it's much easier to route help desk calls to any phone, anywhere. You won't get rich answering calls from home, but you could earn around $2,000 a month working 30 to 35 hours a week.
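Those figures translate into a rough hourly wage. The weeks-per-month factor below is an assumption used only to illustrate the arithmetic.

```python
# Implied hourly pay from the figures above: about $2,000 a month
# at 30 to 35 hours a week, assuming roughly 4.33 weeks per month.
monthly_pay = 2_000
weeks_per_month = 52 / 12   # about 4.33

for hours_per_week in (30, 35):
    hourly = monthly_pay / (hours_per_week * weeks_per_month)
    print(f"{hours_per_week} hrs/week -> about ${hourly:.2f} per hour")
```

That works out to somewhere in the neighborhood of $13 to $15 an hour.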



5: Home Business

By setting aside space for a home office, any small business can be turned into a home business. Get a 1-800 number, a fax number, an e-mail address and a well-designed Web page, and no one needs to know that your corporate headquarters are in your basement. There are virtually unlimited options for running a small business out of your home: Web design, graphic design, computer and electronics repair, Web-based retail, virtual executive assistant, life coach, psychologist, dog groomer. The list goes on and on.

Saturday, January 2, 2010

10 Popular Cell Phones

If you were to look back at movies from the early 1990s, every once in a while you could catch a glimpse of one of the characters using a cell phone. The device is about the size of a home phone, maybe bigger, and is weighed down by a huge battery. The gargantuan size of the cell phone is usually meant to poke fun at the character in question, who's probably a pompous, wealthy executive sporting sunglasses and driving a convertible.



Now, of course, the cell phones you see in movies and in advertisements are painted in a much sleeker, hipper light -- they're several times smaller and slimmer, designed to slip into your pocket easily, and flip or glide open with the slightest gesture. Cell phones aren't just cell phones anymore. Now so-called smartphones combine extra features such as camera and video, MP3 players, Internet browsing and e-mail and GPS navigation, making them all the more useful and attractive to buyers.


The cell phone boom that started in the late '90s is still ongoing, and every new innovation creates more buzz and skyrockets a device into popularity. Which cell phones have made some of the biggest impacts on the industry?

10: Samsung Flipshot

Most cell phones these days come with the ability to snap digital photos, whether the owner wants it or not. Because a cell phone is only so large and is designed to perform many different tasks, the quality of pictures taken from cell phones varies, ranging from OK to grainy, pixelated and blurry. In other words, cell phones aren't generally known for taking great pictures.



The Samsung Flipshot, however, caught people's attention with its high-resolution pictures and unique design. The phone comes with a built-in 3.0-megapixel resolution camera, along with a flash for low-light photos, auto focus for sharper images and camcorder capability to capture video.


But taking a photo isn't a simple matter of flipping open your phone and pressing a button or two -- the Flipshot's design highlights what Samsung calls the "flip-and-twist" method. While it's open, the phone's screen has the ability to swivel 180 degrees. Once it's facing away from the user, the screen can flip backwards; it then closes, and turns horizontally to become a small digital camera.


Of course, it does all the things a typical cell phone does, too -- it makes phone calls and also supports Bluetooth technology. But the Flipshot's unique camera features attracted people who simply like to take pictures on the run, making it one of the most popular camera phones on the market.

9: Sony Ericsson s500i

The Sony Ericsson s500i phone offers what many phones come with these days: Bluetooth capability, a 2.0-megapixel camera and a music player that supports MP3 and AAC audio formats. An alarm clock, calculator, calendar, phonebook, timer and access to a Web browser and RSS feeds make the s500i seem like a useful but fairly regular cell phone. So what sets it apart?



The main draw to the s500i, it turns out, is purely visual. The phone's screen and external lights allow you to set ever-changing themes: Depending on the time of the day, week or even the season, the screen's color layout and the phone's button illumination will change to reflect your surroundings. The phone has a thin-film transistor (TFT) screen that can display 262,000 colors at a resolution of 240 x 320 pixels. The s500i's screen and its slim slider form make it one of the flashier models out there.

8: LG Vu

The name of this cell phone says it all: The LG Vu. The main focus for the Vu is, as it turns out, the viewing aspect. With most smartphones like the iPhone and the BlackBerry scrambling to offer everything in the way of downloadable television episodes, music videos and recordable video, LG decided to put visuals front and center with this model.



The Vu sports a 3-inch touch screen and supports AT&T Mobile TV, a live mobile broadcast service sent straight to the phone. By turning the Vu sideways, the phone turns into a mini widescreen television -- there's even an extendable antenna at the top for better reception. Watching live programs costs viewers, of course, just like cable or satellite television does. A basic package costs $15 per month; a bigger Plus package doubles the price to $30 per month. In addition to showing TV on the go, the Vu also plays MP3s, hosts a GPS navigator, supports Bluetooth, has a 2.0-megapixel camera and camcorder and, of course, makes phone calls.

7: Nokia 6555

Another flashy cell phone meant to be easy on the eyes is the Nokia 6555. While most smartphones with bright, impressive video displays offer around 262,000 colors, the internal display of the Nokia 6555 ups the ante with an astonishing 16 million colors -- well above the average, and more colors than the human eye can actually distinguish.
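Both color counts fall directly out of the displays' color bit depth: a "262,000-color" screen is an 18-bit panel and a "16 million-color" screen is a 24-bit panel, as this quick check of the arithmetic shows.

```python
# Where the "262,000 colors" and "16 million colors" figures come from:
# each is 2 raised to the display's color bit depth.
for bits in (18, 24):
    print(f"{bits}-bit color depth -> {2 ** bits:,} colors")
```

That prints 262,144 and 16,777,216, which marketing copy rounds to 262,000 and 16 million.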



The viewing screen is also large, at 240 x 320 pixels, and the built-in 1.3-megapixel camera has 6x zoom capability despite its small size. Indeed, the Nokia 6555, a flip phone, measures 3.93 x 1.72 x 0.78 inches; it's tall but very slim. And despite this small size, the phone houses 30 MB of memory -- the phone book can hold 1,000 contacts, each with room for five phone numbers, an e-mail address, birthdays and more. Photos or videos of each contact can also be assigned to pop up during calls. For video and music selections, the 6555 is compatible with AT&T Music and AT&T Video, offering tunes from Napster and Yahoo and video segments from NBC, Comedy Central, ESPN and CNN.

6: Apple iPhone

Leading up to the release of the original Apple iPhone in June 2007, the buzz surrounding the new smartphone was deafening among the news media and the blogosphere. When asked if the iPhone was an example of the convergence of computers and communication, Steve Jobs made sure to downplay the computer angle, calling it "the reinvention of the phone."



Even though there were other smartphones coming out around this time, the iPhone certainly seemed like an entirely new type of cell phone. It's a sleek combination of a mobile phone, the Internet (complete with e-mail, browsing and map search) and the iPod MP3 player. On top of this, the iPhone interface features a multi-touch screen that allows users to make calls simply by pointing at a person's name and number, a trend many other smartphones would follow.


The iPhone also uses an accelerometer, which detects the movement of the device. This allows the user to rotate the phone from a vertical position to a horizontal one, changing the video display into a widescreen landscape -- perfect for watching TV shows and music videos downloaded from the Apple iTunes store.
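As a rough illustration of the idea (not Apple's actual implementation), software can infer portrait versus landscape orientation by checking which accelerometer axis gravity is acting on. The simple classifier below, with made-up axis conventions, sketches the approach.

```python
def orientation_from_accelerometer(ax, ay):
    """Guess screen orientation from accelerometer readings (in g).

    Gravity, about 1 g, shows up mostly on whichever axis points toward
    the ground, so comparing the x and y components tells us whether the
    device is upright (portrait) or on its side (landscape). The axis
    signs here are illustrative conventions, not Apple's.
    """
    if abs(ax) > abs(ay):
        return "landscape-right" if ax > 0 else "landscape-left"
    return "portrait" if ay < 0 else "portrait-upside-down"

# Held upright, gravity pulls along -y; tilted onto its side, along +/-x.
print(orientation_from_accelerometer(ax=0.05, ay=-0.98))   # portrait
print(orientation_from_accelerometer(ax=-0.97, ay=0.10))   # landscape-left
```

When the reported orientation changes, the phone's software can simply re-render the display in the new aspect.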

5: Samsung Glyde (U940)

Many smartphones that attempt to feature a QWERTY keyboard (the same type of keyboard you use with your desktop or laptop computer) end up with problems -- buttons can be too small and difficult to press, usually because developers are trying to keep the phone small. Composing text messages and dialing on a smartphone, therefore, can be frustrating to some users; instead of making things smooth, streamlined and quick, too many mistakes slow things down.


The Samsung Glyde's large QWERTY keyboard slides out for easy typing.

The Samsung Glyde attempts to address the QWERTY problem by including a slide-out keyboard that's nearly the size of the entire phone, making typing messages much easier. The keyboard also makes surfing the Web a smoother experience, since the phone has a full HTML browser, and users can download entertainment with Verizon Wireless' VCAST music and video service. On top of this, the Glyde takes pictures and video with a 2.0-megapixel camera, supports Bluetooth technology and comes with an external memory slot -- a microSD card slot -- that can store up to 8 GB of pictures, video and music.
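To put 8 GB in perspective, here is a rough capacity estimate. The average file sizes are hypothetical round numbers, not Samsung specifications.

```python
# Rough idea of what an 8 GB memory card holds, assuming typical file sizes.
card_mb = 8 * 1024   # 8 GB expressed in megabytes

average_sizes_mb = {
    "MP3 songs": 4,               # hypothetical average per song
    "2-megapixel photos": 1,      # hypothetical average per photo
    "minutes of phone video": 5,  # hypothetical average per minute
}
for item, size_mb in average_sizes_mb.items():
    print(f"about {card_mb // size_mb:,} {item}")
```

In other words, on the order of a couple of thousand songs or several thousand photos.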

4: Motorola RAZR

When you look at it from the front, the Motorola RAZR looks like any other flip phone. There's a small external display about the size of a stamp that shows the current time or incoming calls, and the entire phone is only a little bigger than a pack of gum. What made this cell phone such a big deal?



Once you turn the Motorola RAZR sideways, its design appeal becomes instantly recognizable. While the phone measures 3.8 inches tall and 2.0 inches wide, its thickness is a mere 0.5 inches, making it appear paper-thin compared to other cell phones. More recent upgrades to the RAZR family have included even thinner models, larger external displays that allow texting with pre-programmed messages while the phone is closed, full HTML browsing and e-mail, and sturdier cases -- users initially worried that early models were so thin they could snap if handled improperly.

3: LG Chocolate

The LG VX8500, more commonly known as the LG Chocolate, was one of the more hyped cell phones on the market. When it debuted at the CTIA (Cellular Telecommunications and Internet Association) show in 2005, rumors flew back and forth as to where it would end up in the United States. Verizon Wireless finally provided service for the Chocolate, and the phone, thanks to flashy advertising campaigns and a low original price for a debut phone ($149), became a hit.



The name "Chocolate" apparently comes from the phone's basic shape -- a somewhat boxy, rectangular form that resemble a chocolate candy bar -- and not because it tastes good or comes with a box of assorted chocolates (although that probably would make the phone even more popular). The original Chocolate slid up to reveal a number keypad, adding to its sleek appeal, while newer versions have switched to the slimmer flip phone design and added speakerphone, music players and VZ navigators.

2: RIM BlackBerry Pearl

The BlackBerry smartphone, developed by Research in Motion (RIM), has become so popular in the cell phone world that talk of addiction is a recurring topic. Some users have become so attached to their BlackBerries, they claim to suffer from "ringxiety" -- the constant feeling that your BlackBerry is ringing or about to ring, whether or not it actually does. The technology has even earned the nickname of "CrackBerry," again alluding to its extreme addictive nature. Some owners reportedly wake up in the middle of the night just to check their e-mail.



The extensive connectivity of the BlackBerry no doubt lends itself to a potentially negative, obsessive-compulsive habit, but it also accounts for its huge popularity -- especially with businesses that rely on easy communication. The BlackBerry Pearl, for instance, RIM's smallest model, combines phone, e-mail, text messaging, Internet and organizational applications in one tiny smartphone. It also offers an enhanced version of the QWERTY keyboard for easy typing, and it was the first BlackBerry to have a still camera (1.3 megapixels), video capabilities and a music player.

1: Apple iPhone 3G

The original iPhone sold for as much as $599. So with a significantly reduced price, ranging from $199 to $399, and a name that flaunted faster, more-connected third-generation cell phone technology, the iPhone 3G was arguably the most anticipated smartphone to reach consumers' hands. Excitement for the sequel to the iPhone was so high, in fact, that RBC analyst Mike Abramsky described the feeling among buyers as "unprecedented pent-up demand." Indeed, bloggers and the media alike covered Apple's Worldwide Developers Conference 2008 seemingly by the second. Rumors bounced back and forth, cementing the iPhone 3G as "The Second Coming."



The sales followed, as customers lined up days in advance and 3Gs sold out in 21 states over the course of five days. Despite activation problems during the first few days of sale, things more or less smoothed out, and some analysts predicted Apple would sell nearly 5 million iPhones in its fourth quarter.

In terms of appearance, the iPhone 3G didn't differ too much from the original iPhone. The only noticeable change was in material -- the back of the new 3G is made out of plastic instead of metal, making it a little lighter. As far as features, the 3G offers, of course, faster 3G wireless performance, GPS mapping and support for the new App Store for unique applications.