AUGMENTED REALITY

Technology has advanced to the point where realism in virtual reality is very achievable. However, in our obsession to reproduce the world and human experience in virtual space, we overlook the most important aspects of what makes us who we are—our reality. Yet, it isn’t enough just to trick the eye or fool the body and mind. One must capture the imagination in order to create truly compelling experiences.

On the spectrum between virtual reality, which creates immersive, computer-generated environments, and the real world, augmented reality is closer to the real world. Augmented reality adds graphics, sounds, haptics and smell to the natural world as it exists. You can expect video games to drive the development of augmented reality, but this technology will have countless applications. Everyone from tourists to military troops will benefit from the ability to place computer-generated graphics in their field of vision.

              Augmented reality will truly change the way we view the world. Picture yourself walking or driving down the street. With augmented-reality displays, which will eventually look much like a normal pair of glasses, informative graphics will appear in your field of view and audio will coincide with whatever you see. These enhancements will be refreshed continually to reflect the movements of your head. In this article, we will take a look at this future technology, its components and how it will be used.

The term Augmented Reality (AR) was coined in the early nineties, when it first became practical to place virtual objects within physical reality. Early systems combined the approach of Sutherland and Sproull's first optical see-through Head-Mounted Display (HMD) from the early 1960s with complex, real-time, computer-generated wiring diagrams and manuals. The virtual content and the real equipment were registered with each other, so that the manuals appeared embedded within the actual aircraft during intensely detailed procedures.

Augmented reality (AR) refers to computer displays that add virtual information to a user's sensory perceptions. Most AR research focuses on see-through devices, usually worn on the head, that overlay graphics and text on the user's view of his or her surroundings. In general, AR superimposes graphics over a real-world environment in real time.

            Getting the right information at the right time and the right place is the key in all these applications. Personal digital assistants such as the Palm and the Pocket PC can provide timely information using wireless networking and Global Positioning System (GPS) receivers that constantly track the handheld devices. But what makes augmented reality different is how the information is presented: not on a separate display but integrated with the user's perceptions. This kind of interface minimizes the extra mental effort that a user has to expend when switching his or her attention back and forth between real-world tasks and a computer screen. In augmented reality, the user's view of the world and the computer interface literally become one.
       Augmented reality is far more advanced than any technology you've seen in television broadcasts, although early versions of augmented reality are starting to appear in televised races and football games. These systems display graphics for only one point of view. Next-generation augmented-reality systems will display graphics for each viewer's perspective.

2. AR: OVERVIEW
2.1. DEFINITION

Augmented reality (AR) is a field of computer research which deals with the combination of real-world and computer-generated data. AR refers to computer displays that add virtual information to a user's sensory perceptions. It is a method for visual improvement or enrichment of the surrounding environment by overlaying spatially aligned, computer-generated information onto a human's view of the world.

                Augmented Reality (AR) was introduced as the opposite of virtual reality: instead of immersing the user into a synthesized, purely informational environment, the goal of AR is to augment the real world with information handling capabilities.

AR research focuses on see-through devices, usually worn on the head, that overlay graphics and text on the user's view of his or her surroundings. In general, AR superimposes graphics over a real-world environment in real time.

An AR system adds virtual computer-generated objects, audio and other sense enhancements to a real-world environment in real time. Ideally, these enhancements are added in such a way that the viewer cannot tell the difference between the real and the augmented world.

2.2 PROPERTIES

An AR system is generally considered to have the following properties:

1. Combines real and virtual objects in a real environment;
2. Runs interactively, and in real time; and
3. Registers (aligns) real and virtual objects with each other.
This definition does not restrict AR to particular display technologies, such as a head-mounted display (HMD), nor does it limit AR to the sense of sight. AR can potentially apply to all senses, including hearing, touch, and smell. The third property, registration, is illustrated by the sketch below.
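As a minimal sketch of registration (property 3), the Python snippet below projects a virtual 3D point, given in world coordinates, into the pixel coordinates of the user's view using a pinhole camera model; the intrinsics and head pose are hypothetical placeholder values, not those of any particular AR system. Re-running this projection whenever the tracker reports a new pose is what keeps virtual objects aligned with real ones.

```python
import numpy as np

# Hypothetical pinhole-camera intrinsics: focal lengths and principal point in pixels.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Hypothetical head pose from the tracker: rotation R and translation t map
# world coordinates into the camera (eye) coordinate frame.
R = np.eye(3)                      # looking straight down the world z-axis
t = np.array([0.0, 0.0, 0.0])

def project(point_world):
    """Project a 3D world point onto the 2D image plane of the user's view."""
    p_cam = R @ point_world + t    # world -> camera coordinates
    p_img = K @ p_cam              # camera -> homogeneous pixel coordinates
    return p_img[:2] / p_img[2]    # perspective divide -> (u, v) in pixels

# A virtual annotation anchored 2 m in front of the user, slightly to the right:
print(project(np.array([0.2, 0.0, 2.0])))   # pixel position where the overlay is drawn
```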

2.3 HISTORY

                    The beginnings of AR, as we define it, date back to Sutherland’s work in the 1960s, which used a see-through HMD to present 3D graphics. However, only over the past decade has there been enough work to refer to AR as a research field. In 1997, Azuma published a survey that defined the field, described many problems, and summarized the developments up to that point. Since then, AR’s growth and progress have been remarkable.

In the late 1990s, several conferences on AR began, including the International Workshop and Symposium on Augmented Reality, the International Symposium on Mixed Reality, and the Designing Augmented Reality Environments workshop. Some well-funded organizations formed that focused on AR, notably the Mixed Reality Systems Lab in Japan and the Arvika consortium in Germany.


3. AUGMENTED REALITY VS. VIRTUAL REALITY
The term Virtual Reality was defined as "a computer generated, interactive, three-dimensional environment in which a person is immersed." There are three key points in this definition. First, the virtual environment is a computer-generated three-dimensional scene, which requires high-performance computer graphics to provide an adequate level of realism. The second point is that the virtual world is interactive: a user requires real-time response from the system to be able to interact with it in an effective manner. The last point is that the user is immersed in this virtual environment.
       One of the identifying marks of a virtual reality system is the head mounted display worn by users. These displays block out all the external world and present to the wearer a view that is under the complete control of the computer. The user is completely immersed in an artificial world and becomes divorced from the real environment.
            A very visible difference between these two types of systems is the immersiveness of the system. Virtual reality strives for a totally immersive environment. The visual, and in some systems aural and proprioceptive, senses are under control of the system.
In contrast, an augmented reality system augments the real-world scene, necessitating that the user maintain a sense of presence in that world. The virtual images are merged with the real view to create the augmented display. There must be a mechanism to combine the real and the virtual that is not present in other virtual reality work. Developing the technology for merging the real and virtual image streams is an active research topic.

The real world and a totally virtual environment lie at the two ends of the reality-virtuality continuum, with the middle region called Mixed Reality. Augmented reality lies near the real-world end of the continuum, with the predominant perception being the real world augmented by computer-generated data.

4. DISPLAYS

          Displays for viewing the merged virtual and real environments can be classified into the following categories: head worn, handheld, and projective.

4.1 Head-worn displays (HWD).

Users mount this type of display on their heads, providing imagery in front of their eyes. Two types of HWDs exist: optical see-through and video see-through. The latter uses video captured by head-worn video cameras as a background for the AR overlay, shown on an opaque display, whereas the optical see-through method provides the AR overlay through a transparent display.
         
Established electronics and optical companies (for example, Sony and Olympus) have manufactured color, liquid crystal display (LCD)-based consumer head-worn displays intended for watching videos and playing video games. While these systems have relatively low resolution (180,000 to 240,000 pixels), small fields of view (approximately 30 degrees horizontal), and don't support stereo, they're relatively lightweight (under 120 grams) and offer an inexpensive option. Some see-through displays of this kind (later discontinued) have been used extensively in AR research.
           
A different approach is the virtual retinal display, which forms images directly on the retina. These displays, which Micro Vision is developing commercially, literally draw on the retina with low-power lasers whose modulated beams are scanned by microelectromechanical mirror assemblies that sweep the beam horizontally and vertically. Potential advantages include high brightness and contrast, low power consumption, and large depth of field.
                                                          
Ideally, head-worn AR displays would be no larger than a pair of sunglasses. Several companies are developing displays that embed display optics within conventional eyeglasses. Micro Optical produced a family of eyeglass displays in which two right-angle prisms are embedded in a regular prescription eyeglass lens and reflect the image of a small color display mounted facing forward on an eyeglass temple piece. Minolta's prototype "forgettable" display is intended to be light and inconspicuous enough that users forget they are wearing it: observers see only a transparent lens, with no indication that the display is on, and the display adds less than 6 grams to the weight of the eyeglasses.

4.2 Handheld displays.
Some AR systems use handheld, flat-panel LCD displays that use an attached camera to provide video see-through-based augmentations. The handheld display acts as a window or a magnifying glass that shows the real objects with an AR overlay.
  
4.3. Projection displays.

In this approach, the desired virtual information is projected directly onto the physical objects to be augmented. In the simplest case, the augmentations are intended to be coplanar with the surface onto which they are projected, and are projected from a single room-mounted projector, with no need for special eyewear. Projectors can also cover large irregular surfaces, using an automated calibration procedure that takes into account surface geometry and image overlap.

Another approach for projective AR relies on head-worn projectors, whose images are projected along the viewer's line of sight at objects in the world. The target objects are coated with a retroreflective material that reflects light back along the angle of incidence. Multiple users can see different images on the same target projected by their own head-worn systems, since the projected images can't be seen except along the line of projection. Because the projectors have relatively low output, nonretroreflective real objects can occlude virtual objects.

5. DIFFERENT AR TECHNIQUES
There are two basic techniques for combining real and virtual objects: optical and video techniques. While the optical technique uses an optical combiner, the video technique uses a computer to combine video of the real world (from video cameras) with computer-generated virtual images. AR systems use either a Head-Mounted Display (HMD), which can be a closed-view or a see-through HMD, or a monitor-based configuration. While closed-view HMDs do not allow a direct view of the real world, see-through HMDs allow it, with virtual objects added via optical or video techniques.
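As a hedged sketch of the video technique only (the optical technique combines light in hardware, not software), the snippet below assumes OpenCV and NumPy are available: it grabs one camera frame of the real world, draws a stand-in computer-generated overlay, and replaces the camera pixels wherever the overlay has content. A real system would render the overlay from the tracked head pose rather than drawing a fixed rectangle.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)          # video of the real world from a head-worn camera
ok, frame = cap.read()
if ok:
    h, w = frame.shape[:2]

    # Stand-in for the computer-generated imagery: a green box with a text label.
    overlay = np.zeros_like(frame)
    cv2.rectangle(overlay, (w // 4, h // 4), (3 * w // 4, 3 * h // 4), (0, 255, 0), 2)
    cv2.putText(overlay, "virtual annotation", (w // 4, h // 4 - 10),
                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)

    # Video combining: wherever the overlay is non-black, substitute its pixels.
    mask = overlay.any(axis=2)
    augmented = frame.copy()
    augmented[mask] = overlay[mask]

    cv2.imwrite("augmented_frame.png", augmented)
cap.release()
```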

6. What Makes AR Work?

The main components that make an AR system work are:
1.       Display
This corresponds to the head-mounted device on which images are formed. Many objects that do not exist in the real world can be put into this environment, and users can view and examine these objects. Properties such as complexity and physical behavior are simply parameters in the simulation.
2.       Tracking
Tracking determines the user's position and viewing direction so that the right information can be delivered at the right time and place. Handheld devices such as the Palm and the Pocket PC already provide timely information using wireless networking and Global Positioning System (GPS) receivers that constantly track them; AR systems additionally track head position and orientation so that overlays stay registered with the real scene.
3.       Environment Sensing
This is the process of viewing or sensing the real-world scene or physical environment, which can be done using an optical combiner, a video combiner, or simply the direct retinal view.
4.       Visualization and Rendering                    
Visualization and rendering generate the spatially aligned, computer-generated information that is overlaid onto the user's view. This reflects several emerging trends in human-computer interaction (HCI): augmented reality, computer-supported cooperative work, ubiquitous computing, and heterogeneous user interfaces. AR itself is a method for visual improvement or enrichment of the surrounding environment by overlaying such information onto a human's view.
     
In outline, AR works as follows (a minimal end-to-end sketch appears after the list):

§ Pick a real-world scene: the user's view through the see-through head-worn display, showing, for example, two struts and a node without any overlaid graphics.
§ Add virtual objects: the computer-generated imagery intended to overlay the view of the real world.
§ Optionally remove or hide selected real-world objects from the view.
§ The result is not virtual reality, since the environment remains real.
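Tying the steps above to the components from Section 6, here is a heavily simplified, hypothetical per-frame loop; each function is a placeholder standing in for whatever sensors, trackers and rendering pipeline a real AR system would use.

```python
import numpy as np

def sense_environment():
    """Environment sensing: capture the current view of the real world (stub frame)."""
    return np.zeros((480, 640, 3), dtype=np.uint8)

def track_pose():
    """Tracking: report the current head pose (identity rotation, zero translation here)."""
    return np.eye(3), np.zeros(3)

def render_virtual(pose, frame_shape):
    """Visualization and rendering: draw the virtual objects for this pose (stub overlay)."""
    overlay = np.zeros(frame_shape, dtype=np.uint8)
    overlay[200:280, 280:360] = (0, 255, 0)      # placeholder virtual object
    return overlay

def composite(real, virtual):
    """Display: merge real and virtual imagery (video-technique style)."""
    out = real.copy()
    mask = virtual.any(axis=2)
    out[mask] = virtual[mask]
    return out

# One iteration of the loop a video see-through AR system runs for every frame.
frame = sense_environment()
pose = track_pose()
augmented = composite(frame, render_virtual(pose, frame.shape))
```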
 
7. AUGMENTED REALITY APPLICATION DOMAINS                                                               
Only recently have the capabilities of real-time video image processing, computer graphics systems and new display technologies converged to make possible the display of a virtual graphical image correctly registered with a view of the 3D environment surrounding the user. Researchers working with augmented reality systems have proposed them as solutions in many domains. The areas that have been discussed range from entertainment to military training. Many of the domains, such as medicine, have also been proposed for traditional virtual reality systems.


7.1. Medical                   
This domain is viewed as one of the more important for augmented reality systems. Most of the medical applications deal with image-guided surgery. Pre-operative imaging studies of the patient, such as CT or MRI scans, provide the surgeon with the necessary view of the internal anatomy, and from these images the surgery is planned. Visualization of the path through the anatomy to the affected area where, for example, a tumor must be removed is done by first creating a 3D model from the multiple views and slices in the preoperative study. Being able to accurately register the images at this point will enhance the performance of the surgical team and could eliminate the need for painful and cumbersome stereotactic frames.
       
7.2 Entertainment     
  
A simple form of augmented reality has been in use in the entertainment and news business for quite some time. Whenever we watch the evening weather report, the reporter is shown standing in front of changing weather maps. In the studio, the reporter is actually standing in front of a blue or green screen. This real image is augmented with computer-generated maps using a technique called chroma-keying. It is also possible to create a virtual studio environment so that the actors appear to be positioned in a studio with computer-generated decorations.
In these systems the environments are carefully modeled ahead of time, and the cameras are calibrated and precisely tracked. For some applications, augmentations are added solely through real-time video tracking. Delaying the video broadcast by a few video frames eliminates the registration problems caused by system latency. Furthermore, the predictable environment (uniformed players on a green, white, and brown field) lets the system use custom chroma-keying techniques to draw the yellow line only on the field rather than over the players. With similar approaches, advertisers can embellish broadcast video with virtual ads and product placements.
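The chroma-keying step described above can be sketched in a few lines, assuming OpenCV is available and using placeholder file names: pixels close to the studio green are treated as background and replaced by the computer-generated weather map, while everything else (the reporter) is kept. The HSV thresholds are purely illustrative.

```python
import cv2

studio = cv2.imread("studio_frame.png")        # reporter in front of a green screen
weather_map = cv2.imread("weather_map.png")    # computer-generated background
weather_map = cv2.resize(weather_map, (studio.shape[1], studio.shape[0]))

# Classify pixels as "green screen" in HSV space (thresholds are illustrative).
hsv = cv2.cvtColor(studio, cv2.COLOR_BGR2HSV)
green_mask = cv2.inRange(hsv, (40, 80, 80), (80, 255, 255))

# Wherever the mask fires, substitute the virtual background; keep the reporter elsewhere.
composited = studio.copy()
composited[green_mask > 0] = weather_map[green_mask > 0]
cv2.imwrite("augmented_broadcast.png", composited)
```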

7.3. Military Training

The military has been using displays in cockpits that present information to the pilot on the windshield of the cockpit or on the visor of the flight helmet. This is a form of augmented reality display.

By equipping military personnel with helmet-mounted visor displays or a special-purpose rangefinder, the activities of other units participating in an exercise can be imaged. In wartime, the display of the real battlefield scene could be augmented with annotation information or highlighting to emphasize hidden enemy units.

7.4. Engineering Design
Imagine that a group of designers is working on the model of a complex device for their clients. The designers and clients want to do a joint design review even though they are physically separated. If each of them had a conference room equipped with an augmented reality display, this could be accomplished. The physical prototype that the designers have mocked up is imaged and displayed in the client's conference room in 3D, and the clients can walk around the display, looking at different aspects of it.

7.5. Robotics and Telerobotics
In the domain of robotics and telerobotics, an augmented display can assist the user of the system. A telerobotic operator uses a visual image of the remote workspace to guide the robot. Annotation of the view is still useful, just as it is when the scene is in front of the operator, and there is an added potential benefit: the robot motion could then be executed directly, which in a telerobotics application would eliminate any oscillations caused by long delays to the remote site.

7.6. Manufacturing, Maintenance and Repair
Recent advances in computer interface design, and the ever-increasing power and miniaturization of computer hardware, have combined to make the use of augmented reality possible in demonstration test beds for building construction, maintenance and renovation. When a maintenance technician approaches a new or unfamiliar piece of equipment, instead of opening several repair manuals the technician could put on an augmented reality display. In this display the image of the equipment would be augmented with annotations and information pertinent to the repair. The military has developed a wireless vest, worn by personnel, that is attached to an optical see-through display. The wireless connection allows the soldier to access repair manuals and images of the equipment. Future versions might register those images on the live scene and provide animation to show the procedures that must be performed.

7.7. Consumer Design
Virtual reality systems are already used for consumer design. Using what is perhaps more of a graphics system than true virtual reality, the typical home store can show you, when you want to add a new deck to your house, a graphical picture of what the deck will look like.
When you head into some high-tech beauty shops today, you can see what a new hairstyle would look like on a digitized image of yourself. With an advanced augmented reality system, however, you would be able to see the view change as you moved. If the dynamics of hair are included in the description of the virtual object, you would also see the motion of your hair as your head moved.

7.8. Augmented mapping

Paper maps can be brought to life using hardware that adds up-to-the-minute information, photography and even video footage. Using AR techniques, a system can be implemented that augments an ordinary tabletop map with additional information by projecting it onto the map's surface.
Such a system could help emergency workers; researchers have developed a simulation that projects live information about flooding and other natural calamities. The system makes use of an overhead camera and image-recognition software on a connected computer to identify the region from the map's topographical features. An overhead projector then overlays relevant information, such as the location of a traffic accident or even the position of a moving helicopter, onto the map.
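One plausible way such a tabletop system could register its projections is sketched below, assuming OpenCV is available, that the four corners of the paper map have already been located in the overhead camera image, and (for simplicity) that the projector has been calibrated to share the camera's pixel coordinates. A homography then maps map coordinates to projector coordinates, so an icon placed at a known map position (here a hypothetical accident location) lands on the right spot on the physical map.

```python
import cv2
import numpy as np

# Corners of the paper map in its own coordinate system (e.g. millimetres)...
map_corners = np.float32([[0, 0], [600, 0], [600, 400], [0, 400]])
# ...and where the overhead camera sees those corners (hypothetical pixel positions).
image_corners = np.float32([[112, 85], [905, 102], [880, 640], [98, 615]])

# Homography from map coordinates to camera/projector pixel coordinates.
H, _ = cv2.findHomography(map_corners, image_corners)

def map_to_projector(x_mm, y_mm):
    """Transform a point given in map coordinates into projector pixel coordinates."""
    p = H @ np.array([x_mm, y_mm, 1.0])
    return p[:2] / p[2]

# Hypothetical traffic accident at map position (250 mm, 180 mm): this is where
# the projector should draw the accident icon so it falls on the paper map correctly.
print(map_to_projector(250, 180))
```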

8.  MOBILE AUGMENTED REALITY SYSTEM (MARS)
A mobile augmented reality system (MARS) combines augmented reality (AR), in which 3D displays are used to overlay a synthesized world on top of the real world, with mobile computing, in which increasingly small and inexpensive computing devices, linked by wireless networks, allow us to use computing facilities while roaming the real world.
In exploring user interfaces, systems software, and application scenarios for MARS, our main focus is on the following lines of research:
  • Identifying generic tasks a mobile user would want to carry out using a context-aware computing system;
  • Defining a comprehensive set of reusable user interface components for mobile augmented reality applications;
  • Making combined use of different display technologies, ranging from head-worn to handheld to palmtop, to best support mobile users.
   Main components of MARS

·          A computer (with 3D graphics acceleration)

·          A GPS system

·          A see-through head-worn display

·          A wireless network

          The MARS user interfaces that we will present embody three techniques that we are exploring to develop effective augmented reality user interfaces: information filtering, user interface component design, and view management. Information filtering helps select the most relevant information to present, based on data about the user, the tasks being performed, and the surrounding environment, including the user's location. User interface component design determines the format in which this information should be conveyed, based on the available display resources and tracking accuracy. For example, the absence of high accuracy position tracking would favor body- or screen-stabilized components over world-stabilized ones that would need to be registered with the physical objects to which they refer. View management attempts to ensure that the virtual objects that are selected for display are arranged appropriately with regard to their projections on the view plane. For example, those virtual objects that are not constrained to occupy a specific position in the 3D world should be laid out so that they do not obstruct the view to other physical or virtual objects in the scene that are more important. We believe that user interface techniques of this sort will play a key role in the MARS devices that people will begin to use on an everyday basis over the coming decade.
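As a rough illustration of the information-filtering idea only (not the actual MARS implementation), the sketch below keeps just those points of interest within a given radius of the user's GPS position and orders them by distance; the names and coordinates are invented for the example.

```python
import math

# Hypothetical points of interest around a campus: (name, latitude, longitude).
POIS = [
    ("Library",   40.8075, -73.9626),
    ("Statue",    40.8069, -73.9619),
    ("Cafeteria", 40.8091, -73.9601),
]

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate great-circle distance in metres (haversine formula)."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def filter_pois(user_lat, user_lon, radius_m=150.0):
    """Return nearby points of interest, nearest first, for labelling on the display."""
    scored = [(distance_m(user_lat, user_lon, lat, lon), name) for name, lat, lon in POIS]
    return sorted((d, n) for d, n in scored if d <= radius_m)

# User's current GPS fix (hypothetical): only labels within 150 m are rendered.
print(filter_pois(40.8072, -73.9620))
```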

Implementation Framework

 Hardware

The main components of our system are a computer (with 3D graphics acceleration), a GPS system (originally differential GPS, and now real-time kinematic GPS+GLONASS), a see-through head-worn display with an orientation tracker, and a wireless network, all attached to the backpack. The user also holds a small stylus-operated computer that can talk to the backpack computer via a spread-spectrum radio channel. Thus we can control the material presented on the head-worn display from the handheld screen. We also provide a more direct control mechanism for a cursor in the head-worn display by mounting a track pad on the back of the handheld display, where it can easily be manipulated (with the horizontal axis inverted) while holding the display upright.
To make the system as lightweight and comfortable as possible, off-the-shelf hardware was used to avoid the expense, effort, and time involved in building custom hardware. Over the years the system has moved to lighter and faster battery-powered computers with 3D graphics cards, and finally graduated to laptops with 3D graphics processors.

Software
The software infrastructure was Coterie, a prototyping environment that provided language-level support for distributed virtual environments. The main mobile AR application ran on the backpack computer and received continuous input from the GPS system, the orientation head tracker, and the track pad (mounted on the back of the handheld computer). It generated and displayed, at an interactive frame rate, the overlaid 3D graphics and user interface components on the head-worn display. On the handheld computer we ran arbitrary applications that talked to the main backpack application via Coterie/Repo object communications. In our first prototype, we simply ran a custom HTTP server and a web browser on the handheld computer, intercepted all URL requests and link selections, and thus established a two-way communication channel between the backpack and the handheld.
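The URL-interception trick mentioned above can be suggested with a generic sketch (this is not Coterie's or Repo's actual API, and the port and handler are invented): a tiny HTTP handler on the handheld records every link the browser requests, which a backpack-side process could then read to update the head-worn display.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

SELECTIONS = []   # stand-in for the channel back to the backpack application

class LinkInterceptor(BaseHTTPRequestHandler):
    """Record every URL the handheld browser requests and acknowledge it."""
    def do_GET(self):
        SELECTIONS.append(self.path)   # a real system would forward this to the backpack
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<html><body>selection received</body></html>")

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), LinkInterceptor).serve_forever()
```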

Applications  of MARS
1. Touring Machine

The MARS unit acts as a campus information system, assisting a user in finding places and allowing her to pose queries about items of interest, such as buildings and statues.

2. Mobile journalist workstation

             It extends the campus tour application to present additional multimedia information (sound, text, image, video) in the spatial context of the campus. The current prototype was used to present several situated documentaries to roaming users, including stories about the student revolt on Columbia's Campus in 1968, about the tunnel system underneath Columbia's campus, and about the early history of our campus.

3.   UIs for Indoor/Outdoor Collaboration
                It allows a roaming outdoor user to be monitored and provided with guidance by remote experts. In exchange, outdoor users can report their observations to the indoor personnel. For this project we developed a distributed infrastructure that allows us to connect diverse user interfaces (wearable, hand-held, stationary desk-top, stationary wall-sized, and stationary immersive AR) to the same repository of campus-related information. A key goal is to explore collaboration in such heterogeneous computing environments.

9. CHALLENGES
Technological limitations

Although there has been much progress in the basic enabling technologies, their limitations still prevent the deployment of many AR applications. Displays, trackers, and AR systems in general need to become more accurate, lighter, cheaper, and less power-consuming. Since the user must wear the PC, sensors, display, batteries, and everything else required, the end result is a heavy backpack. Laptops today have only one CPU, limiting the amount of visual and hybrid tracking that can be done.

User interface limitation
We need a better understanding of how to display data to a user and how the user should interact with the data. AR introduces many high-level tasks, such as identifying what information should be provided, what the appropriate representation for that data is, and how the user should make queries and reports. Recent work suggests that the creation and presentation of narrative performances and structures may lead to richer and more realistic AR experiences.

Social acceptance
The final challenge is social acceptance. Given a system with ideal hardware and an intuitive interface, how can AR become an accepted part of a user's everyday life, just like a mobile phone or a personal digital assistant? Through films and television, many people are familiar with images of simulated AR. However, persuading a user to wear a system means addressing a number of issues, ranging from fashion to privacy concerns. To date, little attention has been paid to these fundamental issues, but they must be addressed before AR becomes widely accepted.


10. CONCLUSION
           The research topic "Augmented Reality" (AR) is receiving significant attention due to striking progress in many subfields triggered by the advances in computer miniaturization, speed, and capabilities and fascinating live demonstrations. AR, by its very nature, is a highly inter-disciplinary field, and AR researchers work in areas such as signal processing, computer vision, graphics, user interfaces, human factors, wearable computing, mobile computing, computer networks, distributed computing, information access, information visualization, and hardware design for new displays.

At the other end of the mixed reality continuum, the term augmented virtuality has been used to identify systems which are mostly synthetic, with some real-world imagery added, such as texture-mapping video onto virtual objects. This is a distinction that will fade as the technology improves and the virtual elements in the scene become less distinguishable from the real ones.
