Augmented reality


Augmented reality (AR) is a live direct or indirect view of a physical, real-world environment whose elements are augmented (or supplemented) by computer-generated sensory input such as sound, video, graphics or GPS data. It is related to a more general concept called mediated reality, in which a view of reality is modified (possibly even diminished rather than augmented) by a computer. The technology thus enhances one's current perception of reality, whereas virtual reality replaces the real world with a simulated one. Augmentation is conventionally in real time and in semantic context with environmental elements, such as sports scores on TV during a match.

With the help of advanced AR technology (e.g. adding computer vision and object recognition), information about the user's surrounding real world becomes interactive and digitally manipulable. Information about the environment and its objects is overlaid on the real world. This information can be virtual or real; for example, real sensed or measured information such as electromagnetic radio waves can be overlaid in exact alignment with where they actually are in space. Augmented reality brings components of the digital world into a person's perceived real world. One example is an AR helmet for construction workers that displays information about the construction site. The first functional AR systems that provided immersive mixed reality experiences for users were invented in the early 1990s, starting with the Virtual Fixtures system developed at the U.S. Air Force's Armstrong Labs in 1992.

Hardware

Hardware components for augmented reality are: processor, display, sensors and input devices. Modern mobile computing devices like smartphones and tablet computers contain these elements, often including a camera and MEMS sensors such as an accelerometer, GPS, and solid-state compass, making them suitable AR platforms.

Display

Various technologies are used in augmented reality rendering, including optical projection systems, monitors, handheld devices, and display systems worn on the human body.

Head-mounted

A head-mounted display (HMD) is a display device worn on the head, often as part of a harness or helmet. HMDs place images of both the physical world and virtual objects over the user's field of view. Modern HMDs often employ sensors for six-degree-of-freedom monitoring that allow the system to align virtual information with the physical world and adjust as the user's head moves. HMDs can provide AR users with mobile and collaborative experiences. Specific providers, such as uSens and Gestigon, are even including gesture controls for full virtual immersion.

In January 2015, Meta announced funding from Horizons Ventures, Tim Draper, Alexis Ohanian, BOE Optoelectronics and Garry Tan. On February 17, 2016, Meta announced its second-generation product, the Meta 2, at TED. The Meta 2 head-mounted display uses a sensor array for hand interaction and positional tracking, offers a 90-degree (diagonal) field of view, and has a display resolution of 2560 x 1440 (20 pixels per degree), which was considered the largest field of view (FOV) then available.

Eyeglasses

AR displays can be rendered on devices resembling eyeglasses. Versions include eyewear that uses cameras to intercept the real-world view and re-display its augmented view through the eyepieces, and devices in which the AR imagery is projected through or reflected off the surfaces of the eyewear's lenses.

HUD

Near-eye augmented reality devices can be used as portable head-up displays, as they can show data, information, and images while the user views the real world. Many definitions of augmented reality only define it as overlaying information. This is essentially what a head-up display does; practically speaking, however, augmented reality is expected to include registration (tracking) between the superimposed information, data, and images and some portion of the real world.

CrowdOptic, an existing app for smartphones, applies algorithms and triangulation techniques to photo metadata including GPS position, compass heading, and a time stamp to arrive at a relative significance value for photo objects. CrowdOptic technology can be used by Google Glass users to learn where to look at a given point in time.
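
The clustering CrowdOptic describes relies on the idea that compass bearings from nearby photos converge on a shared focal point. The sketch below illustrates only that geometric core, intersecting two bearings in a flat local frame; it is not CrowdOptic's actual algorithm, and the positions, bearings, and function name are invented for illustration.

    import math

    def intersect_bearings(p1, brg1_deg, p2, brg2_deg):
        """Intersect two rays in a flat east/north frame. Each ray starts at a
        camera position (metres east, metres north) and points along a compass
        bearing measured clockwise from north. Returns None if nearly parallel."""
        d1 = (math.sin(math.radians(brg1_deg)), math.cos(math.radians(brg1_deg)))
        d2 = (math.sin(math.radians(brg2_deg)), math.cos(math.radians(brg2_deg)))
        denom = d1[0] * d2[1] - d1[1] * d2[0]
        if abs(denom) < 1e-9:
            return None
        dx, dy = p2[0] - p1[0], p2[1] - p1[1]
        t = (dx * d2[1] - dy * d2[0]) / denom
        return (p1[0] + t * d1[0], p1[1] + t * d1[1])

    # Two spectators 100 m apart, both aiming their phones at the same stage.
    print(intersect_bearings((0.0, 0.0), 45.0, (100.0, 0.0), 315.0))  # -> (50.0, 50.0)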

In January 2015, Microsoft introduced HoloLens, a self-contained smartglasses unit. Brian Blau, Research Director of Consumer Technology and Markets at Gartner, said that "Out of all the head-mounted displays that I've tried in the past couple of decades, the HoloLens was the best in its class." Early impressions generally held that HoloLens was a superior device to Google Glass and managed to do several things "right" where Glass had failed.

Contact lenses

Contact lenses that display AR imaging are in development. These bionic contact lenses might contain the elements for display embedded into the lens, including integrated circuitry, LEDs and an antenna for wireless communication. The first contact lens display was reported in 1999, with working prototypes following 11 years later, in 2010 and 2011. Another version of contact lenses, in development for the U.S. military, is designed to function with AR spectacles, allowing soldiers to focus on close-to-the-eye AR images on the spectacles and distant real-world objects at the same time. The futuristic short film Sight features contact lens-like augmented reality devices.

Virtual retinal display

A virtual retinal display (VRD) is a personal display device under development at the University of Washington's Human Interface Technology Laboratory. With this technology, a display is scanned directly onto the retina of a viewer's eye. The viewer sees what appears to be a conventional display floating in space in front of them.

EyeTap

The EyeTap (also known as Generation-2 Glass) captures rays of light that would otherwise pass through the center of the lens of the wearer's eye, and substitutes synthetic computer-controlled light for each ray of real light. The Generation-4 Glass (Laser EyeTap) is similar to the VRD (i.e. it uses a computer-controlled laser light source), except that it also has infinite depth of focus and causes the eye itself to, in effect, function as both a camera and a display, by way of exact alignment with the eye and resynthesis (in laser light) of the rays of light entering the eye.

Handheld

Handheld displays employ a small display that fits in a user's hand. All handheld AR solutions to date opt for video see-through. Initially handheld AR employed fiducial markers, and later GPS units and MEMS sensors such as digital compasses and six-degree-of-freedom accelerometer–gyroscopes. Today SLAM markerless trackers such as PTAM are starting to come into use. Handheld display AR promises to be the first commercial success for AR technologies. The two main advantages of handheld AR are the portable nature of handheld devices and the ubiquity of camera phones. The disadvantages are the physical constraint of the user having to hold the device out in front of them at all times, as well as the distorting effect of typically wide-angle mobile phone cameras compared with the real world as viewed through the eye. Games such as Pokémon Go and Ingress utilize an Image Linked Map (ILM) interface, where approved geotagged locations appear on a stylized map for the user to interact with.

Spatial

Spatial Augmented Reality (SAR) augments real world objects and scenes without the use of special displays such as monitors, head mounted displays or hand-held devices. SAR makes use of digital projectors to display graphical information onto physical objects. The key difference in SAR is that the display is separated from the users of the system. Because the displays are not associated with each user, SAR scales naturally up to groups of users, thus allowing for collocated collaboration between users.

Examples include shader lamps, mobile projectors, virtual tables, and smart projectors. Shader lamps mimic and augment reality by projecting imagery onto neutral objects, providing the opportunity to enhance the object's appearance using a simple unit consisting of a projector, camera, and sensor.
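
For a flat target surface, the core operation behind such projector-based augmentation is pre-warping the content so that it lands correctly on the physical object. A minimal sketch using OpenCV (chosen here purely for illustration), assuming the projector-to-surface mapping has already been calibrated as four corner correspondences; the file name, coordinates, and output size are placeholders.

    import numpy as np
    import cv2

    # Corners of the source image and where those corners should land in the
    # projector's frame so the content sits correctly on the physical surface.
    # Both sets of points are placeholder calibration values.
    content = cv2.imread("label.png")
    h, w = content.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    dst = np.float32([[110, 80], [590, 95], [575, 420], [120, 400]])

    # Homography that maps content pixels onto the calibrated surface region.
    H = cv2.getPerspectiveTransform(src, dst)

    # Pre-warped frame to send to the projector (1024x768 output assumed).
    warped = cv2.warpPerspective(content, H, (1024, 768))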

Other applications include table and wall projections. One innovation, the Extended Virtual Table, separates the virtual from the real by including beam-splitter mirrors attached to the ceiling at an adjustable angle. Virtual showcases, which employ beam-splitter mirrors together with multiple graphics displays, provide an interactive means of simultaneously engaging with the virtual and the real. Many more implementations and configurations make spatial augmented reality display an increasingly attractive interactive alternative.

A SAR system can display on any number of surfaces of an indoor setting at once. SAR supports both a graphical visualisation and passive haptic sensation for the end users. Users are able to touch physical objects in a process that provides passive haptic sensation.

Tracking

Modern mobile augmented-reality systems use one or more of the following tracking technologies: digital cameras and/or other optical sensors, accelerometers, GPS, gyroscopes, solid state compasses, RFID and wireless sensors. These technologies offer varying levels of accuracy and precision. Most important is the position and orientation of the user's head. Tracking the user's hand(s) or a handheld input device can provide a 6DOF interaction technique.
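
How these readings are combined varies by system; a common baseline on mobile devices is a complementary filter that blends fast but drifting gyroscope integration with noisy but drift-free accelerometer tilt. The sketch below is illustrative only; the blend weight, sensor ordering, and function name are assumptions rather than details taken from any particular AR toolkit.

    import math

    def complementary_filter(prev_pitch, prev_roll, gyro, accel, dt, alpha=0.98):
        """Fuse gyroscope rates (rad/s, assumed order: pitch, roll, yaw) with
        accelerometer readings (m/s^2) into pitch and roll estimates.
        Illustrative sketch; real AR trackers typically use quaternion-based
        Kalman-style filters instead."""
        # Integrate gyroscope angular rates for a fast, but drifting, estimate.
        pitch_gyro = prev_pitch + gyro[0] * dt
        roll_gyro = prev_roll + gyro[1] * dt

        # Derive absolute (drift-free but noisy) tilt from the gravity vector.
        ax, ay, az = accel
        pitch_acc = math.atan2(-ax, math.sqrt(ay * ay + az * az))
        roll_acc = math.atan2(ay, az)

        # Blend: trust the gyro over short intervals, the accelerometer long-term.
        pitch = alpha * pitch_gyro + (1.0 - alpha) * pitch_acc
        roll = alpha * roll_gyro + (1.0 - alpha) * roll_acc
        return pitch, roll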

Input devices

Techniques include speech recognition systems that translate a user's spoken words into computer instructions and gesture recognition systems that can interpret a user's body movements by visual detection or from sensors embedded in a peripheral device such as a wand, stylus, pointer, glove or other body wear. Some of the products which are trying to serve as a controller of AR Headsets include Wave by Seebright Inc. and Nimble by Intugine Technologies.

Computer

The computer analyzes the sensed visual and other data to synthesize and position augmentations.

Software and algorithms

A key measure of AR systems is how realistically they integrate augmentations with the real world. The software must derive real-world coordinates, independent of the camera, from camera images. That process, called image registration, uses different methods of computer vision, mostly related to video tracking. Many computer vision methods for augmented reality are inherited from visual odometry.

Usually those methods consist of two parts. The first stage is to detect interest points, fiducial markers or optical flow in the camera images. This step can use feature detection methods like corner detection, blob detection, edge detection or thresholding, and/or other image processing methods. The second stage restores a real-world coordinate system from the data obtained in the first stage. Some methods assume objects with known geometry (or fiducial markers) are present in the scene. In some of those cases the scene's 3D structure should be calculated beforehand. If part of the scene is unknown, simultaneous localization and mapping (SLAM) can map relative positions. If no information about scene geometry is available, structure-from-motion methods like bundle adjustment are used. Mathematical methods used in the second stage include projective (epipolar) geometry, geometric algebra, rotation representation with the exponential map, Kalman and particle filters, nonlinear optimization, and robust statistics.
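
A minimal sketch of that two-stage pipeline, written with OpenCV purely for illustration and assuming a calibrated camera and a known square fiducial marker whose four corner correspondences have already been found; all file names, coordinates, and camera parameters below are placeholders.

    import numpy as np
    import cv2

    # Stage 1: detect interest points in the camera image (corner detection here).
    frame = cv2.imread("frame.png")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=200, qualityLevel=0.01, minDistance=10)

    # Stage 2: recover a real-world coordinate system. Here we assume the four
    # corners of a 10 cm fiducial marker have been matched to image points
    # (placeholder values); solvePnP then gives the camera pose relative to it.
    marker_3d = np.array([[0, 0, 0], [0.1, 0, 0], [0.1, 0.1, 0], [0, 0.1, 0]], dtype=np.float32)
    marker_2d = np.array([[320, 240], [420, 238], [424, 338], [318, 342]], dtype=np.float32)
    camera_matrix = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)
    dist_coeffs = np.zeros(5)

    ok, rvec, tvec = cv2.solvePnP(marker_3d, marker_2d, camera_matrix, dist_coeffs)

    # Virtual content defined in marker coordinates can now be projected into
    # the image and drawn in alignment with the real marker.
    cube_top, _ = cv2.projectPoints(np.array([[0.05, 0.05, -0.1]], dtype=np.float32),
                                    rvec, tvec, camera_matrix, dist_coeffs)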

Augmented Reality Markup Language (ARML) is a data standard developed within the Open Geospatial Consortium (OGC), which consists of XML grammar to describe the location and appearance of virtual objects in the scene, as well as ECMAScript bindings to allow dynamic access to properties of virtual objects.
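
As a rough illustration of the kind of data ARML carries, the fragment below parses a simplified, ARML-inspired document; the element names are abbreviated for brevity and do not reproduce the actual OGC schema.

    import xml.etree.ElementTree as ET

    # Simplified, ARML-inspired fragment: one virtual object anchored to a
    # geographic position. Element names are illustrative, not the OGC schema.
    doc = """
    <arml>
      <Feature id="restaurant">
        <name>Corner Cafe</name>
        <anchor lat="47.3686" lon="8.5392"/>
        <model href="models/marker.obj"/>
      </Feature>
    </arml>
    """

    root = ET.fromstring(doc)
    for feature in root.findall("Feature"):
        anchor = feature.find("anchor")
        print(feature.get("id"),
              feature.findtext("name"),
              float(anchor.get("lat")),
              float(anchor.get("lon")))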

To enable rapid development of augmented reality applications, some software development kits (SDKs) have emerged. A few SDKs, such as CloudRidAR, leverage cloud computing for performance improvement. Some of the well-known AR SDKs are offered by Vuforia, ARToolKit, Catchoom CraftAR, Mobinett AR, Wikitude, Blippar, Layar, and Meta.

Applications

Augmented reality has many applications. First used for military, industrial, and medical applications, by 2012 its use expanded into entertainment and other commercial industries. By 2016, powerful mobile devices allowed AR to become a useful learning aid even in primary schools.

Since the 1970s and early 1980s, Steve Mann has been developing technologies meant for everyday use, i.e. "horizontal" across all applications rather than a specific "vertical" market. Examples include Mann's "EyeTap Digital Eye Glass", a general-purpose seeing aid that does dynamic-range management (HDR vision) and overlays, underlays, and simultaneous augmentation and diminishment (e.g. diminishing the electric arc while looking at a welding torch).

Literature

In 2011, AR was blended with poetry by ni ka from Sekai Camera in Tokyo, Japan. The prose of these AR poems comes from Paul Celan's "Die Niemandsrose", expressing mourning for the March 2011 Tōhoku earthquake and tsunami.

Archaeology

AR was applied to aid archaeological research. By augmenting archaeological features onto the modern landscape, AR allowed archaeologists to formulate possible site configurations from extant structures.

Computer generated models of ruins, buildings, landscapes or even ancient people have been recycled into early archaeological AR applications.

Architecture

AR can aid in visualizing building projects. Computer-generated images of a structure can be superimposed into a real life local view of a property before the physical building is constructed there; this was demonstrated publicly by Trimble Navigation in 2004. AR can also be employed within an architect's work space, rendering into their view animated 3D visualizations of their 2D drawings. Architecture sight-seeing can be enhanced with AR applications allowing users viewing a building's exterior to virtually see through its walls, viewing its interior objects and layout.

With the continual improvements to GPS accuracy, businesses are able to use augmented reality to visualize georeferenced models of construction sites, underground structures, cables and pipes using mobile devices. Augmented reality is applied to present new projects, to solve on-site construction challenges, and to enhance promotional materials. Examples include the Daqri Smart Helmet, an Android-powered hard hat used to create augmented reality for the industrial worker, including visual instructions, real time alerts, and 3D mapping.

Following the Christchurch earthquake, the University of Canterbury released CityViewAR, which enabled city planners and engineers to visualize buildings that had been destroyed. Not only did this provide planners with tools to reference the previous cityscape, but it also served as a reminder of the magnitude of the devastation caused, as entire buildings had been demolished.

Visual art

AR applied in the visual arts allows objects or places to trigger artistic multidimensional experiences and interpretations of reality.

AR technology aided the development of eye tracking technology to translate a disabled person's eye movements into drawings on a screen.

By 2011, augmenting people, objects, and landscapes had become a recognized art style. For example, in 2011, artist Amir Baradaran's work "Frenchising the Mona Lisa" overlaid video on Da Vinci's painting using an AR mobile application called Junaio. The AR app allowed the user to train his or her smartphone on Da Vinci's Mona Lisa and watch the lady loosen her hair and wrap a French flag around her visage in the form of an Islamic hijab. The wearing of a hijab was controversial in France at the time.

Commerce

AR is used to integrate print and video marketing. Printed marketing material can be designed with certain "trigger" images that, when scanned by an AR-enabled device using image recognition, activate a video version of the promotional material. A major difference between augmented reality and straightforward image recognition is that multiple media can be overlaid at the same time in the view screen, such as social media share buttons, in-page video, audio, and 3D objects. Traditional print-only publications are using augmented reality to connect many different types of media.
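
A hedged sketch of how such a trigger image might be recognized in a camera frame, using generic local-feature matching (ORB descriptors via OpenCV) rather than any particular vendor's pipeline; the file names and thresholds are placeholders.

    import cv2

    # Reference "trigger" artwork from the printed material and a live camera frame.
    trigger = cv2.imread("trigger_ad.png", cv2.IMREAD_GRAYSCALE)
    frame = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)

    # Detect and describe local features in both images.
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(trigger, None)
    kp2, des2 = orb.detectAndCompute(frame, None)

    # Match descriptors; enough good matches means the trigger is in view,
    # at which point the associated video overlay would be started.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    good = [m for m in matches if m.distance < 40]  # placeholder threshold

    if len(good) > 25:
        print("Trigger recognized: launch the linked promotional video overlay")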

AR can enhance product previews such as allowing a customer to view what's inside a product's packaging without opening it. AR can also be used as an aid in selecting products from a catalog or through a kiosk. Scanned images of products can activate views of additional content such as customization options and additional images of the product in its use.

By 2010, virtual dressing rooms had been developed for e-commerce.

In 2012, a mint used AR techniques to market a commemorative coin for Aruba. Using the coin itself as an AR trigger, when held in front of an AR-enabled device it revealed additional objects and layers of information that were not visible without the device.

In 2013, L'Oreal used CrowdOptic technology to create an augmented reality experience at the seventh annual Luminato Festival in Toronto, Canada.

In 2014, L'Oreal Paris brought the AR experience to a personal level with their "Makeup Genius" app. It allowed users to try out make-up and beauty styles utilizing a mobile device.

In 2015, the Bulgarian startup iGreet developed its own AR technology and used it to make the first premade “live” greeting card. A traditional paper card was augmented with digital content which is revealed by using the iGreet app.

In late 2015, the Luxembourg startup itondo.com launched an AR app for the art market that lets art buyers accurately visualize 2D artworks to scale on their own walls before they buy. The app has two AR-enabled functionalities: a Live Preview for viewing artwork to scale as the user moves around the room, and a Backgrounds Preview where the user previews artwork to scale on pre-saved wall photos. The app allows searching of the entire marketplace catalog or the user's saved Favorites while in AR mode, so the user can jump between artworks without having to return to the Home screen. Virtually installed works can be photographed and then shared through native channels.

Education

In educational settings, AR has been used to complement a standard curriculum. Text, graphics, video and audio were superimposed into a student’s real time environment. Textbooks, flashcards and other educational reading material contained embedded “markers” or triggers that, when scanned by an AR device, produced supplementary information to the student rendered in a multimedia format.

As AR evolved, students could participate interactively; computer-generated simulations of historical events could come alive, letting students explore and learn the details of each significant area of the event site. In higher education, several applications can be used. Construct3D, a Studierstube system, allowed students to learn mechanical engineering concepts, math or geometry. Chemistry AR apps allowed students to visualize and interact with the spatial structure of a molecule using a marker object held in the hand. Anatomy students could visualize different systems of the human body in three dimensions.

Augmented reality technology enhanced remote collaboration, allowing students and instructors in different locales to interact by sharing a common virtual learning environment populated by virtual objects and learning materials.

Primary school children learn easily from interactive experiences. For instance, astronomical constellations and the movements of objects in the solar system were oriented in 3D, overlaid in the direction the device was held, and expanded with supplemental video information. Paper-based science book illustrations could seem to come alive as video without requiring the child to navigate to web-based materials.
For teaching anatomy, teachers could use devices to superimpose hidden anatomical structures like bones and organs on any person in the classroom.

While some educational AR apps were available by 2016, AR was not broadly used in classrooms. Apps that leveraged augmented reality to aid learning included SkyView for studying astronomy and AR Circuits for building simple electric circuits.

Emergency management/search and rescue

Augmented reality systems are used in public safety situations – from super storms to suspects at large.

As early as 2009, two articles from Emergency Management magazine discussed the power of the technology for emergency management. The first was "Augmented Reality--Emerging Technology for Emergency Management" by Gerald Baron. Per Adam Crowe: "Technologies like augmented reality (ex: Google Glass) and the growing expectation of the public will continue to force professional emergency managers to radically shift when, where, and how technology is deployed before, during, and after disasters."

Another early example involved a search aircraft looking for a lost hiker in rugged mountain terrain. Augmented reality systems provided aerial camera operators with a geographic awareness of forest road names and locations blended with the camera video. As a result, the camera operator was better able to search for the hiker knowing the geographic context of the camera image. Once the hiker was located, the operator could more efficiently direct rescuers to the hiker's location because the geographic position and reference landmarks were clearly labeled.
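
A simplified sketch of the overlay step underlying such systems: given the aerial camera's position, orientation, and intrinsics, a georeferenced point such as a road junction is projected into the video frame so its label can be drawn there. The flat local-coordinate frame, the numbers, and the road name are all illustrative assumptions, not details of the system described above.

    import numpy as np

    def project_to_image(point_enu, cam_pos_enu, R_world_to_cam, fx, fy, cx, cy):
        """Project a georeferenced point (local East-North-Up metres) into pixel
        coordinates of a calibrated camera. Simplified pinhole model; real systems
        also handle lens distortion, terrain elevation, and datum conversions."""
        p_cam = R_world_to_cam @ (np.asarray(point_enu) - np.asarray(cam_pos_enu))
        if p_cam[2] <= 0:          # behind the camera: not visible
            return None
        u = fx * p_cam[0] / p_cam[2] + cx
        v = fy * p_cam[1] / p_cam[2] + cy
        return u, v

    # Placeholder values: nadir-looking camera 500 m above ground, road junction
    # 300 m east of the aircraft's ground position.
    R_nadir = np.array([[1.0, 0.0, 0.0],     # camera x axis = East
                        [0.0, -1.0, 0.0],    # camera y axis = South (image y points down)
                        [0.0, 0.0, -1.0]])   # camera z axis = Down (optical axis)
    pixel = project_to_image([300.0, 0.0, 0.0], [0.0, 0.0, 500.0], R_nadir,
                             fx=1000.0, fy=1000.0, cx=640.0, cy=360.0)
    print("Draw road-name label at pixel", pixel)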

Video games

The gaming industry embraced AR technology. A number of games were developed for prepared indoor environments, such as AR air hockey, Titans of Space, collaborative combat against virtual enemies, and AR-enhanced pool-table games.

Augmented reality allowed video game players to experience digital game play in a real world environment. Companies and platforms like Niantic and LyteShot emerged as major augmented reality gaming creators. Niantic is notable for releasing the record-breaking Pokémon Go game. However, though the popular press overwhelmingly calls Pokémon Go an augmented reality game, most experts in AR and experts in game development agree that it is best described as a location-based game.

Industrial design

AR allowed industrial designers to experience a product's design and operation before completion. Volkswagen used AR for comparing calculated and actual crash test imagery. AR was used to visualize and modify car body structure and engine layout. AR was also used to compare digital mock-ups with physical mock-ups for finding discrepancies between them.

Medical

Since 2005, a device that films subcutaneous veins, processes and projects the image of the veins onto the skin has been used to locate veins. This device is called a near-infrared vein finder.

AR provided surgeons with patient monitoring data in the style of a fighter pilot's head-up display and allowed patient imaging records, including functional videos, to be accessed and overlaid. Examples include a virtual X-ray view based on prior tomography or on real-time images from ultrasound and confocal microscopy probes, visualizing the position of a tumor in the video of an endoscope, or visualizing radiation exposure risks from X-ray imaging devices. AR can enhance viewing a fetus inside a mother's womb. Siemens, Karl Storz and IRCAD have developed a system for laparoscopic liver surgery that uses AR to view sub-surface tumors and vessels. AR has been used for cockroach phobia treatment. Patients wearing augmented reality glasses can be reminded to take medications.

Spatial immersion and interaction

Augmented reality applications, running on handheld devices utilized as virtual reality headsets, can also digitize human presence in space and provide a computer-generated model of the user in a virtual space where they can interact and perform various actions. Such capabilities are demonstrated by "Project Anywhere", developed by a postgraduate student at ETH Zurich, which was dubbed an "out-of-body experience".

Military

An interesting early application of AR occurred when Rockwell International created video map overlays of satellite and orbital debris tracks to aid in space observations at Air Force Maui Optical System. In their 1993 paper "Debris Correlation Using the Rockwell WorldView System" the authors describe the use of map overlays applied to video from space surveillance telescopes. The map overlays indicated the trajectories of various objects in geographic coordinates. This allowed telescope operators to identify satellites, and also to identify – and catalog – potentially dangerous space debris.

Starting in 2003, the US Army integrated the SmartCam3D augmented reality system into the Shadow Unmanned Aerial System to aid sensor operators using telescopic cameras to locate people or points of interest. The system combined fixed geographic information, including street names, points of interest, airports and railroads, with live video from the camera system. The system offered a "picture in picture" mode that showed a synthetic view of the area surrounding the camera's field of view. This helped solve a problem in which the field of view was so narrow that it excluded important context, as if "looking through a soda straw". The system displayed real-time friend/foe/neutral location markers blended with live video, providing the operator with improved situation awareness.

Researchers at USAF Research Lab (Calhoun, Draper et al.) found an approximately two-fold increase in the speed at which UAV sensor operators found points of interest using this technology. This ability to maintain geographic awareness quantitatively enhances mission efficiency. The system is in use on the US Army RQ-7 Shadow and the MQ-1C Gray Eagle Unmanned Aerial Systems.

In combat, AR can serve as a networked communication system that renders useful battlefield data onto a soldier's goggles in real time. From the soldier's viewpoint, people and various objects can be marked with special indicators to warn of potential dangers. Virtual maps and 360° view camera imaging can also be rendered to aid a soldier's navigation and battlefield perspective, and this can be transmitted to military leaders at a remote command center.

The NASA X-38 was flown using a Hybrid Synthetic Vision system that overlaid map data on video to provide enhanced navigation for the spacecraft during flight tests from 1998 to 2002. It used the LandForm software and was useful at times of limited visibility, including an instance when the video camera window frosted over, leaving astronauts to rely on the map overlays. The LandForm software was also test flown at the Army Yuma Proving Ground in 1999, with map markers indicating runways, the air traffic control tower, taxiways, and hangars overlaid on the video.

AR can augment the effectiveness of navigation devices. Information can be displayed on an automobile's windshield indicating destination directions, speed, weather, terrain, road conditions and traffic information, as well as alerts to potential hazards in the vehicle's path. Aboard maritime vessels, AR can allow bridge watch-standers to continuously monitor important information such as a ship's heading and speed while moving throughout the bridge or performing other tasks.

Workplace

AR was used to facilitate collaboration among distributed team members via conferences with local and virtual participants. AR tasks included brainstorming and discussion meetings utilizing common visualization via touch screen tables, interactive digital whiteboards, shared design spaces, and distributed control rooms.

Complex tasks such as assembly, maintenance, and surgery were simplified by inserting additional information into the field of view. For example, labels were displayed on the parts of a system to clarify operating instructions for a mechanic performing maintenance. Assembly lines benefited from the use of AR. In addition to Boeing, BMW and Volkswagen were known for incorporating this technology into assembly lines for monitoring process improvements. Large machines are difficult to maintain because of their many layers and structures; AR permitted technicians to look through the machine as if with an X-ray, pointing them to the problem right away.

Broadcast and live events

Weather visualizations were the first application of augmented reality to television. It has now become common in weathercasting to display full motion video of images captured in real-time from multiple cameras and other imaging devices. Coupled with 3D graphics symbols and mapped to a common virtual geo-space model, these animated visualizations constitute the first true application of AR to TV.

AR has become common in sports telecasting. Sports and entertainment venues are provided with see-through and overlay augmentation through tracked camera feeds for enhanced viewing by the audience. Examples include the yellow "first down" line seen in television broadcasts of American football games showing the line the offensive team must cross to receive a first down. AR is also used in association with football and other sporting events to show commercial advertisements overlaid onto the view of the playing area. Sections of rugby fields and cricket pitches also display sponsored images. Swimming telecasts often add a line across the lanes to indicate the position of the current record holder as a race proceeds to allow viewers to compare the current race to the best performance. Other examples include hockey puck tracking and annotations of racing car performance and snooker ball trajectories.
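
At its simplest, the first-down line is drawn by mapping field coordinates into the broadcast image through a per-frame homography estimated from known field landmarks. The sketch below (OpenCV, with made-up calibration points) shows only that mapping; real systems also model the camera's pan, tilt and zoom, and mask out players who occlude the line.

    import numpy as np
    import cv2

    frame = cv2.imread("broadcast_frame.png")

    # Placeholder calibration: four field landmarks (yards along/across the field)
    # and their pixel positions in this frame, giving a field-to-image homography.
    field_pts = np.float32([[0, 0], [0, 53.3], [50, 0], [50, 53.3]])
    image_pts = np.float32([[120, 620], [180, 200], [1100, 640], [1040, 210]])
    H = cv2.getPerspectiveTransform(field_pts, image_pts)

    # First-down marker at the 38-yard line: map its two endpoints into the image.
    line_field = np.float32([[[38, 0]], [[38, 53.3]]])
    line_image = cv2.perspectiveTransform(line_field, H)

    p1 = tuple(int(v) for v in line_image[0, 0])
    p2 = tuple(int(v) for v in line_image[1, 0])
    cv2.line(frame, p1, p2, color=(0, 255, 255), thickness=4)  # yellow in BGR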

Augmented reality for Next Generation TV allowed viewers to interact with the programs they were watching. They could place objects into an existing program and interact with them, such as moving them around. Objects included avatars of real persons in real time who were also watching the same program.

AR was used to enhance concert and theater performances. For example, artists allowed listeners to augment their listening experience by adding their performance to that of other bands/groups of users.

Tourism and sightseeing

Travelers used AR to access real time informational displays regarding a location, its features and comments or content provided by previous visitors. Advanced AR applications included simulations of historical events, places and objects rendered into the landscape.

AR applications linked to geographic locations presented location information by audio, announcing features of interest at a particular site as they became visible to the user.

Translation

AR systems such as Word Lens can interpret foreign text on signs and menus and, in a user's augmented view, re-display the text in the user's language. Spoken words of a foreign language can be translated and displayed in a user's view as printed subtitles.
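
A toy sketch of that pipeline: recognize text in the camera image, translate it, and draw the translation back over the original region. Here pytesseract stands in for the recognition step and a tiny phrasebook dictionary stands in for a real translation engine; none of this reflects Word Lens's actual implementation.

    import cv2
    import pytesseract

    # Tiny stand-in for a real translation backend.
    PHRASEBOOK = {"SALIDA": "EXIT", "EMPUJE": "PUSH"}

    frame = cv2.imread("sign.png")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Locate words and their bounding boxes in the camera image.
    data = pytesseract.image_to_data(gray, output_type=pytesseract.Output.DICT)

    for i, word in enumerate(data["text"]):
        translated = PHRASEBOOK.get(word.strip().upper())
        if translated:
            x, y = data["left"][i], data["top"][i]
            w, h = data["width"][i], data["height"][i]
            # Paint over the original word and draw the translation in its place.
            cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 255, 255), -1)
            cv2.putText(frame, translated, (x, y + h), cv2.FONT_HERSHEY_SIMPLEX,
                        1.0, (0, 0, 0), 2)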

Music

It has been suggested that augmented reality may be used in new methods of music production, mixing, control and visualization.

A tool for 3D music creation in clubs that, in addition to regular sound mixing features, allows the DJ to play dozens of sound samples, placed anywhere in 3D space, has been conceptualized.

Leeds College of Music teams have developed an AR app that can be used with Audient desks and allows students to use their smartphone or tablet to put layers of information or interactivity on top of an Audient mixing desk.

ARmony is a software package that makes use of augmented reality to help people to learn an instrument.

In a proof-of-concept project Ian Sterling, interaction design student at California College of the Arts, and software engineer Swaroop Pal demonstrated a HoloLens app whose primary purpose is to provide a 3D spatial UI for cross-platform devices — the Android Music Player app and Arduino-controlled Fan and Light — and also allow interaction using gaze and gesture control.

AR Mixer is an app that allows users to select and mix between songs by manipulating objects, such as changing the orientation of a bottle or can.

In a video, Uriel Yehezkel demonstrates using the Leap Motion controller and GECO MIDI to control Ableton Live with hand gestures, stating that by this method he was able to control more than 10 parameters simultaneously with both hands and take full control over the construction of the song, its emotion and energy.

A novel musical instrument that allows novices to play electronic musical compositions, interactively remixing and modulating their elements, by manipulating simple physical objects has been proposed.

A system using explicit gestures and implicit dance moves to control the visual augmentations of a live music performance has also been suggested; it enables more dynamic and spontaneous performances and, in combination with indirect augmented reality, a more intense interaction between artist and audience.

Privacy concerns

The concept of modern augmented reality depends on the ability of the device to record and analyze the environment in real time. Because of this, there are potential legal concerns over privacy. While the First Amendment to the United States Constitution allows for such recording in the name of public interest, the constant recording of an AR device makes it difficult to do so without also recording outside the public domain. Legal complications arise in areas where a certain amount of privacy is expected or where copyrighted media are displayed. In terms of individual privacy, AR eases access to information that one should not readily possess about a given person, for instance through facial recognition technology. If AR automatically passes information about the persons a user sees, the user could see anything from social media activity to criminal records and marital status.

Notable researchers

  • Ivan Sutherland invented the first AR head-mounted display at Harvard University.
  • Steven Feiner, Professor at Columbia University, authored, along with Blair MacIntyre and Doree Seligmann, one of the first papers on an AR system prototype, KARMA (the Knowledge-based Augmented Reality Maintenance Assistant), in 1993. He is also an advisor to Meta.
  • Meron Gribetz conceptualized the Meta head-mounted display headset. He is also founder and CEO of Meta, a Silicon Valley company known for producing innovative augmented reality products.
  • S. Ravela, B. Draper, J. Lim and A. Hanson developed a marker/fixture-less augmented reality system with computer vision in 1994. They augmented an engine block observed from a single video camera with annotations for repair, using model-based pose estimation, aspect graphs and visual feature tracking to dynamically register the model with the observed video.
  • Steve Mann formulated an earlier concept of mediated reality in the 1970s and 1980s, using cameras, processors, and display systems to modify visual reality to help people see better (dynamic range management), building computerized welding helmets, as well as "augmediated reality" vision systems for use in everyday life. He is also an adviser to Meta.
  • Louis Rosenberg developed one of the first known AR systems, called Virtual Fixtures, while working at the U.S. Air Force Armstrong Labs in 1991, and published the first study of how an AR system can enhance human performance. Rosenberg's subsequent work at Stanford University in the early 1990s was the first proof that virtual overlays, when registered and presented over a user's direct view of the real physical world, could significantly enhance human performance.
  • Mike Abernathy pioneered one of the first successful augmented reality applications of video overlay using map data for space debris in 1993, while at Rockwell International. He co-founded Rapid Imaging Software, Inc. and was the primary author of the LandForm system in 1995, and the SmartCam3D system. LandForm augmented reality was successfully flight tested in 1999 aboard a helicopter and SmartCam3D was used to fly the NASA X-38 from 1999–2002. He and NASA colleague Francisco Delgado received the National Defense Industries Association Top5 awards in 2004.
  • Francisco "Frank" Delgado is a NASA engineer and project manager specializing in human interface research and development. Starting in 1998, he conducted research into displays that combined video with synthetic vision systems (called hybrid synthetic vision at the time) that we recognize today as augmented reality systems for the control of aircraft and spacecraft. In 1999, he and colleague Mike Abernathy flight-tested the LandForm system aboard a US Army helicopter. Delgado oversaw integration of the LandForm and SmartCam3D systems into the X-38 Crew Return Vehicle. In 2001, Aviation Week reported a NASA astronaut's successful use of hybrid synthetic vision (augmented reality) to fly the X-38 during a flight test at Dryden Flight Research Center. The technology was used in all subsequent flights of the X-38. Delgado was co-recipient of the National Defense Industries Association 2004 Top 5 software of the year award for SmartCam3D.
  • Dieter Schmalstieg and Daniel Wagner jump-started the field of AR on mobile phones. They developed the first marker tracking systems for mobile phones and PDAs.
  • Bruce H. Thomas and Wayne Piekarski developed the Tinmith system in 1998. Along with Steve Feiner and his MARS system, they pioneered outdoor augmented reality.
  • Mark Billinghurst is one of the world's leading augmented reality researchers. Director of the HIT Lab New Zealand (HIT Lab NZ) at the University of Canterbury in New Zealand, he has produced over 250 technical publications and presented demonstrations and courses at a wide variety of conferences.
  • Reinhold Behringer performed important early work in image registration for augmented reality, and prototype wearable testbeds for augmented reality. He also co-organized the First IEEE International Symposium on Augmented Reality in 1998 (IWAR'98), and co-edited one of the first books on augmented reality.
History

  • 1901: L. Frank Baum, an author, first mentions the idea of an electronic display/spectacles that overlays data onto real life (in this case 'people'); it is named a 'character marker'.
  • 1957–62: Morton Heilig, a cinematographer, creates and patents a simulator called Sensorama with visuals, sound, vibration, and smell.
  • 1968: Ivan Sutherland invents the head-mounted display and positions it as a window into a virtual world.
  • 1975: Myron Krueger creates Videoplace to allow users to interact with virtual objects for the first time.
  • 1980: Steve Mann creates the first wearable computer, a computer vision system with text and graphical overlays on a photographically mediated reality, or Augmediated Reality. See EyeTap.
  • 1981: Dan Reitan geospatially maps multiple weather radar images and space-based and studio cameras to virtual reality Earth maps and abstract symbols for television weather broadcasts, bringing Augmented Reality to TV.
  • 1987: Douglas George and Robert Morris create a working prototype of an astronomical telescope-based AR system which superimposed in the telescope eyepiece, over the actual sky images, multi-intensity star and celestial body images, and other relevant information.
  • 1989: Jaron Lanier coins the phrase Virtual Reality and creates the first commercial business around virtual worlds.
  • 1990: The term 'Augmented Reality' is attributed to Thomas P. Caudell, a former Boeing researcher.
  • 1992: Louis Rosenberg develops one of the first functioning AR systems, called Virtual Fixtures, at the U.S. Air Force Research Laboratory—Armstrong, and demonstrates benefits to human performance.
  • 1992: Steven Feiner, Blair MacIntyre and Doree Seligmann present the first major paper on an AR system prototype, KARMA, at the Graphics Interface conference.
  • 1993: Mike Abernathy, et al., report the first use of augmented reality in identifying space debris using Rockwell WorldView by overlaying satellite geographic trajectories on live telescope video.
  • 1993: A widely cited version of the above paper is published in Communications of the ACM – Special issue on computer augmented environments, edited by Pierre Wellner, Wendy Mackay, and Rich Gold.
  • 1993: Loral WDL, with sponsorship from STRICOM, performs the first demonstration combining live AR-equipped vehicles and manned simulators (J. Barrilleaux, "Experiences and Observations in Applying Augmented Reality to Live Training", unpublished paper, 1999).
  • 1994: Julie Martin creates the first augmented reality theater production, Dancing In Cyberspace. Funded by the Australia Council for the Arts, it features dancers and acrobats manipulating body-sized virtual objects in real time, projected into the same physical space and performance plane. The acrobats appeared immersed within the virtual objects and environments. The installation used Silicon Graphics computers and a Polhemus sensing system.
  • 1995: S. Ravela et al. at University of Massachusetts introduce a vision-based system using monocular cameras to track objects (engine blocks) across views for augmented reality.
  • 1998: Spatial Augmented Reality introduced at University of North Carolina at Chapel Hill by Ramesh Raskar, Welch, Henry Fuchs.
  • 1999: Frank Delgado, Mike Abernathy et al. report successful flight test of LandForm software video map overlay from a helicopter at Army Yuma Proving Ground overlaying video with runways, taxiways, roads and road names.
  • 1999: The US Naval Research Laboratory begins a decade-long research program called the Battlefield Augmented Reality System (BARS) to prototype some of the early wearable systems for dismounted soldiers operating in urban environments, for situation awareness and training.
  • 1999: Hirokazu Kato (加藤 博一) created ARToolKit at HITLab, where AR later was further developed by other HITLab scientists, demonstrating it at SIGGRAPH.
  • 2000: Bruce H. Thomas develops ARQuake, the first outdoor mobile AR game, demonstrating it at the International Symposium on Wearable Computers.
  • 2001: NASA X-38 flown using LandForm software video map overlays at Dryden Flight Research Center.
  • 2004: Outdoor helmet-mounted AR system demonstrated by Trimble Navigation and the Human Interface Technology Laboratory.
  • 2008: Wikitude AR Travel Guide launches on 20 Oct 2008 with the G1 Android phone.
  • 2009: ARToolkit was ported to Adobe Flash (FLARToolkit) by Saqoosha, bringing augmented reality to the web browser.
  • 2012: Launch of Lyteshot, an interactive AR gaming platform that utilizes smartglasses for game data.
  • 2013: Meta announces the Meta 1 developer kit, the first see-through AR display to reach market.
  • 2013: Google announces an open beta test of its Google Glass augmented reality glasses. The glasses reach the Internet through Bluetooth, which connects to the wireless service on a user’s cellphone. The glasses respond when a user speaks, touches the frame or moves the head.
  • 2014: Mahei creates the first generation of augmented reality enhanced educational toys.
  • 2015: Microsoft announces Windows Holographic and the HoloLens augmented reality headset. The headset utilizes various sensors and a processing unit to blend high definition "holograms" with the real world.
  • 2016: Niantic released Pokémon Go for iOS and Android in July 2016. The game quickly became one of the most used applications and has brought augmented reality to the mainstream.