Augmented reality

Not to be confused with Virtual Reality.

Augmented reality (AR) is a direct or indirect live view of a physical, real-world environment whose elements are "augmented" by computer-generated perceptual information, ideally across multiple sensory modalities, including visual, auditory, haptic, somatosensory, and olfactory.[1] The overlaid sensory information can be constructive (i.e. additive to the natural environment) or destructive (i.e. masking of the natural environment) and is spatially registered with the physical world such that it is perceived as an immersive aspect of the real environment.[2] In this way, augmented reality alters one's current perception of a real-world environment, whereas virtual reality replaces the real-world environment with a simulated one.[3][4] Augmented reality is related to two largely synonymous terms: mixed reality and computer-mediated reality.

The primary value of augmented reality is that it brings components of the digital world into a person's perception of the real world, and does so not as a simple display of data, but through the integration of immersive sensations that are perceived as natural parts of an environment. The first functional AR systems that provided immersive mixed reality experiences for users were invented in the early 1990s, starting with the Virtual Fixtures system developed at the U.S. Air Force's Armstrong Labs in 1992.[2][5][6] The first commercial augmented reality experiences were used largely in the entertainment and gaming businesses, but other industries are now becoming interested in AR's possibilities, for example in knowledge sharing, education, managing information overload, and organizing remote meetings. Augmented reality is also transforming the world of education, where content may be accessed by scanning or viewing an image with a mobile device.[7] Another example is an AR helmet for construction workers that displays information about construction sites.

Augmented reality is used to enhance natural environments or situations and offer perceptually enriched experiences. With the help of advanced AR technologies (e.g. adding computer vision and object recognition), the information about the user's surrounding real world becomes interactive and digitally manipulable. Information about the environment and its objects is overlaid on the real world. This information can be virtual[8][9][10][11][12][13] or real, e.g. seeing other real sensed or measured information such as electromagnetic radio waves overlaid in exact alignment with where they actually are in space.[14][15] Augmented reality also has considerable potential for gathering and sharing tacit knowledge. Augmentation techniques are typically performed in real time and in semantic context with environmental elements. Immersive perceptual information is sometimes combined with supplemental information like scores over a live video feed of a sporting event. This combines the benefits of augmented reality technology and head-up display (HUD) technology.

Technology[edit]

Hardware[edit]

Hardware components for augmented reality are: a processor, display, sensors and input devices. Modern mobile computing devices like smartphones and tablet computers contain these elements, which often include a camera and MEMS sensors such as an accelerometer, GPS, and solid state compass, making them suitable AR platforms.[16]

Display[edit]

Various technologies are used in augmented reality rendering, including optical projection systems, monitors, handheld devices, and display systems worn on the human body.

A head-mounted display (HMD) is a display device worn on the forehead, such as in a harness or helmet. HMDs place images of both the physical world and virtual objects over the user's field of view. Modern HMDs often employ sensors for six degrees of freedom monitoring that allow the system to align virtual information to the physical world and adjust accordingly with the user's head movements.[17][18][19] HMDs can provide VR users with mobile and collaborative experiences.[20] Specific providers, such as uSens and Gestigon, include gesture controls for full virtual immersion.[21][22]

In January 2015, Meta launched a funding round led by Horizons Ventures, Tim Draper, Alexis Ohanian, BOE Optoelectronics and Garry Tan.[23][24][25] On February 17, 2016, Meta announced their second-generation product at TED, the Meta 2. The Meta 2 head-mounted display headset uses a sensory array for hand interactions and positional tracking, a 90-degree (diagonal) visual field of view, and a display resolution of 2560 × 1440 (20 pixels per degree), which is considered the largest field of view (FOV) currently available.[26][27][28][29]

Eyeglasses[edit]

AR displays can be rendered on devices resembling eyeglasses. Versions include eyewear that employs cameras to intercept the real world view and re-display its augmented view through the eyepieces[30] and devices in which the AR imagery is projected through or reflected off the surfaces of the eyewear's lens pieces.[31][32][33]

HUD[edit]

See also: Head-up display

A head-up display (HUD) is a transparent display that presents data without requiring users to look away from their usual viewpoints. A precursor technology to augmented reality, head-up displays were first developed for pilots in the 1950s, projecting simple flight data into their line of sight, thereby enabling them to keep their "heads up" and not look down at the instruments. Near-eye augmented reality devices can be used as portable head-up displays as they can show data, information, and images while the user views the real world. Many definitions of augmented reality only define it as overlaying the information.[34][35] This is basically what a head-up display does; however, practically speaking, augmented reality is expected to include registration and tracking between the superimposed perceptions, sensations, information, data, and images and some portion of the real world.[36]

CrowdOptic, an existing app for smartphones, applies algorithms and triangulation techniques to photo metadata including GPS position, compass heading, and a time stamp to arrive at a relative significance value for photo objects.[37] CrowdOptic technology can be used by Google Glass users to learn where to look at a given point in time.[38]

A number of smartglasses have been launched for augmented reality. Due to encumbered control, smartglasses are primarily designed for micro-interactions such as reading a text message, and are still far from supporting more well-rounded augmented reality applications.[39] In January 2015, Microsoft introduced HoloLens, an independent smartglasses unit. Brian Blau, Research Director of Consumer Technology and Markets at Gartner, said that "Out of all the head-mounted displays that I've tried in the past couple of decades, the HoloLens was the best in its class."[40] First impressions and opinions were generally that HoloLens is a superior device to Google Glass, and manages to do several things "right" in which Glass failed.[40][41]

Contact lenses[edit]

Contact lenses that display AR imaging are in development. These bionic contact lenses might contain the elements for display embedded into the lens, including integrated circuitry, LEDs and an antenna for wireless communication. The first contact lens display was reported in 1999,[42] with further work reported 11 years later in 2010-2011.[43][44][45][46] Another version of contact lenses, in development for the U.S. military, is designed to function with AR spectacles, allowing soldiers to focus on close-to-the-eye AR images on the spectacles and distant real world objects at the same time.[47][48]

The futuristic short film Sight[49] features contact lens-like augmented reality devices.[50][51]

Many scientists have been working on contact lenses capable of different technological feats. Samsung has been working on such a contact lens as well. When finished, this lens is meant to have a built-in camera on the lens itself.[52] The design is intended to let the wearer blink to control its interface for recording purposes. It is also intended to be linked with the wearer's smartphone to review footage and control the lens separately. The lens would feature a camera or sensor inside it, which reportedly could be anything from a light sensor to a temperature sensor.

In augmented reality, a distinction is made between two distinct modes of tracking, known as "marker" and "markerless". Markers are visual cues which trigger the display of the virtual information.[53] A piece of paper with some distinct geometry can be used: the camera recognizes the geometry by identifying specific points in the drawing. Markerless tracking, also called instant tracking, does not use markers. Instead, the user positions the object in the camera view, preferably on a horizontal plane. It uses sensors in mobile devices to accurately detect the real-world environment, such as the locations of walls and points of intersection.[54]
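
As an illustration, a minimal marker-tracking loop can be written with OpenCV's ArUco module. This is only a sketch, assuming the opencv-contrib-python package and a default webcam; it merely detects and outlines markers, where a full AR application would anchor virtual content to the recovered marker pose.

    # Minimal marker-based tracking sketch using OpenCV's ArUco module.
    # Classic cv2.aruco API shown; OpenCV >= 4.7 wraps this in cv2.aruco.ArucoDetector.
    # The dictionary choice is illustrative, not taken from this article.
    import cv2

    aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

    cap = cv2.VideoCapture(0)  # default webcam
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Find the markers' distinctive geometry by locating specific points.
        corners, ids, _rejected = cv2.aruco.detectMarkers(gray, aruco_dict)
        if ids is not None:
            # Outline detections; a real app would anchor virtual content here.
            cv2.aruco.drawDetectedMarkers(frame, corners, ids)
        cv2.imshow("marker tracking", frame)
        if cv2.waitKey(1) == 27:  # Esc quits
            break
    cap.release()
    cv2.destroyAllWindows()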

Virtual retinal display[edit]

A virtual retinal display (VRD) is a personal display device under development at the University of Washington's Human Interface Technology Laboratory under Dr. Thomas A. Furness III.[55] With this technology, a display is scanned directly onto the retina of a viewer's eye. This results in bright images with high resolution and high contrast. The viewer sees what appears to be a conventional display floating in space.[56]

Several tests were conducted to analyze the safety of the VRD.[55] In one test, patients with partial loss of vision, having either macular degeneration (a disease that degenerates the retina) or keratoconus, were selected to view images using the technology. In the macular degeneration group, five out of eight subjects preferred the VRD images to the CRT or paper images, finding them better and brighter and able to be seen at equal or better resolution levels. The keratoconus patients could all resolve smaller lines in several line tests using the VRD as opposed to their own correction. They also found the VRD images easier to view and sharper. As a result of these tests, the virtual retinal display is considered a safe technology.

The virtual retinal display creates images that can be seen in ambient daylight and ambient room light. The VRD is considered a preferred candidate for use in surgical displays due to its combination of high resolution, high contrast and brightness. Additional tests show high potential for the VRD to be used as a display technology for patients with low vision.

EyeTap[edit]

The EyeTap (also known as Generation-2 Glass[57]) captures rays of light that would otherwise pass through the center of the lens of the eye of the wearer, and substitutes synthetic computer-controlled light for each ray of real light.

The Generation-4 Glass[57] (Laser EyeTap) is similar to the VRD (i.e. it uses a computer-controlled laser light source) except that it also has infinite depth of focus and causes the eye itself to, in effect, function as both a camera and a display by way of exact alignment with the eye and resynthesis (in laser light) of rays of light entering the eye.[58]

Handheld[edit]

A handheld display employs a small display that fits in a user's hand. All handheld AR solutions to date opt for video see-through. Initially handheld AR employed fiducial markers,[59] and later GPS units and MEMS sensors such as digital compasses and six-degrees-of-freedom accelerometer-gyroscope units. Today SLAM markerless trackers such as PTAM are starting to come into use. Handheld display AR promises to be the first commercial success for AR technologies. The two main advantages of handheld AR are the portable nature of handheld devices and the ubiquitous nature of camera phones. The disadvantages are the physical constraints of the user having to hold the handheld device out in front of them at all times, as well as the distorting effect of classically wide-angled mobile phone cameras when compared to the real world as viewed through the eye.[60] The issues arising from the user having to hold the handheld device (manipulability) and perceiving the visualisation correctly (comprehensibility) have been summarised into the HARUS usability questionnaire.[61]

Games such as Pokémon Go and Ingress utilize an Image Linked Map (ILM) interface, where approved geotagged locations appear on a stylized map for the user to interact with.[62]

Spatial[edit]

Spatial augmented reality (SAR) augments real-world objects and scenes without the use of special displays such as monitors, head-mounted displays or hand-held devices. SAR makes use of digital projectors to display graphical information onto physical objects. The key difference in SAR is that the display is separated from the users of the system. Because the displays are not associated with each user, SAR scales naturally up to groups of users, thus allowing for collocated collaboration between users.

Examples include shader lamps, mobile projectors, virtual tables, and smart projectors. Shader lamps mimic and augment reality by projecting imagery onto neutral objects, providing the opportunity to enhance the object's appearance using a simple unit: a projector, camera, and sensor.

Other applications include table and wall projections. One innovation, the Extended Virtual Table, separates the virtual from the real by including beam-splitter mirrors attached to the ceiling at an adjustable angle.[63] Virtual showcases, which employ beam-splitter mirrors together with multiple graphics displays, provide an interactive means of simultaneously engaging with the virtual and the real. Many more implementations and configurations make spatial augmented reality display an increasingly attractive interactive alternative.

An SAR system can display on any number of surfaces of an indoor setting at once. SAR supports both a graphical visualization and passive haptic sensation for the end users. Users are able to touch physical objects in a process that provides passive haptic sensation.[11][64][65][66]

Tracking[edit]

Modern mobile augmented-reality systems use one or more of the following tracking technologies: digital cameras and/or other optical sensors, accelerometers, GPS, gyroscopes, solid state compasses, RFID and wireless sensors. These technologies offer varying levels of accuracy and precision. Most important is the position and orientation of the user's head. Tracking the user's hand(s) or a handheld input device can provide a 6DOF interaction technique.[67][68]
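
As a sketch of how head orientation can be estimated at the sensor level, the complementary filter below blends gyroscope integration (fast but drifting) with the gravity direction from an accelerometer (noisy but drift-free). The filter constant and sample values are hypothetical, not taken from this article.

    # Complementary filter sketch: fuse gyroscope and accelerometer readings
    # into a pitch estimate. All sample values are hypothetical.
    import math

    def complementary_filter(pitch, gyro_rate, accel_y, accel_z, dt, alpha=0.98):
        """Return an updated pitch estimate (radians) from one sample pair."""
        gyro_pitch = pitch + gyro_rate * dt         # integrate angular rate
        accel_pitch = math.atan2(accel_y, accel_z)  # tilt from gravity vector
        return alpha * gyro_pitch + (1 - alpha) * accel_pitch

    pitch = 0.0
    for gyro_rate, ay, az in [(0.10, 0.05, 9.81), (0.12, 0.07, 9.80)]:
        pitch = complementary_filter(pitch, gyro_rate, ay, az, dt=0.01)
        print(f"estimated pitch: {math.degrees(pitch):.3f} deg")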

Networking[edit]

Mobile augmented reality applications are gaining popularity due to the wide adoption of mobile and especially wearable devices. However, they often rely on computationally intensive computer vision algorithms with extreme latency requirements. To compensate for the lack of computing power, offloading data processing to a distant machine is often desired. Computation offloading introduces new constraints in applications, especially in terms of latency and bandwidth. Although there are a plethora of real-time multimedia transport protocols, there is a need for support from network infrastructure as well.[69]
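
As an illustration of computation offloading, the sketch below compresses a camera frame, posts it to a remote vision service, and measures the round-trip latency. The endpoint URL is a hypothetical placeholder, and production systems typically favor lower-latency transports (UDP/RTP, WebRTC) over plain HTTP.

    # Offloading sketch: ship one JPEG-compressed frame to a remote server
    # and time the round trip. The endpoint is a hypothetical placeholder.
    import time
    import cv2
    import requests

    SERVER = "http://ar-offload.example.com/detect"  # hypothetical endpoint

    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    cap.release()
    if ok:
        _, jpeg = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 70])
        t0 = time.monotonic()
        resp = requests.post(SERVER, data=jpeg.tobytes(),
                             headers={"Content-Type": "image/jpeg"}, timeout=1.0)
        latency_ms = (time.monotonic() - t0) * 1000
        print(f"offload round trip: {latency_ms:.1f} ms (HTTP {resp.status_code})")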

Input devices[edit]

Techniques include speech recognition systems that translate a user's spoken words into computer instructions, and gesture recognition systems that interpret a user's body movements by visual detection or from sensors embedded in a peripheral device such as a wand, stylus, pointer, glove or other body wear.[70][71][72][73] Products which are trying to serve as a controller of AR headsets include Wave by Seebright Inc. and Nimble by Intugine Technologies.

Computer[edit]

The computer analyzes the sensed visual and other data to synthesize and position augmentations. Computers are responsible for the graphics that go with augmented reality. Augmented reality uses computer-generated imagery, which has a striking effect on the way the real world is shown. As technology and computers improve, augmented reality will drastically change our perspective of the real world.[74] According to Time, in about 15-20 years augmented reality and virtual reality are predicted to become the primary mode of computer interaction.[75] As computers progress, augmented reality will become more flexible and more common in society. Computers are the core of augmented reality.

The computer receives data from the sensors, which determine the relative position of an object's surface; this translates to an input to the computer, which then outputs to the users by adding something that would otherwise not be there.[76] The computer comprises memory and a processor. The computer takes the scanned environment, generates images or a video, and puts it on the receiver for the observer to see.[77] The fixed marks on an object's surface are stored in the memory of the computer, and the computer also draws from its memory to present images realistically to the onlooker. A well-known example of this is the Pepsi Max AR bus shelter.[78]

Software and algorithms[edit]

A key measure of AR systems is how realistically they integrate augmentations with the real world. The software must derive real world coordinates, independent from the camera, from camera images. That process is called image registration, and uses different methods of computer vision, mostly related to video tracking.[79][80] Many computer vision methods of augmented reality are inherited from visual odometry.

Usually those methods consist of two parts. The first stage is to detect interest points, fiducial markers or optical flow in the camera images. This step can use feature detection methods like corner detection, blob detection, edge detection or thresholding, and other image processing methods.[81][82] The second stage restores a real world coordinate system from the data obtained in the first stage. Some methods assume objects with known geometry (or fiducial markers) are present in the scene. In some of those cases the scene's 3D structure should be precalculated. If part of the scene is unknown, simultaneous localization and mapping (SLAM) can map relative positions. If no information about scene geometry is available, structure from motion methods like bundle adjustment are used. Mathematical methods used in the second stage include projective (epipolar) geometry, geometric algebra, rotation representation with exponential map, Kalman and particle filters, nonlinear optimization, and robust statistics.[citation needed]
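
A minimal sketch of this two-stage pipeline using OpenCV: stage one detects ORB interest points in a known reference image and the camera frame, and stage two recovers a planar homography relating them, with RANSAC providing the robustness mentioned above. The file names are placeholders.

    # Two-stage registration sketch: feature detection, then geometry recovery.
    import cv2
    import numpy as np

    reference = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)  # placeholder
    frame = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)   # placeholder

    # Stage 1: detect interest points and compute descriptors.
    orb = cv2.ORB_create(nfeatures=1000)
    kp_ref, des_ref = orb.detectAndCompute(reference, None)
    kp_frm, des_frm = orb.detectAndCompute(frame, None)

    # Match descriptors and keep the strongest correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_ref, des_frm), key=lambda m: m.distance)[:50]
    src = np.float32([kp_ref[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_frm[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Stage 2: recover the transform; RANSAC discards mismatched features.
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    print("reference-to-frame homography:\n", H)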

Augmented Reality Markup Language (ARML) is a data standard developed within the Open Geospatial Consortium (OGC),[83] which consists of XML grammar to describe the location and appearance of virtual objects in the scene, as well as ECMAScript bindings to allow dynamic access to properties of virtual objects.
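
The normative ARML 2.0 grammar is defined by the OGC; purely as a hypothetical illustration of the idea, an XML description tying a virtual model to a geographic anchor might be assembled as follows (the tag names are illustrative only, not conforming ARML):

    # Hypothetical ARML-style fragment: a virtual model at a geographic anchor.
    # Tag names are illustrative; see the OGC ARML 2.0 standard for the
    # normative grammar.
    import xml.etree.ElementTree as ET

    feature = ET.Element("Feature", id="poi-1")
    anchor = ET.SubElement(feature, "GeoAnchor")
    ET.SubElement(anchor, "point").text = "48.2082 16.3738"  # lat lon
    model = ET.SubElement(feature, "Model")
    ET.SubElement(model, "href").text = "https://example.com/sign.glb"
    print(ET.tostring(feature, encoding="unicode"))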

To enable rapid development of augmented reality applications, some software development kits (SDKs) have emerged.[84][85] A few SDKs such as CloudRidAR[86] leverage cloud computing for performance improvement. AR SDKs are offered by Vuforia,[87] ARToolKit, Catchoom CraftAR,[88] Mobinett AR,[89] Wikitude,[90] Blippar,[91] Layar,[92] Meta,[93][94] and ARLab.[95]

Possible applications[edit]

Augmented reality has been explored for many applications.[96] Since the 1970s and early 1980s, Steve Mann has developed technologies meant for everyday use i.e. "horizontal" across all applications rather than a specific "vertical" market. Examples include Mann's "EyeTap Digital Eye Glass", a general-purpose seeing aid that does dynamic-range management (HDR vision) and overlays, underlays, simultaneous augmentation and diminishment (e.g. diminishing the electric arc while looking at a welding torch).[97]

Literature[edit]

The first description of AR as it is known today was in Virtual Light, the 1994 novel by William Gibson. In 2011, AR was blended with poetry by ni ka from Sekai Camera in Tokyo, Japan. The prose of these AR poems comes from Paul Celan's "Die Niemandsrose", expressing the aftermath of the 2011 Tōhoku earthquake and tsunami.[98][99][100]

Archaeology[edit]

AR has been used to aid archaeological research. By augmenting archaeological features onto the modern landscape, AR allows archaeologists to formulate possible site configurations from extant structures.[101] Computer-generated models of ruins, buildings, landscapes and even ancient people have been incorporated into early archaeological AR applications.[102][103][104] For example, implementing a system like VITA (Visual Interaction Tool for Archaeology) allows users to imagine and investigate instant excavation results without leaving their home. Each user can collaborate by mutually "navigating, searching, and viewing data." Hrvoje Benko, a researcher in the computer science department at Columbia University, points out that these particular systems and others like them can provide "3D panoramic images and 3D models of the site itself at different excavation stages." All the while, such a system organizes much of the data in a collaborative way that is easy to use. Collaborative AR systems supply multimodal interactions that combine the real world with virtual images of both environments.[105]

Architecture[edit]

AR can aid in visualizing building projects. Computer-generated images of a structure can be superimposed into a real life local view of a property before the physical building is constructed there; this was demonstrated publicly by Trimble Navigation in 2004. AR can also be employed within an architect's workspace, rendering animated 3D visualizations of their 2D drawings. Architecture sight-seeing can be enhanced with AR applications, allowing users viewing a building's exterior to virtually see through its walls, viewing its interior objects and layout.[106][107][108]

With the continual improvements to GPS accuracy, businesses are able to use augmented reality to visualize georeferenced models of construction sites, underground structures, cables and pipes using mobile devices.[109] Augmented reality is applied to present new projects, to solve on-site construction challenges, and to enhance promotional materials.[110] Examples include the Daqri Smart Helmet, an Android-powered hard hat used to create augmented reality for the industrial worker, including visual instructions, real-time alerts, and 3D mapping.[111]

Following the Christchurch earthquake, the University of Canterbury released CityViewAR,[112] which enabled city planners and engineers to visualize buildings that had been destroyed.[113] Not only did this provide planners with tools to reference the previous cityscape, but it also served as a reminder of the magnitude of the devastation caused, as entire buildings had been demolished.

Visual art[edit]

AR applied in the visual arts allows objects or places to trigger artistic multidimensional experiences and interpretations of reality.

AR technology aided the development of eye tracking technology[114] to translate a disabled person's eye movements into drawings on a screen.[115]

By 2011, augmenting people, objects, and landscapes had become a recognized art style. In 2011, artist Amir Baradaran's work Frenchising the Mona Lisa overlaid video on Da Vinci's painting using an AR mobile application called Junaio.[116] The app allowed the user to train a smartphone on Da Vinci's Mona Lisa and watch the woman loosen her hair and wrap a French flag around her visage in the form of an Islamic hijab. The wearing of a hijab was controversial in France at the time.[117]

Commerce[edit]

AR is used to integrate print and video marketing. Printed marketing material can be designed with certain "trigger" images that, when scanned by an AR-enabled device using image recognition, activate a video version of the promotional material. A major difference between augmented reality and straightforward image recognition is that multiple media can be overlaid at the same time in the view screen, such as social media share buttons, in-page video, and even audio and 3D objects. Traditional print-only publications are using augmented reality to connect many different types of media.[118][119][120][121][122]

AR can enhance product previews such as allowing a customer to view what's inside a product's packaging without opening it.[123] AR can also be used as an aid in selecting products from a catalog or through a kiosk. Scanned images of products can activate views of additional content such as customization options and additional images of the product in its use.[124][125]

By 2010, virtual dressing rooms had been developed for e-commerce.[126]

In 2012, a mint used AR techniques to market a commemorative coin for Aruba. The coin itself was used as an AR trigger, and when held in front of an AR-enabled device it revealed additional objects and layers of information that were not visible without the device.[127][128]

In 2013, L'Oreal Paris used CrowdOptic technology to create an augmented reality experience at the seventh annual Luminato Festival in Toronto, Canada.[38]

In 2014, L'Oreal brought the AR experience to a personal level with their "Makeup Genius" app. It allowed users to try out make-up and beauty styles via a mobile device.[129]

In 2015, the Bulgarian startup iGreet developed its own AR technology and used it to make the first premade "live" greeting card. A traditional paper card was augmented with digital content which was revealed by using the iGreet app.[130][131]

In 2015, the Luxembourg startup itondo.com[132] launched an AR app for the art market that lets art buyers accurately visualize 2D artworks to scale on their own walls before they buy.

Education[edit]

In educational settings, AR has been used to complement a standard curriculum. Text, graphics, video, and audio may be superimposed into a student's real-time environment. Textbooks, flashcards and other educational reading material may contain embedded "markers" or triggers that, when scanned by an AR device, produce supplementary information for the student rendered in a multimedia format.[133][134][135] This makes AR a good alternative method for presenting information, and Multimedia Learning Theory can be applied.[136]

As AR evolves, students can participate interactively and engage with knowledge more authentically. Instead of remaining passive recipients, students can become active learners, able to interact with their learning environment. Computer-generated simulations of historical events allow students to explore and learn the details of each significant area of the event site.[137]

In higher education, Construct3D, a Studierstube system, allows students to learn mechanical engineering concepts, math or geometry.[138] Chemistry AR apps allow students to visualize and interact with the spatial structure of a molecule using a marker object held in the hand.[139] Anatomy students can visualize different systems of the human body in three dimensions.[140]

Augmented reality technology enhances remote collaboration, allowing students and instructors in different locales to interact by sharing a common virtual learning environment populated by virtual objects and learning materials.[141]

Primary school children learn easily from interactive experiences. Astronomical constellations and the movements of objects in the solar system were oriented in 3D and overlaid in the direction the device was held, and expanded with supplemental video information. Paper-based science book illustrations could seem to come alive as video without requiring the child to navigate to web-based materials.

For teaching anatomy, teachers could use devices to superimpose hidden anatomical structures like bones and organs on any person in the classroom.[citation needed]

While some educational apps were available for AR in 2016, it was not broadly used. Apps that leverage augmented reality to aid learning included SkyView for studying astronomy,[142] AR Circuits for building simple electric circuits,[143] and SketchAr for drawing.[144]

AR would also be a way for parents and teachers to achieve their goals for modern education, which might include providing a more individualized and flexible learning, making closer connections between what is taught at school and the real world, and helping students to become more engaged in their own learning.[145]

A recent study compared the functionalities of augmented reality tools with their potential for education.[146]

Emergency management/search and rescue[edit]

Augmented reality systems are used in public safety situations, from super storms to suspects at large.

As early as 2009, two articles from Emergency Management magazine discussed the power of this technology for emergency management. The first was "Augmented Reality--Emerging Technology for Emergency Management" by Gerald Baron.[147] Per Adam Crowe: "Technologies like augmented reality (ex: Google Glass) and the growing expectation of the public will continue to force professional emergency managers to radically shift when, where, and how technology is deployed before, during, and after disasters."[148]

Another early example was a search aircraft looking for a lost hiker in rugged mountain terrain. Augmented reality systems provided aerial camera operators with a geographic awareness of forest road names and locations blended with the camera video. The camera operator was better able to search for the hiker knowing the geographic context of the camera image. Once located, the operator could more efficiently direct rescuers to the hiker's location because the geographic position and reference landmarks were clearly labeled.[149]

Video games[edit]

See also: List of augmented reality software § Games

The gaming industry embraced AR technology. A number of games were developed for prepared indoor environments, such as AR air hockey, Titans of Space, collaborative combat against virtual enemies, and AR-enhanced pool table games.[150][151][152]

Augmented reality allowed video game players to experience digital game play in a real-world environment. Companies and platforms like Niantic and Proxy42 emerged as major augmented reality gaming creators.[153][154] Niantic is notable for releasing the record-breaking game Pokémon Go.[155] Disney has partnered with Lenovo to create the augmented reality game Star Wars: Jedi Challenges, which works with a Lenovo Mirage AR headset, a tracking sensor and a Lightsaber controller, scheduled to launch in December 2017.[156]

Industrial design[edit]

Main article: Industrial Augmented Reality

AR allows industrial designers to experience a product's design and operation before completion. Volkswagen has used AR for comparing calculated and actual crash test imagery.[157] AR has been used to visualize and modify car body structure and engine layout. It has also been used to compare digital mock-ups with physical mock-ups for finding discrepancies between them.[158][159]

Medical[edit]

Since 2005, a device called a near-infrared vein finder, which films subcutaneous veins and then processes and projects the image of the veins onto the skin, has been used to locate veins.[160][161]

AR provides surgeons with patient monitoring data in the style of a fighter pilot's heads-up display, and allows patient imaging records, including functional videos, to be accessed and overlaid. Examples include a virtual X-ray view based on prior tomography or on real-time images from ultrasound and confocal microscopy probes,[162] visualizing the position of a tumor in the video of an endoscope,[163] or radiation exposure risks from X-ray imaging devices.[164][165] AR can enhance viewing a fetus inside a mother's womb.[166] Siemens, Karl Storz and IRCAD have developed a system for laparoscopic liver surgery that uses AR to view sub-surface tumors and vessels.[167] AR has been used for cockroach phobia treatment.[168] Patients wearing augmented reality glasses can be reminded to take medications.[169] Virtual reality has been seen as promising in the medical field since the 1990s.[170] Augmented reality can be very helpful in the medical field: it can provide crucial information to a doctor or surgeon without requiring them to take their eyes off the patient. On April 30, 2015, Microsoft announced the Microsoft HoloLens, its first venture into augmented reality. The HoloLens has advanced through the years, and it has been used to project holograms for near-infrared fluorescence-based image-guided surgery.[171] As augmented reality advances, it is increasingly implemented in medical uses. Augmented reality and other computer-based utilities are being used today to help train medical professionals.[172] The creation of Google Glass and the Microsoft HoloLens has helped push augmented reality into medical education.

Spatial immersion and interaction[edit]

Augmented reality applications, running on handheld devices utilized as virtual reality headsets, can also digitalize human presence in space and provide a computer generated model of them, in a virtual space where they can interact and perform various actions. Such capabilities are demonstrated by "Project Anywhere", developed by a postgraduate student at ETH Zurich, which was dubbed as an "out-of-body experience".[173][174][175]

Flight training[edit]

Building on decades of perceptual-motor research in experimental psychology, researchers at the Aviation Research Laboratory of the University of Illinois at Urbana-Champaign used augmented reality in the form of a flight path in the sky to teach flight students how to land a flight simulator. An adaptive augmented schedule in which students were shown the augmentation only when they departed from the flight path proved to be a more effective training intervention than a constant schedule.[176][177] Flight students taught to land in the simulator with the adaptive augmentation learned to land a light aircraft more quickly than students with the same amount of landing training in the simulator but with constant augmentation or without any augmentation.[176]

Military[edit]

An interesting early application of AR occurred when Rockwell International created video map overlays of satellite and orbital debris tracks to aid in space observations at Air Force Maui Optical System. In their 1993 paper "Debris Correlation Using the Rockwell WorldView System" the authors describe the use of map overlays applied to video from space surveillance telescopes. The map overlays indicated the trajectories of various objects in geographic coordinates. This allowed telescope operators to identify satellites, and also to identify and catalog potentially dangerous space debris.[178]

Starting in 2003 the US Army integrated the SmartCam3D augmented reality system into the Shadow Unmanned Aerial System to aid sensor operators using telescopic cameras to locate people or points of interest. The system combined both fixed geographic information including street names, points of interest, airports, and railroads with live video from the camera system. The system offered a "picture in picture" mode that allows the system to show a synthetic view of the area surrounding the camera's field of view. This helps solve a problem in which the field of view is so narrow that it excludes important context, as if "looking through a soda straw". The system displays real-time friend/foe/neutral location markers blended with live video, providing the operator with improved situational awareness.

Researchers at USAF Research Lab (Calhoun, Draper et al.) found an approximately two-fold increase in the speed at which UAV sensor operators found points of interest using this technology.[179] This ability to maintain geographic awareness quantitatively enhances mission efficiency. The system is in use on the US Army RQ-7 Shadow and the MQ-1C Gray Eagle Unmanned Aerial Systems.

In combat, AR can serve as a networked communication system that renders useful battlefield data onto a soldier's goggles in real time. From the soldier's viewpoint, people and various objects can be marked with special indicators to warn of potential dangers. Virtual maps and 360° view camera imaging can also be rendered to aid a soldier's navigation and battlefield perspective, and this can be transmitted to military leaders at a remote command center.[180]

Navigation[edit]

See also: Automotive navigation system

The NASA X-38 was flown using a Hybrid Synthetic Vision system that overlaid map data on video to provide enhanced navigation for the spacecraft during flight tests from 1998 to 2002. It used the LandForm software and was useful for times of limited visibility, including an instance when the video camera window frosted over leaving astronauts to rely on the map overlays.[181] The LandForm software was also test flown at the Army Yuma Proving Ground in 1999. In the photo at right one can see the map markers indicating runways, air traffic control tower, taxiways, and hangars overlaid on the video.[182]

AR can augment the effectiveness of navigation devices. Information can be displayed on an automobile's windshield indicating destination directions and meter, weather, terrain, road conditions and traffic information as well as alerts to potential hazards in their path.[183][184][185] Aboard maritime vessels, AR can allow bridge watch-standers to continuously monitor important information such as a ship's heading and speed while moving throughout the bridge or performing other tasks.[186]

Workplace[edit]

Augmented reality may have a positive impact on work collaboration, as people may be inclined to interact more actively with their learning environment. It may also encourage tacit knowledge renewal, which makes firms more competitive. AR was used to facilitate collaboration among distributed team members via conferences with local and virtual participants. AR tasks included brainstorming and discussion meetings utilizing common visualization via touch screen tables, interactive digital whiteboards, shared design spaces, and distributed control rooms.[187][188][189]

Complex tasks such as assembly, maintenance, and surgery were simplified by inserting additional information into the field of view. For example, labels were displayed on parts of a system to clarify operating instructions for a mechanic performing maintenance on a system.[190][191] Assembly lines benefited from the usage of AR. In addition to Boeing, BMW and Volkswagen were known for incorporating this technology into assembly lines for monitoring process improvements.[192][193][194] Big machines are difficult to maintain because of their multiple layers and structures. AR permits people to look through the machine as if with X-ray vision, pointing them to the problem right away.[195]

The new wave of professionals, the Millennial workforce, demands more efficient knowledge sharing solutions and easier access to rapidly growing knowledge bases. Augmented reality offers a solution to that.[196]

Broadcast and live events[edit]

Weather visualizations were the first application of augmented reality to television. It has now become common in weathercasting to display full motion video of images captured in real-time from multiple cameras and other imaging devices. Coupled with 3D graphics symbols and mapped to a common virtual geospace model, these animated visualizations constitute the first true application of AR to TV.

AR has become common in sports telecasting. Sports and entertainment venues are provided with see-through and overlay augmentation through tracked camera feeds for enhanced viewing by the audience. Examples include the yellow "first down" line seen in television broadcasts of American football games showing the line the offensive team must cross to receive a first down. AR is also used in association with football and other sporting events to show commercial advertisements overlaid onto the view of the playing area. Sections of rugby fields and cricket pitches also display sponsored images. Swimming telecasts often add a line across the lanes to indicate the position of the current record holder as a race proceeds to allow viewers to compare the current race to the best performance. Other examples include hockey puck tracking and annotations of racing car performance and snooker ball trajectories.[79][197]
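
As an illustration of how such an overlay can be computed, the sketch below projects a first-down line from field coordinates into image pixels through a homography and draws it over the frame; the homography values, frame, and line position are hypothetical placeholders.

    # Broadcast-overlay sketch: map a yard line from field coordinates (yards)
    # into image pixels via a field-to-image homography, then draw it.
    import cv2
    import numpy as np

    H = np.array([[12.0, 2.0, 300.0],   # placeholder homography from a
                  [0.5, -9.0, 600.0],   # (hypothetical) camera calibration
                  [0.0, 0.002, 1.0]])

    frame = np.zeros((720, 1280, 3), np.uint8)                # stand-in frame
    line_field = np.float32([[[40.0, 0.0]], [[40.0, 53.3]]])  # 40-yard line
    line_img = cv2.perspectiveTransform(line_field, H)
    p1 = tuple(int(v) for v in line_img[0, 0])
    p2 = tuple(int(v) for v in line_img[1, 0])
    cv2.line(frame, p1, p2, (0, 255, 255), 5)                 # yellow overlay
    cv2.imwrite("first_down_overlay.png", frame)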

Augmented reality for Next Generation TV allows viewers to interact with the programs they are watching. They can place objects into an existing program and interact with them, such as moving them around. Objects include avatars of real persons in real time who are also watching the same program.

AR has been used to enhance concert and theater performances. For example, artists allow listeners to augment their listening experience by adding their performance to that of other bands/groups of users.[198][199][200]

Tourism and sightseeing[edit]

Travelers may use AR to access real-time informational displays regarding a location, its features, and comments or content provided by previous visitors. Advanced AR applications include simulations of historical events, places, and objects rendered into the landscape.[201][202][203]

AR applications linked to geographic locations present location information by audio, announcing features of interest at a particular site as they become visible to the user.[204][205][206]

Companies can use AR to attract tourists to particular areas that they may not be familiar with by name. Tourists will be able to experience beautiful landscapes in first person with the use of AR devices. Companies like Phocuswright plan to use such technology to expose the lesser-known but beautiful areas of the planet, and in turn, increase tourism. Other companies such as Matoke Tours have already developed an application where the user can see 360 degrees from several different places in Uganda. Matoke Tours and Phocuswright can display their apps on virtual reality headsets like the Samsung Gear VR and Oculus Rift.[207]

Translation[edit]

AR systems such as Word Lens can interpret the foreign text on signs and menus and, in a user's augmented view, re-display the text in the user's language. Spoken words of a foreign language can be translated and displayed in a user's view as printed subtitles.[208][209][210]
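
A minimal sketch of such a pipeline: recognize the foreign text in a frame with OCR, translate it, and draw the translation back over the image. Tesseract stands in for the OCR stage here, and translate() is a hypothetical placeholder for any translation service.

    # Word Lens-style sketch: OCR, translate, re-display. Assumes the Tesseract
    # OCR engine plus the pytesseract wrapper are installed; translate() is a
    # hypothetical stand-in for a real translation API.
    import cv2
    import pytesseract

    def translate(text, target="en"):
        # Placeholder: swap in a real translation service here.
        return text.upper()

    frame = cv2.imread("menu_photo.png")  # placeholder image
    foreign_text = pytesseract.image_to_string(frame, lang="spa").strip()
    cv2.putText(frame, translate(foreign_text), (20, 40),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imwrite("menu_translated.png", frame)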

Music[edit]

It has been suggested that augmented reality may be used in new methods of music production, mixing, control and visualization.[211][212][213][214]

A tool for 3D music creation in clubs that, in addition to regular sound mixing features, allows the DJ to play dozens of sound samples, placed anywhere in 3D space, has been conceptualized.[215]

Leeds College of Music teams have developed an AR app that can be used with Audient desks and allow students to use their smartphone or tablet to put layers of information or interactivity on top of an Audient mixing desk.[216]

ARmony is a software package that makes use of augmented reality to help people to learn an instrument.[217]

In a proof-of-concept project Ian Sterling, interaction design student at California College of the Arts, and software engineer Swaroop Pal demonstrated a HoloLens app whose primary purpose is to provide a 3D spatial UI for cross-platform devices — the Android Music Player app and Arduino-controlled Fan and Light — and also allow interaction using gaze and gesture control.[218]

Image captions:
  • NASA X-38 display showing video map overlays, including runways and obstacles, during a flight test in 2000
  • Vuzix AR3000 augmented reality smartglasses
  • An example of an AR code containing a QR code
  • The AR-Icon can be used as a marker on print as well as on online media; it signals the viewer that digital content is behind it, viewable with a smartphone or tablet
  • Augment SDK offers brands and retailers the capability to personalize their customers' shopping experience by embedding AR product visualization into their eCommerce platforms
  • LandForm video map overlay marking runways, a road, and buildings during a 1999 helicopter flight test

This article is about wearable computing. For window glass with variable opacity, see smart glass. For the Xbox control application, see Xbox SmartGlass.

Smartglasses or smart glasses are wearable computer glasses that add information alongside or to what the wearer sees.[1][2][3][4][5][6][7][8] Alternatively, smartglasses are sometimes defined as wearable computer glasses that are able to change their optical properties at runtime; smart sunglasses which are programmed to change tint by electronic means are an example of the latter type.[9] Superimposing information onto a field of view is achieved through an optical head-mounted display (OHMD) or embedded wireless glasses with a transparent heads-up display (HUD) or augmented reality (AR) overlay that has the capability of reflecting projected digital images as well as allowing the user to see through it, or see better with it. While early models can perform basic tasks, such as serving as a front-end display for a remote system, as in the case of smartglasses utilizing cellular technology or Wi-Fi, modern smart glasses are effectively wearable computers which can run self-contained mobile apps. Some are hands-free and can communicate with the Internet via natural language voice commands, while others use touch buttons.[10][11][12][13][14][15][16]

Like other computers, smartglasses may collect information from internal or external sensors. They may control or retrieve data from other instruments or computers, and may support wireless technologies like Bluetooth, Wi-Fi, and GPS. A smaller number of models run a mobile operating system and function as portable media players to send audio and video files to the user via a Bluetooth or WiFi headset.[17][18] Some smartglasses models also feature full lifelogging and activity tracker capability.[19][20][21][22]

Such smartglasses devices may also have all the features of a smartphone.[23][24] Some also have activity tracker functionality (also known as a "fitness tracker") as seen in some GPS watches.[8][25]

Features and applications[edit]

As with other lifelogging and activity tracking devices, the GPS tracking unit and digital camera of some smartglasses can be used to record historical data. For example, after the completion of a workout, data can be uploaded onto a computer or online to create a log of exercise activities for analysis. Some smartglasses can serve as full GPS navigation devices, displaying maps and current coordinates. Users can "mark" their current location and then edit the entry's name and coordinates, which enables navigation to those new coordinates.[26][27]

Although some smartglasses models manufactured in the 21st century are completely functional as standalone products, most manufacturers recommend or even require that consumers purchase mobile phone handsets that run the same operating system so that the two devices can be synchronized for additional and enhanced functionality. The smartglasses can work as an extension, for head-up display (HUD) or remote control of the phone and alert the user to communication data such as calls, SMS messages, emails, and calendar invites.[28]

Security applications[edit]

Smart glasses could be used as a body camera. In 2018, Chinese police in Zhengzhou were using smart glasses to take photos which are compared against a government database using facial recognition to identify suspects, retrieve an address, and track people moving beyond their home areas.[29]

Healthcare applications[edit]

Several proofs of concept for Google Glass have been proposed in healthcare. In July 2013, Lucien Engelen started research on the usability and impact of Google Glass in health care. As of August 2013, Engelen, who is based at Singularity University and, in Europe, at Radboud University Medical Center,[30] was the first healthcare professional in Europe to participate in the Glass Explorer program.[31] His research on Google Glass (starting August 9, 2013) was conducted in operating rooms, ambulances, a trauma helicopter, general practice, and home care, as well as in public transportation for visually or physically impaired people. The research consisted of taking pictures, streaming video to other locations, dictating operative logs, and tele-consultation through Hangouts. Engelen documented his findings in blogs,[32] videos,[33] pictures, on Twitter,[34] and on Google+;[35] the research is still ongoing.

Key findings of Engelen's research included:

  1. The quality of pictures and video is usable for healthcare education, reference, and remote consultation. The camera needs to be tilted to a different angle[36] for most operative procedures.
  2. Tele-consultation is possible—depending on the available bandwidth—during operative procedures.[37]
  3. A stabilizer should be added to the video function to prevent choppy transmission when a surgeon looks at screens or colleagues.
  4. Battery life can be easily extended with the use of an external battery.
  5. Controlling the device and/or programs from another device is needed for some features because of the sterile environment.
  6. Text-to-speech ("Take a Note" to Evernote) exhibited a correction rate of 60 percent, without the addition of a medical thesaurus.
  7. A protocol or checklist displayed on the screen of Google Glass can be helpful during procedures.[citation needed]

Dr. Phil Haslam and Dr. Sebastian Mafeld demonstrated the first concept for Google Glass in the field of interventional radiology. They demonstrated the manner in which the concept of Google Glass could assist a liver biopsy and fistulaplasty, and the pair stated that Google Glass has the potential to improve patient safety, operator comfort, and procedure efficiency in the field of interventional radiology.[38] In June 2013, surgeon Dr. Rafael Grossmann was the first person to integrate Google Glass into the operating theater, when he wore the device during a PEG (percutaneous endoscopic gastrostomy) procedure.[39] In August 2013, Google Glass was also used at Wexner Medical Center at Ohio State University. Surgeon Dr. Christopher Kaeding used Google Glass to consult with a colleague in a distant part of Columbus, Ohio. A group of students at The Ohio State University College of Medicine also observed the operation on their laptop computers. Following the procedure, Kaeding stated, "To be honest, once we got into the surgery, I often forgot the device was there. It just seemed very intuitive and fit seamlessly."[40]

On November 16, 2013, in Santiago, Chile, a maxillofacial team led by Dr. Antonio Marino conducted the first orthognathic surgery assisted by Google Glass in Latin America, interacting with the device and working with simultaneous three-dimensional navigation. The surgical team was interviewed by ADN Radio and the newspaper LUN. In January 2014, orthopedic surgeon Selene G. Parekh conducted foot and ankle surgery using Google Glass in Jaipur, India, which was broadcast live on the Internet via Google's website. The surgery was held during a three-day annual Indo-US conference attended by a team of experts from the US and co-organized by Dr. Ashish Sharma. Sharma said Google Glass allows a doctor to look at an X-ray or MRI without taking their eyes off the patient, and allows a doctor to communicate with a patient's family or friends during a procedure. "The image which the doctor sees through Google Glass will be broadcast on the internet. It's an amazing technology. Earlier, during surgeries, to show something to another doctor, we had to keep moving and the cameraman had to move as well to take different angles. During this, there are chances of infection. So in this technology, the image seen by the doctor using Google Glass will be seen by everyone throughout the world," he said.[citation needed]

In Australia, during January 2014, Melbourne tech startup Small World Social collaborated with the Australian Breastfeeding Association to create the first hands-free breastfeeding Google Glass application for new mothers.[41] The application, named the Google Glass Breastfeeding app trial, allows mothers to nurse their baby while viewing instructions about common breastfeeding issues (latching on, posture, etc.) or to call a lactation consultant via a secure Google Hangout, who can view the issue through the mother's Google Glass camera.[42] The trial was successfully concluded in Melbourne in April 2014, and 100% of participants were breastfeeding confidently.[43][44]

Display types[edit]

Various techniques have existed for see-through HMDs. Most of these techniques can be summarized into two main families: “Curved Mirror” (or Curved Combiner) based and “Waveguide” or "Light-guide" based. The curved mirror technique has been used by Vuzix in their Star 1200 product, by Olympus, and by Laster Technologies. Various waveguide techniques have existed for some time. These techniques include diffraction optics, holographic optics, polarized optics, reflective optics, and projection:

  • Diffractive waveguide – slanted diffraction grating elements (nanometric, 10⁻⁹ m). Nokia technique now licensed to Vuzix.
  • Holographic waveguide – 3 holographic optical elements (HOE) sandwiched together (RGB). Used by Sony and Konica Minolta.
  • Polarized waveguide – 6 multilayer coated (25-35) polarized reflectors in glass sandwich. Developed by Lumus.
  • Reflective waveguide – thick light guide with single semi reflective mirror. This technique is used by Epson in their Moverio product.
  • "Clear-Vu" reflective waveguide – thin monolithic molded plastic w/ surface reflectors and conventional coatings developed by Optinvent and used in their ORA product.
  • Switchable waveguide – developed by SBG Labs.
  • Virtual retinal display (VRD) – Also known as a retinal scan display (RSD) or retinal projector (RP), is a display technology that draws a raster display (like a television) directly onto the retina of the eye.

The Technical Illusions castAR uses a different technique with clear glass. The glasses have a projector, and the image is returned to the eye by a reflective surface.

Smart sunglasses[edit]

Smart sunglasses which are able to change their light filtering properties at runtime generally use liquid crystal technology. As lighting conditions change, for example when the user goes from indoors to outdoors, the brightness ratio also changes and can cause undesirable vision impairment. An attractive solution for overcoming this issue is to incorporate a dimming filter into smart sunglasses to control the amount of ambient light reaching the eye. An innovative liquid-crystal-based component for use in the lenses of smart sunglasses is PolarView[45] by LC-Tec.[46] PolarView offers analog dimming control, with the level of dimming adjusted by an applied drive voltage.
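
As a sketch of the control idea, assuming a filter whose transmission falls as drive voltage rises, ambient brightness can be mapped to a drive voltage with a clamped linear ramp; the sensor readings and voltage range below are hypothetical, not LC-Tec specifications.

    # Dimming-control sketch for smart sunglasses: map ambient brightness to a
    # liquid crystal drive voltage. All values are hypothetical.
    def drive_voltage(ambient_lux, v_min=0.0, v_max=5.0,
                      lux_dark=50.0, lux_bright=10000.0):
        """Clamped linear ramp from ambient brightness to drive voltage."""
        frac = (ambient_lux - lux_dark) / (lux_bright - lux_dark)
        frac = max(0.0, min(1.0, frac))
        return v_min + frac * (v_max - v_min)

    for lux in (30, 500, 20000):  # indoors, overcast outdoors, direct sun
        print(f"{lux:>6} lux -> {drive_voltage(lux):.2f} V")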

Another type of smart sunglasses uses adaptive polarization filtering (ADF). ADF-type smart sunglasses can change their polarization filtering characteristics at runtime. For example, ADF-type smart sunglasses can change from horizontal polarization filtering to vertical polarization filtering at the touch of a button.

The lenses of smart sunglasses can be manufactured out of multiple adaptive cells, so different parts of the lens can exhibit different optical properties. For example, the top of the lens can be electronically configured to have different polarization filter characteristics and different opacity than the lower part of the lens.[47]

Human Computer Interface (HCI) control input[edit]

Head-mounted displays are not designed to be workstations, and traditional input devices such as the keyboard and mouse do not suit the concept of smartglasses. Instead, Human Computer Interface (HCI) control inputs need to be methods that lend themselves to mobility and/or hands-free use. The wide body of human computer interface literature can be classified into three main categories of input: hand-held, touch, and touchless.[48]

Products[edit]

In development[edit]

  • AiR (Augmented interactive Reality) Platform by Atheer Labs – gesture-controlled mobile AR smartglasses for industrial applications
  • b.g. (Beyond Glasses) by Meganesuper Co., Ltd. – adjustable wearable display that can be attached to regular prescription glasses[50]
  • AMA Xperteye - Advanced Mobile Applications (AMA Studios) software for off the shelf customizable smart glasses interface[51]
  • castAR by Technical Illusions – wearable AR device for gaming
  • Mirama by Brilliantservice Co., Ltd.– gesture controlled augmented reality smartglasses
  • Meta Company "spaceglasses"
  • Vuzix "Vuzix M300 and Vuzix M3000, expected summer 2016"[52]
  • Magic Leap
  • NeckTec smart necklace – the universal B2B form-factor for AR glasses/wearable computers/communicators with extensive battery pack, retractable high-quality earphones, wide microphone array and occipital connection node for lite smart glasses.

Current[edit]

  • Airscouter – a virtual retinal display made by Brother Industries[53][54]
  • Epiphany Eyewear – smartglasses developed by Vergence Labs, a subsidiary of Snap Inc.
  • Epson Moverio BT-300 and Moverio Pro BT-2000/2200 – augmented reality smartglasses by Epson[55]
  • EyeTap – eye-mounted camera and head-up display (HUD)
  • Microsoft HoloLens – a pair of mixed reality smartglasses with a high-definition 3D optical head-mounted display and spatial sound, developed and manufactured by Microsoft using the Windows Holographic platform
  • Optinvent ORA-1 – eye-mounted camera and head-up display (HUD) wearable computing platform[56]
  • Pivothead SMART – "Simple Modular Application-Ready Technology", released in October 2014[57]
  • Recon Snow 2 – eye-mounted camera and head-up display (HUD) snow goggles[58]
  • Recon Jet – rugged eye-mounted camera and head-up display (HUD) for sporting[59]
  • SixthSense – wearable AR device
  • Spectacles – sunglasses with an embedded wearable camera by Snap Inc.
  • Vuzix – augmented reality glasses for 3D gaming, manufacturing training, and military applications
  • Google Glass – optical head-mounted display
  • SOLOS – smartglasses for cyclists
  • Everysight Raptor – smartglasses for cyclists by Everysight

Discontinued[edit]

  • Looxcie – ear-mounted streaming video camera[60]
  • BuBBles glasses – augmented reality glasses by BuBBles lab[61]
  • Golden-i – head-mounted computer

2010s[edit]

2012[edit]

  • On 17 April 2012, Oakley's CEO Colin Baden stated that the company has been working on a way to project information directly onto lenses since 1997, and has 600 patents related to the technology, many of which apply to optical specifications.[62]
  • On 18 June 2012, Canon announced the MR (Mixed Reality) System, which simultaneously merges virtual objects with the real world at full scale and in 3D. Unlike Google Glass, the MR System is aimed at professional use, with a price tag of $125,000 for the headset and accompanying system and $25,000 in expected annual maintenance.[63]

2013[edit]

  • At MWC 2013, the Japanese company Brilliant Service introduced Viking OS, an operating system for HMDs written in Objective-C that relies on gesture control as its primary form of input. It includes a facial-recognition system and was demonstrated on a revamped version of Vuzix STAR 1200XL glasses ($4,999), which combined a generic RGB camera with a PMD CamBoard nano depth camera.[64]
  • At Maker Faire 2013, the startup Technical Illusions unveiled its CastAR augmented reality glasses, which are well equipped for an AR experience. Infrared LEDs on the CastAR surface detect the motion of an interactive infrared wand, and a set of coils at its base detects objects loaded with RFID chips placed on top of it. The glasses use dual projectors at a frame rate of 120 Hz and a retro-reflective screen, providing a 3D image that can be seen from all directions by the user. A camera on top of the prototype glasses handles position detection, so the virtual image changes as the user walks around the CastAR surface.[65]
  • At the D11 Conference in 2013, the startup Atheer Labs unveiled its 3D augmented reality glasses prototype. The prototype includes binocular lenses, 3D image support, a rechargeable battery, WiFi, Bluetooth 4.0, an accelerometer, a gyroscope, and an IR sensor. Users can interact with the device through voice commands, and the mounted camera lets them interact naturally with gestures.[66]

2014[edit]

  • The Orlando Magic, Indiana Pacers, and other NBA teams used Google Glass on the CrowdOptic platform to enhance the in-game experience for fans.[67]
  • Rhode Island Hospital's Emergency Department became the first emergency department to experiment with Google Glass applications.[68]

2016[edit]

  • The Latvia-based company NeckTec announced its smart-necklace form factor, designed to ease the development of AR glasses by moving the processor and batteries into the necklace, making the facial frame light and elegant while extending the device's power and battery life. The smart necklace serves as a media player with large storage and as a Bluetooth headset for a smartphone, with convenient earphone storage, and has patented key elements for connecting AV glasses.

2018[edit]

  • Intel announced Vaunt, a set of smart glasses designed to look like conventional glasses; they are display-only, using retinal projection.[69]

Market structure[edit]

Analytics company IHS estimated that shipments of smart glasses could rise from just 50,000 units in 2012 to as many as 6.6 million units in 2016.[70] According to a survey of more than 4,600 U.S. adults conducted by Forrester Research, around 12 percent of respondents were willing to wear Google Glass or a similar device if it offered a service that piqued their interest.[71] Business Insider's BI Intelligence expected annual sales of 21 million Google Glass units by 2018.[72][73][74] Samsung and Microsoft were expected to develop their own versions of Google Glass within six months at a price range of $200 to $500. Samsung reportedly bought lenses from Lumus, a company based in Israel, while another source said Microsoft was negotiating with Vuzix.[75] In 2006, Apple filed a patent for its own HMD device.[76] In July 2013, APX Labs founder and CEO Brian Ballard stated that he knew of 25 to 30 hardware companies working on their own versions of smartglasses, some of which APX was working with.[77]

In fact, only about 150,000 AR glasses were shipped to customers worldwide in 2016, despite the strongly voiced view of leading tech-company CEOs that AR is entering everyday life. This points to serious technical limitations that prevent OEMs from offering a product balancing functionality against customers' reluctance to wear a bulky facial or head-mounted device every day. One solution could be to move the battery, processing power, and connectivity from the AR glasses frame to an external wire-connected device such as a smart necklace, which would allow AR glasses to serve as a display only – light, cheap, and stylish.

Public reception for commercial usage[edit]

Critical reception[edit]

In November 2012, Google Glass was recognized by Time Magazine as one of the "Best Inventions of the Year 2012", alongside inventions such as the Curiosity Rover.[78] After a visit to the University of Cambridge by Google's chairman Eric Schmidt in February 2013, Wolfson College professor[79] John Naughton praised Google Glass and compared it with the achievements of hardware and networking pioneer Douglas Engelbart. Naughton wrote that Engelbart believed that machines "should do what machines do best, thereby freeing up humans to do what they do best".[80] Lisa A. Goldstein, a freelance journalist who was born profoundly deaf, tested the product on behalf of people with disabilities and published a review on August 6, 2013. In her review, Goldstein states that Google Glass does not accommodate hearing aids and is not suitable for people who cannot understand speech. Goldstein also noted the limited options for customer support, as telephone contact was her only means of communication.[81]

In December 2013, David Datuna became the first artist to incorporate Google Glass into a contemporary work of art.[82][83] The artwork debuted at a private event at The New World Symphony in Miami Beach, Florida, US, and was moved to the Miami Design District for the public debut.[84] Over 1,500 people used Google Glass to experience Datuna's American flag from his "Viewpoint of Billions" series.[16]

After negative public reaction, the retail availability of Google Glass ended in January 2015, and the company moved to focus on business customers in 2017.

Privacy concerns[edit]

Google Glass's functionality and minimalist appearance have been compared to Steve Mann's EyeTap,[85] also known as "Glass" or "Digital Eye Glass", although Google Glass is a "Generation-1 Glass" compared to EyeTap, which is a "Generation-4 Glass".[86] According to Mann, both devices affect privacy and secrecy by introducing two-sided surveillance and sousveillance.[87] Concerns have been raised by various sources regarding the intrusion of privacy and the etiquette and ethics of using the device in public and recording people without their permission.[88][89][90] There is also controversy over whether Google Glass would violate privacy rights due to security problems, among other issues.[91][92][93]

Privacy advocates are concerned that people wearing such eyewear may be able to identify strangers in public using facial recognition, or surreptitiously record and broadcast private conversations.[13] Some companies in the U.S. have posted anti-Google Glass signs in their establishments.[94][95] In July 2013, prior to the official release of the product, Stephen Balaban, co-founder of software company Lambda Labs, circumvented Google’s facial recognition app block by building his own, non-Google-approved operating system. Balaban then installed face-scanning Glassware that creates a summary of commonalities shared by the scanned person and the Glass wearer, such as mutual friends and interests.[96] Additionally, Michael DiGiovanni created Winky, a program that allows a Google Glass user to take a photo with a wink of an eye, while Marc Rogers, a principal security researcher at Lookout, discovered that Glass can be hijacked if a user could be tricked into taking a picture of a malicious QR code.[97][98]

Other concerns have been raised regarding the legality of Google Glass in a number of countries, particularly Russia, Ukraine, and other post-USSR countries. In February 2013, a Google+ user noticed legal issues with Google Glass and posted about them in the Google Glass community, stating that the device may be illegal to use under current legislation in Russia and Ukraine, which prohibits the use of spy gadgets that can record video, audio or take photographs in an inconspicuous manner.[99] Concerns were also raised regarding the privacy and security of Google Glass users in the event that the device is stolen or lost, an issue raised by a US congressional committee. As part of its response to the committee, Google stated in early July that it was working on a locking system and raised awareness of users' ability to remotely reset Google Glass from the web interface in the event of loss. Several facilities banned the use of Google Glass before its release to the general public, citing concerns over potential privacy-violating capabilities. Other facilities, such as Las Vegas casinos, banned Google Glass, citing their desire to comply with Nevada state law and common gaming regulations, which ban the use of recording devices near gambling areas.[100]

Safety considerations[edit]

Concerns have also been raised about operating motor vehicles while wearing the device. On 31 July 2013, it was reported that driving while wearing Google Glass was likely to be banned in the UK, where it would be deemed careless driving and therefore a fixed-penalty offense, following a decision by the Department for Transport.[101] In the U.S., West Virginia state representative Gary G. Howell introduced an amendment in March 2013 to the state's law against texting while driving that would include bans against "using a wearable computer with head mounted display." In an interview, Howell stated, "The primary thing is a safety concern, it [the glass headset] could project text or video into your field of vision. I think there's a lot of potential for distraction."[102]

In October 2013, a driver in California was ticketed for "driving with monitor visible to driver (Google Glass)" after being pulled over for speeding by a San Diego Police Department officer. The driver was reportedly the first to be ticketed for driving while wearing Google Glass.[103] While the judge noted that Google Glass fell under "the purview and intent" of the ban on driving with a monitor, the case was thrown out of court for lack of proof that the device was on at the time.[104] In November 2013, the Canadian company Vandrico released a study highlighting that the bone-conduction transducer's audibility improves when foam ear plugs are worn, which could encourage workers to wear hearing protection in loud work environments.[105]

Limitations[edit]

Today most AR devices look bulky, and applications such as navigation, real-time tourist guidance, and recording can drain smart glasses' batteries in about 1–4 hours. Battery life might be improved by using lower-power display systems (as with the Vaunt) or by wearing a battery pack elsewhere on the body (such as a belt pack or a companion smart necklace).
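
A back-of-the-envelope estimate shows why (all figures below are illustrative assumptions, not measurements of any specific product):

```python
# Rough runtime estimate: hours = battery energy (Wh) / average draw (W).
battery_wh = 2.1       # e.g. a ~570 mAh cell at 3.7 V (assumed)
draw_active_w = 1.5    # display, camera, and GPS all running (assumed)
draw_light_w = 0.5     # display mostly off, occasional notifications (assumed)

print(f"Heavy use: {battery_wh / draw_active_w:.1f} h")  # ~1.4 h
print(f"Light use: {battery_wh / draw_light_w:.1f} h")   # ~4.2 h
```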

See also[edit]

References[edit]

  1. ^ a b "Vision 2.0", IEEE Spectrum, Volume 50, Issue 3, DOI: 10.1109/MSPEC.2013.6471058, pp. 42–47.
  2. ^ Wearable Computing: A First Step Toward Personal Imaging, IEEE Computer, Vol. 30, Iss. 2, Feb. 1997, pp. 25–32.
  3. ^"Quantigraphic camera promises HDR eyesight from Father of AR", Chris Davies, Slashgear, Sept. 12, 2012
  4. ^Ari Brockman. "Best Smart Glasses of 2015". Viewer. Archived from the original on 28 February 2014. 
  5. ^Mike Elgan (21 December 2013). "Why 2014 is the 'year of smart glasses'". Computerworld. 
  6. ^"We get a faceful of smartglasses at 2014 -- and it ain't pretty". CNET. CBS Interactive. 
  7. ^Jessica Dolcourt (8 January 2014). "Lumus DK40 Preview – CNET". CNET. CBS Interactive. 
  8. ^ a b Scott Stein (18 February 2014). "Epson Moverio BT-200 Smart Glasses Preview – CNET". CNET. CBS Interactive. 
  9. ^"Smart eyewear - LC-Tec". LC-Tec (in Swedish). Retrieved 2017-06-14. 
  10. ^Goldman, David (4 April 2012). "Google unveils 'Project Glass' virtual-reality glasses". Money. CNN. Retrieved 4 April 2012. 
  11. ^Albanesius, Chloe (4 April 2012). "Google 'Project Glass' Replaces the Smartphone With Glasses". PC Magazine. Retrieved 4 April 2012. 
  12. ^Newman, Jared (4 April 2012). "Google's 'Project Glass' Teases Augmented Reality Glasses". PC World. Retrieved 4 April 2012. 
  13. ^ a b Bilton, Nick (23 February 2012). "Behind the Google Goggles, Virtual Reality". The New York Times. Retrieved 4 April 2012. 
  14. ^ "These Are Google Glass's CPU and RAM Specs". Gizmodo UK. April 26, 2013.
  15. ^"Faqs – Google Glass – Press FAQ". 
  16. ^ a b Adrianne Jeffries (December 4, 2013). "'Viewpoint of Billions' uses Google Glass to make art look back at you". The Verge. Retrieved December 13, 2013. 
  17. ^"Smart glasses: The first wave of wearable and connected devices integrating Imagination IP". Imagination Blog. Retrieved 16 August 2015. 
  18. ^"Epson announces second-gen Moverio smart glasses". Retrieved 16 August 2015. 
  19. ^Andy Bowen. "Lumus reveals classy two-tone Glass competitor with in-lens display". Engadget. AOL. Retrieved 16 August 2015. 
  20. ^Alexis Santos. "Lumus turns its military-grade eyewear into a Google Glass competitor (video)". Engadget. AOL. Retrieved 16 August 2015. 
  21. ^Sean Cooper. "Lumus see-through wearable display hands-on". Engadget. AOL. Retrieved 16 August 2015. 
  22. ^Jessica Dolcourt (13 January 2014). "Pivothead Smart Colfax Preview – CNET". CNET. CBS Interactive. Retrieved 16 August 2015. 
  23. ^Samantha Murphy Kelly (19 December 2013). "Smart Glasses Reveal What It's Like to Have Superpowers". Mashable. Retrieved 16 August 2015. 
  24. ^"Top 7 Google Glass Alternatives". Retrieved 16 August 2015. 
  25. ^Paul McDougall. "When Everybody Starts Wearing Smartglasses, Google Won't Be the Only Player". Retrieved 16 August 2015. 
  26. ^Ari Brockman. "It's 2013: Put On Your Smart Glasses – Viewer". Viewer. Archived from the original on 28 February 2014. Retrieved 16 August 2015. 
  27. ^Smart glasses for the oil and gas industry: A look into the future?
  28. ^"Gartner Says Smartglasses Will Bring Innovation to Workplace Efficiency". Retrieved 16 August 2015. 
  29. ^Chinese police are using smart glasses to identify potential suspects
  30. ^Radboud University Nijmegen Medical Centre
  31. ^"FutureMed | FutureMed Faculty". Futuremed2020.com. Retrieved 2013-08-18. 
  32. ^ "Is Google Glass Useful in the Operating Room?". LinkedIn. Retrieved 2013-08-18. 
  33. ^"Google Glass in Operating Room @umcn". YouTube. Retrieved 2013-08-18. 
  34. ^"REshapewithGlass (REshapeglass) on Twitter". Twitter.com. Retrieved 2013-08-17. 
  35. ^"REshape withglass – Google". Plus.google.com. Retrieved 2013-08-17. 
  36. ^All sizes | Viewing angles of Google Glass and surgeon | Flickr – Photo Sharing!. Flickr. Retrieved on 2013-11-29.
  37. ^All sizes | Google Glass – Operation #4 | Flickr – Photo Sharing!. Flickr. Retrieved on 2013-11-29.
  38. ^Phil Haslam and Sebastian Mafeld (31 October 2013). "Google Glass: Finding True Clinical Value". Which Medical Device. Which Medical Device. Retrieved 23 December 2013. 
  39. ^John Nosta (21 June 2013). "Inside the Operating Room with Google Glass". Forbes. Forbes, LLC. Retrieved 30 December 2013. 
  40. ^"First US surgery transmitted live via Google Glass (w/ Video)". Medical Xpress. Medical Xpress. 27 August 2013. Retrieved 29 August 2013. 
  41. ^"Google glass connects breastfeeding moms with lactation help/". Inquisitr. Inquisitr. Retrieved 12 June 2014. 
  42. ^"Exclusive Clips Google glasses help breastfeeding mums". Jumpin Today Show. Mi9 Pty. Ltd. Retrieved 12 June 2014. 
  43. ^"Breastfeeding mothers get help from Google Glass and Small World". The Sydney Morning Herald. 
  44. ^"Turns Out Google Glass Is Good for Breastfeeding". Motherboard Vice Media Inc. 21 April 2014. Retrieved 1 May 2014. 
  45. ^"Polar View by LC-Tec"(PDF). LC-Tec. 
  46. ^"LC-TEC Displays AB: Private Company Information - Bloomberg". www.bloomberg.com. Retrieved 2017-06-14. 
  47. ^"Patent US20160282639 - Apparatus and method for augmenting human vision by means of adaptive polarization filter grids". Google Books. 2016-05-19. 
  48. ^ Lik-Hang Lee and Pan Hui. "Interaction Methods for Smart Glasses" (PDF). The Hong Kong University of Science and Technology. arXiv. Retrieved 3 August 2017. 
  49. ^James Trew. "Lumus and eyeSight deal brings gesture control to DK-40 smart glasses hand-on". Engadget. AOL. Retrieved 16 August 2015.
[Image caption: Man wearing a 1998 EyeTap, Digital Eye Glass.[1]]
[Image caption: Baby Eve with Georgia for the Breastfeeding Support Project.]
