Mixed reality (MR) is the merging of real and virtual worlds to produce new environments and visualizations, where physical and digital objects co-exist and interact in real time. Mixed reality does not exclusively take place in either the physical world or virtual world, but is a hybrid of reality and virtual reality. Augmented reality, a related term, takes place in the physical world, with information or objects added virtually.
There are many practical applications of mixed reality, including design, entertainment, military training, and remote working. There are also different display technologies used to facilitate the interaction between users and mixed reality applications.
Mixed reality was first defined in 1994 by Paul Milgram and Fumio Kishino as "...anywhere between the extrema of the virtuality continuum" (VC), where the virtuality continuum extends from the completely real through to the completely virtual environment, with augmented reality and augmented virtuality ranging between. Rather than treating reality and virtuality as two entirely separate concepts, it has become accepted that a continuum spans the two and that applications of mixed reality can reside anywhere along it. In the paper that first introduced the term, Milgram and Kishino argued that it was needed to refer to "a particular subclass of VR related technologies that involve the merging of real and virtual worlds," a subclass previously not given a name. A related concept, the mediated reality continuum, describes how objects in the physical and virtual worlds interact when reality can be diminished or modified as well as augmented; it can be implemented, for example, in a welding helmet or in eyeglasses that block out real-world advertising or replace it with useful information.
Mixed reality refers to everything in the reality-virtuality continuum except for the two extremes themselves, encompassing both augmented reality (AR) and augmented virtuality (AV). On one end of the spectrum lies the real world with no technological overlays. On the other end lies virtual reality (VR), which refers to "an artificial environment which is experienced through sensory stimuli (such as sights and sounds) provided by a computer and in which one's actions partially determine what happens in the environment." Augmented reality lies between those two points and refers to "an enhanced version of reality created by the use of technology to overlay digital information on an image of something being viewed through a device." Mixed reality is unique in that the term usually refers to artificial products that interact with users in the real world. Augmented virtuality is a subcategory of mixed reality that refers to the merging of real-world objects into virtual worlds.
As an intermediate case in the virtuality continuum, augmented virtuality refers to predominantly virtual spaces in which physical elements (such as physical objects or people) are dynamically integrated into, and can interact with, the virtual world in real time. This integration is achieved with techniques such as streaming video from physical spaces (for example, via a webcam) or the 3D digitization of physical objects. The use of real-world sensor information, such as gyroscope data, to control a virtual environment is an additional form of augmented virtuality, in which external inputs provide context for the virtual view.
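The gyroscope-driven case above can be sketched in a few lines: a minimal dead-reckoning loop (names and sampling rate are illustrative, not from any particular SDK) that integrates angular rates from a physical gyroscope into the orientation of a virtual camera.

```python
import math

def integrate_gyro(yaw, pitch, roll, gyro_rates, dt):
    """Dead-reckon a virtual camera's orientation from gyroscope angular
    rates (rad/s). gyro_rates = (yaw_rate, pitch_rate, roll_rate).
    A real system would fuse accelerometer/magnetometer data to limit drift."""
    wy, wp, wr = gyro_rates
    return (yaw + wy * dt, pitch + wp * dt, roll + wr * dt)

# Simulate a user turning their head 90 degrees to the right over one second,
# sampled at 100 Hz, and steer the virtual view accordingly.
yaw, pitch, roll = 0.0, 0.0, 0.0
rate = math.pi / 2  # 90 deg/s about the yaw axis
for _ in range(100):
    yaw, pitch, roll = integrate_gyro(yaw, pitch, roll, (rate, 0.0, 0.0), 0.01)

print(round(math.degrees(yaw), 1))  # ~90.0
```

In practice the integrated orientation would be fed into the rendering engine's view matrix each frame, so the virtual scene rotates in step with the physical device.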
In a physics context, the term "interreality system" refers to a virtual reality system coupled with its real-world counterpart. A 2007 paper describes an interreality system comprising a real physical pendulum coupled to a pendulum that exists only in virtual reality. This system has two stable states of motion: a "dual reality" state in which the motions of the two pendula are uncorrelated, and a "mixed reality" state in which the pendula exhibit stable, highly correlated phase-locked motion. The use of the terms "mixed reality" and "interreality" is clearly defined in the context of physics and may differ slightly in other fields; in general, however, they are understood as "bridging the physical and virtual world".
Mixed reality has been used in applications across fields including design, education, entertainment, military training, healthcare, product content management, and human-in-the-loop operation of robots.
MR technology can be used to visualize the geometry of three-dimensional objects, and users can interact with the virtual model through gestures and voice commands. MR can help students and designers not only to comprehend the design of digital models by visualizing 3D geometry, but also to understand product functions and geometric relationships and to cultivate their creativity. It can be applied from primary to tertiary education.
Simulation-based learning includes VR- and AR-based training and interactive, experiential learning. There are many potential use cases for mixed reality in both educational and professional training settings. Notably, AR has been used in education to simulate historical battles, providing an immersive and potentially enhanced learning experience for students. In addition, AR has shown effectiveness in university education for health science and medical students within disciplines that benefit from 3D representations of models, such as physiology and anatomy.
From television shows to game consoles, mixed reality has many applications in the field of entertainment.
The 2004 British game show Bamzooki called upon child contestants to create virtual "Zooks" and watch them compete in a variety of challenges. The show used mixed reality to bring the Zooks to life, and ran until 2010.
The 2003 game show FightBox also called upon contestants to create competitive characters and used mixed reality to allow them to interact. Unlike Bamzooki's generally non-violent challenges, the goal of FightBox was for contestants to create the strongest fighter to win the competition.
In 2003, Sony released the EyeToy, a webcam accessory for the PlayStation 2 gaming console that provided computer vision and gesture recognition support for games. By November 6, 2008, 10.5 million EyeToy units had been sold worldwide. The EyeToy was succeeded by the 2007 PlayStation Eye and then the 2013 PlayStation Camera, which is used with the PlayStation 4 and PlayStation 5.
In 2009, researchers presented to the International Symposium on Mixed and Augmented Reality (ISMAR) their social product called "BlogWall," which consisted of a projected screen on a wall. Users could post short text clips or images on the wall and play simple games such as Pong. The BlogWall also featured a poetry mode where it would rearrange the messages it received to form a poem and a polling mode where users could ask others to answer their polls.
The 2016 mobile game Pokémon Go gave players an option to view the Pokémon they encountered in a generic 2-D background or use the mixed reality feature called AR mode. When AR mode was enabled, the mobile device's camera and gyroscope were used to generate an image of the encountered Pokémon in the real world. By July 13, 2016, the game reached 15 million global downloads.
Niantic, the creators of mixed reality games Pokémon Go and Ingress, released a new mixed reality game in June 2019 called Harry Potter: Wizards Unite. The gameplay was similar to that of Pokémon Go.
Mario Kart Live: Home Circuit is a mixed reality racing game for the Nintendo Switch that was released in October 2020. The game allows players to use their home as a race track. Within the first week of release, 73,918 copies were sold in Japan, making it the country's best-selling game of the week.
Other research has examined the potential for mixed reality to be applied to theatre, film, and theme parks.
The first fully immersive mixed reality system was the Virtual Fixtures platform, developed in 1992 by Louis Rosenberg at the Armstrong Laboratories of the United States Air Force. It enabled human users to control robots in real-world environments that included real physical objects and 3D virtual overlays ("fixtures") added to enhance human performance in manipulation tasks. Published studies showed that introducing virtual objects into the real world could produce significant performance increases for human operators.
Combat reality can be simulated and represented using complex, layered data and visual aids, most of which are delivered via head-mounted displays (HMDs), a category that encompasses any display technology worn on the user's head. Military training solutions are often built on commercial off-the-shelf (COTS) technologies, such as Virtual Battlespace 3 and VirTra, both of which are used by the United States Army. As of 2018, VirTra was being used by both civilian and military law enforcement to train personnel in a variety of scenarios, including active shooter, domestic violence, and military traffic stops. The United States Army Research Laboratory has used mixed reality technologies to study how stress affects decision-making; with mixed reality, researchers can safely study military personnel in scenarios that soldiers would be unlikely to survive in reality.
In 2017, the U.S. Army was developing the Synthetic Training Environment (STE), a collection of technologies for training purposes that was expected to include mixed reality. As of 2018, STE was still in development without a projected completion date. Recorded goals of STE included enhancing realism, increasing simulation training capabilities, and making STE available to other systems.
It was claimed that mixed-reality environments like STE could reduce training costs, for example by reducing the amount of ammunition expended during training. In 2018, it was reported that STE would include representations of any part of the world's terrain for training purposes. STE would offer a variety of training opportunities for squad, brigade, and combat teams, including Stryker, armored, and infantry teams.
Mixed reality allows a globally distributed workforce of remote teams to work together on an organization's business challenges. No matter where they are physically located, employees can put on a headset and noise-canceling headphones and enter a collaborative, immersive virtual environment. Because such applications can translate language in real time, language barriers become less of an obstacle. The approach also increases flexibility: while many employers still use inflexible models of fixed working time and location, there is evidence that employees are more productive when they have greater autonomy over where, when, and how they work. Some employees prefer loud work environments, while others need silence; some work best in the morning, others at night. Employees also benefit from autonomy in how they work because people process information differently; the classic model of learning styles, for example, differentiates between visual, auditory, and kinesthetic learners.
Machine maintenance can also be supported with mixed reality. Larger companies with multiple manufacturing locations and a large amount of machinery can use mixed reality to educate and instruct their employees. Machines need regular checkups and occasional adjustments, and since these adjustments are mostly done by humans, employees need to be informed about them. Using mixed reality, employees from multiple locations can wear headsets and receive live instruction about the changes: an instructor can manipulate the representation that every employee sees, glide through the production area, zoom in on technical details, and explain every change needed. Employees completing a five-minute training session with such a mixed-reality program have been shown to attain the same learning results as reading a 50-page training manual. An extension of this environment incorporates live data from operating machinery into the virtual collaborative space, associated with three-dimensional virtual models of the equipment. This enables training in, and execution of, maintenance, operational, and safety work processes that would otherwise be difficult in a live setting, while drawing on expertise regardless of its physical location.
Mixed reality can be used to build mockups that combine physical and digital elements. With the use of simultaneous localization and mapping (SLAM), mockups can interact with the physical world and provide more realistic sensory experiences, such as object permanence, which would normally be infeasible or extremely difficult to track and analyze without both digital and physical aids.
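The object-permanence idea can be illustrated with a toy sketch (a simplified 2D model, not a real SLAM implementation): a virtual mockup is pinned to a fixed world coordinate, and as tracking updates the camera pose each frame, the mockup's position is re-expressed in the camera's frame, so it appears to stay put in the room.

```python
import math

def world_to_camera(anchor_world, cam_pos, cam_yaw):
    """Express a world-anchored 2D point in the camera's frame.
    As tracking (e.g. SLAM) updates cam_pos and cam_yaw every frame, the
    anchor's rendered position changes while its world position stays fixed,
    which is what makes the digital mockup appear 'permanent' in the room."""
    dx = anchor_world[0] - cam_pos[0]
    dy = anchor_world[1] - cam_pos[1]
    c, s = math.cos(-cam_yaw), math.sin(-cam_yaw)
    return (c * dx - s * dy, s * dx + c * dy)

anchor = (2.0, 0.0)  # virtual mockup pinned 2 m ahead in world coordinates
print(world_to_camera(anchor, (0.0, 0.0), 0.0))  # seen 2 m straight ahead
print(world_to_camera(anchor, (1.0, 0.0), 0.0))  # user walked 1 m closer
```

Production systems do the same thing in 3D with full rotation matrices or quaternions, with the camera pose supplied by the headset's SLAM tracker rather than hard-coded.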
It has been hypothesized that a hybrid of mixed and virtual reality could pave the way for human consciousness to be transferred into a digital form entirely—a concept known as Virternity, which would leverage blockchain to create its main platform.
Smartglasses can be incorporated into the operating room to aid in surgical procedures, for example by displaying patient data conveniently while overlaying precise visual guides for the surgeon. Mixed reality headsets like the Microsoft HoloLens allow efficient sharing of information between doctors, in addition to providing a platform for enhanced training. In some situations (e.g., a patient with a contagious disease), this can improve doctor safety and reduce PPE use. While mixed reality has great potential for enhancing healthcare, it also has drawbacks: the technology may never fully integrate into scenarios where a patient is present, as there are ethical concerns surrounding the doctor not being able to see the patient.
Before the advent of mixed reality, product content management consisted largely of brochures, with little customer-product engagement outside of this two-dimensional realm. With improvements in mixed reality technology, new forms of interactive product content management have emerged. Most notably, three-dimensional digital renderings of normally two-dimensional products have increased the reach and effectiveness of consumer-product interaction.
Recent advances in mixed-reality technologies have renewed interest in alternative modes of communication for human-robot interaction. Human operators wearing mixed reality glasses such as the HoloLens can interact with (control and monitor) machines such as robots and lifting equipment on site in a digital factory setup. This use case typically requires real-time data communication between the mixed reality interface and the machine, process, or system, which can be enabled by incorporating digital twin technology.
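The data path described above can be sketched as follows. This is a minimal, hypothetical illustration (the class and field names are invented for the example): a machine publishes state updates, a digital twin mirrors the latest state, and an MR interface would read the twin to render overlays. Real deployments would use an industrial protocol or broker such as OPC UA or MQTT instead of an in-process queue.

```python
import json
import queue
import threading

updates = queue.Queue()  # stands in for a real-time message bus

def machine(n_samples):
    """Physical machine pushing serialized sensor readings onto the bus."""
    for i in range(n_samples):
        updates.put(json.dumps({"rpm": 1200 + i, "temp_c": 60.0 + 0.1 * i}))

class DigitalTwin:
    """Mirrors the latest known state of the machine for the MR interface."""
    def __init__(self):
        self.state = {}
    def sync(self):
        # Drain pending updates; the newest one becomes the twin's state.
        while not updates.empty():
            self.state = json.loads(updates.get())

twin = DigitalTwin()
producer = threading.Thread(target=machine, args=(5,))
producer.start()
producer.join()
twin.sync()
print(twin.state)  # latest reading, e.g. rpm 1204
```

An MR headset application would call something like `twin.sync()` once per frame and draw `twin.state` next to the physical machine in the operator's view.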
While Mixed Reality refers to the intertwining of the virtual world and the physical world at a high level, there are a variety of digital mediums used to accomplish a mixed reality environment. They may range from handheld devices to entire rooms, each having practical uses in different disciplines.
The Cave Automatic Virtual Environment (CAVE) is an environment, typically a small room located within a larger outer room, in which a user is surrounded by projected displays around, above, and below them. 3D glasses and surround sound complement the projections to give the user a sense of perspective that aims to simulate the physical world. Since their development, CAVE systems have been adopted by engineers developing and testing prototype products: they allow product designers to test prototypes before expending resources on a physical one, while also opening the door to "hands-on" testing of non-tangible objects such as microscopic environments or entire factory floors. The same researchers later released the CAVE2, which addresses the original CAVE's shortcomings: the original projections were replaced with 37-megapixel 3D LCD panels, network cables connect the CAVE2 to the internet, and a more precise camera system allows the environment to shift as the user moves through it.
A head-up display (HUD), as the name implies, is a display projected into the user's field of view that provides additional information without obscuring the environment in front of them or forcing them to look away. A standard HUD is composed of three elements: the projector, which overlays the HUD's graphics; the combiner, the surface onto which the graphics are projected; and the computer, which integrates the other two components and performs any real-time calculations or adjustments. Prototype HUDs were first used in military applications to aid fighter pilots in combat, but they eventually evolved to aid all aspects of flight, not just combat. HUDs were then standardized across commercial aviation and later entered the automotive industry. One of the first automotive applications was Pioneer's heads-up system, which replaces the driver-side sun visor with a display that projects navigation instructions onto the road ahead of the driver. Major manufacturers such as General Motors, Toyota, Audi, and BMW have since included some form of head-up display in certain models.
A head-mounted display (HMD), worn over the entire head or in front of the eyes, is a device that uses one or two optics to project an image directly in front of the user's eyes. Its applications range across medicine, entertainment, aviation, and engineering, providing a layer of visual immersion that traditional displays cannot achieve. Head-mounted displays are most popular with consumers in the entertainment market, where major tech companies have developed HMDs to complement their existing products; however, these are typically virtual reality displays that do not integrate the physical world. Augmented reality HMDs, by contrast, have found more favor in enterprise environments. Microsoft's HoloLens is an augmented reality HMD with applications in medicine, giving doctors more profound real-time insight, and in engineering, overlaying important information on top of the physical world. Another notable augmented reality HMD has been developed by Magic Leap, a startup whose similar product targets both the private sector and the consumer market.
Mobile devices, primarily smartphones and tablets, have continued to increase in computing power and portability. Originally limited to displaying a computer-generated interface on a flat screen, modern mobile devices come equipped with toolkits for developing augmented reality applications, which let developers overlay computer graphics onto video of the physical world. The first augmented reality mobile game with widespread success was Pokémon GO, which was released in 2016 and accumulated 800 million downloads. While entertainment applications using AR have proven successful, productivity and utility apps have also begun integrating AR features: Google has released updates to Google Maps that overlay AR navigation directions onto the streets in front of the user, and has expanded its Translate app to overlay translated text onto physical writing in over 20 languages. Mobile devices are a unique display technology in that users commonly carry them at all times.
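The core of overlaying graphics onto camera video is projecting a 3D point in camera space to a 2D pixel. A minimal sketch of the standard pinhole model follows; the focal length and principal point here are illustrative placeholders, whereas real apps read calibrated intrinsics from the platform toolkit (e.g. ARCore or ARKit).

```python
def project(point_cam, f=800.0, cx=540.0, cy=960.0):
    """Project a 3D point in camera coordinates (x right, y down, z forward,
    in metres) onto pixel coordinates using a pinhole camera model.
    f is the focal length in pixels; (cx, cy) is the principal point --
    all three are made-up example values, not real device intrinsics."""
    x, y, z = point_cam
    return (f * x / z + cx, f * y / z + cy)

# A virtual label anchored 2 m in front of the camera, 0.5 m to the right,
# lands at these pixel coordinates on the video frame:
print(project((0.5, 0.0, 2.0)))  # (740.0, 960.0)
```

Drawing the label at the returned pixel position on each video frame, as the tracked camera pose changes, is what makes it appear attached to the physical world.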
Media related to Mixed reality at Wikimedia Commons