The Status of VR in Military Training Environments
by David Alexander
Copyright (C) 1994 David Alexander. All rights reserved.
This article may be freely copied and distributed so long as its content, byline, and copyright notice are not changed or deleted.
Any discussion of virtual reality (VR) technologies suited to military applications of whatever sort must envision warfare through the eyes of Sun Tzu rather than Clausewitz. As a pure simulator technology VR is in many ways redundant, for it is a simulator technology within a simulator technology. On another level, it can be a way to make the concept of "Nintendo Warfare" popularized by media coverage of the Gulf War more than a mere buzzword. When incorporated in aircraft design, virtual reality command interfaces can enable flight simulation to effectively become flight control.
In the wake of the successes gained in the Gulf War employing high-tech weaponry, the trade-off now more than ever is between destructive power and accuracy of ordnance delivery. The Pentagon realizes that the U.S. Congress will mandate funds for systems promising maximum lethality against a point target with little collateral damage. VR, then, is like a genie let out of a bottle; make a wish and any system you can name can go from brilliant to genius.
A brief discussion of what VR is and how it differs from conventional simulator technologies may prove beneficial. After all, are not all simulator/trainer systems, by definition, virtual reality environments? The answer, in the spirit of the opening paragraph, is both yes and no.
Virtual reality can be defined as a simulation technology enabling users to immerse themselves to varying degrees in an artificial environment and to interact with objects in that environment. By means of the technology, abstract data and non-pictorials, such as temperature gradients, time flow, radar envelopes, orbital decay rates, depth and sound, can all be manipulated as objects by users. VR therefore offers cognitive enhancement and skill-mapping through direct interaction with objects and environments and direct participation in procedural flow.
The human nervous system processes image-based data far more readily than it absorbs print and other information presented in traditional linear structures; it has been estimated, for example, that the brain accurately retains only about ten percent of what it reads and about twenty percent of what it hears, but ninety percent or more of what it learns through active involvement. As a result, information absorption, management and interchange are nearly instantaneous and tend to occur on a deeper perceptual level when data is manipulated in objectified, nonlinear form.
It should be clear from the above that the key features of VR to keep in mind are immersion and interaction. It is the degree to which these two components come into play that primarily sets VR apart from conventional simulator technology. Whereas in a conventional flight simulator, for example, a trainee might use a physical joystick, HOTAS or sidestick controller, an immersive VR simulation system would generate virtual controllers: ones that would feel quite solid to the pilot, who would interact with them in the normal manner even though they are in fact electronic objects.
However, such a VR interface, because it is in essence nothing more than a construct of ones and zeroes pumped through a computer processor, can be altered with a keystroke (or with a three-dimensional pointing device, such as a spaceball) and can involve sensory processes beyond the normal range. Threat balloons, radar coverage envelopes from SAM sites, the location of fighters flying CAP, the ballistic tracks of incoming or outbound missiles and the like can all be as "real" to the pilot as landmarks on the ground, other planes or, indeed, the pilot's own virtualized controls.
Additionally, the pilot could interact with the system in different ways, such as by manipulating the above-mentioned spaceball to click on missile icons, or by using eye movements or even thoughts (brain patterns can be scanned using evoked response potential (ERP) technology) to achieve the same end. Since directional orientation in cyberspace is artificial, and the computer is the arbiter of what the pilot sees, hears and feels, strike baskets can become objects in the same sense as the ordnance being put on target.
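To make the object metaphor concrete, the short Python sketch below treats a SAM radar coverage envelope as a selectable scene object alongside the pilot's own aircraft. The class names, coordinates and radius are illustrative assumptions only, not a description of any fielded system.

    import math
    from dataclasses import dataclass

    @dataclass
    class SceneObject:
        """Anything the pilot can see and select in the virtual scene."""
        name: str
        x: float
        y: float
        z: float

    @dataclass
    class ThreatEnvelope(SceneObject):
        """A SAM radar envelope handled as though it were a solid object."""
        radius_km: float

        def contains(self, other: SceneObject) -> bool:
            """True if another scene object lies inside the envelope."""
            d = math.dist((self.x, self.y, self.z), (other.x, other.y, other.z))
            return d <= self.radius_km

    # The pilot's own aircraft and the abstract threat share the same scene.
    ownship = SceneObject("ownship", x=10.0, y=42.0, z=6.0)
    sa6_site = ThreatEnvelope("SA-6 envelope", x=0.0, y=40.0, z=0.0, radius_km=25.0)

    if sa6_site.contains(ownship):
        print("Warning: ownship inside", sa6_site.name)

Because the envelope is just another object in the scene, it can be selected, resized or hidden with the same gestures used on any other element of the virtual world.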
In the realm of military simulator applications of VR, the statement often made by its civilian adherents that VR is not "just goggles and gloves" is reversed completely. While more familiar procedural simulator systems attempt to isolate the trainee in a physical environment that seeks to at least approximate the one to be encountered under actual combat conditions, VR systems create an internalized environment in digitized space. Indeed, one of DOD's goals in head-mounted display (HMD) development as far back as 1979 was to reduce both the cost and physical size of military simulators; the capability to project imagery directly onto the retina could eliminate large screens and bulky projection systems.
On the hardware end of the picture, a typical VR configuration includes three main components. The first is a head-mounted display or HMD. An HMD itself consists of three basic elements. The first of these is a set of image generators, commonly built around small displays such as CRTs or LCDs and currently supporting CGA and, at the high end, VGA graphics. Because of resolution problems with LCDs, military HMDs predominantly employ CRTs mounted near the ears, with mirrors to reflect the image into the viewer's eyes. This arrangement has the added advantage of permitting dual use as a head-up display if the optics are semi-reflective, but has the disadvantage of placing high voltages quite close to the wearer's head.
The second element consists of ultrasonic, mechanical, optical or inertial head position trackers, which map viewing perspective into CPU memory. These work on a six-degree-of-freedom (6DOF) principle, measuring and defining position and orientation as three translational values along the x, y and z axes and three rotational values about them, comparable to the roll, pitch and yaw of flight terminology. A rear counterweight to provide stability completes the standard HMD assembly.
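As an illustration of what those six values amount to in practice, here is a minimal Python sketch showing a 6DOF head pose and the gaze direction derived from its yaw and pitch; the field names and figures are assumptions for the example, not taken from any particular tracker.

    import math
    from dataclasses import dataclass

    @dataclass
    class HeadPose:
        """The six values a tracker reports on each update cycle."""
        x: float        # translation in metres
        y: float
        z: float
        roll: float     # rotation in degrees
        pitch: float
        yaw: float

    def view_direction(pose: HeadPose) -> tuple:
        """Unit vector along which the wearer is looking, from yaw and pitch.
        (Roll changes the orientation of the image, not the gaze direction.)"""
        yaw, pitch = math.radians(pose.yaw), math.radians(pose.pitch)
        return (math.cos(pitch) * math.cos(yaw),
                math.cos(pitch) * math.sin(yaw),
                math.sin(pitch))

    pose = HeadPose(x=0.0, y=0.0, z=1.7, roll=0.0, pitch=10.0, yaw=45.0)
    print(view_direction(pose))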
Many of these devices are available commercially, although it is not unfair to state that almost every U.S. manufacturer has at one time had a DOD connection -- indeed, most would still be servicing ARPA contracts if economic pressures had not forced them out into the private sector. Polhemus Incorporated, for example, one of the earliest manufacturers of a type of head-motion tracker commonly found in HMDs, has recently been awarded a contract to manufacture head-mounted sight trackers for adoption in HUDs used in the Comanche helicopter fire control and navigation system.
As for the two other main components of a VR system, force feedback gloves (FFGs), sometimes called wired gloves, and less commonly an environmentally insulated "Cybersuit" such as the VPL Datasuit, are used in tandem with the HMD when virtual object manipulation and/or full-body immersion are required. The FFGs -- most based upon the pioneering VPL Dataglove model -- enable users to manipulate objects in the virtual world. Some designs involve gloves that are air-filled to simulate weight and other tactile features of virtual objects; these are tactile feedback gloves.
Fiber optic or electromechanical sensors in both gloves and suit use 6DOF protocols to translate motor responses into digital data processed by the CPU, facilitating interaction with the virtual environment. Objects can be manipulated as well as physically altered, overlaid upon and/or combined with non-pictorials to highlight specific areas of interest. Force-feedback handgrips of various designs and configurations, as well as wands, mice and force-balls, are also employed in this capacity.
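A rough idea of how raw glove readings become usable interaction data is given by the following Python sketch; the sensor counts, calibration limits and grasp threshold are invented for illustration and do not correspond to any particular glove.

    # Hypothetical calibration: raw sensor counts for an open hand and a fist.
    FLEX_MIN, FLEX_MAX = 120, 870

    def flex_to_angle(raw: int) -> float:
        """Scale a raw flex-sensor reading to a joint angle of 0-90 degrees."""
        raw = max(FLEX_MIN, min(FLEX_MAX, raw))
        return 90.0 * (raw - FLEX_MIN) / (FLEX_MAX - FLEX_MIN)

    def is_grasping(readings: list, threshold_deg: float = 60.0) -> bool:
        """Treat the hand as grasping when every finger is curled past the threshold."""
        return all(flex_to_angle(r) > threshold_deg for r in readings)

    # One reading per finger, thumb through little finger.
    sample = [300, 800, 790, 810, 760]
    print(is_grasping(sample))   # False: the thumb is still mostly open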
These systems have obvious and significant implications for real-world as well as training applications, and in medicine they are already paying off in improved surgical procedures. If a surgeon using VR can pinpoint a tiny cancerous cell and zap it with a laser, much the same can be true of an F-117 pilot about to put ordnance on target, a submarine WSO about to launch a torpedo or an AWACS or JSTARS RSO detecting a launch signature in the air or mechanized armor movement on the ground.
The ultimate weapon of the future may not be the one that will blow up the world but one that will merely blow up Saddam, Khadafi or whomever happens to be the villain of the moment, preferably without soiling the draperies. One day, perhaps not too far off, the famous philosophical conundrum concerning the push of a button resulting in a fatality three thousand miles away may not only be a testable proposition; utilizing VR's capabilities it might have become tactical doctrine.
If the reader is beginning to get the impression that this discussion has veered somewhat from simulators to real-world applications, the reader is absolutely correct. As should be evident from the above, VR technologies can share the properties of both simulation and control systems. Indeed, the development of VR systems was never undertaken purely as a better means to teach pilots to fly planes; it was meant to be a command and control interface to assist pilots in flying them. It was never intended solely to effect economies by training Tomahawk crews on simulated ordnance; it was meant to enable them to better guide their rounds to the intended targets. It was not intended merely to make sonarmen more astute at detecting submerged or surface contacts, but to automate the interactions between military personnel and the systems they control in much the same way that the weapons systems themselves have been automated over the last twenty-five to thirty years.
Actually, there does seem to be an approximately thirty-year development cycle in high-end U.S. military systems. Witness Stealth technology, for example: the prototypical Stealth bomber, the YB-49 "Flying Wing," required some three decades before the nascent technology matured into the B-2. If such is the case, then VR-based systems, first introduced circa 1963, are now poised for incorporation into mission-critical systems.
In fact, among DOD's chief R&D goals for the decade is the development of synthetic, digital environments in which computer modeling and simulation would serve roles from rapid prototyping of advanced weapons systems down to the molecular level to, conversely, tracking the course of a threat-rich combat environment down to the ballistic tracks of individual warheads. Massively parallel processors (MPPs) can at present perform some one billion operations per second (OPS) and are projected to show a tenfold increase in processing speed within the next three to five years. Utilizing such machines, digital modeling has been applied by Defense's Supercomputing Alliance to clarifying images generated by synthetic aperture radar (SAR) in order to pinpoint and identify, at high speed, weapons platforms otherwise lost in electronic clutter.
Modeling and simulation technologies utilizing MPPs also have applications in wargaming, where they can replicate theater combat situations with increased realism and anticipate otherwise unforeseen tactical and strategic developments with a high degree of accuracy. In the realm of networked environments, the Pentagon's Joint Warfare Center utilizes interactive hardware and software in a digital video branch exchange (DVBX) that can involve thousands of participants over long haul computer networks spanning the globe.
On a related front, strategic/tactical mission planning and support systems utilizing computer simulation technologies, having proven their worth in the Gulf, are now an essential component of AirLand Battle doctrine. The USAF's mission support system (AFMSS) and USN's special operations forces planning and rehearsal system (SOFPARS) are two programs geared toward extending the technological potential offered by computer modeling of synthetic combat environments.
With the above as preamble, the proliferation of VR-based systems into military simulator environments can be framed in somewhat broader terms, which helps explain why, in an era of shrinking military expenditures overall, VR budgeting has been conspicuously spared the swing of the Congressional axe. What, then, are some of the factors influencing this trend?
VR is cost effective, permitting the downsizing/rightsizing of training systems. An HMD, suit and gloves linked to an 80486 or RISC CPU, which is in turn networked to a mainframe host, can, for example, perform many of the functions previously requiring the use of simulators costing millions of dollars. System elements can frequently be purchased off the shelf, facilitating cost-effective stockpiling of spare parts.
VR is an enhancement of existing technology. Legacy systems need not be retired; they can be enhanced and upgraded by the inclusion of VR technologies. Unlike the case with artificial intelligence (AI), the advances VR requires are evolutionary rather than revolutionary. Robust AI applications demanded significant breakthroughs in such technologies as object recognition and neural networks, whereas more mundane improvements, in graphical resolution, processing speed and the like, will provide marked improvements in VR systems at relatively low cost.
VR technologies support cross-platform diversity. Reconfiguration of simulated mission parameters can be achieved with relative ease by reprogramming the computer system to compensate for incompatibilities in systems utilized by different branches of the armed services. When planes, tanks, helos and communications linkages, to say nothing of military personnel, are all virtual objects in virtual space, marked leeway in the types of battlefield scenarios that can be generated becomes possible.
One virtual reality technology, televisual reality (TR), is especially suited to mission simulation applications. TR is a networking technology which enables non-colocated participants linked on the net to share the same cyberspace. TR should not be confused with telepresence (TP), which it closely resembles; unlike TR, telepresence does not necessarily imply interaction among the occupants of the virtual world.
Televisual reality enables distributed processing across large, even global, distances. Using the technology, an instructor located thousands of miles from fledgling sonarmen could appear to be in the same simulated crew station aboard a virtual Centurion submarine. The AN/SQQ-89 interactive video delivery system (IVDS), currently being deployed by the USN, is in concept just such a multi-user integrated training system, whereby trainees interactively learn to use a sonar operator watchstation console. The system creates what has been described as a "virtual classroom" in which student stations consisting of console, keyboard and pointing device are networked with each other and with an instructor station.
Trainee performance and progress can be measured, and assistance provided by the instructor, in real time, and both group-paced and self-paced training are supported. High-density color video graphics combined with digital audio instructions and simulated tactical communications prompt trainees to interact with IVDS with a degree of realism not attainable by other methods. The system has been demonstrated to typically achieve forty-percent reductions in error rates. Because the technology used is primarily "off the shelf," incorporating commercial 80386 and 80486 CPUs and videodisc mass storage media, it is easily upgradable and maintainable at low cost.
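The logic of such a virtual classroom can be suggested in a few lines of Python. The sketch below is not IVDS code; the message fields, error-rate threshold and station names are assumptions made purely to illustrate how student stations might report progress to an instructor station that flags trainees needing real-time assistance.

    from dataclasses import dataclass, field

    @dataclass
    class ProgressReport:
        station_id: str
        lesson: str
        steps_completed: int
        errors: int

    @dataclass
    class InstructorStation:
        assist_threshold: float = 0.25              # flag stations above a 25% error rate
        reports: dict = field(default_factory=dict)

        def receive(self, report: ProgressReport) -> None:
            """Record the latest report from a student station."""
            self.reports[report.station_id] = report

        def stations_needing_help(self) -> list:
            """Return the stations whose error rate exceeds the threshold."""
            flagged = []
            for r in self.reports.values():
                if r.errors / max(r.steps_completed, 1) > self.assist_threshold:
                    flagged.append(r.station_id)
            return flagged

    instructor = InstructorStation()
    instructor.receive(ProgressReport("console-03", "passive sonar search", 20, 2))
    instructor.receive(ProgressReport("console-07", "passive sonar search", 20, 9))
    print(instructor.stations_needing_help())       # ['console-07']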
The U.S. Navy has long been the branch of the armed services on the cutting edge of VR research, development and fielding of prototype systems, such as teleoperation (TO). This trend can be explained by two overriding factors. The first is the USN's need to maintain multilevel and multilayered force structures (e.g., a carrier battlegroup), arguably unique to the service branch. The second is its need to conduct undersea missions. In both situations the sheer number of non-pictorials in play -- radar and sonar envelopes, depth, isothermals, the simultaneous locations of hostiles and friendlies, surface and submerged contacts -- makes a technology that can effectively integrate these tactical elements critical to the success of the mission. Along these lines, the USN's advanced combat direction system (ACDS) program incorporates advanced data fusion technologies to create an integrated tactical simulation of the operations area for battlegroup commanders.
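The essence of that data-fusion problem can be sketched simply: reports arriving from different sensors must be merged into one tactical picture. The Python fragment below is an illustration only; the track numbers, fields and the rule of keeping the most recent report per track are assumptions, not the ACDS algorithm.

    from dataclasses import dataclass

    @dataclass
    class Contact:
        track_id: str
        sensor: str          # "radar", "sonar", ...
        domain: str          # "air", "surface", "subsurface"
        allegiance: str      # "friendly", "hostile", "unknown"
        bearing_deg: float
        range_km: float
        time_s: float        # time of the report

    def fuse(reports: list) -> dict:
        """Collapse multiple sensor reports on the same track number into one
        entry per track, keeping the most recent report for each."""
        picture = {}
        for c in reports:
            held = picture.get(c.track_id)
            if held is None or c.time_s > held.time_s:
                picture[c.track_id] = c
        return picture

    reports = [
        Contact("T-101", "radar", "air", "unknown", 45.0, 120.0, time_s=10.0),
        Contact("T-101", "radar", "air", "unknown", 44.5, 118.0, time_s=12.0),
        Contact("T-207", "sonar", "subsurface", "hostile", 310.0, 22.0, time_s=11.0),
    ]
    for track in fuse(reports).values():
        print(track.track_id, track.domain, track.allegiance, track.range_km)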
Televisual reality simulator technology is central to the operation of SIMNET, administered by the Advanced Research Projects Agency (ARPA), which a common slip of the tongue will still call by its former name, DARPA. With joint DARPA-Army development begun in 1982, SIMNET is one of the earliest and perhaps still the most ambitious military applications of simulation technology for training.
Upwards of 270 simulators have been installed at locations in the United States and Europe since the program's inception. These distributed SIMNET nodes are linked together via local- and wide-area computer networks (LANs and WANs) of up to one hundred individual simulators, and also via satellite in the form of a long haul network or LHN, in order to facilitate team processing operations in a networked televisual reality environment.
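One technique that makes such long-haul networking practical is worth sketching: each node broadcasts periodic entity-state updates, and receiving nodes extrapolate ("dead-reckon") positions between updates so that network traffic stays manageable. The Python below is a minimal illustration of the idea, not the actual SIMNET protocol or packet format.

    from dataclasses import dataclass

    @dataclass
    class EntityState:
        """One broadcast update describing a vehicle in the shared battlefield."""
        entity_id: int
        kind: str            # e.g. "M1 tank", "A-10"
        x: float             # position in metres
        y: float
        vx: float            # velocity in metres per second
        vy: float
        timestamp: float     # seconds

    def dead_reckon(state: EntityState, now: float) -> tuple:
        """Estimate the entity's position at time `now`, assuming it has held
        a constant velocity since its last broadcast state."""
        dt = now - state.timestamp
        return (state.x + state.vx * dt, state.y + state.vy * dt)

    last_update = EntityState(42, "M1 tank", x=1000.0, y=2000.0,
                              vx=8.0, vy=0.0, timestamp=10.0)
    print(dead_reckon(last_update, now=12.5))    # (1020.0, 2000.0)

In practice, a node need only send a fresh update when a vehicle's actual path diverges appreciably from the extrapolation, keeping traffic on the long haul links low.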
Patched into the televisual reality network, armor, mechanized infantry, helos and fixed-wing aircraft such as A-10s and F-16s, as well as FAAD emplacements, can participate in interactive wargaming with highly realistic audiovisuals. By means of the multiple simulator node linkages, crew commanders can train their units against one another or against OPFOR units in a wide variety of simulated battlefield engagements over a wide assortment of terrain, including a generic battlefield (ARPA claims any terrain on earth can be simulated by the system) with a "combat arena" of some thirty square miles and local terrain patches of some two square miles. At the heart of the SIMNET system is a management, command and control (MCC) interface consisting of network clusters linked to a mainframe host platform; these linkages include staff officers overseeing operations in a virtual tactical operations center.
The simulated environment was initially limited to relatively low-resolution polygonal picture elements, with faithful representation reserved for military hardware and generic icons serving to connote background objects such as trees, buildings and the like. Advances in image processing, in dynamic modeling technologies and in raw CPU processing speed and power have since vastly improved the quality and variety of the simulated environment. By means of dynamic terrain algorithms, for example, craters can be programmed into the environment following an ordnance strike, and tank treads can be left in the wake of mechanized armor passing across the virtual landscape.
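The crater example can be reduced to a simple heightfield operation, sketched below in Python. The grid size, crater radius and depth are invented for the illustration and are not drawn from SIMNET's actual terrain algorithms.

    import math

    def add_crater(heights, impact_row, impact_col, radius, depth):
        """Lower the cells of a 2-D heightfield within `radius` cells of the
        impact point, deepest at the centre and tapering to zero at the rim."""
        for r in range(len(heights)):
            for c in range(len(heights[0])):
                d = math.hypot(r - impact_row, c - impact_col)
                if d < radius:
                    heights[r][c] -= depth * (1.0 - d / radius)
        return heights

    terrain = [[100.0] * 8 for _ in range(8)]   # flat 8 x 8 patch at 100 m elevation
    add_crater(terrain, impact_row=3, impact_col=4, radius=3, depth=5.0)
    print(round(terrain[3][4], 1))              # 95.0 at the point of impact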
Real-time update rates for individual simulator nodes have been increased as well, with decreased perceptual lag times and image update rates reported in the area of sixty frames per second. SIMNET continues to expand its capabilities, and improved features continue to be added.
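The arithmetic behind those update-rate figures is straightforward and is worked through below; the tracker and image-generation times used are hypothetical, inserted only to show how the individual delays add up to the lag a user perceives.

    frame_rate_hz = 60.0
    display_interval_ms = 1000.0 / frame_rate_hz   # ~16.7 ms between refreshes
    tracker_ms = 8.0                               # hypothetical tracker latency
    image_generation_ms = 14.0                     # hypothetical rendering time
    total_lag_ms = tracker_ms + image_generation_ms + display_interval_ms
    print(f"worst-case motion-to-photon lag: about {total_lag_ms:.1f} ms")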
However this may be, it is the military's aviation arms which in the aftermath of spectacularly successful air operations over Iraq have reaped the budgetary spoils of victory in the Gulf. The U.S. Air Force as well as Naval and Marine aviation have been allocated funding for implementation of VR systems in flight training applications. USAF R&D has led the way. Since the debut of the VCASS or visually coupled airborne systems simulator at Wright-Patterson AFB in 1982, with its oversized helmet that came to be called "the Darth Vader," the USAF has actively pursued R&D for systems incorporating VR in both flight simulation and actual cockpit C2 functions.
The man-machine interface offers the possibility of changing the pilots' favored metaphor of "strapping the aircraft to their backs" to something more akin to "grafting it to their skin," where processes beyond the normal sensory range, such as radar paints and servomechanical linkages, become extensions of the pilot's body and sensory apparatus. Today, advanced HMDs capable of displaying high-density image resolution and interacting with an advanced graphical display interface, as well as interactive simulation technologies which link participants across networked environments, are being used in pilot simulation training. Additionally, a variety of systems improvements, such as massively parallel processing (MPP), have made possible a far broader spectrum of mission training sets than were previously in the simulation repertoire, including, for example, pure pursuit maneuvers.
The USAF, utilizing advanced HMD technology developed by CAE-Link, General Electric and other technology providers, continues to modify and improve the virtual or "supercockpit" command interface pioneered with the VCASS program. Here graphical icons such as SAM threat envelopes and onboard weapons stores, as well as text blocks displaying systems status and other tactical information, are overlaid on real-time video of the air combat zone, replacing traditional HUD functions with an integrated command and control interface. (The overlaying of real-time visuals with digitized graphics is a technological subset of virtual reality known as augmented reality or AR.)
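How that overlay step works can be suggested with a short sketch: symbology generated from tactical data is attached to each video frame before it reaches the pilot's display. The Python below is purely illustrative; the class names and symbols are assumptions, not part of any real HMD or HUD software.

    from dataclasses import dataclass

    @dataclass
    class Symbol:
        label: str           # e.g. "SA-6 envelope", "FUEL 7400 LB"
        screen_x: int        # pixel position at which the symbol is drawn
        screen_y: int

    @dataclass
    class VideoFrame:
        timestamp: float
        overlays: list

    def composite(frame: VideoFrame, symbols: list) -> VideoFrame:
        """Attach the current symbology to the frame so the display hardware
        can draw it over the live video."""
        frame.overlays = list(symbols)
        return frame

    symbology = [Symbol("SA-6 envelope", 512, 300), Symbol("FUEL 7400 LB", 60, 740)]
    frame = composite(VideoFrame(timestamp=0.016, overlays=[]), symbology)
    print([s.label for s in frame.overlays])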
There is, of course, no point in training F-15 and F-16 fighter crews utilizing such a "god screen" command interface, since the planes do not support it, nor are they ever likely to. What rationale, then, for the intensive ongoing R&D effort? The answer is that advanced tactical fighter (ATF) designs such as the F-22 and AFX, upgrades to extant stealth aircraft such as F-117 and B-2, and certain still officially classified high-altitude transonic stealth aircraft (AURORA), may incorporate some or most of the virtual cockpit technology in future generations or even at the present time.
By all accounts, interactive simulation applications utilizing virtual reality technologies will continue to play an increasing role in training military personnel on ever more sophisticated and ever more expensive combat platforms, but there is a downside. A DOD-sponsored review of VR simulation research and development completed in early 1993 warned that VR might not be the panacea for flight training that its adherents claim it to be and suggested that for a number of pilot training programs more conventional flight simulators and/or training in actual aircraft was advisable.
Simulator sickness is a common drawback cited by VR simulation critics. It is caused by the discrepancy between the visual motion cues generated by the system and the motion cues registered by the other human senses. In interactive VR networks these symptoms can be exacerbated by the added stress that update lag time places on the human nervous system, producing such reactions as headache, nausea and vertigo. Be that as it may, post-Cold War pressures toward cost-effectiveness and time-efficiency show every indication of continuing to drive the market, both in the U.S. and abroad, for increasingly sophisticated simulation technologies in which VR will continue to play a major role, certainly to the end of the decade and perhaps well into the next century.