Synthetic Reality and Blind Flight: The Operationalization of Perception

In our contribution we trace the emergence of human perception as an actual object of design and engineering in the early history of aviation, through the postwar rise of human factors studies, and into military research and its respective industries today. In this survey, (perceptual) engineering and high-technology weapons such as remote-controlled or (partially) automated drones blend into a military policy that distances human operators from the 'theater of war.' As we argue, this new form of warfare, with its visual technologies ('Synthetic Vision Systems') and mental ecologies (situational awareness), is aligned with a 'logic of simulation,' as discussed by Jacques Rancière in his political theory of in/visibility.

Remote-controlled or (partially) automated contemporary weapons systems by their very principle increase the distance between human actors and their targets in military engagements. They thereby reduce the risk of harm on the side of the technologically superior party and increase its access to geographically remote regions. At the same time, the growing distance of human operators from the 'theater of war' and the complex command structures in network-centric warfare pose new problems for decision-making and control. Because they create new demands on human perception, these problems are presented largely as problems of 'situational awareness', a concept developed within the engineering discipline of human factors studies that has recently found its way into media-theoretical discourse (Suchman 2015). Even though these issues concern a wide variety of current technological applications that rely on screen-based procedures, they are particularly acute in a military context, where the perceptual basis for command and control is instrumental for the constitutive power of the state over life and death.
In today's remote warfare, 'situational awareness' is the domain of a new workforce of visual data analysts on the one hand, and of new technologies of visualization that are programmed to make data visible to the operators on the other. Central to these visualization technologies are 'Synthetic Vision Systems' (SVS), which claim to be at the forefront of a revolution in 'situational awareness' (General Atomics 2016). We argue that this relates directly to questions of in/visibility that are most pertinently expressed in the political philosophy of Jacques Rancière. In Disagreement, Rancière delineates a 'logic of simulation', in which the appearance of "objects whose mode of presentation is not homogeneous with the ordinary mode of existence" is made impossible and eventually amounts to "a real indissociable from its image" (Rancière 1999, 104).
This contribution focuses on a specific aspect within the history of aviation up to present debates around Unmanned Aircraft: the central issue of human perception and, more specifically, the term 'situational awareness'. In the US-American context, this term is influential both in the field of engineering and in shaping military policy. In the following, we will trace the emergence of human perception as an actual object of design and engineering from the early history of aviation through the post-war rise of human factors studies.
The key issue when debating so-called autonomous systems and their possible implications in legal, ethical, and political terms is trying to locate the exact point at which a decision is made, or at which it is actually prefigured by software, preprogrammed algorithms, or interface design. Determining what this can teach us about the policies and political mind-sets built into today's machinic assemblages is not a simple task. Yet a closer look at how these systems are designed, what assumptions inform that design, and what kind of conceptual frames and epistemological claims are at stake may contribute to a better understanding of the parameters of decision-making.

Rancière and the Regime of the All-Visible
In Disagreement, Jacques Rancière describes post-democracy as a form of government that, "in the name of democracy, emphasizes the consensual practice of effacing the forms of democratic action" (Rancière 1999, 101 f.). This effacement occurs within a structure of the visible "where everything is on show," and where "there is thus no longer any place for appearance" (Rancière 1999, 103). In this world of total visibility, a 'real' and harmonized 'image of the whole' establishes itself, in which any appearance of politics is compelled to disappear by the "media proliferation of whatever is visible" (Rancière 1999, 104). Rancière thereby brings the technologies of visualization, as the media of visual simulation, explicitly within the context of the police "order of the visible and sayable" (Rancière 1999, 29), whose description constitutes the core of his political philosophy. In doing so, he locates the relevance of the post-democratic 'regime of the all-visible' in a logic of simulation that is opposed not so much to the 'real and realist faith' as to 'appearance and its powers'.1 Visibility here emerges as categorical blindness, namely as an intrinsic opposition to all that no longer corresponds to reality in terms of the order of the 'image of the whole'. In what follows, we would like to suggest that Rancière's reflections can be related to the procedures of 'blind flight'2 in a more than metaphorical sense. Furthermore, we propose that the technologies of simulation that are in many respects paradigmatic for the proliferation of images can be examined within the context of applied psychology and its techniques of control over the perception of pilots.
In his essay on the 'Work of Art in the Age of Mechanical Reproduction', Walter Benjamin positions the cinematic apparatus3 and its effects on human sensory experience in the context of aptitude tests developed within the new military science of applied psychology ('Psychotechnik').4 He thereby exposes the superiority of the technological recording process over human capabilities of observation, and discusses its role as a training device for the "new tasks of apperception" (Benjamin 1991, 505; Benjamin 2003, 276).5 In our view, the early tests in applied psychology and their historical continuation after WWII through the emergent discipline of human factors studies in the United States serve as the historical background for an ongoing operationalization of human perception that was, and continues to be, most clearly pursued within the field of military research in general, and military aviation in particular. From the outset, optimizing the interaction between humans and machines was one of the central preoccupations of aviation. To this day, both in manned and unmanned flight, this is the chief domain for the development and application of techniques geared towards human perception, most notably the field of vision, and, accordingly, for image-guided procedures that determine exactly what can be seen, and how. In the present analysis of these procedures, we aim to expose the function of technical images within them as surfaces for action that implement control. We will start with an investigation of 'psycho-technological' experimental designs in the aftermath of WWI as setting the historical precedent for a more extensive operationalization of perception, one that is highlighted now by contemporary applications of sophisticated image-guided technologies in remote-controlled aircraft. We argue that a recent discourse, which has evolved from the intersection of experimental psychology and engineering sciences during the 20th century, approximates the political theory advanced by Rancière.
Learning to Fly: Attention and Applied Psychology

"Problems of sensation and perception… how a man sees and how he hears" (Chapanis et al. 1949, 12) were the major concerns of American human factors studies during the formation of the field in the late 1940s. They had already been the major preoccupation of the experimentally oriented applied psychology in Germany since the late 1910s.6 Especially in areas where the customary patterns of perceptual activity had been disrupted through the introduction of new technologies, it was understood that these presented new challenges to experimental psychologists working closely with engineers. Aviation began its military history shortly before WWI,7 and proved that human perception was especially susceptible to error under changed environmental conditions (Asendorf 1997, 158ff). The rapid development of military aviation in the course of WWI also provided a testing ground for experiments with the media of photography and film that later inspired Benjamin's prescient theoretical remarks. It was in this tradition that Paul Virilio could describe aviation as a 'logistics of perception', which by 1914 was "becoming one way, or perhaps even the ultimate way, of seeing" (Virilio 1989, 17). Aviation thus played a pivotal role in turning the battlefield into a 'field of perception', in which the war machine came to be seen by "the military commander as an instrument of representation" (Virilio 1989, 20; see also Siegert 1992). The decisive role of aerial photography for operational reconnaissance during the wars of the 20th century is transferred today to remote-controlled or partially autonomous technological devices. This has increasingly automated the airplane as a medium of seeing, at the same time as it has established 'blind flight' as a paradigm for human perception within these systems.
The act of seeing, which Virilio ascribed entirely to the media of photography and film, was indeed one of the most important objects of investigation for the procedures used both to test the aptitude of prospective pilots and to conduct their training in the early days of aviation. One of the first examples of this was an aptitude test conducted in 1918 by the psychologist Wilhelm Benary at the Hamburg school of aviation that sought to determine attentional performance during the simultaneous observation of earth and sky (Benary 1919a; 1919b). To experimentally reproduce these conditions, a strip of terrain with two distinct, intersecting lines, as well as the crisscrossing paths traversed by three arrows in apparently arbitrary sequence, was installed on the laboratory floor. The test subject sat at a table and was given the task of following the path of one arrow in this graphic representation of terrain and then drawing the route it had taken onto another strip (see figure 1).
Hanging above the participant's head was an additional screen upon which various geometric figures flashed in constantly changing sequences, which the participant was instructed to remember and indicate by means of keystrokes. This created two visual fields analogous to the observation of earth and sky, which the participant had to observe over a period of an hour and fifteen minutes. The experiment was thus designed to test the individual's ability to survey the spatial relations of terrain, recognize and memorize routes and shapes within it, and then find and draw them in on a map, all while simultaneously having to respond to external (acoustic) stimuli. This is not merely an example of the visual presentation of terrain in which the contours of its operational application already begin to surface; it also reveals an operative understanding of the faculty of attention, which, as we will show, has more recently gained new prominence through the concept of 'situational awareness'.8 One year after the implementation of Benary's orientation test, in response to significant deviations between the test results of the laboratory scenario and actual flight situations, a call was made to redesign the testing procedures in a more methodologically suitable manner involving 'realistic simulation'. This demand was clearly reflected in the experimental design drafted by the psychologist Arthur Kronfeld, who put an emphasis on heightened proximity to 'life' and 'reality' (Kronfeld 1919, 39). It was no coincidence that this was also the context for the emergence of aerial images able to survey terrain and its topographical features with a level of precision and detail in comparison to which human sight was found lacking. Kronfeld's experimental design included such an aerial image mounted upon a kymograph that unrolled itself in front of the test subject to display the panorama of a landscape seen from a height of 2000 meters. The participant sat in a dimly lit room and watched (with both eyes, through a sighting tube to which his head was loosely fastened) as the panorama unwound itself in an endless loop before his eyes. This panorama conveyed the impression of being "self-contained, so that it constantly, unnoticeably repeats itself. Each individual revolution lasts around two minutes and displays thirty stimuli to the spectator during this time" (Kronfeld 1919, 43; see also Baumgarten 1928, 347 et seq.). In contrast to the design of Benary's experiment, the terrain is no longer represented graphically, but rather as a moving landscape in the form of a rotating photographic panorama upon which the viewer fixes his gaze through a sighting device. The task of the participant was to react with a keystroke to an optical stimulus, which corresponded to a marked point in the landscape passing by. At the same time, small coloured lights flickered (20-24 per minute) outside the central field of vision to test peripheral vision. Here too, the participant's task was to respond with a keystroke in order to initiate 'evasive manoeuvres'.

Situational Awareness: Attention in Synthetic Reality
What had begun in Germany during the Great War with professional aptitude tests for pilots was carried on in the US-American context during WWII, first as 'applied experimental psychology', then, beginning at the end of the 1940s, under the name of 'human factors studies'.9 [Footnote 9: W.S. Hunter characterizes military psychology as an 'American product.' The US Army first introduced aptitude tests for pilots in 1917. At first the new discipline was concerned "almost exclusively with tests of intelligence and occupational fitness" (Hunter 1946, 479). In 1942, the National Research Council established a 'Committee on Service-Personnel Selection and Training' and psychologists concurrently began research into "the human factor in the design and operation of equipment" (Hunter 1946, 480).] Through the development of technological devices intended both to optimize the use of human capabilities and to minimize the susceptibility of technical systems to disruption by the 'human factor', the human sensory apparatus itself became the actual object of design, while the creation of service devices and display systems to enable effective control by humans, as the primary task of design, actually shifted more and more toward effectually moulding human perception to perform as an efficient element within technological systems. As a result, the aforementioned "problems of sensation and perception" became crucial for engineers of interfaces to determine "input to the human operator"10, and vision became their primary concern. To the authors of the handbook of Applied Experimental Psychology (1949), the eye functions as "the primary means of knowing things", and "if a man gets himself involved as a functional part of a man-machine system, his effectiveness is very often determined entirely by the acuity and efficiency with which he can use his eyes" (Chapanis et al. 1949, 67). They cite radar and instrument flight as examples of the critical role of equipment design: "We are told that radar is the eyes of the fleet. But radar never sees. It is man who sees. If his eyes fail at a critical time, because they either are inherently poor or are placed at an undue disadvantage by poorly designed equipment, then radar fails, the ship fails, the mission fails. And, again, think of the demands imposed on the eyes of the pilot who is flying blind. In spite of its name, this job is primarily a visual one." (Chapanis et al.
1949, 67)

In the early 1990s, at a time when automation and computation presented new challenges to the design of airplane cockpits, the term situational awareness11 was introduced into the human factors discourse, and the systematic revision of the interface between human perception and technological system took a new turn. For Mica Endsley, a human factors scientist who began her career at the arms manufacturer Northrop Aircraft and later became the chief scientist of the US Air Force, it was already clear in 1987 that human perception would rely heavily on automatized data visualization in the computerized cockpit: "Expert systems can be used to further augment the pilot by providing new capabilities never possessed before. Not only may he be better prepared to deal with simultaneous and high workload demands on his attention, expert systems may also be used to provide the pilot with 'look ahead' simulation capabilities or the truly pertinent data from among the multitudes of data available from internetted sources." (Endsley 1987, 1389) In contrast to the experimental configurations of early aviation, where attention appeared as a motor-sensory attribute and was tested and trained by means of stimulus-reaction schematics, it has now become a technological element susceptible to manipulation and an object for the 'new capabilities' of a pilot who is himself augmented and improved by highly sophisticated technological equipment. Such technologically generated 'situational awareness' plays a decisive role in today's remotely piloted devices such as military drones. Since controlling drones by means of camera images transferred in real time is susceptible to disturbances and, despite the mobility of the camera lens, the field of vision is restricted, a second, synthetic, computer-generated type of image supplements the most recent generation of guidance systems. These images simulate the view from the cockpit of the remotely guided aircraft as if the pilot were on
board. This is accomplished by bringing together previously collected spatial data from software-based systems stored in various databases, which is then supplemented, as needed, with further information from communications networks. This data is then translated and converted into a simulated image that is superimposed upon the video image so that it appears as a second visual level on the pilot's screen. On this level of symbolic imagery, unlike in the scenarios of early aviation, the aim is not to achieve a realistic representation of the visual field for the sake of proximity to life or reality. On the contrary, this imagery is about creating an abstraction from reality conducive to operational needs: the primary function of the new synthetic processes of visualization is to make the video images generated in real time interpretable to the extent required for the mission at hand. They also make it possible to navigate without recourse to the underlying level of real-time video footage, thus enabling the aircraft to be controlled without any form of visual reference to the reality on the ground. If the discourse of human factors nevertheless makes a distinction between 'real world' and 'synthetic world' (Calhoun & Draper 2010, 236), then this refers particularly to the very contrast between real-time video footage and computer simulations. Given that the veracity of the content of an image (whether 'real' or 'synthetic') must always be viewed with a critical eye, for us the actual significance of this distinction lies in the control function of the augmented and simulated moving images that direct the pilot's efficient selection of visual information. In this respect, the synthetic images serve, on the one hand, to highlight relevant information already available in the image (such as danger zones, potential take-off and landing points, and 'areas of interest') (Fig. 6). On the other hand, they augment the image by inscribing new information into it, thereby making data accessible that would otherwise not be contained in the image perceptible to the operator (Calhoun & Draper 2010).
This process makes the images operational in a double sense: firstly, by turning them into an interface for "input to the human operator" (Chapanis et al. 1949, 12), and secondly, by using algorithmic processes of selection and visualization in a refined way to transform information that had previously been deemed relevant into the totality of what can be seen at all. The decision about, and the interpretation of, what is and is not relevant is made elsewhere, and remains, in principle, inaccessible to the pilot in the cockpit. Instead, it manifests itself in technical applications of engineering and, by extension, as the implementation of decisions made by military and political leaders. The Advanced Cockpit GCS just developed by the American manufacturer General Atomics is set to become the navigational system for the most frequently deployed armed drone systems (Predator and Reaper). Advertised by General Atomics as "leading the situational awareness revolution" (General Atomics 2016), it brings the applications of synthetic vision technologies clearly into view. It is equipped with a tableau of six high-definition screens, on which the view from the non-existent cockpit window of the drone is simulated in three-dimensional renderings that run in real-time synchrony with the navigation of the unmanned aircraft (Franz 2015, 27). The spatial position of the aircraft is integrated in the simulated landscape, and data, pictograms and symbols as well as text, can be superimposed on the renderings ("Improved synthetic video with 3D graphics and moving maps" [General Atomics 2016]), thereby allowing the pilot to grasp the operationally relevant information at a single glance ("Fused, multi-source data into a Common Operational Picture (COP) on a single display" [General Atomics 2016]). Only in certain situations does the pilot navigate 'hands-on', either with a joystick or by means of the handheld console ("easy to switch between automated 'point and click' or manual 'hands on' flight operations" [General Atomics 2016]). In systems such as these, complicated flight operations, as well as routine manoeuvres such as take-off and landing, mostly occur automatically in order to improve the pilot's perception of the situation by freeing him/her from attention-demanding involvement in the navigation and allowing him/her to focus on the essential procedures during difficult manoeuvres (Cosenzo & Barnes 2012). The control station is set up so that the actual camera image plays only a subordinate, supplementary role: it can be switched on when necessary, but is otherwise hidden beneath the layers of visualization.

Controlling Controls
In the Advanced Cockpit GCS, the synthetic landscape simulates a segment of terrain unencumbered by conditions of poor visibility (due to weather, dust, poor data connection, etc.) by drawing on a diverse array of data that is processed into visual symbology comprehensible to the eye of the operator.12 It is precisely here that we can see the difference from the photographic or filmic images that Benjamin saw as media for the "expansion of the field of the testable" (Benjamin 2003, 276). Synthetic vision images are not merely digital or simulated; they are first and foremost processed images, and, as Timothy Lenoir emphasized in his preface to Mark Hansen's New Philosophy for New Media, their very essence is 'processual' (Lenoir 2006, xxi). Their operational proficiency is rooted in their 'highly dynamic' nature: in their ability to be "modified at any moment" (Lenoir 2006, xxi), that is, to be updated so as to conform to the goal of the operation. Borrowing a term from Lev Manovich, they can be described as "image-instruments" (Manovich 2001, 167f.), i.e. as interfaces that not only depict reality but also control it by functioning as a visually definitive navigational command to the pilot. Functionally as well as in terms of its operational character, synthetic vision technology is closely related to the instruments of early aviation history, whose functionality was already critically discussed and polemically debated in terms of their alleged 'display programming' ['Anzeigeprogrammiertheit'].13 The debate at the time was a technical one that took place in the context of the transition from visual to 'blind', or instrumental, flight. When it became clear that the 'human factor' was, in fact, the most unreliable and disruptive element in the functioning of technological systems, the idea of instrument flight became increasingly important (Münnich 1937, 141f). Perception and control, however, are now as then tightly correlated, and perception, whose optimization and augmentation was and continues to be crucial in a military context, is, of course, always trained according to the dictates of a specified 'target'. Augmented and synthetic vision, like the technological modes of visualization with which their forerunners in early aviation tested the capabilities of pilots, are thus not simply aids to sharpening the (human) perceptual apparatus, which has always appeared unreliable and deficient in light of technological enhancements. They are also, with respect to psycho-technological 'display programming', to be understood as mechanisms of control that direct human action in a given situation by setting specific parameters for 'situational awareness'. They are operational in precisely this sense.
If the power over life and death, and over the command to kill, can be said to be the most fundamental control mechanism of state power, a shift occurs here that allows us to grasp the imaging technologies of simulation in their political dimension: namely, in Rancière's formulation, as a "world of total visibility" that constructs a reality "where appearance has no place to occur" (Rancière 1999, 104). In doing so, these imaging technologies describe a condition of post-democracy in which a consensual 'image of the whole' eliminates anything that contradicts the self-referentiality of this image. Henceforth, human perception and the visual means of controlling it are functional elements of the "structure of the visible" (Rancière 1999, 103) that determines what is presented as 'real', 'augmented' or 'synthetic' and that is, as such, a product of processes of selection and interpretation that adhere to predefined parameters and inhibit the possibility of the 'appearance' of anything not already contained within its logic.
The determination and application of these parameters is, as we would like to stress with respect to the examples presented in this contribution, not a matter of autonomous technologies that, under the condition of ever-advancing automation and algorithmization, have removed themselves entirely from human capacities of decision-making. Rather, it is the object of an avowedly 'interest-led' science14 that is peculiarly positioned at the intersection between state and military interests on the one hand, and the private sector and arms industries on the other. If we direct our gaze back to the technologies for the military 'operationalization of perception' that were already being investigated at the beginning of the 20th century, it becomes clear that the 'fields of the testable' have turned into the actual geographies of war.