
Enjoy Your Symptom

Despite its exclusion from consideration, the machine has been and remains fundamental to the system of ethics. In other words, moral theory and practice, although excluding the machine from explicit consideration, need and depend on it for their own systematicity. All too often, however, this fact is missed because of the way we have (mis)understood and restricted the definition of the machine. Typically, the word “machine” is understood and characterized as a physical mechanism. “We have a naïve notion of a machine as a box with motors, gears, and whatnot in it” (Hall, 2001, unpaginated). It is, for example, the spring-driven mechanical clock, introduced in Europe around the middle of the 16th century, that served as the principal machinic prototype for much of the modern period. For Descartes, the mechanical clock, with its intricate gears, was a model of the mindless animal body, which moves itself and responds to stimuli like a well-fashioned mechanism. In Sir Isaac Newton’s Philosophiae Naturalis Principia Mathematica, this image was extended to cover the entirety of physical reality, introducing a concept that has come to be called the “clockwork universe” (Newton, 1972). Even after technology had advanced well beyond the gears, springs, and levers of the clock, philosophers continued to fixate on mechanics. For Martin Heidegger (1977), for example, technology was restricted to mechanical apparatuses: sawmills, hydroelectric power plants, high-frequency radar stations, and jet aircraft (p. 5).

This particular definition of the machine is not only restrictive but, as Hall (2001) argues, “incapable of dealing with the machines of the future” (unpaginated). According to Hall, a machine is not simply a combination of gears and motors. It is a set of rules, instructions, or messages.

The most important machine of the twentieth century wasn’t a physical thing at all. It was the Turing Machine, and it was a mathematical idea. It provided the theoretical basis for computers . . . This theoretical concept of a machine as a pattern of operations which could be implemented in a number of ways is called a virtual machine. (Hall, 2001, unpaginated)

Understood in this fashion, “machine” is not merely a collection of physical springs and gears but a system of encoded instructions, an algorithm, which may be implemented and embodied in any number of ways. This general definition of the machine covers mechanical systems, like clocks that implement rules of synchronization in the form of physical space marked out on rotating gears; biological systems, like animals and plants that are composed of and operate by following instructions embedded in their genetic code; and information processing devices, like the computer, which performs different operations based on various program instructions stipulated by software. As Donna Haraway (1991) has argued, following the innovations introduced by Norbert Wiener, we are all understood and constituted as mechanisms of communication.
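Hall's point that a virtual machine is a pattern of operations implementable "in a number of ways" can be made concrete with a minimal sketch. Everything here (the program, both interpreters) is invented for illustration and is not drawn from Hall's text: the same sequence of instructions is realized once in ordinary arithmetic and once over tally strings, yet both embody the same "machine."

```python
# A minimal sketch of the "virtual machine" idea: one pattern of operations
# (the machine) realized by two different physical-symbolic embodiments.
# All names and the toy instruction set are illustrative assumptions.

# The "machine" itself is just an encoded pattern: double, then increment.
PROGRAM = [("mul", 2), ("add", 1)]

def run_with_arithmetic(program, value):
    """One embodiment: the pattern executed over ordinary integers."""
    for op, operand in program:
        if op == "mul":
            value *= operand
        elif op == "add":
            value += operand
    return value

def run_with_tallies(program, value):
    """A second embodiment: the same pattern executed over tally strings."""
    tally = "|" * value
    for op, operand in program:
        if op == "mul":
            tally *= operand
        elif op == "add":
            tally += "|" * operand
    return len(tally)

# Both embodiments realize the same virtual machine on the same input.
assert run_with_arithmetic(PROGRAM, 5) == run_with_tallies(PROGRAM, 5) == 11
```

The substrate differs (integers versus strings), but the machine, understood as the rule pattern, is identical in both cases.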

If the machine, according to this general definition, is a pattern of operations or a set of pre-defined instructions, then ethics has been and continues to be mechanistic. According to the English moralist Henry Sidgwick (1981), “the aim of Ethics is to systematize and free from error the apparent cognitions that most men have of the rightness or reasonableness of conduct” (p. 77). Western conceptions of morality customarily consist in systematic rules of behavior that can be encoded, like an algorithm, and implemented by different moral agents in a number of circumstances and situations. They are, in short, an instruction set that is designed to direct behavior and govern conduct. Take, for example, the Ten Commandments, the cornerstone of Judeo-Christian ethics. These ten rules constitute something of a moral subroutine that not only prescribes correct operations for human beings but does so in a way that is abstracted from the particulars of circumstance, personality, and other empirical accidents. “Thou shalt not kill” is a general prohibition against murder that applies in any number of situations where one human being confronts another. Like an algorithm, the statements contained within the Ten Commandments are general operations that can be applied to any particular set of data.
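The analogy between a commandment and an algorithm can be sketched directly. The rule function and the sample situations below are invented for illustration; the point is only structural: one general operation, abstracted from circumstance, evaluates any particular input.

```python
# Hedged illustration of a commandment as an algorithm: a single general
# prohibition applied to any particular situation. The rule encoding and
# the situations are illustrative assumptions, not part of the source text.

def thou_shalt_not_kill(situation):
    """A general rule, indifferent to who the agent is or why they act:
    return True if the action is permitted under this one prohibition."""
    return situation.get("action") != "kill"

situations = [
    {"agent": "Cain", "action": "kill", "patient": "Abel"},
    {"agent": "Ruth", "action": "glean", "patient": "field"},
]

# Like an algorithm, the same operation runs over any particular data.
verdicts = [thou_shalt_not_kill(s) for s in situations]
```

The rule never inspects the agent's personality or circumstances, which is exactly the abstraction from "empirical accidents" described above.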

Similarly, Immanuel Kant’s moral philosophy is founded on and structured by fundamental rules or what he calls, in a comparison to natural science, “practical laws.” These practical laws are “categorical imperatives.” That is, they are not merely subjective maxims that apply to a particular person’s will under a specific set of circumstances. Instead, they must be universally and objectively valid for the will of every rational being in every possible circumstance.

Laws must completely determine the will as will, even before I ask whether I am capable of achieving a desired effect or what should be done to realize it. They must thus be categorical; otherwise they would not be laws, for they would lack the necessity which, in order to be practical, must be completely independent of pathological conditions, i.e., conditions only contingently related to the will. (Kant, 1985, p. 18)

For Kant, moral action is programmed by principles of pure practical reason—universal laws that are not only abstracted from every empirical condition but applicable to any and all rational agents. It may be said, therefore, that Kant, who took physics and mathematics as the model for a wholesale transformation of the procedures of philosophy, mechanized ethics in a way that was similar to Newton’s mechanization of physical science.

Finally, even the pragmatic alternative to deontological ethics, utilitarianism, operates by a kind of systematic moral computation or what Jeremy Bentham (1988) called “moral arithmetic.” The core utilitarian principle, “seek to act in such a way as to promote the greatest quantity and quality of happiness for the greatest number,” is a general formula that subsequently requires considerable processing to crunch the numbers and decide the best possible outcome. In fact, Michael Anderson, Susan Leigh Anderson, and Chris Armen (2004) have not only constructed computer-based “ethical advisors” but argue that such machines might have an advantage over a human being in following utilitarian theory, because of the sheer number of variables that usually need to be taken into account and calculated accurately (p. 2). For this reason, Joseph Nadeau (2006) has even suggested that machines might be more moral, that is, less biased and more reasonable in executing moral decision-making. In all these cases, ethics—which, according to Sidgwick (1981, p. 77), aims to systematize human cognition and conduct—conforms to the characterization of what is called “the virtual machine.” Commandments, moral imperatives, ethical principles, codes of conduct, practical laws . . . these all endeavor to provide a clear set of instructions or patterns of operation that are designed to program and direct human social behavior and interaction.
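The "moral arithmetic" described above can be sketched in a few lines. The actions, parties, and utility figures below are entirely invented, and this is a caricature of utilitarian calculation under simplifying assumptions (commensurable, additive utilities), not a rendering of Bentham's felicific calculus or of Anderson, Anderson, and Armen's advisor system.

```python
# A hedged sketch of utilitarian computation: sum the expected utility of
# each action across all affected parties, then select the greatest total.
# All data and names are illustrative assumptions.

def utilitarian_choice(actions):
    """Return the action whose summed utility across affected parties
    is greatest, i.e., the 'greatest happiness' by simple addition."""
    return max(actions, key=lambda a: sum(a["utilities"].values()))

actions = [
    {"name": "build_park", "utilities": {"residents": 8, "developer": -3}},
    {"name": "build_mall", "utilities": {"residents": 2, "developer": 6}},
]

best = utilitarian_choice(actions)  # mall: 2 + 6 = 8 beats park: 8 - 3 = 5
```

Even this toy version shows why the cited authors think machines have an edge: the procedure is pure bookkeeping, and its difficulty for humans lies only in the number of variables to tally.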
