
In wondering what we've achieved over the past century of technological progress, Philip K. Dick put it best: Do androids dream of electric sheep? The question was the title of his 1968 novel, which later inspired Ridley Scott's iconic future-noir, Blade Runner. But Dick was asking it of an age in which modern computing was only just breaching the horizon of popular consciousness. And the question itself, which packed all of contemporaneity's irony, gloom and anxiety into a six-word phrase, is perhaps now more pertinent than Dick ever imagined.

The book details a dystopian future in which bounty hunter Rick Deckard is contracted to "retire" six androids. The hitch is that the androids are hardly distinguishable from their human counterparts. With our ongoing advancements in artificial intelligence, so-called 'cyborg rights,' and questions of technologized, ersatz emotion, it's clear that we've evolved into the age of the androids: we communicate with mobile devices, we use programmed apps to order our lives and our priorities, we use the Internet to help us recall information otherwise inaccessible to us, we use technological appendages to aid our own sense function. So, how do we work towards the ethical evolution of technology?

A recent article by technology writer Konstantin Kakaes on Slate.com asks whether we can teach computers what “truth” means. He suggests two ideas of truth: one mathematical (1+1=2), and one ‘moral,’ citing the use of the word ‘truth’ as it appears in the Declaration of Independence (“We hold these truths to be self-evident…”). On one hand, it can be said that mathematical truths are self-evident: one and one can’t possibly make anything but two. But according to scores of 20th-century philosophers, folks like Gottlob Frege, who had one foot in mathematics and the other in philosophy, mathematics corresponds to a certain logic. Kakaes quotes Frege on this point, for whom logic means “those laws of thought that transcend all particulars.” In this way, mathematics is secondary to the ‘laws’ of logic. Math works because it abides by a certain logic, which suggests that if this logic could be programmed, computers could learn, at least, a type of moral truth.

Here, Kakaes refers to computer scientists’ efforts to encode certain ethical scenarios into AI programs. The Yale Shooting Problem is one such instance (and, given the high rate of violent shootings at large academic institutions, it’s perhaps not so far-fetched a scenario). As Kakaes explains, the scenario “aims to formally codify the fact that an unloaded shotgun, if loaded and then shot at a person, would kill the person”; 1+1=2. On a simple level, this is how some computer games work: you have ammo, you load your gun, you aim at a bad guy, you shoot the gun, and the bad guy dies. But there is also an infinite set of non-quantifiables, such as environmental quality, age of ammunition, firing range, dispersion of shot, and point of entry, that don’t always allow the scenario to be ‘true’; 1+1+X=Y. A computer doesn’t know X, so it cannot know Y.
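To make the contrast concrete, here is a minimal sketch in Python. It is my own illustration, not anything from Kakaes’s article or from the Yale Shooting Problem literature, and every name and outcome in it is hypothetical. The first function is the game-logic version, where everything relevant is encoded; the second leaves the non-quantifiables as an unknown X, and so cannot resolve Y.

```python
# The '1 + 1 = 2' version: the scenario is fully specified, so the outcome
# follows mechanically, the way it does in a simple computer game.
def game_logic_outcome(loaded, fired):
    if loaded and fired:
        return "target dies"
    return "target lives"


# The '1 + 1 + X = Y' version: environmental quality, age of ammunition,
# firing range, dispersion of shot, point of entry and so on are never
# quantified. The program does not know X, so it cannot know Y.
def real_world_outcome(loaded, fired, unknown_factors=None):
    if not (loaded and fired):
        return "target lives"
    if unknown_factors is None:
        return "unknown"        # X is missing, so Y is unresolved
    return "still unknown"      # the set of non-quantifiables is open-ended


print(game_logic_outcome(loaded=True, fired=True))   # -> target dies
print(real_world_outcome(loaded=True, fired=True))   # -> unknown
```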

There is not yet a way for computer programs to account for these non-quantifiables. Kakaes argues, “to encode common-sense knowledge, computer scientists need a way to allow for events to take place.” He outlines the ‘ramification problem’ here, which is a form of John McCarthy and Patrick J. Hayes’ “frame problem” as stated in their 1969 article, “Some Philosophical Problems from the Standpoint of Artificial Intelligence”: how can you determine which things remain the same in a changing world?
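As a rough illustration of where this bites, and assuming nothing about McCarthy and Hayes’ actual formalism, consider a toy state-update function in Python; the names are hypothetical. An action description lists only what it changes directly, and the program needs a default, the frame assumption, for everything it does not mention, which is exactly where indirect effects (the ramifications) slip through.

```python
# A toy state update: the action lists only its direct effects; every
# unmentioned fact is copied forward unchanged (the frame assumption).
def apply_action(state, effects):
    new_state = dict(state)    # which things remain the same in a changing world?
    new_state.update(effects)  # only the listed effects change
    return new_state


state = {"gun_loaded": False, "target_alive": True, "room_quiet": True}
state = apply_action(state, {"gun_loaded": True})      # load the gun
state = apply_action(state, {"target_alive": False})   # fire at the target
print(state)
# {'gun_loaded': True, 'target_alive': False, 'room_quiet': True}
```

The room is still marked quiet: the noise of the gunshot is an indirect effect nobody listed, so the frame assumption carried the old fact forward, the ramification problem in miniature.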

Here is where we face the difficult question of types of truth: mathematical truths are apparently always true, whereas ‘moral’ truths (“We hold these truths to be self-evident…”) are perhaps only nominally ‘true,’ for lack of a better word. But then, to what extent is mathematics a reflection of the human struggle to make sense of the world? Is mathematics a type of moral truth? Consider the nameless protagonist in Dostoyevsky’s Notes From Underground, who, in defending the human impulse towards irrationality, argues that “the formula ‘two and two make five’ is not without its attractions.” In his own cosmology, the person for whom two and two make five is operating by a perfectly sensible concept, and may in fact be ordering his life around this ‘logic.’ This logic defies the clear, apparent, dominant logic of things, but if this person who insists on the correctness of incorrect math equations were an android, wouldn’t its insistence on irrationality be what makes it human?

In order to support the ethical evolution of technology as it now tends towards artificial intelligence and bio-technical integration, we need to understand the operations of our own human intelligence, especially as these artificialities are being made (or programmed) in our own image. It’s a Utopian vision to believe that a computer can be taught truth because it supposes there is an absolute truth that can be condensed into a string of code. In Notes From Underground, the protagonist says the only thing that’s reliable about people is that they’re unreliable. That an unreliable narrator is saying this further complicates and supports the ‘certainty’ of human unreliability.

An ethical approach to technological evolution must also account for the existential intricacies of ‘logic.’ Technological evolution is informed by our self-conception and self-consciousness, and despite our best intentions, technology is flawed for the same reason. There is no ‘perfect’ technology, which is one reason why teaching a computer ‘truth’ is not just a matter of proper coding. Ideally, technology complements our human function, even if we are, in many ways, subservient to the whims of technological evolution. To assume that a computer can be taught truth is to assume that we understand truth, and this assumption completely discredits the existential structure of logic.

Consider the case of artist/composer/cyborg activist Neil Harbisson. Born in 1982 and later diagnosed with achromatopsia (a condition that allows him to see only in black and white), Harbisson has had a colorful career as an artist and activist. In 2003, during his second year at Dartington College of Arts, he began working with Plymouth University student Adam Montandon to create the ‘eyeborg,’ a head-mounted device that allows Harbisson to ‘see’ color. He was later photographed for his passport wearing the device, which has since been treated as a formal recognition of Harbisson’s status as a cyborg. In 2010 he went on to found the Cyborg Foundation, “a nonprofit organization that aims to help people become cyborgs (extend their senses by applying cybernetics to the organism); defend cyborg rights and promote the use of cybernetics in the arts.”

In a recent interview, Harbisson says, “I believe that being a cyborg is a feeling, it's when you feel that a cybernetic device is no longer an external element but a part of your organism.” He also speculates that in the space of the next century, cyborgs will be ‘normal.’ The key here, however, is that cyborg-ness is not a logical on-off state, but a feeling. That is, it’s a human state, not a machine state. Note the existential language: the cyborg exists in the same way other conscious creatures exist; that is, in the awareness of their own existence. Currently, the cyborg exists as a human with sense functionality enhanced by technological devices. In this way, there is no concern for proper coding, as the human herself contains the proper programmatic framework for enhanced technological function and corresponding technological development, or evolution.

Perhaps the ‘truth’ impasse Kakaes discusses is a type of techno-Copernican conception, and this idea of cyborg-as-feeling a techno-Ptolemaic correction. In the techno-Copernican universe, the evolution of technology looks outside, to technological and robotic devices, as the impetus for resolving the ethical equation. A techno-Ptolemaic correction locates that impetus inwards, on us. Of the two positions, the latter is at least more human (or cyborg), and if not more ethical in its own right, it is nonetheless situated in the dynamic existential framework that allows us to have an ethical conversation about how technological evolution can and should progress. Algorithms can’t tell us what X equals; only we can. We’re already dreaming of electric sheep.

Benjamin van Loon

Benjamin van Loon is a writer, researcher, and communications professional living in Chicago, IL. He holds a master’s degree in communications and media from Northeastern Illinois University and bachelor’s degrees in English and philosophy from North Park University. Follow him on Twitter @benvanloon and view more of his work at www.benvanloon.com.
