
As technology evolves, so too must our ethical frameworks. It’s a point that has been made often in recent years, but with each new development it is hammered home with greater clarity. A recent experiment by the U.S. government may be the impetus for a new trend in artificial intelligence and weaponry. An Air Force B-1 bomber launched a missile in Southern California. Human pilots initially guided the missile; halfway to its destination, however, communication with the weapon was severed, leaving its computer systems to decide which of three ships to attack. This was by design: the weapon, a prototype of the Long Range Anti-Ship Missile, is built to operate without human control.

This experiment is a notable marker in the history of artificial intelligence. AI is making serious headway in many fields: medical diagnostics, stock trading, even gaming. Autonomous technology in weaponry, though, is in many ways a direct product of the Cold War. An anti-ship variant of the Tomahawk cruise missile, since retired, could hunt for Soviet vessels without human guidance. And in December 1988, the Navy’s test of a self-guiding Harpoon anti-ship missile ended in disaster. Launched from an F/A-18 Hornet fighter jet flying from the USS Constellation, the missile mistakenly targeted an Indian freighter that had wandered onto the test range, and one crew member was killed. Nonetheless, the Harpoon remains in use.

Armed drones today are typically controlled by remote human pilots, but arms makers are working to create weapons guided by artificially intelligent software. Rather than humans deciding what to target, and in certain cases whom to kill, the software itself will make such decisions. Israel, Britain, and Norway already deploy missiles and drones that can attack enemy radar, tanks, and ships without human guidance. While the details of such advances are kept secret, it’s clear that a new kind of arms race is under way.

Of course, this development raises complex ethical dilemmas. Critics argue that it will become increasingly difficult for humans to control artificially intelligent software. And by eliminating a degree of human oversight, do we make war more likely? Concerns have been raised that choosing war will become easier once the protocol is as simple as switching on a set of computers.

Are these anxieties merely a product of the stigma attached to artificial intelligence, or are they legitimate concerns? It’s difficult to say. Pop culture certainly hasn’t done the public image of artificial intelligence any favors. But if leaders in the field are any indication, there is something real to fear in its advancement. Elon Musk, technology entrepreneur and AI investor, recently voiced concern: “I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.” And it would appear that government bodies are attempting to do just that. A special UN meeting in Geneva was recently held to address the issue, with a general consensus of nations calling for increased regulation and oversight to prevent violations of humanitarian and international law. Some such regulations are already in place in the US; for instance, high-level authorization is required before developing weapons able to kill without human involvement. But with the rapid pace of technological advancement, can our regulations keep up?

In his 2013 TED Talk, Oxford AI researcher Daniel Dewey notes, “Weaponization of AI is ongoing, and accidental harms can arise from unanticipated systemic effects or from faulty assumptions. But on the whole, these sorts of harms should be manageable. Today’s AI is not so different from today’s other technologies.” Ethically, there isn’t really a problem here, at least not yet. But a solid case can be made for keeping a human behind the wheel: we need human soldiers making decisions in real time, not despite our emotional nature but because of it. Machines currently cannot comprehend the consequences of their actions. They cannot feel remorse. They cannot, at present, distinguish right from wrong. Machines can only do what we tell them to. They can only distinguish targets.

It’s difficult to say at this point whether this lack of emotion is a good or a bad thing. After all, soldiers come home with serious psychological problems because of the violence they face in war. Machines cannot be traumatized. And from a utilitarian perspective, weapons aided by artificial intelligence may actually reduce civilian casualties. Britain’s ‘fire and forget’ Brimstone missiles, for instance, can distinguish tanks from cars and buses without human aid, and they can communicate with one another, sharing information about targets and surroundings. But what happens when artificially intelligent computers begin to act in unexpected ways?

AI, in its current form, operates by systems and rules dictated by humans. It follows that if our missiles hit the wrong targets, thus performing ‘morally reprehensible’ actions, the problem lies in the rules and systems designed by humans. On one hand, machines follow rules and codes grounded in the logic of their systems, whereas human judgment can be clouded by emotion. Where computers follow logic, people can be reckless and unruly. Computers follow commands. On the other hand, sometimes it is necessary to disobey commands. Then again, disobeying commands is something people also struggle with in the face of authority. After all, it wasn’t an artificially intelligent machine behind the massacre at My Lai. It wasn’t self-guiding manmade machines that carried out the Holocaust. It was people.

The problem is that AI is such a fledgling field that we don’t yet know how advanced forms of it will react in any given scenario. That is not to say we ought to halt the progression of such technologies; rather, we should proceed, as Musk suggests, with immense caution. Currently, there is no morally relevant difference between a missile’s software deciding which target to destroy and an individual making the decision. If you follow the logic of the computer’s choice, it was never really up to the computer to begin with. The responsibility belongs to the men and women who design and program these weapons, and the key to all of this is how the computers are designed to make decisions. What if we could program varying degrees of oversight into the weapons themselves, so that general codes such as, say, the Geneva Conventions are inherent within them? That could save lives and prevent tragedies.
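To make that idea slightly more concrete, the sketch below imagines what an embedded rule layer might look like in software. It is a toy illustration only, not a description of any real weapons system: the target categories, the Target type, and the engagement_permitted check are all invented for this example, and real targeting or treaty-compliance logic would be vastly more complicated.

```python
from dataclasses import dataclass
from enum import Enum, auto

# Hypothetical target categories; real-world classification is far harder.
class Category(Enum):
    MILITARY_VESSEL = auto()
    CIVILIAN_VESSEL = auto()
    MEDICAL_TRANSPORT = auto()
    UNKNOWN = auto()

# Categories the rule layer treats as protected, loosely echoing the kinds
# of distinctions the Geneva Conventions draw between combatants and others.
PROTECTED = {Category.CIVILIAN_VESSEL, Category.MEDICAL_TRANSPORT}

@dataclass
class Target:
    identifier: str
    category: Category
    classification_confidence: float  # 0.0 to 1.0

def engagement_permitted(target: Target, min_confidence: float = 0.95) -> bool:
    """Allow engagement only for confidently classified, non-protected targets.

    Anything protected, unknown, or uncertain is refused and deferred to a
    human operator. This is the 'varying degree of oversight' idea: the
    machine may only say yes in the narrowest, clearest cases.
    """
    if target.category in PROTECTED or target.category is Category.UNKNOWN:
        return False
    if target.classification_confidence < min_confidence:
        return False
    return True

if __name__ == "__main__":
    contacts = [
        Target("alpha", Category.MILITARY_VESSEL, 0.98),
        Target("bravo", Category.CIVILIAN_VESSEL, 0.99),
        Target("charlie", Category.MILITARY_VESSEL, 0.60),
    ]
    for contact in contacts:
        verdict = "engage" if engagement_permitted(contact) else "defer to human"
        print(f"{contact.identifier}: {verdict}")
```

Even in this cartoon form, the hard part is obvious: everything hinges on the classifier that assigns the category and the confidence score, which is exactly where human judgment still matters most.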

It’s clear by this point that autonomous weaponry is a nuanced topic with no clear outcome. We have a responsibility as a society to figure out where the line is drawn. An ethical arms race must match, step for step, each advance in the autonomous weapons arms race. And while the path forward is certainly fraught with pitfalls and risks, we as a society now have an opportunity to make something better than ourselves. Indeed, the very fact that we are more invested in the weaponization of artificial intelligence than in, say, more benevolent undertakings suggests there are serious, deep-seated problems with our values. But on the whole, technology has made the world better. And with a guided advancement of AI, we can essentially upgrade our own moral codes.

Of course, programming ethical frameworks into artificial intelligence is easier said than done. The issue quickly becomes one of the source of morality, which is an inherently slippery question. Morality is a construct; it does not exist outside of our understanding of it. Human morality is expressed in such myriad ways across different historical contexts and cultures that it’s nearly impossible to discuss without filtering it through cultural and historical bias. It’s difficult to talk about clearly and concisely, and even harder to parse when it comes to programming. But while morality as a societal construct takes many forms, what can be said, if anything, is that it serves a regulatory function for humans. We can start from there.

But there may be an even deeper moral concern at play. Without directly experiencing the consequences of war, are we desensitizing ourselves to its innate horror? And without experiencing that horror, are we sheltering ourselves from the underlying problems that cause war in the first place? Perhaps the real question is this: is war essentially a human undertaking? While there are abundant examples of human savagery, atrocities such as the Holocaust lie outside the norm of human behavior, even in wartime. The very fact that one can point to these incidents and evoke disgust indicates that humans, as a general rule, view such actions as repugnant. Such events are not representative of society as a whole; they are its gravest failures. The fundamental trick is to make what is innate to us innate to our machines. If we can program distinct ethical frameworks into our technology, we’ll have passed on a higher standard to new generations than we ever could have hoped to see in our own lifetimes.

At the heart of this discussion is the distinction between humans and machines. But it turns out that all of the assumptions behind this dichotomy are unsound. After all, the human brain really is just an intensely complicated computer. In turn, artificially intelligent software is really just a manifestation of humanity in another form. Ethically, there is no real distinction. In the coming decades, we will make a choice about the things that we value most, and we will instill that into our software. That choice will directly influence how humanity evolves. So what do we value?

David Stockdale

David Stockdale is a freelance writer from the Chicagoland area. His political columns and book reviews have been featured in AND Magazine. His fictional work has appeared in Electric Rather, The Commonline Journal, Midwest Literary Magazine and Go Read Your Lunch.  Two of his essays are featured in A Practical Guide to Digital Journalism Ethics. David can be reached at , and his URL is http://davidstockdale.tumblr.com/.
