
In June, the International Association for Computing and Philosophy (IACAP) and the International Society for Ethics and Information Technology (INSEIT) held a joint conference. Bringing together members of both organizations, the conference served as IACAP’s annual meeting as well as INSEIT’s annual conference, known as Computer Ethics: Philosophical Enquiry (CEPE). The conference was hosted by Tom Powers of the University of Delaware’s Center for Science, Ethics, and Public Policy and Department of Philosophy. The American Philosophical Association’s Committee on Philosophy and Computers also helped sponsor the conference.

Philosophers and technologists submitted papers and proposals in February, and a committee put together a program of 31 presentations and six symposia. Topics included the nature of computation, the role of computation in science, big data, privacy and surveillance, the dangers and benefits of AI, autonomous machines, persuasive technologies, research ethics, and the role of ethicists in computing and engineering.

Many of the conference participants displayed an underlying preoccupation with the ways our relationship with machines changes as machines acquire characteristics we have always considered distinctively human. Two specific concerns were the danger posed by machines as they become more autonomous, and the potential threat to human virtue as intelligent machines become capable of playing more human-like roles in sexual activities.

Machine ethics and autonomy: Bringsjord and Verdicchio

Selmer Bringsjord and Mario Verdicchio gave presentations on the dangers of machine autonomy. The basic worry motivating these discussions is this: If machines are under the control of a person, then even if the machines are powerful, their danger is limited by the intentions of the controllers. But if machines are autonomous, they are ipso facto not under control—at least not direct control—and, hence, the powerful ones may be quite dangerous. For example, an industrial trash compactor is a powerful piece of equipment that requires careful operation. But a trash compactor that autonomously chooses what to crush would be a much more formidable hazard.

Bringsjord considered a more nuanced proposal about the relationship between power, autonomy, and danger, specifically that the degree of danger could be understood as a function of the degree of power and the degree of autonomy. This would be useful since most things are at least a little dangerous; from practical and ethical perspectives, we would like to know how dangerous something is. But understanding degrees of danger in this way requires making sense of the idea of degrees of autonomy. Bringsjord aimed to accomplish this while operationalizing the concept of autonomy enough to implement it in a robot. In earlier work, Bringsjord developed a computational logic to implement what philosophers call akrasia, or weakness of will, in an actual robot. His aim in his current work is to do something similar for autonomy. In his presentation, Bringsjord outlined the features of autonomy that a logic of autonomy would have to reflect. Roughly, a robot performs some action autonomously only if the following hold: the robot actually performed the action; it entertained doing it; it entertained not doing it; it wanted to do it; it decided to do it; and it could have either done it or not done it. Bringsjord concluded that a powerful machine with a high degree of autonomy, thus understood, would indeed be quite dangerous.
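To make the shape of the proposal concrete, the conditions Bringsjord listed can be rendered as a simple conjunction of predicates. The rendering below is my own illustrative gloss, not Bringsjord’s formalism; the predicate names are placeholders, and the danger function is written only in the schematic form the proposal suggests.

```latex
% Illustrative gloss only; predicate names are placeholders, not Bringsjord's notation.
% r = the robot, a = the action under consideration.
\mathrm{Autonomous}(r, a) \;\equiv\;
  \mathrm{Did}(r, a)
  \wedge \mathrm{Entertained}(r, a) \wedge \mathrm{Entertained}(r, \neg a)
  \wedge \mathrm{Wanted}(r, a) \wedge \mathrm{Decided}(r, a)
  \wedge \mathrm{CouldHaveDone}(r, a) \wedge \mathrm{CouldHaveRefrained}(r, a)

% The nuanced proposal: danger as some function of power and autonomy,
% presumably increasing in both arguments.
\mathrm{Danger}(m) \;=\; f\big(\mathrm{Power}(m),\, \mathrm{Autonomy}(m)\big)
```

On a gloss like this, degrees of autonomy might be recovered by asking how many of the conjuncts hold, or how robustly they hold, though nothing in the presentation committed Bringsjord to any particular way of grading them.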

Again, the special danger in autonomous machines is that, to the extent they are autonomous, they are outside of human control. When present, human control is a safeguard against danger, since humans are typically bound and guided by moral judgment. For a machine beyond human control, then, it is natural to want some substitute for morality. Machine ethics is the study of the implementation of moral capacities in artificial agents. The prospect of autonomous machines makes machine ethics particularly pressing.

However, Verdicchio argued that only some forms of what we might think of as autonomy require machine ethics. Like Bringsjord, Verdicchio understands autonomy as the conjunction of a variety of factors. They agree that autonomous action requires several alternative courses of action, consideration (or, in more computational terms, simulation) of those possibilities, and something like desire or goal-directedness toward the action actually performed. But it is on this last element that Verdicchio and Bringsjord disagree. According to Verdicchio, even together with the other elements that plausibly constitute autonomy, goal-directedness is not enough to make machines distinctively dangerous. He argued that the kind of machine autonomy that should worry us—the kind that calls for machine ethics—would be realized only when the machine sets its own goals, only when it is the source of its own desires. Without this capacity, it is simply a complex machine, perhaps one that has become more complex on its own, but still one directed toward the ends of its creators or operators.

Is Verdicchio right that we can dispense with machine ethics unless machines can set their own ends? It is an interesting question. Likely, the answer depends on how we understand the scope of machine ethics. A distinctive feature of AI systems is that they are capable of solving problems or achieving goals in novel ways, including ways their programmers did not anticipate. This is the point of machine learning: Not all of the relevant information and instructions need to be given to the machine in advance; it can figure out some things on its own. So, even if the machine’s goal is set by programmers or operators, the means to this end may not be.

To frame the point in familiar philosophical terms, a machine’s instrumental desires may vary in unpredictable ways, even if its intrinsic desires are fixed. If constraining these unpredictable instrumental desires within acceptable limits is part of machine ethics, then it seems clear that machine ethics is required for some machines that lack the capacity to set their own ultimate goals. But, on the other hand, putting constraints on the means to one’s given ends is a rather thin and limited part of what we usually consider ethics. And perhaps we can think of such constraints simply as additional goals given to the machine in advance. Ultimately, whether or not we consider the construction of such constraints part of machine ethics probably depends on the generality, structure and content of these constraints. If they involve codifications of recognizably ethical concepts, then the label ‘machine ethics’ will seem appropriate. If not, then we will be more likely to withhold the label.
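The structural point can be illustrated with a minimal sketch. Everything in it is hypothetical, invented for illustration rather than drawn from any actual system: the machine’s ultimate goal is fixed by its operators, the means are searched for by the machine itself, and the constraints are simply additional conditions, supplied in advance, that candidate plans must satisfy.

```python
# Minimal illustrative sketch, not an actual machine-ethics system.
# The ultimate goal is fixed in advance; the machine searches over means on its own;
# constraints on instrumental desires are just more conditions given before deployment.

from itertools import permutations

GOAL = "deliver the package"                               # intrinsic desire: set by operators
ACTIONS = ["take highway", "cut through park",
           "cross crowded plaza", "use side streets"]      # building blocks of possible means

def achieves_goal(plan):
    # Stand-in for whatever learned model estimates a plan's effectiveness.
    return len(plan) >= 2

# Pre-specified constraints: in effect, additional goals given to the machine in advance.
CONSTRAINTS = [
    lambda plan: "cross crowded plaza" not in plan,        # e.g., do not endanger pedestrians
]

def acceptable(plan):
    return all(constraint(plan) for constraint in CONSTRAINTS)

# The machine generates means (instrumental desires) its designers never enumerated...
candidate_plans = [list(p) for r in range(1, 3) for p in permutations(ACTIONS, r)]

# ...but only constraint-respecting plans that serve the fixed goal get adopted.
chosen = [p for p in candidate_plans if achieves_goal(p) and acceptable(p)]
print(f"Goal: {GOAL}; acceptable plans: {len(chosen)}")
```

Nothing in the sketch reasons ethically; the constraints are inert conditions checked against plans the machine itself generates, which is exactly why one might hesitate to call their construction machine ethics.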

But this semantic issue should not detract from the important point raised by Verdicchio’s presentation. The autonomy of a machine that could set and adjust its own ultimate goals would raise much deeper concerns than one that could not, since such a machine might eventually abandon any constraints specified in advance.

Symposium on sex, virtue, and robots: Ess, Taddeo, and Vallor

Sophisticated, AI-powered robots with sexual abilities, or sexbots, do not yet exist, but we have every reason to believe they will soon. It is hard to imagine the ever-booming and lucrative digital sex industry not taking advantage of new advances in personally interactive devices. Sexbots were the focus of a symposium titled “Sex, Virtue, and Robots.” John Sullins moderated a discussion between the audience and a panel composed of Charles Ess, Mariarosaria Taddeo, and Shannon Vallor. The panelists applied the framework of virtue ethics to the question of whether having sex with intelligent robots would be morally problematic.

More than competing theories of normative ethics, virtue ethics puts special emphasis on human character traits. Specifically, virtue ethicists hold that actions are to be evaluated in terms of the character traits—the virtues and vices—that the actions exemplify and produce. Given that people have been using sex dolls—human-size dolls with carefully crafted sexual anatomy—and a variety of artificial masturbation devices for years, one might have thought that robots designed to provide sexual services would not raise new ethical issues. Regarding virtue ethics in particular, one might think that sex with robots is no different, in relation to a person’s character, from the use of masturbation aids. But that is not quite so obvious as it might have seemed at first. What distinguishes sexbots is AI. Not only are sexbots supposed to be realistic in their look and feel, but the interactive experience they promise is intended to be realistic as well.

So, sexbots promise a more interactive, personalized—perhaps intimate—experience than the minimally animated sex dolls and toys of today. Why would that matter? The panelists were largely in agreement that there was nothing intrinsically wrong with people having occasional sexual experiences with robots. But they all shared some version of the worry that sex with AI-powered robots might displace other intrinsically valuable sorts of activity, as well as the character development those activities might enable and promote. Ess invoked philosopher Sara Ruddick’s distinction between complete sex and merely good sex, the former being distinguished not just by the participants’ enjoyment but also by equal, mutual desire and respect between the individuals involved. Ess’s worry is that, without a capacity for deep practical wisdom and the genuine sort of autonomy we believe ourselves to have, sexbots couldn’t possibly be participants in complete sex. If robots became a replacement for human sexual partners, then, complete sex would be an important good on which we would miss out.

One part of Taddeo’s worry is quite similar. Her focus was eros—an ancient Greek conception of love discussed by Plato. As Taddeo characterized it, eros is a maddening kind of love, the experience of which shapes a person’s character. Taddeo’s concern with eros is similar to Ess’s concern with complete sex. In both cases, the worry is that, to the extent that sexbots replace human partners, a distinctively valuable sort of experience would be impossible. Taddeo adduced several other pressing worries as well. One was that female sexbots would exacerbate a problem that we already find caused by pornography—specifically, the promotion of unrealistic stereotypes about women and the misogyny that this might produce. She also noted that reliance on robots for sex might complicate our romantic relationships in unfortunate ways.

Vallor’s primary worry about sexbots is similar to Ess’s concern about complete sex and Taddeo’s concern about eros. Like Ess and Taddeo, Vallor suggested that sexbots might displace some important human good. However, instead of focusing on the intrinsically desirable forms of sex and love on which we might miss out, Vallor focused on the processes of maturation and growth that come from having sex (whether good or bad) with real humans. Our human bodies are fleshy, hairy, moist, and imperfect in a variety of ways. When people are sexually immature, they react to these features of bodies with fear and disgust. Sex with humans, Vallor suggested, is part of the process of leaving behind these immature reactions. She noted that failure to outgrow these sorts of fear and disgust is associated with vices like racism, misogyny, and self-loathing. Furthermore, the persistence of such fear and disgust can inhibit the development of those virtues—like empathy, care, and courage—that have essential practical ties to our bodies. Sexbots offer the possibility of sexual gratification without engaging these biological realities. Hence, use of sexbots, to the extent it replaced sex with human persons, might result in a stunted maturation process, producing persons who are more vicious and less virtuous.

The notes of caution sounded by the panelists were generally compelling. Not only does new technology absorb our time and attention, it necessarily alters and displaces activities we otherwise would have continued. This is unfortunate when the displaced activities were valuable—or, more precisely, when the old activities were more valuable than those that displaced them. But it was not altogether clear to all of the audience members that sex with robots would be less valuable overall than the traditional option. A question along these lines was posed to the panel by Deborah Johnson, one of the conference’s keynote speakers. Her question was directed primarily at Vallor’s point about how sex facilitates the development of certain virtues. Johnson suggested that, perhaps, the elimination of traditional forms of sex would eliminate any important role for these particular virtues in sexual relations, and that, perhaps, we could still develop these virtues as they apply to other contexts. If so, a world in which we lacked both traditional sex and the virtuous traits acquired through it might be just as good as our present situation. In response, Vallor held that the practical scope of the virtuous traits sex helps us learn is broader than sexual activity alone, and so their loss would be felt in other areas, too.

Vallor’s response seems correct, though the issue ultimately depends on psychological facts about exactly what experiences the acquisition of particular character traits requires. Regardless, Johnson’s objection is exactly the sort of challenge we should take seriously. As technological change creates new sources of value at the expense of earlier sources, too often the focus is exclusively on what has been lost or exclusively on what has been gained. In contrast, a better approach looks at both, comparing the old and the new. This, of course, is not easy, and sometimes the old and the new will be incommensurable. Even so, it is vital that we bring the comparisons, the trade-offs, into clear view.

Concluding thoughts

Issues about autonomous machines and sexbots bring out two aspects of the uneasiness we experience as artificial entities become more like humans. First, we care how machines behave: as they become more autonomous and less subject to our direct control, we want their behavior to serve our needs without endangering us. Second, we care about how the behavior of machines changes us—whether it enhances or supplants our cherished human capacities and traits.

Reflection shows that the two sets of issues are bound together in complicated ways. Asking what sorts of changes are good for us calls the very notions of harm and danger into question. The risk of an industrial robot ripping a person in half is just one sort of danger. But we might well consider the potential of sexbots to arrest our development of virtue a different, but also quite fearsome, sort of danger. Furthermore, although machine ethics must attend to how machines’ choices affect persons’ health and physical well-being, a richer machine ethics would also consider how the actions of robots affect persons’ character, psychological well-being, and overall quality of life.

As more intelligent machines are developed, no doubt, we will encounter many new situations that raise difficult questions about the relationship between machine ethics and human ethics. The philosophers of IACAP and INSEIT will have plenty of important work to do for years to come.

Owen King

Owen King is the NEWEL Postdoctoral Researcher in Ethics, Well-Being, and Data Science in the Department of Philosophy at the University of Twente. His research is primarily focused on well-being, from both theoretical and practical perspectives.  He also investigates ethical issues raised by new computing and data technologies.
