
Will big business compromise the ethics of artificial intelligence?

We are entering a new era of technology adoption. We have passed the point of using new tools to perform old tasks, and our behavior is changing. I grew up with a 1970s television that had four push-buttons on it; these were laboriously and manually tuned to U.K. channels BBC1, BBC2, and ITV. (The mysterious fourth button was 1970s redundancy-in-design at its finest, and came into use with the launch of Channel 4.) Two decades later, I had a remote control in my hand, enjoying the novelty of flicking through an ever-growing list of cable channels each evening. In 2017, the dizzying array of shows across multiple streaming services has changed the game. You can't simply flick through the evening's listings. When does that episode stream? Is that the U.K. or U.S. release time? Which provider carries the new show? These days, I rely on voice control, asking Alexa to find a title, an episode, or a genre. Artificial intelligence (AI) has come to the rescue: it is a tool for its time, and it has brought a behavioral shift in my home.

There is no obvious ethical problem with a company producing a clever TV remote. This, however, is the tip of the artificial intelligence iceberg, and it is not at all clear whether ethics is a priority for those developing AI tools.

Do you always know if you’re talking to a machine?

AI comes in many forms, from the game-winning, game-changing Go player AlphaGo, to machine learning running in the background of software, to chatbots at the forefront of interaction. In the past three months, I have watched robots discussing sushi and sashimi with startled humans around a table (in Austin, at South by Southwest's Japan Factory). I've talked to a chatbot about accounts payable on a financial system (in London, at Sage Summit). I've engaged with several customer service chat representatives that clearly identify themselves as chatbots. It's also very likely that I've commented on, liked, or shared a post by a chatbot on a social media channel without being aware of its source. Whether you know it or not, you probably have too.

Most interactions happen via a keyboard, which instantly removes the voice and visual cues that identify humans. We judge intelligence by verbal reasoning: if machines can answer questions in a manner indistinguishable from humans, they might be considered intelligent. This is the fundamental test of machine intelligence devised by Alan Turing in 1950, and by some measures it has already been passed. We can no longer be sure whether we are dealing with people or machines in our interactions across the World Wide Web.

Research published in March 2017 on online human-bot interactions suggests that as many as 15 percent of Twitter accounts are not, in fact, run by humans. The MIT Technology Review suggested in November 2016 that around 20 percent of all election-related tweets on the day before the U.S. presidential election were made by an army of influential chatbots. Such sheer volume can distort online debate and may have influenced the outcome of the election. Similar concerns were raised about the U.K.'s Brexit referendum earlier that year.

How can we trust the entities with whom we interact to uphold human values and exercise ethical judgement? With whom does the responsibility lie? And will ethics be a priority for those who develop these tools?

Teaching artificial intelligence to think

AI started with a very narrow focus: think of IBM's Deep Blue beating Garry Kasparov at chess. The goal was to select the right chess move from hundreds of thousands of possibilities, each with strategic implications. Experts agree that, because of this niche programming, the same machine would struggle to win a round of tic-tac-toe against a five-year-old. Artificial intelligence has since evolved to cover a broader spectrum. The continued success of AlphaGo, which consistently wins against champions in a far more complex gaming environment, is a product of machine learning, the development of artificial intuition, and a process that mimics human practice and study. The responsibility of programmers is to kick-start the machine learning and give the new mind proper direction. This is standard practice, and much faster than learning from scratch. It is worth noting, however, that the co-creator of AlphaGo, artificial intelligence researcher, neuroscientist, and DeepMind co-founder Dr. Demis Hassabis, believes that AI could learn from a zero state rather than being "supervised" to start its learning in a particular direction.

Guiding the learning of a new intelligence is an onerous responsibility. Dr. Hassabis recently spoke about the challenges facing the builders of AI. The majority of AI is built with benevolent intent, he said, adding that because so few people are able to do this work, the current risk of overtly negative programming is small. I am still uneasy, having grown up with the fiction of the evil scientist in a hidden volcano lair, from "Thunderbirds" to James Bond films. The growing body of evidence that social media is rife with chatbots, indistinguishable from human accounts and influencing popular opinion, suggests that ethical behavior is not always a priority.

Corporations are leading the way in bringing artificial intelligence into the public domain, using chatbots to enhance interaction with customers. In a world dominated by social media, we have an expectation of responsiveness that a human workforce cannot meet: chatbots allow an instant rapport to develop, keeping customers on your site, building loyalty, and improving satisfaction. There is significant effort on the part of large developers to build ethics in from the start. IBM Watson CTO Rob High recently outlined several key areas that ethical developers must consider, including establishing basic trust, openly identifying as a chatbot, and managing the data that is shared in a human-bot interaction. It's a legal minefield.

A simple example has parallels in flawed goal-setting. Human behavior changes according to the goals that are set, often in ways management does not expect. A goal to hit a raw sales target can have unscrupulous teams discounting for volume and losing margin even as they reach their target. We recognize this as unethical because we are human, and ingrained ethics will ensure that such behavior is short-lived, whether through the actions of management or as a result of peer pressure. A chatbot needs to have that ethical "gut feeling" programmed in, and that takes time, effort, and money.
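
To see why a raw target makes a flawed objective, consider a minimal sketch in Python. The scenario, numbers, and function names below are invented purely for illustration: an optimizer given only a unit-sales goal discounts aggressively and hits its target at a loss, while one whose objective includes margin behaves quite differently.

```python
# Hypothetical sketch of flawed goal-setting; all numbers are invented.

def simulate(discount):
    """Return (units_sold, total_margin) for a discount rate of 0.0-0.5."""
    base_units, list_price, unit_cost = 100, 10.0, 8.0
    units = base_units * (1 + 4 * discount)   # discounting drives volume up
    price = list_price * (1 - discount)
    return units, units * (price - unit_cost)

candidates = [d / 100 for d in range(51)]     # discounts from 0% to 50%

# Objective 1: maximize units sold (the "raw sales target").
d1 = max(candidates, key=lambda d: simulate(d)[0])
# Objective 2: maximize margin instead.
d2 = max(candidates, key=lambda d: simulate(d)[1])

for label, d in [("Units-only goal", d1), ("Margin-aware goal", d2)]:
    units, margin = simulate(d)
    print(f"{label}: discount={d:.0%}, units={units:.0f}, margin={margin:.0f}")
```

The units-only optimizer "succeeds" by tripling volume while losing money on every sale; nothing in its objective tells it that this is wrong, and that gap is exactly what an ethical "gut feeling" has to fill.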

Investment in ethics requires diverse and ethical investors

Much of the innovation in emerging technology is coming from the exciting tech startup sector. Ideas fly around from talented millennials, grabbing the imagination and hitting the tech and investment headlines. The skills of these young founders are not in question, but their business models have historically left much to be desired. We are currently watching the struggles of one of the early tech successes, Uber, as its ethical stance comes into question from all sides: CEO Travis Kalanick has recently stepped aside in the face of growing criticism of the business's culture and values. The Financial Times cites the "reckless pursuit of increased shareholder value" as a dangerous habit. Unfortunately, the rapid success of Silicon Valley businesses over the past 15 years has led to aggressive venture capital investment based on growth and users rather than reliability and revenue, and that investment is weighted heavily towards white male founders. According to AOL founder Steve Case, speaking at South by Southwest in Austin this year, only 10 percent of U.S. tech investments went to women, and just 1 percent to African-Americans. This "bro culture" is unhealthy. Dan Lyons' book "Disrupted: My Misadventure in the Startup Bubble" describes the "grow fast, lose money, go public, cash out" process and the male-and-pale majority in the industry. At what point in this gold rush do founders take a sober look at ethical business and diverse, ethical technology?

Developers and the non-technical parties around them, from investors to business leaders, share the responsibility of ensuring that artificial intelligence retains its "benevolent intent" and reflects the best of our diverse human society. Ethical AI can only be guaranteed by ethical business practices. We can only hope that these evolve fast enough to keep up with the rapid advances in technology.

Kate Baucherel

Kate Baucherel BA(Hons) FCMA is a digital strategist specialising in emerging tech, particularly blockchain and distributed ledger technology. She is COO of City Web Consultants, working on the application of blockchain, AR/VR, and machine learning for blue-chip clients in the U.K. and overseas. Kate's first job was with an IBM business partner in Denver, back when the AS/400 was a really cool piece of hardware and the World Wide Web didn't exist. She has held senior technical and financial roles in businesses across multiple sectors and is a published author of non-fiction and sci-fi. Find out more at www.katebaucherel.com.
