Machine Morality – Living Alongside Robots

The practical & moral issues of living side-by-side with thinking machines

They’re already in the factories, the mines and on the ocean floor repairing deep-water oil rigs. They are machines that tirelessly assemble, pack, repair or transport. Robots never come in late, their standards don’t dip when they’re hungover, tired or emotional, and they can work round the clock in hot, cold, dusty or dangerous environments.

Talking about robots taking our jobs isn’t the timeliest of discussions since, in many cases, it’s already happened. But even though they build our cars, sort our mail and harvest our crops, the robots we know all work behind bars. These fast-moving, highly articulated and often massive machines exist in segregation, within areas marked by yellow and black chevrons, flashing lights and cages that protect weak, fleshy humans from their crushing, mashing power. When we say we are now entering a new era of robotics, what we mean is that we’re preparing to remove those barriers. In its current iteration, our robotic future will be one where the machines coexist with us in public and private spaces, interacting not just with the owners who control them but also with the general public, their pets and, probably, even other privately owned robots.

The thing is – no one’s entirely sure how that’s going to work out. We’ve started to address the big issues of an automated workforce, such as looming job losses. And we’ve dared to get excited about conveniences such as within-the-hour drone delivery from Amazon. But what about issues such as privacy, public safety and even morality? Surely they’re just as pressing?

While it’s certain that robots will eventually enter the public domain, until we know their form and function, how many there will be or how much autonomy they’ll have, we can only speculate about the impact they’ll have on our daily lives. So let’s do that right now, picking a few questions about how unleashing autonomous thinking machines on the world might make us question our lives, our responsibilities and even our ethical obligations…

Programming cars to kill

If every vehicle on every road was controlled by perfectly functioning AI systems, our cities and motorways would be cleaner, safer, more efficient spaces. Traffic would flow smoothly, with minimum braking distances maintained between vehicles. Yet even this system wouldn’t be 100% safe, because robot vehicles would still kill people. A tyre blowing out on a busy stretch of road, a pedestrian distracted by a phone call, a little kid running out after a ball – the outcome of each could easily be fatal. How will society feel about these road deaths when there’s no one to blame? Once the bodies have been hauled away and the blood hosed off the asphalt, how will we rationalise accidents that have been recorded on high-definition roadside CCTV, in the vehicle’s geolocation, speed and sensor data and, maybe, even by the victim’s own IoT-enabled devices and clothing? What comfort will relatives take from the news that all the vehicles worked perfectly right up to, and even beyond, the moment of sickening impact?

As dystopian as it sounds, no amount of bug fixes will ever prevent this. Self-driving cars must be programmed to kill. Here’s why…

Calculating all the known knowns

It’s not just the big-hitters of the automobile industry that are working on self-driving cars. Along with the likes of Ford, GM and BMW, relative newcomers such as Tesla, as well as tech firms Apple and Google, are also investing heavily. That’s because, by 2030, over half of vehicle sales in the US are expected to be either fully or semi-autonomous.

No two developers are using the same approach to overcome the technical difficulties. Instead, each one is employing a range of technologies to make their vehicles aware of their surroundings. Cameras ‘see’ the road, but so too do ultrasonic sensors, radar and lasers. Embedded sensors ‘feel’ the road surface, detecting loss of traction due to sand, oil, ice or rain. GPS or IoT-enabled systems let each vehicle ‘know’ where it is, as well as what weather and traffic conditions await it down the road.
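
To make that concrete, here’s a minimal sketch of how such overlapping estimates might be reconciled. The sensor names, noise figures and inverse-variance weighting below are illustrative assumptions on our part, not any manufacturer’s actual pipeline.

```python
# A minimal sketch of sensor fusion: three redundant distance estimates
# for the same obstacle, combined by inverse-variance weighting.
# Sensor names and variances are invented for illustration.

def fuse_distance(readings: dict[str, tuple[float, float]]) -> float:
    """Combine (estimate_m, variance) pairs into one weighted estimate.

    Sensors we trust more (lower variance) pull the fused value harder,
    e.g. radar in rain, lidar in clear weather.
    """
    total_weight = sum(1.0 / var for _, var in readings.values())
    return sum(est / var for est, var in readings.values()) / total_weight

readings = {
    'camera': (41.8, 4.0),   # vision: noisy at long range
    'radar':  (40.2, 1.0),   # robust to rain, fog and darkness
    'lidar':  (40.5, 0.25),  # precise, but degraded by heavy fog
}
print(f'fused obstacle distance: {fuse_distance(readings):.1f} m')
```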

In most ways, these approaches replicate what a human driver does, only with enhanced 360° awareness and without distraction. On an open road, autonomous cars mimic a safe driver by following the rules of the road. They stick to the road, stay in lane, obey speed limits and overtake safely. But real-world driving can be far messier than that. A human driver seeing a parked car can instantly deduce whether or not the occupant is about to open the door and get out, yet for an AI system, that’s a hugely complicated task. It first needs to be able to spot a human face, then detect the slightly open door and, finally, have some idea of typical human behaviour. And that’s just one of any number of complicated situations an autonomous vehicle might face every few seconds.
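
As a toy illustration of how those staged inferences might compose, consider the sketch below. The three boolean inputs stand in for real perception models, and the risk weights and threshold are entirely invented.

```python
# A toy sketch of the staged inference just described. Booleans stand in
# for real perception models (face detection, door-ajar detection, a
# behaviour model); the risk weights are invented.

def door_opening_risk(occupant_seen: bool, door_ajar: bool,
                      turning_to_exit: bool) -> float:
    """Score the chance that a parked car's occupant is about to step out."""
    if not occupant_seen:
        return 0.0      # step 1 failed: nobody visible inside
    risk = 0.3          # someone is inside: baseline caution
    if door_ajar:
        risk += 0.5     # step 2: door already cracked open
    if turning_to_exit:
        risk += 0.2     # step 3: posture suggests exiting
    return risk

# Above some planner threshold (say 0.5), the vehicle slows or edges out.
print(door_opening_risk(occupant_seen=True, door_ajar=True,
                        turning_to_exit=False))  # -> 0.8
```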

The ethics of impacts

Autonomous vehicles differ from humans in one very significant way – they don’t panic. Faced with an impending crash, a self-driving car won’t make a snap decision to yank the steering wheel one way or the other. Rather, it will react in a rational, logical way, which is why every car must be programmed to kill.

Have a look at the ‘Who lives, who dies?’ diagram below. With a human driver, any outcome will be less of a decision and more of a reaction. But for an autonomous vehicle, the result will be based on an algorithm that weighs occupant survivability against external factors.
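
A deliberately crude sketch of that weighing might look like the following. The candidate manoeuvres, harm probabilities and the tunable occupant_weight are all invented for illustration; real planners are vastly more elaborate.

```python
# A crude sketch of choosing the manoeuvre with the lowest expected harm.
# All figures and the occupant_weight knob are invented for illustration.

from dataclasses import dataclass

@dataclass
class Manoeuvre:
    name: str
    p_occupant_harm: float    # estimated chance the occupants are hurt
    p_bystander_harm: float   # estimated chance each bystander is hurt
    bystanders_at_risk: int

def expected_harm(m: Manoeuvre, occupant_weight: float = 1.0) -> float:
    """Lower is better; occupant_weight > 1 biases towards self-preservation."""
    return (occupant_weight * m.p_occupant_harm
            + m.p_bystander_harm * m.bystanders_at_risk)

options = [
    Manoeuvre('brake in lane', 0.4, 0.1, 1),
    Manoeuvre('swerve onto verge', 0.7, 0.0, 0),
    Manoeuvre('swerve towards crowd', 0.1, 0.9, 4),
]
best = min(options, key=expected_harm)
print(f'chosen manoeuvre: {best.name}')  # -> brake in lane
```

Note that occupant_weight is, in effect, the ‘Us vs them’ switch discussed below: set it above 1 and the car favours its owner at everyone else’s expense.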

Machine ethics

In 2015, Stanford professor Chris Gerdes expanded his research on self-driving technologies by teaming up with Cal Poly philosophy professor Patrick Lin to look at dilemmas like these. Polling hundreds of people via Amazon’s Mechanical Turk platform, the pair discovered that while most people favoured the idea of autonomous vehicles minimising casualties, few actually wanted to own such a vehicle. In other words, they thought that everyone but them should have a car designed to sacrifice its owner. This shows how tricky it will be to implement robot cars that are both popular and safe. Should new owners take a morality test to determine the responses of their cars? Should self-driving vehicles have an ‘Us vs them’ survivability switch next to the air con? Or will society as a whole have to get used to the idea of their own robot vehicle favouring the needs of the many over the needs of the few?

Would you work for a robot boss?

It’s a bit of a stretch to move from cars to flipping burgers and then flying planes, but stick with us here, because these next two jobs can tell us a lot about the value of a job, the importance of self-esteem and how a purely financial view doesn’t give us the full picture. Burgers first…

A fast food business is one that’s primed to be fully automated. A fixed supply chain feeds ingredients of known size, weight and texture into a production line that produces an end product notable for its homogeneity. When someone orders a Big Mac with fries, that’s all they want, no artistic flourishes necessary.

It follows that in an automated world, so-called ‘McJobs’ can and will be some of the first to be replaced by robots. A spotless, branded restaurant producing identical burgers all day and night is what the customers want, while the owner wants maximum profit from minimum operating costs. Automation can certainly deliver all this, but is it necessarily what fast food workers want?

‘McJob’ is used as a term of derision by some, but to many others, such minimum-wage service sector jobs are the only available employment opportunity. To a family on a low income, the flexible working hours may be the only way to fit a job around other commitments. So when a low-paid job is a better alternative to no job at all, would the automation of the fast food sector really eliminate the drudgery of menial labour, or would it further limit prospects for those with already limited opportunities?

There’s another possibility – that the robots will land the prime jobs in this sector, leaving only a tier of support roles for a reduced number of humans. Sooner or later, a tomato’s going to roll off every production line or a grease trap is going to get gunged up, and the most flexible way to deal with the unexpected is to hire someone. With robots feeding hundreds of customers every hour, a couple of humans could wipe, sweep, restock and unjam all the robots on even the most epically scaled production line.

While that’s a logical use of a worker, would it be a fulfilling one? There’s a certain degree of satisfaction in preparing and presenting food to hungry customers, but could degreasing robots ever tick the same number of boxes? Could it be that being subservient to a robot would make this a profoundly demeaning occupation, even compared to a similar role cleaning up after human workers? To get a fresh perspective on how we might feel about working for, rather than with, machines, let’s hold that thought and move on to being an airline pilot.

Rob is my co-pilot

In November 2014, systems on a Lufthansa Airbus A321 detected a dangerously low airspeed. To prevent stalling, the plane went into a programmed dive that could not be overridden by its pilots. Since the dive didn’t seem to increase the airspeed, the plane just kept on diving. The quick-thinking pilots eventually disabled all the fly-by-wire protections manually and took control – a procedure that no part of their training had ever covered. If they’d followed the correct procedures, everyone on board would have died.

What went wrong? It turned out that two of the plane’s three ‘angle-of-attack’ sensor vanes had frozen into position, triggering the stall warning. Since the logic system always disregarded the outlier – in this case the third, fully functioning sensor – the aircraft faithfully relied on the two frozen sensors. It needed two humans looking at the bigger picture to save the day.
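
In pseudocode terms, the voting logic might have looked something like the sketch below. The angle figures are invented and this is emphatically not Airbus’s actual implementation, but it shows how a healthy sensor can be outvoted by two that fail in the same way.

```python
# A sketch of two-out-of-three sensor voting, and of how it fails when
# two sensors freeze at the same wrong value. Figures are invented; this
# is not Airbus's actual implementation.

def voted_aoa(a: float, b: float, c: float, tolerance: float = 1.0) -> float:
    """Trust whichever pair of angle-of-attack vanes agree; drop the outlier."""
    if abs(a - b) <= tolerance:
        return (a + b) / 2    # a and b agree, so c is treated as faulty
    if abs(a - c) <= tolerance:
        return (a + c) / 2
    return (b + c) / 2        # a real system would also flag total disagreement

# Two vanes frozen at 4.2 degrees, one healthy vane reading the true angle:
# the voter sides with the frozen pair and discards the good sensor.
print(voted_aoa(4.2, 4.2, 10.0))  # -> 4.2
```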

What does this have to do with burgers? Well, both jobs show the changing roles of humans in systems that, for most of the time, can do without them. A fully automated fast food restaurant won’t need human intervention unless something goes wrong. Similarly, until the sensors on that Airbus froze up, the plane was more or less flying itself. Yet while the pilots contributed little, their crucial inputs were the difference between life and death.

In both cases, the robots do the work while the humans step in when needed, not as a role replacement but to perform an alternative function. The fast food worker cleaning the production line isn’t the secondary chef; they’re the primary cleaner. In any fully automated airline of the future, the plane itself would be the pilot, while any human operator would be more akin to an IT analyst than a pilot.

There are use cases like this in every sector: future surgeons and dentists who monitor robots wielding the scalpels, grounded fighter pilots remotely overseeing robot planes on combat missions, a single office-bound driver keeping track of a whole fleet of delivery trucks on their daily routes. Ultimately, these ‘operator’ roles may be the only jobs that remain in these sectors, and the dissatisfaction felt by a fast food cleaner may well be shared by a pilot who rarely touches the controls, or perhaps is no longer trained to do so. Will switching to safe, cheap, reliable robots feel like progress to them?

The slaves that spy on us

Most agree that the IoT is both the next big thing and just around the corner. This transformation of manufactured objects into physical extensions of the internet will, we’re told, usher in a new era of personalised products and services. It will do so by feeding real-time sensor data back to the manufacturers, letting them learn about our behaviour the same way Google learns about us through our search requests. And while this should result in better services, what it will certainly bring is a widespread yet almost silent erosion of our privacy.

A recent report from Reuters detailed Roomba manufacturer iRobot’s aim to expand into the smart home market by turning its autonomous roving cleaner bots into IoT-enabled devices capable of measuring the size of each room, along with activity levels and pedestrian footfall. By selling this data to third parties such as Apple or smart lighting companies, iRobot intends both to feed into the shared pool of lifestyle data and to open a new revenue stream for its Roombas.
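
To see how little code would stand between mapping your lounge and streaming your routines, here’s a hypothetical sketch of such a telemetry record, gated on the kind of consent flag raised below. The field names and upload behaviour are our inventions, not iRobot’s published format.

```python
# A hypothetical per-room telemetry record from a mapping robot, gated
# on an explicit consent flag. Field names and values are invented.

import json

def build_report(room: str, area_m2: float, daily_footfall: int) -> str:
    return json.dumps({
        'room': room,
        'area_m2': area_m2,                # enough to infer home size and layout
        'daily_footfall': daily_footfall,  # enough to infer daily routines
    })

def maybe_upload(report: str, user_opted_in: bool) -> None:
    if not user_opted_in:
        return                             # without this check, data simply streams out
    print('uploading:', report)            # stand-in for the real network call

maybe_upload(build_report('kitchen', 14.5, 42), user_opted_in=False)
```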

While this is of obvious commercial benefit, what’s less clear is the payoff to consumers for having their homes mapped. Unless a clear ‘opt in’ option is included, there’s also the possibility of owners unwittingly allowing such information to be constantly streamed out of their homes.

What we say, what we do

In July this year, a New Mexico police department credited an Amazon Echo with dialling 911 after a woman gave it a verbal command during an assault by her boyfriend. Following a standoff between SWAT and the armed assailant, the woman and her child were rescued, and while Amazon were quick to clarify that Alexa currently doesn’t have the capability to call the police, the continued reporting of this story has started a discussion about whether it should. Summoning help for elderly people in trouble would be one widely used application of such a feature, yet it raises the issue of what Alexa hears. More specifically, what it records…

The 911 story came hot on the heels of other recent Echo stories. In Dallas, there was a surge of Echo orders after a local TV news anchor said “Alexa, order me a dollhouse” on air, prompting many Echo-enabled homes to do just that. There’s also an ongoing murder case in Arkansas in which an Echo’s audio recording is the key piece of evidence of what occurred immediately before and after a killing. Both stories show that with Alexa, ‘always online’ may mean ‘always listening’ and might even mean ‘constantly recording’. Who has access to this data, and what they can use it for, is something that needs to be decided now. If we already know of a problem with recorded audio, imagine the privacy implications of future robots with cameras, or smart clothing streaming location and activity data to the cloud.
