Legal and Safety Issues Are Looming Around Ethics, AI, and Robots

As artificial intelligence and machine learning algorithms continue to advance, making sure that AI systems can explain their decision-making will be extremely important, especially after an accident that results in injury or death, says Matthew Linton, a leading legal expert on workplace safety.

Linton will be discussing these and other legal issues around AI and robotics at RoboBusiness 2018, to be held Sept. 25-27, 2018, in Santa Clara, Calif. (Robotics Business Review produces RoboBusiness.)

The panel session, “Safe vs. Safer: Is Public Perception on AI and Robotics Changing?” will take place on Thursday, Sept. 27, at 4:45 p.m., as the closing keynote for RoboBusiness. Joining Linton on the panel are Dawn N. Castillo, MPH, from the National Institute for Occupational Safety and Health (NIOSH), and Jeff Burnstein, president of the Association for Advancing Automation (A3). Linton spoke with Robotics Business Review ahead of the show to discuss workplace safety, AI, ethics, and robotics.

Why did the AI do what it did?

Linton outlined a hypothetical example of a 200-ton autonomous coal truck moving down a path when somebody walks in front of it. The software has to make a decision: does it stop, go around, sound an alarm? In most cases, the algorithm has rules for what the vehicle does. However, Linton said such algorithms are not particularly capable of explaining to humans why they made the decisions they made.
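
To make the hypothetical concrete, here is a minimal sketch of the kind of rule-based decision layer Linton describes. It is purely illustrative: the class, function, and thresholds are invented for this article, not drawn from any real haul-truck system. The point is that each hand-written rule can report why it fired.

```python
# Hypothetical rule-based decision layer for an autonomous haul truck.
# All names and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class Obstacle:
    distance_m: float           # distance from the truck, in meters
    is_person: bool             # did perception classify it as a person?
    lateral_clearance_m: float  # free space available to maneuver around it

def decide(obstacle: Obstacle) -> tuple[str, str]:
    """Return (action, reason); every rule states why it fired, so an
    investigator can trace the decision after the fact."""
    if obstacle.is_person:
        # People always trigger an emergency stop, regardless of distance.
        return "emergency_stop", "person detected in path"
    if obstacle.distance_m < 30.0:
        return "stop", f"obstacle within 30 m ({obstacle.distance_m:.1f} m)"
    if obstacle.lateral_clearance_m > 5.0:
        return "go_around", "sufficient lateral clearance (> 5 m)"
    return "sound_alarm_and_slow", "obstacle ahead, no safe path around it"

print(decide(Obstacle(distance_m=12.4, is_person=True, lateral_clearance_m=0.0)))
# -> ('emergency_stop', 'person detected in path')
```

A learned model dropped in to replace `decide()` might perform better, but it typically cannot produce that second return value, which is the gap Linton points to.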

“We know we can train algorithms in effect to come up with answers to big data questions in a way that is relatively predictable in that we know what it’s supposed to be doing, but when you go and talk to some of the computer scientists that are actually doing these things, the sophisticated and state-of-the-art algorithms are taking in so much data and drawing so many different comparisons and conclusions, that we are unable to understand ultimately why the thing did what it did,” Linton said.
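
As a toy demonstration of that gap (using scikit-learn, which is an assumption here; the article names no library), the snippet below trains an off-the-shelf ensemble on synthetic data. The model answers reliably, but the closest thing it offers to a “why” is an aggregate importance score, not a reason for any single decision.

```python
# Toy illustration of the explainability gap: a trained ensemble answers
# reliably, but its "reasoning" is spread across hundreds of trees.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

print(model.predict(X[:1]))            # a confident answer for one input...
print(model.feature_importances_[:5])  # ...but only global importance
# statistics, not a human-readable reason for that particular decision.
```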

This will become a problem when government inspectors from the Occupational Safety and Health Administration (OSHA) or the Mine Safety and Health Administration (MSHA) come to investigate accidents involving robots, automation, or autonomous vehicles, Linton said.

“If you’re an employer and somebody dies on your worksite and it’s work-related, you’re going to have to take certain actions in order for the government to let you continue to do that work going forward,” he said. “If you can’t explain why the thing did what it did, you won’t be able to say that you’ve done what is necessary to ensure safety going forward.”

Adding to the problem, few government regulations specifically address robotics, automation, or autonomous vehicles. OSHA can, however, issue general duty clause violations: even when no specific OSHA regulation covers a workplace safety issue, the government can still cite companies for failing to protect against recognized hazards within the industry that could result in death or serious physical harm.

Linton said businesses will need to get ahead of these issues by collecting more data and working with the government to develop better workplace safety regulations for robotics and AI.

“I think now is the time to get out ahead and take a leadership position in the business community, to say ‘Here’s what we’re going to do as an industry to protect the workers on our own,’” he said. “That sometimes can head off more restrictive government regulation.”

Getting AI to “show its work” will be a challenge in coming years. Source: Clipart.com

In addition, companies deploying robotics and AI at their own businesses need to ask more questions and guard against the potential for errors before an accident happens, Linton said.

“I live in a world where accidents can, will, and do happen every day,” he said. “Nobody ever thinks that it’s going to happen at their company, and it almost always does. It’s too late once the accident happens.”

Linton said companies need to “think defensively”: deploy an AI process with the assumption that it will go wrong, so that the company can respond when private individuals, lawyers, and even criminal investigators ask what it did to prevent accidents or injuries.
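
One concrete way to “think defensively” is to keep an audit trail of every automated decision. The sketch below (in Python, with invented field names; an illustrative pattern, not anything Linton prescribes) appends each decision, its inputs, and its rationale to an append-only log that can be handed to investigators.

```python
# Illustrative decision audit log: record what the system did, what it saw,
# and which software version decided. Field names are invented.
import json
import time

def log_decision(logfile, action: str, reason: str, inputs: dict,
                 software_version: str) -> None:
    record = {
        "timestamp": time.time(),              # when the decision was made
        "action": action,                      # what the system did
        "reason": reason,                      # why (rule fired / model output)
        "inputs": inputs,                      # sensor readings at decision time
        "software_version": software_version,  # exact software that decided
    }
    logfile.write(json.dumps(record) + "\n")   # one JSON record per line

# Usage: append to a JSON-lines file so records survive an incident review.
with open("decision_audit.jsonl", "a") as f:
    log_decision(f, "emergency_stop", "person detected in path",
                 {"distance_m": 12.4, "is_person": True}, "rules-v1.3.0")
```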

Linton said that fewer problems will happen for companies that say, “‘Look, we did everything we could, here’s what we did, we had all these people look at it,’ versus ‘Well, we just thought this was an amazing product and we thought we could sell a ton of it and make a ton of money and we didn’t really think through the safety implications.’”

More action needed around AI and ethics

Businesses also need to get involved with the U.S. government in crafting ethical rules for AI, Linton said, especially as other countries begin to draft guidelines around issues like autonomous vehicle ethics.

“If our government is not effectively going to participate in these international discussions, our companies are going to have to follow rules that are being dictated by European or other governments,” Linton said. “We’re seeing that in the data privacy arena right now, and it’s going to happen with AI, which will be terrible for American businesses.”

Linton said he’d potentially like to see an open and accessible federal task force on AI and ethics that could work with businesses to develop protocols and guidelines to bring to the international stage. “Right now, it’s just going to be dictated to us,” he said.

While many of these issues are still being debated by academics, businesses, and governments, Linton said the time is now for everyone to build the moral and ethical foundation around machine learning and deep learning technology.

“When I hear from people who are developing the technology, I think they have real fears that once the cat’s out of the bag, it’s really hard to put it back,” Linton said. “We’re building the foundation of a great structure right now, and that foundation needs to be built correctly or we may regret it.”
