
Artificial intelligence needs guardrails

Jun 24, 2019   |   by Eugene Santos Jr.   |   The Hill

With the recent launch of the website AI.gov as “Artificial Intelligence for the American People,” AI will clearly be an integral part of our future. While some may still wonder, “What can AI do for us?” many more may be asking, “What can AI do to us?” given some recent tragic events.

The crashes of two Boeing 737 MAX jets and the fatalities involving Uber’s and Tesla’s self-driving cars point to AI’s unintended consequences and highlight how both the technologists who build AI and the people who use it have fallen short in establishing proper guardrails for deploying the technology.

People often think of AI as a panacea, the technology that will solve our most pressing problems. In that way, AI brings to mind a seeming panacea of an earlier age: aspirin. Even now, medical research continues to expand the list of diseases aspirin can help treat. But that panacea comes with important parameters. Like all medications, aspirin undergoes extensive study to determine its efficacy against each new disease as part of the Food and Drug Administration’s (FDA) review and approval process.

Even with all this verification, important guardrails remain around how we use aspirin, such as the usage label, which outlines what the medication treats and thus sets our expectations about its intended outcome. Warning labels, such as the one on aspirin bottles noting the increased risk of Reye's syndrome in children and teens, educate users about potential risks. Certainly, not all consequences can be anticipated, but the procedures established by the FDA safeguard and inform us so we can make the best decisions.

Similarly, AI needs a set of guardrails both for its creators and its users. In simple terms, AI is any device that chooses one action or answer over any number of other possibilities. So, AI guardrails must include clearly specifying the intended outcome of the AI.

For example, do drivers know the extent and limitations of what Tesla’s Enhanced Autopilot driver-assistance technology actually does? Another guardrail would require laying out the risks and costs when the AI’s chosen action is wrong (an unintended consequence), such as when Uber’s AI failed to detect and classify a pedestrian in sufficient time; it had misclassified her first as a vehicle and then as a bicycle.

Such guardrails are common sense and have long been fundamental to how new non-AI technologies are used. They are embodied within organizations such as the FDA, the Federal Aviation Administration, and others.

But if AI is being used for something, are there safeguards already in place, say, through such agencies? Only if it falls within the right purview. Even when it does, the ability of an AI to adapt through machine learning can make it impossible for those using it to fully account for all consequences.

For instance, even though the National Highway Traffic Safety Administration (NHTSA) has been moving forward on addressing self-driving vehicles, public understanding and trust have lagged behind: a recent AAA poll found that 71 percent of those surveyed are afraid to ride in a self-driving car. The evolution of AI can easily outpace our existing best practices and user expectations. Unless both creators and users are careful and pay attention, AI can readily overcome even common-sense guardrails.

Recently, the FBI’s chief technology officer indicated to me that AI outcome accuracy is a clear requirement for the bureau. But the fundamental problem the FBI continues to have with purveyors of AI-based investigative systems has been their inability to adequately identify the unintended outcomes and their consequences. As such, any law enforcement agency should be very cautious about using such a system.

At the very least, those in the AI profession, together with consumers of AI, need to establish a framework of guardrails for self-oversight of AI technologies, one that addresses intended outcomes and unintended consequences. Two things must be done immediately. The first is to establish concrete principles for creating AI-driven technologies that adhere to such guardrails.

We can potentially use our existing medical framework as a starting point. The second is to get both AI creators and AI users to think hard about such guardrails before attempting to build new technologies with embedded AI. We can move forward using AI to improve our lives while also providing adequate safeguards.

Link to source:

https://thehill.com/opinion/technology/446715-artificial-intelligence-needs-guardrails
