AI Researchers Warn of Urgent Need for Algorithm Supervision

It has been another turbulent year for Artificial Intelligence. The public has applauded its achievements, but has also been unable to ignore its failures.

Facebook has come under fire for facilitating a genocide in Myanmar; Google was accused of building a censored version of its search engine for the Chinese government; and the Cambridge Analytica data scandal got under every Facebook user’s skin. The list goes on.

The public seems to be stuck in a blind spot: people do not know enough about AI, yet they bear the consequences when AI goes wrong. This is the focus of the annual report just released by AI Now, a research group whose members have worked at tech companies including Microsoft and Google.

The report elaborates on what it terms “the accountability gap”: many of the social challenges posed by AI and its algorithms urgently need to be addressed, yet they remain unregulated because the public lacks the tools and knowledge to hold tech giants accountable. The report also puts forward recommendations on the steps needed to address these problems.

The case studies presented in this report are nerve-racking. According to the report, throughout the year it has been real people who suffered from the failures of experimental AI systems. In March, AI-powered cars killed drivers and pedestrians; in May, an AI voice recognition system developed in the UK falsely detected immigration fraud, canceling thousands of visas and leading to deportations; in July, IBM’s Watson system gave “unsafe and incorrect treatment recommendations.” It is horrifying to think how many more cases remain unreported.

AI software is usually introduced into public domains with the goal of cutting costs and increasing efficiency. The result, however, is often systems that make decisions which can neither be explained nor appealed. “A lot of their claims about benefit and utility are not backed by publicly accessible scientific evidence,” AI Now’s co-founder Meredith Whittaker told Onlyinfotech.

The report puts forward ten recommendations for securing a healthier future for AI, among them the need for more precise supervision systems and for marketing promises that match reality.

First, sector-specific agencies need to be put in place to oversee, audit, and monitor tech companies that are developing new systems. The report argues that a single nationwide, standardized AI monitoring model cannot meet the requirements of detailed regulation, since domains such as health, education, and criminal justice each have their own frameworks, hazards, and nuances. Marketers implementing AI systems for different clients should also keep this in mind, so that these technologies are not turned against us.

Second, marketing promises for AI products and services should be accurate. This applies especially to consumer protection agencies, which should enforce “truth-in-advertising” laws for AI products. The report warns that AI vendors should be held to even higher standards for what they promise, particularly since the scientific evidence supposed to back those promises is still inadequate.
