IDG Contributor Network: Recognizing and solving for AI bias
Today, artificial intelligence (AI) is helping us uncover new insights from data and enhance human decision-making. For instance, we use facial recognition to sign into our cell phones, and voice comprehension and intent analytics to get assistance. E-commerce retailers apply AI to predict and recommend new products to consumers. Banks use conversational AI to reduce fraud and better manage client experiences.
Most of the AI in use today is narrow AI. General AI, which is more akin to human intelligence and can span a very broad range of decisions, emotions, and judgment, will not arrive anytime soon. Narrow AI, which is here today, is very good at specific tasks, but that narrowness, by definition, introduces limitations that make it prone to bias.
Bias may come from incomplete data samples or incorrect datasets. There is also interaction bias – skewed learning that happens through interactions over time. Sometimes bias results from a sudden change in the business, such as a new law or business rule. Finally, ineffective training algorithms can cause bias. Recognizing where biases come from helps with mitigation and can ensure that the AI application yields its intended business results.
What leads to AI bias?
While unintended bias can come from many causes, two of the largest drivers are bias in data and bias in training.
The most obvious cause of bias in data is lack of diversity in the data samples used to train the AI system. For example, we routinely run sensor data from aircraft engines through AI algorithms to predict part replacements and optimize asset performance. But if the AI is primarily trained on flights from the United States to Europe – flying over the cold Northern Hemisphere – and then used for flights in sub-Saharan Africa, it is easy to see that the new data will fall outside the trained model’s parameters and generate the wrong results. Put another way, the algorithm is only as smart as the data put into it.
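To make this concrete, here is a minimal sketch in plain Python with NumPy. The sensor feature (ambient temperature) and the numbers are hypothetical, not drawn from any real engine dataset; the point is simply that a guardrail comparing incoming readings against the range seen in training would catch the Northern Hemisphere vs. sub-Saharan Africa mismatch before the model quietly produces misleading predictions.

```python
import numpy as np

# Hypothetical training data: ambient temperatures (degrees C) recorded on
# US-to-Europe routes over the cold Northern Hemisphere.
train_ambient_temp = np.random.normal(loc=-40.0, scale=10.0, size=5000)

# Record the range actually observed during training.
low, high = np.percentile(train_ambient_temp, [0.5, 99.5])

def in_training_range(reading: float) -> bool:
    """Return True if a new reading falls inside the range the model was trained on."""
    return low <= reading <= high

# Readings from a hot-climate route fall outside the trained range,
# so the model's predictions for them should be treated with caution.
new_readings = [35.0, 38.5, -42.0]
for r in new_readings:
    status = "OK" if in_training_range(r) else "OUT OF TRAINING RANGE"
    print(f"ambient temp {r:>6.1f} C -> {status}")
```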
The reality is that it can be hard to get comprehensive data to train AI systems, so many systems use only easy, readily available data. Sometimes, the data might not even exist to train the AI algorithm for all its potential use cases. For instance, AI software for recruiting struggles with recommending diverse candidates if it is trained only on a historical pool of non-diverse workers.
Another large driver of bias – bias in training – can come in through rushed and incomplete training algorithms. For example, an AI chatbot designed to learn from conversations and become more intelligent can pick up politically incorrect language it is exposed to and start using it, unless it is explicitly trained not to – as Microsoft learned with Tay. Similarly, the potential use of AI in the criminal justice system is concerning because we do not yet know whether the training for those algorithms is done correctly.
Agile programming has trained us in short, iterative development of products. That approach, coupled with the excitement around AI’s promise, can drive early applications that quickly broaden beyond their intended use case. And because narrow AI has no common sense, or sense of fairness and equity, eliminating training bias requires a great deal of planning and design work. This is where the human in the loop in the human-to-machine continuum becomes so important: domain experts help think through the problem and train the models accordingly.
Diversity in both data and talent can mitigate bias
The best way to prevent data bias is to use a comprehensive and broad dataset, reflective of all possible edge use cases. If there is underrepresented or disproportionate internal data, external sources may fill in the gaps, and give the machine a richer, more complete picture. In a nutshell, the more comprehensive the dataset, the more accurate the AI predictions will be.
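As a rough illustration, the coverage check can start well before any model is built, by simply counting how each segment is represented in the training data. The sketch below is plain Python; the segment labels and counts are hypothetical. It flags groups whose share falls below a chosen threshold, signaling where external sources may need to fill in the gaps.

```python
from collections import Counter

# Hypothetical training records, each tagged with the customer segment it represents.
training_segments = (
    ["day_trader_30_35_male"] * 9200
    + ["retiree_65_plus"] * 450
    + ["single_50_55_female"] * 350
)

MIN_SHARE = 0.05  # flag any segment below 5% of the training data

counts = Counter(training_segments)
total = sum(counts.values())

for segment, count in counts.items():
    share = count / total
    flag = "" if share >= MIN_SHARE else "  <-- underrepresented, consider external data"
    print(f"{segment:<28} {share:6.1%}{flag}")
```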
Diversity in the teams working with AI also helps solve for training bias. When only a small group works on a system’s design and algorithms, the system becomes susceptible to the groupthink of like-minded individuals. Bringing in team members with different skills, perspectives, approaches, and backgrounds drives more holistic design. One of our biggest learnings is that AI is best trained by diverse teams that help identify the right questions for the algorithms to solve.
For example, several teams used multiple terabytes of operational data in wealth management to train algorithms to drive higher trading income. The obvious approach was to focus on day traders, who are mostly single, 30- to 35-year-old white males. One of the teams – with diverse members beyond the usual data engineers and neural-net experts – addressed that objective and also identified an even larger opportunity: single 50- to 55-year-old women, a high-investable-assets segment that had previously gone untapped. Diverse teams think of questions others may not even know to ask.
AI also helps minimize bias
For all that has been said so far about the perils of bias in AI, the reality is that with proper design and thoughtful usage, we can reduce bias in AI. In fact, in many situations, AI can minimize bias otherwise present in human decision-making. For example, in human resource recruiting, job descriptions can be run through AI programs that counter unconscious discrimination by flagging and replacing gender-coded words and phrases – swapping “war room” for “nerve center,” for instance.
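A heavyweight model is not required to get started; even a simple term-substitution pass captures the idea. The sketch below is plain Python, and the word list is a small, hypothetical sample rather than a validated lexicon; it flags and replaces phrases commonly cited as gender-coded, including the article’s own “war room” example.

```python
import re

# Hypothetical sample of gender-coded phrases and neutral replacements;
# a production system would use a curated, validated lexicon.
REPLACEMENTS = {
    "war room": "nerve center",
    "rockstar": "high performer",
    "dominant": "leading",
}

def neutralize(job_description: str) -> str:
    """Flag and replace gender-coded phrases in a job description."""
    text = job_description
    for phrase, neutral in REPLACEMENTS.items():
        pattern = re.compile(re.escape(phrase), flags=re.IGNORECASE)
        if pattern.search(text):
            print(f"flagged: '{phrase}' -> '{neutral}'")
            text = pattern.sub(neutral, text)
    return text

print(neutralize("Join our war room as a rockstar engineer with a dominant presence."))
```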
In summary, proper design and a few key principles can mitigate unintended bias in AI applications. Proper governance practices are a must. Data coverage needs to be comprehensive. And diverse teams deliver better results.