IN AI (CAN) WE TRUST?

SAMIRAN GHOSH | April 20, 2021

Artificial intelligence (AI) is the best thing to happen to our lives. It helps us read our emails, complete our sentences, get directions, shop online, get dining and entertainment recommendations, and even connect with old friends or make new ones on social media. AI is not only becoming skilled at many human jobs; it is also making decisions for us.

The question is whether these decisions can be trusted. To elaborate: does AI-aided recruitment select the right candidates or reject them? Is the Tinder match made in heaven or by the algorithm? Who is being sent to jail: actual criminals, or innocent people flagged by a biased algorithm?

As humans, we come from a diverse range of sociopolitical, racial and cultural backgrounds. The idea of what is right, and the very question of morality itself, changes depending on the context. How does an AI decide what is right, and for whom? Faced with a choice between saving the driver of a smart car and saving a pedestrian, whom does the onboard AI choose? How does it arrive at that decision?

"Debiasing humans is harder than debiasing AI systems," believes Olga Russakovsky, an assistant professor in the Department of Computer Science at Princeton University


A Question Of Ethics

Before AI can think for humans, humans have to think for AI. Essentially, the ethics of AI technology is the embodiment of its creators' ethics. And this is where the "ethical AI conundrum" begins.

Is AI good or evil? The truth is that the underlying concern dominating every invention or innovation is human bias. There is plenty of evidence pointing in this direction, the most recent and prominent example being Apple. In 2019, the company's new credit card was accused of offering some women lower credit limits despite their having better credit scores than their male spouses. The bias was so pronounced that Apple co-founder Steve Wozniak noted that his wife got a lower credit limit than he did, even though the two had "no separate bank or credit card accounts or any separate assets."

AI is open to bias because it makes decisions based on information supplied by its human creators, and that information carries their biases. Many of those creators are men who grew up in the Western world, which can predispose them toward particular communities and geographies. There has been plenty of debate around COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), an algorithm used by courts in the United States to predict the likelihood that an offender will reoffend. Analysis of its predictions found false positives for black offenders at nearly twice the rate (45%) seen for white offenders (23%).
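To make those percentages concrete: a group-wise false positive rate is simply the share of people who did not reoffend but were still flagged as high risk. Here is a minimal sketch of that calculation in Python; the function and the numbers are illustrative, not drawn from the actual COMPAS data.

```python
# Minimal sketch of a per-group false positive rate.
# The data below is made up for illustration; it is NOT the COMPAS data.

def false_positive_rate(predictions, outcomes):
    """FPR = false positives / all actual negatives."""
    false_pos = sum(1 for p, y in zip(predictions, outcomes) if p == 1 and y == 0)
    actual_neg = sum(1 for y in outcomes if y == 0)
    return false_pos / actual_neg if actual_neg else 0.0

# predictions: 1 = "high risk of reoffending"; outcomes: 1 = actually reoffended
group_a_pred, group_a_true = [1, 1, 0, 1, 0, 1, 0, 0], [1, 0, 0, 0, 0, 1, 0, 1]
group_b_pred, group_b_true = [0, 1, 0, 0, 0, 1, 0, 0], [0, 1, 0, 0, 1, 0, 0, 1]

print(f"Group A FPR: {false_positive_rate(group_a_pred, group_a_true):.0%}")  # 40%
print(f"Group B FPR: {false_positive_rate(group_b_pred, group_b_true):.0%}")  # 20%
```

A model can look equally "accurate" overall while distributing its mistakes very unevenly across groups, which is exactly what COMPAS's critics pointed out.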


Garbage In, Garbage Out

TechTarget defines the concept of "garbage in, garbage out" this way: "The quality of the input determines the quality of output." Bias is not confined to humans; it permeates a machine's intelligence as well. After all, as B Nalini noted, it is humans who frame the problem, train the model and deploy the system. Even unbiased data is no guarantee of a fair result, because the very process by which machine learning models turn data into predictions can itself yield biased outcomes.
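A toy example makes the point concrete. Below, a "model" is fitted to hypothetical historical loan decisions that already disadvantaged one group; the training code is perfectly neutral, yet the learned behavior is not. All groups and numbers here are invented for illustration.

```python
# "Garbage in, garbage out": a model fitted to biased historical
# decisions learns to reproduce the bias, even though the code is neutral.
# All data below is hypothetical.

from collections import defaultdict

# Historical loan decisions as (group, approved) pairs. Group "B" was
# approved less often for reasons unrelated to creditworthiness.
history = [("A", 1), ("A", 1), ("A", 1), ("A", 0),
           ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

# "Training": estimate the approval rate per group from the data.
counts = defaultdict(int)
approvals = defaultdict(int)
for group, approved in history:
    counts[group] += 1
    approvals[group] += approved

def predicted_approval(group):
    # The model simply mirrors the historical rate for each group.
    return approvals[group] / counts[group]

print(predicted_approval("A"))  # 0.75
print(predicted_approval("B"))  # 0.25 -- yesterday's bias, now automated
```

Real systems are far more sophisticated, but the failure mode is the same: the model optimizes for agreement with the historical record, and the record itself is the garbage.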


Teaching AI Morality

In a 2001 essay, futurist and inventor Raymond Kurzweil observed that our intuitive view of progress is linear, while the actual rate of change grows exponentially: by his estimate, the 21st century will deliver the equivalent of 20,000 years of progress at today's rate. However, even as we acknowledge that exponential growth, we must also accept that AI is a relatively new technology. The term itself was coined only in the mid-1950s, meaning we are closer to the beginning of the story, or perhaps its middle, than to its end.
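To see roughly where a number like 20,000 comes from, here is a back-of-the-envelope sketch under the simplifying assumption that the rate of progress doubles every decade. Kurzweil's own model is more aggressive (the doubling time itself shrinks), which is how he reaches 20,000 rather than the roughly 10,000 this toy version produces.

```python
# Back-of-the-envelope version of Kurzweil's accelerating-progress claim,
# under the simplifying assumption of one doubling per decade.

progress = 0.0  # cumulative progress, measured in "year-2000 years"
rate = 1.0      # one year of progress per calendar year, at today's rate
for decade in range(10):   # the ten decades of the 21st century
    progress += 10 * rate  # ten calendar years at the current rate
    rate *= 2              # the rate doubles each decade

print(progress)  # 10230.0 -- already ~10,000 "years" of progress
```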

AI is just a toddler, learning the difference between moral right and wrong and inheriting its creators' biases. It still struggles to do much more than detect statistical patterns in large datasets. Human understanding and intelligence extend far beyond static ideas of right and wrong, the rules themselves changing with sociocultural and historical context. If we humans are still struggling with morality, it is rather presumptuous to expect a machine of our own creation to outshine us in this regard.

As the Harvard Business Review noted, two conclusions follow. The first is that AI can help improve human decision-making itself, by predicting outcomes from available data while disregarding the variables that lead human decision-makers to generalize and discriminate without even realizing their inherent biases. The second points to a more complicated need: to technically define and measure the ever-elusive idea of "fairness."
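What "technically define and measure fairness" can look like in practice: the sketch below implements two standard criteria from the fairness literature, demographic parity (equal rates of favorable predictions across groups) and equal opportunity (equal true positive rates). The function names and any data fed to them are illustrative.

```python
# Two standard formal fairness criteria, sketched as gap measures.
# Smaller gaps mean the model treats the two groups more alike.

def demographic_parity_gap(pred_a, pred_b):
    """Difference in favorable-prediction rates between two groups."""
    def rate(preds):
        return sum(preds) / len(preds)
    return abs(rate(pred_a) - rate(pred_b))

def equal_opportunity_gap(pred_a, true_a, pred_b, true_b):
    """Difference in true positive rates between two groups."""
    def tpr(preds, truths):
        hits = sum(1 for p, y in zip(preds, truths) if p == 1 and y == 1)
        positives = sum(truths)
        return hits / positives if positives else 0.0
    return abs(tpr(pred_a, true_a) - tpr(pred_b, true_b))

# Illustrative usage with made-up predictions (1 = favorable decision):
print(demographic_parity_gap([1, 1, 0, 1], [0, 1, 0, 0]))   # 0.5
print(equal_opportunity_gap([1, 1, 0, 1], [1, 0, 0, 1],
                            [0, 1, 0, 0], [1, 1, 0, 1]))    # ~0.67
```

A known complication is that such criteria can be mutually incompatible: outside of degenerate cases, a model generally cannot satisfy all of them at once, which is part of why "fairness" resists a single technical definition.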


Conclusion

Bias is as fundamental as the air we breathe or the environment we live in, and it is prevalent among us all, either as individuals or as a community. At this point in human history, the world is getting ready to industrialize AI tech and deploy it more widely. Thus, addressing the "inherent" AI biases at this moment becomes exceptionally critical.


AI is just a toddler, learning the difference between moral right and wrong and inheriting its creators' biases. If we humans are still struggling with morality, it is rather presumptuous to expect a machine of our own creation to outshine us in this regard.


Just as a pet blindly mirrors its trainer's instructions and personality, AI mirrors its creators' input, biased or not. The root of the problem thus goes far deeper than AI ethics: it becomes a question of human morality, and of how the concept of "fairness" itself can be defined and measured.

"Debiasing humans is harder than debiasing AI systems," believes Olga Russakovsky, an assistant professor in the Department of Computer Science at Princeton University and co-founder of the AI4ALL Foundation, which works to increase diversity and inclusion within AI. "I am optimistic that automated decision making will become fairer," she mentioned in an interview with Wired.

First published in Forbes on February 9, 2021.
