ETHICS AND BIAS IN ARTIFICIAL INTELLIGENCE: WHAT DO YOU NEED TO KNOW?
August 19, 2019 | 5:00 PM
USA
Examples of AI gone wrong are increasingly making the headlines. An algorithm used by the criminal justice system to predict which offenders were most likely to reoffend discriminated against people of color. A major corporation's AI-powered CEO search showed bias against women. Breakthrough facial recognition software proved woefully inaccurate unless the subject in question was white and male.