The promotion of ChatGPT has been so intense that John Oliver devoted a full segment of a recent episode to Artificial Intelligence (AI). He explains how the use of AI has become commonplace and part of our modern lives, appearing in almost every industry and application, from self-driving cars and spam filters to training software for therapists. He acknowledges that AI has great potential and can transform research, bioengineering, medicine and more. In his words, “AI will change everything.”
After acknowledging the benefits, Oliver spends most of the show discussing the dangers of AI, primarily its biases, ethical issues, and misuse. He provides examples of malfunctioning and discriminatory algorithms in software, medical research, art and even autonomous cars. He calls for “interpretable” AI and AI regulation, and believes the EU’s proposed AI Act is a step in the right direction.
Oliver’s closing remark is particularly pertinent: “AI clearly has tremendous potential and can do a lot, but if it’s anything like most technological advances of the last few centuries, unless we are very careful, it can harm the disenfranchised, enrich the powerful and widen the gulf between them… AI is a mirror and will reflect exactly who we are – from the best of us to the worst of us.”
The challenge is how we support and promote the benefits of this technology for our lives, our global economy and our society, while controlling biases and ethical issues and minimizing its harmful, nefarious uses. This is a difficult challenge, and it must be addressed with careful investigation and an understanding of the full spectrum of the technology’s capabilities and benefits, as well as its limitations and shortcomings.
But before we discuss this challenge and offer some suggestions, let us first understand how AI works and why it can lead to bias and unethical results.
Is AI “smart” or “stupid”?
John Oliver said that “the problem with AI is not that it is smart, but that it is stupid in a way that we cannot predict.”
As much as we would like to call it “artificial intelligence,” there is still a lot of human input involved in building these algorithms. Humans write the code, humans decide which methods and models to use, and humans decide what data to use and how to use it. Most importantly, the algorithm and the data it is fed are subject to human error. Therefore, AI is only as smart as the person who codes it and the data it was trained on.
Human beings naturally have biases – both conscious and unconscious. These biases can lie in the choice of the code as well as the data used, how the model is trained on that data, and how the algorithm is tested and audited before launch. If we encounter a problem with the output of these algorithms, the humans who created them must be held accountable for the biases and ethical concerns inherent in their algorithms.
The tech world has known about the flaws in algorithms for years. In 2013, a Harvard University study found that ads for arrest records, which appear alongside Google search results for people’s names, were significantly more likely to appear in searches for distinctively African American names. The Federal Trade Commission has reported on algorithms that allow advertisers to target people living in low-income neighborhoods with high-interest loans.
The problems are not new. They just spread faster as the technology advances. It’s unfortunate that we need hyped applications like ChatGPT to get our attention, but it doesn’t have to be that way. We should discuss these issues and resolve them as soon as they arise – and even before that.
This is why, even though the metaverse is not yet a reality, I have been advocating that it is never too early to discuss ethics, and I have covered in detail why data concerns – like the bias we have seen with AI – should be discussed now, not later. These concerns and problems will only grow in the metaverse, where AI is integrated with other technologies such as brain waves and biometric data.
The Case of the Apple Card Algorithm and the Lessons Learned
Apple Card, which launched in August 2019, ran into major problems in November that year after users noticed it offered smaller lines of credit to women than to men. David Heinemeier Hansson, a prominent software developer, complained on Twitter that even though his wife, Jamie Hansson, had a better credit score and other factors in her favor, her application for a credit line increase was denied. His complaints went viral, with others recounting similar experiences. Apple’s own co-founder Steve Wozniak said he had a similar experience: he was offered 10 times the credit limit his wife was offered.
Black box algorithms, such as the one the Apple Card uses, are indeed capable of discriminating. They may not require human intelligence to operate, but they are created by humans. Although they are considered objective because they are automated, this is not necessarily the case.
An algorithm depends on: (1) code created by humans, which may be intentionally or unintentionally biased; (2) the methods and data used, which are decided by the creators of the algorithm; and (3) the way the algorithm is tested and audited, which is, again, decided by the creators of the algorithm.
The algorithm may be a “black box” to the users and customers of these applications, but it is not a “black box” to its creators.
How bias can enter the algorithm
Goldman Sachs, the bank that issued the Apple Card, quickly insisted that there was no gender bias in the algorithm, but it failed to provide any evidence. Goldman then defended the algorithm by saying it had been vetted by a third party for potential bias and, furthermore, does not even use gender as an input. How can a bank discriminate when it cannot tell which customers are women and which are men?
This explanation was somewhat misleading. It is entirely possible for algorithms to discriminate on gender, even if they are programmed to be “blind” to that variable. Intentionally imposing blindness to something as important as gender makes it difficult for a company to detect, prevent, and reverse bias on that variable.
A gender-blind algorithm can still be biased against women as long as it draws on an input or inputs that correlate with gender. There is ample research to show how such proxies can lead to unwanted biases in various algorithms. Studies have shown, for example, that creditworthiness can be inferred from something as simple as whether you use a Mac or a PC. Other variables, such as home address, can serve as proxies for race. Similarly, where a person makes their purchases can carry information about their gender.
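To make the proxy problem concrete, here is a minimal sketch in Python using made-up numbers: a scoring formula that never sees gender still produces different average credit limits by gender, simply because one of its inputs – a hypothetical spending-category score – correlates with gender in the data.

```python
import numpy as np

# Minimal illustration with synthetic data: a "gender-blind" credit formula
# that still produces gendered outcomes through a proxy input.
rng = np.random.default_rng(0)
n = 100_000
gender = rng.choice(["F", "M"], size=n)

# Hypothetical proxy: a spending-category score that happens to correlate
# with gender in the data (e.g., where purchases tend to be made).
proxy = rng.normal(loc=np.where(gender == "F", 0.3, 0.7), scale=0.1)
income = rng.normal(loc=60_000, scale=15_000, size=n)

# The "model" never sees gender -- only income and the proxy feature.
credit_limit = 0.05 * income + 20_000 * proxy

for g in ["F", "M"]:
    print(g, "average limit:", round(credit_limit[gender == g].mean()))
```

Even though gender is never an input, the average limits for the two groups end up thousands of dollars apart – exactly the kind of gap an audit should be designed to catch.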
In her 2016 book “Weapons of Math Destruction,” Cathy O’Neil, a former Wall Street quant, described a number of situations in which proxies helped build horrifically biased and unfair automated systems, not only in finance but also in education, criminal justice, law enforcement, and health care.
The idea that bias is eliminated by removing an input is a very common and dangerous misconception. This means that algorithms need to be carefully audited to ensure that bias has not crept in somehow. Goldman said it did so, but the fact that the gender of customers is not collected would make such audits less effective. Companies should actively measure protected characteristics such as gender and race to ensure that their algorithms are not biased against them.
However, without knowing a person’s gender, such tests are much more difficult. An auditor may be able to estimate gender from known variables and then test for bias on that basis, but the estimate will not be 100% accurate. Companies should examine the data fed to an algorithm as well as its output to see whether it treats, for example, women differently from men on average, or whether error rates differ between men and women.
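As a rough illustration of what such an output audit could look like, here is a short Python sketch with hypothetical data and labels: it compares approval rates and false-negative rates between groups, assuming the auditor has (or has estimated) a gender label for each applicant.

```python
import numpy as np

def audit_by_group(group, approved, truly_creditworthy):
    """Print the approval rate and false-negative rate for each group label."""
    group = np.asarray(group)
    approved = np.asarray(approved, dtype=bool)
    creditworthy = np.asarray(truly_creditworthy, dtype=bool)
    for g in np.unique(group):
        mask = group == g
        approval_rate = approved[mask].mean()
        # False negatives: creditworthy applicants the algorithm rejected.
        denom = max(creditworthy[mask].sum(), 1)
        fn_rate = (~approved[mask] & creditworthy[mask]).sum() / denom
        print(f"{g}: approval rate {approval_rate:.0%}, false-negative rate {fn_rate:.0%}")

# Toy example with hypothetical labels (known or estimated by the auditor):
audit_by_group(
    group=["F", "F", "F", "M", "M", "M"],
    approved=[0, 1, 0, 1, 1, 0],
    truly_creditworthy=[1, 1, 1, 1, 1, 0],
)
```

Large, persistent gaps between groups in either number are not proof of intent, but they are exactly the kind of evidence a “we found no bias” claim should come with.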
If these checks and tests are not done with careful attention, we will see more cases like Amazon pulling the algorithm it used in recruiting because of gender bias, Google being criticized for racist autocomplete suggestions, and both IBM and Microsoft being embarrassed by facial recognition algorithms that turned out to be better at recognizing men than women and white people than people of other races.
Sensible Rules and Policies
AI should be regulated, and policies should be put in place to reduce abuse and biases. But the question is how. We must understand that AI is a tool, a means to an end, not the end itself. In other words, do you regulate the tool? Do you control the hammer? Or do you control the use of the hammer?
In the case of ChatGPT, where there are legitimate concerns about chatbots spreading misinformation or toxic content, legislators should address these risks through laws such as the Digital Services Act, which requires platforms and search engines to deal with misinformation and harmful content, while also taking into account the risk profiles of different use cases, as proposed in the EU AI Act.
We shouldn’t treat AI as an automated “black box,” especially when it creates biases that can exacerbate social and economic inequalities. We should require individuals and organizations to follow policies and regulations on how to use and implement AI and generative AI, and on how to test and audit algorithms to ensure they are ethical, free of bias, and produce meaningful results that benefit users, customers, and our global society.
Remember that AI is only as smart as the person who codes it and the data it was trained on. Policies on auditing the code and the data fed into it should be common practice at any company using AI. In regulated sectors such as employment, financial services, and health care, these policies and algorithms must also be subject to compliance reviews and audits by regulators.
We shouldn’t be too concerned if someone uses ChatGPT to help compose an email, but we should be very concerned when AI is used for scams – the technology makes it easy and cheap for bad actors to mimic voices and convince people, often the elderly, that their loved ones are in distress.
We must be careful and consider the broad spectrum of AI use cases – supporting those that benefit our future while creating rules and policies that reduce bias and unethical, harmful, nefarious activities. As John Oliver said: “AI is a mirror and will reflect exactly who we are – from the best of us to the worst.” Let’s make sure we’re putting our best face forward when it comes to artificial intelligence!
The views and opinions expressed here are the views and opinions of the author and do not necessarily reflect those of Nasdaq, Inc.