The combination of humans and machines can make real progress, and algorithms are usually used quite differently from the way humans make decisions

Last month, "Harvard Business Review" published an article called WantLess-BiasedDecisions? UseAlgorithms, authored by AlexP. Miller. The article stated that although the current AI algorithms are biased, it is also very important if technological advances help improve system performance. Even with prejudice, it is much smaller than human prejudice.

Rachel Thomas, a co-founder of fast.ai, takes a different view of that article. She believes the impact of algorithmic bias is in fact far-reaching, and that our focus should be on how to solve the problem. The following is Lunzhi's compilation of her original post:

Recently, "Harvard Business Review" published an article about humans always making biased decisions (this is true), but the article ignores many related issues, including:

Algorithms are often put into use without a proper process for review or appeal (because many people mistakenly believe that algorithms are objective, accurate, and never make mistakes)

In many cases, algorithms are used at a much larger scale than human decision-makers, so the same biases are reproduced across far more decisions (often precisely because algorithms are cheap to apply)

Users of an algorithm may not understand probabilities or confidence intervals (even when these are provided), and in practice may find it difficult to override the algorithm

Rather than dwelling on the superficial comparison of humans versus machines, it is more important to think about how to build decision-making tools that are less biased.

In the Harvard Business Review article, Miller says that critics of the "algorithm revolution" worry that "algorithms are opaque and biased in use, and become unexplainable tools." Yet his article only elaborates on the "biased" part; it offers no discussion of "opaque" or "unexplainable."

The combination of humans and machines can make real progress

The media often presents AI progress as a contest between humans and machines, with headlines like "Machine beats humans at XX game." Given how most algorithms are actually used, this comparison is misleading, and it makes for a very narrow way of evaluating AI. Every algorithm involves human factors: humans collect the data, make design decisions, choose how the system is deployed, interpret the results, and bring different understandings to it depending on who is involved.

Many people working on AI products for medicine do not intend to replace doctors with AI. They are building tools to help doctors make more accurate and more efficient decisions, and thereby improve the quality of care. The best results come neither from humans alone nor from computers alone, but from a combination of the two.

Miller points out in his article that humans are very biased, and then compares current methods to see which is worse. But the article proposes no way to make decisions less biased (perhaps by combining humans and algorithms?). That, in short, is the more important question worth considering.

The use of algorithms is usually different from the way humans make decisions

Algorithms are applied very widely in practice, so the same biases are repeated many times over, and their outputs are often taken to be correct or objective. The studies Miller cites, however, do not compare humans and algorithms as they are actually used in the real world.

Cathy O'Neil wrote in her book Weapons of Math Destruction that the kinds of algorithms she criticizes tend to hit the poor hardest.

These algorithms specialize in processing cases in bulk at low cost, which is part of their appeal. The wealthy, by contrast, usually benefit from personalized input: large companies and elite preparatory schools tend to rely on internal recommendations and face-to-face interviews. The privileged are assessed by people; the masses are assessed by machines.

One example in O'Neil's book is a college student with bipolar disorder who wanted a summer job at a convenience store. Every store he applied to used the same psychometric software to score applicants, so every store rejected him. This illustrates another danger of algorithms: although humans often share similar biases, not every human makes the same decision. Despite his mental illness, the student might eventually have found a store willing to hire him.

Many people are more willing to trust a decision made by an algorithm than one made by a human. The researchers who design the algorithms may understand the underlying probabilities and confidence intervals well, but most people who use the results do not.

Algorithms need to be explainable

Many cases of algorithmic bias come with no meaningful explanation and no meaningful appeals process. This seems to be a particular tendency of algorithmic decision systems, perhaps because people mistakenly believe algorithms are objective and therefore require no review. At the same time, as explained above, algorithmic decision systems are usually adopted to cut costs, and repeated checks would add those costs back.

Cathy O'Neil also writes about a teacher who was loved by students and parents but was inexplicably fired on the strength of an algorithm's score. She will never know why. If the algorithm could be re-examined, or the factors behind it understood, the result would not be so baffling.

The Verge investigated software used in more than half of US states to determine how much home health care people receive. When the software was rolled out in Arkansas, many people with severe disabilities found their Medicaid assistance suddenly cut. For example, Tammy Dobbs, a woman with cerebral palsy who needs help getting out of bed, going to the bathroom, and eating, had her care hours abruptly cut by the algorithm to 20 hours a week. No one could explain to her why. A court eventually found errors in the software's algorithm that affected people with diabetes or cerebral palsy, but many people like Dobbs still worry that their Medicaid will one day be cut again.

When the algorithm's creator was asked whether there should be a way to communicate these decisions to the people affected, he said, "Maybe that's something we should do," but then suggested it was someone else's responsibility. We cannot create systems and then treat the outcomes as someone else's responsibility.

Another system, used in Colorado, was found to contain more than 900 incorrect rules, leading to problems such as pregnant women being wrongly denied Medicaid coverage they qualified for. It is hard for lawyers to uncover such errors, because the systems' inner workings are protected as trade secrets. That is why it is important to build mechanisms that make errors easy to discover.

Complex real-world systems

When we talk about AI, we need to think about the complex real-world systems it operates in. The research cited in the Harvard Business Review article treats decision-making as an isolated act, without considering the environment in which it happens. Judging whether a person is likely to commit another crime is not a decision made in isolation; it is embedded in a complex legal system. We need to understand how the real environment of a research domain actually works, and we must not lose sight of the people who may be affected.

The COMPAS algorithm, used in US courts, is a decision system applied to pre-trial bail, sentencing, and parole. ProPublica's investigation found a false positive rate of 24% for white defendants and 45% for black defendants (a false positive being a defendant the system labeled "high risk" who did not go on to reoffend). Subsequent research found that COMPAS is no more accurate than a simple linear model.
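To make the false positive rate comparison concrete, here is a minimal sketch in Python of how a per-group false positive rate like ProPublica's could be computed. The records below are made-up toy data for illustration only, not ProPublica's actual dataset.

```python
from collections import defaultdict

# Toy records: (group, predicted_high_risk, reoffended) -- illustrative only.
records = [
    ("white", True,  False), ("white", False, False), ("white", True,  True),
    ("black", True,  False), ("black", True,  False), ("black", False, True),
]

# False positive rate per group: among people who did NOT reoffend,
# the fraction the system nevertheless labeled "high risk".
counts = defaultdict(lambda: {"fp": 0, "negatives": 0})
for group, high_risk, reoffended in records:
    if not reoffended:                 # actual negatives only
        counts[group]["negatives"] += 1
        if high_risk:                  # predicted positive -> false positive
            counts[group]["fp"] += 1

for group, c in counts.items():
    fpr = c["fp"] / c["negatives"] if c["negatives"] else float("nan")
    print(f"{group}: false positive rate = {fpr:.0%}")
```

The same counting logic, applied to the real COMPAS data grouped by race, is what produces the gap between the 24% and 45% figures above.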

These cases show that algorithms can exacerbate underlying social problems. We therefore have a responsibility to understand the systems we deploy and the harms they may cause.

Opposing bias is not the same as opposing algorithms

Most people who object to algorithmic bias are objecting to unfair bias; they are not people who hate algorithms. Miller writes that these critics "rarely ask how these systems perform without algorithms," implying that those who oppose biased algorithms either do not realize how biased humans are, or simply dislike algorithms. Before I started writing about bias in machine learning, I spent a great deal of time researching and writing about bias in humans.

Whenever I publish or share work on algorithmic bias, some people assume I am anti-algorithm or anti-technology, and I am not the only one who gets this reaction. I hope the discussion of algorithmic bias will not stay stuck on this superficial point; we need to tackle the problem in depth.
