Algorithms are making many of your decisions, and you might be OK with that

DECEMBER 7, 2021 by Kyle Mittan, University of Arizona

The odds are good that at least a few algorithms helped you find this article.

After all, algorithms—which are essentially step-by-step procedures for making a choice—have been around nearly forever. But they’ve become ubiquitous with the rise of big data, and now typically involve math formulas in the form of computer code.

Facebook uses an algorithm to deliver its News Feed to nearly 3 billion users. Algorithms are what allow Tesla’s cars to drive themselves. And any Google search involves an algorithm that decides the order of the results.

Policymakers have long assumed that most people would rather not have a machine make certain day-to-day decisions—such as whether someone deserves a bank loan or is liable for a civil traffic offense. But a new study by Derek Bambauer, a professor in the University of Arizona James E. Rogers College of Law, finds that many people are perfectly happy letting a machine make certain decisions for them.

Bambauer, who studies internet censorship, cybersecurity and intellectual property, worked in the computer science field as a systems engineer before his legal career.

His new study, set to be published in the Arizona State Law Journal in early 2022, aims to help legal scholars and policymakers understand how the public perceives decision-making algorithms so they can regulate those algorithms in ways that better reflect consumers’ views.

“We’re at a moment where algorithms have power and potential, but there’s also a good bit of fear about them,” said Bambauer, who co-authored the study with Michael Risch, professor and vice dean of the Charles Widger School of Law at Villanova University.

That fear, he added, is likely overstated by legal scholars and policymakers.

“In general, I think both Michael and I think that technology tends to be more mundane—it does not do the terrific things that we thought it would, and it does not do the awful things that we thought it would,” Bambauer said. “And, so, we thought people were jumping ahead and saying, ‘We need to reform this,’ before asking, ‘How do people actually feel?’”

Preference for Algorithms Was ‘Genuinely Surprising’

To better understand how people feel about the technology, Bambauer and Risch used an online survey to ask about 4,000 people whether they would prefer that a human or an algorithm make one of four hypothetical decisions.

Study participants were randomly assigned to one of the four scenarios and to a decision-maker—either human or algorithm. Participants also were given information about the decision-maker, such as its accuracy rate, how long it takes to decide and the cost of using it. With that information, participants could then choose whether they wanted to switch to the other decision-maker.
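
The article does not reproduce the survey instrument itself, so the sketch below is only a hypothetical reconstruction of the assignment step it describes: each participant is randomly placed in one of four scenarios (named here with placeholders) and given either a human or an algorithm as the initial decision-maker.

```python
import random

# Hypothetical reconstruction of the random-assignment step described in the
# article; the study's actual scenarios, wording and tooling are not shown here.
SCENARIOS = ["scenario_1", "scenario_2", "scenario_3", "scenario_4"]  # placeholder names
DECISION_MAKERS = ["human", "algorithm"]

def assign(participant_id: int) -> dict:
    """Randomly assign one scenario and one initial decision-maker to a participant."""
    rng = random.Random(participant_id)  # per-participant seed keeps assignments reproducible
    return {
        "participant": participant_id,
        "scenario": rng.choice(SCENARIOS),
        "initial_decision_maker": rng.choice(DECISION_MAKERS),
        # Participants then saw accuracy, speed and cost details and could choose to switch.
    }

print(assign(42))
```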

The study found that 52.2% of all participants chose the algorithm, while 47.8% chose a human.

Even though they suspected that negative public perceptions of algorithms had been oversold, the researchers were surprised by their findings.

“We thought that if people genuinely were nervous about algorithms, that would show up in that aggregate—that the percentage of people who chose algorithms would not only be under 50%, but that it would be statistically significantly lower,” Bambauer said. “But that 4% difference—while it doesn’t look like much—is statistically significant, and that was genuinely surprising.”
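
The article does not report the exact counts or the test the authors ran, but a back-of-the-envelope check illustrates the point in the quote: with roughly 4,000 respondents, a 52.2% share is far enough from an even split to be statistically significant. A minimal sketch, assuming those rounded figures and a simple two-sided normal-approximation test for one proportion:

```python
import math

# Rounded figures from the article (not the study's exact counts): about 4,000
# participants, 52.2% of whom chose the algorithm.
n = 4000
p_hat = 0.522
p_null = 0.5  # null hypothesis: people are indifferent between human and algorithm

# Two-sided z-test for a single proportion (normal approximation to the binomial).
se = math.sqrt(p_null * (1 - p_null) / n)  # standard error under the null
z = (p_hat - p_null) / se
p_value = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))  # 2 * (1 - Phi(z))

print(f"z = {z:.2f}, two-sided p = {p_value:.4f}")  # roughly z = 2.78, p = 0.005
```

At this sample size, even a two-point departure from 50% is unlikely to be chance, which is why a modest-looking gap can still register as statistically significant.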

The ‘What’ and ‘How’ for Policymakers

Bambauer said he hopes the study gets policymakers to ask two key questions with difficult answers: “What should they do?” and “How should they do it?”

The “What should they do?” question is difficult, in part, because algorithms are used across a range of industries, meaning one size can’t possibly fit all, Bambauer said. Algorithms also lack a certain level of transparency that regulators and consumers have come to expect, he added, because algorithms’ most tangible form is in computer code, which looks like gibberish to the average person.

“If Facebook published its algorithm tomorrow, nobody would know what it is,” Bambauer said. “For most of us, it wouldn’t make a bit of difference.”

In searching for the “How should they do it?” answer, policymakers should avoid trying to simply regulate algorithms out of existence, Bambauer said. In addition to not being in the public interest, banning social media companies outright from using algorithms is “literally impossible,” he said.

“Just displaying things in chronological order is an algorithm,” he added. “There’s just no getting around it.”
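
To make that concrete, here is a minimal sketch (illustrative only, not any platform’s actual feed code) of a purely reverse-chronological feed. Even this trivial rule is an algorithm: a defined procedure that takes posts as input and decides the order in which to show them.

```python
from datetime import datetime, timezone

# Made-up posts for illustration; the field names are assumptions, not a real API.
posts = [
    {"author": "alice", "text": "Lunch pics", "posted_at": datetime(2021, 12, 6, 12, 30, tzinfo=timezone.utc)},
    {"author": "bob", "text": "New job!", "posted_at": datetime(2021, 12, 7, 9, 15, tzinfo=timezone.utc)},
    {"author": "carol", "text": "Traffic update", "posted_at": datetime(2021, 12, 5, 18, 45, tzinfo=timezone.utc)},
]

def chronological_feed(posts):
    """'Just show the newest posts first' is still an algorithm: a rule for ordering content."""
    return sorted(posts, key=lambda post: post["posted_at"], reverse=True)

for post in chronological_feed(posts):
    print(post["posted_at"].date(), post["author"], "-", post["text"])
```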

Lawmakers might do well to look to the late 1980s, Bambauer said, when Congress enacted legislation requiring credit card companies to provide a cheat sheet summarizing the costs of their cards. The charts with this information, called Schumer boxes, were named after then-Rep. Charles Schumer of New York, who sponsored the legislation.

This could serve as a model, Bambauer said, for informing consumers about the algorithms that they’re using to make decisions. He said algorithm owners could be required to provide plain-language facts about what their algorithms do, such as: “By using an algorithm, we save you money,” or, “By using an algorithm, we make fewer mistakes.”

Bambauer and Risch offer a deeper analysis of their policy recommendations in a recent essay published on TechStream, a Brookings Institution website that covers tech policy.

While policy solutions to address algorithms’ shortcomings aren’t yet clear, Bambauer said the Schumer box shows that lawmakers already have the tools to craft such solutions. He sees a future in which decision-making systems likely involve both humans and algorithms.

“The right thing to do,” he said, “is to figure out a spot where we should have the person and figure out the spot where we should have the code.”


More information: Bambauer, Derek E. and Risch, Michael. Worse Than Human? (July 31, 2021). Arizona State Law Journal, Forthcoming, Arizona Legal Studies Discussion Paper No. 21-22, ssrn.com/abstract=3897126
Provided by University of Arizona

Collected at: https://techxplore.com/news/2021-12-algorithms-decisions.html
