How a paper by three Oxford academics influenced AWS bias and explainability software

SageMaker Clarify helps detect statistical bias in data and machine learning models, and it helps explain why those models make specific predictions. It does this by applying a collection of metrics that assess data and models for potential bias. One Clarify metric in particular, conditional demographic disparity (CDD), was inspired by research done at the Oxford Internet Institute (OII) at the University of Oxford.

The research paper’s authors: Oxford Internet Institute academics Sandra Wachter, left, associate professor and senior research fellow in law and ethics; Brent Mittelstadt, middle, senior research fellow in data ethics; and Chris Russell, a group leader in Safe and Ethical AI at the Alan Turing Institute, and now an Amazon senior applied scientist.

In the paper “Why Fairness Cannot Be Automated: Bridging the gap between EU non-discrimination law and AI”, Sandra Wachter, associate professor and senior research fellow in law and ethics at OII; Brent Mittelstadt, senior research fellow in data ethics at OII; and Chris Russell, a group leader in Safe and Ethical AI at the Alan Turing Institute, and now an Amazon senior applied scientist, “proposed a new test for ensuring fairness in algorithmic modelling and data driven decisions, called ‘Conditional Demographic Disparity’.”

CDD is defined as “the weighted average of demographic disparities for each of the subgroups, with each subgroup disparity weighted in proportion to the number of observations it contains.”
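In symbols, that definition can be sketched roughly as follows (the notation here is mine, not the paper's): writing DD_i for the demographic disparity measured within subgroup i, which is defined in the next quote, and n_i for the number of observations in that subgroup,

```latex
\mathrm{CDD} \;=\; \frac{1}{n} \sum_{i=1}^{k} n_i \, \mathrm{DD}_i ,
\qquad n = \sum_{i=1}^{k} n_i .
```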

“Demographic disparity asks: ‘Is the disadvantaged class a bigger proportion of the rejected outcomes than the proportion of accepted outcomes for the same class?’” explained Sanjiv Das, the William and Janice Terry professor of finance and data science at Santa Clara University’s Leavey School of Business, and an Amazon Scholar.
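As a rough illustration of those two definitions (this is not the SageMaker Clarify implementation, and the column names passed by callers below are hypothetical), the metrics can be computed with a few lines of pandas:

```python
# Illustrative sketch of demographic disparity (DD) and conditional demographic
# disparity (CDD), following the definitions quoted above. Not the SageMaker
# Clarify implementation; the facet/outcome/group column names are hypothetical.
import pandas as pd


def demographic_disparity(df: pd.DataFrame, facet: str, disadvantaged,
                          outcome: str) -> float:
    """DD = (disadvantaged class's share of rejections)
            - (disadvantaged class's share of acceptances)."""
    rejected = df[df[outcome] == 0]
    accepted = df[df[outcome] == 1]
    share_rejected = (rejected[facet] == disadvantaged).mean() if len(rejected) else 0.0
    share_accepted = (accepted[facet] == disadvantaged).mean() if len(accepted) else 0.0
    return share_rejected - share_accepted


def conditional_demographic_disparity(df, facet, disadvantaged, outcome, group) -> float:
    """CDD = average of per-subgroup DDs, each weighted by subgroup size."""
    n = len(df)
    return sum(
        (len(subgroup) / n) * demographic_disparity(subgroup, facet, disadvantaged, outcome)
        for _, subgroup in df.groupby(group)
    )
```

A positive DD (or CDD) suggests the disadvantaged class receives a larger share of the rejections than of the acceptances; CDD asks the same question within each subgroup and averages the answers.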

Das came across the paper during his review of relevant literature while working on the team that developed Clarify.

“I read the first few pages and the writing just sucked me in,” he said. “It’s the only paper I can honestly say, out of all of those I read, that really was a delight to read. I just found it beautifully written.”

The idea for the paper was rooted in research the OII group had done previously.

“Before we did this paper, we were working primarily in the space of machine learning and explainable artificial intelligence,” Mittelstadt said. “We got interested in this question of: Imagine you want to explain how AI works or how an automated decision was actually made, how can you do that in a way that is ethically desirable, legally compliant, and technically feasible?”

In pursuing that question, the researchers discovered that some of the technical standards for fairness that developers were relying on lacked an understanding as to how legal and ethical institutions view those same standards. That lack of cohesion between technical and legal/ethical standards of fairness meant developers might be unaware of normative bias in their models.

“Essentially, the question we asked was, ‘OK, how well does the technical work, which quite often drives the conversation, actually match up with the law and philosophy?’” Mittelstadt explained. “And we found that a lot of what’s out there isn’t necessarily going to be helpful for how fairness or how equality is operationalized. We found a fairly significant gap between the majority of the work that was out there on the technical side and how the law is actually applied.”

RAAIS 2020 – Sandra Wachter, Brent Mittelstadt and Chris Russell, University of Oxford

As a result, the OII team set about working on a way to bridge that gap.

“We tried to figure out, what’s the legal notion of fairness in law, and does it have an equivalent in the tech community?” Wachter said. “And we found one where there’s the greatest overlap between the two: conditional demographic disparity (CDD). There is a certain idea of fairness inside the law that says, ‘This is the ideal way, how things ought to be.’ And this way of measuring evidence, this way of deciding if something is unequal has a counterpart in computer science and that’s CDD. So now we have a measure that is informed by the legal notion of fairness.”

Das said the paper helped him see the appeal immediately.

“I was able to see the value not because I had an epiphany, but because the paper brings it out really well,” he said. “In fact, it’s my favorite metric in the product.”

Das said the OII paper is useful for a couple of reasons, including the ability to discover when something that appears to be bias might not actually be bias.

Sanjiv Das is the William and Janice Terry professor of finance and data science at Santa Clara University’s Leavey School of Business, and an Amazon Scholar.

“It also allowed us to measure whether we were seeing a bias, but the bias was not truly a bias because we hadn’t checked for something called Simpson’s Paradox,” he said. “The paper actually deals with Simpson’s Paradox.” The paradox refers to the fact that trends that appear in aggregated data can weaken, disappear, or even reverse when the data is disaggregated into subgroups.

“This came up with Berkeley’s college admissions in the 1970s,” Das explained. “There was a concern that the school was admitting more men than women and so its admission process might be biased. But when people took the data and looked at the admission rates by school — engineering versus law versus arts and sciences — they found a very strange thing: In almost every department, more women were being admitted than men. It turns out that the reason those two things are reconciled is that women were applying to departments that were harder to get into and had lower admission rates. And so, even though department by department more women got admitted, because they were applying more often to departments where fewer people got admitted, a fewer number of women overall ended up at the university.”

The approach outlined by the OII researchers accounts for that paradox by using summary statistics that condition on a relevant variable, such as the department in the Berkeley example.

“Summary statistics essentially let you see how outcomes compare across different groups within the entire population of people that were affected by a system,” Mittelstadt explained. “We’re shifting the conversation to what is the right feature or the right variable to condition on when you are measuring fairness.”
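To make that concrete, here is a small toy example in the spirit of the Berkeley story above (the numbers are invented for illustration, not the real 1970s admissions data), reusing the hypothetical DD and CDD helpers sketched earlier:

```python
import pandas as pd

# Invented Berkeley-style data: within each department women are accepted at
# least as often as men, but women apply mostly to the harder department.
rows = []
rows += [("men", "A", 1)] * 80 + [("men", "A", 0)] * 20     # dept A: easy, mostly men apply
rows += [("women", "A", 1)] * 18 + [("women", "A", 0)] * 2
rows += [("men", "B", 1)] * 5 + [("men", "B", 0)] * 15      # dept B: hard, mostly women apply
rows += [("women", "B", 1)] * 25 + [("women", "B", 0)] * 75
df = pd.DataFrame(rows, columns=["gender", "department", "accepted"])

# Aggregate acceptance rates suggest a large gap against women ...
print(df.groupby("gender")["accepted"].mean())

# ... and the aggregate demographic disparity is strongly positive,
print(demographic_disparity(df, "gender", "women", "accepted"))

# ... but conditioning on department (CDD) shows the disparity largely vanishes.
print(conditional_demographic_disparity(df, "gender", "women", "accepted", "department"))
```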

The OII team said they are thrilled to see their work implemented in Clarify, and they hope their paper proves useful for developers.

“There is an interest on the part of developers to test for bias as vigorously as possible,” Wachter said. “So, I’m hoping those who are actually developing and deploying the algorithms can easily implement our research in their daily practices. And it’s extremely exciting to see that it’s actually useful for practical applications.”

“The Amazon implementation is exactly the sort of impact I was hoping to see,” Mittelstadt agreed. “You actually have to get a tool like this into the hands of people that will be working with AI systems and who are developing AI systems.”

For more information on how Clarify can help identify and limit bias, visit the AWS SageMaker Clarify page.
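For readers who want to try the metric in Clarify itself, the sketch below shows roughly how a pre-training bias analysis including CDD might be configured with the SageMaker Python SDK's clarify module. It is a sketch under assumptions, not a verified recipe: parameter names and method identifiers (such as "CDDL", the conditional-demographic-disparity-in-labels method) can vary across SDK versions, and the role, session, S3 paths, and column names are placeholders.

```python
from sagemaker import clarify  # SageMaker Python SDK

# Placeholders: substitute a real execution role, session, and S3 locations.
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"
session = None  # e.g. sagemaker.Session()

processor = clarify.SageMakerClarifyProcessor(
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

data_config = clarify.DataConfig(
    s3_data_input_path="s3://my-bucket/train.csv",    # hypothetical path
    s3_output_path="s3://my-bucket/clarify-output/",  # hypothetical path
    label="accepted",
    headers=["gender", "department", "accepted"],
    dataset_type="text/csv",
)

bias_config = clarify.BiasConfig(
    label_values_or_threshold=[1],        # "accepted" is the favorable outcome
    facet_name="gender",
    facet_values_or_threshold=["women"],  # class being checked for disadvantage
    group_name="department",              # the variable CDD conditions on
)

processor.run_pre_training_bias(
    data_config=data_config,
    data_bias_config=bias_config,
    methods=["CI", "DPL", "CDDL"],        # CDDL reports conditional demographic disparity in labels
)
```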


