Cathy O’Neil [ PDF 15 ] Imagine that you’re seeing mathematics. I stole this from my husband’s desk yesterday. Mathematicians use notation like that — hope you can see it — because mathematicians are lazy. Mathematicians use notation as shorthand for much more complicated things that they’d have to write out with words, and it takes too long to do that — and we’re lazy — so we all agree, in our community, what a certain piece of notation means, and that’s why we use notation.

The flip side of that is: if we don’t know what a piece of notation means, if it’s not well defined, people get very angry. In mathematics. “Why are you using that notation? Math is hard enough without you using notation that no one understands.”

The funny thing about notation is that it has almost the opposite effect on non-mathematicians that it has on mathematicians.

Mathematicians, when they look at that, when I look at that — I haven’t done academic mathematics for eight years, but I used to do some stuff similar to this — it’s a story. It’s a narrative. I could spend 15 minutes and tell you a story based on that.

But when other people see it — when non-mathematicians see notation, they get scared or intimidated. They feel like there’s some kind of authority there. There’s some kind of objectivity. Some scientific truth that they are not allowed to question because they’re not experts.

So that Authority of the Inscrutable is translated to algorithms as well — which is what I’ve been studying for the last three years. Actually, for the last eight years: first in finance and then as a data scientist.

What I’ve found is — I basically developed a theory of algorithms that are being used as weapons — through this kind of Authority of the Inscrutable.

I call them Weapons of Math Destruction.

They have five characteristics, which I’ll tell you, because I’m a Math Nerd, so that it’s very well defined.

And then I’ll have time, hopefully, for three examples of these Weapons of Math Destruction.

To be clear, these are algorithms that are used in all sorts of places, and all sorts of industries, but they’re used as a form of social control. They’re not actually helping people, in reality.

What are the characteristics?

They’re secret. They’re opaque. People who are targeted by these algorithms don’t understand how they work, but they are affected by them.

They’re widespread. There are a lot of people involved.

There’s a questionable definition of success — usually the people who are targeted do not agree with the definition of success — which is often something like “saving money” (if it’s a Targeting Algorithm at work).

And they create pernicious feedback loops.

Not only do they have a direct effect on the person targeted, but they create problems. They make problems worse.

Let me give you the first example. This is what interested me. This is how I got involved, in fact. It was the value-added model for teachers.

I don’t know if you guys know about this. It’s a widespread, current education reform algorithm.

If you want to know how widespread it is: people in DC got fired based on bad Value-Added Model scores.

The Chicago teachers strike was largely an argument over how much Value-Added Model scores could be used in assessing teachers.

The idea is that this algorithm, which I’m going to call the VAM, for Value-Added Model, is supposed to hold teachers accountable for good teaching.

My friend, who runs a high school in New York, wanted to understand this. It’s a math and science high school, so she thought she might be able to understand it. She asked her department of education contact to send her information about it.

They said, “Oh, you wouldn’t want to know about it, it’s math.”

She persisted, finally got a White Paper, and showed it to me. It was unreadable and, in fact, too abstract to be useful. So I filed a Freedom of Information Act request to get the source code — which was denied.

I later found out that the Think Tank in Madison, Wisconsin, that is in charge of this model, has a contract. It’s a licensing contract. Nobody gets to see the inside of that model.

Nobody, in the Department of Education in New York City, understands that model.

No teacher gets to understand their score, nor can they improve it, because they’re not told how.

There are statistical problems with that model — which I don’t have time for — but I just want to make the point that we’re talking about accountability for teachers, yet there’s no accountability for the model.

My second example comes from justice: the justice system.

I won’t go into it very much, but there are three tiers, actually kind of four — the bottom layer is the data. The data comes from policing events — which, as we know, are uneven. We have stop-and-frisk in New York. We have other kinds of uneven policing processes in other cities.

The second layer is what’s called Predictive Policing, where they take objective data of events that we know about — usually Nuisance Crimes like smoking pot or jumping the turnstile — and use it to decide where to send police to look for crimes.

There’s bias in the data. There’s bias in predictive policing.
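
To see why that bias compounds, here is a minimal sketch of the feedback loop. It is purely illustrative, with invented neighborhoods, numbers, and an invented allocation rule, not any real department’s or vendor’s system: patrols follow the historical record, and only patrolled crime gets recorded.

```python
# Toy simulation of the feedback loop described above. Illustrative only:
# the neighborhoods, numbers, and allocation rule are invented, not any
# real predictive-policing product.

import random

random.seed(0)

TRUE_CRIME_RATE = 0.10          # same underlying rate in both neighborhoods
recorded = {"A": 50, "B": 10}   # biased history: A was policed more heavily
PATROLS = 200                   # patrol units to allocate each year

for year in range(5):
    total = sum(recorded.values())
    new_counts = {}
    for hood, past in recorded.items():
        # "Predictive policing": allocate patrols in proportion to past records.
        patrols = int(PATROLS * past / total)
        # Crimes only enter the data where patrols are actually looking.
        observed = sum(1 for _ in range(patrols)
                       if random.random() < TRUE_CRIME_RATE)
        new_counts[hood] = past + observed
    recorded = new_counts
    print(year, recorded)

# Even though the true rates are identical, A's recorded count pulls further
# ahead every year, which then "justifies" sending A even more patrols.
```

The model looks like it is working precisely because it generates its own confirming data.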

The next layer is Evidence-Based Sentencing.

When someone is found guilty — and this is assuming they even go before a judge (which often doesn’t happen) — the judge is asked to consider a Recidivism Score to decide how long that person should go to jail.

We can mention things that keep you up at night. This kept me up at night when I learned about it.

Some of these models that are being used now (and again, this is widespread; 22 states use these things, state by state) have attributes that look at things like whether you finished high school, whether you have a job or whether you will have a job on leaving jail or prison, and even whether your father was in jail.

This is stuff that wouldn’t be considered acceptable if a lawyer brought it to the judge. It’s unconstitutional. But again, because it’s cloaked behind mathematical obscurity, it lacks accountability.
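
To make concrete what it means for a model to use attributes like these, here is a toy, checklist-style score. Every attribute and weight below is invented for illustration; the real models are proprietary, which is exactly the accountability problem.

```python
# Toy, checklist-style "recidivism score". All attributes and weights are
# invented for illustration; real scoring models are proprietary.

def recidivism_score(finished_high_school: bool,
                     has_job_on_release: bool,
                     father_was_incarcerated: bool) -> int:
    """Higher score means 'higher risk' in this toy model."""
    score = 0
    if not finished_high_school:
        score += 2
    if not has_job_on_release:
        score += 2
    if father_was_incarcerated:
        score += 1
    return score

# Two people convicted of the same offense can land in very different
# "risk" bands based purely on circumstances of poverty and birth.
print(recidivism_score(True, True, False))    # 0 -> "low risk"
print(recidivism_score(False, False, True))   # 5 -> "high risk"
```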

My third example where again . . .

I just want to finish on the last one. I just want to make the point that models are embedded opinions. They’re embedded historical opinions, historical practices. So unless we specifically make sure that the models do not unfairly punish poor people or black people, we will end up with models that do.

That’s what we’re seeing.

The third example, in contradiction to the last speaker, is Micro-Targeting in Politics. [Especially] for me, it is one of these weapons, and I want to make that case.

First of all micro-targeting in politics is powerful. It succeeds. We know that from Micah’s explanation of the Facebook Get-Out-the-Vote targeting. We know it’s powerful.

And even though it seems kind of banal at a micro level, the actual goal of Micro-Targeting is to understand you as a consumer, as a voter, and to show you what the campaign wants you to see.
They’re having more and more success learning about your profile, testing people, testing out messages on people like you, and giving you exactly what they think will make you believe the candidate.

If Rand Paul were doing that for me, he would convince me that I like him because I agree with him on financial reform. But I don’t agree with him on a host of other things, right?

But the point is that there are all these ways of tracking you around the web as you go around, [including] political cookies. If I go to his website he’d say, “Oh, I know Cathy. Show her the thing that she likes about me.”
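
Here is a minimal sketch of that mechanism, with invented profiles, issues, and messages rather than any campaign’s real targeting stack: the site reads a tracked profile and serves only the pitch it predicts will resonate with that visitor.

```python
# Toy message selection for micro-targeting. The profiles, issues, and
# messages are invented; this is not any campaign's actual system.

PROFILES = {
    "cathy": {"cares_about": "financial_reform"},
    "visitor_2": {"cares_about": "foreign_policy"},
}

MESSAGES = {
    "financial_reform": "The candidate will rein in the big banks.",
    "foreign_policy": "The candidate will avoid foreign entanglements.",
    "default": "Vote for the candidate.",
}

def pick_message(visitor_id: str) -> str:
    """Serve the message predicted to resonate, based on the tracked profile."""
    issue = PROFILES.get(visitor_id, {}).get("cares_about", "default")
    return MESSAGES.get(issue, MESSAGES["default"])

print(pick_message("cathy"))      # sees only the financial-reform pitch
print(pick_message("visitor_2"))  # sees only the foreign-policy pitch
print(pick_message("unknown"))    # untracked visitors get the generic line
```

Each visitor sees a different slice of the candidate; nobody sees the whole platform, which is what makes this efficient for the campaign and inefficient for democracy.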

So what I’m saying there is that it is efficient for a given campaign. Campaigns will brag about how good they are at it.

It’s an arms race between Republicans and Democrats, and between different campaigns. But, at the end of the day, what is efficient for campaigns is actually inefficient for democracy.

I would argue that they also threaten democracy, as well as increase inequality — they threaten democracy because part of living in a democracy is understanding the rules.

And these algorithms, I’ve only mentioned three, are a set of secret rules. They don’t affect everyone.

Other algorithms that I look at in my book correspond to getting a job (the personality tests that people applying for hourly-wage jobs are subjected to) and to having a job (the surveillance on the job, that Just-in-Time Scheduling). They all use big data.

They all punish certain people and not others.

So one thing I want to think about, when we think about Civic Tech, is that we have a destructive force. We have lots of great ideas. We have things that improve lives for the many. We also have these other things, which I have been discussing — Weapons of Math Destruction — that make certain people — certain sub-populations — suffer in unreasonable ways.

So we have to sort of add these effects up, and what we see is unequal — unequal effects of technology and modeling — especially if you’re a minimum-wage worker.

I also want to make the point, which Cory made to me when I talked to him last night, that these are all really hard problems.

How do you make education better?

How do you decrease mass incarceration?

Many of these things are truly good intentions gone awry.

The system that existed before the Value-Added Model for teachers just got every teacher who didn’t have enough passing students in trouble. Right? But that was clearly, on the face of it, unfair to teachers of struggling students. So they replaced it with this model — but it’s actually not better.

In fact, it’s harder to understand and that makes it harder to acknowledge that it’s failing.

So there’s lots of good intentions here that aren’t succeeding. We have to think about how to solve those problems, as a society, and not how to rely on obscure mathematical answers that actually don’t pan out.

My final statement is that this is not, for me, a question of privacy.

The people I’m talking about: the teachers, the prisoners, the people who are trying to get a job, the people who have a job. They don’t even have the right to privacy in those contexts.

This is really about social justice.

This is about understanding how these algorithms — which are secret rules — affect them in their lives and narrow or open up the options available to them and to all of us.


O’Neil attended UC Berkeley as an undergraduate, received a Ph.D. in mathematics from Harvard University in 1999, and afterward held positions in the mathematics departments of MIT and Barnard College, doing research in arithmetic algebraic geometry. She left academia in 2007, and worked for four years in the finance industry, including two years at the hedge fund D. E. Shaw. After becoming disillusioned with the world of finance, O’Neil became involved with the Occupy Wall Street movement, participating in its Alternative Banking Group.

She is a co-author (with Rachel Schutt) of Doing Data Science: Straight Talk from the Frontline. She also wrote an e-book On Being a Data Skeptic. Her book Weapons of Math Destruction was published in 2016 and has been nominated for the 2016 National Book Award for Nonfiction.

FEATURED IMAGE CREDIT: lehman_11


4 thoughts on “Weapons of Math Destruction”

  • 27/10/2016 at 10:12

    Nice O’Neil interview in today’s Guardian

    When someone is classed as “high risk”, they’re more likely to get a longer sentence and find it harder to find a job when they eventually do get out. That person is then more likely to commit another crime, and so the model looks like it got it right.

    Ultimately algorithms reinforce discrimination and widen inequality

  • 29/10/2016 at 10:46

    Re: assumptions that standardised test scores are a fair measure of aptitude.

