Bad Epistemology: How Wrong Ideas about Knowledge Can Ruin Decisions

Let’s do a quiz!

What do these things have in common?

  1. My happiness level;
  2. The job performance of a knowledge worker;
  3. The societal engagement of young people;
  4. How democratic a country is.

Well?

‘Things’ like happiness, job performance, democracy, and societal engagement don’t exist in a concrete sense. You can’t see or touch a happiness-particle or a democracy-atom. The things mentioned in 1–4 exist, yet are not “there”. In philosophy, we call them abstract objects.

They raise fascinating epistemological issues. ‘Raising epistemological issues’ is philosophers’ jargon for: it’s not clear how we can know stuff about this stuff we can’t perceive, because they are not objects that ‘are there’.

How can Amnesty know how democratic a country is? How can a manager know how well the new hire is performing? How can a teacher deduce how societally engaged his students are?

This time, let’s put it in terms of a multiple choice question:

  • A) These are not concrete things I can put under the microscope or perceive directly. Therefore you can’t really know how democratic a country is, in the way you can objectively determine the weight of a rock, for example.
  • B) Nah man they’re the same kind of thing as rocks — can’t we measure happiness and job performance and societal engagement objectively, in the same way as we can weigh rocks?

Many people, we call them “managers”, believe, or want to believe, that we can get the same kind of knowledge about these things as we can about concrete objects, as claimed in answer ‘B’.

Management by spreadsheet

How does that work, given that the objects aren’t there, you ask?

Take the example of the societal engagement of schoolkids. First you recruit some sensible and nuanced educational researchers to prepare a test to measure this variable. Then you ask schools all over the country to distribute the questionnaire. And finally you run statistics on the survey results, write a fancy report, et voilà: you have discovered a fact about the societal engagement of youngsters.

Why would this be any different from facts about the weights of rocks?

Okay, we have to hope that the researchers prepared valid and comparable questions in the best possible way, and we also have to hope that our students understood the questions and answered them properly.

But let’s stipulate that all went well.

What I want to convince you of is that, even then, it would be wrong to say you have discovered previously located information about young people’s societal engagement. The attempts to objectively measure stuff like job performance, intelligence, societal engagement, and so forth, encourage the pursuit of a type of knowledge that I believe we cannot strive for in the first place. They also have all kinds of fucked up consequences for the way we run our schools, companies and healthcare.

These proxies are not knowledge

It’s part of common sense that it would be bad if we didn’t have data on how productive Joe from IT has been on Tuesday afternoon between 2 and 3. And terribly undesirable if we don’t have an accurate image of how societally engaged our youngsters are.

If we don’t have numerical representations of these things, we can’t take action accordingly. If we don’t have information on Joe’s productivity or on young people’s societal engagement, managers and policymakers are powerless. If I can’t measure it, I can’t manage it.

So we rely on proxies. Joe needs to send fifty e-mails and be at the office for eight hours each day. That’ll suffice. And we devise all kinds of questionnaires and surveys to tell us how “societally engaged” kids are these days.

It’s a harmful mistake to require numerical proxies (questionnaire results) before we can know and do something about the real thing (societal engagement).

I’d like you to consider two things. Could it be that (1) this reflex is more an act of creating some data point than tapping into previously located information about Joe and our youngsters that was already there for management consultants and visitation committees to come along and detect? And (2) could it be that making up proxies to measure, “know” and manage these unknowable things is not helping anyone?

They are just averages

Because I care about education, let’s stick with the example of the societal engagement of children. It’s widely believed that schools ought to make sure that children are societally engaged. So when some report announces, after a survey process like the one described above, that kids’ societal engagement has decreased, shit will hit the fan. Schools aren’t fulfilling one of their tasks.

Politicians will speak. Reports will be written. Emotions will be had.

But think about this announcement for a bit. When some such questionnaire “reveals” this trend, what it’s actually saying is that, on average, the later cohort of school kids scores lower on the questions than some earlier cohort. Hence the conclusion that societal engagement is going down.

First of all, keep in mind that this is a comparison based on averages of populations. Comparison of group averages might tell us that the earlier cohorts scored higher, but does that merit an inference to individuals? Do we know anything about Joe from the 2016 cohort or Jane from the 2018 cohort?

No, we do not.

One of the big mistakes people make about data is that information about averages gives you knowledge about individuals.

Here’s a toy example.

Imagine a group of two persons. One is 80 centimeters tall, and the other is 20 centimeters tall. The average height of this group, therefore, is 50 centimeters. But no individual in that group is even close to that height.

You can know the average of a group without thereby knowing about the individuals in the group.
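The toy example above can be checked in a few lines of code (a minimal sketch; the numbers are just the ones from the example):

```python
# Two people: 80 cm and 20 cm tall.
heights_cm = [80, 20]

# The group average is 50 cm...
average = sum(heights_cm) / len(heights_cm)
print(average)  # 50.0

# ...yet every individual is a full 30 cm away from that average.
gaps = [abs(h - average) for h in heights_cm]
print(gaps)  # [30.0, 30.0]
```

The average is a perfectly correct fact about the group while describing no one in it.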

But this is actually just a side point.

They are also *meaningless* averages

The more important point is that this information doesn’t tap into anything in the real world, but just creates some data point that’s spinning in the void, not corresponding to anything in reality.

Because, what would it be corresponding to?

Claims about abstract objects such as societal engagement don’t make an empirical claim to factual truth in the first place. We have no idea what such a factual truth would look like. The conditions under which such a statement would report knowledge cannot be articulated at all.

To put it more philosophically: there is no way to say in more fundamental terms what it is for some child to be societally engaged, in the sense in which there is a way to say in more fundamental terms what it is for a rock to have a certain weight, or for a function to be continuous, or for a thing to be made of gold.

Things like societal engagement and job performance are abstract objects that aren’t fit for true knowledge claims or authoritative announcements after quasi-scientific questionnaires. It’s not at all clear they’re getting at something in reality. They are Fata Morganas that we desperately crave because we are afraid to make decisions and trust people in the absence of quantified metrics.

Downside 1: a perverse reversal of priorities

All these surveys yield is a meaningless number, an empty shell that allows everyone to hide behind statistics. This leads to a situation where the numbers have become more important than the thing they were supposed to measure.

Much of this so-called ‘data’ has little meaning or application outside the parameters of the audit. It is fake and only functions as part of the neoliberal carousel.

As Finnish researcher Eeva Berglund observed:

“The information that audit creates does have consequences even though it is so shorn of local detail, so abstract, as to be misleading or meaningless — except, that is, by the criteria of audit itself.”

Since there is no connection between reality and the metric, no one knows, or can know, how to change reality to achieve the desired change in the metric. However, there is an enormous focus on prevention, and funding depends on meeting these nasty output criteria. Therefore, it becomes all about getting the metrics in order, regardless of changes in reality. Making sure our kids say the right things on the questionnaire that measures their societal engagement takes priority over actually molding them into engaged participants of society.

Or, to put the same point differently: what we have is not a direct comparison of schools’ performance at getting kids societally engaged, but a comparison between the audited representation of that performance and the output. Inevitably, a short-circuiting occurs, and work becomes geared towards the generation and massaging of representations rather than the ‘in-reality’ goals of the school itself (which are of course not to change the metric but to foster societal engagement in kids).

The metric ends up replacing the real thing. Now all of a sudden there’s something ‘there’, something we can track. That means we can base decisions on it, and improve on it. But it is not at all clear that such an “improvement” is meaningful, that it corresponds to something in reality.
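The short-circuiting described above can be made vivid with a toy simulation (purely illustrative; the student model, the numbers, and the idea that coached answers inflate questionnaire scores are my own assumptions, not real data):

```python
import random

random.seed(0)

# Hypothetical model: each student has a "true" engagement level, plus an
# amount of coaching in giving the expected questionnaire answers.
students = [{"engagement": random.uniform(0.0, 1.0), "coaching": 0.0}
            for _ in range(100)]

def proxy_average(students):
    # The questionnaire can't tell real engagement from coached answers.
    return sum(s["engagement"] + s["coaching"] for s in students) / len(students)

def true_average(students):
    return sum(s["engagement"] for s in students) / len(students)

proxy_before, truth_before = proxy_average(students), true_average(students)

# The school "improves the metric" by teaching to the test.
for s in students:
    s["coaching"] += 0.5

proxy_after, truth_after = proxy_average(students), true_average(students)

print(proxy_after > proxy_before)    # True: the metric went up
print(truth_after == truth_before)   # True: reality didn't change at all
```

The point of the sketch: once the proxy can be moved by something other than the real thing, optimizing the proxy and improving reality come apart completely.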

Downside 2: opportunity costs

The drive to assess the performance of teachers and to measure forms of labor that are resistant to quantification has, moreover, inevitably required additional layers of management and bureaucracy. This brings with it massive opportunity costs in terms of both money and time.

In terms of money, while governments pump massive amounts of money into education and healthcare, too much of that mysteriously doesn’t get to the teachers and caregivers but gets stuck at the level of middle-management.

In terms of time, a case study of a local government in Britain concludes that “more effort goes into ensuring that a local authority’s services are represented correctly than goes into actually improving those services.”

I just fell off my chair.

No, I don’t have data to back that up

This was a long rant, so here’s a summary.

It’s part of common sense that we need metrics to make sure teachers do their job and that they will fuck up if managers don’t have data regarding their every movement.

Far from being natural, however, this idea is a consequence of a flawed Keynesian theory of human nature. But since we’re deeply convinced that we need that data, yet it wasn’t there to be found, we went and created some instead. We would like to know about these unknowable things so badly that we pretend there’s some fact about societal engagement ‘there’, which our surveys tap into.

Now we can finally start managing teachers!

According to Campbell’s Law:

“The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.”

The illusion of knowledge about abstract objects leads to a perverse reversal of priorities. The system cares about the representation of everything in metrics, at the expense of the actual personal development of kids.

There is so much ‘scientific expertise’, there is so much fear of what could go wrong. So we hide behind methodologies, behind our tracking systems, behind the delusion of scientifically informed policy.

“But,” you say, “without these metrics, don’t teachers and principals lose their safe epistemic grounds for choices and decisions about (for example) the societal engagement of children? Doesn’t that follow from what you’re saying?”

Yes, that does indeed follow. But I’m denying that not having numbers to base these decisions on is a problem.

Socialization and citizenship-formation are things you do, together, with each other, by working on a morally good climate in the school. For that, you need personal commitment, openness, cordiality, honesty, courage, and a good sense of the human measure.

To be grounded, these decisions need wisdom, trust, and common sense, not fake and super-costly data.

We should allow the kind-hearted teacher to stand up for his own common sense, his own compassion, his own sensitivity, and his own professional independence.

In short, you need virtue, and that is really in a different register than knowledge.
