Head Space

Four questions for making sense of contradictory information

I recently realised how much coffee I drink (about 4-5 cups on most days, if you’re interested), and worried it might be too much. So I found myself staring at three articles about whether coffee is good for you. One said it prevents disease. Another linked it to anxiety. The third suggested the benefits depend on your genes. All published recently. All citing research. And I just wanted to know whether I should drink the coffee.

This is the problem we’re all facing now: not information scarcity, but contradictory information from credible-sounding sources. And it occurred to me that I already have the toolkit I need to make sense of it.

Four questions that matter

When you come across contradictory sources in any area of your life, don’t just pick the most convincing headline (or the one most aligned with what you’d prefer to be true). Ask these questions in sequence:

1. Who’s making this claim and why?

The study showing coffee’s benefits might be funded by a coffee industry group. The one linking it to anxiety might come from researchers studying sleep disorders, where they’d naturally encounter people sensitive to caffeine. Neither is automatically wrong, but knowing who’s asking the question helps you understand the answer.

2. What’s the actual evidence?

The headline says “coffee linked to anxiety”, but the study might show a small effect in people drinking six espressos daily. The research on disease prevention might follow people for thirty years and find a 12% reduction in certain conditions. The strength and type of evidence changes what you do with it.

3. What are they comparing it to?

When a study says coffee drinkers live longer, longer than whom? People who drink tea? People who gave up coffee to avoid caffeine? The comparison group shapes the finding.

4. What are they not telling you?

Research on coffee and heart health might adjust for exercise and diet, or it might not. Studies on cognitive benefits might measure short-term alertness but ignore sleep quality. The gaps matter as much as what’s there.

The same questions work everywhere

The method and questions don’t change. Only the subject does.

Say you’re comparing schools for your kids and every website claims to be excellent. Run the same four questions:

  • Who benefits from this claim? (League tables reward certain metrics.)
  • What’s the actual evidence? (Define what matters to you first, then look for specific data on those measures.)
  • What’s the comparison? (Top performer compared to what baseline?)
  • What’s missing? (High test scores might coincide with high stress levels they’re not advertising.)

Or you’re evaluating business software with contradictory reviews:

  • Who’s reviewing? (Competitors, actual users, paid promoters?)
  • What evidence? (Speed improvements versus error rates.)
  • Comparison? (Against what baseline?)
  • Omissions? (Implementation costs, learning curve, support quality.)

Why the sequence matters

You might be tempted to skip straight to “what’s the evidence” because it feels most objective. But without knowing who’s making the claim, you can’t properly evaluate the evidence they’ve chosen to present. A software company’s internal testing and an independent review might both show “improved performance”, but what they measured and how they measured it will differ.

Jumping to “what are they not telling you” before understanding the comparison is equally problematic. You might notice they haven’t mentioned costs, but if you don’t know what they’re comparing their solution to, you can’t assess whether the omission matters. Are they comparing to a free alternative or an enterprise system ten times the price?

The sequence builds. Each question creates context for the next one. When you skip steps or rearrange them, you lose that scaffolding.

Common mistakes people make

The biggest error is treating “who’s making this claim” as a shortcut to dismissal. Industry funding doesn’t automatically invalidate research. Academic researchers have career incentives too: publishing novel findings, contradicting established views, securing future grants. The question isn’t whether someone has incentives. Everyone does. The question is whether those incentives are visible and whether you can see how they might shape what’s been presented.

Another mistake is confusing “what’s the actual evidence” with “how much evidence is there”. Ten weak studies don’t outweigh one rigorous long-term trial. A meta-analysis of poorly designed research is still poorly designed research. You’re looking for the quality and type of evidence, not just the quantity.

People also struggle with “what are they not telling you” because it feels speculative. You’re not guessing at conspiracies. You’re noticing structural gaps. If a school advertises exam results but not university admission rates, that’s not hidden information; it’s absent information. The question is whether that absence tells you something useful.

A way of thinking about the world

These four questions aren’t just a decision-making tool. They’re a stance towards information itself.

Most of us default to one of two positions when facing contradictory sources: either we pick the most persuasive-sounding option and defend it, or we throw our hands up and say everything’s equally uncertain. The systematic approach offers a third option: staying in the uncertainty while still making progress through it.

This is what scholarship actually is. Not certainty or expertise in everything, but a method for working through incomplete and contradictory information without pretending it’s simpler than it is. You don’t need to become an expert in coffee research or education policy or software architecture. You need a way to evaluate expert claims when the experts disagree.

That shift, from seeking the right answer to systematically evaluating competing answers, changes how you approach information everywhere. You stop looking for the source that settles the question and start looking for sources that show their working. You notice when evidence is strong and when it’s suggestive. You spot the comparison groups and the gaps.

It doesn’t make you right about everything. It makes you more honest about what you’re basing decisions on.

What these questions give you

You won’t find perfect certainty because nobody has that (contrary to what the headlines suggest). But you can make defensible decisions under conditions of uncertainty so that when someone questions your choice, you can explain your reasoning systematically.

The next time you face contradictory information, write down your answers to these four questions and you’ll notice patterns emerging. Some sources consistently show their working. Others rely on dramatic language and vague claims. That awareness alone may be enough to change how you evaluate information.

The tools we use as scholars aren’t special. They’re just systematic ways of handling uncertainty when the stakes are real, and they can be used by anyone as part of an everyday scholarship.

