Confirmation bias


Summary: This article shows that confirmation bias - our cognitive bias towards interpreting, seeking, and remembering information in a way that confirms, or helps to confirm, what we currently believe - is simply a logical consequence of belief, given the certainty of belief. This explanation contrasts with all others, which attribute the bias to some aspect of our nature rather than to logical necessity. The article also counters several possible objections, and briefly analyses the implications of the certainty of belief, and the resulting confirmation bias, for belief persistence.

1 Introduction


Confirmation bias is a cognitive bias towards interpreting, seeking, and remembering information in a way that confirms, or helps to confirm, what we currently believe.[1]

The obvious explanation for this bias is that our supposed ego creates a bias against concluding that we're wrong. However, psychologists have proposed other theories, some of which hold, for varying reasons, that the bias is the product, or by-product, of a cognitive heuristic.

However, as the following chains of reasoning show, confirmation bias isn't due to some aspect of our nature, as all of the current theories claim, but is simply a logical consequence of belief. Given the logical necessity of confirmation bias, it's an inevitable feature of not just human cognition, but the cognition of any intelligence anywhere.

2 Interpreting information


Consider first our interpreting of information:

  1. To try to interpret information is to try to make an inference from it.
    • For example, we often try to interpret information about someone’s behaviour - whether the information is first- or second-hand - in order to make an inference about their thoughts and emotions during the behaviour.
  2. An inference is the conclusion of reasoning.
  3. A conclusion of reasoning is, by definition, premised on what the reasoner believes at the moment of the conclusion.
  4. Therefore, to try to interpret information is, by definition, to try to form a belief from it that's consistent with what we believe at the moment of the interpretation.
    • For example, to try to interpret someone’s behaviour, in order to make an inference about their thoughts and emotions during the behaviour, is to try to form a belief that's consistent with what we believe - about the person, their situation, human psychology in general, and so on - at the moment of the interpretation.
  5. Trying to form, from information, a belief that's consistent with what we believe at the moment of its formation may involve first changing what we believe.
    • For example, consider observing someone, who you believe to have a selfish character, giving money to charity. You may, given your belief, interpret this act as being motivated solely by self-interest - such as the desire to not seem uncharitable. But you may instead, given your observation, reverse your belief about the person’s character, and interpret their act as being motivated by altruism. In either case, your interpretation of the behavioural information is consistent with what you believe at the moment of the interpretation.
  6. However, belief is certainty, and when we're trying to interpret information, our attention naturally tends to be focused on both that information and the apparent certainties that are the content of our beliefs, but not on our belief of that content.
  7. Therefore, when trying to interpret information, we tend to only consider the possibility that one of our beliefs is false if we're having difficulty fitting the information to it.
    • If belief wasn't necessarily certainty, then our doubt regarding a relevant sub-certain belief could also motivate such a consideration.
    • In the example of the charity-giver, you'll tend to only consider the possibility that your belief about their character is wrong if you're having difficulty thinking of at least one possible reason why they would choose, despite their selfishness, to give money to charity.
  8. And, when trying to interpret information, it's often not difficult to think of at least one interpretation that's consistent with what we currently believe, regardless of whether our relevant beliefs are true.
    • In the example, given your belief, it shouldn't be difficult for you to think of a way of interpreting the donation as being motivated solely by self-interest - such as the desire to not seem uncharitable.
  9. Therefore, we're biased towards interpreting information in a way that's consistent with what we currently believe - as opposed to first changing what we believe to fit the information - simply as a logical consequence of belief.
    • In the example, you will, given your belief, be biased towards interpreting the donation as being motivated by self-interest - such as the desire to not seem uncharitable - as opposed to reversing your belief, and interpreting the act as altruistic.
  10. And given that an interpretation of information is itself a belief, we'll be certain of it too.
  11. Of course, a certainty may be consistent with contrary possibilities.
    • For example, consider your certainty that the charity-giver was motivated by self-interest. Although this is consistent with your belief that they have a selfish character, it's also consistent with them having been motivated by uncharacteristic selfishness.
  12. Therefore, although the consistency of a certainty with one of our other beliefs will at least help to confirm that other belief, it won't necessarily itself constitute such confirmation.
  13. However, in the case of an interpretation of information, given that it was premised on our other relevant beliefs, this certainty consists of something being an instance of those beliefs.
    • In the example, given your belief, you're actually biased towards interpreting the donation as being motivated not simply by self-interest, but by their apparent selfish character.
  14. And a certainty that's not simply consistent with, but an instance of, one of our other beliefs constitutes, by definition, confirmation of that other belief.
    • In the example, the certainty of the charity-giver’s act being motivated by their selfish character constitutes, by definition, confirmation of that character.
  15. Therefore, we're biased towards interpreting information in a way that confirms what we currently believe, simply as a logical consequence of belief (see the sketch below).
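
This dynamic can be sketched as a small, purely illustrative Python toy model of points 7, 8, 14 and 15, using the charity-giver example. The interpret function, the belief strings, and the candidate readings below are hypothetical constructs introduced only for illustration, not anything drawn from the article's sources: an agent that treats its beliefs as certainties accepts the first belief-consistent reading of an observation, and such a reading, being an instance of a belief, counts as confirmation of it.

    def interpret(observation, beliefs, candidate_readings):
        """Pick a reading of `observation`, preferring ones consistent with `beliefs`.

        `candidate_readings` maps each possible reading of the observation to
        the belief it would be an instance of.
        """
        # Points 7 and 8: a reading that fits a current belief is accepted first,
        # because the agent's certainty gives it no reason to question that belief.
        for reading, instantiated_belief in candidate_readings.items():
            if instantiated_belief in beliefs:
                # Point 14: a reading that is an instance of a belief counts,
                # by definition, as confirmation of that belief.
                return reading, "confirms: " + instantiated_belief

        # Only if no reading fits (the "difficulty" in point 7) does the agent
        # adopt a new belief - here, simply the belief instantiated by the first
        # candidate reading (the second branch of point 5).
        reading, new_belief = next(iter(candidate_readings.items()))
        beliefs.add(new_belief)
        return reading, "belief revised to: " + new_belief


    # The charity-giver example from the chain of reasoning.
    beliefs = {"this person has a selfish character"}
    candidate_readings = {
        "they donated so as not to seem uncharitable": "this person has a selfish character",
        "they donated out of altruism": "this person has an altruistic character",
    }
    print(interpret("the person gives money to charity", beliefs, candidate_readings))
    # The self-interested reading is chosen, and the prior belief is (apparently)
    # confirmed, even though the altruistic reading fits the observation equally well.

If the selfish-character belief is replaced with the altruistic-character belief, the altruistic reading is chosen and that belief is (apparently) confirmed instead: the interpretation is driven by whichever beliefs are held at the moment of interpretation, not by the observation alone.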


2.1 Noticing this aspect of confirmation bias

Although this aspect of confirmation bias is a logical, and therefore inevitable, consequence of belief, we tend not to notice instances of it, for several reasons.

As point 7 states, when trying to interpret information, we tend to only consider the possibility that one of our beliefs is false if we're having difficulty fitting the information to it. And as point 8 states, when trying to interpret information, it's often not difficult to think of at least one interpretation that's consistent with what we currently believe, regardless of whether our relevant beliefs are true. Therefore, whether we consider the possibility that one of our beliefs is false, when trying to interpret information, will tend to depend on the nature of the information and of our current beliefs, and on our intelligence and imagination. However, even if, when trying to interpret information, we consider the possibility that one of our beliefs is false, and even if we then change that belief to fit the information, we do so in spite of this aspect of confirmation bias, not because it's absent.

Also, some information may be so obviously irreconcilable with our beliefs that we may, given the speed of the brain, conclude this within a fraction of a second of receiving that information, and we'll therefore likely not even have any recollection of our initial attempt to fit the information to our current beliefs. And even if an attempt to fit some information to our current beliefs is apparently successful, the speed of the brain also means that we may subsequently notice an irreconcilable inconsistency between that information and those beliefs within a further fraction of a second, and we may then quickly forget our very brief initial interpretation. The speed of the brain also means that we may sometimes interpret information in a way that's contrary to, and therefore replaces, one of our beliefs before the original belief has time to enter our thought processes and thereby influence that interpretation. But even then we must, given the logic of the above chain of reasoning, have been biased towards interpreting the information in a way that confirmed at least one of our other current beliefs.

Also, even when we successfully fit information to what we currently believe without considering the possibility that any of those beliefs is false, and such interpretations endure for much longer than a fraction of a second, we still tend to fail to notice this bias, for the following four reasons:

  1. Given the certainty of belief, and that our interpretation is a belief, we tend to not be motivated to analyse, and therefore question, its formation. And such an analysis also doesn't, of course, occur involuntarily upon the formation of our interpretation - indeed, that would lead to infinite layers of analysis, given that the conclusion of such an analysis would itself be an interpretation of information about the formation of the interpretation under analysis.

    This explains why it's normally easier for us to notice other people failing, when interpreting information, to consider the possibility that one of their beliefs is false than it is for them, or than it is for us to notice ourselves doing so: observing someone else interpreting information involves, by definition, thinking about that process.

    Therefore, although we can analyse the formation of one of our enduring interpretations of information whenever we want to, we tend not to. And we're less likely to do so the more time that has passed since the interpretation was formed, the more mundane its subject, or the more time-pressured we are. And however obviously biased our interpretations of information may be, we won't notice the bias unless we analyse their formation.

  2. Even if we eventually change one of our enduring interpretations of information, we still won't automatically analyse the formation of the original interpretation. And, again, we're less likely to do so the more time that has passed since the original interpretation was formed, the more mundane its subject, or the more time-pressured we are.
  3. Even if we do analyse the formation of one of our enduring interpretations of information, past or present, we may simply assume that we must have considered the possibility that at least one of our relevant beliefs at the time was false. That is, given that we're analysing, and therefore questioning, the formation of the interpretation, the possibility that at least one of our relevant beliefs at the time was, or is, false will naturally be likely to enter our mind. And if it does, we may then wrongly assume that it would have also done so at the time of the interpretation. And we're more likely to make this assumption the more time that has passed since the interpretation was formed, the more significant its subject, or the more time-pressured we are.
  4. Even when we do realise the biased nature of one of our enduring interpretations of information, past or present, we don't realise, given the previous three points and the points preceding them, that this bias permeates our reasoning.


2.2 Objection 1

It might be objected that this explanation of this aspect of confirmation bias requires us to undertake obviously circular reasoning. That is, it requires us to conclude that our interpretation of some information confirms the same beliefs on which the interpretation was premised. This is logically equivalent to concluding that these beliefs are true because they're true. By definition, our beliefs can't be confirmed via reasoning that's premised on them. And given the obviousness of such circularity, we would almost always either foresee it, and therefore not reach such an unsound conclusion, or at least notice it immediately after doing so, and therefore then unbelieve that conclusion within a fraction of a second of forming it. Therefore, this aspect of confirmation bias can't be explained by belief.

However, the first of the above four points about why we tend to fail to notice the biased nature of even an enduring biased interpretation of information similarly explains why we also tend to fail to notice this circularity. That is, given the certainty of belief, and that our interpretation is a belief, we tend to not be motivated to analyse its formation, however recent. And however obviously circular the apparently confirmatory nature of our interpretation may be, we won't notice that circularity unless we analyse its formation. This explains why it's normally easier for us to notice the circularity of other people’s apparently confirmatory interpretations of information than it is for them, or than it is for us to notice it in our own such interpretations: observing someone else interpreting information involves, by definition, thinking about that process.

2.3 Objection 2

It might also be objected that our knowledge of our fallibility neutralises the potential effect of our certainties on how we interpret information. Therefore, this aspect of confirmation bias can't be explained by belief.

However, our knowledge of our fallibility, as with any knowledge, doesn't mean that our fallibility is always, or even often, on our mind. And, as point 6 in the chain of reasoning states, when we're trying to interpret information, our attention naturally tends to be focused on both that information and the apparent certainties that are the content of our beliefs, but not on our belief of that content. Therefore, although our fallibility can enter our mind at any moment as we're trying to interpret information, it tends not to. Also, even when it does, such awareness will only have an effect regarding those of our beliefs that we consider in relation to it. All of this is why point 7 states that, when trying to interpret information, we tend to only consider the possibility that one of our beliefs is false if we're having difficulty fitting the information to it.

But the more fundamental counter to this objection is, of course, that even when our fallibility, regarding one of our beliefs, occurs to us, this awareness also constitutes, by definition, a degree of doubt about the claim in question, however small. And so such awareness would actually not neutralise the potential effect of our certainty on how we interpret information, but would simply end that specific certainty, and therefore our specific belief - and confirmation bias occurs, by definition, with respect to what we believe. We may then at least conclude that the claim in question is almost certainly true - and our attention won't necessarily then turn from the content of this new belief to our belief of that content, and back to our fallibility. And we may then mistake this new belief for our sub-certain belief of the original claim, and then wrongly conclude that our subsequent non-confirmatory interpretations of information, regarding this apparent belief, constitute an absence of this aspect of confirmation bias. And our original belief may return soon after our awareness of our fallibility has passed, and possibly for the same reason that it originally formed.

3 Seeking information


Now consider our information seeking:

  1. To want to test a claim is, by definition, to want to try to determine whether it's true.
  2. Wanting to determine whether a claim is true involves, by definition, being uncertain about it.
  3. Therefore, given the certainty of belief, our information-seeking can't, by definition, be aimed at testing our beliefs.
    • That is, in order to want to test a claim that we believe, we must first cease believing it.
    • However, we may then at least believe that the claim in question is almost certainly true, which we may then mistake for our sub-certain belief of the claim. And we may then wrongly conclude that we're seeking information in order to test this apparent belief.
    • For example, if we want to know which retailer is selling a particular household appliance at the lowest price, and believe that online retailers are always cheaper than shops, then, while our belief exists, we'll want to check the former’s prices, but not the latter’s.
  4. Therefore, our information-seeking is biased against finding information that disconfirms our beliefs, simply as a logical consequence of belief.
    • In the example, if a local shop is actually selling the appliance for less than the online retailers, then we'll be biased against discovering this.
    • This is merely a bias because information-seeking that's not aimed at testing our beliefs may of course still happen to find information that disconfirms one of them. This includes cases when we seek information in a way that incidentally tests one of our beliefs.
    • In the example, while looking for online retailers selling the appliance, the results of a web search may be accompanied by ads for the item, and we may notice one from the local shop, displaying the lowest price. Or the local shop’s website may appear in the search results, and we may then check the item’s price on their website while believing that it's an online retailer.
  5. And we're biased towards interpreting whatever information we do find in a way that confirms our current beliefs, simply as a logical consequence of belief.
    • In the example, we're biased towards interpreting the lowest price from the online retailers as being the lowest price from any retailer.
  6. Therefore, we're biased towards seeking information in a way that helps to confirm what we currently believe, simply as a logical consequence of belief (see the sketch below).
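
The appliance-price example in points 3 to 6 can likewise be sketched as a small, purely illustrative Python toy model. The retailer names, prices, and the seek_lowest_price function below are hypothetical, chosen only to show the logic: while the belief that online retailers are always cheaper is held, only online sources are consulted, so the one piece of disconfirming information - the local shop's lower price - cannot be found, and the result is then read as the lowest price from any retailer.

    # Hypothetical prices for the appliance example; the local shop is in fact
    # the cheapest, but is never consulted while the belief is held.
    prices = {
        "online_retailer_a": 120,
        "online_retailer_b": 115,
        "local_shop": 99,
    }

    belief_held = True  # "online retailers are always cheaper than shops"

    def seek_lowest_price(prices, belief_held):
        # Point 3: while the belief exists, the search is not aimed at testing
        # it, so only belief-consistent sources (online retailers) are checked.
        if belief_held:
            consulted = {s: p for s, p in prices.items() if s.startswith("online")}
        else:
            consulted = dict(prices)

        cheapest = min(consulted, key=consulted.get)
        # Point 5: the lowest price among the consulted sources is then
        # interpreted as the lowest price from any retailer.
        return cheapest, consulted[cheapest]

    print(seek_lowest_price(prices, belief_held))
    # Prints ('online_retailer_b', 115): the local shop's price of 99 is never
    # discovered, so the belief is (apparently) confirmed rather than tested.

As point 4 notes, this is only a bias, not a guarantee: an incidental source that happens to include the local shop - an advert alongside the search results, say - could still surface the disconfirming price, but it would do so in spite of the bias, not because the bias is absent.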


3.1 Noticing this aspect of confirmation bias

Although this aspect of confirmation bias is a logical, and therefore inevitable, consequence of belief, we tend not to notice instances of it, for several reasons.

Whether we find information that disconfirms one of our beliefs, despite our information-seeking not being aimed at testing it, depends on chance and how available such information is. And whether, when we do find such information, we realise that it disconfirms one of our beliefs depends on the nature of the information and belief, and on our intelligence and imagination. Also, whether it occurs to us, when seeking information, to consider the possibility that one of our relevant beliefs is false, and then direct our information-seeking accordingly, depends on our intelligence and imagination. However, given that this aspect of confirmation bias is a logical consequence of belief, even when we consider the possibility that one of our beliefs is false, or even when we don't aim to consider that possibility, but still discover information that disconfirms one of our beliefs, this is in spite of this aspect of confirmation bias, not because it's absent.

Also, even when we seek information without considering the possibility that one of our relevant beliefs is false, we still tend to fail to notice this aspect of confirmation bias, for the following four reasons:

  1. Given the certainty of belief, we tend to not be motivated to analyse, and therefore question, both our beliefs and the resulting focus of our information-seeking. And such an analysis of our information-seeking also doesn't, of course, occur involuntarily - indeed, that would lead to infinite layers of analysis, given that such an analysis would be dependent on seeking information about the information-seeking under analysis.

    This explains why it's normally easier for us to notice other people failing, when seeking information, to consider the possibility that one of their relevant beliefs is false than it is for them, or than it is for us to notice this in our own information-seeking: observing someone else seeking information involves, by definition, thinking about that process.

    Therefore, although we can analyse our information-seeking whenever we want to, we tend not to. And we're less likely to do so the more time that has passed since it occurred, the more mundane its subject, or the more time-pressured we are. And however obviously biased our information-seeking may be, we won't notice the bias unless we analyse that information-seeking.

  2. Even if, when we seek information without considering the possibility that one of our beliefs is false, we subsequently change one of our relevant beliefs, we still won't automatically analyse that information-seeking. And, again, we're less likely to do so the more time that has passed since it occurred, the more mundane its subject, or the more time-pressured we are.
  3. Even if we do analyse the information-seeking that we undertook while holding a particular relevant belief, past or present, we may simply assume that we must have considered the possibility that that belief was false. That is, given that we're analysing, and therefore questioning, that information-seeking, the possibility that the relevant belief was, or is, false will naturally be likely to enter our mind. And if it does, we may then wrongly assume that it would have also done so at the time of the information-seeking. And we're more likely to make this assumption the more time that has passed since the information-seeking occurred, the more significant its subject, or the more time-pressured we are.
  4. Even when we do realise the biased nature of our information-seeking on a particular occasion, past or present, we don't realise, given the previous three points and the points preceding them, that this bias is ever present.


3.2 Objection 1

It might be objected that our information-seeking that's aimed at justifying one of our beliefs to others constitutes a test of the claim in question, but without us ceasing to believe it, contrary to point 3 in the chain of reasoning. However, information-seeking that's aimed not at determining whether a claim is true, but solely at finding information that apparently shows that it's true, isn't, by definition, aimed at testing that claim. Again, information-seeking that's not aimed at testing our beliefs may of course still happen to find information that disconfirms one of them, but this outcome would be in spite of this aspect of confirmation bias, not because it's absent.

3.3 Objection 2

As with interpreting information, it might also be objected that our knowledge of our fallibility neutralises the potential effect of our certainties on how we seek information. Therefore, this aspect of confirmation bias can't be explained by belief. But the counter to this objection regarding interpreting information - see Objection 2 in the previous section - also applies to seeking information.

4 Remembering information


Finally, consider our information recall:

  1. The information stored in our memory is a product of how we've previously interpreted, and sought, information, which would have been biased towards confirming, or helping to confirm, our beliefs at the time - simply as a logical consequence of belief - with many of those beliefs also being our current beliefs.
  2. This confirmation bias regarding seeking, and interpreting, information also applies to our retrieval, and interpretation, of information stored in our memory.
  3. Therefore, we're biased towards remembering information in a way that confirms, or helps to confirm, what we currently believe, simply as a logical consequence of belief.


Given that this third aspect of confirmation bias is actually simply a manifestation of the other two, all of the points in the previous two sections also apply to it.

5 Belief is self-preserving


The certainty of belief effectively provides a degree of protection to our beliefs, in two ways: it means, by definition, that we tend not to be motivated to question our beliefs, and it leads, as a matter of logic, to confirmation bias. And given that our biased interpretations of information are themselves beliefs, the certainty of belief will also effectively provide a degree of protection to them, in these two ways, and so on. Also, the belief persistence that often results from the certainty of belief could itself be interpreted, via confirmation bias, as confirmation of the belief - that is, we may reason that our belief must have lasted so long, in the face of all of the information that we have received since its formation, because it's true. And that conclusion will itself contribute to the persistence of the belief, and so on.

However, the certainty of belief, and the resulting confirmation bias, evidently don't prevent beliefs from ever ceasing. This is for two reasons:

  1. Our certainty won't prevent us receiving, or remembering, information that unavoidably leads, via our reasoning, to a belief that's contrary to, and therefore replaces, one of our current beliefs.
  2. Our certainty won't prevent us remembering our fallibility regarding one of our beliefs - even though our certainty means that we tend not to - and doing so will end that certainty, and therefore our belief.

In either case, our belief ceases upon being replaced by a contrary belief, which may simply be a doubt about the original claim - that is, certainty about the uncertainty of that claim. If so, we may then mistake our new certainty for our sub-certain belief of the original claim. Also, whatever the nature of the new belief, the original belief may be replaced by it within a fraction of a second of the original forming, given the speed of the brain, and we may therefore have no recollection of it. Also, the certainty of belief can't prevent us from learning to maintain at least some degree of awareness of it, and of the resulting confirmation bias, and of how they can lead to irrational belief persistence, which will then be lessened by such awareness.

Nevertheless, the certainty of belief, and the resulting confirmation bias, mean that a belief may endure even if the formation of a contrary belief, including simply a doubt, would only require a very small amount of thought.


6 Sources

  1. This definition is a bit narrower than the current most common one. The latter includes our bias towards testing a claim that we neither believe nor disbelieve by checking for instances of what it claims, as opposed to instances of what would be contrary to what it claims. However, as Jonathan Baron has pointed out - Baron, J., 2008, Thinking and Deciding, fourth edition, Cambridge University Press, Cambridge, p. 172 - this is actually not a bias towards confirming the claim, given that this 'positive test strategy' often has an equal potential to disconfirm it, just as a 'negative test strategy' often has an equal potential to help to confirm it. Baron suggests the term 'congruence bias' for this bias towards testing a claim by checking for what would be congruent, as opposed to incongruent, with it.

