Some notes from reading and research

Notes on reading Hugo Mercier

https://press.princeton.edu/ideas/why-some-mistaken-views-catch-on

On the whole, we are more likely to reject valuable messages—from the reality of climate change to the efficacy of vaccination—than to accept inaccurate ones. The main exceptions to this pattern stem not so much from a failure of open vigilance itself, but from issues with the material it draws on. People sensibly use their own knowledge, beliefs, and intuitions to evaluate what they’re told. Unfortunately, in some domains our intuitions appear to be quite systematically mistaken. If you had nothing else to go on, and someone told you that you were standing on a flat surface (rather than, say, a globe), you would spontaneously believe them. If you had nothing else to go on, and someone told you all your ancestors had always looked pretty much like you (and not like, say, fish), you would spontaneously believe them. Many popular yet mistaken beliefs spread not because they are pushed by masters of persuasion but because they are fundamentally intuitive.

[HOWEVER] If the flatness of the earth is intuitive, a two-hundred-foot-high, thousands-of-miles-long wall of ice is not. Nor is, say, Kim Jong-il’s ability to teleport.


The critical question for understanding why such beliefs spread is not why people accept them, but why people profess [assert] them.

Huh?

Besides wanting to share what we take to be accurate views, there are many reasons for professing beliefs: to impress, annoy, please, seduce, manipulate, reassure. These goals are sometimes best served by making statements whose relation to reality is less than straightforward—or even, in some cases, statements diametrically opposed to the truth. In the face of such motivations, open vigilance mechanisms come to be used, perversely, to identify not the most plausible but the most implausible views.

[end of excerpt; the following is from the book: Not Born Yesterday: The Science of Who We Trust and What We Believe, Hugo Mercier, 2019.epub]

He argues that:

By and large, misconceptions do not spread because they are pushed by prestigious or charismatic individuals—the supply side. Instead, they owe their success to demand, as people look for beliefs that fit with their preexisting views and serve some of their goals.

terms: credulity, gullibility (Dutch: lichtgelovigheid)

... a lot of communication happens between individuals that don’t share the same fitness. In these potentially conflicting interactions, many signals might improve the fitness of senders, while doing nothing for the fitness of receivers, or even decreasing their fitness. For instance, a vervet monkey might give out an alarm call not because there is a predator in sight but because it has spotted a tree laden with ripe fruit and wants to distract the other monkeys while it gorges on the treat. We might refer to such signals as dishonest or unreliable signals, meaning that they are harmful to the receivers.

Unreliable signals, if they proliferate, threaten the stability of communication. If receivers stop benefiting from communication, they evolve to stop paying attention to the signals. Not paying attention to something is easily done. If a given structure is no longer advantageous, it disappears—as did moles’ eyes and dolphins’ fingers. The same would apply to, say, the part of our ears and brains dedicated to processing auditory messages, if these messages were, on balance, harmful to us.

Likewise, if receivers managed to take advantage of senders’ signals to the point that the senders stopped benefiting from communication, the senders would gradually evolve to stop emitting the signals.

Communication between individuals that do not share the same incentives—the same fitness—is intrinsically fragile. And the individuals don’t have to be archenemies for the situation to degenerate.


With a few anecdotal exceptions—such as saying “I’m not mute,” which reliably communicates that one isn’t mute—there are no intrinsic constraints on sending unreliable signals via verbal communication. Unlike an unfit gazelle that just can’t stot well enough, a hack is perfectly able to give you useless advice.

A commonly invoked solution to keep human communication stable is costly signaling: paying a cost to send a signal would be a guarantee of its reliability. Costly signaling supposedly explains many bizarre human behaviors. Buying luxury brands would be a costly signal of wealth and status.

Constraining religious rituals—from frequent public prayer to fasting—would be a costly signal of one’s commitment to a religious group.

Performing dangerous activities—from turtle hunting among the Meriam hunter-gatherers to reckless driving among U.S. teens—would be costly signals of one’s strength and competence.

... what matters isn’t the cost of buying the new iPhone per se but the fact that spending so much money on a phone is costlier for a poor person, who might have to skimp on necessities to afford it, than for a rich person, for whom a thousand dollars might make very little difference.

Given that what matters is a difference—between the costs of sending a reliable and an unreliable signal—the absolute level of the cost doesn’t matter. As a result, costly signaling can, counterintuitively, make a signal reliable even if no cost is paid. As long as unreliable signalers would pay a higher cost if they sent signals, reliable signalers can send signals for free. The bowerbirds’ bowers illustrate this logic.


What keeps the system stable isn’t the intrinsic cost of building a fancy bower (which is low in any case). Instead, it is the vigilance of the males, who keep tabs on each other’s bowers and inflict a cost on those who build exaggerated bowers. As a result, as long as no male tries to build a better bower than they can afford to defend, the bowers send reliable signals of male quality without any significant cost being paid. This is costly signaling for free (or nearly for free, as there are indirect costs in monitoring other males’ bowers).

As we will see, this logic proves critical to understanding the mechanisms that allow human communication to remain stable. No intrinsic cost is involved in speaking: unlike buying the latest iPhone, making a promise is not intrinsically costly. Human verbal communication is the quintessential “cheap talk” and thus seems very far from qualifying as a costly signal. This is wrong. What matters isn’t the cost borne by those who would keep their promises but the cost borne by those who do not keep them.

As long as there is a mechanism to exert a sufficient cost on those who send unreliable messages—if only by trusting them less in the future—we’re dealing with costly signaling, and communication can be kept stable.
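A toy way to make this stability condition concrete (my own illustration, not Mercier's; all payoff numbers are made up): the signal stays reliable whenever the cost inflicted on an unreliable sender exceeds the benefit of being believed, even though an honest sender pays nothing.

```python
# Toy sketch of "costly signaling for free" (illustrative numbers, not from the book).
# Sending the signal itself is free (cheap talk), but vigilant receivers later
# inflict a punishment on senders whose signal turns out to be unreliable.

BENEFIT_IF_BELIEVED = 10  # what a sender gains when the signal is accepted
PUNISHMENT = 15           # cost inflicted on senders caught being unreliable

def sender_payoff(sends_signal: bool, is_reliable: bool) -> int:
    """Net payoff for a sender, given whether they signal and whether honestly."""
    if not sends_signal:
        return 0
    penalty = 0 if is_reliable else PUNISHMENT
    return BENEFIT_IF_BELIEVED - penalty

print(sender_payoff(sends_signal=True, is_reliable=True))    # 10: honest signaling pays
print(sender_payoff(sends_signal=True, is_reliable=False))   # -5: lying costs more than it gains
print(sender_payoff(sends_signal=False, is_reliable=False))  #  0: would-be liars prefer silence
```

The equilibrium only requires that the punishment exceed the benefit for unreliable senders; the cost actually paid in equilibrium is zero, matching the bowerbird logic above.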

Undoubtedly, the fact that humans have developed ways of sending reliable signals without having to pay a cost every time they do so has greatly contributed to their success.

... I ... argue that human communication is kept (mostly) reliable by a whole suite of cognitive processes—mechanisms of open vigilance—that minimize our exposure to unreliable signals and, by keeping track of who said what, inflict costs on unreliable senders.

I'm curious how unreliable senders get punished...


What should be clear in any case is that we cannot afford to be gullible. If we were, nothing would stop people from abusing their influence, to the point where we would be better off not paying any attention at all to what others say, leading to the prompt collapse of human communication and cooperation.


Term: McCarthyism (after Joseph McCarthy)
(I started to doubt because of confusion with Paul McCar**tne**y; the difference is bigger than I thought: Paul has no "th" while Joseph has no "ne".)

note 43: Alexander & Bruning, 2008; Meissner, Surmon-Böhr, Oleszkiewicz, & Alison, 2017.

Alexander, M., & Bruning, J. (2008). How to break a terrorist: The U.S. interrogators who used brains, not brutality, to take down the deadliest man in Iraq. New York: Free Press.

Meissner, C. A., Surmon-Böhr, F., Oleszkiewicz, S., & Alison, L. J. (2017). “Developing an evidence-based perspective on interrogation: A review of the US government’s high-value detainee interrogation group research program.” Psychology, Public Policy, and Law, 23(4), 438–457.

Far from being “gullible and biased to believe,”49 System 1 is, if anything, biased to reject any message incompatible with our background beliefs, as well as ambiguous messages or messages coming from untrustworthy sources.50

Beliefs that resonate with people’s background views should be more successful among those who rely less on System 2, whether or not these beliefs are correct. But an overreliance on System 2 can also lead to the acceptance of questionable beliefs that stem from seemingly strong, but in fact flawed, arguments.

This is what we observe: the association between analytic thinking and the acceptance of empirically dubious beliefs is anything but straightforward. Analytic thinking is related to atheism but only in some countries.51 In Japan, being more analytically inclined is correlated with a greater acceptance of paranormal beliefs.52

Where brainwashing techniques failed to convert any POWs to the virtues of communism, the sophisticated arguments of Marx and Engels convinced a fair number of Western thinkers. Indeed, intellectuals are usually the first to accept new and apparently implausible ideas. Many of these ideas have been proven right (from plate tectonics to quantum physics), but a large number have been misguided (from cold fusion to the humoral theory of disease).

The cognitive mechanism people rely on to evaluate arguments can be called reasoning. Reasoning gives you intuitions about the quality of arguments. When you hear the argument for the Yes answer, or for why you should take the bus, reasoning tells you that these are good arguments that warrant changing your mind. The same mechanism is used when we attempt to convince others, as we consider potential arguments with which to reach that end.11

Reasoning works in a way that is very similar to plausibility checking. Plausibility checking uses our preexisting beliefs to evaluate what we’re told. Reasoning uses our preexisting inferential mechanisms instead. The argument that you shouldn’t take the metro because the conductors are on strike works because you naturally draw inferences between “The conductors are on strike” and “The metro will be closed” to “We can’t take the metro.”12 If you had thought of the strike yourself, you would have drawn the same inference and accepted the same conclusion: your colleague was just helping you connect the dots.

In the metro case, the dots are very easy to connect, and you might have done so without help. In other instances, however, the task is much harder, as in the problem with Linda, Paul, and John. A new mathematical proof connects dots in a way that is entirely novel and very hard to reach, yet the people who understand the proof only need their preexisting intuitions about the validity of each step to evaluate it.

Reasoning is vigilant because it prompts us to accept challenging conclusions only when the arguments resonate with our preexisting inferential mechanisms. Like plausibility checking, reasoning is essentially foolproof. Typically, you receive arguments when someone wants to convince you of something you wouldn’t have accepted otherwise.14 If you’re too distracted to pay attention to the arguments, you simply won’t change your mind. If, even though you’re paying attention, you don’t understand the arguments, you won’t change your mind either. It’s only if you understand the arguments, evaluating them in the process, that you might be convinced.


By and large, it is not because people hold false beliefs that they make misguided or evil decisions, but because they seek to justify making misguided or evil decisions that they hold false beliefs. If Voltaire is often paraphrased as saying, “Those who can make you believe absurdities can make you commit atrocities,” this is in fact rarely true.13 As a rule, it is wanting to commit atrocities that makes you believe absurdities.


We find countless instances of rumors not followed by any violence, and when violence does happen, its nature is typically unrelated in form or degree to the content of the rumors.

When the Jewish population of Kishinev was accused of the murder of a small boy, the lie took hold because people broadly believed this ritual to be “part and parcel of Jewish practice.”

...

The reprisal is terrible indeed, but it bears no relation to the accusations: How is pillaging liquor stores going to avenge the dead child? In other times and places, Jewish populations have been massacred, women molested, wealth plundered under vastly flimsier pretexts, such as accusations of desecrating the host.

...

By and large, scholars of rumors and of ethnic riots concur that “participants in a crowd seek justifications for a course of action that is already under way; rumors often provide the ‘facts’ that sanction what they want to do anyway.”20

... We not only spontaneously justify ourselves when our behavior is questioned but also learn to anticipate when justifications might be needed, before we have to actually offer them.27 This creates a market for justifications. But such a market arises only when we anticipate that some decisions are likely to be perceived as problematic.

The abundance of pro-Trump fake news is explained by the dearth of pro-Trump material to be found in the traditional media: not a single major newspaper endorsed his candidacy (although there was plenty of material critical of Clinton as well). At this point, I should stress that the extent to which fake news is shared is commonly exaggerated: during the 2016 election campaign, fewer than one in ten Facebook users shared fake news, and 0.1 percent of Twitter users were responsible for sharing 80 percent of the fake news found on that platform.34

34. Grinberg, Joseph, Friedland, Swire-Thompson, & Lazer, 2019; Guess et al., 2019.

Grinberg, N., Joseph, K., Friedland, L., Swire-Thompson, B., & Lazer, D. (2019). “Fake news on Twitter during the 2016 US presidential election.” Science, 363(6425), 374–378.

As suggested by cultural evolution researcher Alberto Acerbi, the most implausible fake news stories, whether or not they are political, spread largely because they are entertaining rather than because they offer justifications for anything.36 The most absurd political fake news stories might also owe their appeal precisely to their over-the-top nature, as they make for great burning-bridges material (see chapter 12).

36. Acerbi, 2019. On the lack of partisanship effects, see Pennycook & Rand, 2018. Another potential explanation for the sharing of fake news is a “need for chaos”: it seems some people share fake news left and right, reflecting a more general contestation of the existing system (Petersen, Osmundsen, & Arceneaux, 2018).

Acerbi, A. (2019). “Cognitive attraction and online misinformation.” Palgrave Communications, 5(1), 15.

When a piece of information is seen as a justification, we can afford to evaluate it only superficially, as it will have little or no influence on what we believe or do—by virtue of being post hoc.

This being the case, however, we should observe no changes at all, not even a strengthening of views. After all, a strengthening of our views is as much of a change as a weakening, and should require equally strong evidence. Yet it has been regularly observed that piling up justifications reinforces our views and increases polarization.

In an experiment, people had to say how much they liked someone they had just listened to for two minutes.37 This confederate appeared, by design, either pleasant or unpleasant. Participants who had to wait a couple of minutes before rating the confederate provided more extreme evaluations than people who answered immediately after hearing the confederate speak. During these extra minutes, participants had conjured up justifications for their immediate reaction, making it more extreme.38

A similar tendency toward polarization has been observed in discussion groups. In a study, American students were first asked their stance on foreign policy.39 Doves—people who generally oppose military intervention—were put together in small groups and asked to discuss foreign policy. When their attitudes were measured after the exchange, they had become more extreme in their opposition to military intervention. Experiments that look at the content of the discussions taking place in like-minded groups show that it is chiefly the accumulation of arguments on the same side that leads people to polarize.

When we evaluate justifications for our own views, or views we agree with, our standards are low—after all, we already agree with the conclusion. However, that doesn’t mean the justifications are necessarily poor. In our search for justifications, or when we’re exposed to the justifications of people who agree with us, we can also stumble on good reasons, and when we do, we should recognize them as such. Even if the search process is biased—we’re mostly looking for justifications that support what we already believe—a good reason is a good reason, and it makes sense to change our minds accordingly.

Polarization does not stem from people being ready to accept bad justifications for views they already hold but from being exposed to too many good (enough) justifications for these views, leading them to develop stronger or more confident views. Still, if people have access to a biased sample of information, the outcome can be dire.

[...]

The extremes are in the minority, but they can blackmail their own party or manipulate the majority. Likewise, a moderate person may simply have an aversion to the "other side" and *ignore* the fact that a compromise is possible (you don't compromise with the devil). (Mercier makes a different point, namely that through social media moderates are better informed about the positions of both their own side and the other side, and partly on that basis decide to give in to arguments (close the ranks, hate the other).)

The impression of increased polarization is not due to people developing more extreme views but rather to people being more likely to sort themselves consistently as Democrat or Republican on a range of issues.50

The only reliable increase in polarization is in affective polarization: as a result of Americans more reliably sorting themselves into Democrats and Republicans, each side has come to dislike the other more.53

53. See Iyengar, Lelkes, Levendusky, Malhotra, & Westwood, 2019. On the relationship between sorting and affective polarization, see Webster & Abramowitz, 2017. Still, even affective polarization might not be as problematic as some fear; see Klar, Krupnikov, & Ryan, 2018; Tappin & McKay, 2019; Westwood, Peterson, & Lelkes, 2018.

Iyengar, S., Lelkes, Y., Levendusky, M., Malhotra, N., & Westwood, S. J. (2019). “The origins and consequences of affective polarization in the United States.” Annual Review of Political Science, 22, 129–146.

Klar, S., Krupnikov, Y., & Ryan, J. B. (2018). “Affective polarization or partisan disdain? Untangling a dislike for the opposing party from a dislike of partisanship.” Public Opinion Quarterly, 82(2), 379–390.

But if social media are trapping people into echo chambers, why do we not observe more ideological polarization? Because the idea that we are locked into echo chambers is even more of a myth than the idea of increased polarization.54

54. Elizabeth Dubois & Grant Blank, “The myth of the echo chamber,” The Conversation, March 8, 2018, <https://theconversation.com/the-myth-of-the-echo-chamber-92544>
Online media actually provide more variety in the supply of news... “only about 8% of the online adults … are at risk of being trapped in an echo chamber.” (Dubois & Blank, “The myth of the echo chamber”; see also Puschmann, C. (2018, November). “Beyond the bubble: Assessing the diversity of political search results.” Digital Journalism, doi: <https://doi.org/10.1080/21670811.2018.1539626>.)
> Older people (who "supposedly" use social media less) seem more strongly polarized...

Then, the puzzle should surely be: Why don’t we observe more echo chambers and polarization? After all, it is undeniable that the internet provides us with easy ways to find as many justifications for our views as we would like, regardless of how crazy these views might be (see how many arguments in favor of flat-earth theory you can find online). However, the desire to justify our views is only one of our many motivations; usually, it is far from being a paramount goal. Instead, we’re interested in gathering information about the world, information that most of the people we talk to would find interesting and credible. Even when looking for justifications, most of us would have learned from experience that simplistic rationalizations won’t fly with people who do not share our point of view.62

62. See, e.g., Crowell, A., & Kuhn, D. (2014). “Developing dialogic argumentation skills: A 3-year intervention study.” Journal of Cognition and Development, 15(2), 363–381.

... On the one hand, this is good news indeed, as it means that people are not so easily talked into doing stupid or horrible things. On the other hand, this is bad news, as it means that people are not so easily talked out of doing stupid or horrible things. If a belief plays little causal role in the first place, correcting the belief is also unlikely to have much of an effect.

Argh! Regarding the pogrom example: "A refutation from the authorities might work not because it would be more convincing but because it would signal a lack of willingness to tolerate the violence. Crowds are calculating enough: in Kishinev, they paid attention to subtle signals from the police that they wouldn’t interfere with the pogrom."

... refuting fake news or other political falsehoods might be less useful than we would hope. As a study mentioned earlier in the chapter suggests, even people who recognized that some of their views were mistaken (in this case, some of Donald Trump’s untrue statements they had accepted) did not change their underlying preferences (voting for Trump). As long as the demand for justifications is present, some will rise to fulfill it.

Even if debunking beliefs that spread as post hoc justifications appears a Sisyphean task, the efforts are not completely wasted. People do care about having justifications for their views, even if they aren’t very exigent about the quality of these justifications. As a decision or opinion is made increasingly hard to justify, some people will change their minds: if not the most hard-core believers, at least those who didn’t have such a strong opinion to start with—which is better than nothing.

[to be continued]