https://press.princeton.edu/ideas/why-some-mistaken-views-catch-on
On the whole, we are more likely to reject valuable messages—from the reality of climate change to the efficacy of vaccination—than to accept inaccurate ones. The main exceptions to this pattern stem not so much from a failure of open vigilance itself, but from issues with the material it draws on. People sensibly use their own knowledge, beliefs, and intuitions to evaluate what they’re told. Unfortunately, in some domains our intuitions appear to be quite systematically mistaken. If you had nothing else to go on, and someone told you that you were standing on a flat surface (rather than, say, a globe), you would spontaneously believe them. If you had nothing else to go on, and someone told you all your ancestors had always looked pretty much like you (and not like, say, fish), you would spontaneously believe them. Many popular yet mistaken beliefs spread not because they are pushed by masters of persuasion but because they are fundamentally intuitive.
[HOWEVER] If the flatness of the earth is intuitive, a two-hundred-foot-high, thousands-of-miles-long wall of ice is not. Nor is, say, Kim Jong-il’s ability to teleport.
The critical question for understanding why such beliefs spread is not why people accept them, but why people profess [assert] them.
Huh?
Besides wanting to share what we take to be accurate views, there are many reasons for professing beliefs: to impress, annoy, please, seduce, manipulate, reassure. These goals are sometimes best served by making statements whose relation to reality is less than straightforward—or even, in some cases, statements diametrically opposed to the truth. In the face of such motivations, open vigilance mechanisms come to be used, perversely, to identify not the most plausible but the most implausible views.
[End of excerpt; the following notes are from the book: Not Born Yesterday: The Science of Who We Trust and What We Believe, Hugo Mercier, 2019.epub]
He argues that:
By and large, misconceptions do not spread because they are pushed by prestigious or charismatic individuals—the supply side. Instead, they owe their success to demand, as people look for beliefs that fit with their preexisting views and serve some of their goals.
Terms: credulity, gullibility (Dutch: lichtgelovigheid)
... a lot of communication happens between individuals that don’t share the same fitness. In these potentially conflicting interactions, many signals might improve the fitness of senders, while doing nothing for the fitness of receivers, or even decreasing their fitness. For instance, a vervet monkey might give out an alarm call not because there is a predator in sight but because it has spotted a tree laden with ripe fruit and wants to distract the other monkeys while it gorges on the treat. We might refer to such signals as dishonest or unreliable signals, meaning that they are harmful to the receivers.
Unreliable signals, if they proliferate, threaten the stability of communication. If receivers stop benefiting from communication, they evolve to stop paying attention to the signals. Not paying attention to something is easily done. If a given structure is no longer advantageous, it disappears—as did moles’ eyes and dolphins’ fingers. The same would apply to, say, the part of our ears and brains dedicated to processing auditory messages, if these messages were, on balance, harmful to us.
Likewise, if receivers managed to take advantage of senders’ signals to the point that the senders stopped benefiting from communication, the senders would gradually evolve to stop emitting the signals.
Communication between individuals that do not share the same incentives—the same fitness—is intrinsically fragile. And the individuals don’t have to be archenemies for the situation to degenerate.
With a few anecdotal exceptions—such as saying “I’m not mute,” which reliably communicates that one isn’t mute—there are no intrinsic constraints on sending unreliable signals via verbal communication. Unlike an unfit gazelle that just can’t stot well enough, a hack is perfectly able to give you useless advice.
A commonly invoked solution to keep human communication stable is costly signaling: paying a cost to send a signal would be a guarantee of its reliability. Costly signaling supposedly explains many bizarre human behaviors. Buying luxury brands would be a costly signal of wealth and status.
Constraining religious rituals—from frequent public prayer to fasting—would be a costly signal of one’s commitment to a religious group.
Performing dangerous activities—from turtle hunting among the Meriam hunter-gatherers to reckless driving among U.S. teens—would be costly signals of one’s strength and competence.
... what matters isn’t the cost of buying the new iPhone per se but the fact that spending so much money on a phone is costlier for a poor person, who might have to skimp on necessities to afford it, than for a rich person, for whom a thousand dollars might make very little difference.
Given that what matters is a difference—between the costs of sending a reliable and an unreliable signal—the absolute level of the cost doesn’t matter. As a result, costly signaling can, counterintuitively, make a signal reliable even if no cost is paid. As long as unreliable signalers would pay a higher cost if they sent signals, reliable signalers can send signals for free. The bowerbirds’ bowers illustrate this logic.
What keeps the system stable isn’t the intrinsic cost of building a fancy bower (which is low in any case). Instead, it is the vigilance of the males, who keep tabs on each other’s bowers and inflict a cost on those who build exaggerated bowers. As a result, as long as no male tries to build a better bower than they can afford to defend, the bowers send reliable signals of male quality without any significant cost being paid. This is costly signaling for free (or nearly for free, as there are indirect costs in monitoring other males’ bowers)
As we will see, this logic proves critical to understanding the mechanisms that allow human communication to remain stable. No intrinsic cost is involved in speaking: unlike buying the latest iPhone, making a promise is not intrinsically costly. Human verbal communication is the quintessential “cheap talk” and thus seems very far from qualifying as a costly signal. This is wrong. What matters isn’t the cost borne by those who would keep their promises but the cost borne by those who do not keep them.
As long as there is a mechanism to exert a sufficient cost on those who send unreliable messages—if only by trusting them less in the future—we’re dealing with costly signaling, and communication can be kept stable.
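[My own sketch of this signaling logic, not from the book: write $b$ for the benefit a sender gets from having a signal believed, $c_R$ for the cost an honest (reliable) sender pays to send it, and $c_U$ for the expected cost to an unreliable sender (for instance, lost trust once caught). Communication stays honest whenever

$$ c_R < b < c_U, $$

i.e., honest senders still profit from signaling while cheaters expect a net loss. Since only the difference between the two costs matters, $c_R$ can be zero: "costly signaling for free" only requires that the cost inflicted on unreliable senders exceed what they would gain by deceiving.]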
Undoubtedly, the fact that humans have developed ways of sending reliable signals without having to pay a cost every time they do so has greatly contributed to their success.
... I ... argue that human communication is kept (mostly) reliable by a whole suite of cognitive processes—mechanisms of open vigilance—that minimize our exposure to unreliable signals and, by keeping track of who said what, inflict costs on unreliable senders.
I'm curious how unreliable senders get punished...
What should be clear in any case is that we cannot afford to be gullible. If we were, nothing would stop people from abusing their influence, to the point where we would be better off not paying any attention at all to what others say, leading to the prompt collapse of human communication and cooperation.
Term: McCarthyism (after Joseph McCarthy)
(I started doubting because I confused him with Paul McCar**tne**y; the difference is bigger than I thought: Paul has no "th" while Joseph has no "ne".)
note 43: Alexander & Bruning, 2008; Meissner, Surmon-Böhr, Oleszkiewicz, & Alison, 2017.
Alexander, M., & Bruning, J. (2008). How to break a terrorist: The U.S. interrogators who used brains, not brutality, to take down the deadliest man in Iraq. New York: Free Press.
Meissner, C. A., Surmon-Böhr, F., Oleszkiewicz, S., & Alison, L. J. (2017). “Developing an evidence-based perspective on interrogation: A review of the US government’s high-value detainee interrogation group research program.” Psychology, Public Policy, and Law, 23(4), 438–457.
Far from being “gullible and biased to believe,”49 System 1 is, if anything, biased to reject any message incompatible with our background beliefs, but also ambiguous messages or messages coming from untrustworthy sources.50
Beliefs that resonate with people’s background views should be more successful among those who rely less on System 2, whether or not these beliefs are correct. But an overreliance on System 2 can also lead to the acceptance of questionable beliefs that stem from seemingly strong, but in fact flawed, arguments.
This is what we observe: the association between analytic thinking and the acceptance of empirically dubious beliefs is anything but straightforward. Analytic thinking is related to atheism but only in some countries.51 In Japan, being more analytically inclined is correlated with a greater acceptance of paranormal beliefs.52
Where brainwashing techniques failed to convert any POWs to the virtues of communism, the sophisticated arguments of Marx and Engels convinced a fair number of Western thinkers. Indeed, intellectuals are usually the first to accept new and apparently implausible ideas. Many of these ideas have been proven right (from plate tectonics to quantum physics), but a large number have been misguided (from cold fusion to the humoral theory of disease).
The cognitive mechanism people rely on to evaluate arguments can be called reasoning. Reasoning gives you intuitions about the quality of arguments. When you hear the argument for the Yes answer, or for why you should take the bus, reasoning tells you that these are good arguments that warrant changing your mind. The same mechanism is used when we attempt to convince others, as we consider potential arguments with which to reach that end.11
Reasoning works in a way that is very similar to plausibility checking. Plausibility checking uses our preexisting beliefs to evaluate what we’re told. Reasoning uses our preexisting inferential mechanisms instead. The argument that you shouldn’t take the metro because the conductors are on strike works because you naturally draw inferences between “The conductors are on strike” and “The metro will be closed” to “We can’t take the metro.”12 If you had thought of the strike yourself, you would have drawn the same inference and accepted the same conclusion: your colleague was just helping you connect the dots
In the metro case, the dots are very easy to connect, and you might have done so without help. In other instances, however, the task is much harder, as in the problem with Linda, Paul, and John. A new mathematical proof connects dots in a way that is entirely novel and very hard to reach, yet the people who understand the proof only need their preexisting intuitions about the validity of each step to evaluate it
Reasoning is vigilant because it prompts us to accept challenging conclusions only when the arguments resonate with our preexisting inferential mechanisms. Like plausibility checking, reasoning is essentially foolproof. Typically, you receive arguments when someone wants to convince you of something you wouldn’t have accepted otherwise.14 If you’re too distracted to pay attention to the arguments, you simply won’t change your mind. If, even though you’re paying attention, you don’t understand the arguments, you won’t change your mind either. It’s only if you understand the arguments, evaluating them in the process, that you might be convinced.
By and large, it is not because the population hold false beliefs that they make misguided or evil decisions, but because the population seek to justify making misguided or evil decisions that they hold false beliefs. If Voltaire is often paraphrased as saying, “Those who can make you believe absurdities can make you commit atrocities,” this is in fact rarely true.13 As a rule, it is wanting to commit atrocities that makes you believe absurdities.
We find countless instances of rumors not followed by any violence, and when violence does happen, its nature is typically unrelated in form or degree to the content of the rumors.
When the Jewish population of Kishinev was accused of the murder of a small boy, the lie took hold because people broadly believed this ritual to be “part and parcel of Jewish practice.”
...
The reprisal is terrible indeed, but it bears no relation to the accusations: How is pillaging liquor stores going to avenge the dead child? In other times and places, Jewish populations have been massacred, women molested, wealth plundered under vastly flimsier pretexts, such as accusations of desecrating the host.
...
By and large, scholars of rumors and of ethnic riots concur that “participants in a crowd seek justifications for a course of action that is already under way; rumors often provide the ‘facts’ that sanction what they want to do anyway.”20
... We not only spontaneously justify ourselves when our behavior is questioned but also learn to anticipate when justifications might be needed, before we have to actually offer them.27 This creates a market for justifications. But such a market arises only when we anticipate that some decisions are likely to be perceived as problematic.
The abundance of pro-Trump fake news is explained by the dearth of pro-Trump material to be found in the traditional media: not a single major newspaper endorsed his candidacy (although there was plenty of material critical of Clinton as well). At this point, I should stress that the extent to which fake news is shared is commonly exaggerated: during the 2016 election campaign, fewer than one in ten Facebook users shared fake news, and 0.1 percent of Twitter users were responsible for sharing 80 percent of the fake news found on that platform.34
34. Grinberg, Joseph, Friedland, Swire-Thompson, & Lazer, 2019; Guess et al., 2019.
Grinberg, N., Joseph, K., Friedland, L., Swire-Thompson, B., & Lazer, D. (2019). “Fake news on Twitter during the 2016 US presidential election.” Science, 363(6425), 374–378.
As suggested by cultural evolution researcher Alberto Acerbi, the most implausible fake news stories, whether or not they are political, spread largely because they are entertaining rather than because they offer justifications for anything.36 The most absurd political fake news stories might also owe their appeal precisely to their over-the-top nature, as they make for great burning-bridges material (see chapter 12).
36. Acerbi, 2019. On the lack of partisanship effects, see Pennycook & Rand, 2018. Another potential explanation for the sharing of fake news is a “need for chaos”: it seems some people share fake news left and right, reflecting a more general contestation of the existing system (Petersen, Osmundsen, & Arceneaux, 2018).
Acerbi, A. (2019). “Cognitive attraction and online misinformation.” Palgrave Communications, 5(1), 15.
When a piece of information is seen as a justification, we can afford to evaluate it only superficially, as it will have little or no influence on what we believe or do—by virtue of being post hoc.
This being the case, however, we should observe no changes at all, not even a strengthening of views. After all, a strengthening of our views is as much of a change as a weakening, and should require equally strong evidence. Yet it has been regularly observed that piling up justifications reinforces our views and increases polarization.
In an experiment, people had to say how much they liked someone they had just listened to for two minutes.37 This confederate appeared, by design, either pleasant or unpleasant. Participants who had to wait a couple of minutes before rating the confederate provided more extreme evaluations than people who answered immediately after hearing the confederate speak. During these extra minutes, participants had conjured up justifications for their immediate reaction, making it more extreme.38
A similar tendency toward polarization has been observed in discussion groups. In a study, American students were first asked their stance on foreign policy.39 Doves—people who generally oppose military intervention—were put together in small groups and asked to discuss foreign policy. When their attitudes were measured after the exchange, they had become more extreme in their opposition to military intervention. Experiments that look at the content of the discussions taking place in like-minded groups show that it is chiefly the accumulation of arguments on the same side that leads people to polarize.
When we evaluate justifications for our own views, or views we agree with, our standards are low—after all, we already agree with the conclusion. However, that doesn’t mean the justifications are necessarily poor. In our search for justifications, or when we’re exposed to the justifications of people who agree with us, we can also stumble on good reasons, and when we do, we should recognize them as such. Even if the search process is biased—we’re mostly looking for justifications that support what we already believe—a good reason is a good reason, and it makes sense to change our minds accordingly.
Polarization does not stem from people being ready to accept bad justifications for views they already hold but from being exposed to too many good (enough) justifications for these views, leading them to develop stronger or more confident views. Still, if people have access to a biased sample of information, the outcome can be dire.
[...]
The extremes are in the minority, but they can blackmail their own party or manipulate the majority. Also, a moderate person may simply have an aversion to the "other side" and *ignore* the possibility of a compromise (you don't compromise with the devil). (Mercier says something different, namely that through social media moderates are better informed about the positions of both their own side and the other side, and partly on that basis decide to give in to arguments (close ranks, hate the other).)
The impression of increased polarization is not due to people developing more extreme views but rather to people being more likely to sort themselves consistently as Democrat or Republican on a range of issues.50
The only reliable increase in polarization is in affective polarization: as a result of Americans more reliably sorting themselves into Democrats and Republicans, each side has come to dislike the other more.53
53. See Iyengar, Lelkes, Levendusky, Malhotra, & Westwood, 2019. On the relationship between sorting and affective polarization, see Webster & Abramowitz, 2017. Still, even affective polarization might not be as problematic as some fear; see Klar, Krupnikov, & Ryan, 2018; Tappin & McKay, 2019; Westwood, Peterson, & Lelkes, 2018.
Iyengar, S., Lelkes, Y., Levendusky, M., Malhotra, N., & Westwood, S. J. (2019). “The origins and consequences of affective polarization in the United States.” Annual Review of Political Science, 22, 129–146.
Klar, S., Krupnikov, Y., & Ryan, J. B. (2018). “Affective polarization or partisan disdain? Untangling a dislike for the opposing party from a dislike of partisanship.” Public Opinion Quarterly, 82(2), 379–390.
But if social media are trapping people into echo chambers, why do we not observe more ideological polarization? Because the idea that we are locked into echo chambers is even more of a myth than the idea of increased polarization.54
54. Elizabeth Dubois & Grant Blank, “The myth of the echo chamber,” The Conversation, March 8, 2018, <https://theconversation.com/the-myth-of-the-echo-chamber-92544>
Online media actually offer more variety in the news available... “only about 8% of the online adults … are at risk of being trapped in an echo chamber.” (Dubois & Blank, “The myth of the echo chamber”; see also Puschmann, C. (2018, November). “Beyond the bubble: Assessing the diversity of political search results.” Digital Journalism, doi: <https://doi.org/10.1080/21670811.2018.1539626>) > Older people (who "supposedly" use social media less) appear to be more strongly polarized...
Then, the puzzle should surely be: Why don’t we observe more echo chambers and polarization? After all, it is undeniable that the internet provides us with easy ways to find as many justifications for our views as we would like, regardless of how crazy these views might be (see how many arguments in favor of flat-earth theory you can find online). However, the desire to justify our views is only one of our many motivations; usually, it is far from being a paramount goal. Instead, we’re interested in gathering information about the world, information that most of the people we talk to would find interesting and credible. Even when looking for justifications, most of us would have learned from experience that simplistic rationalizations won’t fly with people who do not share our point of view.62
62. See, e.g., Crowell, A., & Kuhn, D. (2014). “Developing dialogic argumentation skills: A 3-year intervention study.” Journal of Cognition and Development, 15(2), 363–381.
...[On] the one hand, this is good news indeed, as it means that people are not so easily talked into doing stupid or horrible things. On the other hand, this is bad news, as it means that people are not so easily talked out of doing stupid or horrible things. If a belief plays little causal role in the first place, correcting the belief is also unlikely to have much of an effect.
Argh! Regarding the pogrom example: "A refutation from the authorities might work not because it would be more convincing but because it would signal a lack of willingness to tolerate the violence. Crowds are calculating enough: in Kishinev, they paid attention to subtle signals from the police that they wouldn’t interfere with the pogrom."
... refuting fake news or other political falsehoods might be less useful than we would hope. As a study mentioned earlier in the chapter suggests, even people who recognized that some of their views were mistaken (in this case, some of Donald Trump’s untrue statements they had accepted) did not change their underlying preferences (voting for Trump). As long as the demand for justifications is present, some will rise to fulfill it.
Even if debunking beliefs that spread as post hoc justifications appears a Sisyphean task, the efforts are not completely wasted. People do care about having justifications for their views, even if they aren’t very exigent about the quality of these justifications. As a decision or opinion is made increasingly hard to justify, some people will change their minds: if not the most hard-core believers, at least those who didn’t have such a strong opinion to start with—which is better than nothing.
[Notes continued 30-12-2024]
before they were established, many, maybe most, scientific theories would have sounded nuts to everybody but their creators.1
1. See, e.g., Shtulman, 2017.
Shtulman, A. (2017). Scienceblind: Why our intuitive theories about the world are so often wrong. New York: Basic Books.
Shtulman, A., & Valcarcel, J. (2012). “Scientific knowledge suppresses but does not supplant earlier intuitions.” Cognition, 124(2), 209–215.
... postmodern thinkers ...
Some things are intuitive (for example "a human") or mostly intuitive (a god like Zeus, who is mostly like a human but can make lightning).
Pascal Boyer has argued that the vast majority of concepts of supernatural agents found across the world are only minimally counterintuitive.
...
By contrast, the Christian god, in his full theological garb, violates just about every assumption we have about humanlike agents.
...
Much like the theologically correct Christian god, many scientific concepts are full-on counterintuitive. Our concept of what moving entails—the feeling that we’re moving, movements of air, and so forth—is violated by the idea that we’re barreling through space at a tremendous speed. [ etc. ]
...
To be accepted, ideas that don’t tap into our intuitive concepts, or that go against them, face severe obstacles from open vigilance mechanisms. We have no reasons to accept ideas we don’t understand, and we have reasons to reject counterintuitive ideas. When we engage in plausibility checking, we don’t tend to reject only ideas that directly clash with our previous views but also ideas that don’t fit with our intuitions more generally.
...
Open vigilance also contains mechanisms to overcome plausibility checking and accept beliefs that clash with our previous views or intuitions: argumentation and trust.
Argumentation is unlikely to play a significant role in the wide distribution of incomprehensible ideas or counterintuitive concepts. Argumentation works because we find some arguments intuitively compelling. This means that premises and conclusions must be linked by some intuitive inferential process, as when someone says, “Joe has been very rude to many of us, so he’s a jerk.” Everyone can understand how being repeatedly rude entails being a jerk. But if a proposition is incomprehensible, then it can’t properly be argued for. That’s probably why Lacan asserts, rather than argues, that “nature’s specificity is to not be one.”10
Argumentation plays a crucial role in the spread of counterintuitive religious and scientific concepts, but only in the small community of theologians and scientists who can make enough sense of the arguments to use and construct them. Beyond that, few people are competent and motivated enough to properly evaluate the technical defense of the Christian god’s omnipotence, or of relativity theory. For example, most U.S. university students who accept evolution by natural selection don’t understand its principles properly.11
[Note 11] Greene, E. D. (1990). “The logic of university students’ misunderstanding of natural selection.” Journal of Research in Science Teaching, 27(9), 875–885.
...
If argumentation can’t explain the widespread acceptance of incomprehensible or counterintuitive beliefs, then it must be trust. Trust takes two main forms: trust that someone knows better (chapter 5), and trust that they have our best interests at heart (chapter 6). To really change our minds about something, the former kind of trust is critical: we must believe that someone knows better than we do and defer to their superior knowledge.
The preceding examples suggest that people are often so deferential toward individuals (Lacan), books (the Bible), or specialized groups (priests, scientists) that they accept incomprehensible or counterintuitive ideas. From the point of view of open vigilance, the latter is particularly problematic.
Believers don’t use counterintuitive concepts in practice, and will, for example, anthropomorphize their gods.
Barrett’s observations suggest that the acceptance of counterintuitive ideas remains shallow: we can assent to them, even draw inferences from them when pushed, but they do not affect the way we think intuitively. On the contrary, our intuitive way of thinking tends to seep into how we treat counterintuitive concepts, as when Barrett’s participants implicitly thought that god had a limited attention span.
The same logic applies to scientific concepts.
Mercier argues that [ in some respects? ] it’s a good thing that our understanding of counterintuitive ideas is shallow because otherwise it would somehow disrupt our intuitive well-being. He gives the example of getting motion sick as a result of being aware of — or simply grasping? — the motions of the earth. This example is unconvincing to me.
.... shallowness doesn’t explain why people would accept a bunch of bizarre beliefs, some of which clash with their intuitions: it still seems that people are often unduly deferential, seeing some authorities as more knowledgeable than they really are (except for scientists, whose knowledge, if anything, is likely underestimated).
A common explanation for this undue deference is that some people are charismatic: their attitude, their voice, their nonverbal language make them uniquely enthralling and even credible.
... When it comes to widespread religious or scientific beliefs, charisma cannot be the main explanation. None of our Christian contemporaries have met Jesus, and I’ve managed to accept the concept of inertia without meeting Galileo. I don’t think that personal charisma explains at all why some people are deemed more credible than others. Instead, I outline three mechanisms that lead some individuals to be perceived as more knowledgeable than they are, making their audience unduly deferential. I believe that the spread of incomprehensible and counterintuitive beliefs largely stems from a mix of these three mechanisms.
Information may have proven to be useful, or may seem potentially valuable. This grants the source reputation credit.
For information to be deemed valuable, it must be [ seem ] both plausible and useful.²¹
21. On plausibility, see Collins et al., 2018. Collins, P. J., Hahn, U., von Gerber, Y., & Olsson, E. J. (2018). “The bi-directional relationship between source characteristics and message content.” Frontiers in Psychology, 9. Retrieved from https://www.frontiersin.org/articles/10.3389/fpsyg.2018.00018/full
For example, information about threats has the potential to be very useful, as it can help us avoid significant costs. In a series of experiments, Pascal Boyer and psychologist Nora Parren showed that people who transmit information about threats, by contrast with other types of information, are seen as more competent.²²
[22] Boyer, P., & Parren, N. (2015). “Threat-related information suggests competence: A possible factor in the spread of rumors.” PloS One, 10(6), e0128421.
... we might tend to overestimate the usefulness of threat information, deeming it relevant even when we have few chances of ever being exposed to the actual threat. ... The attribution of competence to people who circulate threats is—as was suggested in chapter 10—one of the main reasons people spread false rumors, many of which mention some threat.
Besides threats, there are other types of information that can be deemed useful without ever being seriously tested, such as justifications. Someone who provides justifications for actions people want to engage in anyway can be rewarded by being seen as more competent. However, this reputation credit can be extended indefinitely if the actions are never seriously questioned, and the justifications never tested.
However, Mercier wants us to know that the downside [loophole] of this way of attributing competence is usually not a problem:
This loophole in the way we attribute competence is, in the vast majority of cases, of little import. Maybe we will think a friend who warns us about the dangers of such and such exotic food a bit more competent than they actually are, but we have other ways of more accurately estimating our friend’s competence.
The real problem dawns with the rise of specialists: people whom we don’t know personally but through their communications in a specific domain.
Nowadays, some news sources specialize in the provision of threatening information. A prime example is the media network of conspiracy theorist Alex Jones: the InfoWars website, radio show, YouTube channel, and so forth. Most stories on the InfoWars front page are about threats. Some are pretty generic threats: a lethal pig virus in China that could strike humans, a plane pilot on a murder-suicide mission.23 Many stories are targeted: migrants from Islamic countries are responsible for most sex crimes in Finland, Turkey “announces launch of worldwide Jihad,” Europe is committing suicide by accepting Muslim migrants.24 Even a non-directly threat-related piece on George Soros’s fight with the Hungarian government is accompanied by a video warning against the dangers of powerful communists such as Barack Obama (!), Richard Branson (?!), and Jeff Bezos (??!!).25
Presumably, few in Jones’s audience live in Finland, or in close proximity to sick Chinese pigs. As a result, the readers are unlikely to find out whether the threats are real, and Jones can keep the reputation credit he earned from all these warnings.
[...]
Turning to justifications, we observe a similar dynamic in the case of Galen. The Roman physician provided a complex theoretical apparatus as a rationale for the relatively intuitive practice of bloodletting. Doing so made him appear more competent (note that there were other, better reasons to deem Galen competent).
... On a much larger scale, there may be a similar dynamic affecting religious creeds, with the search for justification bringing in its wake an assortment of weird beliefs
...
Leaders emerge who promote religious justifications for ... new norms...
...the leaders who are able to articulate religious creeds fitting ... changing moral intuitions are rewarded with deference.
I'm also thinking of the new morality of, on one side, conservative Christians (no longer pacifist and certainly not nice to their neighbors) and, on the other side, radical left-wing ideology (wokeism).
One of the effects of this deference, arguably, is to help spread other ideas, ideas born of the religious specialists’ striving for a more intellectually coherent system. These ideas don’t have to be particularly intuitive, or to be of any use as justifications for most of the flock. For example, few Christians care deeply about what happens to the soul of people who lived before Jesus and thus couldn’t be saved by the sacraments (the unlearned). Yet theologians had to ponder the issue, and made this part of the official creed: for example, in Catholicism the unlearned are stuck in the “limbo of the fathers” until the Second Coming.33 More significantly, the theologically correct version of the Christian god—the omni-everything version—is the result of a slow elaboration over the ages of scholars attempting to reconcile various doctrines.34
This account distinguishes two broad sets of beliefs within the creeds of world religions. The first set comprises beliefs that many people find intuitively compelling—for example, rewards and punishments in the afterlife for good and bad deeds. The second comprises beliefs that are relevant only to the theologians’ attempts at doctrinal coherence. We find both categories in world religions besides Christianity. Crucially, the first set of beliefs is quite similar in every world religion, while the second varies widely. For example, in Buddhism the concept of merit plays a central role, so that those who do good deeds have better luck in their next lives. But we also find in Buddhism counterintuitive ideas that play little useful justificatory role and have no parallel in Christianity, such as the precise status of the Buddha in relation to humans and gods, or the cycle of reincarnation.
[Nowadays: justice, but also insane ideological notions?]
- TRICKLE-DOWN SCIENCE
Being willing to give people the benefit of the doubt and grant them a good reputation on credit, because they warn us about threats we will never face or provide justifications that will never get tested, cannot explain the widespread acceptance of counterintuitive scientific theories.
Oh. Then what was all of the preceding? 🤔 Mainly ideas that are intuitive and appealing, with some weird ideas in their wake?
For one thing, scientific theories are nearly all counterintuitive, so scientists can’t surf on a wave of easily accepted theories to make the public swallow the rest.
Frankly, I’m not quite sure why so many people accept counterintuitive scientific theories. I’m not saying they shouldn’t, obviously, merely pointing out that the popularity of such counterintuitive ideas, even if they are right, is on the face of it quite puzzling. It is true that people accept scientific beliefs only reflectively, so the beliefs interact little with other cognitive mechanisms. But, still, why accept these beliefs at all? Very few people can properly evaluate scientists’ claims, especially when it comes to novel discoveries. A small group of specialists understands the content of new results, can interpret them in light of the literature, and knows the team that produced them. Everybody else is bound to rely on relatively coarse cues. The further removed we are from the group of relevant experts, the coarser the cues.
Cues that people use to ascertain how “scientific” something is...
Reassuringly, genuine experts weren’t fooled either by the fancy math or by the neuroanatomical babble.
... Milgram obedience experiments... illustrates the dangers of an overreliance on coarse cues to evaluate scientific value.
Other examples abound. Pseudoscientists, from creationists to homeopaths, use credentials to their advantage, touting PhDs and university accreditations they gained by professing different beliefs.46
Still, on the whole coarse cues play a positive role. After all, they do reflect reasonable trends: mathematization vastly improves science; the hard sciences have progressed much further than the social sciences; someone with a PhD and university accreditation is likely to be more knowledgeable in their field of expertise than a layperson.
Jacques Lacan relied on these coarse cues to boost his stature. He had the proper credentials. He made extensive use of mathematical symbols.⁴⁷
Sokal, A. D., & Bricmont, J. (1998). Intellectual impostures: Postmodern philosophers’ abuse of science. London: Profile Books.
Still, even knowing this, I suspect few are able to plow through his seminars, and those who do, instead of being impressed by Lacan’s depth, are more likely stunned by his abstruseness. How could prose so opaque become so respected?
[...] As a rule, when hard-to-understand content spreads, it is not because it is obscure but in spite of being obscure, when there is no easier way to get the content across. Yet the success of Lacan, and other intellectuals of his ilk, suggests that obscurity sometimes helps, to the point that people end up devoting a lot of energy to deciphering nonsensical statements. Dan Sperber has suggested that, in unusual circumstances, obscurity can become a strength through a “guru effect.”50
...
Lacan’s work confirms his mastery of the most complex psychoanalytic theory and suggests that decoding his dense prose is worth people’s while. Because they assume Lacan to be an expert, his followers devote growing amounts of energy and imagination to make sense of the master’s pronouncements. At this stage, the vagueness of the concepts becomes a strength, giving Lacan’s groupies leeway to interpret his ideas in myriad ways, to read into the concepts much more than was ever intended.
... the groupies were, as the name suggests, a group, seeing in the others’ efforts an affirmation of their own interpretive labors. As Lévi-Strauss noted when he attended one of Lacan’s seminars: “I found myself in the midst of an audience that seemed to understand.”54
... To make things worse, the pupils are credentialed, forming the next generation of public intellectuals and university professors....Again, obscurity plays in Lacan’s favor. If his theories were understandable, outsiders could form their own opinions. But their obscurity protects Lacan’s writings from the prying eyes of critics, who must defer to those who seem to be knowledgeable enough to make sense of it all, or reject them en bloc and risk looking as if they have no appreciation for intellectual sophistication.
Can we stop granting reputation on credit?
Can we test justifications for things we find attractive?
If ... justifications are ... properly evaluated—we use them in arguments with friends who disagree with us, say—everything is fine.
Philosopher Alvin Goldman suggested a series of cues people could use to evaluate scientific claims, from how consensual the claims are among experts, to whether the scientists who defend the claims have conflicts of interests.57
Goldman, A. I. (2001). “Experts: Which ones should you trust?” Philosophy and Phenomenological Research, 63(1), 85–110.
In the field of medicine, the Cochrane organization provides systematic reviews whose conclusions are vastly more reliable than the latest headline
...
Fortunately, spotting gurus is comparatively easy: they have no standing in the scientific community—at least not for the part of their work for which they use their guru status. Outside of the sciences that rely heavily on mathematics (and some might argue even then), just about any idea should be communicable with enough clarity that an educated and attentive reader can grasp it. If something looks like a jumble of complicated words pasted together, even in context, and after a bit of effort, then it probably is.
[ Chapter 15: ANGRY PUNDITS AND SKILLFUL CON MEN ]
[to be continued]
... two of the ways in which we end up trusting the wrong people. The first is when people display their loyalty to us, or to our group, by taking our side in disputes even though it does not cost them anything to do so. The second is when we use coarse cues—from someone’s profession to their ethnicity—to figure out who to trust
... on the whole we are more likely to err by not trusting when we should, rather than by trusting when we shouldn’t.
TAKING SIDES
... In small communities, where everybody knows everybody, this signal is indeed credible: the people we side against are people we could have cooperated with, so the costs are genuine. Indeed, the higher the costs, the more credible the signal.
... In our modern environments, it is quite easy to take sides without paying any costs.... The strategy of appearing to take people’s sides, while paying only minimal costs, is widely used by social media personalities, pundits, and even entire news channels.
... the strategy of taking sides to win over an audience encourages the spread of misrepresentations about the power of our (supposed) enemies, or the very existence of these enemies.
... these portrayals are sure to find an avid audience, as information about the power of other groups is deemed highly relevant. At the same time, the complexity of our economic and political environments is all too easily ignored by cognitive mechanisms that evolved by dealing with much simpler coalitions.
An .... fundamental prerequisite for the strategy of taking sides is that there should be sides to begin with. While we’re all embroiled in a variety of low-grade disputes between groups—with family members, neighbors, colleagues—these are too local to be of any interest to, say, a cable news channel. Instead, the conflicts must involve as many individuals as possible: on our side, so that the channel gains more audience, and on the other, so that the enemy looks more powerful. Agents, such as hosts on cable news networks, who rely on the taking-side strategy to gain audiences, benefit if they portray the world as divided and polarized.
...
... media can affect political outcomes, but chiefly “by conveying candidates’ [or the parties’] positions on important issues.”
...
There is a social cost to be paid when we attempt to justify our views with arguments that are too easily shot down. Apart from those that cater only to extreme partisans, most media thus have an incentive to stick to largely accurate information—even if it can be biased in a number of ways.12
12. Martin & Yurukoglu, 2017.
Martin, G. J., & Yurukoglu, A. (2017). “Bias in cable news: Persuasion and polarization.” American Economic Review, 107(9), 2565–2599.
Moreover, our reaction to challenges isn’t uniformly negative. In a fight with our partner, we might get angry at a friend who supports our partner instead of us. But, if they make a good point that we’re in the wrong, we’ll come to respect them all the more for helping us see the light (although that might take a little time). We’re wired to think in coalitional terms, but we’re also wired to form and value accurate beliefs, and to avoid looking like fools.
No reference is given for this. Mercier is very optimistic; where does he get that optimism from?
Strangers who know about each other that they belong to the same group trust each other more? That's almost trivial. Would that also hold for gangsters?
Foddy, M., Platow, M. J., & Yamagishi, T. (2009). “Group-based trust in strangers: The role of stereotypes and expectations.” Psychological Science, 20(4), 419–422.
People rely on a variety of cues to decide who they can trust, from displays of religiosity to university affiliation. But how do these cues remain reliable? After all, if appearing to be religious, or to belong to the local university, makes one more likely to be trusted, why wouldn’t everyone exhibit these cues whenever it could be useful? These cues are kept broadly reliable because they are in fact signals, involving some commitment from their sender, and because we keep track of who is committed to what. Someone who wears religious clothes but does not behave like a religious person will be judged more harshly than someone who behaves in the same way but does not display religious badges. In an extreme use of religious badges, Brazilian gang members who want a way out can now join a church, posting a video of their conversion on social media as proof. But this isn’t a cheap signal. Members of other gangs refrain from retaliating against these new converts, but they also keep close tabs on them. When a young man posted his conversion video just in time to avoid being killed, the rival gang members “monitored him for months, checking to see if he was going to church or had contact with his former [gang] leaders.”16
Marina Lopes, “One way out: Pastors in Brazil converting gang members on YouTube,” Washington Post, May 17, 2019, www.washingtonpost.com/world/the_americas/one-way-out-pastors-in-brazil-converting-gang-members-on-youtube/2019/05/17/be560746-614c-11e9-bf24-db4b9fb62aa2_story.html.
More generally, we tend to spurn people who pretend to be what they aren’t.
minor con man
... Samuel Thompson, who operated around 1850 in New York and Philadelphia
... trust ...
Herley, C. (2012). “Why do Nigerian scammers say they are from Nigeria?” WEIS.
Anyone who would do a Google search, ask for advice, or read their bank’s warning notices wouldn’t be worth expending any effort on.
...
Not only is getting conned a relatively rare occurrence, but there is a huge benefit from relying on coarse cues to trust strangers: it allows us to trust them at all.
... as a rule, we learn more by trusting than by not trusting. Trust is like any other skill: practice makes perfect.
...
The main issue with using coarse cues isn’t that we trust people we shouldn’t (trusting a con man because he’s dressed as a respectable businessman), but that we don’t trust people we should (mistrusting someone because of their skin color, clothing, accent, etc., when in fact they are perfectly trustworthy).