This work by Kaj Sotala is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
> Tim Harford writes [...] to argue that people are mostly impervious to facts and resistant to logic [...] He admits he has no easy answers, but cites some studies showing that “scientific curiosity” seems to help people become interested in facts again. He thinks maybe we can inspire scientific curiosity by linking scientific truths to human interest stories, by weaving compelling narratives, and by finding “a Carl Sagan or David Attenborough of social science”. [...]
> Harford describes his article as being about agnotology, “the study of how ignorance is deliberately produced”. His key example is tobacco companies sowing doubt about the negative health effects of smoking – for example, he talks about tobacco companies sponsoring (basically accurate) research into all of the non-smoking-related causes of disease so that everyone focused on those instead. But his solution – telling engaging stories, adding a human interest element, enjoyable documentaries in the style of Carl Sagan – seems unusually unsuited to the problem. The National Institute of Health can make an engaging human interest documentary about a smoker who got lung cancer. And the tobacco companies can make an engaging human interest documentary about a guy who got cancer because of asbestos, then was saved by tobacco-sponsored research. Opponents of Brexit can make an engaging documentary about all the reasons Brexit would be bad, and then proponents of Brexit can make an engaging documentary about all the reasons Brexit would be good. If you get good documentary-makers, I assume both will be equally convincing regardless of what the true facts are. [...]
> Purely Logical Debate is difficult and annoying. It doesn’t scale. It only works on the subset of people who are willing to talk to you in good faith and smart enough to understand the issues involved. And even then, it only works glacially slowly, and you win only partial victories. What’s the point?
> Logical debate has one advantage over narrative, rhetoric, and violence: it’s an asymmetric weapon. That is, it’s a weapon which is stronger in the hands of the good guys than in the hands of the bad guys. In ideal conditions (which may or may not ever happen in real life) – the kind of conditions where everyone is charitable and intelligent and wise – the good guys will be able to present stronger evidence, cite more experts, and invoke more compelling moral principles. The whole point of logic is that, when done right, it can only prove things that are true. [...] Violence is a symmetric weapon; the bad guys’ punches hit just as hard as the good guys’ do. [...] And the same is true of rhetoric. Martin Luther King was able to make persuasive emotional appeals for good things. But Hitler was able to make persuasive emotional appeals for bad things. [...] Unless you use asymmetric weapons, the best you can hope for is to win by coincidence. [...]
> Improving the quality of debate, shifting people’s mindsets from transmission to collaborative truth-seeking, is a painful process. It has to be done one person at a time, it only works on people who are already almost ready for it, and you will pick up far fewer warm bodies per hour of work than with any of the other methods. But in an otherwise-random world, even a little purposeful action can make a difference. Convincing 2% of people would have flipped three of the last four US presidential elections. And this is a capacity to win-for-reasons-other-than-coincidence that you can’t build any other way.
Guided By The Beauty Of Our Weapons
Via Slate Star Codex: apparently the "backfire effect" for political facts may not actually exist:
> First described in a 2010 paper by the political scientists Brendan Nyhan and Jason Reifler, the idea is simple: If someone believes something that’s false, and you present them with a correction, in many situations rather than update their belief they will double down, holding even tighter to that initial belief. [...]
> Two new upcoming studies of the backfire effect call into question its very existence. These studies collected far more subjects than the original backfire study, and both find effectively no backfire effect at all. And unlike the original study, the subjects in these new ones weren’t just college students — they were thousands of people, of all ages, from all around the country.
> If this new finding holds up, this is a very important, well, correction: It suggests that overall, fact-checking may be more likely to cause people, even partisans, to update their beliefs rather than to cling more tightly to them. And part of the reason we now know this is that Nyhan and Reifler put their money where their mouths were: When a team of two young researchers approached them suggesting a collaboration to test the backfire effect in a big, robust, public way, they accepted the challenge. So this is partly a story about a potentially important new finding in political science and psychology — but the story within the story is about science being done right. [...]
> As the paper notes, the experiments were set up in ways designed to maximize the chances of a backlash effect being observed. Many of the issues the respondents were asked about are extremely politically charged — abortion and gun violence and illegal immigration — and the experiment was conducted during one of the most heated and unusual presidential elections in modern American history. The idea was something like, Well, if we can’t find the backfire effect here, with a big sample size under these sorts of conditions, then we can safely question whether it exists.
> And that’s what happened. “Across all experiments,” the researchers write, “we found only one issue capable of triggering backfire: whether WMD were found in Iraq in 2003.” Even there, changing the wording of the item in question eliminated the backfire effect.
There's (More) Hope for Political Fact-checking – Science of Us
We are pleased to announce our newest paper. It harnesses deep learning to accelerate other fields of science. We believe machine learning can completely transform animal behavior, ecology, conservation, evolutionary biology, and similar fields into big data sciences. How exciting! Congrats Mohammed and Anh, and thanks to our fantastic collaborators on the Snapshot Serengeti project!
"Automatically identifying wild animals in camera trap images with deep learning"
Norouzzadeh M, Nguyen A, Kosmala M, Swanson A, Parker C, Clune J
Having accurate, detailed, and up-to-date information about wildlife location and behavior across broad geographic areas would revolutionize our ability to study, conserve, and manage species and ecosystems. Currently such data are mostly gathered manually at great expense, and thus are sparsely and infrequently collected. Here we investigate the ability to automatically, accurately, and inexpensively collect such data, which could transform many fields of biology, ecology, and zoology into "big data" sciences. Motion sensor cameras called "camera traps" enable pictures of wildlife to be collected inexpensively, unobtrusively, and at high volume. However, identifying the animals, animal attributes, and behaviors in these pictures remains an expensive, time-consuming, manual task often performed by researchers, hired technicians, or crowdsourced teams of human volunteers. In this paper, we demonstrate that such data can be automatically extracted by deep neural networks (aka deep learning), a cutting-edge type of artificial intelligence. In particular, we use the existing human-labeled images from the Snapshot Serengeti dataset to train deep convolutional neural networks for identifying 48 species in 3.2 million images taken from Tanzania's Serengeti National Park. We train neural networks that automatically identify animals with over 92% accuracy, and we expect that number to improve rapidly in years to come. More importantly, we can choose to have our system classify only the images it is highly confident about, allowing valuable human time to be focused only on challenging images. In this case, our system can automate animal identification for 98.2% of the data while still performing at the same 96.6% accuracy level as crowdsourced teams of human volunteers, saving approximately 8.3 years (at 40 hours per week) of human labeling effort (i.e. over 17,000 hours) on a 3.2-million-image dataset.
Those efficiency gains immediately highlight the importance of using deep neural networks to automate data extraction from camera trap images. The improvements in accuracy we expect in years to come suggest that this technology could enable the inexpensive, unobtrusive, high-volume and perhaps even realtime collection of information about vast numbers of animals in the wild.
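The confidence-thresholding idea the abstract describes can be sketched in a few lines. This is purely illustrative, not the paper's actual pipeline: the function name, the 0.95 threshold, and the toy species probabilities are all my own assumptions about how such a triage step might look.

```python
def route_predictions(predictions, threshold=0.95):
    """Split classifier outputs into auto-labeled and human-review queues.

    predictions: list of (image_id, {species: probability}) pairs,
    where each probability dict is a softmax-style distribution.
    Returns (auto_labeled, needs_review): confident predictions are
    accepted automatically; ambiguous images go to human volunteers.
    """
    auto_labeled, needs_review = [], []
    for image_id, probs in predictions:
        # Take the top-scoring species and its confidence.
        species, confidence = max(probs.items(), key=lambda kv: kv[1])
        if confidence >= threshold:
            auto_labeled.append((image_id, species))
        else:
            needs_review.append(image_id)
    return auto_labeled, needs_review

# Toy example: two confident predictions, one ambiguous image.
preds = [
    ("img_001", {"zebra": 0.99, "wildebeest": 0.01}),
    ("img_002", {"lion": 0.97, "leopard": 0.03}),
    ("img_003", {"gazelle": 0.55, "impala": 0.45}),
]
auto, review = route_predictions(preds)
```

Tuning the threshold trades off coverage against accuracy: a higher threshold sends more images to humans but makes the automated labels more reliable, which is how the paper can automate most of the data while matching volunteer-level accuracy on what it does label.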
Argh, these are too accurate. :D
> Sorry for the delayed response. I opened your e-mail on my phone while my date was in the bathroom, but then I saw that it required more than a “yes” or “no” reply, decided that was too much work, marked it as unread, and then forgot about it entirely until just now! [...]
> Yikes! The little Gmail preview text made your e-mail seem like a regular “Great, thanks!” e-mail I didn’t need to open, but now I see that you asked a question after the “Great, thanks!” I think we can agree that was a stupid thing for you to do, so I don’t even feel bad. I assume you found someone else to answer your question by now, but, if not, Laura (cc’d) should be able to help.
> Oof, I really didn’t mean to take twelve days to respond to your e-mail. Does it ever feel like time is passing way too fast and you never have enough hours in the day to accomplish anything? Like, the sands of time have eroded the midpoint of the hourglass and it’s all just freely falling through now? Anyway, to answer your question: Yes, I can do 3 p.m. [...]
> You e-mailed asking for my opinion, and I wanted to give a really thorough, well-thought-out, articulate response, so I starred your e-mail, and over time it became a mascot for my illogical but oppressive sense of dread in the face of slightly annoying tasks. That little yellow star became a shining testament to the burden of modernity! Every day, it dared me to write a response worthy of the time I’ve made you wait, and every day I thought, Ugh, no. But today! Today I will respond! Rejoice, my patient friend! (I’m actually really busy, though, so this is going to be a vague, half-assed response that I could have easily written in the minute after I first read your e-mail, five months ago.) Sorry!
Sorry for the Delayed Response
> The problem is a well-known one, and indeed one we have discussed here before: as soon as you try to measure how well people are doing, they will switch to optimising for whatever you’re measuring, rather than putting their best efforts into actually doing good work.
> In fact, this phenomenon is so very well known and understood that it’s been given at least three different names by different people [...] Goodhart’s Law is most succinct [...] Campbell’s Law is the most explicit [...] The Cobra Effect refers to the way that measures taken to improve a situation can directly make it worse. [...]
> > The basic principle for any incentive scheme is this: can you measure everything that matters? If you can’t, then high-powered financial incentives will simply produce short-sightedness, narrow-mindedness or outright fraud. If a job is complex, multifaceted and involves subtle trade-offs, the best approach is to hire good people, pay them the going rate and tell them to do the job to the best of their ability.
> I think that last part is pretty much how academia used to be run a few decades ago. Now I don’t want to get all misty-eyed and rose-tinted and nostalgic — especially since I wasn’t even involved in academia back then, and don’t know from experience what it was like. But could it be … could it possibly be … that the best way to get good research and publications out of scholars is to hire good people, pay them the going rate and tell them to do the job to the best of their ability?
Every attempt to manage academia makes it worse