Entrepreneurship and Attitudes
One attitude that you see a lot in Silicon Valley-style startup culture, and that I would love to see spread to Finland as well, is treating entrepreneurship as a nearly altruistic activity.
In that culture, an entrepreneur is someone who finds a new way to improve other people's lives, and then implements it. I have read "how to become an entrepreneur" articles coming out of that culture whose content can be roughly summarized in a few steps: 1) think of as many shortcomings in people's lives as you can 2) come up with ways to fix those shortcomings while also benefiting from it yourself 3) pick the shortcoming that you can fix for as many people as possible with as little effort as possible 4) get to work. The entrepreneur is then a hero who improves the quality of other people's lives. He does take his own share of that benefit, but everyone acknowledges that the benefit he produces for others is always many times larger than what he himself gets out of it.
This attitude seems to be entirely missing from Finnish public discourse. Entrepreneurship is talked about mainly as a way to create jobs or improve the national economy. Even Nokia seemed to be regarded as a national treasure mainly because it helped lift the country out of a recession – the fact that it produced excellent phones for hundreds of millions of people seemed to be more of a side note.
As a result, it is perhaps no wonder that entrepreneurship doesn't seem to interest people much, or that the market economy in general is treated as a suspicious bogeyman. Entrepreneurship has become a mere way of making money, and the means by which that money is made carry little moral significance in themselves. People may talk about corporate social responsibility, but even that seems to be thought of as a kind of compensation for the company existing in the first place – a bit of dabbling on the side by which the firm justifies the continuation of its operations. More generally, even work itself becomes, in people's minds, merely an unpleasant and questionable obligation that one is forced to engage in so as to earn money.
And I would dare to bet that this may well turn into a self-fulfilling prophecy – when business is talked about only as a means of acquiring money, then it will mainly attract the kinds of people who treat it as nothing but a means of acquiring money, and the game becomes a step more cold-blooded and indifferent. I would like to see more world-savers in business.
This work by Kaj Sotala is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Editor of a psychology journal rejects a paper for being too good. Authors resubmit the paper with messier results. Editor accepts and encourages other people to submit similar papers.
> Inzlicht describes how, as associate editor at the Journal of Experimental Psychology: General, he rejected a certain manuscript. He did so despite the fact that the peer review reports had been very positive. The article reported 7 studies, all of which found nice, statistically significant evidence for the hypothesis in question.
> So why reject it? Because, to Inzlicht, it was just too good to be true. Real data just aren’t that consistent, suggesting that the results had been made more consistent through p-hacking, selective reporting, or other biases. So he did not accept the paper, and told the authors his concerns.
> That was a radical move. Until a few years ago, at many psychology journals, you would struggle to publish a paper that wasn’t a perfect series of uniformly positive results. Today there’s a little more skepticism but this is the first case I know of where a paper has been rejected for being too good.
> So the manuscript was rejected, but the authors resubmitted it. The revised version included all of the studies they ran, with no p-hacking. Out of 18 hypothesis tests included in the new version, only two were statistically significant – that’s 2/18 positive, down from 7/7 originally! However, a meta-analysis of all the studies still returned a significant positive result. Inzlicht accepted this second version and it was published in June.
> Inzlicht says “I am a huge fan of this second paper” and calls it “a model of transparency and a template for the kinds of things we should be seeing more of in our top journals”.
Real Data Are Messy - Neuroskeptic
In some situations, we may be better off if everyone thinks selfishly rather than morally. In one study, two people told to negotiate selfishly were, on average, better at finding win-win solutions than two people told to seek justice.
> Ironically, our tendency toward biased fairness is sufficiently strong that, in some situations, we may be better off if everyone thinks selfishly rather than morally. Fieke Harinck and colleagues at the University of Amsterdam had pairs of strangers negotiate over penalties for four hypothetical criminal cases, modeled after real-life cases. Each pair of negotiators negotiated over all four cases simultaneously. One member of each pair was randomly assigned to the role of defense lawyer and, as such, attempted to get lighter penalties for the defendants. The other negotiator played the role of district attorney and, as such, attempted to get stiffer penalties.
> There were, in each criminal case, five possible penalties for the defendant, ranging from a light fine to a long jail sentence. Each negotiator received a confidential document telling her how good or bad each outcome was from her point of view as defense lawyer/district attorney. For two of the criminal cases, the outcome values were arranged to make it a “zero sum” game. That is, a gain for one player necessarily involved an equally large loss for the other player. For the other two cases, however, the outcome values were arranged to allow for “win-win” solutions. In these cases, a gain for one side would still involve a loss for the other side, but the two cases were weighted differently for each negotiator. This meant that each player could make concessions on the case that mattered less to him and, in return, gain concessions on the case that mattered more. In other words, the experiment was set up so that both sides could come out ahead if both sides were willing to make concessions. Unbeknownst to the negotiators, each outcome had a pre-assigned point value, corresponding to the goodness/badness of the outcome for that negotiator. By adding up the points earned by both members of each negotiating pair, the experimenters could measure how well each pair did at finding hidden “win-win” solutions.
> All of this is part of the standard setup for a negotiation experiment. The twist, in this case, was in how the negotiators were told to think about the negotiation. Some pairs were told to think about the negotiation in purely selfish terms, to try to get lighter/stiffer penalties because doing so would advance their careers and help them get promoted. Other pairs of negotiators were told to think about the negotiation in moral terms; here, the defense lawyers were told to pursue lighter penalties because lighter penalties are, in these cases, more just. Likewise, the district attorneys were told to pursue stiffer penalties because stiffer penalties would be more just.
> So who did better, the selfish careerists or the seekers of justice? The surprising answer is that the selfish careerists did better. Bear in mind that the selfish careerists did not succeed by trampling over the seekers of justice. The selfish careerists were negotiating with each other. What Harinck and colleagues found was that two people told to negotiate selfishly were, on average, better at finding win-win solutions than two people told to seek justice. Why is this?
> Once again, in this set of negotiations, the key to mutual success is for both negotiators to make concessions on the issues that are less important to them, in order to make greater gains on the issues that are more important to them. As a selfish negotiator, you’re willing to make these concessions because they result in a net gain. Moreover, you understand that your opponent, who is also selfish, will make only those concessions that result in a net gain for her. Thus, two selfish and rational negotiators who see that their positions are symmetrical will be willing to make the concessions necessary to enlarge the pie, and then split the pie evenly. However, if negotiators are seeking justice, rather than merely looking out for their bottom lines, then other, more ambiguous, considerations come into play, and with them the opportunity for biased fairness. Maybe your clients really deserve lighter penalties. Or maybe the defendants you’re prosecuting really deserve stiffer penalties. There is a range of plausible views about what’s truly fair in these cases, and you can choose among them to suit your interests. By contrast, if it’s just a matter of getting the best deal you can from someone who’s just trying to get the best deal for himself, there’s a lot less wiggle room, and a lot less opportunity for biased fairness to create an impasse. When you’re thinking of the negotiation as a mutually self-serving endeavor, it’s harder to convince yourself that there is a relevant asymmetry between you and your negotiation partner. Two selfish negotiators with no illusions about their selfishness have nowhere to hide. This surprising result doesn’t imply that we should forsake all moral thinking in favor of pure selfishness, but it does highlight one of the dangers in moral thinking. Biased fairness is sufficiently destructive that, in some cases, we’re better off putting morality aside and simply trying to get a good deal.
-- Greene, Joshua (2014-01-02). Moral Tribes: Emotion, Reason and the Gap Between Us and Them (Kindle Locations 1428-1464). Atlantic Books Ltd. Kindle Edition.
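The payoff structure Greene describes can be made concrete with a toy computation. All point values and scoring functions below are invented for illustration – they are not the actual values used by Harinck and colleagues – but they reproduce the essential asymmetry: each negotiator cares more about a different case, so trading concessions beats splitting the difference.

```python
# Toy sketch of an integrative ("win-win") negotiation payoff structure.
# Two cases; penalties range from 0 (lightest) to 4 (stiffest).
# Case A is weighted more heavily for the defense lawyer,
# case B more heavily for the district attorney.

def lawyer_points(a, b):
    # The lawyer scores more for lighter penalties, especially on case A.
    return (4 - a) * 30 + (4 - b) * 10

def da_points(a, b):
    # The DA scores more for stiffer penalties, especially on case B.
    return a * 10 + b * 30

# Naive compromise: meet in the middle on both cases.
compromise = lawyer_points(2, 2) + da_points(2, 2)  # 80 + 80 = 160

# Integrative deal: each side concedes on the case it cares less about.
# The lawyer gets a light penalty on case A, the DA a stiff one on case B.
win_win = lawyer_points(0, 4) + da_points(0, 4)  # 120 + 120 = 240

print(compromise, win_win)  # 160 240
```

Under these (made-up) weights, both negotiators individually score 120 under the integrative deal versus 80 under the middle-of-the-road compromise – the "enlarged pie" that selfish negotiators, each only accepting concessions that are net gains for them, were better at finding.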
Biased fairness: it seems that knowing which side of a dispute you’re on unconsciously changes your thinking about what’s fair.
> In 1995, a U.S. News & World Report survey posed the following question to readers: “If someone sues you and you win the case, should he pay your legal costs?” Eighty-five percent of respondents said yes. Others got this question: “If you sue someone and lose the case, should you pay his costs?” This time, only 44 percent said yes. As this turnabout illustrates, one’s sense of fairness is easily tainted by self-interest. This is biased fairness, rather than simple bias, because people are genuinely motivated to be fair. Suppose the magazine had posed both versions of the question simultaneously. Few respondents would have said, “The loser should pay if I’m the winner, but the winner should pay if I’m the loser.” We genuinely want to be fair, but in most disputes there is a range of options that might be seen as fair, and we tend to favor the ones that suit us best. Many experiments have documented this tendency in the lab. The title of a Dutch paper nicely summarizes the drift of these findings: “Performance-based pay is fair, particularly when I perform better.”
> A series of negotiation experiments by Linda Babcock, George Loewenstein, and colleagues illuminates the underlying psychology of biased fairness. In some of these experiments, pairs of people negotiated over a settlement for a motorcyclist who had been hit by a car. The details of the hypothetical case were based on a real case that had been tried by a judge in Texas. At the start of the experiment, the subjects were randomly assigned to their roles as plaintiff and defendant. Before negotiating, they separately read twenty-seven pages of material about the case, including witness testimony, maps, police reports, and the testimonies of the real defendant and plaintiff. After reading this material, they were asked to guess what the real judge had awarded the plaintiff, and they did this knowing which side they would be on. They were given a financial incentive to guess accurately, and their guesses were not revealed to the opponents, lest they weaken their bargaining positions. Following the subsequent negotiation, the subjects were paid real money in proportion to the size of the settlement, with the plaintiff subject getting more money for a larger settlement and the defendant subject getting more money for a smaller one. The settlement could be anywhere from $0 to $100,000. The pairs negotiated for thirty minutes. Both subjects lost money in “court costs” as the clock ticked, and failure to agree after thirty minutes resulted in an additional financial penalty for both negotiators.
> On average, the plaintiffs’ guesses about the judge’s award were about $15,000 higher than those of the defendants, and the bigger the discrepancy between the two guesses, the worse the negotiation went. In other words, the subjects’ perceptions of reality were distorted by self-interest. What’s more, these distortions played a big role in the negotiation. Pairs with relatively small discrepancies failed to agree only 3 percent of the time, while the negotiating pairs with relatively large discrepancies failed to agree 30 percent of the time. In a different version of the experiment, the negotiators didn’t know which side they would be on until after they made their guesses about the judge’s settlement. This dropped the overall percentage of negotiators who failed to agree from 28 percent to 6 percent.
> These experiments reveal that people are biased negotiators, but, more important, they reveal that their biases are unconscious. Plaintiffs guessed high about the judge’s award, and defendants guessed low, but they weren’t consciously inflating or deflating their guesses. (Once again, they had financial incentives to guess accurately.) Rather, it seems that knowing which side of a dispute you’re on unconsciously changes your thinking about what’s fair. It changes the way you process the information. In a related experiment, the researchers found that people were better able to remember pretrial material that supported their side. These unconsciously biased perceptions of fairness make it harder for otherwise reasonable people to reach agreements, often to the detriment of both sides.
-- Greene, Joshua (2014-01-02). Moral Tribes: Emotion, Reason and the Gap Between Us and Them (Kindle Locations 1374-1404). Atlantic Books Ltd. Kindle Edition.
This distinction between local and global values is interesting: "global" values are ones which are present in all cultures, though different cultures may emphasize different parts of them. E.g. a tension between the values of individual freedom and collective benefit exists in every culture, even if some cultures emphasize freedom more and others emphasize collective benefit more. "Local" values, on the other hand, are values that derive their authority "from proper nouns", such as specific religious texts.
I'm not entirely sure that I agree with Greene's assertion that "if it's local morality, it's probably religious", though. Something like patriotism towards one's country also makes reference to a proper noun, and there do seem to be people who feel that the laws of their country have a moral force by themselves - in which case they would be giving some moral authority to the parliament, congress, etc. of their particular country (another proper noun).
> [The conflicts illustrated by the Jyllands-Posten Muhammad cartoons episode] are not simply a matter of different groups’ emphasizing different values. The aggrieved Muslims were deeply opposed to the cartoons, but the Danish journalists had no problem with them at all. (Indeed, their lack of a gut reaction to the cartoons may explain why they so severely underestimated the magnitude of the Muslim world’s response.) And insofar as non-Muslims opposed the publication of the cartoons, it was out of respect for Muslim values, not because they had objections of their own. In other words, the prohibition against depicting Muhammad is a local moral phenomenon. By this I mean, once again, that it is inextricably bound up with the authority of certain entities named by proper nouns, such as Muhammad, the Koran, and Allah.
> The Danish cartoon conflict illustrates two familiar points that are nevertheless worth making explicit. First, religious moral values and local moral values are intimately related. More specifically, local moral values are nearly always religious values, although many religious values – arguably the most central ones – are not local. For example, as noted in the last chapter, all major religions affirm some version of the Golden Rule as a central principle and, along with it, general (though not exceptionless) prohibitions against killing, lying, stealing, et cetera. If it’s local morality, it’s probably religious, but if it’s religious morality, it’s not necessarily local.
> Second, the cartoon controversy reminds us that local moral values are, and have long been, a major source of conflict. Indeed, compared with other conflicts involving local religious values, the Danish cartoon affair is a minor dustup. The ongoing Israeli-Palestinian conflict, arguably the world’s most divisive political dispute, is bedeviled by competing claims to specific parcels of land, grounded in the authority of various proper-noun entities. Likewise, the ongoing conflicts within Sudan and between Pakistan and India run along religious lines. Local values and their associated proper nouns play central roles in many domestic controversies, such as those over prayer in public schools in the United States and the banning of Muslim women’s traditional facial coverings from public spaces in France. Likewise, many controversial issues, such as abortion and gay rights, which can be discussed in purely secular terms, are nevertheless deeply intertwined with local religious moral values.
> In short, serious conflicts between groups arise not only because they have competing interests, and not only because they emphasize shared values differently, but also because they have distinctive local values, typically grounded in religion. As noted above, many of the most widely held moral values, such as a commitment to the Golden Rule, are actively promoted by the world’s religions. Thus, religion can be a source of both moral division and moral unity.
-- Greene, Joshua (2014-01-02). Moral Tribes: Emotion, Reason and the Gap Between Us and Them (Kindle Locations 1352-1373). Atlantic Books Ltd. Kindle Edition.
> Given the strength and pervasiveness of racial bias, you might think that we are “hardwired” for racial discrimination. But if you think about it, this makes little sense. In the world of our hunter-gatherer ancestors, one was unlikely to encounter someone whom, today, we would classify as a member of a different race. On the contrary, the “Them” on the other side of the hill would likely be physically indistinguishable from “Us.” This suggests that race, far from being an innate trigger, is just something that we happen to use today as a marker of group membership. From an evolutionary perspective, one would expect the human mind’s social sorting system, if it has one, to be more flexible, sorting people based on culturally acquired characteristics, such as language and clothing, rather than genetically inherited physical features.
> With this in mind, Robert Kurzban and his colleagues conducted an experiment in which they pitted people’s sensitivity to race against their sensitivity to cultural markers of group membership. They had people watch an argument unfold between members of two mixed-race basketball teams. The participants saw pictures of various players paired with partisan statements such as “You were the ones that started the fight.” The researchers then gave their participants a surprise memory test, asking them to pair pictures of people with the things those people said. By looking at the kinds of mistakes people made on the test, the experimenters could see how the participants were categorizing the players. If people in this experiment are highly sensitive to race, then they should rarely misattribute a white person’s statement to a black person, or vice versa. Likewise, if people are highly sensitive to team membership, they should rarely misattribute one player’s statement to a player on the other team. Kurzban and his colleagues found that, in the absence of salient markers of team membership, people paid a lot of attention to race and not a lot of attention to team membership. That is, people were relatively unlikely to misattribute statements across racial lines and more likely to misattribute statements across team lines. However, when the players wore colored T-shirts indicating team membership, everything reversed. Suddenly, race mattered much less, and team membership mattered much more. [...]
> ...our brains are wired for tribalism. We intuitively divide the world into Us and Them, and favor Us over Them. We begin as infants, using linguistic cues, which historically have been reliable markers of group membership. In the modern world, we discriminate based on race (among other things), but race is not a deep, innate psychological category. Rather, it’s just one among many possible markers for group membership. As Tajfel’s experiments show, we readily sort people into Us and Them based on the most arbitrary of criteria. This sounds crazy, and in many ways it is. But it’s what one might expect from a species that survives by cooperating in large groups – large enough that members cannot identify one another without the help of culturally acquired identity badges.
-- Greene, Joshua (2014-01-02). Moral Tribes: Emotion, Reason and the Gap Between Us and Them (Kindle Locations 886-903, 923-929). Atlantic Books Ltd. Kindle Edition.