
Kaj Sotala:
Two crucial questions in discussions about the risks of artificial superintelligence are: 1) How much more capable could an AI become relative to humans, and 2) how easily could superhuman capability be acquired? To answer these questions, I consider the literature on human expertise and intelligence, discuss its relevance for AI, and consider how an AI could improve on humans in two major aspects of thought and expertise, namely mental simulation and pattern recognition. I find that although there are very real limits to prediction, it seems like an AI could still substantially improve on human intelligence, possibly even mastering domains which are currently too hard for humans. In practice, the limits of prediction do not seem to pose much of a meaningful upper bound on an AI’s capabilities, nor do we have any nontrivial lower bounds on how much time it might take to achieve a superhuman level of capability. Takeover scenarios with timescales on the order of mere days or weeks seem to remain within the range of plausibility.
How Feasible Is the Rapid Development of Artificial Superintelligence? – Foundational Research Institute
Kaj Sotala:
> I was down at my grandparents’ shore house a few weeks ago relaxing and drawing some maps for another group’s campaign. My grandma asked about what I was doing, and I explained that it was for D&D. She said, “Oh we’d like to play, we love games!”
> I actually tried to talk her out of it at first, thinking it would be a waste of time because there was no way that my grandparents would ever be interested in playing D&D. But they pushed the issue and invited me over for dinner, telling me to bring everything I would need for them to play. [...]
> I would say I’m most surprised by my grandpa and how he has taken to the game. Out of everyone that’s playing, he is the one that I least expected to get really into his character. He’s a tough guy who has certainly done his share of manual labor, but he’s playing a sneaky halfling rogue named Jeffro. He’s really dived in headfirst and has even texted me to talk about his character’s backstory in between sessions.
70-Year-Olds Play D&D for the First Time (and Love it) | Tabletop Terrors - RPG Inspiration for All
Kaj Sotala:
> Medical science is also taking a closer look and finding that spirituality and recovery tend to go hand-in-hand, but not for the reasons most people assumed when AA originated in mid-20th century America.
> As John F. Kelly, professor of psychiatry at Harvard University, put it in an interview, prayer, like meditation, seems to have a protective effect against relapse.
> “Prayer is hope, and hope is a positive emotion,” he said. Studies have shown it lessens the effect of alcohol triggers and is a predictor of abstinence in teenage addicts. Spirituality also tends to increase over the course of recovery. Kelly compares this to the physics of light. Just as light separates through a prism into its constituent colours, so does spirituality separate into positive emotions: gratitude, hope, bliss, empathy, compassion, awe, etc. These, in psychiatric lingo, are the mechanisms of recovery.
> On this view, the literal truth of any spiritual belief is almost beside the point, and Kelly notes in a new paper that Bill Wilson, AA’s revered co-founder, sometimes talked about God as a “pragmatic recovery tool.”
Alcoholics Anonymous’ universal status challenged in human rights court for forcing belief in higher power
Kaj Sotala:
> We live in an age of algorithmic decision-making. There are algorithms trading stocks on Wall Street (Patterson 2013); algorithms determining who is the most likely to be guilty of tax evasion (Zarsky 2013); algorithms assisting in scientific discovery (Mayer-Schonberger and Cukier 2013); and algorithms helping us in dating and mating (Slater 2013). This is just a small sample: many more could be listed (Siegel 2013). With the ongoing data revolution, and the transition towards the so-called “Internet of Things” this trend can only be set to grow (Kitchin 2014a; Kellermeit & Obodovski 2013; Rifkin 2014).
> The question raised by this article is whether the use of such algorithm-based decision-making in the public and political sphere is problematic. Suppose that the creation of new legislation, or the adjudication of a legal trial, or the implementation of a regulatory policy relies heavily on algorithmic assistance. Would the resulting outputs be morally problematic? As public decision-making processes that issue coercive rules and judgments, it is widely agreed that such processes should be morally and politically legitimate (Peter 2014). Could algorithm-based decision-making somehow undermine this legitimacy?
> In this article, I argue that it could. Although many are concerned about the hiddenness of algorithmic decision-making, I argue that there is an equally (if not more) serious problem concerning its opacity (potential incomprehensibility to human reasoning). Using David Estlund’s (1993; 2003; 2008) threat of epistocracy argument as my model, I argue that increasing reliance on algorithms gives rise to the threat of algocracy – a situation in which algorithm-based systems structure and constrain the opportunities for human participation in, and comprehension of, public decision-making. This is a significant threat, one that is difficult to accommodate or resist.
philpapers.org/archive/DANTTO-13.pdf
Kaj Sotala:
"Do scholars follow Betteridge’s Law? The use of questions in journal article titles", Cook & Plourde 2016 https://www.dropbox.com/s/kootab1g7yr0ecm/2016-cook.pdf
"In journalistic publication, Betteridge’s Law of Headlines stipulates that ‘‘Any headline that ends in a question mark can be answered by the word no.’’ When applied to the titles of academic publication, the assertion is referred to as Hinchcliffe’s Rule and denigrates the use of the question mark in titles as a ‘‘click-bait’’ marketing strategy. We examine the titles of all published articles in the year 2014 from five top-ranked and five mid-range journals in each of six academic fields (n = 7845). We describe the form of questions when they occur, and where a title poses a question that can be answered with a ‘‘yes’’ or ‘‘no’’ we note the article’s substantive answer. We do not find support for the criticism lodged by Betteridge’s Law and Hinchcliffe’s Rule. Although patterns vary by discipline, titles with questions are posed infrequently overall. Further, most titles with questions do not pose yes/no questions. Finally, the few questions that are posed in yes/no terms are actually more often answered with a ‘‘yes’’ than with a ‘‘no.’’ Concerns regarding click-bait questions in academic publications may, therefore, be unwarranted."
