The parliamentary model as the correct ethical model

In 2009, Nick Bostrom brought up the possibility of dealing with moral uncertainty with a “parliamentary model” of morality. Suppose that you assign (say) 40% probability to some particular form of utilitarianism being correct, 20% probability to some other form of utilitarianism being correct, and 20% probability to some form of deontology being correct. Then in the parliamentary model, you imagine yourself as having a “parliament” that decides on what to do, with the first utilitarian theory having 40% of the delegates, the other form having 20% of the delegates, and the deontological theory having 20% of the delegates. The various delegates then bargain with each other and vote on different decisions. Bostrom explained:

The idea here is that moral theories get more influence the more probable they are; yet even a relatively weak theory can still get its way on some issues that the theory thinks are extremely important by sacrificing its influence on other issues that other theories deem more important. For example, suppose you assign 10% probability to total utilitarianism and 90% to moral egoism (just to illustrate the principle). Then the Parliament would mostly take actions that maximize egoistic satisfaction; however it would make some concessions to utilitarianism on issues that utilitarianism thinks are especially important. In this example, the person might donate some portion of their income to existential risk research and otherwise live completely selfishly.
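Bostrom's numerical example can be turned into a toy computation. The sketch below is my own simplification, not Bostrom's actual proposal (which involves open-ended bargaining between delegates): here each theory simply spreads its credence-given influence across issues in proportion to how much it cares about each, and an issue goes to whichever option attracts the most credence-weighted caring. All the theory names, issues, and caring numbers are illustrative assumptions.

```python
# Toy sketch of the parliamentary model. This is a simplification: real
# delegates would bargain, but here a theory's pull on an issue is just
# credence * caring, so a low-credence theory can still win the few
# issues it cares about overwhelmingly.

def decide(credences, stakes, issue):
    """Return the winning option on `issue`.

    credences: {theory: probability of the theory being correct}
    stakes:    {theory: {issue: (preferred_option, caring in [0, 1])}}
    """
    tally = {}
    for theory, credence in credences.items():
        option, caring = stakes[theory][issue]
        tally[option] = tally.get(option, 0.0) + credence * caring
    return max(tally, key=tally.get)

# Bostrom's example: 10% total utilitarianism, 90% egoism.
credences = {"utilitarianism": 0.10, "egoism": 0.90}
stakes = {
    # Utilitarianism cares overwhelmingly about the donation issue.
    "utilitarianism": {"lifestyle": ("altruistic", 0.05),
                       "donations": ("donate", 0.95)},
    # Egoism cares mostly about lifestyle, little about donations.
    "egoism": {"lifestyle": ("selfish", 0.95),
               "donations": ("keep", 0.05)},
}

print(decide(credences, stakes, "lifestyle"))  # selfish (0.855 vs 0.005)
print(decide(credences, stakes, "donations"))  # donate (0.095 vs 0.045)
```

Under these made-up numbers the parliament lives mostly selfishly but concedes the donation issue to the 10% theory, which is exactly the pattern Bostrom describes.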

As I noted, the model was proposed for dealing with a situation where you’re not sure of which ethical theory is correct. I view this somewhat differently. I lean towards the theory that the parliamentary model itself is the most correct ethical theory, as the brain seems to contain multiple different valuation systems that get activated in different situations, as well as multiple competing subsystems that feed inputs to these higher-level systems. (E.g. there exist both systems that tend to produce more deontological judgments, and systems that tend to produce more consequentialist judgments.)

Over time, I’ve settled upon something like a parliamentary model for my own decision-making. Different parts of me clearly tend towards different kinds of ethical frameworks, and rather than collapse into constant infighting, the best approach seems to be a compromise where the most dominant parts get their desires most of the time, but less dominant parts also get their desires on issues that they care particularly strongly about. For example, a few days back I was considering the issue of whether I want to have children; several parts of my mind, subscribing to various ethical theories, felt that the idea of having them was a little iffy. But then a part of my mind piped up that clearly cared very strongly about the issue, and which had a strong position of “YES. KIDS”. Given that the remaining parts of my mind only had ambivalent or weak preferences on the issue, they decided to let the part with the strongest preference have its way, in order to get its support on other issues.

There was a time when I had a strong utilitarian faction in my mind which did not want to follow a democratic process and tried to force its will on all the other factions. This did not work very well, and I’ve felt much better after it was eventually overthrown.


  1. Tangled z

    That’s very interesting, and matches my observations – I can relate to the feeling of having a strong utilitarian faction in me that tries to enforce its will on the other factions, and I agree that it often leaves me feeling quite miserable.

    So I’ve been trying to figure out an alternative way of making decisions, but so far have not been able to. Particularly, I am not sure of how to articulate what those other factions are, or what they want… I am much more aware of what the utilitarian faction wants because it’s also the one I’m most used to listening to, and the one that’s used to articulating its desires more.

    Any suggestions?

    • That’s a good question – when I first started intentionally “deprogramming” myself from the “utilitarian dictatorship”, I also had difficulties knowing what I’d do instead.

      I’m afraid that I don’t fully remember the details of how I did it anymore, but I think that a big part of it was focusing on questions like “what do I want to do”, “what would be fun to do”, or “what do I feel like doing” rather than “what should I do” or “what would the utilitarian thing to do be”. Some of those three questions felt easier than others, so it’s worth experimenting: I find that subtle differences in the exact question that I ask myself can produce large differences in the answer. (Phil Goetz once noted that if he asks himself why he forgot something, he gets back the answer that he’s a moron; but if he asks himself how he could remember it the next time, that actually produces useful ideas.)

      Another thing is, start small. I was used to using utilitarianism to think about all of my life decisions, big and small. It’s easier to start listening to yourself on small concrete things rather than big abstract ones; if a long timescale causes you trouble, go for a shorter one. When I was having difficulty figuring out what I wanted to do with my next year, or even what I wanted to do with my next hour, I focused on what I would do with my next five seconds. Which was easier, because often that produced something more concrete, like “I’m thirsty, a glass of water might be nice”. And then I went to get a glass of water and that was nice, and then I could think about the next five seconds after that.

      You can also have periods when you just outright forbid yourself from doing anything that would be useful in utilitarian terms. If you think of doing something, and notice a utilitarian motivation for doing it, then refuse to do it. This might have you just sitting around twiddling your thumbs for a while, but after rejecting enough ideas and getting bored enough, you’ll eventually come up with things that feel like fun and not just useful. And then you can pay attention to what that sense of “this would be fun” feels like – it’ll be coming from some other part of your mind than the utilitarian one.

      • Tangled z

        Thanks for the detailed answer, this will be useful to think about and digest :)

        I liked the “specific questions” link you gave. Do you have any similar links that explore different ways of phrasing a question?

      • Hmm. I’ve seen the general idea – that the specific phrasing of the question you ask yourself matters a lot – pop up in a lot of different contexts, but off the top of my head, I don’t have very much in the way of concrete links to offer. Some of it comes up in the context of therapy, e.g. if somebody is describing a traumatic experience, their reaction is likely to be a lot different if their therapist asks something like “what were the strengths that allowed you to get through that” (implicitly presupposes that the client has personal strengths which allowed them to survive, focuses the client’s attention on thinking about those and may help re-interpret the experience in more positive terms) rather than “what was bad about it” (focuses the client’s attention on the negative sides and causes them to dwell on those more). covers some specific questions that have been useful for achieving change. Various stuff on coaching also discusses this somewhat, since coaching is largely about asking the client the kinds of questions that produce useful changes in them.

