Bayesian academy game: Some game mechanics
So far I have spoken about the possibility of edugames being good, sketched out the basic idea of an edugame built around Bayesian networks, and outlined some design constraints. Now it’s finally the time to get to the actual meat of the matter – the game mechanics.
Note that everything here is subject to change. I’m aiming to outline things to a sufficient level of detail that I have a reasonable clue of how to start implementing the first prototype. Then, when I do have an early prototype together, I’ll keep experimenting with it and throw away anything that doesn’t seem fun or useful. Still, this post should hopefully give some idea of what the final product will be like.
(This post might not be enough for anyone else to start implementing an actual prototype, since there are a lot of missing pieces still. But I think I have a lot of those missing pieces in my head; it just wouldn’t be useful to try to write them all down.)
To make things concrete enough to start implementing the game, I need to define a concrete goal for (a part of) the game, some of the concrete ways of achieving that goal, as well as the choices involved in achieving the goal. And of course I need to tie all that together with the educational goals.
Goal. Let’s say that, in the first part of the game, you are trying to get yourself voted into the Student Council as the representative of the first-year students. This requires you to first gain the favor of at least three other first-year students, so that you will be nominated to the position. After that, you need to gain the favor of the majority of the most influential students, so that you will actually be the winning candidate.
From that will follow the second part of the game, where you need to get the rest of the council to support your agenda (whatever that agenda happens to be), either by persuading them or by having them replaced with people who are more supportive of you. But for now I will just focus on the first part.
I’m actually starting to think that I should possibly make this into more of a sandbox game, with no set goals and letting you freely choose your own goals. But I’ll go with this for the first prototype.
How to achieve the goals. The game keeps track of your relationship to the different characters. You achieve the nomination if at least three other characters like you enough. To be voted onto the council, a majority of the characters have to both like and trust you more than they do the other candidates.
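As a very rough sketch of how those win conditions might look in code – every name and threshold value below is a placeholder assumption rather than a final design decision:

```python
# Minimal sketch of the first part's win conditions. All names and
# threshold values are placeholder guesses, not final design decisions.

LIKE_THRESHOLD = 60  # how much a character must like you to nominate you

def is_nominated(player, first_years, like):
    """Nominated if at least three other first-years like you enough.
    like[a][b] = how much character a likes character b (0-100)."""
    supporters = [c for c in first_years
                  if c != player and like[c][player] >= LIKE_THRESHOLD]
    return len(supporters) >= 3

def wins_election(player, voters, candidates, like, trust):
    """Elected if a majority of voters both like and trust you more than
    they like and trust any other candidate."""
    votes = 0
    for voter in voters:
        best = max(candidates,
                   key=lambda c: min(like[voter][c], trust[voter][c]))
        if best == player:
            votes += 1
    return votes > len(voters) / 2
```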
This seems like a good time to talk more about relationships, which are a rather crucial element of any social drama. Relationships are one of the main types of resource in the game; the others include your public image, personal skills, and time. I’ll cover the rest shortly.
Relationships. Most games that model your relationships with other characters do so by assigning each relationship a single numerical score, with different actions giving bonuses or penalties to that score. So if you give someone a gift you might get +10 to the relationship, and if you insult them you might get -20, and your total relationship is the sum of all these factors.
This is a little boring and feels rather game-ish, so I would like to make it feel a little more like you’re actually interacting with real people. To keep things simple, there will still be an overall “relationship meter” (or actually several, if I can make them distinct and interesting enough), but affecting its value shouldn’t feel like just a mechanical process of giving your friends gifts until they love you.
Borrowing from The Ferrett’s three relationship meters, the ones I’m initially considering are like, trust, and infatuation/love.
Like measures the general extent to which someone, well, considers you likeable, even if they don’t necessarily trust you. A relationship that’s high on like but low on trust is the kind you might have with a nice acquaintance with whom you have intellectual conversations online, or with that co-worker who’s friendly enough but with whom you haven’t done any major projects or interacted outside work. A high like makes people more inclined to help you out in ways that don’t involve any risk to them.
Things that affect liking include:
- Having a public image that includes personality traits that other people like. Some traits are almost universally liked or reviled. Others are neutral, but people tend to like those who they feel are similar to them – or, with some traits, dissimilar.
- Acting in the interests of others, or contrary to them.
- People having a crush on you.
- Other people that you’re friends with.
- Various random events.
Trust measures the extent to which people are willing to rely on you, and the extent to which they think you’re not going to stab them in the back. A high trust may make it easier to repair damage to the other meters, since it makes people more inclined to give you another chance. It also makes people more inclined to help you out in ways that involve a risk to themselves, to vote for you, to confide in you, and to ask you for help.
Things that build up trust include:
- Telling people information about yourself, indicating that you trust them. If you tell someone a secret you haven’t told anyone else, this will impress them more than if you tell them something that’s common knowledge. (Just don’t jump to revealing all your deepest secrets on the first meeting, or you’ll come across as a weirdo.)
- Making and keeping promises.
- Acting in the interests of others, or contrary to them.
- People having a crush on you.
- Other people that you’re friends with.
- Various random events.
Different characters have different kinds of ideals that they are attracted to. If your public image happens to match their ideal, they may develop a crush on you and build up infatuation. The infatuation will grow in strength over time, assuming that your public image continues to match their ideal. If you then also build up their like and trust, the infatuation may turn into love.
The fact that infatuation also increases their like and trust towards you makes it easier to convert their feelings to love, but the feelings may also come crashing down very quickly if you demonstrate untrustworthiness. An infatuated character is more likely to ignore minor flaws, but anything that produces a major negative modifier to any relationship meter may cause them to become completely disillusioned and start wondering what they ever saw in you in the first place. (Love is more stable, and makes it easier to take advantage of your lovers without them leaving you. But you’d never stoop that low, would you?)
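To make the intended dynamic a bit more concrete, here is a per-turn sketch of how infatuation might evolve. Every constant and name in it is a made-up placeholder; only the overall shape – a crush starts when your image matches their ideal, grows over time, spills into like and trust, may convert into love, and shatters on a major betrayal – comes from the description above.

```python
# Sketch of per-turn crush/infatuation dynamics. All numbers are placeholders.

def update_infatuation(char, image_matches_ideal, worst_recent_modifier):
    """char is a dict with 'infatuation', 'like', 'trust' and 'love' meters."""
    if char["infatuation"] == 0 and image_matches_ideal:
        char["infatuation"] = 10            # a crush begins
    elif char["infatuation"] > 0:
        if image_matches_ideal:
            char["infatuation"] += 5        # grows while the image fits the ideal
        char["like"] += 3                   # spills over into the other meters
        char["trust"] += 2
        if char["like"] > 70 and char["trust"] > 70:
            char["love"] += 5               # may convert into love
        if worst_recent_modifier <= -20:    # one major betrayal...
            char["infatuation"] = 0         # ...shatters the crush entirely
            char["like"] -= 15

alice = {"infatuation": 0, "like": 50, "trust": 40, "love": 0}
update_infatuation(alice, image_matches_ideal=True, worst_recent_modifier=0)
```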
A sufficiently high love makes people willing to do almost anything for you.
Besides the things that were already mentioned, love may be affected by:
- The amount of trust and like that the person has towards you.
- You committing yourself to a romantic relationship with them.
- Whether you have any other lovers (some characters are fine with sharing you, others are not).
- You choosing to genuinely fall in love with them as well – they’ll sense this and receive a considerable boost to their love meter, but you will also end up permanently prevented from ever taking certain negative actions towards them.
- Various random events.
Every modifier to any of the relationship meters is associated with some source, which you may try to influence. For example, suppose that you promise your friend to do something by a certain time, and then fail to do so. This will produce a negative modifier to their trust rating. You can then try to talk to them and apologize, and possibly offer some compensation for the misdeed. If you play your cards right, you may be able to erase the penalty, or even turn it into a bonus.
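One natural way to implement this is to store each meter not as a single number but as a list of modifiers tagged with their sources, so that a later action such as an accepted apology can find and rewrite the specific modifier it relates to. A minimal sketch, with made-up names and values:

```python
# Sketch: a relationship meter as a list of sourced modifiers, so that
# later actions can target a specific modifier. All values are made up.

class RelationshipMeter:
    def __init__(self):
        self.modifiers = []                    # list of [source, value] pairs

    def add(self, source, value):
        self.modifiers.append([source, value])

    def total(self):
        return sum(value for _, value in self.modifiers)

    def adjust(self, source, new_value):
        """E.g. an accepted apology can erase a penalty or flip it into a bonus."""
        for mod in self.modifiers:
            if mod[0] == source:
                mod[1] = new_value

trust = RelationshipMeter()
trust.add("broken promise", -15)
trust.adjust("broken promise", +5)             # the apology went really well
print(trust.total())                           # 5
```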
Your public image. I have already mentioned your public image a few times, when I mentioned that your perceived personality traits influence the extent to which others like you, as well as the chance of someone developing a crush on you.
Basically, there’s a set of different personality traits that any character may or may not have. Some, like being kind or being cruel, are mutually exclusive. In the beginning, you don’t know anyone’s personality traits, nor does anyone know yours. If you act kindly, you will develop an image as a kind person, and if you act in a cruel way you’ll develop a reputation as a cruel person. And of course, not everything depends directly on your actions – your rivals may try to spread negative rumors about you.
I haven’t yet determined how exactly knowledge about your actions spreads. I don’t want all of the player’s actions to magically become common knowledge the moment the action is made, but neither do I want to keep track of every separate piece of information. And I do want to offer the player the option to try to keep some of their doings secret. My current compromise would be to keep track of who knows what for as long as the number of people who knew a particular piece of information remained under a certain limit. So if the limit was 6, the game would keep track of any piece of information that was known to at most six people: once the seventh person found out, it would be assumed to have become common knowledge.
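In code, that compromise might look roughly like this – the limit of six is just the example figure from above, and everything else is a placeholder:

```python
# Sketch of the "track who knows what, up to a limit" compromise.
# The limit of 6 is the example figure from the text; the rest is made up.

COMMON_KNOWLEDGE_LIMIT = 6

class Fact:
    def __init__(self, description):
        self.description = description
        self.known_by = set()            # tracked only while this stays small
        self.common_knowledge = False

    def learn(self, character):
        if self.common_knowledge:
            return
        self.known_by.add(character)
        if len(self.known_by) > COMMON_KNOWLEDGE_LIMIT:
            # The seventh person found out: stop tracking individuals.
            self.common_knowledge = True
            self.known_by.clear()

    def knows(self, character):
        return self.common_knowledge or character in self.known_by

secret = Fact("you sabotaged the chemistry project")
for person in ["Alice", "Bob", "Charlie"]:
    secret.learn(person)
print(secret.knows("Alice"), secret.knows("Dana"))   # True False
```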
People who like you are less inclined to pass on negative rumors about you, and more inclined to pass on positive ones. You can also try to spread negative rumors about your rivals yourself – but doing so risks giving you a reputation as a lying gossip.
Personal skills. The best way of acting in any particular situation depends on what you know of the people involved. If you cultivate a certain kind of image, who will end up liking you more as a result of it, and who will end up liking you less? Is the concession you are offering to your offended friend sufficient to make them forgive you? How much should you trust your new lover or friend? Who might be spreading those nasty rumors about you?
There are several ways by which you could find this out. First, empathy skills can be learned by study: if you have a high relevant empathy skill, you can make a good guess of what someone is like, or how they might react in some situation, just based on your skill. Learning the skills takes considerable time that could be spent on other things, however.
Also, many personality traits correlate with each other, and various actions correlate with different personality traits. It’s almost as if their connections formed a… wait for it… Bayesian network! But in the beginning of the game, you only have a very rough idea of what the structure of the network is like, or what the relevant conditional probabilities are. So you have to figure this out yourself.
Ideally – and I’m not yet sure of how well I’ll get this to work – the game will give you a tool which you can use to build up your own model of what the underlying network structure might be like: a Bayesian network that you can try to play around with. In the beginning, nearly all of the nodes in the model will be unconnected with each other, though some of the most obvious nodes start out connected. For instance, you’ll know that cruel people are more likely to insult others than kind people are, though your estimate of the exact conditional probability is likely to be off.
If you suspect that two nodes in the underlying graph might be linked, you can try joining them together in your model and test how well your model now fits your observations. You can adjust both the conditional probability tables and linkages in order to create a model that most closely represents what you have seen, with the game automatically offering suggestions for the probabilities based on your experiences so far. (Just be careful to avoid overfitting.)
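The “automatically offering suggestions for the probabilities” part doesn’t need anything fancy to prototype: frequency counts over the observations you have logged, plus a score for how well a candidate link explains them, already capture the idea. A toy sketch, assuming binary traits and entirely made-up observations:

```python
# Toy sketch of suggesting a conditional probability from logged observations
# and scoring how well a candidate link explains them. Assumes binary traits;
# all data and names are made up.
import math

observations = [
    {"cruel": True,  "insults_others": True},
    {"cruel": True,  "insults_others": True},
    {"cruel": False, "insults_others": True},
    {"cruel": False, "insults_others": False},
    {"cruel": False, "insults_others": False},
]

def suggest_probability(obs, effect, cause, cause_value):
    """Estimate P(effect | cause = cause_value) by frequency, with a small
    pseudocount so sparse data doesn't collapse to 0 or 1 (less overfitting)."""
    relevant = [o for o in obs if o[cause] == cause_value]
    hits = sum(o[effect] for o in relevant)
    return (hits + 1) / (len(relevant) + 2)

def log_likelihood(obs, effect, cause):
    """How well does a 'cause -> effect' link explain the data? Higher is better."""
    total = 0.0
    for o in obs:
        p = suggest_probability(obs, effect, cause, o[cause])
        total += math.log(p if o[effect] else 1 - p)
    return total

print(suggest_probability(observations, "insults_others", "cruel", True))   # 0.75
print(log_likelihood(observations, "insults_others", "cruel"))
```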
In addition, you can study various knowledge skills. Knowing psychology, for instance, may reveal some of the links in the network and their associated probabilities to you with certainty, which will automatically update your model.
Time. Everything you do takes time, and you can only be in a single place at once. Your lovers and friends will expect you to hang out with them regularly, studying skills takes time, and your plans may be completely interrupted by the friend of yours who has a mental breakdown and needs you there to comfort them RIGHT NOW. (But if you have a sufficiently good reason why you can’t make it, maybe they’ll understand. Perhaps.)
What choices do you need to make? The above discussion should have suggested some choices that you might run across in the game, such as:
- Who do you make friends with?
- Do you try to build a small group of strong friendships, or a large group of weak friendships?
- Who do you trust with information that could hurt you?
- How much time do you spend on learning the various skills?
- How much time do you spend on chasing down the source of various rumors?
- Do you want to spread any nasty rumors yourself?
- Do you get romantically involved with someone?
- If so, who will it be? One lover or many?
- Do you risk doing things that would damage your reputation if people found out?
- Do you generally side with the people who are your closest allies, or the ones who have actually been wronged?
And others. As you might notice, the choice you’ll want to make in most of these depends on what you’ve figured out about others… which should help us fulfill our goal of making the player genuinely interested in how to figure these things out, via the formalisms that the game employs and attempts to teach.
Feel free to suggest more in the comments!
Bayesian academy game: Constraints
My work on my Master’s thesis and the Bayesian academy game was temporarily interrupted when I had to focus on finishing the work I had piled up for another course. Now I’m slowly trying to get back into the flow, so here’s a post on some of the things that I’ll be trying to keep in mind while creating the game, and which should help shape its design. This post is still somewhat abstract: more concrete material should hopefully follow in the next post.
I’ve also started on the actual programming, putting together what should hopefully be a flexible Bayes net framework. (I looked at some existing libraries, but it seemed easier to just put together my own implementation.) Mostly model level stuff so far, though, so not much to show yet. I’ll probably put a copy up on Github or somewhere eventually, once it stops looking terribly embarrassing.
Constraints
“Design is the successive application of constraints until only a unique product is left.” — Donald Norman, The Design of Everyday Things
Having some constraints is nice. They help narrow down the space of possible designs and give an idea for what you need to do. So let’s establish some constraints for our game design.
Has to teach useful things about Bayesian networks / probabilistic reasoning. Kinda obvious, but still worth stating explicitly. At the same time, the game has to be fun even if you had no intrinsic interest in Bayesian networks. Balancing these two gets tough, since you can attract gamers by various story elements and interesting game mechanics, but then these elements might easily become ones that do nothing to teach the actual subject matter. My general approach for solving this is to build the mechanics so that they are all tied to various pieces of hidden information, with Bayes nets being your tool for uncovering that hidden information. More on that later.
Every choice should be interesting. Even if you manage to figure out exactly who knows what, that shouldn’t dictate the right option to pick, just as in XCOM, where correctly figuring out the probability of killing your opponent given a certain tactic isn’t enough to dictate the choice of the best tactic. Rather, correctly using the skills that the game tries to teach you should be something that better informs you of the benefits and drawbacks of the different choices you make. If there’s only one obvious option to pick, that’s not interesting.
Of course, eventually somebody will figure out some optimal strategy for playing the game which dictates an ideal decision for various situations, but that’s fine. If we design the game right, figuring out the perfect strategy should take a while.
Must not be ruined by “save-scumming”. In my previous article, I gave an example of a choice involving Bayesian networks: you are given several options for how to act, with the best option depending on exactly which character knew what. Now, what does your average video game player do when they need to make a choice based on incomplete information, and they find out the true state of affairs soon afterwards? They reload an earlier save and choose differently, that’s what.
Constantly reloading an earlier save in order to pick a better decision isn’t much fun, and really ruins the point of the whole game. But if that’s the optimal way to play, people will feel the temptation to do it, even if they know that it will ruin their fun. I would like to give people the freedom to play the game the way they like the most, but I’m worried that in this case, too much freedom would make the experience unfun.
Other games that rely on randomness try to combat save-scumming by saving the random seed, so reloading an earlier save doesn’t change the outcome of any die rolls. We could try doing the opposite: re-randomizing any hidden states once the player reloads the game. But this could turn out to be tricky. After all, if we have a large network of characters whose states we keep track of and who influence each other, setting up the game in such a way that their states can be re-randomized at will seems rather challenging. So I’m inclined to go for the roguelike approach, with only a single, constantly updating save slot per game, with no ability to go back to earlier saves. That gives us a new constraint:
Each individual game should be short. A constantly updating save means that if you lose the game, you have to start all over. This is fine with a game like FTL: Faster Than Light, where a single pass through the game only takes a couple of hours. It would be less fine in a huge epic game that took 50 hours to beat. Based on my own gut feel, I’m guessing that FTL’s couple of hours is quite close to the sweet spot – long enough that a single game has depth, short enough that your reaction to failure will be to start a new game rather than to quit the whole thing in disgust. So I will be aiming for a game that can be finished in, say, three hours.
This constraint is probably also a good one since my natural inclination would be to make a huge epic sprawling game. Better to go for something easier to handle at first – a limited-duration game is also easier to extensively playtest and debug. I can always expand this after finishing my thesis, making the game so far the first chapter or whatever.
One could also allow reloading but restrict where you are allowed to save. Recently I have been playing Desktop Dungeons, where your kingdom grows as you go out on quests that take about 10 minutes each. You can’t save your progress while on a quest, but you can save it between quests, and the temptation to try just one more quest makes for an addictive experience in much the same way that FTL’s short game length does. But I’m not sure whether my current design allows for any natural “units” that could be made into no-save regions in the same way as the quests in Desktop Dungeons.
Another consequence of having constant saves is that
Failures should be interesting.
Always make failure entertaining. In fact, failure should be more entertaining than success. Success is its own reward. If you succeed at solving a puzzle, you feel good, you barely need any confirmation from the game. Just a simple ding will be satisfying. But if you’re struggling, if the game sort of is in on the joke with you and laughs about it, and gives you a funny animation, it’s actually sort of saying, yeah we want — it’s okay to be here. You can have fun while you’re here. You’ll get there eventually — you’ll get to the other end eventually, but while you’re here you can enjoy yourself; you can relax. And that’s really something I learned sort of from doing the game, but that’s really become an ongoing principle of ours in design is to make the failure — keep failure interesting. — Scot Osterweil
Even a short game doesn’t help if a couple of bad decisions mean that you’re stuck and would be better off restarting rather than playing to the end. FTL solves this by quickly killing you off when you start to do badly: this is interesting, since you can almost always trace your failure back to some specific decision you made, and can restart with the intention of not making that mistake again. I’m not sure how well this would work in this game, though I have been thinking about splitting it into a number of substages, each with a limit on the number of actions that you are allowed to take. (First stage: get elected as the Student Council representative of the first-year students by winning them over before the elections. Etc.) Failing to achieve the objective before the time limit would lead to a game over, thus killing you off quickly once you started doing too badly to recover. Possibly each stage could also be made into a single no-save unit, allowing saves between them.
Another option would be to take the Princess Maker approach: in these games, you are trying to raise your daughter to become a princess, but even if you fail in that goal, there are a variety of other interesting endings based on the choices that you made in the game. A third option would be to ensure that even a sub-optimal choice opens some new paths through the game – but it could be difficult to ensure that you never ended up in a hopelessly unwinnable state.
Still not entirely sure of the exact form of this constraint and the previous one: will just have to try things out and see what seems to be the most fun.
The next update should either be about some concrete game mechanics (finally!) and the ways that relationships are handled in this game, or about the way that the information is represented to the player.
How to make it easier to receive constructive criticism?
Typically finding out about the flaws in something that we did feels bad because we realize that our work was worse than we thought, so receiving the criticism feels like ending up in a worse state than we were in before. One way to avoid this feeling would be to reflect on the fact that the work was already flawed before we found out about it, so the criticism was a net improvement, allowing us to fix the flaws and create a better work.
But thinking about this once we’ve already received the criticism rarely helps that much, at least in my experience. It’s better to consciously remind yourself, before receiving the criticism, that your work is always going to have room for improvement, and that it is certain to have plenty of flaws you’re ignorant of. That way, your starting mental state will be “damn, this has all of these flaws that I’m ignorant about”, and ending up in the post-criticism state, where some of the flaws have been pointed out, will feel like a net improvement.
Another approach would be to take the criticism as evidence of the fact that you’re working in a field where success is actually worth being proud about. Consider: if anyone could produce a perfect work in your field, would it be noteworthy that you had achieved the same thing that anyone else could also achieve? Not really. And if you could easily produce a work that was perfect and had no particular flaws worth criticizing, that would also be evidence of your field not being particularly deep, and of your success not being very impressive. So if you get lots of constructive criticism, that’s evidence that your field *is* at least somewhat deep, and that success in it is non-trivial. Which means that you should be happy, since you have plenty of room to grow and develop your talents – and you’ve just been given some of the tools you need in order to do so.
Towards meaningfully gamifying Bayesian Networks, or, just what can you do with them
In my previous article, I argued that educational games could be good if they implemented their educational content in a meaningful way. This means making the player actually use the educational material to predict the possible consequences of different choices within the game, in such a manner that the choices will have both short- and long-term consequences. More specifically, I talked about my MSc thesis project, which would attempt to use these ideas to construct a learning game about Bayesian networks.
However, I said little about how exactly one would do this, or how I was intending to do it. This article will take a preliminary stab at answering that question – though of course, game design on paper only goes so far, and I will soon have to stop theorizing and start implementing prototypes to see if they’re any fun. So the final design might be something completely unlike what I’m outlining here. But advance planning should be valuable nonetheless, and perhaps this will allow others to chime in and offer suggestions or criticism.
So, let’s get started. In order to figure out the best way of meaningfully integrating something into a game, we need to know what it is and what we can do with it. What are Bayesian networks?
Bayesian networks in a nutshell
Formally, a Bayesian network is a directed acyclic graph describing a joint probability distribution over n random variables. But that definition isn’t very helpful in answering our question, so let’s try again with less jargon: a Bayesian network is a way of reasoning about situations where you would like to know a thing X, which you can’t observe directly, but you can instead observe a thing Y, which is somehow connected to X.
For example, suppose that you want to know whether or not Alice is a good person, and you believe that being a good person means caring about others. You can’t read her thoughts, so you can’t directly determine whether or not she cares about others. But you can observe the way she talks about other people, and the way that she acts towards them, and whether she keeps up with those behaviors even when it’s inconvenient for her. Combining that information will give you a pretty good estimate of whether or not she does genuinely care about others.
A Bayesian network expresses this intuitive idea in terms of probabilities: if Alice does care about people, then there’s some probability that she will exhibit these behaviors, and some probability that she does not. Likewise, if she doesn’t care about them, there’s still some other probability that she will exhibit these behaviors – she might be selfish, but still want to appear as caring, so that others would like her more. If you have an idea of what these different probabilities are like, then you can observe her actions and ask the reverse question: given these actions, does Alice care about other people (or, what is the probability of her caring)?
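In the simplest case, with a single observed behavior, that reverse question is just Bayes’ rule. A toy numerical sketch, with all of the probabilities invented purely for illustration:

```python
# Toy "does Alice care about others?" question with one observed behavior.
# All of these probabilities are invented for illustration.

p_cares = 0.5            # prior: how likely a random person is to genuinely care
p_kind_if_cares = 0.8    # P(acts kindly | cares)
p_kind_if_not = 0.3      # P(acts kindly | doesn't care, but wants to look caring)

def p_cares_given_kind_act():
    numerator = p_kind_if_cares * p_cares
    evidence = numerator + p_kind_if_not * (1 - p_cares)
    return numerator / evidence

print(round(p_cares_given_kind_act(), 2))   # 0.73: one kind act shifts the estimate up
```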
Towards gamifying Bayesian networks
Now how does one turn this into a game?
Perhaps the simplest game that we can use as an example is Rock-Paper-Scissors. You’re playing somebody you don’t know, it’s the second round, and on the first round you both played Rock. From this fact, you need to guess what they intend to play next, and use that information to pick a move that will beat them. The observable behavior is the last round’s move, and the thing that you’re trying to predict is your opponent’s next move. (The pedants out there will correctly point out that the actual network you’d want to build for predicting RPS moves would look quite a bit different and more complicated than this, but let’s disregard that for now.) Similarly, many games involve trying to guess what your opponent is likely to do next – based on the state of the game, your previous actions, and what you know of your opponent – and then choosing a move that most effectively counters theirs. This is particularly obvious with games such as Poker or Diplomacy.
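As a toy sketch of the RPS case: once you have estimated a conditional probability table for “next move given last move”, choosing your own move is just a matter of weighing each option by its chance of winning. The table below is entirely made up:

```python
# Toy sketch of countering an RPS opponent based on their last move.
# The conditional probability table is entirely made up.

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

# P(opponent's next move | opponent's last move)
NEXT_GIVEN_LAST = {
    "rock":     {"rock": 0.5, "paper": 0.3, "scissors": 0.2},
    "paper":    {"rock": 0.2, "paper": 0.4, "scissors": 0.4},
    "scissors": {"rock": 0.4, "paper": 0.3, "scissors": 0.3},
}

def best_response(last_move):
    """Pick the move with the highest probability of winning the next round."""
    prediction = NEXT_GIVEN_LAST[last_move]

    def win_probability(my_move):
        beaten = next(m for m, b in BEATS.items() if b == my_move)
        return prediction[beaten]        # I win exactly when they play what I beat

    return max(BEATS.values(), key=win_probability)

print(best_response("rock"))   # they seem to favor rock again, so play paper
```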
In the previous article, we had a very simple probabilistic network involving Alice, Bob, and Charlie. From the fact that Charlie knew a certain piece of information, we could say something about the probability of Alice or Bob also knowing it; and from the added fact that Bob knew it, we could further refine our estimate of the probability that Alice knew it too.
Suppose that the piece of knowledge in question was some dark secret of yours that you didn’t want anyone else to know. By revealing that secret, anyone could hurt you.
Now let’s give you some options for dealing with the situation. First, you could preemptively reveal your dark secret to people. That would hurt most people’s opinion of you, but you could put the best possible spin on it, so you wouldn’t be hurt as badly as you would be if somebody else revealed it.
Second, you could try to obtain blackmail material on the people who you thought knew, in order to stop them from revealing your secret. But obtaining that material could be risky, might make them resent you if they had never intended to reveal the secret in the first place, or might even encourage them to dig up your secret if they didn’t actually know about it already. Or if you failed to figure out everyone who knew, you might end up blackmailing only some of them, leaving you helpless against the ones you missed.
Third, you might elect to just ignore the whole situation, and hope that nobody who found out had a grudge towards you. This wouldn’t have the costs involved with the previous two alternatives, but you would risk becoming the target of blackmail or of your secret being revealed. Furthermore, from now on you would have to be extra careful about not annoying the people who you thought knew.
Fourth, you could try to improve your relationship with the people you thought knew, in the hopes that this would improve the odds of them remaining quiet. This could be a wasted effort if they were already friendly with you or hadn’t actually heard about the secret, but on the other hand, their friendship could still be useful in some other situation. Those possible future benefits might cause you to pick this option even if you weren’t entirely sure it was necessary.
Now to choose between those options. To predict the consequences of your choice, and thus the best choice, you would want to know 1) who exactly knew about your secret and 2) what they currently thought of you. As for 1, the example in our previous article was about just that – figuring out the probability of somebody knowing your secret, given some information about who else knew. As for 2, you can’t directly observe someone’s opinion of you, but you can observe the kinds of people they seem to hang out with, the way they act towards you, and so on… making this, too, a perfect example of something that you could use probabilistic reasoning to figure out.
You may also notice that your choices here have both short- and long-term consequences. Choosing to ignore the situation means that you’ll need to be extra careful about pissing off the people who you think know, for example. That buys you the option of focusing on something more urgent now, at the cost of narrowing your options later on. One could also imagine different overall strategies: you could try to always be as honest and open as possible about all your secrets, so that nobody could ever blackmail you. Or you could try to obtain blackmail material on everyone and keep everybody terrified of ever pissing you off, and so on. This is starting to sound like a game! (Also like your stereotypical high school, which is nice, since that was our chosen setting.)
Still, so far we have only described an isolated choice, and some consequences that might follow from it. That’s still a long way from having specified a game. After all, we haven’t answered questions like: what exactly are the player’s goals? What are the things that they could be doing, besides blackmailing or befriending these people? The player clearly wants other people to like them – so what goal does it serve to be liked? What can they do with that?
We still need to specify the “big picture” of the game in more detail. More about that in the next article.
Interlude: figuring out the learning objectives
Ideally, the designer of an edugame should have some specific learning objectives in mind, build a game around those, and then use their knowledge about the learning objectives to come up with tests that measure whether the students have learned those things. I, too, would ideally state some such learning goals and then use them to guide the way I designed the “big picture” of the game.
Now my plans for this game have one problem (if you can call it that): the more I think of it, the more it seems like many natural ways of structuring the overall game would teach various ways of applying the basic concepts of the math in question and seeing its implications. For example, it could teach the player the importance of considering various alternative interpretations about an event, instead of jumping to the most obvious-seeming conclusion, or it might effectively demonstrate the way that “echo chambers” of similar-minded people who mostly only talk with each other are likely to reach distorted conclusions. And while those are obviously valuable lessons, they are not necessarily very compatible with my initial plan of “take some existing exam on Bayes networks and figure out whether the players have learned anything by having them do the exam”. The lessons that I described primarily teach critical thinking skills that take advantage of math skills, rather than primarily teaching math skills. And critical thinking skills are notoriously hard to measure.
Also, a game which teaches those lessons does not necessarily need to teach very many different concepts relating to Bayes nets – rather, it might give a thorough understanding of a small number of basic concepts and a few somewhat more advanced ones. A typical math course, in contrast, attempts to cover a much larger set of concepts.
Of course, this isn’t necessarily a bad thing either – a firm grounding in the basic concepts may get people interested in learning the more advanced ones on their own. For example, David Shaffer’s Escher’s World was a workshop in which players became computer-aided designers working with various geometric shapes. While the game did not directly teach much that might come up on a math test, it allowed the students to see why various geometric concepts might be interesting and useful to know. As a result, the students’ grades improved in both their mathematics and art classes, as the game had made the content of those classes meaningful in a new way. The content of math classes no longer felt like just abstract nonsense, but rather like something whose relevance to interesting things was obvious.
Again, our previous article mentioned that choices in a game become meaningful when they have both short- and long-term consequences. Ideally, we would want to make the actions of the players meaningful even beyond the game – show them that the mathematics of probabilistic reasoning is interesting because it’s very much the kind of reasoning that we use all the time in our daily lives.
So I will leave the exact specification of the learning objectives until later – but I do know that one important design objective will be to make the game enjoyable and compelling enough that people will be motivated to play it voluntarily, even in their spare time. And so that, ideally, they would find in it a meaning that went beyond it being just a game.
For the next article, I’ll try to come up with a big-picture description of the game that seems fun – and if you people think that it does, I’ll finally get to work on the prototyping and see how much of that I can actually implement in an enjoyable way.