# Towards meaningfully gamifying Bayesian Networks, or, just what can you do with them

In my previous article, I argued that educational games could be good if they implemented their educational content in a meaningful way. This means making the player actually *use* the educational material to predict the possible consequences of different choices within the game, in such a manner that the choices will have both short- and long-term consequences. More specifically, I talked about my MSc thesis project, which would attempt to use these ideas to construct a learning game about Bayesian networks.

However, I said little about how exactly one would do this, or how I was intending to do it. This article will take a preliminary stab at answering that question – though of course, game design on paper only goes so far, and I will soon have to stop theorizing and start implementing prototypes to see whether they're any fun. So the final design might be something completely unlike what I'm outlining here. But advance planning should be valuable nonetheless, and perhaps this will allow others to chime in and offer suggestions or criticism.

So, let’s get started. In order to figure out the best way of meaningfully integrating something into a game, we need to know what it is and what we can do with it. What *are* Bayesian networks?

**Bayesian networks in a nutshell**

Formally, a Bayesian network is *a directed acyclic graph describing a joint probability distribution over n random variables*. But that definition isn't very helpful in answering our question, so let's try again with less jargon: a Bayesian network is a way of reasoning about situations where you would like to know a thing X, which you can't observe directly, but you can instead observe a thing Y, which is somehow connected to X.

For example, suppose that you want to know whether or not Alice is a good person, and you believe that being a good person means caring about others. You can’t read her thoughts, so you can’t directly determine whether or not she cares about others. But you can observe the way she talks about other people, and the way that she acts towards them, and whether she keeps up with those behaviors even when it’s inconvenient for her. Combining that information will give you a pretty good estimate of whether or not she does genuinely care about others.

A Bayesian network expresses this intuitive idea in terms of probabilities: if Alice does care about people, then there's some probability that she will exhibit these behaviors, and some probability that she will not. Likewise, if she doesn't care about them, there's still some other probability that she will exhibit these behaviors – she might be selfish, but still want to appear caring, so that others would like her more. If you have an idea of what these different probabilities are, then you can observe her actions and ask the reverse question: given these actions, does Alice care about other people (or, what is the probability of her caring)?
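As a concrete (if drastically simplified) illustration, here's how that reverse question can be computed with Bayes' rule. All the numbers below are invented purely for the sake of the example:

```python
# A minimal sketch of the Alice example, with made-up numbers.
# We want P(cares | behaves kindly), given assumed probabilities.

p_cares = 0.5                 # prior: before observing anything
p_kind_given_cares = 0.9      # a caring Alice usually acts kindly
p_kind_given_not = 0.3        # a selfish Alice sometimes fakes it

# Bayes' rule: P(cares | kind) = P(kind | cares) * P(cares) / P(kind)
p_kind = (p_kind_given_cares * p_cares
          + p_kind_given_not * (1 - p_cares))
p_cares_given_kind = p_kind_given_cares * p_cares / p_kind

print(round(p_cares_given_kind, 3))  # 0.75
```

With these assumed numbers, observing kind behavior raises our estimate of Alice caring from 0.5 to 0.75; different assumptions would of course give different answers, but the mechanics of the update stay the same.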

**Towards gamifying Bayesian networks**

Now how does one turn this into a game?

Perhaps the simplest game that we can use as an example is Rock-Paper-Scissors. You're playing somebody you don't know, it's the second round, and in the first round you both played Rock. From this fact, you need to guess what he intends to play next, and use that information to pick a move that will beat his. The observable behavior is the last round's move, and the thing that you're trying to predict is your opponent's next move. (The pedants out there will correctly point out that an actual network for predicting RPS moves would look quite a bit different and more complicated than this, but let's disregard that for now.) Similarly, many games involve trying to guess what your opponent is likely to do next – based on the state of the game, your previous actions, and what you know of your opponent – and then choosing a move that most effectively counters theirs. This is particularly obvious in games such as Poker or Diplomacy.
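To make the guessing concrete, here's a deliberately naive sketch of the kind of prediction involved. It simply assumes the opponent tends to repeat their most frequent past move – which, as those same pedants will note, is far cruder than a real RPS model:

```python
# Naive opponent model: predict that the opponent will repeat
# their historically most frequent move, then counter it.
from collections import Counter

def best_counter(history):
    """Pick the move that beats the opponent's most frequent move.
    (A deliberately naive model, for illustration only.)"""
    beats = {"rock": "paper", "paper": "scissors", "scissors": "rock"}
    predicted = Counter(history).most_common(1)[0][0]
    return beats[predicted]

print(best_counter(["rock"]))  # paper
```

After the single observed Rock, this model counters with Paper; a serious model would instead track patterns like alternation and counter-countering.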

In the previous article, we had a very simple probabilistic network involving Alice, Bob, and Charlie. From the fact that Charlie knew something, we drew conclusions about the probability that Alice or Bob also knew about it; and from the added fact that Bob did know about it, we could refine our estimate of the probability that Alice knew as well.
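The previous article's network isn't reproduced here, but the flavor of the inference can be sketched in code. The structure below – a hidden "the rumor is circulating" node, with each person independently hearing the rumor if it circulates – and every probability in it are invented for illustration:

```python
# Sketch of the Alice/Bob/Charlie flavor of inference, with a
# hidden "rumor is circulating" node and invented probabilities.

p_rumor = 0.2            # prior that the rumor is circulating at all
p_know_if_rumor = 0.7    # chance each person heard it, if it circulates
p_know_if_not = 0.05     # chance they found out some other way

def p_rumor_given(observed_knowers):
    """Posterior P(rumor) after observing some number of knowers."""
    like_rumor = p_rumor
    like_not = 1 - p_rumor
    for _ in range(observed_knowers):
        like_rumor *= p_know_if_rumor
        like_not *= p_know_if_not
    return like_rumor / (like_rumor + like_not)

def p_alice_knows(observed_knowers):
    """P(Alice knows), given how many other people were seen to know."""
    pr = p_rumor_given(observed_knowers)
    return pr * p_know_if_rumor + (1 - pr) * p_know_if_not

print(round(p_alice_knows(1), 3))  # after seeing that Charlie knows
print(round(p_alice_knows(2), 3))  # after also seeing that Bob knows
```

Observing that Charlie knows raises the estimate for Alice, and observing that Bob also knows raises it further – exactly the kind of refinement described above.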

Suppose that the piece of knowledge in question was some dark secret of yours that you didn’t want anyone else to know. By revealing that secret, anyone could hurt you.

Now let’s give you some options for dealing with the situation. First, you could preemptively reveal your dark secret to people. That would hurt most people’s opinion about you, but you could put the best possible spin on it, so you wouldn’t be as badly hurt as you would if somebody else revealed it.

Second, you could try to obtain blackmail material on the people who you thought knew, in order to stop them from revealing your secret. But obtaining that material could be risky, might make them resent you if they had never intended to reveal the secret in the first place, or might encourage them to dig up your secret if they didn't actually know about it already. And if you failed to identify everyone who knew, you might end up blackmailing only some of them, leaving you helpless against the ones you missed.

Third, you might elect to just ignore the whole situation, and hope that nobody who found out had a grudge towards you. This wouldn’t have the costs involved with the previous two alternatives, but you would risk becoming the target of blackmail or of your secret being revealed. Furthermore, from now on you would have to be extra careful about not annoying the people who you thought knew.

Fourth, you could try to improve your relationship with the people you thought knew, in the hopes that this would improve the odds of them remaining quiet. This could be a wasted effort if they were already friendly with you or hadn’t actually heard about the secret, but on the other hand, their friendship could still be useful in some other situation. Those possible future benefits might cause you to pick this option even if you weren’t entirely sure it was necessary.

Now to choose between those options. To predict the consequences of your choice, and thus pick the best one, you would want to know **1)** who exactly knew about your secret and **2)** what they currently thought of you. As for **1**, the example in our previous article was about just that – figuring out the probability of somebody knowing your secret, given some information about who else knew. As for **2**, you can't directly observe someone's opinion of you, but you can observe the kinds of people they seem to hang out with, the way they act towards you, and so on… making this, too, a perfect example of something that you could use probabilistic reasoning to figure out.
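For instance, inferring someone's opinion of you from several observed behaviors could look like the following naive-Bayes sketch. The observations and all the probabilities are, again, made up purely for illustration:

```python
# Sketch: inferring whether someone likes you from several
# independent observations (naive Bayes with invented numbers).

p_likes = 0.5  # prior, before observing anything

# (description, P(observed | likes you), P(observed | doesn't))
evidence = [
    ("greets you warmly",           0.8, 0.3),
    ("hangs out with your friends", 0.6, 0.4),
    ("defended you in an argument", 0.7, 0.1),
]

# Multiply the likelihood of each observation under each hypothesis,
# then normalize to get the posterior probability.
num = p_likes
den = 1 - p_likes
for _, p_if_likes, p_if_not in evidence:
    num *= p_if_likes
    den *= p_if_not
posterior = num / (num + den)

print(round(posterior, 3))
```

Each individual behavior is only weak evidence, but combined they can push the estimate quite far from the 50/50 prior – which is precisely what makes this kind of reasoning worth teaching.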

You may also notice that your choices here have both short- and long-term consequences. Choosing to ignore the situation means that you'll wish to be extra careful about pissing off the people who you think know, for example. That buys you the option of focusing on something more urgent now, at the cost of narrowing your options later on. One could also imagine different overall strategies: you could try to always be as honest and open as possible about all your secrets, so that nobody could ever blackmail you. Or you could try to obtain blackmail material on everyone and keep everybody terrified of ever pissing you off, and so on. This is starting to sound like a game! (Also like your stereotypical high school, which is nice, since that was our chosen setting.)

Still, so far we have only described an isolated choice, and some consequences that might follow from it. That’s still a long way from having specified a game. After all, we haven’t answered questions like: what exactly are the player’s goals? What are the things that they could be doing, besides blackmailing or befriending these people? The player clearly wants other people to like them – so what goal does it serve to be liked? What can they do with that?

We still need to specify the “big picture” of the game in more detail. More about that in the next article.

**Interlude: figuring out the learning objectives**

Ideally, the designer of an edugame should have some specific learning objectives in mind, build a game around those, and then use their knowledge about the learning objectives to come up with tests that measure whether the students have learned those things. I, too, would ideally state some such learning goals and then use them to guide the way I designed the “big picture” of the game.

Now my plans for this game have one problem (if you can call it that): the more I think of it, the more it seems like many natural ways of structuring the overall game would teach various ways of *applying* the basic concepts of the math in question and seeing its implications. For example, it could teach the player the importance of considering various alternative interpretations about an event, instead of jumping to the most obvious-seeming conclusion, or it might effectively demonstrate the way that “echo chambers” of similar-minded people who mostly only talk with each other are likely to reach distorted conclusions. And while those are obviously valuable lessons, they are not necessarily very compatible with my initial plan of “take some existing exam on Bayes networks and figure out whether the players have learned anything by having them do the exam”. The lessons that I described primarily teach critical thinking skills that take advantage of math skills, rather than primarily teaching math skills. And critical thinking skills are notoriously hard to measure.

Also, a game which teaches those lessons does not necessarily need to teach very many different concepts relating to Bayes nets – rather, it might give a thorough understanding of a small number of basic concepts and a few somewhat more advanced ones. A typical math course, in contrast, attempts to cover a much larger set of concepts.

Of course, this isn’t necessarily a bad thing either – a firm grounding in the basic concepts may get people interested in learning the more advanced ones on their own. For example, David Shaffer’s Escher’s World was a workshop in which players became computer-aided designers working with various geometric shapes. While the game did not *directly* teach much that might come up on a math test, it allowed the students to see why various geometric concepts might be interesting and useful to know. As a result, the students’ grades improved in both their mathematics and art classes, as the game had made the content of those classes meaningful in a new way. The content of math classes no longer felt like just abstract nonsense, but rather something whose relevance to interesting concepts felt obvious.

Again, our previous article mentioned that choices in a game become meaningful if they have both short- and long-term consequences. Ideally, we would want to make the actions of the players meaningful even *beyond* the game – show them that the mathematics of probabilistic reasoning is interesting because it’s very much the kind of reasoning that we use all the time in our daily lives.

So I will leave the exact specification of the learning objectives until later – but I do know that one important design objective will be to make the game enjoyable and compelling enough that people will be motivated to play it voluntarily, even in their spare time. And so that, ideally, they would find in it meaning that went beyond it being just a game.

For the next article, I’ll try to come up with a big-picture description of the game that seems fun – and if you people think that it does, I’ll finally get to work on the prototyping and see how much of that I can actually implement in an enjoyable way.

*Next post in series: Bayesian academy game – constraints.*

### 5 comments

### Trackbacks/Pingbacks

- Teaching Bayesian networks by means of social scheming, or, why edugames don’t have to suck | Kaj Sotala - […] try to make the math concerning Bayesian networks relevant and interesting in my game, while a later post will…

This is fascinating! I always was interested in knowing the ‘why’ of what I was learning, especially in relationship to my future career goals.

Hi Kaj, what are the prerequisites to play your game? I would think a good command of probabilities, conditional probabilities, probability distributions, etc. should be in place. Which group of players do you target? I would advise you, to maximize the success of what you are doing, to shrink your learning objectives to very specific areas and define clearly the game mechanics and representation of the mathematical concepts at stake. I’m afraid you jump too fast to the big picture of the game. To my mind, you are missing the most important design phase in a learning game, meaning designing the concepts and their interactions, physically. That’s what your players are going to manipulate and learn from. One of the big challenges is to make your players understand how your controls work. JB

Hi Jean,

thanks again for your comment!

Those are good questions, and excellent points. You’re right that the physical appearance of the concepts will require a lot more work than I have here so far. I haven’t written much about them because, while writing about the big picture design helps me think about it better, I don’t expect writing to help with the physical design – for that, I need to draw pictures and make playable prototypes that I can test on people to see how intuitive they actually are.

I was *hoping* that I could come up with a design that would allow the game to be played even without much of an understanding about probability, and have some preliminary ideas for how the game could teach them as one went along, but they’re again rather hard to express in writing. I’ll hopefully be showing them soon – your comment made me realize that I should increase their priority.

You’re probably right that this post jumps too quickly into the big picture of a big, complicated game – I started thinking about that myself after posting it. I should, as you say, start by focusing on some specific and clearly-defined part of the whole thing, and only then build on top of that.

Actually, I just happened to come up with a simple puzzle game mechanic that would be useful in teaching the flow of probability in a network, but which wouldn’t require any prerequisite knowledge and which might be moderately fun even on its own, without any story… I should start putting together a prototype of that tomorrow. :)

Interesting article!

I’ve also been thinking of using game mechanics to teach statistical reasoning and Bayesian Networks; my metaphors of choice are usually science or criminal investigations.

This article made me think that one important component could be getting players to manipulate probabilities, models or hypotheses as “first-order objects”, i.e. explicitly in the game, rather than only in the player’s mind as a way of solving the game’s puzzles.

Now a game where you manipulate probability distributions sounds pretty dry, BUT, how about putting it in terms of *stories*? The player could acquire a list of “stories” (represented as mini wordless scenes/comic strips, for example), and to each associate a rating of how likely it is. That would fit well with your social gossip theme (as it would in a criminal investigation).

And if the stories can be categorized by situation (“What happened in the bathroom”, “What Amy told Mike during the camping trip”, “Why Rachel suddenly went home”, etc.) (with several stories for each), then each situation has different mutually equivalent stories.

So, first levels could teach the player to think in terms of competing hypotheses between which probability mass is distributed (though he only rates them as “likely”, “unlikely”, etc. – but the game displays little bars next to each story that sum up to one); and advanced levels introduce the concept of “situations” such that each situation has a set of mutually exclusive stories, and there are links between situations and the whole thing becomes like a Bayesian Network of sorts.
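In code, that normalization step might look something like this (the rating categories and weights are just placeholders I made up):

```python
# Map the player's qualitative ratings of competing stories into
# a probability distribution whose "bars" sum to one.
# Rating categories and weights are invented placeholders.

RATING_WEIGHTS = {"very unlikely": 1, "unlikely": 2,
                  "likely": 4, "very likely": 8}

def story_bars(ratings):
    """Map {story: rating} to {story: probability}."""
    weights = {s: RATING_WEIGHTS[r] for s, r in ratings.items()}
    total = sum(weights.values())
    return {s: w / total for s, w in weights.items()}

bars = story_bars({
    "Amy told Mike a secret": "likely",
    "Amy and Mike argued": "unlikely",
    "Nothing happened": "very unlikely",
})
print(bars)  # the displayed bars always sum to 1.0
```

The player never sees the weights directly – only the little bars – but the game quietly enforces that the competing stories for a situation share one unit of probability mass.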

Anyway, I don’t know where you’re going exactly with this but that would be one way of doing it :)

“First, you could preemptively reveal your dark secret to people. That would hurt most people’s opinion about you, but you could put the best possible spin on it, so you wouldn’t be as badly hurt as you would if somebody else revealed it.”

… or more formally, P(admit secret | secret has innocuous explanation) > P(admit secret | secret has shameful explanation), but there’s also a third node (“someone else is about to reveal the secret anyway”) that comes into play … that situation can very well be modeled by a Bayesian Network.
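Running the reverse inference on that model – with invented numbers, and ignoring the third node for simplicity – gives something like:

```python
# Sketch of the comment's point with invented numbers: someone
# who voluntarily admits a secret probably has an innocuous
# version of it. (The "about to be revealed anyway" node is
# omitted here to keep the example minimal.)

p_innocuous = 0.5             # prior on the secret being innocuous
p_admit_given_innocuous = 0.6
p_admit_given_shameful = 0.1

p_admit = (p_admit_given_innocuous * p_innocuous
           + p_admit_given_shameful * (1 - p_innocuous))
p_innocuous_given_admit = (p_admit_given_innocuous * p_innocuous
                           / p_admit)

print(round(p_innocuous_given_admit, 3))  # 0.857
```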