# Teaching Bayesian networks by means of social scheming, or, why edugames don’t have to suck

As a part of my Master’s thesis in Computer Science, I am designing a game which seeks to teach its players a subfield of math known as Bayesian networks, hopefully in a fun and enjoyable way. This post explains some of the basic design and educational philosophy behind the game, and will hopefully also convince you that educational games don’t have to suck.

I will start by discussing a simple-but-rather-abstract math problem and looking at some of the ways people have tried to make math problems more interesting. Then I will consider some of the reasons why the most commonly used ways of making them interesting are failures, look at what makes the problems in entertainment games interesting and the problems in most edutainment games uninteresting, and finally talk about how to actually make a good educational game. I’ll also talk a bit about how I’ll try to make the math concerning Bayesian networks relevant and interesting in my game, while a later post will elaborate more on the design of the game.

So as an example of the kinds of things that I’d like my game to teach, here’s an early graph from the Coursera course on Probabilistic Graphical Models. For somewhat mathy people, it doesn’t represent anything complicated: there’s a deterministic OR gate Y that takes as input two binary random variables, X1 and X2. For non-mathy people, that sentence was probably just some incomprehensible gibberish. (If you’re one of those people, don’t worry, just keep reading.)

I’m not going to go through the whole example here, but the idea is to explain why observing the state of X1 might sometimes give you information about X2. (If the following makes your eyes glaze over, again don’t worry – you can just skip ahead to the next paragraph.) Briefly, if you know that Y is true, then either **X1** or **X2** or **both** must be true, and in two of those three possible cases, **X2 is true**. But if you find out that **X1 is true**, then that eliminates the case where **X1 was false and X2 was true**, so the probability of **X2 being true** goes down from ⅔ to ½. In the course, the explanation of this simple case is then used to build up an understanding of more complicated probabilistic networks and of how observing one variable may give you information about other variables.
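For readers who find code clearer than prose, the two numbers above can be checked by brute-force enumeration. This is a sketch of my own, not something from the course, and it assumes independent 50/50 priors on X1 and X2 (the priors aren’t stated in the example):

```python
from itertools import product

# All four (X1, X2) assignments, taken as equally likely
# (assuming independent 50/50 priors, which the example doesn't specify).
states = list(product([False, True], repeat=2))

def p_x2_true(consistent_with):
    """P(X2 = True) among the equally likely states matching the evidence."""
    matching = [s for s in states if consistent_with(s)]
    return sum(1 for x1, x2 in matching if x2) / len(matching)

# Y is a deterministic OR gate: Y = X1 or X2.
p_after_y = p_x2_true(lambda s: s[0] or s[1])                    # observe Y = true
p_after_y_and_x1 = p_x2_true(lambda s: (s[0] or s[1]) and s[0])  # also observe X1 = true

print(p_after_y)         # 0.666...
print(p_after_y_and_x1)  # 0.5
```

Observing Y leaves three equally likely cases, two of which have X2 true; additionally observing X1 leaves only two cases, one of which has X2 true. This is the “explaining away” pattern that the rest of the course builds on.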

For mathy types the full explanation is probably relatively easy to follow, at least if you put in a little bit of thought. But for someone who is unfamiliar with math – or worse, scared of it – it might not be. So the question is, how do we convert that explanation into a form that is somewhat easier to understand?

The traditional school math approach would be to convert the abstract explanation into a concrete “real-life” case. Let’s say that the variables are people. X1 becomes Alice, X2 becomes Bob, and Y becomes Charlie. A variable being true means that the person in question has heard about some piece of information – say, that Lord Deathfist the Terrible is on a rampage again. If one takes the lines to mean “Alice tells Charlie stuff and Bob tells Charlie stuff (but Alice and Bob don’t talk with each other)”, the “OR gate” thing becomes relatively easy to understand. It means simply that Charlie knows about the rampage if either Alice or Bob, or both, know about it and have told Charlie.

Now we could try to explain it in common-sense terms like this: “Suppose that **Charlie knows** about Lord Deathfist. That means that either **Alice**, or **Bob**, or **both**, know about it, and have told him. Out of those three possibilities, **Alice knows** about it in two possible cases (the one where **only Alice knows**, and the one where **Alice and Bob both know**), and there’s one possible case where she does not know (the scenario where **only Bob knows**), so the chance of Alice knowing is ⅔. But if we are also told that **Bob knows it**, that only allows for the possibilities where 1) **only Bob knows** and 2) **both Alice and Bob know**, so that’s one possibility out of two for Alice knowing it, and the chance of Alice knowing goes down from ⅔ to ½.”

This is… slightly better. Maybe. We still have several problems. For one, it’s still easy to lose track of what exactly the possible scenarios are, though we might be able to solve that particular problem by adding animated illustrations and stuff.

But still, the explanation takes some effort to follow, and you still need to be motivated to do so. And if we merely dress up this abstract math problem with some imaginary context, that still doesn’t make it particularly interesting. Who the heck are these people, and why should anyone care about what they know? If we are not already familiar with them, “Alice” and “Bob” aren’t much better than X1 or X2 – they are still equally meaningless.

We could try to fix that by picking names we were already familiar with – like Y was Luke Skywalker, and X1 and X2 were Han Solo and Princess Leia, and Luke would know about the Empire’s new secret plan if either Han or Leia had also found out about it, and we wanted to know the chance of all of them already knowing this important piece of information.

But we’d still be quite aware of the fact that the whole Star Wars gimmick was just coating for something we weren’t really interested in. Not to mention that the whole problem is more than a little artificial – if Leia tells Luke, why wouldn’t Luke just tell Han? And even if we understood the explanation, we couldn’t do anything interesting with it. Like, knowing the logic wouldn’t allow us to blow up the Death Star, or anything.

So some games try to provide that kind of significance for the task: work through an arithmetic problem, and you get to see the Death Star blown up as a reward. But while this might make it somewhat more motivating to play, we’d rather play an action game where we could spend all of our time shooting at the Death Star and not waste any time doing arithmetic problems. Additionally, the action game would also allow us to shoot at other things, like TIE Fighters, and that would be more fun.

Another way of putting this would be that we don’t actually find the math task itself meaningful. It’s artificial and disconnected from the things that we are actually interested in.

Let’s take a moment to contrast this to the way that one uses math in commercial entertainment games. If I’m playing XCOM: Enemy Unknown, for instance, I might see that my enemy has five hit points, while my grenade does three points of damage. Calculating the difference, I see that throwing the grenade would leave my enemy with two hit points, enough to shoot back on his turn. Fortunately I have another guy nearby, and he hasn’t used his grenade either – but I also know that there are at least six more enemies left on the battlefield. Do I really want to use both of my remaining grenades, just to take out one enemy? Maybe I should just try shooting him… both of my guys have a 50% chance to hit him with their guns, and they’d do an average of three points of damage on a hit, so that’s an expected three points of damage if both take the shot, or – calculating it differently – a 25% chance of killing the alien dead… which aren’t very good odds, so maybe one guy should throw the grenade and the other take the shot, and since grenades magically never miss in this game, I’d then have a 50% chance of killing the alien.
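That chain of mental arithmetic is simple enough to write down explicitly. Here is a sketch using the numbers from the example above (the variable names are mine, and actual XCOM damage rolls are more complicated than a flat average):

```python
ENEMY_HP = 5
HIT_CHANCE = 0.5      # each soldier's chance to hit with a gun
SHOT_DAMAGE = 3       # average damage of a hit
GRENADE_DAMAGE = 3    # grenades never miss in the game

# Option 1: both soldiers shoot.
expected_damage = 2 * HIT_CHANCE * SHOT_DAMAGE   # expected total damage
# A kill needs 5+ damage, so both shots (3 + 3 = 6) must land.
p_kill_two_shots = HIT_CHANCE * HIT_CHANCE

# Option 2: one soldier throws the grenade, the other shoots.
# The grenade's 3 damage is guaranteed, so a single hit (3 + 3 = 6) finishes the job.
p_kill_grenade_and_shot = HIT_CHANCE

print(expected_damage)           # 3.0
print(p_kill_two_shots)          # 0.25
print(p_kill_grenade_and_shot)   # 0.5
```

Which is exactly the comparison the player runs in their head: three expected damage but only a 25% kill chance from two shots, versus a 50% kill chance from a grenade plus a shot.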

So as I play XCOM, I keep running arithmetic calculations through my head. But unlike in the “solve five arithmetic problems, then you get to see the Death Star blowing up” example, these calculations aren’t just glued-on. In fact, while playing, I never actually think that I am solving a set of arithmetic and probability problems in order to be rewarded with the sight of the enemies dying and my soldiers surviving. I think that I’m out killing aliens and doing my best to keep my guys alive. (How many of you had realized that XCOM is an educational game that, among other things, drills you on arithmetic problems? Well, it is!)

This can be a bad thing in some senses – it means that I’m engaging in “stealth learning”, learning a skill without realizing it. Not realizing it means that I can’t consciously reflect and introspect on my learning, and I may have difficulties transferring the skill to other domains, since my unawareness of what I’m doing makes it harder to notice if I happen to run across a problem that employs the same principles but looks superficially different. But it does also mean that the calculations are very much meaningful, and that I don’t view them as an unnecessary annoyance that I’d rather skip and move on to the good parts.

The game scholars Katie Salen and Eric Zimmerman write:

*Another component of meaningful play requires that the relationship between action and outcome is integrated into the larger context of the game. This means that an action a player takes not only has immediate significance in the game, but also affects the play experience at a later point in the game. Chess is a deep and meaningful game because the delicate opening moves directly result in the complex trajectories of the middle game – and the middle game grows into the spare and powerful encounters of the end game. Any action taken at one moment will affect possible actions at later moments.*

The calculations in XCOM are meaningful because they let me predict the immediate consequences of my choices. Those immediate consequences will influence the outcome of the rest of the current battle, and the end result of the battle will influence my options when I return to the strategic layer of the game, where my choices will influence how well I will do in future battles…

In contrast, the arithmetic exercises in a simple edutainment game aren’t very meaningful: maybe they let you see the Death Star blowing up, but you don’t care about the end result of the calculations themselves, because they don’t inform any choices that you need to make. Of course, there can still be other ways by which the arithmetic “game” becomes meaningful – maybe you get scored based on how quickly you solve the problems, and then you end up wanting to maximize your score, either in competition with yourself or others. Meaning can also emerge from the way that the game fits into a broader social context, as the competition example shows. But of course, that still doesn’t make most edutainment games very fun.

So if we wish people to actually be motivated to solve problems relating to Bayesian networks, we need to embed them in a context that makes them meaningful. In principle, we could just make them into multistage puzzles: DragonBox is fantastic in the way that it turns algebraic equations into puzzles, where you need to make the right choices in the early steps of the problem in order to solve it in the most efficient manner. But while that is good for teaching abstract mathematics, it doesn’t teach much about how to apply the math. And at least I personally find games with a story to be more compelling than pure puzzle games – and also more fun to design.

So I’ll want to design a game in which our original question of “does Bob also know about this” becomes meaningful, because that knowledge will inform our choices, and because there will be long-term consequences that are either beneficial or detrimental, depending on whether or not we correctly predicted the probability of Bob knowing something.

My preliminary design for such a game is set in an academy that’s inspired both by Harry Potter’s Hogwarts (to be more specific, the Hogwarts in the fanfic Harry Potter and the Methods of Rationality) and Revolutionary Girl Utena’s Ohtori Academy. The students of the academy, who study both physical combat and magic, are a scheming lot, ruled over by an iron-fisted student council made up of seven members… And figuring out things like exactly which student is cheating on their partner, and who else knows about it, may turn out to be crucial for a first-year student seeking to place herself and her chosen allies in control of the council. If only she can find out which students are trustworthy enough to become her allies… misreading the evidence about someone’s nature may come to cost her dearly later.

In my next post, I will elaborate more on the preliminary design of the game, and on the ways in which it will teach its players the mathematics of Bayesian networks.

### Comments


You make a good analysis of the meaning of concrete, particular actions in a game: if they are connected to the bigger narrative meaning of the game and have consequences later on, then the actions make sense (Salen & Zimmerman). If you have the game world, why wouldn’t you make particular actions meaningful and related to it (well, maybe because you are just gluing game elements on top of a traditional learning problem or activity)?

I’d like to take up one issue though: the question of transfer is an empirical one, or at least needs more references than you gave. You seem to assume transfer would be harder in “stealth learning” (or implicit abstract learning from concrete, contextual, and situational experiences), because (again assuming) contextualized learning wouldn’t encourage reflection. But does reflection in fact encourage transfer or not? This sounds really basic and my bet would also be yes, but I think we’d be better off with some (reference to empirical) data here ;)

From Abstractions to Transfer

In general, I’d think the question of abstraction and transfer is interesting here – especially after recently going through a multidisciplinary study project with people who were not familiar with analytic and logical thinking or with clearly defined concepts (i.e. scientific thinking), http://mainiosocial.com

My question is, are you better off when learning from i) a contextualized and concrete situational experience/example or from ii) an abstract and generalized “academic” (logical, mathematical, etc.) representation? What’s “better off”, you might ask? I’d define it in terms of transfer, that is, in how many future contexts the learner is able to apply this thinking to understand situations and make (rationality-approaching) decisions. We can now ask what we actually mean by “abstract” here, in line with the previous functional-style thinking and definition.

What is Concrete/Abstract Depends on an Individual Learning History

I’d like to think, applying my work-in-progress bachelor’s thesis, that “what is abstract” depends on one’s personal learning history. That is, for example, if you have invested your time heavily in logic calculations, you have in fact made logic very concrete to yourself. Yet learners differ. Some might not see the practical relevance of logic in everyday life, whereas for others it might be easy to apply.

The additional benefit in this style of “abstract learning context” is that the abstract here includes an explicit assumption of “being applicable to numerous contexts”. Whether that happens also depends on the learner.

Knowledge and Skill Transferability Rather than Abstractions

The crucial metric here is transferability, the cross-situational future applicability of the learned material for the individual. On the general level, I consider the source representation secondary; but we surely find differences between learners in what kind of material is easiest to digest and apply.

Summarizing my educated adhocery above

1. Transfer as goal: The goal of teaching should be applicable and transferable skills and knowledge.

2. Learning material preferences: People differ in what type of learning material and context is optimal for them. Don’t forget this obvious difference when designing systems.

3. “Transfer skill” differences: People might differ in their ability to transfer skills from the learning material to the application context. Some might connect this to abstract thinking, but I like to think of it as a separate thing, as with the preceding point.

Note: I don’t talk about abstraction or abstract thinking here, as I’d rather use the subjective definition above. I find it more beneficial to talk functionally about “transfer ability”, that is, about how to elaborately apply what was learned. Abstraction is a technical, intermediate, and subjective term, whereas transfer is concerned with in-situ abilities and behavior, which is more concrete and interesting for a broader audience.

Thanks for the in-depth comment!

I saw the “transfer requires reflection” claim made in several of the papers on serious/educational games that I’ve read so far; however, I have not yet had the opportunity to follow the references that they gave in support of this claim. I’ll get back to you on this as soon as I’ve had the chance to properly read up on the transfer aspect, I agree that it’s an extremely important one.

I enjoyed this immensely. Can’t wait for the next post!

Very interested in the solutions you find. I don’t think I have all that many problems understanding the concepts you want to teach with this, but, having only learned them on my own, I’ve never quite understood their application. As such, teaching them to others is a feat beyond me.

Hi Kaj, allow me to offer some input from my end. It’s not really based on experience, but rather on intuition and some thoughts I have, so it’s quite subjective…

1) I think that it’s always easier, and maybe more fun, for people to design the world of the game: it is easy to imagine and easy to communicate. Unfortunately, I think this is the cause of most failures in the game industry, and certainly the biggest pitfall in the serious game industry. I would start with a very mechanical design of the actions the player can take, whatever the environment/world/story is. These actions/processes must be interesting in an intrinsic manner. Usually they will be, because they are based on logic. And our brain loves logic.

2) With regard to transfer, my theory, maybe similar to Vygotsky’s, is that abstract knowledge (meaning concept-based and not physical per se) is something that is built through language acquisition and social interaction. I don’t think languages can be built without references to an experience, or without a metaphor linking the new concepts to the reality of the learner. Serious games can create the experience and the visualization of these concepts in a defined situation. Once this reality is conceived and natural for the learner, social interaction and verbalizing will help create consciousness of these new territories in the map of the brain, and hence they will be transferable to other areas/contexts.

br

jean

Hi Jean,

thanks for taking the time to comment!

I agree with your points, though I would also emphasize the role that an appropriate world/setting can play in generating ideas for new mechanics. You’re absolutely right that it’s the mechanics that should be the focus, but at least for me, having some kind of a setting made it much easier to figure out how to actually make a game built around the raw math. (Though perhaps I’ll eventually figure out a way to do this as a purely abstract puzzle game as well, once I know Bayes nets well enough – I’m actually also learning them as I go along.) But yeah, the mechanics are still ultimately what makes or breaks a game like this.

I also agree with your point about metaphor – maybe you’ve already read it, but there’s a great book called *Where Mathematics Comes From* that covers some of the ways by which mathematical concepts are learned, either as metaphors from everyday experience or, in the case of more advanced concepts, as metaphors that are built on top of the existing ones. E.g. the number line as a metaphor from our intuitive sense of place, and multiplication by negative numbers as a metaphorical blend that takes the metaphors of “multiplication by positive numbers” and “negative numbers” and combines them to produce something that we couldn’t have gotten by just a straightforward extension of either metaphor alone.