Bayesian Knowledge Tracing, or BKT, is an artificial intelligence
algorithm that infers a student's current knowledge state in order to
predict whether they have learned a skill.
There are four parameters involved in BKT (each with a value between 0 and 1, inclusive):
P(known):
the probability that the student already knew a skill.
P(will learn):
the probability that the student will learn a skill on the next practice opportunity.
P(slip):
the probability that the student will answer incorrectly despite knowing a skill.
P(guess):
the probability that the student will answer correctly despite not knowing a skill.
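As a quick reference, the four parameters can be captured in a small container that enforces the 0-to-1 range. This is an illustrative sketch with made-up names, not the simulator's actual code:

```python
from dataclasses import dataclass

@dataclass
class BKTParams:
    p_known: float       # P(known): student already knew the skill
    p_will_learn: float  # P(will learn): learns on the next practice opportunity
    p_slip: float        # P(slip): answers wrong despite knowing the skill
    p_guess: float       # P(guess): answers right despite not knowing the skill

    def __post_init__(self):
        # Every BKT parameter is a probability between 0 and 1, inclusive.
        for name, value in vars(self).items():
            if not 0.0 <= value <= 1.0:
                raise ValueError(f"{name} must be between 0 and 1, got {value}")
```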
Every time the student answers a question, our BKT algorithm calculates
P(learned), the probability that the student has learned the skill
they are working on, using the values of these parameters.
The formula for P(learned) depends on whether their response was correct.
First, we compute the conditional probability that the student
learned the skill previously (at time n-1), based on whether
they answered the current question (at time n) correctly or incorrectly.
Then, we use the result of our first calculation to compute the
conditional probability that the student has learned the skill now
(at time n).
For the next question, we use P(learned) as the new value of P(known).
Once P(known) ≥ 0.95, we say that the student has achieved
mastery.
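The two-step update described above can be sketched in Python. Parameter names and the example numbers are my own; this is an illustrative sketch of the standard BKT equations, not the simulator's actual code:

```python
def bkt_update(p_known, p_will_learn, p_slip, p_guess, correct):
    """One BKT step: condition on the observed answer, then apply learning."""
    if correct:
        # P(knew the skill at time n-1 | correct answer at time n)
        cond = (p_known * (1 - p_slip)) / (
            p_known * (1 - p_slip) + (1 - p_known) * p_guess)
    else:
        # P(knew the skill at time n-1 | wrong answer at time n)
        cond = (p_known * p_slip) / (
            p_known * p_slip + (1 - p_known) * (1 - p_guess))
    # P(learned at time n): either knew it already,
    # or just learned it on this practice opportunity.
    return cond + (1 - cond) * p_will_learn

# P(learned) becomes the new P(known) on the next question;
# mastery is declared once it reaches 0.95.
p_known = 0.4
for answer in (True, True):
    p_known = bkt_update(p_known, p_will_learn=0.3, p_slip=0.1,
                         p_guess=0.2, correct=answer)
print(p_known >= 0.95)  # True: two correct answers reach mastery here
```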
Now that you’ve had a chance to learn about the four parameters,
here’s a tool that can help you visualize the relationships between them
and explore how each one influences the probability calculations underlying BKT.
We'll be modeling the system with a hot air balloon, using its height as a
measure of mastery.
Let's begin!
BKT Balloon Simulator
Drag/click the sliders on the right to adjust the parameters and help the balloon rise.
Explore on your own or use the following prompts as a guide.
Find two different parameter combinations that will result in mastery if the student
answers correctly. Hint: make P(learned if correct) ≥ 0.95
and press "answer correct" to verify your results.
Recall that P(learned) becomes the new value for P(known).
Explore what adjustments you have to make depending on P(known).
Try a higher P(known) and a lower P(known) and compare your results.
What happens to P(learned if correct) and P(learned if wrong)
if P(guess) and/or P(slip) exceeds 0.5?
When P(slip) + P(guess) exceeds 1, the evidence inverts: P(learned) becomes
higher after an incorrect answer than after a correct one.
This is why P(guess) is typically capped at 0.3 and P(slip) at 0.1.
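The inversion is easy to check numerically. Below is a minimal sketch of the BKT update (my own parameter names), with P(will learn) set to 0 so only the evidence step is visible:

```python
def bkt_update(p_known, p_will_learn, p_slip, p_guess, correct):
    # Two-step BKT update: condition on the answer, then apply learning.
    if correct:
        cond = (p_known * (1 - p_slip)) / (
            p_known * (1 - p_slip) + (1 - p_known) * p_guess)
    else:
        cond = (p_known * p_slip) / (
            p_known * p_slip + (1 - p_known) * (1 - p_guess))
    return cond + (1 - cond) * p_will_learn

# With P(slip) = P(guess) = 0.6, a correct answer is now *less* likely from a
# student who knows the skill (1 - slip = 0.4) than from one who doesn't
# (guess = 0.6), so a wrong answer raises our belief and a right one lowers it.
after_wrong = bkt_update(0.5, 0.0, 0.6, 0.6, correct=False)   # 0.6
after_correct = bkt_update(0.5, 0.0, 0.6, 0.6, correct=True)  # 0.4
print(after_wrong > after_correct)  # True
```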
[3]
Does it make sense why the balloon flips now?
What happens to P(learned) if the student answers incorrectly?
Hint: compare P(learned if wrong) with P(known)
(i.e., your previous P(learned)).
Generally, P(learned) increases after every answer because BKT
treats each response, wrong or right, as a learning opportunity that
brings the student one step closer to mastery.
However, can you think of a situation where P(learned)
might decrease with a wrong answer? Feel free to try modeling
different scenarios with the sliders using your knowledge
of the BKT parameters.
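One such scenario can be checked directly: if P(slip) is very low, a wrong answer is strong evidence that the skill is not known, and when P(will learn) is also small, the learning step cannot make up the difference, so P(learned) drops below the previous P(known). A sketch with illustrative numbers:

```python
def bkt_update(p_known, p_will_learn, p_slip, p_guess, correct):
    # Two-step BKT update: condition on the answer, then apply learning.
    if correct:
        cond = (p_known * (1 - p_slip)) / (
            p_known * (1 - p_slip) + (1 - p_known) * p_guess)
    else:
        cond = (p_known * p_slip) / (
            p_known * p_slip + (1 - p_known) * (1 - p_guess))
    return cond + (1 - cond) * p_will_learn

# Low slip (wrong answers rarely come from students who know the skill)
# plus low learning rate: one wrong answer pulls P(learned) below P(known).
p_known = 0.5
p_after_wrong = bkt_update(p_known, p_will_learn=0.05, p_slip=0.05,
                           p_guess=0.2, correct=False)
print(p_after_wrong < p_known)  # True: P(learned) decreased
```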
Keep exploring! Can you find any other flaws or interesting characteristics of BKT?
P(known):
Controls the balloon's height.
P(will learn):
Changes the storminess and number of clouds to represent learning difficulty.
P(slip):
Certain values will flip the balloon.
P(guess):
Certain values will flip the balloon.
Remember, P(learned) depends on whether the student answers correctly
and this probability becomes the new value for P(known).
Simulate student responses by choosing an answer button below.
P(learned if correct):
P(learned if wrong):
Hint: hover over the parameters to see how they impact the simulator.