Something he hasn't said yet, but HMMs can be thought of as a special case of Bayes nets. OK, now he's said it. This special-case relationship took us a while to notice; for years the two were studied separately.

In the underground robot example, Sebastian mentions "noise in [the] motors." That might sound odd, because we think of noise as something that applies to a signal, and motors don't carry a signal. He's using "noise" metaphorically, by analogy with noise in sensors.

*Errors* in motors occur when wheels slip, or when bumps in the floor nudge the robot off its path.

Markov chain questions: note how applying the law of total probability at each state is far more efficient than treating the whole sequence as the outcome. That is, we could say that the probability of rain on day 3 is the sum over all possible ways to get rain on day 3:

\[
P(R_0R_1R_2R_3) + P(R_0S_1R_2R_3) + P(R_0R_1S_2R_3) + P(R_0S_1S_2R_3)
\]

The nice thing about the Markov model is that, to compute the current day's probability, we only need the previous day's distribution and the transition probabilities. This reduces the number of multiplications from exponential in the length of the sequence to linear.
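A minimal sketch of the comparison, assuming illustrative transition probabilities (not the ones from the lecture): the brute-force version sums the joint probability of every sequence ending in rain, while the forward version just propagates the previous day's distribution.

```python
# Brute-force enumeration vs. forward (law of total probability) computation
# of P(rain on day N) in a two-state Markov chain. The numbers below are
# assumed for illustration, not taken from the lecture.
from itertools import product

STATES = ("R", "S")
P_INIT = {"R": 1.0, "S": 0.0}  # assume day 0 is known to be rainy
P_TRANS = {("R", "R"): 0.6, ("R", "S"): 0.4,
           ("S", "R"): 0.2, ("S", "S"): 0.8}

def brute_force(day):
    """Sum the joint probability of every day-0..day sequence ending in R."""
    total = 0.0
    for seq in product(STATES, repeat=day + 1):
        if seq[-1] != "R":
            continue
        p = P_INIT[seq[0]]
        for prev, cur in zip(seq, seq[1:]):
            p *= P_TRANS[(prev, cur)]
        total += p
    return total

def forward(day):
    """Only the previous day's distribution matters: O(day) multiplications."""
    belief = dict(P_INIT)
    for _ in range(day):
        belief = {cur: sum(belief[prev] * P_TRANS[(prev, cur)]
                           for prev in STATES)
                  for cur in STATES}
    return belief["R"]
```

The brute-force sum grows as 2^N sequences, while the forward pass does a constant amount of work per day; both give the same answer.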

Nice discussion of particle filters. I'm going to need to update what I do in Robotics next semester. Overall, this lecture was pretty tight.
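For reference, one step of a particle filter can be sketched in a few lines. This is a hypothetical 1-D example (the lecture's robot example is 2-D), and the landmark position and noise parameters are assumptions of mine, not values from the lecture.

```python
# One step of a minimal 1-D particle filter: motion update, measurement
# weighting, and resampling. All numeric parameters are illustrative.
import math
import random

def particle_filter_step(particles, move, measurement, landmark=5.0,
                         motion_noise=0.1, sense_noise=0.5):
    # 1. Motion update: move every particle, adding motor noise.
    moved = [p + move + random.gauss(0.0, motion_noise) for p in particles]
    # 2. Weight each particle by how well its predicted distance to the
    #    landmark explains the measured distance (Gaussian likelihood).
    def likelihood(p):
        expected = abs(landmark - p)
        return math.exp(-0.5 * ((expected - measurement) / sense_noise) ** 2)
    weights = [likelihood(p) for p in moved]
    # 3. Resample with probability proportional to weight.
    return random.choices(moved, weights=weights, k=len(particles))
```

After each step the surviving particles cluster where the measurement is most consistent with the motion, which is the whole trick: the particle cloud approximates the posterior without ever writing it down.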
