Monday, December 29, 2008

using abstract information to update concretely-learned rules

For more than two years, I regularly drank the tap water in lab. Then I learned from the building management that it is not potable because in some other labs the same water system is hooked up to equipment that can potentially create backflow (although there are guards against this, it's considered a significant risk). So I stopped drinking the water.

I probably drank the water a thousand times, and I never suffered any ill effects. Just going by my experience, I'd have to infer a very small probability that continuing to drink the water would cause harm (leaving aside the possibility of a cumulative health hazard, e.g. an increased cancer risk). If I were a cave-man king drinking from a mountain stream, I would long ago have stopped asking my servant to test the water for me.
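To put a rough number on that intuition, here is a minimal sketch using Laplace's rule of succession; the thousand-drink count is the ballpark figure above, and the uniform prior on the per-drink risk is an assumption for illustration, not anything I actually computed at the time:

```python
# Estimate the per-drink risk from experience alone using Laplace's
# rule of succession: (harmful + 1) / (trials + 2).
# ~1000 trials is the ballpark figure from the post; the uniform prior
# on the per-drink risk is an illustrative assumption.
safe_drinks = 1000
harmful_drinks = 0

estimated_risk = (harmful_drinks + 1) / (safe_drinks + harmful_drinks + 2)
print(f"estimated per-drink risk ~ {estimated_risk:.4f}")  # ~0.001
```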

But I incorporated the abstract knowledge of an invisible threat into my behavior. It's as if I were watching someone play Russian roulette. They pull the trigger fifty times and no bang. I shouldn't be too worried about squeezing one off myself. On the other hand, if someone shows me a schematic of the gun, and on the inside there's a digital counter ticking down from fifty-one, I will be more worried.

In particular, the abstract information gave me a new Markov model of the world. Before I heard from building management, my belief was that contaminants in the water would either always or never be present. Therefore, each incident of not being poisoned reinforced my belief that I would never be poisoned. However, the new information revealed that there was a small but nonzero chance of toxic backflow on any given drink, a risk that no run of safe drinks could rule out.
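A rough sketch of the contrast (the priors and the hazard number below are made up for illustration, not anything building management told me): under the old two-hypothesis model, every safe drink shrinks the predicted risk of the next one toward zero, whereas under the per-drink hazard model the predicted risk stays put no matter how long the safe streak runs.

```python
# Contrast the two world models described above. All numbers
# (priors, per-drink hazard) are illustrative assumptions.

def old_model_risk(safe_drinks, prior_always=0.5, p_harm_if_always=0.9):
    """Old model: contaminants are either always or never present.
    Each safe drink is evidence against 'always', so the predicted
    risk of harm on the next drink shrinks toward zero."""
    p_safe_if_always = 1.0 - p_harm_if_always
    num = prior_always * p_safe_if_always ** safe_drinks
    den = num + (1.0 - prior_always)  # 'never' predicts safety with certainty
    posterior_always = num / den
    return posterior_always * p_harm_if_always

def new_model_risk(safe_drinks, hazard=0.001):
    """New model: each drink independently carries a small chance of
    backflow. The safe-drink count is deliberately ignored: past
    experience barely changes the per-drink hazard."""
    return hazard

for n in (0, 10, 100, 1000):
    print(f"after {n:4d} safe drinks: "
          f"old model {old_model_risk(n):.2e}, new model {new_model_risk(n):.2e}")
```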

I wonder how this kind of learning fits into our learning processes in general. It's always seemed mysterious to me how we can extract deep Markov structure from essentially binary environmental feedback.
