Sunday, February 01, 2015


i said before that we can get stuck in situations where we aren't making progress, and that awareness of this allows the energy basins to get washed out by other dynamics.

going on my limited understanding of predictive coding ideas, maybe another part of this is relaxing the precision on "top-down" beliefs.

gilbert says experience is like a vast necker cube with an almost unlimited number of interpretations - a nasty inverse problem, maybe with no real ground truth in some ways.

does having a stronger "top-down" influence -- i.e., asserting an interpretation for this necker cube -- make it easier to get stuck? does it create less-escapable energy basins in the overall space?

with zen, the idea of being "present" or "in the moment" could conceivably be linked to relaxing the precision on these "top-down" beliefs. i suppose this would raise the effective dimensionality of the overall problem, right? without the shrinkage priors, there are a lot more effective degrees of freedom in the overall system. this makes the problem much, much harder to "solve" (if that's even meaningful), but maybe it also creates more saddle points in the energy landscape - and makes it easier to re-organize interpretations.
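to make the basin picture concrete, here's a toy sketch - everything in it is made up for illustration, not a claim about brains: a 1-d double-well "sensory" energy stands in for the two necker-cube interpretations (minima near x = -1 and x = +1), and a quadratic top-down prior with an adjustable weight plays the role of precision. with high precision on the prior, only the asserted interpretation survives as a basin; relax the precision and the alternative basin reappears:

```python
def energy(x, prior_mean=1.0, prior_precision=1.0):
    """Toy energy for a bistable percept: a double-well 'sensory' term
    with interpretations near x = -1 and x = +1, plus a quadratic
    top-down prior pulling toward prior_mean. prior_precision stands
    in for the precision of the top-down belief."""
    likelihood = (x * x - 1.0) ** 2
    prior = prior_precision * (x - prior_mean) ** 2
    return likelihood + prior

def count_minima(prior_precision, n=4001, lo=-2.0, hi=2.0):
    """Count strict local minima of the energy on a dense grid."""
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    E = [energy(x, prior_precision=prior_precision) for x in xs]
    return sum(1 for i in range(1, n - 1)
               if E[i] < E[i - 1] and E[i] < E[i + 1])

# strong top-down belief: the alternative interpretation's basin is erased
print(count_minima(1.0))   # -> 1
# relaxed precision: both interpretations are reachable basins again
print(count_minima(0.1))   # -> 2
```

nothing deep here - just that a quadratic prior added to a double well can literally delete one of the wells once its weight is large enough, which is one cartoon version of "asserting an interpretation makes it easier to get stuck".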

(PS - the therapeutic effects of hallucinogens, ECT, and maybe even standard antidepressants could maybe be linked to similar effects. it also sounds parallel to norepinephrine (high phasic NE, or low tonic NE) signalling uncertainty and maybe encouraging a re-organization of "higher level" beliefs -- switching models or tasks, or maybe allowing a new level to emerge in the hierarchy by signalling that the extra model complexity is justified.)

so in zen, if you keep bringing attention back to sensory inputs, you facilitate this re-organization... but i feel like there's still a crucial piece missing here. in what i've said so far, reducing the top-down influence could be sort of like raising the temperature of the system - escaping from local minima, but with no particular reason to believe you'd find better solutions. if you subtract what you "know" at the higher levels, you're back to the very hard original inverse problem of your massive-dimensional sensory input.

this is why i feel like this view is incomplete and needs something like clark's model: when you relax your self, there's something like an external gradient that shapes the system toward real improvement. what is that, neuroscientifically? i feel like getting a grip on this needs a pretty serious re-organization of my own thinking. but maybe it has something to do with erasing the barrier between form and content - letting the structure of the system be the computation itself. *possibly* this might reveal a flaw in the idea of recreating life in a computer.
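the "raising the temperature" analogy above can itself be sketched - again, purely illustrative numbers, nothing neural: a metropolis random walk on an asymmetric double well, started in the shallow basin. at zero temperature (pure greedy descent) the walker stays stuck where it started; add heat and then cool it down, and most runs end up in the deeper basin - but note that the thing doing the shaping is just the energy function itself, which is exactly the "external gradient" the paragraph above says is missing from the story:

```python
import math
import random

def energy(x):
    # asymmetric double well: shallow basin near x ~ -0.9,
    # deeper basin near x ~ +1.1 (the tilt -0.5*x makes it deeper)
    return (x * x - 1.0) ** 2 - 0.5 * x

def walk(temps, x0=-1.0, step=0.2, seed=0):
    """Metropolis random walk: propose a Gaussian move, always accept
    downhill moves, accept uphill moves with probability exp(-dE/T).
    temps gives the temperature at each iteration."""
    rng = random.Random(seed)
    x = x0
    for T in temps:
        x_new = x + rng.gauss(0.0, step)
        dE = energy(x_new) - energy(x)
        if dE <= 0 or (T > 0 and rng.random() < math.exp(-dE / T)):
            x = x_new
    return x

n_steps, n_runs = 3000, 20
frozen = sum(walk([0.0] * n_steps, seed=s) > 0 for s in range(n_runs))
annealed = sum(walk([2.0 * (1 - i / n_steps) for i in range(n_steps)],
                    seed=s) > 0 for s in range(n_runs))
print(frozen)    # -> 0: at T = 0 every run stays stuck in the shallow basin
print(annealed)  # most runs end in the deeper basin after heating and cooling
```

the point of the sketch is the asymmetry: heat alone just makes the walker wander; it's the energy landscape (plus the cooling) that turns wandering into improvement. whatever plays that role when the self is relaxed is the open question.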
