you know how, after you learn an association between two items, presenting one of them alone makes a representation of the other appear after a short delay? e.g., learn A–B, and later presenting A alone evokes B.
i was thinking about this in the context of the remerge model:
it could be that the dynamics of the hippocampus are particularly good at stitching together the topology of space from little chunks of sequence, and that more recently in evolution this same machinery got reused for stitching together arbitrary concepts.
when two or more concepts are linked together for the first time, the link is initially stored in recurrent MTL dynamics, such that presenting either item alone triggers a fast sequence that includes the other item. (dharsh's idea, if i understand it right.)
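to make that concrete, here's a toy numpy sketch of one-shot pair completion in a recurrent population. everything here is an illustrative assumption, not part of the actual remerge model: the pattern sizes, the sparse binary codes, and the one-shot hebbian outer-product rule are just the simplest things that show the effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50  # units in the toy MTL population


def make_pattern(k=10):
    # random sparse binary code for one item (assumption: k-sparse codes)
    p = np.zeros(n)
    p[rng.choice(n, size=k, replace=False)] = 1.0
    return p


tea, jelly = make_pattern(), make_pattern()

# one-shot hebbian association stored in recurrent weights:
# a single coactivation links the two patterns in both directions
W = np.outer(tea, jelly) + np.outer(jelly, tea)

# present "tea" alone; one pass of recurrent dynamics retrieves "jelly"
retrieved = (W @ tea > 0).astype(float)

overlap_with_jelly = retrieved @ jelly / jelly.sum()
print(overlap_with_jelly)  # 1.0 -> the partner pattern is fully completed
```

the point is just that a single hebbian event is enough for either item, presented alone, to pull up the other through the recurrent weights; in a real sequence model the partner would appear a few timesteps later rather than in one pass.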
but maybe, after more learning (or sleep/replay), some PFC neurons, which are repeatedly exposed to the rapid sequential activation of one item after the other, become specifically sensitive to the new compound, as in a tea-jelly cell.
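a minimal sketch of how such a compound-sensitive unit could emerge, under stated assumptions: the fast MTL sequence (tea then jelly) falls within the unit's integration window, so a slow hebbian trace at the PFC synapses sees both items together; the fixed patterns, learning rate, and threshold rule below are all hypothetical.

```python
import numpy as np

n = 50
tea = np.zeros(n); tea[:10] = 1.0     # assumption: fixed sparse codes
jelly = np.zeros(n); jelly[10:20] = 1.0

w = np.zeros(n)   # feedforward weights onto one candidate PFC unit
lr = 0.1

# each replay of the fast sequence activates tea then jelly inside the
# unit's integration window, so the hebbian update sees their sum
for _ in range(20):
    window = tea + jelly   # temporal summation over the fast sequence
    post = 1.0             # the unit is active during the paired event
    w += lr * post * window

# threshold chosen so a single item is sub-threshold (assumption)
theta = 0.75 * (w @ (tea + jelly))

print(w @ tea > theta)            # False: tea alone doesn't drive it
print(w @ (tea + jelly) > theta)  # True: only the compound does
```

so after enough replays the unit responds to the conjunction but not to either ingredient alone, which is what "specifically sensitive to the new compound" would look like at the single-unit level.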
now the tea-jelly cells in PFC can serve as building blocks for new sequences in MTL. maybe one reason the PFC is huge in humans is that we have to know about crazy amounts of stuff, like global warming and tensorflow. these are things you can't get from a stacked convnet on the visual stream; you can only get them by binding multiple things together?
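the "building block" step can be sketched with the same toy machinery: once the compound has its own code, the same one-shot hebbian rule can bind it to a third item. the patterns and the third item ("monday") are purely illustrative.

```python
import numpy as np

n = 50
# assume the consolidated "tea-jelly" compound now has its own sparse
# code (hypothetical PFC-backed pattern), usable as an ordinary item
tea_jelly = np.zeros(n); tea_jelly[:10] = 1.0
monday = np.zeros(n); monday[20:30] = 1.0  # some arbitrary third concept

# the same one-shot hebbian rule now binds the compound to the new item
W = np.outer(tea_jelly, monday) + np.outer(monday, tea_jelly)

retrieved = (W @ tea_jelly > 0).astype(float)
print(retrieved @ monday / monday.sum())  # 1.0: the compound acts as a unit
```

nothing new mechanistically; the point is just that the compound participates in a fresh association exactly like a primitive item would.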
this could also fit with the idea that episodic learning and statistical learning are part of the same mechanism. when you learn a new concept, it originally comes from a single example or a small number of examples. it would be interesting if the final geometry of the abstract concept, even after it's been well consolidated, still depends somewhat on which examples you first learned it from.