A commonly-held idea is that Multi-Voxel Pattern Analysis has something to do with patterns over voxels.
The flames of this misconception are probably fanned by images like this one:
[http://www.cogsci.mq.edu.au/research/projects/thebrainthatadapts/]
Because we never see images like this in review papers about single-voxel fMRI analysis, we tend to think MVPA is uniquely able to detect patterns like the one colorfully shown.
What do we mean by a "pattern", exactly? In particular, people often get the idea that MVPA is special because it's sensitive to cases where nearby voxels might encode the stimulus in opposite directions. This intuitively fits with the image above.
But the truth is, analyses that treat each voxel independently are perfectly happy to tell you about nearby voxels encoding a stimulus in opposite directions. Suppose you have two stimulus conditions, like face vs. house. At each voxel independently, you can perform an ANOVA against these category labels. If two adjacent voxels encode the categories in opposite directions, the F-statistics at these voxels will both be large.
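This per-voxel argument is easy to check with simulated data. Here's a minimal sketch (all numbers are hypothetical, chosen only for illustration): two adjacent voxels encode face vs. house in opposite directions, yet a one-way ANOVA F-statistic computed at each voxel independently is large for both.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 50  # trials per condition (hypothetical)

# Voxel A responds more to faces; voxel B responds more to houses
faces_A  = rng.normal(1.0,  1.0, n_trials)
houses_A = rng.normal(-1.0, 1.0, n_trials)
faces_B  = rng.normal(-1.0, 1.0, n_trials)
houses_B = rng.normal(1.0,  1.0, n_trials)

def f_stat(a, b):
    """One-way ANOVA F-statistic for two groups (equivalent to t squared)."""
    grand = np.concatenate([a, b]).mean()
    ss_between = len(a) * (a.mean() - grand) ** 2 + len(b) * (b.mean() - grand) ** 2
    ss_within = ((a - a.mean()) ** 2).sum() + ((b - b.mean()) ** 2).sum()
    df_between, df_within = 1, len(a) + len(b) - 2
    return (ss_between / df_between) / (ss_within / df_within)

print(f_stat(faces_A, houses_A))  # large: voxel A carries category information
print(f_stat(faces_B, houses_B))  # also large, despite the opposite sign
```

The F-statistic is blind to the direction of the effect, which is exactly why a single-voxel map has no trouble with voxels that encode the stimulus in opposite directions.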
You can spatially smooth these F-statistics and align them between subjects, and get a statistical map of where in the brain encodes information about faces and houses.
So, even considering one voxel at a time, you can still pick up the pattern of positive and negative encoding shown in the colorful image above.
One thing MVPA does do is sacrifice spatial resolution to gain sensitivity. By including multiple voxels, more information about the variable of interest (e.g., faces vs. houses) is pooled together. The tradeoff is that we don't know which voxels in that pool contain information about faces and houses.
Consider multiple regression, a typical MVPA approach. Our prediction of the response y (e.g., faceness vs. houseness) is a weighted sum of voxel activities:
beta_1 * voxel_1 + beta_2 * voxel_2 + ... + beta_n * voxel_n
In other words, our prediction is roughly the average of the predictions that the individual voxels make. (Although we can find better beta coefficients with multiple regression than with n single regressions.)
A weighted average can be much better than a prediction from a single voxel - but there's no magical "pattern" information.
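To make the pooling point concrete, here's a small simulation (the signal and noise levels are made up for illustration): two voxels encode the same variable y in opposite directions, each noisily. Fitting the multi-voxel regression with ordinary least squares pools their information, and the pooled prediction explains more variance than either single-voxel regression.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Hypothetical stimulus variable and two noisy voxels encoding it
y = rng.normal(size=n)
voxel_1 = y + rng.normal(scale=2.0, size=n)
voxel_2 = -y + rng.normal(scale=2.0, size=n)  # opposite-direction encoding

X = np.column_stack([voxel_1, voxel_2])
betas, *_ = np.linalg.lstsq(X, y, rcond=None)  # multiple regression (no intercept)

def r2(pred):
    """Fraction of variance in y explained by a prediction."""
    return 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()

def single_r2(v):
    """R^2 of a simple regression on one voxel alone."""
    b = np.dot(v, y) / np.dot(v, v)
    return r2(b * v)

print(single_r2(voxel_1), single_r2(voxel_2))  # each voxel alone: modest
print(r2(X @ betas))                           # pooled prediction: better
```

Nothing here depends on any "pattern" across the voxels; the gain comes purely from averaging away independent noise.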
Conversely, you can use many of the methods typically applied to MVPA to analyze single voxels, including representational similarity analysis. For example, here's a representational dissimilarity matrix constructed from real data from a single MEG sensor:
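The original figure isn't reproduced here, but the construction can be sketched with simulated data. Everything below is hypothetical: four made-up stimulus conditions, responses of a single sensor drawn from normal distributions, and absolute difference of mean responses as the (illustrative) dissimilarity measure. The point is that the RDM is perfectly well defined for one channel.

```python
import numpy as np

rng = np.random.default_rng(2)
conditions = ["face", "house", "chair", "car"]
true_means = [2.0, -1.0, 0.5, 0.4]  # hypothetical sensor tuning

# 30 simulated trials per condition from a single sensor
responses = {c: rng.normal(loc=mu, scale=0.5, size=30)
             for c, mu in zip(conditions, true_means)}

# Dissimilarity = absolute difference in mean response (one illustrative choice)
means = np.array([responses[c].mean() for c in conditions])
rdm = np.abs(means[:, None] - means[None, :])
print(np.round(rdm, 2))  # symmetric, zero diagonal: a single-channel RDM
```

With real MEG data you would typically use a richer dissimilarity (e.g., over time points), but the single-channel logic is the same.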
MVPA does bring some extra benefits if we allow for nonlinearities. Multi-voxel analysis can detect encodings that are invisible to single voxel analysis, like this one:
Neither voxel by itself carries information about red versus blue, but a classifier using a 2D Gaussian kernel can separate them.
(2D linear classification, unlike 2D linear regression, could perform above chance on these example data, by drawing a boundary that puts all the blue points on one side, and half of the red points on the other side.)
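This XOR-style scenario is easy to simulate. In the sketch below (all cluster positions and noise levels are made up), a simple nearest-neighbour classifier stands in for the Gaussian-kernel method: it classifies nearly perfectly when given both voxels, but sits at chance on either voxel alone, because each voxel's marginal distribution is identical for the two classes.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100  # points per cluster (hypothetical)

# XOR-style encoding: blue on one diagonal, red on the other
centers_blue = [(0, 0), (1, 1)]
centers_red  = [(0, 1), (1, 0)]

def sample(centers):
    return np.vstack([c + rng.normal(scale=0.1, size=(n, 2)) for c in centers])

blue, red = sample(centers_blue), sample(centers_red)
X = np.vstack([blue, red])
labels = np.array([0] * len(blue) + [1] * len(red))

def knn_accuracy(data):
    """Leave-one-out 1-nearest-neighbour classification accuracy."""
    d = ((data[:, None, :] - data[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)  # exclude each point as its own neighbour
    return (labels[d.argmin(1)] == labels).mean()

print(knn_accuracy(X))         # near 1.0: both voxels together
print(knn_accuracy(X[:, :1]))  # near 0.5 (chance): voxel 1 alone
print(knn_accuracy(X[:, 1:]))  # near 0.5 (chance): voxel 2 alone
```

Any sufficiently flexible nonlinear classifier (kernel SVM, nearest neighbours, etc.) will show the same qualitative result: the class structure only becomes visible when the voxels are considered jointly.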