Subjective ratings prediction by flatmapped covariance

In fMRI, the temporal and spatial dynamics of the BOLD signal can tell us a great deal about underlying regional brain activity. How much information can we extract from this signal? Can we use voxel-level patterns of activity from across the brain to reliably predict the subjective experience of an individual? If so, will these methods prove robust when stimuli approach the complexity of real life? As part of the Pittsburgh Brain Activity Interpretation Competition, our approach relied on identifying groups of voxels that were highly covariant with a behavioral response vector. Since behavioral ratings were given for two of the three movies, we focused on using those two movies to train a novel method of voxel selection. Using the paired functional imaging and behavioral data, we created volumes wherein each voxel represented the covariance score between that voxel's timecourse and the hemodynamically convolved behavioral rating. By flatmapping the results of this operation, we were able to use a custom method of similarity detection to constrain the voxels used to generate a final prediction timecourse. Our method was effective at predicting some, but not all, of the behavioral ratings. It performed best on the most objective ratings, including body parts, language, and faces. Its performance degraded as ratings became more subjective, as with arousal, attention, and sadness. Across all ratings, correlation scores for predictions made between movies one and two ranged from 0.00 to 0.75.
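The core step described above, scoring each voxel by the covariance between its timecourse and the hemodynamically convolved behavioral rating, can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the HRF model, TR, and array shapes (`bold` as voxels × timepoints, `rating` as a per-timepoint vector) are all assumptions, and a simple single-gamma kernel stands in for whatever HRF the authors used.

```python
import numpy as np

def hrf_kernel(tr=2.0, duration=30.0):
    # Simple gamma-shaped hemodynamic response sampled at the TR.
    # Illustrative only; the paper does not specify its HRF model.
    t = np.arange(0.0, duration, tr)
    h = (t ** 5) * np.exp(-t)      # peaks a few seconds after onset
    return h / h.sum()

def covariance_map(bold, rating, tr=2.0):
    """Score each voxel by the covariance of its timecourse with the
    HRF-convolved behavioral rating.

    bold   : (n_voxels, n_timepoints) array of BOLD timecourses
    rating : (n_timepoints,) behavioral rating vector
    Returns a (n_voxels,) vector of covariance scores.
    """
    # Convolve the rating with the HRF and trim to the original length.
    conv = np.convolve(rating, hrf_kernel(tr))[: rating.size]
    # Mean-center both signals, then take the dot product per voxel.
    bold_c = bold - bold.mean(axis=1, keepdims=True)
    conv_c = conv - conv.mean()
    return bold_c @ conv_c / (rating.size - 1)

# Toy example: 3 voxels, 100 timepoints; voxel 0 is built to track the
# convolved rating, so it should receive the highest covariance score.
rng = np.random.default_rng(0)
rating = rng.random(100)
bold = rng.standard_normal((3, 100))
bold[0] += 5 * np.convolve(rating, hrf_kernel())[:100]
scores = covariance_map(bold, rating)
print(scores.argmax())
```

In the full pipeline these per-voxel scores would be assembled into a volume and flatmapped; the similarity-detection step that selects voxels from the flatmap is not shown here.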