Common component classification: what can we learn from machine learning?

Title: Common component classification: what can we learn from machine learning?
Publication Type: Journal Article
Year of Publication: 2011
Authors: Anderson A, Labus JS, Vianna EP, Mayer EA, Cohen MS
Journal: NeuroImage
Volume: 56
Issue: 2
Pagination: 517-524
Date Published: 2011 May 15
ISSN: 1095-9572
Keywords: Artificial Intelligence; Brain Mapping; Computer Simulation; Female; Humans; Image Interpretation, Computer-Assisted; Magnetic Resonance Imaging
Abstract

Machine learning methods have been applied to classify fMRI scans by studying brain locations that exhibit temporal intensity variation between groups, frequently reporting classification accuracies of 90% or better. Although empirical results are quite favorable, one might doubt the ability of classification methods to withstand changes in task ordering and the reproducibility of activation patterns across runs, and question how much of the classifiers' power is due to artifactual noise rather than genuine neurological signal. To examine the true strength and power of machine learning classifiers, we create and then deconstruct a classifier to examine its sensitivity to physiological noise, task reordering, and across-scan classification ability. The models are trained and tested both within and across runs to assess stability and reproducibility across conditions. We demonstrate the use of independent components analysis for both feature extraction and artifact removal, and show that removing such artifacts can reduce predictive accuracy even when the data have been cleaned during preprocessing. We demonstrate how mistakes in the feature selection process can cause the cross-validation error reported in publications to be a biased estimate of the testing error seen in practice, and we measure this bias by purposefully building flawed models. We discuss other ways to introduce bias, and the statistical assumptions underlying the data and the models themselves. Finally, we discuss the complications of drawing inference from the smaller sample sizes typical of fMRI studies, the effects of small or unbalanced samples on Type I and Type II error rates, and how publication bias can give false confidence in the power of such methods. Collectively, this work identifies challenges specific to fMRI classification and methods affecting the stability of models.
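The feature-selection bias the abstract describes can be illustrated with a small simulation (a minimal sketch, not the paper's actual analysis; the dataset, classifier, and all parameter choices here are illustrative assumptions). Selecting discriminative features from the full dataset before cross-validation yields optimistic accuracy even on pure noise, whereas selecting features inside each training fold does not:

```python
import numpy as np

rng = np.random.default_rng(0)

# Small sample, many features: a caricature of an fMRI classification setting.
n, p, k = 40, 2000, 10
X = rng.standard_normal((n, p))   # pure noise: there is no true class signal
y = np.repeat([0, 1], n // 2)

def top_k_features(X, y, k):
    # Rank features by absolute difference of class means (a simple filter).
    diff = np.abs(X[y == 0].mean(axis=0) - X[y == 1].mean(axis=0))
    return np.argsort(diff)[-k:]

def nearest_centroid_acc(Xtr, ytr, Xte, yte):
    # Classify each test point by its nearer training-class centroid.
    c0, c1 = Xtr[ytr == 0].mean(axis=0), Xtr[ytr == 1].mean(axis=0)
    pred = (np.linalg.norm(Xte - c1, axis=1)
            < np.linalg.norm(Xte - c0, axis=1)).astype(int)
    return float((pred == yte).mean())

def cv_accuracy(X, y, k, select_inside):
    folds = np.array_split(rng.permutation(len(y)), 5)
    feats_all = top_k_features(X, y, k)   # flawed: selection uses ALL the data
    accs = []
    for te in folds:
        tr = np.setdiff1d(np.arange(len(y)), te)
        feats = top_k_features(X[tr], y[tr], k) if select_inside else feats_all
        accs.append(nearest_centroid_acc(X[tr][:, feats], y[tr],
                                         X[te][:, feats], y[te]))
    return float(np.mean(accs))

biased = cv_accuracy(X, y, k, select_inside=False)  # optimistic, well above chance
honest = cv_accuracy(X, y, k, select_inside=True)   # near 50% chance level
```

On noise with no signal, the flawed pipeline reports accuracy far above the 50% chance level because the held-out fold has already influenced which features were kept; nesting the selection inside each fold removes that leakage.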

Alternate Journal: Neuroimage