A restriction spectrum sparse fascicle model for diffusion MRI

Ariel Rokem1, Christian Pötter2, Robert F. Dougherty2

1The University of Washington eScience Institute

2Center for Cognitive and Neurobiological Imaging, Stanford University

View the code on GitHub

Paper

Slides

Example modeling one voxel:

Notebook viewer

Live notebook on binder

Generating predictions for the WM challenge:

Notebook viewer

Live notebook on binder

This model was developed as a contribution to the White Matter Challenge at ISBI 2015.

Abstract

The Sparse Fascicle Model (SFM [1]) belongs to the large family of models that account for the diffusion MRI signal in white matter as a combination of signals from compartments corresponding to different axonal fiber populations (fascicles) and from other parts of the tissue. The model proceeds in two steps.

First, an isotropic component is fit. We model the effects of both the measurement echo time (TE) and the measurement b-value on the signal: a log(TE)-dependent decay, fit with a low-order polynomial function, and a b-value-dependent multi-exponential decay (including an offset to account for the Rician noise floor).

Second, the residuals from the isotropic component are deconvolved with the perturbations in the signal due to a set of fascicle kernels, each modeled as a radially symmetric (λ2=λ3) diffusion tensor. The putative kernels are distributed on a dense sampling grid on the sphere. Furthermore, Restriction Spectrum Imaging (RSI [2]) is used to extend the model: at each sampling point, a range of fascicle kernels with different axial and radial diffusivities is added, capturing diffusion at different scales.

To restrict the number of anisotropic components (fascicles) in each voxel, and to prevent overfitting, RS-SFM employs the Elastic Net algorithm (EN [3]), which applies a tunable combination of L1 and L2 regularization to the weights of the fascicle kernels. We used elements of the SFM implemented in the dipy software library [4] and the EN implemented in scikit-learn [5]. In addition, to account for differences in SNR, we implemented a weighted least-squares strategy, whereby each measurement's contribution to the fit was weighted according to its TE and the gradient strength used.

EN has two tuning parameters: 1) the ratio of L1-to-L2 regularization, and 2) the weight of the regularization relative to the least-squares fit to the signal. To find appropriate values for these parameters, we employed k-fold cross-validation [1], leaving out one measurement shell in each iteration. We found that the tuning parameters with the lowest least-squares error (LSE [6]) provide an almost even balance of the L1 and L2 penalties, with weak overall regularization.

Because of the combination of a dense sampling grid (362 points distributed on the sphere) and multiple restriction kernels (45 per sampling point), the maximal number of parameters in the model is approximately 16,300, more than the number of data points. However, because regularization is employed, the effective number of parameters is much smaller, resulting in an active set of approximately 20 regressors [7].
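To make the first step concrete, here is a minimal sketch of an isotropic fit of the kind described above, assuming a quadratic in log(TE) scaling a bi-exponential decay in b plus a noise-floor offset; the functional form, the synthetic acquisition, and all starting values are illustrative assumptions, not the exact model used for the challenge.

```python
import numpy as np
from scipy.optimize import curve_fit

def isotropic_model(x, a0, a1, a2, f, d1, d2, c):
    """Hypothetical isotropic signal: a low-order polynomial in log(TE)
    scaling a bi-exponential decay in b, plus an offset c for the
    Rician noise floor."""
    te, b = x
    log_te = np.log(te)
    te_term = a0 + a1 * log_te + a2 * log_te ** 2
    b_term = f * np.exp(-b * d1) + (1 - f) * np.exp(-b * d2) + c
    return te_term * b_term

# Synthetic multi-TE, multi-shell acquisition standing in for the real data:
rng = np.random.default_rng(0)
n = 300
te = rng.choice([0.06, 0.08, 0.10], size=n)            # echo times (s)
bvals = rng.choice([1000., 2000., 3000.], size=n)      # b-values (s/mm^2)
bvecs = rng.normal(size=(n, 3))                        # gradient directions,
bvecs /= np.linalg.norm(bvecs, axis=1, keepdims=True)  # used in the next sketch

signal = isotropic_model((te, bvals), 1.0, -0.2, 0.01, 0.6, 1.5e-3, 0.3e-3, 0.05)
signal += rng.normal(0, 0.01, size=n)

params, _ = curve_fit(isotropic_model, (te, bvals), signal,
                      p0=[1.0, 0.0, 0.0, 0.5, 1e-3, 1e-4, 0.0])
residuals = signal - isotropic_model((te, bvals), *params)
```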
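The deconvolution in the second step can be written as a linear model whose design matrix holds one column per direction-by-diffusivity kernel. The sketch below builds such a matrix using dipy's 362-point sphere, continuing from the synthetic bvals/bvecs above; the 9 × 5 grid of axial and radial diffusivities is an assumed stand-in for the 45 restriction kernels per point, and demeaning the columns is one way to work with signal perturbations as the SFM does [1].

```python
import numpy as np
from dipy.data import get_sphere

sphere = get_sphere('symmetric362')        # 362 candidate fascicle directions

# Assumed 9 x 5 = 45 (axial, radial) diffusivity pairs per direction:
ad_grid = np.linspace(0.8e-3, 2.4e-3, 9)   # axial diffusivities (mm^2/s)
rd_grid = np.linspace(0.0, 0.8e-3, 5)      # radial diffusivities (mm^2/s)

def kernel_column(bvals, bvecs, v, ad, rd):
    """Signal for a radially symmetric (lambda2 = lambda3) tensor kernel
    along v: S(b, g) = exp(-b * g'Dg), with D = rd*I + (ad - rd)*vv'."""
    cos2 = (bvecs @ v) ** 2                # squared cosine between g and v
    adc = rd + (ad - rd) * cos2            # apparent diffusion coefficient
    return np.exp(-bvals * adc)

X = np.stack([kernel_column(bvals, bvecs, v, ad, rd)
              for v in sphere.vertices
              for ad in ad_grid
              for rd in rd_grid], axis=-1)
X = X - X.mean(axis=0)                     # columns as perturbations around
print(X.shape)                             # their means; (300, 16290) regressors
```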
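Finally, the weighted Elastic Net fit and the shell-wise cross-validation over the two tuning parameters might look roughly like this: scikit-learn's ElasticNet exposes them as l1_ratio and alpha, and its fit method accepts per-measurement sample_weight; the weight formula, the candidate grids, and the positivity constraint on the fascicle weights are assumptions made for the sketch.

```python
import numpy as np
from itertools import product
from sklearn.linear_model import ElasticNet

# X, bvals, te, and residuals come from the sketches above; this weight
# formula is a placeholder for the TE/gradient-strength weighting:
w = np.exp(-te / 0.1) / np.sqrt(bvals)

best_err, best_params = np.inf, None
for l1_ratio, alpha in product([0.3, 0.5, 0.7], [1e-4, 1e-3, 1e-2]):
    errs = []
    for shell in np.unique(bvals):         # leave out one shell per fold
        train = bvals != shell
        en = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, positive=True,
                        fit_intercept=False, max_iter=5000)
        en.fit(X[train], residuals[train], sample_weight=w[train])
        pred = en.predict(X[~train])
        errs.append(np.average((pred - residuals[~train]) ** 2,
                               weights=w[~train]))
    if np.mean(errs) < best_err:
        best_err, best_params = np.mean(errs), (l1_ratio, alpha)

l1_ratio, alpha = best_params
en = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, positive=True,
                fit_intercept=False, max_iter=5000)
en.fit(X, residuals, sample_weight=w)
print((en.coef_ > 0).sum())   # active set size (~20 on the challenge data)
```

The nonzero coefficients identify the active fascicle kernels; their directions and diffusivities describe the recovered fiber populations in the voxel.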

References:

[1] Rokem et al. (2015) PLoS ONE, in press.
[2] White et al. (2013) Hum Brain Mapp 34:327-346.
[3] Zou and Hastie (2005) J R Statist Soc B 67:301-320.
[4] Garyfallidis et al. (2014) Front Neuroinform 8:8.
[5] Pedregosa et al. (2011) JMLR 12:2825-2830.
[6] Panagiotaki et al. (2012) NeuroImage 59:2241-2254.
[7] Zou et al. (2007) Ann Statist 35:2173-2192.