Practical neuroimaging, the sequel

Who is this course for?

The course is designed for people starting or doing neuroimaging, with some programming experience (for example, writing MATLAB or Python scripts).

We have designed the course to help you:

  • Understand the basic concepts in neuroimaging, and how they relate to the wider world of statistics, engineering, and computer science;
  • Be comfortable working with neuroimaging data and code, so you can write your own basic algorithms, and understand other people’s code;
  • Work with code and data in a way that will save you time, help you collaborate, and continue learning.


We assume:

  • Reasonable knowledge of programming in any language.


That which I cannot build, I do not understand (Feynman)

We will cover:

  • A practical workflow for continuous learning;
  • Relevant concepts in mathematics / statistics / engineering;
  • FMRI analysis steps.

We aim to teach you to work efficiently, so that you can in due course forget about your tools and think about the ideas. We aim to make things simple, rather than easy, so that you can reach a stage where things are both simple and easy.

Practical workflow

  • Version control (we teach git);
  • Extensible text editor (see below);
  • Versatile programming language (we teach Python);
  • Testing code;
  • Collaborating with code.

Relevant concepts

  • Convolution (hemodynamic modeling, smoothing);
  • Interpolation (slice time correction, image resampling);
  • Optimization (registration, advanced statistics);
  • Basic linear algebra (statistics).

FMRI analysis steps

  • Diagnostics;
  • Slice timing;
  • Motion correction;
  • Registration within and between subjects;
  • Smoothing;
  • Statistical estimation with multiple regression;
  • Statistical inference.

Format of the classes

  • Approximately 30 minutes of prior reading / homework for each week;

  • Class is 2 hours:

    • 10 minutes debrief from the previous class;
    • 30 minutes talk introduction + questions;
    • 60 minutes problems;
    • 10 minutes review of problems.

For each day there will be a short teaching point on one of:

  • version control;
  • text editing;
  • code collaboration.

General reading

Text editors

  • Vim;
  • Emacs;
  • TextMate;
  • Sublime Text;
  • Notepad++.

You can use any other text editor, but we’ll be doing text editor challenges through the course to teach ourselves speed and shortcuts for our editors.

We added TextMate to the list because we know at least one extremely efficient coder who uses it.

Your teachers use Vim (two of us) and Emacs (one of us).

General teaching points

  • Proceed by iterating through a single-subject analysis;
  • Balance IPython notebooks and Python modules.


Day 0: introduction and taster


Install requirements on your machine

  • git;
  • Python;
  • pip;
  • scipy-stack (numpy, scipy, matplotlib, IPython, pandas);
  • nibabel.
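
Once everything is installed, a quick way to check the Python side is to ask Python itself which of the course packages it can find. A minimal sketch, using the package names above:

```python
import importlib.util

# Packages the course expects; nibabel in particular may still need installing.
required = ["numpy", "scipy", "matplotlib", "IPython", "pandas", "nibabel"]

missing = [name for name in required
           if importlib.util.find_spec(name) is None]
if missing:
    print("Still to install:", ", ".join(missing))
else:
    print("All course packages found")
```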

Make a private GitHub account.

Installation instructions will be a version of

Possible reading:


Day 1: introducing Python



  • variables, math
  • flow control (conditional statements, loops)
  • basic data structures (lists and dictionaries)
  • importing modules
  • reading and parsing text files (basic)
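
As a taste of the level we are aiming for, here is a sketch that touches each of the topics above. The scan parameters (173 volumes, TR of 2.5 seconds, alternating 10-volume blocks) are made up for the example:

```python
# Variables and math
n_volumes = 173
tr = 2.5                       # repetition time in seconds (made up)
scan_length = n_volumes * tr   # total scan duration in seconds

# Flow control and basic data structures
condition_onsets = {"on": [], "off": []}
for vol in range(n_volumes):
    # Alternate 10-volume off / on blocks
    block = "on" if (vol // 10) % 2 else "off"
    condition_onsets[block].append(vol * tr)

# Parsing text, as if read from a file of onset times
text = "0.0\n25.0\n50.0\n"
onsets = [float(line) for line in text.splitlines()]
print(scan_length, onsets)
```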

Day 2: images as arrays and plotting
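
There is no worked content for this day yet; a sketch of the core idea (an image is just an array) might look like the following. We fake a small 3D volume so the example runs on its own; in class we would load a real image with nibabel instead:

```python
import numpy as np

# In class we would load a real image, e.g.:
#   import nibabel as nib
#   data = nib.load('my_image.nii').get_fdata()   # hypothetical filename
# Here we fake a small volume so the sketch runs on its own.
rng = np.random.default_rng(0)
data = rng.normal(size=(64, 64, 30))   # x, y, slices

# Index to get a slice, aggregate to summarize.
middle_slice = data[:, :, data.shape[2] // 2]
print(middle_slice.shape)              # (64, 64)
print(data.mean(), data.std())

# Plotting would then be e.g. plt.imshow(middle_slice, cmap='gray')
```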



Day 3: diagnostics / version control



  • Refresher on Python modules and packages;
  • Transfer notebook code into text files;
  • Add to git;
  • Time series diagnostics;
  • Make an edit, commit, and push.
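
One simple time-series diagnostic we might build: the root-mean-square difference between each volume and the next, where large values flag sudden scanner or motion artifacts. A sketch on fake 4D data with one injected artifact:

```python
import numpy as np

# Fake a 4D dataset (x, y, z, time) with one bad volume.
rng = np.random.default_rng(1)
data = rng.normal(size=(16, 16, 8, 40))
data[..., 25] += 5.0   # inject an artifact at volume 25

diffs = np.diff(data, axis=-1)                      # volume-to-volume change
rms = np.sqrt(np.mean(diffs ** 2, axis=(0, 1, 2)))  # one value per volume pair

worst = int(np.argmax(rms))
print("Largest jump between volumes", worst, "and", worst + 1)
```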

Day 4: first statistics / version control


  • Make a branch, edit and commit;
  • Merge;
  • Push;
  • Splitting FMRI time series by slicing;
  • Subtracting the mean of off blocks from the mean of on blocks;
  • Visualizing result.
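
The subtraction analysis could be sketched like this, assuming alternating 10-volume off / on blocks and fake data with a known activation added:

```python
import numpy as np

# Fake 4D data (x, y, z, time) where "on" volumes carry extra signal.
rng = np.random.default_rng(2)
data = rng.normal(size=(8, 8, 4, 40))
block = (np.arange(40) // 10) % 2          # 0 = off, 1 = on
data[..., block == 1] += 2.0               # add activation to on volumes

on_mean = data[..., block == 1].mean(axis=-1)
off_mean = data[..., block == 0].mean(axis=-1)
difference = on_mean - off_mean            # 3D activation-like map

print(difference.mean())   # should be near the injected effect of 2
```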

Day 5: convolution and correlation



  • Creating the convolution kernel;
  • Extracting time series (slicing in 4th dimension);
  • Convolution the dumb way;
  • Convolution the scipy way;
  • Correlating the convolved time course with the data.
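
A sketch of the day's arc, using a gamma density as a stand-in for the hemodynamic response function (the exact HRF shape here is an assumption, not the course's chosen model):

```python
import numpy as np
from scipy import signal, stats

TR = 2.0
n_vols = 60

# Block on-off "neural" time course: alternating 10-volume blocks.
neural = ((np.arange(n_vols) // 10) % 2).astype(float)

# Gamma-shaped stand-in for the HRF, sampled at the TR.
times = np.arange(0, 24, TR)
hrf = stats.gamma.pdf(times, 6)
hrf = hrf / hrf.sum()

# Convolution the dumb way: explicit loop over the definition.
conv_dumb = np.zeros(n_vols + len(hrf) - 1)
for i, x in enumerate(neural):
    conv_dumb[i:i + len(hrf)] += x * hrf

# Convolution the scipy way.
conv_scipy = signal.convolve(neural, hrf)
assert np.allclose(conv_dumb, conv_scipy)

# Correlate the predicted course with a fake voxel built from it.
predicted = conv_scipy[:n_vols]
voxel = 3 * predicted + np.random.default_rng(3).normal(size=n_vols)
r = np.corrcoef(predicted, voxel)[0, 1]
print(r)
```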

Day 6: regression and the general linear model



  • Load time course;
  • Rebuild convolved regressor;
  • Set up matrices;
  • Run estimation;
  • Visualize result;
  • Replicate subtraction analysis from the previous day with dummy regressors;
  • Visualize result;
  • (Relationship of correlation and regression.)
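
The estimation step reduces to a few lines of linear algebra. A minimal sketch with a box-car regressor and simulated data (a real session would use the convolved regressor from the previous day):

```python
import numpy as np

# GLM: Y = X B + E, estimated with the pseudoinverse.
rng = np.random.default_rng(4)
n_vols = 40
boxcar = ((np.arange(n_vols) // 10) % 2).astype(float)

X = np.column_stack([boxcar, np.ones(n_vols)])   # regressor + intercept
true_beta = np.array([2.0, 10.0])
y = X @ true_beta + rng.normal(size=n_vols)

beta_hat = np.linalg.pinv(X) @ y
print(beta_hat)   # close to [2, 10]
```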

Day 7: diagnostics using principal component analysis

This day is for us to practice working with matrices, and to get an idea of the level of underlying variance in data.



  • Get code from notebook;
  • Run PCA;
  • Fetch projection matrices, vectors and values;
  • Reconstruct data using a reduced number of components;
  • Investigate and diagnose components;
  • Investigate correlation of vectors with data.
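
A PCA sketch using the SVD on a fake voxels-by-time matrix with one strong underlying component:

```python
import numpy as np

# Fake data: 200 voxels, 50 time points, one shared sine component.
rng = np.random.default_rng(5)
n_voxels, n_vols = 200, 50
component = np.sin(np.linspace(0, 4 * np.pi, n_vols))
weights = rng.normal(size=(n_voxels, 1))
data = weights @ component[None, :] * 3 + rng.normal(size=(n_voxels, n_vols))

# Remove the mean over time, then decompose.
demeaned = data - data.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(demeaned, full_matrices=False)

# Proportion of variance explained by each component.
explained = S ** 2 / np.sum(S ** 2)
print(explained[:3])

# Reconstruct using only the first k components.
k = 1
reconstructed = U[:, :k] * S[:k] @ Vt[:k, :]
```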

Day 8: 1D interpolation and slice timing


  • Convert notebook to Python module;
  • Write code to do linear interpolation on an example time series;
  • Write tests;
  • Use scipy interpolation code;
  • Investigate splines.
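
Hand-rolling linear interpolation and checking it against scipy might look like this; the half-second shift is a hypothetical slice-acquisition offset:

```python
import numpy as np
from scipy.interpolate import interp1d

TR = 2.0
times = np.arange(10) * TR
course = np.cos(times / 3.0)
shifted_times = times[:-1] + 0.5   # sample half a second later

def linear_interp(x, xp, fp):
    """Hand-rolled linear interpolation at points x given samples (xp, fp)."""
    out = np.empty_like(x, dtype=float)
    for i, xi in enumerate(x):
        j = np.searchsorted(xp, xi) - 1          # left neighbor
        t = (xi - xp[j]) / (xp[j + 1] - xp[j])   # fraction between neighbors
        out[i] = (1 - t) * fp[j] + t * fp[j + 1]
    return out

ours = linear_interp(shifted_times, times, course)
scipys = interp1d(times, course, kind='linear')(shifted_times)
assert np.allclose(ours, scipys)
```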

Day 9: optimization, 2D interpolation and registration


(Add subtracted image after registration).


  • Convert optimization notebook to Python module;
  • Run;
  • Try different cost functions;
  • Try different optimization methods;
  • Local minima (for example, with a 180 degree rotation);
  • Investigate and run FSL motion correction.
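
Registration as optimization, in miniature: find the one-dimensional shift that best re-aligns a translated copy of an image, with negative correlation as the cost function (the class would also try other cost functions):

```python
import numpy as np
from scipy import ndimage, optimize

# Smoothed random image, then a copy shifted by 3.5 voxels along x.
rng = np.random.default_rng(6)
image = ndimage.gaussian_filter(rng.normal(size=(40, 40)), 4)
moved = ndimage.shift(image, (0, 3.5))

def cost(shift_x):
    """Negative correlation between the original and the un-shifted copy."""
    unshifted = ndimage.shift(moved, (0, -float(shift_x)))
    return -np.corrcoef(image.ravel(), unshifted.ravel())[0, 1]

best = optimize.minimize_scalar(cost, bounds=(0, 10), method='bounded')
print(best.x)   # close to the true shift of 3.5
```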

Day 10: coordinate systems and cross-modality registration


Need to fix this up.


  • Load EPI;
  • Load anatomical;
  • Reslicing using coordinate transforms;
  • Scipy ndimage and affine_transform;
  • FSL coregistration;
  • SPM coregistration.
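
The key idea behind scipy's `affine_transform` is that the matrix maps output coordinates back to input coordinates. A sketch that down-samples a synthetic image by a factor of two:

```python
import numpy as np
from scipy.ndimage import affine_transform

# Synthetic image with a bright square.
image = np.zeros((20, 20))
image[5:15, 5:15] = 1.0

# affine_transform maps *output* coordinates to *input* coordinates:
#   input_coord = matrix @ output_coord + offset
matrix = np.diag([2.0, 2.0])   # each output voxel covers 2 input voxels
resliced = affine_transform(image, matrix, offset=0,
                            output_shape=(10, 10), order=1)
print(resliced.shape)   # (10, 10)
```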

Day 11: registration between subjects



  • Affine registration using scipy;
  • Affine registration using FSL;
  • Warping in 2D using dipy regtools;
  • Diagnosing the warp using the deformation mesh;
  • Affine plus warping using FSL;
  • Thinking about what makes a good registration.

Day 12: smoothing and modeling



  • Smoothing as convolution;
  • HRF regressor model on smoothed and unsmoothed data;
  • Different smoothing levels;
  • Single voxel;
  • Whole brain.
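
Smoothing is convolution with a Gaussian kernel; the only fiddly part is converting the conventional FWHM in millimeters to the standard deviation in voxels that scipy wants. A sketch with hypothetical 2 mm isotropic voxels and an 8 mm FWHM:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

fwhm_mm = 8.0
voxel_size_mm = 2.0   # hypothetical isotropic voxel size
# FWHM = sigma * sqrt(8 * ln(2)); divide by voxel size to get voxel units.
sigma_vox = fwhm_mm / np.sqrt(8 * np.log(2)) / voxel_size_mm

rng = np.random.default_rng(7)
volume = rng.normal(size=(32, 32, 16))
smoothed = gaussian_filter(volume, sigma_vox)

# Smoothing shrinks the voxel-wise standard deviation of the noise.
print(volume.std(), smoothed.std())
```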

Day 13: testing hypotheses with t and F contrasts



  • Block (on / off model) F contrasts;
  • Motion parameters as confounds;
  • t contrasts for motion;
  • F contrasts for motion;
  • FSL contrasts;
  • SPM contrasts.
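
The t statistic for a contrast vector c falls straight out of the GLM machinery from the regression day. A minimal sketch on simulated data:

```python
import numpy as np
from scipy import stats

# t = c' beta_hat / sqrt(sigma2 * c' (X'X)^-1 c)
rng = np.random.default_rng(8)
n = 40
boxcar = ((np.arange(n) // 10) % 2).astype(float)
X = np.column_stack([boxcar, np.ones(n)])
y = X @ [1.5, 10.0] + rng.normal(size=n)

beta = np.linalg.pinv(X) @ y
resid = y - X @ beta
df = n - np.linalg.matrix_rank(X)       # degrees of freedom
sigma2 = resid @ resid / df             # unbiased variance estimate

c = np.array([1.0, 0.0])                # test the box-car effect
se = np.sqrt(sigma2 * c @ np.linalg.inv(X.T @ X) @ c)
t = c @ beta / se
p = 2 * stats.t.sf(abs(t), df)          # two-tailed p value
print(t, p)
```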

Day 14: random effects, choosing models



  • Recall t and F tests: what exactly are they doing? Where does my variability come from?
    • example within run (from the previous course);
    • example random effect (between subjects).
  • Wait a second: what is a model, exactly?
  • I am choosing a very wrong model: consequences for the results of t/F tests.
  • How do I know whether my model is or is not very wrong? The good, the bad, the ugly (in reverse order):
    • the ugly: p-hacking (let’s try it);
    • the bad: use R²;
    • the good: model validation.
  • Model validation: the principle, with an example of a “random effect” model testing the effect of “grumpiness”.
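
Why R² is "the bad": it can only go up as you add regressors, even pure-noise ones, so by itself it cannot tell you the model is wrong. A tiny demonstration on simulated data:

```python
import numpy as np

rng = np.random.default_rng(9)
n = 60
x = rng.normal(size=n)
y = 2 * x + rng.normal(size=n)

def r2(X, y):
    """Proportion of variance explained by a least-squares fit of X to y."""
    beta = np.linalg.pinv(X) @ y
    resid = y - X @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

X_good = np.column_stack([x, np.ones(n)])
X_padded = np.column_stack([X_good, rng.normal(size=(n, 10))])  # junk columns

print(r2(X_good, y), r2(X_padded, y))   # the padded model never does worse
```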

Day 15: statistical inference



  • Generate a map of t values;
  • Correct using Bonferroni;
  • Correct using random fields;
  • Correct using FDR.
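
Bonferroni and Benjamini-Hochberg FDR corrections compared on a fake map of p values, with 100 "truly active" voxels out of 10,000:

```python
import numpy as np

rng = np.random.default_rng(10)
n_voxels = 10000
p = rng.uniform(size=n_voxels)
p[:100] = rng.uniform(0, 1e-4, size=100)   # 100 truly active voxels

alpha = 0.05

# Bonferroni: divide the threshold by the number of tests.
bonf_detected = np.sum(p < alpha / n_voxels)

# Benjamini-Hochberg FDR: largest k with p_(k) <= (k / N) * alpha.
p_sorted = np.sort(p)
below = p_sorted <= np.arange(1, n_voxels + 1) / n_voxels * alpha
n_fdr = below.nonzero()[0].max() + 1 if below.any() else 0

print(bonf_detected, n_fdr)   # FDR detects at least as many as Bonferroni
```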

Possible extra days

  • Using machine learning tools with scikit-learn;
  • Introduction to diffusion imaging;
  • Introduction to DICOM;
  • Data visualization.