
# Source reconstruction of event-related fields using minimum-norm estimate

## Introduction

In this tutorial we will show how to do source-analysis with minimum-norm estimate on the event-related fields (MEG) of a single subject.

We will use an MEG dataset of a language task. This tutorial is part of a series of tutorials that use the same dataset.

We will also use MRI images which belong to the same subject, a template MRI and a template cortical sheet.

We will repeat code to select the trials and preprocess the data as described here. We assume that preprocessing and event-related averaging are already familiar to the reader. This tutorial will *not* show how to do group-averaging or statistics.

The major differences (beside the algorithm of the inverse solution) compared to the source localization procedure shown in the beamforming tutorial are:

- We will calculate the source of the event-related fields rather than the oscillatory activity.
- We will calculate the source-activation over time and not average the activation over a time-window.
- We will use a source model that only models the cortex as a surface (i.e. the cortical sheet) and not by scanning on a volumetric grid covering the whole brain.

## Background

In this tutorial, the type of **inverse model** used for source-localization is based on minimum-norm estimation (MNE).

In the event-related averaging tutorial the event-related fields have been computed in three conditions, and the cluster statistics tutorial showed significant differences between two conditions. The topographical distributions of the ERFs belonging to each condition and of their differences have been plotted. In this tutorial we will calculate a distributed representation of the neuronal activity underlying the sensor-level data for each condition. We will also visualize the difference between conditions on the source level.

The MNE method is favored for tracking widespread activation and for analyzing evoked responses over time. The source model only describes the cortical surface and uses a large number of equivalent current dipoles placed at the vertices of this surface grid. MNE estimates the amplitudes of the complete source model simultaneously and recovers a distribution of activity with minimum overall energy that produces data consistent with the measurement. The most appropriate literature reference for the FieldTrip implementation is Dale et al. (2000).
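The linear minimum-norm estimate can be written as a spatial filter matrix applied to the sensor data. Below is a minimal conceptual sketch in plain MATLAB with toy dimensions; it is *not* the FieldTrip implementation, and the variable names (`L`, `C`, `R`, `lambda`) are illustrative assumptions:

```matlab
% Minimum-norm inverse operator (conceptual sketch, not FieldTrip's code).
% Data model: y = L*s + n, with source covariance R and noise covariance C.
nchan  = 4;  nsrc = 10;            % toy dimensions
L      = randn(nchan, nsrc);       % leadfield (forward solution)
C      = eye(nchan);               % noise covariance estimate
R      = eye(nsrc);                % source covariance (identity = minimum norm)
lambda = 2;                        % regularization parameter

W     = R * L' / (L * R * L' + lambda^2 * C);  % inverse operator ("filter")
y     = randn(nchan, 1);           % one time-sample of sensor data
s_hat = W * y;                     % estimated source amplitudes (nsrc x 1)
```

The regularization term `lambda^2 * C` corresponds to the role of the noise covariance described later in this tutorial: with `lambda = 0` the noise estimate drops out of the inverse solution.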

## Procedure

Figure 1. shows the important steps in the minimum-norm estimate. It shows that the computation of the inverse solution is based on the outputs of two independent processing steps: the processing of the anatomical information that leads to the forward solution and the processing of the MEG data.

In the anatomical processing part we will create the source model and the volume conduction model of the head (i.e. the head model). In this tutorial we will use a template sourcemodel and a template headmodel that are both derived from the canonical MNI template MRI. We will spatially transform these to match the individual's MRI. Note that this approach is suboptimal because the folding of the cortical sheet will follow the template rather than the individual MRI. However, the advantage is that the results of the source estimation on this individualized template can be easily compared across subjects. A computationally more complex, but also more accurate, MNE source estimation on the individual cortical sheet is described in another tutorial.

To compute the minimum-norm estimation we perform the following steps:

- The subject's MRI is read in with **ft_read_mri** and normalized with **ft_volumenormalise**. Since the MRI from disk is already coregistered, we can skip **ft_volumerealign**.
- To create the sourcemodel, we read in a template cortical sheet with **ft_read_headshape** and warp it with **ft_transform_geometry**.
- To create a headmodel, we first load a template MRI and segment it with **ft_volumesegment** and create a mesh using **ft_prepare_mesh**. We transform the mesh with **ft_transform_geometry**. In the last step, we create the headmodel from this mesh using **ft_prepare_headmodel**.
- To check whether all anatomical information is well-aligned, we read in the sensor locations with **ft_read_sens**, then we plot them with **ft_plot_sens** together with the sourcemodel, using **ft_plot_mesh**, and the headmodel, using **ft_plot_vol**.
- Then, we compute the forward solution with **ft_prepare_leadfield**.
- For the inverse solution, we first preprocess the MEG data using **ft_definetrial** and **ft_preprocessing**, and compute the average over trials and estimate the noise-covariance using **ft_timelockanalysis**. We compute the inverse solution using **ft_sourceanalysis** and visualize the results with **ft_plot_mesh** and **ft_sourcemovie**.

## Processing of anatomical data

Some of the functions described in this part of the tutorial use the SPM toolbox, which can be found in the fieldtrip/external folder. You do not have to add this toolbox yourself, but it is important that you set up your MATLAB path properly.

```matlab
cd <PATH_TO_FIELDTRIP>
ft_defaults
```

### Processing of the subject's mri

In the following, we will use the anatomical MRI belonging to Subject01. The file can be obtained from ftp://ftp.fieldtriptoolbox.org/pub/fieldtrip/tutorial/Subject01.zip.
We read in the subject's MRI as follows:

```matlab
mri = ft_read_mri('Subject01.mri');
disp(mri)
          dim: [256 256 256]
      anatomy: [256x256x256 int16]
          hdr: [1x1 struct]
    transform: [4x4 double]
     coordsys: 'ctf'
```

The structure of your MRI variable contains the following fields:

- **dim**: the size (i.e. the number of voxels) of the anatomical volume in each direction.
- **anatomy**: a matrix (with the size and number of dimensions specified in **dim**) that contains the anatomical information represented by numbers.
- **hdr**: header information of the anatomical images.
- **transform**: a transformation matrix that aligns the anatomical data (in field **anatomy**) to a certain coordinate system.
- **coordsys**: the description of the coordinate system to which the anatomical data is aligned.

You can see that the **coordsys** field of the anatomical data shows 'ctf'. The subject's MRI should be in the CTF head coordinate system because this is also how the locations of the MEG sensors are defined relative to the head. Hence, the source model and head model that we create have to be expressed in the same CTF head coordinate system.

If your anatomical data is not yet coregistered with the sensors, you can align it using **ft_volumerealign** or **ft_electroderealign**. This alignment or coregistration is commonly done using fiducial points on the head.

When you read in your own anatomical data, it may not contain information on the coordinate system in which the anatomical data is expressed, and/or there may be no transformation matrix specified. In this case, you can visually inspect and determine the coordinate system with **ft_determine_coordsys**.

In the next step, we normalize the individual MRI using the MNI template anatomy from SPM. We do not explicitly have to give the MNI template to the function as it is used by default. In the subsequent step, we will also use a template MRI and a template cortical sheet. All templates we use are based on the same “colin27” anatomical MRI.

```matlab
cfg = [];
norm_mri = ft_volumenormalise(cfg, mri);
disp(norm_mri)
      anatomy: [181x217x181 double]
    transform: [4x4 double]
          dim: [181 217 181]
       params: [1x1 struct]
      initial: [4x4 double]
     coordsys: 'spm'
          cfg: [1x1 struct]
```

After the normalization, the MRI is aligned to the SPM/MNI coordinate system, in which the template cortical sheet and template MRI are also expressed. In **norm_mri.cfg.final** we find a homogeneous transformation matrix that defines how voxel positions can be transformed from the CTF head coordinates to the normalized SPM/MNI coordinates. We will use this transformation matrix in the subsequent steps.
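As an illustration, a 4x4 homogeneous transformation matrix is applied to a point by appending a 1 to its coordinates. This is a sketch; the example point is arbitrary, and `norm_mri` is the normalized MRI from the step above:

```matlab
% Applying a 4x4 homogeneous transformation to a single point (sketch)
T       = norm_mri.cfg.final;    % CTF -> SPM/MNI transformation
pos_ctf = [5 0 6];               % an arbitrary point in CTF head coordinates
pos_hom = [pos_ctf 1]';          % homogeneous coordinates (append a 1)
pos_mni = T * pos_hom;           % transform to SPM/MNI coordinates
pos_mni = pos_mni(1:3)';         % back to Cartesian coordinates
% the reverse mapping (SPM/MNI -> CTF) uses inv(T), as is done further
% below for the template cortical sheet
```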

### Exercise 1

What is the difference between *mri.transform*, *norm_mri.transform* and *norm_mri.cfg.final*?

If you are not familiar with the concept of homogeneous transformation matrices, please read this page.


### Source model

We read in a template cortical sheet from the FieldTrip template directory. The path to the data depends on where your FieldTrip directory is located.

```matlab
temp_sheet_orig = ft_read_headshape('fieldtrip/template/sourcemodel/cortex_8196.surf.gii');
disp(temp_sheet_orig)
     pnt: [8196x3 single]
     fid: [1x1 struct]
     tri: [16384x3 int32]
    unit: 'mm'
```

The brain surface is represented by points (vertices) that are connected into triangles. The x, y and z coordinates of each vertex are given in the **pnt** field. Each row of the **tri** field contains the indices of three vertices that describe a triangle.
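To make the relation between **pnt** and **tri** concrete, the corner coordinates of a single triangle can be looked up like this (a sketch; the index of the first triangle is chosen arbitrarily):

```matlab
% Look up the corner coordinates of the first triangle (sketch)
idx     = temp_sheet_orig.tri(1, :);     % the three vertex indices of triangle 1
corners = temp_sheet_orig.pnt(idx, :);   % 3x3 matrix, one vertex per row
ntri    = size(temp_sheet_orig.tri, 1);  % number of triangles (16384)
nvert   = size(temp_sheet_orig.pnt, 1);  % number of vertices  (8196)
```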

```matlab
disp(temp_sheet_orig.pnt(1:3,:))
  -10.1924 -104.5533   -4.3301
  -16.6946 -103.9027   -4.2946
   -7.7264 -103.1324    0.8651

disp(temp_sheet_orig.tri(1:3,:))
  3958  3893  3917
  3893  3870  3845
  3917  3893  3845
```

### Exercise 2

What do the values in *temp_sheet_orig.tri* refer to?

The template cortical sheet needs to be transformed from MNI/SPM into CTF coordinates. For this, we use the inverse of the transformation matrix from the earlier section:

```matlab
ind_cortex = ft_transform_geometry(inv(norm_mri.cfg.final), temp_sheet_orig);
```

### Exercise 3

```matlab
x = ind_cortex.pnt(:,1);
y = ind_cortex.pnt(:,2);
z = ind_cortex.pnt(:,3);
figure;
plot3(x, y, z, 'LineStyle', 'none', 'Marker', '.');
xlabel('X'); ylabel('Y'); zlabel('Z');
grid on;

x = temp_sheet_orig.pnt(:,1);
y = temp_sheet_orig.pnt(:,2);
z = temp_sheet_orig.pnt(:,3);
figure;
plot3(x, y, z, 'LineStyle', 'none', 'Marker', '.');
xlabel('X'); ylabel('Y'); zlabel('Z');
grid on;
```

What is the difference in the location of the points between the original (temp_sheet_orig) and the individualized (ind_cortex) sheet?

The locations of the vertices of the cortical sheet are now expressed in the CTF coordinate system, relative to a point between the two ears. However, the sheet also needs to be in the same units as the gradiometer positions. Therefore, we convert the units to 'cm'.

```matlab
ind_cortex = ft_convert_units(ind_cortex, 'cm');
```
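Conceptually, converting from 'mm' to 'cm' amounts to dividing the vertex coordinates by 10 and updating the unit field; **ft_convert_units** takes care of this (and of other geometry fields) for you. A rough sketch of what happens to the vertices:

```matlab
% Conceptual equivalent of ft_convert_units(ind_cortex, 'cm') (sketch only;
% the FieldTrip function also handles other fields consistently)
sheet      = ind_cortex;      % geometry expressed in 'mm'
sheet.pnt  = sheet.pnt / 10;  % 1 cm = 10 mm
sheet.unit = 'cm';
```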

This completes the construction of the source model.

### Head model

#### Segmentation

In this section, we will create an individualized template volume conduction model of the head. We will segment the template MRI with 1mm resolution and create a mesh that describes the inside of the skull. You can download the template MRI from the FieldTrip/template/anatomy directory. The path to the file will depend on where your version of FieldTrip is located.
First, the template anatomical MRI is segmented into a brainmask, i.e. the voxels which belong to the brain tissue are set to 1 and all others are set to 0. We will use **ft_volumesegment** for this.

```matlab
template_mri = ft_read_mri('... fieldtrip/template/anatomy/single_subj_T1_1mm.nii');
template_mri.coordsys = 'spm'; % we know that the template is in spm/mni coordinates
clear mri;                     % to avoid confusion between the template and subject's MRI
```


```matlab
cfg = [];
cfg.output = {'brain'};
template_seg = ft_volumesegment(cfg, template_mri);
disp(template_seg)
          dim: [181 217 181]
    transform: [4x4 double]
     coordsys: 'spm'
         unit: 'mm'
        brain: [181x217x181 logical]
          cfg: [1x1 struct]
```

The *template_seg* structure is similar to the MRI data structure, but contains the following new fields:

- **brain**: binary representation of the brain tissue
- **cfg**: configuration information of the function which created *template_seg*

The procedure of the segmentation does not change the units, coordinate system, nor the size of the volume. You can see this in the first three fields (**dim**, **transform** and **coordsys**) which are the same as the corresponding fields of the input MRI data structure. The field **transform** specifies how each voxel of the **brain** volume can be expressed in the coordinate system defined in the **coordsys** field.

#### Mesh

The surface of the brain approximates the inside of the skull. We construct a triangulated surface description from the binary brainmask of *template_seg* by the **ft_prepare_mesh** function.

```matlab
cfg = [];
cfg.numvertices = 3000;
template_mesh = ft_prepare_mesh(cfg, template_seg);
disp(template_mesh);
     pnt: [3000x3 double]
     tri: [5996x3 double]
    unit: 'mm'
     cfg: [1x1 struct]
```

The *template_mesh* contains the following fields:

- **pnt**: represents the vertices of the surface.
- **tri**: each row defines three vertices (row numbers of the **pnt** field) that form a triangle.
- **unit**: the units in which the vertices are expressed.

Similar to the cortical sheet, we transform the mesh that describes the inside of the skull using the transformation matrix of the normalized individual MRI from the earlier step and convert its units to 'cm'.

```matlab
ind_insideskull = ft_transform_geometry(inv(norm_mri.cfg.final), template_mesh);
ind_insideskull = ft_convert_units(ind_insideskull, 'cm');
```

#### Volume conduction model of the head

We have the right geometry of the brain (an individualized template) from the earlier step, so we can finally create the volume conduction model. We will use the single shell type model which is suitable for MEG data.

```matlab
cfg = [];
cfg.method = 'singleshell';
vol = ft_prepare_headmodel(cfg, ind_insideskull);
disp(vol)
     bnd: [1x1 struct]
    type: 'singleshell'
    unit: 'cm'
     cfg: [1x1 struct]
```

The *vol* data structure contains the following fields:

- **bnd**: contains the geometrical description of the head model.
- **type**: describes the method that was used to create the headmodel.
- **unit**: the unit of measurement of the geometrical data in the **bnd** field.
- **cfg**: configuration of the function that was used to create *vol*.

The **bnd** field describes a surface with vertices and triangles (in the **vol.bnd.pnt** and **vol.bnd.tri** fields) as the geometrical description of the volume conductor.

### Visualization

As the final step of the anatomical processing pipeline it is important to check the alignment and the transformation of all geometrical data. We will plot the sensors together with the source and head model to check whether they are aligned to each other and have the right proportions.

The locations of the MEG channels are defined in the .ds file of the tutorial data. First, we read this information using the **ft_read_sens** function. (The .zip file that can be downloaded from the FieldTrip ftp server also contains the .ds file.) Then, we plot the sensors (MEG channels) with the **ft_plot_sens** function. Finally, we plot the head model and the source model in the same figure as the sensors, using **ft_plot_vol** and **ft_plot_mesh**, respectively.

```matlab
% read in the sensor positions for this subject
sens = ft_read_sens('Subject01.ds');

figure
hold on
ft_plot_mesh(ind_cortex, 'edgecolor', 'none', 'facecolor', 'red');
camlight       % add some light for the 3-D effect
ft_plot_vol(vol);
alpha 0.5      % make it a bit transparent
ft_plot_sens(sens, 'coil', 'true');
axis on
grid on
```

*Figure 2. Head model, sourcemodel and sensors plotted in the same figure*

### Exercise 4

## Processing of functional data

In the following section, we will use the MEG data belonging to Subject01. The raw data can be obtained from ftp://ftp.fieldtriptoolbox.org/pub/fieldtrip/tutorial/Subject01.zip.
For both preprocessing and averaging, we will follow the steps outlined in the Event related averaging tutorial. We will use the trials belonging to the FC and FIC conditions.

### Preprocessing of MEG data

### Averaging and noise-covariance estimation

The function **ft_timelockanalysis** averages all the trials in a data structure and also estimates the noise-covariance. We will use the noise covariance in the calculation of the common filter (i.e. the filter used in the source analysis which is common to both conditions). Therefore, we will append the trials of both conditions and calculate their average and noise covariance. (For a correct noise-covariance estimation it is important that the cfg.demean = 'yes' option was used when **ft_preprocessing** was applied.) When the input data contains trials (and not their average), the noise-covariance is estimated for each trial separately and subsequently averaged over trials. We will also average the trials separately for each condition.
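The per-trial covariance estimation described above can be sketched in plain MATLAB as follows (toy data and hypothetical variable names; not the FieldTrip implementation):

```matlab
% Conceptual per-trial noise-covariance estimation (sketch).
% trials is a cell-array with one [nchan x ntime] matrix per trial,
% assumed baseline-corrected (cfg.demean = 'yes' during preprocessing).
trials = {randn(4, 100), randn(4, 100), randn(4, 100)};  % toy data
ntrial = numel(trials);
nchan  = size(trials{1}, 1);

C = zeros(nchan, nchan);
for k = 1:ntrial
  x = trials{k};
  C = C + (x * x') / (size(x, 2) - 1);  % covariance of this trial
end
C = C / ntrial;                         % average over trials
```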

```matlab
% averaging of all trials and noise covariance estimation
cfg = [];
dataall = ft_appenddata(cfg, dataFC_LP, dataFIC_LP);

cfg = [];
cfg.covariance = 'yes';
dataall_tlck = ft_timelockanalysis(cfg, dataall);

% averaging the conditions
cfg = [];
avgFIC = ft_timelockanalysis(cfg, dataFIC_LP);

cfg = [];
avgFC = ft_timelockanalysis(cfg, dataFC_LP);

disp(dataall_tlck)
       avg: [149x900 double]
       var: [149x900 double]
       dof: [149x900 double]
       cov: [149x149 double]
    dimord: 'chan_time'
      time: [1x900 double]
     label: {149x1 cell}
      grad: [1x1 struct]
       cfg: [1x1 struct]
```

You can see that the average of all data (dataall_tlck) also has a **cov** field that contains the noise covariance matrix.

## Forward solution

The source model, the volume conduction model and the sensor positions are needed for creating the forward solution with **ft_prepare_leadfield**. The sensor positions are contained in the **grad** field of the MEG data. However, **dataall_tlck.grad** also includes the positions of the bad channels and the reference channels; therefore, the channels relevant for the leadfield have to be specified.

```matlab
cfg = [];
cfg.grad = dataall_tlck.grad;               % sensor positions
cfg.channel = {'MEG', '-MLP31', '-MLO12'};  % the used channels
cfg.grid = ind_cortex;                      % source points
cfg.vol = vol;                              % volume conduction model
leadfield = ft_prepare_leadfield(cfg);
disp(leadfield)
          pos: [8196x3 single]
         unit: 'cm'
          tri: [16384x3 int32]
       inside: [8188x1 double]
      outside: [8x1 double]
          cfg: [1x1 struct]
    leadfield: {1x8196 cell}
```

The *leadfield* contains the following fields:

- **pos**: the dipole positions, i.e. the vertices of the cortical sheet
- **unit**: the units in which the source points are defined
- **inside**: the source points that represent the brain
- **outside**: the source points that are outside the brain; the leadfield was not computed at these points
- **leadfield**: the forward solution (an Nchannels x 3 matrix) for each source point

For beamformer source reconstructions we typically scan a regular 3-D grid that spans a “box” encompassing the whole brain. In the 3-D grid, the points inside the source compartment (i.e. the brain) are marked as *inside* and the points outside are marked as *outside* and excluded from the scan.

For the cortical sheet source model the source points are all supposed to be inside the brain compartment. However, you can see that some points are marked as *outside* in the leadfield. These points stick out from the headmodel. We will use the **ft_prepare_sourcemodel** function to move these inward.

If many source points are marked as *outside*, something seems to be wrong with the coregistration. You can check this by visual inspection.

```matlab
ind_cortex_orig = ind_cortex;

cfg = [];
cfg.moveinward = 0.01;
cfg.grid = ind_cortex_orig;
cfg.vol = vol;
ind_cortex = ft_prepare_sourcemodel(cfg);
disp(ind_cortex)
        pos: [8196x3 single]
       unit: 'cm'
        tri: [16384x3 int32]
     inside: [1x8196 double]
    outside: []
        cfg: [1x1 struct]
```

Now the field **outside** is empty, i.e. we do not have any sourcepoints outside the brain surface of the headmodel anymore.

### Exercise 5

We compute the leadfield again with the modified cortical sheet.

```matlab
cfg = [];
cfg.grad = dataall_tlck.grad;               % sensor positions
cfg.channel = {'MEG', '-MLP31', '-MLO12'};  % the used channels
cfg.grid = ind_cortex;                      % source points
cfg.vol = vol;                              % volume conduction model
leadfield = ft_prepare_leadfield(cfg);
disp(leadfield)
          pos: [8196x3 single]
         unit: 'cm'
          tri: [16384x3 int32]
       inside: [1x8196 double]
      outside: []
    leadfield: {1x8196 cell}
          cfg: [1x1 struct]
```

## Inverse solution

The goal of the analysis in this tutorial is to contrast the data between the two conditions in the time window of interest relative to the stimulus. If we later want to compare the two conditions statistically, we have to compute the sources based on an inverse estimation based on both conditions, i.e. the so called 'common filters' approach, and then apply this inverse estimation “filter” separately to each condition to obtain the source estimates per condition. The rationale is that you don't want differences in noise-levels between the conditions to explain differences in the source estimates.

The following computes the filter using the average and the noise-covariance matrix of all trials over conditions, and subsequently computes the source estimate for each condition.

```matlab
% compute the filter
cfg = [];
cfg.grid = leadfield;
cfg.vol = vol;
cfg.method = 'mne';
cfg.mne.lambda = 2;
cfg.mne.keepfilter = 'yes';
cfg.mne.prewhiten = 'yes';
cfg.mne.scalesourcecov = 'yes';
sourceAll = ft_sourceanalysis(cfg, dataall_tlck);
```

The configuration structure for this function contains some general options that are used in all source analysis methods, but also some method-specific options. In order to use the computed filters later, we need to specify the 'keepfilter' option. The 'lambda' value is responsible for scaling the noise-covariance matrix; if it is zero, the noise-covariance estimate is not taken into account during the computation of the inverse solution. The 'prewhiten' and 'scalesourcecov' options modify the leadfield matrix and the source covariance matrix for a more optimal solution. We do not need to specify the noise-covariance matrix in the configuration structure, as it is contained in **dataall_tlck.cov**.

During execution of the source analysis, some text is printed on screen that shows whether the specified options are taken into account:

- using headmodel specified in the configuration
- estimating current density distribution for repetition 1
- using pre-computed leadfields: some of the specified options will not have an effect
- computing the solution where the noise covariance is used for regularisation
- prewhitening the leadfields using the noise covariance
- scaling the source covariance

```matlab
% compute source-analysis
cfg = [];
cfg.method = 'mne';
cfg.grid = leadfield;
cfg.grid.filter = sourceAll.avg.filter;
cfg.vol = vol;
sourceFC = ft_sourceanalysis(cfg, avgFC);

cfg = [];
cfg.method = 'mne';
cfg.grid = leadfield;
cfg.grid.filter = sourceAll.avg.filter;
cfg.vol = vol;
sourceFIC = ft_sourceanalysis(cfg, avgFIC);

disp(sourceFC)
       time: [1x900 double]
        pos: [8196x3 single]
     inside: [8196x1 double]
    outside: [1x0 double]
     method: 'average'
        avg: [1x1 struct]
        cfg: [1x1 struct]

disp(sourceFC.avg)
    mom: {8196x1 cell}
    pow: [8196x900 double]
```

When computing the source estimate, the following text printed on screen shows that the common filter has been used in the computation:

- using pre-computed spatial filter: some of the specified options will not have an effect

The **pos**, **inside** and **outside** fields of the source structure specify the points for which the source was calculated. These fields should be the same as in the *leadfield* and in the *sourcemodel*. The **time** field describes the timepoints for which the sources were estimated and should be the same as in the event-related input data (*avgFC* and *dataFC_LP*). **sourceFC.avg.pow** contains the power estimate at each source point and timepoint.
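For instance, the source point and latency with the strongest estimated power can be looked up from **sourceFC.avg.pow** like this (a sketch, assuming the source structure computed above):

```matlab
% Find the source point and latency with maximal power (sketch)
pow = sourceFC.avg.pow;                      % nsources x ntime
[maxval, linidx]  = max(pow(:));             % maximum over all points/times
[srcidx, timeidx] = ind2sub(size(pow), linidx);
peaktime = sourceFC.time(timeidx);           % latency in seconds
peakpos  = sourceFC.pos(srcidx, :);          % position of that source point
```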

### Exercise 6

## Visualization

You can plot the estimated source strength at a specific time-point with the low-level **ft_plot_mesh** function.

```matlab
bnd.pnt = ind_cortex.pnt;
bnd.tri = ind_cortex.tri;

% get the estimate at the 450th time-point, which is 500 ms after stimulus onset
value = sourceFIC.avg.pow(:,450);
ft_plot_mesh(bnd, 'vertexcolor', value);
```

*Figure 3. The result of the source-reconstruction of the FIC condition plotted at 500 ms*
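Rather than hard-coding the sample index (450), the sample closest to a latency of interest can be looked up from the **time** axis. A sketch, reusing the *bnd* structure from above:

```matlab
% Look up the sample index closest to 500 ms after stimulus onset (sketch)
[~, tidx] = min(abs(sourceFIC.time - 0.5));
value = sourceFIC.avg.pow(:, tidx);
ft_plot_mesh(bnd, 'vertexcolor', value);
```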

### Exercise 7

## Summary and suggested further reading

In this tutorial, we demonstrated the minimum-norm estimate method for source reconstruction using an individualized template. If you would like to compute the source estimation on an individual cortical sheet, you can go to this tutorial. Two other tutorials show in more detail how to construct a volume conduction model for MEG and EEG data.

