M/EEG source imaging

Contents

1. Introduction

1.1. Scenario

Using real experimental data, this tutorial guides you through the procedures from importing raw EEG/MEG data to estimating source currents. All the procedures are conducted with scripts.

1.2. Experimental procedure

In the experiment, subjects performed three tasks:

  • Auditory task,
  • Somatosensory task,
  • Visual task.

In the auditory task, the subjects were instructed to close their eyes and listen to a pure tone presented to both ears. In the somatosensory task, the subjects were instructed to close their eyes while electrical stimulation was applied to the right median nerve. In the visual task, the subjects were instructed to keep their eyes open and observe concentric rings that expanded or contracted at 7 deg/s.

During the tasks, MEG and EEG were simultaneously recorded with a whole-head 400-channel system (210 axial and 190 planar gradiometers; PQ1400RM; Yokogawa Electric Co., Japan) and a whole-head 63-channel system (BrainAmp; Brain Products GmbH, Germany), respectively. The sampling frequency was 1 kHz. Electrooculogram (EOG) signals were also simultaneously recorded and included in the EEG file.

Using a 3-Tesla MR scanner (MAGNETOM Trio 3T; Siemens, Germany), we also conducted an MRI experiment to obtain structural (T1), diffusion, and functional MRIs. The fMRI data were recorded from the subject only during the visual task.

2. Starting tutorial

2.1. Setting environment

This tutorial was developed using MATLAB 2019a with Signal Processing Toolbox on Linux.

(1) Download and unzip tutorial.zip
(2) Copy and paste all the files to $your_dir
(3) Start MATLAB and change current directory to $your_dir/program
(4) Open make_all.m by typing

>> open make_all

2.2. Adding necessary toolboxes to search path

From here on, we execute the commands in make_all.m sequentially from the top.

We set the directories of the necessary toolboxes, VBMEG and SPM8. Please modify these paths for your environment.

>> path_of_VBMEG = '/home/cbi/takeda/analysis/toolbox/vbmeg_20220721';
>> path_of_SPM8 = '/home/cbi-data20/common/software/external/spm/spm8';

Please also modify the FreeSurfer directory written in path_of_VBMEG/vbmegrc for your environment.

We add the toolboxes to the search path.

>> addpath(path_of_VBMEG);
>> vbmeg
>> addpath(path_of_SPM8);

2.3. Setting parameters

We set the parameters for analyzing the EEG and MEG data, as well as the file and directory names.

>> sub = 's006';
>> p = set_parameters(sub);

You can also set sub = 's043'.

By default, analyzed data and figures will be respectively saved in

  • $your_dir/analyzed_data/s006,
  • $your_dir/figure.

3. Modeling brain to define current sources

First of all, we construct a cortical surface model from the T1 image and define current sources on it.

3.1. Importing T1 image

We convert T1 DICOM files to a NIfTI (.nii) file (User manual).

>> import_T1(p);

The following file will be saved.

  • $your_dir/analyzed_data/s006/struct/Subject.nii

3.2. Correcting bias in T1 image

Using SPM8, we correct bias in the T1 image (User manual).

>> correct_bias_in_T1(p);

The following file will be saved.

  • $your_dir/analyzed_data/s006/struct/mSubject.nii

3.3. Segmenting T1 image

Using SPM8, we extract a gray matter image from the T1 image.

>> segment_T1(p);

The following files will be saved.

  • $your_dir/analyzed_data/s006/struct/c1mSubject.nii (Gray matter)
  • $your_dir/analyzed_data/s006/struct/mSubject_seg_sn.mat (Normalization)

The gray matter image will be used in Preparing leadfield, and the normalization file will be used in 4. Importing fMRI information.

3.4. Constructing cortical surface models from T1 images

Using FreeSurfer, we construct a polygon model of a cortical surface from the T1 image (User manual).

>> construct_cortical_surface_model(p);

The generated files will be saved in

  • $your_dir/analyzed_data/s006/freesurfer.

3.5. Constructing brain models from cortical surface models

From the cortical surface model, we construct a brain model, in which the positions of current sources are defined (vb_job_brain.m).

>> construct_brain_model(p);

The following files will be saved.

  • $your_dir/analyzed_data/s006/brain/Subject.brain.mat (Brain model)
  • $your_dir/analyzed_data/s006/brain/Subject.act.mat
  • $your_dir/analyzed_data/s006/brain/Subject*.area.mat

The fMRI information will be stored in Subject.act.mat (4. Importing fMRI information). Subject*.area.mat defines which area each vertex belongs to.

3.6. Checking brain model

We check the constructed brain model by plotting the current sources on the T1 image.

>> check_brain_model(p);

The following figure will be saved.

  • $your_dir/figure/check_brain_model/s006.png

4. Importing fMRI information

4.1. Importing fMRI information

VBMEG estimates source currents from EEG/MEG data using an fMRI result as a prior on the source current variances. The fMRI result is assumed to be obtained using SPM8. We import the fMRI result of the visual task (User manual, vb_job_fmri.m).

>> import_fmri(p);

T-values and percent signal changes will be stored in

  • $your_dir/analyzed_data/s006/brain/Subject.act.mat.

4.2. Checking imported fMRI result

We check the imported fMRI result (vb_plot_cortex.m).

>> check_imported_fmri(p);

Figures will be saved in

  • $your_dir/figure/check_imported_fmri.

5. Source imaging from EEG data

In this section, we introduce the procedures from importing the raw EEG data to the source current estimation.

5.1. Preprocessing

Importing EEG data

We convert BrainAmp files to MAT files (User manual, vb_job_meg.m).

>> import_eeg(p);

The following files will be saved.

  • $your_dir/analyzed_data/s006/eeg/loaded/a1.eeg.mat (Auditory task, run 1)
  • $your_dir/analyzed_data/s006/eeg/loaded/s1.eeg.mat (Somatosensory task, run 1)
  • $your_dir/analyzed_data/s006/eeg/loaded/v1.eeg.mat (Visual task, run 1)
  • $your_dir/analyzed_data/s006/eeg/loaded/v2.eeg.mat (Visual task, run 2)

Filtering EEG data

The recorded EEG data include drift and line noise (60 Hz in the Kansai region of Japan). To remove these, we apply a low-pass filter, down-sampling, and a high-pass filter.

>> filter_eeg(p);

The following files will be saved.

  • $your_dir/analyzed_data/s006/eeg/filtered/a1.eeg.mat (Auditory task, run 1)
  • $your_dir/analyzed_data/s006/eeg/filtered/s1.eeg.mat (Somatosensory task, run 1)
  • $your_dir/analyzed_data/s006/eeg/filtered/v1.eeg.mat (Visual task, run 1)
  • $your_dir/analyzed_data/s006/eeg/filtered/v2.eeg.mat (Visual task, run 2)
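
The actual filtering is implemented inside filter_eeg.m; the cutoff frequencies, filter orders, and variable names below are placeholders, not the tutorial's settings. A minimal sketch of the idea, assuming a [channel x time] matrix sampled at 1 kHz, could look like this:

% Minimal sketch of low-pass filtering, down-sampling, and high-pass filtering.
fs_in  = 1000;                 % original sampling frequency [Hz]
fs_out = 250;                  % sampling frequency after down-sampling [Hz]
eeg    = randn(63, fs_in*10);  % synthetic [channel x time] data, 10 s

% Low-pass filter (e.g. 40 Hz) to suppress line noise before down-sampling
[b_lp, a_lp] = butter(4, 40/(fs_in/2), 'low');
eeg = filtfilt(b_lp, a_lp, eeg')';   % filtfilt filters along columns, so transpose

% Down-sample from 1 kHz to 250 Hz
eeg = resample(eeg', fs_out, fs_in)';

% High-pass filter (e.g. 0.5 Hz) to remove slow drift
[b_hp, a_hp] = butter(2, 0.5/(fs_out/2), 'high');
eeg = filtfilt(b_hp, a_hp, eeg')';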

Making trial data

We segment the continuous EEG data into trials. First, we detect stimulus onsets from the trigger signals included in the EEG data. Then, we segment the EEG data into trials around the detected stimulus onsets.

>> make_trial_eeg(p);

The following files will be saved.

  • $your_dir/analyzed_data/s006/eeg/trial/a1.eeg.mat (Auditory task, run 1)
  • $your_dir/analyzed_data/s006/eeg/trial/s1.eeg.mat (Somatosensory task, run 1)
  • $your_dir/analyzed_data/s006/eeg/trial/v1.eeg.mat (Visual task, run 1)
  • $your_dir/analyzed_data/s006/eeg/trial/v2.eeg.mat (Visual task, run 2)
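
The onset detection and segmentation are implemented inside make_trial_eeg.m. Conceptually, with a hypothetical trigger channel trig and a [channel x time] matrix eeg (all names and window lengths below are placeholders), the epoching could be sketched as:

% Minimal sketch of onset detection and epoching.
fs   = 250;                          % sampling frequency after filtering [Hz]
eeg  = randn(63, fs*60);             % synthetic [channel x time] data
trig = zeros(1, fs*60);              % synthetic trigger channel
trig([5 15 25 35 45]*fs) = 1;        % stimulus onsets

onset_ix = find(diff(trig > 0.5) == 1) + 1;   % rising edges of the trigger
pre  = round(0.1*fs);                % samples before the onset
post = round(0.5*fs);                % samples after the onset

ntrial = numel(onset_ix);
trial  = zeros(size(eeg,1), pre+post+1, ntrial);   % [channel x time x trial]
for t = 1:ntrial
    trial(:, :, t) = eeg(:, onset_ix(t)-pre : onset_ix(t)+post);
end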

Detecting blinks from EOG data

Based on the power of the EOG data, we detect trials in which the subject blinked during exposure to the visual stimulus. The detected trials will be rejected in Rejecting bad trials.

>> detect_blink_from_eog(p);

The blink information will be stored in EEGinfo in the above trial files.
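
detect_blink_from_eog.m bases the detection on EOG power. A minimal sketch of the idea, assuming epoched EOG data of size [channel x time x trial] (the threshold and variable names are placeholders, not the tutorial's criterion):

% Minimal sketch of power-based blink detection.
eog = randn(2, 151, 100);                     % synthetic [EOG channel x time x trial]
eog(:, :, [3 17]) = eog(:, :, [3 17]) + 50;   % simulate blinks in trials 3 and 17

power_per_trial = squeeze(mean(mean(eog.^2, 1), 2));            % mean power over channels and time
threshold   = mean(power_per_trial) + 3*std(power_per_trial);   % placeholder criterion
blink_trial = find(power_per_trial > threshold);                % trials to be rejected later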

Removing EOG components

We regress out EOG components from the EEG data. For each channel, we construct a linear model that predicts the EEG data from the EOG data and subtract the prediction from the EEG data.

>> regress_out_eog_from_eeg(p);

The following files will be saved.

  • $your_dir/analyzed_data/s006/eeg/trial/r_a1.eeg.mat (Auditory task, run 1)
  • $your_dir/analyzed_data/s006/eeg/trial/r_s1.eeg.mat (Somatosensory task, run 1)
  • $your_dir/analyzed_data/s006/eeg/trial/r_v1.eeg.mat (Visual task, run 1)
  • $your_dir/analyzed_data/s006/eeg/trial/r_v2.eeg.mat (Visual task, run 2)
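
The regression itself is done inside regress_out_eog_from_eeg.m. A minimal sketch with an ordinary least-squares fit (synthetic data; the actual implementation may differ in detail):

% Minimal sketch of regressing out EOG from every EEG channel at once.
T   = 10000;
eog = randn(2, T);                    % [EOG channel x time]
W   = randn(63, 2);                   % mixing of EOG into EEG, used only to make synthetic data
eeg = randn(63, T) + W*eog;           % synthetic EEG contaminated by EOG

X = [eog; ones(1, T)]';               % regressors: EOG channels plus a constant term
B = X \ eeg';                         % least-squares weights for all channels
eeg_clean = eeg - (X*B)';             % subtract the EOG prediction from each channel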

Checking the result of removing EOG components

To check whether the EOG components were appropriately removed, we compare the spatial distributions of signal power between the original and the EOG-removed EEG data (vb_plot_sensor_2d.m).

>> check_removing_eog_from_eeg(p);

The following figure will be saved.

  • $your_dir/figure/check_removing_eog_from_eeg/s006.png

Correcting baseline

For each channel and trial, we set the average of the prestimulus EEG to 0.

>> correct_baseline_eeg(p);

The following files will be saved.

  • $your_dir/analyzed_data/s006/eeg/trial/br_a1.eeg.mat (Auditory task, run 1)
  • $your_dir/analyzed_data/s006/eeg/trial/br_s1.eeg.mat (Somatosensory task, run 1)
  • $your_dir/analyzed_data/s006/eeg/trial/br_v1.eeg.mat (Visual task, run 1)
  • $your_dir/analyzed_data/s006/eeg/trial/br_v2.eeg.mat (Visual task, run 2)
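
Baseline correction here simply subtracts, for each channel and trial, the mean of the prestimulus samples. A minimal sketch, assuming trial data of size [channel x time x trial] whose first pre samples are prestimulus (the layout is an assumption):

% Minimal sketch of baseline correction.
trial = randn(63, 151, 100) + 5;               % synthetic data with a constant offset
pre   = 25;                                    % number of prestimulus samples

baseline = mean(trial(:, 1:pre, :), 2);        % [channel x 1 x trial]
trial    = trial - baseline;                   % implicit expansion; prestimulus mean is now ~0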

Combining trials across runs

To handle all the trials collectively, we combine the trial information across the runs.

>> combine_trial_eeg(p);

The following file will be saved.

  • $your_dir/analyzed_data/s006/eeg/trial/all.info.mat

This file contains information (e.g., file paths) that serves as a shortcut to the following files.

  • br_a1.eeg.mat
  • br_s1.eeg.mat
  • br_v1.eeg.mat
  • br_v2.eeg.mat

Applying independent component analysis (ICA)

Using EEGLAB, we apply ICA to the EEG data.

>> apply_ica_eeg(p);

The following file will be saved.

  • $your_dir/analyzed_data/s006/eeg/ica/apply_ica_eeg.mat

Classifying ICs into brain activities and noise

We automatically classify each IC into brain activities and noise based on its kurtosis, entropy, and stimulus-triggered average (Barbati et al., 2004).

>> classify_ic_eeg(p);

The following file will be saved.

  • $your_dir/analyzed_data/s006/eeg/ica/classify_ic_eeg.mat
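
classify_ic_eeg.m scores each IC by several statistics following Barbati et al. (2004). As a rough illustration of one criterion, extreme kurtosis, computed without extra toolboxes (the threshold is a placeholder, not the tutorial's rule):

% Minimal sketch of flagging ICs with extreme kurtosis.
ic = randn(20, 10000);                          % synthetic [IC x time] activations
ic(4, 1:50:end) = 20;                           % make IC 4 spiky (high kurtosis)

m4 = mean((ic - mean(ic, 2)).^4, 2);            % fourth central moment per IC
v  = var(ic, 0, 2);                             % variance per IC
k  = m4 ./ v.^2;                                % kurtosis (about 3 for a Gaussian signal)

z = (k - mean(k)) / std(k);                     % standardize across ICs
noise_ic = find(abs(z) > 2);                    % placeholder threshold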

Showing ICs

For each IC, we show its spatial pattern, stimulus-triggered average, and power spectrum. For the ICs classified as noise, the stimulus-triggered averages and power spectra are shown as red lines.

>> show_ic_eeg(p);

Figures will be saved in

  • $your_dir/figure/show_ic_eeg/s006.

Removing noise ICs

We remove the ICs classified as noise from the EEG data (Jung et al., 2001).

>> remove_noise_ic_eeg(p);

The following files will be saved.

  • $your_dir/analyzed_data/s006/eeg/trial/i_all.info.mat (Shortcut to the following files)
  • $your_dir/analyzed_data/s006/eeg/trial/ibr_a1.eeg.mat (Auditory task, run 1)
  • $your_dir/analyzed_data/s006/eeg/trial/ibr_s1.eeg.mat (Somatosensory task, run 1)
  • $your_dir/analyzed_data/s006/eeg/trial/ibr_v1.eeg.mat (Visual task, run 1)
  • $your_dir/analyzed_data/s006/eeg/trial/ibr_v2.eeg.mat (Visual task, run 2)
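
Removing the noise ICs amounts to back-projecting only the ICs kept as brain activity (Jung et al., 2001). A minimal sketch, with hypothetical unmixing and mixing matrices W and A from the ICA step:

% Minimal sketch of removing noise ICs by reconstructing from the kept ICs.
nch = 63;  T = 10000;
eeg = randn(nch, T);                  % synthetic [channel x time] data
W   = randn(nch, nch);                % unmixing matrix (placeholder for the ICA result)
A   = pinv(W);                        % mixing matrix
noise_ic = [3 10];                    % ICs classified as noise

S = W * eeg;                          % IC activations
keep = setdiff(1:nch, noise_ic);      % ICs classified as brain activity
eeg_clean = A(:, keep) * S(keep, :);  % EEG reconstructed without the noise ICs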

Rejecting bad channels

We detect noisy channels based on the amplitudes of the EEG data and reject them. The rejected channels will not be used in the following analyses.

>> reject_channel_eeg(p);

The following file will be saved.

  • $your_dir/analyzed_data/s006/eeg/trial/ri_all.info.mat
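
As a rough illustration of an amplitude-based criterion (reject_channel_eeg.m may use a different rule; the threshold is a placeholder):

% Minimal sketch of amplitude-based bad-channel detection.
trial = randn(63, 151, 100);                         % synthetic [channel x time x trial] data
trial(7, :, :) = trial(7, :, :) * 20;                % make channel 7 abnormally large

amp = squeeze(max(max(abs(trial), [], 2), [], 3));   % peak amplitude per channel
z   = (amp - mean(amp)) / std(amp);                  % standardize across channels
bad_channel = find(z > 3);                           % placeholder threshold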

Rejecting bad trials

We reject the trials in which the subject blinked during the exposure to the visual stimulus. The rejected trials will not be used in the following analyses.

>> reject_trial_eeg(p);

The following file will be saved.

  • $your_dir/analyzed_data/s006/eeg/trial/rri_all.info.mat

Taking common average

We take the common average; that is, we set the average of the EEG data across channels to 0 at each time point.

>> common_average_eeg(p);

The following files will be saved.

  • $your_dir/analyzed_data/s006/eeg/trial/crri_all.info.mat (Shortcut to the following files)
  • $your_dir/analyzed_data/s006/eeg/trial/cibr_a1.eeg.mat (Auditory task, run 1)
  • $your_dir/analyzed_data/s006/eeg/trial/cibr_s1.eeg.mat (Somatosensory task, run 1)
  • $your_dir/analyzed_data/s006/eeg/trial/cibr_v1.eeg.mat (Visual task, run 1)
  • $your_dir/analyzed_data/s006/eeg/trial/cibr_v2.eeg.mat (Visual task, run 2)
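
Common-average referencing subtracts the mean across channels at every time point. A minimal sketch for a [channel x time] matrix:

% Minimal sketch of common-average referencing.
eeg = randn(63, 10000);               % synthetic [channel x time] data
eeg = eeg - mean(eeg, 1);             % the channel average at each time point is now 0

The same operation is later applied to the leadfield matrix so that the forward model stays consistent with the re-referenced data (see Preparing leadfield).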

Separating trials into tasks

We separate the combined trials into the three tasks: the auditory, somatosensory, and visual tasks.

>> separate_eeg(p);

The following files will be saved.

  • $your_dir/analyzed_data/s006/eeg/trial/crri_a.info.mat (Auditory task)
  • $your_dir/analyzed_data/s006/eeg/trial/crri_s.info.mat (Somatosensory task)
  • $your_dir/analyzed_data/s006/eeg/trial/crri_v.info.mat (Visual task)

Showing preprocessed EEG data

For each task, we show the processed EEG data averaged across the trials (vb_plot_sensor_2d.m).

>> show_trial_average_eeg(p);

Figures will be saved in

  • $your_dir/figure/show_trial_average_eeg/s006.

5.2. Source imaging

Preparing leadfield

First, we construct a 3-shell (cerebrospinal fluid, skull, and scalp) surface model (User manual, vb_job_head_3shell.m). Then, we make the leadfield matrix based on the 3-shell model (User manual, vb_job_leadfield.m). Finally, we take the common average of the leadfield matrix.

>> prepare_leadfield_eeg(p);

The following files will be saved.

  • $your_dir/analyzed_data/s006/eeg/leadfield/Subject.head.mat (3-shell model)
  • $your_dir/analyzed_data/s006/eeg/leadfield/Subject.basis.mat (Leadfield)
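
The leadfield matrix linearly maps source currents to sensor measurements: writing the [sensor x time] data as B, the [source x time] currents as J, and the leadfield as L, the forward model is B = L*J + noise. VBMEG inverts this relation with its hierarchical Bayesian method; the sketch below only illustrates the forward relation and the common-average step, not the tutorial's estimator (all sizes are placeholders):

% Minimal sketch of the linear forward model behind the leadfield.
nsensor = 63;  nsource = 2000;  T = 300;
L = randn(nsensor, nsource);          % leadfield: sensor gain of each current source (placeholder)
J = zeros(nsource, T);                % source currents
J(100, :) = sin(2*pi*10*(1:T)/250);   % one active source oscillating at 10 Hz
B = L*J + 0.1*randn(nsensor, T);      % sensor data = leadfield * currents + noise

% Because the EEG data were common-average referenced, the same re-referencing
% is applied to the leadfield:
L_car = L - mean(L, 1);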

Estimating source currents

First, we estimate the current variance by integrating the fMRI information and the EEG data (User manual, vb_job_vb.m). Then, we estimate source currents using the current variance as a constraint (User manual, vb_job_current.m).

>> estimate_source_current_eeg(p);

The following files will be saved.

  • $your_dir/analyzed_data/s006/eeg/current/a.bayes.mat (Current variance)
  • $your_dir/analyzed_data/s006/eeg/current/a.curr.mat (Source current)
  • $your_dir/analyzed_data/s006/eeg/current/s.bayes.mat
  • $your_dir/analyzed_data/s006/eeg/current/s.curr.mat
  • $your_dir/analyzed_data/s006/eeg/current/v.bayes.mat
  • $your_dir/analyzed_data/s006/eeg/current/v.curr.mat

Showing estimated source currents

For each task, we show the estimated source currents averaged across the trials (vb_plot_cortex.m).

>> show_source_current_eeg(p);

Figures will be saved in

  • $your_dir/figure/show_source_current_eeg/s006.

6. Source imaging from MEG data

In this section, we introduce the procedures from importing the raw MEG data to the source current estimation.

6.1. Preprocessing

Importing MEG data

We convert Yokogawa MEG files to MAT files (User manual, vb_job_meg.m).

>> import_meg(p);

The following files will be saved.

  • $your_dir/analyzed_data/s006/meg/loaded/a1.meg.mat (Auditory task, run 1)
  • $your_dir/analyzed_data/s006/meg/loaded/s1.meg.mat (Somatosensory task, run 1)
  • $your_dir/analyzed_data/s006/meg/loaded/v1.meg.mat (Visual task, run 1)
  • $your_dir/analyzed_data/s006/meg/loaded/v2.meg.mat (Visual task, run 2)

Removing environmental noise

The MEG data include large environmental noise from air conditioners and other sources. Using the reference sensor data, we remove this environmental noise from the MEG data (de Cheveigné and Simon, 2008).

>> remove_env_noise_meg(p);

The following files will be saved.

  • $your_dir/analyzed_data/s006/meg/denoised/a1.meg.mat (Auditory task, run 1)
  • $your_dir/analyzed_data/s006/meg/denoised/s1.meg.mat (Somatosensory task, run 1)
  • $your_dir/analyzed_data/s006/meg/denoised/v1.meg.mat (Visual task, run 1)
  • $your_dir/analyzed_data/s006/meg/denoised/v2.meg.mat (Visual task, run 2)
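
remove_env_noise_meg.m estimates the environmental noise from the reference sensors and subtracts it. The actual method (time-shifted PCA; de Cheveigné and Simon, 2008) is more elaborate than this, but a minimal regression-based sketch with a few time shifts conveys the idea (shift range and names are placeholders):

% Minimal sketch of reference-sensor denoising by time-shifted regression.
T   = 20000;
ref = randn(3, T);                          % reference (noise) sensor data
meg = randn(210, T) + randn(210, 3)*ref;    % synthetic MEG contaminated by the same noise

shifts = -5:5;                              % small set of time shifts [samples]
nref = size(ref, 1);
X = zeros(T, numel(shifts)*nref);
for k = 1:numel(shifts)
    X(:, (k-1)*nref + (1:nref)) = circshift(ref, [0, shifts(k)])';
end
B = X \ meg';                               % least-squares fit of the noise subspace
meg_clean = meg - (X*B)';                   % subtract the estimated environmental noise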

Checking the result of removing environmental noise

To check whether the environmental noise was appropriately removed, we compare the power spectra between the original and the noise-removed MEG data.

>> check_denoised_meg(p);

The following figure will be saved.

  • $your_dir/figure/check_denoised_meg/s006.png

Filtering MEG data

The recorded MEG data include drift and line noise (60 Hz in the Kansai region of Japan). To remove these, we apply a low-pass filter, down-sampling, and a high-pass filter.

>> filter_meg(p);

The following files will be saved.

  • $your_dir/analyzed_data/s006/meg/filtered/a1.meg.mat (Auditory task, run 1)
  • $your_dir/analyzed_data/s006/meg/filtered/s1.meg.mat (Somatosensory task, run 1)
  • $your_dir/analyzed_data/s006/meg/filtered/v1.meg.mat (Visual task, run 1)
  • $your_dir/analyzed_data/s006/meg/filtered/v2.meg.mat (Visual task, run 2)

Making trial data

We segment the continuous MEG data into trials. First, we detect stimulus onsets from the trigger signals included in the MEG data. Then, we segment the MEG data into trials around the detected stimulus onsets.

>> make_trial_meg(p);

The following files will be saved.

  • $your_dir/analyzed_data/s006/meg/trial/a1.meg.mat (Auditory task, run 1)
  • $your_dir/analyzed_data/s006/meg/trial/s1.meg.mat (Somatosensory task, run 1)
  • $your_dir/analyzed_data/s006/meg/trial/v1.meg.mat (Visual task, run 1)
  • $your_dir/analyzed_data/s006/meg/trial/v2.meg.mat (Visual task, run 2)

Importing blink information from EEG data

In the EEG analyses, we detected the trials in which the subject blinked during exposure to the visual stimulus (Detecting blinks from EOG data). We import this blink information into the MEG data so that the blink trials can be rejected in Rejecting bad trials.

>> import_blink_from_eeg(p);

The blink information will be stored in MEGinfo in the above trial data.

Removing EOG components

We regress out EOG components from the MEG data. For each channel, we construct a linear model that predicts the MEG data from the EOG data and subtract the prediction from the MEG data.

>> regress_out_eog_from_meg(p);

The following files will be saved.

  • $your_dir/analyzed_data/s006/meg/trial/r_a1.meg.mat (Auditory task, run 1)
  • $your_dir/analyzed_data/s006/meg/trial/r_s1.meg.mat (Somatosensory task, run 1)
  • $your_dir/analyzed_data/s006/meg/trial/r_v1.meg.mat (Visual task, run 1)
  • $your_dir/analyzed_data/s006/meg/trial/r_v2.meg.mat (Visual task, run 2)

Checking the result of removing EOG components

To check whether the EOG components were appropriately removed, we compare the spatial distributions of signal power between the original and the EOG-removed MEG data (vb_plot_sensor_2d.m).

>> check_removing_eog_from_meg(p);

The following figure will be saved.

  • $your_dir/figure/check_removing_eog_from_meg/s006.png

Correcting baseline

For each channel and trial, we set the average of the prestimulus MEG to 0.

>> correct_baseline_meg(p);

The following files will be saved.

  • $your_dir/analyzed_data/s006/meg/trial/br_a1.meg.mat (Auditory task, run 1)
  • $your_dir/analyzed_data/s006/meg/trial/br_s1.meg.mat (Somatosensory task, run 1)
  • $your_dir/analyzed_data/s006/meg/trial/br_v1.meg.mat (Visual task, run 1)
  • $your_dir/analyzed_data/s006/meg/trial/br_v2.meg.mat (Visual task, run 2)

Combining trials across runs

To handle all the trials collectively, we combine the trial information across the runs.

>> combine_trial_meg(p);

The following file will be saved.

  • $your_dir/analyzed_data/s006/meg/trial/all.info.mat

This file contains information (e.g., file paths) that serves as a shortcut to the following files.

  • br_a1.meg.mat
  • br_s1.meg.mat
  • br_v1.meg.mat
  • br_v2.meg.mat

Applying ICA

Using EEGLAB, we apply ICA to the MEG data.

>> apply_ica_meg(p);

The following file will be saved.

  • $your_dir/analyzed_data/s006/meg/ica/apply_ica_meg.mat

Classifying ICs into brain activities and noise

We automatically classify each IC into brain activities and noise based on its kurtosis, entropy, and stimulus-triggered average (Barbati et al., 2004).

>> classify_ic_meg(p);

The following file will be saved.

  • $your_dir/analyzed_data/s006/meg/ica/classify_ic_meg.mat

Showing ICs

For each IC, we show its spatial pattern, stimulus-triggered average, and power spectrum. For the ICs classified as noise, the stimulus-triggered averages and power spectra are shown as red lines.

>> show_ic_meg(p);

Figures will be saved in

  • $your_dir/figure/show_ic_meg/s006.

Removing noise ICs

We remove the ICs classified as noise from the MEG data (Jung et al., 2001).

>> remove_noise_ic_meg(p);

The following files will be saved.

  • $your_dir/analyzed_data/s006/meg/trial/i_all.info.mat (Shortcut to the following files)
  • $your_dir/analyzed_data/s006/meg/trial/ibr_a1.meg.mat (Auditory task, run 1)
  • $your_dir/analyzed_data/s006/meg/trial/ibr_s1.meg.mat (Somatosensory task, run 1)
  • $your_dir/analyzed_data/s006/meg/trial/ibr_v1.meg.mat (Visual task, run 1)
  • $your_dir/analyzed_data/s006/meg/trial/ibr_v2.meg.mat (Visual task, run 2)

Rejecting bad channels

We detect noisy channels based on the amplitudes of the MEG data and reject them. The rejected channels will not be used in the following analyses.

>> reject_channel_meg(p);

The following file will be saved.

  • $your_dir/analyzed_data/s006/meg/trial/ri_all.info.mat

Rejecting bad trials

We reject the trials in which the subject blinked during the exposure to the visual stimulus. The rejected trials will not be used in the following analyses.

>> reject_trial_meg(p);

The following file will be saved.

  • $your_dir/analyzed_data/s006/meg/trial/rri_all.info.mat

Separating trials into tasks

We separate the combined trials into the three tasks: the auditory, somatosensory, and visual tasks.

>> separate_meg(p);

The following files will be saved.

  • $your_dir/analyzed_data/s006/meg/trial/rri_a.info.mat (Auditory task)
  • $your_dir/analyzed_data/s006/meg/trial/rri_s.info.mat (Somatosensory task)
  • $your_dir/analyzed_data/s006/meg/trial/rri_v.info.mat (Visual task)

Showing preprocessed MEG data

For each task, we show the processed MEG data averaged across the trials (vb_plot_sensor_2d.m).

>> show_trial_average_meg(p);

Figures will be saved in

  • $your_dir/figure/show_trial_average_meg/s006.

6.2. Source imaging

Preparing leadfield

First, we construct a 1-shell (cerebrospinal fluid) surface model (User manual, vb_job_head_3shell.m). Then, we make the leadfield matrix based on the 1-shell model (User manual, vb_job_leadfield.m).

>> prepare_leadfield_meg(p);

The following files will be saved.

  • $your_dir/analyzed_data/s006/meg/leadfield/Subject.head.mat (1-shell model)
  • $your_dir/analyzed_data/s006/meg/leadfield/Subject.basis.mat (Leadfield)

Estimating source currents

First, we estimate the current variance by integrating the fMRI information and the MEG data (User manual, vb_job_vb.m). Then, we estimate source currents using the current variance as a constraint (User manual, vb_job_current.m).

>> estimate_source_current_meg(p);

The following files will be saved.

  • $your_dir/analyzed_data/s006/meg/current/a.bayes.mat (Current variance)
  • $your_dir/analyzed_data/s006/meg/current/a.curr.mat (Source current)
  • $your_dir/analyzed_data/s006/meg/current/s.bayes.mat
  • $your_dir/analyzed_data/s006/meg/current/s.curr.mat
  • $your_dir/analyzed_data/s006/meg/current/v.bayes.mat
  • $your_dir/analyzed_data/s006/meg/current/v.curr.mat

Showing estimated source currents

For each task, we show the estimated source currents averaged across the trials (vb_plot_cortex.m).

>> show_source_current_meg(p);

Figures will be saved in

  • $your_dir/figure/show_source_current_meg/s006.

7. Source imaging from both EEG and MEG data

Using VBMEG, we can estimate source currents from both EEG and MEG data.

7.1. Estimating source currents

From the preprocessed EEG and MEG data and their leadfield matrices, we estimate the current variance. Then, we estimate source currents using the current variance as a constraint.

>> estimate_source_current_meeg(p);

The following files will be saved.

  • $your_dir/analyzed_data/s006/meeg/current/a.bayes.mat (Current variance)
  • $your_dir/analyzed_data/s006/meeg/current/a.curr.mat (Source current)
  • $your_dir/analyzed_data/s006/meeg/current/s.bayes.mat
  • $your_dir/analyzed_data/s006/meeg/current/s.curr.mat
  • $your_dir/analyzed_data/s006/meeg/current/v.bayes.mat
  • $your_dir/analyzed_data/s006/meeg/current/v.curr.mat

7.2. Showing estimated source currents

For each task, we show the estimated source currents averaged across the trials (vb_plot_cortex.m).

>> show_source_current_meeg(p);

Figures will be saved in

  • $your_dir/figure/show_source_current_meeg/s006.

Congratulations! You have successfully achieved the goal of this tutorial.