
EEG Data Processing and Classification with g.BSanalyze Under MATLAB

By Günter Edlinger and Christoph Guger, g.tec Medical Engineering GmbH

Advances in the acquisition and analysis of biosignals such as electroencephalograms (EEGs) and electrocorticograms (ECoGs) are profoundly improving brain wave research, creating opportunities to bypass severed nerve pathways to control prostheses and allow movement of paralyzed body parts. This article describes how products from g.tec, which were built on MATLAB® and Simulink®, can be used to perform this multimodal acquisition and analysis.

In 1929, Hans Berger performed the first noninvasive measurements of bioelectrical activity in the brain. During the last seven decades, electroencephalography, or EEG, has been established as a tool for monitoring brain dynamics and brain function.

g.tec, a MathWorks Connections Partner based in Graz, Austria, develops hardware and software for biosignal processing. They have used distinct EEG patterns acquired during feedback experiments to develop a brain-computer-interface (BCI). A sequence of processing steps can be performed with g.BSanalyze software from g.tec to implement a BCI. This sequence involves displaying and training a person on specific visual stimuli, recording an EEG, and analyzing the EEG using artifact control and feature extraction by filtering common spatial patterns. Data classification is then performed via a linear discriminant analysis. After a training period, the subject is able to control a horizontal bar on the computer screen with an accuracy of nearly 100% simply by imagining the movement of a limb.

EEG Measurement and Applications

An EEG is measured noninvasively using small electrodes attached to the surface of the scalp. The number of electrodes can vary from one to 256. The electrodes are placed at predefined positions according to the international 10/20 system or variants of that system. The weak electrical activity detected by the electrodes ranges from 5 to 100 µV, and the frequency range of interest is between 1 and 40 Hz.

The EEG recording can provide clues about the physical and mental state of the subject. For example, an EEG that shows alpha waves with high amplitudes over the occipital area, a specific part of the brain, indicates that the subject is relaxed and has his eyes closed. If the subject opens his eyes, the alpha waves disappear or desynchronize. In addition, sleep researchers use whole-night sleep recordings to investigate and classify different sleep stages. EEGs in epileptic patients can also help in localizing epileptic activity in the brain.

The EEG is typically applied in a stimulus-response scenario, measuring the brain's response to cognitive exercises or auditory, tactile, or visual stimuli. Depending on the kind of stimuli and further data processing steps, either phase-locked signals ("evoked potentials") or non-phase locked signals ("event-related synchronization/desynchronization, ERS/ERD") can be investigated.

Phase-locked signals are an effective means of performing diagnostics. These signals are measured after visual, auditory, or tactile stimuli are presented to a patient. By measuring these signals, you can confirm whether specific brain pathways are working properly. The phase-locked signal in the brain should react consistently each time the subject is exposed to a particular stimulus. For example, if you present a tone to the subject and find that the shape of the resulting evoked potential differs from the normal case, this indicates a problem with the subject's auditory system.

Non-phase-locked changes in the EEG can be observed in hand movement experiments, and even in experiments in which the subject only imagines a hand movement.

Evoked Potential Response Versus Event-Related EEG

EEG signal processing occurs at different frequencies. For example, if the subject is moving his hand, this modifies the alpha frequency range. However, it is often difficult to identify which frequency is being affected, because a great deal of background noise is present in the EEG signal. A crucial point in EEG signal processing is therefore the signal-to-noise ratio. Depending on the specific experimental question, the definition of signal and noise changes: not only technical noise (amplifier noise, capacitive or inductive effects) but also the activity of the brain itself can be seen as noise superimposed on the signal of interest. To solve this dilemma, researchers average over many repetitions of the same movement so that the noise is attenuated and the relevant signal can be identified.

Movement-Related EEG Phenomena

It is a well-known phenomenon that EEG rhythmic activities observed over motor and related areas of the brain disappear approximately one second prior to the onset of physical movement. Therefore, we can predict from the spatio-temporal EEG pattern that, for example, a hand movement will be performed. In addition, various groups of researchers have demonstrated that a desynchronized EEG occurs when a person imagines movements (motor imagery). The activation of hand-area neurons, either by the preparation for a real movement or by the imagination of a movement, is accompanied by a circumscribed ERD over the hand area. Depending on the type of motor imagery, different EEG patterns can be obtained.

Figure 1 displays topographical maps for a desynchronized EEG for an imagined left-hand movement (panel A) and an imagined foot movement (panel B) displayed over the cortical surface. Panel A shows that the EEG is desynchronized over the contra-lateral area. Dark red spots indicate maximum desynchronization.



Panel A, imagery of a left-hand movement.



Panel B, imagery of a foot movement.

The phenomenon of EEG desynchronization/synchronization and the resulting distinct EEG patterns can be used to predict voluntary movements of subjects. This opens a new communication channel based on the synchronized and desynchronized EEG. Such a brain-computer-interface has been investigated by several research teams in the US, Germany, and Austria. This interface is typically used to develop a simple binary response for the control of a device or a cursor on a computer screen.


The goal of this article is to explain the processing steps that are used to evoke, acquire, process, and discriminate EEG patterns for the development of a brain-computer-interface. In particular, the following processing steps will be explained:

  • EEG recording
  • Experimental paradigm
  • Data triggering and data class assignment
  • Artifact control
  • Feature extraction by means of spatial filtering (common spatial patterns)
  • Data classification

You can also take one further step (outside the scope of this article) and use the obtained data to analyze an EEG in real time and provide feedback to the subject, allowing the control of a cursor on a computer screen.

EEG Recording

For this experiment, a total of 27 electrodes (overlaying the sensorimotor area) were spaced equally and placed on a subject's head as indicated in Figure 2. The distance between the electrodes was approximately 20 mm. All recordings were referenced against the right ear and a ground electrode was attached to the subject's forehead. Vertical and horizontal eye movements were detected by placing electrodes medially above and laterally below the right eye.



Figure 2: Montage of a 27-channel electrode grid. The electrodes are placed along the subject's head. This figure is a view of the electrode grid from above. The basis for the electrode positions is the extended international 10/20 system.

The amplified EEG was band-pass filtered by an analog filter between 0.5 and 50 Hz and sampled at 128 Hz with a resolution of 12 bits. A notch filter was used to suppress the 50 Hz power-line interference.
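Although the filtering described above was performed in analog hardware, the same front end can be sketched digitally. The following snippet is an illustrative approximation, not g.BSanalyze code; the Butterworth order and the notch quality factor are assumptions:

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

FS = 128  # sampling rate in Hz, as in the recording described above

def preprocess(eeg, fs=FS):
    """Band-pass 0.5-50 Hz and notch out 50 Hz power-line interference.

    eeg : array of shape (channels, samples)
    """
    b_bp, a_bp = butter(4, [0.5, 50.0], btype="bandpass", fs=fs)  # order 4 assumed
    b_n, a_n = iirnotch(50.0, Q=30.0, fs=fs)                      # Q factor assumed
    out = filtfilt(b_bp, a_bp, eeg, axis=-1)  # zero-phase band-pass
    return filtfilt(b_n, a_n, out, axis=-1)   # suppress 50 Hz mains
```

Zero-phase filtering (filtfilt) is used here so that the digital sketch does not introduce the phase distortion an online causal filter would.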

Experimental Paradigm



Figure 3: Timing of one trial of the experiment with feedback.

The experimental procedure consisted of several sessions without feedback. Each session was divided into four experimental runs of 40 trials with randomized directions of the cues (20 left and 20 right). The experiments lasted approximately one hour (including electrode application, breaks between runs, and experimental preparation). The subject was seated in a comfortable armchair 1.5 meters in front of a computer monitor and was instructed not to move, to keep both arms and hands relaxed, and to keep his eyes focused on the center of the monitor throughout the experiment.

The experimental paradigm was realized with g.STIMunit, a real-time visual, auditory, and tactile stimulation unit, and started with the display of a fixation cross shown in the center of a monitor. After two seconds, a warning stimulus was given in the form of a beep. From second 3 until second 4.25, an arrow (cue stimulus) pointing to the left or right was shown on the monitor. The subject was instructed to imagine a left- or right-hand movement depending on the direction of the arrow. Between seconds 4.25 and 8, the EEG was classified online and the classification result was translated into a feedback stimulus in the form of a horizontal bar that appeared in the center of the monitor. If the subject imagined a left-hand movement, the bar extended to the left, as shown in panel A of Figure 1, and vice versa in panel B (assuming correct classification). The subject's task was to extend the bar toward the left or right boundary of the monitor, as indicated by the arrow cue. One trial lasted 8 seconds, and the time between two trials was randomized in a range of 0.5 to 2.5 seconds to avoid adaptation. (See Figure 3 for the timing of the paradigm.)

GUI-Based Data Processing and Batch Processing

All commands and processing steps used in the experiments are available via a graphical user interface (GUI). However, for the processing of multiple data sets within a study, a batch processing mode is also available. All necessary commands (for example, the command to classify EEG data) are written to a diary so that they will be available for further use. The necessary commands are displayed in the following sections along with the GUIs.

Data Triggering

In order to align the EEG data with the experimental paradigm, a TTL trigger was generated by the paradigm at second 2. This trigger indicates a new trial and is used to cut 8-second segments of EEG data out of the continuous data stream. Figure 4 displays the "Trigger" dialog and the triggered data. Channel 28 (red) is the trigger channel, which is used for triggering the data.





Figure 4: Left panel: "Trigger" dialog displaying channel 28 as the defined trigger channel. Right panel: 8-second EEG traces and trigger channel (red) after triggering was performed.

Batch Processing



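The batch commands for this step appear only as a code figure in the original article. As an illustrative stand-in (not the g.BSanalyze API), trigger-based epoching can be sketched in NumPy; the trigger threshold and the rising-edge convention are assumptions:

```python
import numpy as np

FS = 128            # sampling rate (Hz)
TRIAL_LEN = 8 * FS  # each trial is 8 seconds long

def epoch_by_trigger(data, trigger, threshold=0.5):
    """Cut fixed-length trials out of a continuous recording.

    data    : (channels, samples) continuous EEG
    trigger : (samples,) trigger channel; a rising edge marks a new trial
    Returns : (trials, channels, TRIAL_LEN) array
    """
    above = trigger > threshold
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1  # rising edges
    trials = [data[:, i:i + TRIAL_LEN] for i in onsets
              if i + TRIAL_LEN <= data.shape[1]]
    return np.stack(trials)
```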

Assignment of Data Class Attributes

After the generation of 160 trials, the class information available from the paradigm (an arrow pointing to the right or to the left) has to be assigned to the trials. In this case, a data vector containing 160 entries is loaded. An entry of "0" means that the trial belongs to a right-hand motor imagery; an entry of "1" means that the trial belongs to a left-hand motor imagery. Figure 5 shows the triggered EEG data along with the assigned class attributes.



Figure 5: EEG traces for 10 trials, shown along with the trial attributes. Blue EEG traces belong to a left-hand motor imagery; green EEG traces belong to a right-hand motor imagery.

Batch Processing



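Again, the original shows the batch code only as a figure. A minimal NumPy sketch of the class-attribute assignment described above (the helper name is hypothetical; the 0/1 convention follows the text):

```python
import numpy as np

def split_by_class(classes):
    """Split trial indices by class attribute.

    classes : vector with one entry per trial; by the convention above,
              0 = right-hand motor imagery, 1 = left-hand motor imagery.
    Returns (left_indices, right_indices).
    """
    classes = np.asarray(classes)
    return np.flatnonzero(classes == 1), np.flatnonzero(classes == 0)
```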

Artifact Control

In order to calculate the common spatial patterns and the spatial filter, it is necessary to perform artifact control. In this case, all experimental trials were visually checked for artifacts in the time period of 2-6 seconds. Trials that contained artifacts (EMG, EOG, or range overflow of the analog-to-digital converter) were discarded (see Figure 6).
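The article describes visual inspection of each trial. As a purely illustrative automated stand-in, a simple amplitude-threshold check over the same 2-6 s window can be sketched as follows; the 100 µV threshold is an assumption, not a value from the article:

```python
import numpy as np

def reject_artifacts(trials, fs=128, t_start=2.0, t_stop=6.0, max_uv=100.0):
    """Flag trials whose amplitude exceeds max_uv in the 2-6 s window.

    trials : (n_trials, n_channels, n_samples) array in microvolts
    Returns a boolean mask of trials to KEEP.
    """
    seg = trials[:, :, int(t_start * fs):int(t_stop * fs)]
    return np.all(np.abs(seg) < max_uv, axis=(1, 2))
```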



Figure 6: Display of trials 89 to 98 for EEG channel 1. Trial 89 is marked as ARTIFACT (red).

Feature Extraction by Means of Spatial Filtering (Common Spatial Patterns)

The aim of spatial filtering is to extract features that lead to an optimal distinction between two populations of EEG related to right-hand and left-hand motor imagery.

The method of common spatial patterns (CSP) presented here is based on the decomposition of raw EEG signals into spatial patterns. The method uses the covariance to design common spatial patterns and is based on the simultaneous diagonalization of two covariance matrices. The decomposition (or filtering) of the EEG leads to new time series, which are optimal for the discrimination of the two populations. The patterns are designed in such a way that the signal resulting from filtering the EEG with the CSP has maximum variance for left-hand trials and minimum variance for right-hand trials, and vice versa. In this way, the difference between the left and right populations is maximized. The information carried by these patterns lies entirely in the variance of the filtered EEG. (See Figure 7 for the "Common Spatial Filter" dialog and the four most important CSP maps.)

All EEG channels were filtered (FIR filter) between 8 and 30 Hz prior to the computation of the CSPs, because this broad frequency range contains all mu and beta frequency components of the EEG, which are important for the discrimination task.

Given N channels of EEG for each left and right trial X, the CSP method provides an N×N projection matrix W. This matrix is a set of N subject-specific spatial patterns, which reflect the specific activation of cortical areas during hand movement imagination. With the projection matrix W, the decomposition of a trial X is described by

Z = WX. (1)

The columns of W⁻¹ are the common spatial patterns. They can be seen as time-invariant EEG source distribution vectors. Figure 7 displays the spatial patterns obtained for a right-hand and left-hand motor imagery.



Figure 7: Left panel: Display of CSP filter dialog. Right panel: Display of topographical maps for the four most important spatial patterns (CSP No. 1,2 and CSP No. 26, 27) for optimal discrimination. The spatial patterns are shown as color coded maps. The black numbers indicate the electrode positions. Electrodes surrounded by dark red colors or dark blue colors are the most important electrodes for the discrimination.

Batch processing



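The CSP batch commands are shown as a code figure in the original. For illustration only (not the g.BSanalyze routine), the simultaneous-diagonalization construction described above can be sketched in NumPy; the per-trial trace normalization is a common choice and is assumed here:

```python
import numpy as np

def csp(trials_l, trials_r):
    """Common spatial patterns via simultaneous diagonalization of the
    two class-covariance matrices (illustrative sketch).

    trials_l, trials_r : (n_trials, n_channels, n_samples) per class
    Returns the projection matrix W; its rows are spatial filters, and
    the columns of inv(W) are the spatial patterns.
    """
    def mean_cov(trials):
        covs = []
        for x in trials:
            c = x @ x.T
            covs.append(c / np.trace(c))   # normalize by total variance
        return np.mean(covs, axis=0)

    cl, cr = mean_cov(trials_l), mean_cov(trials_r)
    # Whitening transform of the composite covariance cl + cr
    evals, evecs = np.linalg.eigh(cl + cr)
    P = np.diag(evals ** -0.5) @ evecs.T
    # Diagonalize the whitened left-class covariance; because the two
    # whitened covariances sum to the identity, they share eigenvectors.
    d, B = np.linalg.eigh(P @ cl @ P.T)
    order = np.argsort(d)[::-1]            # sort by left-class variance
    return B[:, order].T @ P
```

Sorting the rows this way puts the maximal left-trial variance in the first row of Z = WX, matching the ordering described in the Data Classification section below.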

Data Classification

By construction, the variance for a left movement imagination is largest in the first row of Z and decreases with the increasing number of the subsequent rows. The opposite is the case for a trial with a right motor imagery.

For classification of the left and right trials, the variances have to be extracted as reliable features of the newly designed N time series. However, it is not necessary to calculate the variances of all N time series. The method provides a dimensionality reduction of the EEG: a high number of EEG channels (N) can be reduced to only a few time series and a few spatial patterns. After building W from an artifact-corrected training set, only the first and last two rows (p = 4) of W were used. The EEG data X are filtered with these p spatial filters. Then the variance of the resulting four time series is calculated for a time window T.

Figure 8 displays the time series after filtering the EEG data with the two most important (1, 27) and the two second most important (2, 26) common spatial patterns, according to equation (1). The filter was constructed in such a way that the variance is maximized in filter 1 and 2 during a left-hand movement imagination and minimized in filter 26 and 27. The left-hand column shows the new time series of a left trial, the right-hand column of a right trial. By comparing the most discriminating time series (1 and 27), a high amplitude difference can be observed. When comparing the second most important time series (2 and 26), a difference can still be seen, albeit a smaller one. The opposite is the case for the right trial. The variance in time series 1 and 2 is smaller than in 26 and 27.



Figure 8: Time series of filtered EEG signals with the most important spatial filters. The left-hand panel displays the time series for a left trial. The right-hand panel displays the time series for a right trial. Seconds 0 to 8 are displayed.

Batch processing




In order to yield an approximately normal distribution, the variance of each of the four time series (equation 2) is normalized and log-transformed, yielding four feature vectors fp.


Batch processing



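The corresponding batch code is again a figure in the original. A common formulation of the normalized, log-transformed variance feature in the CSP literature is fp = log(var_p / Σ var_q); the exact normalization used by g.BSanalyze may differ, so the following is only an illustrative sketch:

```python
import numpy as np

def csp_features(trial, W, p=4):
    """Log-normalized variance features from the p most discriminative
    CSP time series (the first and last p/2 rows of W).

    trial : (n_channels, n_samples) EEG trial
    W     : CSP projection matrix (rows are spatial filters)
    """
    rows = np.r_[0:p // 2, W.shape[0] - p // 2:W.shape[0]]
    z = W[rows] @ trial                 # p filtered time series
    var = z.var(axis=1)
    return np.log(var / var.sum())      # normalize, then log-transform
```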

The feature vectors fp of left and right trials are then used to construct a linear classifier. For a proper estimate of the classification result, the data set is divided into a training set and a test set. The training set is used to calculate the classifier, which is then used to classify the test set. The classification results are calculated using a 10 x 10 fold cross-validation: the data set is mixed randomly and divided into 10 equally sized partitions, each partition is used once for testing while the other partitions are used for training (yielding 10 error rates), and the whole procedure is repeated 10 times.
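The scheme just described can be sketched as follows. This is an illustrative NumPy implementation of a two-class LDA with pooled covariance and repeated 10-fold cross-validation, not the g.BSanalyze routine:

```python
import numpy as np

def lda_fit(X, y):
    """Two-class linear discriminant analysis with pooled covariance.

    X : (n_samples, n_features), y : labels in {0, 1}
    Returns weight vector w and bias b; w @ x + b > 0 predicts class 1.
    """
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    X0c, X1c = X[y == 0] - m0, X[y == 1] - m1
    cov = (X0c.T @ X0c + X1c.T @ X1c) / (len(X) - 2)  # pooled covariance
    w = np.linalg.solve(cov, m1 - m0)
    b = -w @ (m0 + m1) / 2              # threshold at the class midpoint
    return w, b

def cv_error(X, y, reps=10, folds=10, seed=0):
    """Mean error of a repeated (reps x folds) cross-validation."""
    rng = np.random.default_rng(seed)
    errors = []
    for _ in range(reps):
        idx = rng.permutation(len(X))   # mix the data set randomly
        for part in np.array_split(idx, folds):
            train = np.setdiff1d(idx, part)
            w, b = lda_fit(X[train], y[train])
            pred = (X[part] @ w + b > 0).astype(int)
            errors.append(np.mean(pred != y[part]))
    return np.mean(errors)
```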

Figure 9 displays the "Generate Classifier" dialog. The classifier is calculated between 5500 and 6500 ms. The classification method is linear discriminant analysis, and a 10 x 10 fold cross-validation is used to determine the classification accuracy. The right-hand panel in Figure 9 shows the classification results for the classes right-hand and left-hand motor imagery.

The results show that the error is approximately 50% before the motor imagery is performed. While the motor imagery is performed, between 5000 and 8000 ms, the error drops to approximately 3-5%. The subject was able to control the cursor on the screen almost perfectly, which shows that the selected CSP-based features are well suited for the discrimination of this two-class problem.





Figure 9: Left panel displays the "Generate Classifier" dialog. The classifier is calculated for class right-hand motor imagery versus class left-hand motor imagery. Right panel displays the achieved error using the linear classifier and a 10 x 10 fold cross-validation. The error drops from nearly 50% between seconds 1 and 4 down to approximately 3% between seconds 5 and 8.


The processing steps and results generated by g.BSanalyze show that the method of common spatial patterns can be used for the development of a brain-computer-interface. In a further step, the CSPs were used to analyze the EEG in real time and to provide feedback to a subject, allowing the control of a cursor on a computer screen. The real-time feedback application is based on g.tec's real-time biosignal processing system. Further details on the online application and results can be found in Guger et al., IEEE Trans. Rehab. Engng., vol. 9, pp. 1-10, 2001. The BCI data set presented in the Rehab Engineering paper is available at

Published 2002
