
Automatic Time-Frequency Analysis of MRPs for Mind-controlled Mechatronic Devices


Daniela De Venuto, Giovanni Mezzina
Dept. of Electrical and Information Engineering
Politecnico di Bari
Bari, 70125, Italy
{[Link], [Link]}@[Link]

Abstract— This paper describes the design, implementation and in vivo testing of a novel Brain Computer Interface (BCI) for the control of mechatronic devices. The method exploits electroencephalogram (EEG) acquisitions, and specifically the Movement Related Potentials (MRPs) (i.e., the μ and β rhythms), to actuate the user's intention on the mechatronic device. The EEG data are collected by only five wireless smart electrodes positioned on the central and parietal cortex areas. The acquired data are analyzed by an innovative single-trial classification algorithm that, with respect to the current state of the art, strongly reduces the training time (minimum: ~1 h, reached: 10 min), as well as the acquisition time after the stimulus needed for a reliable classification (typical: 4-8 s, reached: 2 s). As a first step, the algorithm performs an EEG time-frequency analysis in the selected bands, making the data suitable for further computations. The implemented machine learning (ML) stage consists of: (i) dimensionality reduction; (ii) statistical inference-based features extraction (FE); (iii) classification model selection. A dedicated algorithm, the MLE-RIDE, is also proposed for the dimensionality reduction; jointly with statistical analyses, it digitizes the μ and β rhythms, performing the features extraction. Finally, the best support vector machine (SVM) model is selected and used in the on-line classification. As proof of concept, two mechatronic devices have been brain-controlled by using the proposed BCI algorithm: a three-finger robotic hand and an acrylic prototype car. The experimental results, obtained with data from 3 subjects (aged 26±1), showed an accuracy of 87.4% in the wireless detection of the human will, in real-time binary discrimination, with 33.7 ms of computation time.

Keywords— BCI, MRP, Classification, SVM, Simulink

I. INTRODUCTION

The worldwide impact of neuro-degenerative diseases, bringing losses of motor neurons and, consequently, partial or complete paralysis (e.g. in amyotrophic lateral sclerosis), has led several research groups to develop Brain Computer Interfaces (BCI) to enhance the quality of life of these patients. Indeed, BCIs, in their most general definition, allow discriminating the user's intentions by "mind reading" from electroencephalogram (EEG) sequences. They are not a clinical prerogative but find a huge range of ordinary-life applications, such as wheelchair or car driving [1, 2], communication assistance [3], prosthetic solutions [4], gaming [5] and ambient assisted living [6].
Nevertheless, in a non-invasive BCI framework, three main issues must be solved [7]. The first issue concerns the identification of the right clinical protocol to be submitted to the subject, in order to modulate his brain response, making it readable by the BCI. The second issue is related to the choice of the brain activity pattern (BAP). The BAP must be capable of conveying the subject's intention to the computer. The last trend in the research area concerns the improvement of single-trial EEG classification. Indeed, achieving high single-trial accuracy means creating fast, robust and easily manageable BCIs [7]. The BAPs that allow realizing a reliable single-trial classification are typically: the event-related potentials (ERPs), the slow cortical potentials (SCP), the event-related (de-)synchronization potentials (ERD/ERS), the steady-state visual potentials (SSVP) and the sensorimotor rhythms (SMR) [7]. Several single-trial based BCIs have been proposed in the state of the art; they are summarized in Table I [8-11]. Some noteworthy solutions are compared with the proposed work in terms of: chosen BAP, FE and classification methods, mean accuracy, EEG trial length after stimulation (AS), number of subjects involved in the test, data rate (commands/min) and number of available choices.
In this context, the paper proposes a novel BCI methodology able to reduce the training time (from a minimum of ~1 h to about 10 min) and the EEG trial length needed for accurate BAP recognition (from 4 s or 8 s to 2 s, with only 1 s after the stimulation), remaining competitive with the state of the art in terms of classification accuracy.
This paper fully investigates the evolution of two movement related potentials (MRPs), the μ and β rhythms, considering their time-frequency responses during several stimulation protocols. The implemented algorithm analyzes the EEGs, from 5 smart electrodes, in the time-frequency domain, considering the frequency bands of interest. In the machine learning (ML) phase, the data to be evaluated are submitted to a first dimensionality reduction through a dedicated algorithm, named MLE-RIDE.

TABLE I. COMPARISON WITH THE STATE OF THE ART

System              | [8]                | [9]               | [10]              | [11]               | Our work
BAP                 | SCP                | ERD/ERS           | ERPs              | Oscillatory        | MRPs
                    |                    |                   |                   | Rhythms            |
Methods             | Standard           | Imagery +         | LLP Algorithm +   | Motor Imagery +    | MLE-RIDE +
                    | Modulation +       | AR + ANN          | Unsupervised      | CVA +              | Statistics +
                    | Threshold          |                   | Method            | Gaussian           | SVM
# Electr.           | 32                 | 56                | 31                | 16                 | 5
Mean Accuracy (%)   | 85.6 (offline)     | 83.6 (pseudo      | 86.2 (online)     | 90.2 (pseudo       | 87 (online)
                    |                    | real-time)        |                   | real-time)         |
Training time       | ~50 h              | ~4 h              | ~4 h              | 2 weeks            | ~10 min
EEG trial length AS | 4 s                | 8 s               | 17 s              | 1 s                | 1 s
Data rate (com/min) | 2                  | 6                 | 2.4               | 2                  | 8
# Choices           | 2                  | 3                 | 24                | 2                  | 2
Dataset (subjects)  | 2                  | 3                 | 13                | 10                 | 3

*AR: Autoregressive Coefficients, ANN: Artificial Neural Network, LLP: learning label proportions, CVA: Canonical Variate Analysis

978-3-9819263-2-3/DATE19/2019 © EDAA 752
The MLE-RIDE extracts the residuals-based averages for each channel and available movement, excluding the less descriptive MRP waveforms from further computing. Then, the BCI digitizes the preserved data by using statistical inference-based thresholds. Formally, the resulting digitized signal represents the features to be classified. The paper evaluates several classification models for the final ML step.
As proof of concept, the commands provided by the real-time classification are sent via Bluetooth to two external mechatronic devices: a three-finger robotic hand and an acrylic prototype car. With about 10 min of training and 1 s AS-long EEG trials, the proposed BCI reaches 87.36% accuracy in a real-time binary discrimination context.
The paper is structured as follows. Sec. II describes the clinical background and the stimulation protocols used. Sec. III provides information about the overall architecture, from the ML stage to the real-time classification. Sec. IV outlines the features of the actuators used as proof of concept. Sec. V describes the experimental results and Sec. VI outlines the conclusions and further perspectives.

II. CLINICAL BACKGROUND AND STIMULATION PROTOCOLS

A. The Selected Brain Activity Pattern
Several clinical studies [12] demonstrated that, during voluntary movements, a cerebral preparation process precedes the activation of the proper muscle sequence. For this reason, we focused on BAPs that are specifically related to the preparation and execution of a motor command, rather than merely reflecting unspecific modulations of vigilance or attention [7, 12].
Typically, the neural preparation process starts 1 s before the muscle contraction and is characterized, inter alia, by EEG potentials named movement-related potentials (MRPs) [12]. In this paper, we studied two highly descriptive MRPs for the BCI application: the μ-rhythms and the β-rhythms [12, 13]. The μ-rhythm occupies a frequency band between 9 and 11 Hz and can occur up to 1 s before the movement activation; performing a motor action suppresses this rhythm. The β-rhythm has a frequency band ranging from 12.5 to 30 Hz; this premotor cortex rhythm is responsible for muscle firing management [12]. MRPs are typically most visible in the central and early parietal cortex areas known as the motor and sensory regions (Fig. 1), and specifically in the brain hemisphere opposite to the limb that performs the movement.
Finally, as stated by the authors in [7], multi-channel EEG mapping demonstrated that some localized brain areas contribute to cerebral motor command processes via an MRP known as the Bereitschaftspotential (BP). The BP spans 2 Hz to 5 Hz and, in the time domain, is a negative deflection that precedes the voluntary initiation of the movement; it is clearly visible at lateral scalp positions (C3, C4) [7, 14, 15]. It is not useful to distinguish two specific actions (e.g., left or right finger movements).

B. The Stimulation Protocols
To provide a complete overview of the applicability of MRPs in the BCI framework, several protocols have been carried out, both in controlled and in noisy environments. These tests have been designed to emphasize the movement voluntariness [16].
In the following, all the tests are uniquely numbered. The test number can be followed by a subscript "N" if the test was carried out in a noisy environment.
Test0(N). The user sits in a resting state, without interactions with other participants or staff. The system continually investigates the user's brain response in the frequency band of interest. It is used as a ground-truth level of voluntariness.
Test1(N). The user sits in front of a button, counts mentally to 5 s and then pushes the button with the prescribed hand.
Test2(N). The user sits in front of a button and randomly pushes it with self-paced timing, using the prescribed hand.
Test3(N). Like Test2, but there is no interaction with the push button; the finger movement is however permitted.
Test4(N). The user sits in front of a push button, with a buzzer in the same test environment. The buzzer is programmed to sound with a random inter-stimulus timing (from 3 s to 9 s). It emits two different tones to indicate whether the button must be pressed with the right-hand finger or with the left-hand one.

Fig.1 Overall system schematic from the acquisition to the machine learning stage, the classification model extraction and the on-line classification. The blue arrows indicate the self-adapting nature of the implemented algorithm.
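The μ (9-11 Hz) and β (12.5-30 Hz) bands introduced in Sec. II.A can be illustrated with a simple band-power computation. The paper's pipeline is implemented in MATLAB/Simulink; the following Python fragment is only an illustrative sketch with hypothetical names, applied to a synthetic 2 s signal in which a 10 Hz (μ-band) component dominates.

```python
import numpy as np

def band_power(x, fs, band):
    """Total periodogram power of signal x inside a frequency band [Hz]."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].sum()

fs = 512                          # samples/s, as in the paper's FFT block
t = np.arange(0, 2.0, 1.0 / fs)   # a 2 s trial (1024 samples)
rng = np.random.default_rng(0)
# Synthetic EEG: strong 10 Hz (mu) plus weak 20 Hz (beta) component and noise
eeg = (2.0 * np.sin(2 * np.pi * 10 * t)
       + 0.5 * np.sin(2 * np.pi * 20 * t)
       + 0.1 * rng.standard_normal(t.size))

mu_power = band_power(eeg, fs, (9.0, 11.0))
beta_power = band_power(eeg, fs, (12.5, 30.0))
print(mu_power > beta_power)  # True: the 10 Hz component dominates
```

In the real system, a suppression of μ-band power with respect to the Test0 baseline would mark the motor preparation.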

Design, Automation And Test in Europe (DATE 2019) 753


Test5(N). The user sits and focuses his attention on the buzzer-emitted tones. The buzzer is programmed in the same way as in Test4. When the buzzer sounds, the user plans the defined movement without interacting with the switch.

III. THE OVERALL ARCHITECTURE

Fig. 1 shows a block diagram of the implemented BCI architecture, from the signal acquisition to the on-line mechatronic activation. The overall architecture can be divided into five main operative units: (i) the acquisition unit; (ii) the Stimulation Panel; (iii) the off-line machine learning unit; (iv) the on-line classification block; and (v) the Bluetooth-based communication interface. The proposed system has been fully implemented in the MATLAB 2017b and Simulink environments. The off-line ML processing has been entrusted to MATLAB scripting, while the EEG acquisition, the stimulation management, the on-line features extraction (FE), the BAP classification and the command transmission via Bluetooth operate in real time on Simulink. The Stimulation Panel consists of a tactile switch and a buzzer and is controlled via ATMega328 P-PU programming.

A. The System Operation
The operation of the BCI can be briefly described as follows. The EEGs are acquired from a wireless headset, which collects and filters the data, sending them via Bluetooth Low Energy to a base station connected to a PC USB socket. This base station provides a set of digital inputs that can be interfaced to external trigger signals, to synchronize them with the EEGs. The Stimulation Panel outputs are connected to these inputs.
When new EEG samples occur, they undergo a Short Time Fourier Transform (FFT radix-2 in Fig. 1) with sampling rate fs = 512 samples/s and window length Lw = 256 samples [17] (resolution of 2 Hz). In the Lw window, the power spectral densities of the μ and β rhythms are progressively summed. The FFT block in Fig. 1 provides two waveforms in output. The BP waveform is extracted in a similar way. The BP was studied [7, 11] to differentiate the resting state from the onset of the voluntary movement, without the need for external triggers, as explained in Sec. II.B: Test0. The BP amplitude variation has been used as trigger during Test3.
Considering a generic test (e.g. Test4): when the buzzer sounds and the user pushes the button, the trigger signal extracts a piece of the streamed μ and β waveforms. This temporal window is known as a trial and consists of 2 s (1024 samples) of acquisition: 1 s before the trigger rising edge and 1 s after it. In the following, the β and μ signals will be denoted by the notation "β(or μ)Ch,M" (e.g. βC3R), where Ch is the channel on which the MRP is evaluated and M is the performed movement (R: right-hand, L: left-hand movement). Moreover, all the β or μ signals are referenced to the respective Cz magnitude in a differential way: e.g. βC3R = βC3R' − βCzR and so on, with βC3R' the monopolar version of βC3R.
During the learning stage, the trials are collected in 3D observation matrices and sent to an algorithm able to emphasize the neural potential differences when a specific movement is performed. The ML stage exploits a waveform re-phasing approach based on a residue iterative decomposition algorithm, empowered by a maximum likelihood estimation algorithm (MLE-RIDE). The MRPs, on specific channels, that do not ensure highly descriptive differences between right- and left-hand movements are excluded from the statistical inference-based FE. This statistical study identifies some thresholds, allowing to "barcode" the selected signals. The extracted features are used to train a set of Support Vector Machines (SVM) with four different kernels. The model with the best validated accuracy is selected as the one to be implemented in the real-time discrimination context.
In the on-line classification context, the extracted trials (which are unlabeled) undergo the FE phase, coherently with the one operated in the ML stage. The features are used to feed an SVM that provides the specific command linked to the user intention. Finally, the Simulink model sends a message to a generic Bluetooth receiver (in this application, a low-cost HC-05). It manages the command to be applied on the specific mechatronic device responsible for the actuation. The tested devices (robotic hand and acrylic prototype car) are both managed in actuation by an ATMega328 P-PU board.

B. The Acquisition Unit
The acquisition unit is sketched on the left of Fig. 1 as an EEG headset with two semi-transparent areas. These are the motor (red) and sensory (blue) cortex areas. The implemented multichannel sensing system consists of a 32-channel EEG headset. For this application, data from 5 EEG channels on the sensorimotor area have been wirelessly acquired: C3, Cz, C4, Cp1, Cp2. The AFz electrode is used as GND for a monopolar reading and the right ear lobe is used as the reference electrode (REF) [16, 17]. EEG samples are recorded in an analog input range of ±187.5 mV with 24-bit resolution at a 500 Hz sampling rate. In the Simulink interface, the EEGs are numerically band-pass filtered between 0.5 Hz and 35 Hz (8th-order Butterworth). Finally, an amplitude winsorizing approach completes the preprocessing, deleting the EEG outliers from further computing and reducing the effect of artifacts.

C. The Off-Line Machine Learning
The first step of the ML stage is entrusted to the MLE-RIDE algorithm, which is able to extract a coherent average along a high number of observations. Usually, when this number increases, the mathematical average, in the classic sense, loses significance due to the broadening effect [18]. The MLE-RIDE is an improved version of the widely used tuned-RIDE [18]. Differently from the t-RIDE, which can extract only event-related potentials (e.g. P300) from the background EEG, the MLE-RIDE can be used to re-phase any statistically relevant part of the signal, with no limitations in applicability.
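The trial slicing and Cz differential referencing described in Sec. III.A (2 s = 1024 samples at 512 samples/s, 1 s before and 1 s after the trigger rising edge) can be sketched as follows. This Python fragment is only an illustration of the described slicing, not the Simulink implementation; all names and the random streams are hypothetical.

```python
import numpy as np

FS = 512            # samples/s of the streamed mu/beta waveforms
PRE = POST = FS     # 1 s before and 1 s after the trigger rising edge

def extract_trial(stream, trigger_idx):
    """Cut a 2 s (1024-sample) trial centred on the trigger rising edge."""
    return stream[trigger_idx - PRE: trigger_idx + POST]

def reference_to_cz(ch_monopolar, cz):
    """Differential referencing: e.g. betaC3 = betaC3' - betaCz."""
    return ch_monopolar - cz

rng = np.random.default_rng(1)
beta_c3_stream = rng.standard_normal(10 * FS)   # 10 s of a streamed waveform
beta_cz_stream = rng.standard_normal(10 * FS)

trig = 5 * FS                                   # trigger rising edge at t = 5 s
trial = reference_to_cz(extract_trial(beta_c3_stream, trig),
                        extract_trial(beta_cz_stream, trig))
print(trial.shape)  # (1024,)
```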

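The amplitude winsorizing mentioned in Sec. III.B (limiting EEG outliers and artifact effects before further computing) can be sketched with a percentile-based clip. This is an illustrative Python stand-in for the Simulink preprocessing; the percentile limits chosen here are an assumption, not the paper's values.

```python
import numpy as np

def winsorize(x, lo_pct=5, hi_pct=95):
    """Amplitude winsorizing: clip samples beyond the given percentiles,
    limiting the influence of EEG outliers and artifacts."""
    lo, hi = np.percentile(x, [lo_pct, hi_pct])
    return np.clip(x, lo, hi)

rng = np.random.default_rng(2)
eeg = rng.standard_normal(1024)   # one 2 s trial of pseudo-EEG
eeg[100] = 50.0                   # a large artifact sample
clean = winsorize(eeg)
print(clean.max() < 50.0)         # True: the artifact has been clipped
```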


Indeed, the main differences from the t-RIDE [18] concern: (i) the number of components that can be emphasized (not only one, as in the t-RIDE, but up to 4 components along the trial); (ii) the latency re-phasing alignment, which is based on MLE and not on the Woody method. This strengthens the algorithm against the inter-trial jitter error [18].
Fig. 2a shows the differences between analyzing a classic average and a re-phased one. In particular, Fig. 2a shows in solid blue the phased mean of βC3R against its classic average in dashed blue. It also shows in solid red the phased mean of βC3L and its classic average (dashed line).
After the EEG acquisition for the specific test (Sec. II.B), the MATLAB script receives two observation matrices: OMβ and OMμ ∈ R^(Ch×S×No), with S the number of samples and No the number of observations realizing the dataset. OMβ and OMμ comprise the trials for the right-hand movements and the left-hand ones.
Fig. 2b shows two graphs of observations vs. trial samples: 50 trials of μCp2R and 72 trials of μCp2L. In these plots some differences are visible in terms of magnitude: centrally, the μCp2L (left-hand movement) is higher than μCp2R.
Firstly, the system analyzes OMβ and OMμ, extracting the phased averages of 16 signals (1 β and 1 μ for all 4 channels and for the 2 directions): βC3R, βC3L, βC4R, βC4L, βCp1R, βCp1L, βCp2R, βCp2L, μC3R, μC3L, μC4R, μC4L, μCp1R, μCp1L, μCp2R, μCp2L. Then, it identifies the most different waveforms with reference to the same channel, according to eq. (1):

    D = { Σ(Ch=C3..Cp2) |βChR − βChL| ;  Σ(Ch=C3..Cp2) |μChR − μChL| }     (1)

The vector D ∈ R^(Ch·2) contains all the absolute values of the comparisons.
The vector D is sorted, and the 4 most relevant differences are extracted as pairs: e.g. p1 = {βC3R, βC3L}, p2 = {μC4R, μC4L}, p3 = {μCp2R, μCp2L}, p4 = {βCp1R, βCp1L}.
Once the most relevant MRPs and channels are defined, the system statistically analyzes them, as shown via boxplots in Fig. 2c. The other MRPs are discarded. In particular, it extracts four thresholds: two upper thresholds and two lower ones. The first ones are realized by calculating the 75th percentiles of the observation matrix referring to a specific MRP and channel, for both movements, e.g. for OMβ(Ch): ThUP,R, ThUP,L ∈ R^S. Similarly, the last ones are realized by calculating the 25th percentiles of the observation matrix referring to a specific MRP and channel, for both movements: ThDW,R, ThDW,L ∈ R^S.
Overall, 16 thresholds are extracted, 8 ThUP and 8 ThDW (2 ThUP and 2 ThDW per pair pi, i = 1..4). For each pair pi, the lowest ThUP and the highest ThDW are selected as reference thresholds. The system subtracts the discarded threshold from the reference ThUP, and does the same for the ThDW ones.
As shown in Fig. 2d, these subtractions provide interesting information about the windows in which the differences between the signals on the same channel, but for different movements, are emphasized. From the plot in Fig. 2d, the system automatically extracts the windows in which the differences exceed 30% of the maximum absolute value. For instance, Fig. 2d shows that in the case of βC3 the optimal temporal window is the entire trial, while in the case of μCp2 two optimal temporal windows can be found: [−450 ms → −50 ms] ∪ [110 ms → 820 ms]. On these highly representative windows, the system proceeds to the FE stage.
Features extraction. The FE stage uses the 8 reference thresholds (2 for each pair pi) and considers only the trial windows provided by the previous ML step. The FE consists in a kind of EEG barcoding:
1. The algorithm selects the β and μ trials of the same specific labelled observation (e.g. 5th observation, right-hand movement), then it considers only the four relevant signals, as previously stated (e.g. βC3, μC4, μCp2, βCp1).
2. The algorithm realizes, on the selected trial (Tr), two barcodes by eqs. (2) and (3):

    brTrUP(i) = 1 if Tr(i) > ThUP(i), 0 otherwise,  i = 1..S     (2)
    brTrDW(i) = 1 if Tr(i) < ThDW(i), 0 otherwise,  i = 1..S     (3)

3. The resulting signals are sliced in the prescribed temporal windows.
4. After discarding the useless barcode pieces, the system sums all the bits that compose both barcodes, realizing the feature vector f ∈ R^8.

D. The Classification Models
The 1D vectors f are concatenated and labelled (R or L), to realize the train set matrix F ∈ R^((Nf+1)×No), in which each row has the form {Fi ∈ R^Nf, Li}, where i = 1…No. No is the number of observations, Nf is the number of features (Nf = 8) and Li ∈ {−1, 1} is the i-th trial label: Li = −1 → left-hand movement, while Li = 1 → right-hand movement.

Fig.2 (a) β and μ rhythms on C3 and Cp2 during right (blue lines) and left (red lines) movements. (b) OMμ for the Cp2 channel during right (top) and left (bottom) movements. (c) Top: boxplots of βC3 during right (blue) and left (red) movements; bottom: boxplots of μCp2 during right (blue) and left (red) movements. (d) Differences between percentile lines.
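The EEG barcoding of eqs. (2)-(3) and the bit-sum feature vector f ∈ R^8 can be sketched as follows. This Python fragment mirrors the described procedure on toy data (the thresholds and the whole-trial window here are hypothetical stand-ins, not the paper's learned values).

```python
import numpy as np

def barcode_features(trials, th_up, th_dw, window):
    """EEG 'barcoding' per eqs. (2)-(3): for each of the four selected
    signals, count the samples inside the relevant window that exceed the
    upper threshold or fall below the lower one -> feature vector in R^8."""
    features = []
    for tr, up, dw in zip(trials, th_up, th_dw):
        w = slice(*window)
        br_up = (tr > up).astype(int)[w]   # eq. (2)
        br_dw = (tr < dw).astype(int)[w]   # eq. (3)
        features += [br_up.sum(), br_dw.sum()]
    return np.array(features)

rng = np.random.default_rng(3)
S = 1024                                             # samples per trial
trials = [rng.standard_normal(S) for _ in range(4)]  # e.g. bC3, mC4, mCp2, bCp1
th_up = [np.full(S, 1.0)] * 4                        # toy reference thresholds
th_dw = [np.full(S, -1.0)] * 4
f = barcode_features(trials, th_up, th_dw, (0, S))   # whole-trial window
print(f.shape)  # (8,)
```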



The matrix F is used to train the SVMs [19], with four different kernel bases: linear, quadratic, cubic and Gaussian. Iteratively, a k-fold validated accuracy [19] is computed for each classifier on the train set. The highest-accuracy model is selected and uploaded in an extrinsic function on Simulink for the real-time classification.

E. The On-line Prediction
In the real-time prediction block, the system extracts the β and μ signals from the FFT radix-2 blocks, then submits them to a coherent down-sampling by a factor of 32 (1024 samples → 32 samples). When the trigger rising edge occurs, the system extracts the trial, which is, in this case, unlabeled. The β and μ trials are submitted to the first dimensionality reduction, managed by the MLE-RIDE. It chooses the most relevant MRPs and channels, as stated in Sec. III.C. Eight features are extracted from the unlabeled trials following steps 1 to 4 of Sec. III.C: Features extraction. The 1D vector f (eq. (4)) feeds the chosen SVM model, which provides in output the command to be actuated.

IV. THE ACTUATORS

The SVM classifier sends the predicted label to a dedicated block that manages the Bluetooth communication. Firstly, the Simulink model opens the communication by sending a string: {89, command, 89}. The communications are then stopped, waiting for the next trigger edge. The HC-05 placed on the actuator is programmed to operate at 9600 baud. In this paper, we propose the Bluetooth-driven actuation of a three-finger robotic hand and an acrylic prototype car, which are shown in Fig. 3 with their available functionalities.
The used prototype car is the one largely detailed in our previous work [2]. The three-finger robotic hand is a homemade structure, which operates by using three servomotors in tensile or releasing progressions. It operates with three gradual steps in the closing phase and the same number of steps in the opening stage. Thus, using a two-choice paradigm, it is possible to open or close the robotic hand with self-paced timing [20].

Fig.3 (a) Mechatronic devices Bluetooth-interfaced with the Simulink-based BCI: three-finger robotic hand and acrylic prototype car. (b) Simulink control panel.

V. RESULTS

The proposed architecture has been tested in vivo on a dataset of 3 subjects (aged 26±1), all students of Politecnico di Bari. Table II summarizes the composition of the datasets on which the following experimental results are based.
The top tab shows the train dataset composition in terms of test typology (Test0 to Test5) in normal and noisy (N) environments [21]. In the table, the observations are also distinguished between right-hand movements (R) and left-hand ones (L). The bottom tab shows the real-time validation dataset composition, with the same criteria as the train set.
The system accuracy is defined as the ratio between the correctly detected observations (supervised) and the total number of requested actions.

A. Machine Learning Performance
The ML stage is performed off-line through a MATLAB script on a PC equipped with an Intel i5 processor and 16 GB of RAM. It requires, on average, 11.41±4.6 s to perform the signal re-phasing by MLE-RIDE on 4 channels, the inference-based features extraction and the SVM model selection. The k-fold (with k=15) validation [14] accuracies for each SVM model, test and subject are shown in Table III.
The table reports the implemented SVM models' performance in terms of mean accuracy along the tests, number of support vectors and classification timing. The cubic SVM ensures the best performance along all the tests, with a total accuracy of 91.4±1.23%.
Test3, which does not require physical interaction with a button, shows the worst performance, with an overall accuracy of 86.55±2.59% against the 92.23±2.71% of Test1. Considering the normal and noisy tests separately, the accuracy decreases from an overall value of 93.41% (normal) to 85.62% (noisy). Selecting a specific model (e.g. the cubic SVM on T1, Sub. 1) and applying it to all the other test datasets, the permutations return accuracies that range from a minimum of 71.4% to a maximum of 96.5%.

B. Real-Time Validation
In the real-time validation context, the datasets reported in Table II have been divided into two equal parts. The first dataset part is dedicated to a mixed command pattern (R & L) to be actuated on the robotic hand, the second one on the acrylic prototype car. Fig. 4 shows the real-time system response in terms of accuracy versus test typology, subject and actuator. It shows that, overall, the BCI accuracy when the actuator is the robotic hand and when it is the acrylic car is almost the same: 87.36±0.02%.

C. Classification Timing
In the future perspective of a fast and asynchronous BCI, the proposed system has reserved attention to strict constraints on the computational times for the real-time operation [2].

TABLE II. DATASETS COMPOSITION FOR TRAINING AND VALIDATION

Train Datasets Composition
                        |  T1     |  T2     |  T3     |  T4     |  T5
Sub. Age  T0            |  -   N  |  -   N  |  -   N  |  -   N  |  -   N
1    25   181      R    | 62  51  | 81  63  | 64  52  | 50  47  | 52  45
                   L    | 65  44  | 80  61  | 66  50  | 50  48  | 59  44
2    25   174      R    | 72  45  | 74  60  | 66  51  | 50  36  | 61  51
                   L    | 70  47  | 79  60  | 66  55  | 50  37  | 66  50
3    27   166      R    | 61  48  | 50  42  | 71  61  | 50  49  | 67  49
                   L    | 66  48  | 52  49  | 78  62  | 50  51  | 71  55

Real-Time Validation Datasets Composition (Prescribed Pattern)
                        |  T1     |  T2     |  T3     |  T4     |  T5
Sub. Age  T0            |  -   N  |  -   N  |  -   N  |  -   N  |  -   N
1    25   -        R    | 40  30  | 40  12  | 50  25  | 40  15  | 40  21
                   L    | 40  30  | 40  15  | 50  20  | 40  12  | 40  22
2    25   -        R    | 40  30  | 40  17  | 50  18  | 40  13  | 40  20
                   L    | 40  30  | 40  19  | 50  19  | 40  15  | 40  19
3    27   -        R    | 40  -   | 40  -   | 50  -   | 40  -   | 40  -
                   L    | 40  -   | 40  -   | 50  -   | 40  -   | 40  -

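The k-fold model selection of Sec. V.A (validated accuracy per candidate model, best model kept for on-line use) can be sketched as follows. The paper trains MATLAB SVMs with four kernels; this Python fragment only demonstrates the k-fold selection loop with a stand-in nearest-centroid classifier, a hypothetical substitute, on toy data.

```python
import numpy as np

def kfold_accuracy(model_fit_predict, X, y, k=15):
    """Mean validated accuracy over k folds (the paper uses k = 15)."""
    idx = np.arange(len(y))
    accs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        y_hat = model_fit_predict(X[train], y[train], X[fold])
        accs.append(np.mean(y_hat == y[fold]))
    return float(np.mean(accs))

def nearest_centroid(X_tr, y_tr, X_te):
    """Stand-in classifier: assign the label of the closest class centroid."""
    c_pos = X_tr[y_tr == 1].mean(axis=0)
    c_neg = X_tr[y_tr == -1].mean(axis=0)
    d_pos = np.linalg.norm(X_te - c_pos, axis=1)
    d_neg = np.linalg.norm(X_te - c_neg, axis=1)
    return np.where(d_pos < d_neg, 1, -1)

# Toy train set F: 8 features per trial, labels -1 (left) / +1 (right)
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0.0, 1.0, (60, 8)), rng.normal(2.0, 1.0, (60, 8))])
y = np.array([-1] * 60 + [1] * 60)
print(round(kfold_accuracy(nearest_centroid, X, y, k=15), 2))
```

In the described system, this loop would be repeated per kernel, and the highest-accuracy model uploaded to the Simulink extrinsic function.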


TABLE III. SVM KERNELS COMPARISON

SVM                     |      | Linear | Quadratic | Cubic | Gaussian
Accuracy*1 (%)          |  S1  | 90.02  | 91.42     | 92.42 | 88.34
                        |  S2  | 88.06  | 89.44     | 90.04 | 87.40
                        |  S3  | 89.36  | 89.48     | 91.76 | 89.26
# Support Vectors       |  S1  | 51     | 62        | 74    | 47
                        |  S2  | 56     | 64        | 76    | 59
                        |  S3  | 52     | 61        | 71    | 58
Classification          |  S1  | 4.16   | 4.54      | 5.26  | 4.24
timing*2 (ms)           |  S2  | 4.25   | 4.54      | 5.21  | 4.24
                        |  S3  | 4.25   | 4.61      | 5.43  | 4.21

*1 Mean accuracy along the tests  *2 Computed from the number of predictions/s

Fig.4 Real-time validation accuracy versus test typology, subject and actuator.

The system requires a fixed communication latency of 14 ms and 1 s of data after the trigger rising edge. Then, once the data are acquired, the computational chain (FFT + inference-based FE), jointly with the Bluetooth transmission and actuation, requires 33.7±6.1 ms.

VI. CONCLUSIONS

The paper detailed the implementation and test of an MRP-based BCI for the remote control of mechatronic devices. The system, fully implemented on a hybrid MATLAB-Simulink platform, interfaces a wireless EEG headset that provides the acquisition from 5 channels.
The presented architecture embeds a novel BCI algorithm that exploits time-frequency analysis and a statistical inference-based features extraction stage to reduce the training time and the data needed for classification, compared with the current state of the art.
Experimental results obtained on a dataset of 3 subjects qualify the paper as a pilot study. Indeed, considering a short machine training (~10 min) and a small amount of data (only 2 s-long EEG trials on 5 EEG channels), the system reaches, on average, an accuracy of 87.36±0.02%. The computational chain leads to the user's intention recognition, Bluetooth transmission and actuation in 33.7±6.1 ms. We consider the above strategy to give the direction for further investigations.

REFERENCES

[1] Grychtol, B., Lakany, H., Valsan, G., & Conway, B. A. (2010). Human behavior integration improves classification rates in real-time BCI. Neural Systems and Rehabilitation Engineering, 8(4), 362-368.
[2] D. De Venuto, V. F. Annese and G. Mezzina, "An embedded system remotely driving mechanical devices by P300 brain activity," Design, Automation & Test in Europe Conference & Exhibition (DATE), Lausanne, 2017, pp. 1014-1019. doi: 10.23919/DATE.2017.7927139
[3] Farwell, L.A., & Donchin, E. (1988). Talking off the top of your head: Toward a mental prosthesis utilizing event-related brain potentials. Electroenceph Clin Neurophysiol, 70(6), 510-523.
[4] Ortner, Rupert, et al. "An SSVEP BCI to control a hand orthosis for persons with tetraplegia." IEEE Transactions on Neural Systems and Rehabilitation Engineering 19.1 (2011): 1-5.
[5] Nijholt, Anton. "BCI for games: A 'state of the art' survey." International Conference on Entertainment Computing. Springer Berlin Heidelberg, 2008.
[6] V. F. Annese, M. Crepaldi, D. Demarchi and D. De Venuto, "A digital processor architecture for combined EEG/EMG falling risk prediction," 2016 Design, Automation & Test in Europe Conference & Exhibition (DATE), Dresden, 2016, pp. 714-719.
[7] Blankertz, B., Curio, G., & Müller, K. R. (2002). Classifying single trial EEG: Towards brain computer interfacing. In Advances in neural information processing systems (pp. 157-164).
[8] N. Birbaumer, N. Ghanayim, T. Hinterberger, I. Iversen, B. Kotchoubey, A. Kübler, J. Perelmouter, E. Taub, and H. Flor, "A spelling device for the paralysed", Nature, 398: 297-298, 1999.
[9] B. O. Peters, G. Pfurtscheller, and H. Flyvbjerg, "Automatic Differentiation of Multichannel EEG Signals", IEEE Trans. Biomed. Eng., 48(1): 111-116, 2001.
[10] Hübner D, Verhoeven T, Schmid K, Müller KR, Tangermann M, et al. (2017) Learning from label proportions in brain-computer interfaces: Online unsupervised learning with guarantees. PLOS ONE 12(4): e0175856. [Link]
[11] R. Leeb, L. Tonin, M. Rohm, L. Desideri, T. Carlson and J. d. R. Millán, "Towards Independence: A BCI Telepresence Robot for People With Severe Motor Disabilities," Proceedings of the IEEE, vol. 103, no. 6, pp. 969-982, June 2015.
[12] M. de Tommaso, E. Vecchio, K. Ricci, A. Montemurno, D. De Venuto and V. F. Annese, "Combined EEG/EMG evaluation during a novel dual task paradigm for gait analysis," 2015 6th International Workshop on Advances in Sensors and Interfaces (IWASI), Gallipoli, 2015, pp. 181-186. doi: 10.1109/IWASI.2015.7184949
[13] V. F. Annese and D. De Venuto, "The truth machine of involuntary movement: FPGA based cortico-muscular analysis for fall prevention," 2015 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT), Abu Dhabi, 2015, pp. 553-558. doi: 10.1109/ISSPIT.2015.7394398
[14] Elaine Marieb, Katja Hoehn, Human Anatomy & Physiology, 7th ed., San Francisco, Pearson Prentice Hall, 2007, ISBN 0-13-173297-8.
[15] S. Carrara, M. D. Torre, A. Cavallini, D. De Venuto and G. De Micheli, "Multiplexing pH and temperature in a molecular biosensor," 2010 Biomedical Circuits and Systems Conference (BioCAS), Paphos, 2010, pp. 146-149. doi: 10.1109/BIOCAS.2010.5709592
[16] De Venuto D., Annese V.F., de Tommaso M., Vecchio E., Sangiovanni Vincentelli A.L. (2015) Combining EEG and EMG Signals in a Wireless System for Preventing Fall in Neurodegenerative Diseases. In: Andò B., Siciliano P., Marletta V., Monteriù A. (eds) Ambient Assisted Living. Biosystems & Biorobotics, vol 11. Springer, Cham.
[17] V. F. Annese and D. De Venuto, "FPGA based architecture for fall-risk assessment during gait monitoring by synchronous EEG/EMG," 2015 6th International Workshop on Advances in Sensors and Interfaces (IWASI), Gallipoli, 2015, pp. 116-121. doi: 10.1109/IWASI.2015.7184953
[18] D. De Venuto, V. F. Annese and G. Mezzina, "Remote Neuro-Cognitive Impairment Sensing Based on P300 Spatio-Temporal Monitoring," IEEE Sensors Journal, vol. 16, no. 23, pp. 8348-8356, Dec. 1, 2016. doi: 10.1109/JSEN.2016.2606553
[19] Kecman V., T.-M. Huang, and M. Vogt. "Iterative Single Data Algorithm for Training Kernel Machines from Huge Data Sets: Theory and Performance." In Support Vector Machines: Theory and Applications, 255-274. Berlin: Springer-Verlag, 2005.
[20] V. F. Annese and D. De Venuto, "Fall-risk assessment by combined movement related potentials and co-contraction index monitoring," 2015 IEEE Biomedical Circuits and Systems Conference (BioCAS), Atlanta, GA, 2015, pp. 1-4. doi: 10.1109/BioCAS.2015.7348366
[21] D. De Venuto and E. Stikvoort, "Low Power High-Resolution Smart Temperature Sensor for Autonomous Multi-Sensor System," IEEE Sensors Journal, vol. 12, no. 12, pp. 3384-3391, Dec. 2012. doi: 10.1109/JSEN.2012.21989

