A Systematic Deep Learning Model Selection for P300-Based Brain-Computer Interfaces

Abstract: Predicting attention-modulated brain responses is a major area of investigation in brain-computer interface (BCI) research, which aims to translate neural activity into useful control and communication commands. Such studies involve collecting electroencephalographic (EEG) data from subjects to train classifiers for decoding users' mental states. However, various sources of inter- and intra-subject variability in brain signals make training classifiers for BCI systems challenging. From a machine learning perspective, this model training generally follows a common methodology: 1) apply some type of feature extraction, which can be time-consuming and may require domain knowledge, and 2) train a classifier on the extracted features. The advent of deep learning technologies has offered unprecedented opportunities not only to construct remarkably accurate classifiers but also to integrate the feature extraction stage into classifier construction. Although integrating feature extraction, which is generally domain-dependent, into classifier construction is a considerable advantage of deep learning models, the process of architecture selection for BCIs still generally depends on domain knowledge. In this study, we examine the feasibility of conducting a systematic model selection combined with mainstream deep learning architectures to construct accurate classifiers for decoding P300 event-related potentials. In particular, we present the results of 232 convolutional neural networks (CNNs) (4 datasets x 58 structures), 36 long short-term memory cells (LSTMs) (4 datasets x 9 structures), and 320 hybrid CNN-LSTM models (4 datasets x 80 structures) of varying complexity. Our empirical results show that, in the classification of P300 waveforms, the constructed predictive models can outperform the current state-of-the-art deep learning architectures, which are partially or entirely inspired by domain knowledge. The source code and constructed models are available at https://githu...
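As an illustration of the kind of model family searched over, the following is a minimal sketch of one candidate hybrid CNN-LSTM classifier for P300 versus non-P300 epochs, written with the Keras API. The layer sizes, kernel length, input shape (64 channels x 128 samples), and the small hyperparameter grid are illustrative assumptions, not the structures actually evaluated in the paper.

```python
from tensorflow.keras import layers, models

def build_cnn_lstm(n_channels=64, n_samples=128, n_filters=16, lstm_units=32):
    """One candidate hybrid structure; a systematic search would vary these hyperparameters."""
    model = models.Sequential([
        layers.Input(shape=(n_samples, n_channels)),          # one EEG epoch, time-major
        layers.Conv1D(n_filters, kernel_size=8, padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.MaxPooling1D(pool_size=2),
        layers.LSTM(lstm_units),                               # summarize the CNN feature sequence over time
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),                 # P300 vs. non-P300
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# A systematic model selection enumerates many such structures, e.g. a small grid:
candidate_structures = [(f, u) for f in (8, 16, 32) for u in (16, 32, 64)]
candidate_models = [build_cnn_lstm(n_filters=f, lstm_units=u) for f, u in candidate_structures]
```

Each candidate would then be trained and scored on held-out data, and the best-scoring structure kept; the grid here is only a toy stand-in for the 58 CNN, 9 LSTM, and 80 hybrid structures reported above.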
EXISTING SYSTEM:
• Application scenarios of transfer learning (TL) in the existing literature have focused almost exclusively on classification and regression tasks.
• The principle of TL is to transfer knowledge between different but related tasks, i.e., to use knowledge learned from completed tasks to help with new tasks.
• A confidence ratio (CR) is used to decide whether the current trial's features and estimated label should be added to the existing knowledge base (a sketch of this gating step follows below).
• Current electroencephalography (EEG) decoding algorithms are mainly based on machine learning.
• One of the main assumptions of machine learning is that training and test data belong to the same feature space and follow the same probability distribution.
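The following is a hedged sketch of the CR-style gating described above: a new trial's features and estimated label are appended to the knowledge base only when the classifier is confident enough. The source does not define CR exactly, so the use of the top predicted probability as the confidence measure, the threshold value, and all variable names are assumptions for illustration.

```python
import numpy as np

def maybe_add_trial(clf, features, kb_X, kb_y, cr_threshold=0.8):
    """Append (features, estimated label) to the knowledge base if confidence >= threshold.

    clf is assumed to be a fitted scikit-learn-style classifier with predict_proba.
    """
    proba = clf.predict_proba(features.reshape(1, -1))[0]
    estimated_label = int(np.argmax(proba))
    confidence = proba[estimated_label]        # stand-in for the confidence ratio CR
    if confidence >= cr_threshold:
        kb_X = np.vstack([kb_X, features])
        kb_y = np.append(kb_y, estimated_label)
    return kb_X, kb_y
```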
DISADVANTAGE:
• Liu improved the convolutional neural network built on Cecotti's algorithm, naming it the BN3 algorithm, and added batch normalization (BN) and dropout (DP) layers to deepen the network and counter overfitting.
• The BN3 algorithm achieved good classification results, but its recognition accuracy still needs to improve when the number of experiments is reduced.
• The number of positive samples should therefore be increased before the next step, to prevent classification problems caused by the imbalance between positive and negative samples (see the oversampling sketch below).
• These problems may be caused by overfitting when the convolutional neural network processes a large amount of data, which degrades the experimental performance.
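The class-balancing step mentioned above can be illustrated by randomly duplicating the minority P300 (positive) epochs until the two classes are roughly equal. The simple random-duplication strategy and variable names are assumptions; the cited work may balance classes differently.

```python
import numpy as np

def oversample_positives(X, y, seed=0):
    """Duplicate minority positive (y == 1) epochs so positives and negatives are balanced."""
    rng = np.random.default_rng(seed)
    pos_idx = np.flatnonzero(y == 1)
    neg_idx = np.flatnonzero(y == 0)
    n_extra = max(0, len(neg_idx) - len(pos_idx))      # how many extra positives are needed
    extra = rng.choice(pos_idx, size=n_extra, replace=True)
    idx = np.concatenate([neg_idx, pos_idx, extra])
    rng.shuffle(idx)
    return X[idx], y[idx]
```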
PROPOSED SYSTEM:
• They proposed two different transfer methods: fine-tuning a pre-trained network, and extracting image features with the pre-trained network and then classifying the brain state with an SVM (a sketch of the feature-extraction variant follows below).
• Popular networks such as Alexnet, VGG16net, VGG19net, and Squeezenet were used to verify the performance of the proposed framework.
• Some studies proposed that the Bayesian model is a promising approach to capturing variability.
• This model is built on multitask learning, and variation in features such as spectral and spatial characteristics is often extracted.
• The EEG features from these subjects are inseparable in feature space; therefore, optimizing the classifier's parameters does not significantly improve the classification results.
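Below is a hedged sketch of the pre-trained-network-plus-SVM pipeline described above, using VGG16 as a fixed feature extractor and a scikit-learn SVM on the extracted features. The assumption that EEG trials are first rendered as 224x224 image-like representations (e.g. time-frequency maps), and all variable names, are illustrative; the cited studies' exact preprocessing may differ.

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from sklearn.svm import SVC

# Pre-trained VGG16 without its classification head; global average pooling gives 512-d features.
extractor = VGG16(weights="imagenet", include_top=False, pooling="avg")

def extract_features(images):
    """images: array of shape (n, 224, 224, 3), e.g. EEG trials rendered as RGB time-frequency maps."""
    return extractor.predict(preprocess_input(images.astype("float32")), verbose=0)

# Hypothetical usage, assuming image-like train/test trial arrays already exist:
# feats_train = extract_features(X_img_train)
# svm = SVC(kernel="rbf").fit(feats_train, y_train)
# accuracy = svm.score(extract_features(X_img_test), y_test)
```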
ADVANTAGE:
• To measure the performance of the algorithms, we used two indices, the accuracy rate and the information transfer rate (ITR), to compare the proposed PCA-CNN with other CNN algorithms in the literature (the standard ITR formula is worked through below).
• In this paper, we used the accuracy rate and ITR to evaluate P300 detection performance on two datasets from different subjects.
• The ITR of PCA-CNN is higher than that of the traditional SVM classification algorithm, which demonstrates the stability of the algorithm's performance.
• The system can implement a variety of different functions and can even be used in the homes of people with disabilities.
• Principal component analysis (PCA) is widely used for feature extraction and data dimensionality reduction.
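For reference, the commonly used Wolpaw formulation of ITR combines the number of possible targets N, the selection accuracy P, and the time per selection: bits per selection = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1)), scaled to bits per minute. The example values below (N = 36 for a 6x6 P300 speller, P = 0.9, 10 s per selection) are illustrative, not results from the paper.

```python
import math

def itr_bits_per_min(n_targets, accuracy, seconds_per_selection):
    """Wolpaw ITR: bits per selection scaled to bits per minute."""
    N, P = n_targets, accuracy
    if P >= 1.0:
        bits = math.log2(N)
    elif P <= 0.0:
        bits = 0.0
    else:
        bits = math.log2(N) + P * math.log2(P) + (1 - P) * math.log2((1 - P) / (N - 1))
    return bits * (60.0 / seconds_per_selection)

print(round(itr_bits_per_min(36, 0.90, 10.0), 1))  # ~25.1 bits/min for a 6x6 speller
```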
