Show simple item record

dc.contributor.author	Khan, Md. Sakib
dc.contributor.author	Salsabil, Nishat
dc.contributor.author	Alam, Md. Golam Rabiul
dc.contributor.author	Dewan, M. Ali Akber
dc.contributor.author	Uddin, Md Zia
dc.date.accessioned	2023-09-05T13:28:43Z
dc.date.available	2023-09-05T13:28:43Z
dc.date.created	2022-10-05T13:25:01Z
dc.date.issued	2022
dc.identifier.citation	Scientific Reports. 2022, 12, 14122.	en_US
dc.identifier.issn	2045-2322
dc.identifier.uri	https://hdl.handle.net/11250/3087557
dc.description.abstract	Recognizing the emotional state of humans from brain signals is an active research domain with several open challenges. In this research, we propose a signal-spectrogram-image-based CNN-XGBoost fusion method for recognizing three dimensions of emotion, namely arousal (calm or excitement), valence (positive or negative feeling) and dominance (without control or empowered). We used a benchmark dataset called DREAMER, in which EEG signals were collected from multiple stimuli along with self-evaluation ratings. In our proposed method, we first calculate the Short-Time Fourier Transform (STFT) of the EEG signals and convert them into RGB images to obtain the spectrograms. Then we use a two-dimensional Convolutional Neural Network (CNN) to train the model on the spectrogram images and retrieve features from the trained CNN through a dense layer of the network. We apply an Extreme Gradient Boosting (XGBoost) classifier to the extracted CNN features to classify the signals into the arousal, valence and dominance dimensions of human emotion. We compare our results with feature-fusion-based state-of-the-art approaches to emotion recognition. To do this, we applied various feature extraction techniques to the signals, including the Fast Fourier Transform, the Discrete Cosine Transform, Poincaré features, Power Spectral Density, Hjorth parameters and several statistical features. Additionally, we used Chi-square and Recursive Feature Elimination techniques to select the discriminative features. We formed the feature vectors by applying feature-level fusion, and applied Support Vector Machine (SVM) and Extreme Gradient Boosting (XGBoost) classifiers to the fused features to classify different emotion levels. The performance study shows that the proposed spectrogram-image-based CNN-XGBoost fusion method outperforms the feature-fusion-based SVM and XGBoost methods. The proposed method achieved an accuracy of 99.712% for arousal, 99.770% for valence and 99.770% for dominance in human emotion detection.	en_US
dc.language.iso	eng	en_US
dc.publisher	Nature Portfolio	en_US
dc.rights	Attribution 4.0 International
dc.rights.uri	http://creativecommons.org/licenses/by/4.0/deed.no
dc.title	CNN-XGBoost fusion-based affective state recognition using EEG spectrogram image analysis	en_US
dc.title.alternative	CNN-XGBoost fusion-based affective state recognition using EEG spectrogram image analysis	en_US
dc.type	Peer reviewed	en_US
dc.type	Journal article	en_US
dc.description.version	publishedVersion	en_US
dc.rights.holder	© The Author(s) 2022	en_US
dc.source.volume	12	en_US
dc.source.journal	Scientific Reports	en_US
dc.identifier.doi	10.1038/s41598-022-18257-x
dc.identifier.cristin	2058790
dc.source.articlenumber	14122	en_US
cristin.ispublished	true
cristin.fulltext	original
cristin.qualitycode	1
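
The abstract above describes a pipeline of STFT spectrograms, a 2-D CNN feature extractor and an XGBoost classifier. The Python code below is a minimal, illustrative sketch of that kind of pipeline, not the authors' implementation: the DREAMER data is replaced by random arrays, the sampling rate, window length, image size, network depth and XGBoost settings are assumptions made for illustration, and a single-channel spectrogram stands in for the RGB images used in the paper.

# Minimal sketch of an STFT-spectrogram -> 2-D CNN -> XGBoost pipeline.
# All shapes, hyperparameters and layer sizes are illustrative assumptions.
import numpy as np
from scipy.signal import stft
from tensorflow.keras import layers, models
from xgboost import XGBClassifier

FS = 128            # assumed EEG sampling rate (Hz)
N_SAMPLES = 512     # assumed number of samples per EEG segment

def to_spectrogram_image(signal, size=(64, 64)):
    """STFT magnitude of one EEG segment, scaled into a fixed-size 2-D 'image'
    (single-channel here for brevity; the paper converts spectrograms to RGB)."""
    _, _, Z = stft(signal, fs=FS, nperseg=64)
    S = np.log1p(np.abs(Z))
    S = (S - S.min()) / (S.max() - S.min() + 1e-8)
    out = np.zeros(size)  # crude crop/pad to a fixed shape
    out[:min(size[0], S.shape[0]), :min(size[1], S.shape[1])] = S[:size[0], :size[1]]
    return out

# Synthetic stand-in for DREAMER EEG segments and binary (e.g. arousal) labels.
X_raw = np.random.randn(200, N_SAMPLES)
y = np.random.randint(0, 2, size=200)
X_img = np.stack([to_spectrogram_image(x) for x in X_raw])[..., None]

# 2-D CNN trained on the spectrogram images; the dense layer serves as feature extractor.
cnn = models.Sequential([
    layers.Input(shape=X_img.shape[1:]),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu", name="feature_layer"),
    layers.Dense(2, activation="softmax"),
])
cnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
cnn.fit(X_img, y, epochs=3, batch_size=32, verbose=0)

# Extract dense-layer features and classify them with XGBoost.
feature_model = models.Model(cnn.input, cnn.get_layer("feature_layer").output)
features = feature_model.predict(X_img, verbose=0)
clf = XGBClassifier(n_estimators=100, max_depth=4)
clf.fit(features, y)
print("train accuracy:", clf.score(features, y))

On real data the same three-stage structure applies per emotion dimension (arousal, valence, dominance), with a proper train/test split in place of the training-set accuracy printed here.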


Associated file(s)


This item appears in the following collection(s)


Attribution 4.0 International
Except where otherwise noted, this item's license is described as Attribution 4.0 International