An EEG & eye-tracking dataset of ALS patients & healthy people during eye-tracking-based spelling system usage (2024)

Background & Summary

Amyotrophic Lateral Sclerosis (ALS) is a progressive neurodegenerative disorder that results in paralysis and severely impairs the communication abilities of patients in the advanced stages1,2. When patients have intact consciousness but motor function limited to voluntary eye movement control, muscle twitching, or blinking, they are said to be in a locked-in state (LIS)3,4. For LIS patients, eye-tracking-based augmentative and alternative communication (AAC) technologies are currently the most effective, and almost the only, means of communication5,6. Previous studies have suggested that eye-controlled on-screen keyboards are the most suitable method for facilitating communication for LIS patients, as these patients retain full cognitive capacity, and eye-control ability remains relatively intact despite advanced general paralysis5.

The most widely used eye-tracking method in AAC systems for individuals in LIS involves the detection and tracking of eye movements7. This technology tracks the direction of the user's gaze on a screen or interface, enabling them to select letters, words, or symbols to form messages: the system determines the position of the on-screen key the user is looking at and recognizes the selection8,9,10. Building on this method, an eye-tracking-based spelling communication system was developed to help Vietnamese individuals with impaired speech motor function, specifically those with ALS, communicate by typing text messages via an on-screen keyboard using their eye gaze11. The system's on-screen keyboard has a key layout and a quick-typing suggestion mechanism tailored specifically to the Vietnamese language. Six ALS patients with varying degrees of functional impairment, ranging from partial to full paralysis, were able to use the system to select letters to form words and sentences, and thus communicate freely. The study's design and paradigm are described in detail in the Methods section.

This data descriptor presents electroencephalogram (EEG) and eye-tracking (ET) recordings captured from a total of 176 participants: six distinct ALS patients and 170 healthy individuals. To provide context, the prevalence of ALS in the United States was reported to be 5.2 per 100,000 persons in 2015, according to the US National ALS Registry12. The recordings in the proposed dataset, named the EEGET-ALS dataset13, were obtained while the participants used the eye-tracking-based spelling communication system, as well as while they performed other requested tasks. More specifically, EEG recordings were acquired while both ALS patients and healthy participants executed specific tasks, such as imagining particular movements or demands and performing those movements. ALS patients performed the movements to the extent possible given their functional impairment. EEG and ET recordings were obtained simultaneously during use of the eye-tracking-based spelling communication system, as participants typed sentences corresponding to the movements or demands.

There have been previous studies with similar objectives, and several EEG and/or ET datasets have been publicly released14,15,16,17. However, most of these datasets focus solely on motor imagery or were recorded with a limited number of subjects. Furthermore, only a few published EEG and/or ET datasets concern ALS patients, and no datasets containing ET and/or EEG recordings of both ALS patients and healthy individuals have been found. To the best of our knowledge, no datasets with the same characteristics as the one reported here exist in the available open-access specialized repositories18,19,20. Notably, the proposed EEGET-ALS dataset13 contains simultaneous recordings of ET and EEG signals from six distinct ALS patients and 170 different healthy individuals, recorded using dedicated devices for each type of signal. These ET and EEG data were captured under an identical experimental design from healthy participants and from ALS patients in different advanced stages, who were either gradually descending into LIS or already completely locked in. The dataset13 thus documents a phase of ALS in which communication becomes progressively more challenging and eventually impossible without AAC. It includes recordings from ALS patients spanning several months as well as recordings from a large number of healthy individuals. Furthermore, the dataset13 contains EEG/ET data covering imagination tasks beyond motor imagery, as well as other activities. As a result, we believe that the EEGET-ALS dataset13 can serve as a valuable benchmark for improving spelling systems with brain-computer interfaces, studying motor imagery, investigating ALS, studying motor cortex function, and examining the progress of patients with motor impairments during rehabilitation, among other applications.

Methods

The experiment described in this study was approved by the Institutional Review Board in Human Research of the Dinh Tien Hoang Institute of Medicine (operating codes IRB-VN02010, issued by the Vietnam Ministry of Health, and IRB00010830 and IORG0009080, issued by the U.S. Department of Health and Human Services). The study was conducted in accordance with the guidelines established by the Dinh Tien Hoang Institute of Medicine. Written informed consent was obtained from all participants, including both healthy individuals and patients or their legal representatives, to permit publication of their data. The methods described here complement the in-depth description of results derived from this dataset presented in the Potential Applications section.

Participants

To recruit participants for the study, a recruitment notice for voluntary participation was widely disseminated. Subsequently, for those who volunteered, physicians within our research team conducted assessments and selected individuals to participate in data collection. These individuals needed to meet certain criteria: if they were not ALS patients, they had to have normal mobility and no prior history of neurological, psychological, or language disorders; if they were ALS patients, their eyes still had to function well enough to use the eye-tracking-based spelling system.

In this study, the data recording process involved six distinct ALS patients and 170 healthy individuals. The healthy participants, who had normal mobility and no prior history of neurological, psychological, or language disorders, were between 19 and 70 years of age. Of the ALS patients, four had a revised ALS Functional Rating Scale (ALSFRS-R)21 score of 0 in the LIS, while the remaining two had an ALSFRS-R score of 1 in the LIS. The participants were identified only by the aliases "id001" through "id170" and "ALS01" through "ALS06". The data recording process for healthy individuals was conducted once in the laboratory, whereas for each ALS patient it was performed multiple times at their home. The team visited each ALS patient every few weeks and conducted the data recording depending on their health status and convenience, with each recording lasting approximately one hour.

Spelling communication system

The spelling communication system employed in this study is an AAC system designed to aid Vietnamese individuals with impaired speech motor function, particularly those with ALS, in communicating with others by using their eye gaze to type text messages through an on-screen keyboard11. The system, depicted in Fig. 1, is composed of an eye-tracking device (b), a monitor displaying an on-screen keyboard interface (c), a computer processor responsive to the eye-tracking device, and a speaker (d). To use the system, the user (a) sits on a chair or lies on a specialized recliner and gazes at the on-screen keyboard displayed on the monitor, which is positioned approximately 80 cm in front of the user's eyes. The eye-tracking device (b), typically comprising one or two cameras located near the computer (usually below the monitor), tracks the user's eye movements and provides the computer processor with the user's gaze position. The system analyzes this position information and determines the specific key on the on-screen keyboard (c) that the user is looking at and wishes to select. The system's on-screen keyboard has a key layout and a quick-typing suggestion mechanism specifically designed for the Vietnamese language. Additionally, the system can generate speech corresponding to the text entered by the user and broadcast the sound through the speaker (d).

The eye-tracking based spelling communication system.


Prerequisites for performing the study

Each participant provided voluntary written consent for their involvement in the data recording process. For healthy individuals, data was recorded once in a spacious, enclosed laboratory. For ALS patients, visits were scheduled every one to four weeks with the agreement of their caregivers, taking into account the patients' health and well-being as well as resource optimization. However, due to the medical condition of one of the six patients and the available resources, only one visit was conducted for this patient; each of the remaining five patients received a total of ten visits. During each visit, a team of three members transported all equipment and set up all systems at the patient's residence or lodging, applying the same criteria for the patient's health and wellness. Additionally, all participants, both ALS patients and healthy individuals, were trained to use the eye-tracking-based spelling communication system, enabling them to form sentences by selecting letters with their eyes and to communicate freely.

Experimental paradigm

Prior to the recording experiment, participants were fully informed of the entire procedure, including an explanation of the steps and methods of the experiment, so that all participants fully understood the whole process. The experimenter was responsible for supervising the experimental process to ensure its reliability. The data recording was conducted in a spacious, enclosed area. Before EEG/ET data acquisition, participants were required to perform eye movements as directed for ET calibration, and the experimenter helped them put on the EEG acquisition equipment. During EEG/ET data acquisition, participants performed tasks as requested and instructed.

During one data collection experiment, each participant engaged in nine sessions, each corresponding to a specific scenario related to a common human action or demand. These scenarios are described in Table 1.


For scenarios 1 to 7, participants had to perform three types of tasks. First, they imagined the movement or demand corresponding to the scenario for a period of 5 to 7 seconds with their eyes closed (task i). Then, they physically performed three consecutive movements corresponding to the scenario with their eyes open, with short 1–2 second rests between executions (task ii). Finally, they used the eye-tracking-based spelling communication system to type the corresponding Vietnamese sentence related to the movement or demand (task iii). In these seven scenarios, participants alternated between tasks i and ii three times before proceeding to task iii. Participants with ALS performed the physical movements to the best of their ability given their functional limitations; if they were unable to perform the movements, they refrained from doing so.

For scenarios 8 and 9, participants performed task i three times before completing task iii (they did not perform task ii). In all nine scenarios, participants were given a 5-second resting interval before starting each task.

Figure 2 illustrates a recording session of a participant for one scenario. Tasks i and ii were executed alternately three times, and finally task iii was performed once. For scenarios 8 and 9, the participant did not perform task ii and instead executed task i three times in a row before performing task iii. During the breaks between tasks, which lasted 5–7 seconds, the participant looked at the screen and rested their mind. The participant was asked not to perform any action unrelated to the experimental scenario, such as leg shaking, arm shaking, or stretching. The experimenter guided the participant to start or stop each task. EEG recordings were acquired during the execution of tasks i and ii, while EEG and ET recordings were obtained simultaneously during the execution of task iii. Each task execution corresponds to an event in the EEG signal.

A recording session of a single participant.


System for data acquisition

The EEG and ET data were acquired using Recorder Software running on a Core i7 computer. This software, developed by our research team, enables simultaneous recording of both EEG and ET data, and can capture ET data while the user is operating the spelling communication system described above. The EEG and ET data streams are synchronized via Lab Streaming Layer (LSL), a protocol that enables streamlined and synchronized collection of time-series measurements across multiple devices; a minimal consumer-side sketch is given below. In addition, the Recorder Software enables direct annotation in the EEG signal of the events described in the Experimental Paradigm section.
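To illustrate the synchronization mechanism, the following is a minimal sketch of an LSL consumer using the pylsl package. The stream types ('EEG', 'Gaze') are illustrative assumptions; the actual stream identifiers used by our Recorder Software are not part of this description.

```python
# Minimal sketch of LSL-based synchronized acquisition with pylsl.
# Stream types below are illustrative assumptions, not the Recorder
# Software's actual identifiers.
from pylsl import StreamInlet, resolve_stream

# Resolve the EEG and gaze streams advertised on the local network.
eeg_inlet = StreamInlet(resolve_stream('type', 'EEG')[0])
et_inlet = StreamInlet(resolve_stream('type', 'Gaze')[0])

for _ in range(10):
    # Each pull returns (sample, lsl_timestamp); LSL timestamps share
    # a common clock, which is what enables cross-device alignment.
    eeg_sample, eeg_ts = eeg_inlet.pull_sample(timeout=1.0)
    gaze_sample, gaze_ts = et_inlet.pull_sample(timeout=1.0)
    if eeg_sample and gaze_sample:
        print(f"EEG@{eeg_ts:.4f}: {eeg_sample[:4]}  Gaze@{gaze_ts:.4f}: {gaze_sample}")
```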

To record EEG data, the Emotiv EPOC Flex22 device was used. Its 32 electrodes are placed following an adaptation of the international 10–20 standard23, and it records at a sampling frequency of 128 Hz. We used saline sensors for the headset to prioritize participant comfort without compromising signal quality. Figure 3 illustrates the positions of the electrodes, with blue circles representing the positions for mounting the 32 electrodes in the 10-10 system (extended from 10–20) and two black circles marking the positions of the reference electrodes. ET data were captured using a Tobii Eye Tracker 524 at a sampling frequency of 30 Hz.

The position of EEG electrodes following the 10-10 standard.


Data Records

Raw data folders and usage notes

Our EEGET-ALS dataset13 is available at https://doi.org/10.6084/m9.figshare.c.6910027.v1.

In the EEGET-ALS dataset13, data was collected from each participant in 9 recording sessions, corresponding to the 9 experimental scenarios, with each recording session lasting approximately 2 minutes. Each recording session includes events corresponding to 7 steps in one experimental scenario, and the participant was allowed to take short breaks (about 5 seconds) between steps to minimize fatigue. The data is organized into separate folders based on the participants' identification codes. The structure of the dataset13 is depicted in Fig. 4. The root folder contains the recorded data of 176 participants, with all personal identifying information removed. Data for each individual is stored in a separate subfolder, which includes age and sex information as well as data folders for the different experimental scenarios (scenario 1, scenario 2, etc.). The data folder for each scenario contains the following:

  • Meta files (scenario.json, info.json) that provide additional information about the sample: scenario.json describes the recorded scenario, while info.json presents participant information (e.g., sex, age).

  • EEG data in EDF format (EEG.edf).

  • A meta file (eeg.json) that provides information about the recorded EEG data, such as the number of channels and sampling rate.

  • A meta file (EEGTimeStamp.txt) that stores the timestamps of the saved data in EEG.edf.

  • ET data in CSV format (ET.csv).

Structure of dataset directory.


The main raw data for each scenario is contained in EEG.edf and ET.csv files.
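To illustrate how this layout can be traversed programmatically, the following is a minimal Python sketch that walks one participant's folder and reads the meta files. The root path, participant alias, and the exact scenario-folder naming are assumptions based on the structure described above and in Fig. 4.

```python
# Sketch: iterate one participant's folder per the layout of Fig. 4.
# Root path and scenario-folder naming ("scenario 1", ...) are
# assumptions for illustration only.
import json
from pathlib import Path

participant_dir = Path("EEGET-ALS/id001")  # hypothetical root/alias

for scenario_dir in sorted(participant_dir.glob("scenario*")):
    with open(scenario_dir / "info.json") as f:
        info = json.load(f)          # participant sex, age, ...
    with open(scenario_dir / "scenario.json") as f:
        scenario = json.load(f)      # recorded-scenario description
    eeg_path = scenario_dir / "EEG.edf"
    et_path = scenario_dir / "ET.csv"
    print(scenario_dir.name, info, eeg_path.exists(), et_path.exists())
```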

EEG.edf

Each EEG file contains signals from 32 channels, stored in the European Data Format (EDF). Each EEG data file corresponds to a specific experimental scenario and contains multiple events, which can be extracted using the MNE package for Python25; a minimal loading sketch is given below. These events correspond to the participant's execution of tasks, as described in the Experimental Paradigm section. Figure 5 provides an example of the different sessions in a data sample. In the figure, the bottom bar represents the timeline of the recorded signal, with the currently visualized signal highlighted in gray.
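The following sketch shows how an EEG.edf file and its annotated events can be read with MNE, as mentioned above; the file path assumes the working directory is a scenario folder.

```python
# Minimal sketch: read EEG.edf and list its annotated events with MNE.
import mne

raw = mne.io.read_raw_edf("EEG.edf", preload=True)
print(raw.info)                      # 32 channels, 128 Hz expected

# Convert the annotations written by the Recorder Software into
# an events array usable by mne.Epochs.
events, event_id = mne.events_from_annotations(raw)
print(event_id)                      # mapping: annotation label -> code
print(events[:5])                    # rows of [sample, 0, code]
```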

Preview of data in an EEG.edf file using the MNE tool.


ET.csv

Figure 6 shows a sample of the data in ET.csv. Each ET data frame is stored in a separate row containing five fields of data, as listed in the figure; a loading sketch is given below.

Sample of data in ET.csv.

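The file can be loaded with pandas as sketched below. Since the five field names are shown only in Fig. 6, none are hard-coded here; inspecting the header at load time avoids guessing them.

```python
# Sketch: load ET.csv with pandas. The five per-row fields are listed
# in Fig. 6; we read the header rather than assume column names.
import pandas as pd

et = pd.read_csv("ET.csv")
print(et.shape, list(et.columns))    # expect five columns per row
print(et.head())                     # first few gaze-data frames
```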

For further information and details about the eye-tracking based spelling communication system, the layout of the on-screen keyboard, and how users interact with it, readers can refer to our earlier publication11.

For healthy participants, each session corresponding to a scenario was performed continuously in approximately 2 to 3 minutes. The EEG and ET data of each session were recorded without interruption (i.e., they include the data recorded during the rest periods between steps). All sessions were usually recorded consecutively over a period of approximately 40 minutes, with a break of approximately 30 seconds between sessions for participants to fully relax.

For 5 of the 6 participants with ALS, we conducted data acquisition 10 times (9 sessions each time, the same as for healthy subjects) over a period of 3 to 5 months. The recordings were usually 1 week apart, but for objective reasons, such as the health status of the participants or the epidemic situation, some recordings were 2 to 4 weeks apart. The recording time of each of the 10 data acquisition visits is described in the file 'recording_time.json' (Fig. 7 shows an example). For this group of participants, the data folders are named according to the syntax "ALS_[participant code]". For the remaining participant with ALS, we were only able to perform one data collection visit.

Example of description in file recording_time.json.


To facilitate ease of use, we have annotated certain events within the recorded data. Each annotation corresponds to one of the tasks (i, ii, iii, and "resting") outlined in the Experimental Paradigm section, as detailed in Table 2; a sketch of extracting task epochs from these annotations follows the table.

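As referenced above, the continuous recording can be cut into per-task epochs around each annotation using MNE. The 5-second window below is an assumption roughly matching the stated 5–7 second task durations; the exact label strings come from the file itself via event_id.

```python
# Sketch: cut task epochs around each annotated event (labels per
# Table 2; their exact strings are read from the file via event_id).
import mne

raw = mne.io.read_raw_edf("EEG.edf", preload=True)
events, event_id = mne.events_from_annotations(raw)

# The 5 s window is an assumption matching the ~5-7 s task durations.
epochs = mne.Epochs(raw, events, event_id=event_id,
                    tmin=0.0, tmax=5.0, baseline=None, preload=True)
print(epochs)                        # one epoch per task execution
```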

Artifacts caused by physical tasks

Artifacts may occur when participants perform the physical tasks. Users are advised to employ alignment and preprocessing methods to address these potential issues when exploring our dataset.

Technical Validation

EEG and ET recording setup

The participants were provided with specific instructions to reduce head movements during EEG and ET data collection. Furthermore, they were instructed to avoid any extraneous actions unrelated to the experimental scenario, such as leg or arm shaking or stretching, that could potentially introduce unwanted noise into the recordings. The recordings were conducted in a spacious, quiet and enclosed laboratory or room. To ensure consistency in recording conditions, the data were recorded at the same time intervals every day, from 8 AM to 11 AM and 1 PM to 5 PM.

EEG Electrodes placement

The electrode placement adhered to the international 10-10 system, whereby 34 electrodes were positioned on the scalp: 32 electrodes for data acquisition and 2 for reference (Fig. 3). The electrode placement procedure was conducted by a qualified technician and validated by cross-checking against a scalp map. The reference electrodes were located behind the participant's ears.

ET Calibration

Prior to data acquisition of EEG and ET, participants were required to move their eyes as instructed for ET calibration. Specifically, participants were directed to fixate on a series of calibration targets displayed on the monitor of the eye-tracking based spelling communication system. These calibration targets were presented at various locations on the screen to ensure precise gaze tracking across the entire display.

Preprocessing steps

The raw EEG/ET data are provided without any preprocessing in order to preserve their integrity.

Quality control measures

The Emotiv headset provides electrode impedance ratings from 0 (very bad) to 4 (very good), from which overall EEG quality was calculated as the average rating of the three lowest-rated channels, normalized to the maximum score of 4. The EEG electrode impedance was checked before and after each recording session to ensure electrode contact quality. If the overall quality computed from the three worst channels fell below 80%, the recorded data were discarded; this rule is expressed in the short sketch below. The raw EEG data was visually inspected to identify any clearly visible artifacts or noise originating from external environmental factors during recording. If such noise or artifacts were identified, the corresponding recording was eliminated from further analysis. The ET data was also visually inspected to ensure that the gaze data was accurate and free from artifacts. Please note that despite thorough checks, the distributed EEG/ET data may still contain some noise or artifacts, as only clearly visible issues could be removed. Users should take this into consideration when analyzing the data.
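For clarity, the stated quality rule can be expressed in a few lines of Python; the example ratings are hypothetical.

```python
# Sketch of the stated quality rule: average the three lowest-rated
# channels (ratings 0-4), normalize by the maximum of 4, and keep
# the recording only if the result is at least 80%.
def overall_quality(channel_ratings):
    worst3 = sorted(channel_ratings)[:3]
    return 100.0 * (sum(worst3) / len(worst3)) / 4.0

ratings = [4] * 29 + [3, 3, 4]       # hypothetical 32-channel ratings
q = overall_quality(ratings)          # -> 83.3
print(q, "keep" if q >= 80 else "discard")
```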

Potential applications

The EEGET-ALS dataset13 collected in this study has the potential to be utilized for various research applications, such as determining the degree of participant attention to improve selection speed in gaze-controlled systems, and person identification (identifying an individual from a group of people).

To address the attention-degree determination problem, we used the ET data to label the EEG data, which was processed to extract features for classification. Attention data was defined as data collected while participants looked at a key in order to select it. Based on this definition, the EEG data was separated, using the ET data, into two groups of samples: attention data (positively labeled) and inattention data (negatively labeled). We then extracted Power Spectral Density (PSD) and Common Spatial Patterns (CSP26) features from the EEG data and passed them to a simple classifier, such as a Support Vector Machine (SVM) or an Artificial Neural Network (ANN), to determine when users were paying attention; a sketch of such a pipeline is given below. On initial data, this method achieved an accuracy of approximately 80% under cross-validation.
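The following is a minimal sketch of the CSP-to-SVM part of this pipeline using mne.decoding.CSP and scikit-learn. The arrays are random placeholders standing in for the ET-labeled epochs; data loading, PSD features, and the specific classifier settings we used are omitted.

```python
# Sketch: CSP features from labeled EEG epochs -> SVM classifier.
# X: (n_epochs, n_channels, n_times); y: 1 = attention, 0 = inattention
# (labels derived from ET data). Placeholder data stands in for epochs.
import numpy as np
from mne.decoding import CSP
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X = np.random.randn(100, 32, 256)    # placeholder epochs (128 Hz, 2 s)
y = np.random.randint(0, 2, 100)     # placeholder attention labels

clf = Pipeline([
    ("csp", CSP(n_components=4, log=True)),   # spatial-filter features
    ("svm", SVC(kernel="rbf")),
])
print(cross_val_score(clf, X, y, cv=5).mean())
```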

In addition, person identification using EEG signals could improve the user experience through personalization. To validate our data, we conducted several experiments on the recorded data for this problem. The EEG data was segmented and split into training and testing sets, and the learned model was then used to determine the identity of the testing signal. We employed four mechanisms for this investigation: Support Vector Machine (SVM) learning and classification on the Power Spectral Density of the signal, SVM learning and classification on features based on the inter-hemispheric amplitude ratio (IHAR27), Convolutional Neural Network (CNN) learning and classification on the raw signal, and Long Short-Term Memory Convolutional Neural Network (CNN-LSTM) learning and classification on the raw signal (a sketch of the latter is given below). The results provided in Table 3 demonstrate that these methods worked well on our EEGET-ALS dataset13.
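As referenced above, the following is an illustrative PyTorch stand-in for a CNN-LSTM identification model over raw EEG; it is a sketch in the spirit of ref. 31, not the exact architecture used there, and all layer sizes are assumptions.

```python
# Sketch of a CNN-LSTM person-identification model over raw EEG
# (illustrative stand-in, not the exact architecture of ref. 31).
# Input: (batch, 32 channels, n_times); output: subject-id logits.
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_channels=32, n_subjects=170):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),                 # temporal downsampling
        )
        self.lstm = nn.LSTM(64, 64, batch_first=True)
        self.fc = nn.Linear(64, n_subjects)

    def forward(self, x):
        h = self.conv(x)                     # (B, 64, T/4)
        h, _ = self.lstm(h.transpose(1, 2))  # sequence over time steps
        return self.fc(h[:, -1])             # classify from last step

model = CNNLSTM()
print(model(torch.randn(8, 32, 256)).shape)  # -> (8, 170)
```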


Moreover, a common and essential task in the field of EEG is the classification of motor imagery (MI). MI classification involves interpreting EEG signals to identify and distinguish patterns related to imagined movements or actions. This task has applications in brain-computer interfaces (BCIs) and assistive technologies, where individuals can control devices or computers simply by thinking about specific motor actions. Accurate classification of motor imagery holds great promise for enhancing the quality of life of individuals with physical disabilities and advancing our understanding of brain function and control. To evaluate MI classification on our dataset, two types of experimental settings, corresponding to two parts of the dataset, were conducted. We selected two classical algorithms (CSP26 and Euclidean Alignment, EA28) to enhance the extracted informative features, and three machine learning or deep learning methods (SVM, EEG-ITNet29, and EEGNet30) for classification; a minimal sketch of EA is given after the tables below. Models were trained to predict two different sets of labels: 3 labels (LR0: lift left hand, lift right hand, resting) and 4 labels (LRF0: lift left hand, lift right hand, lift leg, resting). Model performance was evaluated with two metrics: Balanced Accuracy (BAC) and Cohen's Kappa (Kappa). The train/test split for the experiment with only healthy participants in our EEGET-ALS dataset13 is depicted in Fig. 8, and the corresponding MI classification results are presented in Table 4. The train/test split for the experiment with both healthy and ALS participants in our EEGET-ALS dataset13 is illustrated in Fig. 9, and the corresponding MI classification results are presented in Table 5.

Train/test separation for the experiment with only healthy participants.


Train/test separation for the experiment with healthy and ALS participants for MI classification.

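As referenced above, Euclidean Alignment whitens each subject's trials by the inverse square root of that subject's mean spatial covariance, bringing average covariances to the identity across subjects. The following is a minimal sketch of this step; the placeholder data and shapes are assumptions.

```python
# Sketch of Euclidean Alignment (EA, ref. 28): whiten each subject's
# trials by the inverse square root of their mean spatial covariance.
import numpy as np
from scipy.linalg import fractional_matrix_power

def euclidean_alignment(X):
    """X: (n_trials, n_channels, n_times) for ONE subject."""
    # Mean spatial covariance across this subject's trials.
    R = np.mean([x @ x.T / x.shape[1] for x in X], axis=0)
    R_inv_sqrt = np.real(fractional_matrix_power(R, -0.5))
    return np.array([R_inv_sqrt @ x for x in X])

X = np.random.randn(50, 32, 256)      # placeholder trials
X_aligned = euclidean_alignment(X)
print(X_aligned.shape)                 # mean covariance now ~identity
```

BAC and Kappa can then be computed with scikit-learn's balanced_accuracy_score and cohen_kappa_score on the predicted labels.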

For person identification, the architectures of the deep models were preserved as presented in the original paper31, with a learning rate of 3e-4, a batch size of 32, and the Adam optimizer.

For MI classification, EEG-ITNet29, for example, was trained with the following hyperparameters: a batch size of 16, a learning rate of 3e-4, a maximum of 150 epochs, and the Adam optimizer with a cross-entropy objective. Each EEG sample was cropped to a duration of 2 seconds. The sketch below expresses this setup in code.
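A minimal PyTorch rendering of the stated training configuration follows. The linear model is a deliberately trivial stand-in for EEG-ITNet, and the tensors are placeholders; only the hyperparameters come from the text above.

```python
# Sketch of the stated MI training setup: batch 16, lr 3e-4, up to
# 150 epochs, Adam, cross-entropy, 2 s crops (128 Hz -> 256 samples).
# The model is a trivial stand-in for EEG-ITNet.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

X = torch.randn(320, 32, 256)          # placeholder 2 s EEG crops
y = torch.randint(0, 3, (320,))        # LR0 labels: left/right/resting
loader = DataLoader(TensorDataset(X, y), batch_size=16, shuffle=True)

model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 256, 3))  # stand-in
opt = torch.optim.Adam(model.parameters(), lr=3e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(150):
    for xb, yb in loader:
        opt.zero_grad()
        loss = loss_fn(model(xb), yb)   # cross-entropy objective
        loss.backward()
        opt.step()
```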

Code availability

• For the Motor Imagery task, code is available at: https://github.com/txdat/bci-motor-imagery/blob/master/notebooks/eeg_final.ipynb

  • For person identification, code is available at: https://github.com/dangkh/VINIF_IdentifyPerson

References

  1. Brownlee, A. & Bruening, L. M. Methods of communication at end of life for the person with amyotrophic lateral sclerosis. Top. Lang. Disord. 32(2), 168–185 (2012).


  2. Chaudhary, U., Birbaumer, N. & Ramos-Murguialday, A. Brain-computer interfaces for communication and rehabilitation. Nat. Rev. Neurol. 12, 513–525 (2016).


  3. Bauer, G., Gerstenbrand, F. & Rumpl, E. Varieties of the locked-in syndrome. J. Neurol. 221, 77–91 (1979).


  4. Kübler, A. & Birbaumer, N. Brain-computer interfaces and communication in paralysis: Extinction of goal directed thinking in completely paralyzed patients? Clin. Neurophysiol. 119, 2658–2666 (2008).


  5. Calvo, A. et al. Eye Tracking Impact on Quality-of-Life of ALS Patients. 11th International Conference on Computers Helping People with Special Needs, Linz (AT). 5105, 70–77, https://doi.org/10.1007/978-3-540-70540-6_9 (2008).


  6. Beukelman, D., Fager, S. & Nordness, A. Communication Support for People with ALS. Neurol. Res. Int. 2011, 714693 (2011).


  7. Raupp, S. Keyboard layout in eye gaze communication access: typical vs. ALS. Doctoral dissertation, East Carolina University (2013).

  8. Yang, S., Lin, C., Lin, S. & Lee, C. Design of virtual keyboard using blink control method for the severely disabled. Computer Methods and Programs in Biomedicine 111(2), 410–418 (2013).


  9. Yang, S., Lin, C., Lin, S. & Lee, C. Text Entry by Gaze: Utilizing Eye-Tracking. In Text Entry Systems: Mobility, Accessibility, Universality, Chapter 9, 175–187 (2007).

  10. Ghosh, S., Sarcar, S., Sharma, M. & Samanta, D. Effective virtual keyboard design with size and space adaptation. 2010 IEEE Students Technology Symposium (TechSym). (2010).

  11. Nguyen, M. H. et al. On-screen keyboard controlled by gaze for Vietnamese people with amyotrophic lateral sclerosis. Technology and Disability 35(1), 53–65, https://doi.org/10.3233/TAD-220391 (2023).


  12. Mehta, P. et al. Prevalence of amyotrophic lateral sclerosis - United States, 2015. MMWR Morb. Mortal. Wkly. Rep. 67, 1285–1289 (2018).

  13. Ngo, T. D. et al. An EEG & eye-tracking dataset of ALS patients & healthy people during eye-tracking-based spelling system usage. figshare https://doi.org/10.6084/m9.figshare.c.6910027.v1 (2024).

  14. Gorges, M. et al. Eye movement deficits are consistent with a staging model of pTDP-43 pathology in Amyotrophic Lateral Sclerosis. PLoS One 10(11), e0142546 (2015).


  15. Jaramillo-Gonzalez, A. et al. A dataset of EEG and EOG from an auditory EOG-based communication system for patients in locked-in state. Scientific Data 8, https://doi.org/10.1038/s41597-020-00789-4 (2021).

  16. Kaya, M. et al. A large electroencephalographic motor imagery dataset for electroencephalographic brain computer interfaces. Scientific Data 5, 180211, https://doi.org/10.1038/sdata.2018.211 (2018).


  17. Ma, J. et al. A large EEG dataset for studying cross-session variability in motor imagery brain-computer interface. Scientific Data 9, https://doi.org/10.1038/s41597-022-01647-1 (2022).

  18. BNCI Horizon 2020 http://bnci-horizon-2020.eu/database (2020).

  19. PhysioNet: The research resource for complex physiological signals https://physionet.org/about/database/ (2020).

  20. BrainSignals: Publicly available brain signals EEG MEG ECoG data http://www.brainsignals.de/ (2020).

  21. Cedarbaum, J. M. et al. The ALSFRS-R: a revised ALS functional rating scale that incorporates assessments of respiratory function. J. Neurol. Sci. 169, 1–2 (1999).


  22. Emotiv EPOC Flex. https://www.emotiv.com/epoc-flex/ (accessed 1 November 2023).

  23. Jasper, H. The Ten-Twenty Electrode System of the International Federation. Electroencephalography and Clinical Neurophysiology 10, 371–375 (1958).


  24. Tobii Eye Tracker 5. https://gaming.tobii.com/product/eye-tracker-5/ (accessed 1 November 2023).

  25. MNE package. https://mne.tools/stable/index.html (accessed 1 November 2023).

  26. Ramoser, H., Müller-Gerking, J. & Pfurtscheller, G. Optimal spatial filtering of single trial EEG during imagined hand movement. IEEE Transactions on Rehabilitation Engineering 8(4), 441–446 (2000).


  27. Jayarathne, I., Cohen, M. & Amarakeerthi, S. Person identification from EEG using various machine learning techniques with inter-hemispheric amplitude ratio. PLoS One 15(9), e0238872 (2020).


  28. He, H. & Wu, D. Transfer learning for brain-computer interfaces: A Euclidean space data alignment approach. IEEE Transactions on Biomedical Engineering 67(2), 399–410 (2019).


  29. Salami, A., Andreu-Perez, J. & Gillmeister, H. EEG-ITNet: An explainable inception temporal convolutional network for motor imagery classification. IEEE Access 10, 36672–36685 (2022).


  30. Lawhern, V. J. et al. EEGNet: a compact convolutional neural network for EEG-based brain–computer interfaces. Journal of neural engineering 15(5), 056013 (2018).


  31. Sun, Y., Lo, F. P.-W. & Lo, B. EEG-based user identification system using 1D-convolutional long short-term memory neural networks. Expert Systems with Applications 125, 259–267 (2019).



Acknowledgements

Research is supported by Vingroup Innovation Foundation (VINIF) in project code VINIF.2020.DA10.

Author information

Authors and Affiliations

  1. University of Engineering and Technology, Vietnam National University, Hanoi, Vietnam

    Thi Duyen Ngo, Hai Dang Kieu, Minh Hoa Nguyen & Thanh Ha Le

  2. Vietnam-Korea Institute of Science and Technology, Ministry of Science and Technology, Hanoi, Vietnam

    The Hoang-Anh Nguyen

  3. Vietnam Military Medical University, Hanoi, Vietnam

    Van Mao Can & Ba Hung Nguyen


Contributions

Thi Duyen Ngo: Study design and conceptualization; Data collection; Data curation; Manuscript writing; Manuscript correction; Supervision. Hai Dang Kieu: Data collection; Data curation; Data validation; Manuscript writing. Minh Hoa Nguyen: Data collection; Data validation. Hoang-Anh Nguyen The: Study design and conceptualization. Can Van Mao: Study design and conceptualization. Nguyen Ba Hung: Study design and conceptualization. Thanh Ha Le: Study design and conceptualization; Data curation; Manuscript correction; Supervision.

Corresponding authors

Correspondence to Thi Duyen Ngo or Thanh Ha Le.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Ngo, T.D., Kieu, H.D., Nguyen, M.H. et al. An EEG & eye-tracking dataset of ALS patients & healthy people during eye-tracking-based spelling system usage. Sci Data 11, 664 (2024). https://doi.org/10.1038/s41597-024-03501-y

