• Important Dates for the Scientific Challenge


Dataset Open Test (Phase I): 01 April ~ 31 May, 2024
Abstract submission for SC: 15 April ~ 15 June, 2024
Dataset Open Test (Phase II): 15 June ~ 15 July, 2024
Notification of acceptance: 01 August, 2024
Paper final submission: 01 September, 2024
Early registration: 01 October, 2024
     

Note:
• All accepted abstracts can be submitted as full-text scientific papers within the specified deadline for publication under a Topical Collection of the IUPESM journal Health and Technology. The full-text articles will be subject to a peer-review process and subsequent publication in the Health and Technology Journal, Volume 14, Issue 6, 2024. Health and Technology is expected to publish the SC papers on 1 November 2024.

  • Scientific Challenge

    TITLE: “Mind-Reading Emotions”

     

In recent years, neuroimaging techniques like functional magnetic resonance imaging (fMRI) have revolutionized our understanding of the human brain and its connection to emotions. Emotions are intrinsic to the human experience and significantly influence cognition, behavior, and mental well-being. Beyond their neural origins, emotions transcend the confines of the brain, leaving indelible imprints on the body. These manifestations ripple across multiple levels, encompassing neurobiological shifts, physiological responses, outward expressions, overt behaviors, and introspective reflections. Thus, the body emerges as a gateway to the mind, and monitoring measurable bodily changes offers a profound glimpse into otherwise intangible emotional states.

     

Is it possible to identify someone’s emotions directly from readings of their brain and body?

    Accurately identifying emotional states is crucial for understanding how emotions affect us and developing novel avenues for diagnosis and intervention. By combining advanced brain scanning methods with the study of bodily reactions, scientists aim to decode how emotions work in our brains and bodies.

In this challenge, you are given access to a rich dataset from 20 participants, each contributing 30 trials, each with 25 seconds of data from three simultaneously recorded sources:

    1) pre-processed fMRI data,

    2) photoplethysmography (PPG) data,

    3) respiratory data.

     

As a result, the dataset contains a total of N=600 examples of emotional states (20 participants × 30 trials).

     

Each trial was collected while the participant watched an emotion-provoking video clip belonging to one of three predefined classes (positive, negative, or neutral valence). Valence is a characteristic of an emotion related to its pleasantness or unpleasantness. Positive valence is associated with feelings of happiness, joy, and contentment, making experiences appealing and desirable. In contrast, negative valence involves emotions such as sadness, anger, and fear, which make experiences unappealing and undesirable. Each trial was rated by the participant on a valence scale of 9 levels, from -4 (maximum negative valence) up to 4 (maximum positive valence). So, each video has a valence class {positive, negative, neutral} and a valence rating {-4, -3, -2, -1, 0, 1, 2, 3, 4}.

     

     

    GOAL

     

Your task is to identify (predict) the valence class and valence rating of the emotion felt by the participant in each trial, just by looking at their brain activity and bodily physiological responses.

     

    Quick Start

To participate, either individually or as a team, you must first register for the challenge in order to gain access to the competition dataset.

• Register on the challenge website (https://ttmd.cycu.edu.tw); you will receive an email with a link to the training dataset.
• Download the dataset to use for developing your solution.
• Develop your solution. You can use whichever technology or framework you wish, since you will be asked to submit only the resulting labels for the unannotated data.

• Submit your results for scoring. Improperly formatted entries will not be accepted. You are allowed 10 attempts for Phase I and 5 attempts for Phase II; for each attempt, you will receive (limited) feedback to help you improve your model.

    • Submit an overview paper with the best solutions to icbhi2024.sc@gmail.com
    • Attend ICBHI2024, and present your work there (at least one team member must be registered and attend ICBHI2024).

    For those wishing to compete officially, please follow the additional TWO steps described in the Rules and Deadlines.

     

  • Rules and Deadlines

    This challenge is divided into two phases: Phase I and Phase II.

    In Phase I, participants will have full access to all the data within the Train set and may submit 10 entries for the Test set to obtain feedback regarding their models’ performance.

    In Phase II, participants will get access to the true labels of 9 trials (3 for each CLASS) from each of the subjects in the Test set. During this phase, participants may use this subject-specific data to fine-tune their models and submit 5 additional entries.

Unused entries from Phase I will not be carried over to Phase II. For both phases, submissions that cannot be scored, due to missing components or improper formatting, will not count as an entry.
All deadlines occur at 12:00 GMT(+8) on the dates mentioned below.


All official entries must be received no later than 12:00 GMT(+8) on July 15, 2024. In the interest of fairness to all participants, late entries will not be accepted or scored. To be eligible, you must do all of the following:

1. Register; you should include your username, email address, and team name.
2. Submit at least one entry that can be scored during Phase I by the defined deadline
  (12:00 GMT(+8), May 31, 2024).
3. Submit an abstract (up to 1 page) describing your approach to the Scientific Challenge to ICBHI’24 no later than June 15, 2024.
4. You will be notified by email from ICBHI’24 whether your abstract has been accepted by August 01, 2024. Submit a full paper describing your work for the Scientific Challenge following the
  ICBHI’24 rules no later than September 01, 2024, using the regular submission system. Include the overall score for at least one Phase I or Phase II entry in your paper.
5. Attend ICBHI’24 (Oct. 30 ~ Nov. 02, 2024) and present your work there (at least one team member must attend ICBHI).

Please do not submit an analysis of this year's Scientific Challenge data to other conferences or journals until after ICBHI’24 has taken place, so that the competitors are able to discuss the results in a single forum.

All accepted abstracts can be submitted as full-text scientific papers within the specified deadline for publication under a Topical Collection of the IUPESM journal Health and Technology. Authors are requested to make sure that all data and materials, as well as software applications or custom code, support their published claims and comply with field standards. Authors are required to make materials, data, code, and associated protocols available to readers (an open link, such as a Git repository, must be provided). The full-text articles will be subject to a peer-review process and subsequent publication in the Health and Technology Journal, Volume 14, Issue 6, 2024.

  • Dataset description 

The dataset includes data from 20 participants who performed an fMRI task (Figure 1). The fMRI task followed a block design consisting of a total of 30 trials. Each trial was composed of three sequential blocks: (1) fixation cross (15 seconds), in which a white cross on a black background was shown to the participants and acted as a baseline; (2) video block (15 seconds), in which the participants watched an emotion-provoking video clip sourced from the CAAV database (Di Crosta et al., 2020), previously validated and associated with an emotional valence level; and (3) response block, in which participants rated the previously watched video in terms of emotional arousal and valence.


Figure 1. Illustration of a single trial of the fMRI task used for data acquisition.

     

For convenience, we have already segmented each trial into a 25-second segment of relevant data (Figure 2):

    • The last 5 seconds of the fixation cross
    • The 15 seconds of the video
    • The first 5 seconds after the end of the video (to account for the hemodynamic delay)

     

    Furthermore, the fMRI data was converted to a 2D data format using the Brainnetome (BN) anatomical atlas containing 246 regions (Fan et al., 2016). For each region of the atlas, the mean BOLD signal was extracted.
     

Thus, for each trial, a 246 x 25 matrix of fMRI data is available (regions x time samples); the fMRI data (BOLD signal) was recorded at a rate of 1 sample per second. Concurrently, for each video, PPG and respiratory physiological data are available for the same 25 seconds. The physiological data was collected at 400 Hz, so each physiological channel yields 25 s × 400 Hz = 10,000 samples per trial (see the slicing sketch below). Each trial has two associated labels. The first label (CLASS) will be used for classification purposes and contains three classes: -1, 0, and 1, corresponding to negative, neutral, and positive valence, respectively. The second label (LEVEL) corresponds to the valence level reported by the participant (a value between -4 and 4).
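
As an illustrative sketch of how these windows map onto the array dimensions, the arrays below are dummies with the documented shapes (the variable names are hypothetical):

    import numpy as np

    fmri_trial = np.zeros((246, 25))   # regions x time samples, at 1 sample/s
    ppg_trial = np.zeros((1, 10000))   # 25 s x 400 Hz = 10,000 samples

    # 5 s baseline, 15 s video, 5 s post-video (hemodynamic delay), at 1 sample/s:
    fmri_baseline = fmri_trial[:, 0:5]
    fmri_video = fmri_trial[:, 5:20]
    fmri_post = fmri_trial[:, 20:25]

    # The same windows at 400 Hz span 2000, 6000, and 2000 samples:
    ppg_baseline = ppg_trial[:, 0:2000]
    ppg_video = ppg_trial[:, 2000:8000]
    ppg_post = ppg_trial[:, 8000:10000]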


Figure 2. Visualization of the data segmentation performed.

    Train set:

The train set consists of the data from 16 participants. The true labels for both CLASS and LEVEL are available to train the models.

    Test set:

    The test set consists of the data from the remaining 4 participants, without the true labels.

     

    Each participant folder is organized with the following contents:

■ fMRI_data.npz numpy compressed file:

after loading, it can be accessed like a dictionary using the key = “data” and returns a numpy array with shape [30, 246, 25].

■ ppg_data.npz numpy compressed file:

after loading, it can be accessed like a dictionary using the key = “data” and returns a numpy array with shape [30, 1, 10000].

■ resp_data.npz numpy compressed file:

after loading, it can be accessed like a dictionary using the key = “data” and returns a numpy array with shape [30, 1, 10000].

■ labels.npz numpy compressed file:

after loading, it can be accessed like a dictionary using the key = “data” and returns a numpy array with shape [30, 2]. The columns correspond to the [CLASS, LEVEL] labels. The labels.npz file is only present for the subjects of the training set.

     

Note: For both the ppg_data and resp_data files, there might be incomplete trial data containing missing values, represented by np.nan.
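
A simple loading script is provided with the dataset (as noted further below); purely as a minimal sketch of the same idea, assuming the working directory is a participant folder such as Train/P01, the files could be read like this:

    import numpy as np

    # Each .npz file behaves like a dictionary; the array lives under the key "data".
    fmri = np.load("fMRI_data.npz")["data"]    # shape (30, 246, 25)
    ppg = np.load("ppg_data.npz")["data"]      # shape (30, 1, 10000)
    resp = np.load("resp_data.npz")["data"]    # shape (30, 1, 10000)
    labels = np.load("labels.npz")["data"]     # shape (30, 2): [CLASS, LEVEL]; train set only

    # PPG/respiratory trials may be incomplete (np.nan); flag them before modeling.
    incomplete = np.isnan(ppg).any(axis=(1, 2)) | np.isnan(resp).any(axis=(1, 2))
    print("incomplete trials:", np.flatnonzero(incomplete))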

     

In addition to the data from the 20 participants, an example of each class of video shown to the participants is included in the Supplementary folder, as well as the labeling of each region of the BN atlas. In the “BN_atlas.txt” file, the first column indicates the index where the region is located in the 2D fMRI data matrix, and the second column indicates the name of the region:

     

    Ex: “BN_atlas.txt”

    0 A8m_L (first element of the matrix corresponds to the A8m_L region of the BN atlas)

1 A8m_R

…

245 lPFtha_R
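
As a minimal sketch (assuming whitespace-separated “index name” lines, as in the excerpt above), the file can be turned into an index-to-name lookup:

    # Map each row of the 246 x 25 fMRI matrix to its BN atlas region name.
    region_names = {}
    with open("BN_atlas.txt") as f:    # located in the Supplementary folder
        for line in f:
            parts = line.split()
            if len(parts) == 2:
                idx, name = parts
                region_names[int(idx)] = name

    print(region_names[0])    # -> A8m_L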

     

Furthermore, we provide a simple Python script with sample code for loading the data of a subject. The submission.csv file is a template to be filled in and submitted for each attempt in the challenge.

     

    Supplementary\

      High_Valence.mp4

      Low_Valence.mp4

      Neutral_Valence.mp4

      BN_atlas.txt

     

    Train\

      P01\

         fMRI_data.npz

         ppg_data.npz

         resp_data.npz

         labels.npz

      P02\

         …

     

      …

      P16\

         …

     

    Test\

      P17\

         fMRI_data.npz

         ppg_data.npz

         resp_data.npz

      P18\

         …

      …

      P20\

         …
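
Given this layout, a hedged sketch for stacking the whole training set into single arrays (assuming the dataset root as the working directory and the folder names shown above):

    from pathlib import Path
    import numpy as np

    fmri_list, label_list = [], []
    for subject_dir in sorted(Path("Train").iterdir()):    # Train/P01 ... Train/P16
        fmri_list.append(np.load(subject_dir / "fMRI_data.npz")["data"])
        label_list.append(np.load(subject_dir / "labels.npz")["data"])

    X = np.concatenate(fmri_list)     # (480, 246, 25): 16 subjects x 30 trials
    y = np.concatenate(label_list)    # (480, 2): [CLASS, LEVEL] per trial
    print(X.shape, y.shape)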

     

References:
Di Crosta, A. et al. (2020) “The Chieti Affective Action Videos database, a resource for the study of emotions in psychology”, Scientific Data, 7(1), p. 32. doi: 10.1038/s41597-020-0366-1.

Fan, L. et al. (2016) “The Human Brainnetome Atlas: A New Brain Atlas Based on Connectional Architecture”, Cerebral Cortex, 26(8), pp. 3508–3526. doi: 10.1093/cercor/bhw157.

  • Scoring

     

For the classification task, the score CLASSerror will be computed as the percentage of errors in the predicted labels you submit, using the formula:

\[
\mathrm{CLASS}_{error} = \frac{100}{N} \sum_{i=1}^{N} \mathbb{1}\left(\hat{c}_i \neq c_i\right)
\]

where \(c_i\) and \(\hat{c}_i\) are the true and predicted classes of trial \(i\), and \(N\) is the total number of test trials.

     

     

     

For the level task, the score LEVELerror will be computed based on the symmetric mean absolute percentage error (SMAPE), using the formula:

\[
\mathrm{LEVEL}_{error} = \frac{100}{N} \sum_{i=1}^{N} \frac{\left|\hat{\ell}_i - \ell_i\right|}{\left(\left|\ell_i\right| + \left|\hat{\ell}_i\right|\right)/2}
\]

where \(\ell_i\) and \(\hat{\ell}_i\) are the reported and predicted valence levels of trial \(i\).

     

     

     

    The final score will be given by:

[final-score formula image not recovered]

     

The final score, CLASSerror, and LEVELerror will be rounded to the fourth decimal place.
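
For local self-checking during development, here is a minimal sketch of both measures under the definitions above; the zero-safe SMAPE handling (terms with a zero denominator count as 0) is an assumption, and the official scoring script may differ:

    import numpy as np

    def class_error(y_true, y_pred):
        # Percentage of misclassified trials, rounded to 4 decimal places.
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        return round(100.0 * np.mean(y_true != y_pred), 4)

    def level_error(y_true, y_pred):
        # SMAPE between reported and predicted valence levels.
        y_true = np.asarray(y_true, dtype=float)
        y_pred = np.asarray(y_pred, dtype=float)
        denom = (np.abs(y_true) + np.abs(y_pred)) / 2.0
        safe = np.where(denom == 0, 1.0, denom)    # avoid division by zero
        terms = np.where(denom == 0, 0.0, np.abs(y_pred - y_true) / safe)
        return round(100.0 * np.mean(terms), 4)

    print(class_error([-1, 0, 1], [-1, 1, 1]))    # -> 33.3333
    print(level_error([-4, 0, 3], [-2, 0, 3]))    # -> 22.2222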


    Submissions

     

For each attempt, your team must fill in the “submission.csv” file, present in the Supplementary folder, with the class and level predicted for each trial of each subject of the test set, and submit it to icbhi2024.sc@gmail.com. The file must be renamed to the following format: “team_TEAMNAME_phase_PHASENUM_attempt_NUMOFATTEMPT.csv”, where TEAMNAME is the name of the team (replace any spaces with hyphens “-”), PHASENUM is I or II, depending on the phase, and NUMOFATTEMPT is the number of the attempt within that phase, starting from 1 in each phase. Note that you are allowed 10 attempts for Phase I and 5 attempts for Phase II.
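
For example (the team name here is hypothetical), the file name can be built as follows:

    team = "Brain-Decoders"    # hypothetical team name, spaces replaced with hyphens
    phase = "I"                # "I" or "II"
    attempt = 1                # attempt counter restarts at 1 in each phase
    print(f"team_{team}_phase_{phase}_attempt_{attempt}.csv")
    # -> team_Brain-Decoders_phase_I_attempt_1.csv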

     

You will receive an email confirming receipt of the attempt and a second email with the score obtained.