ag027592/Meta-PerSER

Meta-PerSER: Few-Shot Personalized Speech Emotion Recognition via Meta-learning

Welcome to Meta-PerSER!


Installation

Installing the packages

conda env create -f environment.yml

We use conda to manage the Python environment.

Activate conda environment

conda activate meta-perser

Download IEMOCAP dataset

Download the IEMOCAP dataset and place all WAV files into the folder data/IEMOCAP/Audios/, following this folder structure:

├── data
│   ├── IEMOCAP
│   │   ├── Audios                  # All WAV files
│   │   ├── split_session           # Labels for split session
│   │   ├── no_split_session        # Labels for no split session
│   │   ├── all_data.csv            # Original IEMOCAP Labels
│   │   ├── ...
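Before training, the layout above can be sanity-checked with a small helper script. This helper is not part of the repo; the directory and file names are taken from the tree above:

```python
from pathlib import Path

# Expected entries under data/IEMOCAP/, per the folder tree above.
EXPECTED = ["Audios", "split_session", "no_split_session", "all_data.csv"]

def missing_entries(root="data/IEMOCAP", expected=EXPECTED):
    """Return the expected entries that are absent under root."""
    base = Path(root)
    return [name for name in expected if not (base / name).exists()]

if __name__ == "__main__":
    missing = missing_entries()
    if missing:
        print("Missing under data/IEMOCAP:", ", ".join(missing))
    else:
        print("IEMOCAP layout looks complete.")
```

Run it from the repository root; an empty "missing" list means the WAV folder and label files are where the training scripts expect them.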

Train and Evaluation

Run Code

Run the code with the provided script files.

First step: Run normal SSL

./run.sh

Second step: in run_meta.sh, set the model path to your pretrained SSL checkpoint:

--load_model <path to your checkpoint> \

Final step: run Meta-PerSER!

./run_meta.sh

Other training arguments:

--testonly                 # flag to meta-test the loaded model only
--test_annotator_id 1      # annotator id to test (choices: 1, 3, 6, 7, 9)
--test_session Ses05       # test on unseen data
--wandb                    # upload training logs to Weights & Biases
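Putting the flags together, a meta-test invocation might look like the following sketch. The entry-point name (main.py) and the checkpoint path are placeholders, not names from this repo; run_meta.sh contains the actual command:

```shell
# Hypothetical meta-test command: evaluate a trained checkpoint on the
# unseen session Ses05 for annotator 1, with Weights & Biases logging.
# NOTE: "main.py" and the checkpoint path below are placeholders.
CKPT="checkpoints/meta_perser.ckpt"
CMD="python main.py --load_model $CKPT --testonly --test_annotator_id 1 --test_session Ses05 --wandb"
echo "$CMD"   # inspect the command before running it, e.g. via: eval "$CMD"
```

Echoing the command first makes it easy to confirm the flag combination before launching a run.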

About

Code publication for Meta-PerSER
