Submit to HEAR
Submissions to the HEAR benchmark leaderboard are welcome at any time. To evaluate on the HEAR benchmark and submit your results to the leaderboard:
- Develop an audio embedding model that follows the HEAR API.
- You may train your embedding model on any data you like, except audio from splits that HEAR tasks explicitly designate as test splits in their train/validation/test setups.
- See hear-baseline for examples of how to wrap an existing embedding model in the HEAR API.
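The HEAR API expects a Python module exposing `load_model`, `get_timestamp_embeddings`, and `get_scene_embeddings`, with the model object carrying `sample_rate` and embedding-size attributes. The real API operates on torch tensors; the sketch below uses numpy only to stay self-contained, and its frame hop, embedding sizes, and placeholder "embedding" computation are all hypothetical stand-ins for a real model:

```python
import numpy as np

class Model:
    # Hypothetical settings; a real wrapper would expose its model's values.
    sample_rate = 16000            # must match one of the released task rates
    scene_embedding_size = 128
    timestamp_embedding_size = 128

def load_model(model_file_path: str = "") -> Model:
    """Return a model object; real implementations load weights here."""
    return Model()

def get_timestamp_embeddings(audio: np.ndarray, model: Model):
    """audio: (n_sounds, n_samples). Returns (embeddings, timestamps)."""
    hop = model.sample_rate // 50  # 20 ms hop, an arbitrary choice
    n_sounds, n_samples = audio.shape
    n_frames = n_samples // hop
    frames = audio[:, : n_frames * hop].reshape(n_sounds, n_frames, hop)
    # Placeholder "embedding": tile a simple frame statistic to the target size.
    emb = np.repeat(frames.mean(axis=2, keepdims=True),
                    model.timestamp_embedding_size, axis=2)
    # Timestamps (ms) at each frame centre, one row per sound.
    timestamps = (np.arange(n_frames) + 0.5) * hop / model.sample_rate * 1000.0
    return emb, np.tile(timestamps, (n_sounds, 1))

def get_scene_embeddings(audio: np.ndarray, model: Model) -> np.ndarray:
    """One embedding per sound, here by mean-pooling timestamp embeddings."""
    emb, _ = get_timestamp_embeddings(audio, model)
    return emb.mean(axis=1)
```

A wrapper like this replaces the placeholder computation with a forward pass through the pretrained network; the entry-point names and model attributes are what downstream tooling looks up.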
- Download the HEAR benchmark tasks at the correct sample rate for your model.
- All tasks are available for download at the following sample rates: 16000, 22050, 32000, 44100, and 48000 Hz.
- If your model uses a sample rate that is not listed, you will need to either resample the datasets or use hear-preprocess to recreate them at the desired sample rate.
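If you resample the datasets yourself, the conversion step looks roughly like the sketch below. It uses naive linear interpolation to stay dependency-free; for real datasets prefer a proper polyphase resampler (e.g. `scipy.signal.resample_poly`, torchaudio, or ffmpeg), since linear interpolation applies no anti-aliasing filter:

```python
import numpy as np

def resample_linear(audio: np.ndarray, orig_sr: int, target_sr: int) -> np.ndarray:
    """Naive linear-interpolation resampler (no anti-aliasing filter).

    Illustrative only: shows the sample-rate conversion step, not a
    production-quality resampler.
    """
    n_out = int(round(len(audio) * target_sr / orig_sr))
    # Map each output sample position back onto the input time axis.
    t_out = np.arange(n_out) * orig_sr / target_sr
    return np.interp(t_out, np.arange(len(audio)), audio)
```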
- Use hear-eval-kit to:
  - a) Compute embeddings for all HEAR tasks using your model.
  - b) Evaluate the embeddings by training a shallow downstream classifier.
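The downstream evaluation keeps the embeddings frozen and fits only a small classifier on top. hear-eval-kit itself trains a shallow MLP with hyperparameter search; the sketch below substitutes a single-layer multinomial logistic regression as a minimal stand-in to show the shape of that step:

```python
import numpy as np

def train_shallow_classifier(X, y, n_classes, lr=0.5, epochs=200, seed=0):
    """Multinomial logistic regression on frozen embeddings.

    A minimal stand-in for the shallow downstream model: X is the
    (n_examples, embedding_dim) embedding matrix, y the integer labels.
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.01, size=(X.shape[1], n_classes))
    b = np.zeros(n_classes)
    Y = np.eye(n_classes)[y]                          # one-hot targets
    for _ in range(epochs):
        logits = X @ W + b
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        p = np.exp(logits)
        p /= p.sum(axis=1, keepdims=True)
        grad = (p - Y) / len(X)                       # softmax cross-entropy gradient
        W -= lr * X.T @ grad
        b -= lr * grad.sum(axis=0)
    return W, b

def predict(X, W, b):
    return np.argmax(X @ W + b, axis=1)
```

Because only this small classifier is trained per task, the benchmark measures how much task-relevant information the frozen embeddings already carry.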
- Submit your results to the leaderboard by appending them to the CSV file in this website's GitHub repository and opening a pull request with the updated file. Detailed submission instructions are provided in the repository's README.