Refer to the inference tutorial for the supported tasks and language directions to run inference with SeamlessM4T models.
We use the SacreBLEU library to compute BLEU scores and the JiWER library to compute CER and WER scores.
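As a reminder of what these metrics measure (this is a minimal illustrative sketch, not the JiWER implementation): WER is the word-level edit distance between hypothesis and reference divided by the number of reference words, and CER is the same computed over characters.

```python
def _edit_distance(ref: list, hyp: list) -> int:
    """Levenshtein distance between two token sequences via dynamic programming."""
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all i reference tokens
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all j hypothesis tokens
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = dp[i - 1][j] + 1
            insertion = dp[i][j - 1] + 1
            dp[i][j] = min(substitution, deletion, insertion)
    return dp[len(ref)][len(hyp)]

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference word count."""
    ref_words = reference.split()
    return _edit_distance(ref_words, hypothesis.split()) / len(ref_words)

def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: character-level edit distance / reference length."""
    return _edit_distance(list(reference), list(hypothesis)) / len(reference)
```

For example, `wer("the cat sat", "the cat")` is 1/3 (one deleted word over three reference words). JiWER additionally supports text normalization transforms before scoring, which this sketch omits.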
Evaluation can be run with the CLI from the root directory of the repository. The model can be specified with `--model_name`: `seamlessM4T_v2_large`, `seamlessM4T_large`, or `seamlessM4T_medium`.

```bash
m4t_evaluate <path_to_data_tsv_file> <task_name> <tgt_lang> --output_path <path_to_save_evaluation_output> --ref_field <ref_field_name> --audio_root_dir <path_to_audio_root_directory>
```
Note: the `--src_lang` arg must be specified to run evaluation for the T2TT task.
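For illustration, a hypothetical S2TT invocation might look like the following (the TSV path, reference field name, and audio directory are placeholders for your own data, not values from this repository):

```bash
m4t_evaluate fleurs_test.tsv S2TT spa \
  --model_name seamlessM4T_v2_large \
  --output_path eval_output \
  --ref_field tgt_text \
  --audio_root_dir ./audio
```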