Convert raw audio into units (unit_extraction)

Raw audio needs to be converted to units to train UnitY models and vocoders. Units act as supervision for UnitY models and are the input to the vocoders, which synthesize speech from these units.

The unit extraction pipeline comprises the following steps:

  • Compute features from layer 35 (determined empirically) of the pretrained XLSR v2 model, which is a wav2vec2 model at its core.
  • Assign the features at each timestep to the nearest of a collection of precomputed K-Means centroids to produce a sequence of units (see the sketch after this list).
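The K-Means assignment step amounts to a nearest-centroid lookup per timestep. Below is a minimal sketch of that lookup using random tensors as stand-ins for the XLSR v2 layer-35 features and the precomputed centroids; the assign_units helper and the shapes are illustrative, not part of the package.

import torch

def assign_units(features: torch.Tensor, centroids: torch.Tensor) -> torch.Tensor:
    """Map each timestep's feature vector to the index of its nearest K-Means centroid.

    features:  (T, D) layer-35 representations, one row per timestep.
    centroids: (K, D) precomputed K-Means centroids.
    returns:   (T,) integer unit ids.
    """
    # Euclidean distance between every timestep and every centroid: shape (T, K).
    distances = torch.cdist(features, centroids, p=2)
    return distances.argmin(dim=-1)

# Toy example with random stand-ins for real features and centroids.
T, D, K = 50, 1280, 10000
features = torch.randn(T, D)
centroids = torch.randn(K, D)
units = assign_units(features, centroids)
print(units.shape)  # torch.Size([50])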

Quick start:

audio_to_units is run from the CLI, from the root directory of the repository:

m4t_audio_to_units <path_to_input_audio>

audio_to_units uses UnitExtractor, which provides a predict method to convert audio into units.

The convenience method resynthesize_audio of UnitExtractor can be used to resynthesize audio waveforms from units, as sketched below.
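A rough sketch of how predict and resynthesize_audio might be combined in a script is shown below; the import path, constructor arguments, layer index, and argument names are assumptions, so check audio_to_units.py for the exact invocation.

import torch
import torchaudio

# Assumed import path and constructor arguments; see audio_to_units.py for the real ones.
from seamless_communication.models.unit_extraction import UnitExtractor

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
unit_extractor = UnitExtractor(
    "xlsr2_1b_v2",            # assumed XLSR v2 model card name
    "kmeans_centroids.npy",   # assumed path to the precomputed K-Means centroids
    device=device,
)

waveform, sample_rate = torchaudio.load("input_audio.wav")

# predict converts the audio into a sequence of unit ids
# (the zero-based layer index for layer 35 is assumed here).
units = unit_extractor.predict(waveform.to(device), 34)

# resynthesize_audio turns the units back into a waveform via the vocoder
# (arguments assumed for illustration).
resynthesized = unit_extractor.resynthesize_audio(units)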