This repository is no longer maintained. Evaluation was migrated to mozilla/translations, and the models are now stored on Google Cloud Storage. See the mozilla/translations repository for up-to-date information.
CPU-optimized NMT models for Firefox Translations.
The model files are hosted using Git LFS.
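Because the model binaries are tracked with Git LFS, a plain clone without Git LFS contains only small pointer files. A minimal sketch of fetching the real files, assuming the repository lives at mozilla/firefox-translations-models:

```bash
# One-time setup: install the Git LFS hooks for the current user
git lfs install

# Clone the repository; with Git LFS installed, the model binaries are
# downloaded automatically instead of pointer files
git clone https://github.com/mozilla/firefox-translations-models.git
cd firefox-translations-models

# If the repository was cloned before installing Git LFS, fetch the real files
git lfs pull
```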
The models are located in the `/models` directory and grouped by available configuration:
- `tiny` - the fastest and smallest models, but with lower translation quality
- `base` - the best quality, but slower and larger
- `base-memory` - slightly lower quality than `base`, but with a lower memory footprint
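For orientation, the directory layout looks roughly like the sketch below; the language pairs shown (e.g. `esen`, `enes`) are illustrative only:

```bash
# Each configuration directory contains one subdirectory per language pair
ls models
# base  base-memory  tiny
ls models/tiny
# enes  esen  ...
```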
The evaluation runs automatically in CI as part of a pull request, and the results are pushed back to the branch as extra commits (this is not available for forks). It compares against the Microsoft and Google translation APIs, Argos Translate, NLLB, and Opus-MT models, and is performed using the evals tool.
Use the Firefox Translations training pipeline or the browsermt/students recipe to train CPU-optimized models. They should have a size and inference speed similar to the models already submitted.
Do not use SacreBLEU or Flores datasets as part of the training data; otherwise, the evaluation will not be correct.
To see the SacreBLEU datasets, run `sacrebleu --list`.
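For example, to check whether a candidate dataset is part of SacreBLEU (and therefore must stay out of the training data):

```bash
# Print all test sets known to SacreBLEU and search for Flores-based ones
sacrebleu --list | grep -i flores
```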
Create a pull request to the main branch from another branch in this repo (not a fork).
This pull request should include the models, and the evaluation will be added as extra commits in the CI task.
Create a pull request to the contrib branch.
When it is reviewed and merged, a maintainer should create a pull request from contrib to main.
This second PR will run the automatic evaluation and add the evaluation commits.
You can run the model evaluation locally with `bash scripts/update-results.sh`.
Make sure to set the environment variables `GCP_CREDS_PATH` and `AZURE_TRANSLATOR_KEY` to use the Google and Microsoft APIs.
If you want to run it with bergamot only, remove the mentions of those variables from `scripts/update-results.sh` and remove `microsoft,google` from `scripts/eval.sh`.
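A local run with both APIs enabled might look like the following sketch (the path and key are placeholders):

```bash
# Google Cloud service account credentials file (placeholder path)
export GCP_CREDS_PATH=/path/to/gcp-credentials.json
# Azure Translator subscription key (placeholder value)
export AZURE_TRANSLATOR_KEY="<your-azure-translator-key>"

# Run the evaluation and update the results stored in the repository
bash scripts/update-results.sh
```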
Prefix of the vocabulary file:
- `vocab.` - the vocabulary is reused for the source and target languages
- `srcvocab.` and `trgvocab.` - different vocabularies for the source and target languages
Suffix of the model file:
- `intgemm8.bin` - supports the `gemm-precision: int8shiftAll` inference setting
- `intgemm.alphas.bin` - supports the `gemm-precision: int8shiftAlphaAll` inference setting
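Putting the prefix and suffix conventions together, the files for one language pair might be named roughly as in the sketch below (a hypothetical listing; the exact file names in the repository may differ):

```bash
# Hypothetical Spanish->English model with a shared vocabulary
ls models/base/esen
# model.esen.intgemm.alphas.bin   <- suffix: supports int8shiftAlphaAll gemm-precision
# vocab.esen.spm                  <- prefix vocab.: shared source/target vocabulary
```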
- Run `python scripts/pull_models.py [TASK_ID]`, where `[TASK_ID]` is a Taskcluster ID of the `export` task, such as `SjPZGW9CRYeb9PQr68jCUw` (see the command sketch after this list).
- Commit the changes, which adds the files and removes any previous evaluations.
- Push the changes to `origin` and open a PR.
- Wait for the CI to run the evaluations and add the commits.
- Merge the PR.
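The command-line part of those steps might look like the sketch below; the branch name and commit message are placeholders, and the Taskcluster ID is the example from the list above:

```bash
# Pull the exported models for the given Taskcluster export task
python scripts/pull_models.py SjPZGW9CRYeb9PQr68jCUw

# Stage everything that changed (new model files and removed evaluations)
git checkout -b add-new-models        # placeholder branch name
git add -A
git commit -m "Add new models"        # placeholder commit message

# Push to origin and open a PR; CI will add the evaluation commits
git push origin add-new-models
```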
Models are deployed to Remote Settings to be delivered to Firefox.
Records and attachments are uploaded via a CLI tool which lives in the `remote_settings` directory in this repository.
View the `remote_settings` README for more details on publishing models.
For a list of the released models please see the Firefox Translations Models dashboard.