* Skip lazy imports when the caller is inspect.py
This prevents certain inspect functions from importing our lazy modules when we don't want them to. `getframeinfo` in particular appears to do it, and it gets called by PyTorch at some point. IPython might also be doing it, but autocomplete still seems to work.
This does not appear to break anything. Added test for hyperpyyaml to ensure we're not breaking that.
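For context, a minimal sketch of how such a guard might look; `_caller_is_inspect` and the frame depth are illustrative assumptions, not the actual SpeechBrain code. A module-level `__getattr__` implementing lazy imports could call this first and raise `AttributeError` instead of importing.

```python
import inspect
import sys

def _caller_is_inspect() -> bool:
    # Hypothetical helper: check whether the code requesting the lazy
    # attribute lives in the stdlib inspect module (as getframeinfo does
    # when PyTorch walks the stack) and, if so, let the lazy-import hook
    # skip materializing the module.
    try:
        caller = sys._getframe(2)  # skip this helper and the lazy hook
    except ValueError:
        return False
    return caller.f_code.co_filename == inspect.__file__
```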
* SSL_Semantic_Token: new PR (speechbrain#2509)
* remove unnecessary files and move to dasb
* remove extra recipe from test
* update ljspeech quantization recipe
* add discrete_ssl and remove extra files
* fix precommit
* update kmeans and add tokenizer for postprocessing
* fix precommit
* Update discrete_ssl.py
* fix clone warning
---------
Co-authored-by: Mirco Ravanelli <mirco.ravanelli@gmail.com>
* Add `Raises` section to `_ensure_module` docstring
* Expose `ensure_module` so that docs get generated for it
This is already an internal class anyway, and this is safe to call.
* Update actions/setup-python
* Use `uv` in test CI + merge some dep installs
This speeds up dependency installation. Merging some of the dependency installs avoids packages being reinstalled from one line to the next. Additionally, CPU versions are specified where relevant, to avoid downloading CUDA packages the CI can't use anyway.
* Use `uv` in doc CI + merge some dep installs
Same rationale as for the test CI.
* Parallelize doc generation with Sphinx
This does not affect the entire doc generation process but should allow some minor multithreading even with the 2-core CI workers.
* Enable `uv` caching on the test CI
* Enable `uv` caching on the docs CI
* CTC-only training recipes for LibriSpeech (code from Samsung AI Cambridge) (speechbrain#2290)
CTC-only pre-training of Conformer and Branchformer.
---------
Co-authored-by: Shucong Zhang/Embedded AI /SRUK/Engineer/Samsung Electronics <s1.zhang@sruk-ccn4.eu.corp.samsungelectronics.net>
Co-authored-by: Adel Moumen <adelmoumen.pro@gmail.com>
Co-authored-by: Adel Moumen <88119391+Adel-Moumen@users.noreply.github.com>
Co-authored-by: Parcollet Titouan <titouan.parcollet@univ-avignon.fr>
* Update CommonVoice transformer recipes (code from Samsung AI Center Cambridge) (speechbrain#2465)
* Update CV transformer recipes to match latest results with conformer.
---------
Co-authored-by: Titouan Parcollet/Embedded AI /SRUK/Engineer/Samsung Electronics <t.parcollet@sruk-ccn4.eu.corp.samsungelectronics.net>
Co-authored-by: Mirco Ravanelli <mirco.ravanelli@gmail.com>
Co-authored-by: Adel Moumen <adelmoumen.pro@gmail.com>
* Whisper improvements: flash attention, KV caching, lang_id, translation, training... (speechbrain#2450)
Whisper improvements:
- flash attention
- kv caching
- language identification
- translation
- fine-tuning improvements
... and more ...
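To illustrate the KV-caching item above: at each decoding step only the new token's keys/values are computed, and past keys/values are read from a cache instead of being recomputed. A toy single-head sketch follows; shapes and the cache layout are assumptions, not SpeechBrain's Whisper implementation.

```python
import torch

def attend_with_cache(q, k_new, v_new, cache=None):
    # One decoding step: concatenate the cached keys/values with the new
    # token's, so earlier steps are never recomputed.
    if cache is not None:
        k = torch.cat([cache["k"], k_new], dim=1)  # (batch, time, dim)
        v = torch.cat([cache["v"], v_new], dim=1)
    else:
        k, v = k_new, v_new
    scores = (q @ k.transpose(1, 2)) / (k.shape[-1] ** 0.5)
    out = torch.softmax(scores, dim=-1) @ v
    return out, {"k": k, "v": v}  # cache to reuse at the next step
```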
* Update README.md
* precommit
* update zed download link (speechbrain#2514)
* `RelPosEncXL` refactor and precision fixes (speechbrain#2498)
* Add `RelPosEncXL.make_pe`, rework precision handling
* Rework RelPosEncXL output dtype selection
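A minimal sketch of the precision idea (build the sinusoidal table in float32, cast to the target dtype at the end); the shape conventions are assumptions, not the actual `RelPosEncXL` code.

```python
import math
import torch

def make_pe(seq_len: int, dim: int, dtype=torch.float32) -> torch.Tensor:
    # dim is assumed even. Positions run from seq_len-1 down to -(seq_len-1),
    # since relative encodings cover both directions.
    positions = torch.arange(seq_len - 1, -seq_len, -1.0).unsqueeze(1)
    inv_freq = torch.exp(
        torch.arange(0, dim, 2, dtype=torch.float32) * (-math.log(10000.0) / dim)
    )
    pe = torch.zeros(2 * seq_len - 1, dim)  # built in float32 for accuracy
    pe[:, 0::2] = torch.sin(positions * inv_freq)
    pe[:, 1::2] = torch.cos(positions * inv_freq)
    return pe.to(dtype)  # cast to the (possibly half-precision) dtype last
```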
* Fix in-place input normalization when using `sentence`/`speaker` norm (speechbrain#2504)
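The gist of such an in-place normalization bug and its fix, with hypothetical per-speaker statistics (not the recipe's exact code):

```python
import torch

def normalize(x: torch.Tensor, mean: torch.Tensor, std: torch.Tensor) -> torch.Tensor:
    # Clone first: operating in place (x -= mean) would silently mutate
    # the caller's tensor, corrupting any later use of the raw features.
    x = x.clone()
    x -= mean
    x /= std
    return x
```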
* fix LOCAL_RANK to be RANK in if_main_process (speechbrain#2506)
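The distinction matters in multi-node training: `LOCAL_RANK` is the rank within a single node, so with it every node would think it hosts the main process. A sketch of the corrected check (the function body is assumed from the commit message, not copied from the source):

```python
import os

def if_main_process() -> bool:
    # RANK is the global rank across all nodes; only the process with
    # global rank 0 should act as main (logging, checkpointing, etc.).
    return int(os.environ.get("RANK", 0)) == 0
```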
* Fix Separation and Enhancement recipes behavior when NaN encountered (speechbrain#2524)
* Fix Separation and Enhancement recipes behavior when NaN encountered
* Formatting using precommit hooks
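An illustrative pattern for such NaN handling (not the recipes' exact code): detect a non-finite loss and skip the optimizer step so one bad batch does not poison the weights.

```python
import torch

def safe_step(loss: torch.Tensor, optimizer: torch.optim.Optimizer) -> bool:
    # Skip the update entirely when the loss is NaN or inf.
    if not torch.isfinite(loss):
        optimizer.zero_grad(set_to_none=True)
        return False  # batch skipped, weights untouched
    loss.backward()
    optimizer.step()
    optimizer.zero_grad(set_to_none=True)
    return True
```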
* Lock torch version in requirements.txt (speechbrain#2528)
* Fix compatibility for torchaudio versions without `.io` (speechbrain#2532)
This avoids having the Python interpreter attempt to resolve the type annotation directly.
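One common way to achieve this, assuming `StreamReader` is the annotated type (an assumption, not confirmed by the commit): keep the import behind `TYPE_CHECKING` and quote the annotation, so older torchaudio versions without `.io` still import cleanly.

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Seen only by type checkers; never executed at runtime, so a
    # torchaudio build without the .io submodule does not break imports.
    from torchaudio.io import StreamReader

def open_stream(uri: str) -> "StreamReader":
    from torchaudio.io import StreamReader  # deferred until actually needed
    return StreamReader(uri)
```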
* fix docstrings
* consistency tests - classification
* consistency tests - classification
* consistency tests - interpret
* default to no wham
* fix after tests pass
* fix after tests pass
* tests after that
* fix consistency
---------
Co-authored-by: asu <sdelang@sdelang.fr>
Co-authored-by: Pooneh Mousavi <moosavi.pooneh@gmail.com>
Co-authored-by: Mirco Ravanelli <mirco.ravanelli@gmail.com>
Co-authored-by: shucongzhang <104781888+shucongzhang@users.noreply.github.com>
Co-authored-by: Shucong Zhang/Embedded AI /SRUK/Engineer/Samsung Electronics <s1.zhang@sruk-ccn4.eu.corp.samsungelectronics.net>
Co-authored-by: Adel Moumen <adelmoumen.pro@gmail.com>
Co-authored-by: Adel Moumen <88119391+Adel-Moumen@users.noreply.github.com>
Co-authored-by: Parcollet Titouan <titouan.parcollet@univ-avignon.fr>
Co-authored-by: Parcollet Titouan <parcollet.titouan@gmail.com>
Co-authored-by: Titouan Parcollet/Embedded AI /SRUK/Engineer/Samsung Electronics <t.parcollet@sruk-ccn4.eu.corp.samsungelectronics.net>
Co-authored-by: Yingzhi WANG <41187612+BenoitWang@users.noreply.github.com>
Co-authored-by: Peter Plantinga <plantinga.peter@protonmail.com>
Co-authored-by: Séverin <123748182+SevKod@users.noreply.github.com>
## Whisper Finetuning

The following table contains Whisper fine-tuning results for 1 epoch, freezing the encoder and fine-tuning the decoder.

| Language | Release | Model | hyperparams file | LM | Val. CER | Val. WER | Test CER | Test WER | HuggingFace link | Model link | GPUs |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Hindi | 2023-08-15 | Medium | train_hi_hf_whisper.yaml | No | 5.82 | 12.51 | 8.16 | 17.04 | [model](https://huggingface.co/speechbrain/asr-whisper-medium-commonvoice-hi) | [model](https://www.dropbox.com/sh/z9vriyy3i6xqvif/AAB7ql-40yWTjKEQJiuhYUr5a?dl=0) | 1xV100 16GB |
| Serbian | 2023-08-15 | Medium | train_sr_hf_whisper.yaml | No | 8.63 | 25.10 | 7.25 | 22.29 | [model](https://huggingface.co/speechbrain/asr-whisper-medium-commonvoice-sr) | [model](https://www.dropbox.com/sh/5lhk230q45sd97z/AAD-U9b_Ws_vFPs-cazsbOY0a?dl=0) | 1xV100 16GB |
| French | 2023-08-15 | Medium | train_fr_hf_whisper.yaml | No | 3.26 | 9.65 | 4.30 | 11.79 | [model](https://huggingface.co/speechbrain/asr-whisper-medium-commonvoice-fr) | [model](https://www.dropbox.com/sh/7zlk07yxnslk4yy/AAANcI3EaG0ZFy6UrKk1Mm2Ga?dl=0) | 1xV100 16GB |
| Italian | 2023-08-15 | Medium | train_it_hf_whisper.yaml | No | 2.42 | 8.26 | 3.03 | 9.63 | [model](https://huggingface.co/speechbrain/asr-whisper-medium-commonvoice-it) | [model](https://www.dropbox.com/sh/u5tex3nvzzs5pex/AAD-J7cOBE_fNfBono8waTKCa?dl=0) | 1xV100 16GB |