Commit 75049f4

Update references from .pretrained to .inference (#2483)

1 parent 9c70317 · commit 75049f4

6 files changed: 7 additions & 7 deletions

README.md (1 addition, 1 deletion)

````diff
@@ -49,7 +49,7 @@ python train.py hparams/train.yaml
 Each model comes with a user-friendly interface for seamless inference. For example, transcribing speech using a pretrained model requires just three lines of code:
 
 ```python
-from speechbrain.pretrained import EncoderDecoderASR
+from speechbrain.inference import EncoderDecoderASR
 
 asr_model = EncoderDecoderASR.from_hparams(source="speechbrain/asr-conformer-transformerlm-librispeech", savedir="pretrained_models/asr-transformer-transformerlm-librispeech")
 asr_model.transcribe_file("speechbrain/asr-conformer-transformerlm-librispeech/example.wav")
````
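Since the old `speechbrain.pretrained` path is deprecated in favor of `speechbrain.inference`, downstream code that must run on both old and new SpeechBrain releases can probe the candidate module paths in order. The helper below is a hedged sketch of that fallback pattern; the `load_attr` name and the parameterized candidate list are illustrative conveniences, not part of SpeechBrain's API:

```python
import importlib


def load_attr(candidates, name):
    """Return attribute `name` from the first importable module in `candidates`.

    Mirrors the .pretrained -> .inference migration: try the new
    module path first, then fall back to the legacy one.
    """
    for module_path in candidates:
        try:
            return getattr(importlib.import_module(module_path), name)
        except (ImportError, AttributeError):
            continue
    raise ImportError(f"{name} not found in any of: {', '.join(candidates)}")


# Intended use (requires speechbrain to be installed):
# EncoderDecoderASR = load_attr(
#     ("speechbrain.inference", "speechbrain.pretrained"), "EncoderDecoderASR"
# )
```

The same pattern works for any of the interfaces touched by this commit (`EncoderClassifier`, `VAD`, and so on), at the cost of a slightly slower failure path when neither module exists.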

recipes/CommonLanguage/lang_id/README.md (1 addition, 1 deletion)

````diff
@@ -26,7 +26,7 @@ Basically, you can run inference with only few lines of code:
 
 ```python
 import torchaudio
-from speechbrain.pretrained import EncoderClassifier
+from speechbrain.inference import EncoderClassifier
 classifier = EncoderClassifier.from_hparams(source="speechbrain/lang-id-commonlanguage_ecapa", savedir="pretrained_models/lang-id-commonlanguage_ecapa")
 
 # Italian Example
````

recipes/LibriParty/VAD/README.md (1 addition, 1 deletion)

````diff
@@ -29,7 +29,7 @@ The pre-trained model + easy inference is available on HuggingFace:
 Basically, you can run inference with only a few lines of code:
 
 ```python
-from speechbrain.pretrained import VAD
+from speechbrain.inference import VAD
 
 VAD = VAD.from_hparams(source="speechbrain/vad-crdnn-libriparty", savedir="pretrained_models/vad-crdnn-libriparty")
 boundaries = VAD.get_speech_segments("speechbrain/vad-crdnn-libriparty/example_vad.wav")
````

recipes/VoxLingua107/lang_id/README.md (1 addition, 1 deletion)

````diff
@@ -93,7 +93,7 @@ You can run inference with only few lines of code:
 
 ```python
 import torchaudio
-from speechbrain.pretrained import EncoderClassifier
+from speechbrain.inference import EncoderClassifier
 language_id = EncoderClassifier.from_hparams(source="speechbrain/lang-id-voxlingua107-ecapa", savedir="tmp")
 # Download Thai language sample from Omniglot and convert to suitable form
 signal = language_id.load_audio("https://omniglot.com/soundfiles/udhr/udhr_th.mp3")
````

speechbrain/inference/interfaces.py (2 additions, 2 deletions)

````diff
@@ -69,7 +69,7 @@ def foreign_class(
     ---------
     source : str or Path or FetchSource
         The location to use for finding the model. See
-        ``speechbrain.pretrained.fetching.fetch`` for details.
+        ``speechbrain.utils.fetching.fetch`` for details.
     hparams_file : str
         The name of the hyperparameters file to use for constructing
         the modules necessary for inference. Must contain two keys:
@@ -412,7 +412,7 @@ def from_hparams(
     ---------
     source : str
         The location to use for finding the model. See
-        ``speechbrain.pretrained.fetching.fetch`` for details.
+        ``speechbrain.utils.fetching.fetch`` for details.
     hparams_file : str
         The name of the hyperparameters file to use for constructing
         the modules necessary for inference. Must contain two keys:
````

tests/utils/README.md (1 addition, 1 deletion)

````diff
@@ -29,7 +29,7 @@ Depending on the testing need, `test.yaml` grows - some examples
 1. [ssl-wav2vec2-base-librispeech/test.yaml](https://github.com/speechbrain/speechbrain/blob/hf-interface-testing/updates_pretrained_models/ssl-wav2vec2-base-librispeech/test.yaml) - the play between test sample, interface class, and batch function is handled via HF testing in `tests/utils`
 ```yaml
 sample: example.wav # test audio provided via HF repo
-cls: WaveformEncoder # existing speechbrain.pretrained.interfaces class
+cls: WaveformEncoder # existing speechbrain.inference class
 fnx: encode_batch # it's batch-wise function after audio loading
 ```
 2. [asr-wav2vec2-librispeech/test.yaml](https://github.com/speechbrain/speechbrain/blob/hf-interface-testing/updates_pretrained_models/asr-wav2vec2-librispeech/test.yaml) - testing single example & against a dataset test partition
````
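Taken together, the changes above are a mechanical rename of `speechbrain.pretrained` to `speechbrain.inference` in code, docs, and test configs. Downstream projects can apply the same rewrite to their own sources; the `migrate_imports` helper below is a hypothetical sketch of that substitution, not a tool shipped with SpeechBrain:

```python
import re


def migrate_imports(source: str) -> str:
    """Rewrite legacy `speechbrain.pretrained` references to `speechbrain.inference`."""
    return re.sub(r"\bspeechbrain\.pretrained\b", "speechbrain.inference", source)


old = "from speechbrain.pretrained import EncoderClassifier"
print(migrate_imports(old))  # from speechbrain.inference import EncoderClassifier
```

One caveat visible in this very commit: the docstring references to `speechbrain.pretrained.fetching.fetch` were redirected to `speechbrain.utils.fetching.fetch`, not to `speechbrain.inference`, so a blind substitution does not cover every occurrence and the output should still be reviewed.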
