Fix return value of batch_evaluation for separation recipes#1555
ycemsubakan merged 1 commit into speechbrain:develop from z-wony:fix-batch-eval
Conversation
My call stack is below (with batch_size: 2). In core.py, evaluate_batch() returns
Dear @mravanelli. Could you review this?
@z-wony Sorry for my late reply. I will take a look soon. Thanks for the PR. |
ping? : ) |
Guys, I'll take a look this week. Very sorry, I am swamped with several things.
Alright, I just tried this branch; it seems this doesn't cause anything else to break. We can merge. @z-wony Do you know when this started to break the eval loop? Because originally this wasn't causing any issues. Like, do you know which commit started to cause this issue? (Just out of curiosity)
@ycemsubakan Thank you for the review. Sorry, I have no idea about your question.
Cem, do you have some time to review this as well?
When 'batch_size' is not 1, evaluate_batch() returns a tensor array (in the separation recipes).
This raises an exception in _fit_valid() (#988).
So, this commit fixes the return value to match the API reference of evaluate_batch().
However, I expect the same issue exists in many other recipes,
so I'd like to suggest changing update_average() in core.py as well:
if a list of tensors were allowed as an input argument, the solution would be simpler.
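The suggestion above can be sketched as follows. This is a simplified stand-in for SpeechBrain's running-average update, not the actual core.py source; the signature and the reduction step are assumptions for illustration.

```python
def update_average(loss, avg_loss, step):
    """Simplified sketch of a running-average update (hypothetical
    signature, loosely modeled on speechbrain.core).

    If `loss` is a list/tuple of per-utterance losses, as evaluate_batch()
    may produce in separation recipes when batch_size > 1, reduce it to a
    single scalar first, so callers need not special-case the batch size.
    """
    if isinstance(loss, (list, tuple)):
        loss = sum(loss) / len(loss)  # mean over the batch
    # Incremental running average: avg += (loss - avg) / step
    avg_loss -= avg_loss / step
    avg_loss += loss / step
    return avg_loss


# Usage: a scalar loss and a per-utterance list both work.
avg = update_average(0.4, 0.0, 1)          # scalar loss -> 0.4
avg = update_average([0.5, 0.7], avg, 2)   # list of losses for batch_size=2
```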