Merged
Changes from 1 commit
Commits (35)
37cd702
Update 'sparse' parameter for OHE for sklearn >= 1.4
PGijsbers Jul 2, 2024
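The rename behind this commit can be sketched with a small version gate (`ohe_kwargs` is an illustrative helper, not openml-python code): in scikit-learn 1.2 the `OneHotEncoder` parameter `sparse` was renamed to `sparse_output`, and `sparse` was removed entirely in 1.4.

```python
# Hedged sketch: choose the correct OneHotEncoder keyword for the installed
# scikit-learn version. `sparse` was renamed to `sparse_output` in 1.2 and
# removed in 1.4, so gating on >= 1.2 covers both deprecation and removal.
def ohe_kwargs(sklearn_version: str, dense: bool = True) -> dict:
    """Return the keyword arguments to request dense/sparse OHE output."""
    major, minor = (int(part) for part in sklearn_version.split(".")[:2])
    if (major, minor) >= (1, 2):
        return {"sparse_output": not dense}
    return {"sparse": not dense}
```

Such a helper lets a single test suite construct `OneHotEncoder(**ohe_kwargs(sklearn.__version__))` across the whole version matrix this PR introduces.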
2343203
Add compatibility or skips for sklearn >= 1.4
PGijsbers Jul 2, 2024
667529e
Change 'auto' to 'sqrt' for sklearn>1.3 as 'auto' is deprecated
PGijsbers Jul 3, 2024
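This change can likewise be sketched as a version-gated mapping (the helper name is illustrative): for `RandomForestClassifier`, `max_features="auto"` was equivalent to `"sqrt"`; `"auto"` was deprecated in scikit-learn 1.1 and removed in 1.3.

```python
# Hedged sketch: map the deprecated max_features="auto" to its classifier
# replacement "sqrt" once the replacement semantics apply (scikit-learn >= 1.1).
# Other values (floats, ints, "sqrt", "log2") pass through unchanged.
def resolve_max_features(value, sklearn_version: str):
    major, minor = (int(part) for part in sklearn_version.split(".")[:2])
    if value == "auto" and (major, minor) >= (1, 1):
        return "sqrt"
    return value
```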
7b088c4
Skip flaky test
PGijsbers Jul 3, 2024
875a05a
Fix typo
PGijsbers Jul 3, 2024
87cf0b3
Ignore description comparison for newer scikit-learn
PGijsbers Jul 3, 2024
7b826e0
Adjust for scikit-learn 1.3
PGijsbers Jul 3, 2024
280a972
Remove timeout and reruns to better investigate CI failures
PGijsbers Jul 3, 2024
72a8765
Fix typo in parameter name
PGijsbers Jul 3, 2024
363724a
Add jobs for more recent scikit-learns
PGijsbers Jul 3, 2024
d664a34
Expand the matrix with all scikit-learn 1.x versions
PGijsbers Jul 3, 2024
9af2f62
Fix for numpy2.0 compatibility (#1341)
PGijsbers Jul 3, 2024
5d1da88
Rewrite matrix and update numpy compatibility
PGijsbers Jul 3, 2024
9bd7b2f
Move comment in-line
PGijsbers Jul 3, 2024
7ce5b89
Stringify name of new step to see if that prevented the action
PGijsbers Jul 3, 2024
91f6dee
Fix unspecified os for included jobs
PGijsbers Jul 3, 2024
670a76b
Fix typo in version pinning for numpy
PGijsbers Jul 3, 2024
412a193
Fix version specification for sklearn skips
PGijsbers Jul 4, 2024
35206bb
Output final list of installed packages for debugging purposes
PGijsbers Jul 4, 2024
f19897e
Cap scipy version for older versions of scikit-learn
PGijsbers Jul 4, 2024
dd11f5d
Update parameter base_estimator to estimator for sklearn>=1.4
PGijsbers Jul 4, 2024
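The parameter rename this commit adapts to can be sketched with another small gate (illustrative helper, not library code): in scikit-learn 1.2, `base_estimator` was deprecated in favour of `estimator` for meta-estimators such as `AdaBoostClassifier`, and `base_estimator` was removed in 1.4.

```python
# Hedged sketch: pick the keyword for passing a wrapped estimator to a
# meta-estimator. `base_estimator` was deprecated in 1.2 (replaced by
# `estimator`) and removed in 1.4, so gate on >= 1.2.
def estimator_param_name(sklearn_version: str) -> str:
    major, minor = (int(part) for part in sklearn_version.split(".")[:2])
    return "estimator" if (major, minor) >= (1, 2) else "base_estimator"
```

A caller would then build kwargs like `{estimator_param_name(v): DecisionTreeClassifier()}` so the same test runs against every matrix entry.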
9372054
Account for changes to sklearn interface in 1.4 and 1.5
PGijsbers Jul 4, 2024
72a2fb1
Non-strict reinstantiation requires different scikit-learn version
PGijsbers Jul 4, 2024
6830681
Parameters were already changed in 1.4
PGijsbers Jul 4, 2024
369f5c0
Fix race condition (I think)
PGijsbers Jul 4, 2024
a2e7022
Use latest patch version of each minor release
PGijsbers Jul 4, 2024
828a7a4
Convert numpy types back to builtin types
PGijsbers Jul 4, 2024
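The conversion this commit describes is typically done with `numpy.generic.item()`; a minimal sketch (the helper name is illustrative, not the actual openml-python function) might look like:

```python
import numpy as np

# Hedged sketch: recursively convert numpy scalar types (np.int64, np.float64,
# np.bool_, ...) back to Python builtins, e.g. before serialising values.
def to_builtin(value):
    if isinstance(value, np.generic):
        return value.item()  # np.generic.item() returns the closest builtin type
    if isinstance(value, dict):
        return {to_builtin(k): to_builtin(v) for k, v in value.items()}
    if isinstance(value, (list, tuple)):
        return type(value)(to_builtin(v) for v in value)
    return value
```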
e98019c
Specify versions with * instead to allow for specific patches
PGijsbers Jul 4, 2024
c7f93c8
Flow_exists does not return None but False if the flow does not exist
PGijsbers Jul 4, 2024
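The pitfall this commit fixes can be illustrated in isolation (the helper is hypothetical; per the commit message, the real `flow_exists` returns a flow id when the flow exists and `False`, not `None`, when it does not):

```python
# Hedged sketch: because absence is signalled by False rather than None,
# an `is None` check never fires; the existence check must compare to False.
def flow_is_present(result) -> bool:
    # result: a flow id when the flow exists, False when it does not
    return result is not False
```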
68481cf
Update new version definitions also in the installation step
PGijsbers Jul 4, 2024
a6b5ddd
Fix bug introduced in refactoring for np.generic support
PGijsbers Jul 4, 2024
9e8217f
Add back the single-test timeout of 600s
PGijsbers Jul 4, 2024
eda2b23
[skip ci] Add note to changelog
PGijsbers Jul 4, 2024
6d0cb41
Check that evaluations are present with None-check instead
PGijsbers Jul 4, 2024
2c161e4
Remove timeouts again
PGijsbers Jul 4, 2024
Fix race condition (I think)
It seems to me that run.evaluations is set only when the run is
fetched. Whether it has evaluations depends on server state.
So if the server has resolved the traces between the initial
fetch and the trace-check, you could be checking
len(run.evaluations) where evaluations is None.
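The fixed ordering can be sketched as a small polling loop (a simplified stand-in for the test helper in this diff; `get_run` and `get_run_trace` are injected here in place of the real `openml.runs` calls, and a `RuntimeError` stands in for `OpenMLServerException`):

```python
import time

# Hedged sketch of the fix: fetch the run only AFTER the trace is known to
# exist, so the fetched run.evaluations cannot be stale, and use a None-check
# rather than a length check on evaluations.
def wait_for_processed_run(run_id, max_waiting_time_seconds,
                           get_run, get_run_trace, sleep=time.sleep):
    start = time.time()
    while time.time() - start < max_waiting_time_seconds:
        try:
            get_run_trace(run_id)  # raises while the server is still processing
        except RuntimeError:
            sleep(10)
            continue
        run = get_run(run_id)      # fetched after the trace exists
        if run.evaluations is None:
            sleep(10)
            continue
        return run
    raise TimeoutError(f"Run {run_id} was not processed in time")
```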
PGijsbers committed Jul 4, 2024
commit 369f5c06f9ccb11d0259622c24408e154694895b
2 changes: 1 addition & 1 deletion — tests/test_runs/test_run_functions.py
@@ -119,14 +119,14 @@ def _wait_for_processed_run(self, run_id, max_waiting_time_seconds):
        # time.time() works in seconds
        start_time = time.time()
        while time.time() - start_time < max_waiting_time_seconds:
-           run = openml.runs.get_run(run_id, ignore_cache=True)

            try:
                openml.runs.get_run_trace(run_id)
            except openml.exceptions.OpenMLServerException:
                time.sleep(10)
                continue

+           run = openml.runs.get_run(run_id, ignore_cache=True)
@PGijsbers (Collaborator, Author) commented Jul 4, 2024:
Sometimes tests fail with the old code because run.evaluations is None, but I am not able to reproduce this locally. However, the failures do seem fewer now that I have moved where the run object is loaded (which makes sense: initially there is a race condition where the trace may not yet be processed at the time of the get_run call, but is by the time get_run_trace is called). Additionally, I simply changed the check from a length check to a None check. As far as I can tell, evaluations should be either a non-empty dictionary or None, so the length check doesn't make a lot of sense (the behavior was probably different historically). I kept an assert with the old check just to make sure my assumption is correct (and we get an error if it isn't).

            if len(run.evaluations) == 0:
                time.sleep(10)
                continue
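The check described in the review comment, None-check first with an assert guarding the "non-empty dict or None" assumption, can be sketched in isolation (the helper name is illustrative, not the actual test code):

```python
# Hedged sketch: evaluations is assumed to be either None or a non-empty
# dict, so a None-check replaces the length check; the assert raises if the
# assumption is ever wrong, as the review comment intends.
def evaluations_ready(evaluations) -> bool:
    if evaluations is None:
        return False
    assert len(evaluations) > 0, "expected evaluations to be non-empty or None"
    return True
```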