Commit e75af7f

Author: GitHub Actions (committed)
Neeratyoy Mallik: Making some unit tests work (#1000)
1 parent 8894389

151 files changed: 1742 additions & 729 deletions

develop/.buildinfo

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
 # Sphinx build info version 1
 # This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
-config: c67f9e82b87e6d5502ebba386244da75
+config: fbe988eb3582ca246bb81ab0b775fcd7
 tags: 645f666f9bcd5a90fca523b33c5a78b7

develop/_downloads/9015033ecdd320a033e2856e20fc25f4/custom_flow_tutorial.ipynb

Lines changed: 3 additions & 3 deletions
@@ -69,7 +69,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "It is possible to build a flow which uses other flows.\nFor example, the Random Forest Classifier is a flow, but you could also construct a flow\nwhich uses a Random Forest Classifier in a ML pipeline. When constructing the pipeline flow,\nyou can use the Random Forest Classifier flow as a *subflow*. It allows for\nall hyperparameters of the Random Classifier Flow to also be specified in your pipeline flow.\n\nIn this example, the auto-sklearn flow is a subflow: the auto-sklearn flow is entirely executed as part of this flow.\nThis allows people to specify auto-sklearn hyperparameters used in this flow.\nIn general, using a subflow is not required.\n\nNote: flow 15275 is not actually the right flow on the test server,\nbut that does not matter for this demonstration.\n\n"
+    "It is possible to build a flow which uses other flows.\nFor example, the Random Forest Classifier is a flow, but you could also construct a flow\nwhich uses a Random Forest Classifier in a ML pipeline. When constructing the pipeline flow,\nyou can use the Random Forest Classifier flow as a *subflow*. It allows for\nall hyperparameters of the Random Classifier Flow to also be specified in your pipeline flow.\n\nIn this example, the auto-sklearn flow is a subflow: the auto-sklearn flow is entirely executed as part of this flow.\nThis allows people to specify auto-sklearn hyperparameters used in this flow.\nIn general, using a subflow is not required.\n\nNote: flow 9313 is not actually the right flow on the test server,\nbut that does not matter for this demonstration.\n\n"
    ]
   },
   {
@@ -80,7 +80,7 @@
   },
   "outputs": [],
   "source": [
-    "autosklearn_flow = openml.flows.get_flow(15275) # auto-sklearn 0.5.1\nsubflow = dict(components=OrderedDict(automl_tool=autosklearn_flow),)"
+    "autosklearn_flow = openml.flows.get_flow(9313) # auto-sklearn 0.5.1\nsubflow = dict(components=OrderedDict(automl_tool=autosklearn_flow),)"
    ]
   },
   {
@@ -116,7 +116,7 @@
   },
   "outputs": [],
   "source": [
-    "flow_id = autosklearn_amlb_flow.flow_id\n\nparameters = [\n    OrderedDict([(\"oml:name\", \"cores\"), (\"oml:value\", 4), (\"oml:component\", flow_id)]),\n    OrderedDict([(\"oml:name\", \"memory\"), (\"oml:value\", 16), (\"oml:component\", flow_id)]),\n    OrderedDict([(\"oml:name\", \"time\"), (\"oml:value\", 120), (\"oml:component\", flow_id)]),\n]\n\ntask_id = 1408 # Iris Task\ntask = openml.tasks.get_task(task_id)\ndataset_id = task.get_dataset().dataset_id"
+    "flow_id = autosklearn_amlb_flow.flow_id\n\nparameters = [\n    OrderedDict([(\"oml:name\", \"cores\"), (\"oml:value\", 4), (\"oml:component\", flow_id)]),\n    OrderedDict([(\"oml:name\", \"memory\"), (\"oml:value\", 16), (\"oml:component\", flow_id)]),\n    OrderedDict([(\"oml:name\", \"time\"), (\"oml:value\", 120), (\"oml:component\", flow_id)]),\n]\n\ntask_id = 1965 # Iris Task\ntask = openml.tasks.get_task(task_id)\ndataset_id = task.get_dataset().dataset_id"
    ]
   },
   {

develop/_downloads/a8c9cf480c327557392da8a19a0b9378/custom_flow_tutorial.py

Lines changed: 3 additions & 3 deletions
@@ -82,10 +82,10 @@
 # This allows people to specify auto-sklearn hyperparameters used in this flow.
 # In general, using a subflow is not required.
 #
-# Note: flow 15275 is not actually the right flow on the test server,
+# Note: flow 9313 is not actually the right flow on the test server,
 # but that does not matter for this demonstration.

-autosklearn_flow = openml.flows.get_flow(15275) # auto-sklearn 0.5.1
+autosklearn_flow = openml.flows.get_flow(9313) # auto-sklearn 0.5.1
 subflow = dict(components=OrderedDict(automl_tool=autosklearn_flow),)

 ####################################################################################################
@@ -120,7 +120,7 @@
     OrderedDict([("oml:name", "time"), ("oml:value", 120), ("oml:component", flow_id)]),
 ]

-task_id = 1408 # Iris Task
+task_id = 1965 # Iris Task
 task = openml.tasks.get_task(task_id)
 dataset_id = task.get_dataset().dataset_id

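The hunks above swap the flow id (15275 → 9313) and the task id (1408 → 1965) used on the test server, but the surrounding structure is unchanged: run parameters are plain `OrderedDict`s tied to a flow via `oml:component`. A minimal offline sketch of that structure, with the flow id hard-coded to 9313 instead of being fetched with `openml.flows.get_flow` so no server connection is needed:

```python
from collections import OrderedDict

# Hard-coded stand-in for autosklearn_amlb_flow.flow_id; the tutorial fetches
# the real flow from the OpenML test server, which this sketch avoids.
flow_id = 9313

# Each run parameter names itself, carries a value, and points back to the
# (sub)flow it belongs to via "oml:component".
parameters = [
    OrderedDict([("oml:name", "cores"), ("oml:value", 4), ("oml:component", flow_id)]),
    OrderedDict([("oml:name", "memory"), ("oml:value", 16), ("oml:component", flow_id)]),
    OrderedDict([("oml:name", "time"), ("oml:value", 120), ("oml:component", flow_id)]),
]

for p in parameters:
    print(p["oml:name"], "=", p["oml:value"], "on flow", p["oml:component"])
```

Because the component id is stored per parameter, a pipeline flow with subflows can attach each hyperparameter to the exact subflow it configures.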
develop/_downloads/b95c071188526f5ef5d991e382df9fa5/datasets_tutorial.py

Lines changed: 10 additions & 5 deletions
@@ -112,7 +112,7 @@

 ############################################################################
 # Edit a created dataset
-# =================================================
+# ======================
 # This example uses the test server, to avoid editing a dataset on the main server.
 openml.config.start_using_configuration_for_example()
 ############################################################################
@@ -143,18 +143,23 @@
 # tasks associated with it. To edit critical fields of a dataset (without tasks) owned by you,
 # configure the API key:
 # openml.config.apikey = 'FILL_IN_OPENML_API_KEY'
-data_id = edit_dataset(564, default_target_attribute="y")
-print(f"Edited dataset ID: {data_id}")
-
+# This example here only shows a failure when trying to work on a dataset not owned by you:
+try:
+    data_id = edit_dataset(1, default_target_attribute="shape")
+except openml.exceptions.OpenMLServerException as e:
+    print(e)

 ############################################################################
 # Fork dataset
+# ============
 # Used to create a copy of the dataset with you as the owner.
 # Use this API only if you are unable to edit the critical fields (default_target_attribute,
 # ignore_attribute, row_id_attribute) of a dataset through the edit_dataset API.
 # After the dataset is forked, you can edit the new version of the dataset using edit_dataset.

-data_id = fork_dataset(564)
+data_id = fork_dataset(1)
+print(data_id)
+data_id = edit_dataset(data_id, default_target_attribute="shape")
 print(f"Forked dataset ID: {data_id}")

 openml.config.stop_using_configuration_for_example()
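The rewritten tutorial code demonstrates a fallback pattern: editing a critical field of a dataset you do not own raises a server exception, so you fork the dataset first and edit the fork. A sketch of that pattern using hypothetical stand-in functions that simulate the ownership rule locally (these are not the real `openml` client calls, which require a server):

```python
# Hypothetical stand-ins for openml's edit_dataset/fork_dataset. Ownership is
# simulated with a local set instead of a server-side check.
class OpenMLServerException(Exception):
    pass

OWNED = set()  # ids of datasets "owned" by the current user

def edit_dataset(data_id, default_target_attribute):
    # Editing a critical field is only allowed for the dataset owner.
    if data_id not in OWNED:
        raise OpenMLServerException(f"Dataset {data_id} is not owned by you")
    return data_id

def fork_dataset(data_id):
    # Forking creates a copy with the caller as owner; the id 10_000 offset
    # merely simulates the server assigning a fresh dataset id.
    forked_id = data_id + 10_000
    OWNED.add(forked_id)
    return forked_id

# Editing dataset 1 directly fails, as in the tutorial's try/except:
try:
    edit_dataset(1, default_target_attribute="shape")
except OpenMLServerException as e:
    print(e)

# Forking first succeeds, and the fork can then be edited:
data_id = fork_dataset(1)
data_id = edit_dataset(data_id, default_target_attribute="shape")
print(f"Forked dataset ID: {data_id}")
```

The design point the commit makes is the same: `fork_dataset` is the escape hatch when `edit_dataset` is denied on a dataset you do not own.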

develop/_downloads/ea59c0f4a0e0075b24ecbb3b84c5744d/datasets_tutorial.ipynb

Lines changed: 4 additions & 4 deletions
@@ -195,7 +195,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Editing critical fields (default_target_attribute, row_id_attribute, ignore_attribute) is allowed\nonly for the dataset owner. Further, critical fields cannot be edited if the dataset has any\ntasks associated with it. To edit critical fields of a dataset (without tasks) owned by you,\nconfigure the API key:\nopenml.config.apikey = 'FILL_IN_OPENML_API_KEY'\n\n"
+    "Editing critical fields (default_target_attribute, row_id_attribute, ignore_attribute) is allowed\nonly for the dataset owner. Further, critical fields cannot be edited if the dataset has any\ntasks associated with it. To edit critical fields of a dataset (without tasks) owned by you,\nconfigure the API key:\nopenml.config.apikey = 'FILL_IN_OPENML_API_KEY'\nThis example here only shows a failure when trying to work on a dataset not owned by you:\n\n"
    ]
   },
   {
@@ -206,14 +206,14 @@
   },
   "outputs": [],
   "source": [
-    "data_id = edit_dataset(564, default_target_attribute=\"y\")\nprint(f\"Edited dataset ID: {data_id}\")"
+    "try:\n    data_id = edit_dataset(1, default_target_attribute=\"shape\")\nexcept openml.exceptions.OpenMLServerException as e:\n    print(e)"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Fork dataset\nUsed to create a copy of the dataset with you as the owner.\nUse this API only if you are unable to edit the critical fields (default_target_attribute,\nignore_attribute, row_id_attribute) of a dataset through the edit_dataset API.\nAfter the dataset is forked, you can edit the new version of the dataset using edit_dataset.\n\n"
+    "### Fork dataset\nUsed to create a copy of the dataset with you as the owner.\nUse this API only if you are unable to edit the critical fields (default_target_attribute,\nignore_attribute, row_id_attribute) of a dataset through the edit_dataset API.\nAfter the dataset is forked, you can edit the new version of the dataset using edit_dataset.\n\n"
    ]
   },
   {
@@ -224,7 +224,7 @@
   },
   "outputs": [],
   "source": [
-    "data_id = fork_dataset(564)\nprint(f\"Forked dataset ID: {data_id}\")\n\nopenml.config.stop_using_configuration_for_example()"
+    "data_id = fork_dataset(1)\nprint(data_id)\ndata_id = edit_dataset(data_id, default_target_attribute=\"shape\")\nprint(f\"Forked dataset ID: {data_id}\")\n\nopenml.config.stop_using_configuration_for_example()"
    ]
   }
  ],
