feat: Add Custom Training by Python Script #42
Conversation
from google.cloud.aiplatform import base
from google.cloud.aiplatform import datasets
from google.cloud.aiplatform import initializer
from google.cloud.aiplatform import models
Do we have a general guideline for when to import modules (e.g. `from google.cloud.aiplatform import models` here) versus when to import classes (e.g. `from google.cloud.aiplatform_v1beta1 import Model` below)?
If there are simple rules for deciding this, please add them to the contributing guide for reference.
If there is no standard defined, then we should follow go/pystyle#imports.
I'll make the changes in this PR, and we should ensure future PRs follow that style.
I also prefer to reduce the levels of indirection in imports, so we should import directly from the module that contains the implementation.
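For context, the style guide referenced above prefers importing modules over importing individual names. A minimal illustration of the two styles, using stdlib modules since the aiplatform packages may not be installed here:

```python
# Preferred per go/pystyle: import the module and qualify names at the
# use site, so the reader can always tell where a name comes from.
import json

data = json.loads('{"a": 1}')

# Discouraged: importing the name directly hides its origin and risks
# collisions when several modules define the same name.
from json import loads

same_data = loads('{"a": 1}')

assert data == same_data == {"a": 1}
```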
_TEST_MODEL_NAME = "projects/my-project/locations/us-central1/models/12345"
_TEST_PIPELINE_RESOURCE_NAME = (
For later: all of these are getting a little heavy and difficult to manage. We should at least put all of these pieces in a separate file so they are shared across tests; further down the road we might need to modify the approach.
* feat: Add Custom Training by Python Script
* chore: fix LRO tests and formatting
* feat: Address comments and change args to accept list of args to support positional args and flags.
* feat: rename is_failed to has_failed
* refactor: remove unused Dict type
* chore: lint
Adds the CustomTrainingJob class.
The current implementation splits the construction of the CustomTrainingJob from the running of the job. Note that no Custom Training service resource is created when the object is constructed; it is created at the `run` call, which can lead to issues if the correct args were not passed in during construction. This is mainly an issue for model serving args, because we require the model serving environment definitions at construction time, which should make sense since they are tightly coupled to the script provided at construction time. An alternative is to move this to the `run` call. Another alternative is to add all constructor and run args to the constructor and only have the model accessible through the get_model API, instead of the current flow.
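A minimal sketch contrasting the two flows under discussion. `CustomTrainingJob` below is a stand-in stub, not the real class; the argument names, and the assumption that `run` returns the Model in the current flow, are illustrative only:

```python
class CustomTrainingJob:
    """Stand-in stub to illustrate the two API shapes; not the real class."""

    def __init__(self, script_path, serving_container_image=None, run_args=None):
        self.script_path = script_path
        self.serving_container_image = serving_container_image
        self.run_args = run_args  # only used in the alternative flow
        self._model = None

    def run(self, args=None):
        # The training resource would be created here, so bad constructor
        # args only surface at run time.
        self._model = f"model-from-{self.script_path}"
        return self._model

    def get_model(self):
        # Alternative flow: the model is only reachable through get_model().
        return self._model


# Current flow: construct, then run; run returns the Model.
job = CustomTrainingJob("task.py", serving_container_image="img:latest")
model = job.run(args=["--epochs", "2"])

# Alternative flow: all constructor *and* run args at construction,
# run takes nothing, and the model is fetched via get_model().
job2 = CustomTrainingJob(
    "task.py",
    serving_container_image="img:latest",
    run_args=["--epochs", "2"],
)
job2.run()
model2 = job2.get_model()

assert model == model2
```

The trade-off the description points at: the alternative front-loads all validation into the constructor, while the current flow defers resource creation (and therefore some failures) to `run`.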
Fixes b/169779290 🦕