Implement JobService API calls & connect it to SDK (#1129)
* Implement half of JobService functionality
Signed-off-by: Tsotne Tabidze <tsotnet@gmail.com>
* Python lint
* Correct the implementation of all JobService functions, tested in standalone mode
* New API calls (start_offline_to_online_ingestion, start_stream_to_online_ingestion) now return Remote Jobs instead of job ids
* Implement list_jobs & get_job for standalone mode (Spark runs in local mode, so job statuses can't be queried and a cache has to be kept in memory)
* Wire up list_jobs & get_job on client side with job service
* Tested locally on the Feast 101 notebook; everything works
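The changes above can be sketched as follows. All class and method names here are illustrative assumptions, not the actual Feast SDK surface: start_* calls hand back a job object instead of a bare id, standalone mode remembers started jobs in an in-memory cache (local-mode Spark offers no status API to query), and the client forwards get_job/list_jobs to a job service when one is configured.

```python
from enum import Enum
from typing import Dict, List, Optional


class SparkJobStatus(Enum):
    IN_PROGRESS = 1
    COMPLETED = 2
    FAILED = 3


class RemoteJob:
    """Illustrative job handle: callers can poll it rather than hold a bare id."""

    def __init__(self, job_id: str, status: SparkJobStatus) -> None:
        self._job_id = job_id
        self._status = status

    def get_id(self) -> str:
        return self._job_id

    def get_status(self) -> SparkJobStatus:
        return self._status


class StandaloneJobCache:
    """Local-mode Spark exposes no job-status API, so standalone mode
    keeps every started job in memory (hypothetical shape)."""

    def __init__(self) -> None:
        self._jobs: Dict[str, RemoteJob] = {}

    def add(self, job: RemoteJob) -> None:
        self._jobs[job.get_id()] = job

    def get_job(self, job_id: str) -> RemoteJob:
        return self._jobs[job_id]

    def list_jobs(self) -> List[RemoteJob]:
        return list(self._jobs.values())


class Client:
    """Client-side wiring sketch: when a job service is configured,
    job lookups are forwarded to it; otherwise the local cache is used."""

    def __init__(self, job_service: Optional[StandaloneJobCache] = None) -> None:
        self._job_service = job_service
        self._local_cache = StandaloneJobCache()

    def start_offline_to_online_ingestion(self, table: str) -> RemoteJob:
        # Returns a RemoteJob object instead of a job id string.
        job = RemoteJob(f"ingest-{table}", SparkJobStatus.IN_PROGRESS)
        self._local_cache.add(job)
        return job

    def get_job(self, job_id: str) -> RemoteJob:
        backend = self._job_service or self._local_cache
        return backend.get_job(job_id)

    def list_jobs(self) -> List[RemoteJob]:
        backend = self._job_service or self._local_cache
        return backend.list_jobs()


client = Client()
job = client.start_offline_to_online_ingestion("driver_stats")
```

The returned object lets callers poll `job.get_status()` directly, which is the motivation for returning jobs rather than ids.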
* Fix list_jobs when include_terminated=False
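A minimal sketch of the `include_terminated` behavior being fixed; the status names and the set of terminal statuses are assumptions, not the actual SDK definitions:

```python
from enum import Enum
from typing import List


class SparkJobStatus(Enum):
    IN_PROGRESS = 1
    COMPLETED = 2
    FAILED = 3


# Statuses treated as "terminated" (illustrative set).
TERMINAL_STATUSES = {SparkJobStatus.COMPLETED, SparkJobStatus.FAILED}


class Job:
    def __init__(self, job_id: str, status: SparkJobStatus) -> None:
        self.job_id = job_id
        self.status = status


def list_jobs(jobs: List[Job], include_terminated: bool) -> List[Job]:
    """Return every job, or only the still-running ones when
    include_terminated=False (the case this commit fixes)."""
    if include_terminated:
        return list(jobs)
    return [job for job in jobs if job.status not in TERMINAL_STATUSES]


jobs = [
    Job("a", SparkJobStatus.IN_PROGRESS),
    Job("b", SparkJobStatus.COMPLETED),
]
running = list_jobs(jobs, include_terminated=False)  # only job "a" remains
```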
* e2e tests
* Format python
* Add docstring
* Remove extra Spark configs and hardcode Spark jars/conf in standalone mode
* Remove extra Spark params from the Dockerfile
* Use start_stream_to_online_ingestion from launcher in job service
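The last change can be sketched like this; the servicer, handler, and message names are hypothetical stand-ins, not the generated gRPC code. The point is that the job service handler delegates to the same launcher-level `start_stream_to_online_ingestion` used elsewhere instead of duplicating Spark submission logic:

```python
from dataclasses import dataclass


@dataclass
class StreamIngestionJob:
    job_id: str


def start_stream_to_online_ingestion(feature_table: str) -> StreamIngestionJob:
    """Stand-in for the shared launcher function that actually submits
    the Spark streaming job."""
    return StreamIngestionJob(job_id=f"stream-{feature_table}")


@dataclass
class StartStreamToOnlineIngestionJobResponse:
    id: str


class JobServiceServicer:
    """Sketch of the job service handler: it reuses the launcher
    function rather than re-implementing job submission."""

    def StartStreamToOnlineIngestionJob(self, request, context=None):
        job = start_stream_to_online_ingestion(request["table_name"])
        return StartStreamToOnlineIngestionJobResponse(id=job.job_id)


resp = JobServiceServicer().StartStreamToOnlineIngestionJob({"table_name": "driver_stats"})
```

Routing both the SDK and the service through one launcher function keeps job submission behavior identical regardless of which entry point starts the job.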