Manually log metrics with Agent Platform Experiments

TensorBoard time series metrics can be manually logged to an Agent Platform Experiments run. These metrics are visualized in the Agent Platform Experiments console or in your Agent Platform TensorBoard experiment web app.

For more details on logging metrics and parameters, see Manually log data to an experiment run.

Python

from typing import Dict, Optional

from google.cloud import aiplatform
from google.protobuf import timestamp_pb2


def log_time_series_metrics_sample(
    experiment_name: str,
    run_name: str,
    metrics: Dict[str, float],
    step: Optional[int],
    wall_time: Optional[timestamp_pb2.Timestamp],
    project: str,
    location: str,
):
    # Initialize the SDK against the target experiment, project, and location.
    aiplatform.init(experiment=experiment_name, project=project, location=location)

    # Resume the existing run (resume=True) rather than starting a new one.
    aiplatform.start_run(run=run_name, resume=True)

    # Log the metric values, optionally pinned to a step and wall-clock time.
    aiplatform.log_time_series_metrics(metrics=metrics, step=step, wall_time=wall_time)

  • experiment_name: Provide a name for your experiment.
  • run_name: Provide a run name.
  • metrics: Dictionary where keys are metric names and values are metric values.
  • step: Optional. Step index of this data point within the run.
  • wall_time: Optional. Wall clock timestamp when this data point is generated by the end user. If not provided, wall_time is generated based on the value from time.time().
  • project: Your project ID. You can find your project ID in the Google Cloud console welcome page.
  • location: Location of your experiment and TensorBoard instance. If the experiment or TensorBoard instance doesn't already exist, it is created in this location.
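To illustrate the step and wall_time semantics described above without calling the Agent Platform APIs, the following is a minimal local sketch (the `LocalRun` class is hypothetical, not part of the SDK). It records each metric as a (step, wall_time, value) point and, as the parameter list notes, falls back to time.time() when no wall_time is supplied.

```python
import time
from typing import Dict, Optional


# Hypothetical stand-in for an experiment run; it mimics only the
# time-series logging semantics described above, not the real SDK.
class LocalRun:
    def __init__(self):
        # Maps each metric name to a list of (step, wall_time, value) points.
        self.series = {}

    def log_time_series_metrics(
        self,
        metrics: Dict[str, float],
        step: Optional[int] = None,
        wall_time: Optional[float] = None,
    ):
        # Default wall_time to the current time, matching the documented behavior.
        if wall_time is None:
            wall_time = time.time()
        for name, value in metrics.items():
            points = self.series.setdefault(name, [])
            if step is None:
                # Auto-increment the step when the caller omits it.
                step = len(points)
            points.append((step, wall_time, value))


run = LocalRun()
run.log_time_series_metrics({"loss": 0.9, "accuracy": 0.40}, step=0)
run.log_time_series_metrics({"loss": 0.7, "accuracy": 0.55}, step=1)
```

After these two calls, `run.series["loss"]` holds two points at steps 0 and 1, each stamped with the wall-clock time at which it was logged.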