- Google Cloud announced the beta of Cloud AI Platform Pipelines, an easy-to-install, secure execution environment for machine learning workflows.
- The service is designed to deploy robust, repeatable AI pipelines in the cloud, along with features such as monitoring, auditing, version tracking, and reproducibility.
- The newly launched service should help companies reduce the time it takes to put a product into production.
Recently, Google announced the beta launch of Cloud AI Platform Pipelines, a service designed to deploy robust, repeatable AI pipelines in the cloud, along with monitoring, auditing, version tracking, and reproducibility. Google is pitching it as an "easy to install" secure execution environment for machine learning workflows, which could reduce the amount of time enterprises spend bringing products to production.
“When you’re just prototyping a machine learning model in a notebook, it can seem fairly straightforward. But when you need to start paying attention to the other pieces required to make a [machine learning] workflow sustainable and scalable, things become more complex.”
Anusha Ramesh, Product Manager, Google
AI Platform Pipelines has two major parts.
1) The infrastructure for deploying and running structured AI workflows that are integrated with Google Cloud Platform services and
2) The pipeline tools for building, debugging and sharing pipelines and components.
The service runs on a cluster that is automatically created during the installation process and is accessible through the Cloud AI Platform dashboard. With AI Platform Pipelines, developers can specify pipelines using the Kubeflow Pipelines Software Development Kit (SDK), or by customizing the TensorFlow Extended (TFX) pipeline template with the TFX SDK. The SDK compiles the pipeline and submits it to the Pipelines REST API server, which stores the pipeline and schedules it for execution.
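To make the compile-and-submit flow concrete, here is a minimal stdlib-only sketch. It is not the actual Kubeflow Pipelines or TFX SDK; the step names and the `compile_pipeline` helper are hypothetical stand-ins for what an SDK does before handing a spec to the Pipelines REST API server.

```python
import json

def compile_pipeline(name, steps):
    """'Compile' a pipeline definition into a JSON spec of the kind
    that could be submitted to a Pipelines REST API server.
    (Toy stand-in for a real pipelines SDK compiler.)"""
    return json.dumps({
        "name": name,
        "steps": [
            {"name": step, "depends_on": deps}
            for step, deps in steps.items()
        ],
    }, indent=2)

# A typical ML workflow: each step lists the steps it depends on.
steps = {
    "prepare-data": [],
    "train": ["prepare-data"],
    "evaluate": ["train"],
    "deploy": ["evaluate"],
}

spec = compile_pipeline("demo-pipeline", steps)
# Submission would then be an HTTP POST of `spec` to the cluster's
# Pipelines REST API endpoint (omitted here).
```

The key idea is the separation of concerns the article describes: the SDK turns code into a declarative spec, and the API server, not the client, owns storage and scheduling of runs.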
AI Platform Pipelines uses the Argo workflow engine to run pipelines and has additional microservices to record metadata, handle component I/O, and schedule pipeline runs. Pipeline steps are executed as individual isolated Pods in the cluster, and each component can leverage Google Cloud services such as Dataflow, AI Platform Training and Prediction, and BigQuery. Pipelines can also include steps that perform GPU and TPU computation directly in the cluster, taking advantage of features such as autoscaling and automatic node provisioning.
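The scheduling behavior described above can be sketched with the standard library: a runner executes each step as an independent unit once its dependencies have completed. This is a conceptual illustration only; in AI Platform Pipelines each step actually runs as a Pod on the cluster, whereas this toy runner just calls local functions, and the step names are hypothetical.

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Dependency graph: each step maps to the set of steps it waits on.
deps = {
    "prepare-data": set(),
    "train": {"prepare-data"},
    "evaluate": {"train"},
    "deploy": {"evaluate"},
}

def run_step(name):
    # Stand-in for launching an isolated container/Pod for this step.
    return f"{name}: done"

# Resolve an execution order that respects all dependencies,
# then run each step independently in that order.
order = list(TopologicalSorter(deps).static_order())
results = [run_step(step) for step in order]
```

Because each step is independent, a real orchestrator can also run non-dependent steps in parallel and scale the underlying nodes as needed.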
AI Platform Pipelines runs include automatic metadata tracking using ML Metadata, a library for recording and retrieving metadata associated with machine learning developer and data scientist workflows. Automatic metadata tracking logs the artifacts used in each pipeline step, the pipeline parameters, and the linkage across input/output artifacts, as well as the pipeline steps that created and consumed them.
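The following sketch shows the kind of records such tracking keeps. It is not the ML Metadata (MLMD) API; the `MetadataStore` class, its methods, and the artifact names here are hypothetical, illustrating only the idea of logging parameters plus input/output artifacts per step so that lineage can be queried later.

```python
class MetadataStore:
    """Toy metadata store: one record per step execution."""

    def __init__(self):
        self.executions = []

    def record(self, step, params, inputs, outputs):
        # Log the step's parameters and the artifacts it read/wrote.
        self.executions.append({
            "step": step, "params": params,
            "inputs": list(inputs), "outputs": list(outputs),
        })

    def producers_of(self, artifact):
        """Which step(s) produced a given artifact?"""
        return [e["step"] for e in self.executions
                if artifact in e["outputs"]]

store = MetadataStore()
store.record("prepare-data", {"split": 0.8}, ["raw.csv"], ["train.tfrecord"])
store.record("train", {"epochs": 5}, ["train.tfrecord"], ["model/v1"])

# Lineage question: which step created model/v1?
producer = store.producers_of("model/v1")
```

Because every execution links inputs to outputs, questions like "which data and parameters produced this model?" can be answered automatically rather than reconstructed from notebooks.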
Also, AI Platform Pipelines supports pipeline versioning, which enables developers to upload multiple versions of the same pipeline and group them in the UI, as well as automatic artifact and lineage tracking. Native artifact tracking covers models, data statistics, model evaluation metrics, and more. Lineage tracking shows the history and versions of models, data, and other artifacts.
“A machine learning workflow can involve many steps with dependencies on each other, from data preparation and analysis to training, to evaluation, to deployment, and more. It’s hard to compose and track these processes in an ad-hoc manner — for example, in a set of notebooks or scripts — and things like auditing and reproducibility become increasingly problematic.”
Google said that AI Platform Pipelines will soon gain multi-user isolation, which will allow each person accessing the pipeline cluster to control who can access their pipelines and other resources. Other upcoming features include workload identity to support transparent access to Google Cloud services; UI-based setup of off-cluster storage for back-end data, including metadata, server data, job history, and metrics; simpler cluster upgrades; and more templates for authoring workflows.
What’s next?
Google Cloud has some new Pipelines features coming soon, including support for:
- Multi-user isolation, so that each person accessing the Pipelines cluster can control who can access their pipelines and other resources
- Workload identity, to support transparent access to GCP services
- Easy, UI-based setup of off-cluster storage of backend data—including metadata, server data, job history, and metrics—for larger-scale deployments and so that it can persist after cluster shutdown
- Easy cluster upgrades
- More templates for authoring ML workflows