MACHINE LEARNING

Tecton Partners with Databricks to Deploy Machine Learning Apps in Minutes

Tecton | June 27, 2022

Tecton, the enterprise feature store company, today announced a partnership with Databricks, the Data and AI Company and inventor of the data lakehouse paradigm, to help businesses build and automate their machine learning (ML) feature pipelines from prototype to production. With this integration, data teams can use Tecton to quickly develop production-ready ML features on the Databricks Lakehouse Platform.

"We are thrilled to have Tecton available on the Databricks Lakehouse Platform. As a result, Databricks customers now have the option to use Tecton to operationalize features for their ML projects and effectively drive business with production ML applications," said Adam Conway, SVP of Products at Databricks.

Too many businesses hesitate to integrate machine learning (ML) into their core operations and services because productionizing ML models to support predictive applications such as fraud detection, real-time underwriting, dynamic pricing, recommendations, personalization, and search presents unique data engineering challenges. It is also difficult to curate, serve, and manage the ML features, the predictive data signals that power these applications. For this reason, Databricks and Tecton have teamed up to streamline and automate the many steps involved in converting raw data into ML features and making those features available to power large-scale predictive applications.

The Databricks Lakehouse Platform, built on an open lakehouse architecture, enables ML teams to gather and analyze data, collaborate across teams, and standardize the entire ML lifecycle from experimentation to production. With Tecton, those same teams can quickly operationalize ML applications and automate the entire lifecycle of ML features without leaving the Databricks workspace.

"Building on Databricks' powerful and massively scalable foundation for data and AI, Tecton extends the underlying data infrastructure to support ML-specific requirements. This partnership with Databricks enables organizations to embed ML into live, customer-facing applications and business processes, quickly, reliably and at scale."

Mike Del Balso, co-founder and CEO of Tecton

Available on the Databricks Lakehouse Platform, Tecton acts as the central source of truth for ML features. It orchestrates, manages, and maintains the data pipelines that produce those features. Data teams write features as code in Python and SQL and track and share them through a version-controlled repository. Tecton then automates and orchestrates production-grade ML data pipelines so that feature values are materialized in a single repository. From there, customers can immediately explore, share, and serve features for model training and for batch and real-time predictions across use cases, without having to worry about common obstacles such as training-serving skew or point-in-time correctness.
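To make the feature-as-code idea above more concrete, here is a minimal sketch of a batch feature pipeline written in plain PySpark; it is a library-agnostic illustration rather than Tecton's actual feature-definition API, and the table names and columns (demo.transactions, user_id, amount, ts) are assumptions.

```python
# Minimal, library-agnostic sketch of a batch feature pipeline.
# Table and column names (demo.transactions, user_id, amount, ts) are
# illustrative assumptions, not part of Tecton's or Databricks' schema.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("feature_sketch").getOrCreate()

raw = spark.table("demo.transactions")  # raw event data in the lakehouse

# Feature logic expressed as code: 7-day transaction count and spend per user.
features = (
    raw.filter(F.col("ts") >= F.date_sub(F.current_date(), 7))
    .groupBy("user_id")
    .agg(
        F.count("*").alias("txn_count_7d"),
        F.sum("amount").alias("txn_amount_7d"),
    )
    .withColumn("feature_timestamp", F.current_timestamp())
)

# Materialize feature values into a single offline repository (a Delta table here).
features.write.format("delta").mode("overwrite").saveAsTable("features.user_txn_7d")
```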

Acting as the interface between the Databricks Lakehouse Platform and customers' ML models, Tecton lets customers compute features from real-time and streaming data drawn from a variety of data sources. By automatically building the intricate feature engineering pipelines required to process streaming and real-time data, Tecton reduces the need for heavy engineering support and enables customers to significantly improve model performance, accuracy, and outcomes.
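For the streaming case, the hedged sketch below shows the kind of pipeline such tooling automates, using Spark Structured Streaming to keep a per-user feature fresh; the Kafka broker, topic, and event schema are illustrative assumptions, not details from the announcement.

```python
# Hedged sketch of a streaming feature pipeline using Spark Structured Streaming.
# The Kafka broker, topic, and event schema below are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("streaming_feature_sketch").getOrCreate()

schema = StructType([
    StructField("user_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # assumed broker address
    .option("subscribe", "transactions")                # assumed topic name
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Sliding-window aggregation: per-user spend over the last 10 minutes.
features = (
    events.withWatermark("event_time", "15 minutes")
    .groupBy(F.window("event_time", "10 minutes", "1 minute"), "user_id")
    .agg(F.sum("amount").alias("spend_10m"))
)

# Continuously materialize fresh feature values (a Delta sink is one option).
query = (
    features.writeStream.format("delta")
    .outputMode("append")
    .option("checkpointLocation", "/tmp/chk/user_spend_10m")
    .toTable("features.user_spend_10m")
)
```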

Related News

SOFTWARE, FUTURE TECH

Zayo Launches API Developer Portal to Accelerate Enterprise Digital Transformation

Zayo Group Holdings, Inc. | August 18, 2022

Zayo Group Holdings, Inc., a leading global communications infrastructure platform, today announced the launch of its Application Programming Interface (API) Developer Portal, giving customers a single online platform to explore, onboard, and test live environments with existing API offerings and to stay informed about upcoming developments. One of the biggest challenges enterprises face when working with multiple vendors is siloed information and data. With Zayo's API Developer Portal, customers now have a centralized, well-organized, easily navigable repository of information about automating business processes with Zayo.

"We want to empower our customers through every step of their digital transformation journey, and APIs are where that journey comes to life in the form of automation at our customers' fingertips. With our developer portal, customers now have a live environment where they can easily find the information they need to understand, design and build the best software integration strategy to make an impact and drive business results," said Leidy Perez, vice president of product, strategic software at Zayo.

Zayo currently receives hundreds of thousands of API calls a month originating from more than 100 customers. These requests come through key partners such as Connectbase, ACS Solutions, GeoTel, Upstack, NDA Corp, Masterstream and LMX, as well as through direct integrations. Benefits of Zayo's existing API offerings include:

- Network Discovery: Zayo's Building Validation, Location, and Cloud Service Provider APIs enable customers to analyze how Zayo's network availability powers their business footprint.
- Quote and Order: Customers can grow their business capabilities by leveraging Zayo products in its Product Catalog API, generating quotes with the Quote API, and automating the ordering process through the Order API.
- Service Management: Customers can retrieve critical service information, get support for issues, and keep track of planned outages with the Service Inventory, Ticketing, Ticket Catalog and Maintenance Cases APIs.

In addition to the API Developer Portal, Zayo has begun the rollout of the MEF "Lifecycle Service Orchestration" (LSO) Sonata "Billie" release APIs for quoting and ordering as well as performance monitoring. Further incorporation of MEF standards will enable Zayo to better serve its global carrier and enterprise customers, letting them buy services in a standardized way and automate MEF 3.0 Carrier Ethernet.

"Too often, customers must work around their system of record to manage multiple vendors. The swivel-chair of that information back into each system of record causes massive inefficiencies that can impact service delivery time and revenue," said Stan Hubbard, MEF Principal Analyst. "MEF LSO APIs offer an automated, standardized way for service providers to buy and sell services and maximize return on their investment in interface development. We are excited that MEF LSO Sonata APIs enable Zayo's innovative API Developer Portal."

About Zayo Group Holdings, Inc.: Zayo Group Holdings, Inc. is the leading global communications infrastructure platform, delivering a range of solutions including fiber and transport, packet, and managed edge services. Zayo owns and operates a Tier 1 IP backbone spanning 134,000 miles across North America and Europe. By providing this mission-critical bandwidth to its category-leading customers across the wireless, hyperscale, media, tech and finance industries, Zayo is fueling the innovations that are transforming society.
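For readers wondering what building against such a portal might look like, here is a brief, hypothetical Python sketch of a quote-then-order flow; the base URL, endpoint paths, payload fields, and authentication scheme are invented for illustration and are not Zayo's documented API.

```python
# Hypothetical sketch of a quote-then-order flow against a REST API portal.
# The base URL, endpoint paths, payload fields, and auth header are assumptions
# made for illustration only; consult the actual developer portal for real contracts.
import requests

BASE_URL = "https://api.example-provider.com/v1"   # placeholder, not a real Zayo URL
HEADERS = {"Authorization": "Bearer <token>", "Content-Type": "application/json"}

# 1. Request a quote for a service between two locations (fields are assumed).
quote_resp = requests.post(
    f"{BASE_URL}/quotes",
    headers=HEADERS,
    json={"product": "ethernet", "a_location": "DEN", "z_location": "CHI", "bandwidth_mbps": 1000},
    timeout=30,
)
quote_resp.raise_for_status()
quote = quote_resp.json()

# 2. If the price works, place an order referencing the quote.
order_resp = requests.post(
    f"{BASE_URL}/orders",
    headers=HEADERS,
    json={"quote_id": quote["id"]},
    timeout=30,
)
order_resp.raise_for_status()
print("Order status:", order_resp.json().get("status"))
```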


SOFTWARE

Weights & Biases and Run:ai Announce ML Developer Workflow Partnership

Weights & Biases | June 28, 2022

Run:ai, the industry pioneer in compute orchestration for AI workloads, and Weights & Biases (W&B), the premier developer-first MLOps platform, have announced a partnership. The collaboration will focus on building connectors between the two platforms to give ML developers and MLOps platform owners a seamless experience for MLOps and GPU orchestration across machine learning and deep learning workloads.

"We are excited to partner with Run.ai to provide data scientists and ML practitioners the best tools for their ML development workflow. The combination of the developer tools provided by Weights and Biases, the dynamic allocation and orchestration of compute resources from Run.ai, and the optimized hardware and software provided by NVIDIA gives ML teams everything they need to accelerate the development and deployment of AI in the enterprise," said Seann Gardiner, VP of Business Development at Weights & Biases.

The partnership will simplify AI initiatives for researchers in the field as well as for the MLOps and IT teams responsible for maintaining the infrastructure. ML practitioners will get, in a single interface, the W&B MLOps capabilities, including experiment and artifact tracking, hyperparameter optimization, and collaborative reports, along with access to NVIDIA accelerated computing resources managed by Run:ai's Atlas Platform. ML developers will be able to track NVIDIA GPU usage from the W&B dashboard and then increase utilization using Run:ai's scheduling and orchestration features. MLOps platform owners, in turn, will be able to provide a single ML system of record that maintains an accurate history of all ML experiments, model history, and dataset versioning, while optimizing GPU resource scheduling and consumption for ML practitioners.

"Run:ai and Weights & Biases together give data scientists, and the IT teams that support them, a complete solution for the full ML lifecycle, from building models, to training and inference in production. Companies building their AI infrastructure and tooling from scratch can use NVIDIA GPUs, Run:ai, and W&B and have everything they need to manage AI initiatives at scale," said Omri Geller, CEO and co-founder of Run:ai.

The collaboration will be especially valuable to ML researchers and businesses running on NVIDIA GPUs. Both Run:ai and W&B are validated software partners for NVIDIA DGX-Ready and NVIDIA AI Accelerated. To help businesses get the most out of their investment in NVIDIA-accelerated systems for ML and AI, the partnership will build on integrations between W&B, Run:ai, and NVIDIA AI Enterprise software.
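As a small illustration of the experiment-tracking half of this workflow, the hedged sketch below logs metrics with the open-source wandb Python client; the project name, config values, and stand-in training loop are assumptions for demonstration, not part of the announced integration.

```python
# Minimal experiment-tracking sketch with the wandb client.
# Project name, config values, and the fake training loop are illustrative assumptions.
import random
import wandb

run = wandb.init(
    project="demo-experiments",          # assumed project name
    config={"learning_rate": 1e-3, "batch_size": 64, "epochs": 5},
)

for epoch in range(run.config["epochs"]):
    # Stand-in for a real training step; replace with actual model code.
    train_loss = 1.0 / (epoch + 1) + random.random() * 0.05
    val_accuracy = 0.7 + 0.05 * epoch

    # Metrics logged here appear on the W&B dashboard alongside system metrics
    # (including GPU utilization, which the client records automatically).
    wandb.log({"epoch": epoch, "train_loss": train_loss, "val_accuracy": val_accuracy})

run.finish()
```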


AI TECH

Behavox Takes a Giant Step Forward in Compliance

Behavox | June 20, 2022

Behavox, a security company that helps compliance, HR, and security teams defend their companies and employees from business risk, announced the launch of Behavox Quantum™, its next-generation compliance solution. Behavox will host an "AI in Compliance and Security" conference on July 19 to officially launch Behavox Quantum to clients and the artificial intelligence industry. A new, state-of-the-art AI execution engine and a brand-new AI architecture are at the heart of this leap in AI and machine learning.

"This solution is the culmination of our efforts over the last three years, and leverages technology that wasn't available until 2021. The feedback we've received thus far is that Behavox Quantum is the future of AI and machine learning. There is nothing like it in the marketplace today - it is a quantum leap forward in compliance solutions," said Chief Customer Intelligence Officer Fahreen Kurji.

These improvements have undergone extensive scientific testing to allow objective comparisons between Behavox Quantum and alternatives in terms of accuracy and efficiency. In these tests, Behavox Quantum was benchmarked against lexicon-driven systems and consistently outperformed them in accuracy and performance, with some models performing even better.

"Behavox Quantum generates more useful, actionable alerts, meaning less time is wasted chasing false positives. The ability to catch 80% of bad actors, in comparison to the 2% that lexicons catch, will revolutionize the industry. The objective benchmarking of Behavox Quantum puts it unequivocally in a league of its own, the market leader from day one," said Chief Technology Officer Joseph Benjamin.

The Behavox in-house Compliance team produces thorough model risk documentation to accompany its AI models. Anti-trust/anti-competition, client interests, conflicts of interest, financial crime, fit and proper considerations, general misconduct indicators, market abuse, and trade errors are all risk areas that Behavox Quantum protects customers against.

Behavox Quantum is available in several languages, including Spanish, Japanese, French, Danish, and Swedish, with more to come as the service grows. Its built-in translator allows compliance teams to monitor surveillance in many languages. The Behavox Translator uses the same cutting-edge AI core as Behavox Quantum, and in the domain of financial services and compliance it outperformed Google, DeepL, Microsoft, and Amazon in terms of BLEU scores.
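Because the translation comparison above is framed in terms of BLEU scores, here is a short, hedged sketch of how such a score can be computed with the open-source sacrebleu package; the hypothesis and reference sentences are invented for illustration and are unrelated to Behavox's evaluation data.

```python
# Hedged sketch: computing a corpus-level BLEU score with sacrebleu.
# The hypothesis and reference sentences below are invented for illustration.
import sacrebleu

hypotheses = [
    "the trade was executed at the agreed price",
    "please confirm the settlement date for this order",
]
references = [
    [
        "the trade was executed at the agreed-upon price",
        "please confirm the settlement date of this order",
    ]
]

# corpus_bleu expects a list of hypotheses and a list of reference streams,
# where each stream holds one reference per hypothesis.
bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}")
```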
