AI APPLICATIONS

Blaize and VSaaS to develop AI surveillance apps

Blaize | June 29, 2022

Video Surveillance as a Service (VSaaS), a pioneer in AI surveillance solutions, and Blaize have established a strategic partnership to develop AI-based infrastructure as a service (IaaS) for intelligence applications across several markets in Chile and the US. Under the deal, Blaize and VSaaS will scale up the development of AI surveillance applications for retail, smart cities, and transportation and accelerate their time to market.

"We have developed AI models running on the Blaize Graph Streaming Processor (GSP®) architecture, connecting 4X the number of cameras compared to other solutions. Our AI surveillance applications powered by Blaize enable quick and easy deployment running on the edge, on-premise, or in a data center, addressing the unique needs of our shared customers."

Francisco Soto, Founder and CEO of VSaaS

The VSaaS platform uses existing infrastructure to build, launch, and scale AI and machine learning-based video surveillance applications. Its AI Edge technology, which integrates with infrastructure already in place, is used across the security, retail, and transportation sectors. Combined with Blaize's highly efficient, low-latency hardware and open, code-free AI software, the partnership will enable novel AI use cases and faster ROI on AI edge deployments in diverse markets.

VSaaS is a leading supplier of artificial intelligence-based video surveillance applications. Its platform can deploy and manage surveillance apps on the same infrastructure, which offers several advantages to the end user, and it can work with AI models at the edge, on-premise, and in the cloud. Customers with up to 800 cameras can quickly and easily build security solutions for retail, construction, transportation, and smart cities. Headquartered in Chile, VSaaS is establishing a branch in San Francisco, CA.
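
Neither company details the programming interfaces behind these applications here, but the edge video-analytics pattern they describe (pulling frames from a camera stream and running a detection model on local hardware) can be sketched in a few lines. The sketch below is illustrative only: the RTSP URL, model file, and input size are hypothetical placeholders, and OpenCV's DNN module stands in for the Blaize GSP runtime and VSaaS tooling.

```python
import cv2  # OpenCV; its DNN module stands in here for a vendor edge runtime

# Hypothetical placeholders: substitute a real camera stream and ONNX model.
CAMERA_URL = "rtsp://camera.local/stream1"
MODEL_PATH = "person_detector.onnx"

net = cv2.dnn.readNetFromONNX(MODEL_PATH)   # load the detection model once
capture = cv2.VideoCapture(CAMERA_URL)      # open the camera stream

while True:
    ok, frame = capture.read()
    if not ok:
        break                               # stream dropped or ended

    # Resize and normalize the frame into the network's expected input tensor.
    blob = cv2.dnn.blobFromImage(frame, scalefactor=1 / 255.0, size=(640, 640))
    net.setInput(blob)
    detections = net.forward()              # inference runs on the edge device

    # Alerting, counting, or metadata forwarding would be driven from here.
    print("raw detection tensor shape:", detections.shape)

capture.release()
```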

Spotlight

Delivered by faculty members at the forefront of AI implementation, AI for Business is designed to give managers an understanding of the growing deployment of AI in business. The programme provides practical templates that guide managers in working with AI specialists and making the most of these emerging technologies.

Related News

MACHINE LEARNING

Tecton Partners with Databricks to Deploy Machine Learning Apps in Minutes

Tecton | June 27, 2022

In order to assist businesses in building and automating their machine learning (ML) feature pipelines from prototype to production, Tecton, the enterprise feature store company, today announced a partnership with Databricks, the Data and AI company and inventor of the data lakehouse paradigm. With the integration, data teams can use Tecton to quickly develop production-ready ML features on the Databricks Lakehouse Platform.

"We are thrilled to have Tecton available on the Databricks Lakehouse Platform. As a result, Databricks customers now have the option to use Tecton to operationalize features for their ML projects and effectively drive business with production ML applications," said Adam Conway, SVP of Products at Databricks.

Too many businesses are hesitant to integrate machine learning into their core business operations and services because productionizing ML models to support a wide range of predictive applications, such as fraud detection, real-time underwriting, dynamic pricing, recommendations, personalization, and search, presents unique data engineering challenges. It is also difficult to curate, serve, and manage the ML features (the predictive data signals) that power those applications. For this reason, Databricks and Tecton have teamed up to streamline and automate the numerous steps involved in converting raw data inputs into ML features and making those features available to power large-scale predictive applications.

Databricks, built on an open lakehouse architecture, enables ML teams to gather and analyze data, facilitates cross-team collaboration, and standardizes the whole ML lifecycle from experimentation to production. With Tecton, these same teams can quickly operationalize ML applications and automate the entire lifecycle of ML features without leaving the Databricks workspace.

"Building on Databricks' powerful and massively scalable foundation for data and AI, Tecton extends the underlying data infrastructure to support ML-specific requirements. This partnership with Databricks enables organizations to embed ML into live, customer-facing applications and business processes, quickly, reliably and at scale."

Mike Del Balso, co-founder and CEO of Tecton

Available on the Databricks Lakehouse Platform, Tecton serves as the central source of truth for ML features. It orchestrates, manages, and maintains the data pipelines that produce features, and its interface lets data teams write features as code using Python and SQL while tracking and sharing them through a version-control repository. Tecton then automates and orchestrates production-grade ML data pipelines so that feature values are materialized in a single repository. From there, customers can immediately explore, share, and serve features for model training, batch, and real-time predictions across use cases, without having to worry about common obstacles such as training-serving skew or point-in-time correctness.

Acting as the interface between the Databricks Lakehouse Platform and customers' ML models, Tecton lets customers compute features using real-time and streaming data from a variety of sources. By automatically creating the intricate feature engineering pipelines required to process streaming and real-time data, Tecton reduces the need for intensive engineering support and enables customers to significantly improve model performance, accuracy, and outcomes.
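
Tecton's own SDK requires a workspace and account, so the snippet below instead sketches the underlying feature-pipeline pattern described above (raw event data transformed with SQL into named features and materialized to a shared store) using plain PySpark. It is not the Tecton API, and the table, column, and path names are hypothetical.

```python
# Illustrative only: plain PySpark, not Tecton's SDK. Shows raw data -> SQL
# feature transform -> materialized feature table, the pattern described above.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("feature_pipeline_sketch").getOrCreate()

# Stand-in for a raw transactions table in the lakehouse.
raw = spark.createDataFrame(
    [("u1", 25.0, "2022-06-01"), ("u1", 40.0, "2022-06-02"), ("u2", 10.0, "2022-06-02")],
    ["user_id", "amount", "ts"],
)
raw.createOrReplaceTempView("transactions")

# "Features as code": the transformation is an ordinary, versionable SQL statement.
features = spark.sql("""
    SELECT
        user_id,
        COUNT(*)    AS transaction_count,
        SUM(amount) AS total_spend
    FROM transactions
    GROUP BY user_id
""")

# Materialize feature values to a single repository (here, a local Parquet path)
# from which offline training sets and an online serving store would be populated.
features.write.mode("overwrite").parquet("/tmp/feature_store/user_features")
```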

Read More

SOFTWARE

Atos and OVHcloud partner to advance quantum computing

Atos | June 10, 2022

In the field of quantum computing, Atos and OVHcloud, the European leader in cloud computing, have announced a partnership to make Atos' quantum emulator available "as a service" through OVHcloud offerings. A European first, the move will broaden access to quantum emulation technologies and thereby strengthen the quantum technologies ecosystem. Research labs, universities, startups, and large corporations will eventually be able to build quantum software and investigate cutting-edge applications well ahead of the market.

Atos and OVHcloud are committed to contributing to a coherent ecosystem suited to the advent of quantum computing technologies by fostering the emergence of tomorrow's champions and boosting their international reach. Thanks to the two European leaders, both public and private players in this burgeoning ecosystem will have access to a quantum development environment regardless of where they are located, allowing them to develop and experiment with "as a service" software bricks ahead of the effective release of the first quantum computers.

"By announcing this partnership, OVHcloud confirms its ambition to address and make accessible to the greatest number of people the most advanced technologies. The quantum revolution and the deployment of the first use cases cannot be achieved without the Cloud, whose consumption mode and freedom of use are uniquely able to unite expert communities."

Thierry Souche, CTO, OVHcloud

The offering seeks to replicate the main approaches to quantum processing by simulating a real quantum environment. Leveraging the unique capabilities of Atos' SMP BullSequana X800 server, the Atos QLM delivers unrivaled simulation capabilities, including three independent quantum programming models: the gate model, the annealing model, and the analog model. Through OVHcloud, users will be able to emulate circuits of up to 38 qubits in double precision and solve quantum annealing problems with 5,000+ qubits.

Having chosen Atos technology as the foundation for future developments, OVHcloud will offer quantum processing solutions through Jupyter Notebooks, giving developers easy access. Built on free and open standards, the Notebook will provide varying levels of performance depending on the infrastructure and will benefit from the work already done by the OVHcloud Artificial Intelligence teams.

"This partnership is a major milestone in preparing the quantum revolution. Thanks to OVHcloud, we can offer an "as a service" cloud version to democratize and share our quantum emulation technologies more widely to prepare for the future. We are convinced that the future of high-performance computing lies in the hybridization of our technologies, between traditional computing and the integration of co-processors, accelerators, quantum and cloud technologies. Already a pioneer in quantum with the launch of our QLM in 2017, which was the first emulator on the market, this strategic partnership with OVHcloud is an additional step forward in our quantum strategy," said Emmanuel Le Roux, SVP, Director of HPC, AI and Quantum, Atos.
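
For a sense of what quantum emulation "as a service" looks like to a developer, here is a minimal gate-model example written with Atos' open-source myQLM Python library, which mirrors the QLM programming interface. It builds a two-qubit Bell state and runs it on the local PyLinalg simulator, which stands in for the hosted emulator an OVHcloud notebook would target; class and module names follow myQLM's public documentation and may differ by version.

```python
# Minimal gate-model sketch with myQLM (Atos' open-source QLM-compatible library).
# The local PyLinalg simulator substitutes for the hosted QLM emulator.
from qat.lang.AQASM import Program, H, CNOT
from qat.qpus import PyLinalg

prog = Program()
qbits = prog.qalloc(2)                  # allocate two qubits
prog.apply(H, qbits[0])                 # put the first qubit into superposition
prog.apply(CNOT, qbits[0], qbits[1])    # entangle the pair into a Bell state

circuit = prog.to_circ()                # compile the program into a circuit
job = circuit.to_job()                  # wrap it as a job for a QPU or emulator
result = PyLinalg().submit(job)         # execute on the linear-algebra simulator

for sample in result:
    print(sample.state, sample.probability)  # expect |00> and |11> at ~0.5 each
```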

Read More

FUTURE TECH

Next Pathway to Partner with Microsoft to migrate customers to Microsoft Azure

Next Pathway | August 08, 2022

Next Pathway Inc., the Automated Cloud Migration company, today announced a collaboration with Microsoft to accelerate migrations from legacy data warehouses and data lakes to Microsoft Azure.

Next Pathway's SHIFT™ Migration Suite is being offered as part of the service to plan and execute cloud migrations to Azure. The suite includes SHIFT™ Analyzer, which provides a comprehensive review of source legacy application workloads to identify the code types and objects present, and SHIFT™ Translator, which accelerates the translation, testing, and migration of complex workloads such as SQL, stored procedures, ETL pipelines/workflows, and various other code types. Next Pathway's technology can also move workloads from other cloud platforms and cloud data warehouses to Azure with ease and efficiency.

An important feature of Next Pathway's technology is its ability to translate legacy ETL pipelines to run natively in Azure Data Factory (ADF). ADF is a cloud-based data integration service that allows users to create data-driven workflows in the cloud for orchestrating and automating data movement and data transformation. This collaboration creates an accelerated path to Azure for customers.

"At Next Pathway we are continuously innovating to make it easier and faster for our customers to migrate to the cloud. We are extremely excited to be working with Microsoft."

Chetan Mathur, Chief Executive Officer at Next Pathway

"We are pleased to partner with Next Pathway and provide our customers with a faster and more efficient migration path to Microsoft Azure," said Zia Mansoor, Worldwide Vice President, Data and AI.

About Next Pathway

Next Pathway is the Automated Cloud Migration company. Powered by Crawler360™, the Migration Planner and the SHIFT™ Migration Suite, Next Pathway automates the end-to-end challenges companies experience when migrating applications to the cloud.
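
As an illustration of the ADF side of such a migration, the sketch below creates a simple copy pipeline programmatically, roughly following Microsoft's Python quickstart for Data Factory. The resource group, factory, and dataset names are hypothetical placeholders, the referenced datasets and linked services are assumed to already exist, and the SDK surface may differ slightly by version.

```python
# Hypothetical sketch: define a one-activity Azure Data Factory pipeline with the
# azure-mgmt-datafactory SDK. Names are placeholders; datasets must already exist.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    BlobSink,
    BlobSource,
    CopyActivity,
    DatasetReference,
    PipelineResource,
)

credential = DefaultAzureCredential()
adf_client = DataFactoryManagementClient(credential, "<subscription-id>")

# A single copy step moving data between two pre-defined datasets.
copy_step = CopyActivity(
    name="CopyLegacyExtract",
    inputs=[DatasetReference(reference_name="LegacyExtractDataset")],
    outputs=[DatasetReference(reference_name="AzureLakeDataset")],
    source=BlobSource(),
    sink=BlobSink(),
)

pipeline = PipelineResource(activities=[copy_step])
adf_client.pipelines.create_or_update(
    "my-resource-group", "my-data-factory", "MigratedLoadPipeline", pipeline
)
```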

Read More