Tanla | June 13, 2022
Tanla, a leading global CPaaS provider, and Kore.ai, the world's leading enterprise conversational AI software platform and solutions company, today announced an exclusive partnership in five countries – India, the United Arab Emirates, Indonesia, Vietnam, and the Philippines. This partnership is a momentous step forward in giving enterprises and brands the ability to elevate the digital experiences of their key stakeholders – customers, partners, and employees – through a best-in-class conversational artificial intelligence (AI) and natural language processing (NLP) system. For users, this effectively translates into digital interactions that are truly intuitive and meaningful.
Recognized as a Leader in Gartner's Enterprise Conversational AI Platforms Magic Quadrant 2022, Kore.ai offers an enterprise-grade, end-to-end, no-code conversational AI platform and AI-first solutions that serve as a secure foundation for enterprises to design, build, test, host, deploy and manage virtual assistants, process assistants and conversational digital applications for optimized customer, employee and agent experiences across voice and digital channels. The Kore.ai Experience Optimization Platform (XO) supports on-prem and cloud deployments for more than 35 channels in 100 languages. Kore.ai also brings with it an experienced and dedicated team to jointly accelerate product development and go-to-market (GTM) with Tanla in India and other focus geographies.
Tanla is an industry leader in CPaaS, serving marquee clients across all major industries and transforming the way the world collaborates and communicates through innovative CPaaS solutions. It touches over a billion lives through its purpose, "EC*2": "shaping the world of trusted digital experiences to empower consumers and enable companies." Tanla's Wisely platform embodies a step towards this purpose. Co-developed with Microsoft, Wisely is the world's first blockchain-powered, cloud-based platform that connects enterprises and suppliers through a secure 'express route', ensuring complete transparency and a single source of truth that results in immutable audit trails and zero-dispute settlements. Gartner recognized Tanla in its 2021 CPaaS Competitive Landscape based on a combination of prominence and the unique features of Wisely.
The conversational AI market is growing at a rate of approximately 21% CAGR (source: Market Digits) while the Indian market itself is expected to grow at a rate of 25% CAGR (source: Gartner, Expert Interview). This outlook presents a huge growth opportunity in the conversational space.
Tanla's rich communication portfolio and a strong customer base coupled with Kore.ai's conversational AI capabilities and large development organization will position the duo as a leader, enabling enterprises to deliver advanced, highly intelligent, and personalized experiences to their customers across their brand's digital touchpoints.
"Enterprises are looking for technologies that help them create extraordinary customer and employee experiences, which positively impact business outcomes. Through this partnership, Tanla and Kore.ai will jointly offer a first-of-its-kind customer engagement platform offering conversation-first experiences that can automate and optimize voice and chat interactions across multiple channels, languages, and regions, while retaining the human touch all through. We are thrilled to work with a partner like Tanla that will help us advance our vision of creating extraordinary customer, employee, and agent experiences."
Raj Koneru, Founder and CEO of Kore.ai
Today's enterprises require a conversational AI provider, a channel delivery partner, a marketing partner to promote the intelligent virtual assistant, and a separate implementation partner to set up effective conversational bot support. With Tanla and Kore.ai coming together, however, enterprises will be able to leverage their end-to-end capabilities and consultative support without having to go through multiple stops and partners. Notable advantages include access to a no-code virtual assistant development platform, an implementation team, omnichannel communication, campaign management for marketing, advanced analytics, and much more, supporting enterprises with unparalleled end-to-end ownership and quality assurance.
"The Tanla – Kore.ai business partnership is a key milestone in our pursuit to provide best-in-class next-gen solutions to our enterprise customers. This partnership will provide cutting-edge AI solutions on the Wisely platform to help clients realize the value of truly omnichannel digital customer experiences," said Uday Reddy, Founder Chairman and CEO, Tanla Platforms Limited.
Tanla and Kore.ai are confident that this collaboration will usher in a new era of automated and seamless digital communication that will elevate customer experience to new heights while helping enterprises build a better relationship with their customers.
Kore.ai is a global leader in conversational AI-first platforms and solutions, helping enterprises automate business interactions to deliver extraordinary experiences for their customers, employees, and contact center agents. More than 200 Fortune 2000 companies trust Kore.ai's experience optimization (XO) platform and technology to automate their business interactions for over 100 million users worldwide and achieve extraordinary outcomes. Kore.ai has been recognized as a leader and an innovator by top analysts and ensures the success of its customers through a growing team headquartered in Orlando with offices in India, the UK, Japan, South Korea, and Europe.
Tanla Platforms Limited transforms the way the world collaborates and communicates through innovative CPaaS solutions. Founded in 1999, it was the first company to develop and deploy an A2P SMSC in India. Today, as one of the world's largest CPaaS players, it processes more than 800 billion interactions annually, and about 63% of India's A2P SMS traffic is processed through Trubloq, making it the world's largest blockchain use case. Wisely, our patented enterprise-grade platform, offers private, secure, and trusted experiences for enterprises and mobile carriers. Tanla Platforms Limited is headquartered in Hyderabad. Tanla is listed on two national exchanges, the NSE and BSE (NSE: TANLA; BSE: 532790), and included in prestigious indices such as the Nifty 500, BSE 500, Nifty Digital Index, Nifty Alpha, FTSE Russell, and MSCI.
Intel | May 21, 2022
Intel has released an open source tool to migrate code to SYCL1 through a project called SYCLomatic, which helps developers more easily port CUDA code to SYCL and C++ to accelerate cross-architecture programming for heterogeneous architectures. This open source project enables community collaboration to advance adoption of the SYCL standard, a key step in freeing developers from a single-vendor proprietary ecosystem.
“Migrating to C++ with SYCL gives code stronger ISO C++ alignment, multivendor support to relieve vendor lock-in and support for multiarchitecture to provide flexibility in harnessing the full power of new hardware innovations. SYCLomatic offers a valuable tool to automate much of the work, allowing developers to focus more on custom tuning than porting,” said James Reinders, Intel oneAPI evangelist.
Why It Matters
While hardware innovation has led to a diverse heterogeneous architectural landscape for computing, software development has become increasingly complex, making it difficult to take full advantage of CPUs and accelerators. Today’s developers and their teams are strapped for time, money and resources to accommodate the rewriting and testing of code to boost application performance for these different architectures. Developers are looking for open alternatives that improve time-to-value, and Intel is providing an easier, shorter pathway to enabling hardware choice.
What is SYCL and Project SYCLomatic
SYCL, a C++-based Khronos Group standard, extends C++ capabilities to support multiarchitecture and disjoint memory configurations. To initiate this project, Intel open-sourced the technology behind its DPC++ Compatibility Tool to further advance the migration capabilities for producing more SYCL-based applications. Reusing code across architectures simplifies development, reducing time and costs for ongoing code maintenance.
Utilizing the Apache 2.0 license with LLVM exception, the SYCLomatic project hosted on GitHub offers a community for developers to contribute and provide feedback to further open heterogeneous development across CPUs, GPUs and FPGAs.
How the SYCLomatic Tool Works
SYCLomatic assists developers in porting CUDA code to SYCL, typically migrating 90-95% of CUDA code automatically to SYCL code2. To finish the process, developers complete the rest of the coding manually and then custom-tune to the desired level of performance for the architecture.
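As an illustrative sketch (not actual SYCLomatic output), the migration described above roughly maps a CUDA kernel-launch pair onto a SYCL `parallel_for`. The function name `vadd` and the pointer arguments below are hypothetical; the SYCL version assumes the pointers come from SYCL unified shared memory (e.g., `sycl::malloc_shared`) and requires a SYCL compiler such as Intel's DPC++ (`icpx -fsycl`) to build.

```cpp
// --- Original CUDA (before migration) ---
// __global__ void vadd(const float* a, const float* b, float* c, int n) {
//     int i = blockIdx.x * blockDim.x + threadIdx.x;
//     if (i < n) c[i] = a[i] + b[i];
// }
// vadd<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);

// --- Hand-written SYCL 2020 equivalent (after migration) ---
#include <sycl/sycl.hpp>

void vadd(sycl::queue& q, const float* a, const float* b, float* c, int n) {
    // parallel_for replaces the <<<grid, block>>> launch; for a simple
    // 1-D range the runtime chooses the work-group geometry, so the
    // explicit bounds check from the CUDA kernel is no longer needed.
    q.parallel_for(sycl::range<1>(static_cast<size_t>(n)),
                   [=](sycl::id<1> i) {
        c[i] = a[i] + b[i];
    }).wait();  // block until the device work completes
}
```

The remaining 5-10% of manual work the tool leaves behind is typically of this flavor: choosing memory-management strategies (buffers vs. unified shared memory), reworking launch geometry, and tuning for the target architecture.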
How Code Migration Usage Works
Research organizations and Intel customers have successfully used the Intel DPC++ Compatibility Tool, which has the same technologies as SYCLomatic, to migrate CUDA code to SYCL (or Data Parallel C++, oneAPI’s implementation of SYCL) on multiple vendors’ architectures. Examples include the University of Stockholm with GROMACS 20223, Zuse Institute Berlin (ZIB) with easyWave, Samsung Medison and Bittware (view oneAPI DevSummit content for more examples). Multiple customers are also testing code on current and upcoming Intel Xe architecture-based GPUs, including Argonne National Laboratory Aurora supercomputer, Leibniz Supercomputing Centre (LRZ), GE Healthcare and others.
Where to Get SYCLomatic
SYCLomatic is a GitHub project. The repository includes a "contributing.md" guide describing the steps for making technical contributions to the project. Developers are encouraged to use the tool and provide feedback and contributions to advance the tool's evolution.
“CRK-HACC is an N-body cosmological simulation code actively under development. To prepare for Aurora, the Intel DPC++ Compatibility Tool allowed us to quickly migrate over 20 kernels to SYCL. Since the current version of the code migration tool does not support migration to functors, we wrote a simple clang tool to refactor the resulting SYCL source code to meet our needs. With the open source SYCLomatic project, we plan to integrate our previous work for a more robust solution and contribute to making functors part of the available migration options,” said Steve (Esteban) Rangel of HACC (Hardware/Hybrid Accelerated Cosmology Code), Cosmological Physics & Advanced Computing (anl.gov).
Resources for Developers
SYCLomatic project on GitHub | Contributing.md guide
Get started developing: Book: Mastering Programming of Heterogeneous Systems using C++ & SYCL | Essentials of SYCL training
CodeProject: Using oneAPI to convert CUDA code to SYCL
Intel DevCloud: A free environment to access Intel oneAPI Tools and develop and test code across a variety of Intel® architectures (CPU, GPU, FPGA).
1SYCL is a trademark of the Khronos Group Inc.
2Intel estimates as of September 2021. Based on measurements on a set of 70 HPC benchmarks and samples, with examples like Rodinia, SHOC, PENNANT. Results may vary.
3The GROMACS development team ported its CUDA code to Data Parallel C++ (DPC++), which is a SYCL implementation for oneAPI, in order to create new cross-architecture-ready code. See also Experiences adding SYCL support to GROMACS, and GROMACS 2022 Advances Open Source Drug Discovery with oneAPI.
Intel is an industry leader, creating world-changing technology that enables global progress and enriches lives. Inspired by Moore’s Law, we continuously work to advance the design and manufacturing of semiconductors to help address our customers’ greatest challenges. By embedding intelligence in the cloud, network, edge and every kind of computing device, we unleash the potential of data to transform business and society for the better.
Geminus.ai | June 14, 2022
Geminus.AI announces the launch of its first product: a predictive intelligence platform with the power to transform how companies design and operate industrial products and processes. The Geminus Platform fuses adaptive AI with physics using multi-fidelity modeling to enable the creation of predictive models that uniquely combine high accuracy and speed, fast updating, and quantified uncertainty.
The product allows users to leverage key information on hand, such as low- and high-fidelity simulations as well as real-world data, to efficiently create high-performing predictive models that can be used for a variety of use cases, including process design and control optimization.
"Lam’s interest in Geminus is driven by the potential to employ their hybrid modeling capability to better predict how our equipment would behave in high volume manufacturing and optimize designs accordingly.”
Faran Nouri, a director of Geminus and VP at Lam Research, an investor in Geminus
The investments an organization makes in modeling technology can be staggering and the risks associated with generating expensive, ineffective models are high. Compared to conventional physics-based alternatives, Geminus improves ROI with faster, more effective models, reducing the risks of bad decision making. It also provides another step toward enabling digital twins, with all the advantages they can offer.
“We believe that AI in its current form will struggle to deliver ROI in complex systems that cannot tolerate insufficient accuracy and dynamically change over time. For example, our models can power digital twins that predict the behavior of highly complex processes, and enable increased productivity, at a level that does not yet exist today,” said Greg Fallon, CEO of Geminus.ai.
Geminus exists to address the challenges of conventional AI, which include heavy data requirements, long training times, and difficulty updating. The Geminus platform uses novel physics-informed AI computing to translate constraints of the physical world into resilient digital models. Furthermore, it requires only sparse data, and models are easily updated with the infusion of new data points. Data scientists and modeling engineers can use the platform to predict the behavior of complex systems and make informed decisions.
Cycle.io | August 04, 2022
Cycle.io, an up-and-coming low-ops application deployment platform, is thrilled to announce support for NVIDIA's data center-class line of GPUs. This enhancement of the platform enables developers to build GPU-accelerated applications that require a high level of computing power for scientific and engineering purposes:
One Platform With Batteries Included: Cycle provides all the necessary features development teams need to deploy, scale, and monitor everything from basic websites to complex SaaS and PaaS applications.
Multi-Cloud Container Orchestration: Enables developers to use the tools and technologies they’re already familiar with across multiple cloud providers in parallel.
Ultra-Current Infrastructure With Control: Organizations are able to maintain control and ownership of their infrastructure while the Cycle platform ensures that all servers are always current, with the latest updates being deployed on a semi-weekly basis.
Turnkey Team Collaboration: Easily add and remove developers from your team, and infrastructure, all with a few clicks. Cycle makes it easy for developers to gain observability over the individual components that make up today’s modern applications.
The launch of Cycle’s GPU support coincides with Vultr’s release of its new GPU line of cloud servers, built on the NVIDIA A100 GPU. Vultr, a leading independent provider of cloud infrastructure, provides both virtualized and bare-metal cloud infrastructure across 20 regions globally. Through the Cycle.io and Vultr partnership, developers can easily deploy performance-sensitive, GPU-dependent applications at a price point that makes sense for all use cases.
“Because of the additional power of a massively parallel architecture, GPUs make it possible to handle multiple tasks simultaneously, giving developers more compute power. There is a growing need for GPU parallel processing among developers of AI and machine learning big-data-intensive solutions. With containers, this has been very difficult to architect. But with our new partnership alongside Vultr, Cycle is making this simple for developers.”
Jake Warner, CEO and Founder of Cycle.io
“We greatly value our partnership with Cycle. Now that Vultr is offering VMs and bare metal accelerated with NVIDIA A100 Tensor Core GPUs, customers can use Cycle and Vultr together to easily run containers for deep learning, high-performance computing, and data analytics workloads,” said J.J. Kardwell, CEO of Vultr’s parent company, Constant.
About Cycle
Cycle is the all-in-one low-ops platform built for deploying, scaling, and monitoring your websites, backends, and APIs. With automatic platform updates, standardized deployments, a powerful API, and bottleneck-crushing automation, the platform offers batteries included; no DevOps team is required. Founded in 2015, the company is headquartered in Reno, NV.