AI Tech, AI Applications, Software

Tray.io Unveils Merlin AI to Instantly Transform Large Language Model Outputs Into Complete Business Processes

Businesswire | May 15, 2023 | Read time : 06:00 min

Tray.io, the leader in low-code automation and integration, today unveiled Tray Merlin AI, a new natural language automation capability in the Tray platform that instantly transforms large language model (LLM) outputs into complete business processes, without the need for LLM training or exposing customer data to the LLM. This new advancement makes the Tray platform the first iPaaS solution that brings together the power of flexible, scalable automation; support for advanced business logic; and native generative AI capabilities that anyone can use. Leveraging OpenAI technology and Tray.io’s extensive workflow, authentication, connector and API technologies, business technologists, front-line employees and developers across the enterprise can now more quickly and efficiently build, iterate on and improve sophisticated workflows that transcend the software stack. By simply asking Merlin—in the same way a team member would ask a colleague—users can obtain answers at the point-of-decision and automate critical business tasks. Additionally, Merlin eliminates the need for IT and engineering involvement in what typically entailed weeks to months of integration effort to build automations or answer queries. This new advancement builds on Tray.io’s vision to lower the barriers that prevent enterprise-wide automation in the face of the unintended consequences of digital transformation.

All final workflows created by Tray Merlin AI are presented visually in a low-code format to allow the user to modify, improve or augment their functionality as necessary, and share them with others for reuse. Unlike many other applications that interface with LLMs, the operational capabilities of Merlin AI and the underlying Tray platform are self-contained, meaning Merlin only needs to fetch small pieces of information from the LLM on an as-needed basis during the integration building process. As a result, customer data is never exposed or sent to the LLM.
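The data-control claim above implies a plan-then-execute split: the LLM sees only a task description and returns workflow structure, while the platform runs that structure against customer data locally. The sketch below illustrates that general pattern under stated assumptions; every name in it (`plan_workflow`, `run_step`, the step schema) is hypothetical and is not a Tray.io API.

```python
# Hypothetical sketch of the plan-then-execute pattern described above:
# the LLM call returns only workflow structure, never customer records;
# each step then runs locally, so data stays inside this process.

def plan_workflow(task_description: str) -> list[dict]:
    """Stand-in for an LLM call that returns a structural plan.
    Hard-coded here for illustration; no customer data is sent."""
    return [
        {"step": "fetch", "connector": "crm", "object": "leads"},
        {"step": "filter", "field": "score", "min": 80},
        {"step": "notify", "connector": "slack", "channel": "#sales"},
    ]

def run_step(step: dict, data: list[dict]) -> list[dict]:
    """Execute one step locally against in-process data."""
    if step["step"] == "fetch":
        return data  # in practice: call the source connector's API
    if step["step"] == "filter":
        return [row for row in data if row.get(step["field"], 0) > step["min"]]
    if step["step"] == "notify":
        return data  # in practice: post one message per surviving row
    raise ValueError(f"unknown step: {step['step']}")

def execute(task: str, local_data: list[dict]) -> list[dict]:
    result = local_data
    for step in plan_workflow(task):
        result = run_step(step, result)
    return result

leads = [{"name": "Ada", "score": 91}, {"name": "Bob", "score": 40}]
print(execute("route qualified leads to sales", leads))
# only Ada's record survives the score > 80 filter
```

The design choice this illustrates is that only the plan crosses the trust boundary; the data it operates on never does.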

“The outcomes enterprises can achieve with LLMs, such as those of OpenAI, are only as good as a user’s ability to take action on the results of the query,” said Alistair Russell, co-founder and CTO at Tray.io. “Tray Merlin AI completely changes the game as it instantly automates business processes and answers queries through natural language instruction. Think of Merlin as giving the LLM ‘brain’ a Tray ‘body’—a body that can take action without passing customer data back to the LLM and requires no further training to execute complex business tasks.”

Tray.io Taps AI to Address the Unintended Consequences of Digital Transformation

As businesses continue to grapple with the unintended consequences of rapid digital transformation efforts—such as technical debt, complex tech stacks and business process inefficiencies—every department is under enormous pressure to deliver fast results that not only meet customers' demands for timely, high-quality services, but also help the company remain profitable in the most efficient way possible.

Leaders must overcome IT and developer bottlenecks by tapping into a hidden talent opportunity—employees who have technical skills, but are not using them in their primary job function. Equipping these employees with self-service AI automation is a critical enabler to driving digital initiatives at speed. According to Gartner®, “Almost half of all non-IT employees are now business technologists, and companies that successfully enable them are 2.6 times more likely to achieve digital business goals.”1

“Generative AI is currently one of the most disruptive forces in iPaaS,” said Alexander Wurm, Senior Analyst at Nucleus Research. “Vendors who embrace AI in their platforms provide an incredible step-change in ease-of-use that boosts the productivity of experienced users. When done well, it delivers immediate value to many users in the enterprise for which the benefits of the technology were previously out of reach. Companies who adopt iPaaS solutions with native generative AI capabilities will be at a substantial velocity, efficiency and agility advantage.”

Tray Merlin AI: Natural Language Automation to Accelerate and Improve the Execution of Complex Tasks

A core element of the Tray platform architecture, Merlin AI works seamlessly with Tray.io’s connector, workflow and API technologies—and other platform capabilities including data transformation, robust authentication mechanisms and support for advanced business logic—to deliver a flexible, low-code and AI-augmented automation builder.

With Merlin, anyone—regardless of their level of technical expertise—can build complete integrations with the assistance of, or solely using, NLP, radically simplifying the automation building process for all users and supercharging team productivity. For example, Merlin can act as an autonomous assistant capable of executing complex tasks, such as collecting or moving information across disparate systems, answering questions or building automated workflows to complete a business process.

“AI is revolutionizing automation, and Tray.io is at the forefront of this movement. The arrival of generative AI and the pace of innovation it enables will spell the end of the iPaaS architectures that were built for a different time,” said Rich Waldron, co-founder and CEO at Tray.io. “Tray Merlin AI brings together the power of flexible, scalable automation and AI to accelerate and improve automation outcomes for our customers, enabling them to solve business problems better, faster and more independently than ever before.”

Tray Merlin AI powers the following new capabilities to accelerate automation and integration projects across the enterprise, while reducing the burden on business technologists and developers:

  • Securely build out automation details and iterate or improve existing processes: Marketing Ops teams can refine their lead delivery processes by reducing the time it takes to deliver high-value leads to sales reps, enabling them to respond to prospects promptly. Using just natural language instructions, the person building marketing automations can improve their current process by asking Merlin to build a workflow to capture lead scoring data from Marketo, assign qualified leads to reps in Salesforce immediately any time a lead’s score exceeds a certain threshold, build the Outreach sequence and notify reps via Slack or Teams of the newly qualified lead. When finished, all the steps are displayed in the low-code visual builder, through which the user can review and make any modifications, if required.
  • Quickly build or augment more substantial workflows: Business technologists who are proficient at building on the Tray platform can type their request and parameters, and Merlin will automatically build a workflow with the relevant business logic. Sales and customer success teams who want to deliver a more frictionless customer experience by streamlining the pre- and post-sales processes can use natural language instructions to create a workflow that will automatically update their project management system every time a deal closes with the relevant client information from their CRM and notify the professional services team via email. With Merlin, teams can ensure there’s no delay in project execution, which results in higher levels of customer satisfaction.
  • Obtain answers at the point-of-decision with on-demand, AI-powered automation: For non-technical employees, such as department managers, C-level executives and business users, who are not and do not need to be familiar with the Tray platform, Merlin can be used to quickly answer urgent business queries. A CMO seeking to optimize social media investments can query Merlin to identify the top lead sources for the largest “Closed Won” accounts by revenue and cross-reference the results with LinkedIn followers. To accomplish this, Merlin integrates the company’s CRM, marketing automation platform and social media management platform to deliver the result instantly. Additionally, sharing these results with the marketing leadership team via Slack or Teams can be built into the workflow based on the CMO’s natural language instruction. Merlin automatically knows when and where to search for authentications, displays all the actions it plans to carry out and obtains confirmation from the user before executing these actions.
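The CMO query in the last bullet boils down to a join-and-aggregate over CRM data: group closed-won revenue by lead source and rank the sources. As a rough illustration of the kind of result such a query assembles (with inlined sample data and hypothetical field names, not Tray.io output):

```python
# Illustrative aggregation behind a "top lead sources for the largest
# Closed Won accounts" query. Real deals would come from a CRM
# connector; the records and field names here are made up.
from collections import defaultdict

deals = [  # CRM: closed-won opportunities
    {"account": "Acme", "revenue": 500_000, "lead_source": "LinkedIn"},
    {"account": "Globex", "revenue": 120_000, "lead_source": "Webinar"},
    {"account": "Initech", "revenue": 300_000, "lead_source": "LinkedIn"},
]

def top_lead_sources(deals: list[dict]) -> list[tuple[str, int]]:
    """Aggregate closed-won revenue by lead source, highest first."""
    totals: dict[str, int] = defaultdict(int)
    for deal in deals:
        totals[deal["lead_source"]] += deal["revenue"]
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

print(top_lead_sources(deals))
# [('LinkedIn', 800000), ('Webinar', 120000)]
```

Cross-referencing with a social platform, as in the CMO example, would simply add a second lookup keyed on the winning source.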

By tapping into the power of AI via a natural language interface, Tray.io further unlocks the full potential of automation and makes building automated workflows across many use cases even more accessible to all employees, significantly increasing employee efficiency and productivity.

“Builders in our organization already rely on Tray as they rapidly develop the low-code workflows needed to improve operational processes across our business,” said Brendon Ritz, Senior Director of Marketing Ops at ThoughtSpot, the leading AI-powered analytics company. “With the addition of Merlin AI to the Tray platform, our business technologists can interface with Tray using natural language instruction to build faster and build better. Even more impactful is the ability for us to put the power of automation in the hands of our front-line employees and managers so they can instantly get answers to business queries that would have otherwise required weeks or months of development to work out the required integrations.”

About Tray.io

Tray.io is a low-code automation platform that can easily turn unique business processes into repeatable and scalable workflows that evolve whenever business needs change. Unlike iPaaS solutions, which are expensive, complex, and code-intensive, Tray.io’s flexible self-service platform makes it simple to build integrations using any API and connect enterprise applications at scale without incremental costs. Process innovation is today’s competitive advantage, since companies can no longer differentiate on their tech stack alone. The promise of SaaS led to an avalanche of siloed point solutions that require businesses to force their processes into rigid, predetermined schema. The Tray Platform removes these limitations, empowering both non-technical and technical users to create sophisticated workflow automations that streamline data movement and actions across multiple applications. Freed from tedious and repetitive tasks, product leaders and IT are able to uplevel their skill set with automation to unlock their full potential and do things in a way that’s right for their business. Love your work. Automate the rest.™



Related News

Software, Future Tech, Application Development Platform

Lucid Software Launches APIs and Developer Platform

iTWire | August 21, 2023

Lucid Software, the leader in visual collaboration software, today announced the launch of its latest APIs and developer platform, which will enable Lucid users and partners to better streamline their workflows by creating, publishing, and distributing their apps and integrations within the Lucid Visual Collaboration Suite. Top partners utilizing the APIs and developer platform at launch include Salesforce, Slack, ServiceNow, and Notion. “Our APIs and developer platform empower users and partners to build and deliver powerful, customized solutions for their teams and customers, further increasing efficiency and reducing context switching across apps,” said Dan Lawyer, chief product officer at Lucid Software. "Today’s teams need solutions that cut through the complexity and work where they work. By utilizing our latest APIs, users and partners will be able to leverage Lucid to strengthen existing workflows with the power of visual collaboration, enabling them to more quickly achieve alignment and accelerate innovation and productivity." Users can customize Lucidchart and Lucidspark to their use cases and bring visualization into their workflows by using the developer platform to build, test and distribute apps and integrations to their teams. Users will also be able to tap into the vast resources of Lucid's extensive worldwide user base, which boasts over 60 million users, by easily sharing their apps and leveraging the creative output of other users. The newest iteration of APIs will allow developers to create custom shape libraries, bidirectional data integrations and auto-visualizations to accommodate a wide array of use cases. This update builds upon already available functionality, such as embedding Lucid docs in other applications and managing access to docs and folders. Users can also import their own assets into Lucid (like links, photos and videos) to clarify complexity. 
Examples of ways partners are already leveraging these features include:

  • Salesforce: Users can easily customize templates, charts and projects in the Salesforce shape library in Lucidchart to standardize documentation across the company.
  • Slack: Slack’s open and extensible platform makes it easy for users to quickly create, access and share Lucid visuals without ever leaving Slack.
  • ServiceNow: Users can automatically generate Lucidchart diagrams from their ServiceNow® Application Portfolio Management (APM) data to quickly visualize and understand areas for optimization across an application portfolio.
  • Notion: Lucid documents can be directly embedded, viewed and shared in Notion so that teams can keep their visuals and context all in one place.
  • Headroom: Users can hold AI-powered meetings directly in Lucid to transition from brainstorming to decision-making without leaving the platform.
  • Streamline: Users can access Streamline's extensive library of icon and illustration sets directly in Lucid to add beautiful and consistent imagery to boards and diagrams.
  • Rewatch: Rewatch videos can be shared directly into Lucid documents so users can capture, share and retain knowledge asynchronously.

The launch of Lucid’s new APIs and developer platform, along with powerful data-driven visuals and existing advanced customizations and integrations, further enables users and developers to quickly build advanced workflows for their teams and get more done faster. To learn more, visit

About Lucid Software

Lucid Software is the leader in visual collaboration, helping teams see and build the future from idea to reality. With its products—Lucidchart, Lucidspark, and Lucidscale—teams can align around a shared vision, clarify complexity, and collaborate visually, no matter where they're located. Top businesses use Lucid's products all around the world, including customers such as Google, GE and NBC Universal. Lucid's partners include industry leaders such as Google, Atlassian and Microsoft.
Since the company's founding, it has received numerous awards for its products, business and workplace culture. For more information, visit


General AI, AI Applications, Software

Prophecy Launches Generative AI Platform, Powering AI Applications on Enterprise Data

Businesswire | June 26, 2023

Prophecy, the low-code data transformation platform, today announced two new product offerings: Prophecy Generative AI Platform and Prophecy Data Copilot. The new Prophecy Generative AI Platform provides a simple way for organizations to power generative AI applications using privately-owned, enterprise data. This new offering leverages Prophecy's flagship product, the low-code data transformation platform. In addition, Prophecy is also announcing the launch of the Prophecy Data Copilot, an AI assistant that automatically creates data pipelines based on natural language prompts, and improves pipeline quality with greater test coverage. “We are seeing tremendous results from building applications that combine enterprise data with off-the-shelf LLM models. Our belief is that for the vast majority of enterprise use cases, the era of hiring machine learning engineers who train proprietary models, or specialized LLMs is over,” shared Prophecy Co-Founder and CEO, Raj Bains. “Now all you will need is a private knowledge warehouse that provides private context along with questions and the LLMs give very relevant answers. With Prophecy’s new platform release, a data or application engineer can build an application like a support-bot on private data in a week”. Generative AI technology is becoming mainstream, with nearly all businesses looking to adopt and implement this technology into their existing workflows. But today's generative AI applications are trained against public data, limiting their usefulness when it comes to answering questions that are specific to an organization. 
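The “private knowledge warehouse that provides private context along with questions” pattern Bains describes is commonly implemented as retrieval-augmented generation: fetch relevant private context, then send context plus question to an off-the-shelf LLM. A minimal sketch, assuming a naive keyword retriever and a stubbed LLM call; none of these names are Prophecy APIs:

```python
# Retrieval-augmented generation in miniature: pick the most relevant
# private document, then build the prompt that would go to an LLM.
# The retriever is a toy word-overlap scorer for illustration only.

DOCS = [  # stand-in for a private knowledge warehouse
    "Refunds are processed within 5 business days.",
    "Support tickets are answered within 24 hours.",
]

def retrieve(question: str, docs: list[str]) -> str:
    """Return the doc sharing the most words with the question."""
    q = set(question.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

def build_prompt(question: str) -> str:
    """Combine private context with the question for an LLM call."""
    context = retrieve(question, DOCS)
    return f"Context: {context}\nQuestion: {question}\nAnswer:"

print(build_prompt("How fast are support tickets answered?"))
```

A production system would swap the keyword scorer for embedding search and forward the prompt to a hosted model, but the division of labor is the same: the retrieval stays private, the LLM only generalizes.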
The Prophecy Generative AI Platform democratizes AI for technical and non-technical teams by:

  • Putting the power of generative AI against enterprise data in the hands of every user in every organization
  • Speeding up the development of generative AI applications against enterprise data
  • Supporting nearly unlimited use cases by leveraging the full breadth of enterprise data

“We see generative AI based copilots making low-code the default way to build applications and data pipelines in enterprises,” shared Prophecy Co-Founder, Maciej Szpakowski. “We already see a 30-40% productivity boost from low-code adoption in our customer base, that coupled with another 30-40% from copilots will make starting with code financially untenable for the majority of enterprise use cases. The future of enterprise development is definitely copilot powered low-code, and we’re excited to see our customers get this boost as they adopt Prophecy Data Copilot.”

Prophecy Data Copilot assists and automates numerous data-related tasks. Data Copilot:

  • Democratizes the creation and delivery of data products. Data products are underpinned by data transformations. With Copilot, users can describe transformations using natural language. Copilot then converts these natural language prompts into Prophecy visual components and transformation logic, along with corresponding code.
  • Improves efficiency and quality of pipeline creation. Copilot suggests data tests, enabling greater test coverage and increasing confidence in data. Copilot also suggests what transformations to use next, minimizing errors when joining, filtering, and aggregating data.
  • Prevents data misuse. Copilot automatically generates descriptions for data pipelines and datasets. This includes, for example, an explanation of how a column was computed.

“Generative AI and LLMs have become mainstream practically overnight and the hype is warranted,” shared SanjMo Principal, Sanjeev Mohan.
“However, today’s applications, such as ChatGPT, artificially constrain their usefulness to data that is available in the public domain. The next frontier is applying this powerful technology to privately-held, enterprise data, something I didn’t think would happen till years from now. These new solutions from Prophecy will enable teams to use their data to move the needle in significant ways.”

With the launch and platform updates, Prophecy further democratizes data transformation, boosting productivity and empowering non-technical users to access and leverage data in their everyday workflows. To learn more about Prophecy Data Copilot and Prophecy Generative AI Platform, please visit the Prophecy blog.

About Prophecy

Prophecy is a low-code data transformation platform that offers an easy-to-use visual interface to build, deploy and manage data pipelines with software-engineering best practices. Prophecy is trusted by enterprises, including multiple companies in the Fortune 50, where hundreds of engineers run thousands of ETL workloads every day. Prophecy is backed by some of the top VCs, including Insight Partners and SignalFire. Learn how Prophecy can help your data engineering in the cloud at


AI Tech, General AI, Software

Mattermost Introduces “OpenOps” to Speed Responsible Evaluation of Generative AI Applied to Workflows

Globenewswire | July 04, 2023

At the 2023 Collision Conference, Mattermost, Inc., the secure collaboration platform for technical teams, announced the launch of “OpenOps”, an open-source approach to accelerating the responsible evaluation of AI-enhanced workflows and usage policies while maintaining data control and avoiding vendor lock-in. OpenOps emerges at the intersection of the race to leverage AI for competitive advantage and the urgent need to run trustworthy operations, including the development of usage and oversight policies and ensuring regulatory and contractually-obligated data controls. It aims to clear key bottlenecks between these critical concerns by enabling developers and organizations to self-host a “sandbox” environment with full data control to responsibly evaluate the benefits and risks of different AI models and usage policies on real-world, multi-user chat collaboration workflows. The system can be used to evaluate self-hosted LLMs listed on Hugging Face, including Falcon LLM and GPT4All, when usage is optimized for data control, as well as hyperscaled, vendor-hosted models from the Azure AI platform, OpenAI ChatGPT and Anthropic Claude when usage is optimized for performance.

The first release of the OpenOps platform enables evaluation of a range of AI-augmented use cases, including:

  • Automated Question and Answer: During collaborative and individual work, users can ask questions of generative AI models, either self-hosted or vendor-hosted, to learn about different subject matters the model supports.
  • Discussion Summarization: AI-generated summaries can be created from self-hosted, chat-based discussions to accelerate information flows and decision-making while reducing the time and cost required for organizations to stay up-to-date.
  • Contextual Interrogation: Users can ask follow-up questions to thread summaries generated by AI bots to learn more about the underlying information without going into the raw data. For example, a discussion summary from an AI bot about a certain individual making a series of requests about troubleshooting issues could be interrogated via the AI bot for more context on why the individual made the requests and how they intended to use the information.
  • Sentiment Analysis: AI bots can analyze the sentiment of messages, which can be used to recommend and deliver emoji reactions on those messages on a user’s behalf. For example, after detecting a celebratory sentiment, an AI bot may add a “fire” emoji reaction indicating excitement.
  • Reinforcement Learning from Human Feedback (RLHF) Collection: To help evaluate and train AI models, the system can collect feedback from users on responses to different prompts and models by recording the “thumbs up/thumbs down” signals end users select. The data can be used in the future both to fine-tune existing models and to provide input for evaluating alternate models on past user prompts.

This open-source, self-hosted framework offers a “Customer-Controlled Operations and AI Architecture,” providing an operational hub for coordination and automation with AI bots connected to interchangeable, self-hosted generative AI and LLM backends from services like Hugging Face that can scale up to private cloud and data center architectures, as well as scale down to run on a developer’s laptop for research and exploration. At the same time, it can also connect to hyperscaled, vendor-hosted models from the Azure AI platform as well as OpenAI.
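The RLHF collection capability described above amounts to logging (prompt, model, response, signal) tuples so the data can later feed fine-tuning or side-by-side model comparison. A minimal sketch of that bookkeeping, using a hypothetical schema rather than Mattermost's actual one:

```python
# Record thumbs-up/thumbs-down feedback on model responses, then
# compute a per-model average signal for comparing models on past
# prompts. Field names are illustrative, not OpenOps internals.
import json

feedback_log: list[dict] = []

def record_feedback(prompt: str, model: str, response: str, thumbs_up: bool) -> None:
    """Append one feedback event to the in-memory log."""
    feedback_log.append({
        "prompt": prompt,
        "model": model,
        "response": response,
        "signal": 1 if thumbs_up else -1,
    })

record_feedback("Summarize this thread", "falcon-7b", "The team agreed to ship Friday.", True)
record_feedback("Summarize this thread", "gpt4all", "Unclear summary.", False)

# Per-model average signal, usable as a crude model-comparison metric.
scores: dict[str, list[int]] = {}
for row in feedback_log:
    scores.setdefault(row["model"], []).append(row["signal"])
averages = {m: sum(v) / len(v) for m, v in scores.items()}
print(json.dumps(averages))  # {"falcon-7b": 1.0, "gpt4all": -1.0}
```

A real deployment would persist these rows with timestamps and user IDs under the sandbox's audit controls; the aggregation step is the part that makes the signals usable for evaluating alternate models.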
“Every organization is in a race to define how AI accelerates their competitive advantage,” says Mattermost CEO Ian Tien. “We created OpenOps to help organizations responsibly unlock their potential with the ability to evaluate a broad range of usage policies and AI models in their ability to accelerate in-house workflows in concert.”

The OpenOps framework recommends a four-phase approach to developing AI augmentations:

  • 1 - Self-Hosted Sandbox: Technical teams set up a self-hosted “sandbox” environment as a safe space with data control and auditability to explore and demonstrate generative AI technologies. The OpenOps sandbox can include just web-based multi-user chat collaboration, or be extended to include desktop and mobile applications, integrations from different in-house tools to simulate a production environment, as well as integration with other collaboration environments, such as specific Microsoft Teams channels.
  • 2 - Data Control Framework: Technical teams conduct an initial evaluation of different AI models on in-house use cases and set a starting point for usage policies covering data control issues with different models, based on whether models are self-hosted or vendor-hosted, and, for vendor-hosted models, based on different data handling assurances. For example, data control policies could range from completely blocking vendor-hosted AIs, to blocking the suspected use of sensitive data such as credit card numbers or private keys, to custom policies that can be encoded into the environment.
  • 3 - Trust, Safety and Compliance Framework: Trust, safety and compliance teams are invited into the sandbox environment to observe and interact with initial AI-enhanced use cases and work with technical teams to develop usage and oversight policies in addition to data control. For example, they might set guidelines on whether AI can be used to help managers write performance evaluations for their teams, or whether techniques for developing malicious software can be researched using AI.
  • 4 - Pilot and Production: Once a baseline for usage policies and initial AI enhancements is available, a group of pilot users can be added to the sandbox environment to assess the benefits of the augmentations. Technical teams can iterate on adding workflow augmentations using different AI models, while trust, safety and compliance teams monitor usage with full auditability and iterate on usage policies and their implementations. As the pilot system matures, the full set of enhancements can be deployed to production environments running a productionized version of the OpenOps framework.

The OpenOps framework includes the following capabilities:

  • Self-Hosted Operational Hub: OpenOps allows for self-hosted operational workflows on a real-time messaging platform across web, mobile and desktop from the Mattermost open-source project. Integrations with in-house systems and popular developer tools help enrich AI backends with critical, contextual data. Workflow automation accelerates response times while reducing error rates and risk.
  • AI Bots with Interchangeable AI Backends: OpenOps enables AI bots to be integrated into operations while connected to an interchangeable array of AI platforms. For maximum data control, teams can work with self-hosted, open-source LLM models, including GPT4All and Falcon LLM, from services like Hugging Face. For maximum performance, they can tap into third-party AI frameworks including OpenAI ChatGPT, the Azure AI Platform and Anthropic Claude.
  • Full Data Control: OpenOps enables organizations to self-host, control, and monitor all data, IP, and network traffic using their existing security and compliance infrastructure. This allows organizations to develop a rich corpus of real-world training data for future AI backend evaluation and fine-tuning.
  • Free and Open Source: Available under the MIT and Apache 2 licenses, OpenOps is a free, open-source system, enabling enterprises to easily deploy and run the complete architecture.
  • Scalability: OpenOps offers the flexibility to deploy on private clouds, data centers, or even a standard laptop. The system also removes the need for specialized hardware such as GPUs, broadening the number of developers who can explore self-hosted AI models.

The OpenOps framework is currently experimental and can be downloaded from

About Mattermost

Mattermost provides a secure, extensible hub for technical and operational teams that need to meet nation-state-level security and trust requirements. We serve technology, public sector, and national defense industries with customers ranging from tech giants to the U.S. Department of Defense to governmental agencies around the world. Our self-hosted and cloud offerings provide a robust platform for technical communication across web, desktop and mobile, supporting operational workflow, incident collaboration, integration with Dev/Sec/Ops and in-house toolchains, and connecting with a broad range of unified communications platforms. We run on an open-source platform vetted and deployed by the world’s most secure and mission-critical organizations, which is co-built with over 4,000 open source project contributors who’ve provided over 30,000 code improvements toward our shared product vision, which is translated into 20 languages.
