How Artificial Intelligence Helps in Health Care

Artificial intelligence has disrupted many spheres of our lives and a number of industries. It has become so commonplace that we sometimes aren't even aware we're using the benefits of this advanced technology. For example, every time Amazon recommends a product you might be interested in, or you ask Alexa to order some food online, it's AI at work. But apart from these very practical use cases that make our lives easier, AI has also revolutionized health care, and together with its subfields machine learning (ML), natural language processing (NLP), and others, it will continue to evolve and establish itself as a critical component of this industry.

Spotlight

NCC Group

NCC Group is a global expert in cyber security and risk mitigation, working with businesses to protect their brand, value and reputation against the ever-evolving threat landscape. With our knowledge, experience and global footprint, we are best placed to help businesses identify, assess, mitigate & respond to the risks they face.

OTHER ARTICLES
AI Tech

Are Telcos Ready for a Quantum Leap?

Article | September 8, 2023

Quantum technologies present telcos with both an opportunity to solve difficult problems and provide new services, and a security threat that could require extensive IT investment.

When Andrew Lord, Senior Manager, Optical Networks and Quantum Research at BT, first started presenting quantum technologies at customer events six or seven years ago, his was the graveyard shift, he says, entertaining attendees at the end of the day with talk of 'crazy quantum stuff.' "But that is no longer the case," says Lord. "Over the last two years, I've noticed a shift where I now speak before lunch, and customers actively seek us out."

Two developments may be causing the shift: customers' growing awareness of the threats and opportunities that quantum computing presents, and a recent spike in investment in quantum technology. In 2022, investors plowed $2.35 billion into quantum technology startups, which include companies in quantum computing, communications and sensing, according to McKinsey. The public sector has also been digging deep into its pockets. Last year, the United States added $1.8 billion to its previous spending on quantum technology and the EU committed an extra $1.2 billion, the consultancy noted, while China made total investments of $15.3 billion.

According to Luke Ibbetson, Head of Group R&D at Vodafone, quantum computing's promise lies in solving, within a few hours, probabilistic equations that would take a classical computer a million years. This capability would enable telcos to address optimization problems such as network planning and base station placement. The flip side is that a powerful quantum computer could also break the public-key cryptography that protects today's IT systems from hackers.
As a spokesperson at Deutsche Telekom remarks: "Telcos will have to react to the threat of quantum computers to communication security because their core business model is at risk, which is offering secure digital communications."

The idea of quantum computing posing a security threat is not new. In 1994, Peter Shor, a mathematician working at AT&T Bell Labs, showed how a quantum computer could efficiently solve the factoring and discrete-logarithm problems that underpin the encryption of data. "His work simultaneously ignited multiple new lines of research in quantum computing, information science, and cryptography," according to an article by the Massachusetts Institute of Technology, where Shor currently works.

Beyond The Lab

What has changed nearly thirty years on is that quantum computing is creeping out of the lab. Sizeable obstacles to large-scale quantum computing, however, remain. Quantum computers are highly sensitive to interference from noise, temperature, movement or electromagnetic fields, and are therefore very difficult and expensive to build and operate, especially at scale: IBM's latest quantum processor, for example, operates at approximately 0.02 kelvin.

When Deutsche Telekom's T-Labs tested telco use cases, it found quantum computing coped well with small problem statements. "However, when the problem size was scaled to real-world problem sizes, the quality of the QComp solution degraded," according to the spokesperson. The company is now awaiting the next generation of quantum computing platforms to redo the analyses.

All of this means that, for now, quantum computers are not large and powerful enough to run Shor's algorithm against real-world key sizes. The question is, when will someone succeed? The Global Risk Institute tracks the quantum threat timeline. In its latest annual report, the organization asked 40 quantum experts whether they thought it likely that, within the next ten years, a quantum computer would break an encryption scheme like RSA-2048 in under 24 hours.
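Shor's result reduces breaking RSA to finding the multiplicative order of a number modulo the public key; only the order-finding step needs a quantum computer, and the rest is simple classical arithmetic. A minimal Python sketch of that classical reduction, with brute-force order finding standing in for the quantum step (a toy modulus only; at real key sizes the brute force is infeasible, which is exactly the gap Shor's algorithm closes):

```python
from math import gcd

def find_order(a, n):
    """Brute-force the order r of a modulo n, i.e. the smallest r > 0
    with a^r = 1 (mod n). This is the step a quantum computer running
    Shor's algorithm performs exponentially faster."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical_reduction(a, n):
    """Given the order r of a mod n, recover a non-trivial factor of n.
    Works when r is even and a^(r/2) != -1 (mod n); otherwise pick
    another base a and retry."""
    r = find_order(a, n)
    if r % 2 == 1:
        return None  # unlucky choice of base
    y = pow(a, r // 2, n)
    f = gcd(y - 1, n)
    return f if 1 < f < n else None

# Factor the toy modulus 15 with base a = 7 (order 4).
print(shor_classical_reduction(7, 15))  # prints 3, a factor of 15
```

A cryptographically relevant modulus has hundreds of digits, far beyond any classical order-finding search; that asymmetry is why RSA is safe today and at risk tomorrow.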
Over half the respondents judged the event to be more than 5% likely, and almost a quarter considered it to be more than 50% likely.

Any breakthrough will come from a relatively small number of actors. Today, governments and academic institutions are home to around half of the 163 projects tracked worldwide by Global Quantum Intelligence, a research and analysis company, according to its CEO, André M. König, with big technology companies and specialized startups accounting for the rest.

Q2K

Nonetheless, the impact of quantum computing could be widespread, even if relatively few quantum computers are built. The challenge of preparing for a post-quantum future is often called Q2K in reference to the Y2K bug. In the late 1990s, many (but not all) governmental organizations and companies spent millions of dollars on Y2K systems integration to ensure that IT programs written from the 1960s through the 1980s would be able to recognize dates after December 31, 1999, all while being uncertain of the scale or the impact of the risk if they didn't. 'Q2K' differs in that there is no specific deadline, and the dangers of a major security breach are much clearer cut. However, it is similar in demanding a lot of work on aging systems.

"Cryptography is used everywhere," points out Lory Thorpe, IBM's Director of Global Solutions and Offerings, Telecommunications. She adds, "Because telco systems have been built over periods of decades, people don't actually know where cryptography is being used. So, if you start to look at the impact of public key cryptography and digital signatures being compromised, you start to look at how those two things impact open source, how that impacts the core network, the radio network, [and] OSS/BSS, network management, how the network management speaks to the network functions and so on." This complexity is why some analysts recommend that telcos take action now.
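A first step often recommended for organizations that, as Thorpe notes, do not know where cryptography is being used, is a cryptographic inventory: scanning decades of code and configuration for references to quantum-vulnerable algorithms. A toy Python sketch of the idea (the flagged algorithm names and file extensions are illustrative assumptions; a real audit would also inspect binaries, protocols and certificates):

```python
import re
import tempfile
from pathlib import Path

# Quantum-vulnerable public-key primitives to flag (illustrative, not exhaustive).
VULNERABLE = re.compile(r"\b(RSA|ECDSA|ECDH|DSA|DH)\b", re.IGNORECASE)

def crypto_inventory(root):
    """Walk a source tree and report (file, line number, line) for every
    line that mentions a quantum-vulnerable public-key algorithm."""
    findings = []
    for path in Path(root).rglob("*"):
        if path.suffix not in {".py", ".java", ".c", ".go", ".cfg", ".conf"}:
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if VULNERABLE.search(line):
                findings.append((str(path), lineno, line.strip()))
    return findings

if __name__ == "__main__":
    # Demo on a throwaway directory containing one flagged file.
    root = Path(tempfile.mkdtemp())
    (root / "legacy.c").write_text('const char *alg = "RSA";  /* migrate to PQC */')
    for f, n, text in crypto_inventory(root):
        print(f"{f}:{n}: {text}")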
"You're going to find tens of thousands of vulnerabilities that are critical and vulnerable to a quantum attack. So, do you have to worry about it today? Absolutely - even if it's in 2035," says König. "Anyone who has ever done [IT implementation projects], and anyone who's ever worked in cybersecurity [knows], tens of thousands of vulnerabilities that are critical [requires] years and years and years of just traditional integration work. So, even if you're skeptical about quantum, if you haven't started today, it is almost too late already."

Don't Panic!

For the past two to three years, Vodafone has been preparing to migrate some of its cryptographic systems to be quantum-safe, according to Ibbetson. He believes there is no need to panic, but telcos must start planning now. König said, "The telecoms industry as a whole is not moving as quickly as some other sectors, notably the banking, pharmaceutical, and automotive industries. In these sectors, post-quantum security planning often involves CEOs at a very strategic level."

For this reason, Vodafone joined forces with IBM in September 2022 to establish the GSMA Post-Quantum Telco Network Taskforce. "Even though many industries are preparing to be able to defend against future quantum threats, we didn't see anything happening particularly in the telco space, and we wanted to make sure that it was a focus," says Ibbetson. "Obviously it will turn into an IT-style transformation, but it's starting now with understanding what it is we need to mobilize that."

AT&T has also been working to pinpoint what needs to be addressed. Last year, the company said it aims to be quantum-ready by 2025, in the sense that it will have done its due diligence and identified a clear path forward.

Minding Your PQCs

Companies across multiple sectors are looking to post-quantum cryptography (PQC) to secure their systems, using new algorithms that are much harder to crack than RSA.
König contends that PQC needs to become "a standard component of companies' agile defense posture" and believes the development of PQC systems by software and hardware companies will help keep upgrade costs under control. "From a financial point of view, vendors do a fantastic job bringing this to market and making it very accessible," says König.

Lord, who has been researching quantum technologies at BT for over a decade, is also confident that there is "going to be much more available technology." As a result, even smaller telcos will be able to invest in securing their systems. "It doesn't need a big boy with lots of money [for] research to do something around PQC. There's a lot of work going on to ratify the best of those solutions," says Lord.

There are several reasons why eyes are on software-based PQC. Firstly, it can be used to re-secure data that was encrypted in the past and that quantum computing advances will make vulnerable in the future. In addition, the quantum-based alternative to PQC for securing network traffic, quantum key distribution (QKD), comes with a huge drawback for wireless operators. QKD is hardware-based and uses quantum mechanics to prevent interception across optical fiber and satellite (i.e., free-space optical) networks, making it secure, albeit expensive. But for reasons of physics, it does not work on mobile networks.

Setting Standards

Given the importance of PQC, a lot of effort is going into standardizing robust algorithms. The political weight of the US and the size of its technology industry mean that the US government's National Institute of Standards and Technology (NIST) is playing a key role in the technical evaluation of post-quantum algorithms and the creation of standards. NIST expects to publish the first set of post-quantum cryptography standards in 2024.
In the meantime, Dustin Moody, a NIST mathematician, recommends (in answers emailed to Inform) that companies "become familiar and do some testing with the algorithms being standardized, and how they will fit in your products and applications. Ensure that you are using current best-practice cryptographic algorithms and security strengths in your existing applications. Have somebody designated to be leading the effort to transition."

QKD

There is no absolute guarantee, however, that a future quantum computer won't find a way to crack PQC. Therefore, institutions such as government agencies and banks remain interested in using QKD fiber and satellite networks to ensure the highest levels of security for data transmission. The European Commission, for example, is working with the 27 EU Member States and the European Space Agency (ESA) to design, develop and deploy a QKD-based European Quantum Communication Infrastructure (EuroQCI). It will be made up of fiber networks linking strategic sites at national and cross-border levels and a space segment based on satellites. "EuroQCI will reinforce the protection of Europe's governmental institutions, their data centers, hospitals, energy grids, and more," according to the EU.

Telecom operators are involved in some of the national programs, including Orange, which is coordinating France's part of the program, FranceQCI (Quantum Communication Infrastructure). Separately, this month, Toshiba and Orange announced they had successfully demonstrated the viability of deploying QKD on existing commercial networks.

Outside the EU, BT has already built and is now operating a commercial metro quantum-encryption network in London. "The London network has three quantum nodes, which are the bearers carrying the quantum traffic for all of the access ingress," explains Lord. For example, a customer in London's Canary Wharf could link via the network to the nearest quantum-enabled BT exchange.
From there, it joins a metro network, which carries the keys from multiple customers "in an aggregated cost-effective way to the egress points," according to Lord. "It is not trivial because you can mess things up and [get] the wrong keys," he explains. "You really have to be more careful about authentication and key management. And then it's all about how you engineer your quantum resources to handle bigger aggregation."

It also gives BT the opportunity to explore how to integrate quantum systems downstream into its whole network. "What I'm telling the quantum world is that they need to get into the real world, because a system that uses quantum is still going to be 90% non-quantum, and all of the usual networking rules and engineering practices apply. You still need to know how to handle fiber. You still need to know how to provision a piece of equipment and integrate it into a network."

SK Telecom is also heavily involved in quantum-related research, with developments including QKD systems for the control and interworking of quantum cryptography communication networks. Japan is another important center of QKD research. A QKD network has existed in Tokyo since 2010, and in 2020, financial services company Nomura Securities Co., Ltd. tested the transmission of data across the Tokyo QKD network.

As the EU's project makes clear, satellite is an important part of the mix. Lord expects satellite-based QKD networks to come on stream in 2025 and 2026, enabling the purchase of wholesale quantum keys from a dedicated satellite quantum provider. Back in 2017, China used a satellite to make the first very long-distance transmission of data secured by QKD, between Beijing and Vienna, a distance of 7,000 km.

Securing The Edge

There are additional efforts to secure communications with edge devices.
BT's Lord, for example, sees a role for digital fingerprints for IoT devices, phones, cars and smart meters in the form of a physical unclonable function (PUF) silicon chip, which, because of random imperfections in its manufacture, cannot be copied. In the UK, BT is trialing a combination of QKD and PUF to secure the end-to-end journey of a driverless car. The connection to the roadside depends on standard radio with PUF authentication, while transmission from the roadside unit onward, as well as the overall control of the autonomous vehicle network, incorporates QKD, explains Lord.

SK Telecom has developed what it describes as a quantum-enhanced cryptographic chip with Korea Computer & Systems (KCS) and ID Quantique. Telefónica Spain has partnered on the development of a quantum-safe 5G SIM card and has integrated quantum technology into its cloud service hosted in its virtual data centers. Given China's heavy investment in quantum technologies, it is no surprise to see its telecom operators involved in the field. China Telecom, for example, recently invested three billion yuan ($434m) in quantum technology deployment, according to Reuters.

Quantum in The Cloud

Some of America's biggest technology companies are investing in quantum computing. Today, it is even possible to access quantum computing facilities via the cloud, albeit on a small scale. IBM's cloud access to quantum computers is free at the most basic level, rising to $1.60 per second at the next level. And this is just the beginning. America's big tech companies are racing to build quantum computers at scale.

One measure of scale is the size of a quantum processor, which is measured in qubits. While a traditional computer stores information as a 0 or a 1, a qubit can represent both 0 and 1 simultaneously. This unique property enables a quantum computer to explore multiple potential solutions to a problem at once, and the greater the stability of its qubits, the more efficient it becomes.
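The 0-and-1-at-once property described above can be illustrated with a two-element state vector. A minimal NumPy sketch (a classical simulation, not a real quantum SDK): applying a Hadamard gate to a qubit in state |0⟩ produces an equal superposition, so a measurement yields 0 or 1 with probability 0.5 each.

```python
import numpy as np

# A qubit's state is a 2-element complex vector; |0> = (1, 0).
ket0 = np.array([1.0, 0.0])

# Hadamard gate: rotates |0> into an equal superposition of |0> and |1>.
H = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)

state = H @ ket0
probs = np.abs(state) ** 2  # Born rule: measurement probabilities

print(probs)  # -> [0.5 0.5]: the qubit measures as 0 or 1 with equal probability
```

An n-qubit state needs 2**n amplitudes, which is why simulating large quantum machines classically is intractable, and why processor size in qubits is the headline scale metric for the vendors discussed below.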
IBM has a long history in quantum research and development. In 1998, it unveiled what was then a ground-breaking 2-qubit computer. By 2022, it had produced a 433-qubit processor, and in 2023, it aims to produce a 1,121-qubit processor. Separately, this month, it announced the construction of its first quantum data center in Europe, which it expects to begin offering commercial services as of next year.

Google is also firmly in the race to build a large-scale quantum computer. In 2019, a paper in Nature featured Google's Sycamore processor and the speed with which it undertakes computational tasks. More recent work includes an experimental demonstration that it is possible to reduce errors by increasing the number of qubits.

Microsoft reckons that "a quantum machine capable of solving many of the hardest problems facing humanity will ultimately require at least 1 million stable qubits that can perform 1 quintillion operations while making at most a single error." To this end, it is working on what it calls a new type of qubit, a topological qubit. Amazon announced in 2021 an AWS Center for Quantum Computing on the Caltech campus to build a fault-tolerant quantum computer.

Software, Future Tech, Application Development Platform

Empowering Industry 4.0 with Artificial Intelligence

Article | August 7, 2023

The next step in industrial technology is about robotics, computers and equipment becoming connected to the Internet of Things (IoT) and enhanced by machine learning algorithms. Industry 4.0 has the potential to be a powerful driver of economic growth, predicted to add between $500 billion and $1.5 trillion in value to the global economy between 2018 and 2022, according to a report by Capgemini.

Software, Low-Code App Development, Application Development Platform

How Artificial Intelligence Is Transforming Businesses

Article | August 4, 2023

Whilst many people associate AI with sci-fi novels and films, its reputation as an antagonist in fictional dystopian worlds is becoming a thing of the past as the technology becomes more and more integrated into our everyday lives. AI technologies are increasingly present in our daily lives, not just with Alexa devices in the home but throughout businesses everywhere, disrupting a variety of industries, often with tremendous results. The technology has helped to streamline even the most mundane of tasks whilst having a breath-taking impact on a company's efficiency and productivity.


The advances of AI in healthcare

Article | February 11, 2020

With the Government investing £250 million into the project, the Lab will consider how to use AI for the benefit of patients – whether this be the deployment of existing AI methods, the development of new technologies or the testing of their safety. Amongst other things, the initiative will aim to deliver earlier diagnoses of cancer. It is estimated that in excess of 50,000 extra patients could see their cancer detected at an early stage, thus boosting survival rates. More specifically, a study has shown that AI is quicker at identifying brain tumour tissue than a pathologist. This would have a positive knock-on effect in other areas, such as enabling money to be saved (that otherwise would have been spent on further treatment) and reducing the workload of staff (at a time when there is a crisis in NHS workforce numbers).



Related News

AI Tech

IBM Launches watsonx Code Assistant, Delivers Generative AI-powered Code Generation Capabilities Built for Enterprise Application Modernization

PR Newswire | October 26, 2023

Today IBM (NYSE: IBM) launched watsonx Code Assistant, a generative AI-powered assistant that helps enterprise developers and IT operators code more quickly and more accurately using natural language prompts. The product currently delivers on two specific enterprise use cases: first, IT automation with watsonx Code Assistant for Red Hat Ansible Lightspeed, for tasks such as network configuration and code deployment; and second, mainframe application modernization with watsonx Code Assistant for Z, for translation of COBOL to Java on IBM Z.

Designed to accelerate development while maintaining the principles of trust, security, and compliance, the product leverages generative AI based on IBM's Granite foundation models for code, running on IBM's watsonx platform. Granite uses the decoder architecture, which underpins large language models' ability to predict what comes next in a sequence, supporting natural language processing tasks. IBM is exploring opportunities to tune watsonx Code Assistant with additional domain-specific generative AI capabilities to assist in code generation, code explanation, and the full end-to-end software development lifecycle to continue to drive enterprise application modernization.

According to a recent IDC report, "Because it relies on a model trained on curated data, watsonx Code Assistant can help enterprises improve code quality by propagating best practices through code recommendations, instead of polluting enterprise code bases with code generated by models trained on unvetted repositories."

"With this launch, watsonx Code Assistant joins watsonx Orchestrate and watsonx Assistant in IBM's growing line of watsonx assistants that provide enterprises with tangible ways to implement generative AI," said Kareem Yusuf, Ph.D., Senior Vice President, Product Management and Growth, IBM Software.
Watsonx Code Assistant puts AI-assisted code development and application modernization tools directly into the hands of developers – in a naturally integrated way that is designed to be non-disruptive – to help address skills gaps and increase productivity. Additionally, IBM Consulting brings deep domain expertise across these use cases, working closely with clients across industries such as banking, insurance, healthcare and government to build strategies that allow them to take advantage of the potential of generative AI and code generation to accelerate modernization.

IT Automation - IBM watsonx Code Assistant for Red Hat Ansible Lightspeed

The Ansible Automation Platform helps enterprise developers and IT operators implement automation, using Ansible Playbooks, for IT tasks including infrastructure management, hybrid cloud deployment, network configuration, application deployment and more. With IBM watsonx Code Assistant for Red Hat Ansible Lightspeed, platform users can input plain English prompts to automatically generate task recommendations for Ansible Playbooks that adhere to best practices in task creation and maintenance. This way, a greater number of team members can create Ansible Playbooks more efficiently and implement automation engineered to be more resilient and easier to support, without in-depth training.

Technical preview key data (July 27 – October 23, 2023):
- Approximately 4,000 developers participated in the technical preview.
- 85% overall average acceptance rate of the AI-generated content recommendations, based on over 41,000 recommendations.
- Productivity improvements in the range of 20-45%.

"Red Hat has already shown what domain-specific AI can do for IT automation at the community level," said Ashesh Badani, Senior Vice President and Chief Product Officer, Red Hat.
"The release of watsonx Code Assistant for Red Hat Ansible Lightspeed has the potential to close skills gaps, create greater organizational efficiencies and free enterprise IT to deliver even more business value."

The Hybrid Cloud Platforms team within the IBM CIO Office uses Red Hat Ansible Automation Platform to support a wide range of tasks within its IT environment, whether patching, resolving vulnerabilities, or running regular health checks of its systems. Bob Epstein, Leader of IBM CIO Hybrid Cloud Platforms, expects that the number of developers able to produce Ansible Playbooks with the full release version could increase as much as 10x, as watsonx Code Assistant for Red Hat Ansible Lightspeed empowers other team members, such as Site Reliability Engineers, who can use natural language to generate Ansible-specific automation tasks.

"I like to look at our modernization journey in these stages: In the past we were crawling, doing a lot of things manually. Then, when we started automating, we were walking. Once we implemented Red Hat Ansible Automation Platform, we were running. And as we look ahead, with watsonx Code Assistant for Red Hat Ansible Lightspeed, I think we will be able to fly," said Robert Barron, Architect, Hybrid Cloud Platforms, IBM CIO Office.

Mainframe Application Modernization – IBM watsonx Code Assistant for Z

IBM watsonx Code Assistant for Z helps enable faster translation of COBOL to Java on IBM Z and enhances developer productivity on the platform. It is designed to assist businesses in leveraging generative AI and automated tooling to accelerate their mainframe application modernization – while allowing clients to take advantage of the performance, security and resiliency capabilities of IBM Z. Today, the product follows the application modernization lifecycle, starting with an application discovery capability, which maps out a technical understanding of the application and its dependencies.
Then, an automated refactoring capability leverages the information captured in application discovery to identify selected elements and decompose the monolithic application into modular COBOL business services. Finally, watsonx Code Assistant for Z leverages generative AI to transform individual COBOL business services into object-oriented Java code. The next step in the lifecycle is validation testing: anticipated in a future release, the product will support automated test case generation to validate the new COBOL or Java services.

TCS and IBM hold a long-term partnership that fosters a collaborative ecosystem to develop joint successes for their customers and stakeholders. Leveraging this partnership and its deep contextual knowledge, TCS has grown a purpose-led, dedicated, full-service practice for in-place application modernization. "There is a significant need for the developer productivity gains that generative AI can bring to transform applications on the mainframe," said Keshav Varma, ISU Head, Technology, Software and Services Business Unit, TCS. "While watsonx Code Assistant for Z has only just become available, we have several clients that have already requested that we create proofs of concept for them. With decades of enterprise experience from both our companies, we look forward to building on our deep partnership with IBM using watsonx."

IBM Consulting Brings Expertise to Help Clients with IT Automation and Modernization

Early IBM Consulting engagements for both watsonx Code Assistant for Red Hat Ansible Lightspeed and watsonx Code Assistant for Z aim to provide clients with the ability to deliver continuous automation, Ansible Playbook productivity, quality improvements, and transformation of IT operations – in addition to helping them identify the right application areas to modernize with Z.
For those looking for more personalized use cases with watsonx Code Assistant, IBM Consulting and IBM Client Engineering can work side-by-side with clients to identify specific pain points and solve critical business and technical challenges from the users' perspective. IBM Consulting brings deep industry expertise in application modernization, IT automation and generative AI via dedicated Red Hat and watsonx practices that work closely with IBM Research, IBM Technology and Red Hat. Statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.


Software

DataStax Launches New Integration with LangChain, Enables Developers to Easily Build Production-ready Generative AI Applications

Business Wire | October 25, 2023

DataStax, the company that powers generative AI applications with real-time, scalable data, today announced a new integration with LangChain, the most popular orchestration framework for developing applications with large language models (LLMs). The integration makes it easy to add Astra DB – the real-time database for developers building production Gen AI applications – or Apache Cassandra as a new vector source in the LangChain framework.

As many companies implement retrieval augmented generation (RAG) – the process of providing context from outside data sources to deliver more accurate LLM query responses – in their generative AI applications, they require a vector store that gives them real-time updates with minimal latency on critical, real-life production workloads. Generative AI applications built with RAG stacks require a vector-enabled database and an orchestration framework like LangChain to provide memory or context to LLMs for accurate and relevant answers. Developers use LangChain as the leading AI-first toolkit to connect their applications to different data sources.

The new integration lets developers leverage the power of the Astra DB vector database for their LLM, AI assistant, and real-time generative AI projects through the LangChain plugin architecture for vector stores. Together, Astra DB and LangChain help developers take advantage of framework features like vector similarity search, semantic caching, term-based search, LLM-response caching, and data injection from Astra DB (or Cassandra) into prompt templates.

"In a RAG application, the model receives supplementary data or context from various sources – most often a database that can store vectors," said Harrison Chase, CEO, LangChain. "Building a generative AI app requires a robust, powerful database, and we ensure our users have access to the best options on the market via our simple plugin architecture. With integrations like DataStax's LangChain connector, incorporating Astra DB or Apache Cassandra as a vector store becomes a seamless and intuitive process."

"Developers at startups and enterprises alike are using LangChain to build generative AI apps, so a deep native integration is a must-have," said Ed Anuff, CPO, DataStax. "The ability for developers to easily use Astra DB as their vector database of choice, directly from LangChain, streamlines the process of building the personalized AI applications that companies need. In fact, we're already seeing customers benefit from our joint technologies, as healthcare AI company SkyPoint is using Astra DB and LangChain to power its generative AI healthcare model."

To learn more, join the live webinar on October 26 at 9am PT, where LangChain founder and CEO Harrison Chase and SkyPoint founder and CEO Tisson Mathew discuss their experience building production RAG applications.

About DataStax

DataStax is the company that powers generative AI applications with real-time, scalable data, offering the production-ready vector data tools that generative AI applications need and seamless integration with developers' stacks of choice. The Astra DB vector database provides developers with elegant APIs, powerful real-time data pipelines, and complete ecosystem integrations to quickly build and deploy production-level AI applications. With DataStax, any enterprise can mobilize real-time data to quickly build smart, high-growth AI applications at unlimited scale, on any cloud. Hundreds of the world's leading enterprises, including Audi, Bud Financial, Capital One, SkyPoint Cloud, Verizon, VerSe Innovation, and many more rely on DataStax to deliver real-time AI. Learn more at DataStax.com.

Apache, Apache Cassandra, and Cassandra are either registered trademarks or trademarks of the Apache Software Foundation or its subsidiaries in Canada, the United States, and/or other countries.
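The RAG flow described above – retrieve the most similar stored documents, then inject them as context into the LLM prompt – can be sketched end to end in a few lines. The toy Python example below is illustrative only: the bag-of-words embedding and the ToyVectorStore class are stand-ins invented here for clarity; in the integration announced above, the store would be Astra DB or Cassandra accessed through LangChain's vector-store plugin, and the embedding would come from a learned model.

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: a bag-of-words vector. A real RAG stack would
    use a learned embedding model instead."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class ToyVectorStore:
    """Stand-in for a vector database such as Astra DB or Cassandra."""
    def __init__(self, docs):
        self.docs = [(d, embed(d)) for d in docs]

    def similarity_search(self, query, k=1):
        q = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[1]), reverse=True)
        return [d[0] for d in ranked[:k]]

def rag_prompt(store, question):
    """Retrieve context and inject it into the prompt sent to the LLM."""
    context = "\n".join(store.similarity_search(question))
    return f"Answer using this context:\n{context}\n\nQuestion: {question}"

store = ToyVectorStore([
    "Astra DB is a real-time vector database.",
    "QKD secures optical fiber links.",
])
print(rag_prompt(store, "What is Astra DB?"))
```

The retrieval step grounds the LLM's answer in the stored documents, which is what makes query responses "more accurate" in the sense the announcement describes.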

Read More

Software

Deloitte Launches Innovative 'DARTbot' Internal Chatbot

PR Newswire | October 18, 2023

Deloitte today announced its development and deployment of "DARTbot," an internal chatbot powered by cutting-edge Generative Artificial Intelligence. DARTbot is capable of generating intelligent responses and providing valuable insights to support nearly 18,000 of Deloitte's U.S. Audit & Assurance professionals in their daily tasks and decision-making processes. Deloitte is infusing Generative AI applications and capabilities across its organization to help its professionals become more efficient and productive. These applications and productivity tools are focused on proprietary functional and industry content that is applied with Deloitte's Trustworthy AI™ framework, managing AI risks such as hallucinations and improving user confidence and trust. As part of this ongoing commitment, Deloitte is rolling out purpose-specific Large Language Models (LLMs) and chatbots to support specialized teams across its business, including DARTbot to support Audit & Assurance professionals. "Integrating Generative AI into our technology solutions, combined with the experience, critical thinking and professional judgment of our professionals, will allow us to deliver deeper insights and a differentiated client experience with distinction and trust," said Dipti Gulati, U.S. CEO, Audit & Assurance, Deloitte & Touche LLP. Designed with user experience in mind, DARTbot's user-friendly interface allows Deloitte's Audit & Assurance professionals to interact with the chatbot seamlessly. The system is integrated with vast datasets of relevant industry knowledge and leading practices, ensuring the accuracy and reliability of its responses and facilitating further analysis by providing references to source materials. With the deployment of DARTbot, Deloitte aims to transform the way its professionals work, enhancing their productivity and enabling them to focus more on applying professional objectivity and skepticism, and on evaluating bias. 
The chatbot acts as a virtual assistant, providing real-time guidance, answering queries, and assisting professionals in navigating complex accounting questions. "We are excited to introduce DARTbot as an invaluable resource for our Audit & Assurance professionals. This internal chatbot represents Deloitte's ongoing commitment to leveraging cutting-edge technology to empower our teams and deliver exceptional client service. DARTbot will help our professionals quickly research complex accounting questions and elevate the overall audit and assurance experience," said Chris Griffin, managing partner, U.S. Audit & Assurance transformation and technology, Deloitte & Touche LLP. One of the key priorities throughout the development process was ensuring data security and confidentiality. Deloitte has implemented robust security measures, leveraging state-of-the-art encryption protocols and access controls. The chatbot operates within a dedicated, secure, self-contained environment that does not use any user input data to train the model. "DARTbot represents a significant milestone in Deloitte's ongoing commitment to harnessing the power of emerging technologies. Through continued internal innovation, we're able to assess and deploy new technologies to benefit our professionals and provide deeper insights to our clients," said Will Bible, partner, U.S. Audit & Assurance digital transformation and innovation leader, Deloitte & Touche LLP. Deloitte is also increasing AI fluency, training more than 120,000 professionals as part of the next generation of AI talent via the AI Academy, a Deloitte Technology Academy program, as well as investing more than $2 billion in global technology learning and development initiatives to boost skills in the application of key technology areas, including AI, to key industry and functional issues.

Read More

AI Tech

IBM Launches watsonx Code Assistant, Delivers Generative AI-powered Code Generation Capabilities Built for Enterprise Application Modernization

PR Newswire | October 26, 2023

Today IBM (NYSE: IBM) launched watsonx Code Assistant, a generative AI-powered assistant that helps enterprise developers and IT operators code more quickly and more accurately using natural language prompts. The product currently delivers on two specific enterprise use cases: first, IT automation with watsonx Code Assistant for Red Hat Ansible Lightspeed, for tasks such as network configuration and code deployment; second, mainframe application modernization with watsonx Code Assistant for Z, for translation of COBOL to Java on IBM Z. Designed to accelerate development while maintaining the principles of trust, security, and compliance, the product leverages generative AI based on IBM's Granite foundation models for code running on IBM's watsonx platform. Granite uses the decoder architecture, which underpins large language models' ability to predict what comes next in a sequence to support natural language processing tasks. IBM is exploring opportunities to tune watsonx Code Assistant with additional domain-specific generative AI capabilities to assist in code generation, code explanation, and the full end-to-end software development lifecycle to continue to drive enterprise application modernization. According to a recent IDC report, "Because it relies on a model trained on curated data, watsonx Code Assistant can help enterprises improve code quality by propagating best practices through code recommendations, instead of polluting enterprise code bases with code generated by models trained on unvetted repositories." "With this launch, watsonx Code Assistant joins watsonx Orchestrate and watsonx Assistant in IBM's growing line of watsonx assistants that provide enterprises with tangible ways to implement generative AI," said Kareem Yusuf, Ph.D., Senior Vice President, Product Management and Growth, IBM Software. 
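The decoder architecture mentioned above generates text by repeatedly predicting the next item in a sequence. A toy Python sketch illustrates the decoding loop; the hard-coded bigram table and its probabilities are invented for illustration, standing in for the trained neural network a real model such as Granite would use:

```python
# Toy illustration of decoder-style generation: pick the most likely
# next token given the current one, then repeat. A real LLM replaces
# this lookup table with a trained network scoring the full context.
BIGRAM_MODEL = {
    "configure": {"the": 0.9, "a": 0.1},
    "the": {"network": 0.6, "firewall": 0.4},
    "network": {"interface": 0.7, "<end>": 0.3},
    "interface": {"<end>": 1.0},
}

def generate(prompt_token: str, max_tokens: int = 10) -> list[str]:
    """Greedy next-token decoding from a starting token."""
    tokens = [prompt_token]
    for _ in range(max_tokens):
        candidates = BIGRAM_MODEL.get(tokens[-1])
        if not candidates:
            break
        next_token = max(candidates, key=candidates.get)  # greedy choice
        if next_token == "<end>":
            break
        tokens.append(next_token)
    return tokens

print(generate("configure"))  # ['configure', 'the', 'network', 'interface']
```

Real assistants sample from the predicted distribution rather than always taking the single most likely token, but the predict-append-repeat loop is the same.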
Watsonx Code Assistant puts AI-assisted code development and application modernization tools directly into the hands of developers – in a naturally integrated way that is designed to be non-disruptive – to help address skills gaps and increase productivity. Additionally, IBM Consulting brings deep domain expertise across these use cases, working closely with clients across industries such as banking, insurance, healthcare and government to build strategies that allow them to take advantage of the potential of generative AI and code generation to accelerate modernization.

IT Automation – IBM watsonx Code Assistant for Red Hat Ansible Lightspeed

The Ansible Automation Platform helps enterprise developers and IT operators implement automation, using Ansible Playbooks, for IT tasks including infrastructure management, hybrid cloud deployment, network configuration, application deployment and more. With IBM watsonx Code Assistant for Red Hat Ansible Lightspeed, platform users can input plain English prompts to automatically generate task recommendations for Ansible Playbooks that adhere to best practices in task creation and maintenance. This way, a greater number of team members can create Ansible Playbooks more efficiently and implement automation engineered to be more resilient and easier to support, without in-depth training.

Technical preview key data:
- Approximately 4,000 developers participated in the technical preview.
- 85% overall average acceptance rate of the AI-generated content recommendations (July 27 – October 23, 2023, based on over 41,000 recommendations).
- Productivity improvements in the range of 20-45%.

"Red Hat has already shown what domain-specific AI can do for IT automation at the community level," said Ashesh Badani, Senior Vice President and Chief Product Officer, Red Hat. 
"The release of watsonx Code Assistant for Red Hat Ansible Lightspeed has the potential to close skills gaps, create greater organizational efficiencies and free enterprise IT to deliver even more business value." The Hybrid Cloud Platforms team within the IBM CIO Office uses Red Hat Ansible Automation Platform to support a wide range of tasks within its IT environment, whether that is patching, resolving vulnerabilities, or running regular health checks of its systems. Bob Epstein, Leader of IBM CIO Hybrid Cloud Platforms, expects that the number of developers able to produce Ansible Playbooks with the full release version could increase as much as 10x, as watsonx Code Assistant for Red Hat Ansible Lightspeed empowers other team members, such as Site Reliability Engineers, who can use natural language to generate Ansible-specific automation tasks. "I like to look at our modernization journey in these stages: In the past we were crawling, doing a lot of things manually. Then, when we started automating, we were walking. Once we implemented Red Hat Ansible Automation Platform, we were running. And as we look ahead, with watsonx Code Assistant for Red Hat Ansible Lightspeed, I think we will be able to fly," said Robert Barron, Architect, Hybrid Cloud Platforms, IBM CIO Office.

Mainframe Application Modernization – IBM watsonx Code Assistant for Z

IBM watsonx Code Assistant for Z enables faster translation of COBOL to Java on IBM Z and enhances developer productivity on the platform. It is designed to assist businesses in leveraging generative AI and automated tooling to accelerate their mainframe application modernization, while allowing clients to take advantage of the performance, security and resiliency capabilities of IBM Z. Today, the product follows the application modernization lifecycle, starting with an application discovery capability, which maps out a technical understanding of the application and its dependencies. 
Then, an automated refactoring capability leverages the information captured during application discovery to identify selected elements and decompose the monolithic application into modular COBOL business services. Finally, watsonx Code Assistant for Z leverages generative AI to transform individual COBOL business services into object-oriented Java code. The next step in the lifecycle is validation testing: anticipated in a future release, the product will support automated test case generation to validate the new COBOL or Java services. TCS and IBM have a long-standing partnership that fosters a collaborative ecosystem to develop joint successes for their customers and stakeholders. Leveraging this partnership and its deep contextual knowledge, TCS has grown a purpose-led, dedicated, full-service practice for in-place application modernization. "There is a significant need for the developer productivity gains that generative AI can bring to transform applications on the mainframe," said Keshav Varma, ISU Head, Technology, Software and Services Business Unit, TCS. "While watsonx Code Assistant for Z has only just become available, we have several clients that have already requested that we create proofs of concept for them. With decades of enterprise experience from both our companies, we look forward to building on our deep partnership with IBM using watsonx."

IBM Consulting Brings Expertise to Help Clients with IT Automation and Modernization

Early IBM Consulting engagements for both watsonx Code Assistant for Red Hat Ansible Lightspeed and watsonx Code Assistant for Z aim to provide clients with the ability to deliver continuous automation, Ansible Playbook productivity, quality improvements, and transformation of IT operations – in addition to helping them identify the right application areas to modernize with Z. 
For those looking for more personalized use cases with watsonx Code Assistant, IBM Consulting and IBM Client Engineering can work side-by-side with clients to identify specific pain points and solve critical business and technical challenges from the users' perspective. IBM Consulting brings deep industry expertise in application modernization, IT automation and generative AI via dedicated Red Hat and watsonx practices that work closely with IBM Research, IBM Technology and Red Hat. Statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.
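The three-stage modernization lifecycle described above (discovery maps the application, refactoring decomposes it into business services, transformation converts each service to Java) can be sketched as a simple pipeline. Every name and structure here is invented for illustration; this is not IBM's tooling, just the shape of the workflow:

```python
# Illustrative pipeline for the discover -> refactor -> transform
# lifecycle. Real tooling operates on actual COBOL sources; here a
# toy dict of component names stands in for the application.

def discover(application: dict) -> list[str]:
    """Stage 1: map out the application's components and dependencies."""
    return sorted(application["components"])

def refactor(components: list[str]) -> dict[str, list[str]]:
    """Stage 2: group components into modular business services."""
    services: dict[str, list[str]] = {}
    for name in components:
        domain = name.split("-")[0]  # toy grouping rule: prefix = domain
        services.setdefault(domain, []).append(name)
    return services

def transform(services: dict[str, list[str]]) -> dict[str, str]:
    """Stage 3: map each business service to a Java conversion target."""
    return {svc: f"{svc}Service.java" for svc in services}

app = {"components": ["billing-invoice", "billing-tax", "ledger-post"]}
java_targets = transform(refactor(discover(app)))
print(java_targets)  # {'billing': 'billingService.java', 'ledger': 'ledgerService.java'}
```

The point of staging the work this way is that each step's output (component map, service boundaries, conversion targets) can be reviewed before the next step runs, which is where the validation-testing stage mentioned above fits in.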

Read More

Software

DataStax Launches New Integration with LangChain, Enables Developers to Easily Build Production-ready Generative AI Applications

Business Wire | October 25, 2023

DataStax, the company that powers generative AI applications with real-time, scalable data, today announced a new integration with LangChain, the most popular orchestration framework for developing applications with large language models (LLMs). The integration makes it easy to add Astra DB – the real-time database for developers building production Gen AI applications – or Apache Cassandra as a new vector source in the LangChain framework. As many companies implement retrieval augmented generation (RAG) – the process of providing context from outside data sources to deliver more accurate LLM query responses – in their generative AI applications, they require a vector store that gives them real-time updates with zero latency on critical, real-life production workloads. Generative AI applications built with RAG stacks require a vector-enabled database and an orchestration framework like LangChain to provide memory or context to LLMs for accurate and relevant answers. Developers use LangChain as the leading AI-first toolkit to connect their applications to different data sources. The new integration lets developers leverage the power of the Astra DB vector database for their LLM, AI assistant, and real-time generative AI projects through the LangChain plugin architecture for vector stores. Together, Astra DB and LangChain help developers take advantage of framework features like vector similarity search, semantic caching, term-based search, LLM-response caching, and data injection from Astra DB (or Cassandra) into prompt templates. "In a RAG application, the model receives supplementary data or context from various sources — most often a database that can store vectors," said Harrison Chase, CEO, LangChain. "Building a generative AI app requires a robust, powerful database, and we ensure our users have access to the best options on the market via our simple plugin architecture." 
With integrations like DataStax's LangChain connector, incorporating Astra DB or Apache Cassandra as a vector store becomes a seamless and intuitive process. “Developers at startups and enterprises alike are using LangChain to build generative AI apps, so a deep native integration is a must-have,” said Ed Anuff, CPO, DataStax. “The ability for developers to easily use Astra DB as their vector database of choice, directly from LangChain, streamlines the process of building the personalized AI applications that companies need. In fact, we’re already seeing customers benefit from our joint technologies, as healthcare AI company SkyPoint is using Astra DB and LangChain to power its generative AI healthcare model.” To learn more, join the live webinar on October 26 at 9am PT, where LangChain founder and CEO Harrison Chase and SkyPoint founder and CEO Tisson Mathew discuss their experience building production RAG applications.

About DataStax

DataStax is the company that powers generative AI applications with real-time, scalable data, offering the production-ready vector data tools that generative AI applications need and seamless integration with developers’ stacks of choice. The Astra DB vector database provides developers with elegant APIs, powerful real-time data pipelines, and complete ecosystem integrations to quickly build and deploy production-level AI applications. With DataStax, any enterprise can mobilize real-time data to quickly build smart, high-growth AI applications at unlimited scale, on any cloud. Hundreds of the world’s leading enterprises, including Audi, Bud Financial, Capital One, SkyPoint Cloud, Verizon, VerSe Innovation, and many more rely on DataStax to deliver real-time AI. Learn more at DataStax.com. Apache, Apache Cassandra, and Cassandra are either registered trademarks or trademarks of the Apache Software Foundation or its subsidiaries in Canada, the United States, and/or other countries.
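The RAG flow this announcement describes — retrieve the stored texts most similar to a query embedding, then inject them into the prompt sent to the LLM — can be sketched in plain Python. This is illustration only: the hand-made three-dimensional vectors stand in for real embeddings, and Astra DB and LangChain provide the production-grade versions of each step:

```python
import math

# Minimal in-memory "vector store": (embedding, text) pairs. Real systems
# store millions of embeddings produced by an embedding model.
STORE = [
    ([0.9, 0.1, 0.0], "Astra DB supports vector similarity search."),
    ([0.1, 0.9, 0.0], "LangChain orchestrates calls to LLMs."),
    ([0.0, 0.1, 0.9], "Cassandra is a distributed wide-column database."),
]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def retrieve(query_vec: list[float], k: int = 2) -> list[str]:
    """Return the k stored texts most similar to the query embedding."""
    ranked = sorted(STORE, key=lambda item: cosine(item[0], query_vec), reverse=True)
    return [text for _, text in ranked[:k]]

def build_prompt(question: str, query_vec: list[float]) -> str:
    """Inject retrieved context into a prompt template, as in RAG."""
    context = "\n".join(retrieve(query_vec))
    return f"Context:\n{context}\n\nQuestion: {question}"

print(build_prompt("What does Astra DB offer?", [0.8, 0.2, 0.1]))
```

The semantic caching the release mentions uses the same similarity machinery in reverse: before calling the LLM, check whether a sufficiently similar question has already been answered and reuse that response.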

Read More


Events

AI.dev

Conference
