A Conversation on Machine Learning with Amazon

| December 16, 2019

Hear from Amazon leaders on lessons learned and best practices in machine learning, based on the company’s own experience applying the technology to everything from recommendations to forecasting in order to improve the customer experience.

Spotlight

QuantiQ Technology

QuantiQ is a leading Microsoft Dynamics partner, deploying industry leading CRM, AX and NAV solutions. We are skilled at simplifying business processes and enhancing business performance to drive growth, profitability and efficiency. Providing business consulting, design, implementation and managed services out of our London headquarters, we have successfully deployed Microsoft Dynamics over 650 times.

OTHER ARTICLES

The 6 Biggest Challenges to AI Marketing Success

Article | July 8, 2021

More and more businesses are taking advantage of the opportunities Artificial Intelligence (AI) brings to the digital world; for many companies it is a necessary step to stay competitive in 2021 and beyond. With the rise of technology, AI-powered marketing platforms are becoming more common and simpler to use. That does not mean they are free of challenges, however. A survey conducted by the data analytics firm Teradata reports that around 80% of enterprise-level organizations have already embraced some form of AI, and approximately 32% of those businesses use AI algorithms for marketing purposes. Yet more than 90% of these companies anticipate significant barriers to adopting and integrating AI. In this article, we shed light on the six biggest challenges in AI marketing, so you can act early and avoid common problems when integrating AI into your marketing strategy. Here are the highlights of this article:

● Many popular media sources have created hype around AI, so people in general don’t trust it.
● There isn’t enough skilled workforce to fill AI-related positions in organizations.
● AI software needs high-quality data, and maintaining that data quality is not easy.
● AI software needs significant investment.
● Many small businesses lack IT infrastructure resources; cloud services help them overcome this problem.

As you can see, most challenges in AI marketing revolve around business alignment, data, or people. While every organization is different and will experience the AI adoption process differently, there are a few common challenges in AI marketing you should be aware of. So, without further ado, let’s look at the most common AI challenges digital marketers face.
Lack of Knowledge of AI Systems

For a full AI implementation, your company’s management must have a deep understanding of the role of AI in digital marketing, the latest AI trends, data challenges, and other essential aspects. However, many marketers lack a proper understanding of how AI technologies are used in marketing. On top of this, AI comes with a variety of fears and myths: some people think they need an in-house data science team for complete AI adoption, while others believe sci-fi fantasies about smart robots ending humanity. Insufficient knowledge of AI is one of the biggest challenges in AI marketing; it hinders implementation in several ways and ultimately delays success. How do you get past this? First things first: start by acquiring knowledge. That might sound demotivating, but you do not have to become a data scientist. You can look at the giants in your industry, carefully analyze how they deploy AI in their businesses, and act accordingly. Next, learn about the current AI technologies for marketing, either on your own or with help from an expert. Once you have adequate knowledge, you will know what to expect from AI and what not to.

Challenges in Integration

Deploying and integrating new technology requires skill. Integrating Artificial Intelligence into your business is not an easy task; it is a complicated job that requires proper knowledge. You first have to set up interfaces and other elements to address all your business needs, and such steps may require complex coding. Developers must consider how data is fed into the system, labeling, data storage, data infrastructure needs, and much more while setting up these elements. Then comes model training and testing.
Testing is necessary for the following reasons:

● To check the effectiveness of your AI
● To develop a feedback loop for constant improvement
● To sample data, reducing what is stored and running models even faster

The biggest challenge here is confirming whether the system is working correctly, and whether it is worth the money you are investing. Arguably the most effective way to overcome this hurdle is to work closely with your vendor and ensure everyone understands the process. The vendor’s expertise should not be limited, either: they should be capable of guiding you beyond building the AI models. When you implement Artificial Intelligence with the right strategy, you reduce the risk of failure. And once you have successfully implemented AI in your system, you will still have to educate your marketers to use it efficiently, so your people understand how to interpret the results they receive.

Poor Data Quality or Lack of Data

High-quality data is essential for Artificial Intelligence: any AI system will produce poor results if you feed it insufficient or poor-quality data. As the Big Data world evolves, businesses are gathering vast amounts of data, but this data is not always up to the mark; it is either insufficient or not good enough to drive a profitable AI marketing strategy. Such data-related challenges prevent companies from capitalizing on Big Data. For this reason, you should always make sure the data you collect is clean and rich in quality. Otherwise, you will get unsatisfactory results from the AI, which will negatively affect the overall success of your AI-powered marketing campaigns.

Budget Constraints for AI Implementation

Many companies lack the necessary budget for implementing AI.
Even though AI has the power to deliver an impressive Return on Investment (ROI), the hefty upfront investment remains one of the biggest challenges in AI marketing, especially for small and mid-size companies whose budgets are already stretched. AI-powered platforms come with high-performance hardware and complex software, and deploying and maintaining such components is costly. These budgeting challenges can limit how fully businesses use AI technology. Thankfully, this is becoming a thing of the past as affordable AI vendors come to the rescue: with them, you do not have to invest in developing in-house solutions, and you can implement AI in a relatively cheaper and faster way.

Privacy and Regulations

Artificial Intelligence is still new to this world, and it is growing at an incredible pace. Chances are that the rules and regulations surrounding AI will change and tighten in the coming years. Data collection and data use policies already affect businesses that collect and use data from customers based in the European Union to drive their Artificial Intelligence systems. The EU implemented the GDPR in 2018, making data collection and usage rules even stricter, so companies now have to be extra careful when collecting and using customer data. Furthermore, several businesses are restricted from storing data offsite for regulatory reasons, which means they can no longer use cloud-based AI marketing services.

Constantly Changing Marketing Landscape

AI is a new marketing tool and can disrupt traditional marketing operations. For this reason, marketers are evaluating how AI will create new jobs and, at the same time, replace older ones.
One survey suggests that AI marketing tools are likely to replace the jobs of around 6 out of 10 marketing analysts and marketing specialists over the coming years.

Overcoming the Challenges in AI Marketing

Yes, such challenges can sometimes slow down your campaigns and affect the outcomes of your AI-driven software, but fortunately there are solutions. Consider the following steps to rule out the common challenges in AI marketing discussed above:

● Develop a target-oriented marketing strategy
● Secure funding before you roll out AI in marketing
● Train your marketers
● Recruit the right talent

Developing business cases, recruiting talented marketers, measuring ROI, and securing the required investment: none of these steps probably sounds interesting. But when it comes to the reality check of your AI marketing strategies, they are the methods that can open the door to actual Artificial Intelligence payoffs. In the end, it is every company’s responsibility to make sure it uses AI responsibly, so it can benefit its customers in the best way possible.

Frequently Asked Questions

How does AI affect marketing?

AI helps marketers spot the latest internet trends and predict future ones. Such trends reveal current marketing facts and help with significant tasks such as budget allocation and defining the target audience. AI also reduces the money and time companies usually spend on digital advertising while steering them toward smarter, more targeted advertising campaigns. As a result, many companies have built AI into their digital marketing strategies, since it can increase sales and save money at the same time. On a bigger scale, AI has an impact on global trends, sustainability, and scalability.
Even governments, major public institutions, and major cities around the globe have seen positive effects of AI. AI can make the world a better place if used in the right way!

How is AI used in digital marketing?

Companies are using some stand-out developments to improve the customer experience with AI. For example:

● Image recognition technology
● Predictive and targeted content
● Content creation
● Chatbots

With these, AI enhances customer support and provides more relevant and targeted content to customers.

Why is artificial intelligence critical in marketing?

With the correct use of Artificial Intelligence, businesses can collect, analyze, and store large amounts of data. As a result, AI is the best way to learn the latest marketing trends and incorporate them into your marketing strategy. In general, Artificial Intelligence has the power to help your company reach potential customers and give them easy access to make purchases.


Adobe Outlines AI Enhancements For Experience Cloud

Article | April 2, 2020

More than 30 per cent of the 400 patents Adobe filed last year were specifically related to artificial intelligence and machine learning. Adobe introduced its AI capability Sensei four years ago and has since launched hundreds of new AI-powered features and capabilities across its portfolio. Within its marketing, advertising and commerce clouds, the AI services aim to help marketers optimise campaigns, automate tedious tasks and provide predictive capabilities. Speaking during the virtual edition of Adobe Summit this week, Anil Chakravarthy, the new head of Adobe’s digital experience business, outlined which AI features will be available to Adobe Experience Cloud customers in 2020. Chakravarthy announced the general availability of two of these services: Customer AI and Attribution AI.


Real-time Object Detection in Video (with intro to Yolo v3)

Article | March 15, 2021

A few weeks ago, I created a TensorFlow model that would take an image of a street, break it down into regions, and, using a convolutional neural network, highlight areas of the image that contained vehicles such as cars and trucks. I called it an image classification AI, but what I had really created was an object detection and location program, following a region-based classification approach that has been in use for decades. By cutting the input image into regions and passing each one into the network to get class predictions, I had created an algorithm that roughly locates classified objects. Other people have created programs that perform this operation frame by frame on real-time video, allowing computer programs to draw boxes around recognised objects, understand what they are, and track their motion through space. In this article, I’ll give an introduction to object detection in real-time video: why this kind of artificial intelligence is important, how it works, and how you can implement your own system with Yolo V3. From there, you can build a huge variety of real-time object tracking programs, and I’ve included links to further material too.

Importance of real-time object detection

Object detection, location and classification has become a massive field in machine learning in the past decade, as GPUs and faster computers appearing on the market have allowed more computationally expensive deep learning nets to run heavy operations. Real-time object detection in video is one such application, and it has been used for a wide variety of purposes over the past few years. In surveillance, convolutional models have been trained on human facial data to recognise and identify faces; an AI can then analyse each frame of a video, locate recognised faces, and classify them with remarkable precision. Real-time object detection has also been used to measure traffic levels on heavily frequented streets.
AIs can identify cars and count the number of vehicles in a scene, then track that number over time, providing crucial information about congested roads. In wildlife monitoring, given enough training data, a model can learn to spot and classify types of animals; one great example was done with tracking racoons through a webcam, here. All you need is enough training images to build your own custom model, and such artificial intelligence programs are actively being used all around the world.

Background to Yolo V3

Until about ten years ago, the technology required to perform real-time object tracking was not available to the general public. Fortunately for us, in 2021 there are many machine learning libraries available, and practically anyone can get started with these amazing programs. Arguably the best object detection algorithm for amateurs, and often even professionals, is You Only Look Once, or YOLO. This family of algorithms was first released in 2016 and became incredibly popular thanks to its impressive accuracy and speed, which lends it easily to live computer vision.

The method for object detection and recognition I mentioned at the start of this article happens to be a fairly established technique. Traditional object recognition would split each frame of a video into “regions”, flatten them into strings of pixel values, and pass them through a deep learning neural network one by one. The algorithm would then output a 0-to-1 value indicating the chance that the specific region contains a recognised object, or rather part of one, within its bounds. Finally, the algorithm would keep all the regions above a particular “certainty” threshold and compile adjacent regions into bounding boxes around recognised objects. Fairly straightforward, but when it comes down to the details, this algorithm isn’t exactly the best.
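The traditional region-based pipeline described above is easy to sketch in code. In the minimal Python sketch below, classify_region is a stand-in stub for a trained CNN (it simply scores mean brightness so the example runs); the region slicing, thresholding and box collection follow the steps in the text:

```python
import numpy as np

def classify_region(region):
    # Stand-in for a trained CNN: returns a 0-1 "object" score.
    # A real system would run the flattened region through a network.
    return float(region.mean() / 255.0)

def detect(frame, region_size=32, threshold=0.7):
    """Slide a window over the frame and keep regions whose
    classifier score clears the certainty threshold."""
    hits = []
    h, w = frame.shape[:2]
    for y in range(0, h - region_size + 1, region_size):
        for x in range(0, w - region_size + 1, region_size):
            region = frame[y:y + region_size, x:x + region_size]
            if classify_region(region) >= threshold:
                # (x1, y1, x2, y2) bounding box for this region
                hits.append((x, y, x + region_size, y + region_size))
    return hits

# A dark 64x64 "frame" with one bright 32x32 patch:
frame = np.zeros((64, 64), dtype=np.uint8)
frame[0:32, 32:64] = 255
print(detect(frame))  # -> [(32, 0, 64, 32)]
```

In a full implementation, adjacent hit regions would then be merged into larger bounding boxes, which is exactly the step that makes this approach slow and imprecise compared with Yolo.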
Yolo V3 uses a different method to identify objects in real-time video, and it’s this algorithm that gives it its desirable balance between speed and accuracy, allowing it to detect objects and draw bounding boxes around them fairly accurately at about thirty frames per second. Yolo V3’s backbone is Darknet-53, a Fully Convolutional Network (FCN) with 53 convolutional layers; with the detection head added, the full network runs to 106 layers. While traditional methods pass one region at a time through the algorithm, Yolo V3 takes the entire frame and runs it through the network in a single pass. It divides the image into a grid of cells, predicts probability values and box offsets for each one, and then assembles the connected high-confidence predictions into “bounding boxes” around recognised objects.

Coding

Luckily for us, there’s a really easy way to implement YoloV3 on real-time video simply with our webcams; this program can be run on pretty much any computer with a webcam. Note, however, that the library does prefer a fast computer to achieve a good framerate, and if you have a GPU it’s definitely worth using it. The way we’ll use YoloV3 is through a library called ImageAI, which provides a ton of machine learning resources for image and video recognition, including YoloV3. All we have to do is download the pre-trained weights for the standard YoloV3 model and set it to work with ImageAI. You can download the YoloV3 model here; place it in your working directory. We’ll start with our imports:

import numpy as np
import cv2
from imageai import Detection

Of course, if you don’t have ImageAI, you can get it with “pip install imageai” on your command line, like normal. CV2 will be used to access your webcam and grab frames from it, so make sure any webcam settings on your device are set to default so access is allowed. Next, we need to load the deep learning model.
This is a pre-trained, pre-weighted Keras model that can classify objects into 80 different categories and draw accurate bounding boxes around them. As mentioned before, it uses the Darknet model. Let’s load it in:

modelpath = "path/yolo.h5"
yolo = Detection.ObjectDetection()
yolo.setModelTypeAsYOLOv3()
yolo.setModelPath(modelpath)
yolo.loadModel()

All we’re doing here is creating a detector and loading in the Keras h5 file to initialise it with the pre-built network. Then we’ll use CV2 to access the webcam as a camera object and define its parameters so we can grab the frames needed for object detection:

cam = cv2.VideoCapture(0)
cam.set(cv2.CAP_PROP_FRAME_WIDTH, 1300)
cam.set(cv2.CAP_PROP_FRAME_HEIGHT, 1500)

You’ll need to change the 0 in cv2.VideoCapture(0) to 1 if your webcam isn’t showing up at index 0. Great, so we have imported everything, loaded our model, and set up a camera object with CV2. We now need a run loop:

while True:
    ret, img = cam.read()

This grabs the next immediate frame from the webcam as an image. Our program doesn’t run at a set framerate; it goes as fast as your processor and camera allow. Next, still inside the loop, we get an output image with bounding boxes drawn around the detected and classified objects:

    img, preds = yolo.detectCustomObjectsFromImage(
        input_image=img,
        custom_objects=None,
        input_type="array",
        output_type="array",
        minimum_percentage_probability=70,
        display_percentage_probability=False,
        display_object_name=True)

As you can see, we’re just using the model to predict the objects and output an annotated image.
You can play around with minimum_percentage_probability to set the margin of confidence at which the model reports objects, and if you want to see the confidence percentages on screen, set display_percentage_probability to True. To wrap the loop up, we’ll show the annotated images and close the program if the user wants to exit (note that cv2.waitKey should be called once per frame and its result stored, rather than called twice):

    cv2.imshow("", img)
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q") or key == 27:
        break

The last thing we need to do, outside the loop, is release the camera object:

cam.release()

And that’s it! It really is that simple to use real-time object detection in video. If you run the program, you’ll see a window open that displays annotated frames from your webcam, with bounding boxes drawn around classified objects. Obviously we’re using a pre-built model, but many applications make use of YoloV3’s standard classification network, and ImageAI offers plenty of options to train the model on custom datasets so it can recognise objects outside the standard categories. Thus, you’re not sacrificing much by using ImageAI. Good luck with your projects if you choose to use this code!

Conclusion

Yolo V3 is a great algorithm for object detection that can detect a multitude of objects with impressive speed and accuracy, making it ideal for video feeds, as the examples above showed. Yolo V3 is powerful on its own, but its true power comes when it is combined with other algorithms that help it process information faster or increase the number of detected objects. Similar algorithms are used in industry today and have been perfected over the years: self-driving cars, for example, use techniques like those described in this article, together with lane recognition and bird’s-eye-view mapping, to model a car’s surroundings and pass that information to the driving system, which then decides the best course of action.
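A closing note on the box-assembly step discussed earlier: in practice, overlapping detections of the same object are usually merged with non-maximum suppression (NMS), which keeps only the highest-confidence box among heavily overlapping candidates. Here is a minimal numpy sketch (the function names are illustrative, not part of ImageAI’s API):

```python
import numpy as np

def iou(box, boxes):
    # Intersection-over-union of one [x1, y1, x2, y2] box vs. an (N, 4) array.
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_threshold=0.5):
    # Repeatedly keep the highest-scoring box and drop boxes overlapping it.
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        best = order[0]
        keep.append(int(best))
        rest = order[1:]
        order = rest[iou(boxes[best], boxes[rest]) < iou_threshold]
    return keep

# Two near-duplicate detections of one object, plus one distinct detection:
boxes = np.array([[10, 10, 50, 50], [12, 12, 52, 52], [100, 100, 140, 140]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # -> [0, 2]
```

Variants of this procedure run inside virtually every modern detector, Yolo V3 included, before the final bounding boxes are drawn.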


COVID19: A crisis that necessitates Open Data

Article | August 13, 2020

The coronavirus outbreak in China has grown into a pandemic that is affecting global health and social and economic dynamics. An ever-increasing velocity and scale of analysis, in terms of both processing and access, is required to succeed in the face of unimaginable shifts in market, health and social paradigms. The COVID-19 pandemic is accompanied by an infodemic: with the global novel coronavirus pandemic filling headlines, TV news and social media, it can seem as if we are drowning in information and data about the virus. With so much data being pushed at us and shared, it can be hard for the general public to know what is correct, what is useful and, unfortunately, what is dangerous. In general, levels of trust in scientists are quite high, albeit with differences across countries and regions. A 2019 survey conducted across 140 countries showed that, globally, 72% of respondents trusted scientists at “high” or “medium” levels. However, the proportion expressing “high” or “medium” levels of trust in science ranged from about 90% in Northern and Western Europe to 68% in South America and 48% in Central Africa (Rabesandratana, 2020). In times of crisis, like the ongoing spread of COVID-19, both scientific and non-scientific data should be a trusted source for information, analysis and decision making. While global sharing and collaboration of research data has reached unprecedented levels, challenges remain. Trust in at least some of the data is relatively low, and outstanding issues include the lack of specific standards, coordination and interoperability, as well as data quality and interpretation.
To strengthen the contribution of open science to the COVID-19 response, policy makers need to ensure adequate data governance models, interoperable standards, sustainable data sharing agreements involving the public sector, private sector and civil society, incentives for researchers, sustainable infrastructures, human and institutional capabilities, and mechanisms for access to data across borders. COVID-19 data is cited as critical for vaccine discovery, for planning and forecasting healthcare capacity, and for setting up emergency systems, and it is expected to contribute to policy objectives such as higher transparency and accountability, more informed policy debates, better public services, greater citizen engagement, and new business development. This is precisely why “open data” access to COVID-19 information is critical for humanity to succeed. In global emergencies like the coronavirus (COVID-19) pandemic, open science policies can remove obstacles to the free flow of research data and ideas, and thus accelerate the pace of research critical to combating the disease. UNESCO, which has set up open access to some of this data, is playing a major role in this direction. Thankfully, scientists around the world working on COVID-19 are able to work together, share data and findings, and hopefully make a difference to the containment, treatment and, eventually, vaccines for COVID-19. Science and technology are essential to humanity’s collective response to the COVID-19 pandemic. Yet the extent to which policymaking is shaped by scientific evidence and by technological possibilities varies across governments and societies, and can often be limited. At the same time, collaborations across science and technology communities have grown in response to the current crisis, holding promise for enhanced cooperation in the future as well.
A prominent example of this is the Coalition for Epidemic Preparedness Innovations (CEPI), launched in 2017 as a partnership between public, private, philanthropic and civil society organizations to accelerate the development of epidemic vaccines. Its ongoing work has cut the expected development time for a COVID-19 vaccine to 12-18 months, and its grants are providing quick funding for some promising early candidates. It is estimated that an investment of USD 2 billion will be needed, with resources being made available from a variety of sources (Yamey, et al., 2020). The Open COVID Pledge was launched in April 2020 by an international coalition of scientists, lawyers, and technology companies. It calls on authors to make all intellectual property (IP) under their control available, free of charge and without encumbrances, to help end the COVID-19 pandemic and reduce the impact of the disease. Notable signatories include Intel, Facebook, Amazon, IBM, Sandia National Laboratories, Hewlett Packard, Microsoft, Uber, Open Knowledge Foundation, the Massachusetts Institute of Technology, and AT&T. The signatories will offer a specific non-exclusive royalty-free Open COVID license to use IP for the purpose of diagnosing, preventing and treating COVID-19. Also illustrating the power of open science, online platforms are increasingly facilitating the collaborative work of COVID-19 researchers around the world. A few examples:

1. Research on treatments and vaccines is supported by Elixir, REACTing, CEPI and others.
2. The WHO funds research and data organization.
3. The London School of Hygiene and Tropical Medicine has released a dataset on the environments that have led to significant clusters of COVID-19 cases, containing more than 250 records with date, location, whether the event was indoors or outdoors, and how many individuals became infected. (7/24/20)
4. The European Union Science Hub has published a report on the concept of data-driven Mobility Functional Areas (MFAs).
They demonstrate how mobile data calculated at a European regional scale can be useful for informing policies related to COVID-19 and future outbreaks. (7/16/20)

While clinical, epidemiological and laboratory data about COVID-19 is widely available, including genomic sequencing of the pathogen, a number of challenges remain:

1. Not all data is sufficiently findable, accessible, interoperable and reusable (FAIR).
2. Sources of data tend to be dispersed; even though many pooling initiatives are under way, curation needs to happen “on the fly”.
3. Many issues arise around the interpretation of data, as illustrated by the widely followed epidemiological statistics. Typically, the statistics concern “confirmed cases”, “deaths” and “recoveries”, yet each of these items seems to be treated differently in different countries, sometimes with methodological changes within the same country.
4. Specific standards for COVID-19 data therefore need to be established; this is one of the priorities of the UK COVID-19 Strategy, and a working group within the Research Data Alliance has been set up to propose such standards at an international level.

Given the achievements and challenges of open science in the current crisis, lessons from prior experience, including the global SARS and MERS outbreaks, can be drawn on to help design open science initiatives that address the COVID-19 crisis. The following actions can help to further strengthen open science in support of responses to the COVID-19 crisis:

1. Providing regulatory frameworks that enable interoperability within the networks of large electronic health record providers, patient-mediated exchanges, and peer-to-peer direct exchanges. Data standards need to ensure that data is findable, accessible, interoperable and reusable, including general data standards as well as standards specific to the pandemic.
2. Public actors, private actors and civil society working together to develop and/or clarify a governance framework for the trusted reuse of privately held research data in the public interest. This framework should include governance principles, open data policies, trusted data reuse agreements, transparency requirements and safeguards, and accountability mechanisms, including ethical councils, that clearly define duties of care for data accessed in emergency contexts.
3. Securing adequate infrastructure (including data and software repositories, computational infrastructure, and digital collaboration platforms) for recurrent emergency situations. This includes a global network of certified, trustworthy and interlinked repositories with compatible standards to guarantee the long-term preservation of FAIR COVID-19 data, as well as preparedness for future emergencies.
4. Ensuring that adequate human capital and institutional capabilities are in place to manage, create, curate and reuse research data, both in individual institutions and in institutions that act as data aggregators, whose role is real-time curation of data from different sources.

In increasingly knowledge-based societies and economies, data are a key resource. Enhanced access to publicly funded data enables research and innovation, and has far-reaching effects on resource efficiency, productivity and competitiveness, creating benefits for society at large. Yet these benefits must be balanced against associated risks to privacy, intellectual property, national security and the public interest. Entities such as UNESCO are helping the open science movement progress towards establishing norms and standards that will facilitate greater, and more timely, access to scientific research across the world.
Independent scientific assessments that inform the work of many United Nations bodies are indicating areas needing urgent action, and international cooperation can help build the national capacities needed to implement them. At the same time, actively engaging with different stakeholders in countries around the dissemination of the findings of such assessments can help build public trust in science.

