Article | January 20, 2021
When you work with data, the most prominent challenge is finding the right way to present it in a readable format. With proper visualization, it becomes much easier to convey the message contained in the analyzed data. The best choice, in this case, is data visualization.
By definition, data visualization is the graphical representation of data and information. Using elements like maps, charts, and graphs makes it easier to understand patterns, trends, outliers, and performance.
Combination Chart, or C3, is a reusable chart library built on top of D3. C3 assigns a class to each element, which makes it easy to define custom styles and later extend the structure through D3. One of the major benefits of C3.js is that you can update a chart even after it has been rendered.
Similar to C3, ReCharts is built on D3 but exposes declarative components. Lightweight and rendered with SVG elements, it helps create stunning charts, and the library ships with some beautiful examples that can be customized to your needs. It is great for static charts but can lag when many animations run at once. Its intuitive API makes it both powerful and responsive.
Highcharts, a JavaScript charting library that renders to SVG, is quite popular among large organizations. It comes with an entire ecosystem of project templates and is compatible even with old browsers. Thanks to its interactive chart editor, non-developers can use it with ease, and it is used by popular brands like Microsoft.
Article | February 24, 2020
With fraud losses accounting for about 3% of the nation’s overall health care spending each year, pharmacy benefit management firms would be wise to seize the opportunity to battle endemic fraud, waste, and abuse (FWA). The upside can be significant, according to the National Health Care Anti-Fraud Association, which estimates annual losses at $68 billion nationally. While the savings are potentially huge, identifying instances of FWA hiding in the massive data volumes generated by prescribing, dispensing, and covering medicines is no easy feat.
Article | March 15, 2021
A few weeks ago, I created a TensorFlow model that would take an image of a street, break it down into regions, and - using a convolutional neural network - highlight areas of the image that contained vehicles, like cars and trucks.
I called it an image classification AI, but what I had really created was an object detection and localization program, following a region-based approach that has been in use for decades.
By cutting up my input image into regions and passing each one into the network to get class predictions, I had created an algorithm that roughly locates classified objects. Other people have created programs that perform this operation frame by frame on real time video, allowing computer programs to draw boxes around recognised objects, understand what they are and track their motion through space.
In this article, I'll give an introduction to object detection in real-time video. I'll explain why this kind of artificial intelligence is important, how it works, and how you can implement your own system with Yolo V3. From there, you can build a huge variety of real-time object tracking programs, and I've included links to further material too.
Importance of real-time object detection
Object detection, localization, and classification has become a massive field in machine learning over the past decade, as GPUs and faster computers have made it practical to run computationally expensive deep learning networks. Real-time object detection in video is one such application, and it has been put to a wide variety of uses in the past few years.
In surveillance, convolutional models have been trained on human facial data to recognise and identify faces. An AI can then analyse each frame of a video and locate recognised faces, classifying them with remarkable precision.
Real-time object detection has also been used to measure traffic levels on heavily frequented streets. AIs can identify cars and count the number of vehicles in a scene, and then track that number over time, providing crucial information about congested roads.
In wildlife monitoring, with enough training data a model can learn to spot and classify types of animals; one well-known example tracked raccoons through a webcam. All you need is enough training images to build your own custom model, and such artificial intelligence programs are actively being used all around the world.
Background to Yolo V3
Until about ten years ago, the technology required to perform real-time object tracking was not available to the general public. Fortunately for us, in 2021 there are many machine learning libraries available and practically anyone can get started with these amazing programs.
Arguably the best object detection algorithm for amateurs - and often even professionals - is You Only Look Once, or YOLO. This family of algorithms and datasets was first released in the mid-2010s and became incredibly popular thanks to its impressive accuracy and speed, which lend it naturally to live computer vision.
The method for object detection and recognition I mentioned at the start of this article happens to be a fairly established technique. Traditional object recognition would split up each frame of a video into "regions", flatten them into strings of pixel values, and pass them through a deep learning neural network one by one. The algorithm would then output a 0 to 1 value indicating the chance that the specific region has a recognized object - or rather, a part of a recognized object - within its bounds.
Finally, the algorithm would output all the regions that were above a particular “certainty” threshold, and then it would compile adjacent regions into bounding boxes around recognized objects. Fairly straightforward, but when it comes down to the details, this algorithm isn’t exactly the best.
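The region-based pipeline described above can be sketched in a few lines of Python. This is a toy illustration, not a real detector: the scoring function stands in for a trained network, the grid of fixed-size regions replaces a proper sliding window, and the merge step naively collapses all positive regions into one bounding box.

```python
import numpy as np

def detect_by_regions(image, region_size, score_fn, threshold=0.5):
    """Slide a fixed grid of regions over the image, score each region
    with a classifier, and merge all positive regions into a single
    bounding box (x_min, y_min, x_max, y_max)."""
    h, w = image.shape[:2]
    positives = []
    for y in range(0, h - region_size + 1, region_size):
        for x in range(0, w - region_size + 1, region_size):
            region = image[y:y + region_size, x:x + region_size]
            if score_fn(region) >= threshold:
                positives.append((x, y))
    if not positives:
        return None
    xs = [x for x, _ in positives]
    ys = [y for _, y in positives]
    return (min(xs), min(ys), max(xs) + region_size, max(ys) + region_size)

# Toy "frame": a bright object in the lower-right quadrant.
frame = np.zeros((8, 8))
frame[4:8, 4:8] = 1.0

# Stand-in scorer: mean brightness as the "object probability".
box = detect_by_regions(frame, region_size=2, score_fn=lambda r: r.mean())
print(box)  # (4, 4, 8, 8)
```

A real system would use a trained classifier for `score_fn` and only group regions that are adjacent, so that multiple objects get separate boxes.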
Yolo V3 uses a different method to identify objects in real time video, and it’s this algorithm that gives it its desirable balance between speed and accuracy - allowing it to fairly accurately detect objects and draw bounding boxes around them at about thirty frames per second.
Darknet-53 is YOLOv3's backbone: a fully convolutional network (FCN) with 53 convolutional layers, which the full detection network extends to 106 layers. While traditional methods pass one region at a time through the algorithm, YOLOv3 takes the entire frame in a single forward pass, dividing the image into a grid of cells and predicting objectness scores, class probabilities, and box offsets for each cell, before combining and filtering those predictions to create "bounding boxes" around recognized objects.
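As a rough illustration of the grid idea, here is a toy decoder that turns per-cell predictions into pixel-space boxes. This is not YOLOv3's actual decoding (which uses anchor boxes and predictions at three scales); the 5-value cell layout, the sigmoid squashing of offsets, and the objectness threshold are simplifications for the sketch.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def decode_grid(preds, img_size, obj_threshold=0.5):
    """Decode an S x S grid of raw predictions into absolute boxes.
    Each cell predicts (objectness, tx, ty, tw, th); tx and ty are
    squashed into the cell with a sigmoid, and tw, th are treated
    here as fractions of the image side."""
    s = preds.shape[0]
    cell = img_size / s
    boxes = []
    for row in range(s):
        for col in range(s):
            obj, tx, ty, tw, th = preds[row, col]
            if sigmoid(obj) < obj_threshold:
                continue                       # cell sees no object
            cx = (col + sigmoid(tx)) * cell    # box centre, in pixels
            cy = (row + sigmoid(ty)) * cell
            bw, bh = tw * img_size, th * img_size
            boxes.append((cx - bw / 2, cy - bh / 2, cx + bw / 2, cy + bh / 2))
    return boxes

# One confident cell (row 1, col 1) on a 4x4 grid over a 416-px frame.
grid = np.zeros((4, 4, 5))
grid[:, :, 0] = -10.0                          # all cells: no object
grid[1, 1] = [4.0, 0.0, 0.0, 0.25, 0.25]      # high objectness, centred box
print(decode_grid(grid, 416))  # [(104.0, 104.0, 208.0, 208.0)]
```

The real network would also run non-maximum suppression to drop overlapping boxes from neighbouring cells.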
Luckily for us, there's a really easy way to implement YoloV3 on real-time video with just a webcam; the program can run on pretty much any computer. Note, however, that the library prefers a fast machine for a good framerate, and if you have a GPU it's definitely worth using it!
The way we’ll use YoloV3 is through a library called ImageAI. This library provides a ton of machine learning resources for image and video recognition, including YoloV3, meaning all we have to do is download the pre-trained weights for the standard YoloV3 model and set it to work with ImageAI. You can download the YoloV3 model here. Place this in your working directory.
We’ll start with our imports as follows:
import cv2
import numpy as np
from imageai import Detection
Of course, if you don't have ImageAI, you can get it using "pip install imageai" on your command line or Python console, like normal. CV2 (OpenCV, installed via "pip install opencv-python") will be used to access your webcam and grab frames from it, so make sure any webcam settings on your device are set to default so access is allowed.
Next, we need to load the deep learning model. This is a pre-trained, pre-weighted Keras model that can detect objects in 80 common categories and draw accurate bounding boxes around them. As mentioned before, it uses the Darknet model. Let's load it in:
modelpath = "path/yolo.h5"
yolo = Detection.ObjectDetection()
yolo.setModelTypeAsYOLOv3()
yolo.setModelPath(modelpath)
yolo.loadModel()
All we’re doing here is creating a model and loading in the Keras h5 file to get it started with the pre-built network - fairly self-explanatory.
Then, we’ll use CV2 to access the webcam as a camera object and define its parameters so we can get those frames that are needed for object detection:
cam = cv2.VideoCapture(0)
You’ll need to set the 0 in cv2.VideCapture(0) to 1 if you’re using a front webcam, or if your webcam isn’t showing up with 0 as the setting. Great, so we have imported everything, loaded in our model and set up a camera object with CV2. We now need to create a run loop:
while True:
    ret, img = cam.read()
This will allow us to get the next immediate frame from the webcam as an image. Our program doesn’t run at a set framerate; it’ll go as fast as your processor/camera will allow.
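If you're curious how fast the loop actually runs on your machine, you can time successive frames. Here's a minimal helper; the class name is mine, and the timestamps in the example are simulated rather than taken from a live camera.

```python
import time

class FpsMeter:
    """Estimate frames per second from successive tick() calls."""

    def __init__(self):
        self.last = None
        self.fps = 0.0

    def tick(self, now=None):
        # Pass an explicit timestamp for testing; default to the clock.
        now = time.perf_counter() if now is None else now
        if self.last is not None:
            dt = now - self.last
            if dt > 0:
                self.fps = 1.0 / dt
        self.last = now
        return self.fps

# Two simulated frames 40 ms apart -> roughly 25 fps.
meter = FpsMeter()
meter.tick(0.00)
print(round(meter.tick(0.04)))  # 25
```

In the real loop you would call `meter.tick()` once per frame, right after `cam.read()`.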
Next, we need to get an output image with bounding boxes drawn around the detected and classified objects, and it’ll also be handy to get some print-out lines of what the model is seeing:
    img, preds = yolo.detectCustomObjectsFromImage(input_image=img,
                                                   custom_objects=None,
                                                   input_type="array",
                                                   output_type="array",
                                                   minimum_percentage_probability=70,
                                                   display_percentage_probability=False,
                                                   display_object_name=True)
As you can see, we’re just using the model to predict the objects and output an annotated image. You can play around with the minimum_percentage_probability to see what margin of confidence you want the model to classify objects with, and if you want to see the confidence percentages on the screen, set display_percentage_probability to True.
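For the print-out lines, the detections come back as a list of dicts. Assuming each entry follows ImageAI's documented shape ("name", "percentage_probability", "box_points"), a small helper can filter and format them; the sample values below are made up.

```python
def summarize_detections(detections, min_probability=70):
    """Return "name (prob%)" strings for detections above a cut-off.
    Each detection is assumed to be a dict in ImageAI's output format:
    {"name": ..., "percentage_probability": ..., "box_points": [...]}."""
    lines = []
    for det in detections:
        if det["percentage_probability"] >= min_probability:
            lines.append(f'{det["name"]} ({det["percentage_probability"]:.1f}%)')
    return lines

# Sample detections in ImageAI's dict format (invented values).
sample = [
    {"name": "car", "percentage_probability": 91.3, "box_points": [10, 20, 110, 90]},
    {"name": "dog", "percentage_probability": 42.0, "box_points": [5, 5, 40, 40]},
]
print(summarize_detections(sample))  # ['car (91.3%)']
```

In the run loop you could call `print(summarize_detections(preds))` after the detection step to log what the model is seeing each frame.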
To wrap the loop up, we’ll just show the annotated images, and close the program if the user wants to exit:
    cv2.imshow("", img)
    if (cv2.waitKey(1) & 0xFF == ord("q")) or (cv2.waitKey(1) == 27):
        break
The last thing we need to do, outside the loop, is to release the camera object and close the display window:
cam.release()
cv2.destroyAllWindows()
And that’s it! It’s really that simple to use real time object detection in video. If you run the program, you’ll see a window open that displays annotated frames from your webcam, with bounding boxes displayed around classified objects.
Obviously we’re using a pre-built model, but many applications make use of YoloV3’s standard classification network, and there are plenty of options with ImageAI to train the model on custom datasets so it can recognize objects outside of the standard categories. Thus, you’re not sacrificing much by using ImageAI.
Good luck with your projects if you choose to use this code!
Yolo V3 is a great algorithm for object detection: it can detect a multitude of objects with impressive speed and accuracy, making it ideal for video feeds, as shown in the example above.
Yolo V3 is powerful on its own, but its true strength emerges when it is combined with other algorithms that help it process information faster or increase the number of detected objects. Similar algorithms are used in industry today and have been refined over the years.
Self-driving cars today, for example, use techniques similar to those described in this article, together with lane-recognition algorithms and bird's-eye-view mapping, to model a car's surroundings and pass that information to the driving system, which then decides the best course of action.
Article | May 17, 2021
The common view is that AI software adoption is 'on its way' and will soon replace many jobs (self-driving cars replacing drivers, for example), and that the majority of companies are already starting to embrace the efficiencies AI brings.
As a practitioner of AI software development involved in many projects at my company, AI Technologies, I have always found my direct experience in the field at odds with what the media generally portrays about AI adoption.
In this article I want to give my view on how AI projects affect clients' work processes and compare that with the available studies on the impact of AI and new technologies on work. This should help the reader, especially an executive, set the right expectations and mindset when assessing a potential investment in a new AI project and deciding whether the company is ready for it.
To start with, any software development project, including AI, can be summarized in three stages: proof of concept (POC), when the prototype is built; product development, when the software is engineered at scale; and live support with continuous improvements. AI projects often fail to go past the POC stage, usually because
1) the right IT/data infrastructure is not in place, or
2) no specialists have been hired to handle the new software and no digital transformation process has been planned yet.
Regarding point 2, the hardest issue is hiring data scientists or data/machine learning engineers, something many companies struggle with.
In fact, a March 2021 O’Reilly survey of enterprise AI adoption found that “the most significant barrier to AI adoption is the lack of skilled people and the difficulty of hiring.” And in 2019 it was estimated that there were around 144,000 AI-related job openings but only around 26,000 developers and specialists seeking work.
Of course, hiring an internal data scientist is not the only challenge in restructuring the workforce. Often a corporation has to retrain entire teams to fully benefit from a new AI software.
Let me give an example. As many readers know, a sales process involves three stages: lead generation, Q&A calls/emails with potential clients, and deal closing. A couple of years ago AI Technologies was engaged to automate the Q&A stage, and we built an AI bot to manage the 'standard questions' a potential client may ask (without getting into the details: using AI, technically word2vec encodings, it is quite feasible to automate emails or a chatbot for standardized questions like 'how much does it cost?' or 'how long is the warranty?'). Adopting this new internal solution meant the team responsible for the Q&A stage would have had to be retrained to increase either the number of leads or the number of closings. The company simply decided not to embark on the transformation process required to benefit from the new AI adoption.
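For illustration, the matching behind such a bot can be sketched as a nearest-neighbour lookup over question embeddings. The 3-dimensional vectors below are toy stand-ins for real word2vec sentence vectors, and the FAQ entries are invented; a production system would embed incoming text with a trained model first.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def answer(question_vec, faq):
    """Pick the canned answer whose question embedding is closest to
    the incoming question's embedding."""
    best = max(faq, key=lambda item: cosine(question_vec, item[0]))
    return best[1]

# Toy FAQ: (embedding of the canonical question, canned answer).
faq = [
    ([1.0, 0.1, 0.0], "It costs $99 per month."),
    ([0.0, 1.0, 0.2], "The warranty lasts two years."),
]

# An incoming question whose embedding is close to the pricing question.
print(answer([0.9, 0.2, 0.0], faq))  # It costs $99 per month.
```

Questions far from every FAQ entry would, in practice, be routed to a human by thresholding the best similarity score.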
This example, in various forms, is actually quite common: unless they are really innovative, companies prefer to stick with their established internal procedures until a competitor threatens their profitability.
This points to the fact that AI is not an off-the-shelf solution that can be plugged in with no effort. From the moment a POC is under development, it is good practice to plan a digital transformation process within the company.
It is also worth mentioning that the workforce is unlikely to be dismissed or made redundant following AI adoption, as many expected. In the example above, what the AI bot actually does is take over the repetitive tasks (Q&A) so people can do more creative work: engaging more clients (lead generation) or convincing them to buy (deal closing). Some people have to be retrained, but it also means that with the same staff you can generate and close more sales.
It is a misconception to think that AI solutions will make human work redundant; we just need to adapt to new jobs.
My example resembles the classic story of ATM adoption. When ATMs were introduced in 1969, conventional wisdom expected the number of banking locations to shrink; instead, branches became cost-effective to run, and it became possible to open many more of them. There were under 200,000 bank tellers in 1970, but over 400,000 a decade later.
The other common problem companies face when embracing AI (point 1) is their current infrastructure: databases, servers, and CRM systems have to already be in place. Put simply, any AI system requires data to work with, so it naturally sits on top of the data infrastructure used in day-to-day business operations.
Over the last two years AI Technologies has been engaged by a large public organization (70,000 employees) to build a solution that automatically detects employees maliciously manipulating their data. To build the AI software we also had to design a system to stream data from each employee terminal into a central database for processing. This infrastructure was not present at the start of the project: before the need for malicious-behavior detection arose, the organization had never realized the necessity of gathering certain data. Simple login and logout times were all that was needed to monitor employee activity (which company folders or files they accessed was not considered important).
This is a common situation: most companies' infrastructure is simply not ready to be used directly with AI solutions, because it was designed with other objectives in mind.
For the sake of completeness, most companies decide to invest their internal resources in other areas of the business rather than in CRM or expensive data infrastructure. There is no blame in this choice; in the end, any business has to be profitable, and the return on infrastructure investment is not always easy to quantify.
If anything, this article should have given an idea of the major pitfalls in approaching AI projects, which can be summarized as follows:
• AI solutions are not off-the-shelf, ready-made software that can be put to use immediately: they often require new skilled hires within the client organization and potentially a plan for how to redeploy part of the workforce.
• It is largely a myth that AI solutions will necessarily replace employees, although it is likely that they will have to be retrained.
• Any AI project works on data, and the infrastructure to provide it is necessary to benefit from the new solution. Before embarking on an AI project, an organization has to budget either for new infrastructure or, at the very least, for an upgrade of the one in use.
In essence, given the implications for both employees and infrastructure, AI adoption should be considered a digital transformation process more than a software development project. After the overwhelming hype of recent years, I would expect that in the next 2-3 years more companies will start to realize what AI projects really are and how best to use them.