Improving SD-WAN Infrastructure Security with IBM

September 14, 2017

Software-Defined WAN (SD-WAN) helps enterprises accelerate hybrid cloud adoption. It also optimizes the cost of using different network technologies, such as Multiprotocol Label Switching (MPLS) and commercial broadband, to connect remote offices and branches to one another. While the move to SD-WAN makes business sense, SD-WAN users worry that this relatively new, game-changing approach to network access leaves their networks vulnerable.

Spotlight

e-BizSoft

Whatever your next goal is – upgrading your accounting system, automating manual processes, ecommerce enabling your website, going international with your business, reducing your IT running costs, or simply seeing how your business is doing on a day to day basis more easily & quickly – we’ve done it all, and we can do it all for you.

OTHER ARTICLES

RPA and the Future of Outsourcing

Article | December 21, 2020

There is no doubt that technology in industry has advanced exponentially in recent years, but so have customer expectations. Customers now expect their questions answered and their needs met almost instantaneously, putting an added burden on companies to keep up with demand while keeping the cost of doing business at a reasonable level. Businesses must evolve to survive in the current climate, and Robotic Process Automation may be the catalyst companies need to take the next leap forward.

What is Robotic Process Automation?

Robotic Process Automation (RPA) is software designed to mimic mundane or repetitive human tasks. It removes the burden of repetitive processing from people, freeing them to handle the more complex tasks and problems within a company. Automation software can be programmed to do a wide array of technological jobs, following whatever rules it is given.

Types of Businesses that can benefit from RPA

Put simply, many businesses that rely on technology can benefit from intelligent automation. Here are a few examples:

· Computer/IT and telecommunication companies: These companies require a lot of customer support, much of which can be handled by automation software. RPA can create an electronic ticket when a customer sends in a question or request for service, respond to it, and transfer the ticket to the correct human worker to be completed (a minimal sketch of such a routing rule appears at the end of this article).

· Accounting firms: Automated software can contact clients, confirm that payments are being made, and even sync with banks online, taking human error out of the equation.

· Online stores: Online stores can and do use RPA to accept orders and communicate with customers without the need for a live person, freeing customer service agents to handle inquiries of higher priority or those that require critical thinking.

Future of Outsourcing

RPA can change the way companies outsource. Companies often outsource to foreign countries to get tasks completed when they cannot afford to hire workers locally. This sends company money abroad to pay the outsourced workers, removing income from the local economy while risking that customers do not receive the best quality of service. Automation software takes the outsourced worker out of the equation, filling in for jobs previously handled overseas. Not only does this keep employee expenditure within the country where the company does business, it also removes the stress and stigma associated with hiring staff from other countries.

A company that chooses to invest in technology is ultimately investing in the satisfaction of its customers and the longevity of its business. Intelligent automation lets organizations keep local workers on staff to handle complex tasks while the technology handles the more mundane duties. This can dramatically cut costs while increasing the quality of service. As a result, a company's bottom line should improve while consumers receive better support and service, helping the business build a loyal group of repeat customers for years to come.
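To make the ticket-routing idea above concrete, here is a minimal, hypothetical sketch of an RPA-style rule in Python. None of the names below come from a real RPA product; the keyword lists, queue names and acknowledgement text are illustrative assumptions only.

# Hypothetical RPA-style rule: auto-acknowledge an incoming support ticket
# and route it to the right human queue based on simple keyword matching.

ROUTING_RULES = {
    "billing": ["invoice", "payment", "refund", "charge"],
    "network": ["outage", "connection", "slow", "router"],
    "accounts": ["password", "login", "locked"],
}

def route_ticket(subject: str, body: str) -> str:
    """Return the name of the human queue a ticket should be sent to."""
    text = f"{subject} {body}".lower()
    for queue, keywords in ROUTING_RULES.items():
        if any(word in text for word in keywords):
            return queue
    return "general"  # fall back to a catch-all queue for a person to triage

def acknowledge(customer_name: str) -> str:
    """Generate the instant reply sent before a human picks up the ticket."""
    return f"Hi {customer_name}, we have received your request and routed it to our team."

# Example: an incoming email becomes a ticket, gets an instant reply,
# and lands in the "billing" queue for a human agent.
print(acknowledge("Dana"))
print(route_ticket("Question about my invoice", "I was charged twice this month."))

A real deployment would pull tickets from an email or helpdesk system and push them into its queues, but the pattern of rule-driven triage plus human hand-off is the same.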

Read More

Neuromorphic Chips: The Third Wave of Artificial Intelligence

Article | April 10, 2020

The age of traditional computing is reaching its limit. Without fresh innovation it is difficult to move past the current technology threshold, so a major design transformation with improved performance is needed to change the way we view computers. Moore's Law (formulated by Gordon Moore in 1965) states that the number of transistors in a dense integrated circuit doubles about every two years while their price halves, but the law is now losing its validity. Hardware and software experts have therefore proposed two paths forward: quantum computing and neuromorphic computing. While quantum computing has made major strides, neuromorphic computing remained largely in the lab until Intel announced its neuromorphic chip, Loihi. This may mark the third wave of Artificial Intelligence.
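As a quick illustration of the doubling rule quoted above, a few lines of Python project transistor counts forward from the Intel 4004 (roughly 2,300 transistors in 1971). The starting point and the exact two-year period are illustrative assumptions, not figures from the article.

# Project transistor counts under Moore's Law: the count doubles every ~2 years.
base_year, base_count = 1971, 2_300   # Intel 4004, used here as an illustrative baseline

def transistors(year: int, period_years: float = 2.0) -> float:
    """Estimated transistor count assuming one doubling every `period_years`."""
    return base_count * 2 ** ((year - base_year) / period_years)

for year in (1971, 1991, 2011, 2021):
    print(year, f"{transistors(year):,.0f}")
# 2021 comes out near 80 billion, roughly the scale of today's largest chips,
# and a reminder that exponential growth of this kind cannot hold indefinitely.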

Read More

Real-time Object Detection in Video (with intro to Yolo v3)

Article | March 15, 2021

A few weeks ago, I created a Tensorflow model that would take an image of a street, break it down into regions, and - using a convolutional neural network - highlight areas of the image that contained vehicles, such as cars and trucks. I called it an image classification AI, but what I had really created was an object detection and location program, following a typical convolutional method that has been used for decades - as far back as the seventies. By cutting my input image into regions and passing each one through the network to get class predictions, I had created an algorithm that roughly locates classified objects. Other people have created programs that perform this operation frame by frame on real-time video, allowing computer programs to draw boxes around recognised objects, understand what they are, and track their motion through space.

In this article, I'll give an introduction to object detection in real-time video. I'll explain why this kind of artificial intelligence is important, how it works, and how you can implement your own system with Yolo V3. From there, you can build a huge variety of real-time object tracking programs, and I've included links to further material too.

Importance of real-time object detection

Object detection, location and classification has become a massive field in machine learning over the past decade, as GPUs and faster computers have made it practical to run more computationally expensive deep learning networks. Real-time object detection in video is one such application, and it has been used for a wide variety of purposes in recent years. In surveillance, convolutional models have been trained on human facial data to recognise and identify faces; an AI can then analyse each frame of a video, locate recognised faces and classify them with remarkable precision. Real-time object detection has also been used to measure traffic levels on heavily frequented streets: an AI can identify cars, count the number of vehicles in a scene and track that number over time, providing crucial information about congested roads. In wildlife monitoring, given enough training data a model can learn to spot and classify types of animals; one nice example tracked raccoons through a webcam. All you need is enough training images to build your own custom model, and such programs are actively being used all around the world.

Background to Yolo V3

Until about ten years ago, the technology required to perform real-time object tracking was not available to the general public. Fortunately for us, in 2021 there are many machine learning libraries available and practically anyone can get started with these programs. Arguably the best object detection algorithm for amateurs - and often even professionals - is You Only Look Once, or YOLO. This family of algorithms and pre-trained models was first introduced in 2016 and became incredibly popular thanks to its impressive accuracy and speed, which lend it easily to live computer vision.

The method for object detection and recognition that I mentioned at the start of this article happens to be a fairly established technique. Traditional object recognition would split each frame of a video into "regions", flatten them into strings of pixel values, and pass them through a deep learning network one by one. The network would then output a value between 0 and 1 indicating the chance that the specific region contains a recognized object - or rather, part of a recognized object - within its bounds. Finally, the algorithm would keep all the regions above a particular "certainty" threshold and compile adjacent regions into bounding boxes around recognized objects.
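To make that region-by-region approach concrete, here is a minimal sketch of such a detector in Python with NumPy. It is not the code from my earlier project; the window size, stride, threshold and the classifier stub are illustrative assumptions, and a real system would plug a trained CNN in where the stub sits.

import numpy as np

def classify_region(region: np.ndarray) -> float:
    """Stand-in for a trained CNN: return the probability (0-1) that this
    region contains part of a recognized object. Dummy placeholder here."""
    return float(region.mean() > 0.5)  # illustrative only

def detect_by_regions(frame: np.ndarray, win=64, stride=32, threshold=0.7):
    """Slide a window over the frame, score each region, and keep the
    regions whose score clears the threshold as rough detections."""
    boxes = []
    h, w = frame.shape[:2]
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            score = classify_region(frame[y:y + win, x:x + win])
            if score >= threshold:
                boxes.append((x, y, x + win, y + win, score))
    return boxes  # adjacent boxes would then be merged into larger bounding boxes

# Example on a random "frame"; a real pipeline would pass in video frames.
print(detect_by_regions(np.random.rand(256, 256)))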
Fairly straightforward, but when it comes down to the details, this algorithm isn't exactly the best. Yolo V3 uses a different method to identify objects in real-time video, and it is this method that gives it its desirable balance between speed and accuracy, allowing it to detect objects fairly accurately and draw bounding boxes around them at about thirty frames per second. At its core is Darknet-53, a 53-layer fully convolutional network (FCN); with the detection layers added, the full Yolo V3 network is 106 layers deep. While traditional methods pass one region at a time through a classifier, Yolo V3 takes the entire frame at once and runs it through all 106 layers in a single pass. The network systematically splits the image into a grid of regions, predicts probability values for each one, and then assembles connected regions into "bounding boxes" around recognized objects.

Coding

Luckily for us, there's a really easy way to implement YoloV3 on real-time video using just a webcam; the program can run on pretty much any computer. Note, however, that the library prefers a fast machine to achieve a good framerate, and if you have a GPU it's definitely worth using.

We'll use YoloV3 through a library called ImageAI. This library provides a range of machine learning tools for image and video recognition, including YoloV3, so all we have to do is download the pre-trained weights for the standard YoloV3 model (yolo.h5) and set them to work with ImageAI. Place the downloaded file in your working directory. We'll start with our imports:

import numpy as np
import cv2
from imageai import Detection

If you don't have ImageAI, you can install it with "pip install imageai" on your command line. CV2 is used to access your webcam and grab frames from it, so make sure your device's webcam settings allow access.

Next, we need to load the deep learning model. This is a pre-trained, pre-weighted Keras model that can classify objects into 80 categories (the COCO classes) and draw accurate bounding boxes around them. As mentioned before, it uses the Darknet model. Let's load it in:

modelpath = "path/yolo.h5"  # path to the downloaded yolo.h5 weights file
yolo = Detection.ObjectDetection()
yolo.setModelTypeAsYOLOv3()
yolo.setModelPath(modelpath)
yolo.loadModel()

All we're doing here is creating a detector object and loading in the Keras h5 file so it starts up with the pre-built network - fairly self-explanatory. Then we'll use CV2 to open the webcam as a camera object and set its frame size so we can grab the frames needed for object detection:

cam = cv2.VideoCapture(0)
cam.set(cv2.CAP_PROP_FRAME_WIDTH, 1300)
cam.set(cv2.CAP_PROP_FRAME_HEIGHT, 1500)

You'll need to change the 0 in cv2.VideoCapture(0) to 1 if you're using a second webcam, or if your camera isn't showing up at index 0. Great: we have imported everything, loaded our model, and set up a camera object with CV2.
We now need to create a run loop:

while True:
    ret, img = cam.read()

This grabs the next immediate frame from the webcam as an image. The program doesn't run at a set framerate; it goes as fast as your processor and camera allow. Next, still inside the loop, we get an output image with bounding boxes drawn around the detected and classified objects (it's also handy to keep the raw predictions of what the model is seeing):

    img, preds = yolo.detectCustomObjectsFromImage(input_image=img,
                                                   custom_objects=None,
                                                   input_type="array",
                                                   output_type="array",
                                                   minimum_percentage_probability=70,
                                                   display_percentage_probability=False,
                                                   display_object_name=True)

As you can see, we're just asking the model to predict the objects and output an annotated image. You can play around with minimum_percentage_probability to set the margin of confidence the model needs before it classifies an object, and if you want to see the confidence percentages on screen, set display_percentage_probability to True. To wrap up the loop, we show the annotated image and close the program if the user presses "q" or Escape:

    cv2.imshow("", img)
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q") or key == 27:
        break

The last thing to do, outside the loop, is to release the camera object and close the display window:

cam.release()
cv2.destroyAllWindows()

And that's it! It really is that simple to use real-time object detection on video. If you run the program, a window opens showing annotated frames from your webcam, with bounding boxes drawn around classified objects. We're using a pre-built model here, but many applications rely on YoloV3's standard classification network, and ImageAI offers plenty of options to train the model on custom datasets so it can recognize objects outside the standard categories (a brief sketch of that workflow appears below, after the conclusion). So you're not sacrificing much by using ImageAI. Good luck with your projects if you choose to use this code!

Conclusion

Yolo V3 is a great algorithm for object detection that can pick out a multitude of objects with impressive speed and accuracy, making it ideal for video feeds as we showed in the example above. Yolo V3 is powerful on its own, but its true strength comes when it is combined with other algorithms that help it process information faster or increase the number of detected objects. Similar algorithms are used in industry today and have been refined over the years. Self-driving cars, for example, use techniques much like those described in this article, together with lane-recognition algorithms and bird's-eye-view mapping, to model a car's surroundings and pass that information to the driving system, which then decides the best course of action.
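For readers who want to go beyond the standard categories, the sketch below shows the general shape of ImageAI's custom detection training workflow. It is a sketch, not code from this article: the exact method names may vary between ImageAI versions (check the ImageAI documentation for your release), and the dataset directory, class name and training parameters are placeholders.

from imageai.Detection.Custom import DetectionModelTrainer

trainer = DetectionModelTrainer()
trainer.setModelTypeAsYOLOv3()
# Expects a Pascal VOC-style dataset: my_dataset/train and my_dataset/validation,
# each containing "images" and "annotations" folders.
trainer.setDataDirectory(data_directory="my_dataset")
trainer.setTrainConfig(object_names_array=["raccoon"],      # your custom class names
                       batch_size=4,
                       num_experiments=50,                   # training epochs
                       train_from_pretrained_model="pretrained-yolov3.h5")
trainer.trainModel()

Once training finishes, the saved checkpoint can be loaded with ImageAI's custom detection class in place of the standard yolo.h5 weights, and the webcam loop above works the same way.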

Read More

How Landing AI is Using Machine Learning to Monitor Social Distancing

Article | April 20, 2020

One of the measures to reduce the spread of coronavirus in the current crisis is social distancing: when outside, people should stay at least 2 metres (6 feet) away from each other at all times. While this is not a difficult requirement for most people to meet, workers in industries identified as essential, who continue to work through the current period of quarantine, will not find the rule as easy to follow. This is why Landing AI has created an AI-powered tool to ensure that people maintain a safe distance from each other.
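Landing AI has not published the code behind its tool, but the basic check described above can be illustrated in a few lines: given the positions of people detected in a frame (for example by an object detector like the Yolo V3 pipeline in the article above), flag any pair standing closer than the 2-metre threshold. The pixels-per-metre scale and the detection format are illustrative assumptions.

from itertools import combinations
import math

PIXELS_PER_METRE = 100  # illustrative camera calibration: 100 px is about 1 m
MIN_DISTANCE_M = 2.0    # the 2-metre social distancing rule

def too_close(people_centres):
    """Return pairs of detected people standing closer than the threshold.
    `people_centres` is a list of (x, y) pixel coordinates, one per person."""
    violations = []
    for (i, a), (j, b) in combinations(enumerate(people_centres), 2):
        dist_m = math.dist(a, b) / PIXELS_PER_METRE
        if dist_m < MIN_DISTANCE_M:
            violations.append((i, j, round(dist_m, 2)))
    return violations

# Example: three people detected in one frame; the first two are ~1.5 m apart.
print(too_close([(100, 200), (250, 200), (700, 450)]))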

Read More

