Article | January 4, 2021
2020 has been an unprecedented year where we have seen more downs than ups. COVID-19 has impacted every aspect of our lives. But when it comes to digitisation and Artificial Intelligence, we have seen some impactful developments and achievements. As we approach the end of 2020, it is worth looking back at these AI stories to highlight the truths and discuss what they mean for the future direction of AI.
The Great Truth:
Artificial intelligence played a crucial role in the detection and fight against COVID-19.
Indeed, we have seen AI emerge in hospitals to evaluate chest CT scans. Using deep learning and image recognition, COVID patients were diagnosed, enabling medical teams to follow the necessary protocols. Another application was the triage of COVID-19 patients: once a patient was diagnosed with COVID, AI was used to predict the likely severity of the illness so that medical staff could prioritize resources and treatments.
COVID has highlighted the need to deploy intelligent autonomous agents. As a result, we have seen both robots used at hospitals to diagnose COVID-19 patients and drones deployed to monitor if the public is adhering to social distancing rules.
Another major AI contribution in the fight against COVID-19 is in the area of vaccine and drug discovery. Moderna's vaccine, which has been approved by the US Food and Drug Administration, used machine learning to optimise mRNA sequencing.
The above is proof that AI can make a great contribution to mankind if it is used for “good”.
The Glowing Truths:
Some impressive AI results have been achieved. However, to leap forward, a holistic and sustainable approach is needed.
2020 has seen some great AI achievements and leaps forward. The first example is DeepMind's AlphaFold. The model scored highest at the Critical Assessment of Structure Prediction competition. The algorithm takes genetic information as input and outputs a three-dimensional structure. The model has impressively addressed a 50-year-old challenge of figuring out what shapes proteins fold into, known as the “protein folding problem”.
While DeepMind's AlphaFold is a great achievement, some scientists note that it is unclear how the model will perform on more complex, real-world proteins. Thus, more work is needed in this area.
The second example is OpenAI's GPT-3. The model is a very large network composed of 96 layers and 175 billion parameters. It has shown impressive results on several natural language tasks, such as question answering and generating code.
However, it is noted that the model does not perform any kind of reasoning and does not understand what it is generating. Furthermore, its large size makes it very expensive. Its carbon footprint is also unsustainable: its training is said to be equivalent to driving a car to the moon and back.
While AlphaFold and GPT-3 are both impressive achievements, there are some philosophical questions that need to be answered. The first question concerns games and simulated worlds versus the real world. Models often succeed in a simulated world but fail in the real world, where the environment is more complex. How can we close the gap? How can we make AI models succeed at complex tasks? I guess the first step is to apply AI to real-world examples with varied levels of complexity.
The second question is about the structure and size of AI models. Do models have to be big? Can we come up with a new generation of algorithms and models that are smaller in size and computationally more efficient? To answer this question, we may have to take a pause on deep learning and explore new avenues.
The Gross Truths:
Ethics and bias remain the main drawbacks of Artificial Intelligence.
Over the last year, we saw several prominent examples of AI ethics and bias issues. The first relates to facial recognition: after repeated calls against mass surveillance, racial profiling, and bias, and in light of the Black Lives Matter movement that started in the United States, several tech companies, such as Microsoft, barred the police from using their facial recognition technology.
The second example relates to the use of an algorithm to predict exam results during the COVID-19 period: after accusations and protests that the controversial algorithm was biased against students from poorer backgrounds, the United Kingdom government was forced to ditch it.
In the absence of regulation and stronger governance frameworks, ethics and bias will continue to be the main concerns surrounding the use of artificial intelligence.
Looking into the future, AI adoption will continue to accelerate, and we will probably see more breakthroughs, but only if we start looking at the subject in a holistic and sustainable way. Focusing models on real-world problems and reducing models' carbon footprints will be a major step forward. We need to move away from thinking that “more” is always “more”. Sometimes “more” is “less”.
Article | May 24, 2021
Reveal makes extensive use of AI models. Generally, an AI model is a software program that has been trained on a set of data to perform specific tasks, like recognizing certain patterns. Artificial intelligence models use decision-making algorithms to learn from the training data and apply that learning to achieve specific pre-defined objectives.
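The idea of "trained on data to recognize patterns" can be sketched in a few lines. This is a minimal illustrative example only, not Reveal's technology: a toy nearest-centroid classifier where the data points and class labels are invented for the sketch.

```python
import numpy as np

# Hypothetical labelled training data: 2-D feature vectors with two classes.
train_x = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
train_y = np.array([0, 0, 1, 1])

# "Training": learn one centroid (average pattern) per class from the data.
centroids = np.array([train_x[train_y == c].mean(axis=0) for c in (0, 1)])

def classify(x):
    # Apply the learned pattern: assign the class of the nearest centroid.
    dists = np.linalg.norm(centroids - x, axis=1)
    return int(np.argmin(dists))

label = classify(np.array([0.15, 0.15]))  # near the class-0 cluster → 0
```

Real models are far larger and learned by iterative optimisation rather than averaging, but the shape is the same: fit parameters to training data, then apply them to new inputs.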
Reveal offers a Model Library which consists of a collection of pre-existing models you can use straight out of the box, extend or adapt to suit your specific needs, or stack and pack to achieve a larger objective. We give you the ability to create your own AI models, which you can use for your own purposes as well as make available to others via our Model Marketplace. We also will work with you to create custom models such as the ones that drive DLA Piper's Aiscension and those used by Epiq in its new AI Model Library program.
Article | February 26, 2020
When talking about advances in artificial intelligence, we hear a lot about adversarial attacks, specifically those that attempt to “deceive” an AI into believing, or to be more accurate, classifying, something incorrectly. For example, autonomous vehicles can be fooled into “thinking” stop signs are speed limit signs, pandas can be identified as gibbons, and even your favorite voice assistant can be fooled by inaudible acoustic commands. Such examples showcase the narrative around AI deception. In another form, AI can be deceptive in manipulating the perceptions and beliefs of a person through “deepfakes” in video, audio, and images. The significant AI conferences held around the world are addressing the subject of AI deception more frequently too. And yet a lot of debate and discussion is happening on how we can defend against it through detection mechanisms.
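The panda-to-gibbon example above comes from gradient-based evasion attacks, the best known of which is the fast gradient sign method (FGSM). As a minimal sketch, assuming a toy logistic-regression classifier with made-up weights (not any real vision model), the attack nudges the input in the direction that most increases the model's loss:

```python
import numpy as np

# Toy "trained" binary classifier: fixed illustrative weights.
w = np.array([2.0, -1.0])
b = 0.0

def predict_prob(x):
    # sigmoid(w.x + b): model's probability that x belongs to class 1
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

def fgsm_perturb(x, y, eps):
    # Gradient of the binary cross-entropy loss w.r.t. the input x,
    # then a small step in the sign of that gradient (FGSM).
    grad = (predict_prob(x) - y) * w
    return x + eps * np.sign(grad)

x = np.array([0.5, 0.2])              # clean input, true label 1
clean = predict_prob(x)               # > 0.5: classified correctly
x_adv = fgsm_perturb(x, 1.0, eps=0.6)
adv = predict_prob(x_adv)             # < 0.5: now misclassified
```

The same principle applied to an image classifier yields perturbations too small for a human to notice yet large enough to flip the predicted label, which is what makes detection and defence an active area of debate.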
Article | March 11, 2020
After realizing the potential to effect change while studying systems engineering at the University of Virginia, Brigitte Hoyer Gosselink began her journey to discover how technology might have a scalable impact on the world. Gosselink worked in international development and later did strategy consulting for nonprofits before joining Google.org, where she is focused on increasing social impact and environmental sustainability work at innovative nonprofits. We talked to her about her efforts as head of product impact to bring emerging technology to organizations that serve humanity and the environment.