Deepfake Detectors Can Be Fooled into Identifying Fake Images as Real

  • Even the best forensic classifiers — AI systems trained to distinguish between real and synthetic content — are susceptible to adversarial attacks, i.e., attacks leveraging inputs designed to cause models to make mistakes.

  • Researchers demonstrated that it’s possible to bypass fake video detectors by adversarially modifying videos synthesized using existing AI generation methods.

  • It’s a troubling, if not necessarily new, development for organizations attempting to productize fake media detectors, particularly considering the meteoric rise in deepfake content online.


In a paper published this week on the preprint server arXiv.org, researchers from Google and the University of California at Berkeley demonstrate that even the best forensic classifiers — AI systems trained to distinguish between real and synthetic content — are susceptible to adversarial attacks, i.e., attacks leveraging inputs designed to cause models to make mistakes. Their work follows that of a team of researchers at the University of California at San Diego, who recently demonstrated that it’s possible to bypass fake video detectors by adversarially modifying videos synthesized using existing AI generation methods — specifically, by injecting information into each frame.


It’s a troubling, if not necessarily new, development for organizations attempting to productize fake media detectors, particularly considering the meteoric rise in deepfake content online. Fake media might be used to sway opinions during an election or implicate a person in a crime, and it’s already been abused to generate pornographic material of actors and defraud a major energy producer.


The researchers first tackled the simpler task of evaluating classifiers to which they had unfettered access. Using this “white-box” threat model and a data set of 94,036 sample images, they modified synthesized images so that they were misclassified as real and vice versa, applying various attacks — a distortion-minimizing attack, a loss-maximizing attack, a universal adversarial-patch attack, and a universal latent-space attack — to a classifier taken from the academic literature.


The distortion-minimizing attack, which involved adding a small perturbation (i.e., modifying a subset of pixels) to a synthetically generated image, caused one classifier to misclassify 71.3% of images with only 2% pixel changes and 89.7% of images with 4% pixel changes. Perhaps more alarmingly, the model classified 50% of real images as fake after the researchers distorted under 7% of the images’ pixels.
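
Conceptually, the attack is simple to sketch. The PyTorch snippet below is a minimal illustration of a distortion-minimizing attack, assuming a hypothetical differentiable classifier `clf` that returns the probability an input is fake, and an `image` tensor of shape (C, H, W) with values in [0, 1]; every name here is illustrative, and this is a sketch of the general technique rather than the paper's implementation.

```python
import torch

def distortion_minimizing_attack(clf, image, steps=100, pixels_per_step=10):
    """Greedily alter the pixels whose gradients most reduce P(fake),
    stopping as soon as the classifier scores the image as real."""
    x = image.clone().detach()
    h, w = image.shape[1:]
    for _ in range(steps):
        x.requires_grad_(True)
        p_fake = clf(x.unsqueeze(0)).squeeze()  # scalar: probability of "fake"
        if p_fake.item() < 0.5:                 # misclassified as real: done
            break
        p_fake.backward()
        grad = x.grad.detach()
        x = x.detach()
        # Rank spatial locations by total gradient magnitude across channels
        idx = grad.abs().sum(dim=0).flatten().topk(pixels_per_step).indices
        for i in idx:
            r, c = (i // w).item(), (i % w).item()
            # Push each chosen pixel one full step against the gradient
            x[:, r, c] = (x[:, r, c] - grad[:, r, c].sign()).clamp(0.0, 1.0)
    return x.detach()
```

Because only a handful of high-impact pixels are stepped per iteration, the perturbation stays sparse, which mirrors the paper's framing of attack strength as the percentage of pixels changed.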


As for the loss-maximizing attack, which fixed the image distortion below a specified threshold and then maximized the classifier’s error, it reduced the classifier’s accuracy from 96.6% to 27%. The universal adversarial-patch attack was even more effective — a visible noise pattern overlaid on two fake images spurred the model to classify them as real with 98% and 86% likelihood, respectively. And the final attack — the universal latent-space attack, in which the team modified the underlying representation used by an image-generating model to yield an adversarial image — reduced classification accuracy from 99% to 17%.
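
For readers who want the mechanics, the loss-maximizing attack corresponds closely to standard projected gradient descent (PGD): perturb the image to push the classifier's "fake" probability down, then project the result back into a fixed L-infinity ball around the original. The sketch below makes the same assumptions as the previous one (a hypothetical `clf` returning P(fake)); it is one plausible rendering of the technique, not the authors' code.

```python
import torch

def loss_maximizing_attack(clf, image, eps=4/255, alpha=1/255, steps=40):
    """PGD under a fixed L-infinity budget: drive P(fake) down while the
    perturbation stays within eps of the original image."""
    x_adv = image.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        p_fake = clf(x_adv.unsqueeze(0)).squeeze()
        loss = -torch.log(1.0 - p_fake + 1e-12)  # small when P(fake) is small
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv - alpha * x_adv.grad.sign()         # descend the loss
            x_adv = image + (x_adv - image).clamp(-eps, eps)  # project into budget
            x_adv = x_adv.clamp(0.0, 1.0)                     # keep pixels valid
    return x_adv.detach()
```

The budget `eps` plays the role of the paper's fixed distortion threshold: the attack never strays more than that amount from any original pixel value.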


READ MORE: EVEN THE AI BEHIND DEEPFAKES CAN’T SAVE US FROM BEING DUPED


The researchers next investigated a “black-box” attack in which the inner workings of the target classifier were unknown to them. They built a source classifier of their own by collecting one million images synthesized by an AI model along with one million of the real images on which that model was trained, training a separate system to classify images as fake or real, and then generating white-box adversarial examples against this source classifier using a distortion-minimizing attack. They report that this reduced their own classifier’s accuracy from 85% to 0.03%, and that, when applied to a popular third-party classifier, it reduced that classifier’s accuracy from 96% to 22%.
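
In outline, this transfer attack has two stages: train a surrogate ("source") classifier on data you control, then run any white-box attack against the surrogate and submit the result to the unseen target. The PyTorch sketch below assumes hypothetical components (a binary classifier model, a labeled data loader, and the PGD attack sketched above) and illustrates the workflow rather than the paper's exact setup.

```python
import torch
import torch.nn as nn

def train_surrogate(model, loader, epochs=5, lr=1e-4):
    """Train a binary real-vs-fake classifier to stand in for the target."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    bce = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for images, labels in loader:  # labels: 1 = fake, 0 = real
            opt.zero_grad()
            loss = bce(model(images).squeeze(1), labels.float())
            loss.backward()
            opt.step()
    return model

def transfer_attack(surrogate, target, image, white_box_attack):
    """Craft an adversarial example on the surrogate, then query the target.
    If the attack expects probabilities, wrap the surrogate with a sigmoid."""
    adv = white_box_attack(surrogate, image)  # e.g., the PGD sketch above
    with torch.no_grad():
        score = target(adv.unsqueeze(0))      # single query to the black box
    return adv, score
```

The attack works to the extent that the surrogate and the target learned similar decision boundaries, which is why adversarial examples crafted on one detector transferred to the third-party classifier.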

To the extent that synthesized or manipulated content is used for nefarious purposes, the problem of detecting this content is inherently adversarial. We argue, therefore, that forensic classifiers need to build an adversarial model into their defences.

- Researchers


Demonstrating attacks on sensitive systems is not something that should be taken lightly, or done simply for sport. However, if such forensic classifiers are currently deployed, the false sense of security they provide may be worse than if they were not deployed at all — not only would a fake profile picture appear authentic, now it would be given additional credibility by a forensic classifier. Even if forensic classifiers are eventually defeated by a committed adversary, these classifiers are still valuable in that they make it more difficult and time-consuming to create a convincing fake.

- Researchers


Fortunately, a number of companies have published corpora in the hopes that the research community will pioneer new detection methods. To accelerate such efforts, Facebook — along with Amazon Web Services (AWS), the Partnership on AI, and academics from a number of universities — is spearheading the Deepfake Detection Challenge, which includes a data set of video samples labeled to indicate which were manipulated with AI. In September 2019, Google released a collection of visual deepfakes as part of the FaceForensics benchmark, which was co-created by the Technical University of Munich and the University Federico II of Naples. More recently, researchers from SenseTime partnered with Nanyang Technological University in Singapore to design DeeperForensics-1.0, a data set for face forgery detection that they claim is the largest of its kind.


READ MORE: UNDERSTANDING AI DECEPTION AND HOW ONE CAN PREPARE AGAINST IT
