AI Deep Learning Crimes: Be Cautious


With the advancement of technology, especially deepfakes, it is not far-fetched that they will be used by people with malicious intent. People have previously attempted to use technology to rig elections; Russia, for instance, is believed to have used such techniques in an attempt to influence the US presidential election. Technology can also be used to spread propaganda or even to start a war.

Deepfake videos are exploding online. Recently, a Reddit user who published deepfake pornographic videos of celebrities saw the videos go viral around the world, with hundreds of millions of users sharing them within a few days. Since then, popular deepfakes that have been posted online and gone viral include an Obama deepfake and a Nicolas Cage deepfake. There is even an app called ZAO that allows people to transplant themselves into scenes from their favorite movies.


With advances in Artificial Intelligence (AI) and the evolution of cloud infrastructure, it is becoming easier and cheaper to develop these technologies. Fundamentally, they work by assembling a sample dataset of celebrity images from publicly available sources and then training an AI algorithm to learn the features of the face. This is a departure from the past, when fakes were created using Photoshop.
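The idea of "learning the features of a face" from a dataset can be sketched in miniature. The toy example below is not a real deepfake pipeline: it uses PCA on synthetic image vectors as a stand-in for the compact face representation an actual deepfake model would learn from thousands of scraped photos. All names and data here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a scraped dataset: 200 synthetic 16x16 "face" images,
# flattened to vectors (a real pipeline would use thousands of photos).
faces = rng.normal(size=(200, 16 * 16))

# "Learn the features of the face": fit a low-dimensional basis via PCA.
mean_face = faces.mean(axis=0)
centered = faces - mean_face
# SVD of the centered data gives the principal components
# (the learned "face features").
_, _, components = np.linalg.svd(centered, full_matrices=False)
basis = components[:32]          # keep the 32 strongest features

def encode(face):
    """Project a face image onto the learned feature space."""
    return (face - mean_face) @ basis.T

def decode(code):
    """Reconstruct a face image from its feature-space code."""
    return code @ basis + mean_face

code = encode(faces[0])          # compact 32-number description of a face
reconstruction = decode(code)    # approximate face rebuilt from features
print(code.shape, reconstruction.shape)
```

A deepfake generator exploits exactly this kind of learned encoding: once faces can be compressed to and rebuilt from a shared feature space, one person's expression can be decoded through another person's face model.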

Such fakes were easier to detect than those created using AI. Because they are so difficult to detect, AI-generated deepfakes are a serious threat to national security, and the government has taken notice. For instance, members of Congress Adam B. Schiff, Stephanie Murphy, and Carlos Curbelo wrote a letter to Daniel R. Coats, Director of National Intelligence, asking him to report to Congress on the implications of new technologies that allow malicious actors to fabricate audio, video, and still images.


To detect deepfake videos, an algorithm can evaluate a video frame by frame to spot discrepancies. Online platforms and government agencies have taken notice and begun acting on deepfakes. Reddit, the social media platform where the deepfake porn videos went viral, has been proactive about enforcing its rules: it permanently banned the user who made the pornographic deepfake videos, along with other users who were spreading deepfake content. In addition, the Defense Advanced Research Projects Agency (DARPA) has announced funding for technology that can identify manipulated videos and deepfakes, noting that existing statistical methods for detecting fakes in media are becoming obsolete.
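A frame-by-frame discrepancy check of the kind described above can be sketched as follows. This is a simplified illustration on synthetic frame data, not an actual forensic detector: it flags frames whose change from the previous frame is a statistical outlier relative to the video's typical frame-to-frame motion. The function name and threshold are illustrative choices.

```python
import numpy as np

def flag_suspicious_frames(frames, z_threshold=3.0):
    """Flag frames whose change from the previous frame is a
    statistical outlier -- a crude stand-in for the discrepancy
    checks a real deepfake detector would run."""
    frames = np.asarray(frames, dtype=float)
    # Mean absolute pixel difference between consecutive frames.
    diffs = np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))
    mu, sigma = diffs.mean(), diffs.std()
    # A transition is suspicious if its change is far from the norm.
    return [i + 1 for i, d in enumerate(diffs)
            if sigma > 0 and abs(d - mu) > z_threshold * sigma]

# Synthetic 8x8 grayscale video: smooth brightness drift,
# with one frame replaced by unrelated content (the "tampering").
rng = np.random.default_rng(1)
video = [np.full((8, 8), t) + rng.normal(scale=0.1, size=(8, 8))
         for t in range(50)]
video[25] = rng.normal(scale=50.0, size=(8, 8))   # injected anomaly

print(flag_suspicious_frames(video))
```

Note that both transitions touching the tampered frame stand out, so the detector flags the frames around index 25; a real system would run far richer per-frame checks (blinking patterns, lighting consistency, facial landmarks) rather than raw pixel differences.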
