AI, Neural Networks, and Deep Fakes

How do you know whether the video you are watching is real? What about the news? As artificial intelligence and neural networks continue to improve, these questions become harder and harder to answer.

Artificial intelligence (AI) has grown significantly over the years. In 2011, IBM's Watson played Jeopardy! against two human champions and won: one held the longest unbeaten streak at 74 games, and the other had earned the largest total winnings at $3.25 million. At the time, Watson ran on 10 racks of servers in a data center. AI has since improved to need far fewer resources while benefiting from years of hardware development. Watson was also as large as it was in order to deliver near-human response times; if your application can tolerate longer processing time, artificial intelligence can run on even the most basic of computers. Many people picture artificial intelligence as the all-knowing computer from countless science fiction series, but it can be far more mundane. So how does this relate?

Good Guy Neural Networks

On the “good side,” a basic artificial intelligence could gather news from the Internet and generate stories and podcasts for your morning commute based on your prior listening habits. Google uses a subset of artificial intelligence, deep learning with neural networks, to improve your search results. Credit card issuers run transactions through neural networks to scan for “out of behavior” charges that may indicate fraud, then send a text message asking you to confirm the purchase or call the fraud prevention center. Financial institutions can likewise scan transactions for signs of money laundering or compromised wire transfers.
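
To make the fraud-scanning idea concrete, here is a minimal sketch of “out of behavior” scoring in Python. It is a toy statistical stand-in, not an actual issuer's neural network; the spending history, the three-standard-deviation threshold, and the function name are all hypothetical.

```python
from statistics import mean, stdev

def out_of_behavior_score(history: list[float], new_charge: float) -> float:
    """Return how many standard deviations a new charge sits from the
    customer's historical spending -- a crude stand-in for the behavioral
    scoring a card issuer's model would perform."""
    mu = mean(history)
    sigma = stdev(history) or 1.0  # avoid dividing by zero on flat histories
    return abs(new_charge - mu) / sigma

# Hypothetical spending history and a suspiciously large charge.
history = [24.50, 18.20, 61.00, 12.75, 45.30, 29.99]
charge = 980.00

if out_of_behavior_score(history, charge) > 3.0:
    print("Flag for review: send confirmation text to cardholder")
else:
    print("Charge looks consistent with past behavior")
```

A production system would weigh many more signals (merchant, location, time of day), but the principle is the same: compare the new transaction against the customer's learned behavior and escalate when it falls far outside the norm.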

The movie industry uses artificial intelligence to make animated effects look more natural. Adding a lifelike sway to trees and animals enhances immersion, while unnatural animation is jarring, detracts from a scene, and can even become the center of attention and interfere with the movie. Neural networks can also map one person's face onto another person's. When this is done outside of acting, the resulting videos are called “deep fakes,” and they have been improving steadily, making them harder to detect. These systems use enhanced versions of the same technology that services like Snapchat use to add dog noses and cat ears to chats: the face is mapped and converted to data points, and stock images, such as cat ears, are resized and stretched to fit the person in the video.
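
To illustrate the face-mapping step, the sketch below shows how landmark data points could drive the sizing and placement of a stock overlay such as cat ears. It assumes a face-tracking model has already produced the landmark coordinates for one frame; the coordinates, ratios, and function name are hypothetical, and real filters use far denser landmark sets.

```python
import numpy as np

# Hypothetical facial landmarks (pixel coordinates) a face tracker would
# produce for one video frame: left eye, right eye, nose tip.
landmarks = {
    "left_eye":  np.array([210.0, 180.0]),
    "right_eye": np.array([290.0, 178.0]),
    "nose_tip":  np.array([250.0, 225.0]),
}

def fit_overlay(landmarks: dict, overlay_width_ratio: float = 2.2) -> dict:
    """Compute where to place and how large to draw a stock overlay
    (e.g. cat ears) so it tracks the face in this frame."""
    left, right = landmarks["left_eye"], landmarks["right_eye"]
    eye_distance = float(np.linalg.norm(right - left))
    eye_center = (left + right) / 2.0
    # Head tilt, taken from the angle of the line between the eyes.
    tilt_degrees = float(np.degrees(np.arctan2(right[1] - left[1],
                                               right[0] - left[0])))
    return {
        "width": eye_distance * overlay_width_ratio,            # scale with the face
        "anchor": eye_center - np.array([0.0, eye_distance]),   # sit above the eyes
        "rotation_degrees": tilt_degrees,                       # rotate with the head
    }

print(fit_overlay(landmarks))
```

Deep-fake tools take the same idea much further: instead of pasting a static image, they use the landmark data to warp and blend an entirely different face, frame by frame.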

Bad Guy Neural Networks

On the “bad side,” artificial intelligence can be trained to scan for fake media and report back on how it detected the fake, and that feedback can then be used to improve the fake. For example, a video can be edited to show another person's face; a detector AI reviews the frames and flags the telltale artifacts, and the forger uses those findings to further hide the edits. This is very similar to how the movie industry enhances scenes, but the use is nefarious. Authorities could be fooled into arresting an innocent person if a video has been edited to show them robbing a store. The same method can be used to alter videos of world leaders and deliver a message they never gave.
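
The detect-and-improve feedback loop described above can be boiled down to a few lines. The sketch below uses a single number in place of video frames and a trivial “fakeness” score in place of a trained detector; it is only meant to show the dynamic, familiar from GAN-style training, in which the detector's feedback guides the forger.

```python
import random

def detector(sample: float, real_mean: float = 5.0) -> float:
    """Toy detector: returns a 'fakeness' score -- higher means the sample
    looks less like real data, which clusters around real_mean."""
    return abs(sample - real_mean)

def improve_fake(fake: float, rounds: int = 200, step: float = 0.5) -> float:
    """Adversarial loop: the forger repeatedly asks the detector how
    suspicious its fake looks and keeps any tweak that lowers the score."""
    score = detector(fake)
    for _ in range(rounds):
        candidate = fake + random.uniform(-step, step)
        candidate_score = detector(candidate)
        if candidate_score < score:      # detector feedback guides the forger
            fake, score = candidate, candidate_score
    return fake

random.seed(0)
initial_fake = 42.0
refined = improve_fake(initial_fake)
print(f"initial fakeness: {detector(initial_fake):.2f}, "
      f"refined fakeness: {detector(refined):.2f}")
```

Real deep-fake pipelines replace the single number with millions of pixels and the toy detector with a neural network, but the loop is the same: every successful detection teaches the forger what to fix next.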

For example, Jordan Peele created a video that appeared to show President Obama discussing deep fakes. Partway through, Peele appears on screen alongside the footage, demonstrating the neural network tools actively altering the video so that President Obama's face matches Peele's own mouth movements. Audio neural networks can also listen to recordings of someone's speech and build a text-to-speech model that sounds like that person's voice, further enhancing the fake video's appearance of authenticity.

Neural Network Security Issues

This raises at least a few questions. If a neural network-based search engine learns what you are interested in, will it limit your exposure to other ideas or points of view? That may be appropriate for a specific search, but what if it hides information and opinions the searcher never knows exist? The effect may be benign or malevolent: if entity A wants to block opinion X or information on subject Y, it can sway the opinions of entire populations. You may even sway your own opinion by accident, because the news site's AI keeps feeding you stories similar to those you have already read while hiding additional information or opposing views.
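
As a rough illustration of how a feed can narrow exposure, the sketch below ranks candidate articles purely by keyword similarity to a reader's recent history. The keyword sets and scoring are hypothetical stand-ins for a learned ranking model.

```python
def keyword_overlap(article: set[str], history: list[set[str]]) -> float:
    """Average Jaccard similarity between an article's keywords and the
    reader's recent history -- a crude stand-in for a learned ranker."""
    def jaccard(a: set[str], b: set[str]) -> float:
        return len(a & b) / len(a | b)
    return sum(jaccard(article, past) for past in history) / len(history)

# Hypothetical reading history: three articles on one topic and viewpoint.
history = [
    {"economy", "tax", "policy", "growth"},
    {"economy", "jobs", "policy"},
    {"tax", "growth", "markets"},
]

candidates = {
    "More of the same take on tax policy": {"tax", "policy", "economy"},
    "Opposing view on the same subject":   {"tax", "policy", "critique", "inequality"},
    "Unrelated science story":             {"astronomy", "telescope"},
}

# Rank purely by similarity to past reads: dissenting or unrelated pieces
# sink to the bottom, which is exactly how a feed narrows exposure.
for title, score in sorted(((t, keyword_overlap(k, history))
                            for t, k in candidates.items()),
                           key=lambda pair: pair[1], reverse=True):
    print(f"{score:.2f}  {title}")
```

Running this ranks the like-minded article first and the opposing view well below it, even though both cover the same subject.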

An AI may also learn enough about an end user to craft phishing emails tailored to that specific person, making them far more likely to be opened.

These artificial intelligence and neural network systems are being used to enhance themselves and will only improve over time. Fake videos will keep getting better, so we must remain aware that such fakes exist, question the source, and scrutinize the message.

Have questions about how neural networks and AI can affect your security? Contact us at any time.

This publication contains general information only and Sikich is not, by means of this publication, rendering accounting, business, financial, investment, legal, tax, or any other professional advice or services. This publication is not a substitute for such professional advice or services, nor should you use it as a basis for any decision, action or omission that may affect you or your business. Before making any decision, taking any action or omitting an action that may affect you or your business, you should consult a qualified professional advisor. In addition, this publication may contain certain content generated by an artificial intelligence (AI) language model. You acknowledge that Sikich shall not be responsible for any loss sustained by you or any person who relies on this publication.
