
Dark Art of Deepfakes

  • Writer: Aastha Thakker
  • Oct 29, 2025
  • 5 min read

Hey there!


We’ve all encountered fake people, but today the spotlight is on something even more deceptive: deepfakes. I touched on this briefly in one of my earlier blogs, but today let’s dig deeper into it.


The term “deepfake” combines “deep learning” and “fake,” describing AI systems that produce highly realistic but completely artificial audio and video. You know how your brain learns to recognize patterns? Like how you can tell it’s your mom calling just by the way she says “hello”? Well, deepfake AI does something similar — it studies tons of videos and audio to learn these patterns, then creates new content that looks and sounds just like the real deal.


Sure, this tech can be fun (like putting your face in your favorite movie scenes), but it’s also like giving Photoshop superpowers. And just like any superpower, it comes with some pretty big responsibilities and risks — because not everyone’s planning to use it to make harmless memes.


Let’s look at some real-life deepfake drama that made headlines. Remember when a fake video showed Barack Obama saying things he never actually said? Someone created a fake video of President Zelensky telling Ukrainian soldiers to give up — pretty scary stuff during a war, right?


Let’s cover some history too, shall we?


  • DNN (Deep Neural Networks): Neural networks with multiple hidden layers, often used in image, video, and speech processing.

  • Deepfake Term Origin: Coined in 2017, after a Reddit user who posted pornographic face-swap videos. The content spread quickly despite Reddit banning the forum, raising concerns about privacy and identity fraud.

  • Early Machine Learning in Video: The Video Rewrite program (1997) generated new videos of a person mouthing different speech using basic machine learning — a precursor, though not a deepfake.

  • Facial Recognition with DNNs: In the mid-2010s, DNNs like DeepFace and DeepID were developed to recognize human faces.

  • Transition to Deepfake Creation: GANs (Generative Adversarial Networks) were introduced in 2014. Together with autoencoders, these models power deepfake tools like Faceswap and DeepFaceLab.

  • Early GAN Use for Reenactments: pix2pix (a conditional GAN) generated facial reenactments, later evolving into CycleGAN for improved image quality.

  • LSTM Networks in Deepfakes: LSTM networks were used to generate mouth shapes from speech, with Barack Obama as a notable subject in early examples.

How Are Deepfakes Created? Urm… no, I am not teaching creation, of course — just how the process works!


  1. The Basic Setup: Get the Right Stuff First. Grab high-quality videos of both faces. Just like casting twins for a movie, your faces need to match — similar head shapes, skin tones, and facial features. The better they match naturally, the more convincing the deepfake will be; this is where most people mess up, by trying to swap totally different faces. Think you can whip up a deepfake on your old laptop? Think again! While the software (like DeepFaceLab and FaceSwap) is free, you’ll need a high-end graphics card (GPU) that might cost as much as a used car.

  2. The Five-Course Process: Face Mapping. Next, the computer breaks every video down into single frames and places dozens of tiny dots on each face to create a map. These dots help the AI understand exactly where everything goes — from eyebrows to chin. Think of it as creating a face blueprint. From here, the recipe has five courses:

  3. Gathering: Like picking ingredients, you need high-quality videos of both faces

  4. Face Extraction: The AI identifies and marks key facial features (like mapping out a recipe)

  5. Training: This is where the AI learns to cook — it studies both faces until it can recreate them

  6. Conversion: The actual face-swapping magic happens here

  7. Post-Processing: Adding the final garnish to make everything look real.
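The face-mapping and extraction steps above can be made a little more concrete. Once the landmark dots for two faces are known, a similarity transform (scale, rotation, shift) lines one face up with the other before training. Here is a minimal NumPy sketch — the `align_landmarks` helper is purely illustrative, not taken from any actual deepfake tool:

```python
import numpy as np

def align_landmarks(src, dst):
    """Least-squares similarity transform (scale, rotation, translation)
    mapping source landmark dots onto destination landmarks."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    cov = dst_c.T @ src_c / len(src)       # cross-covariance of the dot clouds
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))     # guard against mirror flips
    D = np.diag([1.0, d])
    R = U @ D @ Vt                         # best-fit rotation
    scale = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = dst_mean - scale * R @ src_mean
    return scale, R, t
```

Real tools detect dozens of such landmarks per frame (dlib’s classic predictor uses 68) and perform exactly this kind of alignment so the AI always sees faces in a consistent pose.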

GANs consist of two primary components: the Generator and the Discriminator. The Generator’s role is to produce fake images from random inputs that will fool the Discriminator, whose job is to determine which images are real and which are fakes from the Generator. Starting from a randomized input, the Generator applies upsampling layers — simple layers with no weights that double the dimensions of the input — which are needed to grow a much smaller original input into a regular-sized image. The upsampling layers are followed by traditional convolution layers, which learn to interpret the doubled input and create meaningful detail. Overall, the Generator is responsible for generating new, plausible images from the latent space.
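The upsampling layers described above are easy to picture in code: a weight-free layer that simply doubles the height and width by repeating each pixel, with the learned convolutions then filling in detail. A minimal NumPy sketch (convolutions omitted):

```python
import numpy as np

def upsample2x(x):
    """Weight-free upsampling layer: doubles both spatial dimensions by
    repeating each pixel, as used inside a GAN generator before the
    convolution layers add meaningful detail."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

# A generator starting from a small 4x4 latent grid reaches 16x16
# after two upsampling stages.
latent = np.random.rand(4, 4)
out = upsample2x(upsample2x(latent))
# out.shape == (16, 16)
```

In a real generator, each `upsample2x` would be followed by a convolution layer whose weights are trained against the Discriminator’s feedback.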

What Makes a Good Deepfake? The AI studies both faces like a dedicated art student learning to copy a masterpiece. It practices over and over, comparing its work to the original faces until it gets really good at recreating them. If the AI can’t get it right after lots of practice, your source videos probably weren’t good enough. A good deepfake needs:

  • Source faces that are similar (like casting twins)

  • High-quality footage with good lighting

  • Lots of different angles and expressions

  • Patience (we’re talking weeks or months of work)

  • Mad editing skills for the final touches
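That “practice and compare” loop is typically driven by a reconstruction loss — a single number measuring how far the AI’s recreation is from the original face. A hedged sketch of the simplest such loss (mean squared error over pixels; real tools use fancier variants):

```python
import numpy as np

def reconstruction_error(original, recreated):
    """Mean squared pixel error between an original face crop and the
    network's recreation; training continues until this stops shrinking."""
    diff = original.astype(np.float64) - recreated.astype(np.float64)
    return float((diff ** 2).mean())
```

A perfect recreation scores 0.0. If your source footage was poor — bad lighting, mismatched faces — this number tends to stay stuck at a high value no matter how long you train, which is exactly the failure mode described above.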



The Conversion Step — and the Not-So-Fun Facts: The computer takes everything it learned about both faces and creates the actual deepfake. It’s like digital plastic surgery: taking one person’s expressions and applying them to another person’s face, frame by frame. Now, the not-so-fun facts:


  • Creating convincing deepfakes is like trying to win MasterChef — it takes serious skills.

  • You need specific hardware.

  • The process involves tons of trial and error.

  • Even with all the right ingredients, results aren’t guaranteed.



What’s happening in real life?


The good news: different industries are fighting back against these digital actors.

Social media companies are like digital bouncers, creating tools to spot and kick out fake videos before they cause trouble. But let’s be honest — they’re still playing catch-up with the fakers.


Banks and financial companies are getting extra careful too. They’re worried about criminals using deepfakes to pretend to be you and steal your money. That’s why they’re developing fancy new ways to make sure you’re really you — kind of like having a super-powered ID checker.


Want to make a deepfake? Well, it depends on what you’re after. If you just want to goof around and put your face in an Avengers movie, there are apps like iFace that’ll do it in seconds. But creating those scary-good deepfakes that could actually fool people? That’s a whole different story. Audio is typically easier to fake than video, but making a realistic deepfake video requires a lot of effort.


Think of it like cooking — you can make instant noodles (quick face-swap apps), or you can make a gourmet meal (convincing deepfakes). The good stuff needs lots of ingredients (thousands of photos and videos), proper equipment (powerful computers), and plenty of time to cook (sometimes 100+ hours!).


But here’s the thing — as AI gets smarter, it’s getting easier. Remember that weird viral video of Will Smith eating spaghetti? That’s just a taste of what’s possible with today’s AI tools.


The good guys aren’t sitting idle though. As of 2024, most U.S. states are cracking down with laws against deepfake misuse, especially in elections. Big tech companies like Google, Meta, and OpenAI are also stepping up, adding digital watermarks and labels to AI-created content — kind of like putting a “Made by AI” sticker on everything fake.



So, what’s the conclusion?


Think of deepfake detection as a never-ending game of hide and seek. While scientists have created some pretty smart detection tools (claiming they’re right 99% of the time!), it’s not that simple. These digital bloodhounds are more like picky eaters — they might be great at spotting one type of fake but completely miss another.


The biggest challenge? Deepfakes keep getting better. While today’s fakes might be as stiff as a mannequin (static backgrounds, frozen lighting), tomorrow’s fakes could be as dynamic as a Hollywood movie — imagine a fake president jogging through a park while giving a speech!


The good news? Both tech companies and scientists are working hard to stay ahead. But just like a virus keeps mutating, deepfakes keep evolving. The key isn’t just building better detectors — it’s anticipating what the next generation of fakes might look like.


See you next Thursday! Bu-bye!



