5 Steps to Detect Deepfakes — A Guide to Combating AI Fake News and Voice Cloning | Enison Sole Co., Ltd.

May 7, 2026

Lead

Deepfakes are AI-generated images, videos, and audio so convincingly realistic that they are indistinguishable from authentic content, and they are used to deceive people. This article outlines a 5-step approach to identifying deepfakes: how to handle AI-generated fake news and voice cloning, how to verify provenance using C2PA / Content Credentials, and how to defend against impersonation fraud (BEC) targeting businesses. It is written for both general readers who encounter misinformation on social media and corporate staff who receive fraudulent messages at work. Reading it alongside An Introduction to AI × Cyber Risk Management for SMBs will help you build a comprehensive framework before integrating AI into your operations.

5 Steps Before You Share

Before sharing content that may be a deepfake or AI-generated fake news, running through these 5 steps — verifying the source, cross-checking multiple sources, confirming provenance, not treating audio/images as sole evidence, and thinking slowly before acting — can prevent the vast majority of misinformation spread and fraud.

When you receive news, images, videos, or audio messages, running through the following 5 steps before sharing will significantly improve your ability to detect AI-generated misinformation.

  1. Verify the source — Identify which organization, outlet, or account is publishing the content, and determine whether it is a trustworthy source.
  2. Cross-check multiple sources — Don't rely on a single post; confirm whether other independent outlets are reporting the same information.
  3. Check provenance — If an image or file carries provenance metadata such as Content Credentials (C2PA), review the history of its creation and editing.
  4. Don't treat audio or images as sole evidence — Now that voice cloning and deepfakes are widespread, avoid making judgments based on audio and video alone.
  5. Think slowly, act deliberately — The more emotionally charged a piece of information feels, the more important it is to pause and re-verify before responding.

None of these steps require specialized tools. Building them into your daily habits is enough to protect yourself and your organization from a large share of misinformation. The sections below walk through the perspective needed to carry out each step.
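The five steps above can be sketched as a simple pre-share checklist. This is purely an illustration — the class and field names are hypothetical, not from any real tool:

```python
from dataclasses import dataclass

# Hypothetical sketch: the five pre-share checks as a checklist object.
@dataclass
class PreShareCheck:
    source_identified: bool    # Step 1: publisher/account identified and trusted
    cross_checked: bool        # Step 2: confirmed by independent outlets
    provenance_ok: bool        # Step 3: Content Credentials reviewed (or N/A)
    beyond_av_evidence: bool   # Step 4: not relying on audio/video alone
    cooled_down: bool          # Step 5: paused before reacting emotionally

    def safe_to_share(self) -> bool:
        """Share only when every one of the five checks passes."""
        return all((self.source_identified, self.cross_checked,
                    self.provenance_ok, self.beyond_av_evidence,
                    self.cooled_down))

check = PreShareCheck(True, True, True, False, True)
print(check.safe_to_share())  # a single failed check blocks sharing
```

The design point is simply that the checks are conjunctive: one failure is enough to stop, which mirrors "think slowly, act deliberately."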

Why AI-Generated Misinformation Is More Dangerous Than Before

AI-generated misinformation — whether text, images, audio, or video — can now be produced cheaply, quickly, and at scale. The NIST GenAI Profile and UNESCO identify the erosion of information integrity and the loss of trust in "evidence" as major risks of the generative AI era.

Fake news has always existed, but in the past, fabricated text was crude, synthetic images looked unnatural, and there was still room for a discerning eye to catch them. The widespread adoption of generative AI has changed this reality: deepfake videos, voice cloning, and AI-crafted phishing messages can all be produced cheaply, quickly, and at scale. It is no coincidence that "information integrity" and the misuse of synthetic content are listed among the key risks in NIST's Generative AI Profile of the AI Risk Management Framework — this directly reflects that shift.

UNESCO has also warned that deepfakes do not merely increase the volume of misinformation in the information ecosystem; they undermine trust in "evidence" and "facts" themselves. The collapse of the long-held assumption that "seeing it on video makes it real" or "there's audio so it must be that person" represents the most fundamental structural change of all.

4 Forms of AI Disinformation — Text / Images / Audio / Video

The nature of harm varies significantly by format (text / image / audio / video). The formats most likely to result in financial loss are voice cloning used to issue payment instructions and deepfake videos used for impersonation.

AI-generated deepfakes and misinformation can be broadly divided into four formats. Each carries a different type of risk and presents a different level of difficulty to address.

Format | Examples | Primary Harms
Text | Fake news articles, fraudulent internal memos, fake product reviews, AI-crafted phishing | Misinformation, brand damage, financial fraud
Image | Synthetic photos of politicians, fake images of disaster scenes, fraudulent product advertisements | Manipulation of public opinion, impersonation, trademark infringement
Audio | Voice cloning used to issue payment instructions, family impersonation scams | Financial fraud, erosion of trust
Video | Deepfake videos, fake videos impersonating a CEO | BEC, stock price manipulation, reputational damage

Impact on the General Public and Impact on B2B

For ordinary individuals, the impact manifests as sharing misinformation that spreads to family and friends, falling victim to fraudulent advertisements on social media, or having one's own photos or voice used in synthetic content without consent. In B2B contexts, the primary threat is attacks that impersonate supervisors, business partners, or regulatory authorities in order to extract payments or sensitive disclosures. The CISA/NSA/FBI joint guidance "Deepfake Threats to Organizations" also identifies impersonation-based deepfakes as a realistic threat targeting organizations.

For Japanese-affiliated local subsidiaries in the ASEAN region that employ multilingual staff, operations often involve a constant flow of chat messages and short voice messages between headquarters, local managers, and business partners. The less a communication channel is accustomed to identity verification, the more vulnerable it is to AI-driven impersonation attacks.

How to Spot Fake Text and Images

AI-generated text still exhibits tell-tale patterns such as "unsourced specific figures," "overly smooth prose," and "breaking news from a single outlet only," while deepfake images can be identified through checkpoints focused on "fingers, teeth, lighting, and text." However, visual inspection is only the first filter; a multi-layered defense that also verifies provenance is required.

The human eye is not perfect, but knowing a few patterns can reduce how often fakes slip past you. What matters is not a binary of "spotted it means real, missed it means fake," but rather operating under the premise that "visual inspection is the first filter, not the final judgment."

5 Signs of AI-Generated Text

AI-written text has improved in quality, but in certain situations, patterns that are still easy to spot remain.

  • Specific figures are stated as fact with no sources provided
  • Sensational, emotionally charged headlines are followed by thin, shallow body text
  • The prose is too smooth, leaving the contours of the argument vague
  • "According to an expert" is written without identifying who that expert is
  • Despite being treated as breaking news, the same information cannot be found across multiple other outlets

"The writing is smooth, therefore it's trustworthy" no longer holds. Writing quality and content accuracy are separate metrics. What Google Search Central emphasizes in its Helpful Content guidelines is content that is "trustworthy and people-first"; thin content mass-produced by AI is not recommended from a search quality standpoint either.
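As an illustration only, the red-flag patterns above could be sketched as naive heuristics. The pattern names and regular expressions here are made up for this sketch; real detection requires human judgment and cannot be reduced to regexes:

```python
import re

# Naive, illustrative red-flag patterns (not a real detection tool).
RED_FLAGS = {
    "unsourced figure": re.compile(r"\b\d[\d,.]*%?\b"),  # numbers stated as fact
    "anonymous expert": re.compile(r"according to (an|one) expert", re.I),
    "sensational tone": re.compile(r"\b(SHOCKING|BREAKING|you won't believe)\b", re.I),
}

def red_flags(text: str, has_sources: bool) -> list[str]:
    """Return the names of heuristic red flags triggered by `text`."""
    hits = [name for name, pat in RED_FLAGS.items() if pat.search(text)]
    # A specific figure is only suspicious when the piece cites no sources.
    if has_sources and "unsourced figure" in hits:
        hits.remove("unsourced figure")
    return hits

print(red_flags("BREAKING: 73% of experts agree, according to an expert.",
                has_sources=False))
```

The point of the sketch is the shape of the reasoning — several weak signals combined, each individually inconclusive — not the specific patterns.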

How to Identify AI-Generated Images (Hands / Teeth / Light / Text)

Even as the quality of AI-generated images improves, there are checkpoints where telltale artifacts still frequently appear at this point in time.

  • Fingers: Number of fingers, unnatural joints, shape of fingernails
  • Teeth and ears: Unnatural tooth alignment, asymmetrical accessories
  • Light and shadow: Inconsistency between light source and shadow direction, contradictions from multiple light sources
  • Text information: Signs or background text that is illegible or distorted
  • Background: Unnatural breaks in blurring, degraded facial detail in crowds

However, as models evolve, these telltale signs will disappear. The focus is shifting toward technical provenance verification, such as Content Credentials, which will be covered in the next section.

The Limits of Relying on the "Naked Eye" Alone

Visual inspection is convenient, but it should not be allowed to monopolize judgment. Cases where AI-generated images cannot be detected at a glance will only increase going forward. For important decisions—contracts, money transfers, reporting, internal announcements—a multi-layered defense is required that combines visual inspection with verification of the source, cross-referencing multiple sources, and, where possible, confirmation of provenance information.

C2PA and Content Credentials — Verifying Media Provenance

C2PA (Coalition for Content Provenance and Authenticity) is an open standard that attaches provenance information to media — when it was created, by whom, with what tool, and how it was edited — in a tamper-detectable form. Its user-facing implementation is "Content Credentials," which records, under a cryptographic signature, whether content was AI-generated.

As a mechanism for addressing the era in which visual detection is no longer sufficient, C2PA and Content Credentials are being established. This is a system that attaches provenance information—"when, who, with what, and how it was edited"—to digital media such as photos, videos, audio, and documents, in a tamper-detectable form.

What Is C2PA and How Does It Work

C2PA (Coalition for Content Provenance and Authenticity) is an open standard for verifying media provenance and authenticity. Adobe, Microsoft, the BBC, and many other organizations are participating, and it is spreading as a cross-industry standard. Technically, it works by embedding cryptographically signed metadata (a manifest) into media files, enabling subsequent verification.

The user-facing implementation of this standard is "Content Credentials." For photos taken with compatible cameras, images edited with compatible editors, and AI-generated images produced by compatible generators, information is recorded about the capture device, editing history, and whether the content was AI-generated.

Steps to Verify Content Credentials

When receiving media with Content Credentials attached, the verification process is as follows:

  1. Look for the media provenance icon in a browser extension, dedicated viewer, or compatible social media UI
  2. Click the icon to check the source, editor, editing history, and whether the content was AI-generated
  3. If necessary, confirm the absence of tampering through signature verification
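To make the tamper-detection idea behind step 3 concrete, here is a deliberately simplified sketch of a signed manifest. Real C2PA manifests use X.509 certificates and COSE signatures embedded in the media file, not an HMAC with a shared demo key; everything below is illustrative only:

```python
import hashlib
import hmac
import json

# Illustrative stand-in for C2PA's core idea: a manifest binds a content
# hash to provenance claims, and a signature makes tampering detectable.
SIGNING_KEY = b"demo-key-not-a-real-c2pa-certificate"

def make_manifest(media: bytes, claims: dict) -> dict:
    """Build a manifest of provenance claims plus a content hash, then sign it."""
    manifest = {"content_sha256": hashlib.sha256(media).hexdigest(), **claims}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return manifest

def verify(media: bytes, manifest: dict) -> bool:
    """Check both the signature over the claims and the content hash."""
    body = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and body["content_sha256"] == hashlib.sha256(media).hexdigest())

photo = b"...image bytes..."
m = make_manifest(photo, {"creator": "camera-01", "ai_generated": False})
print(verify(photo, m))         # True: untouched media verifies
print(verify(photo + b"x", m))  # False: any edit breaks the content hash
```

This mirrors why a screenshot loses its Content Credentials: the screenshot is new bytes, so the original manifest's content hash no longer matches.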

In its guidance on Generative AI content, Google recommends attaching metadata such as IPTC DigitalSourceType to AI-generated images. Content Credentials can be positioned as a mechanism that makes this metadata more comprehensive and verifiable.

Limitations and Real-World Operation

That said, C2PA is not a silver bullet. Not all media carries provenance information, and once a screenshot is taken, the signature is lost. There are also cases where social media platforms strip metadata upon upload. In practice, therefore, a layered approach is realistic:

  • Actively check Content Credentials when they are present
  • Do not immediately assume something is fake just because they are absent (the media may be from an older photo or an incompatible device)
  • For important decisions, combine with alternative verification methods (direct confirmation with the sender, multiple trusted sources)

Provenance information is better used operationally as a tool that "accelerates confirmation of authenticity" rather than one that "guarantees authenticity outright."

Voice Cloning and Business Email Compromise — The B2B Case

Attacks in which an executive's voice is cloned from a sample of just a few dozen seconds using AI, and a voice message saying "Please transfer funds urgently" is sent to an accounting staff member, are occurring in the real world. The U.S. FTC has issued a warning to consumers, and a joint alert from CISA, NSA, and the FBI characterizes this as a serious threat targeting organizations.

The most acute manifestation of AI disinformation risk for businesses is the combination of voice cloning and Business Email Compromise (BEC). The U.S. FTC has issued consumer warnings about fraud using voice cloning, with reported cases involving demands for urgent money transfers while impersonating family members or acquaintances. Applying the same mechanism to corporate hierarchies is what constitutes impersonation-based BEC.

The "Wire Transfer Request Impersonating a Superior" Scenario

A typical attack unfolds as follows. The attacker first collects voice and facial samples of an executive from social media and corporate IR materials. They then generate a short voice message or video clip and send it to an accounting staff member. The content is something like, "Please transfer funds to this overseas account urgently. I'll explain the reason later," delivered in a voice and manner virtually identical to the real person. The accounting staff member, having heard the person's actual voice in audio or video rather than just an email, concludes it is genuine and executes the transfer.

The condition that enables the attack to succeed is the persistence of an implicit organizational trust that "audio or video means it's really that person." Now that AI can easily generate voice and video, this assumption can no longer be sustained.

Defense Through Two-Channel Verification

The cornerstone of defense is enforcing a rule that requires confirmation through two separate channels for any critical instruction.

  • Voice instructions must be re-confirmed via email, chat, or in person
  • The more urgently a wire transfer instruction is framed, the more deliberately you should pause and verify afterward
  • Account change notifications from business partners must always be verified by calling them back on their registered phone number
  • Implement a "code word" system among executives, accounting staff, and business partners

CISA's Phishing Guidance also identifies the creation of urgency and the abuse of authority as typical attack patterns. The rule of pausing precisely when urgency feels highest is something even small businesses can adopt starting today.
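The two-channel rule above can be sketched as a small workflow object: a critical instruction may be executed only after confirmation arrives on a channel different from the one it originally came in on. Class, field, and channel names are hypothetical, for illustration only:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the two-channel verification rule.
@dataclass
class CriticalInstruction:
    requester: str
    action: str
    first_channel: str                 # e.g. "voice" for a voice message
    confirmations: set = field(default_factory=set)

    def confirm(self, channel: str) -> None:
        """Record a confirmation received on the given channel."""
        self.confirmations.add(channel)

    def may_execute(self) -> bool:
        # Require at least one confirmation on a channel OTHER than the
        # one the instruction originally arrived on.
        return any(c != self.first_channel for c in self.confirmations)

order = CriticalInstruction("CFO", "wire 50,000 USD", first_channel="voice")
order.confirm("voice")          # repeating the same channel is not enough
print(order.may_execute())      # False
order.confirm("callback-to-registered-number")
print(order.may_execute())      # True
```

The design choice worth noting: re-confirmation on the *same* channel never satisfies the rule, because an attacker who controls one channel (e.g. cloned voice) can trivially repeat themselves on it.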

FAQ

Here is a summary of frequently asked questions about deepfakes, AI-generated misinformation, and fake images.

Q1. Can AI Really Create Fake News?

Yes. AI can produce fluent, contextually adapted fake news. However, "reads naturally" and "is accurate" are two different things; fluency provides no guarantee of factual accuracy. NIST's GenAI Profile identifies information integrity as a key risk of generative AI, and Google Search Central likewise recommends trustworthy, people-first content.

Q2. Can You Tell Apart AI-Generated Images?

Partially, yes. Unnatural rendering of fingers, inconsistencies in light and shadow, text on signs, and blurring at boundaries currently serve as visual indicators. However, as models continue to evolve, detection by the naked eye will become increasingly difficult, making it practical to combine visual inspection with provenance verification via Content Credentials (C2PA), source confirmation, and cross-referencing multiple sources.

Q3. Is It Safe to Enter Personal Information into AI?

Careful judgment is required. Before entering customer information, internal documents, or sensitive data, you should review the terms of service, data retention policies, and whether the service uses inputs for model training. UNESCO also addresses data protection and confidentiality in the context of AI literacy, treating them as issues on par with misinformation and deepfakes. For more details, please refer to How to Use AI Safely in Laos? A Practical Guide to Personal Data Protection.

Conclusion

AI is not frightening because it is inherently dangerous. It is frightening because it can produce fakes so sophisticated they are indistinguishable from the real thing — cheaply, quickly, and at scale. That is precisely why the priority is not to search for specialized detection tools, but to build habits such as the 5 steps before sharing, visual checks, Content Credentials, and two-path verification. In an era where AI destabilizes our ability to distinguish truth from falsehood, the most reliable course of action is the timeless habit of "stopping to verify."

Articles to Read Next

  • Where to Start with AI × Cyber Risk Management for SMBs? 6 Checklists to Get Organized in 30 Minutes — A foundational 4-part setup to prepare before using AI, including organizational defenses against deepfakes
  • How to Use AI Safely in Laos? A Practical Guide to Personal Data Protection — Practical rules for keeping personal and sensitive data out of AI systems

References

  • UNESCO. Deepfakes and the crisis of knowing. https://www.unesco.org/en/articles/deepfakes-and-crisis-knowing
  • UNESCO. AI and society: how to build a more responsible future. https://www.unesco.org/en/articles/ai-and-society-how-build-more-responsible-future
  • NIST. Artificial Intelligence Risk Management Framework: Generative AI Profile. https://www.nist.gov/publications/artificial-intelligence-risk-management-framework-generative-artificial-intelligence
  • FTC. Fighting back against harmful voice cloning. https://consumer.ftc.gov/consumer-alerts/2024/04/fighting-back-against-harmful-voice-cloning
  • CISA. NSA, FBI, and CISA Release Cybersecurity Information Sheet on Deepfake Threats. https://www.cisa.gov/news-events/alerts/2023/09/12/nsa-fbi-and-cisa-release-cybersecurity-information-sheet-deepfake-threats
  • C2PA. Verifying Media Content Sources. https://c2pa.org/
  • Google Search Central. Creating Helpful, Reliable, People-First Content. https://developers.google.com/search/docs/fundamentals/creating-helpful-content
  • Google Search Central. Google Search's Guidance on Generative AI Content. https://developers.google.com/search/docs/fundamentals/using-gen-ai-content

Author & Supervisor

Chi
Enison

Majored in Information Science at the National University of Laos, where he contributed to the development of statistical software, building a practical foundation in data analysis and programming. He began his career in web and application development in 2021, and from 2023 onward gained extensive hands-on experience across both frontend and backend domains. At our company, he is responsible for the design and development of AI-powered web services, and is involved in projects that integrate natural language processing (NLP), machine learning, and generative AI and large language models (LLMs) into business systems. He has a voracious appetite for keeping up with the latest technologies and places great value on moving swiftly from technical validation to production implementation.
