• New AI tools and training vital to combat deepfake impersonation

    From TechnologyDaily@1337:1/100 to All on Tuesday, November 05, 2024 15:30:05
    New AI tools and training vital to combat deepfake impersonation scams

    Date:
    Tue, 05 Nov 2024 15:14:02 +0000

    Description:
    In order to keep up with advances in deepfakes, businesses need to adopt an approach that combines tech with organization-wide cultural change.

    FULL STORY ======================================================================

    AI-generated images and videos are a growing risk to society and our
    economy - becoming easier to create, and much harder to tell apart from
    reality. Discussion so far has largely centered on political deepfakes
    created by bad actors looking to influence democracies. That concern has
    so far proved largely unfounded in Europe, the UK, the EU and India -
    despite Sumsub data detecting a 245% year-on-year increase in deepfakes
    worldwide in 2024.

    Now, the concern is deepfakes harming organizations and people through
    financial fraud. Businesses already recognize this when it comes to new
    customers, with identity verification and fraud monitoring a central part
    of any financial onboarding process.

    Deepfake-augmented phishing and impersonation scams, however, are something
    businesses are not prepared for. Imposter scams remained the top fraud
    category in the US in 2023, with reported losses of $2.7 billion according
    to the Federal Trade Commission, and as deepfakes get better, more people
    will fall victim. Business leaders know this: new data from Deloitte showed
    that surveyed executives experienced at least one (15.1%) or multiple
    (10.8%) deepfake financial fraud incidents in 2023.

    This is only likely to increase - over half of surveyed execs (51.6%)
    expect growth in the number and size of deepfake attacks targeting
    financial and accounting data - yet little is being done. One-fifth (20.1%)
    of those polled reported no confidence at all in their ability to respond
    effectively to deepfake financial fraud.

    Deepfake detection tools are crucial for preventing external fraudsters
    from bypassing verification procedures during onboarding, but businesses
    must also shield themselves from internal threats.

    Here, a low-trust approach to financial requests and other potentially
    impactful decisions, alongside new AI-augmented digital tools, is vital if
    businesses are to detect deepfake-augmented phishing and impersonation
    scams. This means that training, education, and a change in our
    philosophical approach to visual and audible information must be
    implemented from the top down.

    A holistic deepfake strategy

    Sociocultural implausibilities: Perhaps the best tool against deepfake
    fraud is context and logic. Every stakeholder, at every step, must view
    information with newfound skepticism. In the recent case where a finance
    worker paid out $25 million after a video call with a deepfaked chief
    financial officer, the obvious questions are: why is the CFO asking for
    $25 million, and how out of the ordinary is this request? This is
    certainly easier in some contexts than others, as the most effective
    fraudsters will design their approach so it seems well within someone's
    normal behavior.
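
    To make the "how out of the ordinary is this?" question concrete, a payment
    workflow can automatically flag requests that fall far outside the
    historical pattern before any money moves. The Python sketch below is a
    minimal illustration of that idea under assumed data and names - the
    request history, the flag_unusual_request helper and the
    three-standard-deviation threshold are all invented for the example, not
    taken from any specific product.

    from statistics import mean, stdev

    # Hypothetical history of approved payment amounts (USD) previously
    # requested through this channel; in practice this would come from an
    # ERP or payments system.
    history = [12_000, 8_500, 15_000, 9_800, 11_200, 14_500]

    def flag_unusual_request(amount, past_amounts, k=3.0):
        """Return True if the requested amount sits far outside the
        historical pattern (here, more than k standard deviations above
        the mean) and should be escalated for out-of-band verification."""
        if len(past_amounts) < 5:
            return True  # too little history to trust: always escalate
        mu, sigma = mean(past_amounts), stdev(past_amounts)
        return amount > mu + k * max(sigma, 1.0)

    # A $25,000,000 request against a five-figure history gets escalated,
    # however convincing the video call was.
    print(flag_unusual_request(25_000_000, history))  # True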

    Training: This newfound skepticism must be a company-wide approach, from
    the C-suite down and across to all stakeholders. Businesses need to
    establish a culture in which video and telephone calls are subject to the
    same verification processes as emails and letters. Training should help
    establish this new way of thinking.

    A second opinion: Businesses would be wise to introduce processes that
    encourage getting a second opinion on audio and visual information, and on
    any subsequent requests or actions. One person may miss an error or
    inconsistency that someone else will spot.
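
    As an illustration only, the same "second opinion" rule can be enforced in
    payment tooling as well as in culture. The sketch below - with invented
    function and role names - refuses to release a transfer until at least two
    reviewers other than the requester have signed off through a separate
    channel.

    def release_payment(amount, requester, approvals):
        """Hypothetical four-eyes check: a transfer only proceeds once at
        least two distinct reviewers, neither of them the requester, have
        approved it out of band."""
        independent = approvals - {requester}
        if len(independent) < 2:
            print(f"Blocked: {len(independent)} independent approval(s), 2 required")
            return False
        print(f"Released {amount:,.2f}, approved by {sorted(independent)}")
        return True

    release_payment(250_000, "cfo", {"cfo"})                      # blocked
    release_payment(250_000, "cfo", {"controller", "treasurer"})  # released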

    Biology: This may be the most obvious, but keep natural movement and
    features in mind. Perhaps someone on a video call doesn't blink very often,
    or the subtle movement in their throat as they speak isn't normal. Although
    deepfakes will become more sophisticated and realistic over time, they are
    still prone to inconsistencies.
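
    Some of these biological cues can also be checked automatically. The sketch
    below counts blinks using the widely used eye-aspect-ratio heuristic; a
    live participant typically blinks roughly 10-20 times a minute, so a long
    stretch with almost no blinks is worth a second look. The per-eye landmark
    points are assumed to come from whatever face-landmark model a team already
    uses, and the thresholds are illustrative.

    import math

    def eye_aspect_ratio(pts):
        """pts: six (x, y) landmarks around one eye, ordered corner, two
        upper-lid points, corner, two lower-lid points. The ratio drops
        sharply when the eye closes (Soukupova & Cech, 2016)."""
        d = math.dist
        return (d(pts[1], pts[5]) + d(pts[2], pts[4])) / (2.0 * d(pts[0], pts[3]))

    def count_blinks(ear_per_frame, closed_thresh=0.21, min_closed_frames=2):
        """Count blinks in a per-frame EAR series: a blink is a short run
        of consecutive frames below the 'closed' threshold."""
        blinks, run = 0, 0
        for ear in ear_per_frame:
            if ear < closed_thresh:
                run += 1
            else:
                if run >= min_closed_frames:
                    blinks += 1
                run = 0
        return blinks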

    Break the pattern: Because AI-generated deepfakes rely on relevant training
    data, they can't convincingly recreate actions that are out of the
    ordinary. For example, at the time of writing, an audio deepfake may
    struggle to whistle or hum a tune convincingly, and on a video call one
    could ask the caller to turn their head to the side or move something in
    front of their face. Not only are these unusual movements that data models
    are less likely to have been trained on extensively, they also break the
    anchor points that hold the generated visual information in place, which
    can result in blurring.
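
    That blurring can be measured rather than eyeballed. The sketch below uses
    a standard sharpness measure (variance of the Laplacian, via OpenCV) to
    compare frames captured before and during a "turn your head" or "hand in
    front of the face" challenge; a sharp drop is a prompt for further
    verification, not proof on its own. How frames are captured and the drop
    threshold are assumptions made for the example.

    import cv2  # pip install opencv-python

    def sharpness(frame):
        """Variance of the Laplacian: a cheap, common focus/sharpness
        measure. Generated faces often smear when their anchor points are
        disrupted."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        return cv2.Laplacian(gray, cv2.CV_64F).var()

    def challenge_degrades(baseline_frames, challenge_frames, drop_ratio=0.5):
        """True if average sharpness during the challenge falls below
        drop_ratio of the pre-challenge baseline."""
        base = sum(sharpness(f) for f in baseline_frames) / len(baseline_frames)
        chal = sum(sharpness(f) for f in challenge_frames) / len(challenge_frames)
        return chal < drop_ratio * base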

    Lighting: Video deepfakes rely on consistent lighting, so you could ask
    someone on a video call to change the light in their room or the screen
    they're sitting in front of. Software also exists that can make someone's
    screen flicker in a unique and unusual way. If the video doesn't properly
    mirror the light pattern, you know it's a generated video.
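
    A minimal version of that light-pattern check can be framed as a
    challenge-response: emit a random brightness sequence on the verifier's
    side (for example by flashing a shared screen), then correlate it with the
    average brightness of the incoming frames. The sketch below assumes the
    frames have already been captured in sync with the pattern; the correlation
    threshold is illustrative, and this is an idea sketch rather than any
    particular product's method.

    import numpy as np

    def make_challenge(n_frames, seed=None):
        """Random on/off flicker pattern to display on the verifier's side."""
        rng = np.random.default_rng(seed)
        return rng.integers(0, 2, size=n_frames).astype(float)

    def reflects_challenge(frames, pattern, min_corr=0.5):
        """True if the incoming video's frame-average brightness tracks the
        flicker pattern. A genuine camera feed should pick up the changing
        light; a pre-generated or poorly adapted deepfake may not."""
        luma = np.array([f.mean() for f in frames])        # one value per frame
        luma = (luma - luma.mean()) / (luma.std() + 1e-9)   # standardize
        patt = (pattern - pattern.mean()) / (pattern.std() + 1e-9)
        corr = float(np.mean(luma * patt))                  # Pearson correlation
        return corr >= min_corr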

    Tech: AI is aiding fraudsters, but it can also help stop them. New tools
    are being developed that can spot deepfakes by analyzing audio and visual
    information for inconsistencies and inaccuracies, such as the free-to-use
    For Fakes Sake for visual assets, or Pindrop for audio. Although these are
    not foolproof, they are an essential part of the arsenal for separating
    reality from fiction.
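
    Because no single detector is reliable on its own, teams often combine
    several signals. The sketch below shows one generic way to do that: each
    detector returns a suspicion score between 0 and 1, and the case is
    escalated to human review when either the maximum or the average crosses a
    threshold. The detectors here are placeholders, not the APIs of the tools
    named above, and the thresholds are illustrative.

    def assess(media, detectors, escalate_max=0.8, escalate_mean=0.5):
        """Run every available detector (name -> callable returning a score
        in [0, 1]) and decide whether to escalate. Escalation means human
        review plus out-of-band verification, never an automatic accusation."""
        scores = {name: fn(media) for name, fn in detectors.items()}
        escalate = (max(scores.values()) >= escalate_max
                    or sum(scores.values()) / len(scores) >= escalate_mean)
        return {"scores": scores, "escalate": escalate}

    # Placeholder detectors standing in for whatever visual/audio tools a
    # team has licensed or built in-house.
    detectors = {
        "visual_artifacts": lambda media: 0.72,
        "audio_liveness":   lambda media: 0.35,
    }
    print(assess(b"...media bytes...", detectors))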

    It's important to note that no single solution, tool, or strategy should be
    relied upon entirely, as the sophistication of deepfakes is rapidly
    increasing - and may evolve to beat some of these detection methods.

    Skepticism at every step

    In an age of mass synthetic information, businesses should extend the same
    level of skepticism towards visual and audible information as they apply to
    new contracts, onboarding new users, and screening out illicit actors. For
    both internal and external threats, AI-augmented verification tools and new
    training and education regimes are central to minimizing the potential
    financial risk from deepfakes.


    This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro



    ======================================================================
    Link to news story:
    https://www.techradar.com/pro/new-ai-tools-and-training-vital-to-combat-deepfake-impersonation-scams


    --- Mystic BBS v1.12 A47 (Linux/64)
    * Origin: tqwNet Technology News (1337:1/100)