• Experts warn some ChatGPT models can be hacked to launch deepfake scams

    From TechnologyDaily@1337:1/100 to All on Monday, November 04, 2024 16:15:05
    Experts warn some ChatGPT models can be hacked to launch deepfake scams

    Date:
    Mon, 04 Nov 2024 16:04:00 +0000

    Description:
    OpenAI says it is building new safeguards to protect against scammers.

    FULL STORY ======================================================================

    Getting scammed by a chatbot is unfortunately no longer in the domain of science fiction, after researchers from the University of Illinois Urbana-Champaign (UIUC) demonstrated how it could be done.

    Recently, Richard Fang, Dylan Bowman, and Daniel Kang from UIUC published a new paper describing how they abused OpenAI's latest AI model, ChatGPT-4o, to fully automate some of the most common scams around.

    Now, OpenAI's latest model offers a voice-enabled AI agent, which gave the researchers the idea of attempting a fully automated voice scam. They found ChatGPT-4o does have some safeguards that prevent the tool from being abused this way, but with a few jailbreaks, they managed to imitate an IRS agent.

    Advanced reasoning

    Success rates for these scams varied, the researchers found. Credential theft from Gmail worked 60% of the time, while others like crypto transfers had about 40% success. These scams were also relatively cheap to conduct, costing about $0.75 to $2.51 per successful attempt.

    Speaking to BleepingComputer, OpenAI explained its latest model, which is currently in preview, supports advanced reasoning and was built to better spot these kinds of abuses: "We're constantly making ChatGPT better at stopping deliberate attempts to trick it, without losing its helpfulness or creativity," the company's spokesperson told the publication.

    "Our latest o1 reasoning model is our most capable and safest yet, significantly outperforming previous models in resisting deliberate attempts to generate unsafe content."

    OpenAI praised the researchers, saying these kinds of papers help ChatGPT get better.

    According to the US government, voice scams are fairly common. The premise is simple: an attacker calls the victim on the phone and, while pretending to help solve a problem, scams them out of money or sensitive information.

    In many cases, the attack starts with a browser popup showing a fake virus warning from a fake antivirus company. The popup urges the victim to call the provided phone number to clean their device. If the victim calls the number, the scammer picks up and guides them through the process, which concludes with the loss of data or funds.

    More from TechRadar Pro:
    ChatGPT could be worse than cryptocurrency when it comes to scams
    Here's a list of the best firewalls today
    These are the best endpoint protection tools right now



    ======================================================================
    Link to news story: https://www.techradar.com/pro/security/experts-warn-some-chatgpt-models-can-be-hacked-to-launch-deepfake-scams


    --- Mystic BBS v1.12 A47 (Linux/64)
    * Origin: tqwNet Technology News (1337:1/100)