• Meta's AI chief is right to call AI fearmongering 'BS' but not for the reason he thinks

    From TechnologyDaily@1337:1/100 to All on Friday, October 18, 2024 05:15:05
    Meta's AI chief is right to call AI fearmongering 'BS' but not for the reason he thinks

    Date:
    Fri, 18 Oct 2024 04:00:00 +0000

    Description:
    The real fear of AI should be how people use it.

    FULL STORY ======================================================================

    AI is the latest technology monster scaring people about the future. Legitimate concerns around things like ethical training, environmental
    impact, and scams using AI morph into nightmares of Skynet and the Matrix all too easily. The prospect of AI becoming sentient and overthrowing humanity is frequently raised, but, as Meta's AI chief Yann LeCun told The Wall Street Journal, the idea is "complete B.S." LeCun described AI as less intelligent than a cat and incapable of plotting or even desiring anything at all, let alone the downfall of our species.

    LeCun is right that AI is not going to scheme its way into murdering
    humanity, but that doesn't mean there's nothing to be worried about. I'm much more worried about people relying on AI to be smarter than it is. AI is just another technology, meaning it's not good or evil. But the law of unintended consequences suggests relying on AI for important, life-altering decisions isn't a good idea.

    Think of the disasters and near disasters caused by trusting technology over human decision-making. The rapid-fire trading of stocks using machines far faster than humans has caused more than one near meltdown of part of the economy. A much more literal meltdown almost occurred when a Soviet missile detection system glitched and claimed nuclear warheads were inbound. In that case, only a brave human at the controls prevented global armageddon.

    Now imagine that AI as we know it today continues to trade on the stock market because humans have given it more comprehensive control. Then imagine AI accepting the faulty missile alert and being allowed to launch missiles without human input.

    AI Apocalypse Averted

    Yes, it sounds far-fetched that people would trust a technology famous for hallucinating facts to be in charge of nuclear weapons, but it's not that
    much of a stretch from some of what already occurs. The AI voice on the phone from customer service might have decided whether you get a refund before you ever get a chance to explain why you deserve one, with no human listening who could overrule it.

    AI will only do what we train it to do, and it uses human-provided data to do so. That means it reflects both our best and worst qualities. Which facet comes through depends on the circumstances. However, handing over too much decision-making to AI is a mistake at any level. AI can be a big help, but it shouldn't decide whether someone gets hired or whether an insurance policy pays for an operation. We should worry that humans will misuse AI, accidentally or otherwise, replacing human judgment.

    Microsoft's branding of AI assistants as Copilots is apt because it evokes someone there to help you achieve your goals, but who doesn't set them or take any more initiative than you allow. LeCun is correct that AI isn't any
    smarter than a cat, but a cat with the ability to push you, or all of humanity, off a metaphorical counter is not something we should encourage.



    ======================================================================
    Link to news story: https://www.techradar.com/computing/artificial-intelligence/metas-ai-chief-is-right-to-call-ai-fearmongering-b-s-but-not-for-the-reason-he-thinks


    --- Mystic BBS v1.12 A47 (Linux/64)
    * Origin: tqwNet Technology News (1337:1/100)