• The EU AI Act: What do CISOs need to know to strengthen AI security?

    From TechnologyDaily@1337:1/100 to All on Thursday, November 07, 2024 10:00:05
    The EU AI Act: What do CISOs need to know to strengthen AI security?

    Date:
    Thu, 07 Nov 2024 09:51:32 +0000

    Description:
    What CISOs need to know about the EU AI Act to strengthen AI security.

    FULL STORY ======================================================================

    It's been a few months since the EU AI Act, the world's first comprehensive legal framework for Artificial Intelligence (AI), came into force.

    Its purpose? To ensure the responsible and secure development and use of AI
    in Europe.

    It marks a significant moment for AI regulation, responding to the rapid adoption of AI tools across critical sectors such as financial services and government, where the consequences of exploiting such technology could be catastrophic.

    The new act is one part of an emerging regulatory framework that reinforces the need for robust cybersecurity risk management, alongside the European Cyber Resilience Act (CRA) and the Digital Operational Resilience Act (DORA). These will drive transparency and effective cybersecurity risk management further up the business agenda, albeit adding layers of complexity to compliance and operational resilience.

    For CISOs, navigating this sea of regulation is a considerable challenge.

    Key Provisions of the EU AI Act

    The AI Act introduced a new regulatory aspect of AI governance, sitting alongside existing legal frameworks such as data privacy, intellectual property and anti-discrimination laws.

    The key requirements include the establishment of a robust risk management system, a security incident response policy, and technical documentation demonstrating compliance with transparency obligations. It prohibits certain types of AI systems, for example systems for emotion recognition or social scoring, with the aim of reducing bias caused by algorithms.

    It also involves compliance across the entire supply chain. It is not just
    the primary providers of AI systems who must adhere to this regulation, but all parties involved, including those integrating General Purpose AI (GPAI)
    and foundation models from third parties.

    Failure to comply with these new rules can result in a maximum penalty of €35 million or 7% of a firm's total worldwide annual turnover for the preceding financial year, whichever is higher, but this varies depending on the type of infringement and the size of the company.
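    The headline ceiling works out as the greater of the two figures. A minimal sketch of that arithmetic (the function name and the example turnover are illustrative, and lower tiers apply to lesser infringements):

```python
# Illustrative only: the Act's headline ceiling for the most serious
# infringements is the *greater* of EUR 35 million or 7% of total
# worldwide annual turnover for the preceding financial year.

def max_penalty_eur(annual_turnover_eur: float) -> float:
    """Upper bound for the most serious infringement tier."""
    return max(35_000_000, 0.07 * annual_turnover_eur)

# A firm with EUR 1 billion turnover: 7% (EUR 70m) exceeds the flat cap.
print(max_penalty_eur(1_000_000_000))  # 70000000.0

# A smaller firm with EUR 100 million turnover: the EUR 35m floor applies.
print(max_penalty_eur(100_000_000))    # 35000000
```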

    Hence, businesses will need to adhere to these new regulations if they wish
    to do business in the EU, but they should also take inspiration from other available guidance, such as the National Cyber Security Centre's (NCSC) guidelines for secure AI system development, to foster a culture of responsible software development.

    Threats Targeted by the Act

    AI has the ability to streamline workflows and enhance productivity, but if systems are compromised, they can expose critical vulnerabilities that may lead to extensive data breaches and security failures.

    As AI technology becomes more sophisticated and businesses more reliant on this transformative technology to support complex tasks, threat actors are also evolving to hijack AI models and steal data. This can lead to more frequent wide-impact breaches and data leaks, such as the recent
    Snowflake and MOVEit attacks, which impacted millions of end users.

    With this new EU AI Act, both the providers of foundation models and organizations using AI are accountable for identifying and mitigating these risks. By looking at the wider AI lifecycle and supply chain, the Act seeks
    to strengthen the overall cybersecurity and resilience of AI used in business
    and life.

    But it is important to remember that it is not just EU countries which are impacted. Companies abroad must also comply if they provide AI systems to the EU market, or if their AI systems affect individuals within the EU. With the Act requiring compliance across the entire supply chain, not just AI
    providers, this is a truly global imperative.

    So how can businesses adapt to all these new rules?

    Staying Compliant with Secure by Design Principles

    Complying with these requirements will be much more straightforward if security is built into the design phase of software development, rather than as an afterthought. Threat modeling, the rigorous analysis of software at the design phase, is one way teams can more effectively adhere to these new regulations.

    Embedding Secure by Design principles into the AI development process helps identify the types of threats that can cause harm to an organization, and helps businesses think through security risks in machine learning systems
    such as data poisoning, input manipulation, and data extraction. This also creates a collaborative environment between security and development teams, ensuring security is prioritized from the outset, in line with new regulation.
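    One simple, concrete control against input manipulation is validating inference inputs against the ranges seen in training before they ever reach the model. A minimal sketch, where the feature names, bounds, and function are all hypothetical examples rather than any specific product's API:

```python
import math

# Hypothetical guardrail: reject inputs that fall outside the value
# ranges observed in training, a basic defence against manipulated or
# out-of-distribution inputs to an ML system.

TRAINING_RANGES = {                      # illustrative per-feature bounds
    "transaction_amount": (0.0, 50_000.0),
    "account_age_days": (0.0, 20_000.0),
}

def validate_input(features: dict) -> list[str]:
    """Return a list of violations; an empty list means the input looks sane."""
    violations = []
    for name, value in features.items():
        if name not in TRAINING_RANGES:
            violations.append(f"unknown feature: {name}")
            continue
        if not isinstance(value, (int, float)) or math.isnan(value):
            violations.append(f"non-numeric value for {name}")
            continue
        lo, hi = TRAINING_RANGES[name]
        if not lo <= value <= hi:
            violations.append(f"{name}={value} outside training range [{lo}, {hi}]")
    return violations

print(validate_input({"transaction_amount": 1e9}))  # flags the outlier
print(validate_input({"transaction_amount": 120.0}))  # []
```

    Checks like this are not a substitute for the broader risk management system the Act requires, but they are the kind of design-time control that threat modeling tends to surface.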

    In the US, the Cybersecurity and Infrastructure Security Agency (CISA) has pushed for producers of software used by the Federal Government to attest to secure-by-design principles. While this guidance is related to wider technological implementation, this Secure by Design approach is applicable to AI development and helps to promote the culture of responsible software building. Across the pond, the UK Ministry of Defence has already implemented Secure by Design principles, setting a standard for other industries to follow.

    For CISOs, this shift introduces a culture that anticipates regulatory requirements like the EU AI Act, enabling businesses to proactively meet compliance standards while building AI solutions.

    Key Learnings for CISOs

    AI is changing the game for businesses globally, so CISOs must take a proactive approach to cybersecurity.

    They should be looking to deploy Secure by Design principles to bring
    together security and developer teams more closely and provide AI software developers with the techniques needed to ensure that AI applications are secure at each stage of their development. By preparing data, and building
    and deploying a threat model of the system, developers can stress test their products at design time and mitigate vulnerabilities to ensure their products are compliant with the new regulation from the very beginning.
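    In practice, a design-time threat model is often just a structured enumeration of components, threat categories, and planned mitigations. A minimal sketch using the well-known STRIDE categories; the components and scenarios listed are illustrative, not a complete model:

```python
from dataclasses import dataclass

# STRIDE: a common taxonomy for design-time threat modeling.
STRIDE = ("Spoofing", "Tampering", "Repudiation",
          "Information disclosure", "Denial of service",
          "Elevation of privilege")

@dataclass
class Threat:
    component: str     # part of the AI pipeline under review
    category: str      # one of the STRIDE categories
    scenario: str      # how the threat could play out
    mitigation: str    # planned control

# Illustrative entries for an AI system's pipeline.
model_threats = [
    Threat("training data store", "Tampering",
           "data poisoning via a compromised upstream feed",
           "provenance checks and dataset hashing"),
    Threat("inference API", "Information disclosure",
           "data extraction through repeated crafted queries",
           "rate limiting and output filtering"),
]

for t in model_threats:
    assert t.category in STRIDE  # keep entries consistent with the taxonomy
    print(f"[{t.category}] {t.component}: {t.scenario} -> {t.mitigation}")
```

    Reviewing a table like this with both security and development teams at design time is one way to produce the documented risk analysis the Act expects.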

    It's not just businesses in the EU that need to adhere to the new Act; it applies to anyone wishing to operate in these markets, so having the right techniques and approaches to AI development at the start of the software development cycle will be critical.


    This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro



    ======================================================================
    Link to news story: https://www.techradar.com/pro/the-eu-ai-act-what-do-cisos-need-to-know-to-strengthen-ai-security


    --- Mystic BBS v1.12 A47 (Linux/64)
    * Origin: tqwNet Technology News (1337:1/100)