• Ethical AI: Considerations ahead of regulations

    From TechnologyDaily@1337:1/100 to All on Tuesday, October 22, 2024 15:45:05
    Ethical AI: Considerations ahead of regulations

    Date:
    Tue, 22 Oct 2024 14:37:02 +0000

    Description:
    As AI advances at breakneck speed, we must prioritize responsible design and development of AI.

    FULL STORY ======================================================================

    The AI leviathan continues to tower over every datacenter, with organizations racing to
    deploy AI-based solutions for immediate benefits today, or putting the infrastructure and
    models in place to reap an aspirational return from research projects in the long run.
    Regardless of where an organization is on its AI journey, the breakneck speed at which this
    technology is advancing has left regulators scrambling to catch up on how AI should be
    moderated to ensure the technology is used ethically. There is a pressing need to clarify
    accountability in cases of errors or unintended consequences. There's also a clear need for
    legal frameworks that provide guidelines for determining responsibility when AI systems
    cause harm or fail to meet expected standards.

    What is ethical AI?

    Ethical AI means supporting the responsible design and development of AI systems and
    applications that are not harmful to people and society at large. While this is a noble
    goal, it's not always easy to achieve and requires in-depth planning and constant vigilance.
    For developers and designers, key ethical considerations should at a minimum include the
    protection of sensitive training data and model parameters from manipulation. They should
    also provide real transparency into how AI models work and are impacted by new data, which
    is essential to ensuring proper oversight. Regardless of whether ethical AI is being
    approached by the C-level of a private business, a government or a regulatory body, it can
    be difficult to know where to start.
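
    To make the first of those considerations a little more concrete, the sketch below shows one
    common baseline control: recording checksums for training data and model parameter files so
    that unexpected manipulation can be spotted before training or deployment. It is a minimal,
    illustrative example rather than anything prescribed in this article; the file names and
    manifest path are assumptions.

        import hashlib
        import json
        from pathlib import Path

        # Hypothetical artifact paths -- stand-ins for whatever an organization actually ships.
        ARTIFACTS = ["training_data.csv", "model_weights.bin"]
        MANIFEST = Path("artifact_manifest.json")

        def sha256_of(path: str) -> str:
            """Return the SHA-256 hex digest of a file, read in chunks."""
            digest = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(8192), b""):
                    digest.update(chunk)
            return digest.hexdigest()

        def record_manifest() -> None:
            """Snapshot checksums at a known-good point (e.g. right after data sign-off)."""
            MANIFEST.write_text(json.dumps({p: sha256_of(p) for p in ARTIFACTS}, indent=2))

        def verify_manifest() -> bool:
            """Re-hash the artifacts and flag any that changed since the snapshot."""
            recorded = json.loads(MANIFEST.read_text())
            tampered = [p for p in ARTIFACTS if sha256_of(p) != recorded.get(p)]
            for p in tampered:
                print(f"WARNING: {p} does not match its recorded checksum")
            return not tampered

        if __name__ == "__main__":
            # record_manifest()  # run once when artifacts are approved
            # verify_manifest()  # run before training or deployment
            pass

    Checksums are only a starting point; in practice they would sit alongside access controls
    and signed artifacts, but they illustrate the kind of verifiable guardrail that protecting
    training data and model parameters implies.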

    Transparency as the foundation

    When planning AI deployment strategies, transparency should always be the starting point and
    the foundation on which all applications are built. This means providing insight, internally
    and externally, into how AI systems make decisions, how they arrive at outcomes and what data
    they use to do this. Transparency and accountability are essential for building trust in AI
    technologies and mitigating potential harms. Insight into an AI model's mechanics, including
    the data used to train it, is essential before it is applied.

    When putting this into practice, there are ethical, privacy and copyright issues which must
    be addressed so the boundaries are clear when AI is deployed, especially in sectors such as
    healthcare. In the UK, for example, the Information Commissioner's Office has produced useful
    guidelines for ensuring transparency in AI. The repeatability of results remains a key area
    of focus, to ensure that conscious or unconscious bias does not play a part when training a
    model or when using a trained model for inference.
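
    The repeatability point lends itself to a small illustration. The sketch below is not from
    the article and uses a toy pipeline: it pins the random seed and writes a simple provenance
    record so that anyone reviewing the model can rerun the experiment and obtain the same
    result.

        import json
        import random
        from datetime import datetime, timezone

        def run_experiment(dataset: list[float], seed: int) -> dict:
            """Toy 'training' run: a seeded shuffle and split standing in for a real pipeline."""
            rng = random.Random(seed)          # fixed seed -> repeatable results
            shuffled = dataset[:]
            rng.shuffle(shuffled)
            split = int(0.8 * len(shuffled))
            train, test = shuffled[:split], shuffled[split:]
            # A real pipeline would also pin framework-level seeds (NumPy, PyTorch, etc.).
            return {"train_mean": sum(train) / len(train), "test_mean": sum(test) / len(test)}

        def transparency_record(seed: int, data_source: str, results: dict) -> str:
            """Log what data and settings produced the results, so they can be audited and rerun."""
            return json.dumps({
                "data_source": data_source,    # provenance of the training data
                "seed": seed,                  # everything needed to repeat the run
                "run_at": datetime.now(timezone.utc).isoformat(),
                "results": results,
            }, indent=2)

        if __name__ == "__main__":
            data = [float(i) for i in range(100)]
            first = run_experiment(data, seed=42)
            second = run_experiment(data, seed=42)
            assert first == second             # same seed, same outcome
            print(transparency_record(42, "synthetic range 0-99 (illustrative)", first))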

    Concern over aggregated data profiles

    Balancing privacy concerns with potential societal benefits will be an ongoing discussion as
    AI technologies evolve, and there will always be trade-offs between what individuals give up
    in data versus what society gains. Personal data such as shopping, fitness and healthcare
    records could be combined, raising privacy and insurance risks for individuals. This is
    because aggregated and linked data sources can reveal an unprecedented level of detail about
    people's lives, behaviors and vulnerabilities. As more data streams are combined, the value
    of the aggregated profile rises, allowing for greater and potentially more targeted influence
    over individuals. Security of personal data becomes all the more important given the risks of
    data breaches and theft when so much valuable information is collected in one place. The need
    for data stewardship and transparency on sourcing and consent practices is fundamental.
    Ensuring personal data is handled securely and for agreed purposes will remain paramount to
    maintaining public trust in applications of this powerful technology.

    Regulation on the horizon

    Ultimately, ethical AI practices will require external guidance and the development of agreed
    standards. After all, organizations and commercial enterprises are a part of society, not
    separate from it. The development of globally agreed ethical standards for AI is paramount.
    As the technology becomes ever more integrated internationally, finding workable solutions in
    this area will clearly be important. Yet there are substantial obstacles to implementation,
    given divergent societal and legal views. Starting with areas of broad consensus, like
    fundamental rights and safety, could help make initial progress even if full harmonization
    proves elusive for now. It's encouraging that governments are taking a leadership stance in
    this area, through participation in international summits, including last year's AI Safety
    Summit hosted in the UK, the AI Seoul Summit 2024 and the upcoming Paris Cyber Summit.

    Any legislation resulting from regulatory decisions on AI must address concerns regarding
    liability. Legal frameworks need to be developed to outline guidelines for determining
    responsibility when AI systems cause harm or fail to meet expected standards. Biases in AI
    models, often unintentionally perpetuated through biased training data, risk reinforcing and
    perpetuating societal inequalities. The ethical considerations surrounding AI are not
    secondary concerns but foundational pillars that will shape the responsible development and
    deployment of AI technologies in the future.

    International cooperation will be crucial, as AI technologies are inherently global. Looking
    to international precedents like maritime law for universal standards is potentially a good
    starting point. While it is encouraging that addressing AI ethics is increasingly recognized
    as an urgent priority requiring coordinated global action, we need to speed up our efforts to
    see tangible change within the next five years, or we could hurtle past the point of no
    return, reaping unthinkable consequences on the back of fundamentally flawed and unethical
    AI.

    AI regulation benefits everyone

    AI is rapidly becoming pervasive in society, and it is doing so in a nascent regulatory
    environment. We cannot afford to wait to regulate this technology, but at the same time we
    must acknowledge that government policy and legislation take time to formulate and approve.
    International agreements are also likely to take considerable time to be drafted and
    implemented. Any organization found to be using AI unethically once regulations are rolled
    out will face reputational damage and a loss of public trust. That's reason enough for
    organizations to assess their use of AI now, ensuring they're applying ethical and
    transparent processes to their AI technologies and projects.

    This article was produced as part of TechRadarPro's Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro



    ======================================================================
    Link to news story: https://www.techradar.com/pro/ethical-ai-considerations-ahead-of-regulations


    --- Mystic BBS v1.12 A47 (Linux/64)
    * Origin: tqwNet Technology News (1337:1/100)