Close Menu

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    What's Hot

    Millions of low-cost Android devices turn home networks into crime platforms

    Kilmar Abrego Garcia, wrongly deported to El Salvador, brought back to U.S. to face human smuggling charges

    iPhone text with DMV fines is fake

    Facebook X (Twitter) Instagram
    Facebook X (Twitter) Instagram Pinterest VKontakte
    Sg Latest NewsSg Latest News
    • Home
    • Politics
    • Business
    • Technology
    • Entertainment
    • Health
    • Sports
    Sg Latest NewsSg Latest News
    Home»Technology»CISOs: Don’t block AI, but adopt it with eyes wide open
    Technology

    CISOs: Don’t block AI, but adopt it with eyes wide open

    AdminBy AdminNo Comments3 Mins Read
    Facebook Twitter Pinterest LinkedIn Tumblr Email
    Share
    Facebook Twitter LinkedIn Pinterest Email


    The introduction of generative AI (GenAI) tools like ChatGPT, Claude, and Copilot has created new opportunities for efficiency and innovation – but also new risks. For organisations already managing sensitive data, compliance obligations, and a complex threat landscape, it’s essential not to rush into adoption without thoughtful risk assessment and policy alignment.

    As with any new technology, the first step should be understanding the intended and unintended uses of GenAI and evaluating both its strengths and weaknesses. This means resisting the urge to adopt AI tools simply because they’re popular. Risk should drive implementation – not the other way around.

    Organisations often assume they need entirely new policies for GenAI. In most cases, this isn’t necessary. A better approach is to extend existing frameworks – like acceptable use policies, data classification schemes, and ISO 27001-aligned ISMS documentation – to address GenAI-specific scenarios. Adding layers of disconnected policies can confuse staff and lead to policy fatigue. Instead, integrate GenAI risks into the tools and procedures employees already understand.

    A major blind spot is input security. Many people focus on whether AI-generated output is factually accurate or biased but overlook the more immediate risk: what staff are inputting into public LLMs. Prompts often include sensitive details – internal project names, client data, financial metrics, even credentials. If an employee wouldn’t send this information to an external contractor, they shouldn’t be feeding it to a publicly-hosted AI system.

    It’s also crucial to distinguish between different types of AI. Not all risks are created equal. The risks of using facial recognition in surveillance are different from giving a developer team access to an open-source GenAI model. Lumping these together under a single AI policy oversimplifies the risk landscape and may result in unnecessary controls – or worse, blind spots.

    There are five core risks that cyber security teams should address:

    Inadvertent data leakage: Through use of public GenAI tools or misconfigured internal systems.

    Data poisoning: Malicious inputs that influence AI models or internal decisions.

    Overtrust in AI output: Especially when staff can’t verify accuracy.

    Prompt injection and social engineering: Exploiting AI systems to exfiltrate data or manipulate users.

    Policy vacuum: Where AI use is happening informally without oversight or escalation paths.

    Addressing these risks isn’t just a matter of technology. It requires a focus on people. Education is essential. Staff must understand what GenAI is, how it works, and where it’s likely to go wrong. Role-specific training – for developers, HR teams, marketing staff – can significantly reduce misuse and build a culture of critical thinking.

    Policies must also outline acceptable use clearly. For example, is it okay to use ChatGPT for coding help, but not to write client communications? Can AI be used to summarise board minutes, or is that off-limits? Clear boundaries paired with feedback loops – where users can flag issues or get clarification – are key to ongoing safety.

    Finally, GenAI use must be grounded in cyber strategy. It’s easy to get swept up in AI hype, but leaders should start with the problem they’re solving – not the tool. If AI makes sense as part of that solution, it can be integrated safely and responsibly into existing frameworks.

    The goal isn’t to block AI. It’s to adopt it with eyes open – through structured risk assessment, policy integration, user education, and continuous improvement.

    Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
    Admin
    • Website

    Related Posts

    Millions of low-cost Android devices turn home networks into crime platforms

    iPhone text with DMV fines is fake

    Goodbye Mr. Nice Guy? Investors dump Tesla on bet Trump may lash out at Musk through his car company

    Safecracking: The right combination – CBS News

    Add A Comment
    Leave A Reply Cancel Reply

    Editors Picks

    Microsoft’s Singapore office neither confirms nor denies local layoffs following global job cuts announcement

    Google reveals “material 3 expressive” design – Research Snipers

    Trump’s fast-tracked deal for a copper mine heightens existential fight for Apache

    Top Reviews
    9.1

    Review: Mi 10 Mobile with Qualcomm Snapdragon 870 Mobile Platform

    By Admin
    8.9

    Comparison of Mobile Phone Providers: 4G Connectivity & Speed

    By Admin
    8.9

    Which LED Lights for Nail Salon Safe? Comparison of Major Brands

    By Admin
    Sg Latest News
    Facebook X (Twitter) Instagram Pinterest Vimeo YouTube
    • Get In Touch
    © 2025 SglatestNews. All rights reserved.

    Type above and press Enter to search. Press Esc to cancel.