Nov 4, 2025

The Human Is the New Attack Vector: Shadow AI, Insider Risk, and Behavioral Defenses

Whitepaper

Introduction

Generative AI tools like ChatGPT, Copilot, and other large language models (LLMs) are rapidly transforming workplace productivity. Yet alongside the benefits, these tools introduce new insider risks that are largely unintentional and behavioral. In the past, insider threats conjured images of malicious employees stealing data or sabotaging systems. Today, well-meaning employees themselves have become a major “attack vector” – pasting sensitive data into public AI chatbots, misinterpreting AI “hallucinations” as facts, or over-trusting AI-generated outputs without verification. The result is an emerging class of insider incidents driven not by malice, but by lack of awareness or oversight. This white paper examines how “shadow AI” (unsanctioned AI tool use) is creating inadvertent security exposures, and outlines a layered defense strategy focused on behavioral safeguards. We discuss real-world examples (from leaked source code to AI-created falsehoods), survey data on policy violations, key risk categories, and practical solutions. The goal is to help CISOs, data protection officers, and security architects enable safe AI use without stifling innovation, using a combination of technical controls, user training, and UX-friendly “nudges” rather than punitive measures. In short, we aim to illuminate how the human factor in AI can be managed as the new front line of defense, not by banning AI, but by guiding its responsible use.

The Rise of Shadow AI and Unintentional Insider Risk

Organizations are witnessing a “shadow AI” phenomenon, echoing the early days of shadow IT. Shadow AI refers to employees using AI tools, platforms, or cloud-based AI services without official approval or IT oversight[1][2]. This trend is exploding: one survey found 78% of employees are already using AI tools on the job, often without clear policies in place[3]. In fact, nearly 45% admitted to using AI tools at work even when their company has banned them[3]. This widespread, sometimes covert adoption of AI is creating a serious governance gap. Only about one-third of organizations have explicit AI usage policies, despite the majority of knowledge workers leveraging third-party AI tools[4]. In other words, productivity is outpacing policy, and security teams are struggling to maintain visibility.

Crucially, the nature of the insider threat is changing. Traditional insider threats involve intentional wrongdoing (espionage, data theft, IT sabotage). By contrast, today’s AI-related insider risks are often unintentional – a product of convenience, curiosity, or pressure to deliver results. Employees paste confidential text into ChatGPT to get quick answers, or rely on AI suggestions to speed up tasks, without considering the security implications. These individuals aren’t trying to harm the company; if anything, they believe they’re being efficient. As one HR consultant put it, “Employees aren’t using banned AI tools because they’re reckless or don’t care…They’re using them because their employers haven’t kept up”[5]. When workers are under pressure to do more with less, they will reach for whatever tools help them – and if leadership hasn’t set any guardrails or provided safe alternatives, employees often don’t even realize they’re creating risk[6].

This shift toward non-malicious insider risk is borne out by research. According to a 2025 insider threat study, negligent or accidental incidents now represent the majority of insider security issues. For example, 55% of insider threat incidents in 2023 were attributed to negligent insiders (employees disregarding policies out of carelessness or convenience), as opposed to deliberate malicious insiders[7]. Moreover, a staggering 88% of data breaches are caused or exacerbated by employee mistakes – simple human errors that inadvertently open the door to incidents[8]. In the context of AI, these mistakes include things like copying internal data into an AI with unclear data handling, blindly following a flawed AI recommendation, or using an unauthorized AI app that may expose data. Threat actors are keenly aware of this “human element”: they can exploit insiders’ lapses or manipulate well-intentioned employees (e.g. via social engineering or malicious AI tools) to gain access to sensitive info[9][10]. In short, humans have become the weak link – the new attack vector – in enterprise AI security.

The good news is that because these risks stem from behavior rather than intent, they can be mitigated by changing behavior: through smart policies, education, and in-the-moment guidance. Before diving into defense strategies, let’s look at how these new insider risks manifest in practice.

Real-World Examples of AI-Related Insider Risk

Organizations worldwide have already encountered incidents that highlight the unintentional dangers of shadow AI use. A few notable examples and findings include:

  • Sensitive Data Leaked via AI Prompts: In early 2023, Samsung Electronics discovered that engineers in its semiconductor division had inadvertently leaked confidential data by typing it into ChatGPT[11]. In three separate instances, employees submitted proprietary code (for debugging and optimization help) and even pasted the transcript of an internal meeting into the public ChatGPT service to get a summary[11]. Because ChatGPT’s free version retains and learns from user inputs, Samsung feared that its trade secrets could become part of the model’s responses to other users[12]. The incident was severe enough that Samsung temporarily banned ChatGPT use and implemented an emergency limit on prompt size (1024 bytes) to stem the data leakage[13][14]. Samsung was not alone – major firms like JPMorgan and Verizon also blocked ChatGPT access once they realized they couldn’t track how employees were using it[15]. These reactions underscore how pasting company data into unsanctioned AI is now recognized as a form of insider data exfiltration, even if done without malicious intent. In fact, data from Cyberhaven (based on monitoring 1.6 million workers) revealed that by mid-2023, 4.7% of employees had pasted confidential data into ChatGPT, and 11% of all content employees paste into ChatGPT is sensitive[16][17]. The average company was leaking sensitive data to the chatbot hundreds of times each week via employee prompts[18]. Each such prompt can be seen as an inadvertent insider “breach” – with proprietary code, client information, or internal strategy documents potentially siphoned into an external model’s memory.

  • “Shadow AI” Tools Bypassing Policy: Employees often adopt AI tools without any visibility by IT, creating a shadow ecosystem of AI apps. A 2025 survey by Newsweek/Anagram found that nearly 1 in 2 employees (45%) knowingly used AI tools that their employer had explicitly banned[3]. Additionally, 57% of employees worldwide admit to hiding their AI use from supervisors[19]. This means policies alone (or outright bans) are frequently ignored on the ground. The motivation is typically productivity: 40% of workers said they would violate company policy if it helped them complete a task more efficiently[20]. Shadow AI usage is pervasive – in one global report, 78% of respondents were using some form of AI at work without formal approval[21]. Almost every organization has this issue: a separate data security study found 98% of companies have employees using unsanctioned apps, including generative AI tools[22]. The risks of shadow AI are manifold. Employees may input confidential data into AI platforms with “unclear data handling practices”, potentially sending sensitive information to external servers (in one case, a popular AI tool processed user prompts on servers in a foreign country, raising compliance concerns)[23][24]. Or they might use browser extensions and plugins that quietly funnel data to third-party AI APIs without any security review[25]. Traditional security tools often fail to detect this, as the traffic looks like ordinary HTTPS web usage and the AI apps aren’t flagged as malware[26]. This creates a dangerous blind spot: data could be flowing out under the guise of normal work to places with no oversight[27]. IBM reported in 2023 that 13% of organizations had already experienced an AI-related breach, and 97% lacked proper controls over AI access – illustrating how unchecked shadow AI can lead directly to data compromise[28][29].

  • Hallucination Mishaps and Misinformation: So-called AI “hallucinations” – when an AI confidently generates false or fabricated information – have already caused real-world damage in workplaces. One headline example occurred in the legal industry: in 2023, a pair of attorneys famously filed a court brief that cited numerous fake case precedents, which had been entirely invented by ChatGPT. The lawyers had asked the AI for relevant case law and, trusting the output without verification, ended up submitting a brief with at least six non-existent cases, drawing sanctions from the judge[30]. In another case, an airline faced public embarrassment when its AI-powered customer service chatbot invented a policy that didn’t exist – a discount “bereavement fare.” A Canadian tribunal actually forced the airline to honor the phantom discount that the AI had quoted to a customer, leading to financial loss and “days of embarrassing headlines.”[31]. These incidents highlight how over-trusting AI output can translate into concrete legal, financial, and reputational harm. The risk is not only that an AI might be wrong – it’s that it sounds right enough that busy employees believe it. If staff treat AI responses as authoritative without validation, errors propagate. A Newsweek survey found 66% of employees who use AI admitted they do not verify the accuracy of AI outputs[32]. Over half had already encountered mistakes or “hallucinations” from unvetted AI use in their work[32]. From bogus analytics in reports to AI-written content full of fabrications, hallucinations can easily slip through if humans assume “the computer must be correct.” The damage ranges from minor (wasted time, confusion) to severe – for instance, publishing falsified information can trigger compliance issues or lawsuits. 
(In one media example, multiple news outlets unknowingly published an AI-generated summer reading list where 10 of the 15 book titles were completely made-up, causing a credibility crisis[33][34].) The bottom line: when humans misinterpret or fail to recognize AI hallucinations, the organization bears the risk – whether it’s a lawyer facing sanctions, a business making a bad decision, or a company accidentally spreading false information.

  • Over-Reliance on Unvalidated Outputs: Even when AI outputs aren’t obvious hallucinations, uncritical reliance on them can be dangerous. Many employees treat generative AI as an oracle or final authority, when in fact it should be a draft or helper. For example, an HR team might use an AI to draft a job posting and hastily post it, not realizing the AI inserted extraneous requirements (as happened when an AI wrote an entry-level job description that mistakenly required 5-7 years experience, resulting in zero applicants)[35]. Or a developer might use code from an AI assistant without security review, inadvertently introducing a vulnerability. These scenarios occur because the human fails to validate the AI’s work. It’s telling that fewer than half of employees globally have received any training on AI[32] – many don’t know how to double-check AI outputs or assume that if the AI produced it, it must be “approved.” This over-reliance is essentially a new type of human error: trusting software without verification. As one study noted, “employees are willing to trade compliance for convenience”[36][37]. Unfortunately, convenience-driven shortcuts have led to incidents of data exposure, compliance violations, and damaged trust within organizations[38]. In summary, failing to put a human-in-the-loop for critical uses of AI can turn a single incorrect output into a major issue. Unvalidated AI output is essentially unvetted data – and treating it as truth is a recipe for risk.

These examples illustrate that the threat surface has expanded to everyday behaviors. An employee rushing to meet a deadline might unintentionally cause a data leak or a strategic blunder via an AI tool. The intent isn’t malicious, but the impact can be just as severe as a traditional insider attack. Next, we categorize these risks more formally and then explore defenses to mitigate them.

Key Categories of AI-Related Insider Risks

1. Prompt Data Leakage (Confidential Input Exposure): Every time an employee enters a prompt into an AI chatbot or generative model, there’s a possibility they include sensitive information in that query. “Prompt leakage” refers to confidential or regulated data leaving the organization’s secure boundary via an AI prompt. This could be customer PII, source code, financial records, or strategy documents – any data pasted or uploaded into an external AI service. Unlike sending an email or uploading a file (which corporate DLP systems might catch), employees don’t always realize that entering text into a website is effectively transmitting data outside. The security concern is twofold: (a) The data could be stored by the AI provider and potentially used to train models or be seen by other users (as was the concern in the Samsung case)[12]. And (b) if the AI service is compromised or has a bug, that sensitive input might leak. We saw a glimpse of that when a ChatGPT bug briefly exposed snippets of other users’ chat histories to random people[39]. The scale of prompt leakage is significant – in early 2023, one data security firm observed on average hundreds of sensitive data entries into ChatGPT per company per week, with confidential text, internal documents, and even source code being input by employees[17][40]. Over 4% of all prompts to generative AI were found to contain sensitive corporate data in one broad analysis[41]. This category of risk is essentially an inadvertent data exfiltration: an employee, often just trying to get AI assistance, ends up funneling company secrets into a system outside the company’s control.
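To make the detection side of this concrete, the sketch below shows the kind of lightweight pattern check a prompt-inspection hook could run before text leaves the corporate boundary. The pattern set and names are illustrative assumptions, not any vendor’s detection logic – a real DLP engine would add classifiers, document fingerprints, and exact-match dictionaries:

```python
import re

# Illustrative patterns only -- real DLP engines combine regexes with
# classifiers, fingerprints, and exact-match dictionaries.
SENSITIVE_PATTERNS = {
    "ssn":        re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key":    re.compile(r"\b(?:sk|pk)[-_][A-Za-z0-9]{20,}\b"),
    "email":      re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ip_address": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def scan_prompt(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

hits = scan_prompt("Debug this: connect to 10.0.4.12 with key sk-a1b2c3d4e5f6g7h8i9j0k1l2")
# hits -> ["api_key", "ip_address"]
```

A hook like this can feed either a hard block or, as discussed later, a softer in-the-moment warning to the user.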

2. Shadow AI and Unsanctioned Tool Use: This risk category involves the use of unauthorized AI apps, services, or integrations – any AI tech that has not been vetted or is against policy, but is used “under the radar.” Much like shadow IT, shadow AI flourishes when employees feel a need that official tools don’t meet[1]. For example, if the company hasn’t provided an approved AI coding assistant, a developer might quietly use a free online AI tool to help with code, or an analyst might use a sketchy AI website to generate a report. The dangers here are multifaceted. First, these tools may have unknown security postures – they could be logging data in unsecured ways or even be malicious outright. (Security researchers warn that attackers might create fake “AI” web tools that lure users into inputting sensitive info, essentially a novel phishing method[42].) Second, even legitimate AI services can be risky if used without oversight: for instance, an employee using a new AI SaaS could inadvertently upload a client list or a design document, not realizing the service stores it or shares it. In one stark example, a new AI-powered app called “DeepSeek” was banned by the U.S. Congress after it came to light that its data was processed overseas, raising fears about espionage or privacy law violations[43][24]. Shadow AI also includes the use of browser plugins or AI features embedded in other software that employees enable on their own[25]. Because IT doesn’t know these tools are in use, there’s zero visibility or control – a true blind spot. An employee might be using a browser extension that sends every webpage they visit to an AI for summarization, inadvertently transmitting internal web app data to an external server. Or they might enable an “AI meeting assistant” that transcribes company meetings to an external cloud. Traditional defenses (firewalls, endpoint protection) often miss this because the AI traffic is encrypted and looks like normal web API calls[26][44]. 
The compliance implications are huge: data residency, privacy regulations, and contractual data safeguards can all be violated unknowingly via shadow AI use. In summary, unsanctioned AI tools amplify the classic insider risk of using unauthorized tech, but with the added twist that these tools actively process and potentially expose the data fed into them.

3. Hallucinations and Misinformed Decisions: Generative AI “hallucinations” refer to the model generating information that is inaccurate, misleading, or entirely fabricated – but often in a very convincing manner. When employees incorporate such output into their work, it can lead to poorly informed decisions or dissemination of false information. The risk here is essentially “garbage in, garbage out”, except the garbage is finely dressed as credible content. We’ve already seen employees embarrassed or organizations harmed because someone trusted an AI’s false answer: from lawyers citing fake cases[30], to a chatbot giving a customer an invented policy[31], to internal reports or emails that include AI-fabricated “facts” or statistics. Hallucinations can occur in any domain – an AI might invent a nonexistent regulatory requirement, misquote a standard, or hallucinate an error message during coding assistance. The danger is highest when the human recipient is not knowledgeable enough or is too time-pressed to double-check. Employees may misinterpret a hallucination as legitimate guidance, leading them to take actions based on falsehoods. For example, if an AI assistant tells a financial analyst “According to SEC Rule X, you must do Y” (when no such rule exists), and the analyst proceeds accordingly, the company could face compliance trouble. Or consider a salesperson using AI to draft an email that unknowingly includes a made-up reference or promise – this could bind the company to something unintentional or simply erode trust with the client if discovered. Hallucinations thus pose a cross-functional risk: legal (fake legal or compliance info can cause penalties), reputational (publishing false info), operational (acting on wrong instructions wastes resources), and even security (an AI might hallucinate a “fact” that a certain system is secure or up-to-date when it isn’t, leading to oversight)[45][46]. 
One particularly dangerous scenario is an AI hallucinating sensitive data: there have been cases where AI responses included what looked like personal data or confidential info – it was false, but if an employee believed it and, say, forwarded it, it could violate privacy rules[47]. The key point is that the AI’s confident tone can lull users into a false sense of security[48]. Without safeguards, hallucinations can easily become accepted as “truth” within an organization’s decision-making process, causing a cascade of mistakes.

4. Over-Trust in Unvalidated Outputs: This category is closely related to hallucinations but broader: it is the general risk of treating AI outputs as correct or final without human validation. Not every unvalidated output is a wild hallucination; it could be a mostly correct result with subtle errors. The risk is that employees may skip normal checks and balances because an AI delivered the answer. For instance, a marketer might use AI to generate a list of the top 10 customer pain points and immediately act on it, not realizing the AI’s list was drawn from outdated internet data and misses what’s true for their actual customers. Or an engineer might rely on an AI-generated configuration for a server without running it past the security team, resulting in an exposure. The human tendency to over-trust automation is well-documented – we see it in GPS directions blindly followed, or spelling errors introduced by autocorrect. With AI, this is amplified because the outputs seem thoughtful and authoritative. In the workplace, using AI output without a second look undermines existing validation processes. One survey noted that 66% of employees using AI do not verify the accuracy of AI-derived results, and more than half have encountered mistakes due to this lack of oversight[32]. Essentially, many users assume the AI’s role is like an infallible expert or a colleague that did their homework – an assumption that can be perilous. Over-trust also ties into automation bias: if an AI suggests a course of action, some might defer to it even if their own experience would say otherwise. This can degrade professional judgment over time. A concrete example is in coding: GitHub’s Copilot can suggest code that works syntactically, but might contain a security flaw or not handle edge cases. If a programmer stops thinking critically, those flaws slip into production. 
Another example is content moderation or data filtering: if AI flags something as safe, an analyst might not double-check it. The antidote to this risk is always a “human in the loop” for important outputs – but if employees become too dependent on AI with no mandate to review, errors will inevitably get through. Over-trust is in some sense the culmination of the prior risks: it’s the failure to treat AI as fallible. Combined with the other issues (data leakage and hallucinations), uncritical reliance can turn a helpful tool into a single point of failure in business processes.

Having identified these key risk areas, the pressing question is how to defend against them without simply banning AI or blaming employees for trying to do their jobs. The solution lies in a layered defense that blends technology, policy, and psychology – creating guardrails that address both the digital and human aspects of the problem.

Behavioral Defense Strategies for AI Insider Risk

Protecting against AI-related insider risks requires a shift in approach. Since much of the risk comes from behavior (well-intentioned but unsafe actions), the defenses must influence behavior in a positive way. A recurring theme is enabling safe AI usage rather than resorting to draconian bans. As one expert noted, “outright banning AI tools rarely works. It can discourage innovation, irritate staff, and push usage further underground”[49]. Instead, organizations should aim to “illuminate before you eliminate” – gain visibility into AI usage and guide it transparently, instead of trying to stamp it out in a whack-a-mole fashion[50]. Below, we outline a layered strategy combining technical controls, real-time user guidance, and cultural/educational measures:

1. Technical Controls and Visibility Tools

Start by upgrading your security toolkit to detect and control AI-related data flows. Traditional DLP and network filters may not recognize an employee copy-pasting sensitive text into a chat interface. New approaches are needed: for example, some organizations have deployed browser-level safeguards that monitor for certain patterns or keywords in web text inputs and can alert or block if an employee tries to submit confidential data to an external site[51]. These can be browser extensions or secure web gateways (SWG) with AI-aware filtering. A secure web gateway can intercept traffic to known AI API endpoints and apply policies – for instance, blocking access to unsanctioned AI services or preventing file uploads to such services[52][53]. Forcepoint reports that its SWG can be configured to dynamically categorize and filter AI tool URLs, decrypt SSL traffic, and enforce policies (like “allow ChatGPT but with no sensitive data uploads”)[54]. In essence, treat generative AI sites as a new category in your web filtering and cloud access security broker (CASB) solutions.
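As a simplified illustration of such per-service policies, the snippet below models an allow/block decision table in which a sanctioned chatbot is permitted but file uploads to it are not. The hostnames and policy fields are hypothetical examples, not an actual SWG configuration (real gateways express this in their own policy languages):

```python
from dataclasses import dataclass

# Hypothetical per-service policy table; hostnames and fields are examples.
@dataclass
class AIPolicy:
    allow_chat: bool      # may employees use the chat interface at all?
    allow_uploads: bool   # may they upload files/attachments?

POLICIES = {
    "chat.openai.com":           AIPolicy(allow_chat=True,  allow_uploads=False),
    "internal-llm.corp.example": AIPolicy(allow_chat=True,  allow_uploads=True),
}
DEFAULT = AIPolicy(allow_chat=False, allow_uploads=False)  # unsanctioned: block

def decide(host: str, is_upload: bool) -> str:
    """Return 'allow' or 'block' for a request to an AI service."""
    policy = POLICIES.get(host, DEFAULT)
    if is_upload:
        return "allow" if policy.allow_uploads else "block"
    return "allow" if policy.allow_chat else "block"
```

The default-deny entry is the important design choice: anything not explicitly vetted falls into the “block or monitor” bucket rather than slipping through.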

Another technical control is data classification at the source and real-time scanning. Since AI prompts are often free-text, using content inspection that looks for sensitive information can help. Some modern DLP systems or Data Detection and Response (DDR) tools can hook into copy/paste actions or analyze text buffers to catch things like someone copying out of a document marked “Confidential” into a browser[55][56]. Advanced solutions integrate with endpoint agents to trace when a user copies data from a corporate app and then immediately pastes into a web form, flagging that as a potential “data egress” event. In fact, the inability to track copy-paste has been a blind spot of legacy DLP, but newer AI data security products are emerging to fill this gap[57][55]. For example, Cyberhaven’s analysis could detect nearly 8,000 attempts per 100k employees per day to paste corporate data into ChatGPT by monitoring endpoint clipboard and browser actions[58][59]. Implementing such monitoring provides crucial visibility – you can’t mitigate what you can’t see[60].
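A toy version of the copy/paste correlation described above might look like the following. The event schema, application names, and 30-second window are assumptions for illustration; real DDR endpoint agents expose far richer telemetry:

```python
# Toy event correlator: flag a paste into a browser that closely follows a
# copy from a corporate app. Schema, app names, and window are assumptions.
CORPORATE_APPS = {"excel", "confluence", "crm"}
WINDOW_SECONDS = 30

def flag_egress(events):
    """events: time-ordered dicts with 'action', 'app', 'ts' (plus 'url' on pastes)."""
    alerts, last_copy = [], None
    for ev in events:
        if ev["action"] == "copy" and ev["app"] in CORPORATE_APPS:
            last_copy = ev
        elif ev["action"] == "paste" and ev["app"] == "browser" and last_copy:
            if ev["ts"] - last_copy["ts"] <= WINDOW_SECONDS:
                alerts.append({"source": last_copy["app"],
                               "dest": ev["url"], "ts": ev["ts"]})
    return alerts
```

Even this crude correlation illustrates why clipboard-aware telemetry closes a gap that file-transfer-oriented DLP never saw.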

CASB and SaaS monitoring also play a role: use CASB to discover what AI SaaS apps or APIs employees might be using without approval[61]. Network analytics can identify unusual connections, like repeated calls to an AI API endpoint from user workstations[62]. An example is monitoring DNS or TLS SNI for known AI services (OpenAI, Anthropic, etc.) and generating alerts if usage exceeds a threshold – this can uncover an “AI power user” or a department heavily relying on an unsanctioned tool. According to one report, JPMorgan initially had to ban ChatGPT partly because they “couldn’t determine how many employees were using the chatbot or for what”[63]. Gaining that telemetry through CASB/SWG logging or endpoint agents is the first step to a smarter policy (maybe you find only one team is using it heavily, and you can intervene with them specifically).
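The kind of threshold alerting described above can be sketched in a few lines. The domain list and daily threshold are illustrative assumptions, not a recommended baseline:

```python
from collections import Counter

# Count per-user connections to known AI hostnames from one day of DNS/SNI
# logs and surface users above a threshold. Domains/threshold are assumptions.
AI_DOMAINS = {"api.openai.com", "chat.openai.com", "api.anthropic.com"}
DAILY_THRESHOLD = 50

def ai_power_users(log):
    """log: iterable of (user, queried_host) pairs -> {user: count} over threshold."""
    counts = Counter(user for user, host in log if host in AI_DOMAINS)
    return {user: n for user, n in counts.items() if n > DAILY_THRESHOLD}
```

Output like this is what lets a security team intervene with a specific team or “AI power user” rather than imposing a blanket ban.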

Crucially, technical controls should be tuned to avoid hampering legitimate use. The aim isn’t to block all AI, but to enforce safe boundaries. For instance, you might allow use of approved AI platforms (perhaps an internal sandboxed LLM or a vetted third-party service) while blocking or tightly monitoring the rest. Data should be classified (via a Data Security Posture Management tool, for example) so that if an employee tries to feed a customer’s SSN or a design document into an AI, the system recognizes the sensitivity and stops or warns on that action[64][65]. Technical defenses provide the backbone: they catch what users miss and serve as a safety net. However, they must be complemented with user-facing measures, because a determined or careless user can find workarounds if purely automated controls are too rigid. This is where behavioral and UX-focused defenses come in.
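One way to express such classification-aware gating is sketched below. The label names and the approved-destination flag are assumptions for illustration; in practice a DSPM or DLP engine would supply the real classification of the source document:

```python
# Gate an outbound AI prompt on the classification of the document the text
# came from. Labels and the approved-destination flag are assumptions; a
# DSPM/DLP engine would supply real classifications.
BLOCKED_LABELS = {"confidential", "restricted"}

def allow_prompt(source_label, destination_approved):
    if destination_approved:
        return "allow"   # sanctioned internal or vetted third-party AI
    if source_label.lower() in BLOCKED_LABELS:
        return "block"   # sensitive data headed to an unsanctioned AI
    return "warn"        # low-sensitivity data: nudge, but let the user proceed
```

Note the middle ground: the “warn” outcome is exactly where the user-facing nudges discussed next come into play, rather than a binary allow/deny.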

2. Real-Time Warnings and Security Nudges

One of the most powerful tools to shape security behavior is the use of “nudges” – gentle, contextual warnings or reminders that appear at the moment of risk. Instead of simply blocking an action with no explanation, a nudge interrupts the user just enough to make them think, but not so much as to feel like a heavy-handed blockade. For AI risks, consider deploying pop-up warnings or banners when users interact with external AI services on corporate devices. For example, if someone goes to paste content into a chatbot, a prompt could appear: “Reminder: Don’t share confidential data with unapproved AI tools” – and maybe require a checkbox confirmation. This aligns with the concept of just-in-time security training: hitting the user with a relevant tip or caution exactly when they’re about to potentially slip[66][67]. Research shows these security nudges can significantly reduce human error by causing people to pause and reconsider[67][68]. They leverage behavioral science – small changes in the decision context can lead to safer choices[69].

In practice, companies like Microsoft and Google have built nudges into their security products (e.g. Outlook warning “This email is from outside your organization, be cautious” is a classic nudge that has been effective in slowing down clicks on phishing emails[70]). For AI usage, a nudge might be as simple as a browser banner whenever a known AI site is accessed, or more sophisticated: an extension that scans the text the user has typed and, if it detects something like an IP address, API key, or customer data, pops up “This looks like sensitive info. Are you sure you want to send it to ChatGPT?”. The goal is not to completely forbid, but to make the user an informed participant in security at the critical moment. As an example, one security awareness firm suggests a nudge like: “Before sharing sensitive data, verify the tool’s legitimacy. Avoid entering company credentials into AI systems unless approved.”[71]. This kind of message, delivered right when an employee is about to possibly enter a password or upload a file to an AI service, can prompt them to double-check the URL or confirm that the AI is sanctioned. Another suggested nudge specifically for AI chats: “This platform is not approved for confidential info. Continue only with non-sensitive data.” Simple, clear language works best, keeping the tone helpful rather than scolding.

A key benefit of nudges is that they can be calibrated to user context and risk level. More severe actions (like trying to paste a large chunk of source code outside) might trigger a stronger warning or even a block pending manager override, whereas low-risk actions get just a light reminder. This adaptive approach prevents “nudge fatigue” – if users get too many pop-ups, they’ll start ignoring them[72][73]. So, organizations should fine-tune how often and when these prompts appear. Machine learning can even be used to optimize nudge timing and frequency so that they remain effective without becoming noise[74].
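A minimal sketch of such calibration appears below: low-severity reminders are rate-limited per user to avoid fatigue, while warnings and blocks always fire. The tiers, cooldown values, and risk mapping are illustrative assumptions:

```python
from typing import Optional

# Tiered nudges with a cooldown on low-severity reminders ("nudge fatigue"
# control). Tiers, cooldowns, and the risk mapping are assumptions.
TIER_FOR = {"low": "remind", "medium": "warn", "high": "block"}
COOLDOWN = {"remind": 3600, "warn": 0, "block": 0}   # seconds between repeats

_last_shown: dict = {}

def nudge(user: str, risk: str, now: float) -> Optional[str]:
    """Return the nudge tier to show, or None if suppressed by cooldown."""
    tier = TIER_FOR[risk]
    last = _last_shown.get((user, tier))
    if last is not None and now - last < COOLDOWN[tier]:
        return None   # suppress a repeat reminder; warnings/blocks always fire
    _last_shown[(user, tier)] = now
    return tier
```

In a real deployment the cooldowns themselves could be tuned per user or learned over time, as noted above, so that prompts stay salient without becoming noise.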

Crucially, nudges preserve user autonomy. They don’t outright prevent action (unless absolutely necessary); instead, they educate at decision points. This is important culturally – employees don’t feel they are being treated like untrustworthy children, but rather like partners in security. When a nudge says “Verify the accuracy of AI outputs – do these facts have references?” it is training the employee in real time to be skeptical of AI in a healthy way. Over time, these prompts can instill better habits: the user might start double-checking AI-generated text for factual consistency even when no prompt occurs, simply because they’ve been conditioned to do so. In effect, well-crafted nudges embed security thinking into daily workflow[75][76]. They act as constant mini-training, reinforcing policies (“Don’t upload confidential files to unsanctioned apps”) at the moment it matters, rather than employees vaguely recalling an annual training module.

For example, Google implemented a just-in-time nudge in their internal systems such that if an employee tried to share a document outside the company and that doc had sensitive content, a pop-up would warn: “This file contains sensitive data. Are you sure you want to share it externally?”[77]. Applying the same concept, an organization could have a prompt when using AI: “This output is AI-generated. Verify important details before using.” or “You are about to use an AI tool that is not officially approved – proceed with caution and do not input confidential info.” Nudges like these make the risk visible to the user, guiding them to safer behavior without outright stopping their work.

3. Education, Training, and Culture

Technology and prompts will help, but they must be underpinned by a strong foundation of user education and a supportive security culture. Employees need to understand why certain AI behaviors are dangerous. As basic as it sounds, many workers truly don’t realize that putting data into ChatGPT is effectively the same as posting it on an external site. Security leaders should update their awareness programs to explicitly cover AI usage guidelines: what is allowed, what is forbidden, and most importantly, the reasoning behind it. For instance, explain to staff that AI providers may retain and reuse input data[12], or that an AI response can include hidden inaccuracies that are hard to spot. Share high-profile examples (like the Samsung incident or the lawyer case) to illustrate consequences in relatable terms. When people see that “someone like me” made that mistake and it caused a serious issue, it hits home more than abstract admonitions.

According to a 2025 global survey, only 47% of employees have received any formal training on AI[32] – meaning over half are figuring out how to use these tools on their own, possibly learning bad habits. Closing this gap is urgent. Training should cover practical skills like how to spot AI “hallucination” red flags (e.g. an output with no cited sources, or an overly confident tone on a subjective matter)[78][79]. It should encourage the habit of verification: cross-check one or two facts from an AI’s answer before trusting the rest. It can also promote a “human in the loop” mindset: employees should know that for any high-impact content (legal documents, financial reports, customer communications), AI is a helper for a draft at best, and human review is mandatory before anything is finalized[80]. Some organizations adopt a rule that AI-generated material must be labeled and cannot be published externally without a person signing off. Such policies set clear expectations that AI is not a replacement for human judgment.

Another key aspect is establishing a blame-free, learning-oriented culture around AI errors. If employees fear punishment for admitting they used a banned AI tool or made a mistake with AI, they will continue to hide in the shadows (making shadow AI even more of a problem). Instead, encourage open dialogue: if someone tried an AI solution, have them share the experience – what worked, what went wrong. This can surface where the real pain points are and where employees felt the need to turn to unsanctioned tools. Bridging the gap between workers’ needs and IT’s provisions is crucial[81]. Often, shadow AI usage is a signal that official tools or processes aren’t meeting certain needs (e.g., generating a report is too slow, or translation services aren’t available, so employees found an AI alternative)[82]. Security teams should collaborate with business units to address these gaps, maybe by introducing approved AI solutions or improving existing workflows. When employees see that raising a request leads to a constructive solution (“IT got us a licensed, secure AI writing assistant because we asked instead of us secretly using ChatGPT”), they’re more likely to bring shadow AI into the light.

Leadership should also emphasize that the goal is to leverage AI safely, not to ban it outright. This message can alleviate the fear that being honest about AI use will get the tool taken away. Multiple experts have echoed that banning is futile – one calling the attempt to block AI use “tempting but challenging and potentially innovation-stifling”, advocating instead for creating approved “AI playgrounds” where employees can experiment within guardrails[83]. By providing a sanctioned environment (say, an internal sandboxed LLM that doesn’t train on inputs, or a vetted third-party AI with a strong contract), organizations give employees a safe option, reducing the temptation to use risky external tools. Training sessions can then direct users: “Use our company-provided AI for X tasks; do not use public tools for Y types of data.”

Finally, security awareness around AI should be continuous, not a one-off. The AI landscape evolves quickly (new tools, new threats like data-poisoning or prompt injection). Include AI topics in regular security newsletters, phishing simulations (e.g., simulating an “AI” offering to do an employee’s work if they just upload a file – would they fall for it?), and annual policy sign-offs. Consider holding interactive workshops or “AI safety days” where employees can learn hands-on how to use AI securely, ask questions, and even share their own tips. Remember that engagement and empowerment are more effective than enforcement alone. An employee who understands the why and how of AI safety is less likely to make a careless mistake than one who only knows “IT said no.” Cultivate champions in each department – tech-savvy volunteers who can help colleagues use AI tools in compliance with the rules and surface any new use cases or issues to the central team.

4. Governance and Layered Policies (Enabling Safe Innovation)

The final layer is the organizational governance framework that ties together technology and behavior. This includes formalizing an AI usage policy and integrating AI risk considerations into existing governance structures (like data protection committees or risk registers). An AI policy should define clear answers to: Which AI tools are approved for use? Which are prohibited? What kinds of data are never allowed to be input to an external AI (e.g., anything classified as confidential or regulated data)? If the organization is developing AI internally, what are the rules around training data and prompt security? By establishing these boundaries, employees have a reference point. According to experts, key actions include defining approved AI tools that meet security standards and specifying what data types can be used with AI models[84][85]. For example, the policy might say: “You may use XYZ AI service for general-purpose tasks with public data, but do not input any customer personal data or code. Our company provides an internal AI assistant for those purposes.” Some companies require labeling AI-generated content or have rules about when you must disclose AI involvement (especially relevant for things like marketing content or external communications)[86][87].
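A policy like this becomes enforceable when it is encoded in a machine-readable form that tooling can consult. The sketch below is a hypothetical mapping of approved tools to permitted data classifications; the tool names and classification labels are invented for illustration and should be replaced with an organization's own taxonomy.

```python
# Illustrative policy table: which data classifications each approved
# AI tool may receive. Tool names and classes are hypothetical examples.
APPROVED_TOOLS = {
    "internal-assistant": {"public", "internal", "confidential"},
    "xyz-ai-service": {"public"},
}

def is_use_allowed(tool: str, data_class: str) -> bool:
    """Check a (tool, data classification) pair against the AI usage policy.

    Unknown tools are denied by default, mirroring an allowlist approach.
    """
    return data_class in APPROVED_TOOLS.get(tool, set())
```

Deny-by-default for unlisted tools matches the allowlist posture the policy describes: a new AI service gets access only after the governance process explicitly approves it.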

However, governance should not be about writing a policy and calling it a day. It needs to be dynamic and involve continuous feedback[88]. One recommendation is to fold AI into the existing data loss prevention and insider risk management processes[85]. That means tracking incidents of AI misuse just as you would other security incidents, and adjusting controls accordingly. If, for instance, you notice many developers are attempting to use Copilot and occasionally exposing repo data, maybe the governance response is to procure a safer, self-hosted coding AI and require its use. Essentially, treat AI risk as an evolving domain – update policies as the technology and usage patterns change. A cross-functional AI governance committee (involving IT, security, legal, HR, and business units) can be useful to evaluate new AI tools and approve them with necessary controls (like purchasing an enterprise version of a tool that doesn’t retain data).

Crucially, the tone of governance should be enabling, not punitive. The mantra should be “secure use, not no use.” As Newsweek reported, “Employers need to stop pretending they can ban their way out of it and start building smart, ethical policies that protect both the business and the people doing the work.”[89]. This means leadership openly acknowledges the productivity gains of AI and expresses a desire to harness it – safely. For example, instead of a stern email forbidding AI, the organization might announce: “We recognize AI tools can help you in your jobs. Here’s our company-approved AI toolkit and guidelines on using it responsibly. If you need an AI capability you don’t have, let’s discuss how to get it securely.” This approach encourages employees to comply because it meets them halfway. They’re more likely to follow rules that still let them accomplish their work with modern tools.

In terms of layered defense, think of it like multiple lines: technology at the perimeter and endpoints (to catch and prevent risky actions), nudges and prompts at the moment of decision (to guide user behavior), education and culture in the background (to instill understanding and trust), and governance over the top (to steer the whole effort and adapt to change). If one layer falters – say a user is savvy enough to try a new AI tool that slips past filters – another layer (like their own training or a peer warning them) might catch the slip. Or if a user absentmindedly ignores a pop-up warning, the network DLP might still block the upload. By combining layers, we avoid single points of failure.

A visual way to imagine this is the classic “defense-in-depth” model: no single solution is foolproof, but each adds hurdles that reduce the chance of an accident. For AI, an employee should have to go through multiple “gates” before a truly dangerous action occurs: e.g. (a) they try to use an AI tool and remember the training – maybe they stop, but if not, (b) the browser extension scans the content and pops up a warning – maybe they stop, but if not, (c) the system blocks the content because it contained classified info. Even if all automated gates fail, (d) security monitoring catches it after the fact and the team can respond (re-training the individual, tightening rules, etc.). This layered approach is echoed in a Forcepoint strategy that blends DSPM, DLP, CASB, and SWG so that data movement to AI is monitored and controlled at every layer, from endpoint to cloud[90][91]. The aim is that employees “can experiment safely, knowing that data movement is watched and governed”[91]. When done right, security becomes a guardrail for innovation rather than a roadblock.
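The sequence of gates described above can be modeled as a simple pipeline in which each gate may pass or block an outbound prompt. This is a hedged sketch of the idea, not any vendor's implementation; the gate functions and trigger strings are deliberately simplified placeholders.

```python
from typing import Callable, List, Tuple

# A gate inspects an outbound AI prompt and returns (allowed, reason).
Gate = Callable[[str], Tuple[bool, str]]

def content_scan(text: str) -> Tuple[bool, str]:
    # Hypothetical gate (b)/(c): block if a confidentiality marker is present.
    if "CONFIDENTIAL" in text:
        return False, "blocked by content scan"
    return True, "content scan passed"

def network_dlp(text: str) -> Tuple[bool, str]:
    # Hypothetical last automated gate: network-level DLP inspection.
    if "CLASSIFIED" in text:
        return False, "blocked by network DLP"
    return True, "network DLP passed"

def defense_in_depth(text: str, gates: List[Gate]) -> Tuple[bool, str]:
    """Run the prompt through each gate in order; any single gate may block it."""
    for gate in gates:
        allowed, reason = gate(text)
        if not allowed:
            return False, reason
    return True, "all gates passed"
```

The point of the structure is that no individual gate needs to be perfect: a prompt must clear every gate to leave the organization, so each layer only has to catch what the earlier ones missed.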

Conclusion

AI’s transformative potential in the enterprise comes with new insider risks that are fundamentally human in nature. The same employees whom companies trust with sensitive data can inadvertently expose that data or make ill-informed choices by misusing AI tools. This doesn’t mean employees are the enemy – on the contrary, they are the key to the solution. By recognizing that the human is the new attack vector, security leaders can pivot to defenses that focus on human-centric design: making it easier for people to do the right thing and harder to slip up. We have seen that shadow AI and unintentional misuse are widespread, driven by gaps in policy, awareness, and tool availability. The answer lies in transparency, guidance, and engagement.

Organizations should strive to create a culture of safe AI innovation, where employees feel empowered to use AI tools within a clear framework of what’s acceptable. This includes providing sanctioned AI resources (so there’s less temptation to go rogue), and continuously communicating the message that security and productivity can go hand-in-hand. The layered defense model we discussed – technical controls, real-time nudges, education, and governance – works together to cover the various failure points. It recognizes that prevention is ideal (stop the data leak or risky action before it happens), but detection and response are still critical (people will make mistakes, so catch them quickly and treat them as learning opportunities rather than occasions for punishment).

In summary, insider risk in the age of AI is less about malicious insiders and more about managing digital behavior. We must update our insider risk programs to account for this new reality. That means monitoring new channels (like AI prompts), training employees on new pitfalls (like hallucinations and data sharing with AI), and implementing controls that are as agile and user-friendly as the AI tools themselves. By doing so, we turn our workforce from a liability into an asset in AI security. After all, who better to defend against accidental AI risks than informed employees on the front lines? As one industry expert aptly noted, “Shadow AI isn’t always bad – it often means employees are trying to be more productive. Rather than ban it, illuminate it.”[50] By shedding light on AI usage and guiding it responsibly, companies can reap the productivity benefits of AI while keeping their data and systems secure. The human element – once thought of as the weakest link – can become the strongest defense, given the right tools and support.

Sources: [11][16][17][21][32][8][92][50] and others as cited in text.

[1] [2] [26] [49] [50] [51] [81] [82] [92] Shadow AI: The Next Insider Threat?

https://www.thecybersecurityreview.com/cxoinsight/shadow-ai-the-next-insider-threat-nwid-1266.html

[3] [5] [6] [19] [20] [21] [32] [36] [37] [38] [83] [89] Nearly Half of Employees Are Using Banned AI Tools at Work - Newsweek

https://www.newsweek.com/nearly-half-employees-are-using-banned-ai-tools-work-2110261

[4] [41] Shadow AI - Insider Risk Glossary

https://www.insiderisk.io/glossary/shadow-ai

[7] [8] [9] [10] Unintentional Insider Threats: How Accidents Put Your Data at Risk

https://www.zerofox.com/blog/unintentional-insider-threats/

[11] [12] [13] [39] Samsung employees leaked corporate data in ChatGPT: report | CIO Dive

https://www.ciodive.com/news/Samsung-Electronics-ChatGPT-leak-data-privacy/647137/

[14] [15] [16] [17] [18] [40] [55] [56] [57] [58] [59] [63] 11% of data employees paste into ChatGPT is confidential | Cyberhaven

https://www.cyberhaven.com/blog/4-2-of-workers-have-pasted-company-data-into-chatgpt

[22] [23] [24] [43] Hidden Risks of Shadow AI

https://www.varonis.com/blog/shadow-ai

[25] [27] [28] [29] [44] [52] [53] [54] [60] [61] [62] [64] [65] [84] [85] [88] [90] [91] What Is Shadow AI and How to Stop It

https://www.forcepoint.com/blog/insights/what-is-shadow-ai

[30] legal AI compliance - The Tech Savvy Lawyer

https://www.thetechsavvylawyer.page/blog/tag/legal+AI+compliance

[31] 10 AI Hallucinations Every Company Must Avoid | Galileo

https://galileo.ai/blog/ai-hallucination-examples

[33] [34] [35] [45] [46] [47] [48] [78] [79] [80] [86] [87] AI Hallucinations Could Cause Nightmares for Your Business: 10 Steps You Can Take to Safeguard Your GenAI Use | Fisher Phillips

https://www.fisherphillips.com/en/news-insights/ai-hallucinations-could-cause-nightmares-for-your-business.html

[42] [71] Embedding Security Behaviors with Nudges: Best Examples & Impact - Keepnet

https://keepnetlabs.com/blog/top-nudge-examples-in-cybersecurity-awareness

[66] [67] [68] [69] [70] [72] [73] [74] [75] [76] [77] Security nudges promise proactive security habits that reduce human errors   | SC Media

https://www.scworld.com/perspective/security-nudges-promise-proactive-security-habits-that-reduce-human-errors

Oximy

The Security Layer for the Age of AI

 Book a Demo

© 2025 Oximy Inc. All Rights Reserved.
