A friend runs operations at a bank you’ve most definitely heard of. Last quarter they had a rapid hiring phase. Dozens of roles. Hundreds of candidates. She used AI to screen resumes, draft interview questions, and compare candidates across criteria. Faster decisions. Better hires.
Then, the bank banned AI on company laptops.
She didn’t stop. She couldn’t. There was no way she could shoulder her workload without AI. So, she switched to her phone. Same work. Same tool. But this time, it was AI on a personal device. It took less than thirty seconds for her to make the switch.
The bank thought they had increased security. In reality, they eliminated their only chance at governance.
Prohibition doesn’t stop AI usage. It stops AI visibility. Your team is using AI right now. The question isn’t whether you’ll allow it. The question is whether you’ll see it.
IBM analyzed 600 organizations. Breaches involving shadow AI cost $670,000 more than breaches without it. 20% of breaches came specifically from unauthorized AI tools. 97% of AI-related breaches happened in systems without proper access controls.
Banning AI didn’t prevent those breaches. Banning AI was the catalyst for them.
Every company faces the same choice: prohibit AI and drive innovation underground or give vague “use responsibly” guidance and watch employees route around bureaucratic bottlenecks. Both approaches fail. Both create shadow AI. And shadow AI is the real problem.
The Shadow AI Economy
MIT studied 300 organizations. Workers at 90% of companies are using personal AI tools for work. Only 40% of companies provide official AI tools. The gap isn’t small. The majority of the workforce is using AI tools their organizations don’t know exist.
Not because they’re reckless. But because the official options are slower, more restrictive, or require approvals that take weeks. My friend at the bank couldn’t wait for procurement cycles. She had hiring deadlines. Her personal tool worked. The ban didn’t change the deadline. It changed visibility.
Harmonic Security analyzed 176,000+ AI prompts from 8,000 users. 6.7% contained sensitive data: customer information, employee records, source code, and financial data, all pasted into tools their companies don’t control.
The pattern is consistent. Prohibition creates shadow usage. Shadow usage creates exposure. Exposure creates breaches.
How Prohibition Feels Right but Fails
Prohibition of AI feels like control. If we ban the tool, we eliminate the risk. Simple. Clear. Enforceable.
In April 2023, Samsung Electronics learned the hard way why this approach doesn’t work. Three employees leaked semiconductor data through ChatGPT. They were debugging code, optimizing programs, and generating meeting minutes. Normal work that exposed proprietary designs.
Samsung immediately banned ChatGPT and every other AI assistant from company devices. Violation meant termination. And they began building a proprietary internal AI platform to replace the banned tools.
The platform took months to build. All the while, employees found workarounds. Samsung eliminated visibility into AI usage. They didn’t eliminate AI usage.
The damage from the leak was permanent. Proprietary chip designs and source code became part of ChatGPT’s training data. Intellectual property permanently exposed. And to cap it off, the ban only eliminated future visibility; it didn’t remove future risk.
When you ban AI but don’t remove the deadline pressure, employees choose speed over compliance. This isn’t a malicious choice. It’s a practical one.
When my friend switched over to her phone, the bank lost their chance to monitor, audit, or govern her AI usage. Same work. Same tool. Now invisible.
Vague Guidance Creates the Same Problem
Most companies don’t ban AI outright. Instead, they issue guidance in the form of “Use AI responsibly.” “Check with your manager.” “Ensure data privacy.”
Sounds reasonable, but it creates an identical shadow AI problem through a different mechanism.
Employees ask, “Can I use AI for this?” The manager says, “Let me check with IT.” IT says, “We need to review the use case.” The review takes two weeks, and by then, the project deadline has passed.
Employees learn the lesson that permission takes longer than forgiveness. They use personal tools and apologize if caught. But they usually don’t get caught, because no one is monitoring personal devices.
Same outcome. Same shadow AI economy. Just a different path to get there. Vague guidance doesn’t prevent shadow AI. It opens the door for it while adding bureaucratic overhead.
What Actually Works to Support Safe AI Adoption
Memorial Sloan Kettering Cancer Center had 87 active AI projects in 2024. No central oversight. No risk assessment. A classic shadow AI problem in healthcare.
To address this, they built a three-lane system based on risk level. Low-risk projects got an “Express Pass” with two-week approval instead of months. High-risk projects went through intensive review.
In one year, they saw a 63% increase in AI project intake compared to 2023. Implementing the new governance structure accelerated innovation by removing bottlenecks while maintaining visibility.
PostNL and federal agencies managing classified information have seen the same pattern. Structure around AI usage enabled faster adoption. They stopped lecturing on how not to use AI and started providing lanes for how to use it.
The Framework to Curb Shadow AI
The pattern across Memorial Sloan Kettering, federal agencies, and PostNL is identical. Risk-based lanes. Decision frameworks. Autonomous operation.
At Doyon Technology Group, it took us 45 minutes to deploy our framework during a single all-hands. Four lanes. Three questions. Four overrides.
Doyon Technology Group’s AI Usage Framework:
- Lane 1: Prohibited. Free tools are prohibited. No free versions of ChatGPT, Gemini, Claude, or Copilot, period. Free tools come with no business agreement, unclear data ownership, and compliance exposure. This is where my friend’s bank stopped. It didn’t work.
- Lane 2: Enterprise Tools. ChatGPT Teams, Microsoft 365 Copilot, Claude Enterprise, Adobe Firefly. These paid tools cover all our internal work and much of our client work; most of our daily work lives here. This is the lane that eliminates shadow AI, because it gives us administrative visibility into AI usage across the enterprise.
- Lane 3: Private Deployment. Internal builds. For federal contracts, tribal government, and regulated environments where enterprise tools can’t meet contract requirements.
- Lane 4: Human-Only. AI can inform. AI cannot perform. Human performs and verifies. For non-negotiable decisions.
Not sure which lane your AI use case falls under? Find out in ten seconds by answering these three questions*:
Question 1: Whose data are you using? Our own internal data goes to Lane 2. Client or partner data goes to Lane 3 at minimum.
Question 2: What breaks if things go wrong? Reputation goes to Lane 2. Money or compliance goes to Lane 3. If human lives or professional licenses are at stake, straight to Lane 4.
Question 3: Who signs their name? If you can’t explain how you got the answer, use a higher lane. Accountability can’t be delegated.
*NOTE: Four triggers override this filter. If any of these appear in your task, it goes to Lane 4 automatically:
- Final financial decisions. Certified projections. Budget approvals. Contract pricing.
- Interpreting regulations. Binding policy. Contract terms. Legal advice.
- Safety and irreversibility. Crisis situations. Physical safety. Mental health. Security incidents. Can you undo this if AI got it wrong? If no, Lane 4.
- Representation. Speaking on behalf of company, client, partner, government.
Each trigger overrides lower-risk lanes. If any appear in your task, skip the flowchart and go straight to Lane 4: Human-Only.
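For teams that want to bake the filter into an intake form or ticketing workflow, here is a minimal sketch of the routing logic in Python. It is an illustration under stated assumptions, not an official Doyon implementation: the field names, string tags, and trigger keywords are invented for this example, and the crude keyword match on the task description stands in for a human recognizing an override trigger.

```python
from dataclasses import dataclass

# The four lanes from the AI Usage Framework. Lane 1 never appears in routing
# results because free tools are simply banned, not routed to.
LANE_1_PROHIBITED = "Lane 1: Prohibited (free tools)"
LANE_2_ENTERPRISE = "Lane 2: Enterprise Tools"
LANE_3_PRIVATE = "Lane 3: Private Deployment"
LANE_4_HUMAN_ONLY = "Lane 4: Human-Only"

# Override triggers: any of these sends the task straight to Lane 4.
# (Illustrative keywords only; a real intake form would use checkboxes.)
OVERRIDE_TRIGGERS = {
    "final financial decision",   # certified projections, budget approvals, contract pricing
    "legal interpretation",       # regulations, binding policy, contract terms, legal advice
    "safety",                     # crisis, physical safety, mental health, security incidents
    "representation",             # speaking on behalf of company, client, partner, government
}


@dataclass
class Task:
    description: str
    data_owner: str         # "internal", "client", or "partner"
    failure_impact: str     # "reputation", "money", "compliance", "lives", or "license"
    can_explain_output: bool


def route_to_lane(task: Task) -> str:
    """Apply the four overrides and the three questions to pick a lane."""
    # Overrides first: if any trigger appears, skip the questions entirely.
    lowered = task.description.lower()
    if any(trigger in lowered for trigger in OVERRIDE_TRIGGERS):
        return LANE_4_HUMAN_ONLY

    # Question 2: what breaks if things go wrong?
    if task.failure_impact in {"lives", "license"}:
        return LANE_4_HUMAN_ONLY

    # Question 1 and Question 2: client/partner data, money, or compliance
    # means Lane 3 at minimum; Question 3 pushes unexplainable output higher.
    if task.failure_impact in {"money", "compliance"} or task.data_owner in {"client", "partner"}:
        return LANE_3_PRIVATE if task.can_explain_output else LANE_4_HUMAN_ONLY

    # Internal data, reputational risk, explainable output: enterprise tools.
    return LANE_2_ENTERPRISE if task.can_explain_output else LANE_3_PRIVATE


# Example: resume screening on internal applicant data with explainable output.
print(route_to_lane(Task("Screen resumes for operations roles", "internal", "reputation", True)))
```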
People don’t need lectures. They need lanes. Give them guardrails so they can be productive and safe.
The Visibility Imperative
My friend at the bank is still using AI. Still screening candidates on her phone. Still making faster hiring decisions than her peers. The bank still thinks their prohibition protects them.
They’re wrong. They eliminated the only mechanism that could protect them: visibility into their teams’ AI usage. You can’t audit what you can’t see. You can’t govern what happens on personal devices. You can’t prevent breaches in shadow systems.
The regulators don’t care that you banned AI. They care that customer data was leaked. They care that breach notifications were late. They care that governance controls didn’t exist.
The smartest organizations aren’t asking “Can we use AI?” They’re asking “Which AI tool should we use for this?” The question changes from permission to navigation. From control theater to real governance.
Over the next eighteen months, expect three shifts:
- First, insurance carriers will start asking about AI governance frameworks during cyber policy renewals. No framework means higher premiums or no coverage.
- Second, regulators will treat “we banned it” the same as “we had no controls” when assessing penalties.
- Third, shadow AI incidents will become board-level conversations after the first eight-figure settlement.
The organizations that build lanes early can avoid shadow AI breaches and accelerate past competitors still paralyzed by permission theater. Navigation scales. Prohibition doesn’t.
Check your internal audit logs; your team is using AI right now. Build your four lanes this week. Start with Lane 2. List the three AI tools you’ll officially approve. Send the list to your team by Friday.
If you’re ready to implement an AI usage framework but need some help, connect with Doyon Technology Group today. Our AI consultants will partner with you through the implementation process, so you can feel confident in your journey forward.
––––––

About the Author
Greg Starling serves as the Head of Emerging Technology for Doyon Technology Group. He has been a thought leader for the past twenty years, focusing on technology trends, and has contributed to published articles in Forbes, Wired, Inc., Mashable, and Entrepreneur magazines. He holds multiple patents and has been twice named as Innovator of the Year by the Journal Record. Greg also runs one of the largest AI information communities worldwide.
Doyon Technology Group (DTG), a subsidiary of Doyon, Limited, was established in 2023 in Anchorage, Alaska to manage the Doyon portfolio of technology companies: Arctic Information Technology (Arctic IT®), Arctic IT Government Solutions, and designDATA. DTG companies offer a variety of technology services including managed services, cybersecurity, and professional software implementations and support for cloud business applications.

