Employees in almost every sector have quietly started to use generative AI tools – from public chatbots and browser plug‑ins to translation engines and coding assistants – to speed up their daily work. When these tools are used in an organisation without the knowledge, approval or control of the employer or the IT department, this is known as “shadow AI”.
Recent surveys show that this is no longer a marginal phenomenon. A clear majority of organisations report unauthorised AI use, and in some markets, more than half of employees say they use unapproved consumer AI tools at work at least weekly. For many companies, shadow AI is therefore not a hypothetical future problem, but a reality embedded in everyday workflows and networks.
For Estonian companies, the topic is particularly relevant. Estonia is one of the most digitalised economies in the EU, and employees are generally technologically literate and quick to adopt new tools. This article provides a practical overview of what shadow AI is, the main legal risk areas for employers, and concrete steps companies can take to bring AI use under control in a proportionate, business‑friendly way.
1. Shadow AI – what is it and why are your employees already using it?
The term “shadow AI” is borrowed from the earlier concept of “shadow IT”. It describes situations in which employees use AI tools – such as public chatbots, image or code generators, translation services, or productivity plug‑ins – outside the organisation’s official IT landscape. The AI is typically accessed through a browser, a personal account, or a smartphone, without prior risk assessment, contract review, or data‑protection analysis by the organisation.
Typical examples include employees who paste client correspondence into a chatbot to draft a reply, upload internal reports for summarisation, or ask AI to generate marketing texts or legal opinions based on various documents. In many cases, the motivation is positive and understandable: to save time, reduce routine work, and improve the perceived quality of output.
The core problem is not technology, but the lack of visibility and governance. If the employer does not know which AI tools are used, for what purposes, and with what data, it becomes difficult to assess whether legal requirements are met or to react effectively if sensitive information has been exposed.
In practice, a blanket ban on AI is rarely realistic. A more sustainable approach is to accept that employees will use AI in some form and to manage that use in a controlled, contractually robust, and legally compliant way.
2. Where the law meets shadow AI: key risk areas
Shadow AI cuts across multiple areas of law. For most Estonian employers, the main risks concern confidentiality and trade secrets, personal data and data protection, and intellectual property and contractual liability.
2.1 Confidential information and trade secrets
Under Estonian law, business secrets are protected primarily by the Restriction of Unfair Competition and Protection of Business Secrets Act, which implements the EU Trade Secrets Directive. In simplified terms, information may qualify as a business secret if it is not generally known, has commercial value because it is secret, and the business has taken reasonable steps to keep it confidential.
When employees copy contracts, source code, pricing models, product roadmaps, or other sensitive information into AI tools, there is a real risk that these “reasonable steps” can no longer be demonstrated. Many public AI services reserve broad rights to log and store prompts and outputs, to use them to improve their models, and to share them within a wider corporate group, sometimes outside the EU. Even if the probability of a concrete leak appears low, the organisation may struggle to convince a court that it has handled its business secrets with due care.
If confidential client information is involved, unauthorised disclosure may also breach contractual confidentiality clauses, sector‑specific secrecy obligations or professional ethics rules (for example, in regulated professions such as the Bar). This can expose the company to damages claims, regulatory sanctions, and reputational harm.
For this reason, shadow AI should be treated as a potential data leak channel. Organisations that have invested heavily in access controls, encryption, and secure collaboration tools can inadvertently undermine those efforts if employees informally “copy‑paste” the same information into external AI platforms that the company neither controls nor has properly assessed.
2.2 Personal data, GDPR and Estonian data‑protection law
In many real‑life prompts, employees include personal data – names, contact details, notes, customer complaints, or even health‑related information. From a legal perspective, the employer remains the data controller and is responsible for ensuring that processing is lawful, transparent, and proportionate, even if the employee chose the tool without permission.
The EU General Data Protection Regulation (GDPR) requires, among other things, a clear legal basis for processing, a defined purpose, data minimisation, appropriate security measures, and, where applicable, a valid mechanism for international data transfers. When personal data is sent to public AI services outside the organisation’s control, it is often unclear whether these conditions are fulfilled, who acts as processor or joint controller, and where the data is actually stored.
High‑risk scenarios include the use of AI in recruitment, employee evaluation, creditworthiness assessment, or other profiling that can significantly affect individuals. In such cases, the GDPR may require a data protection impact assessment (DPIA), enhanced transparency, and safeguards against discriminatory or unfair outcomes. Shadow AI use bypasses these checks and increases the likelihood that hidden bias or automated decisions go unnoticed.
If a data subject later exercises their rights – for example to obtain a copy of their data or to know which systems have been used to profile them – it may be very difficult for the employer to reconstruct how shadow AI tools were involved. This, in turn, can lead the Estonian Data Protection Inspectorate to conclude that the controller does not have sufficient control over its processing activities.
2.3 Intellectual property and contractual liability
Generative AI systems are trained on vast datasets that may contain copyrighted works and proprietary materials. Although the legal debate on training data and text‑and‑data mining exceptions is still evolving, companies should be aware that AI outputs can sometimes reproduce or closely resemble third‑party content. If employees use such output in marketing materials, software products, or publications without proper checks, the company may face infringement allegations.
On the input side, employees may feed client documents, designs, databases, or code into AI tools in ways that breach licence terms or non‑disclosure agreements. Many contracts explicitly prohibit sharing certain information with third parties or using it to train external systems. Even where no explicit prohibition exists, such sharing can contradict the commercial expectations between the parties.
Finally, shadow AI increases the risk of poor‑quality or fabricated content (“hallucinations”) reaching clients or partners. If business decisions or legal assessments are based on erroneous AI output that was not properly verified, this can lead to contractual disputes, liability for incorrect advice, and damage to the organisation’s credibility.
3. The new regulatory landscape: EU AI Act and Estonian supervision
Beyond existing rules, companies must now also comply with the EU Artificial Intelligence Act (AI Act) – the first comprehensive AI regulation in the world. The AI Act follows a risk‑based approach: it prohibits certain unacceptable AI practices, imposes requirements on high‑risk AI systems, and introduces transparency duties for a broader set of AI use cases.
Most Estonian businesses will not be “providers” of AI systems in the strict sense of the Act, but they will often be “deployers” – organisations that use an AI system under their authority in a professional context. For deployers of high‑risk AI systems, the Act imposes obligations such as using the system in accordance with the provider’s instructions, ensuring human oversight, and keeping relevant logs.
Many AI applications in employment and access to essential private services fall into the high‑risk category, including AI tools used for recruitment, candidate screening, employee evaluation, and certain automated decision‑making in HR and financial services. If such tools are adopted informally by business units as shadow AI – without classification, documentation, or oversight – the company may find itself subject to high‑risk obligations without having the governance structures required by the Act.
Estonia has begun to organise supervision of the AI Act across several authorities. The Consumer Protection and Technical Regulatory Authority (TTJA) has been designated as one of the competent authorities for AI systems and provides public guidance on AI‑related obligations, while the Data Protection Inspectorate (AKI) continues to supervise compliance with data‑protection rules, including in AI‑driven processing.
Mapping and governing AI use – including shadow AI – will be essential for determining which obligations apply and for demonstrating compliance to regulators, clients, and business partners.
4. Building an AI‑safe organisation: practical steps for EU companies
There is no single template that fits every organisation, but certain building blocks appear in most effective AI‑governance frameworks. The following steps can help employers bring shadow AI out of the shadows and turn it into a managed tool rather than an unmanaged risk.
Map how AI is actually used in your organisation
Start with the assumption that AI is already being used, even if no official tools have been rolled out. Conduct interviews and anonymous surveys, speak with key teams such as marketing, HR, IT and software development, and review technical logs where appropriate. The goal is not to punish, but to understand which tools are used for which tasks and what kinds of data are involved.
Decide on your risk appetite and define allowed use cases
A blanket prohibition is likely to be ignored in practice. Instead, distinguish between low‑risk and high‑risk use cases. For example, you might permit the use of public AI tools for generic brainstorming or language polishing, provided that no personal data or business secrets are included, while prohibiting their use for processing client files, HR data, or confidential technical documentation. Where AI is central to a process, consider using enterprise‑grade solutions with EU data hosting and clear contractual safeguards.
Adopt an AI‑use policy and integrate it into internal rules
An AI policy should define key terms, specify roles and responsibilities, list approved tools, and set out clear “do’s and don’ts” with concrete examples. It should also describe the process for approving new tools, the minimum requirements for vendors, and how employees should document reliance on AI in their work products.
Update employment documentation and provide training
Employment contracts, job descriptions, and internal work rules should reflect the organisation’s expectations around the use of AI and digital tools more generally. This includes reinforcing confidentiality obligations in the context of online services and clarifying which misuse of AI may lead to disciplinary consequences. Regular, practical training – tailored to different functions – is essential so that employees understand both the risks and the benefits of AI.
Ensure GDPR and local data protection compliance for AI use
For each significant AI use case, identify the data flows and determine whether personal data is processed. Update records of processing activities, select an appropriate legal basis, review and update privacy notices and, where necessary, carry out data protection impact assessments. Conclude data‑processing agreements with AI vendors, verify sub‑processors and international transfers, and define how you will respond to data‑subject requests relating to AI‑assisted processing.
Protect trade secrets and confidential information
Identify what qualifies as a business secret in your organisation, classify and label such information appropriately, and restrict access on a need‑to‑know basis. Make it explicit in policies and training that business secrets and other sensitive information may not be entered into public AI tools. Where necessary, implement technical controls such as blocking access to certain services, monitoring outbound traffic, and deploying tools to reduce the risk of accidental leakage.
Plan for incidents and establish AI governance
Designate a person responsible for coordinating AI‑related issues. Develop a playbook for dealing with incidents, including suspected data leaks or problematic AI‑assisted decisions, and ensure that it is aligned with your existing data‑breach and crisis‑management procedures. Keep documentation of your risk assessments, decisions, and safeguards to demonstrate that the company is taking reasonable, proportionate steps to manage AI‑related risks.
5. How a law firm can support you – and why it is worth acting now
Shadow AI will not disappear. As AI capabilities become embedded in office software and sector‑specific business applications, the distinction between “using AI” and “not using AI” will quickly blur. The key question for companies is therefore not whether employees use AI, but whether that use is visible, controlled, and legally compliant.
At Magnusson, our technology, AI, and data‑protection experts advise clients across the EU on exactly these questions – from mapping AI use and designing governance frameworks to updating employment documentation, negotiating contracts with AI vendors, and supporting incident response and communication with supervisory authorities.
Addressing shadow AI proactively offers several benefits: it reduces the risk of data leaks and regulatory investigations, preserves the protection of trade secrets, strengthens clients’ and partners’ trust, and prepares the organisation for the requirements of the EU AI Act. Companies that start now will be better placed to harness the advantages of AI in a way that is both innovative and responsible – and to demonstrate to regulators and business partners that they are managing AI risks with the same seriousness as other core compliance topics.
Contact
Sander Peterson
Associate
Intellectual Property, AI Law, Commercial, Data Protection, Gaming, Marketing Law, Technology
Jaanus Mägi
Managing Partner
Capital Markets, Commercial, Corporate and M&A, EU and Competition, Foreign Direct Investment (FDI), Marketing Law, Media, Sports and Entertainment, Retail and consumers, Technology
+372 670 8401 +372 501 2120