Appify Intelligence - AI Development & Automation Specialists
AI Governance

Can AI legally read your clients' bank statements? An Irish guide (2026)

What an Irish accounting practice needs before pasting client data into ChatGPT, Claude or Copilot: GDPR Article 28, the DPC's guidance, and the 2026 vendor residency picture.

For: Owners and partners of Irish accounting practices weighing the legal exposure of putting client data into AI tools

Appify Intelligence Team | 14 May 2026 | 13 minute read
[Image: rows of yellow ring-binder folders on office shelves, the kind an accounting practice keeps client files in]

A partner at a 25-staff practice in Cork pastes a client's January bank statement into Claude to summarise the unusual movements. The statement contains the client's name, IBAN, two transfers from a counterparty in France, and a direct debit to a private healthcare provider. That paste is, depending on which Claude product, which contract, and which routing path, somewhere on a spectrum from "fine, you have a DPA and EU-residency inference" to "you have just triggered an Article 28 GDPR breach, a CCAB-I (Consultative Committee of Accountancy Bodies - Ireland) confidentiality breach, and an undisclosed-AI-use issue at your next PI (professional indemnity) renewal."

The rules are not obvious, and the vendor picture in 2026 is moving fast. OpenAI shipped EU data residency in February 2025; the native Anthropic API still has none; Microsoft 365 Copilot's optional EU-only inference setting was flipped to default-off for new tenants in late March 2026; and the Anthropic subprocessor inside M365 Copilot is disabled by default for EU/EFTA (European Free Trade Association)/UK tenants, but a single admin click changes that.

The five questions every paste should answer

Before any piece of client data enters an AI tool, answer all five:

  1. DPA. Is the tool covered by a current data processing agreement (DPA) signed by the practice, naming the specific AI product, with standard contractual clauses (SCCs) for any EU-to-third-country transfer?
  2. Training. Is the data being used to train future models? Vendor commercial-tier defaults are usually no; consumer tiers are usually yes.
  3. Residency. Where is inference physically happening? In the EU, in the US, or somewhere routed through "wherever there is capacity"?
  4. Engagement letter. Does the client engagement letter authorise this kind of processing, with this category of recipient?
  5. Audit. If the client asked you tomorrow, could you tell them exactly what you did with their data, where it went, and how to delete it?

Each is a hard floor under a specific provision of law. The rest of the post unpacks each one and shows, vendor by vendor, where the answer is yes in 2026-05.

GDPR Article 28: what an Irish practice needs from any AI vendor

When the practice pastes client data into an AI tool, the practice is the controller and the AI vendor is the processor. That makes GDPR Article 28 the load-bearing legal text.

Article 28(3) requires a binding contract between the controller and processor that sets out:

  1. processor-only processing on documented instructions;
  2. staff confidentiality undertakings;
  3. security appropriate to the risk;
  4. sub-processor controls, including written authorisation and equivalent contractual obligations;
  5. an audit right;
  6. assistance with data-subject requests and breach notification;
  7. return or deletion of the data at the end of processing.

The full Article is on EUR-Lex.
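Those seven requirements translate directly into a DPA-review checklist. A minimal sketch in Python, with clause labels of our own invention (illustrative shorthand, not statutory text):

```python
# Hypothetical checklist for reviewing a vendor DPA against GDPR Article 28(3).
# The clause labels below are our own shorthand, not statutory wording.
ARTICLE_28_3_CLAUSES = [
    "documented_instructions",    # processing only on documented instructions
    "staff_confidentiality",      # confidentiality undertakings by staff
    "security_measures",          # security appropriate to the risk
    "subprocessor_controls",      # written authorisation + equivalent obligations
    "audit_right",                # controller audit and inspection right
    "dsr_and_breach_assistance",  # help with data-subject requests and breaches
    "return_or_deletion",         # return or delete data at end of processing
]

def missing_clauses(dpa_review: dict) -> list[str]:
    """Return the Article 28(3) clauses a DPA review has not ticked off."""
    return [c for c in ARTICLE_28_3_CLAUSES if not dpa_review.get(c)]

# Example: a click-through DPA that covers everything except an audit right.
review = {c: True for c in ARTICLE_28_3_CLAUSES}
review["audit_right"] = False
print(missing_clauses(review))  # -> ['audit_right']
```

Anything in the returned list is a gap to raise with the vendor before client data moves.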

The Irish Data Protection Commission (DPC), Ireland's lead authority for the GDPR, sets out specifics for AI tools in its July 2024 blog on AI, LLMs and Data Protection. The DPC says any controller using a generative AI service "should obtain clarity on what personal data the AI product uses, how the provider uses that personal data, and whether the personal data is retained or reused." A click-through DPA on a vendor's account page is the bare minimum and rarely covers the actual product configuration in use. The practice should know what the DPA names. If it names "Claude" but you are using "Claude on AWS Bedrock", the gap matters.

Two DPC actions in the last 18 months show the regulator is acting on AI cases: the injunctive-relief action against X (Twitter) over the training of Grok in September 2024, and the statement on Meta AI's training pause on 21 May 2025. Both involved EU users' personal data being processed for AI training without an adequate legal basis. The DPC's 2024 enforcement totals also netted EUR 652 million in fines across its caseload. AI-related regulatory action against controllers and processors in Ireland is operational.

The framework tightened further in December 2024 when the European Data Protection Board (EDPB), the EU-level body that coordinates national DPAs, adopted Opinion 28/2024 on AI models and personal data at the request of the Irish DPC. The Opinion sets a case-by-case test for whether an AI model can be considered anonymous, and a three-step test for any controller relying on legitimate interest as the lawful basis for AI processing. The DPC's welcome statement makes plain it intends to apply the Opinion in supervision. For a practice, the takeaway is that the EDPB Opinion now sits alongside Article 6 lawful basis as the framework the regulator will apply to AI-specific processing.

The Garante fine has been annulled

One fine often quoted in vendor risk decks, the Italian Garante's EUR 15 million decision against OpenAI from 2 November 2024, was widely cited as the first major generative-AI GDPR enforcement. It is no longer binding: the Court of Rome annulled the decision on appeal in judgment 4153/2026 on 18 March 2026. Decks still quoting it as binding precedent are out of date.

Vendor data-residency status as of 2026-05

This is the section that ages fastest. Date-stamped: 2026-05-14. Verify each row at the vendor's documentation before signing a contract. Several vendor help-centre pages return access-blocks to automated fetchers, so a human visit to each source link below is part of the work.

OpenAI API (Projects with EU residency)
EU residency in 2026-05: Yes, in-region, with the Zero Data Retention amendment requested.
Training opt-out: Yes (default on the API).
Notes: Dublin / EU region available for ChatGPT Enterprise, ChatGPT Edu, and the API Platform.
Source: openai.com/index/introducing-data-residency-in-europe

Anthropic Claude API (native)
EU residency in 2026-05: No. US infrastructure; EEA-to-US transfer under SCCs with EDPB 01/2020 supplementary measures.
Training opt-out: Yes (training excluded by default on commercial tiers; 7-day API log retention since 14 September 2025).
Notes: The default Anthropic API route does not give EU residency.
Source: privacy.claude.com server-locations article

Claude via AWS Bedrock
EU residency in 2026-05: Yes; EU inference profiles available.
Training opt-out: Yes.
Notes: The current EU residency path for Claude in production accounting workflows.
Source: vendor docs at AWS Bedrock; cross-reference at contract time.

Claude via Google Vertex AI
EU residency in 2026-05: Yes; EU inference profiles available.
Training opt-out: Yes.
Notes: The alternative EU residency path.
Source: vendor docs at Vertex AI; cross-reference at contract time.

Microsoft 365 Copilot
EU residency in 2026-05: EU Data Boundary covers data at rest since 1 March 2024. Flex Routing is ON by default for tenants created after 25 March 2026, which means inference can leave the EU Data Boundary (US, Canada, Australia) during peak load. An admin must opt out in the M365 admin centre to stay EU-only.
Training opt-out: Tenant data is not used to train foundation LLMs.
Notes: Default flipped in late March 2026 for new tenants; April 2026 for the wider population.
Source: Microsoft Learn: Copilot Flex Routing and M365 Copilot privacy

Microsoft 365 Copilot - Anthropic models (added 7 January 2026)
EU residency in 2026-05: Out of scope for the EU Data Boundary. Disabled by default for EU, EFTA, and UK tenants.
Training opt-out: n/a (model-vendor terms apply).
Notes: If an IT admin enables the Anthropic models inside M365 Copilot, data leaves the EU Data Boundary.
Source: Microsoft: connect to AI subprocessor

Google Vertex AI (Gemini)
EU residency in 2026-05: Yes, in four European regions: europe-west9 (FR), europe-west2 (UK), europe-west3 (DE), europe-west4 (NL). ML processing and data at rest stay in the selected region.
Training opt-out: Yes; the Vertex API is not used for training.
Notes: Gemini 3.x models are not yet in EU regions as of May 2026. For GDPR-bound workflows, the available EU options are Gemini 2.5 Pro and 2.0 Flash in europe-west4.
Source: Vertex AI data residency and Gemini API regions

Safe configurations in 2026-05

The summary for the practice: the safe configurations for client-data work are OpenAI with EU residency and Zero Data Retention; Claude via AWS Bedrock or Vertex AI EU profiles; Gemini 2.x in europe-west4; and M365 Copilot with Flex Routing manually disabled and the Anthropic subprocessor left off. Anything else needs a written DPA review before any client data goes near it.
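An internal tool can enforce that shortlist mechanically, so staff never have to remember it. A minimal allowlist sketch, using vendor and configuration labels we have invented for illustration (they are not API values):

```python
# Approved (vendor, configuration) pairs for client-data work, per the
# summary above. Labels are illustrative shorthand, not real API identifiers.
APPROVED_CONFIGS = {
    ("openai", "eu_residency_zdr"),        # EU project residency + Zero Data Retention
    ("claude", "aws_bedrock_eu_profile"),  # Claude via a Bedrock EU inference profile
    ("claude", "vertex_ai_eu_profile"),    # Claude via Vertex AI in an EU region
    ("gemini", "vertex_europe_west4"),     # Gemini 2.x in europe-west4
    ("m365_copilot", "flex_routing_off"),  # Flex Routing disabled, Anthropic models off
}

def is_approved(vendor: str, config: str) -> bool:
    """True only if this exact vendor + configuration pair is on the allowlist."""
    return (vendor, config) in APPROVED_CONFIGS

print(is_approved("claude", "aws_bedrock_eu_profile"))  # True
print(is_approved("claude", "native_api"))              # False: US infrastructure
```

The design choice matters: the check is on the pair, not the vendor, because the same model is approved through one route and not another.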

The EU AI Act in 2026: what is in force after the Omnibus delay

In force already: 2 February 2025 brought the prohibited AI practices in Article 5 and the AI literacy obligation in Article 4. 2 August 2025 brought general-purpose AI (GPAI) model obligations in Chapter V, the governance rules in Chapter VII, and the penalty ceilings in Article 99. DLA Piper's coverage is the cleanest secondary source on this layer.

Originally scheduled for 2 August 2026: the high-risk AI system obligations covering biometrics, employment, education, law enforcement, critical infrastructure, and border management.

Now deferred. On 7 May 2026 the Council and Parliament reached a political agreement on the Digital AI Omnibus. High-risk obligations push to 2 December 2027 for stand-alone systems and 2 August 2028 for product-embedded high-risk systems. IAPP explainer and Hogan Lovells commentary walk through the change. Article 50 transparency and Commission enforcement powers stay on the original Article 113 timetable.

For an accounting practice, the high-risk pieces of the Act are usually not directly engaged. Practices are deployers of third-party AI for internal work, and most document-summary or chat use does not classify as Annex III high-risk. What does engage is the Article 4 literacy obligation (in force now), the Article 50 transparency obligation if you put a chatbot in front of a client, and any future deployer obligation if the practice touches credit decisioning or hiring inside its own operations.

The companion piece on the SME fine carve-out and Ireland's regulator picture lives at the EU AI Act post on this site, with the appify.digital general-audience primer as the basic reference.

Ireland: the AI Office and the General Scheme of the Regulation of AI Bill 2026

The Department of Enterprise published the General Scheme of the Regulation of Artificial Intelligence Bill 2026 earlier this year. It proposes a statutory independent body, Oifig Intleachta Shaorga na hÉireann, designed to be operational by August 2026 as Ireland's single point of contact for AI Act enforcement.

The model is distributed. Thirteen Market Surveillance Authorities will supervise different classes of AI system; the Central Bank of Ireland supervises AI used by regulated financial service providers. Matheson's commentary and William Fry's analysis lay out the regulator map, and Pinsent Masons' coverage of the DPC's GDPR-AI risk posture gives the regulator-side reasoning. The practical reading for accountants: AI used in financial-services workflows will end up under both Central Bank and DPC oversight rather than DPC alone. If the practice does outsourced finance work for a regulated entity, the regulator question becomes layered.

Professional indemnity: the question every partner should ask their broker

Standard PI policies were not written with generative AI in mind, and undisclosed AI use can void a payout when it matters. The "fair presentation of the risk" obligation under insurance law extends to the use of AI tools in client work. Dolmen Insurance Brokers in Ireland markets a PI capability designed for AI-active firms; we flag them as one example without endorsing the product itself. The actionable advice is that a partner needs to ask their existing broker the right questions at renewal.

Four to put on the renewal agenda:

  1. Does this policy carve out claims arising from AI-driven errors or AI-assisted advice?
  2. Does the policy require disclosure of which AI tools are in use, and at what point during the policy year?
  3. Is there a sub-limit or co-insurance clause that applies specifically to AI-related claims?
  4. What is the breach-notification timeline if a confidentiality incident involves an AI tool, and does it differ from the standard incident notification clause?

Chartered Accountants Ireland's PII regulations page is the institute-level reference. The most recent PII update, effective 1 September 2024, did not add explicit AI-disclosure language. The practice is on its own to make the disclosure happen.

CCAB-I and institute guidance: what is binding and what is not

As of 2026-05, the institutes have not issued binding AI-specific guidance for members in practice. The CCAB Ethics Group is drafting a Statement to the Profession on the ethical use of AI, plus case studies, with an autumn webinar scheduled. ICAEW's GenAI insights hub and GenAI Accelerator Programme together form the most developed members' curriculum on either island.

Until binding guidance publishes, the existing CAI Code of Ethics carries the weight. The five fundamental principles - integrity, objectivity, professional competence and due care, confidentiality, and professional behaviour - apply directly to AI tool selection. The 2023 Accountancy Ireland article "Navigating the ethics of AI" is the closest current institute-published member reference, and CAI's knowledge-centre AI hub tracks updates. The institute has neither said "you cannot use AI on client data" nor "you can"; it has said, via the Code, that confidentiality and professional competence apply to whatever tool you use. The practice has to write its own internal policy until the institute publishes one.

The five-question decision tree

Bring the five together. Each question, the rule it rests on, and the test:

Q1 - DPA. Does this vendor have a current DPA signed by the practice, naming the AI product specifically, with SCCs for any EU-to-third-country transfer? Rule: GDPR Article 28. Test: pull the DPA from the vendor account page, check the named products, check the dates.

Q2 - Training. Is this data going to be used to train future models? Rule: vendor commercial-tier default, plus the EDPB Opinion 28/2024 case-by-case anonymity test. Test: never paste into consumer ChatGPT, consumer Claude.ai, or any vendor's free tier; on commercial tiers, check the training opt-out on the account.

Q3 - Residency. Where is inference physically happening? Rule: GDPR Chapter V on international transfers plus the vendor's published architecture. Test: the table in this article, re-verified at the vendor's documentation. For OpenAI, the EU residency option must be selected per project. For Claude, the inference must be via Bedrock or Vertex AI in an EU region. For M365 Copilot, Flex Routing must be opted out of and the Anthropic subprocessor left disabled.

Q4 - Engagement letter. Does the client engagement letter authorise this kind of processing, with this category of recipient? Rule: contract law plus the CAI engagement-letter content guidance. Test: read the data-processing clause; if it does not name AI tools or "automated processing by third-party software", it does not authorise this. JMCC's engagement-letter content guidance is a starting reference for the Irish practice perspective.

Q5 - Audit. Could you tell the client tomorrow exactly what you did with their data, where it went, and how to delete it? Rule: the DPC's repeated insistence that data subjects must be able to exercise their rights. Test: write down, before pasting, what tool, what model, what residency setting, what retention. If you cannot answer, you should not paste.

Once the policy and tool list are in place, a staff member can run the five questions in about 90 seconds before they hit paste. The investment is upfront, in writing the policy down and naming the approved tools.
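The 90-second run can be captured as a single record per paste, which also satisfies Q5's write-it-down-first test. A minimal sketch, with field names of our own invention:

```python
from dataclasses import dataclass

@dataclass
class PasteCheck:
    """One pre-paste run of the five questions. Field names are our own labels."""
    dpa_names_product: bool      # Q1: current DPA naming this exact product, with SCCs
    training_excluded: bool      # Q2: commercial tier, training opt-out confirmed
    eu_inference: bool           # Q3: inference stays in the EU (or SCC route reviewed)
    engagement_letter_ok: bool   # Q4: engagement letter authorises this recipient
    audit_trail_written: bool    # Q5: tool, model, residency, retention recorded

    def failures(self) -> list[str]:
        """Names of the questions that failed. Any failure means: do not paste."""
        return [name for name, ok in vars(self).items() if not ok]

check = PasteCheck(
    dpa_names_product=True,
    training_excluded=True,
    eu_inference=True,
    engagement_letter_ok=False,  # the clause does not name AI tools
    audit_trail_written=True,
)
print(check.failures())  # -> ['engagement_letter_ok']
```

Kept per client, these records are also the audit trail Q5 asks for: what tool, what data, what answer the gate gave.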

Close: what this looks like, wired in

We build the workflow side of this for accounting practices. The client-chase loop on /accounting-chase runs on EU-residency models, with training opt-out enforced at the vendor account level, and with a DPA mapped to the engagement letter scope a practice's own clients will recognise. Human review sits at every decision point. Logs are auditable per client. We do not paste client data into consumer-grade AI tools, and we will not ship a workflow for a practice that does. The first call is free; we will walk the five questions through your current tool stack and tell you honestly whether the gap is the contract, the tooling, the engagement letter, or the staff briefing.

The broader pattern is well documented elsewhere. The piece on why most enterprise AI pilots do not move the P&L sets out the governance, evaluation, and integration disciplines that apply at any scale. For an Irish practice in 2026, the first piece of governance is writing the policy described in this post and circulating it to every staff member who has a vendor login.

Tagged: ai-governance, gdpr, eu-ai-act, data-residency, ireland, accountants, pi-insurance
