What generative AI tools can tax administration officials use?

What led me to write this article is the remarkable progress of generative Artificial Intelligence (AI) and the emergence of tools that promise to improve the efficiency and effectiveness of many processes. This prompted me to reflect on which generative AI tools tax administration (TA) officials can use in their daily work. That is, can they freely use any tool available on the market? Or should they use only the tools duly approved by the TAs in which they work?

It is worth noting that many generative AI tools are available on the market as open source, with free access, and in some cases still at the experimental stage. There are text generation tools and content assistants that summarize texts, translate them, and generate content; visual creativity and audio generation tools; and productivity and professional development tools covering everything from writing emails to project management and programming, among others.

In this regard, I must first clarify that my position is always in favor of technology: as I wrote on this same blog in 2017, I am convinced that technology should help simplify the complex world of taxes.[1] Every day we see examples of how technology can simplify processes, such as electronic invoicing and various tax compliance facilitation tools like pre-completed tax returns or mobile applications created by TAs.

I expect that eventually all TA officials will end up using generative AI tools, given the undoubted advantages many of them offer. Having stated this initial position, I want to alert you to the specific risks that arise from the use of these tools.

In this regard, it is worth citing as concrete examples the cases that have emerged in various countries of professionals who used these tools and, through them, generated false or non-existent content. In Ayinde vs Haringey, the lawyer Sarah Forey submitted a brief with false case-law citations and a misinterpretation of the law. She claimed she had used a personal list of cases, but she could not justify the origin of the citations and did not correct the errors.

As a result, she was financially sanctioned and referred to the Bar Association. The judgment was handed down by the High Court of the United Kingdom (King's Bench Division) on June 6, 2025.

Other cases include Gautier v. Goodyear (Texas, July 2024), Morgan & Morgan (USA, February 2025), Butler vs Snow (Alabama, May 2025), and Al Haroun vs Qatar National Bank QPSC and Anor [2025] EWHC 1588 (Comm) (UK, June 2025). In all of these cases, courts found improper use of generative AI tools by lawyers, who had not verified the content before presenting it to the court. The courts warned that although AI can be useful, it presents risks if left unchecked: generative AI tools can produce plausible but false text (made-up case law, incorrect citations). Lawyers must therefore verify all information coming from AI, as they would with any other source, since there is no valid excuse for presenting false material to the court.

The Federal Trade Commission of the United States (FTC) has also imposed a fine of $193,000 and other sanctions on “DoNotPay”, a startup that promoted itself as the creator of the world’s first robot lawyer with artificial intelligence.[2] The sanction responds to accusations of misleading advertising: the company claimed that its chatbot could replace human lawyers, which was not true.

Faced with these risks, I would like to highlight the regulatory frameworks specific to the use of AI already approved in various countries, such as the European Union’s AI Regulation. In addition, many countries are issuing normative recommendations on the use of AI tools. To cite a recent example, the Arkansas Supreme Court published a draft administrative order that expressly prohibits judicial officials and users of the internal management system from entering confidential or sealed information into generative AI tools (ChatGPT, Copilot, Gemini, etc.). The text warns that these platforms may retain, reproduce, or reuse the data entered to train their models, which may violate professional ethics rules, restricted-access rules, and confidentiality duties. Their use is permitted only in controlled environments or authorized projects, under institutional supervision.

We should note that all these risks also apply in the field of TAs. There are other risks of using generative AI tools as well, such as those related to information security, based on the essential principles of confidentiality, integrity, and availability of data and systems. These tools can reflect or amplify biases present in the data on which they were trained, reproducing stereotypes in biased responses. Likewise, their use in TAs can expose organizations to cyber-attacks, data loss, or unauthorized access.

In this regard, I find it highly positive that many TAs are publishing their AI strategies. In Spain, for example, the Tax Agency approved an AI strategy focused on using AI to improve taxpayer assistance and to increase efficiency and collection through data analysis and the fight against tax fraud. Its use will be guided by the principles of responsibility and a “human-centric” approach, ensuring respect for rights, ethics, transparency, and the prevalence of human decisions.

Therefore, returning to the questions that motivated this post: can TA officials freely use any tool available on the market, or should they use only the tools approved by the TAs in which they work? In my view, it is vital that officials use only the tools provided by their TAs, in order to avoid the risks discussed here as well as the many others that may arise from the unrestricted use of any generative AI tool. It seems particularly important that TAs issue recommendations on the use of these tools for their officials. The authorities should state with complete clarity that tools not provided by the TA must not be used and, for example, that staff must not share information classified as “confidential” or “secret/reserved”.
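To make the last point concrete, a TA could place a simple guardrail in front of any external AI tool that blocks prompts containing classification markers or taxpayer identifiers. The sketch below is purely illustrative and assumes hypothetical labels and an invented identifier format; a real implementation would follow the administration's own data-classification policy and jurisdiction-specific identifier rules.

```python
import re

# Hypothetical classification markers; real labels would come from
# the TA's own data-classification policy.
BLOCKED_MARKERS = {"CONFIDENTIAL", "SECRET", "RESERVED"}

# Illustrative taxpayer-ID pattern (formats vary by country, so a real
# deployment would need jurisdiction-specific rules).
TIN_PATTERN = re.compile(r"\b\d{2}-\d{8}-\d\b")

def is_prompt_allowed(prompt: str) -> bool:
    """Return False if the prompt appears to contain classified
    markers or a taxpayer identifier, True otherwise."""
    upper = prompt.upper()
    # Naive substring check, for illustration only: it would also flag
    # innocent words containing a marker (e.g. "SECRETARY").
    if any(marker in upper for marker in BLOCKED_MARKERS):
        return False
    if TIN_PATTERN.search(prompt):
        return False
    return True
```

For example, `is_prompt_allowed("Summarize the public guidance on invoicing")` would return `True`, while a prompt mentioning a document marked "CONFIDENTIAL" or containing an ID such as `20-12345678-9` would be rejected. The point is not the filter itself but that screening happens before anything leaves the institutional environment.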

The AI tools that TAs provide to their officials should respect several fundamental principles of lawful and ethical AI for the benefit of citizens, such as explainability, transparency, accountability, and human oversight, as well as the protection of personal data and the duty of confidentiality. Throughout the process, it will be essential to continuously train TA personnel in the use of these tools, promoting interdisciplinary work and constant updating.

It is clear that AI presents a scenario of disruptive change for TAs over the medium and long term. Digital transformation implies not only structural but also cultural changes. TAs need to understand how technology impacts their functions and develop the skills needed to use it efficiently. But we must never lose sight of the fact that ICTs are tools for obtaining better results, not objectives in themselves: adopting technology “as fashion” is useless, and we must always ask what the strategic objective of its incorporation will be.

In conclusion, AI should not focus on replacing public competencies but should increase or complement human capabilities so that people can add value to their tasks while improving the quality and efficiency of public functions for citizens.

References:

[1] Will technology simplify taxes? https://www.ciat.org/la-tecnologia-simplificara-los-impuestos/

[2] See Alfredo Collosa, “The first robot lawyer fined for misleading its users”. https://observatorioblockchain.com/ia/el-primer-abogado-robot-multado-por-enganar-a-sus-usuarios/

 

 
