This is not just about incorporating artificial intelligence
In all the reports and statistics, we see that Tax Administrations (TAs) are advancing every day in their use of Artificial Intelligence (AI) across their various functions and processes, in order to become more efficient and effective.
This progress motivates me to write this post and to ask myself: how do our TAs strike a balance between collection goals and due respect for the rights and guarantees of taxpayers?
We must ensure that improper implementation of the technology does not lead to setbacks, damaging the image of tax administrations and thereby causing them to lose legitimacy in the eyes of the public.
That is why, in my opinion, incorporating AI is not sufficient; it is vital to always respect the current legal framework of each country, where the rights and guarantees of citizens are enshrined.
Unfortunately, many countries do not have AI regulation. By this I mean a legal standard that establishes principles, obligations, and limits for the development, deployment, and use of AI systems in a territory, whose purpose is to protect fundamental rights (privacy, non-discrimination, due process), guarantee security and accountability, and promote innovation within a clear legal framework.
However, I think that TAs should move forward regardless, and the first step is to develop a clearly defined AI strategy and publish it.
Many countries have already taken that first step. In Spain, for example, the Tax Agency adopted an AI strategy[1] which focuses on using AI to improve taxpayer assistance, increase efficiency and revenue collection through data analysis, and combat tax fraud.
The implementation will be guided by the principles of responsibility and a "human-centric" approach, ensuring respect for rights, ethics, transparency, and the prevalence of human decisions.
Likewise, the Spanish Tax Agency has published material on various AI-related topics, which not only explains the fundamental principles (transparency, explainability, human supervision, security and governance, data protection, etc.) but also presents use cases where the technology is being applied.[2]
Similarly, Australia’s ATO published its AI strategy, including why it uses the technology, how it ensures its governance and current use cases.[3]
In Brazil, Receita Federal published its AI policy in February of this year, which defines principles, guidelines, and safeguards for the responsible use of AI systems, reinforcing the institution’s commitment to security, transparency, and human supervision at all stages[4].
In Canada, a Federal Government-applicable AI strategy for the period 2025 to 2027 was published, which details all the principles applicable to the use of AI.[5]
A relevant development on this subject is that a Subcommittee on Tax Administration and AI was created at the United Nations in October 2025.[6] It has the mandate to prepare a draft practical guide for the implementation of AI by TAs, providing guidance on issues such as fraud detection, risk assessment and mitigation, taxpayer service, governance frameworks and safeguards (including data and gender bias), and promoting data integrity and confidentiality.
The guide should be clearly written, include practical examples, and be designed primarily to assist tax administrators in developing countries, including the least developed countries, and should reflect the realities and top priorities of those countries at their respective stages of capacity building.
Second, in addition to the AI strategy, I also consider it important to evaluate the AI systems used by tax administrations, and it is advisable for this evaluation to be conducted on a regular basis by a body external to the administration itself.
For example, in the United Kingdom the Institute for Fiscal Studies (IFS) published a very interesting report entitled "Artificial intelligence in automated decision-making in tax administration: the case for legal, justiciable and enforceable safeguards",[7] which focuses on HMRC's use of AI in administrative management to make discretionary or subjective decisions.
However, once AI is implemented in TA management, a decision made solely by AI (in particular, by machine learning (ML) with no human intervention in the decision-making process) would be a decision made by the system, not by an HMRC officer; it would be based on the model's own interpretation of the data (whether labeled or not) and on the correlations extracted by the model.
This would reflect a shift in the role of the main decision-maker from HMRC official to AI.
That is why the report recommends a proactive, rather than reactive, approach to regulating the use of AI in TA management, especially considering some of the risks that have already manifested in several jurisdictions that have adopted AI in public administration.
The document recommends two alternative solutions: tax-specific AI legislation, or an AI Charter from HMRC setting out some of the key standards and values that HMRC must respect (together, both measures are called "Fiscal Legal Guarantees for AI").
In the USA, the Treasury Department evaluated the AI systems used by the IRS[8] and concluded that it is imperative that the IRS accelerate the implementation of governance and oversight structures to ensure accountability and the responsible use of AI in the programs and processes it develops.
They found that while the IRS had a process for tracking and reporting its AI project inventory as required, the reporting was inconsistent, due to evolving guidelines and efforts to interpret those guidelines.
The assessment notes that the inventory is inconsistent and subject to change with regard to the identified AI projects, and this can be attributed to the novelty of this technology and the lack of guidance from a whole-of-government perspective.
In addition, it was noted that the IRS will face the ongoing challenge of identifying and restricting employee access to generative AI websites.
As we can see, continuous evaluation is important: models can reproduce historical biases and apply unfair sanctions, the exposure of personal data can violate privacy rules, and opaque or erroneous decisions can damage institutional legitimacy. There are also security risks from attacks that could manipulate inputs to falsify results.
It is clear that, when used properly, AI can help administrations operate more efficiently and effectively; however, we must never lose sight of the fact that using AI without proper governance leads to errors, bias, and a loss of public trust.
An AI strategy for TAs should have clear objectives that link AI with specific fiscal goals, a governance framework that assigns responsibilities, use policies and transparency criteria, and data management that guarantees quality, traceability, and access controls.
Likewise, the strategy should include continuous technical evaluation (performance metrics, bias tests, explainability), human supervision of sensitive decisions, and defense channels or appeal mechanisms for taxpayers.
Finally, the strategy should consider training and organizational change so that staff understand and manage AI.
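To illustrate the kind of bias test such a continuous evaluation could include, the sketch below computes a simple disparate-impact check on an audit-selection model's output. This is a minimal illustration, not any administration's actual method: the group data and the 0.8 threshold (the common "four-fifths" rule of thumb) are assumptions for the example.

```python
# Minimal sketch of a bias test for an audit-selection model.
# All data and the 0.8 threshold are hypothetical assumptions.

def selection_rate(flags):
    """Fraction of taxpayers in a group flagged for audit (1 = flagged)."""
    return sum(flags) / len(flags)

def disparate_impact_ratio(flags_a, flags_b):
    """Ratio of the lower group's selection rate to the higher one.
    Values near 1.0 suggest parity; below ~0.8 is a warning sign."""
    lo, hi = sorted([selection_rate(flags_a), selection_rate(flags_b)])
    return lo / hi if hi > 0 else 1.0

# Hypothetical audit flags for two taxpayer groups.
group_a = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]   # selection rate 0.3
group_b = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]   # selection rate 0.5

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # prints 0.60
if ratio < 0.8:
    print("Warning: selection rates differ markedly between groups.")
```

In practice an evaluation would run checks like this regularly, on real selection data and across every legally protected attribute, and record the results for the external reviewer.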
As I said at the beginning, it is not just about incorporating AI: having an AI strategy and continuously evaluating AI systems is essential for TAs to benefit from the technology without incurring legal, operational, or reputational risks.
In short, AI has multiple benefits for TAs to be more efficient and effective, but its use without proper governance generates errors, biases, and loss of public trust.
What do you think?
References:
[1] https://sede.agenciatributaria.gob.es/static_files/AEAT_Intranet/Gabinete/Estrategia_IA.pdf
[2] https://sede.agenciatributaria.gob.es/Sede/gobierno-abierto/transparencia/informacion-institucional-organizativa-planificacion/inteligencia-artificial.html
[3] https://www.ato.gov.au/about-ato/commitments-and-reporting/information-and-privacy/ato-ai-transparency-statement
[4] https://www.gov.br/receitafederal/pt-br/assuntos/noticias/2026/fevereiro/receita-federal-publica-politica-de-inteligencia-artificial-com-foco-em-responsabilidade-transparencia-e-supervisao-humana
[5] https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/gc-ai-strategy-overview.html
[6] https://financing.desa.un.org/subcommittee-tax-administration-and-artificial-intelligence
[7] Nathwani, K. (2024). Artificial intelligence in automated decision-making in tax administration: the case for legal, justiciable, and enforceable safeguards. London: Institute for Fiscal Studies. Available at: https://ifs.org.uk/publications/artificial-intelligence-automated-decision-making-tax-administration-case-legal
[8] In this regard, see Collosa, Alfredo, "Evaluation of the AI systems used by the IRS". Blockchain Observatory, Spain. https://observatorioblockchain.com/ia/evaluacion-de-los-sistemas-de-ia-que-utiliza-el-irs-de-eeuu/