What future does artificial intelligence have in Tax Administrations?
I have heard that the best way to predict the future is to know the past and the present. With new technologies, and especially Artificial Intelligence (AI), the answer is not so simple, and even less so when it comes to predicting their use and application in Tax Administrations (TAs).
As proof of this, the release of OpenAI's ChatGPT on November 30, 2022 marked a turning point in the field.
Until that date we knew "predictive AI," which analyzes data to offer recommendations, forecasts, and insights; with ChatGPT, "generative AI" emerged, which learns from data and uses its patterns to generate text, images, music, and code.
The use of AI offers TAs multiple benefits, among them better information management, analysis of data behavior, improved citizen services, optimized processes, and reduced costs.
AI can also surface previously undetectable or hidden correlations, suspicious activity, trends, and indicators, enabling early detection, preventive action, or real-time mitigation of risks, which in turn helps reduce the tax gap and increase tax collection.
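The kind of anomaly screening described above can be illustrated with a toy sketch. The features, data, and threshold below are invented purely for illustration, not drawn from any real TA system; scikit-learn's Isolation Forest stands in for whatever model a TA might actually deploy.

```python
# Toy sketch: flagging anomalous tax returns with an Isolation Forest.
# All data and features here are synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic returns: [declared income, deductions claimed]
normal = rng.normal(loc=[50_000, 5_000], scale=[10_000, 1_000], size=(200, 2))
suspicious = np.array([[50_000, 40_000]])  # deductions far above the norm
returns = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0).fit(returns)
flags = model.predict(returns)  # -1 marks an outlier
print(np.where(flags == -1)[0])  # indices of returns flagged for review
```

In practice a TA would combine many more features (filing history, sector benchmarks, third-party data) and route flagged cases to human reviewers rather than acting on them automatically.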
For this reason, AI is expanding very rapidly across the various functions of TAs, given the immense volumes of data available and the increase in computational power, with the consequent advance of recent technologies such as generative AI.
We should also consider that the use of AI carries risks, among them various types of bias, unjustified use of citizens' data in violation of personal data privacy, non-transparent AI systems, lack of accountability, and inadequate protection of taxpayers' rights and guarantees.
To face these risks, various initiatives and regulatory frameworks have emerged so that AI is always used for the benefit of citizens in an ethical and responsible way, among them the EU AI Act, the OECD AI Principles, the EU's Ethics Guidelines for Trustworthy AI, and UNESCO's Recommendation on the Ethics of Artificial Intelligence, as well as national laws being approved in many countries.
Regarding trends in the use of AI in TAs, it is interesting to note that the "Tax Administration 2023" report of the OECD Tax Administration Series [1] already reported that more than 60% of TAs offer virtual or digital assistants to help answer taxpayer queries and support self-service, a change of almost 30 percentage points compared to 2018.
About 95% of TAs report that they use data science and analytical tools to work with electronic data from third parties, including other TAs, as well as internally generated electronic data, to guide their compliance work. This is an increase of more than 20% compared to 2018.
Additionally, 54.4% of TAs use AI and machine learning in their tax management, and 50% apply robotic process automation, with increases of more than 22% between 2018 and 2021.
Another part of the 2023 document states that "More than 80% of TAs report that they are using or are in the implementation phase for the future use of state-of-the-art techniques to exploit data in ways that reduce the need for human intervention."
For its part, the recent publication Tax Administration 2024 [2] highlights that progress continues in the use of AI, noting that TAs have been employing technological innovations for years and that the number of TAs using virtual assistants, AI, and application programming interfaces (APIs) continues to increase.
For example, among the TAs covered by the publication, adoption of virtual assistants and AI has almost doubled since 2018.
The authors say that the use of AI, including machine learning, for risk assessment and fraud detection is already implemented, or in the process of being implemented, in about half of the TAs covered in the publication.
The report maintains that this increasingly sophisticated use of analytics on extended data sets is leading to improved risk management and the development of a range of intervention measures, including through automated processes.
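As a hedged illustration of the risk-assessment use the report describes, the sketch below trains a simple supervised risk score on labeled past audit outcomes. The features, data, and choice of logistic regression are my own assumptions for illustration only, not a description of any actual TA model.

```python
# Toy sketch: a supervised audit-risk score trained on past audit outcomes.
# Features, data, and labels are synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Features per return: [deduction-to-income ratio, late-filing count (scaled)]
X = rng.random((500, 2))
# Synthetic ground truth: past audits found non-compliance when both are high
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.1, 500) > 1.0).astype(int)

model = LogisticRegression().fit(X, y)

# Score two new returns; high-risk cases would be routed to human auditors
new_returns = np.array([[0.9, 0.8], [0.1, 0.0]])
risk = model.predict_proba(new_returns)[:, 1]
print(risk.round(2))
```

The point of such a score is triage: it ranks cases for human attention, with the automated intervention measures the report mentions layered on top of that ranking.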
Likewise, the Institute for Fiscal Studies (IFS), an independent economic research institute in the United Kingdom, published a very interesting report, "Artificial intelligence in automated decision-making in tax administration: the case for legal, justiciable and enforceable safeguards" [3], which focuses on the use of AI in TA administrative management to make discretionary or subjective decisions.
Although the report is based on the use of AI for automated decision-making in the United Kingdom's TA, HMRC, I believe it is extremely useful for all TAs and for anyone interested in the subject.
In the current operational framework, discretionary decisions made by HMRC are generally made by HMRC officials, although they may rely on technology to help make them.
However, once AI is implemented in TA management, a decision made solely by AI (in particular, machine learning (ML) with no human intervention in the decision-making process) would be a decision made by the system, not by an HMRC officer; it would be based on the model's own interpretation of the data (labeled or not) and the correlations the model extracts.
This would reflect a shift in the role of the main decision-maker from HMRC official to AI.
Even when an HMRC official is required to review an AI-made decision before it affects a taxpayer, and to explain why that decision was reached, with a black-box AI that explanation rests solely on the official's own understanding, given the system's opacity.
The official's explanation would involve reverse-engineering the logic and basis of the decision generated by the AI, and this reverse engineering introduces an element of uncertainty and unreliability into the explanations provided.
That is why the report recommends a proactive rather than reactive approach to regulating the use of AI in TA management, especially considering risks that have already manifested in several jurisdictions that have adopted AI in public administration.
The document recommends two alternative solutions: tax-specific AI legislation, or an AI Charter from HMRC (including key standards and values that HMRC must respect); together, both measures are referred to as "Tax Legal Guarantees for AI."
In short, what will be the future of AI in TAs, the question that motivated me to write this blog? Despite the uncertainty about how the technology will evolve, on the one hand, and about the regulatory framework, on the other, I believe AI in TAs will continue to expand across all their functions, and into others not yet in use today.
To cite an example, the virtual assistants used today for information and assistance could also be used in human resources for personnel selection and recruitment, as many private-sector companies already do; to support auditors in their work; and even to exchange tax information between organizations within a country as well as internationally.
Likewise, TA officials will increasingly use AI-based assistants for their daily tasks, and it is not far-fetched to imagine "AI officials" that can perform tasks independently, make decisions, and even interact with human officials.
If I were to predict what the future of AI in TAs will depend on, I would point to several factors, such as advances in AI itself: for example, what will come after generative AI.
Likewise, a key factor will be the progress of regulatory frameworks in each country where TAs exercise their functions.
Finally, court decisions on the matter will, I believe, be vital, since in each specific case of AI use by TAs the courts will have to say whether citizens' rights have been observed or violated.
What I am convinced of is that TAs should never lose sight of the fact that a technology such as AI is a tool, not an objective in itself: they should incorporate it only when its advantages outweigh its disadvantages, it solves the problems at hand, and it helps them be more efficient and effective in their task.
But in all of this it will always be vital to use AI ethically and responsibly, that is, without violating the rights and guarantees of citizens.
What is your opinion on this subject?
[1] OECD (2023), Tax Administration 2023: Comparative Information on OECD and other Advanced and Emerging Economies, OECD Publishing, Paris, https://doi.org/10.1787/900b6382-en.
[2] OECD (2024), Tax Administration 2024: Comparative Information on OECD and other Advanced and Emerging Economies, OECD Publishing, Paris, https://doi.org/10.1787/2d5fba9c-en.
[3] Nathwani, K. (2024), Artificial intelligence in automated decision-making in tax administration: the case for legal, justiciable, and enforceable safeguards, Institute for Fiscal Studies, London, https://ifs.org.uk/publications/artificial-intelligence-automated-decision-making-tax-administration-case-legal (accessed 12 September 2024).