EY talks with Jeff Saviano, Global Tax Innovation Leader at EY and MIT Connection Science Fellow, about how artificial intelligence (AI) can be used to improve tax operations. Saviano also discusses the important role that accountability plays in the success of generative AI.
There have been many advances in AI. Is this technology being used for tax purposes? If so, how?
Generative AI has fueled an explosion of innovation and experimentation as more companies and individuals begin to explore use cases for AI. Rule-based taxing systems are particularly ripe for disruption. With these AI capabilities now becoming more advanced, we’ve ushered in a new era of visibility into data, and companies have more opportunities to unlock value, generate revenue, manage risk, drive efficiency, and deliver critical business insights – and tax is at the forefront.
How can AI be used to improve tax operations?
We are seeing an increasing use of AI in the tax function. There are many generative AI applications for tax planning, generally falling into three categories:
- Tax research tools: Asking tax questions of generative AI systems produces impressive results. We got a glimpse of this with the rollout of GPT-4, which included a specific tax example – questions prompting both quantitative and qualitative answers.
- Data search: An extraordinary amount of a tax professional’s time is spent searching for and compiling data. The search capabilities of generative AI are a significant improvement over traditional data search systems and are now helping tax professionals find and manage the data needed to comply with complex tax obligations.
- Monitoring tax law changes: Tax laws around the world are rapidly evolving. Early experiments with generative AI tools that help companies monitor tax-related law changes around the world are proving quite successful.
Law is at the forefront of AI technology. What opportunities does this present for legislation and tax, and what advances can we expect?
AI is already moving up the legal tech stack, and generative AI is proving to be quite powerful, as the technology is enabling legal professionals to make more informed decisions. In addition, the technology is automating legal processes, aiding in the drafting and analysis of legal documents, and identifying patterns in data to solve complex legal challenges. And it’s not only about AI for profit: these new AI systems have the potential to make legal services more accessible to the most vulnerable members of society.
With tax law, because it is a rules-based system and because of the way its documents, data, and other materials are structured, we see AI as an extremely helpful research tool that will save tax professionals time when searching. This is especially relevant given that, according to recent EY data, 50% of the most senior tax leaders expect to experience more, and more in-depth, tax audits over the next two years. AI can play an important role in streamlining the work of tax teams.
Why does accountability play an important role in the success of generative AI?
The many generative AI opportunities and diversity of applications being explored must be balanced against some of the risks inherent with this powerful technology. While AI enthusiasm is off the charts, the AI community should strive for responsible AI in both development and application.
Several risks associated with generative AI are emerging:
- Regulatory compliance: AI is already regulated in many jurisdictions, and this regulatory oversight will only intensify, particularly around data privacy and security.
- Biased results: AI can produce biased results. Even when users don’t control the data that fuels an AI system, there are specific actions they can take to reduce the risk of biased or troubling results.
- Transparency and explainability: Some AI tools lack transparency, and their results cannot be explained. Users need to be aware of this limitation and independently validate key results.
- Protect the human workforce: Both the public and private sectors have an obligation to focus intensely on this risk, launching new initiatives to train workers for the jobs of tomorrow. This is a great opportunity for public/private collaboration and partnership.
A wider set of responsibilities associated with technology ‘supervision’ is also emerging when considering applications in the law. Lawyers have an ethical duty to understand how these systems work and to ensure that technology systems are properly supervised, just as lawyers supervise their junior colleagues.
Why should collective intelligence and human-centeredness be the priority?
As AI becomes more widely integrated into businesses and industries at large, the evidence is clear that it is essential for organizations to put humans at the center. In fact, research by Ernst & Young (EY) found that organizations that do are 2.6 times more likely to succeed in their transformation than those that don’t. Collective intelligence represents a symbiosis of human and machine, with real people at the center, enabled by AI.
This is certainly true for knowledge workers, where AI systems are supporting professionals rather than taking over their jobs. Human-centered AI does not aim to replace humans but to enhance our capabilities through intelligent, human-informed technology.
The powerful combination of machine learning with human values will yield the best results. This is the true essence of collective intelligence: recognizing that machines are clearly better than humans at some things, such as data analysis, fraud detection, and language translation.
However, human values such as emotional intelligence and creativity are also critically important. This collective intelligence enables humans within organizations to make more informed decisions and find more innovative solutions.
This interview was originally published in our TradeTalks newsletter. Sign up here to access exclusive market analysis by a new industry expert each week. Also be sure to check out last week’s TradeTalks video.
The views and opinions expressed here are the views and opinions of the author and do not necessarily reflect those of Nasdaq, Inc.