AI Model Security and Adversarial Techniques in Finance and Accounting Analytics

Authors

  • Pratik Koshiya

DOI:

https://doi.org/10.47941/ijce.3027

Keywords:

AI Model Security, Adversarial Techniques, Finance Analytics, Accounting Analytics

Abstract

The integration of AI into finance and accounting is transforming core areas such as fraud detection, algorithmic trading, credit underwriting, and audit analytics. While these systems make operations more efficient and improve decision-making, they also introduce unique challenges and security threats specific to AI. This paper provides a comprehensive overview of the fast-emerging vulnerabilities of financial AI models, including data poisoning, adversarial inputs, model extraction, and membership inference attacks, and draws on real-world financial scenarios to show how these attacks can undermine fraud detection, influence market behavior, or compromise sensitive data. The study also considers practical defense mechanisms, including adversarial training, input validation, privacy-preserving methods, and live model monitoring, that can be instituted at any point along the AI value chain. Recognizing the urgent need for robustness and regulatory compliance, the paper advocates a “security-by-design” approach facilitated by cross-functional teams. These insights are intended to help both practitioners and policy makers build secure and trustworthy AI systems that meet the operational and regulatory demands of the modern financial ecosystem.
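The adversarial-input threat the abstract describes can be illustrated with a minimal, self-contained sketch that is not taken from the paper: a toy logistic-regression fraud scorer with made-up weights, perturbed by a gradient-sign step in the spirit of FGSM (Goodfellow et al.). All weights, feature values, and function names here are illustrative assumptions, not the paper's models or data.

```python
import math

# Toy "fraud scorer": logistic regression with hypothetical learned weights.
# Purely illustrative; real fraud models are far larger and nonlinear.
WEIGHTS = [2.0, -1.5, 0.8]
BIAS = -0.5

def fraud_score(x):
    """Return the model's probability that transaction features x are fraud."""
    z = sum(w * xi for w, xi in zip(WEIGHTS, x)) + BIAS
    return 1.0 / (1.0 + math.exp(-z))

def adversarial_input(x, eps=0.3):
    """Gradient-sign (FGSM-style) perturbation that lowers the fraud score.

    For logistic regression the sign of d(score)/d(x_i) equals sign(w_i),
    so stepping each feature by -eps * sign(w_i) pushes the score down.
    """
    return [xi - eps * (1 if w > 0 else -1) for w, xi in zip(WEIGHTS, x)]

if __name__ == "__main__":
    x = [1.2, 0.4, 0.9]              # a transaction the model flags as risky
    x_adv = adversarial_input(x)     # small per-feature nudges
    print(round(fraud_score(x), 3))      # score before the perturbation
    print(round(fraud_score(x_adv), 3))  # strictly lower score after it
```

The point of the sketch is the asymmetry the paper highlights: an attacker who can probe or reconstruct the score's gradient direction needs only small, bounded feature changes to move a flagged transaction below a decision threshold, which is why defenses such as adversarial training and input validation are applied before deployment.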


References

M. Korolov, “How AI can help you stay ahead of cybersecurity threats,” CSO Online, Oct. 19, 2017.

T. Sweeney, “8 ways to spot an insider threat,” Dark Reading, Sep. 6, 2019.

M. Korolov, “How AI can help your organization stay a step ahead of cyberattackers,” CSO Online, Oct. 2017.

A. Chakraborty, A. Biswas, and A. K. Khan, “Artificial Intelligence for Cybersecurity: Threats, Attacks and Mitigation,” arXiv, Sep. 2022.

M. Schmitt, “Securing the Digital World: Protecting smart infrastructures and digital industries with Artificial Intelligence-enabled malware and intrusion detection,” arXiv, Oct. 15, 2023.

S. Raja Sindiramutty, “Autonomous Threat Hunting: A Future Paradigm for AI Driven Threat Intelligence,” arXiv, Dec. 30, 2023.

I. H. Sarker, H. Janicke, L. Maglaras, and S. Camtepe, “Data Driven Intelligence can Revolutionize Today's Cybersecurity World: A Position Paper,” arXiv, Aug. 9, 2023.

Federal Reserve Board, “SR 11-7: Guidance on Model Risk Management,” Apr. 4, 2011.

Palo Alto Networks, “What are adversarial attacks on AI/Machine Learning,” Cyberpedia.

“AI use cases in financial services,” SmartDev, 2023.

“The role of Artificial Intelligence and Robotic Process Automation (RPA) in fraud detection: Enhancing financial security through automation,” ResearchGate, 2023.

“NIST AI Risk Management Framework,” Wiz.io Academy, 2024.

“How OWASP guidelines secure your AI systems,” Salesforce Blog, 2024.

“Risks of AI in banks & insurance companies,” Lumenova.ai Blog, 2024.

“Must-have AI security policies for enterprises: A detailed guide,” Qualys Blog, Feb. 7, 2025.

J. Tamene, “AI-Based RPA’s Work Automation Operation to Respond to Hacking Threats Using Collected Threat Logs,” Applied Sciences, vol. 14, no. 22, Art. no. 10217, Nov. 2022.

J. Tamene, “Detecting and Preventing Data Poisoning Attacks on AI Models,” 2025 PhotonIcs & Electromagnetics Research Symposium, Abu Dhabi, UAE, May 4–8, 2025.

Published

2025-07-26

How to Cite

Koshiya, P. (2025). AI Model Security and Adversarial Techniques in Finance and Accounting Analytics. International Journal of Computing and Engineering, 7(17), 12–20. https://doi.org/10.47941/ijce.3027

Section

Articles