Can AI wrongly flag legitimate transactions as ‘suspicious’? RBI flags key concerns in latest report

In its latest report, the Reserve Bank of India (RBI) has flagged several concerns about the impact of artificial intelligence on finance. The report says that automation can amplify faults across high-volume transactions.

For example, an AI-powered fraud detection system that incorrectly flags legitimate transactions as suspicious, or fails to detect actual fraud because of model drift, can cause financial losses and reputational damage, notes the RBI’s FREE-AI Committee report (Framework for Responsible and Ethical Enablement of Artificial Intelligence).

Another risk that the RBI’s report highlights relates to credit scoring models. It says that a credit scoring model that depends on real-time data feeds could fail because of data corruption in upstream systems.
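
The report does not prescribe a specific safeguard, but one common mitigation for this kind of failure is to validate incoming feed records before they ever reach the scoring model, and to quarantine anything that looks corrupted. The following Python sketch is purely illustrative; the FeedRecord fields, value ranges, and checks are hypothetical assumptions, not drawn from the RBI report.

```python
from dataclasses import dataclass

@dataclass
class FeedRecord:
    """One record from a hypothetical real-time data feed used by a credit scoring model."""
    applicant_id: str
    monthly_income: float
    utilisation_ratio: float  # share of credit limit in use, expected to lie in [0, 1]

def validate_record(rec: FeedRecord) -> list[str]:
    """Return a list of problems found in an upstream record; an empty list means it looks sane."""
    problems = []
    if not rec.applicant_id:
        problems.append("missing applicant_id")
    if rec.monthly_income is None or rec.monthly_income < 0:
        problems.append(f"implausible income: {rec.monthly_income}")
    if not 0.0 <= rec.utilisation_ratio <= 1.0:
        problems.append(f"utilisation out of range: {rec.utilisation_ratio}")
    return problems

# Records that fail validation are quarantined instead of being scored, so corrupted
# upstream data cannot silently distort credit decisions.
clean = FeedRecord("A-1001", 85_000.0, 0.42)
corrupted = FeedRecord("A-1002", -1.0, 3.7)  # e.g. garbage values from a broken upstream system
for record in (clean, corrupted):
    issues = validate_record(record)
    print(record.applicant_id, "score" if not issues else f"quarantine: {issues}")
```

In practice, checks of this kind would sit between the data ingestion layer and the model, with quarantined records routed to manual review rather than silently scored.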

Emphasising the importance of monitoring, the report says that if monitoring is not done consistently, AI systems can degrade over time and deliver sub-optimal or inaccurate outcomes.
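
The report does not spell out how such monitoring should be done. A technique widely used in model risk management is to compare the model's live score distribution with the distribution observed at validation time, for instance via a population stability index (PSI). The sketch below is a minimal illustration of that idea; the bin count, the synthetic data, and the commonly quoted 0.1/0.25 thresholds are assumptions, not RBI guidance.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline score distribution and a live one.

    Rule-of-thumb thresholds often used in monitoring: < 0.1 stable,
    0.1-0.25 moderate shift, > 0.25 significant drift.
    """
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid log(0) in bins that happen to be empty.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

rng = np.random.default_rng(0)
baseline_scores = rng.normal(0.30, 0.10, 50_000)  # scores seen when the model was validated
live_scores = rng.normal(0.45, 0.12, 50_000)      # production scores after the population drifted
psi = population_stability_index(baseline_scores, live_scores)
print(f"PSI = {psi:.3f}")  # well above 0.25 here, i.e. drift worth investigating
```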

The Financial Stability Board (FSB) has also highlighted that artificial intelligence can reinforce existing vulnerabilities. One such concern is that AI models, learning from historical patterns, could reinforce market trends and thereby exacerbate boom-bust cycles.

When multiple institutions use similar AI models or strategies, a herding effect can emerge in which synchronised behaviour magnifies market volatility and stress.

Excessive dependence on AI for risk management and trading could expose institutions to model convergence risk, while reliance on similar algorithms could undermine market diversity and resilience.

The opacity of AI systems could make it difficult to predict how shocks transmit through interconnected financial systems, especially at times of crisis.

Blurring of lines

AI deployments blur the lines of responsibility between various stakeholders. This difficulty in allocating liability can expose institutions to legal risk, regulatory sanctions, and reputational harm, particularly when AI-driven decisions affect customer rights, credit approvals, or investment outcomes.

For example, if an AI model shows biased outcomes due to inadequately representative training data, questions may arise as to whether the responsibility lies with the deploying institution, the model developer, or the data provider.
