Can AI write and submit a suspicious activity report?

I pondered this subject the other week and canvassed the opinions of others, including fellow financial crime professionals. Then, this morning, I read an article in the Guardian newspaper written by artificial intelligence (AI). So, I determined to stop pondering and start writing.

I am a big fan of AI, and I hold no fears or anxieties about a world controlled by cruel and demanding robots. As the Guardian article rightly points out, we have far more to fear from humans driven by hatred and seeking to inflict violence upon each other. I don’t believe in ghosts either; my mum always said, “You have far more to fear from the living than the dead”.

The Guardian article was written by a self-taught robot, which suggests we could teach a robot to be an anti-money laundering (AML) compliance professional. I believe AI can make a big difference in the fight against financial crime, because of its capacity to examine vast quantities of data and identify gaps, contradictions, anomalies, perhaps even suspicions of money laundering or other financial crime. AI can be in multiple places, simultaneously, 24 hours a day. Properly programmed, AI can represent you; it can apply your thinking, your policy and your logic. Imagine multiple Martin Woods’… (my wife has told me one is more than enough).

Within transaction monitoring, AI can logically articulate why a transaction or series of transactions is unusual, but can it go a step further and apply a label of suspicion? AI can learn the law and regulations; thus it can understand the ingredients required for an offence to be committed. But is the determination of suspicion an opinion? Can AI give an opinion? The Guardian article references the non-judgmental characteristics of AI.
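To make that distinction concrete, here is a minimal sketch in Python of a rule-based monitoring check. All of the rules, thresholds and country codes are hypothetical, invented purely for illustration; the point is that a machine can readily articulate the facts that make activity unusual, while the accountable label of ‘suspicious’ is left to a person.

```python
from dataclasses import dataclass, field

@dataclass
class Transaction:
    amount: float
    country: str
    is_cash: bool

@dataclass
class Alert:
    reasons: list = field(default_factory=list)

# Placeholder values for illustration only -- not real risk lists or thresholds.
HIGH_RISK_COUNTRIES = {"XX", "YY"}
REPORTING_THRESHOLD = 10_000

def review(transactions):
    """Flag unusual activity and record WHY it is unusual."""
    alert = Alert()
    for t in transactions:
        # Structuring pattern: cash deposits hovering just under the threshold.
        if t.is_cash and 0.9 * REPORTING_THRESHOLD <= t.amount < REPORTING_THRESHOLD:
            alert.reasons.append(
                f"Cash deposit of {t.amount:,.2f} sits just below the reporting threshold"
            )
        # Geographic risk: the payment touches a higher-risk jurisdiction.
        if t.country in HIGH_RISK_COUNTRIES:
            alert.reasons.append(
                f"Transaction routed via higher-risk jurisdiction {t.country}"
            )
    return alert

alert = review([
    Transaction(9_800, "GB", is_cash=True),
    Transaction(5_000, "XX", is_cash=False),
])
for reason in alert.reasons:
    print(reason)  # the machine can explain what is unusual...
# ...but whether those facts amount to suspicion remains a human opinion,
# and the decision to file a SAR stays with the appointed officer.
```

Real systems use far richer rules or learned models, but the division of labour in this sketch is the question at hand: the machine surfaces and explains anomalies; a human forms the opinion.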

In many countries, the human being who files suspicious activity reports (SARs) holds an appointed and approved position. Consequently, this person has responsibility and accountability. I have previously held this position, and more than once I have found myself wrapped up in legal arguments, with lawyers representing customers and judges opining upon my actions. I cannot envisage a robot in the witness box at the High Court in London.

My reservations are compounded by my sense of ‘gut instinct’; we all have it, and we apply it. Some years ago, scientists at Cambridge University undertook a study which pitched man against machine. More specifically, the study pitched a trading algorithm against an experienced manual trader. Guess who won? The manual trader won, and the scientists concluded it was his gut instinct, influenced by his experience, emotional intelligence and wider reading of the market, which gave him the advantage.

The study did not assert the algorithm could not trade effectively; rather, it asserted the manual trader was better. When I train financial crime professionals and other staff within regulated businesses, I tell them never to suppress their own gut instinct, because it is seldom wrong. This begs the question: can AI develop a gut instinct? For sure, AI constantly learns, develops and improves, but gut instinct is not always logical, and AI makes all decisions based upon logic, I think.

I don’t know if there is a definitive answer to the question I have posed, but if I were to find myself in the witness box in a courtroom, summoned there because of a SAR submitted under my watch, I would want to have been the author of that SAR.