How has fraud detection improved over the years?


Key Takeaways

  • Fraud detection has evolved along two branches: transparent detection and interactive challenges (aka CAPTCHA).
  • Botnets that formerly included only a few bots now control thousands, using techniques such as advanced header signatures to bypass protections.
  • Defenders have turned to Proof of Work (PoW) and behavioural biometric detection to limit attackers' reach.
  • The global fraud detection and prevention market is projected to grow to USD 129.17 billion by 2029.
  • The integration of AI into fraud detection has enabled organisations to ward off fraud attempts, improve internal security and ease corporate operations.

Fraud detection and its importance

 

Fraud detection is meant to prevent businesses from falling prey to fraudsters operating under false pretences. It recognises unauthorised activities such as phishing and identity theft using defences like machine learning and data analytics.

As the world transitions towards digital channels, fraudsters have a vast terrain to exploit.

 

Comparing data from Q3 2020 with Q1 2021, TransUnion found that the percentage of suspected digital fraud attempts against financial services businesses originating from India had increased by 89 per cent.

Given the growing sophistication of attackers, capable fraud detection tools are indispensable.

 

Evolution of fraud detection

 

Fraud detection has evolved over the years to keep pace with the sophistication of digital attacks. This evolution has branched into transparent detection and interactive challenges (aka CAPTCHA).

 

1) Transparent Detection

This method does not require user interaction. It analyses signals collected by code embedded in the web page or application the client uses, along with information about how the client communicates with the server.

 

The early 2000s

Basic data-matching systems were extensively used for risk identification in the late 1990s.

As digital mediums advanced at the dawn of the new century, so did the attacks on them. However, botnets were still rudimentary: because they consisted of only a few bots, each bot had to send a large number of requests, which made them easy to spot. Blocking IP addresses, rate-limiting, and maintaining blacklists of clients that placed an abnormal number of requests were the popular detection methods.
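This early approach can be sketched as a sliding-window rate limiter with a blacklist. The window size and request limit below are illustrative assumptions, not thresholds from any specific product:

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Sketch of early bot blocking: rate-limit per IP and blacklist
    clients that place an abnormal number of requests."""

    def __init__(self, window=60.0, limit=100):
        self.window = window              # seconds in the sliding window
        self.limit = limit                # max requests allowed per window
        self.requests = defaultdict(deque)  # ip -> recent request timestamps
        self.blacklist = set()

    def allow(self, ip, now=None):
        """Return False if the IP is blacklisted or over its rate limit."""
        if ip in self.blacklist:
            return False
        now = time.time() if now is None else now
        q = self.requests[ip]
        # Drop timestamps that have fallen out of the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        q.append(now)
        if len(q) > self.limit:
            # Abnormal volume: add to the blacklist permanently.
            self.blacklist.add(ip)
            return False
        return True
```

Because early bots sent many requests from a handful of addresses, a per-IP counter like this was enough to shut them down.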

 

Companies began to collaborate through shared intelligence sources, pooling data on fraudsters to stop attackers from operating freely.

 

Furthermore, because of the simple scripts botnets used, their HTTP header signatures contrasted sharply with those of a legitimate browser.
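A minimal version of that header-signature contrast can be sketched as below. The baseline header set and the "three missing headers" threshold are assumptions for illustration only; real detectors use far richer signatures:

```python
# Headers that a typical browser of the era would send alongside a request.
BROWSER_BASELINE = {"user-agent", "accept", "accept-language",
                    "accept-encoding", "connection"}

def looks_like_simple_bot(headers):
    """Flag requests whose header set is far sparser than a browser's."""
    present = {name.lower() for name in headers}
    missing = BROWSER_BASELINE - present
    # Early bot scripts often sent only a User-Agent, or nothing at all.
    return len(missing) >= 3

bot_request = {"User-Agent": "python-urllib/2.5"}
browser_request = {
    "User-Agent": "Mozilla/5.0", "Accept": "text/html",
    "Accept-Language": "en-US", "Accept-Encoding": "gzip",
    "Connection": "keep-alive",
}
```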

 

2012

 

Fraudsters' scripts grew more skilful, and botnets now comprised hundreds of nodes. This period witnessed the introduction of user-agent rotation, advanced header signatures and the distribution of attack traffic across many sources.

 

To counter the alarming regularity of fraud, algorithms were developed that could compute the reputation of an IP address.

Defenders also began testing clients for JavaScript support, a check unfamiliar to attackers at the time.
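An IP reputation algorithm of this kind combines several behavioural signals into a single trust score. The signal names, weights and scale below are hypothetical, purely to show the shape of such a computation:

```python
def ip_reputation(ip, history):
    """Return a 0-100 reputation score for an IP (higher = more trustworthy).

    `history` is an assumed dict of observed signals for this IP.
    """
    score = 100.0
    # Repeated failed logins erode trust, capped at a 40-point penalty.
    score -= 40 * min(history.get("failed_logins", 0) / 10, 1.0)
    # Sustained high request rates erode trust, capped at 30 points.
    score -= 30 * min(history.get("requests_per_min", 0) / 300, 1.0)
    # Appearing on a shared industry blacklist is a flat 30-point penalty.
    score -= 30 * (1.0 if history.get("on_shared_blacklist") else 0.0)
    return max(score, 0.0)
```

A fresh IP with no history scores 100; one that trips every signal scores 0, and requests from low-scoring addresses can be challenged or blocked.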

 

2016

 

The size of botnets had skyrocketed to thousands of bots. This enabled far more polished fraud, to the point where attackers harvested the device characteristics (aka fingerprints) of legitimate systems to pass JavaScript checks.

As a result, defenders introduced a new defence – Proof of Work (PoW).

 

PoW was robust and could better test the authenticity of a request, posing challenges such as complex mathematical and cryptographic puzzles.
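The core idea of such puzzles can be sketched with a hash-based scheme (the same principle as Hashcash): the client must burn CPU time finding a nonce, while the server verifies the answer with a single cheap hash. The challenge format and difficulty here are illustrative assumptions:

```python
import hashlib
import itertools

def solve(challenge, difficulty=3):
    """Client side: search for a nonce whose SHA-256 digest of
    'challenge:nonce' starts with `difficulty` zero hex digits."""
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce

def verify(challenge, nonce, difficulty=3):
    """Server side: verifying a solution costs one hash."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)
```

Raising the difficulty makes each request exponentially more expensive for a botnet to produce while keeping verification cheap for the defender.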

In parallel, device reputation was developed to validate fingerprints.

 

2018

 

By now, attackers were no longer individual entities; semi-legitimate companies were offering fraud as a service. Around this time, attackers also scored a victory over PoW and device reputation with the help of headless browser technologies.

In retaliation, defenders came up with behavioural biometric detection.

 

2) Interactive Challenge

 

Herein, users are presented with a simple puzzle to interact with and solve.

 

2010 – Word puzzles

 

Users could access a field only after entering a string of distorted characters and digits. Later, the complexity of these character combinations had to be increased to defeat the Optical Character Recognition (OCR) technology used by attackers.

 

2013 – Image puzzles

 

Herein, users were shown a set of images and asked to select those that fit a description. This evolution in fraud detection initially caught attackers off guard, but it was soon overcome as they upgraded to image recognition technology.

 

2015 – Minigames

 

Botnets were now using off-the-shelf machine learning algorithms to defeat the image puzzles. Thus, CAPTCHA providers came up with several mini-games.

 

2018 – New forms of puzzles

 

Defenders constantly evolved the games to outdate attackers' solving engines, with the aim of making attacks costly.

 

Fraud detection in 2022

 

“The rapid shift by enterprises to digital environments such as online banking and eCommerce during the pandemic resulted in a surge in cyberattacks.” – Jarad Carleton, Frost & Sullivan.

 

Lately, the use of mobile devices has become customary, which in turn has led to an increase in digital fraud. The first half of 2021 saw a total of 1.2 billion bot attacks, as per LexisNexis.

 

Organisations face increased pressure to recognise fraud, as hacker attacks occur on average once every 39 seconds. The major forms of fraud in 2022 include identity fraud, online fraud, payment fraud, insurance fraud, mortgage fraud and mobile fraud.

 

A rapid surge in deepfake identity fraud and real-time payment fraud can also be seen. Moreover, the anonymity of crypto offers a breeding ground for attackers, where funds can be siphoned out in seconds. The global fraud detection and prevention market is projected to grow to USD 129.17 billion by 2029.

AI and fraud detection

 

The integration of AI and fraud detection has enabled organisations to better detect fraud attempts, improve internal security and ease corporate operations.

 

With attackers using digital techniques to avoid detection, the best-known way to combat the complex nature of modern digital fraud is to equip fraud detection with machine learning and AI.

 

It is necessary to normalise the use of AI and merge it into mainstream systems. Supervised machine learning can be used to train models to detect fraud much faster than manual approaches.

 

Additionally, supervised and unsupervised machine learning can be combined to spot digital anomalies. Fraud prevention leaders can deliver payment scores in 250 milliseconds, using AI to interpret the data and return a response.
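The unsupervised side of that combination can be as simple as flagging values that deviate sharply from an account's own history. The three-sigma rule below is a common statistical heuristic, used here purely as an illustrative sketch:

```python
import statistics

def is_anomalous(amount, history, sigmas=3.0):
    """Flag a transaction amount that lies more than `sigmas` standard
    deviations from the mean of this account's past amounts."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        # No variation in history: anything different is anomalous.
        return amount != mean
    return abs(amount - mean) > sigmas * stdev
```

No labelled fraud data is needed; the model of "normal" comes entirely from the account's own behaviour, which is why unsupervised checks pair well with supervised ones.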

 

Another approach is to fine-tune fraud prevention scores. Here, each transaction is examined against multiple fraud indicators, and a composite "score" is generated, indicating the level of risk the transaction represents.
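A composite score of this kind can be sketched as a weighted sum over boolean indicators. The indicator names, weights and review threshold below are hypothetical, not a real vendor's model:

```python
# Assumed risk indicators and their weights; together they sum to 1.0.
INDICATOR_WEIGHTS = {
    "mismatched_billing_country": 0.30,
    "new_device_fingerprint":     0.20,
    "velocity_over_limit":        0.25,
    "ip_on_proxy_list":           0.25,
}

def fraud_score(transaction):
    """Return a 0.0-1.0 composite risk score for a transaction dict
    whose keys mark which indicators fired."""
    return sum(weight
               for indicator, weight in INDICATOR_WEIGHTS.items()
               if transaction.get(indicator))

def decision(transaction, threshold=0.5):
    """Route high-scoring transactions to manual review."""
    return "review" if fraud_score(transaction) >= threshold else "approve"
```

In practice the weights would be learned from labelled fraud outcomes rather than hand-set, but the scoring-and-threshold structure is the same.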

 

Conclusion

 

In conclusion, although entirely eliminating digital attackers is inconceivable, developers make constant attempts to curb the rise in digital fraud. Despite these attempts, users remain in constant fear of becoming victims of digital fraud and look towards cyber insurance for protection, which safeguards against liabilities arising directly from a cyber security breach. The global cyber insurance market is expected to exceed USD 20 billion by 2025.
