A Hong Kong police officer holds up seized evidence on May 3, relating to the arrest of 100 people over cyber fraud, including a love scam in which a victim in Canada lost HK$23 million (US$2.9 million) after falling for a swindler’s “sweet talk”. Photo: Jelly Tse
Opinion
Thanh Tai Vo

Hong Kong won’t deter financial fraud by blacklisting bank accounts

  • Scammers typically use multiple accounts, and the time it takes for victims to report the crime and for investigations to conclude will further undermine the effectiveness of the alert system
  • As more victims are now deceived into initiating payments themselves, it would make sense to track user behaviour and flag unusual IP addresses
In the first half of 2023, Hong Kong recorded a significant uptick in financial fraud. Fraudsters inflicted total losses of HK$2.69 billion (US$343 million), a 28 per cent increase over the same period the previous year.
As businesses rush to tackle this growing problem, the police force, Monetary Authority and the Hong Kong Association of Banks are reportedly working together to stay ahead of fraudsters. They plan to launch an alert system supported by a bank account blacklist next month. While this represents a step forward, it still falls short of what is needed to put an end to these deceptive schemes.

The introduction of a blacklist based on past cases is a commendable step. However, fraudsters typically use multiple accounts at various traditional or virtual banks. Victims might not realise for weeks that they have been scammed, and it can take even longer for them to come to terms with what has happened and report it to the authorities.

Furthermore, police investigations also take time. During this critical window, bad actors could create new mule accounts and initiate a new cycle of deception. While it is uncertain how frequently the blacklist would receive updates, this gap will diminish the overall efficacy of the alert system.

Amid the current surge in data breaches and phishing attacks, fraudsters have turned to opening bank accounts using stolen or fake identities. Bad actors can also hire others to open accounts on their behalf. A blacklist may therefore merely change how fraudsters operate, rather than address the underlying problem.
Cybercriminals are also exploiting artificial intelligence (AI) deepfake technology to steal the identities of real people. With deepfakes, attackers can fabricate fraudulent documents and manipulate facial features or voices to open counterfeit accounts or submit loan applications in a victim’s name.
A Hong Kong police officer demonstrates how to identify and prevent AI-related deceptions during a press briefing on June 30. Opening a bank account is becoming increasingly easy for scammers who are using fake identities and AI deepfake technology. Photo: Xiaomei Chen

All this shows that depending solely on a blacklist is inadequate for mitigating fraud risks.

Government bodies and organisations need to consider additional data attributes for effective fraud detection. These include information related to a user’s device, digital ID, IP, phone number, email and behavioural patterns, all of which play a crucial role in distinguishing between bots and legitimate customers.

The blacklist primarily focuses on addressing traditional third-party fraud, particularly cases involving unauthorised access to a victim’s bank account using stolen credentials to facilitate illicit money transfers. However, the realm of digital fraud has evolved from traditional third-party fraud to encompass more advanced forms of social engineering scams.

In these situations, fraudsters deceive victims into initiating payments themselves, as in investment, romance and online employment scams. Investment scams in particular cause heavy financial losses: data from Singapore for the first half of the year shows average losses of about S$60,000 (US$43,800) per case.


Identifying such fraud poses distinctive challenges for conventional detection methods, which typically emphasise the recognition of unauthorised third-party access to an account. However, in the context of fraud involving social engineering, the transaction is initiated by a legitimate customer. This underlines the necessity for more advanced approaches and protocols.

Fraudsters continuously adapt and innovate to devise new methods of deceiving individuals. Depending solely on a scam alert system places the entire burden on regulators to address the issue; this is not a realistic expectation. What is crucial is a collaborative effort, where businesses, customers and regulators join forces in their mission to outmanoeuvre fraudsters.

Other indicators should be incorporated into fraud prevention programmes. First, we should detect anomalies in users’ behavioural patterns. When users engage in activities under stress or external instructions, their behaviour may deviate from their typical patterns.

Fraud prevention programmes should track indicators of unusual behaviour. As such, an analysis of how users interact with their devices would be useful. Photo: Shutterstock

In addition to the standard data points such as devices, digital IDs, IPs, phone numbers and emails, a thorough analysis of how users interact with their devices provides a more holistic risk profile. This approach can aid in identifying warning signs at an early stage when there is a departure from typical behaviour.
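As a minimal sketch of the idea (the typing-speed metric, baseline values and threshold below are hypothetical illustrations, not details from the article), a session metric can be compared against a user's own historical baseline and flagged when it deviates sharply:

```python
from statistics import mean, stdev

def behaviour_anomaly_score(baseline: list[float], current: float) -> float:
    """Z-score of a current session metric (e.g. typing speed in keys
    per second) against the user's own historical baseline."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return 0.0
    return abs(current - mu) / sigma

# A user who normally types ~4 keys/sec suddenly typing ~1 key/sec
# (e.g. copying out dictated account numbers) scores as an outlier.
typing_baseline = [3.8, 4.1, 4.0, 3.9, 4.2]
score = behaviour_anomaly_score(typing_baseline, 1.0)
flagged = score > 3.0  # tunable anomaly threshold
```

A real deployment would track many such metrics (swipe pressure, navigation paths, session timing) rather than a single one, but the principle is the same: the user's own history defines "normal".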

Second is tracking active calls during a new beneficiary set-up. When a user is in the process of establishing a new beneficiary account, the presence of an ongoing call and any deviations from their typical device interaction patterns can act as vital indicators. These deviations might suggest that the user is receiving instructions via phone, raising potential red flags.
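A simple decision rule combining these two signals might look as follows (the action names and escalation policy are illustrative assumptions, not a prescribed design):

```python
def beneficiary_setup_action(call_in_progress: bool,
                             behaviour_deviates: bool) -> str:
    """Decide how to treat a new-beneficiary set-up based on two signals:
    an ongoing phone call and unusual device-interaction patterns."""
    if call_in_progress and behaviour_deviates:
        return "hold_and_verify"  # both red flags: pause and contact the user
    if call_in_progress or behaviour_deviates:
        return "step_up_auth"     # one flag: require extra authentication
    return "proceed"
```

Escalating rather than blocking outright keeps friction low for the many legitimate users who happen to be on a call while banking.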


Third, environmental factors can act as indicators. A recent United Nations report raised concern about hundreds of thousands of individuals being trafficked and compelled to engage in online scams in Southeast Asia, particularly in the border areas between Myanmar, Laos, Thailand and China.

It is noteworthy that actions originating from new IP addresses, particularly those located in remote and mountainous border regions, may suggest a potential account takeover.
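Such a check could be sketched as follows; the region labels and high-risk set here are hypothetical stand-ins, since a production system would rely on a commercial IP-geolocation feed:

```python
# Hypothetical region labels for illustration only.
HIGH_RISK_REGIONS = {"myanmar-border", "laos-border", "thailand-border"}

def ip_takeover_flag(session_region: str, user_known_regions: set[str]) -> bool:
    """Flag a session whose IP geolocates to a region the user has never
    connected from before, and which lies in a known scam-operation area."""
    is_new_region = session_region not in user_known_regions
    return is_new_region and session_region in HIGH_RISK_REGIONS

known = {"hong-kong", "shenzhen"}
suspicious = ip_takeover_flag("myanmar-border", known)
```

On its own this signal is weak (VPNs and travel both shift IPs), which is why it belongs in a broader risk profile rather than serving as a standalone trigger.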

These multidimensional risk indicators may seem unrelated but, taken together, they provide a more comprehensive view of user identity risk. This approach becomes even more powerful when combined with real-time data processing.
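One simple way to fuse such signals is a weighted score; the signal names, weights and threshold below are purely illustrative assumptions, since in practice the weights would be tuned or learned from labelled fraud cases:

```python
# Hypothetical weights for illustration only.
WEIGHTS = {
    "behaviour_anomaly": 0.35,
    "call_during_beneficiary_setup": 0.25,
    "new_high_risk_ip": 0.25,
    "new_device": 0.15,
}

def combined_risk(signals: dict[str, bool]) -> float:
    """Weighted sum of binary risk signals, yielding a score in [0, 1]."""
    return sum(w for name, w in WEIGHTS.items() if signals.get(name))

session = {
    "behaviour_anomaly": True,
    "call_during_beneficiary_setup": True,
    "new_high_risk_ip": False,
    "new_device": True,
}
score = combined_risk(session)  # 0.35 + 0.25 + 0.15
escalate = score >= 0.6         # tunable decision threshold
```

Each indicator alone is ambiguous, but a session that trips several at once is far more likely to involve coercion or account takeover, which is the point of combining them.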

By uniting stakeholders in Hong Kong and consolidating these identity attributes within a trusted network, the city will be able to respond effectively to emerging threats in this ever-changing landscape.

Thanh Tai Vo is director of fraud and identity strategy, Asia-Pacific, at LexisNexis Risk Solutions
