As the FBI report suggests, generative AI is largely responsible for the rise in financial crime.
“It levels the playing field,” said Matt O’Neill, a former Secret Service agent who now runs 5OH Consulting.
O’Neill has previously said that cybercriminals specialize in specific stages of a scheme or in specific technologies. They then work together, essentially providing “cybercrime as a service” to one another, to defraud victims.
But O’Neill says AI has now become so pervasive that cybercriminals no longer need any real technical proficiency.
“Two years ago, low-level attackers at the bottom didn’t have much success and were pure volume plays, but now, with the advent of AI, it’s much easier to create sophisticated attacks,” O’Neill said.
Cybersecurity experts believe fraudsters are still in the early stages of adopting AI, but they have already seen some alarming applications.
Adams and his team recently came across a website spoofing a legitimate title company, a discovery that deeply concerns him.
“It was a direct replica of the actual title company’s website. Everything was the same except for the phone number, and they were already infiltrating transactions posing as the title company,” Adams said. “Situations like this scare me the most, especially when it comes to advances in AI, because it’s no longer a bunch of humans trying to figure out how to rebuild a website. It’s now very simple to just copy a site and rebuild it.”
But sophisticated website spoofing isn’t the only way fraudsters are using AI. Cybersecurity experts said generative AI applications are also emerging in everyday situations such as phishing scams. Industry leaders say fraudsters’ use of AI makes their scams more believable, and unfortunately for victims, it’s working.
In a study by Harvard University’s Fredrik Heiding, Bruce Schneier and Arun Vishwanath, 60% of survey participants fell victim to automated, AI-generated phishing. The researchers said this is comparable to the success rate of phishing messages crafted by human experts without AI. But what worries the researchers most is that large language models (LLMs) can be used to automate the entire phishing process, reducing the cost of phishing attacks by more than 95%.
“We therefore expect the quality and quantity of phishing to increase significantly in the coming years,” the researchers wrote in a Harvard Business Review article.
Andy White, CEO of ClosingLock, warns that phishing scams are becoming more sophisticated, especially since much of the focus in cybersecurity is on flashier attacks rather than the phishing scams that have been around for decades.
“We don’t think about phishing as a way for fraudsters to use AI to infiltrate the real estate industry. But the more believable a fraudulent link is, the more people will click on it, and the more fraudsters can infiltrate a title company’s systems and impersonate it. They can then send an email from the title company’s own account, rather than a spoofed one, or change any account numbers being sent so funds go to a fraudulent account,” White said.
While this is scary in itself, cybersecurity experts warn that the ease with which highly convincing deepfake videos can be created could lead to even more terrifying scams.
“The technical hurdles and level of sophistication needed to carry out these attacks are no longer that high, and the cost of the hardware to carry them out has also come down to a reasonable level,” said John Heathman, chief information security officer at identity verification company Proof. “We expect to see more instances of real-time face swapping and real-time production of deepfake videos throughout the year.”
Adams believes deepfakes are a very real threat to the housing industry, but he expects fraud using the technology is still several months away.
“This year we’re going to see some very impressive fake IDs for things like virtual notarizations, and that’s going to be one of the biggest risks this year. But in terms of deepfakes and fraud, I mean, with Zoom, you don’t know if you’re really talking to the person. I think we’ll start to see that later this year or in early 2026,” Adams said.
With this in mind, cybersecurity experts acknowledge that housing industry professionals could easily feel overwhelmed by the threat posed by fraudsters and their newly honed AI capabilities, but they believe it is not all doom and gloom.
“Small and medium-sized businesses are maturing in their security posture, with enhancements like conditional access, and this is to be expected,” said Kevin Nincehelser, CEO of cybersecurity company Premier One.
Fraudsters may have new tricks up their sleeves, but the “good guys” have some new tools at their disposal as well, Nincehelser said.
“Many security products now also use AI, which is extremely helpful in discovering and mitigating more attacks,” Nincehelser said.
Companies working with Premier One on cybersecurity have begun using AI-powered email filtering products, which Nincehelser says have been a game-changer in preventing both fraud and ransomware attacks.
“Email filters used to just look at patterns, but then attackers stopped using patterns and started using AI. Our AI tools look at behavior and intent, so we can stop attempts and attacks coming in via email,” Nincehelser said. “AI tools don’t just look at the link in an email the way a human would; they look at the steps beyond that link and what it asks the user to do. From a defense perspective, AI email security is one of the most powerful new technologies to date for solving this problem.”
O’Neill acknowledges the need for advanced fraud detection and prevention tools, but he believes the housing industry could also use a push from government to further improve its cybersecurity.
“I am working with state legislators to create a duty-of-care requirement, so that basic steps, such as multi-factor authentication and the use of secure communication platforms other than web-based email, must be taken when working with clients on transactions over a certain amount,” he said.
At the federal level, O’Neill said, there is pressure on the financial sector to leverage Section 314(b) of the USA PATRIOT Act, which enables financial institutions to share information with one another. He believes broader adoption of this provision would go a long way toward preventing fraud.
Part of the challenge, O’Neill said, is that because 314(b) participation is currently voluntary, many banks have decided not to actively take part. And because banks are often not held responsible for losses, those losses are simply passed on to consumers.
“Until they can’t do that anymore, they’re not going to communicate with each other,” O’Neill said. “If financial institutions took steps such as matching account numbers to account holder names, some meaningful change could occur.”