Generative AI is a looming cybersecurity threat

IBM X-Force hasn’t seen any AI-engineered campaigns yet, but mentions of AI and ChatGPT are proliferating on the dark web.

The X-Force Threat Intelligence Index 2024 report identified over 800,000 references to the emerging technology on illicit and dark web forums last year. While X-Force does expect AI-enabled attacks in the near term, the real threat will emerge when enterprise AI adoption matures.

Right now, there are simply too many AI systems at play. 

Though OpenAI’s ChatGPT has become synonymous with generative AI, a competition is afoot to determine which large language models are the most effective and transparent. In a test of 10 popular AI models, Google’s Gemini outpaced competitors, followed by OpenAI’s GPT-4 and Meta’s Llama 2. The test, created by Vero AI, measures the visibility, integrity, optimization, legislative preparedness, effectiveness and transparency of models. 

Businesses are leaning on their cloud and software providers to facilitate AI adoption. Coca-Cola has a $1.1 billion partnership with Microsoft to use its cloud and generative AI services. And General Mills used Google’s PaLM 2 model to deploy a private generative AI tool to its employees. 

To this point, cybercriminals are focused on ransomware, business email compromise and cryptojacking, X-Force found. But the threat intelligence firm expects that when a single AI technology reaches 50% market share — or when there are no more than 3 primary AI offerings — the cybercrime ecosystem will start developing tools and attacks to go after AI. 

AI can boost already dominant attack campaigns

In February, Microsoft reported that hackers from North Korea, Iran and Russia were using OpenAI’s tools to support cyberattacks, activity the company said it had since shut down.

That’s not surprising, said Melissa Ruzzi, director of artificial intelligence at AppOmni. Generative AI can turbocharge social engineering and phishing attacks.

Threat actors can tailor sophisticated phishing attacks by scraping data about targets from all corners of the internet, correlating scraps of information that only loosely match in order to build a fuller picture of a person. For example, they could link an Instagram handle that isn’t exactly a person’s real name back to that person.
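As a rough illustration of the correlation step Ruzzi describes, the Python sketch below scores how closely a social media handle resembles a known real name using simple fuzzy string matching. This is a minimal sketch, not any actual attacker or defender tooling; every handle, name and threshold in it is a hypothetical example.

# Hypothetical sketch of handle-to-name correlation using the Python
# standard library. All handles, names and the 0.6 threshold are
# illustrative examples, not real data or a real tool.
from difflib import SequenceMatcher
import re

def normalize(text: str) -> str:
    """Lowercase the text and strip everything except letters."""
    return re.sub(r"[^a-z]", "", text.lower())

def similarity(handle: str, real_name: str) -> float:
    """Score how closely a handle resembles a real name (0.0 to 1.0)."""
    return SequenceMatcher(None, normalize(handle), normalize(real_name)).ratio()

# Even a handle that isn't exactly the person's real name can be
# linked back to "John Smith" with a simple similarity score.
for handle in ["j.smith_92", "sunset_chaser", "jon5mith"]:
    score = similarity(handle, "John Smith")
    flag = "  <- likely match" if score > 0.6 else ""
    print(f"{handle!r} -> {score:.2f}{flag}")

In practice, many such weak signals (usernames, locations, employers, mutual contacts) would be combined across sites; the same matching logic also underpins defensive tools that audit an organization’s exposed footprint.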

This isn’t just about sending spoof emails either, Ruzzi said. AI can be used to impersonate a real person applying for a job at a company, complete with a polished cover letter and a PDF resume.

Hackers can also use AI to determine how long a position has been open “so they can see how desperate you are,” she said. 

Threat actors can also use generative AI to crack passwords, scale up the volume of previously successful attacks and slip past cybersecurity defenses, she added. The sky is, unfortunately, the limit.

AI could be used to bend the truth

Generative AI is also being used to fuel disinformation and misinformation campaigns, said Adam Meyers, CrowdStrike’s senior vice president of counter adversary operations. In the CrowdStrike 2024 Global Threat Report, the company tracked AI-generated images related to the Israel-Hamas war.

While the faked images are often relatively easy to spot (people with six fingers in some of them, for example), the technique is still in use and will most likely be refined through 2024 to disrupt elections. According to CrowdStrike’s research, more than 42% of the global population will vote in presidential, parliamentary or general elections this year.

“We’re already seeing the use of generative AI deepfakes for misinformation,” Meyers said, pointing to the fake President Biden robocalls deployed during the recent New Hampshire primary.

That doesn’t mean corporate cybersecurity professionals are off the hook, because such attacks won’t stay confined to the political arena.

If someone can make a deepfake of a country’s president, said Meyers, they can also potentially make a deepfake of a company’s president. A malicious actor could use that deepfake on a Zoom call to push employees into actions like wiring money, timing the call for when social engineering has revealed the real executive will be unreachable.

“The volume of attacks that they can create and the quality of the attacks is really starting to get kind of scary,” Ruzzi said.

