
State-Sponsored Hackers Caught Using GenAI Tools

This week’s headlines confirm what cyber crime experts already knew: state-sponsored hackers are using GenAI tools for nefarious purposes, making corporations more vulnerable than ever to cyberattacks.

Based on collaboration and information sharing with Microsoft, OpenAI, the company behind ChatGPT, said it caught and disrupted five state-affiliated malicious actors: two from China, one from Iran, one from North Korea and one from Russia. The identified OpenAI accounts associated with these actors were terminated.

The latest discovery is an example of how nation-state threat actors and cyber crime groups are exploring and testing different AI technologies as they emerge to understand the potential value to their operations and the security controls they may need to circumvent, Microsoft said in a blog posting about the findings.

Indeed, cyber criminals are already harnessing the power of GenAI to create more sophisticated social engineering schemes, find network vulnerabilities faster, produce synthetic media for impersonation, intimidation, or identity theft, and automate phishing attempts and malware development, says a blog posting from cybersecurity company Check Point.

Over a year ago Check Point exposed initial evidence of cyber criminals showing interest in using ChatGPT to create malware, encryption tools, and other attack vectors that leverage Generative AI. It also documented how Russian cyber criminals were starting to discuss ways to bypass the tool’s restrictions so they could use ChatGPT for illicit purposes.

One year after the launch of ChatGPT, “we observe that the use of generative AI has become the new normal for many cyber crime services, especially in the area of impersonation and social engineering,” Check Point said in its blog.

A recent incident in Hong Kong is a case in point. A finance employee received a message from the company’s chief financial officer asking for a $25.6 million transfer. Though initially suspicious that it could be a phishing attempt, the employee’s fears were allayed after a video call with the CFO and other colleagues he recognized. But everyone on the video call was an AI-generated deepfake. It was only after he checked with the head office that he discovered the deceit. By then the money was gone.

GenAI offers a variety of advantages to cyber criminals, including:

  • Reconnaissance: AI lowers the barrier for novice cyber criminals, hackers-for-hire and hacktivists to carry out effective access and information gathering operations, since the technology’s ability to summarize data at pace will make it easier to identify high-value assets, according to a report by the UK’s National Cyber Security Centre (NCSC).
  • Social engineering: GenAI can already be used to enable convincing interaction with victims, including the creation of lure documents, without the translation, spelling and grammatical mistakes that often make phishing attempts easy to spot. This capability is expected to increase over the next two years as models evolve and uptake increases, says the NCSC report. “Phishing, typically aimed either at delivering malware or stealing password information, plays an important role in providing the initial network accesses that cyber criminals need to carry out ransomware attacks or other cyber crime,” the report says. “It is therefore likely that cyber criminal use of available AI models to improve access will contribute to the global ransomware threat in the near term.”
  • Malware and exploit development: AI has the potential to generate malware that could evade detection by current security filters, but only if it is trained on quality exploit data, says the NCSC report. It notes that “there is a realistic possibility that highly capable states have repositories of malware that are large enough to effectively train an AI model for this purpose.”
  • Spam and KYC fraud: AI is being integrated into spam services to bypass security controls and into Know Your Customer (KYC) verification services to create fake identification documents, developments that “signify a new level of sophistication in cybercrime,” according to Check Point.

Cyber resilience challenges will become more acute as the technology develops, warns the NCSC report. In the coming year GenAI and large language models (LLMs) “will make it difficult for everyone, regardless of their level of cyber security understanding, to assess whether an email or password reset request is genuine, or to identify phishing, spoofing or social engineering attempts,” the report says. “The time between release of security updates to fix newly identified vulnerabilities and threat actors exploiting unpatched software is already reducing. This has exacerbated the challenge for network managers to patch known vulnerabilities before they can be exploited. AI is highly likely to accelerate this challenge as reconnaissance to identify vulnerable devices becomes quicker and more precise.”

IN OTHER NEWS THIS WEEK:

CYBERSECURITY

U.S. Government Warns State-Sponsored Chinese Hackers Are Targeting The Country’s Critical Infrastructure

The U.S. Cybersecurity and Infrastructure Security Agency (CISA), National Security Agency (NSA), and Federal Bureau of Investigation (FBI) warned this week that People’s Republic of China (PRC) state-sponsored hackers are seeking to pre-position themselves on IT networks for disruptive or destructive cyberattacks against U.S. critical infrastructure in the event of a major crisis or conflict with the United States.

ARTIFICIAL INTELLIGENCE

Nokia Unveils AI Assistant For Industrial Workers

Nokia unveiled an AI-powered tool that generates messages for industrial workers, including warnings about faulty machinery based on real-time data and recommended ways to boost factory output. The tool, “MX Workmate”, will expand on Nokia’s existing communications technology used by industrial clients by harnessing generative AI large language models (LLMs) to write human-like text, the company said in a statement.

OpenAI Introduces AI Models That Turn Text Into Video

Microsoft-backed OpenAI is working on software that can generate minute-long videos based on text prompts, the company said in a statement. The model, called Sora, works similarly to OpenAI’s image-generation AI tool, DALL-E. A user types out a desired scene and Sora will return a high-definition video clip. Sora can also generate video clips inspired by still images, and extend existing videos or fill in missing frames.

SUSTAINABILITY

Google To Share Gas And Methane Leaks Spotted From Space

Google and environmental group Environmental Defense Fund unveiled a partnership to expose sources of climate-warming emissions from oil and gas operations that will be detected from space by a new satellite. MethaneSAT will launch in March, one of several satellites that are being deployed to monitor methane emissions across the globe to pinpoint major sources of the invisible but potent greenhouse gas.


 
