If you are using the widely and wildly popular AI app ChatGPT, or if you are planning on using it, keep in mind that a lawsuit might eventually be headed your way.
Let’s say that someone comes forward and claims that your service or work effort has allegedly caused them harm.
They decide to sue you.
The person suing you might realize that you do not have deep pockets, diminishing the odds of getting much dough out of you. They will therefore angle toward something that does have tons of bucks, something that would be a juicy target for the lawsuit, something that somehow pertains to your service and that they contend contributed directly or indirectly to the alleged harm involved.
Well, naturally, they would seek to sue OpenAI (the maker of ChatGPT) and try to embroil it in the lawsuit too.
You might not have looked closely at the licensing terms associated with your signing up to use ChatGPT. Most people don’t. They assume that the licensing is the usual legalese that is impenetrable. Plus, the assumption is that there is nothing in there that will be worthy of particular attention. Just the usual ramblings of arcane legal stuff.
Well, you might want to consider Section 7a of the existing licensing agreement as posted on the OpenAI website and associated with and encompassing your use of ChatGPT:
“Section 7. Indemnification; Disclaimer of Warranties; Limitations on Liability: (a) Indemnity. You will defend, indemnify, and hold harmless us, our affiliates, and our personnel, from and against any claims, losses, and expenses (including attorneys’ fees) arising from or relating to your use of the Services, including your Content, products or services you develop or offer in connection with the Services, and your breach of these Terms or violation of applicable law.”
In normal language, this generally suggests that if OpenAI gets sued for something you have done with its services or products, such as ChatGPT, you are considered to be on the hook for "any claims, losses, and expenses (including attorneys’ fees)" that result.
Bottom line, you might have to cover your own legal expenses plus whatever financial hit you take from the lawsuit, and furthermore potentially cover the legal expenses and related financial hit that OpenAI incurs due to the lawsuit.
For the past several years, cybercriminals have been using artificial intelligence to hack into corporate systems and disrupt business operations. But powerful new generative AI tools such as ChatGPT present business leaders with a new set of challenges.
Consider these entirely plausible scenarios:
- A hacker uses ChatGPT to generate a personalized spear-phishing message based on your company’s marketing materials and phishing messages that have been successful in the past. It succeeds in fooling people who have been well trained in email awareness, because it doesn’t look like the messages they’ve been trained to detect.
- An AI bot calls an accounts payable employee and speaks using a (deepfake) voice that sounds like the boss’s. After exchanging some pleasantries, the “boss” asks the employee to transfer thousands of dollars to an account to “pay an invoice.” The employee knows they shouldn’t do this, but the boss is allowed to ask for exceptions, aren’t they?
- Hackers use AI to realistically “poison” the information in a system, creating a seemingly valuable stock portfolio that they can cash out before the deceit is discovered.
- In a very convincing fake email exchange created using generative AI, a company’s top executives appear to be discussing how to cover up a financial shortfall. The “leaked” message spreads wildly with the help of an army of social media bots, leading to a plunge in the company’s stock price and permanent reputational damage.
These scenarios might sound all too familiar to those who have been paying attention to stories of deepfakes wreaking havoc on social media or painful breaches in corporate IT systems. But the nature of the new threats is in a different, scarier category because the underlying technology has become “smarter.”
When OpenAI launched its revolutionary AI language model ChatGPT in November 2022, millions of users were floored by its capabilities. For many, however, curiosity quickly gave way to earnest concern about the tool’s potential to advance bad actors’ agendas. Specifically, ChatGPT opens up new avenues for hackers to potentially breach advanced cybersecurity software. For a sector already reeling from a 38% global increase in data breaches in 2022, it’s critical that leaders recognize the growing impact of AI and act accordingly.
Before we can formulate solutions, we must identify the key threats that arise from ChatGPT’s widespread use. They include: 1) AI-Generated Phishing Scams, and 2) Duping ChatGPT into Writing Malicious Code.
There is also the question of regulating AI usage and capabilities: while there’s significant discussion around bad actors leveraging AI to help hack external software, what’s seldom discussed is the potential for ChatGPT itself to be hacked. From there, bad actors could disseminate misinformation from a source that’s typically seen as, and designed to be, impartial.
Generative artificial intelligence is transforming cybersecurity, aiding both attackers and defenders. Cybercriminals are harnessing AI to launch sophisticated and novel attacks at large scale. And defenders are using the same technology to protect critical infrastructure, government organizations, and corporate networks, said Christopher Ahlberg, CEO of threat intelligence platform Recorded Future.
Generative AI has helped bad actors innovate and develop new attack strategies, enabling them to stay one step ahead of cybersecurity defenses. AI helps cybercriminals automate attacks, scan attack surfaces, and generate content that resonates with various geographic regions and demographics, allowing them to target a broader range of potential victims across different countries. Cybercriminals have adopted the technology to create convincing phishing emails: AI-generated text helps attackers produce highly personalized emails and text messages that are more likely to deceive targets.
“I think you don’t have to think very creatively to realize that, man, this can actually help [cybercriminals] be authors, which is a problem,” Ahlberg said.
Defenders are using AI to fend off attacks. Organizations are using the tech to prevent leaks and find network vulnerabilities proactively. It also dynamically automates tasks such as setting up alerts for specific keywords and detecting sensitive information online. Threat hunters are using AI to identify unusual patterns and summarize large amounts of data, connecting the dots across multiple sources of information and hidden patterns.
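To make the keyword-alerting idea above concrete, here is a minimal sketch in Python of the sort of automated check a defender might run over incoming text such as logs or paste-site feeds. The watch list, patterns, and feed name are hypothetical illustrations, not any particular vendor’s tooling.

```python
import re

# Hypothetical watch list: terms a defender might want flagged in logs or leaked text.
WATCH_KEYWORDS = ["acme-corp-internal", "db_password", "confidential"]
# Crude pattern for payment-card-like digit runs (illustrative only).
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def scan_text_for_alerts(source: str, text: str) -> list[str]:
    """Return human-readable alerts for keyword hits or card-like numbers."""
    alerts = []
    lowered = text.lower()
    for keyword in WATCH_KEYWORDS:
        if keyword in lowered:
            alerts.append(f"[{source}] keyword match: {keyword!r}")
    if CARD_PATTERN.search(text):
        alerts.append(f"[{source}] possible payment card number detected")
    return alerts

if __name__ == "__main__":
    sample = "Leaked dump contains db_password=hunter2 and card 4111 1111 1111 1111"
    for alert in scan_text_for_alerts("paste-feed", sample):
        print(alert)
```

In practice the interesting part is what feeds such a scan and how alerts are triaged; the check itself is simple, which is why vendors layer AI-driven summarization and pattern detection on top of it.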
The threat actor known as BatLoader has been observed conducting a malicious campaign using Google Search Ads to deliver imposter web pages for ChatGPT and Midjourney. Security researchers at eSentire’s Threat Response Unit (TRU) described the campaign in an advisory published on Tuesday.
“[ChatGPT and Midjourney] are extremely popular but lack first-party standalone apps (i.e., users interface with ChatGPT via their web interface while Midjourney uses Discord),” reads the technical write-up. “This vacuum has been exploited by threat actors looking to drive AI app-seekers to imposter web pages promoting fake apps.”
eSentire also explained that, in its latest campaign impersonating ChatGPT, BatLoader uses MSIX Windows App Installer files to infect devices with Redline Stealer.
The installation involves running an executable file and a PowerShell script, which then installs and executes Redline Stealer. The script also executes two requests to the C2 panel, recording the start time and victim’s IP address and documenting the successful payload installation.
May 17, 2023 – Infosecurity Magazine
ChatGPT, a popular chatbot from OpenAI, has potential data privacy flaws that have not been properly addressed, cybersecurity experts at Surfshark have warned.
Italy has recently lifted a temporary ban on ChatGPT after the country’s privacy watchdog said that OpenAI had met its demands over unlawful data-collection practices.
Yet steps taken by the Microsoft-backed company are limited in scope and may continue to be in violation of privacy rights, according to Surfshark analysis.
The basis of the Italian ban was the use of personal data to train ChatGPT models without user consent, in violation of the EU’s data protection and privacy law known as the General Data Protection Regulation (GDPR).
To address these concerns, OpenAI has provided users in the EU with an opt-out of their data being collected to train the AI model.
“However, the form is only available in the EU, and those who do not actively fill out the form can expect their data to remain on the platform,” Surfshark said.
So Apple has restricted the use of OpenAI’s ChatGPT and Microsoft’s Copilot, The Wall Street Journal reports. ChatGPT has been on the ban list for months, Bloomberg’s Mark Gurman adds.
It’s not just Apple, but also Samsung and Verizon in the tech world and a who’s who of banks (Bank of America, Citi, Deutsche Bank, Goldman, Wells Fargo, and JPMorgan). This is because of the possibility of confidential data escaping; in any event, ChatGPT’s privacy policy explicitly says your prompts can be used to train its models unless you opt out. The fear of leaks isn’t unfounded: in March, a bug in ChatGPT revealed data from other users.
5 Ways Hackers Will Use ChatGPT for Cyberattacks
1. Malware Obfuscation
Threat actors use obfuscation techniques to alter malware signatures and bypass traditional signature-based security controls. Each time researchers at CyberArk interacted with ChatGPT, it returned distinct code capable of generating different iterations of the same malware application. Hackers could therefore use ChatGPT to generate a virtually infinite number of malware variants that traditional signature-based security controls would struggle to detect.
By leveraging the capabilities of ChatGPT, hackers can create polymorphic malware that can evade detection and continue to infect systems over a prolonged period. Additionally, ChatGPT can be used to craft sophisticated phishing attacks that can trick even the most cautious users into divulging sensitive information.
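To illustrate why signature-based controls struggle here, the sketch below shows hash-based signature matching in Python; the blocklist entry is a placeholder (the SHA-256 of empty input), not a real indicator. Because any change to a file’s bytes yields a completely different hash, an attacker who can cheaply generate variants of the same malware sidesteps this kind of exact-match check, which is the core of the polymorphism problem described above.

```python
import hashlib

# Illustrative blocklist of known-bad SHA-256 hashes.
# This entry is the hash of empty input, used here purely as a placeholder.
KNOWN_BAD_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def is_known_bad(data: bytes) -> bool:
    """Exact-match signature check: any byte-level variation defeats it."""
    return sha256_of(data) in KNOWN_BAD_HASHES

if __name__ == "__main__":
    original = b"sample payload"
    variant = b"sample payload "  # one extra byte is enough to change the hash
    print(sha256_of(original) == sha256_of(variant))  # False: the "signature" no longer matches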
2. Phishing and Social Engineering
Phishing attempts were often easy to spot in the past because of poor grammar and spelling errors. However, with ChatGPT, cybercriminals can create convincing, accurate phishing messages that are almost indistinguishable from legitimate ones, making it easier to trick unsuspecting individuals.
Software company BlackBerry has shared examples of phishing hooks and business email compromise (BEC) messages that ChatGPT can create, despite OpenAI having implemented measures to prevent it from responding to such requests.
3. Ransomware and Financial Fraud
With its ability to generate human-like responses and understand natural language, hackers can use ChatGPT to craft spear-phishing emails that are more convincing and tailored to their targets, increasing the chances of success. For example, it can facilitate fraudulent investment opportunities and CEO fraud. Hackers can use it to generate fake investment pitches or emails impersonating CEOs or other high-level executives, tricking unsuspecting victims into sending money or sensitive information.
Furthermore, ChatGPT can automate the process of creating malware and encryption algorithms – even hackers with limited technical experience can use advanced AI to build the core elements of ransomware-type programs, making it easier for them to launch attacks. Another potential implication of ChatGPT for ransomware is its ability to learn from past attacks and adapt to new security measures. In response to this evolving threat, organizations can conduct regular security audits, use advanced threat detection tools, and provide regular cybersecurity training to employees.
4. Telegram OpenAI Bot
The Telegram OpenAI bot as a service has been a subject of interest for developers and hackers alike. Recently, Check Point Research discovered that hackers had found a way to bypass restrictions and are using it to sell illicit services on underground crime forums.
The hackers’ technique involves using the application programming interface (API) for OpenAI’s text-davinci-003 model instead of the ChatGPT variant of the GPT-3 models designed explicitly for chatbot applications. OpenAI makes the text-davinci-003 API and other model APIs available to developers so they can integrate the models into their own applications. However, the API versions do not enforce restrictions on malicious content.
As a result, the hackers have found that they can use the current version of OpenAI’s API to create malicious content, such as phishing emails and malware code, without the barriers OpenAI has set. One user on a forum is now selling a service that combines the API and the Telegram messaging app. The first 20 queries are free; after that, users pay $5.50 for every 100 queries. This raises concerns among security experts, who worry that this service will only encourage more hackers to use AI-powered bots to create and spread malicious content.
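For context, the snippet below is a minimal sketch of what a direct integration with OpenAI’s legacy completions endpoint looked like at the time, using a plain HTTP request; the prompt is a placeholder, and the text-davinci-003 model has since been deprecated by OpenAI. The point is simply that API access is a developer-facing channel distinct from the ChatGPT web interface, which is why it carried different guardrails.

```python
import os

import requests

# Legacy completions endpoint used by developer integrations of text-davinci-003.
API_URL = "https://api.openai.com/v1/completions"

def complete(prompt: str) -> str:
    """Send a prompt to the legacy completions API and return the generated text."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"model": "text-davinci-003", "prompt": prompt, "max_tokens": 100},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["text"]

if __name__ == "__main__":
    # Placeholder prompt for illustration only.
    print(complete("Explain, briefly, how API access differs from a chat interface."))
```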
5. Spreading Misinformation
The recent discovery of a fake ChatGPT Chrome browser extension that hijacks Facebook accounts and creates rogue admin accounts is just one example of how cybercriminals exploit the popularity of OpenAI’s ChatGPT to distribute malware and spread misinformation.
The extension, promoted through Facebook-sponsored posts and installed 2,000 times per day since March 3, was engineered to harvest Facebook account data using an already active, authenticated session. Hackers used two bogus Facebook applications to maintain backdoor access and obtain full control of the target profiles. Once compromised, these accounts were used for advertising the malware, allowing it to propagate further.