
Is ChatGPT A Cybersecurity Risk?

No matter your industry, AI is a topic of many discussions and meetings these days. How does it work? How can it be used to improve work output? ChatGPT’s deep learning algorithms and breadth of capabilities are exciting to think about, but as with any new technology, there are questions and concerns that need to be addressed before you dive into the deep end. One major concern every business should consider is the cybersecurity risk to business systems and networks. Let’s take a look at how the use of ChatGPT or similar technologies can increase the risk of attacks on your networks and data.

Ease of Malicious Code Writing

AI is already being used to increase the productivity of programmers, and code generation will likely be one of ChatGPT’s most prominent capabilities. With no reliable safeguard to flag the writing of malicious code, it is also one of the top security risks. And it’s already happening. In January 2023, researchers at Check Point reported finding users in underground hacking forums demonstrating the use of ChatGPT to create encryption scripts and malware, including a Python-based file stealer. By shortening the time it takes to write code, ChatGPT lets malicious actors increase the volume of their hacking attempts.

Increase in Phishing Emails

Phishing is the most common form of cybercrime, with an estimated 3.4 billion malicious emails sent every day. Although ChatGPT’s policies prohibit the creation of malicious content, there is no foolproof way to enforce that at this time. Malicious actors can use ChatGPT to iterate on the wording of their prompts to create more personable and persuasive emails. And because ChatGPT supports more than 20 languages, it is easier for hackers to write grammatically correct messages, with fewer spelling and syntax errors, and improve their chances of getting users to take the requested action.

Increase in Data Security Breaches

As natural language processing models, ChatGPT and other AI-based language models are trained on an enormous amount of textual data that is processed to produce responses that sound human. Keep in mind that these technologies learn not only from what’s already out on the Internet but also from what users input and request. And because ChatGPT does not ask permission to use or reuse data, there is potential for AI chatbots to violate confidentiality agreements, HIPAA and more. In an example from Dark Reading, a doctor input his patient’s name and medical condition and asked ChatGPT to craft a letter to the patient’s insurance company—a stark reminder that employees need to be cognizant of the information and data they share.

As we all continue to explore ChatGPT and other AI and machine learning technologies, keep in mind that a robust, comprehensive cybersecurity program is a good first step to safeguarding your organization from security threats. The right tools and strategies can help you identify threats quickly and mitigate them efficiently. Interested in how Computer Solution can help? Contact us today to learn more about our cybersecurity assessments.