Can neural networks keep secrets? Data protection when working with AI

Neural networks are creeping into every area of our lives: from big data analysis, speech synthesis, and image generation to controlling autonomous vehicles and aircraft. In 2024, Tesla rolled out end-to-end neural networks in its autopilot software, AI has long been used in drone shows to form shapes and QR codes in the sky, and marketers and designers rely on it to generate illustrations and text.

After ChatGPT's release at the end of 2022 and its explosive popularity, many companies began actively developing their own services based on GPT models. Thanks to these services and AI-based bots, neural networks have become accessible to a wide range of users. But using them carries certain risks if you don't follow basic information security rules. Let's talk about those.

Risks of using neural networks

For many people, the euphoria over discovering ChatGPT has given way to caution. With so many services built on language models, both free and paid, users have noticed that chatbots can return unreliable or outright harmful information. Incorrect advice on health, nutrition, or finances is especially dangerous, as are instructions on manufacturing weapons, distributing drugs, and the like.

Moreover, neural networks keep improving, and the latest versions can create incredibly realistic fakes by synthesizing voice or video. Scammers use these capabilities to deceive victims, forging messages and calls from acquaintances and videos featuring famous people.

The main threat is that many users trust neural networks and chatbots by default. Surrounded by an aura of accuracy and objectivity, people forget that neural networks can invent facts, provide false information, and draw wrong conclusions; such mistakes have been demonstrated many times. If you ask trivial questions, the damage is minimal. But if you rely on chatbots for financial or medical decisions, the consequences can be devastating. On top of that, getting a useful answer from a neural network often requires you to provide some data of your own.

A big question is what happens to that data afterwards. No one guarantees that the information you include in your queries won't later surface somewhere on the darknet or become the basis for a sophisticated phishing attack.

In March 2024, researchers at Offensive AI Lab showed a way to reconstruct intercepted responses from ChatGPT and Microsoft Copilot despite the encryption of the traffic. However quickly OpenAI patched that particular issue, it's a good example of how malicious actors can exploit API-level vulnerabilities to steal your data, including passwords or corporate information. Vulnerabilities can also be used to DDoS the system or bypass its protections.

There are several types of attacks on AI, and it's important to understand the differences. Evasion attacks, which modify the input data, are potentially the most common: if a model needs input data to work, that input can be crafted specifically to disrupt the model's behavior. Data poisoning attacks, by contrast, are long-term: a trojan embedded in the model through its training data remains even after retraining. Both fall under the umbrella of adversarial attacks, meaning ways of fooling a neural network into producing an incorrect result.
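To make the evasion scenario concrete, here is a minimal sketch of the classic FGSM (Fast Gradient Sign Method) technique in Python with PyTorch. The toy classifier and random input are placeholders of my own choosing, not any particular production model; the point is only how a small, targeted change to the input can push a model toward a different answer.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy classifier standing in for a deployed model (untrained, for illustration).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

x = torch.rand(1, 1, 28, 28, requires_grad=True)  # input the attacker controls
true_label = torch.tensor([3])

# 1. Forward pass and loss with respect to the correct label.
loss = nn.functional.cross_entropy(model(x), true_label)
loss.backward()

# 2. Nudge every pixel in the direction that increases the loss.
epsilon = 0.1  # perturbation budget: small enough to look unchanged to a human
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

print("prediction on original input: ", model(x).argmax(dim=1).item())
print("prediction on perturbed input:", model(x_adv).argmax(dim=1).item())
```

The same idea scales up to real systems: the attacker never touches the model itself, only the data it is fed.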

Neural networks are not yet well protected from attacks, data falsification, and malicious interference in their operation, so users should stay alert and follow certain rules when working with chatbots.

Precautions and recommendations

The technology of large language models is rapidly developing, penetrating deeper into our lives, and gaining more users. To protect yourself and your data from potential threats, follow some rules when working with neural networks:

  • Don’t share confidential info with chatbots;
  • Download neural network apps and services from reliable sources;
  • Verify the info provided by the chatbot.

Above all, when working with public neural networks, don't assume that your dialogue with them is private. Avoid situations where your questions contain any private information about you or your company. The exception is an isolated instance of a neural network deployed in your own environment and managed by your company.
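As a small illustration of the "don't share confidential info" rule, below is a minimal sketch of scrubbing obvious personal data from a prompt before it leaves your machine. The regex patterns and the sample text are illustrative assumptions, not a complete DLP solution; real deployments usually combine such filters with policy and monitoring tools.

```python
import re

# Patterns for data that obviously should not end up in a third-party chatbot.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "phone": re.compile(r"\+?\d[\d\s().-]{8,}\d"),
}

def redact(prompt: str) -> str:
    """Replace anything that looks like personal data with a placeholder."""
    for name, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{name.upper()} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = ("Contact John Doe at john.doe@example.com or +1 415 555 0100 "
           "and charge card 4111 1111 1111 1111.")
    print(redact(raw))  # only the cleaned text should go to a public chatbot
```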

Also, check the services through which you interact with the neural network. An unknown channel in your messenger promising free access to every well-known LLM definitely shouldn't be trusted.

Companies whose employees use neural networks at work should be extra cautious: malicious actors take a keener interest in corporate data and look for sensitive organizational information first and foremost.

The best way to protect against cyber threats is ongoing cybersecurity and AI training for employees; it should be a must-have in any workflow. Such training improves specialists' skills and, by some estimates, can reduce the number of successful attacks by more than 70%.

Additional measures should also be taken to strengthen the company's overall IT security. First, develop improved AI training algorithms that account for the model's vulnerabilities, which, by some estimates, can make it more reliable by 87%. It is also worth "training" the neural network on artificially created cyberattacks so that it learns to withstand them; this is said to help reduce the number of successful hacks by 84%. Finally, keep software constantly updated, which can cut vulnerabilities by more than 90%.
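For the "train the network on artificial attacks" point, here is a minimal sketch of adversarial training in PyTorch, under the assumption that the defended model is a simple classifier: each training batch is augmented with FGSM-perturbed copies of itself, so the model learns to keep its answers stable under that kind of evasion. The synthetic data and hyperparameters are placeholders.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
epsilon = 0.05  # size of the synthetic attack

for step in range(200):
    x = torch.randn(64, 20)            # stand-in training batch
    y = (x.sum(dim=1) > 0).long()      # synthetic labels

    # Craft an "attacked" version of the batch with one FGSM step.
    x_adv = x.clone().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).detach()

    # Train on clean and attacked examples together, so the model
    # learns to give the same answer for both.
    optimizer.zero_grad()
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()

print("final combined loss:", round(loss.item(), 4))
```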

Conclusion

Both companies and ordinary users have already tasted the benefits of neural networks. In many areas they help solve everyday tasks and save time and money. For example, generative neural networks have lowered the cost of producing movies, TV series, and other videos that require graphics and post-processing. At the same time, much the same neural networks have triggered a wave of deepfakes, such as the new variant of the Fake Boss attack.

Every user must understand that a neural network is vulnerable. Just like a messenger, a mailbox, or a work task planner, it can be hacked or fail, so it is important to work with it consciously.