Bad Actors Use AI Too – Security Measures Your Company Should Keep In Mind
by: Mark Paulding and Dhara Shah
So Your Company Is Considering Incorporating AI… What Should You Be Thinking About?
The buzz around artificial intelligence (AI) seemingly increases with each passing week – with talk of new regulations, new vendors, and new security issues. As businesses begin to integrate AI, it is now more important than ever to pause and consider the security risks at play.
As with engaging any new technology or vendor, it is critical that your business first ensures a strong security program is in place. Oftentimes, enterprise security programs collapse in the rush to buy the latest tool or hire a new vendor. While it is important to continue developing your products, be sure not to shortchange your security in the process.
So how can you integrate AI into your business with a security mindset? Depending on how you are utilizing AI, consider the following:
Understand the potential risks the AI system can bring, and set forth guidelines on how best to mitigate those risks. If engaging a third party, be sure to audit its AI system before use. This includes understanding what data was used to create the AI system, its reliability, and any potential biases (as further discussed by our team here).
Enforce secure coding standards, development frameworks, and tools. Keep in mind that AI-generated code is still subject to threats and should still follow regular coding guidelines to reduce and eliminate vulnerabilities.
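As a concrete illustration of why generated code still needs secure coding review, the hypothetical sketch below (using Python's built-in sqlite3 module; the table and function names are purely illustrative) contrasts an unparameterized SQL query – a pattern that can slip into generated code – with its parameterized fix:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable pattern: string interpolation lets attacker-controlled
    # input alter the structure of the query (SQL injection).
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver treats the input strictly as data.
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()

# Demo: a classic injection payload leaks every row from the unsafe version.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, username TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "alice"), (2, "bob")])
payload = "x' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # 2 rows leak
print(len(find_user_safe(conn, payload)))    # 0 rows
```

The same code review checklists and static analysis tools your team already applies to human-written code would catch this pattern regardless of whether a developer or an AI assistant produced it.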
Ensure a level of human involvement to monitor the AI system and identify any gaps. Human oversight is especially critical in early stages to help mitigate attacks based on vulnerabilities created by AI.
Update employee policies and provide training on AI, including guidance on permissible uses of AI and how to handle potential threats – such as social engineering attacks. Keep in mind that the warning signs we relied on before (e.g., poor grammar) may no longer be as telling, given bad actors' use of large language models (LLMs).
Conduct regular security assessments to identify potential vulnerabilities and patch AI systems to address them. Explore using anomaly detection rather than signature-based detection to become aware of any unknown threats. And, update your security incident response plan as needed to address the AI systems.
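To illustrate the anomaly-based approach mentioned above, the minimal sketch below applies a simple z-score test to hypothetical hourly login counts (the numbers and threshold are illustrative; a real deployment would use far richer features and models). Unlike signature-based detection, it flags values that deviate sharply from a learned baseline rather than matching a known attack pattern:

```python
import statistics

def flag_anomalies(baseline, observed, threshold=3.0):
    """Flag observed values more than `threshold` standard deviations
    from the baseline mean (a simple z-score anomaly test)."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return [x for x in observed if abs(x - mean) / stdev > threshold]

# Hypothetical hourly login counts: the baseline reflects normal traffic,
# and the observed window contains one suspicious spike.
baseline = [102, 98, 110, 95, 105, 99, 101, 97]
observed = [104, 96, 480, 100]
print(flag_anomalies(baseline, observed))  # [480]
```

Because the test only knows what "normal" looks like, it can surface novel, previously unseen threats – exactly the kind that AI-assisted attackers may generate faster than signature databases can keep up with.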
Having a strong security framework in place will allow you to best mitigate the potential risks that AI may introduce to your business. While chatter about the risks associated with AI has been hitting headlines, you may have realized that these threats do not represent a fundamental change from what we already know. Rather, the change has been incremental – AI has simply made the threats we already know in cyberspace more efficient for bad actors to execute.
Common threats enhanced by AI include: (1) Phishing attacks – natural language processing and machine learning algorithms give bad actors the ability to better mimic legitimate content and communications. This makes it harder for both email spam filters and end users to flag fraudulent messages, and can lead to a direct increase in successful phishing attacks. (2) Deepfake attacks – deepfake technology can create true-to-life images, audio, and video. In the hands of bad actors, it can be used to spread misinformation and deceive users into providing sensitive data. And (3) automated AI attacks – AI algorithms can be used to identify vulnerabilities in a business's systems and launch attacks to exploit those weaknesses. These are just a few of the many ways AI can amplify already known risks in cyberspace. And as AI becomes more present in our day-to-day lives, we will likely see bad actors utilizing this technology in novel ways.
So, What Should My Business Do Now? Without delay, make sure your security measures are in alignment with industry standards. Before implementing AI in your services or engaging with an AI vendor, ask questions such as: What are the risks associated with using AI and AI tools? How are they being mitigated? How can we best train our employees on permissible uses of AI? What other training do we need in place? Is there a level of human oversight to ensure the AI tool is acting as expected? How can we detect and prevent security incidents? And keep in mind that the concerns with utilizing AI extend beyond just security risks – as further discussed here.
Originally published by InfoLawGroup LLP.