Many chief information security officers are on edge as big tech companies pour investment into chatbots and artificial intelligence even amid widespread layoffs and slowing growth.
With generative AI grabbing headlines thanks to products like OpenAI’s ChatGPT, Microsoft’s Bing AI, and Google’s Bard, plus Elon Musk’s proposal for a chatbot of his own, chief information security officers must approach the technology with caution and prepare the necessary security safeguards.
The technology is built on large language models (LLMs), the algorithms that generate a chatbot’s human-like conversation; GPT stands for generative pre-trained transformer, one such model. Since not every business runs a GPT of its own, employers must keep an eye on how staff members use publicly available ones.
Michael Chui, a partner at the McKinsey Global Institute, asserts that people will embrace generative AI if they find it valuable for their jobs, much as they did with personal computers and cell phones.
“Even when it’s not sanctioned or blessed by IT, people are finding [chatbots] useful,” Chui said.
“Throughout history, we’ve found technologies which are so compelling that individuals are willing to pay for it,” he said. “People were buying mobile phones long before businesses said, ‘I will supply this to you.’ PCs were similar, so we’re seeing the equivalent now with generative AI.”
As a result, companies have some “catch up” to do in terms of how they are going to approach security measures, Chui added.
Experts believe there are some areas where CISOs and businesses can start, whether that’s a common business practice like monitoring what information is shared on an AI platform or deploying a company-sanctioned GPT in the workplace.
CISOs already struggle with numerous issues, such as possible cybersecurity breaches and growing automation requirements. CISOs can begin by learning the fundamentals of security as AI and GPT enter the workplace.
Companies can license access to an existing AI platform, according to Chui, so they can keep an eye on what workers are saying to chatbots and ensure that any shared information is secure.
“If you’re a corporation, you don’t want your employees prompting a publicly available chatbot with confidential information,” Chui said. “So, you could put technical means in place, where you can license the software and have an enforceable legal agreement about where your data goes or doesn’t go.”
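The kind of technical control Chui describes can take the form of a gateway that screens prompts before they leave the corporate boundary. Below is a minimal, illustrative sketch in Python: the pattern list and the `redact_prompt` function are assumptions for demonstration, not any vendor’s actual API, and a real deployment would rely on a proper data-loss-prevention engine rather than a handful of regular expressions.

```python
import re

# Illustrative patterns for confidential data. These are assumptions for
# the sketch; real DLP tooling uses far richer detection than regexes.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace sensitive matches with placeholders before the prompt is
    forwarded to an external chatbot. Returns the redacted text and the
    names of the pattern categories that fired, for audit logging."""
    hits = []
    for name, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            hits.append(name)
            prompt = pattern.sub(f"[REDACTED-{name.upper()}]", prompt)
    return prompt, hits

redacted, hits = redact_prompt(
    "Email jane.doe@acme.com the key sk-abcdef1234567890XYZ"
)
print(hits)      # categories that fired
print(redacted)  # prompt with placeholders substituted
```

A gateway like this also produces the audit trail Chui mentions: logging which categories fired (without logging the raw secrets) gives the security team evidence of what employees attempted to share.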
According to Chui, licensing software use entails additional checks and balances. It’s usual practice for businesses to license software, whether or not it uses artificial intelligence, to protect sensitive data, control where it is stored, and set usage restrictions for staff.
“If you have an agreement, you can audit the software, so you can see if they’re protecting the data in the ways that you want it to be protected,” Chui said.
According to Chui, the majority of businesses already store information in cloud-based software, so getting ahead of the curve and providing staff with a company-approved AI platform is in line with standard business practice.