With ChatGPT and generative AI, new worlds of possibility are opening up, with applications for businesses both big and small. But when a new technology appears in the market, it can take some time for its full potential to be understood.
So when something like ChatGPT comes along, there are some important questions to address. Does technology like this have a place in your business? If it does, do you have the expertise to fully evaluate it? Does using it have security implications? And perhaps most importantly, are staff already using it, even if you aren’t aware of it?
It should go without saying that staff don't set out to undermine security or cause problems for their company. Nine times out of ten, if they bend the rules or take shortcuts with security policies, it's because they're looking for better or easier ways to do their job.
So if a new tool comes along that seems helpful, it's entirely possible that staff will use it without being aware of the implications and consequences.
"Tools like ChatGPT can be really useful and can seem to present a lot of advantages. The work of ten people can be done by one if it’s set up correctly. But from a security point of view, there can also be problems," said Muhammad Salahuddin Jawad, security architect with Vodafone Ireland.
"While we might use them to allow us to perform work faster or to automate aspects of our job, hackers and cybercriminals are also looking at these tools to make their work easier too. They are looking for ways to automate and streamline their activities, and if others are using a new tool they will also look at what vulnerabilities that open up."
Muhammad Jawad believes that the explosion in AI capability that is taking place right now heralds the birth of a new era in cybersecurity, one in which AI will be pitted against AI to break security protocols.
"These are powerful technologies, and they can be used for good and for bad. They will rapidly become too clever for people to effectively counter and we will need to produce good AI to defend against bad," he said.
While ChatGPT can be extremely impressive and can give the illusion that it has some kind of intelligence, in reality its focus is quite narrow. It is what’s known as a 'large language model' or LLM, software that predicts what kind of text it should generate next, based on what has been requested by the user.
It's been trained on huge amounts of information culled from the internet, so it appears to be able to answer any question, create complex pieces of text and even write code to complete tasks, but it has no insight into the meaning of what it is saying. As a result, it's capable of what's known as 'hallucinations', in which it produces plausible answers to questions that are nevertheless not correct.
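To make the idea of 'predicting what comes next' concrete, here is a deliberately tiny sketch in Python. It builds a bigram model (a tally of which word tends to follow which) from a toy corpus; real LLMs use neural networks trained on vastly more data, but the underlying task of next-word prediction is the same.

```python
from collections import defaultdict, Counter

# A toy corpus; real models are trained on vast amounts of text.
corpus = (
    "the cat sat on the mat the dog sat on the rug "
    "the cat chased the dog and the dog chased the cat"
).split()

# Count which word follows which word in the corpus.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after 'word', or None if unseen."""
    candidates = following.get(word)
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

print(predict_next("cat"))  # 'sat' (ties resolve to the first word seen)
print(predict_next("dog"))  # 'sat'
```

The model has no understanding of cats or dogs; it only knows which words have followed which before. That is why a system built on the same principle, however much larger, can produce fluent text that is confidently wrong.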
"It's important to be aware of how potentially sensitive information is used with these technologies. For example, what if someone in your company decided to use ChatGPT to prepare a presentation. Surely that would be harmless?" said Muhammad Jawad.
"But in order to do that, they'd have to feed the data they wanted to present into the application, so that it could prepare the slides. If that is sensitive company data, then there are obvious problems. You can't do that with sensitive information, and it's important that staff know that."
There are also other issues that can arise if, for example, staff use generative AI to write speeches, articles or blog posts that they then present as their own work. If these are posted online and are later found to have been generated by AI, that could cause reputational damage.
In addition, it’s possible that others using the same kind of requests may also get similar results and also post them, leaving readers wondering who copied from who?
"New technologies like ChatGPT will continue to appear and evolve, so the best solution is to have an overall approach that can deal with these developments without needing to be adapted to each one. We recommend that companies adopt a 'zero trust' approach to how they manage technology," said Muhammad Jawad.
"It's important to have a policy that everyone is aware of. It shouldn't matter if someone is sitting right next to you in an office, all communications should be predicated on the presumption that you don't know or trust the identity of where they have come from. And likewise, all applications like ChatGPT should be treated with suspicion."
Zero trust is a security strategy that requires all users of corporate IT assets, whether inside or outside an organisation's network, to be identified, authorised and continuously validated before being granted access to applications, data and assets.
The digital equivalent of 'presume the worst and hope for the best', it assumes that there is no traditional network edge: networks can be local, in the cloud or a combination of the two, with resources accessed from anywhere.
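To make that concrete, here is a minimal sketch of a zero trust check in Python. The token lookup and permission table are hypothetical stand-ins for a real identity provider and policy engine; the point is that every request is verified on its own merits, with no trust granted just because it comes from 'inside' the network.

```python
# Hypothetical permission table: (user, action) -> resources they may touch.
ALLOWED = {
    ("alice", "read"): {"sales-report"},
    ("alice", "write"): set(),
}

def verify_identity(token):
    """Stand-in for real authentication, e.g. validating a signed token."""
    return {"token-abc": "alice"}.get(token)

def authorise(user, action, resource):
    """Stand-in for a real policy engine."""
    return resource in ALLOWED.get((user, action), set())

def handle_request(token, action, resource):
    # Checked on every single request: no assumption that being on the
    # corporate network means the caller is who they claim to be.
    user = verify_identity(token)
    if user is None:
        return "denied: unknown identity"
    if not authorise(user, action, resource):
        return "denied: not authorised"
    return f"ok: {user} may {action} {resource}"

print(handle_request("token-abc", "read", "sales-report"))   # ok
print(handle_request("token-abc", "write", "sales-report"))  # denied
print(handle_request("bad-token", "read", "sales-report"))   # denied
```

Notice that the same checks run whether the request is the first of the day or the thousandth: validation is continuous, not a one-off login.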
To read more about ChatGPT and the positive impact it can have on your business, click here.
For more 1-2-1 business support, you can chat with our friendly V-Hub Digital Advisors.