It seems like every week we get a new announcement about the benefits or pitfalls of AI.
by Ian Curwen
In the last couple of weeks, we’ve seen the launch of a new Chinese AI model and an article from BBC News worth thinking about.
The launch of DeepSeek seemingly took the world by surprise. If we believe what we read, this model is as good as those we’re all familiar with, but cost a fraction as much to develop and used far fewer resources.
Despite US concerns about TikTok’s Chinese ownership, people are rushing to download the DeepSeek app. They’re blindly accepting terms and conditions which are at best opaque, and at worst non-existent. For a few days after DeepSeek became popular, the links to the terms and conditions were broken. Despite this, it still topped the download charts.
Regardless of the specific security concerns about DeepSeek, it’s a helpful reminder of both AI’s omnipresence and the security risks associated with all generative AI models.
These tools generate content based on information they have learned from training data. The data used to train these models varies from model to model, and we rarely know exactly what sources they draw on. It often includes both copyrighted and non-copyrighted material, as well as content that users upload into the model.
This means that everyone should be cautious of the information they share through any AI model. If you upload a picture of your child, how is this used by the model? What about your family finances or personal information? Are you releasing information that might be innocent in isolation but could cause problems when used cumulatively?
For any business, the risks are just as real, and could be much more significant. This is especially the case at Sellafield, where so much of our data can be considered sensitive.
The truth with most AI models is that we don’t really know how they use our data. But we can be pretty sure they do.
This means we must be cautious about opening up these channels.
However, the status quo is not without its risks. As this BBC article outlines, people are increasingly using AI in their personal lives. This means they’re likely to want to use these tools in the workplace too.
If we don’t authorise the use of any official AI platforms or models, the risk is that people will use their own – and we have no control over that, or over which one they choose. And it’s not just models; there’s also a plethora of AI apps, designed to make our lives easier by solving a niche issue or completing a specific task. We probably know even less about how they share our data and which models they use to do so.
Right now, some of our people are testing the corporate web version of Microsoft Copilot. They’re looking at how the platform could be used, and what the pitfalls might be of doing so. In this version, prompts and responses aren’t fed back into the wider generative model, which provides more data security. It also means some functionality isn’t as advanced as it might be on the unrestricted versions – it’s a balance, isn’t it?
I’ve used it a few times, and it’s proven most valuable when I’ve asked it to analyse data sets, like narrative responses to a survey or sorting feedback. It can do this much quicker than a human, and it almost certainly doesn’t carry our own biases and assumptions.
But AI can get an answer wrong. There are plenty of examples of models making things up too, like citing non-existent papers or inventing data in tables.
Someone told me you should treat any AI model like a new employee or consultant. At first, you’re likely to give them a lot of guidance, and to review their work before using it or sharing it more widely. Do the same with AI.
The communications impact
So, what does this mean for communications?
For internal communications:
· Make sure you have a clear policy statement on use of AI
· Work with your IT departments to test this. The comparison shouldn’t be between using AI and not using it, but between approved AI and people using shadow AI of their choice
For external communications:
· Have a clear and honest statement on when you’re using AI, and if you’re drawing on data you hold about your customers or residents, be upfront about that too.
If you work in communications, the reality is that people are increasingly going to expect you to know how to use a wide range of AI tools to aid communications. Being able to pen a good prompt and pull useful content out of a system is probably already the baseline expectation.
You’re also going to be expected to create a plethora of AI content – from how-to guides to podcasts and videos. You’ll need to know both how, and when, to do this.
Ian Curwen is a stakeholder relations and internal communications professional working in the nuclear industry
Image by Ian created via AI