This started on LinkedIn, when I came across a claim about Gen AI from someone who'd been to Black Hat: “Nobody and I mean nobody knows how to secure it”.
I had a quick response of my own, drawing on my early career observing Nigerian princes faxing their requests for assistance with money laundering, and on how every wave of technology (email, Photoshop, even the internet itself) was heralded by doomsayers as the bringer of ultimate destruction. From a hype perspective, we've gone from "Gen AI is a shiny new toy" to "We must all have an additive Gen AI strategy" to "Gen AI is going to end the world."
I’ve spent years (decades even 🧓) in this field, working through successive hype-cycle booms and busts, and built tools for data leak prevention and phishing detection. And you know what? There is no need for panic.

Approaching Gen AI (or any new technology) with a calm mind, educating users, putting guardrails in place, and staying on top of both the tech and the security landscape is just what we've been doing for decades with every other technology.
I've written recently about the Ethics of AI. It's one of the things, along with security, privacy, and governance, that I've always built into my applications, "AI-powered" or otherwise. Maybe it's my background in enterprise and government software, but these have always been prime considerations.
In fact, that was one of the key reasons I was hired for (and excited to accept) my current role. They liked that I'm not a blind fanboi, and that I could talk about more dimensions than a simple "let's shove Gen AI everywhere, because it can do everything!" They actually want me to steer the company towards ethical AI, which encompasses security and privacy, ethical data sourcing, and broader environmental, social, and governance (ESG) considerations. It's a major differentiator for our customers.
Back to security. First, let's look at the current risk landscape:
Gen AI isn't new — the proliferation of it due to the hype cycle is.
Its mass adoption is leading to new issues, such as user privacy and copyright ethics; security is just another aspect.
Many of these considerations are unfortunately not top of mind for those developing a new technology (or those rushing to adopt it), and so they lag behind.
The general public isn't educated on the risks, but they never are.
So there are two main communities of risk: those implementing AI, and the general public (including your employees).
Let's start with the general public. Gen AI is a new tool with an accelerated pace and reach, but so was email over snail mail. Saying “Nobody and I mean nobody knows how to secure it” is far from the truth. Whether it's an email from your boss's personal account urgently asking for the latest client list because he just can't access it, or a man with a hard hat and a clipboard politely asking for access to the server room to check the electrical connections, we've 'solved' those problems by educating people to pay more attention. (Which is, BTW, why spam emails are badly written: they need to weed out the smart people, who would only waste the scammers' time.)
Gen AI carries some of the same risks and some new ones. Spam and scams aren't new, and will always be a cat-and-mouse game of increasing complexity. Deepfakes are getting better and making detection harder, but these aren't new security issues, just people being people. You need to educate users and find a human-centred balance between security and operational efficiency. (There's a whole discussion about the erosion of trust across society over the past 70 years, which is way outside the scope here.)
Then there are the risks of actually using LLMs (or other generators) naively and potentially exposing data that then gets ingested into public models. This is just another data leak vector, the same as leaving speaker notes in presentations sent to the press, or leaving PDF files on your site after the page that linked to them is gone.
In both of these examples, the same data security principles apply: educate users, minimise what private info you keep (and know why you keep it, or else dispose of it), protect the raw data, prepare for the unavoidable incidents (from stuff-ups to infiltration), and so on. All it takes is someone with a security background and an understanding of LLMs to devise and enact security policies around Gen AI, just as for anything else, which is a far cry from the statement above.
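To make the "minimise and protect" point concrete, here's a minimal sketch of an outbound guardrail that scrubs obvious private identifiers before a prompt leaves your boundary. The patterns and the scrub function are illustrative assumptions for this sketch, not real data-leak-prevention tooling; production systems layer on classifiers, allow-lists, and human review.

```python
import re

# Illustrative patterns only (an assumption for this sketch);
# real data-leak-prevention tooling goes well beyond regexes.
REDACTIONS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def scrub(text: str) -> str:
    """Replace obvious private identifiers before text leaves your boundary."""
    for label, pattern in REDACTIONS.items():
        text = pattern.sub(f"[{label} removed]", text)
    return text

prompt = "Summarise: contact Jane on jane.doe@example.com or +61 2 9999 9999."
print(scrub(prompt))
# -> Summarise: contact Jane on [email removed] or [phone removed].
```

The point isn't the regexes; it's that the same "check it before it leaves" discipline we've long applied to email attachments applies unchanged to prompts.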
Second, when you're implementing technologies, it's important to remember that Gen AI isn't the only type of artificial intelligence out there. It's the latest evolution, and a powerful one, but not the one tool for all jobs.
In all cases, though, machine learning of any sort relies on data. You should consider the source of the data, rights of use, privacy, storage security, and the environmental and social costs, just as you would when accumulating any other data. Consider and plan for when those things change (e.g. privacy legislation; GDPR didn't make anyone go bankrupt).
Have internal and external discussions about your stance on security. Review the security of your whole supply chain (e.g. your cloud provider, third-party libraries, and the Gen AI models and APIs you use). Involve your security people early and take heed of their input. Put yourself in the shoes of your customers, and consider what they'd care about and what they'd find offensive if (when) a breach happens. Reach a balance that works for your customers and your business.
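On the third-party-library link of that chain, some checks are cheap to automate. Here's a minimal sketch, assuming a Python environment and the public OSV (Open Source Vulnerabilities) API; in practice you'd reach for dedicated tooling such as pip-audit or your platform's dependency scanning rather than rolling your own.

```python
# A minimal sketch of one supply-chain check: querying the public OSV
# (Open Source Vulnerabilities) database for each installed package.
# Endpoint and payload per https://osv.dev; error handling omitted.
import importlib.metadata
import requests

for dist in importlib.metadata.distributions():
    name, version = dist.metadata["Name"], dist.version
    resp = requests.post(
        "https://api.osv.dev/v1/query",
        json={"package": {"name": name, "ecosystem": "PyPI"}, "version": version},
        timeout=10,
    )
    vulns = resp.json().get("vulns", [])
    if vulns:
        print(f"{name} {version}: {len(vulns)} known vulnerabilities")
```

A cheap check like this doesn't replace a proper supply chain review; it just makes "review regularly" feasible.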
As always, there is little value in blind panic, and you should evaluate everyone's motives (mine too 😉). Even if you agree with them. Especially if you agree with them. Who is proclaiming AI is the solution to everything? (Silicon Valley money.) Who is claiming it's the end of days? (Consultants and those who profit from attention.) Who's saying calm down and think about ethics and ESG when it comes to AI? (Lots of people, from Mozilla to me, who think the world would be better if people were nicer to each other.)
We in product talk about desirability, feasibility, and viability, but we should also always talk about security, ethics, privacy, and ESG in general. Welcome to the world of responsible product development!
I’ve been blogging a lot about AI recently. Obviously from a product management and development lens, but I’m curious: do you see value in these articles? Would you rather I add back more posts about general product management concepts? What brought you here, and what keeps you reading?
Please let me know in the comments!

