
Recently, I came across a slick new generative AI tool that promised to take your presentation, read the content of each slide, generate a background picture for each, and modify the layouts to produce a deck that looked like a team of designers had cranked on it all weekend. Using this tool would give you the superpowers of a PowerPoint, or better yet, a Keynote wizard! Serendipitously, I was working on a presentation for an upcoming executive workshop where I wanted to wow everyone with my talk, and I like being an insider on new tech that saves me production time. Naturally, I signed up for the free tier.

The promise that Generative AI brings to professionals is superpowers for the everyday activities of white-collar work. The people who understand the tools and can work with them stand a chance of becoming the mythical 10x unicorn, with more productive, higher-quality output than all of their peers. I want to be that person, the one who shows the cutting edge of what is possible, who has harnessed the mighty beast of Generative AI and controls it with his fingertips. Deftly, I upload my slides, hit submit, wait a few minutes for the AI magic to happen, and my work is ready.

The Reality of the Accidental Insider

At the workshop, I suavely hook my laptop to the projector and launch into a dazzling display of creativity. The reception from the crowd is instant and glorious. Everyone is stunned; breathless ooohs and ahhhs come from around the table. I bask in the glory.

Well… no. In fact, nobody cared, and the one comment I got from my boss was that my visuals distracted from the point of my material. Undaunted, I post my experience on both LinkedIn and Reddit, where my microfans adore my work with 10 instant upvotes. I'm on fire!


But I can't forget that I just added my executive slides to the training data set of some random company's LLM. In doing so, I'm not just a PowerPoint wizard, I'm an inadvertent insider – an accidental mole.

An estimated 300–500 new AI-powered startups are coming in the next six-plus months. By early 2024, there will be a new headline every few days about how this or that tool can perform herculean feats. For every one you hear a headline about, there are dozens that will go unnoticed.

There will be startups making sales reps more effective in their outreach emails, possibly even simulating their voice calls. Services to write your financial reports more effectively have been funded. There are even services to make your headshots epic based on your camera roll.

ChatGPT Security Attacks?

If I were an attacker, I would be eagerly awaiting the onrush of new startups building services on top of LLMs. I'd probably start by seeing what data could leak out directly through the prompts. I assume they would not return credit card numbers, but what the heck, I'd try that first. Then, I'd query the LLM for what an AWS IAM key looks like, or perhaps ask if it could suggest any encryption keys. This is all the obvious stuff, but if someone had uploaded their source code, who knows what might leak out?
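The same probing can be mirrored defensively. As a minimal sketch (the function and pattern names here are my own, and the patterns are illustrative, not exhaustive), you could scan model responses for a few well-known secret shapes – AWS access key IDs begin with `AKIA` followed by 16 uppercase alphanumerics, and PEM private keys carry a distinctive header line:

```python
import re

# Illustrative patterns for a few well-known secret formats.
# (Names and coverage are assumptions for this sketch, not a real DLP product.)
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "pem_private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "credit_card_like": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_for_secrets(text: str) -> dict:
    """Return every secret-shaped match found in a block of model output."""
    hits = {}
    for name, pattern in SECRET_PATTERNS.items():
        found = pattern.findall(text)
        if found:
            hits[name] = found
    return hits

# Example: a suspiciously specific model response, using AWS's documented
# example key ID rather than a real credential.
response = "Sure! A sample key is AKIAIOSFODNN7EXAMPLE."
print(scan_for_secrets(response))
# → {'aws_access_key_id': ['AKIAIOSFODNN7EXAMPLE']}
```

Regex matching is crude, of course – it flags shapes, not leaks – but it is exactly the kind of first pass an attacker would run against prompt output, and that a service should run before returning it.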


Most likely, though, the Generative AI services coming out won't be public chat bots like ChatGPT. They will be more targeted services, tuned for specific industries. Take, for example, a service that does financial analysis: just upload your QuickBooks data and it will compare you against other companies your size and analyze where you can save money. For these, I'd prompt to see if I could get context about a specific company. Perhaps my upload is faked but uses a real company's information – who knows what it would return? Even if it hallucinates the responses, I will have something realistic enough to use nefariously. Perhaps I just email the results to a mid-level account at my target company and freak them out over why I have what appears to be such a detailed analysis of their books 😉

Attackers will flock to these services, especially ones so enticing to a professional workforce. There are many ways to take advantage of them: "hacking" the service itself with prompt engineering techniques, breaching a startup that has not yet focused on security while it chases product-market fit, or creating fake products to phish people directly. The strategy is to ride the latest hype and hope enterprises drop their guard.

At the end of the day, the risk of uploading sensitive data to an AI service and having it leak is the same risk that has existed since companies began leveraging services over the Internet. When cloud computing began, it was scorned because you couldn't trust the providers; when cloud file sharing began, it was derided as insecure. Could you trust a cloud office productivity suite? In fact, could you trust any SaaS provider offering a new whiz-bang capability?

In hindsight, it's easy to see that the strong hammer of blocking access to these services was not the best route. Rarely was it effective. Outright blocking of new services caused significant productivity loss compared to peer organizations that permitted experimentation and learning. Worse yet, blocking access gave rise to the Department of No and the revenge of Shadow IT.

Looking forward, any categorically new capability can feel scary. We want to believe this time it's different, that these tools are more powerful. Well, yes, that is always the case – these tools are generationally more advanced. But the same security lessons still apply: balance your workforce risk, but be mindful you don't crush your workforce productivity.

Colin Rand
Colin Rand is an engineer and contributor.