Get IT Started Podcast

GISGID EP 24 – Colin Rand and Den Jones

Welcome to another episode of Get It Started, Get It Done. I’m your host, Den Jones. Um, if we don’t make it in the software business, then this is the fallback plan, uh, so I guess we’re lucky our shit’s good. So, um, we don’t have to fall back on that, um, plan. And, uh, so with me, great guest, Colin Rand. I will let Colin introduce himself because I would hate to screw it all up, and even, you know, get his title wrong. ‘Cause I thought he was head cleaner but, you know, I believe he does something else.


Colin Rand (00:58):

Close. Close.

Den Jones (00:59):

Colin.

Colin Rand (01:00):

I am… Well, my name is Colin Rand. I go by Mr. Fancy Pants Security Guy internally here. Um, I will not go into that backstory, that is a conversation for a different day, different podcast, on a different channel, but, uh, stay tuned for this one first. So my name is Colin, I run R&D for Banyan Security which is a short abbreviation for research and development. And, um, so I get to make up a lot of cool stuff. We throw stuff at the wall, see what sticks, but it’s all related to security which is pretty fun these days, because security is… Is a pretty hot topic.

Den Jones (01:36):

Yeah, I believe so.

Colin Rand (01:37):

That’s about it. Yeah.

Den Jones (01:37):

I believe so. And then today we’re gonna talk a little bit about that AI business, that AI security thing, right? So-

Colin Rand (01:45):

It… It’s important.

Den Jones (01:46):

Wh- why do-

Colin Rand (01:46):

I’m struggling with the I part right now, but we’ll add the A and we’ll see what we can do.

Den Jones (01:52):

(laughs) yeah. Yeah, yeah. Well, so first of all for, uh, those who don’t know wh- wh- what… How would you describe AI to our… Our viewers and listeners?

Colin Rand (02:02):

So AI, artificial intelligence, is the paradigm of computer science, and that’s the end of the technical talk. It’s basically trying to figure out how to have computers, uh, emulate humans, and act as humanoid kind of intelligence, and use that to perform human-like functions. So in the digital world, that’s creating content and interactivity that, uh, would pass the Turing test. That’s really what the goal of AI is, so, you should… If th- the most successful AI should be indistinguishable from a human, so-

Den Jones (02:38):

Awesome.

Colin Rand (02:39):

We are nowhere remotely close to that, thank goodness, because, uh, that’s… Uh, still the realm of science fiction.

Den Jones (02:47):

Well, you know, I did watch the Matrix, and impending doom, you know, it’s all looming upon us and stuff, the machines are taking over. That’s what I hear. Um, b- but there’s a couple of acronyms that float around when it comes to AI, so, um, LLM. So can you share, what does LLM mean and th… Giv- giv- give us an example of an LLM.

Colin Rand (03:16):

Yeah. LLM, so large language model, is a representation of language and written content, uh, primarily, not necessarily, ’cause there’s also oth- other forms of media. And this model, what it can do is suggest… I hesitate to say “predict,” but it does kind of predict, uh, the next word that a human would expect, uh, to be uttered or written, um, when asked a- a question, or interacting in a conversation. That’s the gist of it, it’s basically predicting the next word that’s gonna be said.

(03:54):

Uh, so to answer the second part of your question, uh, what are the ones that are well known? So there’s a company, OpenAI, probably has the most well known of these large language models right now. It’s called ChatGPT, and it’s running the risk of becoming called the Xerox, so everything hereafter will be a ChatGPT, but we’ll see. Maybe the term will stick as, uh, LLM.
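To make the “predict the next word” idea concrete, here’s a minimal, purely illustrative sketch in Python: a toy model that counts which word follows which in a scrap of text and then suggests the most likely continuation. Real LLMs like ChatGPT work over subword tokens with neural networks at vastly larger scale, but the basic interface is the same: context in, likely next word out.

```python
from collections import defaultdict, Counter

# Toy next-word predictor: count which word tends to follow which.
# Purely illustrative -- real LLMs use neural networks over subword tokens,
# but the basic interface is the same: context in, likely continuation out.
corpus = (
    "the civil war was fought in the united states "
    "the war began in 1861 and the war ended in 1865"
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the training text."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))  # -> "war", its most common follower in this tiny corpus
print(predict_next("war"))  # -> one of the words seen after "war" ("was", "began", "ended")
```

The toy also hints at why such models “make stuff up”: they only echo statistical patterns in their training text, with no model of what is actually true.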

Den Jones (04:13):

(laughs).

Colin Rand (04:15):

But ChatGPT is this, uh, fun little online… It’s a little bit more than a toy, but it’s still kind of in its toy stage where you can go and you can ask it questions, and you can say, “hey, write me a two-page essay sounding like a seventh grader on the American Civil War,” and it will produce something with lots of “hey bro, what’s up?” splashed in. So it’s a… It… It actually gives you a decent description of what happened in the American Civil War. So it’s a pretty powerful tool from that perspective, but it’s got some big drawbacks, and that’s what you’re hearing, uh, is that it still… It doesn’t actually know anything, it doesn’t actually do anything, it’s just trying to predict what you would expect next.

(04:55):

So sometimes it makes stuff up. So it would say, you know, “the Civil War was fought in South America.” And you’re like, “wait a second, we were just talking about the US Civil War.” And it… It… The model is not intelligent. It doesn’t realize you shifted context between North and South continents. Um, but it will do that. And you have to still pay attention to what the output is to… In order t- to trust it, so to speak.

Den Jones (05:20):

Yeah, I- I… I’ve read a bit about, um, reports from people who are saying that it sounds very convincing, the tone is very authoritative, referencing articles and things that… That are meant to be real. And then when you dig in and then you start researching it, you realize it’s all bullshit.

Colin Rand (05:39):

[inaudible 00:05:40].

Den Jones (05:40):

Um, or some of it’s… Some of it’s right, and then some of it’s not, but all of it is in the tone or style with a level of authority that you believe it’s all correct. Um-

Colin Rand (05:52):

‘Cause what it’s done… What it’s done is it’s gone out and got vast amounts of data a- and articles from all over the internet, and it’s basically put them in a giant meat grinder, cranked them up, and it’s like, it’s giving you sausage back. Prediction sausage where nothing is net new, but it’s just kind of taking bits and pieces from here and there and letting you predict what is going to be next. And so sometimes it sounds really insightful, but it’s just because it’s a combination of words you haven’t seen before. There’s no actual intelligence yet.

Den Jones (06:22):

I even… Yeah, th- there was a podcast I was listening to, obviously one, ’cause you know, I like to listen to intelligent podcasts, I don’t listen to this one, but I listen to other ones, you know?

Colin Rand (06:32):

(laughs).

Den Jones (06:33):

And they were… They were just saying where, um, some professors were suing, um, the company behind ChatGPT, ’cause they were like, um… They had been referenced as saying and publishing stuff, materials that were incorrect, and actually really slanderous to the character of that professor.

Colin Rand (07:00):

Right.

Den Jones (07:01):

A- and there’s no recourse-

Colin Rand (07:02):

Misattribution.

Den Jones (07:03):

You can’t get this information fixed or… Or improved, or, you know, questioned and stuff. Now, so, LLMs, is… Is that the only type of AI, or is that just the AI we focus on more?

Colin Rand (07:21):

That’s the AI du jour. It’s the… It’s the… What’s the hot topic right now. There are a lot of different models and, um, machine learning methodologies all kind of striving towards this like, general intelligence model. But, uh, it’s just the one that’s like, caught every… Gotten everybody by storm, you know. It started, uh, probably last fall with the early forms of ChatGPT-

Den Jones (07:42):

Yeah.

Colin Rand (07:42):

Where they were like, “oh look, it can code a function for you. Oh, look, it can make an image for you.” And now it’s able to write essays, so it’s… The rate of in- in… Wh- what’s gotten ev- everybody excited is, very few… Few people will take a look at the current generation of the tool and say, “wow, that’s the be-all end-all.” But the rate it’s progressing is what has caught people off guard. And so that’s, you know… Th- there’s different models. Um, the latest is Facebook… Meta has a model, and-

Den Jones (08:11):

Yeah.

Colin Rand (08:12):

It’s for audio, uh, recreation that they’ve said is, “too dangerous to release to the general public.” Um, so I’m glad they’re keeping it to themselves, you know? Th- that makes me feel so much better.

Den Jones (08:24):

Yeah. And don’t we… Uh, it’s funny ’cause, you know, I used to work at Adobe so I follow what they’re doing, and their… Their Firefly AI is-

Colin Rand (08:32):

Yeah, that is cool, too.

Den Jones (08:33):

Is phenomenal. I mean, it’s cool.

Colin Rand (08:34):

Really cool.

Den Jones (08:35):

Um, and I… But I do… I do remember as well, Adobe were developing technology to detect deepfakes-

Colin Rand (08:42):

Mm-hmm.

Den Jones (08:42):

In video, audio, pictures, so in the future you’d know straight away whether that thing is authentic or not. Um, I can’t… I can’t remember if they ever released anything or whether that’s still in the works, but they… They started-

Colin Rand (08:57):

It is a very, very hard problem to solve, especially as the gener-… Generative AI models improve over time, and whatever comes next, uh, it… It will be… It’s questionable right now, in my mind at least, whether you will be able to authoritatively detect, at better than a 50/50 rate, that something is AI generated.

Den Jones (09:20):

Yeah, well I’ll tell you what. Uh, the Beatles, they are releasing another record for the first time in over 50 years and, uh-

Colin Rand (09:27):

Oh, I- I’m so excited.

Den Jones (09:29):

They used… Yeah, they used AI to take an old demo from John Lennon and-

Colin Rand (09:34):

Oh, you’re serious?

Den Jones (09:34):

And… Yeah, I’m serious.

Colin Rand (09:35):

Oh, I thought you were kidding. Oh, I thought (laughs).

Den Jones (09:37):

No, no, no. You can… Yeah, you can Google it. Oh, no. No, you can Google this shit. So it… It was all over the British news. Um, and also Techmeme Ride Home is one of those podcasts I listen to a lot. And, um, they were just talking about how they used AI to distinguish and separate the piano, the background noise, his voice, and then take other recordings of him a- and improve the quality of his voice recording. I mean, it’s… It’s pretty slick. I mean, this… This shit is just-

Colin Rand (10:10):

Yeah.

Den Jones (10:10):

Going crazy. And then there’s been a lot of, um, AI used for music production, uh, which is… Is very interesting, you know?

Colin Rand (10:20):

Yeah.

Den Jones (10:20):

So if I want Eminem… If I want Eminem to rap on my next release, then-

Colin Rand (10:24):

Yeah.

Den Jones (10:24):

I can… I can use AI for that. So yeah. Now… Now talking about, y- y- you mentioned the phrase a minute ago. You were like, um, oh fuck. It’s something like, uh, “not ready for it.” You know, “it’s not quite prime time” and stuff. Or “caught off guard.”

Colin Rand (10:42):

Mm-hmm.

Den Jones (10:43):

I- I think the term caught off guard is very apt when it comes to enterprises and how they handle AI. So l- let’s talk a little about… So with your R&D hat on, um, a little bit about Banyan. What is it that you guys are looking at and working on when it comes to AI and security around AI that would maybe benefit our customers?

Colin Rand (11:07):

Yeah, that’s a great topic. It’s, um… It- it’s a really… So, all right. I break it down in a couple of ways. There’s two categories where I think AI is going to have a massive impact. Um, there is going to be externally, uh, the baddy… The bad actors using this technology to break into the enterprise. And fundamentally, their… Their main strategies already are social engineering and phishing. It’s the best way you get in the front door. The best way you get in is to be invited in. And if you think about what we were just talking about, like generative content, you, an individual, having a hard time distinguishing what is real and what is generative, are going to be faced with th- these new spear phishing attacks, where, as an industry, we’re describing it as, like, spear phishing at scale, hyper-personalized to you, your interests, and your knowledge.

(12:04):

The context it will have is going to be phenomenal, so imagine this, uh… Uh, you know, there’s some… There’s an email crafted to you that’s been tuned off of LinkedIn and your profile, other people’s profiles like you, jobs that you’re interested in, career trajectories. Trained on your corporate corpus of what’s happening at the company you’re in, and it… And a fake recruiter says, “wow, Colin, I love that you’ve been doing R&D for a couple of years. I think you would be great at this next role. Um, here’s why, here’s what I love about your background, and here’s where I think you’re a great fit for us.”

(12:40):

It’s flattering to me, it’s personalized, the English is crisp, and they say, “hey, set up a time on my calendar, here’s a link to it, and schedule time with me.” And bam, I’ve just been phished. I clicked the link and that’s it. So-

Den Jones (12:56):

And, uh, there was a telltale… There was a telltale sign in there though, Colin, ’cause he said-

Colin Rand (13:00):

Yeah.

Den Jones (13:00):

He thinks you’d be amazing in that role. That should have been-

Colin Rand (13:03):

I know, flattering.

Den Jones (13:04):

Your trigger. You should have been triggered there.

Colin Rand (13:06):

I’d go like, “aha, this person doesn’t know me.”

Den Jones (13:09):

Doesn’t know me that well, you know?

Colin Rand (13:13):

(laughs). But you think about it and, um… I- I’ll continue on this track of like, what is the… This… The ut… You know, another way to think about it is just this colossal noise at the front door. Um, th- this is going to happen to everybody all the time, and you can’t tell the difference between what is real and what is fake. So how is a legitimate recruiter supposed to reach me now, right? And it’s like, wait, uh, that’s a problem.

(13:36):

Now, put on your enterprise hat. We’re at an enterprise software company, we have… Everybody’s fav… I’m sorry, I’m going to offend some people. Everybody’s favorite, uh, in enterprise is the sales reps who dial for dollars and call you all day long. “Hey, I’ve got something,” right? Their job is this close t- to spam. And so AI technologies in the enterprise are already looking at, “how can I have this SDR, sales development rep, function be done by an AI, turning out the custom emails?” Right? Trying to get you a link to schedule a demo.

(14:07):

There is such a small line between that as a legitimate purpose, and the illegitimate purpose I just described, but the technology to power it is the same.

Den Jones (14:16):

Yeah.

Colin Rand (14:17):

It’s still gonna train on the LinkedIn profiles, it’s still gonna train on the corporate, you know, public data. And so you’re gonna get this scenario where you can’t… You don’t know what to trust anymore, and that… That creates a very profound security problem when you don’t know what to trust.

Den Jones (14:32):

Yeah.

Colin Rand (14:32):

So that’s the first category of barbarians at the gate, noise at the gate, trying to figure out who to let in. The other is this category of… Of the ultimate insider. So let’s say today somebody breaches in, they get some malware in your computer, and the malware is gonna try to figure out… It’s very clever malware, right? It’s like, “I’m gonna figure out where to go next. I’m gonna scan all the ports in your network, and see who might be vulnerable and crawl over there, a- and go do something.”

(14:55):

Admittedly, it takes a decent security stack, but you can detect when somebody’s spidering your infrastructure and say, “oh, why… You shouldn’t be able to reach… Oh there’s something going on in that computer, looks like the IP address, oh you’ve got some malware, let’s quarantine it.” You could… You can get to the bottom of it. But now, imagine malware that doesn’t do that. Instead, it spends a couple days monitoring your behavior, your clicks, where you’re requesting, and then it says, “okay, I see you have normal access patterns between 9am and…” Well, in my case 9:45 until I take my morning break, and then again, you know, when I take my mid-morning break. And so it’s learned my patterns, uh, and it’ll send requests that look like me, right?

(15:36):

It’s saying, “oh, go to this database and fetch these queries,” right? Or, “go… Go… Go to Salesforce and do this.” And it’s acting as if it’s a normal person.

Den Jones (15:46):

Yeah.

Colin Rand (15:46):

And it’s on my computer, and it’s got my credentials, and you… You can’t use anomaly detection. And so in that scenario, it becomes much harder. Now you can take it even farther and say, “okay, uh, lots of generative AI, the idea is that it can write new code.” And so it… It can, you know, rewrite its own code periodically, stay installed, and just continue and continue to evolve. It’s a little bit far fetched, but you can see where this goes is that it’s really hard to extract out of your infrastructure and your code once it’s there because you can’t… Not only can you not keep up with it ’cause the signatures look totally normal. Uh, and it’s evolving so the… The signature of your binary today is not the signature it is the next day, and so it’s very hard to keep up with. So there’s this ultimate insider.
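A minimal sketch of the anomaly-detection idea that this “ultimate insider” defeats. The access log, user name, and hour-of-day baseline below are hypothetical; real behavioral analytics are far richer, but the weakness is the same: malware that replays the victim’s own schedule and credentials never looks anomalous.

```python
from datetime import datetime

# Hypothetical access log of (user, ISO timestamp) events -- made-up data.
access_log = [
    ("den", "2023-06-12T09:45:00"), ("den", "2023-06-12T10:30:00"),
    ("den", "2023-06-13T09:50:00"), ("den", "2023-06-13T11:05:00"),
    ("den", "2023-06-14T03:12:00"),  # odd-hours request: easy to flag
]

def usual_hours(events, user):
    """Baseline: the hours of the day this user is normally active."""
    return {datetime.fromisoformat(ts).hour for u, ts in events if u == user}

def is_anomalous(event, baseline_hours):
    """Flag requests outside the learned hours. Requests that mimic the
    user's normal rhythm (and carry their credentials) sail right through."""
    _, ts = event
    return datetime.fromisoformat(ts).hour not in baseline_hours

baseline = usual_hours(access_log[:4], "den")                  # learn from "normal" days
print(is_anomalous(access_log[4], baseline))                   # True: 3am request
print(is_anomalous(("den", "2023-06-15T10:10:00"), baseline))  # False: mimics the routine
```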

(16:30):

And so those are the… Those are the two categories that are like, “you know what?” Y- you know, it’s… Wh- when we talk about the future it’s like, the hardest part of imagining the future… Or sorry, the hardest part of predicting the future is imagining it accurately. But once you start to get an idea of what is possible based on technology today, you know, people are gonna hack at that until… ‘Til they can kind of produce those effects.

Den Jones (16:50):

Yeah.

Colin Rand (16:50):

So it’s almost… There’s a… There’s an… A bit of inevitability about that. So your final question is, “okay, well what… What are we actually gonna do about that on a… You know, as a security industry?” And that’s… That’s a hard question, because I know what we’re doing at Banyan, um, and I know the types of research we’re doing, but so much of it is an unknown. You know, I… We were having a lunch conversation today and somebody said, “well, the reason that, you know, Google isn’t releasing their latest is because, you know, there’s no… There’s no standards, there’s no governance for AI.”

(17:22):

And my response is, “how are you gonna govern AI? Like, you can’t… Nobody is realistically waiting around for the governance of AI. How are you going to get the major nations of the world, technology producers and exporters, to agree on AI governance?”

Den Jones (17:37):

Yeah. And it’s… It’s funny-

Colin Rand (17:39):

You know, it’s… It’s so cultural, right? It’s so international, it’s like, “yeah we’ll get a treaty in 30 years,” right? So this idea that you’re waiting for governance is… Is nonsense. So the question is, wh- what do we, as practitioners, do today? And, um, I’ll hand it over to you, get a sip of water, and then I’ll share my thoughts on that.

Den Jones (17:56):

Yeah, well I- I was just gonna say, the UK, they started working on governance. In the US, they are beginning to talk about governance. But like you say, like, an enterprise like Google, or us or anybody else, you’re not waiting on the governance of the world to come up and regulate you first.

Colin Rand (18:16):

And what… What bad actor is going to wait, and what st… Th- there’s several states, uh, nation states around the world that-

Den Jones (18:24):

Yeah.

Colin Rand (18:25):

It doesn’t matter what the US comes up with, they’re just like, “guys, um, we’re… We’re in a different country, we’re gonna do whatever we want to anyway.” So…

Den Jones (18:33):

“We’re…” Yeah, “we’re doing… We’re doing… We’re doing it either way, it doesn’t matter.”

Colin Rand (18:35):

Exactly. So we’re not the being… So… A- any type… Any type of, you know… And last time I checked, you know, the internet doesn’t really follow national boundaries as much as we like to think it does.

Den Jones (18:44):

Yeah, yeah. It doesn’t. But, no, when I think of this, I think of… So you mentioned the spear phishing, very highly personalized, at scale, um, the insider stuff where, you know, you can stay in- internally for… For a longer time. And th- then I think of the, uh, employees just doing random stuff, wh- where they wanna leverage these tools to do a job better or faster.

Colin Rand (19:11):

Yep. Yeah.

Den Jones (19:11):

And inadvertently, without knowing it, they’re uploading confidential company information into these environments, a- and there’s no real recourse yet, uh-

Colin Rand (19:22):

Right.

Den Jones (19:23):

To be able to protect that data. Um, or the companies like ChatGPT, you know, their disclaimer is, “hey, that shit’s ours now, we can use it as we like to improve ourselves.” So-

Colin Rand (19:34):

Yeah.

Den Jones (19:34):

And it’s already been… Some of it’s already been exposed.

Colin Rand (19:38):

Yeah. So I- I… I- I’ll get… I’ll get around to that point in a sec, but I’ll tell a little, uh, fanciful anecdote, so to speak. So I very much view AI in the workplace as, you know, medieval knights, you know? So it… It… A- a knight took 20 people, 25 people in order to support it. You had to grow enough crops to feed somebody who wasn’t working, uh, for the crops. You had to curate the animal… Curate… Uh, shows you how much I know about animals. I just know they have four legs and fur, right?

(20:11):

So you had to, you know, take care of the horse, you had to make the armor, you had to kit up… Kit them up right before they go into battle. But once they were in battle, they were unstoppable. So these knights had these superhuman powers, but they took 20, 25 people to support one. So I think that’s what we’re gonna see in the workplace in the near term is you’re gonna have th- the knights of AI, or these people who know how to use the tool to be… To… In their job, and they will be 25 times, 100 times more productive than somebody just trying to do th- their job without the AI superpowers.

(20:44):

So the implication is there, so great, that’s a… That’s a vision for the next few years. Right now, everybody’s in a race, whether they know it or not, to say, “I need to be that person who knows how to harness the AI, to use it, to really master it, so I can get those superpowers.” So you see things like, “hey, I can get it to write some code for me, and my boss’s boss’s boss’s boss’s boss says, ‘hey, has anybody tried this new code generation thing?'” And I’d be like, “I’m gonna be the eager go-getter, I’m gonna try it on our code base. I’m gonna put my code base on generateyourcodeforyou.ai and I’m gonna see what it generates. I’m gonna tell it what to do, and I’m gonna be th- this awesome guy.” And lo and behold, I had secrets and keys that were in my code base that I just uploaded to some random third-party site, and I have no idea who’s behind it.

(21:32):

And… So you get this pressure, this innate pressure where somebody’s like, “oh, I saw this great financial modeling tool. Great, I’m gonna upload my financial models and it’s gonna present it in the most glorious, contextual, uh, PowerPoint,” right? You’re getting the generative graphics, and it’s great if these reports, um, come out looking really slick. And so you’ve got people that are pushing the envelope. You’re trying to leverage th- the power of these AI tools, and the security folks are like, “whoa whoa whoa whoa, wait. Wh- what? I don’t… What?” They’ll just be like, “whoa, back up. What are you doing, why are you doing it? Is it safe? What’s the risk?” And they don’t even know where… I- I shouldn’t say “they.” We. We don’t even know where to start, often.

(22:14):

You know, it’s one thing when you’re at a- a tech provider, a smaller company. It’s a whole ‘nother situation if you are at a large, multinational, global enterprise with tens of thousands of workers, some people remote, some people in… In their offices, contractors who are beholden to a completely different corporate structure. What the heck do you do?

(22:33):

So one of the… One of the things that Banyan is doing is, we have… We have a… You know, we’re… Our platform or technology is about making the remote workforce, the distributed workforce more secure, making sure there’s trusted access to the services. And we have the context there that is really important. So we’re looking to see which services, uh, that you’re accessing, what SaaS websites you’re going to. We’re saying, “hey, these ones have AI risks,” so we’re bringing that visibility into the product. So, trying to get the CSOs and the security leadership to have that visibility they need into who is doing what with the AI tools.
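The transcript doesn’t describe how Banyan implements this, but the visibility idea can be sketched simply: compare the SaaS destinations users actually reach against a list of services known to ingest prompts and uploads, and report who is using what. The domain list and log format below are invented for illustration.

```python
# Hypothetical list of SaaS domains tagged as "sends user content to an AI service".
AI_RISK_DOMAINS = {"chat.openai.com", "generateyourcodeforyou.ai"}

# Hypothetical outbound-access events: (user, destination domain).
events = [
    ("alice", "chat.openai.com"),
    ("bob", "salesforce.com"),
    ("carol", "generateyourcodeforyou.ai"),
]

def ai_usage_report(events):
    """Group users by the AI-tagged destinations they accessed, giving
    security leadership visibility rather than an outright block."""
    report = {}
    for user, domain in events:
        if domain in AI_RISK_DOMAINS:
            report.setdefault(domain, set()).add(user)
    return report

for domain, users in ai_usage_report(events).items():
    print(f"{domain}: accessed by {', '.join(sorted(users))}")
```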

(23:06):

We’re working on some cool stuff, we’re incubating it to be able when you’re, uh, uploading secrets, and either… Not just secrets, but any kind of sensitive data, to say, “hey you know what? Don’t do that. Either [inaudible 00:23:18],” and kind of… again, but it’s really about the education there, because anybody that’s determined can get around any security control in any company. Um, so really our… Our philosophy is like, we have to help our enterprise partners become educated in this and visibility… Get visibility into their problems, and then our tool set really helps them to manage the risk about what data exposure they have.
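The “don’t upload secrets” control Colin alludes to could look something like the pre-upload check below. The patterns are deliberately simplistic stand-ins (real scanners use large rule sets plus entropy checks), and the warn-or-block flow is a guess at the general shape, not Banyan’s actual product behavior.

```python
import re

# A few simple patterns for secrets that commonly leak when code or documents
# are pasted into a third-party AI tool. Illustrative only.
SECRET_PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "hard-coded API key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{16,}['\"]"),
}

def scan_before_upload(text):
    """Return the secret types found in content that is about to leave the device."""
    return [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(text)]

snippet = 'api_key = "abcd1234abcd1234abcd"  # oops, a real credential in the repo'
findings = scan_before_upload(snippet)
if findings:
    print("Warn or block: upload appears to contain", ", ".join(findings))
```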

Den Jones (23:41):

Yeah. A- and that’s it, right? So you either block… Block the thing entirely, which… Which some companies are doing as a short-term strategy. But essentially if you do that, then they’ll circumvent it, and they’ll find another way. So… So-

Colin Rand (23:56):

Well, and think about that. W- we talked about the human motivation of trying to get these sup… Like, the superpowers are at everybody’s fingertips, and you’re… And I look at my corporate employer and say, “you’re just bringing the hammer down on me to not get the superpower, and I look to my left and right, and everybody else is?” Like, if that doesn’t work-

Den Jones (24:12):

Yeah, exactly.

Colin Rand (24:12):

I’m gonna figure out-

Den Jones (24:12):

Yeah.

Colin Rand (24:13):

A way to get the power, right?

Den Jones (24:15):

Yeah, they’ll just… I mean, sure they’ll just go home and use a different device and figure out a way to take the data from their work laptop-

Colin Rand (24:20):

Exactly.

Den Jones (24:20):

To their personal device, and in the end-

Colin Rand (24:20):

Yep.

Den Jones (24:22):

It’s probably a worse situation. So we… So s- seems like what you guys are working on is a little bit of both, right? We… We enable the blocking if a company wants to block, but then we’re getting more intelligence and fine-grained about, well what is it you’re doing that’s offensive, and can we in- inject ourselves in that workflow?

Colin Rand (24:44):

Yeah, exactly. So w- w- we never want to impede productivity, we want to enable our customers a- and their… And really, their, uh, end users, to be more effective in their jobs. And so it starts at, you know, first with the visibility, identifying the risk, giving them the controls like we said, to block, to permit, but… And then let… And give the feedback to the administrator so that they can again, educate and work, and [inaudible 00:25:09] onboard properly. So you know what? Let’s get the SaaS vendor as a properly onboarded and vetted vendor. Or, you know what? Wow, that SaaS vendor, they’re gonna break us out of compliance, guys.

Den Jones (25:19):

Yeah.

Colin Rand (25:19):

And you can’t do that, you know? Think about the regulated industries. You’re dealing with health care and you wanted to get some… There’s some amazing new AI technology for detecting different types of, um, diseases, a- and conditions, and I wanted to test it out, right? And I had to upload some radiology, upload a couple health care video reports-

Den Jones (25:38):

(laughs).

Colin Rand (25:39):

And let’s see what it comes up with. But that… You know, th- that’s a massive problem. You can’t do that.

Den Jones (25:45):

Yeah.

Colin Rand (25:45):

So you have to get the visibility, you have to get in front of it, but you have to realize that the technology is outpacing everything right now. It’s outpacing our governance, it’s outpacing our… Our government, it’s outpacing, you know, ourselves. It’s… It’s a… It’s a runaway train and th- there’s a lot of excitement, a lot of hype. It’ll come crashing down, but it’s here to stay.

Den Jones (26:07):

Yeah. I think… I think that’s why a lot of, um, politicians are talking about putting the brakes on. It’s almost like… Or… Or even, you know, somebody influential, powerful business leaders in tech, um, are talking about, “oh, everyone needs to put the brakes on this until we get… It gets-”

Colin Rand (26:26):

Right.

Den Jones (26:26):

“S- some sense of smart-”

Colin Rand (26:29):

A- and I think there is-

Den Jones (26:29):

I would imagine.

Colin Rand (26:31):

I would be a little bit cynical when I heard that. They are just saying… They want everybody else to pause so they can get ahead.

Den Jones (26:36):

Yeah.

Colin Rand (26:36):

There is no sincerity behind any of those statements.

Den Jones (26:39):

(laughs).

Colin Rand (26:39):

At best, they are going for regulatory [inaudible 00:26:41] capture. “Wait wait wait wait. Wait until I can get the regulations t- to make sure I am the one who wins,” right? So I- I don’t put a lot of stock. There’s no… There’s no sincerity in that.

Den Jones (26:50):

That’s… That’s funny you use cynic (laughs).

Colin Rand (26:54):

(laughs).

Den Jones (26:54):

Oh, dear. So a- a- as we kind of start to wrap up and stuff, you know, ’cause I know you’re a busy cat and we don’t wanna take up all your day, um, what… What are… You know, so the biggest security stuff that we’re doing on this, we’re gonna focus on, uh, giving enterprises the ability to get better insight, and some better control, um, over how their employees are accessing a- and using some of this technology. Are we gonna… Are we gonna adopt, uh, their product for doing self h… Self-healing, self-help?

Colin Rand (27:34):

Yeah, that’s a great question. So there’s a-

Den Jones (27:35):

A- any thoughts? Any thoughts around that?

Colin Rand (27:36):

Yeah, there’s a couple… There’s a couple of things that we are doing, um, some… There’s some low hanging fruit so we can become really familiar with it. We, uh… I’m kind of like… I- it wasn’t a formal hack-a-thon, but one of our engineers spun up, you know, uh, the ChatGPT API, trained it on our public documentation and website, and made a little [inaudible 00:27:55] chat bot, right? And it’s like, “hey, interact with this chat bot to get your questions answered about Banyan.” And it gives you nice responses. And so, um, it was cool. It helped us learn about it, and we’re gonna see if it’s useful to our customers, and we’re gonna put it onto our… Our administrative consoles, so… And it will train… W- we’ll fine-tune the model-

Den Jones (28:12):

Yeah.

Colin Rand (28:13):

So it could be a little bit more effective, but… But we’ll help… We’ll see if that kind of thing can help with… Help our end users who, oftentimes, like, there’s always that little bit of hesitancy to interact with a human. It’s like, do I really wanna… They’re busy, right? And I know it’s a support person, I’m paying for support, but they’re busy and do I really wanna explain I’m having this problem?
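Colin’s hack-week chatbot “trained on our public documentation” is commonly built as retrieval plus an LLM API call rather than actual retraining. The sketch below assumes the OpenAI Python SDK (v1-style client, with an OPENAI_API_KEY in the environment); the documentation snippets and keyword retrieval are hypothetical stand-ins, not Banyan’s implementation.

```python
from openai import OpenAI  # assumes the openai Python SDK v1.x and OPENAI_API_KEY set

client = OpenAI()

# Hypothetical stand-in for the public documentation the bot was "trained" on.
DOCS = [
    "Banyan provides secure, device-aware access to internal services.",
    "Administrators define policies that control which users reach which apps.",
]

def pick_relevant(question, docs, k=2):
    """Crude keyword-overlap retrieval; real systems use vector embeddings."""
    q_words = set(question.lower().split())
    return sorted(docs, key=lambda d: -len(q_words & set(d.lower().split())))[:k]

def answer(question):
    """Stuff the most relevant doc snippets into the prompt and ask the model."""
    context = "\n".join(pick_relevant(question, DOCS))
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": f"Answer using only this documentation:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(answer("How do admins control which apps users can reach?"))
```

Grounding the prompt in retrieved documentation also helps with the hallucination problem discussed earlier, though the answers still need human review.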

Den Jones (28:32):

Yeah.

Colin Rand (28:32):

Uh, it’d be great if I could go somewhere else and get my problem solved before I have to go engage with a human. So just trying to lower the threshold for helping our customers get… Get help was… Is a pretty interesting way to explore. Um, so that’s one area. Second, and I think a little bit more compelling in a long term vision is, how can we build really adaptive policies, and help… You know, so if we’re talking about, you know, these people getting in, how quickly can we adapt policies? Because once… If, you know… If somebody breaks in, the first thing they wanna do is move laterally, and that’s by and large what our product does today. We prevent that lateral movement. But you want to make sure then, uh, depending on normal behavior, you are adapting your policies to limit your risk and exposure. So, can we really get adaptive policies, real time recommendations about, “hey, you should really go ahead and take this corrective action t- to reduce y- your exposure.”

(29:26):

The one… The one that I… You know, the obvious one is, “hey I’ve got some sensitive data, I’ve got 50 people with access to it, and I can see that nobody’s ever accessed it in the last few months, maybe get rid of it.” But there’s other cool stuff. It’s like, “hey I’ve got these five, you know, databases that have been spun up, and I can see nobody’s looked at it in the last six weeks, maybe shut it off and save some money.”
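The “nobody has touched this in months” recommendations Colin describes are easy to picture as a query over access records. The inventory format and thresholds below are hypothetical; the point is that the output is a suggestion for a human administrator to review, not an automatic change.

```python
from datetime import date, timedelta

# Hypothetical inventory: resource -> (users with access, date of last access).
resources = {
    "finance-db":   ({"alice", "bob"} | {f"user{i}" for i in range(48)}, date(2023, 2, 1)),
    "staging-db-3": ({"carol"},                                          date(2023, 1, 15)),
    "wiki":         ({"alice"},                                          date(2023, 6, 10)),
}

def recommendations(resources, today=date(2023, 6, 15), stale_after=timedelta(days=90)):
    """Suggest least-privilege and cost actions for resources nobody is using.
    A human stays in the loop to approve each suggestion."""
    recs = []
    for name, (users, last_access) in resources.items():
        if today - last_access > stale_after:
            recs.append(f"{name}: {len(users)} users have access, last used {last_access}; "
                        "consider revoking access or decommissioning.")
    return recs

for rec in recommendations(resources):
    print(rec)
```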

(29:46):

So that’s just the beginning of like, how we wanna make intelligent recommendations, a- and then ultimately adaptive smart policies based on, uh, what we can recommend. Keeping the human in the loop to maintain the, you know, that human level of like, yeah, valid… We still have to validate it, as we talked about in the beginning. Sometimes these LLMs lie, so you have to have a human there, but we can give that human administrative superpowers-

Den Jones (30:09):

Yeah.

Colin Rand (30:10):

To say, “hey, I can lock down something that’s 10 times the scale of what, uh, I previously could.” And what’s so important about that is, that will free us up to do other significant security initiatives. And so w- we see this in our ability to become easier and easier to operate, uh, freeing people up for other, kind of more manual, more labor-intensive tasks, so those can be solved and so on and so on. And really, over time, um, helping enterprise security in a major way.

Den Jones (30:38):

Awesome. Awesome. Well, um, I guess that about wraps it up. Is there anything else that we’ve not covered in this topic call that you think needs to be shared before we… Before we let our audience go?

Colin Rand (30:52):

There’s some amazing stuff, there’s some stuff to be concerned about. All I know is, uh, I’m gonna be in the thick of it trying to figure out which way is up, and telling people what’s on my mind.

Den Jones (31:03):

Awesome.

Colin Rand (31:04):

And being fancy in my pants.

Den Jones (31:05):

Being a fancy pants. Well, I’ll tell you, Mr. Fancy Pants, that was a great conversation and, um, we’ll have to have you back on the show I guess at some point, to share a progress update, I guess. That’d be… That’d be good.

Colin Rand (31:20):

Sounds good.

Den Jones (31:21):

So thank you very much, Colin.

Colin Rand (31:22):

Right on. Appreciate it, Den.

Den Jones (31:24):

Everybody, Col- Colin Rand, Mr. Fancy Pants, the Security Guy, thank you, and, uh, keep R&Ding.

Colin Rand (31:32):

I will.

Den Jones (31:32):

Appreciate it.

Colin Rand (31:34):

Catch you guys later.
