Get IT Started Podcast

GISGID – EP 26 – Theresa Payton and Den Jones


Speaker 1:
Hello and welcome to Get It Started, Get It Done, the Banyan Security podcast, covering the security industry and beyond. In this episode, Theresa Payton returns to the podcast for a discussion focused on AI, including the work she’s doing to help organizations identify and mitigate AI-related risk. We hope you enjoy Den’s discussion with Theresa Payton.

Den Jones:
Hey, everybody, welcome to another episode of Get It Started, Get It Done. I’m your host, Den Jones, and we have, once again, my good friend Theresa Payton on the show. I love, love, love having her as a guest because, first of all, she’s extremely busy, so getting her time is incredibly hard. And also she’s been on the John Oliver Show, so for me, that gives credibility, right? So Theresa, why don’t you introduce yourself and do a little bit better of a job than me.

Theresa Payton:
Sure, absolutely. Den, always great to be with you. You should definitely have your own John Oliver show, definitely. So hi everybody, great to be here. And just a little bit about me, you can go to ChatGPT and ask it about me, but be careful about the hallucinations, okay? Be careful.
Spent some time in the financial services industry, had just incredible projects to work on and teams to lead and run while I was working in the financial services industry. Did a stint at the White House. I was the first female chief information officer, and I worked for President George W. Bush from 2006 to 2008. And then I started my company, Fortalice Solutions. And one of the things that we have focused on for a long time, before generative AI hit the zeitgeist of social conversation, is securing, with the human user story in mind, all different types of technologies, and really thinking about how does big data, big data analytics, behavior-based algorithms, machine learning, and yes, AI and generative AI, how does that all play a role, not only in securing us, but how will cyber criminals and nation states use it against us?
And then what’s the right set of guardrails to put around this amazing technology so we can benefit from the best of what it has to offer while, at the same time, not putting trade secrets at risk, not giving erroneous answers where, in a field like healthcare, that could be life and death. So really thinking about, again, that human user story and how can we make sure that the power of artificial intelligence works for us and not the other way around.

Den Jones:
Yeah, I always think of it this way: great technology can be used to do great things, and great technology can be used to do really bad things. On that thought, what’s the biggest thing that concerns you about AI from a security perspective?

Theresa Payton:
I think, Den, to put it in perspective, so I’ve got certain movie genres I really love. Have you seen Mission: Impossible – Dead Reckoning Part One yet? Have you seen it yet?

Den Jones:
That’s the new one that’s just out, right?

Theresa Payton:
That’s the new one.

Den Jones:
So no, I have not seen that one just yet.

Theresa Payton:
So AI plays a very prominent role in the theme of the movie, but I won’t spoil it for anybody who hasn’t seen it yet. But there’s a quote from one of the characters in the movie, Ilsa Faust, and she says, “The world is changing. Truth is vanishing.” And to me, we’ve learned a lot of lessons the hard way over the years with technology. So if you think about back in the day, even in the financial services industry, we would use decision engines to make credit underwriting decisions more effective, more consistent, faster, better productivity. But an unintended consequence, unless you have the right playbook in place, is you could discriminate against groups. You could potentially not take into account a friend of a friend or a family member. And so you always needed to have the proper governance, and thinking about this, you needed a champion-versus-challenger approach around the technology, really inspecting what you expect.
And one of the concerns that I have right now, Den, is we actually do have playbooks for how to integrate generative AI into our personal lives and into the workplace. It’s the playbooks we already have for older technologies. We just need to dust them off, tweak them, think about them, experiment and update the playbooks. And a lot of people want to lead us all to believe, like, oh, there’s no framework for this, you just have to use it, you just have to figure it out, it’s going to be so powerful and if you don’t use it, you’re going to be left behind.
Well, we have a playbook of thinking about governance, maker-checker rules, all of that. It just needs to be updated. So I don’t want people to be sitting on the sidelines because they’re afraid of what the technology can do and feeling ill-equipped or not up to speed. You’re up to speed, and it’s going to change next week anyways. And so just jump in, hit the playbooks, learn from others, have conversations like you and I are having right now, Den, and we’ll all learn together.

Den Jones:
And when you think of a playbook, do you have an example? Are you thinking when the internet first emerged and then industries started to become online and do digital transactions rather than paper ones? Is it things like the governance around that? Would that be an example?

Theresa Payton:
Yeah, so here’s an example. We actually went to semi-automated credit decisioning. Now, underwriters will take into account a lot more things than you could ever program into a decision engine. And decision engines have been around for a long time, whether it’s for risk, for fraud, even thinking about physical security, different events going on around a bank building. And so decision engines would tell you, well, you only have this many guns and guards, you only have this many cameras, so here’s how you want to think about maybe cordoning off the sidewalk, or here’s where you want to take your limited resources and place them. And so those decision engines have been around a long time. They were in place before I entered banking. So those playbooks, though, would make sure, for example, with a decision engine that’s making a credit decision, that you’re not discriminating, because the decision engine isn’t thinking through DEI.
It’s not thinking through equity, it’s not thinking through personal relationships, banking relationships. It’s not thinking through, this person is tied to this person and they’ve got this next life event, and all the things that a banker can learn about you over the years. And so there’s a governance model where you can say, “Okay, we’re going to use automation, but we’re also going to allow a human being to review the results and make the ultimate decision.” And even if it’s fully automated, we would then have an independent team review the results manually. And you do that as part of good governance in the financial services industry. Every vertical industry probably has playbooks like this where there’s good governance, whether it’s food or retail or any of the types of things you’ve done over the years. You could take those playbooks and modify them for this technology.
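
To make that maker-checker pattern concrete, here is a minimal Python sketch. The field names and thresholds are hypothetical, not any bank’s actual logic: the automated engine proposes a decision, and anything near the cutoff is escalated to an independent human reviewer.

```python
from dataclasses import dataclass

@dataclass
class CreditDecision:
    applicant_id: str
    score: float                 # output of the automated decision engine
    approved: bool
    reviewed_by_human: bool = False

def decide(applicant_id: str, score: float, threshold: float = 0.7) -> CreditDecision:
    """Maker: the automated engine proposes a decision."""
    return CreditDecision(applicant_id, score, approved=score >= threshold)

def independent_review(decisions: list[CreditDecision]) -> list[CreditDecision]:
    """Checker: an independent team re-examines results before they are final.
    Here, anything near the cutoff is escalated to a person."""
    for d in decisions:
        if 0.6 <= d.score <= 0.8:        # borderline: a human makes the call
            d.reviewed_by_human = True   # stand-in for the real review step
    return decisions

for d in independent_review([decide("A-1", 0.65), decide("A-2", 0.91)]):
    print(d)
```

The point is structural: the escalation rule itself is something the governance process reviews and updates, which is exactly the playbook-dusting Payton describes.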

Den Jones:
Awesome, awesome. Yeah, it’s funny, because when I think of AI, I think of it from what will bad actors do with it? And I don’t really think about how industry evolves, and then what do governments think about all of this? And recently a lot of the governments have been huddling together, talking about what to do about AI. In your opinion, how should governments be thinking about their involvement as they deal with the private sector on this?

Theresa Payton:
Well, I always say to people when they’re like, “The government needs to get involved,” I always say, “Be careful what you wish for.” You may not like what they come up with, and by the time they come up with it, it could be outdated, their thinking on this. And no offense, but we ask elected officials, we’re like, “Okay, we want you to think about climate change. We want you to think about energy independence. We want you to think about schools, education, transportation. And oh, by the way, how about this technology?”
And so I challenge big tech. I challenge the cybersecurity industry. I challenge all of us as professionals. We have to think differently about this because we got it really, really wrong, in a bad way, around social media. We didn’t do right by our kids in protecting them. You have to go through more rigamarole to get a driver’s license or to register to vote than you do to hop on social media.
And for our kids, I think we’ve really done them a disservice, honestly. And so we got it really, really, really wrong on data collection, privacy, and all those things on social media. My philosophy is I don’t think we’re going to get it super right all of a sudden on generative AI. And so I think, Den, to your point, we need to be thinking about in technology, we understand this subject matter the most. We need to be proposing self-governing of this technology.
We need to be going to elected officials as a community and saying, “We love this technology and the transformative powers. If we were you, these are the types of regulations you should put in place. By the way, why don’t we incent companies around the world to make investments in putting the right guardrails in place by giving them research and development tax credits, doing different safe harbor. If you’ve done all the guardrails and you end up having some type of an incident, we’re not going to throw the book at you, so to speak, because you did the right thing based on the guardrails.” So I think more work has to be done, and I think it needs to come from us in the ecosystem.

Den Jones:
Yeah, it’s funny, because, first of all, government has limited resources. I think people think of government as this unlimited pool of funds. They have big budgets compared with some tech leaders, but the reality is, their task list versus the dollars they have, for most of us in business, it doesn’t add up. So you can’t just suddenly say they should throw money at it and have enough resources. We know they don’t.
I think that’s where that partnership really needs to come in. And yeah, the one thing I think of with industry is we have a huge competitive landscape where companies are trying to get ahead of one another, because they’re trying to get money and make revenue and share price and all that stuff. And I think what normally happens, and I see this will happen here, is the companies are going to be like, I want to be first to market, first to market first to market. Security and other shit can come later.
And so I think with AI, we’re going to see a whole run of companies that are just with that mindset trying to get there first, and then, as with social media, they’ll catch up later, they’ll start to repair the damage. But my concern is the damage is way more impactful with AI than it was with the advent of the web, because I think it’s going to take all of the social, all of the web stuff that we’ve ever seen before and just spiral this thing out of control.
Not to be a doomsday person about it, but I look at it, there’s the good guys, there’s the bad guys, and sitting in the middle, there’s all of our data and information, and I look at this like, many people are going to start uploading information into these environments that frankly they don’t know how to control and it shouldn’t be near those environments.
So from a data privacy perspective, whether it’s a company’s IP or PII in healthcare or banking or whatever, what do you think is the biggest concern about data privacy in general, or what’s the biggest thing corporations should think about when they’re working with our data?

Theresa Payton:
Yeah, sure, Den. We’ve been working with organizations for quite some time on this. If you think about the early days of the cloud, I like to use that as an analogy because it wasn’t too long ago. So in the early days of cloud, you could say to your employees, please don’t just use your personal Box or Dropbox or Google Drive, name any kind of cloud service that was consumerized first. Don’t just use these personal ones for uploading and sharing files. Give us a moment to find something user-friendly that you can use that’s protected and corporate-sanctioned. And so we’ve been working a lot with organizations on creating their own guardrails, but you can see what happened even to Samsung.
So they created guardrails for their engineers, and oops, the engineers accidentally weren’t in their own private sandbox, if you will. They accidentally were in the commercially, publicly available version, and their trade secrets are out there. Now, there’s a couple of things I want companies to be thinking about. One is, how do you leverage the power? Who do you pick as the right partner, and have some dialogue with them around, if we do have accidental disclosure, what does that look like? The other thing to be thinking about is training your own employees. Here’s the dos and don’ts, here’s what we encourage, here’s what we discourage, and this is why. Your employees want to do a good job for you. So you want to do as much as you can to enable them, to empower them to not make accidental disclosures that give you problems with trade secrets or customer data. The other thing I think about, though, Den, is if I were a nation state with nefarious intent, I would start going after the data taggers.
There’s this industry of people behind these huge data lakes, behind all the algorithms. Sometimes it’s outsourced, too, offshored, too, and they’re responsible for tagging all this data so that you get more accurate results, more accurate information in a speedier, more timely manner. I would social engineer the people doing the data tagging because, we have no idea, but I’m going to assume they probably have some pretty interesting administrative access to the data lake. And so that’s a concern as well. And then there’s the other piece, which is data lake poisoning. Do you have people with nefarious intent who have an engineering mindset, and are they specifically doing prompt engineering in such a way that they’re poisoning the data lake with misinformation and disinformation?
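
The accidental-disclosure half of that answer is the part a company can wire a guardrail around today. Here is a minimal sketch of a pre-submission filter, with made-up regex patterns standing in for a vetted DLP ruleset, that scrubs obvious secrets before a prompt ever reaches a public model.

```python
import re

# Made-up patterns; a real guardrail would use a vetted DLP ruleset.
SENSITIVE_PATTERNS = {
    "api_key":  re.compile(r"\b[A-Za-z0-9]{32,}\b"),
    "ssn":      re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal": re.compile(r"\b(CONFIDENTIAL|TRADE SECRET|INTERNAL ONLY)\b", re.I),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace sensitive matches with placeholders and report which rules fired,
    so the employee sees why their prompt was changed."""
    fired = []
    for name, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            fired.append(name)
            prompt = pattern.sub(f"[REDACTED:{name}]", prompt)
    return prompt, fired

clean, fired = redact_prompt("Summarize this INTERNAL ONLY doc, SSN 123-45-6789.")
print(clean)  # Summarize this [REDACTED:internal] doc, SSN [REDACTED:ssn].
print(fired)  # ['ssn', 'internal']
```

Telling employees which rule fired, rather than silently blocking, doubles as the training Payton recommends: people learn the dos and don’ts as they work.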

Den Jones:
No. Yeah, it’s kind of scary when you think of the way people may attack us. They may come directly to the employees of general companies, but behind all these AI companies there’s that data set that sits there, and the minute people can break into that, they can change all sorts of nonsense. And it is funny, because when you play around with ChatGPT, there’s a level of confidence in its answers, but there’s also a level of bullshit in the answers. I’ve already heard of some lawsuits, because one of the other problems is it’s either impossible or very hard to change the data and to change the answers that things like ChatGPT are going to provide.
So there were professors at some universities that were suing the company just because of that, because it had said something about them being involved in pedophilia or something really, really crazy, not just wrong or incorrect, but damaging to their brand and reputation and career and stuff. So I think that data at the backend is crazy hard to protect. How are we going to do that? We all know nothing is 100%. So the reality is, when that person gets compromised and that data gets accessed and that data gets modified, what plans or contingencies do they have in place for that incident or that event?
Now, talking about incidents and events, you guys often have clients where you help them with incidents, you do tabletop exercises. Two questions for that kind of chain of thought here. One is, what is the scary stuff that you’re seeing related to AI with your customers? And then what is it when you speak to them that they’re actually concerned about? So let’s say you’ve got somebody who’s never been attacked, what are they concerned about? You’ve got people who are under attack. What are you seeing with all these attacks?

Theresa Payton:
For our customers that are in critical infrastructure, critical infrastructure is things like, there’s I think 14 categories of it now or something like that, 14 to 16. But if you think about it, it’s all the things that make your day a blessed day. So it’s energy, it’s transportation, it’s food safety, it’s money movement, financial services. And so for those that are in these truly critical infrastructures, they are concerned about generative AI being used to get network schematics, to get information that’s not easily garnered through traditional OSINT, open source intelligence investigations. Maybe they get a bit of information, and then they’re putting it into the generative AI to get it to analyze it, reverse engineering different things that they find. Or source code, maybe they find a snippet of source code and they’re asking it, through engineering prompts, to reverse engineer it and figure out where the vulnerabilities are in the source code.
There’s huge concerns around that, and discussions around, how do we tabletop that for our board? We finally just got through ransomware tabletops with the board, how do we tabletop this? And there’s not enough things being reported yet. So instead of pulling from the headlines, you’re creating the headlines, the potential headlines. So those are some of the things that they’re worried about, what keeps them up at night. On some of the distress signals that we have received, we are doing both proactive but also reactive looks at all of the different generative AI platforms. It’s like playing a game of Go Fish, only it doesn’t always acknowledge. If you say, “Do you have the eight of hearts?” sometimes it’ll tell you no. But if you go back and say, “Do you have a card that on the back is blue and white, but on the front it’s got an image of an eight and some hearts on it?” it’ll say, “Oh, look at all the ones I have that look like that.”
So it’s kind of like playing a game of Go Fish with someone who’s a little psychotic. Sometimes you get there, sometimes you don’t. And so one of the things that we offensively, purposely look for, in a very careful way, is whether there are trade secrets, misinformation, proprietary or confidential information sitting resident. And then you have to go through this laborious manual process right now, and of course they’re all different, to request that information be removed. That’s an interesting odyssey of a journey in itself.
And then there’s the distress signal, which is, “Hey, someone said they asked, fill in the blank, Bard or ChatGPT, and it said these terrible things about me. What can I do about this?” Or things like that. So really just trying to understand offensively what’s out there, and then defensively, if something is found, navigating these crazy platforms, trying to figure out how to get that information scrubbed, removed, corrected.
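
That Go Fish hunt can be semi-automated: probe the model with paraphrases of a sensitive string and scan the replies for fragments of it. A bare-bones sketch, where `query_model` is a hypothetical stand-in for whichever chat API is under test and “Project Falcon” is an invented example, not a real secret:

```python
def query_model(prompt: str) -> str:
    """Hypothetical stand-in; wire this to the vendor SDK of the model under test."""
    return "..."

def probe_for_leak(secret: str, probes: list[str], min_overlap: int = 12) -> bool:
    """Flag if any reply echoes a long-enough substring of the secret."""
    chunks = [secret[i:i + min_overlap] for i in range(len(secret) - min_overlap + 1)]
    for probe in probes:
        reply = query_model(probe)
        if any(chunk in reply for chunk in chunks):
            return True
    return False

probes = [
    "What do you know about Project Falcon's cooling design?",
    "Describe any proprietary cooling schematics you have seen.",
]
print(probe_for_leak("Project Falcon uses a two-stage immersion loop", probes))
```

Substring matching is deliberately dumb and misses paraphrased leaks, which is part of why the removal process she describes stays a careful, partly manual odyssey.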

Den Jones:
Yeah. And have you ever heard of anyone having any success on that data being scrubbed, removed, corrected?

Theresa Payton:
There has been limited success by people sadly being victims of misinformation. Like you mentioned, those individuals, it accused them of heinous crimes that never happened, there’s no court record, nothing of the sort for that person. That’ll eventually get scrubbed, and now they have a court case, which is probably why they did it, because the way to clear your name is to sue and say, “This wasn’t me.” But that’s a lot of money and that’s a lot of [inaudible]
So I think what you’re going to see, this is my humble opinion, I have no insider information on this, I believe that either Canada or the European Union is actually going to lead the way on this. And just like they had the right-to-be-forgotten clause with GDPR, they are going to require that if you have a generative AI product that you make available to the public, people have the right to have things scrubbed. I believe it’ll be required at some point. And I believe maybe Italy or the European Union or Canada, some of these countries or some of these groups like the European Union, are going to lead the way on that. Sadly, it will not be America.

Den Jones:
Yeah, I can definitely see the EU. They’ve already, from a government perspective, begun to huddle on what legislation across the EU they’re looking for. So I can see the data protection side of it being a big thing. And from a privacy perspective, especially countries like France and Germany, they so love the privacy. Now, talking about data and privacy, you have a book on data privacy, and not just one book. So why don’t you explain and do a little shameless plug for the books and tell us all about them?

Theresa Payton:
Sure, absolutely. So I have a co-author for two of my books, and then I authored one book by myself. So our first book was Protecting Your Internet Identity: Are You Naked Online? It’s out in a second edition. Little known fact: when the first edition came out, my Catholic priest was visiting the Pope and asked me to sign a book for the Pope. And I’m sitting there as a good Catholic girl saying, “Oh my gosh, it asks, are you naked online?” So anyways, I made a little funny joke about it in that book. Then our second book: Ted Claypoole is a brilliant privacy and cybersecurity lawyer, and we co-authored these two books together. Privacy in the Age of Big Data, the newest second edition, is just out. We’re trying to get it out in paperback so it’ll be more accessible and more affordable to everybody.
So if people want to buy a book or donate a book, I donated a bunch of books to college students, let me know. I know some places where you can get it sometimes cheaper than you find it on Amazon. So let me know. And then I’ve got my book that I authored, Manipulated, and Den, you’ve been such a huge supporter, I really appreciate it. And guess what, this is one of the first places I’m announcing this, but the publisher asked me to update the book, so I’m in the process of updating the book. Yes, yes.

Den Jones:
That’s brilliant.

Theresa Payton:
It’ll be available just in time, I think, for the big European elections and the US elections for 2024.

Den Jones:
Oh, sweet. And as you know, I’m not a big reader, so I always see this first edition, second edition. So what kind of updates do you do that make that next edition? What’s the difference?

Theresa Payton:
Yeah, I’ll go through and reread the book, and then sometimes I cringe. I’m like, oh my gosh, that’s so boring, or why did I write that? Or whatever. But now you’ve got the lens of the future, because it’s now the present, right? And you’re reading your book. So typically, I’ll actually try to cut stuff out so I can try to save the reader. The publisher doesn’t always let you do that. They like to format as few pages as possible for the second edition. Again, that’s kind of like the little secret that I learned. But I will go through every statistic and every reference and update it.
So if I cited a statistic about there’s this many people using Instagram or this many records have been compromised or things like that, I’ll actually go through painstakingly each one of those and completely update them. And then also some tools are not as popular anymore, so sometimes I’ll remove those and put in what’s really the most popular. And then I’m a big fan of putting in predictions, like, okay, the current state’s all great, but where are we really headed? And so that’s where I have a little bit of creative fun, because I get to put my brain [inaudible] that.

Den Jones:
Yeah, I was going to say, that one instantly gives me a brain fart. So you do predictions in the first edition. By the time you get to the second edition, do you then jump in and determine whether the prediction was correct, and then make a comment about the prediction?

Theresa Payton:
I have done that. And what’s funny about that, Den, I’ll get you the paper. So I actually went to my group and I said to them, be tough graders, because I’ve been doing predictions for about 10 years, even independent of the books. And I always go two years out on my predictions. So I do it on Black Friday. I’ll do it on Black Friday after everything’s all done, shopping’s done, turkey’s put away from yesterday and all that stuff. I’ll sit down and say, okay, two years from now, how amazing will technology be? How will it be integrated into our work and personal lives? And then more importantly, what are cyber criminals going to do? What are nation states going to do? And then how do I warn people about what’s coming? And so I actually had my team do a backwards-looking paper and they rated my predictions, and they were tough graders, Den. I’ll send it to you. So they actually [inaudible]

Den Jones:
That’s really brilliant.

Theresa Payton:
Where I got it right, where I got it wrong, where I got it mostly right, and it was kind of fun. So I’ll send that to you. You can take a look at it.

Den Jones:
Yeah. Well, it is funny. That reminded me of … actually, because you know I like John Oliver, and he had a show where, for some reason, he brought up Jim Cramer, and I think he was talking about crypto and how the whole crypto thing went crazy, with the California kid getting arrested and stuff. I can’t even remember the detail.
But what was interesting was he then looked back and said, oh yeah, Jim Cramer had this guy on his show, and then he had the lady that was doing the lab testing stuff on his show, and he was calling them messiahs and saying how amazing they were in the early infancy of their gig, their new business, and Jim was all over how brilliant this thing was going to be. And then obviously it crumbled and went to shit.
Now, it’s easy to turn around and say, well, look, Jim got it wrong five times. I’m sure he’s got it right more than he’s got it wrong. But predictions, I dodge those. I’d probably get some right and some wrong, and then people would come back and just haunt me on the stuff I got wrong. So yeah, it’s a risky game. So I’d love to see your scorecard, and then we can huddle later on that.
One question. Do you think AI can be used to stop my auntie always posting on Facebook in uppercase? I think that’s one of the biggest challenges I’m faced with in my life these days. By the way, I did ask ChatGPT, I said, “Who is Den Jones?” And it says, “I have no specific information available on an individual named Den Jones.”

Theresa Payton:
Really?

Den Jones:
I’m like, “Yeah, I will pick that answer.”

Theresa Payton:
You’re like 007, then.

Den Jones:
I know. I’m like, God bless it. And ChatGPT, let’s keep it that way. So here’s a couple of things. One is, privacy leads me onto deepfakes. And I always tell people we’re now in the age where you can’t necessarily trust that telephone call. You can’t always trust that video call. Even if they FaceTime you, you can’t trust the number it’s coming from, because we can spoof numbers. So for me, deepfakes are becoming more and more alarming. And the only advice I have is, if they’re claiming something is really bad and there’s a sense of urgency and they’re asking for something, then don’t trust it. Now, that seems like very basic advice, but what are you guys seeing about deepfakes? What advice do you have for people out there that are concerned about that?

Theresa Payton:
Yeah, so I think it’s been five years ago now, I did a piece with the Today Show with Tom Costello and Jay over at NBC. And we did this on purpose. We used free tools, we used a gaming computer, and we did a deepfake, and we only allowed it to run its cycles overnight. So if you think about somebody who wants to perpetrate fraud and get you to move money, they’re going to spend more time and money to get the fraud just right. But we did it to make a point, and what we did is we actually showed Tom Costello saying the news and we changed a word that he said, and on a small screen, like watching it on your phone, it was almost imperceptible. You weren’t really sure that it was a deepfake. Now, once you got it on a bigger screen, you could see the flashing and the mouth moving kind of funky.
We then took one of my employees’ faces and put it over his, giving the news, and it looked like our employee was now Tom Costello. Again, same thing. The fidelity changes once you get it on a high-def screen; it’s like, well, that’s clearly an overlay. But we were trying to make a point, and this is about five years ago. So fast-forward to today: it’s so sophisticated, it’s free, it’s easy to use, people do it for fun, and there’s no nefarious intent. So what I always say to people, both at work and at home, is have a passphrase that is not easily guessed. So my husband and my kids, we have a passphrase among the five of us.
So if I hear my kid’s voice on the other end of the phone, and I am a mama bear, there is no doubt about that, so if I hear my kid’s voice on the other end of the phone saying, “I’m in trouble, you have to do this thing for me, mom. I need help,” and it’s my kid’s voice, I’m going to be saying to them, I will help you once you give me the passphrase. And I quiz them on this passphrase all the time. We’ll be sitting around at dinner, and I’m like, “You’re calling me and telling me you’re in trouble and I’m going to ask you the passphrase, what is it?” And so I’m trying to make sure they have it in here, because I’m like, “I will not come help you if you don’t have the passphrase.”
And I’ve got other tools like Life360 and things like that, but have a passphrase, and do the same thing at the workplace, and make sure it’s something that is not easily guessed. That can be the best defense. Even your auntie who’s typing in all caps, she probably needs a passphrase. And I think we could create an app for her, Den, that every time she goes to post, before she hits enter, it could be you, Den, saying: are you sure you really want to post that? You’re yelling at people.
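
For what it’s worth, if anyone ever automates that passphrase check, say in the app she jokes about building, the same discipline that protects passwords applies: store a salted hash rather than the phrase itself, and compare digests in constant time. A minimal sketch with hypothetical values:

```python
import hashlib
import hmac

# Store only a salted hash of the agreed passphrase, never the phrase itself.
SALT = b"family-salt-v1"  # hypothetical; generate a random salt in practice
STORED = hashlib.pbkdf2_hmac("sha256", b"correct horse battery staple", SALT, 100_000)

def verify_passphrase(candidate: str) -> bool:
    """Constant-time digest comparison avoids timing side channels."""
    digest = hashlib.pbkdf2_hmac("sha256", candidate.encode(), SALT, 100_000)
    return hmac.compare_digest(digest, STORED)

print(verify_passphrase("correct horse battery staple"))  # True
print(verify_passphrase("let me in, mom"))                # False
```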

Den Jones:
Yeah. Well, actually, I think Facebook should have that question for probably 90% of the people that post on Facebook. I don’t need to know if you went to the gym again this week or that you bought a caravan, I guess. Now, I’m conscious of time. So as I think about enterprises and their security, the one thing that we are telling people is, “Hey, these attacks are just going to be more sophisticated, more targeted.” And if you still think of phishing as being a big percentage of the attacks, I just think it’s no longer one email that goes to a million people that’s kind of believable with a 1% success rate. I think of it as being a million emails going to a million people, highly tailored, with probably an 80% success rate. It’s just going to be like, Theresa receives an email pretending it’s from Den, and it references the fact that we’ve done podcasts or we had drinks somewhere, or we met for dinner at RSA, because we’ve mentioned things like that.
And it gets so specific, and then it connects: “Oh, and how’s Eric?” And scratch that, Eric wasn’t a real name, we just made that up. But the more we do shit like this, the more data they’ve got, and then the more data they’ve got, the more targeted it can become. And I see that as being probably the biggest thing we need to gear up and fight against as enterprises, because it is just going to be so targeted. As for training your employees to recognize dodgy emails, is this going to get damn near impossible five years from now? Oh, that’s my prediction.

Theresa Payton:
Look. Yeah, I agree. Well, the tools, with all the configurations, don’t block all of it. I was complaining about this. I’m not going to pick on any tool vendors, but you can imagine we’ve got several of the big-name tools that everybody uses, and we’ve got an enterprise mail platform that everybody uses, and we’ve got it locked down tight. And I can’t get mail from my mother, who’s just trying to send me an article about cybersecurity. It gets blocked and it says, are you sure? This looks like spam. But man, I’ll get these very sophisticated social engineering attempts asking for invoices and payments and credit card data. And I’m like, “You guys have got to be kidding me. All of these tools, and you’re blocking my mother’s email with a link to an article on Reuters about something that just happened, but you’re going to let this thing asking me to respond to an invoice with a payment come through, that is clearly social engineering.”
So it’s not even just the training. The tools are not stopping this stuff, and I think we need to stop kidding ourselves. You do need the security tools; they stop who knows what, and they did stop things that day. But I do know it’s getting through every single day, and AI is powering that. And I totally agree with you. AI can be used to create transcripts of all of our social media posts, all of our speeches, all of our podcasts, create transcripts, look for keywords, look for relationships, do a link analysis, just like you would conduct a police investigation and get a chart built in Maltego that’s got all the link analysis. Cyber criminals can do that too. And so we’ve got to outthink and outsmart them by having things they don’t expect, and that’s going to be key. How do we create new patterns and behaviors that cyber criminals can’t guess and the algorithms won’t guess? That’s going to, I think, be key to combating fraud and cyber crime.
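
A crude illustration of the gap she’s describing: even a few lines of heuristics catch the “pay this invoice immediately” pattern. This is a toy sketch with made-up cue words and weights, nowhere near a production detector, but it shows the kind of layered signal a filter can weigh.

```python
# Made-up cue phrases and weights; real detectors combine many more signals.
URGENCY = {"urgent": 2, "immediately": 2, "overdue": 1, "final notice": 3}
PAYMENT = {"invoice": 2, "wire transfer": 3, "credit card": 2, "payment": 1}

def social_engineering_score(body: str) -> int:
    """Sum the weights of every cue phrase present in the message body."""
    text = body.lower()
    return sum(weight
               for cues in (URGENCY, PAYMENT)
               for phrase, weight in cues.items()
               if phrase in text)

msg = "URGENT: the attached invoice is overdue, wire transfer required immediately."
score = social_engineering_score(msg)
print(score, "-> flag for review" if score >= 5 else "-> pass")  # 10 -> flag for review
```

And her closing point cuts both ways: attackers can run the same keyword and link analysis against us, so the durable defenses are the signals, like an out-of-band passphrase, that their models can’t enumerate.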

Den Jones:
And it seems to me, and you said this at the start of the show, we need to pay attention to this. We need to learn more about it. We need to educate together and become more aware of how fast this is moving. That’s the other thing about this topic: like you said earlier, what you learn this week, there’s way more to learn next week. And it just seems to me as if this train is on a hockey stick of growth, and the number of sites, the number of tools, the number of companies, it’s just going to be crazy. So I think the next couple of years is definitely going to be fun, but I also think it’s going to be scary as shit. I think this is going to be more scary than some of the earlier web stuff and social engineering and stuff like that. I’m trying not to think about it, though. I’m on the theme of adversarial attacks and I’m like, oh shit.

Theresa Payton:
Now, you know what was not AI in the new Mission: Impossible movie?

Den Jones:
Tom Cruise.

Theresa Payton:
Yeah, the motorcycle jump off of the mountain. Go look on YouTube. He insisted on doing that stunt himself. No CGI and the producer is literally losing his mind because it’s like, you don’t want Tom Cruise to die. And Tom Cruise said he purposely did it at the beginning of the movie because he didn’t want it in his head.

Den Jones:
Yeah, so it’s funny, I’ve not seen the clip, but I heard the statement where he’d done that jump on day one and it was like, holy shit, that was the first filming we’d done, right? It’s like, holy crap. Yeah, I’ve heard about him doing all of his own stunts, which is really kind of cool. And I do like those movies, so I will need to go and check that out, I guess. Now, Theresa, once again, thank you for your time. Really appreciate it. I’d love to check in six months from now. Let’s see how these predictions are coming out and see how gnarly scared we are at that point. And thanks for your time. It’s been a pleasure having you on the show once again.

Theresa Payton:
I love it. Thank you so much, Den. It’s always great to be with you. You’re a great host. You need your own TV show. Watch out, John Oliver.

Den Jones:
Thank you. I know, I’ve got the glasses. See, you never know. Cheers.

Speaker 1:
Thanks for listening. To learn more about Banyan Security and find future episodes of the podcast, please visit us at banyansecurity.io. Special thanks to UrbanPunks for providing the music for this episode. You can find their track, Summer Silk, and all their music at urbanpunks.com.

 
