Insight On: AI Can Do the Work You Hate, But Don't Get 'Coldplayed' by Your Bad Behavior

The role of a CISO is more challenging—and more exciting—than ever. In an age of rapid AI adoption and "citizen developers," how can security leaders say "yes" to innovation while also protecting the business from new and evolving threats?

By Insight Editor / 14 Aug 2025 / Topics: Data and AI, Mobility, Generative AI

Audio transcript:

AI Can Do the Work You Hate, But Don't Get 'Coldplayed' By Your Bad Behavior

Jeremy Nelson: You know, we've actually seen full-on attacks perpetrated by somebody joining a Zoom call with full video and voice replication of a CFO. The threat actor was basically able to convince somebody in accounts payable to do a wire transfer of $60,000, all through completely deepfaked video and audio.

Jillian Viner: If you're making technology decisions that impact people, budgets, and outcomes, you're in the right place. Welcome to Insight On, the podcast for leaders who need technology to deliver real business results. No fluff, no filler, just the clarity you need before your next big decision. I'm Jillian Viner, and today we're diving into cybersecurity with Jeremy Nelson, Chief Information Security Officer for Insight North America. Let's go.

Jeremy: I literally turned 50.

Jillian: Happy birthday!

Jeremy: Oh, I tried to avoid it as much as possible.

Jillian: Right. That's an accomplishment. You made it to 50; you're half a century!

Jeremy: Yay, I survived. See, those are the words that I hate: "half a century."

Jillian: That's so cool. Think of all the knowledge and wisdom that you have, but you still have so much time ahead of you. Are you satisfied with where you are now, at the milestone of 50?

Jeremy: You know, I hope there's always more, right? You could always be doing more. But look, I'm really happy when I look at my family in particular, when I look at what my wife and I have accomplished. We're going on 30 years together. We've raised four incredible kids who have gone through college. Some of them have gone off and gotten their own houses. They're amazing. They're good, productive members of society.

Jillian: You're kind of a big deal now. You're a CISO at a Fortune 500 company.

Jeremy: Yeah. I focus less on that these days than I used to.

Jillian: So those mile markers have changed.

Jeremy: Pretty much.

Jillian: How was the first day of school?

Jeremy: You know what, it was super fun. I'm not afraid to cry, even though there are cameras running and I've got a mic in my face. I literally cried. This is my last first day of school. You should have seen me when my first went. I kept it together really well during the move-in and getting her ID, because we had a lot of logistical stuff to work out. I was busy, I was occupied. Then she was moved in, I did a whole IT setup, not even gonna get into that, and I gave her a hug. We were saying goodbye and it just hit me all of a sudden: all the stuff I'd been procrastinating on and putting off with all the busy work. I was just bawling, a total basket case. And my wife is like, get it together, man. Because now my daughter starts crying, because she sees me upset. It just cascaded. It was really, really bad. And I haven't gotten better yet.

Jillian: I love it. I love it.

Jeremy: I am a marshmallow. Yes, a million percent. I don't think that would surprise anybody who would tune into this podcast, by the way.

Jillian: Yeah. To know that Jeremy Nelson, this big marshmallow...

Jeremy: Yep. Pretty much.

Jillian: ...is heading up our security defenses.

Jeremy: Exactly. But you know what, it's because I care about people. It's funny, because you think about how that actually ties in. At the end of the day, I want people to be happy. I want people to be safe. I want people to be protected. It doesn't seem to make sense, because you usually picture hardened, grizzled security guys. But no, I'm pretty soft.

Jillian: It must have been hard for you to be on the security defense teams when you'd get that panicked phone call from a client.

Jeremy: It's emotional, every time, even if it's a client I've never met before, never engaged with. There's an initial emotional reaction. But the one thing that I am quite proud of is that my fight or flight is definitely fight in those types of situations. My responsibility is to help pull them out of some of their darkest days. You jump in, you roll up your sleeves, you help orchestrate, you bring all the knowledge and experience that you've gained over the years, and you just get it done. That's one of the most fulfilling parts of being in security: no matter who you're working with, whether the organization is a competitor of ours or not, or a client that we're supporting, it's one of the few areas where it's us versus them. The only real competitors in the room are the bad guys, the people who are literally perpetrating crimes and trying to take advantage of the people that we support. I think that's the most fulfilling part. No matter who I work with, yes, there are going to be some competitive business elements to it, but we're all on the same side at the end of the day.

Jillian: I love that lead-in, because security is so critical, and it really underpins every other conversation we have in business today. It's really easy to get swept up in this AI hype; it's hard to talk about anything else. If I asked you what the number one question clients ask you today is, it's gonna be something about...

Jeremy: It's AI. AI, it's AI. Yep.

Jillian: So there's no way to avoid that. However, there are a lot of fundamental things that need to be addressed that have really nothing to do with AI initiatives, and at the same time everything to do with them. It's maybe not as sexy or glamorous, but those are some of the things we're gonna get into today. Before we dive in: Jeremy, as a CISO you have this really challenging job of trying to help businesses accelerate transformation while reducing risk. That's probably been part of your job since the beginning, but do you feel the pressure and the heat to do that even more today?

Jeremy: Oh yeah. It's way worse. In fact, I'm actually gonna steal a quote from Mr. Jason Rader, our global CISO. One of the things that we get painted with as a security team is that we're the office of No.

Jillian: Right. You're the barrier.

Jeremy: We are the reason why you can't do something that you wanna do, something you feel is going to take your business to the next level or let you service your customers in new and unique ways. I've even said words like these before, not necessarily in a security context, but it translates. I used to be in network architecture, running and operating the environment for an organization, and they would come up and say, man, all these issues: network, network, network. And I said, you know, my network would run perfectly if we didn't have all these users and stupid applications running on it. A hundred percent uptime, honestly.

Jillian: Right?

Jeremy: Yeah, exactly. But it's the same thing with security, right? The way that we can ensure perfect security is by not doing anything. We have our data and it's all completely closed off. No one can access it. We don't let anyone in or out. We don't interface with anybody. It's this perfectly secure environment, but it's also not functional. So we can't be the office of No. But it's hard, because you internalize that, and you're like, well, my responsibility is to make sure that we are perfectly secure and manage risk. It's just like we talked about a little earlier with raising kids: you can't protect them perfectly, because at some point the way that they learn is by going out into the world and engaging with other people. And sometimes they are gonna get hurt.

Jeremy: Sometimes they're gonna have to expose themselves to risk. But that's how they grow, that's how they develop, that's how they become the next best version of themselves. It's the same thing with the businesses we support: we can't make them perfectly secure. We need to do the best that we can to create policies that keep them protected from the worst possible mistakes, while also giving them the autonomy to go out and experiment, to do new things, to push the boundaries and find new ways of doing business that inevitably helps all of us get better.

Jillian: We're gonna talk a lot about that a little later on. As you reflect back on cybersecurity defenses, what's the biggest shift or change that you see, comparing cybersecurity before this great emergence of AI to today? Has anything changed?

Jeremy: Oh, dramatically. I kind of look at it in three different phases, and the first two have a commonality in the way that we approached security. We talked a little bit earlier about how I've been in the IT industry for a very long time. We started off building these data centers, right? They were physical locations that we had complete control over: badged access, biometrics going in and out. That's where all the servers sat, that's where all the data sat. Really the only way in and out of them was if we willfully added a connection to a public network like the internet. Now, that became essential, a necessity in order to be able to conduct business.

Jeremy: But what we were able to do is put these security appliances at that perimeter. We were able to identify what the perimeter is, what is safe and what is not safe. So we built this exterior security perimeter around our IT environments, a very outside-in type of approach. But we saw that get highly fractured with the emergence of the cloud, as organizations started moving more applications, workloads, and data out into these various cloud environments. So we had to figure out: okay, now I don't have this nice, tightly controlled little space that I own every aspect of, and now we've got all these different entry points. Once again, how do we build that nice, hard, protective security exterior around our IT environments?

Jeremy: But it was still the very same type of approach, still very much outside in. What we're starting to see now, with the onset of AI, is the exact opposite. As we look to innovate, as we look to enable our workforce, it's really all about data and data assets, and this new tooling and these new IT capabilities that we present to people have the ability to take that data and send it outward. So now we have to bring the same rigor that we put around the perimeters of our IT environments all the way into the juicy center, the data, and make sure that we've got those same types of policies and controls to ensure we don't inadvertently start spreading our intellectual property and valuable assets to people who shouldn't have access to them. It's a very different shift in mindset. That doesn't mean we don't continue to invest in that perimeter, that exterior. But we can't think only outside in; we have to think inside out.

Jillian: So your philosophy of the way you're looking at security has changed from an outside-in perspective to an inside-out one, this kind of oversharing. Speaking of kids, that sounds pretty dangerous. I'm thinking about how easy it is for kids, or even adults, to overshare things. What does that look like in business? What is the threat of oversharing?

Jeremy: So here's a great example. One of the things that we're really starting to see a lot of organizations lean in on is this concept of a citizen developer. When you think about it, it's the concept of how you unleash the creativity of your workforce. You've got a lot of people who are really, really smart. You've invested in them; you've brought in top talent. No one goes out and hires people they don't believe in. And humans by nature are very creative, but they also want to work less, right? They wanna figure out: how can I take out these "unproductive," and I'm gonna use air quotes here, not-fun things that we have to do every day, accelerate my way to my end goal faster, and remove that work from me? For those of our listeners who are familiar with Google's site reliability engineering philosophies, it's the concept of toil. The whole idea is: how do we get rid of as much toil out of our day as possible and allow our brains to go do the creative and productive things that they're capable of?

Jillian: So "citizen developer" is just a fancy way of saying anyone in your company can come up with tools and solutions to make their job easier and faster.

Jeremy: Exactly.

Jillian: You don't have to be in IT.

Jeremy: You don't have to be in IT. In fact, that's exactly who we're looking for.

Jillian: Low code, no code.

Jeremy: Low code, no code was the very first iteration of this that we saw. It's providing a set of tools for people to go out, leverage the various systems that they have access to, and basically automate their work.

Jillian: Sounds great.

Jeremy: It's awesome. It's fantastic.

Jillian: Except...

Jeremy: Except there are two really big challenges that we see with that. Number one is that a lot of the organizations we support have, we'll call them, workflows that are based on legacy ways of conducting business. A lot of times those workflows engage with and exchange data in ways that aren't particularly secure. But because of the overall speed at which the business is conducted, it's a risk, but a relatively low and manageable one. However, take the training wheels off, and now you're able to run those same workflows at machine speed. The natural behavior is for a citizen developer to take that existing workflow and replicate it in its entirety inside of a machine.

Jillian: Sure. I'm gonna take the process that I'm familiar with. I know that this series of steps is what I've had to do to complete this toilsome, time-consuming, obnoxious task. So my first inclination is to just write down those steps and replicate them.

Jeremy: And get a machine to do those same steps.

Jillian: What's wrong with that?

Jeremy: The exact same steps. The problem is that, by nature, a lot of those steps are not done with data security in mind. What ends up happening is that a relatively slow process, hampered by the speed of a human being doing those toilsome and laborious tasks, brings with it a more manageable risk. But as soon as we replicate that same bad process in the machines and turn it over to run at machine speed, you've taken what was a relatively minor and manageable data risk and accelerated it into something where information is potentially being overshared in ways you probably couldn't even imagine, at a volume that really puts your organization at risk, whether through data exfiltration, inappropriate use of that data, or regulatory compliance requirements around the way it's shared. The machines might share that data at a pace that puts you in violation of your compliance requirements and leaves you liable.

Jillian: Can you give me an example? Because I'm having a hard time thinking through: if a human's already doing something, how does it become dangerous when we let a machine do it?

Jeremy: Yeah. It really comes down to volume. I'll use an example that I've come across in the past. There was an organization focused on retail, and the way they would engage with disputes that came in from a customer was largely through email. Email was their primary form of communication.

Jillian: Really secure.

Jeremy: "Secure." One of the most secure transports on the planet. Air quotes.

Jillian: Exactly. That's sarcasm.

Jeremy: And if we peel that back a little bit: why is email so insecure? Email is just text being transmitted from one server to another. There are ways to encrypt it, so you can secure it in flight, but at the end of the day it's just going to an inbox, which is basically a storage repository that somebody can go into and download from. And once it's downloaded, you have zero control over where it goes, how it gets consumed, who uses it, who sees it. There's a very stark lack of accountability for what happens with that data once you've hit that send button. Maybe you even put in the wrong email address. Think about how simple that is; it happens to each of us every single day. And now potentially sensitive information is landing in the wrong location.

Jillian: And the retailer's using this to conduct business with customer data.

Jeremy: A hundred percent. And I'm going a little bit into the wayback machine for this. This was early in the PCI days, when we were going through and requiring digital systems to scrub, to mask, credit card data. The only way they were able to process a refund for an unintended purchase was to actually collect full credit card data from the client. So now we have full credit card information being transmitted through email. These were relatively low-volume cases, because an individual customer support representative could only address so many concerns a day, and only so many actually needed that information exchanged.

Jeremy: So it was relatively managed, and it was all based on the decision making of the individual at the customer service desk. Now you build that into some type of programmatic approach, where it's completely automated by a system, and a lot of that same decision making maybe doesn't get properly coded in, because they don't necessarily think about it. They're just going step A, step B, step C, step D, resolution, following it through without applying the other logic they would consider before initiating a certain step. So now you've got a machine that's capable of going through thousands of requests, as opposed to a human being who might go through five, and it does so without some of that decision making built in.

Jeremy: And so now you've got thousands of credit card records being gathered or transmitted in a way that's not at all secure. You've got two issues. Number one, a bad process: a workflow that exchanged potentially sensitive information over email was simply replicated in a machine. We took a bad process, a bad workflow, and a bad tool set and built them into an automated workflow. The other piece is the data security component. When we talk about that inside-out type of approach, it really comes down to how you properly classify, govern, and tag data, so that even these automated processes don't have the capability of transmitting it in an insecure fashion, and then building controls to detect when something does go outside the boundaries of your policy, so you can capture it and remediate it as close to inline as possible.
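[Editor's note: the kind of inline classification check Jeremy describes, inspecting what an automated workflow is about to send and blocking card data before it leaves, can be sketched in a few lines. This is a hypothetical illustration, not Insight's actual tooling; the pattern, the Luhn check, and the block/allow decision are invented for the example.]

```python
import re

# Naive pattern for card-like digit runs (13-16 digits, spaces/dashes allowed).
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    """Luhn checksum: distinguishes plausible card numbers from random digits."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def classify_outbound(text: str) -> str:
    """Return 'block' if the outbound message appears to contain card data."""
    for match in CARD_PATTERN.finditer(text):
        if luhn_valid(match.group()):
            return "block"
    return "allow"
```

A real DLP engine would run checks like this both at rest and in flight and cover many more data classes, but the shape is the same: classify first, then decide whether the automated step may proceed.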

Jillian: It sounds a little complicated. What I'm hearing from you is, first of all, about these bad practices: I'm sure we're all guilty of having a bad workflow, a bad process.

Jeremy: I haven't found a company that I work with that doesn't have them.

Jillian: So what's the first step, then? If you recognize that your business has a process that is draining time and resources, and you see potential ROI if you can find a way to automate it, what's the first step to make sure you're doing it the right way?

Jeremy: Yep. So there are a couple of different ways we go about that. There's the concept of openly acknowledging that you want to do citizen development and creating a citizen development framework. That framework standardizes on tools and best practices, but it also encourages innovation: not just taking "here are the steps that I do" and building them into a tool, but actually thinking about the optimal way to accomplish the goal. If I were to turn this over to a machine to execute on my behalf, what does that look like? So number one, there's a cultural element, an organizational change management piece, if you will. The other piece is having a data loss prevention program: creating data governance strategies, building out policies that help you classify what is considered sensitive data for your organization, and then building the tools that are able to identify that data both at rest and in flight.

Jeremy: And being able to capture and alert off of it. What's interesting there is that it's not always about blocking, but about identifying: okay, we're doing this, and it seems like a bad behavior. Some of it might fall into a gray area; it's not black and white. This probably isn't a great thing for us to do, so let's go back, have a conversation with the business, understand the workflow behind why we're transmitting or storing this, and figure out whether that's appropriate. Then we look for optimal ways to address it. Another great example: we share client address information, which falls under the classification of PII. If you do it on a one-off basis, it's maybe not so bad, and maybe our process is simply to alert on it. But if we see that one email box happens to send out hundreds of addresses, maybe that's a trigger for some type of exfiltration or other inappropriate behavior, and that would escalate the alert so that we respond and try to remediate.
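[Editor's note: the volume-based escalation Jeremy describes, logging a one-off PII share but escalating when a single mailbox sends hundreds of addresses, might look something like the sketch below. The threshold, event shape, and function names are invented for illustration.]

```python
from collections import Counter

# Assumed threshold: total PII records from one sender before escalating.
ESCALATION_THRESHOLD = 100

def triage(events):
    """events: iterable of (sender, pii_record_count) tuples from DLP scans.
    Returns a dict mapping each sender to 'log' or 'escalate'."""
    totals = Counter()
    for sender, count in events:
        totals[sender] += count  # aggregate per-sender volume
    return {
        sender: "escalate" if total >= ESCALATION_THRESHOLD else "log"
        for sender, total in totals.items()
    }
```

The point of the design is the one Jeremy makes: the same data class gets different responses depending on volume, so a human one-off stays a quiet log entry while a runaway automated workflow surfaces as an incident.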

Jillian: I'm gonna go back for a second, because you explained how security has changed from the outside-in to this inside-out approach, and really what you're getting at is the biggest threat to companies right now. What was it before, and what is the biggest security threat today?

Jeremy: Yeah, that's a great question. Before, a lot of what IT spent their time securing was the infrastructure. We wanted to make sure that configurations were hardened, that things were patched, that we had good security policies, that we were distinguishing good and bad traffic and blocking the bad. We would look for anomalous behavior across these different areas and take responsive action to lock things down. It was very heavily infrastructure focused. One of the things we're seeing in this shift is that it's now people. It really has shifted to incorporating security into the culture of the organizations that we support: the actual people, getting them to buy in, having them think with a security-first mindset, and then translating that into data practices.

Jeremy: Data practices that allow us to identify and classify sensitive information and monitor the way it's consumed, not always jumping in and blocking, but being curious and asking questions: okay, why are we doing this? Then attacking the heart of the potential operational challenge that's creating the security issue and addressing it that way, as opposed to just fixing it through technology. I think that's the biggest change. We used to be really focused on everything being a technology fix. Now, with the onset of AI and the desire to enable your workforce, it's more people and process focused.

Jillian: And you're not talking about people falling for spam or phishing attempts. This is people who are being empowered to be citizen developers, which I love; I'm gonna add that to my resume. To be a citizen developer, to come up with innovative ideas, tools, and solutions using company data. But really what you're saying is we have to be careful that we're not oversharing, that we're being responsible with our company data. I'm gonna throw this into a current, relevant topic that seems to be on everybody's mind: how do you as a CISO avoid being "Coldplayed" by your own company? What I mean, in this context, is having people who are unfaithful to your data policies, and then you get caught in a very public scenario because somebody has leaked information.

Jeremy: Yeah. It's really interesting and very timely, and not just the Coldplay-cam component of it. I couldn't resist; what a great inclusion. So I think the number one thing is really coming in and understanding: what is your data? What is sensitive data? That's where a lot of organizations don't even necessarily know where they engage with it and what actually is sensitive.

Jillian: Pretty basic, but important to define.

Jeremy: Very basic, but it's critical. It's essential. It's foundational. So: coming in, identifying what is sensitive data, and then going through and leveraging a series of tools and people to inspect both processes and practices around where that data is being stored and transmitted. Then implementing a series of tools that help you identify when those policies are being breached, giving you the time and the tooling necessary to respond: either to block that activity from happening, or to recognize that something bad has happened and ask, what is our response? That means going into some type of crisis response procedure so you can control what happened and get out in front of it, rather than somebody else alerting you that it's happened. It lets you be in the driver's seat, so you can control the narrative and actually manage the impact of that data loss.

Jillian: What's the appetite from the leaders you're talking to around encouraging citizen development? Because it sounds like there are a lot of guardrails you have to put in place to actually make this work successfully.

Jeremy: A hundred percent. And the appetite is very high. It's kind of shocking, right? Because you're like, well, there need to be a lot of rails; we need to protect a lot of this. But at the same time, it is a huge differentiator, and everyone is fighting against disruption, against somebody else getting ahead of them, to be first to market and take advantage of the innovation that exists within their workforce. So there's a lot of emphasis on the citizen developer. And if you look at the various things happening within the AI space, which we led off the conversation with, agentic AI, for all intents and purposes, is the next iteration of citizen development. Now, not only are we accelerating the speed at which toil tasks are executed, but we're including an AI model in the loop.

Jeremy: And we're talking about the potential ingestion of, and access to, data that is not appropriate either for the person writing the tool or for the person the output of the tool is generated for. So there's a huge appetite to equip your workforce with agentic AI tools and these low-code, no-code citizen developer practices, because it is a market differentiator and nobody wants to be left behind right now. We are at a point of tremendous change, an inflection point in what a workforce looks like and how it is empowered and altered by AI. Nobody wants to slow down; nobody can afford to slow down. So the appetite is very, very high, but that doesn't mean the risk goes away. There are probably some organizations going into it a little bit on the reckless side. But the organizations that typically reach out to us recognize the potential and want to put those guardrails up early. They know they're not gonna be perfect, they know there are gonna be some gaps, but at least they've got a structure that they can continue to build on as this really complex citizen developer program takes off.

Jillian: Yeah, that makes sense. I mean, when generative AI really started coming across organizations and they started to adopt different generative AI tools, I think it quickly became apparent that this was not going to be a top-down approach. It really has been successful bottom-up, with individuals, because we're the ones doing the toil. We understand where those pain points are. And if you start small and then go big, that's really where you start to see the ROI build up. You mentioned at the very beginning of our conversation that as a CISO, you have the very unfortunate reputation of being the department of no. So how can the CISO become a department of yes? What do other business unit leaders need to know or understand, and how can they best partner with their security teams to make this a reality?

Jeremy: Yeah, I love that. And I don't even know if I finished my quote. I think I said we don't want to be a department of no; the second half of it is, be the office of know-how. And I think that's really important. I think that actually answers your question. At the end of the day, we don't want to come in and just say no. We want to say, we support you, and here's how you do it. We want to provide guidance. We want to provide that safe framework that allows you to take advantage of these technologies in a way that doesn't expose any of us to risk. And so I think the number one thing is just recognizing that we're all in this together. As a CISO, I have the same objective that, we'll call it, the director of infrastructure has, right?

Jeremy: Mm-hmm. And by the way, that's the exact same mission that the director of customer service has, the same mission the director of product management has. We want to see the company grow. We want to support the core initiatives that service our clients, that create the best product possible, and that allow us to meet our financial obligations, whoever those obligations happen to be to. So really, my messaging would be almost more to CISOs than anybody else: try to shed that skin of "just say no," because that's the easy answer to keep us secure, and think about, how can I help them know how? Really lean in, invest in that partnership, and recognize that we're all on the same journey. We all have the same goals; we just all have different seats to sit in to make sure those objectives are accomplished with as little risk as possible.

Jillian: And there's always risk. If you say no, they might go behind your back and do it anyway. And that's when you get shadow...

Jeremy: AI. A hundred percent. That's where you get shadow AI, shadow IT, shadow cloud, all of it. Every time we've had the security team say no, it doesn't stop. It just gets done in a less secure fashion.

Jillian: So you can't just say no. Yeah, exactly.

Jeremy: So say "know how." Let me show you how.

Jillian: I like that. That's a great approach. You mentioned earlier some bad behaviors. I'm curious, is there some low-hanging fruit, some really often overlooked measures that organizations should go look at right now? The thing that comes to mind for me is file sharing. The defaults on file sharing.

Jeremy: Yep, that's a good one. We didn't really dig into that one. Obviously, when we start getting into AI, one of the powerful aspects of AI is the fact that it can go out and access a wide variety of data sets that you individually have access to. One of the things we've noticed as we partner with organizations on their AI journeys is we go in and look at, hey, what are your data practices? How do you govern who has access to what data sets across the entirety of your organization? And one of the things we have found is that, again, it comes back to: people don't want toil. The more steps you require somebody to do, the more likely they are to shortcut them. And we see that in file sharing. The most natural behavior when somebody goes to share a document or file or data within their organization is to click the button that shares with the entire organization. That way, no barriers. When it gets sent to the other side, they don't have to worry about, oh, it doesn't work, can you try this again? They just know it's going to work. And what's the risk? I'm just sharing it with my organization.

Jillian: Until Sharon finds out the salaries of all the employees. Bingo. Yeah.

Jeremy: Mm-hmm. Bingo. And that's what happens when we turn on these AI models: it comes back to that same bad behavior. AI is able to ingest information at a much higher capacity and with much greater efficiency than humans can. If Sharon specifically wanted to go out and look for that salary information, she'd have to do a whole series of searches through intranet sites, through whatever the data shares look like, to try and find it. It's a little complicated. AI is doing that at rapid speed. It's ingesting all that data, because it's really a big data pump, right? It's going to go out and find everything it can, because it wants to draw those correlations to give you the best response to your prompt. And so what ends up happening is that that natural behavior of sharing with everyone turns into a time bomb. Now an AI model that's supposed to be designed for a specific workflow has access to data that it shouldn't have, that's inappropriate for it to have. And it could be used to draw inferences that then share out information that shouldn't be allowed to be...

Jillian: Shared. So to lock that down, you set the default to...

Jeremy: A hundred percent. You've got to go through and do data governance. You do data tagging, you create sensitivity markers, and you build policies. Number one is you go and find out everything that's shared out to everybody, and you'll be like, this is clearly not appropriate. Then it comes back to that organizational change management again: building in that culture, going department by department and cleaning up those shares. That's step number one. Then you can allow the AI models to start learning and gathering that information. But you also build a policy that monitors and inspects after the fact, to make sure those policies are being adhered to.
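That first cleanup step can be pictured with a small sketch: walk an inventory of shares and flag anything that is both visible to the whole organization and carries a sensitivity label. Everything here is hypothetical; a real audit would pull the inventory and labels from your collaboration platform's admin API rather than a hard-coded list.

```python
# Hypothetical sketch: flag org-wide shares that carry sensitivity labels.
# The inventory, field names, and labels below are illustrative only.

SENSITIVE_LABELS = {"confidential", "hr", "finance"}

def find_risky_shares(shares):
    """Return paths shared with the entire organization that also
    carry a sensitivity label, i.e. candidates for cleanup."""
    risky = []
    for share in shares:
        org_wide = share["scope"] == "organization"
        sensitive = SENSITIVE_LABELS & set(share.get("labels", []))
        if org_wide and sensitive:
            risky.append(share["path"])
    return risky

inventory = [
    {"path": "/hr/salaries.xlsx", "scope": "organization",
     "labels": ["hr", "confidential"]},
    {"path": "/marketing/logo.png", "scope": "organization", "labels": []},
    {"path": "/finance/q3-forecast.xlsx", "scope": "team",
     "labels": ["finance"]},
]

print(find_risky_shares(inventory))  # only the salary file is flagged
```

A report like this is what makes the department-by-department cleanup actionable: each team sees its own list of inappropriate shares rather than an abstract policy.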

Jillian: Jeremy, I'm going to ask you kind of a mean question, because you say policy, and I know that's obviously a requirement, it's important to have. But you know how it goes when you send it out on email: "Please read this new AI policy." How do you effectively make sure that...

Jeremy: ...people are reading it. Okay, so this is kind of fun, this is actually an interesting one. I was just having this conversation earlier today with some members of my team, because we talk about a policy document. And we're going to come back to just how ineffective email is as a communication platform, right? We send out these big documents, a link to a hosted version of a...

Jillian: Fifteen-page PDF.

Jeremy: Massive document. It's like reading a contract, right? It's very confusing, and not everyone is going to be able to consume that. Traditionally, the fix for that is compliance training videos, where you go through some fun little scenarios and things of that nature. But even then, you find that because of the length and the cadence of the material being presented, it doesn't always sink in. People tune it out, they multitask, they kind of come back to it. Even if there's a test, they get to the point where they're like, ah, this seems right.

Jillian: I was checking email while I was watching that compliance video about email.

Jeremy: Exactly. So one of the things we've been talking about is how to make this a little more personal. I talked about organizational change management, and one of the things we've started to discuss internally is, how do we make it almost like TikTok? Very short clips that are highly engaging, something that can plant a little seed in somebody's consciousness. So we're really looking at different ways of driving security best practices in much tighter, more consumable formats that don't get people mired down in what comes across as legal speak. Because a lot of the time, when you're talking about risk management, risk management typically lives within a legal organization, and you end up feeling like you're listening to legalese, which is really challenging for normal people to consume. So bringing it down to storytelling is extremely important, and really helping them consume that in a way that makes sense to them.

Jillian: Gen Z, there's opportunity for you in the future.

Jeremy: There we go. That's right.

Jillian: All right. So we've hit on data management and looking at automation a little more carefully. Is there anything else that you think people are missing in their security landscape?

Jeremy: Yeah. Outside of the concepts we've already talked about, I think there's one thing people are conscious of, but they don't necessarily recognize the role it plays, especially as we make this transition into a citizen developer community. And that is identity. A lot of people, when they think of identity, think, oh, I have my username and my password. Sure. And obviously multi-factor authentication is a big thing that's come up. But think about what happens, especially in the world of agentic AI. The whole concept is that you're actually giving a series of actions to a machine to go out and execute, and it's using generative models to do predictive analysis on what comes next, right?

Jeremy: And so the question is, how does identity play into that? Is it something prompt-driven, where you're basically lending your identity to that specific interaction? Is it something that you actually want to impersonate you and run autonomously in the background? What about the identity of the person who's creating an agent? What does that need to look like? So the role that identity plays in the creation, execution, and overall lifecycle of these citizen-developed tools, platforms, and agentic AIs is really, really important, and probably warrants its own podcast episode.

Jillian: Yeah, I have so many questions about this. But you say identity, and impersonation. Are you talking about my identity? Or credentials?

Jeremy: So it's interesting. A credential is a way for you to establish identity, but it's not necessarily, in and of itself, your identity. Your identity is basically the context of who you are as it relates to the various IT systems that live across our company.

Jillian: So it's me in ones and zeros. Correct.

Jeremy: Correct. And your credentials are a way for you to validate your identity. There are ways for us to basically hand that identity over to somebody else, to authorize the execution of something based on your identity. Now, we haven't even gotten into what that looks like in context: what does identity and access management look like as an organization, especially with the onset of citizen developers? The overall roles and responsibilities, the way our organizational structures are built inside these identity systems, the other forms of identity that exist beyond just username and password. There's a lot to digest there. But really, what it comes down to is this: when you start thinking about how to empower this user community, a lot of it comes back to data, to what people have access to, and all of that is based on identity. How do you build a robust identity strategy that looks at all the different ways you want to leverage agentic AI in your organization and empowers it to do what it needs to do? How do you understand what those identities look like, how do you box them in, and how do you make sure they don't go outside the constraints of that box?
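That idea of "boxing" an agent identity can be sketched as a delegated grant with an explicit scope allow-list and an expiry. This is a toy illustration, not any real identity product's API; the class, scope names, and expiry policy are all assumptions for the sake of the example.

```python
# Hypothetical sketch of a "boxed" agent identity: it borrows a user's
# identity, but only for an explicit set of scopes and a limited time.

from datetime import datetime, timedelta, timezone

class AgentIdentity:
    def __init__(self, delegated_from, scopes, ttl_minutes=60):
        self.delegated_from = delegated_from    # the human lending identity
        self.scopes = frozenset(scopes)         # the box the agent lives in
        self.expires = (datetime.now(timezone.utc)
                        + timedelta(minutes=ttl_minutes))

    def can(self, action):
        """Allow an action only if it is inside the box and unexpired."""
        return action in self.scopes and datetime.now(timezone.utc) < self.expires

agent = AgentIdentity("jillian@example.com", {"calendar:read", "chat:reply"})
print(agent.can("chat:reply"))   # True: inside the delegated scopes
print(agent.can("files:share"))  # False: outside the box, denied
```

The design choice the sketch highlights is that the agent never holds the user's credentials, only a narrow, time-limited grant derived from the user's identity, which is exactly the constraint Jeremy describes.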

Jillian: Yeah. I mean, my head is going down a Black Mirror rabbit hole where I'm imagining that I'm on a two-week vacation cruise, and my agentic version of myself is answering my team's IMs and chiming in on meetings. This is a plausible future.

Jeremy: Yeah, it a hundred percent is.

Jillian: So you have to find a way to make that...

Jeremy: Absolutely. Yeah.

Jillian: In fact, I'm not actually here. You're talking to my...

Jeremy: Oh my gosh. This is the best agentic AI I've ever encountered.

Jillian: Very advanced. So you have to find a way to build boxes around that, build some structures, some constraints. And really, what's the valuable use case there?

Jeremy: And, I know we've talked a lot about this, but there's also the risk management component, always. How much are you willing to lean in? How much risk are you willing to take by granting these types of identities to these agentic AIs?

Jillian: Yeah. I don't want to derail this too much, but voice cloning comes up a lot. And I just kind of scratch my head when I log into particular applications, maybe financial applications, that ask me if I want to set up "my voice is my password."

Jeremy: I'm very easily spooked these days. Yeah. I mean, we've actually seen full-on attacks perpetrated by somebody joining a Zoom call with full video and voice replication of a CFO. The threat actor was able to convince somebody in accounts payable to do a wire transfer of $60,000. All through completely deepfaked video and audio.

Jillian: Wow. That's an interesting example, a painful example. We talked about email earlier, but we also need to think about locking down video calls. Everything's a video call now, and it's almost always recorded.

Jeremy: Almost always. So yeah, there's obviously the recording of the audio and video meetings that we attend. But another thing to be conscious of: one of the other aspects of my role is that I support organizations during incident response. They've found themselves the victim of some type of major incident, whether that's ransomware or some other large-scale exfiltration or compromise of their environment. We get called in and we help support their recovery efforts. And one of the things we've identified pretty quickly is that one of the first places threat actors go is identity. They want to get into your identity store. And here's what else is really interesting: for the most part, while we're trying to work through recovery, we're using the exact same collaboration tools that we use for day-to-day operations. So a lot of times these threat actors can find their way into our war rooms, and they're literally listening to every single countermeasure being taken to stop them from spreading through the environment. They're literally creating their own responses to the countermeasures in real time.

Jillian: Holy smokes. That's terrifying, Jeremy. How do you prevent that? Do you all set up, like, a password before you join a meeting?

Jeremy: So one of the things we've seen, and I've seen this happen, is everyone is told, go audio and video on, now. And all of a sudden, what looks like a valid account connected to the meeting will just drop out...

Jillian: Because they can't replicate

Jeremy: Because they can't replicate it. And so usually what you end up doing is leveraging out-of-band collaboration tools that aren't reliant on the same identity stores, in order to facilitate those sensitive communications.

Jillian: That's wild. Yeah.

Jeremy: We went really off track with that one.

Jillian: We did. But that's a really interesting and important piece to know, because we always say a security attack is not an if, it's a when. And so these are just new layers of defense to be aware of. Do you have a plan? Has it been updated? Do you know who to call? Do you know what precautions to take to make sure the attackers aren't...

Jeremy: A hundred percent. Yep. Communication is essential. When you're in the middle of a crisis event like that, communication is the number one thing you can be doing. And if the threat actor has been able to compromise your means of communication, and can basically listen in and take advantage of the work you're actively doing to get in front of them... what do you do? Checkmate. So that's got to be part of your plan.

Jillian: Hmm. Sounds like a whole new episode we'll have to do.

Jeremy: I love it.

Jillian: All right. I want to close by asking you: what advice would you give to other IT leaders and business execs who maybe haven't started citizen development yet, who are just getting comfortable with it, and for whom data management really needs to be a priority right now? What is your top advice for them?

Jeremy: My top advice is to go through and leverage the subscriptions and tools you already have today. A lot of organizations actually have access to data loss prevention tools within the overall catalog of subscriptions they already own. If you have that, the best thing I can recommend is to go see what you have today. Go through and look at your data at rest. If you have the capability, look at the data in flight, through traditional collaboration channels, to see what you're transmitting today that's probably not something you should be transmitting, especially on a frequent basis. And then try to understand why. Ask yourself the question: why are we doing this? I think you'll be really surprised. If you find who is storing or transmitting data that's not appropriate and just ask them why, you'll probably learn about a business process that's been around for 40 years and has never been adjusted. It's taken on a new form and incorporated some technology, but it really hasn't changed; it's the same process the company has been doing for 40 years. And it's an opportunity to partner with the business in a unique way, to reduce toil and make people's lives better, but also to secure that extremely important data.
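The "look at your data at rest" step can be illustrated with a toy scan: run simple sensitive-data patterns over text and report which documents match. Real DLP tooling is far more sophisticated; the two regexes and the two-document corpus here are purely illustrative assumptions.

```python
# Hypothetical sketch of a data-at-rest scan: flag documents whose text
# matches simple sensitive-data patterns. Real DLP tools do much more.

import re

PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[- ]?){3}\d{4}\b"),
}

def scan(documents):
    """Return {doc_name: [pattern labels found]} for documents with hits."""
    findings = {}
    for name, text in documents.items():
        hits = [label for label, rx in PATTERNS.items() if rx.search(text)]
        if hits:
            findings[name] = hits
    return findings

docs = {
    "onboarding-checklist.txt": "New hire SSN: 123-45-6789",
    "newsletter.txt": "Welcome to the August edition!",
}

print(scan(docs))  # flags only the onboarding file, for the SSN pattern
```

Each finding then prompts exactly the question Jeremy suggests: why is this document stored here, and which long-standing business process put it there?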

Jillian: I like the simplicity of that, because often when we talk about how disruptive these technologies are, we're looking through the lens of the large strategic future of the business, our offerings or services. But it really comes down to disrupting your everyday workflows. So in the process of trying to make things faster, you're probably discovering where some security vulnerabilities exist, and you're probably discovering where some inefficiencies exist. So it's really a win-win. It checks all the boxes, if you...

Jeremy: ...take the time to do that. Checking all the boxes. Yeah.

Jillian: Exactly. And ultimately, just stop being an oversharer. Have you ever overshared something?

Jeremy: I am very careful, but I'm sure I have. I think that's the scariest part: I probably have overshared and can't even remember, or I probably don't even know what I overshared.

Jillian: Dangerous

Jillian: All right, on that note: Jeremy, thanks for coming in. It was really fun to talk to you.

Jeremy: It was a pleasure. Thank you so much for having me.

Speaker 3: Thanks for listening to this episode of Insight On. If today's conversation sparked an idea or raised a challenge you're facing, head to insight.com. You'll find resources, case studies, and real-world solutions to help you lead with clarity. If you found this episode helpful, be sure to follow Insight On, leave a review, and share it with a colleague. That's how we grow the conversation and help more leaders make better tech decisions. Discover more at insight.com. The views and opinions expressed in this podcast are those of the host and the guests, and do not necessarily reflect the official policy or position of Insight or its affiliates. This content is for informational purposes only and should not be considered professional or legal advice.

Learn more about our speakers


Jeremy Nelson

North America CISO, Insight

Jeremy Nelson is a multidisciplined solutions architect with extensive experience in networking technologies, including projects in network design and implementation for international enterprises. He leverages his experience in cross-discipline technology fields to ensure Insight services and solutions deliver effective, efficient results for our clients and drive value for the business.

 

Jillian Viner

Marketing Manager, Insight

As marketing manager for the Insight brand campaign, Jillian is a versatile content creator and brand champion at her core. Developing both the strategy and the messaging, Jillian leans on 10 years of marketing experience to build brand awareness and affinity, and to position Insight as a true thought leader in the industry.

 

Sign up: Stay up to date with Insight On

Subscribe to our podcast today to receive automatic notifications about new episodes. You can find Insight On on Amazon Music, Apple Podcasts, Spotify, and YouTube.