Insight On: What Does the New Federal AI Action Plan Mean for Your AI Strategy?

Public sector tech isn’t stuck in the past. Insight Public Sector CTO Carm Taglienti breaks down how mission-driven organizations are modernizing fast — and what the new federal AI Action Plan means for your strategy.

By Insight Editor / 28 Aug 2025 / Topics: Data and AI, Generative AI

Audio transcript:

What Does the New Federal AI Action Plan Mean for Your AI Strategy?

Carm Taglienti:

And AI is many things. It's not just generative AI, but everybody wants to make it just generative AI. Organizations that are, say, taking, I'll call it, simpler capabilities like advanced analytics or graphical depictions of just statistical analysis, and calling it AI, that's another area where, you know, I think you can get played a little bit. Some of that's snake oil. So it's sort of like, hey, you know what? Those are just statistical analytics. That's not really AI.

Jillian Viner:

If you are making technology decisions that impact people, budgets, and outcomes, you're in the right place. Welcome to Insight On, the podcast for leaders who need technology to deliver real results. No fluff, no filler, just the insight you need before your next big decision. Hi, I'm your host, Jillian Viner, and today we're going to school on AI with Insight's Chief Technology Officer for Insight Public Sector, Carm Taglienti. Let's go. Carm, what are you teaching now? I know you said you're taking the summer off, but what are you normally teaching?

Carm:

Let's see. I teach machine learning. I teach cloud computing. I teach cybersecurity, and I teach data science.

Jillian:

How do you have time for this?

Carm:

Right? Yeah, no, it's like, well, I told you I'm a grandparent now, so I don't have any kids floating around anymore.

Jillian:

You still have a full-time job as a Chief Technology Officer? <laugh>

Carm:

I know. Well, to be fair, it's a little easier because I teach the things that I talk about, so it's kind of in line with what I do. Like, if I were a history professor, then it'd be a whole different thing.

Jillian:

Fair.

Carm:

But, you know, I teach in technology, so all the research I do is just reusable.

Jillian:

Carm, I don't wanna bury the lede here, but there is a topic I've been really keen to ask you about, 'cause it's very fresh in everyone's minds. The federal government just released America's AI Action Plan, and this is really pushing a top-down federal strategy for AI adoption, from accelerating innovation to even investing in our domestic infrastructure. There's a lot of urgency here, but there's also a lot of uncertainty. So I'm really curious from your perspective. You're out there talking to leaders across different public sector organizations, from education, government, et cetera. What are they nervous about, if anything? And how much of what's in this initial plan do these leaders need to be aware of? What's kind of just noise right now, and what are the signals they need to pay attention to?

Carm:

So my perspective on it would be, number one, it's really great that we have a plan. Because if you look historically at the European Union, the European Union had GDPR, and we still don't have a comprehensive plan for privacy within the U.S., which...

Jillian:

We're not gonna debate that on this show.

Carm:

<laugh>, right? We'll talk about that. The reason I reference this is because the European Union passed the European Union AI Act, which went into force, I think, this past March, or maybe it was January, it doesn't really matter. But in general, they passed an act that talked about appropriate use, and they had a mechanism for measuring what was appropriate and what wasn't, so you could comply or not comply with the AI systems. In this case, Europe acted first, and we didn't really follow along. I know we tried to do something in California, just like with GDPR, but we never really had a federally led plan. The Biden administration tried to do something like this. But I think now it's good to have a plan.

Carm:

I don't think it necessarily answers all of the questions that we would really want answered, like ethical use, the actual understanding of bias that might be included within the models and model usage, and even terms around what is copyrightable, what is unique content and what isn't. So I think there's still some work to do there as it relates to how we think about the use of AI. The infrastructure's another important part of this, because I think it's pretty clear, at least as we see today, that innovation and understanding of the underlying computing power for inference or for training is gonna be a really important part of this too. So I think supporting that underlying infrastructure matters, and I don't really wanna talk too much about protectionism, but maybe there is some degree of protectionism that might be embedded in that as well.

Carm:

So I think we really need to look at it a little bit more holistically. In the grand scheme of things, it's great that we are moving in the direction of having policies and plans in place that help us understand how we can infuse these capabilities within our own internal infrastructure and within our government systems, and then be able to use them in a trustworthy, appropriate, and ethical way. But I think we're still a ways away. 'Cause, and again, this is Carm talking, not Insight, by the way. Right, right. I think we maybe need to do a little bit more in terms of understanding where commercial models are. How are they aligning to this plan? Like, how are...

Jillian:

They able to... Yeah, I wanted to ask you about that specifically, because one of the recommended policies that this action plan includes, and I'm gonna quote this 'cause I don't wanna misstate anything, says: update federal procurement guidelines to ensure that the government only contracts with frontier large language model developers who ensure that their systems are objective and free from top-down ideological bias. And it certainly is that ideological bias that is catching a lot of attention right now, but there's really no clear framework to define what that means. So, particularly for organizations right now where government contracts are part of their strategy, when you're assessing which large language models you're gonna basically get into bed with, what do you do right now? How do you approach this smartly?

Carm:

Right, right. Well, there are some models that are more open models, where the weights and the algorithms are defined. Those are models you could actually explore and investigate, where you can make a claim about the bias within the model, because we have the training data and we have the weights that were included within the neural network itself, which is part of the model. So you at least stand a chance to, like you said, create the metrics that help us understand what bias or exposure we possibly have. But with the ones that are more closed, even the ones that release their weights without the training data, you don't really know. So I think in those particular cases, we have to be very careful about who's making statements about being free from bias, for example, or about ethical adherence.

Carm:

Because we don't know, in the grand scheme of things. And I highly doubt the frontier model makers will. Like, I don't think Sam Altman's gonna come in and say, oh, no, no, no, you can definitely use that for government systems, if you know what I mean. So I think it's a pretty tall order to say, hey, look, number one, I'm going to give you all my weights and all my training data because I'm gonna make this a completely open model, because that's competitive advantage for the frontier model creators. But then secondarily, I don't think they're gonna turn around and say, oh, no, no, we'll cover all of the risk that's associated with it if, for whatever reason, you provide advice to, you know, a Somali-speaking Minnesotan who's trying to renew their license, and all of a sudden it did something that we found inappropriate. I don't think they're gonna take responsibility for that. So these are really tough questions. Instead of a good question, this is a tough question. <laugh>

Jillian:

But these are questions that I imagine a lot of leaders are dealing with right now. So ultimately, if you're having to make a decision about a large language model to bring into your organization, whose responsibility is it right now to assess whether or not this is a smart investment that's not going to raise any red flags down the road? Or is it something that we really just don't need to worry about, and if we have to make a change later, we can?

Carm:

Yeah, well, that's one piece of the way that we typically engage: instead of having an answer, I mostly present to customers the trade-offs and the questions that they should be asking, and let them make the decision. So we help to guide and advise mm-hmm <affirmative>. As opposed to decide. In a lot of ways, that's because I don't have the answers, and Insight doesn't have the answers yet. I mean, if there were a number that could be generated by just running an algorithm, then amazing, that's really cool. But I don't think we're gonna have anything like that anytime soon. So up until that point, I go back to the education part. We educate customers on what it means to be an ethically unbiased model, how you would measure it, and what is acceptable to you for your particular use case.

Carm:

And then if we have a suspicion that something biased or unethical could potentially be produced, well, we have guardrails, so let's check for it. There are technology ways to account for some of these challenges. Like, I've talked to customers before about using recommendations from frontier models, but then running the output back through either some adversarial network or through a semantic language model, which allows you to understand whether or not these are the kinds of things that we should be telling our customers. And if not, then you block it. So these are the kinds of things where you can use existing techniques and still do the right thing, as opposed to forcing us to expose all of the details of the model itself, so that...
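The guardrail pattern Carm describes here, routing a frontier model's draft response through a second check before it ever reaches the user, can be sketched roughly like this. This is a minimal illustration, not a production filter: the `flag_response` keyword screen is a toy stand-in for the semantic or adversarial model he mentions, and all names are hypothetical.

```python
# Sketch of an output guardrail: a model's draft reply is screened by a
# second pass before it is shown to the end user.
# `flag_response` is a toy keyword check standing in for a real semantic
# classifier or adversarial model.

BLOCKED_TOPICS = {"medical advice", "legal advice", "competitor"}


def flag_response(text: str) -> bool:
    """Return True if the draft response should be blocked."""
    lowered = text.lower()
    return any(topic in lowered for topic in BLOCKED_TOPICS)


def guarded_reply(draft: str,
                  fallback: str = "Let me route you to a human agent.") -> str:
    """Pass the draft through the guardrail; substitute a safe fallback if flagged."""
    return fallback if flag_response(draft) else draft


print(guarded_reply("Your license renewal form is available online."))
print(guarded_reply("Here is some legal advice about your case."))
```

The design point is that the check sits outside the model: you can enforce a policy on what the system says without needing the vendor to expose weights or training data.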

Jillian:

You're not creating a customer agent bot that is saying nasty things about your competitors, or something absurd. Yes. I'm gonna ask you another tough question. Okay. The plan also raises the possibility of a federal override of state laws if they are deemed to, I'm gonna air quote again, unnecessarily hinder AI development and deployment. So again, working with state and local governments, what risks or opportunities does that create for agencies that maybe operate under different rules? And how do you navigate that uncertainty?

Carm:

Yeah, I think you hit the nail on the head earlier when you were talking about the definition of terms. What does it really mean to not hinder? In this particular case, unless we can define very clearly what it might mean, I think we have to either establish policies which are in line with the statutes that are being provided, or make a statement around purpose. And that's actually what we do: we help organizations that are concerned about whether it's going to block or prevent their own policies from being enforced at the state or local level. We have to run it through these processes, make a statement about purpose and clear definition, to know whether or not it's in violation of said law or statute.

Carm:

But I think that's the only way to really move through it, because if you fly blind, you may find that things change over time. So I think it's about following those standardization processes, at least initially, and being very clear about use cases. That's another thing we do quite a bit with our state and local government customers: we help them with their AI policy. And within the context of that policy is clear definition and intention, which I think is really important for this kind of thing. So here, it's err on the side of specificity and not so much on assumption.

Jillian:

So, in the grand scheme of things, do you think, and this is again Carm's opinion, do you think that America's AI Action Plan and these policies are gonna help accelerate AI innovation within the public sector?

Carm:

Eventually, yes. And I think innovation is already happening. I think there are maybe some concerns. And unfortunately, you know, this is probably classical, I'll call it classical, regulation. This goes back, I mean, I'm gonna go all the way back to maybe even cloud adoption. Basically, when you get sort of unfettered access to tools and environments, and then all of a sudden you apply regulation on top of it, you stifle progress significantly. I'm sure the intent is to ensure that we continue to make progress as quickly as we have been. But unfortunately, when you force a level of pause, all of a sudden people are having to go back and check their work, or validate, or, like I said, create more specificity, which tends to slow things down.

Carm:

So while I think it's the right thing to do, because I do think we have to start somewhere, and it will be evolutionary, of course mm-hmm <affirmative>. It's great to say that we want to enforce bias prevention and regulatory and security constraints, and ensure that there is no misuse, not just within the government sector but more globally. So I think that's really important, and the infrastructure can do that. I just, like I said, think it may slow things down initially. Again, Carm talking <laugh>, but I'm already seeing it, actually, because state and local governments are already asking us, well, what are we supposed to do? Our policies might be overridden by the federal government. What do we do? And my recommendation, as I said before, is just make sure that you're very clear and articulate about what it is that you're going to do and how you're going to do it. And then you should be good to move forward. I mean, it's not about circumventing anything. It's more about making sure that you're working within the context of understanding while you advance and innovate, which they're all doing anyway. Mm-hmm <affirmative>. But it may slow you down a little bit.

Jillian:

So, just to kind of put a little bow on it: these are interesting times. Everything's moving fast. There's a lot of direction that we might head in as a society with AI. But while we're in this transition, this evolution phase, not knowing exactly what's actually gonna get put to paper with this AI Action Plan, your advice sounds like: proceed with caution and intent, make sure that you're consulting with people who really understand the rules of the road, and just be clear about your intentions with your AI use cases. Do I have that right?

Carm:

Yeah, a hundred percent. And one thing that I love to tell people more generally, and certainly in this case, is, you know, trust but verify. You constantly have to make sure that you understand what it is that you're doing, and then verify, and that could even be the responses from your AI systems. I think that's a really important part of understanding. And certainly within the context of the public sector, we're very good at understanding that there are regulations, there are constraints being put in place. This is just another instance of making sure that we understand what those are, but then also thinking about, well, how do we get things done even within those contexts? And the government's been doing that for as many years as the government's been in existence.

Jillian:

Right? We can't stop, we can't slow down. So we'll figure it out. What's been the biggest change for you moving from the commercial sector to the public sector? Are the conversations different? Are the concerns and worries different?

Carm:

They're a little different, but, and this is the interesting part of this discussion, it is effectively now the same. I think generative AI really changed the dynamic of the conversation, because it didn't just disrupt the enterprise or the government or the public sector independently. It actually changed everybody. It's basically a societal change, or disruption, around AI, and I think everybody's feeling it. Mm-hmm <affirmative>. There are economic things that have happened and political things that have happened, which we can talk about as well. But in general, I think it's something that just sort of infused itself into society. So everybody's taking advantage of it now, or needing to understand how to use it more effectively.

Jillian:

It seems like the public sector has this longstanding stereotype of being slow, outdated, always behind the times, particularly with technology. Tell me I'm wrong.

Carm:

You're wrong. <laugh> I think what happened aligns with what I just mentioned about this disruptive change in society and the way that we do things. I mean, of course there was a change in the political ecosystem within the US, and DOGE and other kinds of efficiency measures that are being taken now, which really did force this change. And I guess, to some extent, maybe even COVID did it too. It forced us to be more technology savvy. But I think DOGE really forced us to think more about what we're actually able to do from an efficiency perspective, not just within the commercial sector, but the public sector as well. So we're trying to do more with less. And we've always had reduced budgets, even in the public sector.

Carm:

So we don't have the ability today to sit on the sidelines anymore. We have to be able to deliver services: in education, which is also part of the public sector, to our students in K-12 and advanced education, but also for our state and local government workers, or DOD, or intelligence community workers. Everyone is really looking at the ability to leverage existing capabilities and do more with less, especially with automation, which is another big component of this. And I don't wanna get too deep into things like agentic AI, but I think ultimately that's where we're going: thinking about automation, doing things more effectively, and really changing the conversation about impact and return on investment. And that's true now in the public sector as well, not that it wasn't true before, of course, because we're trying to make sure that we can be as efficient as we possibly can.

Jillian:

Outside of perhaps cuts caused by DOGE or other forces, you know, this concept of doing more with less, where are you seeing public sector organizations really kind of skip ahead of the private sector? 'Cause I think, again, the common misconception perhaps is that the public sector is slow, the private sector is fast, and small, emerging private sector companies are even faster. We've seen that all the time these days, with AI really helping these emerging companies just completely take off. So to hear you say that actually the public sector is not that far behind the times, I'm kinda surprised, and I'm curious: where are you actually seeing the public sector take the lead over the private sector?

Carm:

Yeah, it's a really good question. And I would say probably two things, at least that I've seen so far, and I think this will probably come to pass as we move deeper into the use of AI systems within the public sector, or more across the board for any business. Number one, I think, was always this ability to think about how we do things in a standardized way. I don't wanna say methodical necessarily, but it is kind of methodical. Like, compliance standards are usually put in place in order to create repeatability within the public sector. And especially now with AI, it's super important to make sure that we understand what we're implementing, how we're leveraging these capabilities, and whether we're doing it according to the best ethical, bias, and security practices, et cetera.

Carm:

And the government's really good at that. So I think we're a bit ahead in terms of how we're doing this as it relates to AI use and adoption in the grand scheme. We're following things like the NIST standards, the AI Risk Management Framework, for example, or even the NIST cybersecurity models as they relate to AI. These are things that are being adopted and implemented today across the board within major states, municipalities, or government sectors. And I think it's a little bit ahead of commercial industry. And this is not a disparaging comment, but commercial industry is kind of the Wild West right now. It's just, hey, let's go do whatever we need to do to create competitive advantage. But I think eventually we're gonna have to think about ethical, trustworthy use, and even small things like copyright protections and other things like that, which could be really critical to us overall.

Carm:

So that's number one. Number two was really more focused around scale. The government's always been good at scale, so how do we implement AI capabilities at scale? They already have these kinds of mechanisms in place to help figure out how we're going to, say, enable 20,000 workers or 50,000 workers. And we're seeing a lot of disruption in the commercial sector because we didn't think about the transformation of our culture. We were really just thinking about how we're going to start using AI capabilities: here's your Copilot license, have a nice day. <laugh>

Jillian:

That sounds about right,

Carm:

Right? As opposed to: we've got processes in place mm-hmm <affirmative>. In order to effectively use these capabilities. And because it's repeatable, it's not so hard for us to figure out how we're gonna use this to create meaningful change within our organization. So I think those are really the two reasons. And I wouldn't necessarily say that the public sector is ahead, but they have better mechanisms in place to adapt faster. How about that?

Jillian:

Yeah, I understand what you mean about being methodical in that approach. If you're having to do it at a large scale, you do have to be methodical, otherwise it's not gonna be successful. And, you know, I laughed about the Copilot license, but I think that is probably a true statement for a lot of organizations. There's been sort of this panic mode of, hey, we've gotta learn AI, so let's just get it in the hands of our employees. In a lot of cases that might be successful, but there's something to be said about actually taking a steadfast approach: these are the steps you need to take, this is how you do something, and you roll it out really intentionally. Carm, you're a Chief Technology Officer within the public sector here at Insight. You talk to folks across lots of different organizations. And again, I'm gonna zero in on this, because I'm still surprised you're telling me this. I think I've actually heard you say that the public sector in some ways is, quote unquote, killing it. So I'm just dying to know, and I know you can't give away a specific client name, but can you give an example of an organization that is using technology right now in a very modern and really sophisticated way that we're not seeing the private sector quite grasp onto yet?

Carm:

Right. I'll focus on education, because it's an easy example and maybe one that people can understand, and I don't have to mention any secret names. <laugh> The education sector is one where we're seeing a lot of uptake as it relates to the public sector. Education is a key component of how students are able to move along with the disruption that's occurring within our society, but also how we can take advantage of it. I was literally at a school system last week talking to them about how we leverage some of these capabilities, because we can't even get teachers to show up in class. So how do they prepare their lesson plans? Through AI systems, we can do that really easily. It helps them prepare for the day's materials. They might be a substitute teacher, or maybe they're even temporary for six months or more, but they don't really know the subject area.

Carm:

So we can prepare them really quickly to be able to do that. And I won't mention products, but in the grand scheme of things, we can set the context, we can prepare the lesson plans, we can ensure that we're keeping within policy, we can prepare the homework assignments. We allow the students to have subject-area large language model experiences, so that they can learn on the fly without being given the answers. It's a really impressive kind of capability that allows us to take advantage of these AI systems without necessarily saying, oh, it's doing all our work for us, we're out of a job. It's more about embracing the learning concepts, and that's really impactful. And there are other institutions, too, where we've been working mostly around things like: how can we clarify our data assets? How can we move faster in terms of implementation? And coding's another big one, actually, with a lot of institutions that we work with. We need to create all of this code for whatever military exercise X, Y, or Z, and we need to do it faster, so we can leverage things like GitHub Copilot or mm-hmm. any other kind of code generator. So we've seen examples there too. Those are probably two good ones that most people can relate to.

Jillian:

I love that you brought up education as an example. I know that you are an adjunct professor, and you teach basically all of the topics that you talk about at work every single day. I'm curious: how has that experience of being an adjunct professor changed or shaped the way that you think about how to introduce AI into the workforce? And I'm curious about your answer for a couple of different reasons. Number one, we were joking earlier about how organizations have sort of just shoved AI into the hands of their employees. I think as a people manager, you approach AI, and this is my suspicion here, my hunch, with a little bit more patience and the understanding that you're probably not gonna get what you want on the first try, and that it really does depend on how specific you are in the instructions.

Jillian:

And as a people manager, you kind of understand that, because you've had the experience of working with humans: onboarding new employees or junior employees and walking them through step by step. This is what I expect you to do, this is the outcome that I expect. Or maybe you don't have an outcome in mind, and you just say, go work on this project, see what they come back with, and then you provide feedback and critique and move on from there. So again, as a people manager, that's how I approach AI: understanding that it's very much an iterative tool that I have to coach and provide a lot more instruction to. So, as an adjunct professor, how has being a teacher shaped the way that you approach AI and how others should adopt it?

Carm:

That's a great question. And I have a couple of different areas that I can respond with. First of all, I should probably let you know that my doctorate is in education, and I'm an educator. I was studying adult learning, organizational behavior, and cultural change, so this is really something that is super important to me, and of course my passion for education is part of that. The way that I really think about it, or approach it, is to not think about using these techniques as replacements for what humans can do, but to ask how they augment our ability to learn more and to learn faster. I always take the approach of learn first. And I do this, actually. If you follow me along, I've been traveling for the last two months, probably, and I'm constantly educating the people that we talk to.

Carm:

So, like, I'll deliver prompt engineering training to people. It's not a formal class. It's more of: I show up, I help them understand the different techniques they can use and how to become better, and I educate them on what it is that they're experiencing, not just from a societal perspective, but from an organizational culture perspective as well. That's really the modality that I typically take, which I think is helpful. And I don't want to bring up the existential crisis that some people are talking about, right? But if we don't learn, then I think we can become lazy. So it's really, I think, an important part of just understanding how we integrate with our customers. And you would not believe how much people appreciate the fact that they're learning and you're educating them.

Carm:

You're not just saying, here's a tool, type these three things and you're done. It's more of: how can you really explore outside of the box? How can you really learn about new techniques and new capabilities and create that passion for education? Which, again, is really kind of my thing. So that's where I tend to focus. And maybe it's my mini mission in life to help people learn more about these capabilities, but it's also about how we adapt as humans to this whole concept of AI taking over the world. You know, it is a really interesting time for us, because depending on who you talk to, like, Geoffrey Hinton's off saying that eventually silicon's going to be better at understanding and brain power than we are, so we're in trouble. And I was like, oh my God, don't say that. <laugh> There are other ways to think about what it is that we have the opportunity to take advantage of.

Jillian:

I'm gonna take advantage of you being my private teacher right now. What is your number one prompting advice for the mediocre prompters out there?

Carm:

Oh, so it's basically: let the AI system, or chatbot, or whatever you're using, help you write the prompt. Ask it to help you write the prompt that you're going to prompt it with. So if you're interested in understanding more about physics, ask it: how can I learn more about physics, and which prompt would you use to build me a course of study, or maybe an explanation of a particular concept? It will help you understand how to do it more effectively.
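The meta-prompting hack Carm describes here can be sketched as a small helper. This is a hypothetical illustration; the function name and the exact wording of the template are assumptions, not any vendor's API:

```python
def build_meta_prompt(goal: str) -> str:
    """Compose a 'prompt about prompting': ask the model to write
    the prompt you should use for your actual goal, then use it."""
    return (
        "I want to accomplish the following: "
        f"{goal}. "
        "Before answering, write the most effective prompt I could "
        "give you to get that result, and explain why it works."
    )

# Carm's physics example: ask the model to design the prompt
# for a course of study, rather than guessing at one yourself.
prompt = build_meta_prompt(
    "learn the basics of physics through a course of study"
)
print(prompt)
```

You would paste the resulting text into whichever chatbot you use, then run the prompt it hands back to you.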

Jillian:

That's a great hack. I think I actually shared with somebody recently that I use ChatGPT to shape my prompts all the time. I wanna go back to education for a second, because you used that as your use case where you're really seeing some progress. I think there's a lot of excitement about how AI can tailor education, personalized learning; that's probably one of the greatest benefits people are excited about with AI, the personalization of things that otherwise couldn't be personalized. What do you think is the biggest advantage, or even a misconception, between the public sector and the private sector when it comes to technology, specifically AI? What is the edge the public sector has when you're thinking about the use cases and value of AI?

Carm:

That's a really good question, and I think it depends on the public sector domain. Education, I think, is specifically about how we continue to engage with students and provide the best possible experience, which is something we've always historically tried to do, right? We would either try to hire more teachers, better teachers, or provide technology to help with the learning experience. I think AI now is exploring and investigating alternate modalities of communication. I see a lot of speech-to-text and text-to-speech in the educational space, because people may be more auditory learners or they may be more visual learners. So you see,

Jillian:

It's definitely auditory <laugh>.

Carm:

There you go. I know, it's amazing. I'm that way too.

Jillian:

I love NotebookLM, just for that reason. You can dump all your research in there and it creates a little podcast for you.

Carm:

Right? Exactly. Yeah, because when we read, we're so slow; it's true that our bandwidth is much better with the auditory and visual modalities. And those are starting to be embraced within educational systems. In some ways it depends on how much money the system has. Historically this would've been very expensive to do, but the democratization of AI means you can now provide these services at relatively low cost. Like you said, even NotebookLM, and as a matter of fact there are educational versions of these models. There's something called LearnLM, which is a set of models targeted more toward students and student behavior.

Carm:

And then adding in some of these capabilities we're describing is relatively easy. So I think education is a little bit ahead of where the commercial sector maybe is. And again, I don't mean you're gonna walk into a casino and see an avatar greeting you to go play poker or something <laugh>, though those things are definitely happening; the adoption rates are maybe slower. Educational systems are looking to create a better experience, to increase the ability, especially in STEM, for our kids to be more well-versed in mathematics or engineering or physics or history, whatever it is. We really just wanna make sure we can take advantage of the technology that's out there. Some of it is maybe more of a noble cause, but it's definitely being embraced, and I think the cost of entry is not so high that it prevents people from moving and embracing it.

Carm:

So I think that's why you see this difference. 'Cause you might say, well, does that mean public sector education systems have more money? No. And another point to make here is that large language model vendors, or large AI vendors, do have educational opportunities or licensing that make this more accessible to educational groups, and I think that's also really beneficial. At a few of the institutions where I teach, Anthropic is a big one, mm-hmm <affirmative>. They have educational licensing, Microsoft has educational licensing, Google, et cetera. I don't wanna leave anybody out, but I think they all do at some level. Yeah. So it definitely becomes more approachable.

Jillian:

I think what I'm hearing from you is that the user experience of the students is one example; if you were to replay that for the private sector, it's the user experience of their customer base. But really what you're saying is the public sector is approaching this with more of that mission-driven focus, versus just where's the ROI, where's the profit.

Carm:

Right? Yeah, exactly. And when you think about it, let's go back to generative AI and large language models. Most of our recorded knowledge is textbooks or other kinds of domain knowledge, and a large language model is really good at that. So if you want to learn, you have this perfect opportunity to do it. It's almost as if the use cases are really tailored toward education. Because even if you're a customer service representative within some large widget manufacturer, well, guess what you're doing? You're looking up information about those widgets, and you're actually learning. You're basically querying a large repository of knowledge to understand: what do I say to this person when they ask about widget X, Y or Z? That's a learning experience at some level. So it really is aligned in that area, which is kind of cool when you think about it.

Jillian:

All right. So Carm, let's get specific. Walk me through a public sector organization; I know you can't name names, that's fine. But walk me through a project that has applied technology, particularly AI, that's really made an impact, that's really impressing you.

Carm:

Yeah, so we delivered a project for a state that was looking to create better and more efficient service for its citizens generally. In this particular case, they were interested in how the Department of Transportation could better respond to requests from citizens about things like: How do I renew my license? When do I need to get a Real ID? What is a Real ID? Where can I go? Questions like that. The big kicker, which was the most impressive part, was that, number one, this particular state had 36 different languages spoken.

Jillian:

Holy smokes, which is crazy

Carm:

Yeah. To think about, right?

Jillian:

How do you find people with the talent to speak that many languages?

Carm:

Yeah, it's not easy. So either you don't provide the level of service that you could, or it takes a long time, because you have to have multilingual people on staff who can answer these questions. Mm-hmm <affirmative>. That becomes a big problem. It also means, in this particular case, if you had a hundred different requests, you could probably address 10 of them over the course of time before people just give up. The other important part is not only, number one, being able to translate into multiple languages; you can also change the call center modality. Being able to do things like: I can send you to a website, understand the question you're asking, respond in the language in which you asked it, and also reroute you to the systems that allow you to fulfill your particular request.

Carm:

Like, I wanna renew my license: take me to that location, and oh, by the way, here's some information that I'm gonna carry along and fill out for you, to make the process easier. I can also do it verbally, so speech to text and text to speech, to understand dialects and be able to respond. Sometimes it gets it wrong, the way people sometimes talk in different dialects, et cetera, mm-hmm <affirmative>, but it's still pretty impressive, pretty darn good at being able to do it that way. So again, it's tying into the system itself to allow for that expediency, an immediate or more immediate response. The citizens are happy, they get their objective; the state's happy; the state workers are happy, because everybody's able to do the things they need to do in order to be successful.

Carm:

And so this was just a great example of applying technology. Some of this technology already existed: voice translators were already there, text translators were already there. The ability to now bounce that off against a large language model, with the context of the state, the state agencies and the services they provide, and tie it all together to create a real solution that makes this experience much easier, is just an amazing feat, I think, as it relates to what the state is looking to accomplish. And from there we can go to any other services the state may provide, across the board. That's really the objective. The next step is: how do we take this to every other state in the union? Because there's so much repeatability here, and we can refine it, make it better and just continue to evolve. So it's really wonderful.
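The flow Carm describes (detect the citizen's language, understand the intent, and route them to the right service) can be sketched as a toy pipeline. Everything here is hypothetical: the service URLs, the phrase table standing in for a translation service, and the function names are illustrative assumptions; a real system would call translation and language-model services instead of lookup tables.

```python
# Illustrative routing table: intent -> service endpoint (hypothetical URLs).
SERVICES = {
    "renew license": "https://dmv.example.gov/renew",
    "real id": "https://dmv.example.gov/real-id",
}

# Stand-in for a translation/intent service: maps a few known phrases
# to (detected language, English intent). A real deployment would use
# an actual translation API and an LLM for intent extraction.
TO_ENGLISH = {
    "renovar licencia": ("es", "renew license"),
    "renew license": ("en", "renew license"),
    "what is a real id": ("en", "real id"),
}

def route_request(utterance: str) -> dict:
    """Detect language and intent, then pick the service to route to."""
    lang, intent = TO_ENGLISH.get(utterance.lower(), ("unknown", None))
    url = SERVICES.get(intent) if intent else None
    return {"language": lang, "intent": intent, "route": url}

# A Spanish request is recognized, mapped to the license-renewal
# intent, and routed to that service.
print(route_request("Renovar licencia"))
```

The point of the sketch is the architecture, not the lookup tables: translation, intent detection and routing are separate, swappable stages, which is what makes the solution repeatable across states.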

Jillian:

It's a great example of taking foundational work that's already been in existence, like the chatbots and the translations, and then AI is really, I won't say a magic bullet, but it really is the unifier that brings all the systems together and amplifies our capabilities. We talk all the time about wanting AI to be an amplifier of human capabilities; AI is also an amplifier of the technological capabilities we've developed. And I love your point about scalability: if this works in one organization or government, it could absolutely be replicated, and you can see that happening across the United States. It'd be amazing.

Carm:

Game changer. Yeah, yeah.

Jillian:

I wanna ask you about something I've heard you say about AI: that sometimes there's some AI snake oil. There's a lot of hype right now around AI. What are the biggest red flags that you think leaders should be aware of when they're evaluating AI tools and vendors?

Carm:

Two things. Number one, not all AI is the same AI. I always talk about AI being generative AI, mm-hmm <affirmative>, and classical AI. Classical AI meaning we've been doing machine learning or advanced analytics for decades. So it's not really different, but what's really important to understand is: what is the AI that you want, what is the AI that you need, and which AI is being sold to you, if you will. Mm-hmm <affirmative>. So if it's generative AI, say, that is trying to do computer vision, well, computer vision models are very mature, they've been around for decades, and they probably are gonna do a better job than generative AI will today. So maybe be careful about those kinds of claims. For example, Sora is not gonna give you a human-fidelity image, because we already know it can't spell very well, at least today.

Carm:

But there are systems that can, and these are more classical systems that have been around, with decades of research behind them. So I think number one is really just to understand what it is, because you can get caught up in the definition of terms. Once again, AI is many things; it's not just generative AI, but everybody wants to make it just generative AI. And then the other thing, some of the snake oil, is when you see organizations taking, I'll call it, more simple capabilities, like advanced analytics or graphical depictions of just statistical analysis, and calling it AI. I think you can get played a little bit there. Some of that's snake oil; it's sort of like, hey, you know what, those are just statistical analytics, that's not really AI. And again, I'm not a doomsday, sky-is-falling kind of person, but you do have to be careful. There are organizations jumping on the bandwagon, and maybe they don't have real AI capabilities, either classical or generative, and they're just rebranding some of what they're already doing. It might still be valuable, but they're rebranding it as AI. So I think you have to be careful about that.

Jillian:

Yeah. It makes me think back to the organic food craze, when organic was really important, right? All of a sudden everything was organic, and then all of a sudden labels were highlighting that there was lots of protein. When you look back at it, some of the ingredients really weren't that different; organic can mean a lot of different things. So yeah, classical AI. Do you think we'll ever have jazz AI?

Carm:

Sure. Jazz hands, <laugh>.

Jillian:

Yeah. Okay, here's something I really want a real answer to. What advice do you give to leaders who feel the pressure to do something with AI, but they don't have a use case defined and don't really know where to start? This probably applies to a lot of people right now. What's the biggest mistake they'd make? What do you tell 'em to do?

Carm:

Oh, yeah. Well, I don't believe it's necessarily true that they don't have a use case. I think they probably just haven't discovered it yet. But if they really, really don't have a use case, like if you work in construction or you're a plumber and you're like, no, no, we gotta have AI, it's like: no, you probably don't need AI <laugh>,

Jillian:

Your job's probably safe from AI. You're probably one of the few.

Carm:

I would probably say you're okay. Yeah, you don't need AI. But more specifically, there probably are specific use cases, and some of it just has to do with understanding what their business challenges might be, understanding how they can maybe do things more effectively, and explaining what AI is capable of. Then you can probably uncover use cases. And I think the biggest mistake people make, and we sort of joked about this earlier, and I'm not gonna name any names, but everybody knows companies that have done this: it's like, hey, we just bought 10,000 licenses of XYZ chatbot product, I won't mention any products, and then you dump it on people's desks and it's like, how come we're not doing any better? What's wrong? So that's a big mistake we can make. We got AI, mm-hmm <affirmative>, how come we're not doing AI?

Jillian:

No change management, no training,

Carm:

Nobody knows how to do it. Yeah, right, exactly. It's not that simple. So I think it's really understanding the reason why you would want to use AI or, like we said, uncovering what those use cases are, because in probably 95% of cases it's really just a matter of uncovering them. If you really have a desire to become an AI-led or AI-first company, you can probably uncover exactly where they are. And think about what I was talking about earlier: there probably isn't a single company on the planet that isn't interested in finding information about their products or services, or generating some kind of content, whether it's marketing material or something else, or some level of discovery. All of these use cases are pretty apparent; it's just a matter of executing.

Jillian:

In other words, it doesn't have to be fancy or flashy, it just needs to be practical and useful.

Carm:

A hundred percent. Yeah.

Jillian:

I'll say one of my favorite use cases of Copilot is to have it find files and conversations for me. So practical, simple. Right? Hundred percent. Carm, I'm gonna close on a tradition here and ask you our final question: what's something that you've learned recently?

Carm:

Oh, let's see, what have I <laugh>, does it have to be in the context of anything? Well, it's in the context of AI. Alright, so I'm gonna be a total geek for you. I am super interested in human intelligence as it relates to machine intelligence, and I just got this book; it's actually a textbook for a graduate course in neuroscience. So I'm learning about neuroscience, and it's really about how the brain actually works: how do dendrites work in the brain? It really is chemical thresholds in the brain that activate the neurons that allow us to think and remember. So that's what I've been learning about most recently: understanding how the brain works and how it relates to things like neural networks. And, you know, neural networks are obviously a digital version of our analog brain. Man, it's just fascinating stuff. You probably don't wanna have a dinner conversation with me, 'cause you're gonna be like, all right, I'm tapping out, this is not something I'm interested in. But it's really cool to me.

Jillian:

I actually heard somewhere recently that the next great thing to study in college is actually psychology, because it will pair well with the technology. So I think you're on the right track.

Carm:

There you go.

Jillian:

Well, Carm, thank you so much for the time today. It's always a pleasure to talk to you.

Carm:

Yes, likewise. Thank you so much. I appreciate it.

Speaker 3:

Thanks for listening to this episode of Insight On. If today's conversation sparked an idea or raised a challenge you're facing, head to insight.com. You'll find resources, case studies and real-world solutions to help you lead with clarity. If you found this episode helpful, be sure to follow Insight On, leave a review and share it with a colleague; it's how we grow the conversation and help more leaders make better tech decisions. Discover more at insight.com. The views and opinions expressed in this podcast are those of the hosts and the guests, and do not necessarily reflect the official policy or position of Insight or its affiliates. This content is for informational purposes only and should not be considered professional or legal advice.

Learn more about our speakers


Carm Taglienti

Chief Technology Officer, Insight Public Sector

Carmen has more than 25 years of experience as a cloud computing, data science, data analytics and data management expert. As a Chief Technology Officer for Insight Public Sector, he focuses on cloud-based business solutions across multiple industries and technical domains. He also serves as an adjunct professor at Northeastern University Khoury College of Computer Sciences and the College of Professional Studies teaching graduate courses in data science, cloud computing and analytics. Carmen is a published author and speaks at industry and technology events.

 

Jillian Viner

Marketing Manager, Insight

As marketing manager for the Insight brand campaign, Jillian is a versatile content creator and brand champion at her core. Developing both the strategy and the messaging, Jillian leans on 10 years of marketing experience to build brand awareness and affinity, and to position Insight as a true thought leader in the industry.

 
 

Sign up to stay up to date with Insight On

Subscribe to our podcast today to receive automatic notifications about new episodes. You can find Insight On on Amazon Music, Apple Podcasts, Spotify and YouTube.