Episode 556

HOW TO THRIVE AS AN
INDEPENDENT PROFESSIONAL

Markus Starke

AI Project Case Study

Show Notes

Markus Starke, an advisor on cybersecurity and digital process transformation, has recently been working on cybersecurity for the AI applications that corporations are using. Markus explains that AI plays a significant role in his work, particularly in intelligent process automation. This concept involves combining technologies like robotic process automation, process mining solutions, chatbots, Optical Character Recognition, and more advanced forms of machine learning and generative AI to build end-to-end processes. However, cybersecurity issues can affect these automation systems, especially as more users adopt them individually, outside a controlled corporate framework.

 

Safety Measures with AI Automation

Markus describes several dimensions of cybersecurity for AI automation. To assess the safety of AI-related automation, clients are asked to review their setup from a Target Operating Model perspective, guided by a framework covering five dimensions. Governance covers roles and responsibilities, access, user rights, and other aspects of the system. Secure development processes ensure that solutions access only the data they should, store data in the right places, and use encryption. Securing the platform is a third dimension, typically by applying standardized frameworks for cloud-based solutions. Awareness addresses the human factor, one of the largest risk factors in cybersecurity. And lastly, monitoring and reporting ensure that the environment remains controlled to a degree.
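
To make the five dimensions concrete, here is a minimal sketch of the review as a checklist data structure, in Python. The dimension names follow the episode; the `ReviewDimension` class and the sample questions are illustrative assumptions, not the actual framework Markus uses with clients.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewDimension:
    """One dimension of the Target Operating Model review."""
    name: str
    questions: list[str] = field(default_factory=list)

# Illustrative checklist; dimension names follow the episode,
# the sample questions are hypothetical.
REVIEW_DIMENSIONS = [
    ReviewDimension("Governance", [
        "Who owns roles, responsibilities, access, and user rights?",
    ]),
    ReviewDimension("Secure development", [
        "Does each solution access only the data it should?",
        "Is data stored in the right places and encrypted?",
    ]),
    ReviewDimension("Platform security", [
        "Is the (e.g. cloud) platform hardened against a standard framework?",
    ]),
    ReviewDimension("Awareness", [
        "Do users know the risks of the tools they use individually?",
    ]),
    ReviewDimension("Monitoring and reporting", [
        "Is the environment controlled and reviewed on an ongoing basis?",
    ]),
]

if __name__ == "__main__":
    for dim in REVIEW_DIMENSIONS:
        print(f"== {dim.name} ==")
        for q in dim.questions:
            print(f"  - {q}")
```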

 

Examples of Cybersecurity Threats Using AI Tools

Markus discusses cybersecurity threats involving AI tools, such as generative AI (GPT) used to work on company data. One example is a human user extracting data from their corporate data pool and emailing it to their private email account, from which it could end up in a public ChatGPT instance. This can be controlled by creating awareness and by setting up standardized IT security control mechanisms that limit data extraction from corporate networks. Another example is uploading proprietary corporate data to ChatGPT for advanced data analytics, which could expose it to a potential attacker: private computers are typically less secure than corporate ones, making them more prone to being attacked or losing data. Corporations also generally want to limit the kind of data that is made publicly available through generative AI applications, and Markus notes that it is not always clear what happens to the data that is input to them. With consumer versions of ChatGPT, any uploaded data could potentially end up in training data. There are options for setting up AI applications limited to specific corporate use cases, but these solutions must be evaluated case by case to ensure they fulfill specific needs and governance requirements. With Gen AI, it is crucial to balance not limiting users too much against maintaining control.
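
As an illustration of the kind of standardized IT security control Markus mentions, here is a minimal, hypothetical sketch of a data-loss-prevention check that flags outbound mail carrying sensitive-looking content to a non-corporate address. Real DLP products work from data classification labels and far richer signals; the domain set and patterns below are invented for the example.

```python
import re

# Hypothetical allow-list of corporate domains; real DLP tooling
# would use classification labels, not a regex heuristic.
CORPORATE_DOMAINS = {"example-corp.com"}

# Crude markers suggesting corporate data (illustration only).
SENSITIVE_PATTERNS = [
    re.compile(r"\bconfidential\b", re.IGNORECASE),
    re.compile(r"\bemployee\s+id\b", re.IGNORECASE),
]

def flag_outbound_email(recipient: str, body: str) -> bool:
    """Return True if the mail should be blocked or reviewed:
    sensitive-looking content addressed outside corporate domains."""
    domain = recipient.rsplit("@", 1)[-1].lower()
    external = domain not in CORPORATE_DOMAINS
    sensitive = any(p.search(body) for p in SENSITIVE_PATTERNS)
    return external and sensitive

# Example: an employee forwards contract data to a private account.
print(flag_outbound_email("me@gmail.com", "Confidential: employee ID list"))  # True
```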

 

AI Tools Retaining Data

The discussion turns to AI features in tools such as Zoom, which may retain data on calls or transcribe them without letting users know, raising concerns about who can access that information. It is essential to ensure that such tools align with cybersecurity standards and comply with protection requirements, generally as a case-by-case consideration; Markus emphasizes that it always makes sense to question security processes. He adds that basic rules apply to the use of AI: ensure data is stored in controlled instances, put only anonymized data into public instances, and use strong protection mechanisms like passwords, access rights, and encryption. When working with clients, independent consultants should not make their own work too easy: even a small AI solution for a specific business problem deserves at least a brief cybersecurity assessment, since skipping it creates both a real business risk and a regulatory risk. Cybersecurity can sometimes be perceived as slowing business down, but it is an essential control that must be maintained. Markus strongly recommends that consultants be aware of active and forthcoming regulations that apply to AI when setting up solutions for clients.
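
One of the basic rules above, putting only anonymized data into a public ChatGPT instance, can be sketched as a simple redaction pass. This is a toy illustration of that rule; production anonymization would use a dedicated PII-detection tool and human review rather than two regular expressions.

```python
import re

# Minimal redaction pass before sending text to a public LLM instance.
# The patterns below are illustrative; they catch common email and
# phone formats only, and will miss other kinds of sensitive data.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\+?\d[\d\s/-]{7,}\d"), "[PHONE]"),
]

def anonymize(text: str) -> str:
    """Replace email addresses and phone numbers with placeholders."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

prompt = "Summarize: contact Jane at jane.doe@example-corp.com or +49 170 1234567."
print(anonymize(prompt))
# Summarize: contact Jane at [EMAIL] or [PHONE].
```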

 

Timestamps:

00:03 Cybersecurity risks in AI-powered process automation

03:10 Governance and security for AI-related automation

05:53 Cybersecurity risks with AI tools and data

10:48 AI data security and control

14:47 Cybersecurity and AI in business

 

Links:

Freelance Website: http://starkeconsulting.net/

Company Website: https://www.ten-4.de/

 

 


 

Transcript

SPEAKERS

Markus Starke, Will Bachman

 

Will Bachman  00:03

Hello, and welcome to Unleashed. I’m your host, Will Bachman, and I’m excited to be here today with Markus Starke, who is an expert in cybersecurity. In particular, he has been working recently on cybersecurity for the AI applications that corporations are using. Markus, welcome to the show.

 

Markus Starke  00:26

Hi, Will, thanks a lot for having me here. And yeah, great to talk to you.

 

Will Bachman  00:32

Great. So why don’t you just dive in and walk us through a case example. Before we started recording, you explained to me that some of the AI tools that companies are now adopting, ChatGPT and some of the other Gen AI tools, as well as tools that are not just Gen AI but other types, are subject to cybersecurity attack, and you’re helping companies defend against that. So walk us through a case example.

 

Markus Starke  00:59

Yeah, thank you, I’m happy to do that. So generally speaking, AI of course plays a broad role in our work. For every consultant it’s an important topic, but specifically, what we have come across is really thinking about how AI, and in a broader sense the AI that is used in what I call intelligent process automation, can be affected by cybersecurity issues, and how it can be protected. Just to explain this concept of intelligent process automation: AI is more and more getting into regular process management and process automation. So you can imagine you have a set of technologies that work together, things like robotic process automation, which is not a new thing anymore, process mining solutions, maybe chatbots, optical character recognition, but also, more and more, some degree of AI, both the simpler and more understandable versions of machine learning and generative AI, getting into these applications and building, or supporting, end-to-end processes. And we have come to the situation that customers address us with the question: our set of intelligent process automation tools, and the way we use them in combination, is that still safe? Especially as more and more tools come into play that you don’t, let’s say, at least feel you totally control yourself. And that’s maybe the key question that comes up around generative AI especially: how much control do we have over it, particularly as more and more users use these tools individually, and it’s not always really controlled by a corporate framework? That can create some substantial issues.

So, concretely, we have been asked by clients to review their setup for these automation situations, including AI applications, mostly from what I call a Target Operating Model perspective, not so much from a very deep technical perspective. And we have created a framework that basically helps guide us through that process, and I would like to share some of the basic perspectives we have been using for that framework. They are definitely applicable to other situations as well, but I consider them especially helpful for these AI-related automation situations. We say that if you want to evaluate this, you have to look, on the one hand, at the question of how you actually govern things. How do you govern your whole set of solutions that work together end to end? That can be workflow management, robotics, generative AI solutions, process analytics, etc. How do you govern that in terms of roles and responsibilities, access, user rights, and so on?
Then, how do you ensure that you have an actual secure development process? How can you make sure that any solution within that, for example a specific AI solution, is developed in a secure way? That can mean things like, again, how you control access rights, how you make sure it only accesses the data it should access, that data is stored in the right places, that encryption takes place, etc. Then the third dimension, after governance and the actual solution development, would be securing your actual platform. Typically there’s a set of different technologies that sit on a platform somewhere, so it’s not only the actual AI use case but also, for example, the Azure-based platform that you’re using. There, more or less standardized frameworks typically come into play, for example for securing cloud-based solutions. A fourth dimension that we consider very relevant is to actually create awareness. For AI, as for every other solution, humans are actually one of the largest risk factors in cybersecurity. That means if you want to get a good degree of cybersecurity, you have to address not only the technical factors but also the human factor. And that includes really making people aware of what they are doing, where the risks may be, and how to reduce that level of risk. And last but not least, the last dimension that we look at is what we call monitoring and reporting. Even if you are set up properly, if you have a proper set of rules, technologies, awareness, etc., you still want to make sure you control the environment to a degree. So that is relatively broadly applicable to many situations, but specifically, this has proven to be very helpful for solutions that involve AI as part of an overall automation initiative. Any questions so far, Will, on that?

 

Will Bachman  07:21

Yeah, maybe give us a practical, potentially real-life example of a cybersecurity threat with an AI tool. What would an example be?

 

Markus Starke  07:33

So a very obvious one, and it’s a very simple example, is a human user wanting to use some sort of generative AI, let’s say a GPT, for working on company data. What some people may do is simply extract data from their corporate data pool, send out an email with this data to their own private email account, and then do something with it, like putting it into a public ChatGPT instance, which is maybe not what you really want them to do. This is the most obvious example, and I think it can be controlled. A lot of that is about awareness, of course, creating awareness that this is not a desired behavior, but also about setting up standardized IT security control mechanisms to limit the ways data can be extracted from the corporate network, and to detect these cases and make users aware that this is not a desired behavior. So that’s a very simple example.

 

Will Bachman  08:57

Just educate me a little bit. So let’s say I take my proprietary corporate data and upload it to ChatGPT, to what they used to call advanced data analytics, to do some analytics on it. That data has now been exposed to ChatGPT. Is there a way that some bad actor is going to be able to extract it from that system? Or, you know, what’s the risk? I’m so naive about this. What’s going to happen?

 

Markus Starke  09:34

In that specific case, I would mostly worry about two things. The one thing is that typically your private computer, laptop, whatever, isn’t as well secured as your corporate one. As soon as you have sent that email with whatever data, let’s say employee data or contracts, out of your corporate environment to your private laptop, it’s at least for some time typically stored on your private laptop, and private laptops can simply be more prone to being attacked, or to losing data to an attacker, than your corporate laptop. That is, I think, one of the risks. The other question, and I think that’s not so much a question of an actual attacker, and I’m honestly not sure right now what ChatGPT’s policy is, for example, but very generally speaking, corporations certainly want to limit what kind of data is somehow made publicly available. And generally speaking, it’s not always clear what happens with the data that is put into a specific generative AI application. That is always the question whenever a Gen AI application is being used: what happens with the data? How can we make sure it’s not being used for something we don’t intend it to be used for? And especially with intellectual property, with any of the important knowledge that is proprietary to your specific corporation, you certainly want to avoid that happening.

 

Will Bachman  11:32

And my understanding is that if you just use the consumer version of ChatGPT, then anything you type in there, or any data you upload, somehow gets put into its overall training data. So theoretically, someone might be able to extract it or expose it. But if I understand correctly, there’s now more of an enterprise ChatGPT solution that companies can get, where within your corporate firewall you can use the ChatGPT enterprise version, and it would be safe because it’s not leaving your four walls. Is that right?

 

Markus Starke  12:15

Exactly. So there are options for setting up AI applications in a way that limits them to your own corporate use cases. Still, there are sometimes discussions. I had one client that was currently setting up a Microsoft application: is that data really staying with us? Or is it somehow anonymized and then made available, or at least to some degree used for optimizing the system? So it’s not always totally clear, and that’s certainly something to investigate. But yes, there are corporate versions and corporate instances of these kinds of tools, and they certainly make it more secure. But that always has to be evaluated on a case-by-case basis: does that specific solution fulfill your specific needs? And that’s very much a question of governance. What do you want to achieve? What rights do you want people to have? What possibilities do you want people to have? And especially with Gen AI, it’s certainly a question of not limiting too much, but on the other hand really keeping a good degree of control.

 

Will Bachman  13:38

What about tools like Zoom? It seems like, and I don’t have a firm handle on this, but some stuff that I’ve casually read online suggests that Zoom is now somehow retaining data on the calls, or transcribing them without letting us know, and storing that information. Am I off base there? Am I falsely accusing Zoom of this? I seem to recall that they own the data.

 

Markus Starke  14:14

Exactly, I would give you the same kind of more or less vague answer. So yes, generally speaking, there’s always a risk of that. All these collaboration tools try to incorporate some degree of AI that’s maybe transcribing calls, or maybe at some point, maybe not today but in the near future, able to create automated notes of your calls, or a summary of something. And of course that is a question: do you want that? And if you want it, do you have an instance of that tool that limits the accessibility of that information to your own organization? For Zoom, I’m not sure about the specific solution, but generally that’s just always a question that you need to ask. And that’s one of the key reasons why, typically, when you introduce some sort of solution, it always makes sense to have an alignment with cybersecurity, and to just check with them: is this, generally speaking, compliant with what we need from a protection standpoint? And that’s a sort of case-by-case evaluation, I would say. I mean, there are probably solutions, like the standard Microsoft toolkit, where you just have to accept and trust that there’s an okay policy for how it’s handled. But generally speaking, it always makes sense to question those things. And I know it also from other perspectives: if you’re doing an RFP, for example, basically every bigger corporation has a step with a sort of cybersecurity assessment for the solution that you want to implement, before you implement it and even before you contract it. And that applies, again, to AI solutions, but also to every other kind of solution that is somehow handling your corporate data.

 

Will Bachman  16:22

Fantastic. Any other tips for independent consultants on, you know, raising our awareness about cybersecurity and AI?

 

Markus Starke  16:35

I would say, generally speaking, some basic rules apply to the use of AI. I think what’s most important is really to watch what data you put in there, and to make sure you have a controlled instance of the AI tool that you’re using. If you do not, so if you’re using, let’s say, the public ChatGPT instance, you should probably only put anonymized data in there, data that doesn’t create any trouble, let’s say it that way. And protect any data management, and in the end AI is a lot about data management, with strong protection mechanisms: passwords, access rights, encryption, etc., so that you can really be sure you’re not getting into any trouble there. And if you’re working with your clients on this, I would really strongly recommend not making your life too simple. It’s very tempting to just say, okay, let’s create some sort of small ChatGPT or whatever solution for a specific business problem. But I would very strongly recommend, also for those solutions, going into at least a small assessment from a cybersecurity perspective. Because otherwise you might have an actual issue, and you might even run into regulatory problems, because there are regulations, active and coming, that apply to AI the same way as they apply to other software solutions and technologies. So it’s on the one hand an actual risk and danger for businesses, and on the other hand a regulatory risk. And I would really strongly recommend, whenever you’re working with your clients on setting up some AI solution, to also review that aspect and go into it, and not just make your life too easy, even if it’s tempting. And then, obviously, you always get into the challenge that cybersecurity is sometimes perceived as slowing business down and making things complicated, etc. So you have to find a good balance of not making it too slow, but on the other hand, you also have to make clear that this is an essential control that you must have.

 

Will Bachman  19:21

Fantastic. Markus, for listeners who would like to follow up with you, where would you point them online?

 

Markus Starke  19:30

I would point them on the one hand to my own website, and of course I can share the link with you. I’m also acting as associate partner for Ten-4 Consulting, which is, I would say, a medium-sized consultancy in Germany, and I can share that link as well. There I’m responsible for a large part of their cybersecurity activities and some other activities around digital transformation and digital process transformation. So I’m very happy for people to get in contact with me, and also for other independent consultants to maybe have a chat on this topic and just exchange on it. Those would be the two access points.

 

Will Bachman  20:15

All right, we will include those links in the show notes. Markus, thank you so much for joining today.

 

Markus Starke  20:21

Thank you, Will, it was a pleasure.
