Podcast

Episode 557

HOW TO THRIVE AS AN INDEPENDENT PROFESSIONAL

Julie Noonan

AI Project Case Study

Show Notes

Julie Noonan shares a case study on using AI with a top 15 global pharma company to get the most insight from its data and reduce the time to market, or time to development, of its molecules and drugs. In early 2022, the pharma company was using artificial intelligence and machine learning to analyze clinical and research data. The organization Julie worked with was a digital and data group staffed largely by data scientists and computer scientists. Julie shares where this organization placed its focus, what its goal was in using AI and machine learning (ML), and the role she played in developing its center of excellence.

 

Company Use Cases of AI and ML

Most of the early use cases involved clinical data and research data. Clinical groups were conducting the first clinical trials with animal populations and recording their data in various tools. They were studying specific molecules to understand their implications across projects. For example, they might study a molecule for one disease indication and want to predict its relevance for another indication that a different team was working on. AI and machine learning prompts were run against the data, which had been organized into a data lake, to return other potential indications that could be tested with the data already collected.

Julie talks about how companies are grappling with rapidly evolving AI technologies, and how a center of excellence can be a solution. However, concerns may arise about adding bureaucracy and slowing down innovation, and she explains how she helped her client address those concerns. The culture of this global organization highly values entrepreneurialism: groups that own their data are free to experiment unless a project directly impacts patients. The team was able to educate interested groups about the importance of patient safety and ethics. The organization rewards innovation by publicly recognizing those who come forward with project ideas; even if a project is not great, or is a failure, it is treated as a lesson learned. The company's top priority is the patient, and it rewards those who come forward with ideas without imposing penalties or shutting down projects, while stressing the need to follow correct procedures to avoid ethics violations.

 

Inspiring a Company Culture of AI and ML Innovation 

Julie talks about how her role in change management helped inspire innovation within the company. They used a pull approach rather than punitive enforcement to encourage innovation and change, running engaging internal advertising, competitions, and partnerships with universities to introduce new AI technologies and build excitement around them. This approach helps companies navigate the challenges of AI adoption and ensures that innovation is not stifled by bureaucracy. Julie explains that for change to be successful, leader support plays a key role. The center of excellence (COE) was a key change management initiative within the organization. It involved making people aware of AI and machine learning, which was achieved through various marketing strategies. The organization chose a name for the COE that aligned with its culture and with the annual message from the CEO, which highlighted the future and benefits of AI and machine learning in drug development. The COE also held pop-up events where individuals could access learning materials, earn certifications, and practice using fake data in a sandbox. Office hours were provided for those who had no idea about IT architecture or how the organization operated. Newsletter articles, internal posts, and video monitors were used to promote the COE's existence. A community of practice was formed, which met monthly for educational sessions and discussions on AI usage. Julie also explains how they monitored ethics and DEI to ensure projects represented the target patient population.

 

Measuring the Efficacy of the COE

Measuring the effectiveness of the COE is challenging when starting from scratch, because there are no baseline metrics. Julie talks about measuring awareness, and how the community grew from six members to a global community of over 1,500 people. Other measures included use of the learning materials, use of the sandbox, and the number of projects brought in to be evaluated, each with its own metrics. For example, in the first year, 10 projects were part of a competition with a local university, where blended teams of university members and company employees worked together to implement AI/ML elements in their projects. The project outcomes were roughly split between good project metrics on one hand and surprises, opportunities, and lessons learned on the other. This was a significant success in the pharmaceutical industry, where more drugs and experiments fail than succeed. Over the last two years, the number of data scientists has grown dramatically, and the COE has become a vital tool for the organization's digital transformation efforts.

 

Timestamps:

01:03 AI use cases in a pharma company

06:33 Balancing innovation and governance in a large organization

11:29 Marketing a new AI center of excellence internally

15:47 Measuring the COE's effectiveness through awareness, access, and project metrics

 

Links:

Website: www.jnoonanconsulting.com

LinkedIn: https://www.linkedin.com/in/jnoonanconsulting/

 

 


SPEAKERS

Will Bachman, Julie Noonan

 

Will Bachman  00:03

Hello, and welcome to Unleashed. I’m your host, Will Bachman, and this is one of our series of short AI case studies. I’m pleased to welcome today Julie Noonan. Julie, welcome to the show.

 

Julie Noonan  00:16

Thanks, Will. I’m glad to be here.

 

Will Bachman  00:18

So Julie, thanks so much for agreeing to share a case study that you’ve worked on related to artificial intelligence. Walk me through your case example. First, why don’t you set up the context for us? What was the situation?

 

Julie Noonan  00:33

Excellent, I certainly will. So in working with a top 15 global pharma company, it became readily apparent a couple of years ago that they had, obviously, tons of data, and needed to educate their population, particularly in research and development, on how to best use that data, and use artificial intelligence and machine learning, in order to get the most insights out of that data to reduce the time to market, or time to development, of their particular molecules and drugs. The organization that I was working with is a digital and data concentration. Most of the individuals in that organization are data scientists and computer scientists. So one of the things that this organization decided to do was to create a center of excellence, so that not everybody was jumping on the AI/ML bandwagon, but they had kind of a frontline group of individuals that were staying at the leading edge, and advancing their particular agenda toward AI and ML, as opposed to, you know, everybody and their brother taking time out from the drug development process to learn about it. So my job was to enable this center of excellence, to figure out how we could take several different perspectives on creating it, based on what they were really trying to accomplish. Their objective was, first of all, to educate. Second of all, to provide a sandbox environment where individuals could go in and play and learn the tools, like Python and R, that was a couple of years ago, and work with data that didn’t really matter, so that they weren’t, you know, messing around with data that was critical to the company. And then, third of all, they needed to put together some sort of ethics and governance process for individuals who thought they had a project that could be furthered by using AI and ML. There was a committee of individuals who were experts on the topic, as well as experts in the subject domains and the business domains, who could actually sit on an advisory board, read through the project proposals, the business case, and also the technical implications, and provide a yea or nay, or a go back and get this other information. So those are the three objectives that we worked on with the COE.

 

Will Bachman  03:44

Now, talk to me about some of the use cases that people throughout the pharma company might have been coming up with, to use artificial intelligence or machine learning. And you mentioned that this was early 2022, so this was before ChatGPT made generative AI widely available, but certainly lots of machine learning and artificial intelligence tools were out there. So what were some of the use cases that were coming up across the company?

 

Julie Noonan  04:17

Most of the early use cases involved clinical data and research data that were being captured within the company through the research process. So for instance, a clinical group would be running the first clinical trials, you know, with animal populations, and would be recording their data in various and sundry tools. And I’m not a pharmacist, by the way, so I may be messing up this whole thing from a pharmacy perspective, or a drug development perspective. But what they were trying to do was to see, not just for a particular project, what the data was showing, but for a particular molecule that they were studying, what all of the implications might be across projects. So, for instance, they would be looking at a particular molecule, and I’ll give it a name, XXX375. They were looking at the results that this molecule was giving for one disease indication, but they wanted to also use AI and ML to predict whether it could have relevance for some other disease indication that perhaps another team was working on. So they wrote prompts against the data. They were able to define a way to put all of their data into a data hub, a data lake, and they were able to then go up against that data, and organize it and prompt it, such that it came back with possible other indications that could be tested with the data that they had already collected.

 

Will Bachman  06:33

Now, I imagine, and strongly would guess, that a lot of companies are trying to figure out, organizationally, how to deal with the rapidly evolving AI technologies. And one solution is to do a center of excellence, like you helped create. There are pros and cons to that. One concern might be that it just adds this big layer of bureaucracy: someone out there has a good idea of how to use AI, and they’re told, no, you’re not allowed to just implement that and experiment, you need to submit it to our committee for review, and maybe we’ll approve it. So it could, like, slow down innovation. But on the other hand, you have more governance and so forth, and you can get the expert people to work on it. So how did your client think about that trade-off? And how do you think about that trade-off for other clients? Would you recommend that other companies set up a center of excellence? Or, in some cases, you know, when is the right time to do that, and when is the right time to let a thousand flowers bloom across the company, right?

 

Julie Noonan  07:52

It is a tight balance. And in this particular company’s culture, entrepreneurialism is highly valued. There were multiple groups, and this is a huge organization, and it’s a global organization. So as you can imagine, the central data and digital organization that I was working with wasn’t the only one that was looking at, should we, you know, ratchet this back? How do we govern it, et cetera. The nice thing about the organization is that, within reason, what they said was, if your group owns your data, you can just go forward with it, unless it directly impacts a patient or a patient population. The other thing was that we were able to educate the groups that were interested, because one of the things the culture of this organization does is reward innovation. So the what’s-in-it-for-me, for people who wanted to experiment and do some innovative things: we made it rewarding for them to actually show those project ideas to this committee. Because when they did, they were rewarded publicly for coming forth. They were rewarded even if the project turned out to be, you know, not so great, or a quote-unquote failure. I don’t think there is such a thing as failure, it’s just a lesson learned. But they also did a lot of upfront education around this: if you do this well, and if you do this right, then you don’t run the risk of ethics violations, you don’t run the risk of harming a patient. And that is the number one culture item for this company: the patient first, always. So they hooked the incentives to the actual cultural values of the organization, and recognized people that came forward. If someone did have an idea and they, you know, were out there doing the whole cowboy, wild west thing, 
and this group heard about it, they reached out, not in a punitive sort of way, or an oh-we-need-to-look-at-this kind of way. They reached out in a curious way, and asked if they could see what was going on, if they could offer suggestions. But they never once pulled a project unless it was an ethical violation, and they had a whole group for that. They never once pulled a project just because someone was being, you know, innovative or entrepreneurial.

 

Will Bachman  11:17

Okay, so they were using more of a pull. They were not being punitive or shutting people down, but they were inviting people in.

 

Julie Noonan  11:28

That comes back to change management. That was, you know, part of my job on the team, to help them come up with, well, what’s in it for me? You know, how are we going to introduce this? How are we going to make it exciting? We did a lot of exciting advertising, we did competitions, we did a lot of partnerships with universities. For particular projects, we ran several competitions with the universities that were also on the bleeding edge of AI at the time.

 

Will Bachman  12:09

And tell us a little bit about some of that marketing internally, right? So you have a COE, and there is a big change management aspect to it: you can’t just build it, you have to, you know, either convince people or make people aware of it. Just as if you were trying to sell consulting services externally, you have to internally, you know, make people aware, get them to consider it, get them to reach out. Talk to me about how you marketed it internally.

 

Julie Noonan  12:42

Of course, we had just extraordinary sponsorship for the initiative, which obviously, in my book, is the key to all change management initiatives. If the leaders don’t live it, love it, use it, and disseminate the message, then no one else is gonna pay attention. One of the key initiatives of this particular organization that year revolved around digital and data and transformation, as you can imagine, like a lot of other companies. And so when we started to announce the existence of this center of excellence, first of all, the name that we chose for it, which I can’t divulge, was totally in line with the culture of the organization. It also reflected back on the annual message from the president, the CEO of the organization. We were able to use a direct quote from him on a lot of our marketing material, talking about how it’s the future, and how we could get drugs to patients a lot faster if we were able to slog through our data much better using AI and machine learning. We showed up at what we called pop-ups: if there was a group that was actually doing an in-person event, this particular team would show up at that event with marketing materials, you know, QR codes that people could link to, to go to our landing page, where they could then go out and access learning. We had created a full three-level curriculum for them, with certifications included in some of them. They could also jump into the sandbox, access some of that fake data, and practice with it. 
They also had office hours: if there was an individual who was working on something, or had an idea, they could call in during those office hours and just shoot the breeze with one of the COE members to see if it was even worth exploring. And it was all very open. People could call in who didn’t have a clue, and it was okay. You know, they had this crazy, wild idea, but had no idea about any IT architecture, or how IT worked, et cetera. They could call in, and there was never a stupid question, which was awesome as well. We did newsletter articles, we did posts on websites all over the company. This particular company used Teams, so we used Yammer along with Teams messaging. We used video monitors throughout the company to tout the existence of the group. And it picked up. We also formed a community of practice, and the community of practice got together once a month for an educational session. Sometimes the training person on the team would bring in someone from a local university who was an expert in AI, and they would, you know, talk about using R to do X, Y, and Z, or they would talk about how to parse particular datasets, or they would say, here’s where you can go to get external datasets that are free, you know, from governments and such. We had the ethics committee come in and do a discussion about why it was important to make sure that your use of AI was ethical, non-biased, and had an element of DEI to it that would represent the patient populations that you were targeting, et cetera. So we literally did everything we could think of.

 

Will Bachman  17:23

Yeah, that’s pretty extensive. That is amazing. So how did you measure the effectiveness of this center? I imagine that’s tricky to do, because they’re giving advice and so forth. So you could, like, count the number of inquiries, or you could try to actually get at return on investment: we advised these projects, and they actually ended up saving us money, or something. So what kind of KPIs or metrics did you use for the center?

 

Julie Noonan  17:57

The nice thing is that when you’re starting from scratch, you don’t have metrics. So you basically say, okay, for the first year that we went out, we wanted to increase awareness. And how do you measure that? Well, how many people decided to join the community. We went from, you know, the people who were actually on the team, which was about six, all the way up to, today, more than 1,500 people that are actually part of the AI/ML community. And that was pretty extensive. And it’s global, it’s not restricted to one particular group, like R&D or, you know, one of the commercial units. So that was one thing. Another thing was access and use of the learning, and also access and use of the sandbox. The third thing is the number of projects that were brought in to be evaluated, and how many of those projects actually got legs and became true projects that then had their own metrics attached. So for instance, there were, I think, 10 projects in that first year that were part of a competition with a local university. The teams on those projects were blended, university people and employees of this particular organization. So they worked together for a year, implemented several AI/ML elements in their projects, and at the end of the year they were judged, not on whether the project had failed or succeeded, but on whether it had actually provided insight to the company in moving forward with a particular molecule in the drug pipeline. And I would say it was a 50/50 split, that first class: half what we would consider good project metrics, and basically surprises, opportunities, or lessons learned on the other half. That is tremendous in the pharmaceutical industry. Typically, more drugs and more experiments fail than succeed. So it was a very successful first year for that competition team. 
I do know that there is now an entire 100-and-some-odd-person team who works with the business units on creating projects with AI/ML elements and data elements. The number of data scientists has grown dramatically as well over the last two years.

 

Will Bachman  21:14

Julie, thank you for giving us this overview. For folks that want to follow up and learn more about your practice, where would you point them online?

 

Julie Noonan  21:23

www dot J-N-O-O-N-A-N consulting dot com, or you can LinkedIn with me.

 

Will Bachman  21:31

All right, fantastic. We will include those links in the show notes. Julie, thank you so much for joining today.

 

Julie Noonan  21:37

Thank you, Will.

Related Episodes

Episode 569: Automating Tax Accounting for Solopreneurs (Ran Harpaz)

Episode 568: Integrating AI into a 100-year-old Media Business (Salah Zalatimo)

Episode 567: Author of Second Act, on The Secrets of Late Bloomers (Henry Oliver)

Episode 566: Third Party Risk Management and Cyber Security (Craig Callé)