AI in the Enterprise Environment

Enterprise Environment Q&A with Jason Weems


With Jason Weems, former CIO for Care+ at Cigna. 


About Jason Weems:

Jason Weems is a seasoned and accomplished technical leader with a remarkable career spanning over 25 years in driving innovation and excellence in the fields of Healthcare and Financial Services. Jason brings a wealth of experience and a proven track record of executive leadership, making significant contributions to organizations through strategic vision, technological insight, and a commitment to transformative solutions.

Jason has served in various leadership positions at large Fortune 50 companies like Cigna and Citi, and brings a unique blend of strategic insight, technical expertise, and a drive for positive change in the Healthcare and Financial Services sectors. He has a passion for identifying opportunities for technology-led business innovation while also guiding their implementation toward tangible business impact.

 

3 KEY TAKEAWAYS:

1. Importance of a Defined Framework in AI Projects: It is critical to have a clear framework when embarking on AI projects. This framework should start with defining the desired outcome, understanding the metrics that drive that outcome, ensuring proper measurement instruments are in place, getting the team on board with the objectives, and being cautious about overinvesting in initial iterations.

2. Value of People Closest to the Work: It’s equally critical to involve people who are closest to the actual work being done. These individuals possess vital knowledge about the tasks, the challenges, and the potential areas for improvement. Engaging them not only aids in driving innovation but also ensures that AI solutions are practical and valuable.

3. Continuous Improvement and the Role of AI as a Co-pilot: AI should be seen as a tool that assists humans rather than replaces them. The goal is to push individuals to operate at the peak of their capacities, focusing on meaningful tasks while learning, testing, and adapting iteratively based on data. Moreover, the introduction of AI should come with talent development initiatives to help employees adapt and grow with the changing technological landscape.

Q&A 

Jake: Before we got started, we were talking about Technology Partners’ new COO, Supantha Banerjee. You mentioned that he seemed like the type of leader who is willing to get into the technical details and make improvements at the functional level.

I’m reminded that we need more of that generally – leaders who are shepherds that smell like the sheep. I know that’s how you’ve led, and I’m excited for the audience to hear this session today.

I’ll kick things off by saying you have a ton of experience in the digital healthcare space, and there are many trends that our Cloud Data & AI team and I see in healthcare. We kind of got this thing started unintentionally in healthcare, and now that’s where the majority of our time is spent. So, I’m excited to dive in and get your thoughts, experiences, and best practices on AI.

I’ll first ask you about this AI framework you’ve talked to me about in the past – we both know there is a mixed history of ROI on data science projects. I’m curious if you can share a bit more about that?

 

Jason: Yeah, absolutely. One thing that's so important to do up-front is to properly define your outcomes. ML and large language models are interesting, and the technology is fun, but you really need to know the outcomes you are going after.

Is it increased revenue? Is it cost reduction? How are you going to measure it? These are things that I think of fundamentally. AI projects are often transformational in nature, requiring a redesign of the current business process as you look to add automation and/or change the way you engage with customers. In industries like Healthcare, you may also have additional regulatory restrictions you will need to adhere to. There are a lot of variables to consider as you look to optimize. It can be too easy with AI to go in thinking, “I’ll create X solution and produce Y outcomes, and it’s going to deliver all of this value”... but too often these projects start with unidentified performance metrics and no definition of success, and the next thing you know, you have spent a great deal of money with sub-par results.

In terms of an approach, this is a general framework I like to use for transformational initiatives:

  • Step 1 is to define your expected outcomes, at minimum at a qualitative level, but try to quantify them as much as you can
  • Step 2 is understanding the metrics that ladder up to produce that expected outcome and will result in the anticipated value
  • Step 3 is to assess your current systems and processes to ensure you have the right instrumentation in place within your applications to properly capture those metrics.  That’s much easier said than done and it's really easy to just ignore this step in early phases – and if you are in an enterprise organization that has multiple disparate systems or has been pieced together through acquisition, instrumenting your data may be no small task
  • Step 4 is socialization to ensure both your team and key stakeholders understand the defined outcomes and measurements and that it’s not an afterthought or quickly forgotten after initial planning. It needs to be a living and breathing guide that informs the work
  • Step 5 is don’t get married to your initial idea and over invest in the first iteration. It’s really easy to get tunnel vision in these projects and not be agile enough to recognize necessary course changes or signals to shut down a path and move on to your next opportunity.

Jake: There is a lot of culture impact there too, if your organization is building or has already built an agile mindset of quick failures and iterations, closeness to the business, and product-based development, then change and pivoting is just a part of getting better every day.

 

Jason: I totally agree! A common misconception I have heard is that “being agile” means not setting a plan up-front. I have found just the opposite to be true – it’s really important to invest in understanding the space, developing a hypothesis you are going to test, and being clear on the value you expect to see.

As a leader, it’s important to reinforce a mindset within your teams that we’re not going to get it right every time... and we certainly aren’t going to get it perfect on our first attempt.

I think that’s one of the reasons why generative AI is such a powerful opportunity, because it helps to reduce the initial setup time and effort required to test, learn from, and improve upon an initial hypothesis.

 

Jake: Talk to me about measuring project success, including AI deployments, what does right look like to you?

 

Jason: There’s a really interesting book I like called “How to Measure Anything” by Doug Hubbard. The premise of the book is that organizations often avoid measuring things because they consider them too intangible.  The author makes the supposition that anything that truly matters is both detectable and measurable. A lot of project sponsors forget the importance of quantifying measurements up-front in order to make data-driven decisions.

First, you need to understand the baseline today. Is that baseline currently known and easily reported on, or even being looked at? Sometimes that part can be harder than it sounds. Without a baseline you are shooting at a target in the dark – you won’t know the true impact of your changes, when to stop, or whether you were effective. Keeping an eye out for false positives is really important too – sometimes you see results because of extraneous variables, like a market shift, process change, or adjustment in go-to-market strategy, that aren’t directly related to the changes you are making. People spend a lot of money on projects that underperform or don’t scale because no baseline or controls were established.
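As a rough illustration of baselines and controls, that comparison can be sketched in a few lines of Python. The handle-time figures and group names here are invented for the example:

```python
import statistics

# Hypothetical weekly call-handle-time samples (seconds)
baseline  = [430, 425, 440, 418, 435]   # before the AI change
treatment = [395, 388, 402, 390, 385]   # teams using the AI assistant
control   = [428, 431, 422, 436, 427]   # comparable teams without it, same period

def pct_change(before: list[float], after: list[float]) -> float:
    """Percent change in the mean from one sample to another."""
    b, a = statistics.mean(before), statistics.mean(after)
    return (a - b) / b * 100

effect_vs_baseline = pct_change(baseline, treatment)
drift_in_control   = pct_change(baseline, control)

# If the control group moved as much as the treatment group, the "result"
# is likely an extraneous variable (market shift, process change), not the AI.
print(round(effect_vs_baseline, 1), round(drift_in_control, 1))
```

Here the treatment group improves roughly 9% while the control barely moves, which is the pattern you want before attributing the gain to the AI change.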

Within AI specifically, sometimes the results you find may require you to adjust your thinking in how you approach a problem. I think it’s really important to be open to that and foster a culture that is willing to quickly pivot. An example that comes to mind is call center automation through AI. Perhaps the initial hypothesis was to drive efficiency by embedding GPT natural language processing into your IVR call flows to fully handle your customer interactions. Once you start deploying, you see that you are not getting the efficiency you anticipated, your customer containment rates are not what you expected, and you ultimately recognize the need to pivot your strategy to leverage AI as an agent assistant to reduce call time and improve the customer experience.

 

Jake: This work is definitely easier in a digitally native company, right? I know you grew your career in global enterprise organizations, and we see this with our own clients, there are so many disparate systems, technical debt, other obstacles, it just feels tough to get the data you need some days. How have you figured it out?

 

Jason: Definitely true. The bigger the company, the more complicated it becomes to make rapid change – but often the larger the opportunity for cost savings or growth as well.

It can be really tempting to say, “We can potentially save X dollars a year through AI – let’s go invest a bunch of money to scale up and deliver this great project with unbelievable ROI and hope we can achieve it.” That mindset needs to evolve. We need a culture of continuous improvement – making small changes, getting a little bit better every day – and it’s important to embrace and reinforce that mindset as a core part of your culture.

Now, on to the systems side – if you are in a large company that has been around for any real period of time, you probably have 20 different systems that are a part of your ecosystem that require some type of change or refactoring to get to the data you need.  You probably have a mix of on-prem and cloud hosted processing, front-end digital, and back-end services across multiple data structures.

It’s important to recognize that at whatever level you are leveraging AI to make improvements, you will likely need to instrument the corresponding systems to capture the data you need to measure outcomes. You may also need to do some work to stitch together your data to get a unified view of a customer. If you don’t have a unified hosted data platform, that’s likely going to be harder, but not impossible. If you don’t start there, it will be much harder for you to make data-driven decisions. One consideration to accelerate and realize value is to explore leveraging large enterprise tools, like data virtualization, to bring together disparate data more quickly and reduce the significant up-front expense associated with large data migrations.

It's also important to prioritize your efforts iteratively. You likely won’t be able to get 100% of the data you desire in your initial baseline. Can you identify and capture the 10% of the observations, total population, or process that your team can use to drive the most impactful change? Can you then ramp that up to the next 20% once you’re ready to take the next step?
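One way to picture that iterative prioritization is ranking process slices by potential impact and baselining the top tranche first. A minimal sketch, with hypothetical call types and cost figures:

```python
# Illustrative only: prioritize the slice of a process that drives the most
# impact, rather than waiting for 100% data coverage. Figures are invented.
call_types = [
    {"name": "billing_inquiry",   "annual_volume": 120_000, "avg_cost": 6.50},
    {"name": "prior_auth_status", "annual_volume": 45_000,  "avg_cost": 11.20},
    {"name": "password_reset",    "annual_volume": 200_000, "avg_cost": 2.10},
    {"name": "claims_dispute",    "annual_volume": 30_000,  "avg_cost": 14.75},
]

# Rank by total annual spend; instrument and baseline the first tranche only.
ranked = sorted(call_types, key=lambda c: c["annual_volume"] * c["avg_cost"], reverse=True)
first_tranche = ranked[:1]  # start with the single highest-impact slice
print([c["name"] for c in first_tranche])  # ['billing_inquiry']
```

The next iteration would widen the tranche (`ranked[:2]`, and so on) once the first baseline has proven out.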

 

Jake: We work with a lot of enterprise healthcare organizations that are working through those platform engineering challenges right now. It’s funny to think we can grab a Kaggle data set on brain tumors and train a CV model in an hour or so, but designing and implementing a platform that delivers quality data as a service across users is the really hard work.

 

Jason: It is hard work, but the effort will pay dividends. The up-front investment in time and resources to get your data normalized and into a centralized platform will go a long way toward clearing your current and future path, especially in AI projects. I have found this to be especially true in regulated industries like Healthcare and Financial Services. The added benefit of your platform build work will be far-reaching and have a direct impact on your customer and patient experiences as well.

The more you are able to provide a comprehensive, unified picture of your customers, the more impactful and personalized your interactions will be across all engagement channels, whether digital, call, or otherwise.

As I have engaged with peers in the industry, there is a definite trend: most large corporations have built up significant tech debt over time, resulting in massive fragmentation. Making a meaningful reduction in tech debt will require a concerted focus on simplifying the environment. You won’t have the resources to go after everything. If you prioritize your efforts on the data that is most important and focus on closing those gaps first, you will maximize your yield, and the benefits often reach stakeholders across the entire organization.

 

Jake: Those are great points; you’re going to have to attend some client meetings with us in the very near future! Let’s talk about addressing big system-level challenges with AI. One of the buzziest discussions right now is recreating the front door of healthcare – that's big work, right? You’re one of the few executives who has that type of experience, though.

 

Jason: Well, I believe innovation occurs at multiple levels. You have disruptive innovation, like strategic initiatives focused on rethinking the entire digital landscape, patient interactions, and operations through a new digital front door. That’s transforming the way work is being done today. Foundationally, that may mean rethinking service line models, business processes, and physical infrastructure.

Those are big, pervasive decisions that require senior-level sponsorship and a lot of surrounding organizational effort, and while that may be a necessary and exciting path forward, those types of efforts can’t be the only undertaking. If there is significant data fragmentation within your environment, it will only make it that much more difficult to leverage AI and data intelligence to help power the digital experience. To create a compelling digital front door, the key is bringing together unified data, AI-powered intelligence that feels intuitive, and a personalized customer experience.

To truly scale, innovation must also occur at a local level. What is an engineer or product owner doing within the team to get a little better today? The data that’s needed for those big organization-wide efforts can take months or sometimes years of planning to curate. In those scenarios, embracing an iterative mindset to pull data quickly and leverage AI to deliver results, while working in parallel to build strategic platform assets, is essential. As you proactively look to get 1% better every day, what are you doing to abstract, compose, and service-enable the data within your local product domain?

My other belief is that these types of things have inertia. So, I’m never worried about starting small. Go prove out a hypothesis leveraging AI... yes, you may have to roll up your sleeves with the data up-front in terms of capturing and pre-processing, but start somewhere in a manageable way. Build upon what you’re learning as you’re proving or disproving your hypothesis. Start to socialize and share those learnings with others – that plants seeds that impact the culture and ultimately become part of the DNA of your organization.

Great work often catches fire quickly, and honestly, it can take off quicker than you’re ready for at times. If you don’t see that initial fire within your organization, then there’s likely work to do to incentivize it. If you want to incentivize innovation in data or AI within your organization, are you talking about that at the town hall? Are you soliciting ideas from your teams? Are you structuring compensation and year-end bonuses for top contributors? Those things matter in reinforcing innovative thinking and creating a learning culture that knows how to tackle new things.

 

Jake: Let’s talk about AI as a co-pilot. We focus on being thoughtful between prediction and judgement first. This is a big point of debate in AI as it bleeds over to the potentially negative outcomes of AI development and deployment.

 

Jason: Yeah, it’s a slippery slope if you don’t have a defined vision and principles for how you want to operate as a business. I’ll get back to my point that measuring anything is possible. Don’t go digging for what’s ripe for automation through AI just because it has high confidence intervals. That may be a fun, interesting AI project, but will it drive value?

Knowing the indicators that define organizational outcomes is important because that’s where you can track effectiveness and ROI. Start where you see headroom and potential for digging in deeper.

While your team may be experts from a technical perspective, there are likely business experts within your organization who are closest to the work and can become your domain subject matter experts. If you engage business leaders to understand what performance and activities mean to them, they’re likely going to have invaluable insights into where large opportunities exist that you can go after with AI. You could become the enabler that helps them unlock real, measurable value.

In highly regulated industries like healthcare, AI automation can be hard given regulatory requirements, so you can’t always go after everything easily, and you may need to think differently about how to get to value. As you look to leverage AI as a co-pilot, perhaps a pharmacist is required to complete a particular activity – but can the system pre-process and curate a recommendation in parallel that prompts them in their work and improves their accuracy and productivity? I've seen teams go out and spend significant money on deep learning initiatives when they didn't fully understand the industry or business model. They didn't get all the right people involved early on, and they ended up spending a half million dollars and having to backtrack because they solved the wrong problem.
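That co-pilot pattern – a required human step with an AI recommendation prepared in parallel – can be sketched with standard Python concurrency. The function names and the prescription-review scenario are illustrative only:

```python
from concurrent.futures import Future, ThreadPoolExecutor
import time

def draft_recommendation(prescription_id: str) -> str:
    # Stand-in for a model call that curates a suggested verification checklist.
    time.sleep(0.01)
    return f"suggested checks for {prescription_id}"

def pharmacist_review(prescription_id: str, suggestion_future: Future) -> str:
    # The pharmacist remains the required decision-maker; the AI output is
    # only surfaced as a prompt once it is ready.
    suggestion = suggestion_future.result()
    return f"reviewed {prescription_id} with '{suggestion}'"

with ThreadPoolExecutor() as pool:
    # Kick off the recommendation in parallel with the regulated workflow step.
    future = pool.submit(draft_recommendation, "rx-1001")
    outcome = pharmacist_review("rx-1001", future)
print(outcome)
```

The design point is that the human-required activity is never bypassed; the AI work simply happens alongside it so the suggestion is ready when the professional needs it.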

 

Jake: I don’t think that’s rare. It’s a great reminder that the people doing the work are so valuable not just because of the work they do and value that work brings, but because of the knowledge they can bring to improvement efforts.

 

Jason: The people doing the work every day already know where the waste is and the shortcuts available to get the job done better. The goal is for our employees to work at the top of their ability – or license, in the case of healthcare professionals – and spend their time on highly impactful and meaningful work. Done well, AI becomes a vital, supplemental tool that augments the human experience.

 

Jake: I’m really curious to ask what programs you’ve seen rolled out in terms of developing team members skillsets when AI is introduced as a co-pilot. If you’re pushing them toward their highest work, how have you seen organizations marry that with talent development efforts?

 

Jason: I can share what I have seen work well. When you lean into those individuals that are close to the work within a specific business domain, you pick core lieutenants that you can tap into for domain knowledge and context.  You’re looking for people that have the right attitude and aptitude and are hungry to learn. Often, I have found these individuals are also excited at the prospect of doing something new and exploring the art of the possible. My experience has been those individuals exist within your organization right now right where the work is being done. As you start to collaborate to build out and refine your AI models, it creates an opportunity to lean into those individuals that want to grow right now that may even feel constrained in the current environment.

Some of this talent development happens naturally with the pilot individuals that you select to partner closely with. As it relates to developing team member skillsets when roles change, I think about COBOL. Obviously, there was a big shift to distributed, Java-based applications. In my experience, I saw our organization employ a concentrated effort to retrain those individuals. This doesn’t feel all that different. It’s about re-tooling and upskilling, and you start with individuals who have high aptitude, the right attitude, and a desire to learn.

 

Jake: Sounds a lot like our old conversations when we partnered at LaunchCode! That was a really fun time with a lot of impact. It’s only gained more importance for me the farther I get away from it. I don’t know that I’ll ever be able to thank you, Tim Kessler, and Neal Sample appropriately for your belief in LaunchCode and people from diverse backgrounds.

 

Jason: So many great stories from those students. It was a great experience in thinking differently about talent – upskilling highly competent junior engineers to address hard-to-find technical competencies like Pega, COBOL, and machine learning. That was a big win for a lot of us.

 

Jake: You mentioned incentives earlier, what have you seen work in terms of rewards that drive innovative ideas and thinking?

 

Jason: This gets back to the culture conversation. Are you encouraging and creating space for your teams to try new things? It’s important to have an effective intake process for new ideas, especially with disruptive technologies like AI. Socialization and eliciting input are key, and often difficult when everyone is so busy with their day jobs. Getting back to inertia, identifying a couple of wins can go a long way in creating a tailwind that spurs more engagement and new innovation. Suddenly people are saying, “There is opportunity here and I think we should look at it.” That’s critical.

In terms of incentives, I have seen a lot of public recognition in town halls, spotlight-type awards, and highlighting the individuals who contribute new ideas that get through the pipeline process and end up in a proof of concept. I wish I could say that I’ve seen more financial rewards; it’s something I have pushed for in the past but haven’t yet been able to implement at scale. The investment is relatively minimal given the value-creation opportunity.

 

Jake: Any final words for our audience?

 

Jason: First of all, thanks, Jake, for asking me to join you on this Q&A session; I really enjoyed it. AI is an exciting space. It’s a powerful tool that has been hypercharged through the introduction of high-powered GPT models for generative AI. The ability to rapidly build AI solutions through prompt-based programming is changing the way we think about AI. In the last year, the ability to quickly build GPT-based applications and connect them to massive data sets through large language models has created a huge shift in the market.

While the time-to-market opportunity with GPT is really exciting, there is still so much room to securely acquire the right data within an enterprise’s local infrastructure, understand the data in context, and instrument your systems to highlight current baselines and future impacts. We all need to make improvements in the way we work. AI has a big part to play in that story, but so does ensuring that you’re investing time and resources in the right places. You need to make sure you're organized around continuous improvement and embracing a test-and-learn orientation – ultimately, that’s the mindset and culture we all need to strive for. Getting better is about pushing forward, trying new things, leveraging technology, and not fearing doing things in a different way.

 

Jake: Jason thank you so much for your time, thoughts, and the treasures you’ve provided. I’m always thankful to call you a friend and this session is a reminder that I need to reach out more to schedule happy hours; we both have too many kids for free time it seems!
