Different use cases of generative AI in healthcare have emerged, from helping providers with clinical documentation to helping researchers design new experiments.

Anita Mahon, Executive Vice President and Global Head of Healthcare at EXL, spoke with MobiHealthNews about how the global analytics and digital solutions company helps payers and providers determine what data to incorporate into their LLMs to ensure best practices in their operations and offerings.

MobiHealthNews: Can you tell me about EXL?

Anita Mahon: EXL works with many of the largest national health plans in the United States, as well as a wide range of regional and mid-sized plans, PBMs, health systems, provider groups and life sciences companies. That gives us a fairly broad perspective on the market. We have been focused on data and analytics solutions and services, as well as digital operations and solutions, for many years.

MHN: How will generative AI affect payers and providers, and how will they remain competitive in the healthcare industry?

Mahon: It really depends on the uniqueness and variation that already reside in that data before we start integrating it into models and creating generative AI solutions from it.

We believe that if you’ve looked at just one health plan or provider, you’ve only seen one health plan or provider. Everyone has their own nuanced differences. They all operate with different portfolios, different portions of their membership or patient populations in different programs, different mixes of Medicaid, Medicare and commercial exchange business, and, even within those programs, a wide variety of product designs. Local market, regional and practice variations all come into play as well.

And each of these healthcare organizations has aligned itself, and designed its internal applications, products and operations, to best support the segment of the population it serves.

And they have different data that they rely on today in different operations. So as they put together their own unique data sets, married to the uniqueness of their business (their strategy, their operations, the market segmentation they’ve done), what I think they’re really going to do is refine their own economic model.

MHN: How can we ensure that the data provided to companies is unbiased and will not create greater health inequities than those that already exist?

Mahon: That’s part of what we do in our generative AI solutions platform. We are truly a services company. We work in close partnership with our clients, and even something like a bias mitigation strategy is something we would develop together. The kinds of things we would work on with them include prioritizing their use cases and developing their roadmap, developing plans around generative AI, and then potentially creating a center of excellence. Part of what you would define in that center of excellence would be things like standards for the data you’re going to use in your AI models, standards for bias testing, and a whole quality assurance process around that.

And then we also provide data management, security and privacy in the development of these AI solutions, and a platform that, if clients choose to use it, integrates some of these monitoring and bias detection tools. That can help you catch issues quickly, especially during your first pilots of these generative AI solutions.

MHN: Can you talk a little about the bias monitoring that EXL has?

Mahon: Certainly, when working with our clients, the last thing we want to do is allow pre-existing biases in healthcare delivery to surface, be exacerbated and be perpetuated through generative AI tools. So we need to apply statistical methods to identify potential biases that are driven not by clinical factors but by other factors, and to flag them if that is what we see when we test the generative AI.
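To illustrate the kind of statistical check Mahon describes, one simple approach is to compare model-driven outcomes across demographic groups and test whether any gap is larger than chance would explain. The sketch below is a minimal, hypothetical example; the group labels, counts and thresholds are assumptions for illustration, not EXL's actual bias-monitoring tooling.

```python
# Minimal sketch of a disparity check on model-driven outcomes.
# Hypothetical data and thresholds; for illustration only.
import numpy as np
from scipy.stats import chi2_contingency

# Counts of a favorable outcome (e.g., a care-program referral suggested by
# the model) versus an unfavorable one, broken out by demographic group.
outcomes = np.array([
    [480, 520],  # group A: [favorable, not favorable]
    [390, 610],  # group B: [favorable, not favorable]
])

chi2, p_value, _, _ = chi2_contingency(outcomes)
rates = outcomes[:, 0] / outcomes.sum(axis=1)

print(f"Favorable-outcome rates by group: {rates.round(3)}")
print(f"Chi-square p-value: {p_value:.4f}")

# Flag for human review if the gap is statistically significant and exceeds
# an assumed tolerance of 5 percentage points.
if p_value < 0.05 and abs(rates[0] - rates[1]) > 0.05:
    print("Potential non-clinical disparity flagged: route to review.")
```

In practice such a test would also control for legitimate clinical factors, so that only unexplained, non-clinical disparities are flagged.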

MHN: What are the negatives you have seen regarding the use of AI in healthcare?

Mahon: You highlighted one, and that’s why we always start with the data. You don’t want the unintended consequence of reporting something from the data that isn’t really there; we all talk about the hallucinations that public LLMs can produce. An LLM has value because it already represents several steps forward in being able to interact in natural language. But it’s really critical that you have data that represents what you want the model to generate, and that, even after you’ve trained the model, you continue to test and evaluate it to make sure it’s generating the kind of results you want. The risk in healthcare is that you might miss something in that process.
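As a concrete example of the "keep testing and evaluating" point, one lightweight pattern is to check each generated answer against the source record it is supposed to reflect and flag outputs that drift from it. The sketch below is hypothetical; the token-overlap metric and review threshold are assumptions, not a specific product feature.

```python
# Minimal sketch of a grounding check: flag generated text whose content
# overlaps too little with the source record it should summarize.
import re

def token_set(text: str) -> set:
    """Lowercased alphanumeric tokens in the text."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def grounding_score(generated: str, source: str) -> float:
    """Fraction of generated tokens that also appear in the source."""
    gen, src = token_set(generated), token_set(source)
    return len(gen & src) / len(gen) if gen else 0.0

source_record = "Patient reports mild asthma; albuterol inhaler prescribed as needed."
generated_summary = "Member has mild asthma and uses an albuterol inhaler as needed."

score = grounding_score(generated_summary, source_record)
print(f"Grounding score: {score:.2f}")

if score < 0.6:  # assumed review threshold
    print("Low grounding: send this output to human review before use.")
```

A check like this is crude on its own, but it shows how ongoing evaluation can catch outputs that stray from the underlying data before they reach a patient or member.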

I think most healthcare customers will be very careful and circumspect about what they do, and will look first to those use cases where, maybe instead of offering that dream of a personalized patient experience, the first step could be to create a system that allows the people who currently interact with patients and members to do so with much better information at their disposal.
