Monthly Update: Provider Networks, HIPAA Rules for Overseas Vendors, and Generative AI Model Tuning

I’ve recently been sharing information with friends and colleagues via a monthly email, and several of them suggested I also post it online to reach a broader audience. My goal is to share useful healthcare and technology information, distilled down as much as possible. I select topics that regularly come up as I’m speaking with others, or that are problems in need of solutions (for the entrepreneurial-minded). This month’s update covers provider networks and network adequacy, rules for vendor access to PHI from overseas, and generative AI model tuning: what it is, when it works best, and an example.

Joe Bastante

9/30/2024
4 min read


Network Adequacy—Are There Enough Doctors in Network?

I include this topic because it impacts patient health, it’s complex and heavily regulated, and it’s an area of opportunity for data and insights experts. From KFF research:

- On average, members have access to 40% of the doctors near their home.
- Metro areas had the narrowest networks (as low as 14%).
- 25% of members have access to fewer than 25% of primary care doctors.
- In some large counties, 25% or more of providers don’t participate in any network.
- Kaiser and Centene have the narrowest networks on average; Blue Cross plans have the broadest.

Regarding regulation, many rules govern network adequacy requirements, and the landscape is ever-changing as each side of the aisle argues for greater or lesser government control. Medicare, ACA plans, Medicaid, and state laws all impose adequacy rules, and they are not consistent. In general, the regulated aspects of a network include:

- the ratio of members to providers,
- member travel time and distance to providers,
- the number of providers by specialty, and
- appointment wait times.

New legislation specific to behavioral health networks is proposed and under review, as network coverage there has traditionally been insufficient. Better analytics are needed to model a more comprehensive view of network adequacy and how it relates to patient experience. See the resources below if interested.
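To make the regulated measures concrete, here is a minimal sketch of two common adequacy checks, a member-to-provider ratio and a travel-distance standard. The threshold values and data are entirely made up for illustration; real standards vary by regulator, plan type, specialty, and county classification.

```python
# Hypothetical illustration of two network adequacy checks.
# Thresholds below are invented for the example, not actual standards.

MAX_MEMBERS_PER_PCP = 1500   # assumed member-to-PCP ratio standard
MAX_TRAVEL_MILES = 30        # assumed travel-distance standard

def ratio_adequate(members: int, in_network_pcps: int) -> bool:
    """True if the member-to-PCP ratio meets the (assumed) standard."""
    if in_network_pcps == 0:
        return False
    return members / in_network_pcps <= MAX_MEMBERS_PER_PCP

def distance_adequate(miles_to_nearest: list[float], pct_required: float = 0.9) -> bool:
    """True if at least pct_required of members live within the distance standard."""
    within = sum(1 for m in miles_to_nearest if m <= MAX_TRAVEL_MILES)
    return within / len(miles_to_nearest) >= pct_required

# 20,000 members served by 10 in-network PCPs exceeds 1,500 per PCP:
print(ratio_adequate(20_000, 10))  # False
```

A more comprehensive model would combine measures like these with specialty counts and appointment wait times, which is exactly where the analytics opportunity lies.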

Ex-U.S. Partner Access to PHI

This topic arises frequently since many healthcare processes benefit from the use of global labor. Furthermore, technology and service vendors may employ resources outside the U.S., which carries the same risk. (NOTE: I am not a lawyer, so please do not take the following information as legal advice.) In a nutshell, HIPAA allows the use of business associates outside the U.S. provided the covered entity (the organization using them) establishes a business associate agreement (BAA) and obtains sufficient assurances that the vendor will adhere to HIPAA requirements. Medicare imposes additional requirements: Medicare Advantage Organizations and Prescription Drug Plans must report to CMS on their use of offshore contractors and must audit those vendors, reporting the results to CMS. TRICARE (the military health program) has specific timing requirements for breach notifications, and individual states impose additional requirements that vary. For example, Florida and Texas prohibit moving confidential state-agency information outside of the U.S. Below is a link to the CMS memo requiring reporting of offshore business associates. I also include a link to a summary of state privacy legislation.

AI Model Tuning Versus RAG

I share this topic since most companies are using AI in some capacity, and I’ve found even non-technical founders are rolling up their sleeves and trying AI development. Since OpenAI recently announced fine-tuning availability for GPT-4o, I thought it would be a good time to summarize the tradeoffs. Fine tuning means you provide training data and produce a version of the model specific to your need. Retrieval-Augmented Generation (RAG), an alternative to fine tuning, (usually) uses pretrained models as-is and passes any supporting content to the model along with the user’s question. For example, to use RAG to answer a question about your company’s privacy policies, the relevant policy documentation would be provided to the model along with the query.
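The RAG flow described above can be sketched in a few lines. This is a simplified illustration: retrieval here is a toy keyword-overlap scorer, whereas real systems typically use vector embeddings, and the final call to a chat model is omitted. All names and documents are hypothetical.

```python
# Minimal RAG sketch: retrieve the passages most relevant to the
# question, then build a prompt that passes them along with the query.

def score(passage: str, question: str) -> int:
    """Toy relevance score: count passage words that appear in the question."""
    q_words = set(question.lower().split())
    return sum(1 for w in passage.lower().split() if w in q_words)

def retrieve(passages: list[str], question: str, k: int = 2) -> list[str]:
    """Return the k highest-scoring passages."""
    return sorted(passages, key=lambda p: score(p, question), reverse=True)[:k]

def build_prompt(passages: list[str], question: str) -> str:
    context = "\n".join(retrieve(passages, question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

policies = [
    "Privacy policy: member data is retained for seven years.",
    "Travel policy: employees book flights through the portal.",
    "Privacy policy: PHI access requires manager approval.",
]
prompt = build_prompt(policies, "How long is member data retained under the privacy policy?")
# `prompt` would then be sent to an off-the-shelf chat model; no tuning needed.
```

Note that the pretrained model is untouched: everything the model needs arrives at query time, which is why RAG handles frequently changing content so well.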

When fine tuning makes sense:

- when a large body of knowledge exists (e.g., medical, programming) or the content is too big to pass in as information/tokens with the query,
- when costs can be avoided by eliminating content passed in with each query, and
- when more influence over results or greater output structuring is needed.

When RAG makes sense:

- when information changes frequently (RAG avoids the need to retrain the model),
- when you’d like your application to be model-vendor agnostic,
- when the cost of training doesn’t outweigh the cost of larger queries, and
- when there’s a large body of content and it’s straightforward to isolate and extract the pieces needed for a query.
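If fine tuning wins the tradeoff, the first step is preparing training data. The sketch below writes examples in the chat-style JSONL format OpenAI’s fine-tuning API expects (one JSON object per line, each containing a "messages" list). The file name, system prompt, and question/answer pairs are illustrative.

```python
# Sketch of preparing fine-tuning data as chat-style JSONL.
# The examples and file name are hypothetical.
import json

examples = [
    {"question": "Is a BAA required for offshore vendors handling PHI?",
     "answer": "Yes. HIPAA requires a business associate agreement."},
    {"question": "What does network adequacy measure?",
     "answer": "Whether members have sufficient access to in-network providers."},
]

def to_jsonl_lines(examples: list[dict]) -> list[str]:
    """Convert Q/A pairs into one JSON record per line."""
    lines = []
    for ex in examples:
        record = {"messages": [
            {"role": "system", "content": "You are a healthcare compliance assistant."},
            {"role": "user", "content": ex["question"]},
            {"role": "assistant", "content": ex["answer"]},
        ]}
        lines.append(json.dumps(record))
    return lines

with open("training_data.jsonl", "w") as f:
    f.write("\n".join(to_jsonl_lines(examples)))
```

The resulting file is uploaded to the fine-tuning service, which produces a custom model version you then call in place of the base model.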

What to Know About OpenAI’s o1

I expect most have seen the announcement, but just in case: OpenAI recently announced the release of o1. This latest model release is different from previous releases in that the focus was reasoning skills. In other words, it’s not better than previous models at tasks like reading and writing, but it’s dramatically better at tasks requiring reasoning and advanced skills. From the announcement regarding math skills: “In a qualifying exam for the International Mathematics Olympiad (IMO), GPT-4o correctly solved only 13% of problems, while the reasoning model scored 83%.” That’s a dramatic improvement. o1 is also better at programming and other complex tasks. No doubt this will be useful in healthcare to reduce hallucinations and solve more complex problems.

I hope you found this post informative. Reach out to me if you have questions or feedback.
