I’m concerned about the intersection of artificial intelligence and HIPAA, the legal regime that protects patient data in the United States. Specifically, I am concerned about AI policy for FQHCs. An FQHC is a federally qualified health center, more commonly known as a community clinic, like Gardner Family Health Network, the clinic where I am currently board chair.
By way of introduction, I live in Silicon Valley, and I am an engineer by training. I use AI every day for a variety of reasons, ranging from asking the everyday questions we all ask to using APIs to make my code run better. My concern is two-fold.
First, all of these models are run by for-profit companies, and once you type something in and they process it, they own the data. As pressure from investors increases, so will the motivation to monetize that data. It’s just physics. This is how capitalism works.
Second is the privacy issue. HIPAA is a big deal. Fines can run up to $50,000 per violation, and those numbers are enough to make any FQHC pay attention.
After studying the issue, it seems to me that an organization has three options if it is going to use AI. There are also some practical measures that organizations need to take in the meantime. But first, let’s talk about your three options.
Option one: build your own local model. Use open-source software, such as a local LLM like Llama from Meta. This is probably the safest of the three approaches, but it has several downsides. One, while the LLM software is free, the hardware you need and the power it will consume are not. Two, you’ll need suitably qualified IT people, and most FQHCs don’t have them.
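To make option one concrete, here is a minimal sketch of querying a locally hosted model. It assumes you are running Ollama (one popular open-source way to serve Llama-family models) on its default local port; the model name "llama3" is just an example, and you would substitute whatever model your IT staff has installed. The key point is that the request never leaves your own network.

```python
import requests

# A minimal sketch of querying a locally hosted model via Ollama's
# HTTP API on its default port (11434). "llama3" is an example model
# name; use whatever model has been pulled onto your own server.
# Because the model runs on your hardware, patient data never
# leaves your network.

def ask_local_llm(prompt: str, model: str = "llama3") -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask_local_llm("Summarize the HIPAA Privacy Rule in two sentences."))
```

Even a sketch this small hints at the hidden costs: somebody on staff has to install, patch, secure, and feed that server, which is exactly the IT capacity most FQHCs lack.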
Option two: use the enterprise-level version of the large language models, whose contracts contain strict terms about the protection of your data and, critically for HIPAA, should include a Business Associate Agreement (BAA). But this only matters if you trust the contracts; again, see my comment about profit pressures.
Option three: have strict rules in place to anonymize any data before it goes out. You’ll need not only a technical solution but also an organizational one: some function that ensures your data stays in compliance with the standards, whether that’s a compliance officer who is suitably technical or a data governance function within your IT group.
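On the technical side, the simplest illustration is a scrubber that redacts obvious identifiers before text is sent anywhere. The sketch below is deliberately naive: regexes alone will not satisfy HIPAA’s Safe Harbor de-identification standard, which covers eighteen identifier categories, and the MRN format shown is a made-up example. Purpose-built tools such as Microsoft Presidio are a more realistic starting point.

```python
import re

# An illustrative (not production-grade) scrubber for a few obvious
# identifiers. Real HIPAA de-identification requires far more than
# regexes; this only shows the shape of the "technical solution."

PATTERNS = {
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "mrn":   re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE),  # hypothetical MRN format
    "dob":   re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def scrub(text: str) -> str:
    """Replace matched identifiers with bracketed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

note = "Pt DOB 04/12/1987, MRN: 1234567, call (408) 555-0123."
print(scrub(note))
# Pt DOB [DOB REDACTED], [MRN REDACTED], call [PHONE REDACTED].
```

The organizational half of the option is what keeps a tool like this honest: someone has to own the rules, test them against real notes, and audit what actually gets sent.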
There are also some short-term practical considerations.
It’s easy to sign up with an LLM these days. All you need is a credit card, and $20 a month will get you access. And in general, there is often nothing stopping an employee from using either a personal card or a purchasing card to buy an LLM subscription.
One of the first things you learn in cybersecurity is that the biggest threat to your organization is from the inside: it’s your people. So you’ll have to put strict policies in place to govern the use of AI, and perhaps take some draconian measures, such as monitoring your employees’ Internet use, to make sure there isn’t any surreptitious usage. But that raises a whole other set of issues about employee privacy.
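As a rough sketch of what that monitoring could look like, here is one way to flag traffic to known AI services in outbound web logs. The log format, the file name proxy.log, and the domain list are all illustrative assumptions; adapt them to whatever your proxy or firewall actually emits, and weigh any such monitoring against your employee-privacy obligations.

```python
# A rough sketch of flagging unapproved AI services in outbound web
# logs. The domain list and log format below are assumptions for
# illustration, not a vetted blocklist or a real proxy format.

LLM_DOMAINS = {"chat.openai.com", "chatgpt.com", "claude.ai", "gemini.google.com"}

def flag_llm_traffic(log_lines):
    """Yield (user, domain) pairs for hits on known LLM domains.

    Assumes whitespace-separated lines of the form:
        timestamp user domain ...
    """
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 3 and parts[2] in LLM_DOMAINS:
            yield parts[1], parts[2]

# "proxy.log" is a hypothetical file name for your proxy's output.
with open("proxy.log") as f:
    for user, domain in flag_llm_traffic(f):
        print(f"review: {user} accessed {domain}")
```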
In summary, AI policy for FQHCs will not be easy. Using AI in a compliant way is possible, but you do have to do your homework. If you don’t, the fines could be substantial and could seriously damage your organization.
Do you have questions about this? Contact me.