Many believe that the future of technology is artificial intelligence (AI). Everywhere we look, we see examples of AI seeping into various aspects of our lives. The algorithms that power facial recognition on our cell phones, the smart home devices that learn our preferences, the social media platforms that personalize our content, and the self-driving cars currently in development are all examples of the all-encompassing concept known collectively as AI, which comes in many forms and levels of complexity.

Not surprisingly, AI, especially machine learning-based technologies, has also become prevalent in the healthcare industry. Many products currently on the market help review and assess patient data to identify risk factors and possible areas of health concern. The technology is being used to scan x-rays, MRIs, colonoscopy video, and other imaging to help identify abnormalities, and to monitor vital signs and other data in real time to perform risk assessments and detect patient needs. Healthcare providers are relying more each day on these advancing technologies to provide better quality, more efficient, and more timely care to patients.

While most recognize the significant benefits the continued advancement of AI could have on all aspects of our lives, its utilization in healthcare presents unique challenges. As of this writing, regulation of AI in the healthcare sector has been limited: the U.S. Food and Drug Administration has published a set of guiding principles, and the U.S. Department of Health and Human Services has created an AI Office and issued a strategy document. Over the past several years, various Congressional committees have also held hearings on the topic, but no formal legislation or regulatory proposals have come to fruition or appear imminent. In contrast to this inaction, the European Union has proposed a first-of-its-kind regulation of AI (not limited to the healthcare sector), known as the Artificial Intelligence Act, and is moving toward an effective date sometime in 2024. All eyes are on the fate of this law, as it would likely be a driving force in shaping the regulation of AI not just in the European Union but in countries around the world.

A review of the proposed AI Act, together with consideration of possible legislative or regulatory action stateside, suggests a number of key areas of concern that will likely need to be addressed to ensure the safe use of AI. These include:

  • Data privacy – Data is big business around the world, and given the volume of data needed to train AI systems, regulators will need to closely examine limitations on the right of third parties to share or sell customer/patient health information that was originally collected for a different purpose. HIPAA already precludes covered entities, meaning healthcare providers, insurance companies, and clearinghouses, from selling patient information to third parties. However, HIPAA does not apply to non-covered entities. For example, when a person inputs health information into an app on their phone, the developer of that app is not necessarily under any restriction that would prevent it from sharing or selling that information to other third parties.

  • Healthcare inequity – Healthcare inequity is a major focus in the industry right now and a priority of the current administration. AI, if not developed properly, risks perpetuating these inequities, for example if bias is built into the algorithm itself. Close attention must also be paid to the data used to train the algorithm: if the training data contains biases, the technology will learn those same biases. And if the data sets used to train the technology are not broad enough, inequity can again creep into the technology.

  • Safety – As AI continues to evolve and advance, healthcare providers and others in the industry are likely to place greater reliance on its recommendations and conclusions. This will pose significant questions for regulators. For example, could this technology constitute the unlicensed practice of medicine? If the technology misses a diagnosis, will insurance cover the resulting malpractice claim? And would the healthcare provider who relied upon the technology, or the patient on whom it was used, have a claim against the manufacturer for the mistake?

  • Transparency – In our increasingly digital lives, communicating over the internet or through our phones and other devices can make it difficult to recognize whether we are interacting with an artificially intelligent machine or another person. Regulators will be looking to ensure that individuals are fully on notice when it is not an actual person on the other end of the communication.

We cannot deny that AI technologies will only continue to advance and evolve. Given the rapid pace with which this is occurring, legislators and regulators are faced with the challenging task of trying to develop laws and regulations that can withstand the test of time. Moreover, they must balance the goals of advancement with the need to ensure that citizens are protected from possible abuses of the technology. We will continue to monitor AI-related developments impacting the healthcare sector as they inevitably occur.

John W. Kaveney

Partner, Healthcare and Litigation Departments

Mr. Kaveney focuses his practice in the area of healthcare law, representing a range of clients that includes for-profit and non-profit hospitals and health systems, academic medical centers, individual physicians and physician groups, ambulatory surgery centers, ancillary service providers, medical billing companies, skilled nursing and rehabilitation facilities, behavioral health centers and pharmacies.

His practice in the healthcare field encompasses advising healthcare clients on corporate compliance matters, including the implementation of new, and the assessment of existing, corporate compliance programs. He also assists healthcare clients with compliance audits and investigations, as well as guiding clients through the self-disclosure and repayment processes. Finally, he provides general legal advice concerning compliance and regulatory matters under state and federal healthcare laws.

In the area of information privacy and data security, Mr. Kaveney advises healthcare clients on issues arising under the Health Insurance Portability and Accountability Act (HIPAA) and the Health Information Technology for Economic and Clinical Health Act (HITECH). This includes the implementation and assessment of privacy and security policies and procedures to ensure the proper protection and utilization of protected health information both by healthcare providers and the business associates with which they contract. In addition, he represents healthcare clients in investigating, reporting, and remediating information breaches and the liability such breaches create under various information privacy and security laws.

Additionally, Mr. Kaveney provides counsel on Medicaid and Medicare reimbursement matters before the Division of Medical Assistance and Health Services and the Provider Reimbursement Review Board, as well as assisting clients in civil litigation and with professional licensing and medical staffing concerns.

Contact information:

jkaveney@greenbaumlaw.com | 973.577.1796 | vCard | LinkedIn

For more information visit the Greenbaum, Rowe, Smith & Davis LLP website.