Technology is a critical component in healthcare; on that much we can all agree. However, these same technologies, the ones meant to improve patient care, streamline workflows, and reduce costs, also introduce the possibility of errors, attacks, and patient harm. The referenced article below provides a sobering example of how disaster can strike, and I'll offer a few lessons we can learn to, hopefully, do better in the future.
As a security analyst and secure software developer, I am continually amazed at how often systems are moved into production before a robust, independent review of their security and functionality. While on the surface these systems may appear to be an asset to an organization, deploying them unvetted creates real liabilities.
Read the referenced article first; the lessons below will make far more sense with that context.
From a clinical workflow perspective, software like this makes sense: it is designed to improve efficiency and reduce errors. From a security perspective, software like this is a nightmare, introducing myriad vulnerabilities that sophisticated adversaries can exploit to harm patients, the very people the software was designed to help.
Still, there are a few lessons we can extract from an event like this.
First, let’s tackle human error. As the story is told, it is obvious the nurse had become reliant on the software to dispense the proper dosage. After hundreds, perhaps thousands, of previous entries, her own skills had most likely atrophied from lack of use; she was instead relying on a system that, for all practical purposes, she trusted to be smarter than she is. As systems like this are moved into production, in any industry, our reliance on them grows and our own abilities diminish. This not only introduces errors but also invites intentional attacks, as adversaries are fully aware of this weakness. Future versions need to be rethought from the ground up, with better user interfaces and more complete systems of checks and balances that require the system and the human to work in unison.
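One way to picture such a check-and-balance is a dispensing routine that refuses to act on the system's calculation alone and instead requires an independently entered value from the clinician to agree with it. This is purely my own illustrative sketch, not any vendor's actual workflow; the function names, dosing formula, and tolerance are all hypothetical:

```python
# Hypothetical sketch of a human-in-the-loop dispensing check.
# The system computes a dose, but dispensing also requires the
# clinician to enter the dose independently; a mismatch blocks
# the action and forces review instead of silent trust.

def compute_dose_mg(weight_kg: float, mg_per_kg: float) -> float:
    """System-side calculation (illustrative only, not clinical guidance)."""
    return round(weight_kg * mg_per_kg, 1)

def authorize_dispense(system_dose_mg: float,
                       clinician_dose_mg: float,
                       tolerance_mg: float = 0.0) -> bool:
    """Dispense only when human and machine agree within tolerance."""
    return abs(system_dose_mg - clinician_dose_mg) <= tolerance_mg

system_dose = compute_dose_mg(weight_kg=70, mg_per_kg=0.5)  # 35.0 mg
print(authorize_dispense(system_dose, 35.0))   # matching entry -> True
print(authorize_dispense(system_dose, 350.0))  # tenfold slip -> False
```

The point of the design is that neither party is authoritative on its own: the machine cannot dispense without a concurring human entry, and a fatigued human cannot rubber-stamp a miskeyed order the machine would have caught.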
Second, digital communication that arrives out of band and is never validated by the system creates a broad and penetrable attack surface. As adversaries probe for easy attack vectors, social engineering and phishing are natural starting points. When such communication can be targeted directly at systems that control an entire process, a thorough, independent review is needed to make sure those systems live up to the expectations we place on them.
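To make the validation point concrete, here is a minimal sketch of what "in band and validated" can mean: a control message is accepted only when it carries a verifiable authentication tag over its contents, so an instruction with no provable origin is rejected rather than acted on. This is my own illustration using an HMAC, not a description of the software in the article, and the key and message format are invented for the example:

```python
import hashlib
import hmac

# Hypothetical shared key; in practice this would be provisioned securely,
# never hard-coded.
SECRET_KEY = b"demo-key-not-for-production"

def sign(message: bytes) -> str:
    """Compute an HMAC-SHA256 tag over the message contents."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def accept_command(message: bytes, tag: str) -> bool:
    """Act on a message only if its authentication tag verifies."""
    return hmac.compare_digest(sign(message), tag)

order = b"dispense:patient=123;drug=X;dose=35mg"
good_tag = sign(order)
print(accept_command(order, good_tag))  # authentic message -> True
# A tampered message reusing the old tag fails verification:
print(accept_command(b"dispense:patient=123;drug=X;dose=350mg", good_tag))  # False
```

Message authentication alone does not stop phishing of the humans around the system, but it shrinks the attack surface: an out-of-band instruction that cannot produce a valid tag never reaches the process it hopes to control.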
Lastly, fatigue in healthcare is a known problem. How many patients had this nurse seen that day? How many hours had she been on duty? How many systems had she interacted with in order to provide the care required of her? Each of these systems presents a different visual interface, different tactile inputs, and different audible warnings. That sensory overload can cause a disconnect between human and machine, greatly affecting the quality of care provided. We must design these systems to account for fatigue, tailoring notifications to our individual needs so that we are actually receptive to the alerts.
Readers interested in further details about this topic can reach us at: firstname.lastname@example.org