This is the story of how your pacemaker could level an entire hospital system and lead to the deaths of thousands.

Was that too melodramatic? All right, well, how ridiculous does it sound to say that an HVAC system led to the theft of 40 million customers’ credit and debit card records from Target Corp. and cost the company about $250 million in settlements alone? Because that’s how it actually went down.

Nothing in the computer tech world these days operates in a vacuum, and that includes the ventilation grid in a large retail chain. Target’s HVAC system was connected to a network that was connected to a network that encompassed the entire company. Clever bad guys figured out that if they could hack the seemingly innocuous ventilation unit, they could then shimmy their way through its processors and chips and synapses until they found the big, fat pot of gold they were looking for.
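Security people call that pivot “lateral movement,” and it is easy to sketch in code. The toy Python below uses made-up system names, not Target’s actual layout; it models a network as a map of who can talk to whom, then asks what an attacker who owns the HVAC controller can reach. On a flat network, the answer is everything. Add one segment boundary and the pivot dies at the facilities server.

```python
from collections import deque

# Hypothetical topology (illustrative names, not Target's real network):
# each system lists the systems it can talk to directly.
FLAT_NETWORK = {
    "hvac_controller": ["facilities_server"],
    "facilities_server": ["corporate_lan"],
    "corporate_lan": ["pos_terminals", "customer_database"],
    "pos_terminals": [],
    "customer_database": [],
}

# Same systems, but with a segment boundary: the facilities network
# has no route onward into the corporate network.
SEGMENTED_NETWORK = {
    "hvac_controller": ["facilities_server"],
    "facilities_server": [],
    "corporate_lan": ["pos_terminals", "customer_database"],
    "pos_terminals": [],
    "customer_database": [],
}

def reachable(network, start):
    """Breadth-first search: everything an attacker at `start` can pivot to."""
    seen, queue = {start}, deque([start])
    while queue:
        for neighbor in network.get(queue.popleft(), []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

print(sorted(reachable(FLAT_NETWORK, "hvac_controller")))
# ['corporate_lan', 'customer_database', 'facilities_server',
#  'hvac_controller', 'pos_terminals']
print(sorted(reachable(SEGMENTED_NETWORK, "hvac_controller")))
# ['facilities_server', 'hvac_controller']
```

The fix implied by the sketch, network segmentation, is exactly the kind of safeguard that is cheap on the drawing board and expensive after the fact.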

So let’s think about this pacemaker thing again — if a savvy terrorist looking to kill people figures out how to hack a seemingly innocuous medical device that’s connected to a network that’s connected to a network, what kind of damage could this terrorist do once he is inside the system that literally keeps people alive?

Here’s a bonus thought: keeping the holes in a healthcare network’s fence plugged would be a whole lot easier, a whole lot cheaper, and a whole lot safer if safeguards were built into the various independent medical devices from the start. It’s just that the engineers and manufacturers who build the devices don’t think it’s their job to do that — because the federal government doesn’t actually require them to.
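What might “safeguards built in from the start” look like in practice? Here is one minimal, hypothetical sketch: an infusion pump whose firmware refuses any command that is not cryptographically signed with the device’s key, or that asks for a dose outside a hard-coded safe range. The device, key, and limits are invented for illustration, not drawn from any real product.

```python
import hmac, hashlib

# A hypothetical networked infusion pump with two safeguards baked into
# firmware: commands must be signed with a per-device key, and the
# requested rate must fall within a hard-coded clinical range.
DEVICE_KEY = b"provisioned-at-the-factory"  # per-device secret (illustrative)
MAX_SAFE_RATE_ML_PER_HR = 50.0              # ceiling the firmware will never exceed

def signature_is_valid(payload: bytes, signature: str) -> bool:
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

def apply_dose_command(rate_ml_per_hr: float, payload: bytes, signature: str) -> str:
    if not signature_is_valid(payload, signature):
        return "REJECTED: unauthenticated command"   # a spoofed packet dies here
    if not 0.0 <= rate_ml_per_hr <= MAX_SAFE_RATE_ML_PER_HR:
        return "REJECTED: rate outside safe range"   # even a trusted sender can't overdose
    return f"OK: rate set to {rate_ml_per_hr} ml/hr"

# A forged command from the network is simply refused:
print(apply_dose_command(500.0, b"rate=500", "bogus-signature"))
# REJECTED: unauthenticated command
```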

The preceding nightmare is the kind of thing Rebecca Herold lives with every day. A cybersecurity and privacy expert for the past 28 years, Herold, a.k.a. the Privacy Professor, has been trying to get the FDA, device makers, and hospital system administrators (both front office and computer system types) to understand exactly how vulnerable medical devices are to attacks that could eat through a healthcare company’s computer network, or worse.

How dangerous is the potential?

“If I were a terrorist and I wanted to do widespread damage,” she says, “imagine the changes in programs and networks I could make.” Changes like, say, altering the pace of your pacemaker or upping the amount of medicine automatically released into your body, as programmed into the network itself. “It would be a good way to kill a lot of people before anybody even thought about what was happening.”

The issue for Herold has been getting the right people to help her deliver the message so that she doesn’t sound like Chicken Little with an agenda. She will, at long last, get her “dream team,” as she calls it, when she leads a panel at the Biopharma Research Council’s conference, “The Internet of Medical Things: Cybersecurity for Connected Devices,” on Thursday, July 28, beginning at 8:30 a.m. at the New Jersey Hospital Association, 760 Alexander Road. Cost: $395. Register at www.biopharmaresearchcouncil.org/iomt-2016.

Herold’s dream team of leaders in development, implementation, networks, security, and healthcare includes:

Colin Morgan, global product security at Johnson & Johnson;

Roberta Hansen, director of digital product cybersecurity at Abbott;

Robert Jamieson, chief information security and privacy officer at Mallinckrodt Pharmaceuticals;

Kevin McDonald, director of clinical information security at the Mayo Clinic;

Mitchell Parker, chief information security officer at Temple University Health System;

William Ash of the IEEE Standards Association;

Seth Carmody, cybersecurity project manager at the FDA Center for Devices and Radiological Health;

Nicholas Heesters Jr., health information privacy and security at the U.S. Department of Health and Human Services Office for Civil Rights;

Gavin O’Brien of the National Institute of Standards and Technology;

Christopher Rodriguez, director of the New Jersey Office of Homeland Security;

Miranda Alfonso-William of WAM Consulting;

Shelby Kobes, director and health security architect at Kobes Security; and

Antonio Biancardi, vice president of DataForm Software.

Herold was born in Missouri, the daughter of a farmer who became superintendent of schools and a hospital nurse’s aide who became a teaching assistant. She earned a bachelor’s in math and computer science from the University of Central Missouri and later a master’s in computer science and education from the University of Northern Iowa. She has lived near Des Moines ever since.

She started her career in the late 1980s as a systems engineer for a large financial and health company before moving into IT. By 1990 she had spent seven months researching networking and systems security and delivered a report on what the company should do.

“Their attitude was, ‘Well, you studied it, you do it,’” she says. So she created the company’s security policies, procedures, and training.

In 1994 Stanford Federal Credit Union wanted to open the first online bank, which was a great idea except that no privacy regulations existed for such a thing. With no laws on the books, lawyers were reluctant to direct her, so she had to develop her own security protocols. Fortunately they worked, and the federal government stepped in to set up regulations before things could go wrong for those courageous early bank customers. Stanford successfully launched the first online bank that October.

Herold moved into consulting around 2000 and struck out on her own because a company she was working with quite literally called her at the last minute before a client meeting to tell her they were going under. Herold told her client that she could do the job herself, and her own company was born. She founded Privacy Professor in 2004. She has been an adjunct professor for the Vermont-based Norwich University master of science in information security and assurance program since 2005.

In 2009 Herold led the National Institute of Standards and Technology Smart Grid privacy subgroup, where she also led the privacy impact assessment for home-to-utility activity, the first ever performed in the electric utilities industry. She recently launched the Compliance Helper service (www.ComplianceHelper.com) to help healthcare organizations and their business associates meet HIPAA and other information security and privacy compliance and risk mitigation requirements.

Herold hopes that by finally getting representatives from the various levels of the medical and cybersecurity community together, heretofore dismissive manufacturers and engineers, as well as reluctant hospital executives, will get the clear message that there is a serious problem to be addressed.

The dismissals, she says, are often colored with the money brush — i.e., it’s too costly to build in safeguards up front — or with the “not my job” argument. Mostly, though, she gets this reasoning: it’s never happened before; therefore it’s not a problem.

Herold isn’t worried that she sounds like an alarmist because, frankly, she is alarmed with good reason. And the alarm she’s sounding is not like the Y2K scare, which thrived on the “maybe it could happen” supposition. This threat, this vulnerability of unsecured medical devices hooked up to far-reaching networks, is grounded more in the bedrock of “stuff like this is actually happening all the time, right now, and we’ve just been lucky so far.”

Herold isn’t counting on luck to keep us going forever. She prefers the proactive approach to the cleanup role. Essentially, it’s cheaper and easier to build a strong dam than to clean up a catastrophic flood.

Part of the problem lies in the fact that there are no federal regulations requiring medical device makers to build security protocols in during the manufacturing phase. So they don’t. The FDA has guidelines, but those are just that — suggestions and good ideas with no enforcement behind them.

“My fear is that we’re not going to establish or enforce controls with medical devices until something really bad happens,” Herold says.

The solution — set up security compliance regulations from the outset — is obvious. But another part of the problem is that device makers are very good at convincing everyone that new regulations would cause device costs to soar, Herold says.

“People argued that in the ’90s,” she says, referring to the push to put security features into personal computers as we moved from mainframes to smaller, more interconnected systems. That fear never materialized, and she doesn’t expect the one about too-pricey medical devices will either.

But remember, device makers are great at convincing people that new requirements will cost money, and their audience includes everyone at healthcare companies. Hospitals and doctors, Herold says, talk directly to manufacturers’ reps and hear about the problems that pre-installed security measures would cause. The argument goes all the way up the ladder to healthcare CEOs, who subsequently saddle in-house system administrators and IT people with keeping out the viruses and malware by way of the trusty firewall. So far, that wall has held up for healthcare, but that doesn’t mean it always will.
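Stripped to its essence, that trusty firewall makes one decision: is this traffic coming from inside or outside? The hypothetical sketch below (made-up address ranges) shows why that is a thin line of defense: a compromised medical device already on the internal network passes the check automatically.

```python
# A perimeter firewall reduced to its essence: it filters by where
# traffic comes from, not by what an already-trusted device is doing.
TRUSTED_SUBNET = "10.0."   # hypothetical internal address range

def firewall_allows(source_ip: str) -> bool:
    """Allow traffic that originates inside the network; block the rest."""
    return source_ip.startswith(TRUSTED_SUBNET)

print(firewall_allows("203.0.113.7"))  # False: the outside attacker is blocked
print(firewall_allows("10.0.4.22"))    # True: but this could just as easily be
                                       # a hacked infusion pump already on the LAN
```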

“We have to learn from the past,” she says. “We still have the same tech problems we had 20 years ago, but we have to keep addressing new technology.”

Because there’s always going to be new technology.
