Replay: Permanente Live — Keeping patients and clinicians safe from harm


Safety in health care is our priority, and the technology, tools, processes, and culture we use to prevent harm are essential. How can we learn from the best practices and innovations in patient safety across the health care industry? How do burnout and physician wellness factor into the equation? And how can we leverage the power of AI and other technologies to enhance patient safety?

If you are interested in these questions, you won’t want to miss the opportunity to watch the recording of a lively and informative webinar on patient and physician safety, featuring two renowned experts and leaders in this field. You’ll discover how physician leaders can foster a culture of safety, address the root causes of harm, and support the well-being of clinicians and patients alike.

You will also learn about the latest trends and challenges in patient safety, such as:

  • The importance of evidence-based preventive care
  • Bias and inequity in health care
  • Clinician burnout
  • The impact of AI/tech on patient safety

The webinar was hosted by The Permanente Federation, a national organization that represents the interests of more than 24,000 physicians who provide care to nearly 12.5 million Kaiser Permanente members. Guest Marcus Schabacker, MD, PhD, president and CEO of ECRI, a leading nonprofit organization dedicated to patient safety, joined moderator Stephen Parodi, MD, executive vice president of The Permanente Federation.

Don’t miss this chance to gain valuable insights on one of the most critical issues of the day. Watch the recording today and share it with your colleagues and network!

Podcast transcript

Transcript is autogenerated. Although edited for clarity, it should not be considered an exact replication of the podcast and may also be updated as needed.

Stephen Parodi, MD: Hello everyone and welcome to our Permanente Live webinar, Keeping Patients and Clinicians Safe From Harm. I'm Dr. Steve Parodi, executive vice president of The Permanente Federation, which is the national leadership and consulting organization that represents the interests of the Permanente Medical Groups. And I'm really so pleased that we're talking about this subject today because safety really is a bedrock when it comes to health care. At a most basic level, it is fundamental to the functions of medical practice and health care systems. It is also, at a higher level, part of our calling to provide health care that we would want for our own families and for our communities. And increasingly, it is recognized as a contributing concern for why people stay in or leave health care as a career. It can be a source of inspiration or, in some quarters, a source of moral injury. And as an industry, we have been on a multi-decade journey to raise awareness and to actually set and activate the principles of what would be expected of highly reliable organizations.

So I am looking forward to and glad that we are joined by an esteemed guest, Dr. Marcus Schabacker, who is the president and CEO of ECRI, which is a patient safety organization known for its medical device evaluations, hazard alerts, and evidence-based guidance to create a culture of safety that stops preventable harm against patients and clinicians before it occurs. ECRI is federally recognized and is an independent nonprofit and is a well-established leader for health care risk in the safety community. And I’ll just say Kaiser Permanente and ECRI have had a long, storied relationship working together. So, Marcus, it is a real pleasure that you are with us today. I want to kick it off with an initial question. Your career actually spans [back to] when the Institute of Medicine first published To Err Is Human in 2000, as a clarion call to action. Can you talk a little bit about what progress we’ve made and what remains to be realized from that initial report that we saw back in 2000?

Marcus Schabacker, MD, PhD: Steve, first of all, thanks. Thank you so much for having me, having ECRI here with you. It’s truly a pleasure and an honor to be here with you to discuss this most important topic. And to your question, while we have made some progress, much is left to be done. We just had a recent study published in 2023, which used the methodology of the initial To Err Is Human study and found that 1 in 4 patients admitted will experience an adverse event, and 25% or another quarter of those are preventable. This basically keeps us at the same number the initial study called out, which is about 100,000 preventable deaths per year. So now I’ll let that sink in a little bit, 100,000 preventable deaths per year. To give the audience an idea of what that means, we all heard [Boeing] 737 Max [planes were grounded] because the windows blow out. If we translate that number [100,000] into airplanes, it would mean that a 737 Max 9 crashes every single day of the year with no survivors. That’s a shocking number. And from that perspective, lots remains to be done.

Is safety in health care going through a reckoning?

SP: Marcus, that is a great way to dive into this. You referenced the airline industry and I’m thinking about the recent news in terms of the culture of safety that has permeated the airline industry, and a lot of questions about what has gone wrong. And I think about health care in the context of what we were just talking about. In many ways, the airline industry is going through a reckoning. Do you think health care is going through a reckoning or is there a reckoning to be had?

MS: There's a reckoning to be had. I think we're not quite there yet. I think we haven't been quite as shaken up as maybe the airline industry has, yet. And the reason for that is these events are not as public, as visible. They don't happen in one place all at the same time with the media focusing [on it]. It's happening every single day, somewhere, to someone. And that's where it becomes more difficult to draw attention to it. We as a health care industry are certainly at a crossroads. I have seen more change in the last 3, 4 years than I have in my 35 years prior in health care. And some of this is going to force that reckoning. But at the end of the day, this is what it took in aviation in the early 2000s: everybody came together, the airlines, the FAA, the airports, the manufacturers, and said, we want to have a zero harm, zero casualty policy.

And since 2009, there have been no casualties on commercial airlines in the United States. We as a health care industry need to get to that same point. Preventable harm is not acceptable. We want to have zero harm. You and I, we both swore an oath. And that oath as a physician, as a clinician is to do no harm. And what do we do? We're harming patients, not deliberately, but [it's] preventable, every day. So, we need to do more. We are getting there, but we're not quite there yet, as with the airlines.

SP: Yeah, it’s interesting. Marcus, when you bring this up in the context of the airline industry and needing to get people together and groups together, what does that look like for health care? You were referencing how it’s transformed, and the transformation I think about is over a career and the training that individuals have received, where the individual is supposed to be responsible, as opposed to a team. And nowadays with the complexity of care, whether it’s ambulatory or inpatient, you need a team. So, what do you think convening looks like today when it comes to improving safety?

MS: Well, first of all, I think it’s really the recognition and the acceptance that where we are is unacceptable. And that’s number one. Number two is then to really say, okay, we want to take a systematic approach. And that systematic approach has four components. It’s not something ECRI invented, this is something high-risk industries like aviation, the military, [and] nuclear power have long used. Total system safety needs leadership. It needs a culture of zero harm. It needs to involve the patient and their caregiver. It needs to have an agile learning system, where positive behaviors, positive learnings are reinforced and where near misses or misses are identified and then amplified. No shame, no blame culture, but the issues need to be addressed and then fixed. And lastly, and most importantly, is that the workforce, the entire workforce, the nurses, the technicians, the physicians, the administrative part, the janitors, are part of that whole, that team base, where everybody has a voice and everybody has signed up and feels safe for promoting safety for the patient. That’s what it’s going to take. And this total system approach has been proven to be very successful. We just need to apply it. We don’t need to reinvent the wheel.

The impact of workforce burnout on patient safety

SP: I really like that. It's really [about] getting to our people and speaking to our people. You were referencing that in many ways it's a silent epidemic within health care: these injuries are happening day in and day out, and yet it's normalized. So, speaking up is so important.

I was struck by one of ECRI's safety priorities for 2024, which [focuses] on the workforce and workforce burnout. I'm interested in hearing why you picked burnout. That's not necessarily been in the lexicon of safety in the past, so talk a little bit about that.

MS: I think the issue of workforce burnout — and this is equally relevant for physicians, nurses, technicians and others, anybody who interacts with the patient — has a one-to-one correlation to safety. If you have just finished a double shift and now there's an emergency, or you come out of a night shift and you have to perform another surgery, that puts a lot of emotional stress on you. And a blame culture compounds it. A couple of years ago [there was an] unfortunate deadly administration of the wrong drug, [and there was a] scrutinizing and public flogging, so to speak, of the nurse, instead of looking at what went wrong from a systems perspective rather than individualizing the mistake. That all creates this uncertainty.

This creates the additional tension that creates the stress. Staff shortages create stress, which then leads to anxiety, it leads to depression, it leads to fatigue. And that automatically leads to potential mistakes, which then cause even more stress and bring down the person who came to health care to do good, to help, to provide safe care. And they're often put into a position where they can't do it because we don't have enough staff, we don't provide the necessary training, the simulation, we don't have a learning system which helps us stay agile and react to things in real time. So, we really need to change our approach holistically.

The pros and cons of AI in clinical settings

SP: You mentioned workforce and sometimes you'll hear, well, we can solve that with technology. We can automate processes, standardize processes, put forth best practice alerts. We've implemented a lot of those things at Kaiser Permanente. I was struck by ECRI's focus also with the latest recommendations around technology. And you were referencing how rapidly things are changing. 10 years ago, we were talking about how proud we were to be implementing an EHR that's uniform and whatnot, and now, in the last 3 months I've seen I don't know how many different articles about AI and versions of AI, whether it's generative or ambient, et cetera. From an ECRI perspective, what's the promise, and what are the potential issues that we need to be looking out for?

MS: Well, AI I think has the potential to truly revolutionize how we conduct medicine. Just to put it into perspective, the human brain is able to cognitively compute about 4 variables. In today's diagnostic world, we often are confronted with 20, 25, if you include genome analysis, even more variables, which our brain simply cannot compute. And so, we are reducing it down to that 3 or 4, which we then can work on. AI can help, if used correctly, if developed correctly, if validated correctly, to reduce that number of variables to something which we then can act upon. And so that's where the promise lies, but also the danger of that new technology. Where we see the biggest problem with AI in particular is do we understand as a user [that] a) it is AI, [and] b) how was it developed? What algorithm went in there? What was the training pool of that algorithm? Or is it a continuous training pool? Is it adequately diverse in its development when it comes to patient population, when it comes to disease stages, when it comes to ages, when it comes to gender, when it comes to race, and so on? Because we've seen in the past that this doesn't happen, introducing an automatic bias which then potentially gets aggravated depending on where and how you're using it.

And then lastly, when we validate this, can we get enough real-world experience to actually make sure that what we think the AI is doing is actually accurate? And with increasing complexity of that world and increasing complexity of what’s going on in the black box, that becomes more and more difficult. So there’s a call out to regulators and to developers and manufacturers saying that when you develop those tools, which can be tremendously helpful if done right, please make sure that you are aware of the biases you’re introducing, that you try to stratify for that, that you are making sure you get enough medical insight into that technology development.

No disrespect to our medical profession, but most of us are not code writers or algorithm developers. So, the tech people who do the work behind the scenes need to be guided not just by one medical expert, but on evidence. We pride ourselves [as] one of the only or the only evidence-based centers, so we know a lot about evidence. But does that clinical evidence get included into the development of AI? That’s a big call out. There’s a big danger from our perspective that these biases, which get introduced early on, keep on perpetuating and getting worse and worse over time.

SP: Marcus, I’m going to pull on this thread a little bit here because you’re raising a really important question. And you referenced there’s a bunch of stakeholders involved with this. It is not just the health care industry, you’ve got information technology, you have regulators, policymakers. In fact, I think I could cite for you any number of federal- or state-level activity right now in terms of trying to set the rules of the road for AI in general and then for health care specifically.

What do you think “good” looks like here coming up into the future? Where do we as physicians, as clinicians, insert ourselves into this process? Do you think it’s upstream? Do you think it’s during development? Is it post-development? Is it validation? Is it all of the above? And I’ll insert one thing. I am finishing up a book that was written by Eric Topol around AI and medicine and its promise, and introducing back into medicine more humanity, so that physicians and clinicians can spend more time at the bedside as opposed to locked in an EHR. So, I’m curious, what do you think that looks like?

MS: So, it's all of the above, right? The clinicians need to insert themselves early on. We need to, again, take a systems approach. We need to think about the human factors of that approach. We haven't talked about human factors in health care. That's a relatively new concept. So clinical human factors engineering (I would love to spend a minute there if we can) is about understanding in which scenario that AI is going to be used. Is it truly a decision support tool or does it quickly become a decision-making tool? And if that's the case, there needs to be a way more stringent regulatory approach. So, the clinicians need to be left, right, front, center, and back in this process: development, validation, usage, postmortem evaluation. We need to have ongoing evaluations. What is happening with it?

And I'll just remind you, we could talk about EHR for a long time about what that did or didn't do for health care. It certainly didn't increase patient-clinician time, right? It cut that dramatically. But think back to when we introduced computed tomography, the CT scan. We started treating a lot of things which we saw and thought were abnormal, until we realized it's actually not a pathology, it's a variance. And now we know that better, and we are not overdiagnosing, overtreating. With AI, that risk is even greater that we identify things. And then the question, so what? Is this a pathology? Is this truly where we need to intervene or is it something which is just a variance of normal? And I think AI can help with that, if it is developed the right way, if it is validated the right way, if it's tested the right way.

And if it is reevaluated after and has been used for a longer time. As you know, we often learn sometimes years later that some of the things we believed we were doing good were actually harmful. And depending on how often the technology is used, we might see it only after a longer period of time. So, if you have a device or AI, which is used 10,000 times a year, it takes a while to show the 1 in 100,000 or 1 in 100 million adverse events. And so that’s why we need to be extra diligent there.

We cannot take the physician, the clinician, out of the equation, no matter what.

The need for more simulation training by clinicians

SP: You are so right. This has to be informed by clinicians, and clinicians that are actually practicing at the bedside or now increasingly at the home site.

Let me pivot here. I know ECRI has an interest in and a focus on simulation and the use of simulation to prepare for, anticipate, and learn from each other as teams. And in addition to that, the opportunity to introduce technology to enhance [simulation], such as virtual reality. So, I'm interested in where you think that needs to go from an industry perspective.

MS: We can't do enough of it. It's never going to replace hands-on training and experience. I grew up in a time when "see one, do one, teach one" was the mantra and — we talked about burnout earlier — it caused tremendous stress on health care providers and clinicians because they were just not equipped [to do it]. It was not figurative, it was literal: they showed you once how to do a suture, the next time you did it, and then you taught somebody else. And a suture is probably the most benign example I could take here. I grew up as an anesthesiologist and intensive care specialist, and anesthesia was one of the earliest disciplines where simulation was an accepted teaching tool, even in the nineties. And so, simulation labs popped up and were more frequently used.

Every other high-risk industry uses simulation as the key tool to improve safety for the employees, for the workers, for whoever's on the receiving end, whether it's aviation, the military, whatever. Simulation may be in connection with virtual reality, when and if appropriate. But it's simulation and repetitive training, aided by things like a team-based approach, checklists, and other cognitive aids, that help you deal with situations you're not dealing with on a day-to-day basis. We need to build that muscle memory, particularly for those things which don't happen every single day.

And again, anesthesia is a great example. Malignant hyperthermia, one of the most catastrophic events in anesthesia, happens very rarely. But when it happens, seconds matter, truly seconds matter. And you need to include the entire team, from the surgical team to your anesthesia assistant to the nurses in the room, to get that under control. But if you don't practice that, if you don't dry run this, if you don't have your checklist of what to do and where to get the meds, if the meds are not ready at the bedside, that patient will die. So, we cannot overstate the importance of simulation, in connection with hands-on training, in connection with cognitive aids.

How bias in health care harms patients

SP: Marcus, earlier in the context of AI, you touched on the topic of bias. Certainly, there's a lot of concern when it comes to the application of that particular technology and being able to understand and monitor it. But to be honest with you, some of the inherent bias that we're worried about being introduced with AI comes from the AI taking data that we've already put into the EHR from our prior shared experience, which, I think what we're saying is, is potentially inherently biased.

So that is an interest of ECRI's and it's called out specifically. What do you think we need to be doing to be more situationally aware of that potential inherent bias? What do we do to correct for it? And let's take AI out of the equation; just as practitioners ourselves, what should we be doing?

MS: See, the biggest problem with health inequities is that it is not generally accepted that they exist. That’s the very first step. We need to all look into the mirror and say, health inequities are real. Now, I’m not saying anybody comes to their job in the morning, in the hospital, in an outpatient center or anywhere saying, I want to treat X, Y, Z person differently today than I treat anybody else. But we all have implicit biases. Some might have even explicit biases, but we all have implicit biases, which means we’re not aware of it.

So, we all as a health care industry need to, number one, accept that it exists. And then number two, proactively challenge ourselves and [ask], where does that come into play? I’ll give you a few examples. If you are a Black mother in the United States, independent of where you live or your socioeconomic status, you have a 30 to 40% higher chance to have an adverse event to you or your baby during pregnancy and early childhood.

That is well documented, there's no question about it. And if you are a Black man in the United States, you [have a] higher chance of being misdiagnosed or not diagnosed for kidney disease, and that is because we calculate the estimated glomerular filtration rate (eGFR) wrong. If you receive a pulse oximeter and you have darker skin, your chances of getting a wrong reading are very high because these algorithms were developed on white skin. That is real. That is not something which somebody is jumping on a woke bandwagon about, but we are not talking about it. And if we don't address health inequities, we will not fix patient safety. Full stop, end of story.

And I can go on with diagnostic errors, right? So diagnostic errors, if you are a minority — and this is not just race, this is gender, if you belong to a certain minority from a religious perspective, from a sexual orientation perspective, if you are disabled, if you are a veteran — inequities are prevalent throughout. And the first step is a) recognizing that they exist, and then b) being proactive in saying, okay, let’s analyze where that happens. And how you can do this is you can look at your safety records, you can look at your adverse event records and say, okay, how many of those happen in minority groups? And you will be shocked, I promise you. You will be shocked [to find that you think] this is equally distributed, it is not.

A total system approach to improving health equity

SP: Marcus, you’re raising such an important point here, which is you’ve got to first recognize it, and then second, what you’re getting at is measurement. And that’s not something that we’ve typically [mandated at a national level]. Now, it became an area of interest during the pandemic, and [gained] greater recognition that there were disparities when it came to access of care and of certain treatments and vaccinations. What do you think that looks like in terms of actually measuring this, going into the future? What do you think either from a statutory standpoint or even just from the standpoint of managing a health system, should we be thinking about?

MS: That's the first step. If we measure it, we can do something about it. And as the saying in management goes, what gets measured gets done. So, if we start measuring it, we actually can implement change. And we can see if what we implement is actually working or not working, because not everything we do is going to be the right answer for the problem. And that's okay. We don't have, ECRI doesn't have, nor anybody really has, the silver bullet. It's very situational. It depends on where your health care system is. [If] you're in a rural setting, your problems are going to be very different [than] if you're in an urban or a suburban setting, or if you're in California or if you're in Texas.

So, I think there’s the situation, accepting that inequities exist, identifying areas where they’re most prevalent in your organization, and then setting up a plan. And not here’s the 50 things we’re going to do different; take 3 and then implement the 3 and measure again. And once you see, well out of the 3, 2 really work, the third one not so well, then let’s go to the next 3 or let’s take the next topic.

And again, when we take this total system approach, which means leadership, patient and caregivers, plus the workforce with an agile learning system, you're going to come to a good solution. And then you include the human factors approach in this, saying, okay, was the system designed with a human in mind? Because the human will fail, there's no question the human will fail. And so how do we create enough redundancies in the system, and how do we design the system around the human in the center, be it the patient or the workforce or the health care provider, nurse, clinician, technician, whoever it is, making sure that they have the best possible chance to succeed rather than dumping them into the system and saying, well, see how you can cope. It's a fundamental shift in how we think about delivering care.

SP: And in many ways, we have to, when we talk about precision medicine, personalized medicine and reaching people where they are, that’s what you’re really speaking to.

MS: Yes, the patient-centric approach is important, but the workforce that's actually supposed to function in this system is equally important. We need to understand what the inherent shortcomings are. I mentioned earlier the cognitive limit of dealing with four variables as just one example. But I talk about this as two systems that coexist: there's the medical system, and then there's the administrative system that has to take care of all the things which need to happen to be able to deliver that care. And they are not necessarily conducive to each other. So how do we create these systems so they can function better together? Think about it as a clinical operating system. On your phone you have an operating system which makes all the apps talk to each other. Now think about a clinical operating system where all these different functions actually can work together. Those are some new concepts which have been proven to be very successful, not only in reducing patient harm, increasing patient satisfaction, and increasing workforce satisfaction, but in actually being financially profitable. Interesting concepts.

SP: That's fascinating. And we are getting some questions in here, Marcus, so I'm going to turn to one of the audience questions, which I think is an interesting one. ECRI has been very much focused on devices and device safety. And when you think about the past few years (respirators, syringes, I can name any number of other things that have had to be recalled), it seems like the pace and number of these recalls has picked up. Any sense of why that's happening and what should be done about it, if anything?

MS: So, the numbers don't quite support that feeling. We have seen a pretty steady number of recalls across the system. There was a slight uptick in 2023, but the total number of impacted devices was actually lower than in previous years. So maybe there's more attention on it, which is actually good.

We called out last year and this year in our safety reporting that, with the increased home use of devices, the industry and regulators need to recognize that there need to be more effective ways of recalling products. They cannot rely, as they can in a hospital, on a functioning supply chain with good control of where those devices are, who's using them, and how to get them back; even there it is sometimes difficult.

That doesn't exist for home care, and this is something the manufacturers need to fess up to, the regulators need to enforce, but the prescribers also have a responsibility there in informing their patients adequately and saying, hey, when you have an incident, make sure that you reach out and we can help you recognize that. And so, there need to be better mechanisms in place, particularly for home use devices. And the devices that are going to be used in the home need to be developed for the home, which is another big issue. We could talk another hour about that, but that's a call out to the industry as well. You know that your device, which was developed for a super user like an intensive care nurse, is now going to be used in the home by a layperson or community nurse who doesn't have the same experience or knowledge about the [device]. So now you're expecting somebody who usually drives a Ford or a Prius to the hospital to drive an F1 race car. That is not going to work.

I’ve spent 20 years in industry developing products. There are many things we can do as an industry to make sure that these products become safer, and we need to be more vigilant. ECRI has taken that very seriously and through our network of members, we often are able to alert the health care community about issues early on. And that is a very important task we have and something we take very seriously.

Ensuring patient safety in care at home settings

SP: You’re touching on a fundamental shift of what is occurring here and now, and was accelerated during the pandemic, when it comes to the complexity of what we’re providing in a home. We’ve got examples now in the industry of what we call advanced care at home, which is basically hospital-based services in the home. But let’s take it further than that. There’s any number of skilled-type services that are occurring in a home or increasingly complex care in skilled nursing facilities, long-term care facilities. I mean, there’s a lot more going on than just the hospital itself. So much of the initial safety journey had been focused on health care harm that was occurring in the hospital. There are a lot of other stakeholders now involved when it comes to care in the home and they don’t all have an MD, PhD after their name or even a license after their name. So, Marcus, can you comment a little bit on that? What do we need to be thinking about as an industry in terms of what other stakeholders we need to get engaged? How do we inculcate a culture of safety to a broader audience?

MS: That is probably one of the most discussed topics: what we call the shift of care areas. Primary care is still going to be very important. A lot of the complex procedures, like open cardiac surgery, vascular surgery, neurosurgery, polytrauma, will still stay in the hospital. But more and more we see care shift into the ambulatory space, or very quickly from a very short [inpatient] stay into the home.

And then you mentioned skilled nursing facilities, long-term care facilities. We have an aging population with higher demands for sophisticated care, because we're getting better at treating initial disease stages, and then complications and additional diseases happen. So care is getting much more complicated in settings which were not necessarily conceived for high-acuity care but are used as such. And you're exactly right, we developed a lot of guidelines, a lot of best practices, a lot of regulation around in-hospital care.

We made strides, though not enough, as we discussed earlier. But now that care is shifting into an area which is largely unregulated, and that scares us. That scares us because we don’t have the same understanding in those communities, with these stakeholders, of what risks they’re taking on, and that leaves them woefully unprepared to deal with them. This is not a ding on people who work in these facilities. We have simply shifted a lot of the problems we had in the hospital into other care settings without equipping them adequately with all the things we discussed earlier: technology, training, clinical pathways, best-practice guidelines, and so on. That carries tremendous risk, and we’re very worried about it.

What is it going to take? Well, to start with, it takes an informed consumer, the patient, and the patient has got to speak up. The caregiver has got to speak up and ask: are you equipped to deal with my father or my mom, who needs a 24-hour infusion, who needs a ventilator assist, who needs whatever it is?

Number two, as I mentioned earlier, manufacturers need to recognize that this is a very different user group, and they need to design the equipment that’s going to be used in these new environments for those environments. Then we need to adequately train the people who are going to use it or support it. We need to incentivize the facilities that are actually signing up to make sure these technologies are used appropriately and their staff is equipped and trained. I don’t believe in penalizing people; I believe in incentivizing people. These facilities are already facing staff shortages. If we now increase the complexity without equipping them appropriately through technology, training, guidelines, and support, it will get even worse than it already is.

Creating a culture of safety in the workplace

SP: Marcus, there’s another question that just came in that I think is an important one, and it reminds me of a conversation I was having with a fellow leader at Kaiser Permanente. Here was the basic question they asked me: is safety still a priority? That was the basic question, and it was in the context of a lot of other competing priorities. One of the things that has become important, and we touched on this earlier with some of our questions, is that membership and retention of our people is critically important. The workforce is critically important, and attitudes toward safety have a significant impact on whether people stay in or leave health care. Actually, I would posit it probably pertains to almost any industry: if you’re not feeling good about what you’re doing on a day-to-day basis, you’re going to leave. So, here’s the underlying question they were asking: can you talk about the importance of a speak-up culture and psychological safety within a team or within health care? And from an ECRI perspective, are we doing enough? Do we need to be doing more? What does that look like?

MS: No, we’re not doing enough, and we need to do more. This is where the total system safety approach comes in, where the safety, well-being, and comfort of the workforce is a critical component, if not at the center. It’s the same when we talk about the clinical operating system, where the person needs to be at the center and the system needs to be designed around them with human factors engineering. That is exactly what we’re talking about. And on top of that, it needs the cultural aspects. Some call it a woke term, but if you think about what it comes down to, a culture of belonging is one where we as leaders create an environment in which people are comfortable and feel they belong because they’re respected for who they are, respected for what they bring to the table, and they feel safe speaking up — even if their voice is the outlier — with at least the sense that they will be listened to and taken into consideration.

That’s the true meaning of inclusion, not simply having different races and genders sitting around the table. That’s not inclusion. Inclusion is when everybody can be their best self the way they are and feel comfortable. That’s the concept of cultural belonging. It is up to leaders in health care, in the delivery area and in the administrative area, to create a safe workplace where people can show up and be their best selves. And there’s a large body of literature showing that if you are able to create that environment, you will have higher productivity, less turnover, more innovation, and better outcomes. So, there’s no reason not to do it. There’s simply no reason not to do it. And that old culture of blame, shame, fear, and punishment is a thing of the past.

SP: Marcus, that is a great answer. Okay, I saved the best for last. I’ve got a hard question for you. This one came in here, and I’ve had to reflect on it myself, too, as a safety leader. We’ve talked about the concept of zero harm for a long time, and getting to zero harm. And I’ll paraphrase a little bit of our previous conversation: zero preventable harm. Is that possible? And if we think it is possible, what’s it going to take to reach it?

MS: I wouldn’t be here if I didn’t think it’s possible. I haven’t given up on that yet. I know we have a long way to go, and maybe I’m not going to see it during my active time, but that’s what gets me out of bed. This is what makes me excited. That’s where my passion is. As I mentioned before, we swore an oath: we shall do no harm. So, let’s aim for that. Let’s not accept anything less. Let’s not be okay with, yeah, there’s going to be some collateral damage. No, it starts with us. It starts with the mindset we have. And if we say, no, we’re not going to accept this, then everything we do in our daily practice is going to be measured against that north star. And I speak for myself: have I taken shortcuts in my physician life? Absolutely. Had I had the same thinking I have today, that would’ve been much less likely.

So, it starts with the general acceptance that this is achievable, and that it is something we all sign up for. And this is the other thing: you can’t do this alone. We can’t do it for just one area. We can start there, but at the end of the day it’s going to take a systematic approach, and I’m sounding like a broken record, I’m sorry. But it takes these four aspects, and it’s been proven over and over again: leadership and culture, patients and their caregivers, an agile learning system, and a safe workforce environment. Those are the four critical essentials. Combine that with putting the workforce at the center and creating a clinical operating system around them, one that allows them to practice safely and to call it out when that’s not happening. Those are the key ingredients. It’s not rocket science, but it’s hard. And we’ve got to start somewhere and then develop from there.

When people hear me talk about total system safety, they say, oh, then the whole system needs to change all at once. No, you can start in a ward. You can start in a department, you can start in one OR. It doesn’t matter where you start, but you’ve got to start, because then you’re quickly going to see the benefits and the positive effects, and others will latch on to it.

SP: I really think this is a great way to tie it up, because what you’re describing is systematic optimism. We have to continue to push. You were referencing the OR: if we had said retained foreign objects are just something we have to put up with, a cost of doing business, we would’ve made no progress. And yet there’s been massive progress, and I could cite any number of examples. It warms my heart, Marcus, as an infectious disease physician, to know that we’ve had far fewer health care-associated infections (HAIs) in the last decade than we did before. So it’s a journey, as you referenced. Thank you so much for joining us, and thank you to everyone who joined this webinar and watched. Based on what we know from all the research out there and from Dr. Schabacker’s remarks, we’ve got work to do to continue to build on our foundations and our belief in patient safety.

And while technology, AI, and value-based care are all good, exciting, and interesting, it’s us. It’s us as clinicians who are the foundation from which patient safety emanates. Ensuring that patients remain safe is one of our most basic obligations as physicians and clinicians, and using technology to our advantage and innovating beyond it is really what we’re all about. And despite the scale of the issue before us, many hands make light work. Dr. Schabacker, thank you and ECRI for all of your contributions to improving patient safety in health care. Thank you all for joining us today, and I hope you all have a good rest of the week.