Can we keep our patients safe?

Asking an expert the tough questions with Craig Clapper

Podcast by Gayle Porter RN CPHQ

This Healthcare Quality Week, I had the honor of sitting down with esteemed High Reliability Organizing (HRO) expert Craig Clapper to discuss my toughest questions about patient safety and quality improvement. He has a background in nuclear engineering and a gift for good humor, resulting in a conversation that was both compassionate and thought-provoking. This feature-length episode covers the human side of high reliability, the number of rodent hairs acceptable in a jar of peanut butter, and a practical approach to barriers in data collection and process adoption.

“Things that make systems brittle are usually machines – but the things that make systems resilient are us: people always have a head to think and then they have a heart to care, and that makes us more resilient than things like paperwork and computers which cannot think and cannot care.”

Key Points:
-Compassionate outcomes and HIPAA
-The future of patient safety
-Configuration control and change management

-Reliability 4 Life:
-Craig Clapper PE, Founder and Chief Knowledge Officer at Reliability 4 Life:
-Clapper, C.; Merlino, J.; Stockmeier, C. (2018). Zero Harm: How to Achieve Patient and Workforce Safety in Healthcare. McGraw Hill Education.

Gayle Porter RN CPHQ

Gayle always wanted to be a detective, and she often roamed around the neighborhood with a notebook trying to understand why the world worked the way it did. Today, she still loves solving mysteries in the data that will help real patients get better care.

With more than a decade of experience in healthcare quality improvement, she has helped more than 100 hospitals improve their processes and patient outcomes. She is a Certified Professional in Healthcare Quality (CPHQ) and the recipient of multiple awards for innovation and excellence. She has clinical experience in the Intensive Care Unit (ICU) and expertise in abstraction improvement and validation, solutions analytics, instructional design, and project oversight. She is passionate about collaboration and mentorship in the field of healthcare improvement.

Podcast Transcript


Craig Clapper, Gayle Porter


Sometimes it feels like becoming a high-reliability organization is impossible in healthcare because we are just so human. We care so much, but then we mess up, and it can be really challenging to pick up again. That is why I am grateful to have Craig Clapper join us today. He is a co-author of the book Zero Harm, one of the seminal works in high-reliability healthcare and patient safety, and the Founder and Chief Knowledge Officer for Reliability 4 Life, with 30 years of experience improving reliability in the power, transportation, manufacturing, and healthcare industries. As a nuclear engineer turned patient advocate, he specializes in failure analysis, event analysis, reliability improvement, and safety culture improvement. In summary, he has taken the chaotic world of healthcare safety and turned it into a science that anyone can adopt and use. So, if you’re feeling stuck and want to know how to strengthen your patient safety program, as well as inspire your colleagues, grab a cup of coffee or tea and join us for the conversation.

Mr. Clapper, thank you so much for joining us today.

Craig Clapper: Thank you Gayle for having me and you're welcome to call me Craig.

My first question for you is just to ask about what brought you into this field. I can tell that you care a lot about patient safety. You've often shared patient stories on professional media like LinkedIn, and I also know that your background is in nuclear engineering. So, why does healthcare matter to you? What is it about these patient stories that would make you drop something as interesting as nuclear engineering to raise awareness about safety procedures in healthcare that maybe only a few people, like scrub nurses, would even know about?

Craig Clapper: I didn't really make a conscious decision to transition from nuclear power to manufacturing, then transportation, then healthcare; it just kind of happened. I think that healthcare is important to me because it's important to everyone. Everybody receives care. Everybody's a patient at some point in their life. Many of us know caregivers and providers. So in my work in nuclear power, I worked in reliability, and I thought I was helping the machine to become more reliable, but I realized that it was really about people, and how people work together to achieve system reliability, of which the machine was only a part. And then I worked in that same business in transportation, and then for a couple years in manufacturing, and then for the last 20 years in healthcare -- that kind of makes me feel old when I say that. And I think it's because healthcare is so large, and the interest is so keen, that so many people in healthcare have a general interest in making it better, but also a specific interest in things like safety, quality, patient experience, and caregiver engagement.

Gayle Porter: That's so true and it keeps growing, right?

Craig Clapper: Leaps and bounds.

Gayle Porter: I've enjoyed the patient stories, the case studies and the applications that I read in the book, Zero Harm -- and you were an author for that book -- because it illustrated the process so clearly. It makes this system methodology much more relatable. But even as I was reading the stories, I thought about the risk involved in sharing any of these error stories with patients. Unfortunately, these fears often inhibit the transparency of the safety culture and transparency about safety issues to the people that matter most in healthcare, which is our patients. So how should we include our patients in ensuring their safety in a high reliability organization? And how would you recommend that healthcare leaders could make those interactions more fruitful?

Craig Clapper: Yeah, that is an excellent point. Transparency is a key factor, and caregivers and providers really can't solve problems that they can't see. They have to be able to see the problem to solve the problem. In my practice, I do post safety stories on LinkedIn, and in our book Zero Harm we often try to illustrate complex ideas around a story that's illustrative, and both of those are a little harder in healthcare than in the other industries in which I've worked. I think that the first thing that I recommend on transparency is to always disclose to the patient and family. If you've disclosed to patient and family, then the only people that can hurt you legally already know about it. So then it becomes a question of what about the rest of us, and can we benefit from that? And there, I urge safety leaders to be as specific as they can. If they've asked permission from the family or from the patient, and they've granted that permission, then use the name. That helps. And talk about the details, and if our people are part of the story and they do something good, name names. If they're part of the story and maybe it's not their most flattering moment, then we can politely just talk about their role or their job. People learn from story. They're much more likely to remember when they learn through story, and the fact that I remember that so well is because the person who told me that illustrated it in a story.

Gayle Porter: That's very true. And I like what you said about asking the patient if we can share their story to improve. Do you think that it makes a difference to add purpose to it?


Craig Clapper: Often you'll hear from the patient that they just want some good to come out of this. Or the family says that we don't want this to happen to another family. So I think even when they receive poor care, they still view themselves as part of the improvement if they can get their story heard.

Gayle Porter: I think that's really helpful. My next question refers to some other fields, which I know is a skill for you because you've had such a broad experience. Are there some situations that cannot achieve zero harm? The first thing that comes to mind is parenting. If I could get a guarantee on zero harm parenting, I would buy it, but I already know that I would struggle to achieve 100% consistency in the way that I raise my kids. I read that in parenting books: if you're just consistent, it'll be fine. But I know I'm not. Another example is that a lot of biological industries, like food processing and water treatment, have a margin of error that's considered acceptable. And I'm daring to ask the question -- in those fields, if a certain number of contaminants are found, it's okay, and they pass inspection. Is there any margin of error that is acceptable in healthcare? Do you have any recommendations for someone who's struggling with the human side of becoming highly reliable?

Craig Clapper: Thanks Gayle. It's been a long time since anybody asked me a question that involved how many rodent hairs could be in the peanut butter, but that was the implication of your question. And to your thought on parenting, I talked big about leadership until we had kids, and then I realized that this is a lot harder than I thought, and I kind of scaled back some of my advice to leaders on how to do things. My thought there on zero harm is that it's not really a destination. So I'm kind of with you in that I don't think anybody would ever reach zero. But it's about improving the care. So unless it's zero, you're still looking to do quality improvement. You're never satisfied with being average or getting into that top decile. If there are defects or errors, you're going to look to do something that's in the best interest of the patients to have fewer of those defects and errors. And if you look at it from a math standpoint, we can go from one out of a thousand to one out of ten thousand, one out of a hundred thousand -- maybe one day, one out of a million -- but it never really gets to zero. But we do know that it gets better. It's better for the patients who are receiving your care, and it's better for those caregivers and providers that are devoting their life to giving the care. But what's your thought on your own question? Do you have an answer for how you get to zero, and is it even possible?

Gayle Porter: I struggled with it, and it really leads into my next question about discouragement. But I do think about the zero harm mentality and the way that it can be projected to the public. We have signs throughout facilities that say, aim for zero, zero harm. And when it comes to a mistake, and a jury is reviewing it, then we're asking the question, was the nurse acting in their best judgment? If zero harm sounds like perfection is the goal, that can be a really rough place to be, and it can feel like you're in a shark tank even when you're talking to your own administration. And so I've wondered about that. And the whole mentality of the Smart Catch is a really great thing -- but it's a palpable struggle for every clinician at this point, I think.

Craig Clapper: Yeah, there are lots of important issues just in that one statement. Everybody is saddened when there's a bad outcome, and if we contributed to that in some way, then it's even harder for people to come to grips with it. I think you touch on the idea of just culture, and in just culture it's all about honesty. If our people are acting honestly, we should stand behind them, and support them, and work to improve quality within their delivery systems. But then if they're dishonest, then they've honestly earned some sort of consequence for themselves, and we could save them with some good progressive discipline. I know that sounds kind of funny to say, save them with progressive discipline, but I'd prefer people to learn inexpensive lessons earlier in their career than to have bad practice habits and then have some harm event and somebody lose their job or lose their entire career. So I know that the support of people is important to that, and Jeff Raskin said it better than I can: he said that we're all human first, and then either expert or novice second, but there's no such thing as a person who never makes a mistake. So, while we talk about zero, zero is only possible on events of harm.


Craig Clapper: And events are usually made up of active error and latent weaknesses. Error will always be with us, so we should strive to have fewer of those and support each other as humans. That’s where we experience the human error journey.

Gayle Porter: So I have a follow-up question. There's something interesting with HIPAA policy, in that if I take care of a patient on one day and then I'm not taking care of them on the day of discharge, I don't get to find out if they lived or died. Do you think that knowing patient outcomes is something that would contribute to just culture, or do you think that it's not relevant?

Craig Clapper: That's a good question about HIPAA and patient confidentiality versus us and our desire for closure or even a form of broad learning. I feel kind of like I’m in a bad position having to say something nice about HIPAA, which I think is just kind of out of control. It was originally intended so that we would have better access to our own medical records. And shockingly, it's become more of a sword than a shield and I see a lot of good people lose their jobs every year over these HIPAA violations. I think we have a natural desire to come to closure, and personally, Gayle, I don't see why it's a HIPAA violation to learn about patient outcomes. I remember when we started the Code Lavender -- are you familiar with code lavender where we can respond to support caregivers?

Gayle Porter: I'm not.

Craig Clapper: There are several ways to do care for the caregiver, and the code lavender is a specific way to support our own folks. The people that developed the code lavender kind of thought that maybe there would be a few activations in a big medical center per year: if we harmed a patient, we could activate this lavender cart and we could help our people. The big medical centers found out that they activate the code lavender often, and of those activations the most common is the death of a patient -- when our patient died because of their illness or injury -- and that affects us. And we need to support our people. So I think, in the light of that idea, I support people learning about patient outcomes and being able to follow patients to other care settings and know what's happening to them -- beyond just a note that the patient sends to us later to thank us for our time.

Gayle Porter: It's so nice to hear that because I always wanted to know, did I do a good job? Did it turn out well?

Craig Clapper: I remember a story from long ago. Dr. Ernest Codman, probably a pioneer in quality in healthcare, was a physician in Boston, and he was so interested in outcomes that he left Massachusetts General and founded his own hospital, which he called the End Result Hospital. Each patient got their own card, like a medical record, where he would follow them through their outcome. And then he could reflect back on the care and whether he wanted to make adjustments to the care. So in your world, Gayle, I would say that we deserve to find out every time how our patient turned out. I think it's very sad that just because I'm not caring for the patient that day, I couldn't learn about what's going on with them.


Gayle Porter: Absolutely, thank you for that. My next question, then, is that I've heard that there are patient safety professionals who have expressed discouragement about their work. They've confided to me that it's frustrating that sometimes the same problems persist after years of trying to eradicate them. In the quality improvement cycle, when we've tried to address a problem but it's still occurring, we might say that we were chasing the wrong problem, and there would be a need to reevaluate as part of that cycle. So how should patient safety leaders evaluate their interventions if a problem doesn't resolve, and how can they use high reliability principles to find the right pain points and solutions for those persistent problems?

Craig Clapper: Another big question. Yeah. And you had mentioned earlier that we have the book Zero Harm, which talks about the approach -- and I was proud to work on that -- but it's really the healthcare systems that did the work that wrote the book. We were just more the editors that put the stories together. I think if I wrote another book, I'd write a book called 20 Years A Sophomore. I really felt that when we came into healthcare with these ideas for safety culture and high reliability organizing, that we could do that first year, the freshman year, and then move on to the second year, the sophomore year, and then eventually have graduate studies. But now, for the last 20 years, we've been stuck in our sophomore year. And then to make it worse, people like Beth Daley Ullem and Carol Hemmelgarn have written articles like Who Killed Patient Safety? And notice the name of the article isn't "Is it dead?" They're just asking outright, Who killed patient safety?

Gayle Porter: And who to blame.

Craig Clapper: Yeah. So I do think that that's a problem for us this year. A lot of the energy that went into the patient safety movement seems to have dissipated -- that, along with other problems like COVID challenges and disparities in outcomes. I think that the best thing that we can say about safety and high reliability is that it's consistent with our quality principles, and we should stay close to those quality principles. So some fundamental questions like, Is our care improving? And then if our care is not good and improving, then what should we do about it? And that's what I admire about your book, Quality for the Rest of Us. It took something huge and made it accessible for everybody. Anybody can write a thousand pages about quality improvement in healthcare, and many have, and your book put that into a hundred and fifty-nine pages -- I thought that was very skillful. And we need to stick with those basic principles: engage people in talking about what's better, put practices in place to become better. I think we've kind of made it too complicated, and we've kind of hidden it behind the bureaucracy and the complicated statistics. That's my thought.

Gayle Porter: Thank you so much for reading my book! I really appreciate that. So along the lines of looking at the future and moving beyond the discouragement, I wanted to address sort of a complicated question. Recently, there was a case decision that caused a lot of concern about underreporting of errors, because a nurse named RaDonda Vaught was criminally prosecuted for a fatal drug error. This happened in Tennessee, and she did self-report the error. And there is a lot of pushback that people are going to be afraid to report because of this trial result. The thing that I found really interesting about the case, though, is that the verdict weighed heavily on the fact that she did not monitor the patient after she gave a risky drug. It was not the classification or the outcome or the type of med error or even the reporting that played into the decision -- it was whether she recognized the patient's distress and tried to rescue in a timely manner. So what role does recognition and recovery -- rescue from an error -- play in a high reliability organization?


Craig Clapper: Yeah, thanks for asking that. I'm familiar through the media accounts with RaDonda Vaught and her patient, Charlene Murphey. Because Charlene -- well, Murphey's family said, "Mom would have forgiven her, but apparently Vanderbilt would not." They terminated her, the state of Tennessee took away her nursing license, and then she was prosecuted in criminal court. So I think it sent a shock wave through all of healthcare when it happened. And in reliability, there are really two forms of reliability. There's the first form, which is kind of made up of: automate, put everything into the electronic health record, standardize it, make it step-by-step. Those things make healthcare more reliable, but brittle, in that when they go badly, they can result in loss events directly. But there's a second kind that you had touched on, which is called resilience. And that means that things don't always go as planned, so we have abilities to watch for problems, realize there are problems, and make an adjustment to recover from a problem. The word resilience comes from the Latin, which means to bounce back. So I made a mistake, I gave the wrong med, but in monitoring I realize it and we kind of bounce back. In fact, Gayle, a better expression here might be that we bounce, but we bounce forward, because we're not trying to recover from this bad outcome; we're trying to get back onto a success path. So I think the value in your asking about RaDonda Vaught and her patient, Charlene Murphey, is that we can put more emphasis on the resilience aspect of the work system and make our future high reliability work on both reliability of the first kind, which is managing the expected, and reliability of the second kind, which is managing the unexpected -- which is actually the name of the Weick and Sutcliffe book -- I just borrowed their title to make my point. And complex systems are made up of both.
Things that make systems brittle are usually machines, but the things that make systems resilient are us, because people always have a head to think and then they have a heart to care. And that makes us more resilient than things like paperwork and computers, which cannot think and cannot care. I could tack on an example, if you like. I have a candy example and a tree example. Do you know which one you want to hear, trees or candy?

Gayle Porter: Candy.

Craig Clapper: So, toffee is strong, but brittle. If you bend it, it'll just snap. Caramel, which is made up of the same things, like sugars and milk, is more resilient, so it kind of stretches and absorbs a lot of that energy. Interestingly, they're made out of almost the same basic ingredients, but toffee is prepared at a higher temperature, so that the sugar molecules are shorter in toffee and they don't connect very well. But in the caramel, the sugar molecules are much longer and they connect with each other. So, as you put stress on the caramel, it just kind of bends and elongates and it's able to sustain. I think that's a lesson for the folks that are listening to you today. In care systems, if you can connect with each other, you're more resilient and maybe you're more likely to identify that problem that you touched on and make an adjustment to preclude harm.

Gayle Porter: It's like overlap instead of a pressure point, right?

Craig Clapper: Yeah, that's a good analogy about an analogy.

Gayle Porter: So on a similar line, I'd like to ask about how you would get data to show whether you're being resilient -- whether you're doing a good job of moving forward from an error. At the national level, we have lots of data to identify the top error types. There are lists of them. Most clinicians, if you ask them, know the high-risk activities in healthcare. They spend a lot of time, though, thinking about what it will cost them to report an error when it occurs. That questioning can actually delay recovery of the patient from the error and lead to worse outcomes. Do you think that we have enough data to shift our focus from individual error incidents…


Gayle Porter: …Toward rescue reporting or something along the lines of that forward motion after an error? In other words, could we reduce patient delays and deaths by focusing on the speed and efficacy of our rescue efforts rather than the severity of our mistakes after the fact? Or would we lose our zero harm compass in doing that?

Craig Clapper: Right now, we're awash in data. In fact, we have a lot more data than we actually use to improve care. If you look at the data, it's mostly data about what. Where I think we need to invest is in more data about how and why, and why data is very hard to come by and very expensive. I like your thinking, though, on the resilience aspect: what data would you need to look into your delivery systems and decide whether they're brittle or resilient? I think looking at events that are broken early in that progressive error chain would give you some insight to that. I think looking at the how and why of events that continued on, especially over time, where we just lacked the ability to break out of it -- I think that would be valuable as well. I just think overall we need better reporting of harm. We know that harm is always underreported. So, how badly is it underreported? I think that's the question. I'd look at some technologies, like risk trigger monitoring by Pascal Metrics, to give us better insight. With risk trigger monitoring, you're not so much dependent on your reporting system, in that you get your harms right off your electronic health record using event triggers. So that way there's less burden, in terms of time, for caregivers and providers. They don't have to enter reports, and then they don't have to worry about whether they're getting a friend of theirs in trouble or getting themselves in trouble. I think that it takes a lot of those things out of the picture.

Gayle Porter: That those are heavy components.

Craig Clapper: Yeah, but overall, I think we're looking for a solution that maybe is a good idea on the part of somebody who hasn't thought of it yet. And that showed up in your RaDonda Vaught case study, in that she had self-reported and it still didn't help. I firmly believe that we in healthcare have poor whistleblower protection, and that other industries have better whistleblower protection. Also, I don't like the expression whistleblower -- that always sounds like you're a troublemaker, and people that report problems always have good intentions. But look at the Aviation Safety Reporting System: if a licensed pilot reports their own mishap, the FAA waives discipline against the pilot, and I just wonder why we don't have that in healthcare. If a caregiver or provider puts that into the learning system, then we're going to waive disciplinary action against that person. It just seems such a logical thing for us to do.

Gayle Porter: I had no idea that that was the case in other industries. We have safe harbor laws in my state and it still doesn't guarantee... Even if you report it and say, “this is an unsafe situation that I don't have the training or the capacity to handle,” even if you report that, it does not offer you protection, so that's fascinating.

Craig Clapper: And nuclear power people that report safety concerns are very well protected. In fact, you don't even have to be right -- you can be wrong, and know that you're wrong about it, and still be protected. And I had mentioned the Aviation Safety Reporting System: the people at NASA run the program, because the pilots were concerned about the FAA having all the data, so NASA said they would do it as a third party. And they have very, very nice protection for those that self-report. So, I would look for that and apply it, both to the people that are employed as well as the independent credentialed provider, and put that into the peer review as a protection.


Gayle Porter: That's an excellent recommendation. Thank you. I do want to ask: in the expanding world of healthcare and patient safety, everything is rapidly changing. The technology is changing, and the culture and policies are all changing. What trends or innovations are you personally watching right now, and why are they of interest to you?

Craig Clapper: Yeah, thanks for asking. I think that things that might be of interest to me aren't always of interest to everybody else. I think that configuration control is a serious problem for us in healthcare, which usually leads to a question: What is configuration control? Configuration control is the set of things that we put in place to maintain the way our devices and machines are assembled and operated -- it just keeps things intact. The other HROs have very strong configuration control, and that gives them the ability to do quality improvement, because the system that they had last year is basically the system they have this year. In healthcare, we don't have the same system. Our system has drifted in all sorts of crazy directions. So what most people are calling quality improvement is just figuring out how to recover from the drift that they've had since five years ago, when they worked on this before. And I see a lot of posters at the quality conference that say, Getting to Zero in CLABSI, Again, and they're doing basically the same good practices for line insertion and maintenance that they did five years ago; they're just trying to get back the practice habits. So my point, Gayle, is if we had better configuration control, if we can kind of lock down the work systems, then our quality improvement efforts would be more beneficial and they would show up year after year. I think that is important for us.

Gayle Porter: If I'm understanding what configuration control is, that is actually something that I'm quite passionate about. I've been looking at how healthcare in the past has been very standardized for the patient in what we deliver to the patients -- the protocols and such, the measures, one size fits all -- but then our vendors are just a plethora of different systems. What says high blood sugar on one device is not the same icon on another device, and the delivery and the vendors and everything are such a myriad. And I feel like there's a shift going on in the interoperability movement, where we're trying to streamline the communication between patient and hospital, and the digital communication with the electronic medical record is becoming standardized, and then patient care is evolving with precision medicine and genetics and all of these very individualized approaches to the patient, which is a tremendous switch in healthcare. And so when you talk about configuration management, is that kind of what you're getting at? Is it the idea that elevated blood sugar is always an up arrow with an S, and everyone who uses any device will find that? Is that accurate?

Craig Clapper: The answer is yes, and there's an "and" part. So configuration control means that it doesn't change unless we decide to change it. That may not always mean arrow up is higher blood sugar, but my people know what that means. So configuration control and human factors integration work together. When we do human factors work, we spend a lot of valuable time and money deciding what is good human factors, and then we want to put it into the device or the process. But we waste that money when we can't maintain the configuration. So, as you had pointed out, we're really only as good until we change vendors or the supplier discontinues, and we've had events where we go…


Craig Clapper: …from a device that operates one way, to where the very next one you grab off the supply shelf is different, because there's been a change in the device. So there's really no point to doing a lot of reliability evaluation or usability testing to establish these great human factors if we can't maintain that configuration from month to month and year to year.

Gayle Porter: That is fascinating. And it makes me think. I didn't realize how important that was until I was working with a facility on submitting their CDAC audits, which are a test of the accuracy of the data that we submit to Medicare for abstraction. So that healthcare data gets submitted, and CMS actually forgave them for sending incomplete charts because they were in the midst of a transition to a different medical record. And I have never in my life seen CMS forgive a facility for not sending complete charts, and I thought, Wow, that really must be a major issue for facilities, when just the EMR change is enough to say, we'll waive it, you can send it and do a do-over.

Craig Clapper: Good point. I like that story as well, and it really highlights configuration control and change management. They fit hand in glove: with configuration control, nothing changes unless we choose to change it, and with change management, we make the change thoughtfully. We plan it to have those good human factors, we communicate, and we train our people on the new system before we go live. And in healthcare, I just think we do that very poorly.

Gayle Porter: Yes.

Craig Clapper: Imagine if a commercial airline did change management the way we do in healthcare. The flight crew would show up for the flight and we'd say, I'm sorry, we changed the departure route, and we told you: we had a safety fair where all the pilots went to the cafeteria, we went over the new departure routes, and if you got the answer right, we gave you a piece of candy.

Gayle Porter: And that's not your plane anymore. Here's your plane.

Craig Clapper: And then they roll in the avionics on a pole and say, Don't worry, we don't need it till we're at altitude. Let's just get the plane off the runway, and when we get up to altitude, we'll figure it out.

Gayle Porter: On the job training.

Craig Clapper: That is the hard way.

Gayle Porter: So this brings me to one of my favorite portions of any interview. I like to end on a positive note, to ask for a shout-out to someone who's doing something good. And I'd like to ask you, What is something or someone in healthcare improvement that you are proud of?

Craig Clapper: Wow, thank you for the opportunity. This is a golden chance. I am very proud of the commitment from all the caregivers and providers who have been working year after year on safety culture and high reliability organizing. I firmly believe that patient safety is better today than it was 20 years ago when I started this work, particularly in medication safety. And I think that is the result of a lot of people actually doing the work and engaging in the improvement efforts themselves. I think we've spent way too much time on big performance improvement. We should spend a lot more time on local learning systems, where we engage the people who are close to the work to bring forth the issues that make safe care hard for them, and then give them the ability to make change at the local level. As they say, improvement by the yard is hard, but improvement by the inch is a cinch.

Gayle Porter: That's right.

Craig Clapper: And I also say improvement by the mile takes a while. I try not to say that at the board meeting because they say you were here last year and you said that, and it's been a while.

Gayle Porter: I'd like to hear a little bit more about you and your organization. How can we find you and your work outside of this episode?

Craig Clapper: Thank you, Gayle. I'm the Chief Knowledge Officer at Reliability 4 Life, and most of our practices are around life skills. We call them life skills because we want you to practice them throughout your life, and when you do practice them, you save lives and improve lives. You can find us on our website, but you can also find us through LinkedIn. You can look for me, Craig Clapper PE, or you can look for Reliability 4 Life, usually just R4L.


Gayle Porter: Thank you for that. I have learned so much more about high reliability, the human considerations for it, and how it applies. I could grasp what it meant for machines, but what you said early in this interview about applying it to the human side really clicked for me. I really appreciate you explaining in such detail, and with stories, how it works in healthcare, along with some practical solutions. Are there any other suggestions or ideas you'd like to share before we conclude?

Craig Clapper: Once again, I want to thank you, Gayle, for having me, and I encourage everybody who works in patient care and patient care improvement, whether you call that safety, quality, or patient experience, to be a lifelong learner. Every time I talk with folks, I learn. Every time I go to teach content, I learn something from the learners. And remember, even the person who invented judo wanted to be buried with a white belt, because he wanted to be known as a student and not as a teacher. So always keep that white belt on.

Gayle Porter: I love that. Thank you.
