Grand Rounds-Measuring and Improving Performance in Surgical Training
- Hello, ladies and gentlemen, and thank you for joining us. Today we have a special guest, Dr. Gary Dunnington. He's the chairman of the Department of Surgery at Indiana University. He's been a mentor of mine since my early days at USC, and he's been really gracious to give us a talk on a very important topic that he's an expert on: how we can evaluate resident training and resident performance in surgical training. Gary, thanks for being with us, and please go ahead.
- Thanks very much, and Dr. Cohen, thanks for that introduction. It's really a pleasure to be here discussing with you something of importance to all of us involved in the training of surgical residents. While much of what I will discuss today about measuring and improving performance in surgical training will pertain to general surgery residents, the principles are broadly applicable to all surgical specialties, and particularly to neurosurgery. We begin by discussing the reality of resident evaluation, because it's somewhat surprising to find out that 20 to 25% of residents have performance problems identified during training. Most of these problems are identified early, but there are real delays in addressing them, and probably the most important reason is that our evaluation strategies are poor and do a poor job of yielding a diagnosis for these problems. Remediation is often, as we say, "penicillin for everything" and seldom solves the problem, and most residents complete training with the same problems that were identified early in training. We have recently tried to characterize what we believe these problems are in the resident evaluation process, from most frequent to least frequent. The most frequent is clearly inadequate sampling of performance: not enough raters, not enough ratings, not enough skills sampled. There are also inaccuracies due to over-reliance on memory, none of which is better illustrated than by the end-of-rotation evaluation, when we try to recollect all of those instances during the last one or two months. There are hidden performance deficits, the lack of meaningful benchmarks, often hesitancy to act because of brief encounters, and systematic rater error, which we used to believe was a principal problem and is actually probably much less common than some of the others we've identified.
One of the questions frequently asked of me as a surgical educator is: are surgical residents currently competent? There have been all of the changes with regard to work-hour restrictions and a very different environment for training, with an increase in technological demand. Recently in general surgery, 300 ACGME index cases were ranked by program directors according to competency: A, essential (these cases must be performed well by residents completing training); B, they should be; and C, not necessarily. 121 operations were then deemed to be essential components of general surgery training by a majority of program directors. Surprisingly, over half of those essential cases had a mode number of zero when we looked at the American Board of Surgery case profiles. These were cases such as common bile duct exploration, transanal excision, and the Whipple, procedures that we believe most residents should be capable of doing once in practice. So the question obviously was: are surgical residents competent when they finish training? It has really created a major issue as we look to the future of training surgeons. It forced Richard Bell, a recent president of one of the major surgical organizations, to ask in a presidential address why it is that Johnny cannot operate, if this is actually the case. And we believe there are principally two reasons involved. One is an inadequate experience in residency training, as just illustrated by the data, and secondly, an absence of formal assessment of operative skills, something that is only recently coming to the forefront in general surgery, neurosurgery, and many of the other surgical specialties. And that is the precise need for surgical skills training prior to residents arriving in the operating room. In fact, as Lord Kelvin said with regard to the evaluation of performance in these settings, "If you cannot measure it, you cannot improve it."
So for the last 15 years or so, our educational research group has been predominantly focused on this issue of evaluating the performance of surgical trainees. Stated otherwise, as we look beyond the better evaluation form, I'm going to touch briefly on these issues of surgical skills performance, team-based performance, how we evaluate performance in the operating room, evaluation of clinical assessment and management, and then speak briefly to an issue of real concern to program directors: milestones and the Next Accreditation System. Back in the 90s I became engaged in the surgical skills laboratory movement, and I became convinced that these labs are an integral part of the training of all surgical trainees, regardless of specialty focus. I look at this much as one would look at the training of professional musicians, because I think there are great similarities. When we think of the professional training of a musician, we know that as they arrive at the Juilliard School of Music, seldom do they immediately ask, "How quickly can I get to the performance arena, or Carnegie Hall?" In fact, if one of those early students were to ask such a question, it is likely they would be told emphatically, "You'll need to spend a number of hours, weeks, months, and perhaps years in one of these small practice rooms until you've become proficient enough to arrive at the performance arena." And yet we are quite comfortable with residents arriving in July and asking how soon they can get to the operating room. So we have really focused on this concept of deliberate practice, which is seldom deliverable in the setting of an operating room, especially for early trainees. We talk about deliberate practice being a well-defined task at an appropriate difficulty level for the individual, with informative feedback and opportunities for repetition and correction of errors.
And we believe that this is available on a regular basis largely only in a surgical skills laboratory. So in that regard, we've begun to refer to the skills lab as the practice arena and the operating room as the performance arena. There is some strong rationale for this concept of practice arena before performance arena. For example, we know that it avoids exposing patients to the steepest slope of the surgical skills learning curve, so it is an issue of patient safety. It clearly enhances the quality of all our teaching: residents come to the OR already very comfortable with the equipment and the troubleshooting of errors. There's a potential to significantly decrease the number of operative cases needed for competency, an important issue in a restricted workweek. And hour for hour, recent data from Imperial College London suggest it may actually be more efficient than training in the operating room, particularly for early trainees. So what have we learned over the last 15 years in this surgical skills laboratory movement? Well, we clearly have recognized that the benefit is predominantly in years one and two; there's no better place for a senior resident to be than in the operating room with a skilled mentor. The labs have to be a mandatory part of the residency curriculum; they clearly cannot be voluntary. Scheduled practice sessions are critical, perhaps two to one or three to one over the typical scheduled module where faculty are involved in teaching the skill. We've also learned that non-MD skills coaches will be essential for long-term success. You can only ask a full-time faculty member to spend so many hours teaching a surgical trainee basic skills before that faculty member will eventually wonder whether someone else might be capable of doing this.
So we've been able to use trained nurses and OR technicians, scrub techs who have been trained by faculty who are gifted teachers in this setting, to offload some of the work from faculty. Several years ago, I was approached by the American College of Surgeons and the Association of Program Directors in Surgery to develop a unified surgical skills curriculum that would be useful for all program directors in the United States and Canada. It would prevent the problem of every program director having to reinvent the wheel by developing their own unique curriculum. This project has now yielded a three-phase skills curriculum widely used across North America, and even throughout the world. Phase one consists of curriculum modules appropriate for any trainee, whether in neurosurgery, ENT, or general surgery; so for all specialties in surgery, this is a very basic set of curriculum modules. Phase two moves specifically to general surgery and covers those cases specifically done by general surgeons. Let me give you an example of how we implement this strategy for phase one at Indiana University. We have what's described as a surgical skills bootcamp for all PGY 1 residents. This is available for orthopedics, plastic surgery, urology, ENT, and general surgery, as again, the modules are not specific to general surgery. This is 12 weeks, beginning usually in late July or early August, with three sessions weekly, each 90 minutes, and nine of 11 verification of proficiency modules completed by the last session. Let me come back to that in a bit when we discuss the focus of evaluation. Phase two, as I mentioned, covers full procedures, once the basic skills have been learned. They are largely selected because, again, they are essential components of a general surgical practice but seldom performed in residency. So at least we can be certain that in a model, a cadaver, or a live animal laboratory, these skills are being taught for these full procedures.
And then finally, team-based training is just now coming into its own in many specialties, including the surgical specialties. For this module, we put together a group of experts throughout North America with significant research experience and designed a curriculum that would address these essential team-based skills: communication, leadership, briefing and planning, resource management, and the others, as you can see. All of the cases that I will show you as part of this curriculum weave these essential competencies throughout the cases. We've developed 10 of these cases in various scenarios, from the trauma bay to the PACU to the operating room. Several are, again, not specific to general surgery, such as laparoscopic troubleshooting or some of the operating room crises. In addition, others take place after the surgical procedure, such as a retained sponge on a post-op chest x-ray, to help the trainee understand all of the critical issues involving the medical-legal team. These entire scenarios are available in this web-based curriculum, fully developed with all training and confederate materials, which allows a training program to move very quickly into team-based training without developing a unique curriculum. Now let's move to a very important part of this whole process, the evaluation component, because we have determined that in many cases faculty and programs institute a curriculum that fails to evaluate performance before allowing the resident to move on as competent. Early in this process, we used something developed at the University of Toronto called OSATS, the Objective Structured Assessment of Technical Skills. Here a faculty surgeon is viewing a young trainee performing an excision of a melanoma, if you will, a lesion on a pig foot. So this is a task rated on multiple scales, directly observed and rated accordingly, and ultimately passed or failed.
When we introduced OSATS to our curriculum development team for the American College of Surgeons, most said this was far too faculty intensive. We needed to come up with a measurement device that was much more faculty friendly, more amenable to their busy schedules. For that reason, our research team came up with the Southern Illinois University Verification of Proficiency modules. The ones listed here are all fully developed and available in the curriculum on the American College of Surgeons education website. In each of these cases, we can verify that a trainee is proficient before they're allowed to do the procedure on a patient in the ICU or the operating room. Let me explain how this system works. First, the trainee reviews a video of an expert performance of, let's say, an arterial anastomosis: what it should look like when that trainee is competent. They then undergo guided practice in the skills lab until their time is within standards, and then they're ready to produce a video of their own performance. They step in front of the camera and produce the video, which is then obtained by a video capture system and sent to a faculty desktop, where their performance is rated. They are allowed to go to the operating room to perform an anastomosis, such as for vascular access, only when that faculty member has rated them as having a passing performance. You do the same thing for endoscopic surgery. And while in the past Fundamentals of Laparoscopic Surgery was the only unit required by the American Board of Surgery, it is likely that in the next few months we'll also have the same requirement for endoscopic skills. Just recently SAGES has released a rigorously validated system to, again, assure that our trainees are proficient in upper and lower endoscopy on this device, the GI-BRONCH Mentor, in the skills lab before they're allowed to do that on a real patient.
This is what the video capture looks like at a faculty desktop. Again, it's blinded; they only know that this is examinee 1208. They have the rating instrument on the left, and below they're able to annotate those areas for feedback that would be very valuable, particularly in a failed performance, as the trainee practices to be retested and prove proficiency before going to the operating room. Now let's move to the operating room. Let's assume this trainee has been proven proficient and is now able to do the procedures in the operating room. We believe that evaluation in this setting is also critical in the evaluation of learners: again, much more precise, much more valid and reliable than an end-of-rotation evaluation, with all of its inherent inaccuracies due to reliance on memory. So in the operating room, several years ago, we developed what's called the Operative Performance Rating System, or sentinel case mapping, which has rating forms for selected operative procedures at each PGY level. This is case specific, unlike some other available systems that are global ratings, not case specific. The system obviously provides for early feedback on resident performance, as all of this data is entered into something like New Innovations or a similar resident evaluation portfolio. Here are some of the specific cases that we've developed. These can easily be done in any other surgical specialty, such as neurosurgery. Again, these are cases that allow us a reasonable, random sampling of those critical cases performed throughout the five years of surgical training. Our milestones for measuring performance in operative skills would then look something like this. In the first two years of training, we do the verification of proficiency for those 11 modules, as well as Fundamentals of Endoscopic Surgery.
In year three, we do Fundamentals of Laparoscopic Surgery, required by the American Board of Surgery before one finishes training and sits for the American Board of Surgery examination. And then in the third, fourth, and fifth years, we're doing the operative performance ratings on the eight specific procedures, which we believe for the first time gives us a very accurate record of performance evaluation throughout the five years of training. Now, the next step for us was to develop a national operative performance evaluation system, because we want to know how our trainees compare with other trainees throughout the United States, and the American Board of Surgery will require operative performance evaluations for certification beginning this summer. So the Surgical Council on Resident Education came to our research group at SIU to develop this system and validate it nationally. We did our standard setting with gold-standard surgeons evaluating video performances, followed by national piloting for norms, and that process is currently underway. In this validation, we chose four procedures, simple to complex, open to laparoscopic, to give us a nice spectrum of validation for rating norms. This is what it looked like at the American Board of Surgery in Philadelphia, as these gold-standard surgeons spent hours and days viewing videotapes of operative performance to be sure that we had a system with strong validation. Here are some of the lessons we've learned that I think offer valuable teaching tips for surgeons in residency training programs. We believe the focus of evaluation should be on performances where the resident does most of the planning and directs the team. We've largely stopped doing these evaluations in years one and two, and reserve them for years where residents are clearly doing the bulk of the procedure.
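The operative-skill milestone schedule just described (verification of proficiency modules and Fundamentals of Endoscopic Surgery in years one and two, Fundamentals of Laparoscopic Surgery in year three, operative performance ratings in years three through five) amounts to a simple lookup from training year to required assessments. As an editorial illustration only, and not any program's actual software, it might be represented like this:

```python
# Illustrative sketch: the milestone schedule from the talk as a lookup table.
# The names and data layout here are assumptions for illustration.

MILESTONE_SCHEDULE = {
    1: ["Verification of Proficiency (11 modules)", "Fundamentals of Endoscopic Surgery"],
    2: ["Verification of Proficiency (11 modules)", "Fundamentals of Endoscopic Surgery"],
    3: ["Fundamentals of Laparoscopic Surgery", "Operative Performance Ratings"],
    4: ["Operative Performance Ratings"],
    5: ["Operative Performance Ratings"],
}

def required_assessments(pgy_year):
    """Return the operative-skill assessments expected at a given PGY level."""
    if pgy_year not in MILESTONE_SCHEDULE:
        raise ValueError(f"Unknown PGY year: {pgy_year}")
    return MILESTONE_SCHEDULE[pgy_year]
```

A program could extend the table with specialty-specific procedures, which is the point of the talk's recommendation that other specialties develop their own sentinel cases.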
Unrecognized guidance is something that seriously interferes with the evaluation process, and we know for sure that attendings underestimate the amount of guidance they provide. Stritter several years ago described the learning vector, where learners progress from very immature, dependent learners to very mature, independent trainees. Our attending guidance should be quite heavy in the early years, but as residents mature, we ought to fade our cueing. This has been a topic of great interest to our research team as we look at the amount of guidance provided by surgeons. There are a number of ways we cue as surgeons in our operative teaching: we point with forceps, we frame the steps with the camera, we verbally direct the medical student or the surgical trainee directly. Whatever the method, we probably cue more than any of us would recognize in our teaching. So our recommendation is to modulate this cueing based on learner level and skill level. Residents clearly recognize some faculty as great teachers, but perhaps only for junior residents, or perhaps only for chief residents, because those faculty aren't able to modulate their cueing according to resident ability. The key is to drop out cues for teaching of operative flow or next steps. As some have said, don't just do something, stand there, and let the residents struggle with operative flow without those very commonly depended-upon attending cues for guidance. Ultimately, work toward being a first assistant for chief residents as they advance in training. Finally, we recommend that rating reports and norms be stratified by half training year, so that ultimately we are able to tell a third-year resident in the first half of their third year how their performance compares with residents across the United States at a similar level of training. This is the report that we are generating, and we hope to soon have it available for program directors.
The blue line would be a particular trainee's average performance at each six-month interval of training, while the yellow or orange dots would be the mean performance of residents throughout the country, again giving that resident a very good sense of where their performance stands compared to national norms. Let me summarize for those of you who are interested in developing operative performance rating systems for neurosurgical procedures, for example; it's certainly something that would be incredibly valuable. This is data that we've gleaned over a number of years of research that really forms the basis for reliable systems. You need at least 20 ratings per year per resident, and at least 10 different raters per year per resident. We know that the ratings should be completed within 72 hours of the performance; anything beyond that really develops a halo effect in that resident's overall rating. So we ultimately needed to develop a smartphone application, and we have actually finalized that work. And finally, the ratings should be done with validated instruments; if those instruments are developed in neurosurgery, it would be important to validate them before widespread use. We are now beginning to launch our Personal Best Surgery program, which will form a template for how the system works across the country. Faculty would sign up as a coach, someone who would evaluate performance, or as a resident in a training program, and we would then be able to provide feedback on that particular trainee's operative performance, video in this case. We have also developed an iPhone application for Personal Best Surgery so that our faculty can complete these evaluations as they leave the operating room, from a drop-down menu of the residents and the procedure they've just observed them performing.
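The reliability criteria just listed (at least 20 ratings per resident per year, at least 10 distinct raters, and every rating completed within 72 hours of the observed performance) lend themselves to a simple automated check. The sketch below is a hypothetical illustration, not the SIU or Personal Best Surgery software; the tuple layout for a rating record is an assumption.

```python
from datetime import timedelta

def meets_reliability_criteria(ratings):
    """Check one resident-year of operative ratings against the thresholds
    described in the talk: >= 20 ratings, >= 10 distinct raters, and every
    rating completed within 72 hours of the observed performance.

    `ratings` is a list of (rater_id, performed_at, rated_at) tuples, where
    the timestamps are datetime objects. This layout is an assumption.
    """
    enough_ratings = len(ratings) >= 20
    enough_raters = len({rater for rater, _, _ in ratings}) >= 10
    all_timely = all(
        rated - performed <= timedelta(hours=72)
        for _, performed, rated in ratings
    )
    return enough_ratings and enough_raters and all_timely
```

A system like this could flag, at each six-month review, which residents' rating sets are not yet dense or timely enough to support a reliable progress decision.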
We think this again will help us stay within that three-day period and provide the best possible evaluation and feedback for performance in the setting of the operating room. Let me move briefly to the setting of the surgical clinic, because we have had a feeling for years that we don't adequately evaluate resident diagnostic, workup, and test-ordering skills in this setting. This is a menu of all of those approaches available in the literature that would be useful in assessing clinical management skills. We've largely relied on the last two: the PAME, something also developed at the University of Toronto, and the CAMEO, the clinical assessment and management examination for the outpatient setting, developed at SIU. The PAME is an examination using a standardized patient in two settings: first obtaining all of the information and asking for diagnostic studies or tests, then returning to the clinic room, where the patient is advised on the best surgical procedure, informed consent is obtained, and any discussion of potential complications occurs. All of these interactions between the residents and the standardized patients are observed behind a one-way mirror by faculty evaluators. So this provides, in one setting, tremendous insight into most of the competencies important for resident skills. This is some of our data over a period of several years, again showing residents improving their performance as they move toward the senior years of training. Similarly, we do the CAMEO, which is a faculty rating of patient evaluation, much more faculty friendly, done in the setting of the clinic. In this particular case, we use 15 common diagnoses. We rate test ordering, diagnostic skills, history and physical, and communication skills. Very importantly, it also allows us to gather patient satisfaction data. We've found this to be a very reliable way to do this, and we collect the data and enter it into New Innovations.
So for us, it provides a way to assess resident performance in the clinic, which yields valuable feedback, particularly as residents near the end of their training. This is some of our recent data, again showing continued improvement of performance as residents move toward those senior years of training. Let me finish with some comments about the ACGME Next Accreditation System, something that will arrive very shortly for general surgery. It will be fully implemented in 2014, with 10-year site visit cycles, updating of program information, and a program director report of milestone performance in 15 areas every six months; those numbers may be somewhat different for neurosurgery training. So how do we measure the milestones? Well, what we've seen clearly is that there's no clear direction from the ACGME. They're really relying on the research and input of surgical trainees and program directors around the country. What we are certain will not be the answer is the addition of 15 items to the end-of-rotation evaluation form. This is a strategy many used after the introduction of the competencies, and it is really not what we're looking for. We're looking for an innovative approach based on direct observation, and our recommendation is to adopt or create new evaluation strategies, always based on direct observation. In our own program, we began to look around and recognized that in many instances we already had some way of observing these skills; we just needed to collect it more formally. So for communication and coordination of care, we realized that we have team leadership in the trauma bay evaluated by trauma staff, using something called the SMARTT stepback trauma resuscitation simulation. We were already collecting information on how well a trainee communicated in the coordination of care during a trauma resuscitation.
Furthermore, we realized that our program directors or associate program directors were already observing team handoffs on a quarterly basis to improve this aspect of resident performance. We have now formalized that and made it part of the data collection for the next accreditation cycle. These diagrams will be familiar to program directors, as they look very similar to the very first one produced by Tom Nasca, the head of the ACGME, when he began his vision of describing a resident's competence according to this spider diagram, with all of the areas of collected information and data. This is the way it now looks at Indiana University in our program, where there are a number of things collected in each of the six competency areas, which allow us to collectively portray a picture on this spider diagram of what a resident's performance looks like. This would be one of our individual trainees: the red would be their individual mean performance for each of the six competencies, and the dotted line would be the PGY class average. So it becomes very easy to see how this particular resident's performance compares with that of his or her peers. I believe this kind of system is what we're moving toward as we collect more and more specific data in these 15 areas, each of them falling into one of the six competency areas. In summary, I think we've learned some valuable lessons in resident evaluation over the last few years that we believe are applicable to all surgical training programs. You clearly can't evaluate all the residents all the time, and we know that you need to maximize the number of faculty ratings. We looked for a number of years at this important piece of information: just how many ratings does a program director need to have an accurate and reliable assessment of performance? And we know that that number is 38, maybe a bit less if you're not looking at professional behaviors. We also know that ratings only have two dimensions.
Your rating form can be as complex as you wish; the faculty still only see it from two perspectives, clinical performance and professional behavior. And we want to sample performance broadly, because clearly the most common deficiency in rating systems is that we fail to have enough raters, enough rating forms, enough ratings of performance, and the more broadly we sample that performance, the better the evaluation will be. We also develop norms based on those group meetings at the end of every six-month period, where we really look at our own data to see how meaningful it is in projecting resident performance. Finally, we want to make progress decisions by committee. We've learned the deficiencies inherent in asking each individual faculty member at the end of the rating period to give a pass or fail on that performance; it's clearly superior to do it as a group, with the feedback of all raters, at a six-month interval. So we've talked about the reality of resident evaluation, and if we continue down the road we've been on, it looks rather bleak, but I hope we've offered some things today that will allow us to improve our resident performance evaluation systems. Hopefully the reality of the future is going to be very different than it has been in the past. So in summary: separate the practice arena from the performance arena for surgical skills; they are very different. Broadly sample trainee performance; we refer to this as workplace assessment, in the clinic and the operating room, as opposed to just the end-of-rotation evaluation. Measure performance based on direct observation; if there's an item on a form that's not directly observed by faculty, we remove it from the rating instrument. And measure, don't assume, proficiency and competency in a mastery-based approach to residency training. Thank you very much.
- Gary, thank you so much for covering a really important topic, something that's only going to become more important in the near future. We look forward to having you with us again. Thanks again.