
Mini Clinical Skills (Mini-CEX)

Evidence of Competency In:

Patient Care

Recommended Assessment Tool

The Mini-Clinical Evaluation Exercise (Mini-CEX) was developed and studied extensively by the American Board of Internal Medicine. It is a focused assessment of specific aspects of a patient interaction. As such, it assesses principles of patient care foremost; secondarily, it asks for ratings of professionalism and of interpersonal and communication skills, as these are important components of every patient interaction.

Reliability and Validity

For use at the semi-annual review meeting, a minimum of 6 forms per year provides satisfactory reliability, with 12 being optimal. The reliability and validity are based on the research done with the Mini-CEX. The program must monitor that forms are completed correctly, including signatures, to ensure that they measure with the same psychometric rigor as the research studies.

Administration

  • Timing: A minimum of 6 assessments per year, with up to 12 being optimal.
  • Who Performs: Skill assessments should be done through observation of the actual performance. A faculty member could assess through video review of the performance, but the assessment reflects the skill of the resident on the performance date. The entire clinical encounter does not need to be observed; a shorter duration of observation may be more efficient.
  • Format: A checklist is the most appropriate format for evaluating specific procedural or communication skills (see the section on focused assessment of observed skills). Because the Mini-CEX was developed for generic use across different encounter types, it instead uses scales to assess elements important in any encounter (e.g., history-taking, the physical examination, humanistic qualities, and clinical reasoning).
  • Scoring Criteria and Training: The Mini-CEX research was done with a 9-point rating scale. While other research suggests that fewer points can yield the same decisions, it is recommended to maintain the 9-point scale as designed. The form includes a glossary describing the skills being assessed, but no criteria are provided for distinguishing unsatisfactory, satisfactory, and superior performance. Attending faculty review the form, which is considered self-explanatory. This is not ideal, but it is most expedient. (A minimal sketch of such a form appears after this list.)
  • Documentation: At minimum, twice annually as part of semi-annual review meetings.
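To make the form structure concrete, here is a minimal sketch of how a program might record a Mini-CEX encounter, assuming the conventional banding of the 9-point scale (1-3 unsatisfactory, 4-6 satisfactory, 7-9 superior). The class, field, and element names are hypothetical illustrations, not part of any official Mini-CEX specification.

    from dataclasses import dataclass, field
    from datetime import date

    def band(rating: int) -> str:
        """Map a 9-point rating to its conventional band (assumed cutoffs)."""
        if not 1 <= rating <= 9:
            raise ValueError("Mini-CEX ratings use a 9-point scale (1-9)")
        if rating <= 3:
            return "unsatisfactory"
        if rating <= 6:
            return "satisfactory"
        return "superior"

    @dataclass
    class MiniCEXForm:
        resident: str
        assessor: str
        encounter_date: date
        ratings: dict = field(default_factory=dict)  # element -> rating 1..9
        signed_by_resident: bool = False
        signed_by_assessor: bool = False

        def is_complete(self) -> bool:
            # A form should only count toward the semi-annual review if all
            # elements are rated and both parties have signed it.
            return bool(self.ratings) and self.signed_by_resident and self.signed_by_assessor

    form = MiniCEXForm(
        resident="R. Smith", assessor="A. Jones", encounter_date=date(2024, 3, 1),
        ratings={"history_taking": 6, "physical_examination": 5,
                 "humanistic_qualities": 7, "clinical_reasoning": 4},
        signed_by_resident=True, signed_by_assessor=True,
    )
    print(form.is_complete())                              # True
    print({k: band(v) for k, v in form.ratings.items()})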

Uses of the Data

  • Formative Feedback: Concurrent, written, same-day feedback is recommended. The Mini-CEX is an observational form and must be completed in real time. It sets the expectation that the resident and faculty member will discuss the observation and sign the form. More details may be found in an article by Holmboe (1).
  • Summative Decisions: Programs should inform residents that the criterion for judging progress will be Mini-CEX performance averaged across all observations. The program should indicate a standard that would trigger action, such as any single Mini-CEX with an "unsatisfactory" rating. These decision criteria should be made explicit to the residents.
  • Remediation Threshold: Programs should communicate what performance on the Mini-CEX would require remediation. However, the faculty must be willing to support such a process, or they may inflate ratings so as not to be burdened with remediation. Most programs would consider an average of 5 or below on the Mini-CEX, although nominally satisfactory, to warrant a development plan for the resident. (The sketch at the end of this section illustrates these decision rules.)
  • Program Effectiveness: The Mini-CEX is so intertwined with the fundamentals of patient care that the data are best used to assess resident performance and to generate plans as needed. They are less likely to be useful for judging program effectiveness.

References

1. Holmboe ES, Yepes M, Williams F, Huot SJ. Feedback and the mini clinical evaluation exercise. J Gen Intern Med 2004;19(5 Pt 2):558-61.
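To make the decision rules above concrete, the following sketch applies them (averaging, flagging any single unsatisfactory rating, the average-of-5 development threshold, and the 6-form reliability minimum) to a resident's overall ratings for the year. The threshold constants and function name are illustrative, not prescribed by the Mini-CEX literature.

    from statistics import mean

    REMEDIATION_AVERAGE = 5   # "average of 5 or below" threshold from above
    MINIMUM_FORMS = 6         # minimum forms/year for satisfactory reliability

    def review_ratings(overall_ratings: list[int]) -> list[str]:
        """Apply the example decision rules to a year's overall Mini-CEX ratings."""
        flags = []
        if len(overall_ratings) < MINIMUM_FORMS:
            flags.append("fewer than 6 forms: reliability is questionable")
        if any(r <= 3 for r in overall_ratings):
            flags.append("at least one unsatisfactory rating (1-3): action required")
        if mean(overall_ratings) <= REMEDIATION_AVERAGE:
            flags.append("average of 5 or below: development plan indicated")
        return flags

    print(review_ratings([6, 5, 4, 5, 3, 6]))
    # flags both a single unsatisfactory rating and an average of 5 or below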

Focused Assessment of Observed Skills

Evidence of Competency In:

Surgical Skills, Procedural Skills, Specific Communication Skills

Recommended Assessment Tool

Unlike the shared general competencies, Patient Care objectives are defined by each RRC. Thus, no specific tool can be recommended across GME programs. Instead, we offer principles to guide the development or selection of tools. Checklists define the discrete tasks that comprise the overall skill being assessed. Scales evaluate procedural skills that transcend individual tasks, such as knowledge of anatomy, instrument handling, flow of operation, etc. Richard Reznick and colleagues at the University of Toronto developed scales for the Objective Structured Assessment of Technical Skills, or O-SATS (1,2).

Overview

Focused assessments of skills are used primarily to assess patient care by determining whether a psychomotor skill has been acquired. It is possible to include some communication items if this is the only time a resident interacts with patients, but that is not the primary use of this assessment. It is also possible to develop a focused assessment of specific communication tasks, such as an informed consent discussion or specific counseling following a practice guideline.

Reliability and Validity

Skill checklists primarily have content validity. The items for a specific checklist may come from the literature, where the checklist's content validity has already been established. Alternatively, if designing a checklist within the program, having those who are "expert" in the skill review and approve the checklist provides a level of validity. Reliability exists in several dimensions: the ability of different assessors to reach the same decision (inter-rater reliability) and the internal consistency of the checklist items (do they "fit" together). However, if checklists are used to identify when someone has mastered the skill, internal consistency is not relevant (since everyone should eventually get 100%). Therefore, the best reliability evidence is consensus among faculty on the decision that the resident is competent in that skill; a sketch of one common agreement measure follows.
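As one way to quantify inter-rater reliability for pass/fail checklist decisions, a program could compute Cohen's kappa, a standard statistic that corrects raw agreement for chance. This is a minimal sketch, assuming two raters who each judged the same set of performances; the data are invented.

    from collections import Counter

    def cohens_kappa(rater_a: list[str], rater_b: list[str]) -> float:
        """Chance-corrected agreement between two raters' categorical decisions."""
        assert len(rater_a) == len(rater_b) and rater_a
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Agreement expected by chance, from each rater's marginal frequencies.
        # (Assumes the raters are not unanimous in a single category.)
        counts_a, counts_b = Counter(rater_a), Counter(rater_b)
        expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2
        return (observed - expected) / (1 - expected)

    a = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail"]
    b = ["pass", "pass", "fail", "fail", "fail", "pass", "pass", "pass"]
    print(round(cohens_kappa(a, b), 2))  # 0.47; 1.0 would be perfect agreement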

Administration

  • Timing: Skill assessments should be performed until a resident can demonstrate competency. Sustained competency can also be measured if a program is worried about "drift" from desired performance; this requires rechecking the skill at some systematic interval.
  • Who Performs: Skill assessments should be done through observation of the actual performance. A faculty member could assess through video review of the performance, but the assessment reflects the skill of the resident on the performance date.
  • Format: A checklist is the most appropriate format, but the checklist may include a gradation reflecting the quality with which each step was performed, e.g., not indicated (n/a), not performed but indicated, performed poorly, or performed well. General scales (such as the O-SATS scales above) also exist and facilitate comparability across specific procedures. Written comments may be especially helpful for giving feedback.
  • Scoring Criteria and Training: Guidelines should accompany the checklist, describing the environment for the assessment and what is meant by each step. For example, if the checklist includes "washes hands," is a resident running his/her hands under water without soap acceptable? Is scrubbing involved? For a minimum length of time? It is advisable to indicate to learners and evaluators the acceptable standard for checklist items. Training could consist of reviewing the written guide. The checklist should contain a written standard by which the resident would know that the performance demonstrated competency. Generally, this would mean achieving 100% of the checklist items and/or an overall judgment of competency by the assessor. (A minimal sketch of such a checklist appears after this list.)
  • Documentation: At minimum, twice annually as part of semi-annual review meetings.
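Here is a minimal sketch of a graded checklist and the competency standard described above. The checklist items, grading labels, and 20-second hand-washing standard are invented for illustration; a real checklist's items and standards would come from the literature or from the program's expert faculty.

    from enum import Enum

    class Step(Enum):
        NOT_INDICATED = "n/a"
        NOT_PERFORMED_BUT_INDICATED = "not performed but indicated"
        PERFORMED_POORLY = "performed poorly"
        PERFORMED_WELL = "performed well"

    # Hypothetical checklist results for one observed performance.
    results = {
        "washes hands (soap, minimum 20 seconds)": Step.PERFORMED_WELL,
        "confirms patient identity": Step.PERFORMED_WELL,
        "maintains sterile field": Step.PERFORMED_POORLY,
        "disposes of sharps safely": Step.NOT_INDICATED,
    }

    def demonstrates_competency(results: dict, assessor_judges_competent: bool) -> bool:
        """100% of indicated steps performed well, plus the assessor's overall
        judgment: the written standard suggested above."""
        indicated = [s for s in results.values() if s is not Step.NOT_INDICATED]
        return all(s is Step.PERFORMED_WELL for s in indicated) and assessor_judges_competent

    print(demonstrates_competency(results, assessor_judges_competent=True))  # False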

Workflow Procedures

A systematic approach is recommended to maximize the use of the focused assessments and facilitate data management. A sample workflow document for focused assessment of surgical skills follows.

References

1. Winckel CP, Reznick RK, Cohen R, Taylor B. Reliability and construct validity of a structured technical skills assessment form. Am J Surg 1994;167(4):423-7.
2. Reznick R, Regehr G, MacRae H, Martin J, McCulloch W. Testing technical skill via an innovative "bench station" examination. Am J Surg 1997;173(3):226-30.

