ABSTRACT

The Post-Licensure Assessment System of the Federation of State Medical Boards and the National Board of Medical Examiners has spent nearly 10 years developing a system of evaluation for practicing physicians. The development of such a system requires collaboration among a variety of assessment and educational institutions. To be credible, the system must be grounded in reliable and valid assessment tools, provide unbiased information about particular physician competencies, and be accepted by both licensing authorities and physicians. It also should provide feedback for planning remedial educational opportunities and be useful to physicians who wish to participate in continuing professional development.

Assessments using the same standardized protocol, addressing competence in medical knowledge, clinical reasoning, and patient management, have been completed at three different sites for 79 physicians. Results show that certified physicians were twice as likely as noncertified physicians to achieve adequate levels of performance. In relation to licensure outcomes obtained for 53 physicians, of the 29 who performed at less than adequate levels, eight remained in practice with restrictions and three returned to fully independent practice. All of the 24 whose performance was adequate were in practice.

For nearly a decade, the Post-Licensure Assessment System (PLAS) has provided state medical licensing authorities with information, in the form of objective assessment data, for use in making licensure decisions about physicians whose competence is in question. With the membership of state licensing authorities changing, many representatives may be unaware of the PLAS and the resources it offers now and for the future. This article will first briefly describe the origins and components of the PLAS and then focus on the initial years of work in the newer component, the Assessment Center Program. It will provide the rationale for a collaborative model of regional assessment programs and review the barriers to physician assessment. Assessment data will then be presented and discussed for their potential impact on licensure decisions. The article will conclude with plans for the future and the need to focus on how the educational recommendations resulting from assessments will contribute to the continuing professional development of physicians.

BACKGROUND

The Post-Licensure Assessment System (PLAS) was introduced in 1998 as a collaborative program between the National Board of Medical Examiners (NBME) and the Federation of State Medical Boards (FSMB). The PLAS was established with the purpose of providing state-of-the-art assessment services to state licensing authorities and other health care agencies for their use in evaluating the competence of licensed or previously licensed physicians. In creating the program, the sponsoring organizations formalized their commitment to meeting state medical boards’ (SMBs) needs for access to high-quality assessment resources for licensed physicians, and recognized the role of assessment as an important and pertinent initiative to assure the public of the competence of practicing physicians.

The PLAS comprises two programs: the Special Purpose Examination (SPEX) Program and the Assessment Center Program (ACP) (see Figure 1). The SPEX is a one-day multiple-choice examination of the current knowledge requisite for the general, undifferentiated practice of medicine. It was originally introduced in 1988 to help SMBs make decisions regarding licensure endorsement or reciprocity for applicants whose medical knowledge had not been tested for some time. The SPEX was first administered as a paper-and-pencil examination and subsequently transitioned to computer-based administration in 1995. At the time SPEX was established, the relationship between the FSMB and NBME was one of client and vendor: the FSMB owned the program and the NBME provided test development and analysis services. With the formation of the PLAS in 1998, both organizations assumed equal responsibility for enhancing the examination’s capabilities.

Figure 1.

Post-Licensure Assessment System (PLAS) structure.

The second PLAS program, the ACP, encompasses the Institute for Physician Evaluation (IPE), which provides comprehensive, objective and personalized assessment tools for evaluating physicians whose clinical competence is in question. The ACP Program Committee evolved from a joint task force that began investigating the potential need for assessment of practicing physicians in 1993.

The PLAS programs are governed by committees that are responsible for adopting policies and procedures, approving testing methods and examination blueprints, and overseeing a research agenda for their respective programs. These program committees are under the jurisdiction of the Governing Committee, which provides oversight of the PLAS. Together, the PLAS programs have assessed hundreds of physicians over the past several years for reasons that range from endorsement of licensure to license reactivation after disciplinary action.

In the current environment where such issues as tort reform, malpractice insurance rates and changes in continuing medical education predominate, the value of competence assessment is beginning to be recognized as a key step in evaluating physician performance and designing educational programs. Accordingly, as the perception of the need for more attention to physician assessment has broadened, so has the number of initiatives wherein competence and/or performance assessments will play a role. Some of these are: a) maintenance of licensure initiatives, b) specialty board certification or maintenance of certification, c) credentialing and privileging actions and d) continuing medical education/continuing professional development needs or outcomes assessment. The evolution of these activities is expected to increase demand for assessment resources.

A COLLABORATIVE MODEL OF PHYSICIAN ASSESSMENT

In 2004, the governing boards of the FSMB and NBME endorsed a proposal that would optimize the capability of the PLAS program to meet the assessment needs of practicing physicians in the next decade. IPE activities shifted from an operational model of providing competence assessments at NBME and FSMB headquarters, to a model wherein the program provides assessment tools to third-party collaborators who conduct performance assessments and targeted remedial education for practicing physicians. This collaborative model represents an innovative approach to tailoring physician assessment and remedial education. It supports enhancement of locally developed and administered assessments and provides additional data for use in the formulation and monitoring of specific educational plans. A primary goal of the regional delivery model (displayed in Figure 2) is to facilitate the movement toward attainment of national standards for physician assessment and remediation, while providing flexibility to the independent assessment centers and geographic convenience and individualized approaches to assessment and remedial education for the participating physicians. The delivery system, like the available assessment tools, is not static and will evolve as other assessment centers develop in regions of need.

Figure 2.

The benefits of collaboration.

BARRIERS TO PHYSICIAN ASSESSMENT

The design of a system to improve assessment of competence must overcome current barriers to change and identify techniques that will succeed in achieving system-wide change. A variety of barriers to an effective assessment and remedial educational system for practicing physicians have been identified. First among these barriers is the perception of the assessment process as a punitive exercise. Physicians are probably among the most tested professionals by the time they begin their practice careers. However, the notion that knowledge testing and performance assessment are “over and done with” once one obtains initial licensure is receding quickly into the past as the concept of continuing professional development evolves. In 2004, the FSMB’s House of Delegates adopted a policy that, for the first time in its history, puts the FSMB on record as affirming state medical boards’ responsibility to ensure the continued competence of physicians as a condition of re-licensure. In its report introducing this recommendation to the House for consideration, the FSMB’s board of directors noted such requirements should be non-punitive and facilitate practice improvement.1 Several SMBs and hospitals have been working in collaboration to reflect this approach to assessment through the PreP (Practitioner Remediation and Enhancement Partnership) 4 Patient Safety program.2 A program of the Citizen Advocacy Center (CAC) supported contractually by the Health Resources and Services Administration (HRSA), PreP 4 Patient Safety is intended to identify and remediate practitioners whose practice is not up to standards, but does not require formal disciplinary action by a state licensing board. The value of this program is its focus on changing the culture of blame: it promotes a proactive quality-improvement culture instead of one that is perceived as reactive and punitive.

Second, and related to the first, is the reluctance of physicians to be continuously assessed. After many years of training and testing, the first opportunity to practice comes as welcome relief from the role of a tuition-bound student who is constantly being evaluated. The transition is made all the more enjoyable by the opportunity to earn income and pay down educational loans. By the time seven to 10 years have passed, the sharply honed skills for test-taking have dulled and the idea of returning to the encyclopedic knowledge of the training years lacks appeal.

Third, the focus on a narrower area of practice adds to the reluctance to be tested in areas beyond the scope of the practice. While the license granted to practice medicine is not restricted, the credentialing of hospitals and health systems, and the contact with peers, encourages the development of a comfortable, if limited, area of expertise. In reality, a physician’s practice tends to narrow over time, such that assessments must be tailored to match practice patterns if they are to carry credibility. Melnick et al.3 described the conceptual challenges involved in tailoring “practice-friendly” assessments. Primary considerations include the purpose of the assessment, the description of the practice, and the availability of assessment tools to evaluate the desired aspects of the physician’s competence. To assess an established, practicing physician, it is necessary to have tools available that are relevant to the practice. Assessment organizations such as the NBME are in the process of developing more practice-friendly measures to meet this need, which in part is being driven by the concept of maintenance of certification, adopted by the American Board of Medical Specialties (ABMS),4 based on the six competencies approved by the Accreditation Council for Graduate Medical Education (ACGME).5

Fourth, the measurement community is working to meet the need for more practice-friendly assessments, including workplace assessments. These in situ activities will require more intensive and costly development if they are to attain the high standards of competence testing (multiple-choice knowledge tests); progress is being made in many countries on many types of assessments.6,7 With the development of new assessments comes the need for research to validate the methods and approaches to measuring aspects of clinical competence. Meanwhile, competence tests and performance tests are making progress in adapting to individual practice needs. Modular testing and flexibility in test blueprinting technology are making it easier to adapt a knowledge examination to a particular array of concepts or practice profiles. The current SPEX examination will be replaced in the near future with a system that allows an individual physician to select a series of one-hour modular examinations in desired topic areas.

Fifth, and finally, there are few opportunities to educate physicians seeking to redefine their careers, adjust the scope of their practice or participate in remedial education in an area of identified weakness. Further, participation in an educational program is expensive. A small cadre of institutions is developing the capacity to provide educational services for physicians seeking training opportunities. The need for and use of community-based or academic preceptors is growing rapidly. As part of their search for a systems-level solution for individual physicians, Leape and Fromson8 have called for the development and testing of models for the construction of successful remedial education programs.

REGIONAL SITES AS A NATIONAL NETWORK

The PLAS is in the process of aiding the development and coordination of assessments among a network of regional sites that are or will function as assessment and educational centers. This collaboration makes state-of-the-art tools available to organizations assessing the clinical competence of practicing physicians. An initial collaborative model has been developed with the Physician Assessment and Clinical Education (PACE) program of the University of California at San Diego (UCSD) where more than 100 physicians were assessed in 2004 and 2005. Collaborations with similar programs are underway at medical schools including the Albany Medical College, the University of Florida College of Medicine and the University of Wisconsin School of Medicine and Public Health.

The PLAS is also fostering collaborative relationships to facilitate research in assessment and standard setting at other institutions such as the Texas A&M University Health Sciences Center and the Medical Review and Accrediting Council (MRAC) of New Jersey, which are developing new programs for physician assessment and remedial education. The Texas A&M model is developing a program of life-long learning to complement its current efforts in peer review for rural and community health facilities within Texas. The MRAC program employs local preceptors to work with physicians to determine their assessment needs and identify appropriate educational intervention following the assessment. Through these regional collaborators, the PLAS program expects to define the best practices of a model program for delivery of physician assessment and educational services. Collaborating sites are depicted in Figure 3.

Figure 3.

Map of USA identifying collaborating sites.

An umbrella organization known as the Coalition for Physician Enhancement (CPE) is an association of assessment centers in North America dedicated to supporting and developing expertise in personalized assessment and education that enhance physician performance and promote optimal patient management.9 Some members of the CPE are already collaborating with the PLAS to encourage consistency within assessment protocols by using common assessment instruments and standards. The CPE is beginning to develop an accreditation system for educational programs in collaboration with the Accreditation Council for Continuing Medical Education (ACCME).

MULTIMODAL, TAILORED ASSESSMENT PROTOCOLS

A standard assessment battery offered to PLAS collaborators currently includes multiple methods of evaluating four main areas of competence: medical knowledge, clinical reasoning and judgment, patient management, and communication skills. The PLAS provides nationally standardized competency-based examinations (MCQs) in the major clinical clerkship subject areas, mechanisms of disease, pharmacotherapeutics, ethics and communication, and interpreting the medical literature. There are performance-based, simulation-type assessments such as the computerized Primum®/CCS and its associated Transaction Stimulated Recall (TSR) structured interview. A hands-on clinical skills examination for practicing physicians, used to assess patient communication and data-gathering skills, is becoming routinely available. Table 1 shows the various evaluative methods used to assess each aspect of competence.

Table 1

Assessment Process: Multiple Aspects by Multiple Methods.

The collaborative delivery model promotes linkages with educational programs, enhances local capacity to individualize assessments, and facilitates local practice-based performance assessments as supplements to the PLAS tools.

Some of these local assessments include chart audits, chart stimulated recall interviews and professionalism instruments, usually in the form of a 360° review (a rating scale assessment of interpersonal skills conducted by peers, allied health professional staff and in some instances, patients). The measurement expertise of the PLAS staff professionals is available to help to develop best practices at the local level.

SUMMARY OF RESULTS TO DATE

The IPE is in the process of gathering data on the performance and outcomes of physicians assessed. To date, most of the results were obtained through the IPE assessments at FSMB and NBME offices; however, one collaborating program has begun to share data. Figure 4 shows the performance of 79 physicians about whom there are complete data (53 IPE and 26 from the PACE Program at the University of California at San Diego).

Figure 4.

Adequate Performance Levels Achieved by Noncertified & Certified Physicians.

As shown in Figure 4, certified physicians performed better on all three aspects than noncertified physicians. These data were reported in the assessment reports submitted to the SMBs following full assessments of physicians referred for a variety of reasons. Fewer than half of the noncertified physicians demonstrated adequate performance (medium or high levels) on any of the three aspects investigated. Less than 20 percent of the noncertified physicians performed adequately on the tests of medical knowledge, whereas more than 70 percent of the certified physicians performed at least adequately on medical knowledge tests.

During the three-year interval in which the 53 IPE assessments were conducted, each physician usually underwent a full two-day assessment that included the computer simulations (CCSs), the follow-up TSR interview, and several MCQ tests. Those assessment reports summarized the same three aspects of competence: medical knowledge, clinical reasoning, and patient management. In the new collaborative model, and as more evaluative tools are developed, other aspects of competence, such as communication skills or professionalism, will also be assessed and summarized, depending upon the focus of the prescribed or desired tailored assessment.

Table 2 shows the overall performance of the 53 IPE physicians who were assessed during 2002 through 2005 in medical knowledge, clinical reasoning and patient management. Each assessment report was 12 to 15 pages of descriptive information and included recommendations for education or retraining. These recommendations could be undertaken at the discretion of the participant physician, but most often were negotiated to comply with the requirements of the SMB. Low performances usually warranted a full-scale educational plan and a recommendation for a mini-residency training program. For a substantial number of low-performing physicians, a full residency training program was recommended. A current IPE research study seeks to identify how and whether physicians undertake any of the report recommendations. Borderline performance usually warranted an extensive remedial educational plan and some form of personalized guidance, such as an individual preceptor. Medium performance involved tailoring an educational plan designed to address discrete areas of need. High performance may have included a suggestion for an educational course, but maintaining good standing in CME/CPPD was the typical recommendation. (For summary purposes, low and borderline performance constituted less than adequate performance. Medium and high performance represented adequate performance.) Recommendations for further study and structured clinical education varied based on time out of practice, regardless of performance. For instance, a practitioner who performed adequately on all three aspects might still have received a recommendation to seek an alliance with a preceptor if the practitioner had been out of practice for more than a few years.

Table 2

Overall Performance Ratings on IPE Assessment Protocols (2002–2005).

In some cases, the physician’s training and certification are not related to his or her current practice. Building the assessment for these physicians in transition is particularly challenging. Usually the transition is from a more specialized practice to a general practice. For instance, one physician certified in anesthesiology was practicing emergency medicine at the time of the investigatory process. Two others were certified in surgery but are now in general practice; another from obstetrics and gynecology is seeking to be a general practitioner. The assessments currently are better suited to broader areas of practice such as family medicine, internal medicine and emergency medicine. Thus, physicians in transition are more difficult to accommodate with personalized assessments and add complexity to attempts to summarize results. When the data set is larger, these physicians in transition may constitute a separate data set with its own research questions. For now, in this small data set, these physicians are more likely to perform in the low or borderline range. It remains unclear whether this is because they have difficulty with assessments outside their area of training or are not prepared for assessment in general. Regardless of cause, they are likely to require more structured clinical educational experiences to demonstrate adequate performance levels in their new area of practice. These limited data suggest that previous certification in one specialty should not be viewed as a pass to practice in another area without some guided educational experience. The development of more personalized and in-office evaluations should facilitate a more physician-friendly and realistic assessment process, which may help to identify more specific learning needs.

Table 3 shows the distribution of licensure outcomes as best as could be ascertained. Some participants were assessed so recently that no outcome has yet been determined by the licensing body. Thus, the outcomes in Table 3 are based on a review of information reported to the FSMB’s Physician Data Center and available on SMB websites. It also is critical to note that the assessment results were only a part of the decision-making process and not the sole criterion for a licensing authority’s decision. Making direct comparisons of performance to outcome may present problems for interpretation that are not resolvable. Some case examples will be discussed.

Table 3

Assessment Performance Levels by Licensure Outcomes of IPE Participant Physicians (2002–2005) N= 53

Of course, one cannot presume an assessment was the basis for the licensure decision. There is no way to know how much weight these reports were given in the decisions made by either SMBs or, in some cases, hospitals. For example, four physicians fall in the cell for those who performed at the high level on the assessment but have conditions on their licenses. These four physicians illustrate the diversity of the impact of the assessment process. One physician whose license was suspended and who was on probation had the suspension lifted the day after the assessment, but remains on probation for another four years. Within three months of the assessment, the second physician was required to take courses both related and unrelated to the findings of the assessment report. The third had the license restored one year later, but it was limited for two years with the requirement of additional CME. The fourth physician had different outcomes from two different states. In the state for which the assessment was performed, the license was reinstated with probation three months post-assessment. In a neighboring state the license was surrendered six months after the assessment in lieu of investigation or other action, such as that taken by the board of the state requiring the assessment. These examples point out the variability in how individual states approach assessment outcomes in considering the totality of information available on individual practitioners.

Another group of interest is the four physicians who performed at the low level on the assessment but still have full or partial licensure. These four physicians all have restricted licenses: two must participate in a one-year residency; one is limited to physical exams and has no prescription-writing privileges; one may not do office procedures and must undergo a medical record review. The latter two physicians also lost their licenses in a neighboring jurisdiction because of the restrictions.

The review of the three physicians with borderline performance and full licensure identified some striking similarities: All three were English-speaking international medical school graduates (IMGs) and reticent participants in the TSR interview. All were slow in responding in the interview and had great difficulty recalling case details. None were able to describe a reasoning process or even to express an interest in trying to think about such a process. They each used an algorithmic approach to test ordering and had difficulty in an unstructured environment in deciding what tests to order. All three are currently in practice, with no action being reported on two of them by their respective SMBs more than two years after the assessment was completed. The third was granted full licensure two weeks after the assessment when the consent order was satisfied.

In reviewing the outcomes relating to licensure, it appears that the IPE-reported performance levels were usually concordant with the decisions of the SMBs, though it is difficult to tell how much impact an individual report might have had, given the timing and reporting policies of various SMBs. In some cases the physician was required to complete the assessment, with the results not necessarily used in the decision-making process. In all 14 cases of reported low performance, the physician was not in independent practice. This represents good agreement between the outcomes of the SMB deliberations and the report findings. Nevertheless, it remains unknown to what degree an assessment report assisted an SMB in the decision to monitor the physician, or to suspend or revoke a license.

Of the 15 borderline performance reports, nine physicians are practicing; six are practicing conditionally and the other three physicians, who illustrate the similarities described above, are practicing unrestricted. The remaining six are not currently in practice. All of the 24 medium- or high-performing physicians currently have licenses to practice; seven are under conditions of some kind, including the four high performers whose varied outcomes were described above.
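For readers reconciling these figures with the abstract, the reported counts tally directly. The following is a purely illustrative summary; the variable names are ours, and the counts are those stated in the text and in Tables 2 and 3:

```python
# Counts of IPE physicians (2002-2005) by overall performance level,
# as reported in the text; variable names are illustrative only.
counts = {"low": 14, "borderline": 15, "medium_or_high": 24}

# Low and borderline together constitute "less than adequate" performance.
less_than_adequate = counts["low"] + counts["borderline"]  # 29 physicians
adequate = counts["medium_or_high"]                        # 24 physicians
total_assessed = less_than_adequate + adequate             # 53 IPE physicians

print(less_than_adequate, adequate, total_assessed)  # 29 24 53
```

These totals match the abstract: 29 physicians performed at less than adequate levels and 24 at adequate levels, for 53 IPE participants in all.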

The IPE assessment process was used by 18 different SMBs in the three years that it performed assessments in Euless, Texas and Philadelphia, Pa. One state had more than 10 reports, eight states had two reports, and five states received only one report. Two of the 53 physicians assessed were self-referrals. Four osteopathic physicians were evaluated. One physician was assessed twice. The expansion of the collaborative network with regional sites should improve accessibility, such that more SMBs can use not only the site closest to them, but also the one that best fits the assessment and educational needs of the individual physician.

THE FUTURE OF PLAS ASSESSMENT

The PLAS is facilitating the growth of emerging and established assessment centers as regional sites for a national network of high-quality and reliable evaluative services. New research and development initiatives are intended to continue to customize the assessments toward more practice-friendly content and to introduce more practice-based (in-office) tools, such as chart audits and structured interviews.

One recent innovation, the newest tool in the continually growing assessment toolbox, is a clinical skills assessment designed to accommodate the efficiencies of practicing physicians. To be effective, this tool had to take into account that the hands-on examination of a patient and the associated data gathering become more efficient with experience. This use of standardized patients is a major step forward in being able to assess more than knowledge in practicing physicians.

At this time it is not possible to evaluate the educational recommendations of the assessment reporting process. A wide variety of recommendations were included in the assessment reports, ranging from obtaining training in a mini-residency program or with a preceptor, to taking courses to solidify knowledge bases or joining a journal club. Efforts to gather data on how physicians use the reported recommendations have begun; however, the nature of the process and the availability of educational resources suggest that it will be years before significant data are collected.

As the collaborative network develops and expands, the experience gained in the past three years, when the IPE conducted assessments, will facilitate the processes implemented in cooperation with the collaborative sites. The cyclical nature of the feedback process will enhance the evolution of the network itself. The IPE is now referring SMBs to the collaborating sites, usually in the regional arrangement shown in Figure 3. The list of contact persons at each site is included as an appendix to this article.

The PLAS intends to continue to address the evolving needs of regulatory agencies, the public, and the profession in prioritizing ongoing and future development. The PLAS is interested in collaborating in studies such as the research sponsored by a grant from the Robert Wood Johnson Foundation. This project is a needs assessment focusing on what medical boards and other components of the medical community perceive to be the features associated with physicians who are at risk for practice difficulties.

SUMMARY

This article was intended to provide a snapshot of the progress being made in the continually evolving arena of post-licensure assessment. The history of the PLAS demonstrates the close relationship of the two sponsoring organizations (the FSMB and the NBME) and the dedication of both to adjusting to the difficulties encountered in the operating environment. A variety of evaluation tools are now available or are coming into more widespread use to assist in competence assessment.

One of the specific goals of this article was to describe the transition in the role of the IPE from conducting assessments for referring SMBs to providing reliable and standardized assessment tools to independent assessment centers, where the assessments can be localized and personalized to regional or specialized areas of expertise. Another specific goal was to show the performance data of the participating physicians and relate that performance to the outcomes of the deliberations of the individual SMBs. The review of specific intersections of performance and outcome suggests that the data from the assessment reports are generally supportive of the resulting licensure decisions of the SMBs. Finally, and of highest significance, the PLAS is evolving slowly but positively as an effective facilitator of competence assessments for practicing physicians. The personalized assessment process and concomitant education recommendations have implications across several developing initiatives in the arena of maintenance of licensure or certification and the continuing professional development of physicians.

Footnotes

  • Thomas R. Henzel, Ed.D., Research Associate, Post-Licensure Assessment Unit, Assessment Programs, National Board of Medical Examiners; Andrea Ciccone, M.S., Program Manager, Post-Licensure Assessment Unit, Assessment Programs, National Board of Medical Examiners; Frances Cain, Post-Licensure Assessment System Manager, Federation of State Medical Boards; Carol A. Clothier, Vice President, Examinations and Post-Licensure Assessment Services, Federation of State Medical Boards; Richard Hawkins, M.D., Deputy Vice President, Assessment Programs, National Board of Medical Examiners.

REFERENCES

  1. Federation of State Medical Boards. Report of the Special Committee on Maintenance of Licensure. May 8, 2006, available at http://www.fsmb.org/pdf/GRPOL_public_policy_compendium.pdf (p. 13, #150.004).
  2. Citizen Advocacy Center. PreP 4 Patient Safety, Healthcare provider assessment & remediation resource manual. Washington, D.C.: Health Resources and Services Administration; 2004.
  3. Melnick DE, Asch DA, Blackmore DE, Klass DJ, Norcini JJ. Conceptual challenges in tailoring physician performance assessment to individual practice. Papers from the 10th Cambridge Conference. Medical Education 2002; 36: 931–935.
  4. American Board of Medical Specialties. “Maintenance of Certification: the program for assessment of continuing competence,” April 17, 2006, available at http://www.abms.org/Downloads/Publications/3-Approved%20Initiatives%20for%20MOC.pdf.
  5. Accreditation Council for Graduate Medical Education. “Outcome Project: general competencies,” April 17, 2006, available at http://www.acgme.org/outcome/comp/compMin.asp.
  6. Norcini JJ. Current perspectives in assessment: the assessment of performance at work. Medical Education 2005; 39: 880–889.
  7. Swanick T, Chana N. Workplace assessment for licensing in general practice. British Journal of General Practice 2005; 55(515): 461–467.
  8. Leape LL, Fromson JA. Problem doctors: is there a system-level solution? Annals of Internal Medicine 2006; 144: 107–115.
  9. Coalition for Physician Enhancement. Mission Statement, February 2, 2006, available at http://www.physicianenhancement.org.