Formative and Summative Assessment in Health Professional Education

Key Messages

  • Both summative and formative assessments are critical components of a competency-based system. (Holmboe, Norcini)

  • Understanding why the assessment is being conducted and how the purpose aligns with the desired outcomes is key to undertaking an assessment. (Holmboe, Norcini)

  • By combining a demonstration of knowledge with acquisition of skills, and by testing for an ability to apply both knowledge and skills in new situations, a message is sent to learners that knowledge, skills, application, and ability are all important elements for their education. (Holmboe, Norcini)

  • Too little time is spent on formative assessment. (Holmboe, Norcini)

  • There is a need for greater faculty development in the area of assessment. (Aschenbrener, Bezuidenhout, Holmboe, Norcini, Sewankambo)

  • Although self-assessment is a useful tool, most individuals are not good at it. (Baker, Holmboe, Norcini, Reeves)

  • Regardless of how well learners are trained, dangerous situations leading to medical errors will persist without support from larger organizational structures that emphasize a culture of safety. (Finnegan, Gaines, Malone, Palsdottir, Talbott)

In setting the stage for the workshop, John Norcini from the Foundation for Advancement of International Medical Education and Research (FAIMER) described assessment as a powerful tool for directing learning by signaling what is important for a learner to know and understand. In this way, he said, assessments can motivate learners to acquire greater knowledge and skills in order to demonstrate that learning has occurred. Summative assessments measure achievement, while formative assessments focus on the learning process and on whether the activities the learners engaged in helped them better understand and demonstrate competency. As such, both summative and formative assessments are critical components of a competency-based system. A competency-based model directs learning based on the intended outcomes for a learner (Sullivan, 1995; Harris et al., 2010) in the particular context in which the training takes place. Although it is outcome oriented, competency-based education also relies on continuous and frequent assessment of specific competencies (Holmboe et al., 2010).

THE PURPOSE OF ASSESSMENT

According to Norcini, assessment involves testing, measuring, collecting and combining information, and providing feedback (Norcini et al., 2011). Understanding why an assessment is being conducted and how its purpose aligns with the desired outcomes is key to undertaking an assessment. Norcini presented a list of potential purposes of assessment in health professional education, which might include some or all of the following:

  • Enhance learning by pointing out flaws in a skill or errors in knowledge.

  • Ensure safety by demonstrating that learning has occurred.

  • Guide learning in a particular direction outlined by the assessment questions or methods.

  • Motivate learners to seek greater knowledge in a particular area.

  • Provide feedback to the educator or trainer that benchmarks progress of the learner.

Highlighting the fourth bullet, Norcini emphasized that one purpose of assessment is to "create learning." To learn, one must be able to retrieve and use the information taken in. To underscore this point, Norcini cited an example involving students who took a test three times and ultimately scored better than students who read a relevant article three times (Roediger and Karpicke, 2006). This is known as the "testing effect": tests appear to enhance retention even when they are given without any feedback. Norcini described the hypothesis behind the testing effect: assessments create learning because they force not only retrieval but also application of information, and they signal to students what is important and what should be emphasized in their studies and experiential learning.

Forum Co-Chair Afaf Meleis from the University of Pennsylvania School of Nursing questioned whether there is a danger that assessments direct studying toward the assessment tool itself rather than opening new ways of critical thinking. Norcini responded in the affirmative, saying that because this risk is always present, the assessment tool must be carefully selected. Historically, tests were designed around fact memorization. Roughly 20 to 25 years ago, the standardized patient was introduced into assessments, moving them beyond the simple memorization-and-regurgitation model. By combining a demonstration of knowledge with acquisition of skills, and by testing for the ability to apply both knowledge and skills in new situations, a message is sent to learners that knowledge, skills, application, and ability are all important elements of their education.

Assessment Outcomes and Criteria

As might be expected, said Norcini, the most important outcome of an assessment differs based on one's perspective. Students are concerned about being able to demonstrate their competence, educators and educational institutions are interested in producing competent health professionals who are accountable, and regulatory bodies are mainly focused on accountability and maintenance of professional competence. Users of the health system are also concerned that health professionals are accountable and competent, but in addition, they want to know if providers are being efficient with their resources.

Desired outcomes of an assessment differ not only based on perspective, as noted above, but also based on the context within which the assessment is conducted. And although there are certain characteristics of a good assessment, Norcini emphasized that no single set of criteria applies equally to all assessment situations. Despite all the diversity in reasons for conducting assessments and in the settings where they are conducted, Norcini reported that participants at the Ottawa Conference came together to produce a unified set of seven criteria for a good assessment (Norcini et al., 2011). The conference participants also explored how these criteria might be modified based on the purpose of the assessment and the stakeholder(s) using it. The criteria were presented to the Forum members for discussion at the workshop and can be found in Table 1-1.

TABLE 1-1 Criteria Needed for a Good Assessment, Produced at the Ottawa Conference.

In considering the criteria outlined by Norcini, Forum Co-Chair Jordan Cohen from George Washington University asked whether it is possible to use these principles of assessment for assessing how well teams function and work interprofessionally. Norcini responded with a resounding affirmation that the principles apply regardless of the assessment situation, although the challenges increase dramatically. This, he said, is a growing area of research. For example, the 360-degree assessment is one way to measure teams, and there is considerable work under way in using simulation to assess health professional teams.

Assessment as a Catalyst for Learning

Warren Newton, representing the American Board of Family Medicine, asked about Norcini's use of the term catalyzing learning. Norcini responded that it is one thing to tell a student what is important to learn and another to provide students with feedback, based on the assessment, that drives their learning. The latter is a much more specific way of signaling what is important, and it is used to create learning among students. Newton then asked about the cost of assessment activities relative to other kinds of activities. He pointed out that many of the Forum members manage both faculties and clinical systems, which prompted the question: how much time should be spent on assessment as part of the overall teaching role? Norcini responded by distinguishing the types of assessment, saying that far too much time is often devoted to summative assessment and too little to formative assessment; he added that formative assessment is the piece that drives learning and the part that is integrated with learning. Furthermore, assessments can be done relatively efficiently, especially if assessors collaborate with partners across the institution. Norcini believes there could be greater sharing of resources across institutions, which would lead to better and more efficient assessments. Another advantage is the cost savings achieved by spreading fixed costs, which typically represent the largest expenses associated with assessments, across institutions.

Assessment's Impact on Patients and Society

Forum member and workshop co-chair Eric Holmboe from the American Board of Internal Medicine (ABIM) moderated the question-and-answer session with John Norcini and brought up assessment from a public perspective. He asked the audience to consider the return on investment if assessment were not in place: if insufficiently prepared health professionals were licensed and allowed to practice over a 30-year career. The cost to society would be much lower if time were spent, particularly on the formative side, making sure health professionals acquire the competence needed to be effective. Holmboe said that assessors often look at the short-term costs and the time costs without recognizing that insufficient effort carries a heavy cost over time. Moreover, there has not been a strong, concerted effort to embed assessment into daily activities such as bedside rounds, a form of observation and assessment that could be exploited more effectively. There are also a number of multisource tools that are relatively low tech and involve a series of observations; what is lacking, however, is a way to make them sufficiently reliable that appropriate judgments and inferences can be drawn.
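The reliability concern Holmboe raised can be made concrete with a standard psychometric result that was not discussed at the workshop: the Spearman-Brown prophecy formula, which estimates how the reliability of a mean rating grows as more independent observations are pooled. The following is a minimal Python sketch; the single-rater reliability of 0.30 is a hypothetical placeholder, not a figure from any actual multisource tool.

```python
# Minimal sketch (illustrative only): how many raters a multisource-feedback
# tool might need before its mean score supports sound judgments.
# Spearman-Brown: reliability of the mean of n ratings = n*r / (1 + (n-1)*r),
# where r is the (hypothetical) reliability of a single rating.

def spearman_brown(r: float, n: int) -> float:
    """Reliability of the mean of n independent ratings, each with reliability r."""
    return (n * r) / (1 + (n - 1) * r)

def raters_needed(r: float, target: float) -> int:
    """Smallest number of raters whose pooled rating reaches the target (target < 1)."""
    n = 1
    while spearman_brown(r, n) < target:
        n += 1
    return n

if __name__ == "__main__":
    r_single = 0.30  # hypothetical reliability of one brief observation
    for n in (1, 5, 10, 15):
        print(f"{n:2d} raters -> reliability {spearman_brown(r_single, n):.2f}")
    print("Raters needed to reach 0.80:", raters_needed(r_single, 0.80))
```

Under these assumed numbers, a single brief observation is far too noisy on its own, but pooling roughly ten observations pushes reliability past 0.80, which is one way to read Holmboe's later point that multiple short, shared observations can become a rich and dependable source of information.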

Forum and workshop planning committee member Patricia Hinton Walker from the Uniformed Services University of the Health Sciences followed Holmboe's lead and asked about including the public on the health team and how an assessment might be conducted that includes not just patients but students as well. Norcini responded by again emphasizing the value of multisource feedback for team assessments, as well as other opportunities, such as ethics panels that can draw on a patient's competence in a particular area. He went on to say that the assessment process would lack validity if patients were not involved. In follow-up, Walker commented that students are somewhat separated from patients and families. Norcini pointed out that this is an area of keen interest among researchers in the United Kingdom, who are incorporating patients into the education of all health care providers through family interviews. Holmboe also brought up longitudinal integrated clerkships (LICs), in which students are assigned a group of patients and a family to follow over all 4 years of their training. The families play a major role in the assessment and feedback process of the trainees, said Holmboe. Although it is a resource-intensive model, there are data from Australia, Canada, South Africa, and the United States on using LICs as an organizing principle (Norris et al., 2009; Hirsh et al., 2012). The Commonwealth Medical School in Scranton has moved entirely to this approach, so every student at Commonwealth will be in an LIC-type model for their entire medical education.

Walker also wanted to know Holmboe's and Norcini's views on "high-stakes assessments." In Holmboe's opinion, there needs to be some form of public accountability through a summative assessment (Norcini agreed). At the ABIM, Holmboe views the certification exam as part of the board's public accountability as well as an act of professionalism. But for him, the bigger issue is including more formative assessments during training and education rather than relying so heavily on summative examinations. Norcini added that he sees formative assessment as a mechanism for addressing trainee errors at a much earlier stage than waiting for the summative assessment at the end.

Jacob Buck from the University of Maryland School of Social Work, who joined the workshop as a participant, asked what the target of the assessment should be: is it to have healthier individuals and populations, or to graduate smarter health providers? In response, Norcini unpacked the goal of the assessment. If the goal is to take better care of patients, then the focus would be on the demonstration of skills in a practice environment, and likely not a multiple-choice test. In his opinion, the triple aim of improving health and care at lower costs may be the desired outcome of education, so an assessment could be designed to achieve that goal. Forum member Pamela Jeffries from Johns Hopkins University did not disagree, but she asked how one might measure interprofessional education (IPE) in the practice environment while patients are involved. Holmboe responded that this gets at some of the complexities of assessing what a learner acquires through experiential learning. Holmboe also raised the difficulty of finding training sites where high-quality interprofessional care can be experienced, so that learners can be assessed against a gold standard. It is not surprising that learners who do not experience high-quality interprofessional care are not well prepared to work in these environments. Jeffries suggested that interprofessional clinical simulations could help bridge the gap for learners who are not trained through an embedded IPE clinical or related work experience.

STRUCTURE AND IMPLEMENTATION OF ASSESSMENT

Looking at assessment through a different lens, Forum member Bjorg Palsdottir, who represents the Belgian organization Training for Health Equity Network (THEnet), wanted to know more about who does the assessing and how that person might prepare for the role. Norcini acknowledged the need for greater faculty development in this area because health professionals are not trained in education or assessment. Forum member and workshop planning committee member Carol Aschenbrener from the Association of American Medical Colleges agreed, but also felt that the shortage of modern clinical practice sites in which to embed the learner is another major impediment. In her opinion, it is the clinical sites that need greater scrutiny and that, if pushed toward modernization through assessment, could be the lever for greater, more relevant faculty development. According to Holmboe, measuring practice characteristics unfortunately remains difficult, although the tools are improving, particularly with the introduction of the Patient-Centered Medical Home (PCMH). For example, the National Committee for Quality Assurance (NCQA) developed the NCQA 2011 Medical Home Assessment Tool, which providers and staff can use to assess how their practice operates compared with PCMH 2011 standards (Ingram and Primary Care Development Corporation, 2011). This tool looks mostly at structure and process, said Holmboe, but researchers are beginning to embed outcomes into the assessment, which might make it a good starting place for measuring practice characteristics that could then be applied in education.

Another example Holmboe described is the Dartmouth Microsystem Improvement Curriculum (DMIC). This is a set of tools that incorporates success characteristics associated with high-functioning practices (The Dartmouth Institute, 2013). It uses action learning to instruct providers on how to assess and improve a clinical work environment in order to ultimately provide better patient care. The Idealized Design of Clinical Office Practices (IDCOP) from the Institute for Healthcare Improvement is yet another tool (IHI, 2014). It attempts to demonstrate that through appropriate clinical office practice redesign, performance improvements can be achieved that respond to patients' needs and desires. Goals of the IDCOP model are better clinical outcomes, lower costs, higher satisfaction, and improved efficiency (IHI, 2000). Holmboe acknowledged that these examples are clinically oriented, and he would be interested to learn about other models (although no other models were offered by the participants).

Assessing Cultural Competence

Afaf Meleis asked how one might assess the social mission of health professional learners and design a tool that assesses cultural competence. Neither Norcini nor Holmboe knew of any good models for assessing either of these areas, but Holmboe repeated that work in social accountability and professionalism can be assessed only if learners actually experience a work environment with role models in these areas, and that it is the responsibility of the professionals to create such opportunities. Norcini agreed with Meleis that cultural competence is a critical issue to assess. He added that it is absolutely essential that assessors scrutinize the methods used and the results obtained to ensure no one is disadvantaged for cultural reasons. Meleis encouraged Norcini to add a multicultural perspective to his list of criteria for a good assessment.

Assessment by Peers

Forum member Beverly Malone from the National League for Nursing questioned the role of peer assessment in formative and summative assessments, given the inherent challenges associated with this type of assessment. Norcini responded that peer assessments are underutilized, particularly when it comes to the assessment of teachers, although a set of measures that includes peer assessment is being developed for assessing teachers. Norcini added that another way to assess teachers is to look at the outcomes of their students. Holmboe pointed out that one risk of using student outcomes to assess educators arises when the experiences are not well designed, so that interactions with peers, patients, or others are brief or casual. Attempting to assess learners' knowledge, skills, or ability in these types of brief and casual encounters is simply not useful, said Holmboe.

Assessment by Patients

The next question changed the focus of the conversation from the learner to the patient: a patient encounter is a one-time event, so what methodologies are in place to ensure equivalence when incorporating a patient's very particular set of experiences? Norcini admitted that there are biases; to counter them, he samples a provider's patient population as broadly as possible, including different patients on different occasions. In his opinion, there are at least three reasons for including patients in the assessment of providers, listed below (a sampling sketch follows the list):

  1. Patients are reluctant to criticize their provider, so when they do, the provider has a major issue that should be addressed.

  2. Patients can be used to compare providers with their colleagues.

  3. Patient feedback makes a major difference in provider performance.
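To make Norcini's broad-sampling strategy concrete, here is a short, purely illustrative Python sketch; the encounter records, field names, and numbers are hypothetical and not drawn from any actual assessment instrument.

```python
# Illustrative sketch: sampling a provider's patient encounters broadly,
# across different patients and different occasions, to dilute the bias
# any single encounter introduces. Records and field names are hypothetical.
import random
from collections import defaultdict

def broad_sample(encounters, per_occasion=2, seed=7):
    """Pick up to `per_occasion` distinct-patient encounters from each occasion."""
    rng = random.Random(seed)
    by_occasion = defaultdict(list)
    for enc in encounters:
        by_occasion[enc["occasion"]].append(enc)

    sample = []
    for occasion in sorted(by_occasion):
        group = by_occasion[occasion][:]
        rng.shuffle(group)
        picked, seen_patients = [], set()
        for enc in group:
            if enc["patient_id"] not in seen_patients:
                seen_patients.add(enc["patient_id"])
                picked.append(enc)
            if len(picked) == per_occasion:
                break
        sample.extend(picked)
    return sample

if __name__ == "__main__":
    encounters = [
        {"patient_id": pid, "occasion": occ}
        for occ in ("2013-01", "2013-02", "2013-03")
        for pid in ("A", "B", "C", "A")  # patient A visits twice per occasion
    ]
    for enc in broad_sample(encounters):
        print(enc)
```

The design choice mirrors the reasoning in the text: by stratifying on occasion and preferring distinct patients within each occasion, no single patient's idiosyncratic experience dominates the picture of the provider.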

Time-Efficient Assessments

During this question-and-answer session, Forum member Joanna Cain, representing the American Congress of Obstetricians and Gynecologists and the American Board of Obstetrics and Gynecology, offered a personal example of how her colleagues in the operating room (OR) use a time-efficient model of formative assessment. In their model, every operation ends with a "60-second" gathering of the team to discuss what did and did not go well. Holmboe applauded their use of formative assessment, but he cautioned against using time limitations as an excuse for not engaging in a complete assessment process. In his view, assessment is a professional obligation that demonstrates the return on investment. With that caveat, Holmboe reported that multiple 2- to 3-minute shared observations can be a rich source of information, and more opportunities for such assessments would be useful. In fact, as the OR example showed, quick assessments are attractive to many health professionals with busy schedules, and they can drive culture as colleagues observe the value of this form of individual and peer assessment, information sharing, and team building.

Self-Assessment

In hearing the previous discussion, Jordan Cohen commented that self-reflection is a potentially important tool. Norcini partly agreed: although self-assessment is a useful tool, most individuals are not good at it. Holmboe added that self-assessment, which Eva and Regehr (2011) define as a global judgment of one's ability in a particular domain, is limited in just the way Norcini described. The real value is found when self-assessors seek comments and feedback from others, especially those outside their own profession or discipline (Sargeant, 2008). But despite the valuable information this form of assessment can provide, it is not used as often as other forms of assessment.

MAKING ASSESSMENT MEANINGFUL

Following the orienting discussion, Forum members engaged in interprofessional table discussions to delve more deeply into the value of formative and summative assessments. Each table in the room included Forum members, a health professional student representative, and a user of the health care system. The purpose of engaging students and patient representatives was to enrich the discussions at each table by infusing different perspectives into the conversations. Students identified by members of the Forum were invited to attend the workshop and represented the fields of social work, public health, medicine, nursing, pharmacy, and speech, language, and hearing. Forum member and workshop co-chair Darla Coffey from the Council on Social Work Education led the session. Coffey suggested that communication might be a focus of the discussions about assessment. One person from each group was designated to present to the entire group the summary of the discussions that took place at his or her table. The results of these discussions can be found in Table 1-2 (value of summative assessments) and Table 1-3 (value of formative assessments). The responses were informed by group discussion and should not be construed as consensus.

TABLE 1-2 Summative Assessment Discussion Question: From the Perspective of Assessment of Learning, What Do You Think Makes a Good Assessment Tool/Measure?

TABLE 1-3 Formative Assessment Discussion Question: From the Perspective of Assessment for Learning, What Do You Think Makes a Good Assessment Tool/Measure?

The Challenge of Uneven Power Structures

In addition to the points listed in Tables 1-2 and 1-3, Forum member Richard Talbott, representing the Association of Schools of the Allied Health Professions, brought up the challenges of assessing supervisors or others who possess greater power than the assessor, given the fear of reprisal. He believes that the first goal within communication is to dismantle the power structure so that anyone, including patients and caretakers, can feel comfortable speaking up and giving honest assessments; this would also create positive role models for learners to emulate. Bjorg Palsdottir then discussed the hidden curriculum and how negative role models can imprint negative experiences on learners regardless of the educational training received in the classroom.

This comment was underscored by another Forum member, who cited the example of an aggressive attending physician. The program director confronted the physician about his aggression by emphasizing the risk to safety, saying, "If you are intimidating people, you are not a safe practitioner." One needs to understand how to navigate the potentially delicate situations created by uneven power structures when challenging the hierarchy, said the Forum member. It takes practice, but it can be done. Workshop planning committee member Meg Gaines from the University of Wisconsin Law School took the point a step further, saying that speaking up is an ethical imperative.

This topic resonated with the Forum's public health representative, John Finnegan from the Association of Schools and Programs of Public Health (ASPPH), who was reminded of the 2005 Joint Commission report that cited communication failures as the leading root cause of medical errors (Joint Commission Resources, Inc., 2005). This does not mean the wrong information was always transmitted; rather, oftentimes nothing was said at all because of a fear of retribution. Regardless of how well learners are trained, said Finnegan, dangerous situations leading to medical errors will persist without support from the larger organizational structures that emphasize a culture of safety.

Assessment as a Driver for Change

Darla Coffey then asked the members, students, and patient representatives to consider how assessments could be a catalyst for change in the educational and health care systems. Much of the discussion revolved around the idea of better integrating education and practice; Forum member George Thibault from the Josiah Macy Jr. Foundation was a vocal advocate for rethinking health professional education and practice as one system. Forum member Lucinda Maine, the representative from the American Association of Colleges of Pharmacy, thought this could be accomplished within her field by improving the assessment skills of its volunteer instructors and preceptors. In her view, this would make it easier to suggest changes in practice environments that could strengthen relationships along the continuum from education to practice. But, said Aschenbrener, for there to be any benefit to health professional education, assessments need to be reviewed at least annually for their alignment with predetermined educational goals and the set level of student achievement.

The representative from the Association of American Veterinary Medical Colleges, Chris Olsen, felt that for assessment to drive change, it would need to be part of the expectation; too often, assessments are carried out without taking the critical last step of using the information to drive change. Individual participants at the workshop offered their thoughts on how assessments in the context of education could drive changes in the practice environment. For example, workshop planning committee member Lucy Mac Gabhann, a law student at the University of Maryland, suggested that in a community setting, student assessment might influence policy. And Forum member Jan De Maeseneer from Ghent University in Belgium thought that students exposed to resource-constrained neighborhoods would develop a sensitivity to social inequalities in health. However, others expressed doubt that assessments could effect change when the organizational culture is based on hierarchy and imbalances in power structures that are perpetuated through the hidden curriculum and role modeling. Beverly Malone pointed out that such a culture puts patients at risk when open and honest communication is avoided out of fear of reprisal. John Finnegan fervently agreed, saying that communication in an organizational setting is strongly influenced by that culture, and no matter how much one tries to educate around it, the larger organizational framework will prevail. That must change, he said; there has to be a safe culture in which communication is not feared if assessment is to drive change in education and practice.

Yet another view was expressed by George Thibault, who pushed for health professions education and health care delivery to be treated as one unit with one goal. In this way, the impact of assessments is considered on both education and practice simultaneously: educational reforms are informed by delivery changes, and delivery changes are informed by education changes. If education and practice continue to be dichotomized, he said, valuable learning opportunities across the continuum will be missed. Workshop planning committee member Cathi Grus from the American Psychological Association commented on the opportunity for learning from assessments that are bidirectional. To her, such learning means engaging patients in the design of the feedback provided to students, which could send a powerful message to learners about what is important to the end user of the health system. What matters, said Grus, is that all involved understand the goals of the assessment in order to maximize its impact.

REFERENCES

  • Eva KW, Regehr G. Exploring the divergence between self-assessment and self-monitoring. Advances in Health Sciences Education. 2011;16(3):311–329. [PMC free article: PMC3139875] [PubMed: 21113820]

  • Harris P, Snell L, Talbot M, Harden RM. Competency-based medical education: Implications for undergraduate programs. Medical Teacher. 2010;32(8):646–650. [PubMed: 20662575]

  • Hirsh D, Gaufberg E, Ogur B, Cohen P, Krupat E, Cox M, Pelletier S, Bor D. Educational outcomes of the Harvard Medical School-Cambridge Integrated Clerkship: A way forward for medical education. Academic Medicine. 2012;87(5):643–650. [PubMed: 22450189]

  • Holmboe ES, Sherbino J, Long DM, Swing SR, Frank JR. The role of assessment in competency-based medical education. Medical Teacher. 2010;32(8):676–682. [PubMed: 20662580]

  • IHI (Institute for Healthcare Improvement). Idealized design of clinical office practices. Boston, MA: IHI; 2000.

  • Joint Commission Resources, Inc. The Joint Commission guide to improving staff communication. Oakbrook Terrace, IL: Joint Commission Resources; 2005.

  • Norcini J, Anderson BB, Burch V, Costa MJ, Duvivier R, Galbraith R, Hays R, Kent A, Perrott V, Roberts T. Criteria for good assessment: Consensus statement and recommendations from the Ottawa 2010 conference. Medical Teacher. 2011;33(3):206–214. [PubMed: 21345060]

  • Norris TE, Schaad DC, DeWitt D, Ogur B, Hunt DD. Longitudinal integrated clerkships for medical students: An innovation adopted by medical schools in Australia, Canada, South Africa, and the United States. Academic Medicine. 2009;84(7):902–907. [PubMed: 19550184]

  • Roediger HL, Karpicke JD. The power of testing memory: Basic research and implications for educational practice. Perspectives on Psychological Science. 2006;1(3):181–210. [PubMed: 26151629]

  • Sargeant J. Toward a common understanding of self-assessment. Journal of Continuing Education in the Health Professions. 2008;28(1):1–4. [PubMed: 18366124]

  • Sullivan RS. The competency-based approach to training. Washington, DC: U.S. Agency for International Development; 1995.

Source: https://www.ncbi.nlm.nih.gov/books/NBK248052/
