REVIEW ARTICLE

Year: 2016 | Volume: 11 | Issue: 4 | Page: 119-120
Fixed pass mark: Time for change
Assad Ali Rezigalla
Department of Anatomy, College of Medicine, University of Bisha, Bisha, KSA
Date of Web Publication: 16-Mar-2017
Correspondence Address: Assad Ali Rezigalla, Department of Anatomy, College of Medicine, University of Bisha, P. O. Box: 61922, Bisha 551, KSA
DOI: 10.4103/summ.summ_13_16
A pass mark is a score that forms the limit between sufficiently competent candidates and those who are not competent. There are two types of pass marks: relative and absolute. A fixed pass mark can be determined either by asking judges or by choosing an arbitrary figure. The internationalization, globalization, and cross-border education driven by developments in information and communication technology require global standards for medical education, and hence evidence-based standards. In the presence of considerable evidence against the use of the fixed pass mark, continuing its use is unjustifiable.

Keywords: Angoff method, fixed pass mark, standard setting
How to cite this article: Rezigalla AA. Fixed pass mark: Time for change. Sudan Med Monit 2016;11:119-20.
A pass mark is a score that forms the limit between sufficiently competent candidates and those who are not competent.[1] Depending on the test being implemented, there are two types of pass marks: relative and absolute.
A relative pass mark is used to select a predefined group of examinees.[2],[3],[4],[5],[6] This type concentrates only on the desired group of examinees, without any regard to the limit of competence. A relative pass mark creates competition among the examinees rather than between them and the examination as a limit of competence.
An absolute pass mark is a figure that limits competence without concern for the resulting number of competent examinees.[3],[4],[5],[7] In this method, examinees compete against the examination as a limit of competence; it is therefore used more for certifying examinations.[1]
According to the classifications of standard-setting methods,[5],[8] a fixed pass mark is an absolute type. A fixed pass mark can be determined either by asking judges what percentage they believe a qualified examinee would score [1] or as a figure set by the responsible authority. Neither method is based on the examination itself or on the candidates' level of knowledge, and neither is suitable for certifying examinations. The first method [1] depends on the average of the judges' estimates of examinee qualification; it has more scientific basis than simply fixing 50%,[9],[10] 60%,[10],[11] or even 70% as a pass mark, as has been done for years. The consistency of such standards has been questioned.[12] Despite the variety of standard-setting methods and the extensive literature on the topic, some schools and colleges of medicine still use fixed pass marks.
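As a minimal sketch (the figures below are hypothetical and for illustration only, not from the article), the judge-based variant of the fixed pass mark reduces to averaging the judges' estimates of the percentage a just-qualified examinee would score on the whole examination:

```python
# Hypothetical example: each judge gives one whole-examination estimate of
# the percentage a just-qualified (borderline) examinee would score.
judges_percentages = [55.0, 60.0, 58.0, 62.0, 65.0]

# The fixed pass mark is simply the mean of those estimates.
fixed_pass_mark = sum(judges_percentages) / len(judges_percentages)
print(fixed_pass_mark)  # 60.0
```

Note that nothing in this calculation consults the actual examination items or candidate performance, which is precisely the weakness the article raises.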
Any standard-setting method is built on one of the two stakes of assessment: the candidate or the examination. The first takes the desired candidates as the limit of competence, or as the required number; the second takes the examination as a guardian of the curriculum outcomes [13] and of the level of difficulty. Moreover, both forms of assessment drive student learning [14] and expand professional horizons.[6],[15],[16]
An arbitrary fixed pass mark rests on no scientific basis; it is just a figure and is consequently indefensible.[1],[17] Such an invalid and unreliable pass mark can allow noncompetent candidates to practice, while an unrealistically high pass mark will exclude competent candidates. Both situations undermine candidates' confidence in themselves, in the assessment, in the stakeholders, and in the labor market.
The literature on standard setting describes many types of pass marks and methodologies for setting standards.[8],[18] The most commonly cited is the modified Angoff method, which takes into consideration the borderline candidate with just enough competence as well as judgments about the examination. Judgment about the examination depends on two points: the examination's difficulty and the curriculum outcomes.
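The per-item logic of an Angoff-style standard can be sketched as follows; the judge estimates are hypothetical and for illustration only. Each judge rates, for every item, the probability that a borderline candidate would answer correctly; the item means are summed to give the expected score of that borderline candidate, which becomes the cut score:

```python
# Hypothetical Angoff-style cut-score calculation for a 5-item test.
# Rows are judges; columns are items. Each entry is the estimated
# probability that a borderline (just-competent) candidate answers
# that item correctly.
judge_estimates = [
    [0.60, 0.70, 0.50, 0.80, 0.40],
    [0.55, 0.75, 0.45, 0.85, 0.50],
    [0.65, 0.65, 0.55, 0.75, 0.45],
]

def angoff_cut_score(estimates):
    """Average the judges' estimates per item, then sum across items
    to get the borderline candidate's expected raw score."""
    n_judges = len(estimates)
    n_items = len(estimates[0])
    item_means = [
        sum(row[i] for row in estimates) / n_judges for i in range(n_items)
    ]
    return sum(item_means)

cut = angoff_cut_score(judge_estimates)
print(round(cut, 2))            # 3.05 raw marks out of 5
print(round(100 * cut / 5, 1))  # 61.0 as a percentage pass mark
```

Unlike a fixed pass mark, this cut score changes with the difficulty of each year's paper, because the judges rate the actual items.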
Selecting a suitable standard-setting method, or changing to one, is not a time-consuming practice, and it has an impact on the quality of the graduating candidates. Many international bodies and councils concerned with accreditation and good practice advise the use of standard setting in assessment.[19],[20],[21] The Accreditation Commission of Colleges of Medicine [21] stated that the methods for assessing students' skills, knowledge, and proficiencies must be developed by the medical school and overseen by a promotions and evaluation committee.[21] The General Medical Council [22],[23] reported that schemes of assessment must support the curriculum and allow students to prove that they have achieved the curricular outcomes.[7] Moreover, its supplementary advice declared that medical schools should not use fixed pass marks, that is, pass marks which are the same every year.[4] The trilogy of standards of the World Federation for Medical Education emphasizes the assessment of students and trainees for the accreditation of colleges. The internationalization, globalization, and cross-border education driven by developments in information and communication technology require global standards for medical education,[21] and hence evidence-based standards.
In the presence of considerable evidence against the use of the fixed pass mark, continuing its use is unjustifiable. Standard setting, on the other hand, is becoming more evidence-based, especially with the availability of a number of widely tested methods.
Acknowledgments
Great appreciation goes to Dr. H. Kameir, Dr. S. Bashir, Dr. O. Elfaki, Professor J. Haider, and Professor M. Habieb for their comments. The comments of Dr. El. Mekki A are highly appreciated. The college dean and administration of the College of Medicine (KKU, KSA) are thanked for their help and for allowing the use of facilities.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
References
1. Norcini JJ. Setting standards on educational tests. Med Educ 2003;37:464-9.
2. Boursicot KA, Roberts TE, Pell G. Standard setting for clinical competence at graduation from medical school: A comparison of passing scores across five medical schools. Adv Health Sci Educ Theory Pract 2006;11:173-83.
3. Downing SM, Tekian A, Yudkowsky R. Research methodology: Procedures for establishing defensible absolute passing scores on performance examinations in health professions education. Teach Learn Med 2006;18:50-7.
4. Jackson N, Jamieson A, Khan A. Assessment in Medical Education and Training: A Practical Guide. Oxford: Radcliffe Publishing; 2007.
5. Livingston SA, Zieky MJ. Passing Scores: A Manual for Setting Standards of Performance on Educational and Occupational Tests. Princeton, NJ: Educational Testing Service; 1982.
6. Norcini J, Anderson B, Bollela V, Burch V, Costa MJ, Duvivier R, et al. Criteria for good assessment: Consensus statement and recommendations from the Ottawa 2010 Conference. Med Teach 2011;33:206-14.
7. Al-Wardy NM. Assessment methods in undergraduate medical education. Sultan Qaboos Univ Med J 2010;10:203-9.
8. Kaufman DM, Mann KV, Muijtjens AM, van der Vleuten CP. A comparison of standard-setting procedures for an OSCE in undergraduate medical education. Acad Med 2000;75:267-71.
9. Taylor CA. Development of a modified Cohen method of standard setting. Med Teach 2011;33:e678-82.
10. McCoubrie P. Improving the fairness of multiple-choice questions: A literature review. Med Teach 2004;26:709-12.
11. McCrorie P, Boursicot KA. Variations in medical school graduating examinations in the United Kingdom: Are clinical competence standards comparable? Med Teach 2009;31:223-9.
12. Cusimano MD, Rothman AI. The effect of incorporating normative data into a criterion-referenced standard setting in medical education. Acad Med 2003;78(10 Suppl):S88-90.
13. Wass V, Van der Vleuten C, Shatzer J, Jones R. Assessment of clinical competence. Lancet 2001;357:945-9.
14. Miller GE. The assessment of clinical skills/competence/performance. Acad Med 1990;65(9 Suppl):S63-7.
15. Ben-David MF. The role of assessment in expanding professional horizons. Med Teach 2000;22:472-7.
16. Chandratilake M, Davis M, Ponnamperuma G. Evaluating and designing assessments for medical education: The utility formula. Int J Med Educ 2010;1:1-17.
17. Norcini JJ, Shea JA. The credibility and comparability of standards. Appl Meas Educ 1997;10:39-59.
18. Berk RA. Standard setting: The next generation (where few psychometricians have gone before!). Appl Meas Educ 1996;9:215-25.
19. Rubin P, Franchi-Christopher D. New edition of tomorrow's doctors. Med Teach 2002;24:368-9.
20. Christopher DF, Harte K, George CF. The implementation of tomorrow's doctors. Med Educ 2002;36:282-8.
21. Karle H. Global standards and accreditation in medical education: A view from the WFME. Acad Med 2006;81(12 Suppl):S43-8.
22. General Medical Council. Tomorrow's Doctors: Outcomes and Standards for Undergraduate Medical Education. Manchester, UK: General Medical Council; 2009.
23. General Medical Council Ethics Committee. Tomorrow's Doctors: Recommendations on Undergraduate Medical Education. London: General Medical Council; 1993.