REVIEW ARTICLE
Year: 2016  |  Volume: 11  |  Issue: 4  |  Page: 119-120

Fixed pass mark: Time for change


Assad Ali Rezigalla
Department of Anatomy, College of Medicine, University of Bisha, Bisha, KSA

Date of Web Publication: 16-Mar-2017

Correspondence Address:
Assad Ali Rezigalla
Department of Anatomy, College of Medicine, University of Bisha, P. O. Box: 61922, Bisha 551
KSA


DOI: 10.4103/summ.summ_13_16

Abstract

A pass mark is a score that separates candidates who are sufficiently competent from those who are not. There are two types of pass mark: relative and absolute. A fixed pass mark can be set either by asking judges or by choosing an arbitrary figure. The internationalization, globalization, and cross-border education driven by developments in information and communication technology demand global standards for medical education, and hence evidence-based standards. Given the considerable evidence against the fixed pass mark, continuing its use is unjustifiable.

Keywords: Angoff method, fixed pass mark, standard setting


How to cite this article:
Rezigalla AA. Fixed pass mark: Time for change. Sudan Med Monit 2016;11:119-20

How to cite this URL:
Rezigalla AA. Fixed pass mark: Time for change. Sudan Med Monit [serial online] 2016 [cited 2017 Apr 8];11:119-20. Available from: http://www.sudanmedicalmonitor.org/text.asp?2016/11/4/119/202355



A pass mark is a score that separates candidates who are sufficiently competent from those who are not.[1] Depending on the test being implemented, there are two types of pass mark: relative and absolute.

A relative pass mark is used to select a predefined proportion of examinees.[2],[3],[4],[5],[6] It focuses only on the desired group of examinees, without regard to any limit of competence, and so creates competition among the examinees rather than between each examinee and the examination as a measure of competence.

An absolute pass mark is a figure that defines competence without concern for the resulting number of competent examinees.[3],[4],[5],[7] Here, examinees compete against the examination as a competence threshold, which is why this type is used more for certifying examinations.[1]
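The distinction can be made concrete with a small sketch. The scores, pass fraction, and cut score below are hypothetical illustrations, not figures from the article: a relative (norm-referenced) cutoff moves with the cohort, while an absolute (criterion-referenced) cutoff does not.

```python
# Illustrative sketch with hypothetical scores, contrasting a relative
# (norm-referenced) cutoff, which depends on who sat the exam, with an
# absolute (criterion-referenced) cutoff, which depends only on the threshold.

def relative_pass_mark(scores, pass_fraction=0.3):
    """Cut score that passes roughly the top `pass_fraction` of examinees."""
    ranked = sorted(scores, reverse=True)
    n_pass = max(1, int(len(ranked) * pass_fraction))
    return ranked[n_pass - 1]

def absolute_pass(scores, cut_score=60):
    """Every examinee at or above the fixed competence threshold passes."""
    return [s for s in scores if s >= cut_score]

cohort = [45, 52, 58, 61, 64, 70, 73, 77, 82, 90]
print(relative_pass_mark(cohort))  # cutoff shifts if the cohort changes
print(absolute_pass(cohort))       # passes are fixed by the threshold alone
```

With a weaker cohort, the relative cutoff would drop while the absolute threshold would stay at 60, which is precisely why the two approaches certify different things.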

According to the classifications of standard-setting methods,[5],[8] a fixed pass mark is an absolute type. It can be determined either by asking judges what percentage they believe a qualified examinee would score [1] or as a figure set by the responsible authority. Neither approach is based on the examination itself or on the candidates' levels of knowledge, so neither is suitable for certifying examinations. The first approach,[1] which depends on the average of judges' estimates of examinee qualification, has more of a scientific basis than simply declaring 50%,[9],[10] 60%,[10],[11] or even 70% the pass mark, as has been done for years. The consistency of such standards has long been questioned.[12] Despite the many available standard-setting methods and the extensive literature on the topic, some schools and colleges of medicine still use a fixed pass mark.

Any standard-setting method is implemented on one of the two stakes of assessment: the candidate or the examination. The first takes the desired candidates as the limit of competence, or as the required number of passes. The second takes the examination as the guardian of the curriculum outcomes [13] and of the level of difficulty. Both forms of assessment drive student learning [14] and expand professional horizons.[6],[15],[16]

An arbitrary fixed pass mark has no scientific basis; it is just a figure and is consequently indefensible.[1],[17] Such an invalid and unreliable pass mark can allow noncompetent candidates to practice, while an unrealistically high pass mark will exclude competent candidates. Both situations undermine candidates' confidence in themselves, in the assessment, in the stakeholders, and in the labor market.

The literature on standard setting describes many types of pass marks and many methodologies for setting standards.[8],[18] The most commonly cited is the modified Angoff method, which takes as its reference the borderline candidate with just enough competence, together with judgments about the examination. The judgment about the examination rests on two points: its difficulty and the curriculum outcomes.
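In a common form of the modified Angoff procedure, each judge estimates, for every item, the probability that a borderline (just-competent) candidate would answer it correctly, and the pass mark is derived from the mean of these estimates. The ratings below are invented for illustration only:

```python
# Minimal sketch of a modified Angoff computation (hypothetical ratings).
# ratings[j][i] is judge j's probability estimate that a borderline
# candidate answers item i correctly; the cut score is the mean per-item
# estimate averaged over items, expressed as a percentage.

from statistics import mean

def angoff_pass_mark(ratings):
    """Return the Angoff cut score as a percentage of the maximum score."""
    n_items = len(ratings[0])
    # Average the judges' estimates item by item, then over all items.
    item_means = [mean(judge[i] for judge in ratings) for i in range(n_items)]
    return 100 * sum(item_means) / n_items

ratings = [
    [0.6, 0.8, 0.5, 0.7],  # judge 1
    [0.5, 0.9, 0.4, 0.6],  # judge 2
    [0.7, 0.7, 0.6, 0.8],  # judge 3
]
print(round(angoff_pass_mark(ratings), 1))  # 65.0
```

Because the estimates are made item by item, the resulting cut score tracks the difficulty of each particular paper, which is exactly what a fixed pass mark cannot do.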

Selecting a suitable standard-setting method, and changing to it, is not a time-consuming exercise, and it affects the quality of graduating candidates. Many international bodies and councils concerned with accreditation and good practice advise the use of standard setting in assessment.[19],[20],[21] The Accreditation Commission of Colleges of Medicine stated that the methods for assessing students' skills, knowledge, and proficiencies must be developed by the medical school and overseen by a promotions and evaluation committee.[21] The General Medical Council [22],[23] reported that schemes of assessment must support the curriculum and allow students to prove that they have achieved the curricular outcomes.[7] Moreover, its supplementary advice states that medical schools should not use fixed pass marks, that is, pass marks that are the same every year.[4] The trilogy of standards of the World Federation for Medical Education emphasizes the assessment of students and trainees in the accreditation of colleges. The internationalization, globalization, and cross-border education driven by developments in information and communication technology demand global standards for medical education,[21] and hence evidence-based standards.

In the presence of such evidence against the fixed pass mark, continuing its use is unjustifiable. Standard setting, by contrast, is increasingly evidence-based, especially given the availability of a number of widely tested methods.

Acknowledgments

Great appreciation goes to Dr. H. Kameir, Dr. S. Bashir, Dr. O. Elfaki, Professor J. Haider, and Professor M. Habieb for their comments. The comments of Dr. El. Mekki A are highly appreciated. The college dean and administration of the College of Medicine (KKU, KSA) are thanked for their help and for allowing the use of facilities.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.



 
References

1. Norcini JJ. Setting standards on educational tests. Med Educ 2003;37:464-9.
2. Boursicot KA, Roberts TE, Pell G. Standard setting for clinical competence at graduation from medical school: A comparison of passing scores across five medical schools. Adv Health Sci Educ Theory Pract 2006;11:173-83.
3. Downing SM, Tekian A, Yudkowsky R. Research methodology: Procedures for establishing defensible absolute passing scores on performance examinations in health professions education. Teach Learn Med 2006;18:50-7.
4. Jackson N, Jamieson A, Khan A. Assessment in Medical Education and Training: A Practical Guide. Oxford: Radcliffe Publishing; 2007.
5. Livingston SA, Zieky MJ. Passing Scores: A Manual for Setting Standards of Performance on Educational and Occupational Tests. Princeton, NJ: Educational Testing Service; 1982.
6. Norcini J, Anderson B, Bollela V, Burch V, Costa MJ, Duvivier R, et al. Criteria for good assessment: Consensus statement and recommendations from the Ottawa 2010 Conference. Med Teach 2011;33:206-14.
7. Al-Wardy NM. Assessment methods in undergraduate medical education. Sultan Qaboos Univ Med J 2010;10:203-9.
8. Kaufman DM, Mann KV, Muijtjens AM, van der Vleuten CP. A comparison of standard-setting procedures for an OSCE in undergraduate medical education. Acad Med 2000;75:267-71.
9. Taylor CA. Development of a modified Cohen method of standard setting. Med Teach 2011;33:e678-82.
10. McCoubrie P. Improving the fairness of multiple-choice questions: A literature review. Med Teach 2004;26:709-12.
11. McCrorie P, Boursicot KA. Variations in medical school graduating examinations in the United Kingdom: Are clinical competence standards comparable? Med Teach 2009;31:223-9.
12. Cusimano MD, Rothman AI. The effect of incorporating normative data into a criterion-referenced standard setting in medical education. Acad Med 2003;78(10 Suppl):S88-90.
13. Wass V, Van der Vleuten C, Shatzer J, Jones R. Assessment of clinical competence. Lancet 2001;357:945-9.
14. Miller GE. The assessment of clinical skills/competence/performance. Acad Med 1990;65(9 Suppl):S63-7.
15. Ben-David MF. The role of assessment in expanding professional horizons. Med Teach 2000;22:472-7.
16. Chandratilake M, Davis M, Ponnamperuma G. Evaluating and designing assessments for medical education: The utility formula. Int J Med Educ 2010;1:1-17.
17. Norcini JJ, Shea JA. The credibility and comparability of standards. Appl Meas Educ 1997;10:39-59.
18. Berk RA. Standard setting: The next generation (where few psychometricians have gone before!). Appl Meas Educ 1996;9:215-25.
19. Rubin P, Franchi-Christopher D. New edition of tomorrow's doctors. Med Teach 2002;24:368-9.
20. Christopher DF, Harte K, George CF. The implementation of tomorrow's doctors. Med Educ 2002;36:282-8.
21. Karle H. Global standards and accreditation in medical education: A view from the WFME. Acad Med 2006;81(12 Suppl):S43-8.
22. General Medical Council. Tomorrow's Doctors: Outcomes and Standards for Undergraduate Medical Education. Manchester, UK: General Medical Council; 2009.
23. General Medical Council Ethics Committee. Tomorrow's Doctors: Recommendations on Undergraduate Medical Education. London: General Medical Council; 1993.




 
