Standardisation Of Marks For The 10th Level Common Preliminary Examination

Report of the Committee of Statisticians, constituted by the Kerala Public Service Commission, for the standardization of marks for the 10th Level Common Preliminary Examinations.

The Kerala Public Service Commission constituted the committee to standardize the marks obtained by candidates in the 10th Level Common Preliminary Examinations, which were conducted in multiple phases with different question papers based on the same syllabus. The committee is of the opinion that standardization is required only if there is significant variation in difficulty levels among the different question papers.

One cannot determine which question is more difficult simply by reading the questions, and it is not fair to decide that a question is more difficult based on the intuition or subjective judgment of an individual. The decision can be taken only on the basis of empirical evidence.

So, the committee decided to compute the index of difficulty (difficulty level) of each question in each question paper. The index of difficulty (p) of a question is defined as the proportion of correct answers to that question, that is, the number of correct answers to the question divided by the total number of candidates who wrote the examination with the respective question paper (Nitko, 1996; Crocker & Algina, 1986). The larger the proportion getting a question right, the easier the question: a higher index of difficulty indicates an easier question, and a lower index indicates a more difficult one. The index of difficulty always lies between 0 and 1; an index of 0 means the difficulty is maximum, and an index of 1 means the difficulty is minimum.

The committee carried out an exploratory analysis of the marks of all phases. The marks scored by the candidates in the various phases show that, even where there is significant variation in the difficulty of the question papers, there are top scorers in each phase. This may be due to heterogeneity in the educational qualifications of the candidates who appeared for the examinations. It is to be noted that all these procedures can be carried out only under the assumption that there is no regional variation in the capabilities of the candidates.
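For illustration, the index of difficulty can be computed directly from the candidates' responses. The following is a minimal sketch assuming a hypothetical 0/1 response matrix (rows are candidates, columns are questions); the data are illustrative only.

```python
# Minimal sketch: index of difficulty (p) per question, assuming a hypothetical
# 0/1 response matrix where 1 means the candidate answered the question correctly.
import numpy as np

responses = np.array([
    [1, 0, 1, 1],   # candidate 1
    [1, 1, 0, 1],   # candidate 2
    [0, 0, 0, 1],   # candidate 3
    [1, 0, 1, 1],   # candidate 4
])

# p = number of correct answers to a question / number of candidates who wrote this paper
p = responses.mean(axis=0)
print(p)   # -> 0.75, 0.25, 0.5, 1.0; a larger p means an easier question
```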

The questions in each question paper are to be divided into five strata as follows:

Stratum 1: index of difficulty below 0.2 (median M1 = 0.1)
Stratum 2: index of difficulty 0.2 to below 0.4 (median M2 = 0.3)
Stratum 3: index of difficulty 0.4 to below 0.6 (median M3 = 0.5)
Stratum 4: index of difficulty 0.6 to below 0.8 (median M4 = 0.7)
Stratum 5: index of difficulty 0.8 to 1.0 (median M5 = 0.9)

If the distribution of difficulty levels varies significantly across question papers, we can infer that the question papers are not at the same level of difficulty. Under this circumstance, some method must be adopted to standardize the marks.
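The report does not prescribe a specific test for judging whether the difficulty distributions differ significantly. The sketch below shows one possible check under stated assumptions: questions are counted into five equal-width strata (width 0.2, consistent with the stratum medians used later), and the counts for two papers are compared with a chi-square test of homogeneity. The data and the choice of test are illustrative, not part of the committee's procedure.

```python
# Sketch: stratify the questions of each paper by their index of difficulty and
# compare the stratum distributions of two papers (hypothetical data; the
# chi-square test is only one possible choice, not prescribed by the report).
import numpy as np
from scipy.stats import chi2_contingency

def stratum_counts(p_values, n_strata=5):
    """Count questions per stratum: [0, 0.2), [0.2, 0.4), ..., [0.8, 1.0]."""
    edges = np.linspace(0.0, 1.0, n_strata + 1)
    counts, _ = np.histogram(p_values, bins=edges)
    return counts

p_paper1 = np.array([0.15, 0.35, 0.55, 0.60, 0.82, 0.90])   # hypothetical indices
p_paper2 = np.array([0.05, 0.12, 0.30, 0.45, 0.50, 0.75])

table = np.vstack([stratum_counts(p_paper1), stratum_counts(p_paper2)])
chi2, p_value, dof, expected = chi2_contingency(table)
print(table)
print("p-value:", p_value)   # a small p-value suggests the difficulty profiles differ
```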

The Committee considered different procedures for standardization and illustrated them with sample data. No procedure will be absolutely correct; even if the examination is conducted with the same question paper, absolutely equal justice cannot be ensured. For example, guesswork may benefit some candidates while negatively affecting others, and the difficulty experienced by one candidate may differ from that of another.

The Committee was of the opinion that the effect of standardization should be the same for all candidates who scored the same mark within a particular group (that is, who answered a particular question paper), because all questions carried the same weightage when the examinations were conducted. It is also not fair to give the same difficulty benefit to all candidates in a stream; the benefit should be given according to their performance in the examination. The benefit of standardization is therefore given in proportion to the performance of the candidates in the respective question paper, after equating the difficulty score of that question paper to the difficulty score of the question paper observed to be the least difficult by the procedure described above. Thus, a candidate who scored zero or a negative mark obtains no benefit from standardization.

The committee unanimously suggests the following procedure for standardization.

All questions in a question paper are to be stratified into five levels as given above. Then a score of difficulty (DS) is to be computed for each question paper as given below.

Compute DSi = Σj [Nij × (1 − Mj)] / Ni

Where DSi is the score of difficulty of the ith question paper; Nij is the number of questions in the jth stratum of the ith question paper; Mj is the median difficulty score of the jth stratum (M1 = 0.1, M2 = 0.3, M3 = 0.5, M4 = 0.7, M5 = 0.9); and Ni is the total number of questions in the ith question paper.
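A minimal sketch of this computation follows; the stratum counts are hypothetical, and only the medians M1 to M5 are taken from the report.

```python
# Sketch: difficulty score DS_i = sum_j N_ij * (1 - M_j) / N_i for one question paper.
MEDIANS = [0.1, 0.3, 0.5, 0.7, 0.9]    # M_1 .. M_5, as given in the report

def difficulty_score(stratum_counts):
    n_total = sum(stratum_counts)      # N_i, total number of questions in the paper
    return sum(n * (1 - m) for n, m in zip(stratum_counts, MEDIANS)) / n_total

# Hypothetical 100-question paper: 10 very hard, 20 hard, 40 average, 20 easy, 10 very easy
print(difficulty_score([10, 20, 40, 20, 10]))   # 0.5 for this symmetric example
```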

Then the relative difficulty (multiplier, Ki) of each question paper with respect to the least difficult question paper (the one with minimum DS) can be computed as:

Ki = DSi / DSmin

DSmin is the score of difficulty of the question paper which has the minimum DS.
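For illustration, the multiplier can be obtained as follows; the DS values below are hypothetical.

```python
# Sketch: multiplier K_i = DS_i / DS_min, using hypothetical difficulty scores.
ds = {"paper_A": 0.48, "paper_B": 0.55, "paper_C": 0.60}
ds_min = min(ds.values())                  # least difficult paper (here paper_A)
k = {paper: d / ds_min for paper, d in ds.items()}
print(k)   # paper_A gets K = 1.0; more difficult papers get K > 1
```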

Then the final mark of the jth candidate who attended the ith question paper (Sij) is to be computed as:

Sij = Minimum(Mij × Ki, 100)

Where Mij is the mark actually scored out of 100 (including negative marks) by the jth candidate in the ith group.
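A sketch of this final step, applying the formula exactly as stated, with a hypothetical multiplier and raw marks. Note that, as written, the formula scales negative marks as well; the report's observation is that such candidates obtain no benefit from standardization.

```python
# Sketch: final standardized mark S_ij = min(M_ij * K_i, 100), hypothetical inputs.
def standardized_mark(raw_mark, k_i):
    return min(raw_mark * k_i, 100)        # capped at the maximum mark of 100

k_i = 1.25                                 # hypothetical multiplier for a harder paper
for raw in (-4, 0, 40, 85):
    print(raw, "->", round(standardized_mark(raw, k_i), 2))
# -4 -> -5.0, 0 -> 0.0, 40 -> 50.0, 85 -> 100 (capped)
```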

============================================

Note:

1. All the marks may be corrected to a convenient number of decimal places to break ties.

2. The standardization procedure depends on the nature of the data; hence, the above procedure cannot be applied to another situation without first exploring the feasibility of the method.

References

1. Crocker, L. and Algina, J. (1986). Introduction to Classical and Modern Test Theory. New York: Holt, Rinehart and Winston.

2. Nitko, A. J. (1996). Educational Assessment of Students, Second Edition. New Jersey, USA: Prentice-Hall.
