RELIABILITY AND VALIDITY

Presentation Transcript

RELIABILITY AND VALIDITY OF SCALES:

Presentation by Gopal Y.M., Dept. of Agril. Extension, UAS, Bangalore

INTRODUCTION:

Will the instrument consistently yield similar scores on repeated measurement? – Reliability
Does the instrument measure what it is actually intended to measure? – Validity
Example: a scale for measuring length.
In addition, a measuring instrument should be economical, convenient and interpretable.

Reliability of Measurement:

Reliability is the degree to which a test consistently measures whatever it measures.
1. Temporal stability of a test: the consistency of scores obtained upon testing and retesting after a lapse of time.
2. Internal consistency of test scores: the consistency of scores obtained from two equivalent sets of items of a single test after a single administration.
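As a minimal sketch of the temporal-stability estimate (the scores below are invented for illustration), the Pearson correlation between the two administrations serves as the reliability coefficient:

```python
import numpy as np

# Hypothetical scores for ten examinees on two administrations of the
# same test, separated by a lapse of time (test-retest design).
test_scores   = np.array([21, 14, 23, 12, 13, 42, 13, 12, 13, 22])
retest_scores = np.array([20, 15, 22, 14, 12, 40, 15, 11, 14, 21])

# Temporal stability is estimated as the Pearson correlation between
# the two administrations.
r_test_retest = np.corrcoef(test_scores, retest_scores)[0, 1]
print(f"Test-retest reliability: {r_test_retest:.2f}")
```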

In terms of statistics:

Reliability can be defined as the relative absence of error of measurement in a measuring instrument, or as the proportion of true variance to total variance. An observed score normally consists of two components:
True score: the genuine score that would be obtained with a perfect instrument under ideal conditions.
Error score: the part of the score due to any type of error in the measurement process.
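In symbols, the classical true-score model implied above can be written as follows (notation mine; it assumes true scores and errors are uncorrelated):

$$X = T + E, \qquad r_{tt} = \frac{\sigma_T^2}{\sigma_X^2} = 1 - \frac{\sigma_E^2}{\sigma_X^2}$$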

EXPRESSION OF RELIABILITY:

1. Index of Reliability (r_t): the coefficient of correlation (r) with reference to reliability is called the Index of Reliability.
2. Index of Determination: the square of the Index of Reliability is called the Index of Determination (also known as the Coefficient of Determination). In fact, it is the Reliability Coefficient: $r_{tt} = r_t^2$.
3. Standard Error of Measurement: an estimate of how often you can expect errors of a given size.
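For reference, the standard error of measurement is usually computed with the following standard formula (consistent with the definitions above, where σ_t is the SD of observed scores); under normal-error assumptions, about 68% of observed scores fall within one SEM of the true score:

$$\mathrm{SEM} = \sigma_t\,\sqrt{1 - r_{tt}}$$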

Methods to estimate the reliability coefficient of test scores:

1. Test-retest reliability
2. Internal consistency reliability

Types of Reliability Coefficient and their Measurement:

1. Coefficient of Internal Consistency: indicates the accuracy or consistency of the test in measuring an individual's performance at a particular moment. It is measured with the split-half method.
2. Coefficient of Stability: the degree to which scores on a particular test or measure are stable over a period of time. It is estimated by the test-retest method.
3. Coefficient of Equivalence: an index of how equivalent the psychological-measurement content of one form of the test is to the content of another form. It can be estimated by both methods.

Formulae for estimating Reliability Coefficient:

1. Mosier formula (1941) (split-half method):

$$r_{oe} = \frac{r_{ot}\,\sigma_t - \sigma_o}{\sqrt{\sigma_t^2 + \sigma_o^2 - 2\,r_{ot}\,\sigma_o\,\sigma_t}}$$

where
r_oe = coefficient between the two parts (odd-even or any other split)
r_ot = coefficient between odd and total scores
σ_t = SD of total scores
σ_o = SD of odd scores

Then work out the reliability of the total test (r_tt) with the help of the Spearman-Brown formula.

2. Rulon formula (1939):

$$r_{tt} = 1 - \frac{\sigma_d^2}{\sigma_t^2}$$

where
r_tt = reliability coefficient of the total test score
σ_d² = variance of the difference scores (d = odd − even)
σ_t² = variance of total scores

3. Flanagan formula (1937):

$$r_{tt} = 2\left(1 - \frac{\sigma_1^2 + \sigma_2^2}{\sigma_t^2}\right)$$

where
r_tt = reliability coefficient of the total test
σ₁² = variance of the first half
σ₂² = variance of the second half
σ_t² = variance of total scores
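The Rulon and Flanagan formulas are algebraically equivalent. A minimal sketch (with invented half-test scores) computing both to illustrate:

```python
import numpy as np

# Hypothetical half-test scores (first half vs. second half) for 8 examinees.
half1 = np.array([10, 8, 12, 7, 9, 11, 6, 10])
half2 = np.array([ 9, 7, 11, 8, 8, 12, 5,  9])

total = half1 + half2
diff  = half1 - half2
var_t = total.var(ddof=0)           # variance of total scores

# Rulon (1939): 1 - variance of difference scores / variance of totals
rulon = 1 - diff.var(ddof=0) / var_t

# Flanagan (1937): 2 * (1 - (var(half1) + var(half2)) / var(total))
flanagan = 2 * (1 - (half1.var(ddof=0) + half2.var(ddof=0)) / var_t)

# The two estimates agree, since both reduce to 4*cov(half1, half2)/var_t.
print(f"Rulon = {rulon:.3f}, Flanagan = {flanagan:.3f}")
```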

4. Kuder-Richardson formulas (1937):

(A) K.R. Formula 20:

$$r_{tt} = \frac{n}{n-1}\cdot\frac{\sigma_t^2 - \sum pq}{\sigma_t^2}$$

where
r_tt = reliability coefficient
n = number of items in the test
p = proportion of correct responses to an item
q = 1 − p
σ_t² = variance of total scores

(B) K.R. Formula 21:

$$r_{tt} = \frac{n}{n-1}\cdot\frac{\sigma_t^2 - n\,\bar{p}\,\bar{q}}{\sigma_t^2}
\quad\text{i.e.}\quad
r_{tt} = \frac{n}{n-1}\left(1 - \frac{M_t\,(n - M_t)}{n\,\sigma_t^2}\right)$$

where
p̄ = mean of p over the n items = M_t / n
q̄ = mean of q over the n items = (n − M_t) / n
M_t = mean of the total scores
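A hedged sketch, with an invented 0/1 response matrix, of computing KR-20 and KR-21 exactly as defined above:

```python
import numpy as np

# Hypothetical 0/1 response matrix: 6 examinees (rows) x 5 items (columns).
X = np.array([
    [1, 1, 0, 1, 0],
    [1, 0, 0, 1, 1],
    [0, 1, 1, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 1, 0, 0, 1],
])

n = X.shape[1]                 # number of items
p = X.mean(axis=0)             # proportion of correct responses per item
q = 1 - p
totals = X.sum(axis=1)         # total score per examinee
var_t = totals.var(ddof=0)     # variance of total scores

# KR-20: (n / (n-1)) * (var_t - sum(p*q)) / var_t
kr20 = (n / (n - 1)) * (var_t - (p * q).sum()) / var_t

# KR-21 uses only the mean total score M_t, in effect assuming
# all items have equal difficulty.
M = totals.mean()
kr21 = (n / (n - 1)) * (1 - M * (n - M) / (n * var_t))

print(f"KR-20 = {kr20:.3f}, KR-21 = {kr21:.3f}")
```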

Factors affecting Reliability:

I. EXTRINSIC FACTORS
1. Group variability: a group that is homogeneous in ability will yield a lower reliability score; a heterogeneous group will yield higher reliability.
2. Guessing by examinees: some may get a high score on testing and a low score on retesting.
3. Environmental conditions: light, sound and other comforts.
4. Momentary fluctuations: a broken pencil, a sudden sound, fear of having given a wrong answer with no way to change it.

II. Intrinsic factors:

1. Length of the test: longer tests yield higher reliability.
2. Range of total scores: if the range of total scores is small, reliability will be low.
3. Difficulty index: reliability is highest when the difficulty index is around 0.50.
4. Discrimination value: the more discriminating the items, the higher the reliability.

Measures to improve reliability scores:

The examinees should be heterogeneous
Items should be homogeneous
The test should preferably be a longer one
Items should have a moderate difficulty index
Items should be discriminating ones

Validity (Truthfulness):

Lindquist (1951) defined the validity of a test as the accuracy with which it measures that which it is intended to measure.

METHODS OF MEASURING VALIDITY:

1. Content validity: the degree to which a test measures an intended content area. Example: a test designed to measure knowledge of biology.
Ways of estimating content validity: expert judgement; statistical tests such as correlation.
2. Face validity: the degree to which the test appears relevant to the examinees. Example: an attitude scale on RAWEP administered to students after their final year.

3. Criterion validity:

Criterion validity compares the test with other measures or outcomes (the criteria) already held to be valid. Example: IQ tests are often validated against measures of academic performance (the criterion).
Two types:
a. Predictive validity. Example: intelligence is measured at the time of the students' admission; after two years the students are graded (the criterion) based on classroom performance. If the correlation is high, we can say the test has high predictive validity (a sketch of this computation follows below).
b. Concurrent validity: the same as predictive validity, but with no time lapse.
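A minimal sketch of the predictive-validity computation described above; the admission scores and later grades are invented for illustration:

```python
import numpy as np

# Hypothetical data: intelligence test scores at admission (predictor)
# and classroom performance grades two years later (criterion).
admission_scores = np.array([110, 95, 120, 88, 102, 130, 97, 115])
later_grades     = np.array([ 72, 60,  80, 55,  66,  88, 63,  75])

# Predictive validity is the correlation between the test and a criterion
# measured after a time lapse; for concurrent validity the criterion
# would be collected at (roughly) the same time.
validity = np.corrcoef(admission_scores, later_grades)[0, 1]
print(f"Predictive validity coefficient: {validity:.2f}")
```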

4. Construct validity:

Construct validity is the extent to which scale items tap into the underlying theory or model of behaviour: how well the items hang together (convergent validity) or distinguish different people on certain traits or behaviours (discriminant validity). Example: to what extent is an IQ questionnaire actually measuring "intelligence"?

Relationship between Reliability and Validity:

A valid test is generally reliable.
A test which is not valid may or may not be reliable.
A test which is reliable may or may not be valid.
A high degree of reliability does not necessarily lead to high validity; it may reflect a high degree of constant error.
A test that is not reliable is never valid.

Summary:

The real difference between reliability and validity is mostly a matter of definition. Reliability estimates the consistency of measurement, while validity involves the degree to which we are measuring what we are supposed to measure; more simply, the accuracy of measurement. It is believed that validity is more important than reliability, because if an instrument does not accurately measure what it is supposed to, there is no reason to use it even if it measures consistently (reliably).

SPLIT HALF METHOD:

EXAMINEE | SCORE ON ODD-NUMBER ITEMS | SCORE ON EVEN-NUMBER ITEMS
A        | 21 | 12
B        | 14 | 13
C        | 23 | 13
D        | 12 | 14
E        | 13 | 34
F        | 42 | 34
G        | 13 | 54
H        | 12 | 54
I        | 13 | 54
J        | 22 | 23

The correlation between the two columns gives the reliability coefficient of the half test.

SPEARMAN-BROWN PROPHECY FORMULA:

Reliability of the whole test:

$$r_{tt} = \frac{2 \times r_{\text{half}}}{1 + r_{\text{half}}}$$

Example: suppose the reliability of the half test is 0.70; then the reliability of the whole test is
2 × 0.70 / (1 + 0.70) = 1.40 / 1.70 ≈ 0.82
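A minimal sketch, using the odd/even scores from the table above, of computing the half-test reliability and stepping it up with the Spearman-Brown formula (variable names are mine):

```python
import numpy as np

# Odd- and even-item scores from the split-half table above.
odd  = np.array([21, 14, 23, 12, 13, 42, 13, 12, 13, 22])
even = np.array([12, 13, 13, 14, 34, 34, 54, 54, 54, 23])

# Reliability of the half test: correlation between the two halves.
r_half = np.corrcoef(odd, even)[0, 1]

# Spearman-Brown prophecy formula steps the half-test reliability
# up to the reliability of the full-length test.
r_full = 2 * r_half / (1 + r_half)

print(f"Half-test r = {r_half:.2f}, full-test r = {r_full:.2f}")
```

Note that with these toy numbers the two halves actually correlate negatively, which would itself signal that the halves are not measuring consistently.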

Group variability:

If the group is perfectly homogeneous, each individual receives the same score, so the standard deviation is zero. The z-scores (and hence the correlation, which is based on products of z-scores) then cannot be computed, and the reliability index is effectively zero.
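In symbols (notation mine): the correlation underlying the reliability index is the mean product of standard scores, which breaks down when the standard deviation is zero:

$$r = \frac{1}{n}\sum_{i=1}^{n} z_{x_i}\,z_{y_i}, \qquad z = \frac{x - \bar{x}}{\sigma}, \qquad \sigma = 0 \;\Rightarrow\; z \text{ undefined}$$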
