The following validity types are typically mentioned in texts and research papers when discussing the quality of measurement. Validity concerns whether the translation from concept to operationalization accurately represents the underlying concept: does the measure actually measure what you think it measures?
1. Translation validity
a. Face validity
b. Content validity
2. Criterion-related validity
a. Predictive validity
b. Concurrent validity
c. Convergent validity
d. Discriminant validity
In essence, both of the translation validity types (face and content validity) attempt to assess the degree to which you accurately translated your construct into the operationalization, hence the choice of name.
…
2. Criterion-Related Validity
In criterion-related validity, you check the performance of your operationalization against some criterion. How is this different from content validity? In content validity, the criterion is the construct definition itself -- it is a direct comparison. In criterion-related validity, we usually make a prediction about how the operationalization will perform based on our theory of the construct. The difference among the criterion-related validity types lies in the criteria they use as the standard for judgment.
a. Predictive Validity
In predictive validity, we assess the operationalization's ability to predict something it should theoretically be able to predict. For instance, we might theorize that a measure of math ability should be able to predict how well a person will do in an engineering-based profession. We could give our measure to experienced engineers and see if there is a high correlation between scores on the measure and their salaries as engineers. A high correlation would provide evidence for predictive validity -- it would show that our measure can correctly predict something that we theoretically think it should be able to predict.
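As a minimal sketch of how such a check might look in practice, the correlation between measure scores and the criterion could be computed as follows. All data and variable names here are hypothetical, invented purely for illustration:

# Sketch: predictive validity as the correlation between scores on a
# (hypothetical) math-ability measure and engineers' salaries.
from scipy.stats import pearsonr

math_scores = [62, 75, 81, 58, 90, 70, 66, 85]    # hypothetical test scores
salaries    = [54, 63, 70, 50, 82, 60, 57, 76]    # hypothetical salaries, in $1000s

r, p_value = pearsonr(math_scores, salaries)
print(f"Predictive validity correlation: r = {r:.2f} (p = {p_value:.3f})")
# A high positive correlation would be evidence that the measure predicts
# what the theory says it should predict.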
b. Concurrent Validity
In concurrent validity, we assess the operationalization's ability to distinguish between groups that it should theoretically be able to distinguish between. For example, if we come up with a way of assessing manic depression, our measure should be able to distinguish between people diagnosed with manic depression and those diagnosed with paranoid schizophrenia. If we want to assess the concurrent validity of a new measure of empowerment, we might give the measure to both migrant farm workers and to the farm owners, theorizing that our measure should show that the farm owners are higher in empowerment. As in any discriminating test, the results are more powerful if you are able to show that you can discriminate between two groups that are very similar.
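A minimal sketch of such a group comparison, assuming hypothetical empowerment scores for the two groups (an independent-samples t-test is one common way to test the difference):

# Sketch: concurrent validity as the ability of a (hypothetical)
# empowerment measure to distinguish two groups it theoretically should.
from scipy.stats import ttest_ind

owners  = [78, 82, 75, 88, 80, 73]    # hypothetical scores, farm owners
workers = [55, 61, 58, 49, 63, 57]    # hypothetical scores, migrant farm workers

t_stat, p_value = ttest_ind(owners, workers)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A significant difference in the theoretically expected direction
# (owners scoring higher) would support concurrent validity.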
c. Convergent Validity
In convergent validity, we examine the degree to which the operationalization is similar to (converges on) other operationalizations that it theoretically should be similar to. For instance, to show the convergent validity of a Head Start program, we might gather evidence that shows that the program is similar to other Head Start programs. Or, to show the convergent validity of a test of arithmetic skills, we might correlate the scores on our test with scores on other tests that purport to measure basic math ability, where high correlations would be evidence of convergent validity.
In short, convergent validity shows that the assessment is related to what it theoretically should be related to. Ideally, scales should also rate high in discriminant validity, which, unlike convergent validity, measures the extent to which a given scale differs from other scales designed to measure a different conceptual variable. Convergent and discriminant validity together are two key ways of establishing construct validity.
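A minimal sketch of the correlation check for convergent validity, again with invented scores:

# Sketch: convergent validity as the correlation between our arithmetic
# test and another test that also claims to measure basic math ability.
import numpy as np

our_test   = np.array([12, 18, 25, 9, 22, 15, 20])    # hypothetical scores
other_test = np.array([14, 20, 27, 11, 24, 16, 21])   # hypothetical established test

r = np.corrcoef(our_test, other_test)[0, 1]
print(f"Convergent validity correlation: r = {r:.2f}")
# A high correlation is evidence that both tests converge on the
# same underlying construct.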
d. Discriminant Validity
In discriminant validity, we examine the degree to which the operationalization is not similar to (diverges from) other operationalizations that it theoretically should not be similar to. For instance, to show the discriminant validity of a Head Start program, we might gather evidence that shows that the program is not similar to other early childhood programs that don't label themselves as Head Start programs. Or, to show the discriminant validity of a test of arithmetic skills, we might correlate the scores on our test with scores on tests of verbal ability, where low correlations would be evidence of discriminant validity.
Campbell and Fiske (1959) introduced the concept of discriminant validity within their discussion on evaluating test validity. They stressed the importance of using both discriminant and convergent validation techniques when assessing new tests. A successful evaluation of discriminant validity shows that a test of a concept is not highly correlated with other tests designed to measure theoretically different concepts.
In showing that two scales do not correlate, it is necessary to correct for attenuation in the correlation due to measurement error. The extent to which the two scales overlap can be calculated with the following formula, where rxy is the correlation between x and y, rxx is the reliability of x, and ryy is the reliability of y:

rxy / √(rxx · ryy)
Although there is no standard value for discriminant validity, a result less than .85 tells us that discriminant validity likely exists between the two scales. A result greater than .85, however, tells us that the two constructs overlap greatly and they are likely measuring the same thing. Therefore, we cannot claim discriminant validity between them.
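A minimal sketch of this calculation, with hypothetical correlation and reliability values chosen only to illustrate the .85 rule of thumb:

# Sketch: correction for attenuation, as described in the text.
# r_xy is the observed correlation between scales x and y;
# r_xx and r_yy are their reliabilities (e.g., Cronbach's alpha).
import math

def disattenuated_correlation(r_xy, r_xx, r_yy):
    # Estimate the overlap between two scales, corrected for measurement error.
    return r_xy / math.sqrt(r_xx * r_yy)

# Hypothetical values for illustration:
corrected = disattenuated_correlation(r_xy=0.60, r_xx=0.80, r_yy=0.75)
print(f"Corrected correlation: {corrected:.2f}")    # about 0.77

if corrected < 0.85:
    print("Discriminant validity likely exists between the two scales.")
else:
    print("The constructs overlap greatly and are likely measuring the same thing.")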
Sources:
- http://www.socialresearchmethods.net
- http://www.wikipedia.com
STIS Statistics Seminar, 3 October 2011