Validity is the degree to which the translation from concept to operationalization accurately represents the underlying concept: does it measure what you think it measures? A scale is said to be valid if it measures what it is intended to measure. Physical measurements such as height and weight can be measured reliably (and they are also valid measures of how tall or heavy someone is), but they may not relate in any meaningful way to mental abilities, etc. The validity types typically mentioned in texts and research papers on the quality of measurement are:
1. Translation validity
a. Face validity
b. Content validity
2. Criterion-related validity
a. Predictive validity
b. Concurrent validity
c. Convergent validity
d. Discriminant validity
In essence, both of these validity types attempt to assess the degree to which you accurately translated your construct into the operationalization, and hence the choice of name.
1. Translation Validity
Is the operationalization a good reflection of the construct?
This approach is definitional in nature: it assumes you have a good, detailed definition of the construct, so that you can check the operationalization against it.
a. Face Validity
In face validity, you look at the operationalization and see whether "on its face" it seems like a good translation of the construct. This is probably the weakest way to try to demonstrate construct validity. For instance, you might look at a measure of math ability, read through the questions, and decide that yep, it seems like this is a good measure of math ability (i.e., the label "math ability" seems appropriate for this measure). Or, you might observe a teenage pregnancy prevention program and conclude that, "Yep, this is indeed a teenage pregnancy prevention program."

Of course, if this is all you do to assess face validity, it would clearly be weak evidence because it is essentially a subjective judgment call. (Note that just because it is weak evidence doesn't mean that it is wrong. We need to rely on our subjective judgment throughout the research process. It's just that this form of judgment won't be very convincing to others.)

We can improve the quality of face validity assessment considerably by making it more systematic. For instance, if you are trying to assess the face validity of a math ability measure, it would be more convincing if you sent the test to a carefully selected sample of experts on math ability testing and they all reported back with the judgment that your measure appears to be a good measure of math ability.
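The systematic variant described above can be summarized quantitatively. A minimal sketch, assuming each expert simply gives a yes/no judgment on whether the measure looks like a good translation of the construct (the function name, expert count, and ratings below are illustrative assumptions, not from the text):

```python
def face_validity_agreement(judgments):
    """Return the proportion of experts who judged the measure
    a good translation of the construct (True = good)."""
    if not judgments:
        raise ValueError("need at least one expert judgment")
    return sum(judgments) / len(judgments)

# Hypothetical example: 8 of 10 math-ability testing experts
# judge the measure appropriate for the label "math ability".
expert_judgments = [True] * 8 + [False] * 2
print(face_validity_agreement(expert_judgments))  # 0.8
```

A high agreement proportion among carefully selected experts is more convincing evidence than a single researcher's judgment call, though it remains subjective at its core.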
b. Content Validity
In content validity, you essentially check the operationalization against the relevant content domain for the construct. This approach assumes that you have a good detailed description of the content domain, something that's not always true. For instance, we might lay out all of the criteria that should be met in a program that claims to be a "teenage pregnancy prevention program." We would probably include in this domain specification the definition of the target group, criteria for deciding whether the program is preventive in nature (as opposed to treatment-oriented), and lots of criteria that spell out the content that should be included like basic information on pregnancy, the use of abstinence, birth control methods, and so on.

Then, armed with these criteria, we could use them as a type of checklist when examining our program. Only programs that meet the criteria can legitimately be defined as "teenage pregnancy prevention programs." This all sounds fairly straightforward, and for many operationalizations it will be. But for other constructs (e.g., self-esteem, intelligence), it will not be easy to decide on the criteria that constitute the content domain.
In short: check the operationalization against the relevant content domain for the construct. This assumes that a well-defined concept is being operationalized, which may not be true. For example, a depression measure should cover the checklist of recognized depression symptoms.
STIS Statistics Seminar - 3 October 2011