Wednesday, May 20, 2009

Content Validity

Content Validity is based on the extent to which a measurement reflects the specific intended domain of content (Carmines & Zeller, 1991, p. 20). In psychometrics, content validity (also known as logical validity) refers to the extent to which a measure represents all facets of a given social construct.

Content validity is illustrated using the following examples: Researchers aim to study mathematical learning and create a survey to test for mathematical skill. If these researchers only tested for multiplication and then drew conclusions from that survey, their study would not show content validity because it excludes other mathematical functions. Although the establishment of content validity for placement-type exams seems relatively straightforward, the process becomes more complex as it moves into the more abstract domain of socio-cultural studies. For example, a researcher needing to measure an attitude like self-esteem must decide what constitutes a relevant domain of content for that attitude. For socio-cultural studies, content validity forces the researchers to define the very domains they are attempting to study.

Content validity focuses on the adequacy with which the domain of the characteristic is captured by the measure. Content validity, sometimes known as “face validity,” is assessed by examining the measure with an eye toward ascertaining the domain being sampled. If the included domain is decidedly different from the domain of the variable as conceived, the measure is said to lack content validity.

How can we ensure that our measure will possess content validity?
We can never guarantee it because it is partly a matter of judgment. We may feel quite comfortable with the items included in a measure, for example, while a critic may argue that we have failed to sample from some relevant domain of the characteristic. Although we can never guarantee the content validity of a measure, we can substantially reduce the objections of critics. The key to content validity lies in the procedures that are used to develop the instrument.

One widely used method of measuring content validity was developed by C. H. Lawshe. It is essentially a method for gauging agreement among raters or judges regarding how essential a particular item is. Lawshe (1975) proposed that each of the subject matter expert raters (SMEs) on the judging panel respond to the following question for each item: "Is the skill or knowledge measured by this item 'essential,' 'useful, but not essential,' or 'not necessary' to the performance of the construct?" According to Lawshe, if more than half the panelists indicate that an item is essential, that item has at least some content validity. Greater levels of content validity exist as larger numbers of panelists agree that a particular item is essential. Using these assumptions, Lawshe developed a formula termed the content validity ratio:

CVR = (ne - N/2) / (N/2)

where:
CVR = content validity ratio,
ne = number of SME panelists indicating "essential",
N = total number of SME panelists.

This formula yields values that range from -1 to +1; positive values indicate that more than half of the SMEs rated the item as essential. The mean CVR across items may be used as an indicator of overall test content validity.
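As a rough illustration (not from the cited sources), the short Python sketch below applies Lawshe's formula to a hypothetical panel of ten SMEs rating four items; the counts of "essential" votes are invented for the example:

    def content_validity_ratio(n_essential, n_panelists):
        # Lawshe's CVR = (ne - N/2) / (N/2)
        return (n_essential - n_panelists / 2) / (n_panelists / 2)

    # Hypothetical data: number of the 10 SMEs who marked each item "essential"
    essential_counts = [9, 7, 5, 10]
    n_panelists = 10

    cvrs = [content_validity_ratio(ne, n_panelists) for ne in essential_counts]
    print(cvrs)                   # [0.8, 0.4, 0.0, 1.0]
    print(sum(cvrs) / len(cvrs))  # mean CVR across items -> 0.55

Note that the third item, which exactly half of the panel rated essential, receives a CVR of 0, consistent with the interpretation above.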

One of the most critical elements in generating a content valid instrument is conceptually defining the domain of the characteristic. The researcher has to specify what the variable is and what it is not. The task of definition is expedited by examining the literature to determine how the variable has been defined and used. Because it is unlikely that all the definitions will agree, the researcher must specify which elements in the definitions underlie his or her use of the term. The researcher needs to be quite careful to include items from all the relevant dimensions of the variable. Again, a literature search may be productive in indicating the various dimensions or strata of a variable. At this stage, the researcher may wish to include items with slightly different shades of meaning, since the original list of items will be refined to produce the final measure.

The collection of items must be large so that, after refinement, the measure still contains enough items to adequately sample each of the variable’s domains. For example, a measure of a sales representative’s job satisfaction would need to include items about each of the components of the job if it is to be content valid. The process of refinement, the essence of which is the internal consistency exhibited by the items within the test, is statistical in nature.
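To make the statistical side of refinement concrete, here is a minimal sketch (not from the cited sources; the response matrix is hypothetical) that computes Cronbach's alpha, one common index of the internal consistency among a test's items:

    import numpy as np

    def cronbach_alpha(scores):
        # scores: rows = respondents, columns = items
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]                         # number of items
        item_vars = scores.var(axis=0, ddof=1)      # sample variance of each item
        total_var = scores.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Hypothetical 5-point responses from six respondents to four items
    responses = [
        [4, 5, 4, 4],
        [3, 3, 4, 3],
        [5, 5, 5, 4],
        [2, 2, 3, 2],
        [4, 4, 4, 5],
        [3, 3, 3, 3],
    ]
    print(round(cronbach_alpha(responses), 2))      # about 0.94 for this toy data

In practice, items whose removal raises the internal-consistency estimate, or whose correlation with the total score is low, are typical candidates for dropping during refinement.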


Sources:
- Gilbert A. Churchill, Jr., Marketing Research: Methodological Foundations, 5th edition, The Dryden Press International Edition.
- http://www.colostate.edu/
- Wikipedia.com
