What is Internal Consistency Reliability?
1. A procedure for studying reliability when the focus of the investigation is on the consistency of scores on the same occasion and on similar content, but when conducting repeated testing or alternate-forms testing is not possible. The procedure uses information about how consistent examinees' scores are from one item (or one part of the test) to the next to estimate the consistency of their scores on the entire test.
2. The internal consistency reliability of a survey instrument (e.g., a psychological test) is a measure of the reliability of the different survey items intended to measure the same characteristic.
3. Internal consistency reliability evaluates individual questions in comparison with one another for their ability to give consistently appropriate results.
4. In internal consistency reliability estimation, we use a single measurement instrument administered to a group of people on one occasion to estimate reliability. In effect, we judge the reliability of the instrument by estimating how well the items that reflect the same construct yield similar results. We are looking at how consistent the results are across different items for the same construct within the measure.
Example: suppose there are 5 different questions (items) related to anxiety level. Each question calls for a response on a 5-point Likert scale, e.g. scores -2, -1, 0, 1, 2. Responses have been obtained from a group of respondents. In practice, answers to the different questions vary for each particular respondent, even though the items are intended to measure the same aspect or quantity. The smaller this variability (or the stronger the correlation between items), the greater the internal consistency reliability of the survey instrument.
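As a minimal sketch of this setup in Python, here is hypothetical code that builds a respondents-by-items matrix for the 5 anxiety items and inspects the correlations between them (the respondent data, array names, and values are all invented for illustration):

```python
import numpy as np

# Hypothetical responses of 8 people to 5 anxiety items scored -2..2
# (all values invented for illustration).
responses = np.array([
    [ 1,  1,  0,  1,  1],
    [ 2,  2,  1,  2,  2],
    [-1,  0, -1, -1,  0],
    [ 0,  1,  0,  0,  1],
    [-2, -1, -2, -2, -1],
    [ 1,  2,  1,  1,  2],
    [ 0,  0, -1,  0,  0],
    [ 2,  1,  2,  2,  1],
])  # shape: (respondents, items)

# Item-by-item correlation matrix; consistently high off-diagonal
# entries indicate high internal consistency.
item_corr = np.corrcoef(responses, rowvar=False)
print(item_corr.round(2))
```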
There are a wide variety of internal consistency measures that can be used.
1. Average Inter-item Correlation
Average inter-item correlation compares the correlations between all pairs of questions that test the same construct by calculating the mean of all paired correlations. It uses all of the items on the instrument that are designed to measure the same construct. We first compute the correlation between each pair of items, as illustrated in the figure. For example, if we have six items, we will have 15 different item pairings (i.e., 15 correlations). The average inter-item correlation is simply the mean of all these correlations. In the example, we find an average inter-item correlation of .90, with the individual correlations ranging from .84 to .95.
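A minimal Python sketch of this computation, using simulated data rather than the figure's actual values (the sample size, noise level, and all variable names are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulate 6 items that all reflect one underlying construct plus noise.
construct = rng.normal(size=(100, 1))
items = construct + 0.3 * rng.normal(size=(100, 6))

corr = np.corrcoef(items, rowvar=False)
# Average the 15 correlations in the upper triangle (6 choose 2 pairings).
pairs = corr[np.triu_indices(6, k=1)]
print(f"{pairs.size} pairings, average inter-item r = {pairs.mean():.2f}")
```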
2. Average Item-total Correlation
Average item-total correlation computes a total score across the items, correlates each item with that total, and then averages these correlations. This approach also uses the inter-item correlations. In addition, we compute a total score for the six items and use that as a seventh variable in the analysis. The figure shows the six item-to-total correlations at the bottom of the correlation matrix. They range from .82 to .88 in this sample analysis, with the average of these at .85.
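A corresponding sketch for the item-total approach, under the same simulated setup as above (the uncorrected total is used as the seventh variable, as the description above indicates):

```python
import numpy as np

rng = np.random.default_rng(0)
construct = rng.normal(size=(100, 1))
items = construct + 0.3 * rng.normal(size=(100, 6))

# Total score across the six items, used as a seventh variable.
total = items.sum(axis=1)
item_total = [np.corrcoef(items[:, j], total)[0, 1] for j in range(6)]
print(np.round(item_total, 2), "average =", round(float(np.mean(item_total)), 2))
```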
3. Split-half Correlation
Split-half correlation divides the items that measure the same construct into two sets, administered to the same group of people, and then calculates the correlation between the two total scores. In split-half reliability, we randomly divide all items that purport to measure the same construct into two sets. We administer the entire instrument to a sample of people and calculate the total score for each randomly divided half. The split-half reliability estimate, as shown in the figure, is simply the correlation between these two total scores. In the example it is .87.
It is often not feasible to obtain two or more measures of the same item from the same person at different points in time. Split-half reliability instead involves dividing a single survey instrument into two parts and then correlating the responses (scores) from one half with the responses from the other half. If all items are supposed to measure the same basic idea, the resulting correlation should be high.
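A minimal sketch of a single random split, again with simulated data (the data and the particular split are illustrative; the final Spearman-Brown step is a commonly used correction that the text above does not discuss):

```python
import numpy as np

rng = np.random.default_rng(0)
construct = rng.normal(size=(100, 1))
items = construct + 0.3 * rng.normal(size=(100, 6))

# Randomly divide the six items into two halves and total each half.
perm = rng.permutation(6)
half_a = items[:, perm[:3]].sum(axis=1)
half_b = items[:, perm[3:]].sum(axis=1)
r = np.corrcoef(half_a, half_b)[0, 1]
print(f"split-half r = {r:.2f}")
# Spearman-Brown correction projects this to full test length.
print(f"corrected = {2 * r / (1 + r):.2f}")
```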
4. Cronbach's alpha
Cronbach's alpha calculates an equivalent to the average of all possible split-half correlations. Imagine that we compute one split-half reliability, then randomly divide the items into another pair of split halves and recompute, and keep doing this until we have computed all possible split-half estimates of reliability. Cronbach's alpha is mathematically equivalent to the average of all possible split-half estimates, although that is not how we compute it. Note that computing all possible split-half estimates does not mean measuring a new sample each time; that would take forever. Instead, we calculate all of the split-half estimates from the same sample. Because we measured our whole sample on each of the six items, all we have to do is have the computer analysis form the random subsets of items and compute the resulting correlations. The figure shows several of the split-half estimates for our six-item example and lists them as SH with a subscript. Keep in mind that although Cronbach's alpha is equivalent to the average of all possible split-half correlations, we would never actually calculate it that way. Some clever mathematician (Cronbach, I presume!) figured out a way to get the mathematical equivalent a lot more quickly.
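That quicker route is the standard formula alpha = k/(k-1) * (1 - sum of the item variances / variance of the total score), where k is the number of items. A minimal Python sketch under the same simulated setup as above:

```python
import numpy as np

def cronbach_alpha(items):
    # alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
construct = rng.normal(size=(100, 1))
items = construct + 0.3 * rng.normal(size=(100, 6))
print(f"alpha = {cronbach_alpha(items):.2f}")
```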
Coefficient alpha provides a summary measure of the inter-correlations among a set of items in any scale used in marketing research (Churchill 1995; Nunnally 1978). Churchill (1995, p. 498; emphasis in original) observes that "Coefficient alpha routinely should be calculated to assess the quality of the measure." Coefficient alpha is generally considered the best estimate of the true reliability of any multiple-item scale that is intended to measure some basic idea or construct useful to market researchers or planners.
Source:
-. Managerial Applications of Multivariate Analysis in Marketing, James H. Myers and Gary M. Mullet, 2003, American Marketing Association, Chicago
-. www.changingminds.org
-. www.statistics.com
-. www.socialresearchmethods.net
STIS Statistics Seminar, 3 October 2011