Supplement to Critical Thinking Assessment

How can one assess, for purposes of instruction or research, the degree to which a person possesses the dispositions, skills and knowledge of a critical thinker?
In psychometrics, assessment instruments are judged according to their validity and reliability. Roughly speaking, an instrument is valid if it measures accurately what it purports to measure, given standard conditions. In other words, a test is not valid or invalid in itself. Rather, validity is a property of an interpretation of a given score on a given test for a specified use.
Criterion-related evidence consists of correlations between scores on the test and performance on another test of the same construct; its weight depends on how well supported the assumption is that the other test can be used as a criterion. Content-related evidence is evidence that the test covers the full range of abilities that it claims to test.
Construct-related evidence is evidence that a correct answer reflects good performance of the kind being measured and that an incorrect answer reflects poor performance.
An instrument is reliable if it consistently produces the same result, whether across different forms of the same test (parallel-forms reliability), across different items (internal consistency), across different administrations to the same person (test-retest reliability), or across ratings of the same answer by different people (inter-rater reliability).
Internal consistency should be expected only if the instrument purports to measure a single undifferentiated construct, and thus should not be expected of a test that measures a suite of critical thinking dispositions or critical thinking abilities, assuming that some people are better in some of the respects measured than in others (for example, very willing to inquire but rather closed-minded).
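Internal consistency is standardly estimated with Cronbach's alpha, which compares the summed variance of the individual items with the variance of the total scores. The following is a minimal pure-Python sketch; the response matrix is made up for illustration and is not drawn from any actual instrument.

```python
from statistics import pvariance

def cronbach_alpha(scores):
    """Cronbach's alpha for a respondents-by-items matrix of scores."""
    k = len(scores[0])  # number of items
    item_vars = [pvariance([row[i] for row in scores]) for i in range(k)]
    total_var = pvariance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical data: five respondents answering four agree-disagree
# items on a 1-5 scale.
responses = [
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
]
print(round(cronbach_alpha(responses), 3))  # → 0.962
```

The high alpha here reflects the fact that the fabricated items rise and fall together; on the view above, a comparably high alpha should not be demanded of an instrument that deliberately measures several distinct dispositions.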
Assessing dispositions is difficult if one uses a multiple-choice format with known adverse consequences of a low score.
The developers of the California Critical Thinking Disposition Inventory (CCTDI) began with statements expressive of a disposition towards or away from critical thinking, drawing on the long list of dispositions in Facione; validated the statements with talk-aloud and conversational strategies in focus groups, to determine whether people in the target population understood the items in the way intended; administered a pilot version of the test; and eliminated items that failed to discriminate among test takers, were inversely correlated with overall results, or added little refinement to overall scores (Facione). They used item analysis and factor analysis to group the measured dispositions into seven broad constructs. The resulting test consists of 75 agree-disagree statements and takes 20 minutes to administer.
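One common form of the item analysis mentioned above is the corrected item-total correlation: each item is correlated with the total score minus that item, and items with low or negative correlations are candidates for elimination. A sketch under that assumption, with fabricated pilot data (the cutoff of 0.2 is an illustrative choice, not a figure from the source):

```python
from statistics import mean, pstdev

def pearson(xs, ys):
    """Pearson correlation of two equal-length score lists."""
    mx, my = mean(xs), mean(ys)
    cov = mean([(x - mx) * (y - my) for x, y in zip(xs, ys)])
    return cov / (pstdev(xs) * pstdev(ys))

def flag_items(scores, cutoff=0.2):
    """Indices of items whose corrected item-total correlation falls
    below `cutoff` -- candidates for elimination, as with items that
    are inversely correlated with overall results."""
    flagged = []
    for i in range(len(scores[0])):
        item = [row[i] for row in scores]
        rest = [sum(row) - row[i] for row in scores]  # total minus this item
        if pearson(item, rest) < cutoff:
            flagged.append(i)
    return flagged

# Hypothetical pilot data: five respondents, four items on a 1-5 scale.
# Item 2 runs against the other items and would be eliminated.
pilot = [
    [5, 4, 1, 5],
    [2, 3, 4, 2],
    [4, 5, 2, 4],
    [1, 2, 5, 1],
    [3, 3, 3, 3],
]
print(flag_items(pilot))  # → [2]
```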
A repeated and disturbing finding is that North American students taking the test tend to score low on the truth-seeking sub-scale, on which a low score results from agreeing to certain statements.

Development of the CCTDI made it possible to test whether good critical thinking abilities and good critical thinking dispositions go together, in which case it might be enough to teach one without the other.
Facione reports that administration of the CCTDI and the California Critical Thinking Skills Test (CCTST) to almost 8,000 post-secondary students in the United States revealed a statistically significant but weak correlation between total scores on the two tests, and also between paired sub-scores from the two tests.
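The combination of "statistically significant but weak" is unsurprising with samples of this size: the t statistic for a Pearson correlation grows with the square root of the sample size, so even a small r clears conventional significance thresholds. A sketch (the r = 0.2 used here is a made-up figure, not Facione's reported value):

```python
import math

def t_statistic(r, n):
    """t statistic for testing whether Pearson r differs from zero,
    given n paired scores (df = n - 2)."""
    return r * math.sqrt((n - 2) / (1 - r * r))

# With a sample near 8,000, a weak correlation of 0.2 is far beyond the
# conventional 1.96 threshold, while explaining only r**2 = 4% of the
# variance in the paired scores.
print(round(t_statistic(0.2, 8000), 1))  # → 18.3
```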
The implication is that both abilities and dispositions need to be taught: one cannot expect improvement in one to bring with it improvement in the other.

A more direct way of assessing critical thinking dispositions would be to see what people do when put in a situation where the dispositions would reveal themselves.
Ennis reports promising initial work with guided open-ended opportunities to give evidence of dispositions, but no standardized test seems to have emerged from this work.
There are, however, standardized aspect-specific tests of critical thinking dispositions (Stanovich, West and Toplak). It is easier to measure critical thinking skills or abilities than to measure dispositions.
Eight currently available standardized tests purport to measure them. Many of these standardized tests have received scholarly evaluations at the hands of, among others, Ennis, McPeck, Norris and Ennis, Fisher and Scriven, and Possin. Their evaluations provide a useful set of criteria that such tests ideally should meet, as does Ennis's description of problems in testing for competence in critical thinking. There are also aspect-specific standardized tests of critical thinking abilities.
They regard these tests, along with the previously mentioned tests of critical thinking dispositions, as the building blocks for a comprehensive test of rationality, whose development, they write, may be logistically difficult and would require millions of dollars.
The test focuses entirely on the ability to appraise observation statements and in particular on the ability to determine in a specified context which of two statements there is more reason to believe.
On a criterion-referenced interpretation, those who do well on the test have a firm grasp of the principles for appraising observation statements, and those who do poorly have a weak grasp of them. This interpretation can be justified by the content of the test and the way it was developed, which incorporated a method of controlling for background beliefs articulated and defended by Norris. They constructed items in which exactly one of the 31 principles determined which of two statements was more believable.
In several iterations of the test, they adjusted items so that selection of the correct answer generally reflected good thinking and selection of an incorrect answer reflected poor thinking. A possible exception to this warning about the use of the listed tests for high-stakes situations is the Ennis Critical Thinking Assessment.

Table 1. An Annotated List of Critical Thinking Tests

Tests Covering More Than One Aspect of Critical Thinking

The California Critical Thinking Skills Test: College Level () by P. Facione.