Citation Information

Cao, L., & Nietfeld, J. L. (2005, Feb 05). Judgment of Learning, Monitoring Accuracy, and Student Performance in the Classroom Context. Current Issues in Education [On-line], 8(4). Available: http://cie.ed.asu.edu/volume8/number4/


Judgment of Learning, Monitoring Accuracy, and Student Performance in the Classroom Context

Li Cao

University of West Georgia

John L. Nietfeld

North Carolina State University



Abstract

As a key component of self-regulated learning, the ability to accurately judge the status of one's learning enables students to become strategic and effective learners. Weekly monitoring exercises were used to improve college students' (N = 94) accuracy of judgment of learning (JOL) over a 14-week educational psychology course. A time series design was used to assess within-subject differences in JOL, confidence, monitoring accuracy, and performance. Results show that the monitoring exercises had a positive effect: JOL, confidence, monitoring accuracy, and performance all increased over the semester. Performance outcomes were the strongest predictor of a formative estimate of monitoring accuracy, while self-efficacy predicted an overall summative estimate of monitoring accuracy. In addition, overall course test performance was predicted by background knowledge, self-efficacy, and class membership by section. Implications of the results and directions for future research are discussed.



Introduction

In the self-regulated learning process, students are viewed as active seekers and processors of information. During this process, metacognitive knowledge and skills are crucial for effective thinking and competent performance. Classroom research shows that metacognitive strategies distinguish student abilities in transfer and problem solving (Campione & Brown, 1990; Pellegrino, Chudowsky, & Glaser, 2001), self-regulation (Butler & Winne, 1995; Schunk & Zimmerman, 2003; Zimmerman, 1989, 1990; Zimmerman & Risemberg, 1994), development of expertise (Sternberg, 2001), classroom participation (Delprato, 2001), and academic achievement (Hartman, 2001a; Pintrich & Garcia, 1994).

A popular model of metacognition includes two components: metacognitive knowledge and metacognitive skills. Metacognitive knowledge refers to awareness of one's own cognition, which includes declarative, procedural, and conditional knowledge. Metacognitive skills refer to a set of activities (planning, monitoring, and evaluating) that helps individuals regulate their learning process (Flavell, 1979; Flavell, Miller, & Miller, 2002; Hartman, 2001a; McCormick, 2003; Nelson, 1996; Nelson & Narens, 1990, 1994). More specifically, monitoring refers to one's awareness of task performance while performing the task. The ability to accurately monitor one's performance is an essential component of self-regulated learning (Butler & Winne, 1995; Pajares, 1997; Pajares & Valiante, 2002; Schraw, 2001).

Monitoring plays a major role in the current discrepancy-reduction model of self-regulated learning, in which one begins to study by setting a desired state of learning for the to-be-learned material. During studying, the person monitors how well the material has been learned to determine the current state of learning, continuously assesses the discrepancy between the desired and current states, and decides how to study until the perceived discrepancy between the two reaches zero. Individuals who can accurately discriminate better learned material from less learned material will regulate their study more effectively (Maki, 1995; Thiede, Anderson, & Therriault, 2003).
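Viewed procedurally, the discrepancy-reduction model amounts to a simple control loop. The Python sketch below is ours and purely schematic, not part of the cited models; judge_learning and study_once are hypothetical placeholders for a learner's monitoring and study processes.

```python
def discrepancy_reduction_study(material, desired_state, judge_learning, study_once):
    """Schematic loop of the discrepancy-reduction model of self-regulated study.

    judge_learning(material) -> perceived current state of learning (monitoring)
    study_once(material)     -> one additional pass over the material (control)
    Study stops when the perceived discrepancy between the desired state and
    the current state of learning reaches zero.
    """
    current_state = judge_learning(material)   # judgment of learning (JOL)
    while desired_state - current_state > 0:   # perceived discrepancy remains
        study_once(material)                   # allocate further study
        current_state = judge_learning(material)
    return current_state
```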

Four metacognitive judgments have been used to operationalize the concept of monitoring to reflect the ongoing, process-related metacognitive activities that one may engage in when performing a task. These judgments include: (a) task difficulty or ease of learning judgments (EOL), (b) learning and comprehension monitoring or judgments of learning (JOL), (c) feeling of knowing (FOK), and (d) confidence judgments. According to Nelson and Narens' (1990, 1994) framework, JOLs occur during the acquisition and retention phases of metamemory. In each case, individuals make predictions about their performance on a memory recall task. Based on this framework, JOLs have been defined in different ways and examined in different contexts. In the laboratory setting, JOLs occur during or soon after acquisition and are predictions about future test performance on currently studied items, usually a list of paired words (Kelemen, 2000; Nelson & Narens, 1994). Research in this area shows that when knowledge-based cues are used (Kelemen, 2000), delayed JOLs, with lapses between learning and JOL ranging from less than 1 second to a few minutes, are more accurate than JOL estimations made immediately after learning (Nelson & Dunlosky, 1991; Narens, Jameson, & Lee, 1994).

Also in the laboratory setting, JOLs have been examined in reading comprehension tasks. In this case, JOLs involve activities such as readers judging their understanding and assessing whether they will be able to recall information from the text at a later point in time (Ormrod, 2003). In a recent study (Maki, 1998b), participants read brief passages of narrative text and then made predictions about their future test performance. JOLs occurred either immediately after reading a passage or were delayed until all the texts had been read. The results show that, in contrast to Nelson and Dunlosky's (1991) finding, predictive accuracy was best for immediate JOLs and tests. The conflict between these results was attributed to the discourse level and, in particular, the retrievability of target information at the time of JOL. Using paired associates, Nelson and Dunlosky reported that 95% of their participants attempted to recall the target word during delayed JOLs. However, it seems unlikely that Maki's participants would be able to make a similar retrieval attempt for all the propositions in a target text. To deal with the challenge presented by longer discourse in reading comprehension, the effect of summarizing has been examined. Thiede et al. (2003) report that monitoring accuracy was greater for participants who generated keywords after a delay than for those who wrote keywords immediately after reading or did not write keywords at all. The superior monitoring accuracy produced more effective regulation of study, which in turn produced greater overall test performance.

These laboratory studies demonstrate that people who monitor their own understanding during the learning phase of an experiment show better recall performance when their memories are tested (Maki, 1998a, 1998b; Nelson, 1996). Monitoring accuracy can be improved through strategies such as having subjects generate their own problem-solving items or monitor their learning after a practice test of the material (Thiede et al., 2003). These studies also show that the level of discourse is a variable that influences monitoring accuracy. Because laboratory studies rely heavily on short discourses such as word lists, the accuracy in memory monitoring could be "due to concurrent awareness of the status of the memory system or to some kind of post recall performance evaluation not necessarily associated with conscious awareness of the contents of memory" (Hertzog et al., 1990, p. 225). This methodological challenge needs to be addressed to improve the validity of laboratory findings on monitoring accuracy. Therefore, the extent to which laboratory findings generalize to the classroom context, which involves even larger discourses and more complex tasks than memorizing word lists and reading short passages, remains an open question.

In college classrooms, students usually have to master a great deal of new knowledge in a limited amount of time. Classroom learning is often dynamic, with knowledge and information being acquired and updated frequently. In this largely self-regulated learning process, students need to make constant judgments of learning during study or very soon afterward. In this case, JOLs are students' self-assessments of how well they have learned the class content and how well they will perform on a future test (Pintrich, Wolters, & Baxter, 2000). These JOLs may be used to assess progress in learning (Campione & Brown, 1990), regulate future study activity, and predict subsequent performance (Nelson & Narens, 1994). In a recent study, Hacker et al. (2000) report that high performing students were accurate in predicting and postdicting their performance, and their monitoring accuracy improved over multiple exams, while low performing students had moderate accuracy on predictions but good postdiction accuracy. The lowest performing students showed gross overconfidence in both predictions and postdictions. These results show that students who observe and evaluate their performance accurately may react appropriately by keeping or changing their study strategies to achieve optimal study results (Hartman, 2001b). The ability to make accurate JOLs enables students to better monitor their comprehension, concentrate on new content, and adjust their learning goals, thereby becoming more strategic and effective in learning (Everson & Tobias, 2001).

Despite the potential benefits of promoting metacognitive knowledge and skills among students, current educational practices focus almost entirely on subject content rather than on using metacognitive skills to facilitate self-regulation in the learning process. It is prudent for educators to shift some of their focus to the development and assessment of metacognitive knowledge and skills. Moreover, research shows that improving metacognitive knowledge and skills is a complex and challenging process (McCormick, 2003; Sperling et al., 2004). Monitoring ability develops slowly and is quite poor in children and even in adults. While most students have metacognitive knowledge about their learning, this knowledge tends to remain inert and is not readily applied to improve performance (Schraw, 1994). Our teaching experience and reports from others (e.g., Ormrod, 2003) also provide numerous cases in which students are surprised to receive test results significantly different from their expectations, a typical sign of erroneous judgments of learning.

Several recent studies have found a link between metacognitive knowledge and monitoring accuracy (Schraw, 1994; Schraw, Dunkle, Bendixen, & Roedel, 1995). They also suggest that monitoring ability improves through training and practice (Delclos & Harrington, 1991; Schraw, 2001) and that repeated practice yields a general shift from theory-based to experience-based judgment, which in turn has positive effects on monitoring accuracy (Kelley & Jacoby, 1996; Koriat, 1997). However, these studies focused mostly on theoretical issues and adopted the laboratory approach, leaving the issue of monitoring accuracy in the classroom setting largely unaddressed.

The purpose of the present study was to investigate the influence of repeated practice in self-monitoring on the accuracy of judgment of learning over a 14-week educational psychology class. A time-series design was chosen because, at the beginning of the course, students had little on which to base their JOLs and performance other than their past experience in similar courses, their self-perceptions as learners, and their beliefs about the difficulty of the tests. Over the semester, students were provided with opportunities to make JOLs through weekly exercises and classroom tests. Each of these opportunities offered students feedback on their actual performance and the accuracy of their JOLs. This repeated, semester-length intervention enabled us to observe whether students' JOLs would shift from theory-based judgments of their general academic ability to experience-based judgments of their knowledge and skills in this particular course, and thus become more accurate. Specifically, we attempted to address four research questions: 1) Can pre-service teacher education students accurately judge their learning? 2) To what extent do JOLs and performance predict monitoring accuracy? 3) Do high-achieving students make more accurate JOLs than low-achieving students? 4) What variables predict overall monitoring accuracy and classroom performance?



Method

Participants

Ninety-four undergraduate students (17 male and 77 female) enrolled in two sections of an educational psychology survey course voluntarily participated in the study as part of a course requirement. The course was taken during the junior or senior year, after admission to the teacher education program. Two instructors each taught one section of the course. The participants were introduced to the study in the first class, and written consent was obtained to ensure the privacy and confidentiality of their participation.

Measures and Procedures

During the first class, a pre-test was administered as a measure of participants' background knowledge of educational psychology. It consisted of 25 four-option multiple-choice questions. In addition, a short educational psychology self-efficacy inventory was given during the first and final classes. The inventory consisted of eight items answered on a 5-point Likert scale; the Cronbach alpha coefficient was .88 for the pre-course administration and .90 for the post-course administration. Finally, participants completed the Raven Advanced Progressive Matrices Test (Set II) (Raven, 1962) as a measure of general ability (Carpenter, Just, & Shell, 1990). Each problem on the Raven test consists of a 3 × 3 matrix in which the lower right entry is missing and must be selected from among eight alternatives. Problems consist of eight entries that share common features across rows and columns; individuals must infer these features and match them to one of the eight alternatives. The complete test consists of 36 items, but we used an abbreviated version consisting of problems 6 through 20.

A time-series repeated-measures design was used to assess students' JOLs throughout the semester. Each section of the class met separately once per week for 14 weeks. A weekly monitoring exercise sheet (see Appendix for a sample) was given at the end of each class, with the exception of test dates and the introductory class. Each exercise sheet asked the students 1) to rate their understanding of the day's content on a 100-point scale, 2) to list the concepts from the class they found difficult to understand, 3) to describe specifically what they would do to improve their understanding of the concepts they identified as difficult, and 4) to answer three multiple-choice review questions on the day's material, followed by a confidence judgment for each question on a 100-point scale. The exercise sheets were printed on two sheets of paper that allowed the students to carbon copy their responses identically onto both sheets. After completion, the bottom page of the exercise sheet was collected as the original record of students' JOLs and responses. The review questions were discussed before the class ended. The students kept the top page of the monitoring exercise sheet for a portfolio to be handed in at the end of the semester. They were encouraged to revisit the exercise sheets regularly and use them to guide their study and review throughout the semester. In addition to the weekly feedback on the self-monitoring exercises, when each of the first three tests was handed back the students received feedback on their monitoring proficiency through their overall monitoring accuracy scores and bias scores, which indicated whether they were overconfident or underconfident in monitoring their performance.

JOL was measured using students' ratings on the first question (“Please indicate below your overall understanding of the content from today's class”) from every class except those in which a test was administered, totaling 10 monitoring exercise sheets. A unit JOL score was calculated by summing the weekly JOL scores within each of the four units of the course content. In the final review week, students were asked to estimate their overall understanding of the course content and predict their score on the final exam in terms of percentage correct. Performance was measured separately using the number of correct answers on the four classroom exams and on the review questions of the weekly exercise sheets. The first three classroom tests consisted of 20 four-option multiple-choice items each, and each covered one unit of the course content. The fourth test contained 40 items and was designed as a comprehensive measure of all the content covered in the course. Students in both sections of the class were tested with the same items on all four tests. The test items were either created by the instructors or selected from the test bank accompanying the textbook (Ormrod, 2003). They varied in difficulty from simple recall and recognition to more difficult application questions. An example of a recall and recognition question was:

Information in the ______ lasts only a few seconds unless encoded:

A. Sensory register

B. Working memory

C. Short-term memory

D. Long-term memory

An example of an application question was:

A math teacher strongly stresses individual responsibility in learning. She has students each set a weekly goal of the number of problems they can complete and explain. She encourages them to gradually increase the difficulty of the problems, and she emphasizes their progress. This example illustrates the teacher's attempt to:

A. Display the model for effective direct instruction

B. Increase her students' self-efficacy in math

C. Emphasize performance goals

D. Fulfill her students' deficiency needs

Performance on each test item was scored as 1 if correct and 0 if incorrect. Confidence ratings were recorded for each item on the four classroom tests and for each review question on the weekly exercise sheets, on a 100-mm line that followed each question (Schiffman, Reynolds, & Young, 1981). The left end of the line corresponded to no confidence and was labeled 0% Confidence; the right end corresponded to total confidence and was labeled 100% Confidence. The participants were instructed to respond to a test item and then draw a slash through the point on the line that best corresponded to their confidence in their answer to the question. This item-specific measure documents participants' on-line monitoring of their performance at the local item level during the test process.

Two indices (Keren, 1991; Yates, 1990) were used to measure monitoring proficiency. The first was the monitoring accuracy score (calibration): the absolute value of the difference between the confidence judgment and performance for each test item, summed over all items on a test and divided by the total number of items. Lower scores indicate higher accuracy and a greater level of calibration; scores could range from zero (perfect accuracy) to one (complete inaccuracy). The second was the bias score: the signed difference between the average confidence and average performance scores on each test. Positive scores indicate overconfidence, while negative scores indicate underconfidence; the farther the score is from zero, the more biased the judgment.
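For concreteness, both indices can be computed directly from the per-item data. The sketch below is ours rather than the authors' scoring code; it assumes the confidence ratings have been rescaled from the 100-mm line to a 0-1 scale so that they are commensurable with the 0/1 item scores.

```python
def monitoring_indices(confidence, performance):
    """Compute the two monitoring-proficiency indices for one test.

    confidence:  per-item confidence judgments rescaled to a 0-1 scale
    performance: per-item scores, 1 = correct, 0 = incorrect
    """
    n = len(performance)
    # Accuracy (calibration): mean absolute difference between confidence
    # and performance; 0 = perfect accuracy, 1 = complete inaccuracy.
    accuracy = sum(abs(c - p) for c, p in zip(confidence, performance)) / n
    # Bias: signed difference between mean confidence and mean performance;
    # positive = overconfidence, negative = underconfidence.
    bias = sum(confidence) / n - sum(performance) / n
    return accuracy, bias

# Example: a slightly overconfident test-taker on five items.
acc, bias = monitoring_indices([0.9, 0.8, 0.7, 0.6, 0.9], [1, 1, 0, 1, 0])
# acc = 0.46, bias = 0.18 (positive, i.e., overconfident)
```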

No formal stand-alone monitoring training was conducted during the course. However, the course content contained a detailed unit on metacognition, including monitoring; the weekly monitoring exercise sheet prompted participants' self-monitoring as described above; and participants had the opportunity to review the results of each of the first three tests during the following week. During the review, participants were encouraged to examine their item-by-item performance, compare that performance with monitoring judgments such as their confidence ratings and postdictions of test performance, and ask questions about any items they were unclear on.



Results

In this section, results are reported in accordance with the four research questions listed above. Table 1 reports the means and standard deviations of the tasks on the weekly monitoring exercises, including judgment of learning (JOL), performance scores, confidence ratings, bias of monitoring judgment, and accuracy of monitoring judgment. The indices represent composite scores derived from the review exercise items on the weekly monitoring sheets, aggregated by the four content units of the course that were assessed by the four tests. As can be seen from Table 1, students improved in JOL, performance, confidence, and monitoring accuracy during the course. For instance, from Unit 1 to Unit 4, JOL increased from M = 65.67 to M = 75.69, performance increased from M = 70.57 to M = 79.30, confidence increased from M = 70.22 to M = 78.39, and the monitoring accuracy score decreased from M = .38 to M = .29, indicating more accurate judgment of performance.

Table 1

Composite Means and Standard Deviations of Weekly Performance and Monitoring Judgments by Content Unit


 

                 Unit 1          Unit 2          Unit 3          Unit 4
              M       SD      M       SD      M       SD      M       SD
JOL          65.67   10.94   70.35   12.79   73.28   12.46   75.69   11.59
Performance  70.57   15.17   84.16   14.62   74.73   12.55   79.30    8.91
Confidence   70.22   12.25   68.43   12.93   72.33   14.27   78.39   13.08
Bias         -.001     .18   -.11      .16   -.05      .15   -.001     .14
Accuracy      .38      .09    .33      .10    .35      .10    .29      .10

Note. N = 94 for each variable.

Can students accurately judge their learning?

We addressed this question by examining the relationship between monitoring judgments and performance. First, we examined the relationship between performance and monitoring judgments on the weekly monitoring exercises. Table 2 presents correlations between JOLs, exercise performance scores, and monitoring accuracy measured at the end of each content unit. In particular, the JOLs are significantly related to their corresponding (same unit) performance scores and monitoring accuracy measures. Most importantly, the correlation between JOLs and monitoring accuracy strengthens steadily over the course of the semester, with Pearson's r increasing in magnitude from -.36 at Unit 1 to -.44 at Unit 2, -.53 at Unit 3, and -.60 at Unit 4. This strengthening correlation indicates that gains in monitoring judgments are associated with gains in students' ability to regulate, or accurately calibrate, their performance. Furthermore, significant relationships are found between performance scores and monitoring accuracy at each content unit, and this relationship is stronger in the second half of the semester than in the first. Another estimate of the accuracy of student judgments was the correlation between students' predictions of their final exam scores and their actual scores. This correlation was significant at r = .25 (p < .05), indicating that by the end of the semester students were, on the whole, able to make accurate global estimates of their test performance. These findings suggest that through weekly monitoring exercises students can learn to judge their learning accurately. In particular, the fact that monitoring accuracy is significantly related to the performance score for each content unit indicates that the very experience of doing these exercises contributes to a shift by students from theory-based to experience-based judgment of learning (Kelley & Jacoby, 1996; Koriat, 1997). The experience-based judgments of learning, in turn, have positive effects on students' ability to accurately monitor their learning.
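The unit-by-unit coefficients reported above are ordinary Pearson correlations between students' composite scores. A minimal sketch with SciPy, using made-up illustrative values (in the study, each array would hold one composite score per student, N = 94):

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical composite scores for five students in one content unit.
jol = np.array([60.0, 72.0, 68.0, 80.0, 75.0])        # unit JOL composites
accuracy = np.array([0.42, 0.30, 0.35, 0.22, 0.28])   # calibration scores

r, p = pearsonr(jol, accuracy)
# A negative r (the study reports -.36 to -.60 across units) means higher
# JOLs accompany lower accuracy scores, i.e., better calibration, because
# lower accuracy scores indicate more accurate monitoring.
```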

Table 2

Correlations between Judgment of Learning, Exercise Score, and Monitoring Accuracy by Content Unit


   

Variable           1      2      3      4      5      6      7      8      9      10     11     12
1. JOL1            --    .30** -.36**  .60**  .29** -.30**  .46**  .19   -.33**  .40**  .25*  -.19
2. Unit 1 Score          --   -.59**  .15    .30** -.27**  .19    .32** -.33**  .38**  .26*  -.28**
3. Unit 1 Acc                  --    -.24*  -.22*   .29** -.32** -.27**  .39** -.36** -.21*   .34**
4. JOL2                              --      .22*  -.44**  .69**  .13   -.39**  .54**  .20   -.27**
5. Unit 2 Score                             --    -.46**   .21*   .35** -.25*   .28**  .36** -.36**
6. Unit 2 Acc                                      --     -.47** -.41**  .60** -.44** -.17    .44**
7. JOL3                                                   --      .27** -.53**  .59**  .21*  -.44**
8. Unit 3 Score                                                  --    -.69**  .32**  .33** -.44**
9. Unit 3 Acc                                                          --     -.52** -.23*   .57**
10. JOL4                                                                      --      .42** -.60**
11. Unit 4 Score                                                                     --     -.67**
12. Unit 4 Acc                                                                              --

Note. *p < .05; **p < .01; Acc = Monitoring Accuracy.

To what extent do JOLs and performance predict monitoring accuracy?

Multiple linear regression procedures were used to predict monitoring accuracy from the JOLs and the exercise performance scores of the four content units. To ensure our observations were not confounded by potential collinearity between the predictor variables, a Pearson correlation procedure was used to examine the relationships between the JOLs and the exercise performance scores of the four content units. Only moderate correlations were found (Table 2) between the JOLs and the exercise performance scores for all four content units, ranging from the lowest (r = .22, p < .05), between JOL2 and Score 2, to the highest (r = .42, p < .01), between JOL4 and Score 4.

Four separate hierarchical linear regression models were used to predict monitoring accuracy. In each model, monitoring accuracy served as the dependent variable; JOL was entered as the first block, and JOL and performance together as the second block. Table 3 presents the results of these four models. As can be seen, the performance variable consistently accounted for a significant amount of variance in monitoring accuracy across the four content units. Significance levels for each of the combined models (JOL + Performance) are as follows: Unit 1, F(2, 91) = 28.52, p < .001; Unit 2, F(2, 91) = 23.05, p < .001; Unit 3, F(2, 91) = 71.40, p < .001; and final exam, F(2, 89) = 59.79, p < .001. While both JOLs and performance scores made significant contributions to monitoring accuracy, the results show that knowledge, as reflected in higher performance scores, is the stronger of the two predictors. This result further confirms that students' JOLs are experience-based rather than theory-based.
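Each of the four models follows the same two-block pattern. A sketch of one unit's model with statsmodels is given below; the data file and column names are hypothetical stand-ins, not the study's actual variable names.

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per student; columns: accuracy, jol, performance (hypothetical names).
df = pd.read_csv("unit1_measures.csv")

# Block 1: JOL alone.  Block 2: JOL plus exercise performance.
block1 = smf.ols("accuracy ~ jol", data=df).fit()
block2 = smf.ols("accuracy ~ jol + performance", data=df).fit()

# The R-squared change isolates the unique variance contributed by performance.
delta_r2 = block2.rsquared - block1.rsquared
print(f"R2 block 1 = {block1.rsquared:.2f}, "
      f"R2 block 2 = {block2.rsquared:.2f}, change = {delta_r2:.2f}")
```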

Table 3

Summary of Hierarchical Regression Analysis for Variables Predicting Monitoring Accuracy by Unit (N = 94)


 

Step and variable               B       β      R²     ΔR²

Unit 1
  Step 1: JOL1                -.003   -.36    .13    .13***
  Step 2: JOL1                -.002   -.20
          Performance Score 1 -.003   -.53    .39    .26***

Unit 2
  Step 1: JOL2                -.004   -.44    .20    .20***
  Step 2: JOL2                -.003   -.36
          Performance Score 2 -.003   -.38    .34    .14***

Unit 3
  Step 1: JOL3                -.005   -.53    .28    .28***
  Step 2: JOL3                -.003   -.38
          Performance Score 3 -.005   -.59    .61    .33***

Unit 4
  Step 1: JOL4                -.005   -.60    .36    .36***
  Step 2: JOL4                -.003   -.39
          Final Score 4       -.55    -.51    .57    .21***

Note. ***p < .001. B = unstandardized coefficient; β = standardized coefficient.

Do the high-achieving students make more accurate JOLs than the low-achieving students?

The upper (top 25%) and lower (bottom 25%) quartiles of the composite scores of the four classroom tests were used to divide the participants into high and low performing groups. Independent-samples t-tests were used to examine the differences between these two groups on JOLs, performance, confidence, bias of calibration, and monitoring accuracy. Results show that the high performing students made significantly higher JOLs than the low performing students at each unit throughout the course: JOL1, t(41) = -3.48, p = .001; JOL2, t(41) = -3.04, p = .004; JOL3, t(41) = -2.66, p = .01; JOL4, t(41) = -5.25, p = .001. They also achieved significantly higher test performance scores for Units 2, 3, and 4: Unit 2, t(41) = -2.32, p = .03; Unit 3, t(41) = -2.69, p = .01; Unit 4, t(41) = -11.50, p = .001; the difference for Unit 1 failed to reach significance, t(41) = -1.95, p = .058. However, it was not until Unit 3 that the high performing students became significantly more confident in their monitoring judgments than the low performing students: Confidence 3, t(41) = -2.11, p = .04; and they retained a significantly higher level of confidence at Unit 4: Confidence 4, t(41) = -3.25, p = .001. While no significant difference was found in judgment bias between the high and low performing students, the high performing students were more accurate in judging their learning at Units 1, 3, and 4, but not at Unit 2: Accuracy Unit 1, t(41) = 2.80, p = .001; Accuracy Unit 3, t(41) = 3.27, p = .001; Accuracy Unit 4, t(41) = 7.11, p = .001. Given that the Unit 2 material appeared to be the least difficult of the four units, it is not surprising that we found no accuracy differences between the high and low performing students there. This result is consistent with Schraw and Roedel's (1994) finding that accurate monitoring becomes more difficult to achieve as test difficulty increases.
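The grouping and comparisons described here reduce to a quartile split on the composite test score followed by independent-samples t-tests. A sketch with pandas and SciPy (column names hypothetical):

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("composite_scores.csv")  # columns: total_test, jol1, acc1, ...

# Upper (top 25%) and lower (bottom 25%) quartiles of composite test scores.
high = df[df["total_test"] >= df["total_test"].quantile(0.75)]
low = df[df["total_test"] <= df["total_test"].quantile(0.25)]

# Independent-samples t-test comparing the two groups on, e.g., Unit 1 JOL.
t, p = stats.ttest_ind(low["jol1"], high["jol1"])
print(f"t = {t:.2f}, p = {p:.3f}")
```

With the low group entered first, a negative t indicates a higher mean for the high performing group, matching the sign convention of the results reported above.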

What variables predict overall monitoring accuracy and classroom performance?

The focus of this study centered on the impact of the repeated monitoring exercises on monitoring accuracy and classroom performance. An important element of the study is therefore to uncover the contributions of important classroom variables in estimating overall monitoring and performance outcomes. To accomplish this, we conducted two standard multiple regression analyses to identify the variables that best predict monitoring accuracy and overall classroom performance. For the first analysis, a composite monitoring accuracy score was created to serve as the dependent variable. This score was an average of the three unit estimates of monitoring accuracy. Independent variables included pre-test self-efficacy, post-test self-efficacy, the Raven's score, class membership, the educational psychology pre-test, and a measure of overall test performance in the class. Overall test performance was the mean percent correct over the three tests and the final exam. Bivariate correlations between these variables are provided in Table 4.

Table 4

Bivariate Correlations Between Class Membership, Raven's Test, Pre- and Post-Self-Efficacy, Monitoring Accuracy, Test Performance, and Educational Psychology Pre-Test


 

 

Variable                  1      2      3      4      5      6      7
1. Class                  --   -.04  -.28**  -.02    .19  -.41**  -.29**
2. Raven's Test                 --    .17    .18   -.17    .02    -.03
3. Pre Self-Efficacy                  --     .48** -.42**  .33**   .29**
4. Post Self-Efficacy                        --    -.53**  .48**   .31**
5. Monitoring Accuracy                              --    -.40**  -.23*
6. Test Performance                                        --      .50**
7. Ed. Psych Pre-Test                                              --

Note. *p < .05; **p < .01.

The regression analysis revealed that the model significantly predicted monitoring accuracy, F(6, 81) = 6.72, p < .001; R² for the model was .33 and adjusted R² was .28. Table 5 provides a summary of the regression analyses. In terms of individual relationships between the independent variables and monitoring accuracy, only post-test self-efficacy (t = -3.27, p = .002) contributed a significant amount of unique variance with all of the independent variables entered. This is an interesting finding considering that the bivariate correlations of monitoring accuracy with pre-test self-efficacy (r = -.42, p < .01) and with overall test performance (r = -.40, p < .01) were both significant; neither was a significant predictor in the multiple regression when post-test self-efficacy was entered simultaneously. This finding points to a complex relationship among these variables, in which changes in self-efficacy during the course play a mediating role in the prediction of monitoring accuracy. Furthermore, the individuals with the highest self-efficacy for learning educational psychology at the end of the course also appear to be the most accurate at monitoring their educational psychology content knowledge as measured by test performance.

Table 5

Summary of Regression Analyses for Variables Predicting Overall Monitoring Accuracy and Overall Test Performance


 

 

Dependent measure and predictor     B       β       t      p       R      R²

Monitoring Accuracy                                               .576   .332
  Pre Self-Efficacy               -.002   -.13   -1.21   .232
  Post Self-Efficacy              -.006   -.40   -3.27   .002
  Raven's Test                    -.018   -.04    -.40   .687
  Class Membership                 .015    .09     .84   .402
  Ed Psych Pre-Test                .000    .01     .06   .953
  Overall Test                    -.120   -.13   -1.03   .308

Test Performance                                                  .681   .464
  Pre Self-Efficacy               -.001   -.08    -.77   .446
  Post Self-Efficacy               .006    .39    3.60   .001
  Raven's Test                    -.020   -.04    -.47   .641
  Class Membership                -.054   -.32   -3.52   .001
  Ed Psych Pre-Test                .009    .26    2.81   .006
  Monitoring Accuracy             -.107   -.10   -1.03   .308

Note. B = unstandardized coefficient; β = standardized coefficient.

 

 


For the second analysis, overall test performance served as the dependent variable. Independent variables included pre-test self-efficacy, post-test self-efficacy, the Raven's score, class membership, the educational psychology pre-test, and overall monitoring accuracy. The regression analysis revealed that the model significantly predicted overall test performance, F(6, 81) = 11.69, p < .001; R² for the model was .46 and adjusted R² was .42. In terms of individual relationships between the independent variables and overall test performance, post-test self-efficacy (t = 3.60, p = .001), class membership (t = -3.52, p = .001), and the educational psychology pre-test (t = 2.81, p = .006) were significant predictors (Table 5). This finding indicates that predicting overall test performance requires taking into account numerous influences, including those from the individual instructor or class dynamics, motivational factors, and background knowledge. In general, this finding offers empirical support to the notion that self-regulated learning includes both metacognitive and motivational variables and that the interaction between these variables mediates the process of self-regulated learning (Schunk & Zimmerman, 2003; Sperling et al., 2004).
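Both analyses enter all predictors simultaneously. A sketch of the second model with statsmodels (column names hypothetical; C() treats the two class sections as a categorical predictor):

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("course_measures.csv")  # one row per student

model = smf.ols(
    "test_performance ~ pre_efficacy + post_efficacy + raven"
    " + C(class_section) + edpsych_pretest + monitoring_acc",
    data=df,
).fit()
# model.summary() lists each predictor's coefficient, t, and p alongside
# the model R-squared, paralleling the layout of Table 5.
print(model.summary())
```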

 


Discussion

This study used a time-series design to examine the effects of repeated monitoring exercises on judgment of learning, monitoring accuracy, and performance over the course of a semester. In general, our findings support the practice of coupling metacognitive exercises with classroom instruction to facilitate learning outcomes, metacognitive skills, and motivation. In this section, we discuss the major findings of the study in relation to the existing literature and suggest directions for future research.

Overall, the data in this study revealed trends showing improvement in metacognitive abilities over the course of the semester. Students showed consistent increases in JOL estimates throughout the semester (65.67 for Unit 1, 70.35 for Unit 2, 73.28 for Unit 3, and 75.69 for Unit 4). More importantly, they became more accurate at monitoring their performance through the weekly exercises, as evidenced by their accuracy scores for Unit 1 (.38) and Unit 4 (.29). These results show a promising match between students' judgment of their understanding of the course content at an increasingly deeper level and their ability to calibrate their performance more accurately. They also indicate that students' metacognitive monitoring skills can be improved through repeated exercises incorporated into the class process. This finding opens up an avenue for classroom interventions to develop students' metacognitive knowledge and skills, which are largely ignored in current classroom teaching (Ormrod, 2003; McCormick, 2003). In addition, these results offer empirical support to previous findings that people who monitor their own understanding during the learning process achieve better performance (Nelson & Dunlosky, 1991; Maki, 1998a; Nelson & Narens, 1994), that knowledge growth is related to the development of students' metacognitive knowledge and skills (Hartman, 2001b; McCormick, 2003; Schraw, 2001; Schraw & Impara, 2000), and that improvement in the self-regulation of learning is related to positive test outcomes (Peverly & Brost, 2003; Thiede et al., 2003).

Our data also revealed differences in metacognitive abilities between the high and low performing students. High performing students reported significantly higher JOLs, monitored their performance more accurately, and were more confident than their low performing peers by the second half of the semester. These results are consistent with the observation that metacognitive knowledge and skills distinguish students' competency in problem solving (Campione & Brown, 1990; Pellegrino, Chudowsky, & Glaser, 2001), metacognitive regulatory behaviors in learning (Maki, 1995, 1998b), and academic performance (Hacker et al., 2000; Thiede et al., 2003). Our findings, together with others, suggest that developing metacognitive knowledge and skills improves students' capacity and expertise in academic learning (Sternberg, 2001) and hence their academic achievement (Hartman, 2001a, 2001b). More specifically, our results indicate that developing students' metacognitive knowledge and skills takes time and continuous practice, and that there is a reciprocal relationship between the development of metacognitive knowledge and skills and that of content knowledge. Metacognitive skills and subject content knowledge can and should be developed simultaneously. These findings will be particularly useful in designing classroom instruction that aims at developing specific aspects of metacognitive knowledge and skills in order to promote problem-solving skills, the use of effective learning strategies, and the academic achievement of low-achieving students.

Findings from our regression analyses revealed the complexity of predicting metacognitive and performance outcomes, particularly when motivational factors are considered. Our data demonstrate that although multiple predictor variables had significant bivariate correlations with overall monitoring accuracy, only post-test self-efficacy was a significant predictor of overall monitoring accuracy when all the predictor variables were entered simultaneously. This finding suggests that changes in confidence to learn domain-specific content were strongly linked to students' ability to accurately monitor their learning in the given domain. Also, our results show that background knowledge, post-test self-efficacy, and class differences were all significant predictors of overall test performance. In other words, pre-existing differences in knowledge as students began the course, changes in self-efficacy during the semester, and variations introduced by instructor differences and classroom dynamics all contributed to performance. Therefore, any attempt to improve students' classroom performance needs to consider not only the subject content but also the numerous motivational, metacognitive, knowledge-based, and pedagogical variables that make significant contributions to the regulation of student learning (Butler & Winne, 1995; McCormick, 2003; Pajares, 1997; Pajares & Valiante, 2002; Schraw, 2001).

In summary, the above findings have important implications for pre-service teacher education programs. They support the assertion that metacognitive skills are teachable through appropriate intervention (McCormick, 2003; Nietfeld & Schraw, 2002; Schraw, 2001); in our case, through repeated exercises and feedback on self-monitoring over the course of a 14-week semester. Our results suggest that offering multiple opportunities for exercises and tests, and providing continuous feedback on students' test performance, can promote students' ability to accurately monitor their learning. Over time, continual prompts, cues, and feedback may help students become more deliberate in self-regulation and, therefore, more strategic and effective in the learning process. This finding has significant implications for the design of classroom interventions to promote students' metacognitive knowledge and skills. Research has demonstrated that metacognitive awareness and self-regulatory behavior relate to student academic performance (Butler & Winne, 1995; Hacker et al., 2000; Hartman, 2001a). However, substantial effort is needed to foster students' metacognitive knowledge and skills. Students cannot be expected to be competent in metacognitive skills simply because they have multiple years of experience with school learning. As our study indicates, continuous opportunities need to be provided for students to explicitly address, practice, polish, and internalize their metacognitive skills.

Fortunately, research on metacognition has generated an abundant literature that offers various ways to foster metacognitive knowledge and skills among students. Here we briefly describe a few practice-oriented models and strategies that classroom teachers can use to develop students' metacognitive knowledge and skills; our results lend empirical support to these models and strategies. Recent literature advocates an explicit focus on metacognition in classroom practice (Delclos & Harrington, 1991; Hartman, 2001a; McCormick, 2003; Ormrod, 2003). Hartman (2001c) suggests that teaching metacognitively involves teaching with and for metacognition. Teaching with metacognition means that teachers think about their own thinking regarding their teaching. It includes reflection on instructional goals, students' characteristics and needs, content level and sequence, teaching strategies, materials, and other issues related to curriculum, instruction, and assessment before, during, and after lessons in order to maximize instructional effectiveness. Teaching for metacognition means that teachers think about how their instruction will activate and develop their students' metacognition, or thinking about their own thinking as learners. She goes on to illustrate strategic metacognitive knowledge about cooperative learning and scaffolding and how to apply these instructional strategies in class. In addition, she suggests videotaping one's own instruction and using the Thinking About Teaching Strategies Scale as self-assessment tools to stimulate and promote teaching metacognitively.

In the primary-grade setting, Lyons, Pinnell, and DeFord (1993) observed that for far too many children in our primary classrooms, learning to read has remained confusing, frustrating, and fraught with feelings of failure. Careful observation and attention to fostering self-monitoring strategies can help ease this transition into literacy for many students. To deal with this challenge, Clay (1991) developed a theory of literacy learning that assigns a central role to monitoring strategies. She contends that an important question for beginning instruction is, “How can we support the development of a highly efficient and coordinated set of monitoring and searching strategies?” A necessary first step is to carefully observe the types of behaviors that signal strategic processing. She further suggests that our best window into the child's processing comes from analyzing errors, or miscues. By analyzing the child's errors, we can infer the types of cues used and those neglected. Behaviors at the point of error, or shortly after an error, suggest the types of cues the child is monitoring. Monitoring is indicated when an error is followed by rereading all or part of the sentence, by making several attempts at a word (including self-correction), by showing signs of dissatisfaction, or by appealing for help (Clay, 1991; Goodman & Goodman, 1994).

To apply Clay's (1991) model, simple tenets such as “Good readers think about meaning,” “All readers make mistakes,” and “Good readers notice and fix some mistakes” can be put on a poster to establish a group context that fosters self-monitoring in the classroom. These tenets can lead to negotiated procedures that (a) allow each reader time to discover and fix their mistakes; (b) provide help when requested; and (c) enable members of the group to note, analyze, and raise errors for discussion after a student has finished reading and the meaning of the section has been established. With initial teacher modeling, these discussions can shift the focus of reading from accuracy to interpretation and strategy development (Brown & Palincsar, 1989; Manning, 2002; Pressley et al., 1992; Taylor & Nosbush, 1983).

The current literature offers rich practice-oriented resources with similar models and strategies. They include Brown and Palincsar's (1989) model of reciprocal teaching of reading in a cooperative context; Novak's (1998) recommendation of graphic organizers such as concept maps to help teachers structure, monitor, and use knowledge to foster meaningful learning; Posner's (1991) guidebook to engage pre-service teachers doing fieldwork in reflective teaching; Hartman's (1993) book on tutoring with and for metacognition through interactive structures among college tutors; Manning and Payne's (1996) book on applying teachers' metacognition to educational psychology through self-talk among teachers and students; and Osborne's (1999) behavioral measure of metacognition that teachers can use to assess the metacognitive abilities of their students.

Clearly, there is considerable research documenting metacognition as an essential ingredient of self-directed and self-regulated learning. A variety of procedures and strategies have been developed to foster students' metacognitive knowledge and skills in academic learning and everyday life. With extensive and varied use, metacognitive strategies and knowledge can be refined and used automatically as needed in skilled performance. However, because not all students develop and use metacognition spontaneously, teachers need to provide students with explicit instruction in both metacognitive knowledge and metacognitive strategies in order to help them eventually develop voluntary control over their own learning. This goal can be achieved through teaching students to reflect on how they think, learn, remember, and perform academic tasks and through repeatedly emphasizing and demonstrating actions that illustrate how students can be responsible for and can control their own outcomes in their education and their everyday life (Hartman, 2001b).

Also, our results suggest that developing students' monitoring accuracy is a complex and gradual process. Future research needs to further tease out the relationships between variables such as domain background knowledge, metacognitive knowledge and skills, goal orientations and self-efficacy, classroom environment, group dynamics, and instructor differences in terms of their contributions to classroom learning. Specifically, future research needs to continue exploring ways not only to promote monitoring accuracy of content knowledge, but also to scrutinize the process by which students develop their monitoring ability as they progress through teacher education programs and into their future practice. Based on this research, teacher education programs can fulfill the dual responsibility of facilitating self-regulatory abilities in pre-service teachers and of making that process explicit enough that the pre-service teachers, in turn, can facilitate the development of these abilities in their future students.

Our research also has significant implications for developing and implementing curriculum in teacher education programs. In essence, our study set out to operationalize the self-regulated learning process by examining students' self-judgments of learning and their use of these judgments to monitor and adjust the learning process for optimal academic performance. Our results contribute to a better understanding of the relationships among judgment of learning, self-monitoring, and academic performance within the context of an educational psychology course. We intend to continue this research and to test the extent to which our results generalize to other subject areas, such as math, language and literacy, educational foundations, curriculum design and implementation, and methods courses in our teacher education program. Pending empirical evidence from these specific subject areas, we can speculate on the possible implications of our findings for such courses.

Our discussion concerns two important issues: the nature of metacognitive knowledge and skills, and the transfer of this knowledge and these skills across disciplines. Although various definitions of metacognition exist in the literature, there is general agreement that metacognitive knowledge and skills are important for student learning and that fostering the ability to self-regulate the learning process empowers students and enhances their performance and learning. However, there is considerable debate about the nature of metacognitive knowledge and skills. One view holds that metacognitive knowledge and skills are generic. This domain-general metacognitive knowledge guides students' performance assessment and confidence judgments by helping them focus attention and review test responses so as to enhance monitoring. Research yields evidence of the effect of metacognitive knowledge on performance assessment by identifying a positive correlation between confidence judgments and measures of executive regulation such as bias scores (Brown, 1987; Pintrich & DeGroot, 1990; Pressley, Borkowski, & Schneider, 1987; Schraw & Nietfeld, 1998). This research shows that although domain knowledge constrains test performance, it does not affect confidence judgments (Schraw, 1997). The generic view distinguishes the knowledge necessary for successful performance from a more general type of knowledge used to regulate performance. From this generic view, our research is useful for other courses in the teacher education program by offering specific classroom interventions, such as repeated monitoring exercises and continuous feedback, to enhance students' academic performance. This approach is compatible with the agenda of understanding the role of general learning and thinking strategies as one of the central areas for research on college student learning (Biggs, 1987; Campione & Brown, 1990; Kuo, Hagie, & Miller, 2004; Weinstein & Mayer, 1986). This research aims at fostering a set of general learning and thinking strategies to enable students to learn how to learn and to improve their problem-solving abilities for the rest of their lives (McKeachie, Pintrich, & Lin, 1985). The present study contributes to this research from the metacognitive perspective by fostering students' ability to self-monitor their performance more accurately through repeated monitoring exercises and continuous feedback in educational psychology. Future research needs to address questions such as how generic metacognitive knowledge applies to different domains and how the nature of general metacognitive knowledge changes over time.

The second view holds that metacognitive knowledge and skills are domain-specific. Accordingly, the ability to monitor one's performance accurately is a natural byproduct of domain-specific knowledge acquisition. As domain-specific skills increase, the knowledge needed to regulate problem solving in that domain increases as well. Since this type of regulatory knowledge is domain-specific, it is of little use when regulating one's performance in an unrelated domain (Glaser & Chi, 1988; Kluwe, 1987). Research supporting this perspective shows that domain-specific content knowledge led to overconfidence in monitoring judgments in a different domain (Glenberg & Epstein, 1987), that domain-specific knowledge improved test performance but had no effect on monitoring accuracy (Morris, 1990), and that there are meaningful qualitative differences among self-monitoring procedures (Reid & Harris, 1993). In addition, studies have demonstrated that different disciplines require different types of evidence and logic of argument and, therefore, entail different ways of thinking and reasoning (Donald, 1990; Pintrich & Garcia, 1994). Our study also seems useful for promoting this type of disciplinary thinking. The repeated-measures, longitudinal design allowed us to examine how students' content knowledge of educational psychology evolved along with their metacognitive knowledge and skills within the course.

The domain-specific implications of our research concern how students actually confront different tasks and situations (e.g., types of exams, papers, discussion, labs, project, etc.) and how they transfer their knowledge across different tasks as they learn within a discipline and develop the domain-specific metacognitive knowledge. Research from this domain-specific perspective will help address the question of transfer within a discipline. For instance, how does a knowledge structure formed in an introductory course transfer to a more advanced course taken later in the program? How does the domain-specific metacognitive knowledge developed in the subject content courses transfer to the method courses in teacher training programs? It is clear that future research needs to address the development of metacognitive knowledge and skills in relation to the acquisition of content knowledge. We hypothesize that understanding the nature and development of both the domain-general and domain-specific metacognitive knowledge and, in particular, how these two types of knowledge converge will cast a brighter light on the study of self-regulated learning.



Acknowledgement

An earlier version of this paper was presented at the annual conference of the Eastern Educational Research Association (EERA), Clearwater, FL, February 2004, where it received the EERA Distinguished Paper Award for 2004. We would like to express our appreciation to the editor of CIE for comments on the early version of the paper and our sincere thanks to Mark Parish for his assistance with this project.



Author

Li Cao is Assistant Professor in the Department of Counseling and Educational Psychology, University of West Georgia. His research focuses on metacognition, self-efficacy, and teacher knowledge development. Address correspondence to Li Cao, Counseling and Educational Psychology Department, University of West Georgia, 1600 Maple Street, Carrollton, GA 30118; phone: (678) 839-6118; fax: (678) 839-6099; e-mail: lcao@westga.edu

John Nietfeld is Assistant Professor in the Department of Curriculum & Instruction, North Carolina State University. His research interests include metacognition, motivation, and reading.



References

Biggs, J. B. (1987). Student approaches to learning and studying. Hawthorne, Australia: Australian Council for Educational Research Ltd.

Brown, A. L. (1987). Metacognition, executive control, self-regulation, and other more mysterious mechanisms. In F. Weinert & R. Kluwe (Eds.), Metacognition, motivation, and understanding (pp. 65-116). Hillsdale, NJ: Erlbaum.

Brown, A. L., & Palincsar, A. S. (1989). Guided, cooperative learning and individual knowledge acquisition. In L. B. Resnick (Ed.), Knowing, learning, and instruction: Essays in honor of Robert Glaser (pp. 392-451). Hillsdale, NJ: Erlbaum.

Butler, D. L., & Winne, P. H. (1995). Feedback and self-regulated learning: A theoretical synthesis. Review of Educational Research, 65, 245-281.

Campione, J. C., & Brown, A. L. (1990). Guided learning and transfer: Implications for approaches to assessment. In N. Frederiksen, R. Glaser, A. Lesgold, & M. G. Shafto (Eds.), Diagnostic monitoring of skill and knowledge acquisition (pp. 141-172). Hillsdale, NJ: Erlbaum.

Carpenter, P. A., Just, M. A., & Shell, P. (1990). What one intelligence test measures: A theoretical account of the processing in the Raven Progressive Matrices Test. Psychological Review, 97, 404-431.

Clay, M. M. (1991). Becoming literate: The construction of inner control. Portsmouth, NH: Heinemann.

Delclos, V. R., & Harrington, C. (1990). Effects of strategy monitoring and proactive instruction on children's problem-solving performance. Journal of Educational Psychology, 83, 35-42.

Delprato, D. J. (2001). Increasing classroom participation with self-monitoring. The Journal of Educational Research, 225-227.

Donald, J. (1990). University professors' views of knowledge and validation processes. Journal of Educational Psychology, 82, 242-249.

Everson, H. T., & Tobias, S. (2001). The ability to estimate knowledge and performance in college: A metacognitive analysis. In H. J. Hartman (Ed.), Metacognition in learning and instruction (pp. 69-83). Dordrecht, Netherlands: Kluwer.

Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist, 34, 906-911.

Flavell, J. H., Miller, P. H., & Miller, S. A. (2002). Cognitive development (4th ed.). Upper Saddle River, NJ: Prentice Hall.

Garcia, T., & Pintrich, P. R. (1994). Regulating motivation and cognition in the classroom: The role of self-schemas and self-regulatory strategies. In D. H. Schunk & B. J. Zimmerman (Eds.), Self-regulation of learning and performance: Issues and educational applications (pp. 127-153). Hillsdale, NJ: Erlbaum.

Glaser, R., & Chi, M. T. (1988). Overview. In M. Chi, R. Glaser, & M. Farr (Eds.), The nature of expertise (pp. xv-xxviii). Hillsdale, NJ: Erlbaum.

Glenberg, A. M., & Epstein, W. (1987). Inexpert calibration of comprehension. Memory & Cognition, 15, 84-93.

Goodman, Y. M., & Goodman, K. S. (1994). To err is human: Learning about language processes by analyzing miscues. In R. B. Ruddell, M. R. Ruddell, & H. Singer (Eds.), Theoretical models and processes of reading (pp. 104-123). Newark, DE: International Reading Association.

Hacker, D. J., Bol, L., Horgan, D. D., & Rakow, E. A. (2000). Test prediction and performance in a classroom context. Journal of Educational Psychology, 92, 160-170.

Hartman, H. J. (1993). Intelligent tutoring. Clearwater, FL: H & H Publishing.

Hartman, H. J. (Ed.). (2001a). Metacognition in learning and instruction. Dordrecht, Netherlands: Kluwer.

Hartman, H. J. (2001b). Developing students' metacognitive knowledge and skills. In H. J. Hartman (Ed.), Metacognition in learning and instruction (pp. 33-67). Dordrecht, Netherlands: Kluwer.

Hartman, H. J. (2001c). Teaching metacognitively. In H. J. Hartman (Ed.), Metacognition in learning and instruction (pp. 149-171). Dordrecht, Netherlands: Kluwer.

Hertzog, C., Dixon, R. A., & Hultsch, D. F. (1990). Relationships between metamemory, memory predictions, and memory task performance in adults. Psychology and Aging, 5, 215-227.

Kelemen, W. L. (2000). Metamemory cues and monitoring accuracy: Judging what you know and what you will know. Journal of Educational Psychology, 92, 800-810.

Kelley, C. M., & Jacoby, L. L. (1996). Adult egocentrism: Subjective experience versus analytic bases for judgment. Journal of Memory and Language, 35, 157-175.

Keren, G. (1991). Calibration and probability judgments: Conceptual and methodological issues. Acta Psychologica, 77, 217-273.

Kluwe, R. H. (1987). Executive decisions and regulation of problem solving. In F. Weinert & R. Kluwe (Eds.), Metacognition, motivation, and understanding (pp. 31-64). Hillsdale, NJ: Erlbaum.

Koriat, A. (1997). Monitoring one's own knowledge during study: A cue-utilization approach to judgments of learning. Journal of Experimental Psychology: General, 126, 349-370.

Kuo, J., Hagie, C., & Miller, M. T. (2004). Encouraging college student success: The instructional challenges, response strategies, and study skills of contemporary undergraduates. Journal of Instructional Psychology, 31(1), 60-67.

Lyons, C. A., Pinnell, G. S., & DeFord, D. E. (1993). Partners in learning: Teachers and children in Reading Recovery. New York: Teachers College Press.

Maki, R. H. (1995). Accuracy of metacomprehension judgments for questions of varying importance levels. American Journal of Psychology, 108, 327-344.

Maki, R. H. (1998a). Predicting performance on text: Delayed versus immediate predictions and tests. Memory & Cognition, 26, 959-964.

Maki, R. H. (1998b). Test predictions over text material. In D. J. Hacker, J. Dunlosky, & A. C. Graesser (Eds.), Metacognition in educational theory and practice (pp. 117-144). Mahwah, NJ: Erlbaum.

Manning, B., & Payne, B. (1996). Self-talk for teachers and students. Needham, MA: Allyn & Bacon.

Manning, M. (2002). Self-monitoring meaning. Teaching Pre K-8, 32(4), 103-104.

McCormick, C. B. (2003). Metacognition and learning. In W. M. Reynolds & G. E. Miller (Eds.), Handbook of psychology (Vol. 7, pp. 79-102). New York: Wiley.

McKeachie, W. J., Pintrich, P. R., & Lin, Y. (1985). Teaching learning strategies. Educational Psychologist, 20, 153-160.

Morris, C. C. (1990). Retrieval processes underlying confidence in comprehension judgments. Journal of Experimental Psychology: Learning, Memory, and Cognition, 16, 223-232.

Narens, L., Jameson, K. A., & Lee, V. A. (1994). Subthreshold priming and memory monitoring. In J. Metcalfe & A. P. Shimamura (Eds.), Metacognition: Knowing about knowing (pp. 71-92). Cambridge, MA: MIT Press.

Nelson, T. O. (1996). Consciousness and metacognition. American Psychologist, 51(2), 102-116.

Nelson, T. O., & Dunlosky, J. (1991). When people's judgments of learning (JOLs) are extremely accurate at predicting subsequent recall: The "delayed-JOL effect." Psychological Science, 2, 267-270.

Nelson, T. O., & Narens, L. (1990). Metamemory: A theoretical framework and new findings. In G. H. Bower (Ed.), The psychology of learning and motivation (Vol. 26, pp. 125-141). New York: Academic Press.

Nelson, T. O., & Narens, L. (1994). Why investigate metacognition? In J. Metcalfe & A. P. Shimamura (Eds.), Metacognition: Knowing about knowing (pp. 1-27). Cambridge, MA: MIT Press.

Nietfeld, J. L., & Schraw, G. (2002). The effect of knowledge and strategy training on monitoring accuracy. The Journal of Educational Research, 95, 131-142.

Novak, J. (1998). Learning, creating, and using knowledge. Mahwah, NJ: Erlbaum.

Ormrod, J. (2003). Educational psychology: Developing learners (4th ed.). Upper Saddle River, NJ: Prentice Hall.

Osborne, J. W. (1999). A behavioral measure of metacognition for teachers. Paper presented at the annual meeting of the American Educational Research Association, Montreal, Canada.

Pajares, F. (1997). Current directions in self-efficacy research. In M. Maehr & P. R. Pintrich (Eds.), Advances in motivation and achievement (Vol. 10, pp. 1-49). Greenwich, CT: JAI Press.

Pajares, F., & Valiante, G. (2002). Students' self-efficacy in their self-regulated learning stages: A developmental perspective. Psychologia, 45, 211-221.

Pellegrino, J., Chudowsky, N., & Glaser, R. (Eds.). (2001). Knowing what students know: The science and design of educational assessment. Washington, DC: National Academy Press.

Peverly, S. T., & Brobst, K. E. (2003). College adults are not good at self-regulation: A study on the relationship of self-regulation, note taking, and test taking. Journal of Educational Psychology, 95, 335-346.

Pintrich, P. R., & DeGroot, E. V. (1990). Motivational and self-regulated learning components of classroom academic performance. Journal of Educational Psychology, 82, 223-232.

Pintrich, P. R., & Garcia, T. (1994). Self-regulated learning in college students: Knowledge, strategies, and motivation. In P. R. Pintrich, D. R. Brown, & C. E. Weinstein (Eds.), Student motivation, cognition, and learning: Essays in honor of Wilbert J. McKeachie (pp. 113-133). Hillsdale, NJ: Erlbaum.

Pintrich, P. R., Wolters, C. A., & Baxter, G. P. (2000). Assessing metacognition and self-regulated learning. In G. Schraw & J. C. Impara (Eds.), Issues in the measurement of metacognition (pp. 43-97). Lincoln, NE: Buros Institute of Mental Measurements.

Posner, G. J. (1991). Field experience: A guide to reflective teaching. New York: Longman.

Pressley, M., Borkowski, J. G., & Schneider, W. (1987). Cognitive strategies: Good strategy users coordinate metacognition and knowledge. In R. Vasta & G. Whitehurst (Eds.), Annals of child development (Vol. 5, pp. 89-129). Greenwich, CT: JAI Press.

Raven, J. C. (1962). Advanced Progressive Matrices, Set II. London: H. K. Lewis.

Reid, R., & Harris, K. R. (1993). Self-monitoring of attention versus self-monitoring of performance: Effects on attention and academic performance. Exceptional Children, 60(1), 29-40.

Schiffman, S. S., Reynolds, M. L., & Young, F. W. (1981). Introduction to multidimensional scaling: Theory, methods, and applications. New York: Academic Press.

Schraw, G. (1994). The effect of metacognitive knowledge on local and global monitoring. Contemporary Educational Psychology, 19, 143-154.

Schraw, G. (1997). The effects of generalized metacognitive knowledge on test performance and confidence judgments. The Journal of Experimental Education, 65(2), 135-146.

Schraw, G. (2001). Promoting general metacognitive awareness. In H. J. Hartman (Ed.), Metacognition in learning and instruction (pp. 3-16). Dordrecht, Netherlands: Kluwer.

Schraw, G., & Impara, J. C. (Eds.). (2000). Issues in the measurement of metacognition. Lincoln, NE: Buros Institute of Mental Measurements.

Schraw, G., & Nietfeld, J. (1998). A further test of the general monitoring skill hypothesis. Journal of Educational Psychology, 90(2), 236-248.

Schraw, G., & Roedel, T. D. (1994). Test difficulty and judgment bias. Memory & Cognition, 22, 63-69.

Schunk, D. H., & Zimmerman, B. J. (2003). Self-regulation and learning. In W. M. Reynolds & G. E. Miller (Eds.), Handbook of psychology (Vol. 7, pp. 59-78). New York: Wiley.

Sperling, R. A., Howard, B. C., Staley, R., & Dubois, N. (2004). Metacognition and self-regulated learning constructs. Educational Research and Evaluation, 10(2), 117-139.

Sternberg, R. J. (2001). Metacognition, abilities, and developing expertise: What makes an expert student? In H. J. Hartman (Ed.), Metacognition in learning and instruction (pp. 229-260). Dordrecht, Netherlands: Kluwer.

Taylor, B. M., & Mosbush, L. (1983). Oral reading for meaning: A technique for improving word identification skills. The Reading Teacher, 37, 234-237.

Thiede, K. W., Anderson, M. C., & Therriault, D. (2003). Accuracy of metacognitive monitoring affects learning of texts. Journal of Educational Psychology, 95(1), 66-73.

Weinstein, C. E., & Mayer, R. E. (1986). The teaching of learning strategies. In M. C. Wittrock (Ed.), Handbook of research on teaching (3rd ed., pp. 315-327). New York: Macmillan.

Yates, J. F. (1990). Judgment and decision making. Englewood Cliffs, NJ: Prentice Hall.

Zimmerman, B. J., & Risemberg, R. (1994). Investigating self-regulated processes and perceptions of self-efficacy in writing by college students. In P. R. Pintrich, D. R. Brown, & C. E. Weinstein (Eds.), Student motivation, cognition, and learning: Essays in honor of Wilbert J. McKeachie (pp. 239-256). Hillsdale, NJ: Erlbaum.



Appendix: Weekly Monitoring Exercise Sheet

Research/Cognitive Development

Please indicate below your overall understanding of the content from today's class:

0% ______________________ 100%

• What concept(s) from today's class did you find difficult to understand?

• Specifically, what will you do to improve your understanding of the concept(s) you listed above?

1. Experimental research requires which one of the following?

A. Manipulating an aspect of the environment

B. Being able to predict two or more variables

C. Describing each variable in considerable detail

D. Studying behavior in an actual classroom environment

0% Accurate ______________________ 100% Accurate

 

2. Mr. Johnson teaches a class of twenty 8-year-old third graders. His goal for the upcoming school year is to help at least 50% of his students reach formal operations. Judging from Piaget's theory, we would expect that Mr. Johnson's goal is:

A. An easy one to attain

B. Almost impossible to attain

C. Attainable only if he emphasizes abstract reasoning throughout the school year

D. Attainable only if his students have had enriched educational experiences most of their lives

0% Accurate ______________________ 100% Accurate

 

3. From a Vygotskian perspective, scaffolding serves what purpose in instruction?

A. It gives students an idea of what they need to do to get good grades

B. It keeps school tasks within students' actual developmental level

C. It lets students learn by watching one another

D. It supports students as they perform difficult tasks

0% Accurate ______________________ 100% Accurate
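
Note: The exercise sheet itself does not prescribe a scoring procedure. For readers who wish to quantify monitoring accuracy from responses such as those above, the sketch below illustrates one common index from the calibration literature, absolute accuracy: the mean absolute deviation between item-level confidence and item-level performance. The function name and the sample responses are illustrative assumptions of ours, not part of the original instrument or the study's analysis.

    # A minimal scoring sketch (our illustration, not part of the original
    # instrument). Confidence uses the sheet's 0%-100% scales; performance
    # codes each item 1 (correct) or 0 (incorrect).

    def absolute_accuracy(confidences, correct):
        """Return mean |confidence - performance|; 0.0 means perfect calibration."""
        deviations = [abs(c / 100.0 - k) for c, k in zip(confidences, correct)]
        return sum(deviations) / len(deviations)

    # Hypothetical responses to the three items above: a student rates the
    # answers 80%, 60%, and 90% accurate and answers items 1 and 3 correctly.
    print(absolute_accuracy([80, 60, 90], [1, 0, 1]))  # -> approximately 0.30

Under this index, overconfidence and underconfidence both increase the score; dropping the absolute value yields a signed bias measure that distinguishes the two.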


 