Individual Differences Related to College Students’ Course Performance in Calculus II

Sara A. Hart
Department of Psychology
Florida Center for Reading Research
Florida State University
hart@psy.fsu.edu

Mia Daucourt
Department of Psychology, Florida State University

Colleen M. Ganley
Florida Center for Research in Science
Technology, Engineering and Math, Learning Systems Institute
Department of Psychology, Florida State University

ABSTRACT. In this study, we explore student achievement in a semester-long flipped Calculus II course, combining predictor measures related to student attitudes (math anxiety, math confidence, math interest, math importance) and cognitive skills (spatial skills, approximate number system), as well as student engagement with the online system (discussion forum interaction, time to deadline for submitting workshop assignments and peer grading, quiz attempts), in predicting final grades. Data from 85 students enrolled in a flipped Calculus II course were used in a dominance analysis to determine which predictors emerged as the most important for predicting final grades. Results indicated that feelings of math importance, approximate number system (ANS) ability, total amount of discussion forum posting, and time to deadline for grading peer workshop submissions formed the best combination of predictors of final grade, accounting for 17% of the variance in students’ final grades. The aim of this work was to determine which predictors are the most important in predicting student grades, with the end goal of building a recommendation system that could be implemented to help students in this traditionally difficult class. The methods used here could be applied to any class.

Keywords: Math performance, calculus, flipped classroom, math attitudes, cognitive performance, student engagement


1 INTRODUCTION

Over the past few years, the attrition of STEM-focused undergraduates in the United States has become a critical national concern, with “a substantial number of undergraduate students initially enrolled in STEM degree programs [dropping] out in the first two years” (PCAST, 2012). In order to maintain high-quality instructional practices in the face of STEM undergraduate attrition, growing demand for large numbers of graduates, and diminishing financial and human resources, many colleges and universities are turning to technologically aided teaching practices. This technological shift aims to integrate the traditional classroom environment with online course resources to enhance, replace, and supplement face-to-face instruction to reach more students in a cost-effective way (Garrison & Kanuka, 2004).

To accommodate this shift toward technologically aided teaching methods, schools have implemented campus-wide Learning Management Systems (LMS), such as Moodle and Blackboard. These web-based management systems provide data related to the “user,” with every action of the student tracked and recorded online. The underlying advantage of these LMS data is that they unobtrusively record individual student activity and interaction with course materials in real-time, providing a lens into traditionally unobservable learning-related behaviours and silently tracking individual students’ learning progression (Macfadyen & Dawson, 2010; Gašević, Dawson, & Siemens, 2015). In traditional college courses, many performance measures, for example midterm or final exams (Lee, Speglia, Ha, Finch, & Nehm, 2015; Macfadyen & Dawson, 2010), are taken too late in the semester to identify struggling students in time to prevent course failure. In contrast, the real-time nature of LMS data may provide early warning signs of potentially at-risk students, enabling the implementation of intervention efforts before failure becomes inevitable (Milne, Jeffrey, Suddaby, & Higgins, 2012; Gašević et al., 2015). By moving away from summative assessments such as exams, and providing visualizable, real-time information about individual aspects of student engagement and learning (Cocea & Weibelzahl, 2009), the use of LMS data, also called “academic analytics” (Goldstein & Katz, 2005; Macfadyen & Dawson, 2010), allows educators to monitor individual academic progress step-by-step (Bienkowski, Feng & Means, 2012).

With this study, we sought to determine the best individual differences predictors of student performance in a flipped Calculus II course. This course required students to interact with an LMS outside of class by watching videos of course content lectures and doing workshops and quizzes, as well as come to class for guided problem solving. The course performance predictors we used included attitudinal and cognitive factors commonly found to be associated with math achievement in the education and psychology literatures (Reyes & Stanic, 1988; Randhawa, Beamer, & Lundberg, 1993; House, 1993; House, 1995; Wigfield & Eccles, 2000; Maloney & Beilock, 2012). Beyond these factors, we also incorporated LMS online-usage data as course performance predictors, to capture student engagement and disengagement with the online portion of the course (Cocea & Weibelzahl, 2009; Tempelaar, Rienties, & Giesbers, 2015; Gašević et al., 2015). This study represents a step toward using learning analytics in combination with cognitive and attitudinal variables to inform potential recommendation systems based on predictive models to facilitate individualized learning and bolster student success (Tempelaar et al., 2015).

1.1 Attitudinal and Cognitive Factors Related to Math Achievement

There is a rich history of examining individual differences predictors of math achievement. For example, recently a study found that learning-related emotions were strong predictors of undergraduate student exam scores in a math and statistics course (Niculescu et al., 2015). In general, research across this area has found that attitudinal variables are typically weakly to moderately correlated with math achievement (e.g., meta-analyses for math anxiety and math achievement found r = –.34 for grades 5–12, Hembree, 1990; r = –.27 for grades 4–12, Ma, 1999). Critical attitudinal variables related to math that have been examined in past research include math anxiety, math confidence, math interest, and math importance (e.g., Wigfield & Eccles, 2000). Math anxiety is defined as a fear or an adverse emotional response to the idea of doing math. Math anxiety has been negatively linked to math achievement for several potential reasons, including the possibility that it acts as a proxy for poor math ability, that the worry it generates consumes working-memory resources in math testing situations, or that it leads to math avoidance, which results in poor math achievement (Ashcraft, 2002; Ashcraft & Krause, 2007; Ganley & Vasilyeva, 2014; Hembree, 1990; Ho et al., 2000; Ma & Xu, 2004; Maloney & Beilock, 2012). On the other hand, math confidence, which we conceptualize as including what others term expectations of success, academic self-concept, and perceived competence, has been related to better math outcomes (House, 1993; House, 1995; Randhawa et al., 1993; Reyes & Stanic, 1988). Math interest (Köller, Baumert, & Schnabel, 2001; Marsh, Trautwein, Lüdtke, Köller, & Baumert, 2005) also positively relates to math achievement because it may increase engagement, as students are motivated to engage with material in which they are interested (Cocea & Weibelzahl, 2009). This relation is found across educational levels and with multiple different math content areas (Richardson, Abraham, & Bond, 2012). Math importance, or math utility value, is another critical variable to consider, as research suggests that when students do not see the value in learning math they have lower math achievement and are less likely to sign up for math courses (Meece, Wigfield, & Eccles, 1990; Simpkins, Davis-Kean, & Eccles, 2006).

Differences in math achievement have also been positively correlated with cognitive skills, including the Approximate Number System (ANS) and spatial abilities. The ANS is part of the non-symbolic number system; it is less precise than counting, improves with age, and allows us to represent numbers nonverbally in order to quickly distinguish between two quantities (Halberda & Feigenson, 2008). Although not without controversy, a review of the literature links the ANS to math achievement (r = 0.20, 95% confidence interval = [0.14, 0.26]; Chen & Li, 2014). Another cognitive skill believed to be associated with math achievement is spatial ability, or the ability to mentally represent and manipulate objects in space. In particular, mental rotation skill, or the ability to rotate objects in space, is consistently found to be related to math achievement (r = .35–.38 for female students, r = –.03–.54 for male students; Casey, Nuttall, Pezaris, & Benbow, 1995). Some experimental work has also shown that training in spatial skills can produce enhanced math test performance (Cheng & Mix, 2014) and improved grades in a math course (Sorby, Casey, Veurink, & Dulaney, 2013).

1.2 Factors Related to Achievement in Online Courses

Looking specifically at the online learning environment, research has revealed that positive learning dispositions bolster student engagement (Tempelaar, Niculescu, Rienties, Giesbers, & Gijselaers, 2012; Tempelaar et al., 2015), and that student engagement and frequency of participation strongly predict student performance (Chen & Jang, 2010; Davies & Graff, 2005; de Barba, Kennedy, & Ainley, 2016; Tempelaar et al., 2015; Kizilcec, Piech, & Schneider, 2013; Morris, Finnegan, & Wu, 2005) and course completion (Milne et al., 2012; Macfadyen & Dawson, 2010; Morris et al., 2005) in online courses. Although these results are not surprising, they are important because they suggest that we may be able to use LMS data to identify students who are not engaged with course material. This affords the opportunity to intervene through the online platform to re-engage students with the material. An early study exploring the value of LMS student engagement data was the Course Signals program implemented at Purdue University, in which students who received regular feedback about their likelihood of successful course completion, via an online colour-coded risk assessment system, had higher retention rates than controls (Arnold & Pistilli, 2012).

Student engagement with an LMS class can take different forms, including learner–content interaction or social interaction (i.e., learner–learner or learner–instructor interaction; Fulford & Zhang, 1993), and the studies analyzing LMS data have found inconsistent results about which form of engagement is most related to achievement. Some studies find that learner–content interaction is most important for student success. For example, Morris and colleagues (2005) linked learner–content interaction, measured by both frequency and duration of content viewing, to student achievement, concluding that higher performing students are more likely to dedicate their time to viewing relevant content rather than creating their own original content. Similarly, Ramos and Yudko (2008) found that the total number of user hits on an LMS positively predicted exam outcomes, but discussion board participation, including the number of posts contributed and the number of posts read, did not. On the other hand, Gašević, Dawson, Rogers, and Gašević (2016) found that time spent on assignments was actually negatively associated with achievement, suggesting that students who spend more time on assignments specifically may be using their resources inefficiently or trying to make up for a lack of comprehension.

Other work suggests that learner–learner interaction is the more important form of student engagement for student achievement. For example, multiple studies in different STEM content areas have found a positive relation between course performance and both the number of discussion posts contributed by a student (Huon, Spehar, Adam, & Refkin, 2007; Macfadyen & Dawson, 2010) and the quality of those posts (Romero, Lopez, Luna, & Ventura, 2013), suggesting that successful students are more likely to use online resources to facilitate learning through social engagement with peers. Conversely, students who infrequently participate in discussion forums and show patterns of disengagement are more likely to fail (Milne et al., 2012; Romero et al., 2013). The strength of the association ranges from weak (Lauría, Baron, Devireddy, Sundararaju, & Jayaprakash, 2012; Chanlin, 2012) to strong (Macfadyen & Dawson, 2010), and mere participation in discussion does not significantly distinguish students with the highest grades from average performers (Davies & Graff, 2005). Furthermore, discussion participation matters most for students in danger of failing (Davies & Graff, 2005), most likely because the feeling of being supported makes struggling students more likely to persist (Romero et al., 2013).

The rise in computer-assisted classes represents a push toward learner-controlled and learner-centred learning environments, in which students are required to make decisions about how and to what extent they engage with class materials (Lust, Elen, & Clarebout, 2013), but a majority of students fail to utilize resources effectively (Lust, Juarez-Collazo, Elen, & Clarebout, 2012; Lust, Vandewaetere, Ceulemans, Elen, & Clarebout, 2011). Evidence shows that the degree to which students engage in the learning process through effective online tool use is associated with their learning outcomes, even when accounting for differences in cognitive ability (Shute & Gluck, 1996). Thus, pinpointing the factors that underlie student engagement with learning materials is essential for understanding why students succeed or fail in an online course (Tempelaar et al., 2012).

Time management is defined as the ability to prioritize time in order to fulfill learning demands, complete tasks, and modify plans to accommodate any changes in time or demands (Jo, Park, Yoon, & Sung, 2016). Time management has been shown to be a significant predictor of online achievement (Jo et al., 2016; Kwon, 2009; Choi & Choi, 2012), although the association may be weak (r = .14, p = .00; Broadbent & Poon, 2015). Other studies have found direct effects of time management on achievement (Jo & Kim, 2013; Loomis, 2000). Conversely, poor time management, or procrastination, negatively influences online participation (Michinov, Brunot, Le Bohec, Juhel, & Delaval, 2011; Balduf, 2009), with the students least likely to participate also demonstrating the poorest performance (Michinov et al., 2011).

Past educational research has consistently shown that the best predictor of performance is performance itself, and the learning analytics field is no different (Tempelaar et al., 2015). In studies that include undergraduates taking LMS-supplemented courses, the strongest predictor of final exam grades, when accounting for a variety of student dispositional and engagement variables, was grades on formative quizzes (Huon et al., 2007; Tempelaar et al., 2015). In addition, early quiz grades have the same predictive power as later quiz grades, so these formative assessments can be used to create recommendations as well as to predict final exam grades (Wolff, Zdrahal, Nikolov, & Pantucek, 2013). Practice quizzes are often specifically designed as exam review tools, so the association between quiz grades and final exam grades is not surprising. However, the frequency of use of practice quizzes, and not just the grades, also has predictive value (Huon et al., 2007).

Thus, several potential factors related to LMS engagement have predicted subsequent achievement, but there are also some inconsistencies. These divergent results may be attributable to differences in contexts (i.e., school size, class type), outcome variables (i.e., continuous grades or pass/fail), choice of covariates, and prediction techniques, which challenge the generalizability of study results (Gašević et al., 2016). This study will use dominance analysis to determine which facets of online LMS activity, in conjunction with individual student cognitive and affective factors, are important for student success in a flipped Calculus II course. By focusing on predictors of performance, we hope to create recommendations tailored specifically to future Calculus II classes, which will help overcome the challenges of generalizability.

1.3 The Present Study

Although a substantial literature supports the reviewed attitudinal and cognitive correlates of math achievement, as well as LMS activity data as a proxy for student engagement, in predicting student course achievement, our study is unique in its varied, multi-component approach. As the present data come from a flipped course, we have both the attitudinal and cognitive factors and the student engagement factors, and we use both sources of information to predict student performance in the flipped Calculus II course. We build upon the “dispositional learning analytics infrastructure,” which proposes that learning attitudes can be gathered through self-report and combined with LMS activity data (Buckingham Shum & Deakin Crick, 2012). To this, we add cognitive factors to predict student course performance.

We will build on previous studies by exploring student achievement in a semester-long, flipped-classroom Calculus II course, combining various predictor variables often found to be related to math achievement, spanning student attitudes (math anxiety, math confidence, math interest, math importance) and cognitive skills (mental rotation, approximate number system), as well as student engagement with the online system (discussion forum interaction, time of submission of workshop assignments, time of submission of peer reviews, quiz attempts), in predicting final grades in the class. We began this project with the intention of using these data to build a recommendation system into the course platform, to provide feedback to students about actions they can take to improve their likelihood of success in the class. As such, we were most interested in which factors best predicted final grades so that we could target those factors. Therefore, we will use a methodological approach called dominance analysis to rank order the relative importance of the attitudinal, cognitive, and LMS activity predictors of students’ final grades in Calculus II. This will allow us to determine which predictors emerge as the most important.

2 METHOD

2.1 Participants

Participants were 85 students who completed a flipped Calculus II course in Spring 2014. About 43% of the students were female. Approximately 87% of students were white, 7% were Black or African American, 2% were Asian, 4% were other, and 17% were Hispanic/Latino. About 51% of students were first year students, 22% were sophomores, 21% were juniors, and 7% were seniors. The students were on average 19.75 years old (SD = 1.90, range = 17.75–30.00). Most students were majors in STEM fields that required Calculus II to complete the major.

In the course, students used the online course platform (WEPS) to watch lecture videos of course content outside of class time and then solved problems with the professor and other students during class time. Each course topic had three lecture videos filmed by the course instructor, corresponding to three difficulty levels (gentle, normal, rigorous), and each video was approximately 5 to 25 minutes in length. All teaching content was available to students at all times, although graded items had specific time frames of availability based on due dates. Students were not required to watch the videos before class, but were highly encouraged to do so.

2.2 Measures

Measures for this study were obtained from multiple sources: in-person data collection (the mental rotation measure of spatial skills); a Qualtrics survey (math anxiety, math confidence, math interest, math importance, approximate number system); log data from the online course system (time to deadline for online workshops, time to deadline for grading other students’ workshops, online quiz attempts, active forum interaction, passive forum interaction); and class records (course grade). All attitudinal and cognitive measures used are open source and freely available, other than the mental rotation measure. All attitudinal and cognitive measures were included in the course specifically for research purposes. Participants completed informed consent procedures and a pencil-and-paper mental rotation task in person at the start of one of their class periods at the beginning of the Spring 2014 term. Our final n = 85 represents the number of students who completed the course and received a final grade, and who consented to be part of the study and to our use of their online course data and class records. Of these 85 students, one did not complete the online Qualtrics survey.

2.2.1 Course grade

A student’s final course grade (0–100) in Calculus II was used as the outcome variable. This grade was a weighted average of the scores on the workshops (30% of grade), scores on the two course exams (30%), and score on the final exam (40%). Quiz grades were also used as bonus points (up to 3% on the final grade).
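For concreteness, a minimal sketch of this weighting follows; capping the grade at 100 and the exact handling of the quiz bonus are our assumptions rather than documented course policy.

```python
def final_grade(workshop_avg: float, exam_avg: float, final_exam: float,
                quiz_bonus: float = 0.0) -> float:
    """Weighted course grade (0-100): workshops 30%, the two course exams 30%,
    the final exam 40%, plus up to 3 bonus points from quizzes (assumed capped)."""
    grade = 0.30 * workshop_avg + 0.30 * exam_avg + 0.40 * final_exam
    return min(grade + min(quiz_bonus, 3.0), 100.0)

# Example: 0.3*90 + 0.3*80 + 0.4*85 + 2 bonus points = 87.0
print(final_grade(90, 80, 85, quiz_bonus=2))
```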

2.2.2 Math anxiety

Students’ math anxiety was measured with the Math Anxiety Rating Scale–Revised (MARS–R; Plake & Parker, 1982). This scale has 24 items, rated on a five-point scale from “not at all” to “very much,” on which students indicate the amount of anxiety they feel in different situations (e.g., looking through the pages in a math text). The internal consistency (α) for the scale was .95.

2.2.3 Math confidence

Students’ math confidence was measured with the 12-item confidence subscale of the Fennema-Sherman Math Attitudes Scales (Fennema & Sherman, 1976). Items were rated on a seven-point scale from Strongly disagree to Strongly agree (e.g., I am sure I could do advanced work in mathematics). The internal consistency (α) for the scale was .92.

2.2.4 Math interest

We measured math interest with four items adapted from Wigfield and Eccles (2000), from the Educational Longitudinal Study (Ingels et al., 2007), and from the Early Childhood Longitudinal Study (Tourangeau, Nord, Le, Pollack, & Atkins-Burnett, 2006). Items were rated on a seven-point scale from Strongly disagree to Strongly agree. Sample items are “I like math” and “I find working on math assignments to be very interesting.” The internal consistency (α) for the scale was .91.

2.2.5 Math importance

We measured math importance with six items, two of which are adapted from Wigfield and Eccles (2000) and four of which are researcher-developed (e.g., What I learn in math is useful). Items were rated on a seven-point scale from Strongly disagree to Strongly agree. The internal consistency (α) for the scale was .91.

2.2.6 Mental rotation

The Mental Rotation Test (Vandenberg & Kuse, 1978) was administered to students. The test consists of 24 items divided into two blocks. Each item includes a picture of a three-dimensional object presented on the left (i.e., the target object). On the right, there are four other pictures of three-dimensional objects. Two of them depict objects identical to the target, only presented from a different perspective. The other two depict either a mirror image of the target or an object with slightly different features. Students are asked to identify which two of the four objects are the same as the target. Using the standard scoring for this measure, students received 1 point only in those cases when both of their choices were correct. They received 0 points for any other type of response (e.g., if they selected one correct and one incorrect choice) to help account for guessing. Internal consistency (Cronbach’s alpha) for the measure was .89.

2.2.7 Approximate number system (ANS)

The Approximate Number System (ANS), or intuitive recognition of number, was measured using an online test on the Panamath website (Halberda, Mazzocco, & Feigenson, 2008). This test measures the ability of an individual to non-verbally represent numbers, or understand and manipulate numerical quantities non-symbolically (Halberda et al., 2008; Halberda & Feigenson, 2008; Libertus & Brannon, 2010). In the task, participants were shown brief displays (600 milliseconds) of intermixed blue and yellow dots, with five to 20 dots per colour, and asked to determine whether there were more blue (by pressing the “b” key) or yellow dots (by pressing the “y” key). In total, participants were given 120 trials of various ratios of dot quantities (~5–7 minutes of testing time), and accuracy and response time for each trial were recorded. Panamath then calculates the participant’s Weber fraction (w-score), which represents the smallest ratio that a given individual can accurately discriminate, and reports it in a hyperlinked .pdf. Participants were asked to provide the hyperlink, from which the w-score was obtained. Because of the additional steps of navigating to a different website and then copying the hyperlink into Qualtrics, 13 participants did not report w-score data, leaving n = 71 participants with ANS data. Additionally, upon inspecting the initial data, four data points were deemed to be outliers (w-scores greater than .85, where the next highest score was .44), so these scores were set to missing, as scores this far out of range likely reflect participants not accurately completing the task.
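For readers unfamiliar with the w-score, the sketch below fits a Weber fraction to trial-level data under the linear-ratio model commonly used for this task (e.g., Halberda et al., 2008), in which the probability of a correct response when comparing quantities n1 > n2 is Φ((n1 − n2) / (w·√(n1² + n2²))). This is an illustration with simulated trials, not Panamath’s actual implementation.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def fit_weber_fraction(n1, n2, correct):
    """Fit a Weber fraction w by maximum likelihood, assuming Gaussian number
    representations with scalar variability (the linear-ratio model)."""
    n1 = np.asarray(n1, dtype=float)
    n2 = np.asarray(n2, dtype=float)
    correct = np.asarray(correct, dtype=bool)

    def neg_log_lik(w):
        # P(correct) when comparing n1 > n2 under acuity w
        p = norm.cdf((n1 - n2) / (w * np.sqrt(n1**2 + n2**2)))
        p = np.clip(p, 1e-9, 1 - 1e-9)  # guard against log(0)
        return -(np.log(p[correct]).sum() + np.log(1 - p[~correct]).sum())

    return minimize_scalar(neg_log_lik, bounds=(0.01, 2.0), method="bounded").x

# Simulated session: 120 trials, 5-20 dots per colour, true w = 0.20
rng = np.random.default_rng(0)
a, b = rng.integers(5, 21, 120), rng.integers(5, 21, 120)
n1, n2 = np.maximum(a, b), np.minimum(a, b)
keep = n1 > n2                              # drop equal-quantity trials
n1, n2 = n1[keep], n2[keep]
p = norm.cdf((n1 - n2) / (0.20 * np.sqrt(n1**2 + n2**2)))
correct = rng.random(len(p)) < p
print(f"estimated w = {fit_weber_fraction(n1, n2, correct):.2f}")
```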

2.2.8 Online forum interaction

The online learning platform (WEPS) included forums in which students could ask, answer, and read questions about the course material. We created two different scores from forum information. For “discussion forum posting,” we scored the number of times students actively interacted with the forum by counting each time they wrote a post, including when they wrote the original post or responded to another student’s post. Upon looking at the initial data, 1 data point was deemed to be an extreme outlier (corresponding to 101 posts, versus the next highest of 36 posts), so we set this score to missing. For “discussion forum viewing,” we also scored the number of times students passively interacted with the forum by counting the number of times they viewed (but did not contribute) content on the forum.

2.2.9 Time to deadline for online workshops

As part of the course, students had to complete and submit 13 workshops, which were essentially homework problem sets, over the course of the semester. We created a score that represented the average number of hours remaining before the deadline at the time they submitted each assignment.
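To illustrate how this score can be derived from LMS logs, here is a minimal pandas sketch; the log layout and column names are hypothetical.

```python
import pandas as pd

# Hypothetical log extract: one row per workshop submission
logs = pd.DataFrame({
    "student_id": [101, 101, 102],
    "deadline":  pd.to_datetime(["2014-02-01 23:59", "2014-02-15 23:59",
                                 "2014-02-01 23:59"]),
    "submitted": pd.to_datetime(["2014-01-30 18:00", "2014-02-15 20:30",
                                 "2014-02-01 23:00"]),
})

# Hours remaining at submission, averaged per student across workshops
logs["hours_to_deadline"] = (
    (logs["deadline"] - logs["submitted"]).dt.total_seconds() / 3600
)
print(logs.groupby("student_id")["hours_to_deadline"].mean())
```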

2.2.10 Time to deadline for grading other students’ workshops

Students also had to grade the workshop assignments of five other students in their class for each workshop. They were given a specific window of time to do this and, again, we coded the average number of hours remaining before the deadline at the time they submitted the graded workshop assignments. Their accuracy in grading made up 20% of their grade for each workshop.

2.2.11 Online quiz attempts

Students took seven online quizzes over the course of the semester, and they were allowed to take each quiz an unlimited number of times until they were happy with their grade. Because of this, quiz grades were less meaningful as a predictor (and were also included in the calculation of the final grade), so we instead examined the mean number of attempts across the seven quizzes. Upon inspecting the initial data, one data point was deemed to be an extreme outlier (corresponding to a mean of 8.34 submissions per quiz, versus the next highest of 3.15 submissions per quiz), so we set this score to missing.

3 RESULTS

Descriptive statistics are presented in Table 1. One variable was positively skewed (skew > 2.00, with no obvious outliers; see Tabachnick & Fidell, 2013) and was therefore log-transformed for use in all remaining analyses (skew was 0 after log-transformation). Following calculation of descriptive statistics, Proc MI (i.e., multiple imputation; Rubin, 1987) was used to fill in the missing data for each predictor. We took this step because dominance analysis, the main analysis, drops cases listwise when there are missing data. Using Proc MI, 1,000 imputed datasets were generated with plausible values for each missing data point, after which we averaged the 1,000 imputed values to obtain the predictor value for each missing data point. This meant that every individual had a complete dataset (i.e., no missing data) in all subsequent analyses.
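The authors ran Proc MI in SAS; for readers working outside SAS, the sketch below is a rough open-source analogue of the averaging step described above (the helper name and the use of scikit-learn’s IterativeImputer are our choices, not the authors’). Note that averaging imputations yields single point values, which suits prediction, but discards the between-imputation variance that Rubin’s-rules pooling would preserve for inference.

```python
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

def impute_by_averaging(X: pd.DataFrame, n_imputations: int = 1000) -> pd.DataFrame:
    """Draw many stochastic imputations and average them, so each missing
    cell is replaced by the mean of its plausible values."""
    stacked = np.zeros((n_imputations, *X.shape))
    for i in range(n_imputations):
        imputer = IterativeImputer(sample_posterior=True, random_state=i)
        stacked[i] = imputer.fit_transform(X)  # observed cells pass through
    return pd.DataFrame(stacked.mean(axis=0), columns=X.columns, index=X.index)
```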

Table 1: Descriptive Statistics
Variable na M SD Min. Max. Possible range skew
Math anxiety 84 1.97 0.63 1.00 3.78 1–5 0.53
Math confidence 84 5.56 0.94 3.00 7.00 1–7 –0.72
Math interest 84 5.14 1.30 1.00 7.00 1–7 –0.97
Math importance 84 5.53 1.09 1.83 7.00 1–7 –1.03
Mental rotation 85 12.66 5.12 1.00 23.00 0–24 0.04
ANS 67 0.20 0.06 0.13 0.44 0–∞ 1.48
Discussion forum posting 77 8.75 8.86 0.00 36.00 0–∞ 1.47
Discussion forum viewing 78 193.60 196.53 19.00 978.00 0–∞ 2.21/0.00b
Workshop submissions (hrs) 78 164.00 10.84 138.12 197.83 0–varying 0.64
Grading peer workshop submissions (hrs) 78 69.44 21.69 23.27 128.04 0–varying 0.84
Quiz attempts 77 1.63 0.45 1.00 3.15 0–∞ 1.42
Final grade 85 84.38 11.84 33.28 100.00 0–100 –1.33
a n is reported as total cases available for each variable, out of a total possible n = 85. b skew is reported as before log-transformation/after log-transformation.

Pearson correlations among the measures are in Table 2. For the most part, correlations with final grade were low to moderate in magnitude, with only the correlations of math confidence and discussion forum posting with final grade being statistically significant. A sensitivity analysis conducted using G*Power suggested that we were only powered to detect a statistically significant correlation coefficient of r = .26 or greater with our sample size of 85, an alpha-error probability of .05, and power of .80. Therefore, we advise that the magnitude, and not statistical significance, be considered. There are four correlation coefficients, for example, that are .19 or higher in magnitude but are not statistically significant (math importance, ANS, discussion forum viewing, grading peer workshop submissions).
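As a rough cross-check on the reported sensitivity analysis, the minimum detectable correlation can be approximated with the Fisher z transformation; this is an approximation of ours, not G*Power’s exact bivariate-normal routine, and the one-tailed version of it comes closest to the reported r = .26.

```python
import numpy as np
from scipy.stats import norm

def min_detectable_r(n: int, alpha: float = 0.05, power: float = 0.80,
                     two_tailed: bool = False) -> float:
    """Smallest correlation detectable at the given alpha and power, using
    the Fisher z approximation: atanh(r) ~ Normal(atanh(rho), 1/(n - 3))."""
    z_alpha = norm.ppf(1 - alpha / 2) if two_tailed else norm.ppf(1 - alpha)
    z_power = norm.ppf(power)
    return float(np.tanh((z_alpha + z_power) / np.sqrt(n - 3)))

print(round(min_detectable_r(85), 2))                    # ~0.27 (one-tailed)
print(round(min_detectable_r(85, two_tailed=True), 2))   # ~0.30 (two-tailed)
```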

Table 2: Pearson Correlations among All the Measures
  1 2 3 4 5 6 7 8 9 10 11
1. Math anxiety 1                    
2. Math confidence –.43* 1                  
3. Math interest –.24* .66* 1                
4. Math importance .17 .58* .65* 1              
5. Mental rotation –.05 –.05 .10 .07 1            
6. ANS –.13 –.03 .21 .07 .04 1          
7. Discussion forum posting –.02 .15 .28* .08 –.06 –.21* 1        
8. Discussion forum viewing .16 .03 .15 .04 –.10 –.28* .54* 1      
9. Workshop submissions –.18 .03 .06 –.09 –.10 .09 .10 .11 1    
10. Grading peer workshops .02 .14 –.13 –.24* –.36* .01 .14 .12 .26* 1
11. Quiz attempts .14 .10 .01 –.11 –.25* –.19 –.05 .06 .17 .33* 1
12. Final grade .02 .24* .15 .19 –.03 –.21 .24* .19 .01 .19 .09
* p<0.05.

As a first step, we used all-subsets regression to reduce the number of predictor variables entered into the dominance analysis (as dominance analysis is computationally intensive and limited to a maximum of 10 predictor variables). All-subsets regression computes R² values for all possible set sizes of predictors (i.e., 1 predictor, 2 predictors, etc.) regressed onto final grade, and then rank orders the obtained R² values from highest to lowest within set size (Miller, 2002). Once the highest R² value for each set size was obtained, we used a “diminishing returns” technique to avoid overfitting the dominance analysis, mirroring a method laid out in Speece et al. (2010). Using this approach, we determined when going from a set of n predictors to a set of n + 1 predictors would not yield an important increase in R². The models with 1 to 11 predictors yielded maximum R² values of .06, .10, .15, .17, .17, .17, .18, .18, .18, .18, and .18, respectively. We determined that after set size four there were diminishing returns, in that each set size increase beyond four explained at most 1% more variance than the previous set. Table 3 displays the top 10 four-predictor models by total R². As Table 3 shows, a single model accounted for the highest proportion of variance in final grade (17%), comprising math importance, ANS, discussion forum posting, and grading peer workshop submissions. These four predictors were moved forward into the dominance analysis.
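A minimal sketch of this all-subsets search follows, assuming the predictors sit in a pandas DataFrame X and the final grades in a Series y (names and layout are our assumptions); with 11 candidate predictors it fits 2¹¹ − 1 = 2,047 models.

```python
import itertools
import pandas as pd
import statsmodels.api as sm

def best_r2_by_set_size(X: pd.DataFrame, y: pd.Series):
    """All-subsets regression: for each set size k, the highest R^2 over
    every k-predictor model of y, and the predictors achieving it."""
    best = {}
    for k in range(1, X.shape[1] + 1):
        for subset in itertools.combinations(X.columns, k):
            r2 = sm.OLS(y, sm.add_constant(X[list(subset)])).fit().rsquared
            if k not in best or r2 > best[k][0]:
                best[k] = (r2, subset)
    return best

# "Diminishing returns": grow the set only while the max R^2 rises appreciably
# best = best_r2_by_set_size(X, y)
# gains = {k: best[k][0] - best[k - 1][0] for k in best if k > 1}
```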

Table 3: Top 10 Models of Predictors of Calculus II Final Grade for Set Size Four
Predictors R2 Adj R2
Math importance, ANS, discussion forum posting, grading peer workshop submission .17 .12
Math importance, ANS, discussion forum viewing, grading peer workshop submission .16 .11
Math interest, math importance, ANS, grading peer workshop submission .15 .11
Math confidence, math importance, ANS, grading peer workshop submission .15 .11
Math importance, mental rotation, ANS, grading peer workshop submission .15 .11
Math anxiety, math importance, ANS, grading peer workshop submission .15 .11
Math importance, ANS, workshop submission, grading peer workshop submission .15 .11
Math importance, ANS, grading peer workshop submission, quiz attempts .15 .10
Math confidence, ANS, discussion forum posting, grading peer workshop submission .15 .10
Math interest, ANS, discussion forum posting, grading peer workshop submission .14 .10

Dominance analysis was used to rank order the predictors by their contribution to final grade. Dominance analysis uses bootstrapping to compute total and unique R² for all possible combinations of the entered predictors of the outcome variable, here final grade in Calculus II. In dominance analysis, a series of regression models, called “subset models,” are run and used as a whole to determine the dominance order of the predictor variables. The total number of regression models run follows a combinatorial rule (Hays, 1994): with p predictors there are 2ᵖ − 1 subset models. As we had four predictors, we ran 15 different models: four one-predictor models, six two-predictor models, four three-predictor models, and one four-predictor model. This was done using a macro in SAS 9.4 (Azen & Budescu, 2003).

There are three types, or levels, of dominance: complete, conditional, and general (Azen & Budescu, 2003; Budescu, 1993). Under the strictest criterion, complete dominance, a predictor variable is considered completely dominant over another predictor variable if it contributes more unique variance to final grade than that predictor in the pairwise comparison and in every possible subset model in which the two compete. That is, a predictor completely dominates another in its association with final grade when it adds more unique variance to final grade across all possible subset models. Conditional dominance is a weaker form of dominance than complete dominance: a predictor variable is considered conditionally dominant over another when its average unique contribution to final grade is greater within each model size (i.e., averaging across all subset models with the same number of predictors). General dominance is weaker still: it is achieved when a predictor variable’s unique variance accounted for, averaged across all possible subset models, is greater than another predictor variable’s. Establishing complete dominance is ideal but often undeterminable (i.e., uniquely out-predicting all other variables in every subset model is difficult to achieve), so the weaker forms are then used to rank order predictors (Azen & Budescu, 2003). If a variable shows complete dominance, it logically also shows conditional and general dominance, and so on.
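The authors used the SAS 9.4 macro of Azen and Budescu (2003); to make the criteria concrete, the sketch below re-implements the weakest criterion, general dominance, in Python (variable names are ours). The size-k averages computed along the way are the conditional dominance values described above.

```python
import itertools
import numpy as np
import pandas as pd
import statsmodels.api as sm

def general_dominance(X: pd.DataFrame, y: pd.Series) -> pd.Series:
    """General dominance weights: each predictor's incremental R^2, averaged
    first within each subset size (the conditional dominance values) and then
    across sizes. The weights sum to the full-model R^2."""
    def r2(cols):
        if not cols:
            return 0.0
        return sm.OLS(y, sm.add_constant(X[cols])).fit().rsquared

    weights = {}
    for p in X.columns:
        others = [q for q in X.columns if q != p]
        by_size = []
        for k in range(len(others) + 1):
            incs = [r2(list(s) + [p]) - r2(list(s))
                    for s in itertools.combinations(others, k)]
            by_size.append(np.mean(incs))   # conditional dominance at size k
        weights[p] = np.mean(by_size)       # average across sizes -> general
    return pd.Series(weights).sort_values(ascending=False)
```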

Table 4 presents the total and unique R² values for each variable, or variable combination, in each subset model. The subset models with one predictor accounted for 4–6% of the variance in final grade, the subset models with two predictors accounted for 8–9%, the subset models with three predictors accounted for 11–15%, and the subset model with all four predictors accounted for 17% (see the R² column in Table 4). The columns on the far right of Table 4 represent the additional unique variance each variable would account for, in the presence of the other predictors in the model, if it were added to the model. For example, the subset model with only math importance accounted for 4% of the variance in Calculus II final grade. After controlling for the variance attributable to math importance, ANS would have accounted for an additional 5% of the variance in final grade if it were added as a second predictor, discussion forum posting would have accounted for an additional 5%, and grading peer workshop submissions would have accounted for an additional 6%.

Table 4: R² Contributions across Subset Models of Predictors of Calculus II Final Grade

Subset model                                                                    R²    1.    2.    3.    4.
Null model with zero predictors                                                 .00   .04   .04   .06   .04
Models with 1 predictor
1. Math importance                                                              .04   –     .05   .05   .06
2. ANS                                                                          .04   .04   –     .04   .04
3. Discussion forum posting                                                     .06   .03   .03   –     .02
4. Grading peer workshop submissions                                            .04   .06   .05   .05   –
1-predictor average                                                                   .04   .04   .05   .04
Models with 2 predictors
Math importance-ANS                                                             .09   –     –     .03   .06
Math importance-Discussion forum posting                                        .09   –     .03   –     .04
Math importance-Grading peer workshop submissions                               .09   –     .05   .04   –
ANS-Discussion forum posting                                                    .08   .03   –     –     .03
ANS-Grading peer workshop submissions                                           .08   .07   –     .03   –
Discussion forum posting-Grading peer workshop submissions                      .08   .05   .03   –     –
2-predictor average                                                                   .05   .04   .03   .04
Models with 3 predictors
Math importance-ANS-Discussion forum posting                                    .12   –     –     –     .05
Math importance-ANS-Grading peer workshop submissions                           .15   –     –     .02   –
Math importance-Discussion forum posting-Grading peer workshop submissions      .13   –     .04   –     –
ANS-Discussion forum posting-Grading peer workshop submissions                  .11   .06   –     –     –
3-predictor average                                                                   .06   .04   .02   .05
Model with all 4 predictors
Math importance-ANS-Discussion forum posting-Grading peer workshop submissions  .17   –     –     –     –
Overall average                                                                       .05   .04   .04   .04
Note. The four far-right columns represent the unique variance each numbered predictor would contribute to the listed model if it were added as an additional variable; dashes mark predictors already in the model.

In each bootstrapped sample, a dominance value Dij is obtained for a given pair of predictors, Xi and Xj, which takes one of three values: 1, if Xi dominates Xj; 0, if Xj dominates Xi; and 0.5, if dominance cannot be established between the two predictors. As we used 1,000 bootstrapped samples, Dij_mean represents the expected level of dominance of Xi over Xj in the population. Finally, because dominance analysis uses this bootstrapping, it does not employ the more traditional multiple-regression approach of producing p-values to determine “significance.” Instead, the closer Dij_mean is to 1 or 0, the stronger the case for clear directional dominance, and the closer Dij_mean is to 0.5, the stronger the case for indeterminate dominance. Additionally, these analyses produce a “reproducibility” value, which represents the proportion of bootstrapped samples in which the given dominance pattern is produced. The closer the reproducibility value is to 1.00, the greater the robustness of the dominance results.
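A minimal sketch of this bootstrap follows, reusing the general_dominance helper from the earlier sketch; it covers the general criterion only, whereas the published macro also tracks complete and conditional dominance.

```python
import numpy as np
import pandas as pd

def bootstrap_dominance(X: pd.DataFrame, y: pd.Series, n_boot: int = 1000,
                        seed: int = 0) -> pd.DataFrame:
    """Dij_mean and reproducibility for each predictor pair, by recomputing
    general dominance weights on bootstrap resamples of the cases."""
    rng = np.random.default_rng(seed)
    point = general_dominance(X, y)  # point estimate from the earlier sketch
    pairs = [(i, j) for i in X.columns for j in X.columns if i < j]
    draws = {pair: [] for pair in pairs}
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))   # resample cases w/ replacement
        w = general_dominance(X.iloc[idx].reset_index(drop=True),
                              y.iloc[idx].reset_index(drop=True))
        for i, j in pairs:
            draws[(i, j)].append(1.0 if w[i] > w[j]
                                 else 0.0 if w[j] > w[i] else 0.5)
    rows = []
    for (i, j), d in draws.items():
        d = np.array(d)
        dij = 1.0 if point[i] > point[j] else 0.0 if point[j] > point[i] else 0.5
        rows.append({"Xi": i, "Xj": j, "Dij": dij, "Dij_mean": d.mean(),
                     "reproducibility": (d == dij).mean()})
    return pd.DataFrame(rows)
```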

Table 5 presents the Dij results, as well as Dij_mean (and its corresponding standard error), Pij (the proportion of samples where Dij = 1.0), Pji (the proportion of samples where Dij = 0.0), Pijno (the proportion of samples where Dij = 0.5), and the reproducibility value. As described in Azen and Budescu (2003), complete dominance is established if the additional contribution of a given predictor is higher than another predictor’s in all subset models where the two predictor variables compete head-to-head. Complete dominance was established for math importance over grading peer workshop submissions (as seen by Dij = 1.0, Dij_mean = .55), but there was not enough evidence for any other pairwise complete dominance to be established (as seen by Dij = 0.5). No pairs passed the test for conditional dominance, so the remaining pairs were ranked by the general dominance criterion (which, by definition, will always find dominance patterns, as it simply rank orders the total amount of variance accounted for, averaged across all subset models, from highest to lowest). The results indicate the following general dominance pattern: math importance > grading peer workshop submissions > ANS > discussion forum posting. Reproducibility values topped out at .66, suggesting that some caution should be taken in accepting the general dominance pattern, particularly considering Azen and Budescu’s (2003) recommendation to give more weight to the much stricter complete dominance.

Table 5: Results from the Dominance Analysis of All Pairwise Predictors across the Three Levels of Dominance, Predicting Calculus II Final Grade
Xi Xj Dij Dij_mean SE(Dij) Pij Pji Pijno Reproducibility
Complete                
Math importance ANS 0.5 .63 .42 .53 .26 .22 .22
Math importance Discussion forum posting 0.5 .56 .41 .41 .29 .30 .30
Math importance Grading peer workshop submissions 1.0* .55 .47 .50 .40 .10 .50
ANS Discussion forum posting 0.5 .43 .44 .32 .46 .23 .23
ANS Grading peer workshop submissions 0.5 .42 .43 .30 .47 .23 .23
Discussion forum posting Grading peer workshop submissions 0.5 .49 .39 .30 .32 .39 .39
Conditional                
Math importance ANS 0.5 .64 .43 .55 .27 .18 .18
Math importance Discussion forum posting 0.5 .56 .42 .41 .29 .30 .30
Math importance Grading peer workshop submissions 1.0 .55 .48 .51 .41 .08 .05
ANS Discussion forum posting 0.5 .43 .44 .33 .46 .21 .21
ANS Grading peer workshop submissions 0.5 .41 .44 .31 .50 .19 .19
Discussion forum posting Grading peer workshop submissions 0.5 .49 .40 .30 .33 .38 .38
General                
Math importance ANS 1.0* .66 .48 .66 .34 .00 .66
Math importance Discussion forum posting 1.0* .55 .50 .55 .45 .00 .55
Math importance Grading peer workshop submissions 1.0 .55 .50 .55 .46 .00 .55
ANS Discussion forum posting 1.0* .42 .49 .42 .58 .00 .42
ANS Grading peer workshop submissions 0.0* .40 .49 .40 .60 .00 .60
Discussion forum posting Grading peer workshop submissions 0.0* .49 .50 .49 .51 .00 .51
Note: i and j = variables that are competing; Dij_mean = average number of times variable i dominated variable j over all bootstrap samples; Pij = proportion of bootstrap samples in which i dominated j; Pji = proportion of bootstrap samples in which j dominated i; Pijno = proportion of bootstrap samples in which no dominance was established. Reproducibility is the proportion of bootstrap samples that replicated the reported effect. * indicates the highest level of dominance achieved and implies all subsequent levels of dominance are also achieved.

4 DISCUSSION

A substantial body of work has identified particular student attitudinal and cognitive factors related to math performance. Moreover, research examining student engagement variables using LMS activity data from online courses has grown. We sought to combine these research areas, determining which predictors, across student attitudinal and cognitive factors and indicators of student engagement in the online course, emerge as the most important in predicting final grades in a flipped Calculus II course. The goal of this work was to build a list of factors that predict performance in Calculus II, so that a (future) recommendation system can be built to provide feedback to students about actions they can take to improve their likelihood of success in the class.

Some might find it surprising that only two of our predictors were statistically significantly correlated with final grade. This reaction is certainly warranted, but our small sample size of 85 limited our ability to obtain statistically significant results. Seven of the 11 predictors had correlation coefficients over .15, and many of these had p-values between .05 and .08. This, coupled with the possibility of multicollinearity, is why we elected to use all-subsets regression, which takes into account variance explained rather than statistical significance. Our goal was to explain as much variance as possible in final grade, so we focused more on effect sizes than on statistical significance.

The all-subsets regression indicated that believing math is important, having a stronger approximate number system (ANS), contributing more discussion forum posts, and submitting peer grading of workshops earlier together represent the best combination of predictors of final grade while balancing against the added complexity of further predictors. These variables were then used in the dominance analysis. The results of the dominance analysis were not overly compelling for a specific ordering of the importance of the four final predictor variables in predicting final grade. Math importance was established as completely dominant over time of submission of grading peer workshop submissions. Otherwise, dominance was only established using the least strict form, general dominance. These results suggested that math importance was more important than time of submission of grading peer workshop submissions, which was more important than ANS ability, which was more important than the amount of discussion forum posting an individual did. Overall, we take these results to mean that, in general, these variables are each similarly predictive.

The predictor variable of math importance comes from a scale measuring thoughts about the usefulness of studying math. It is interesting to consider that students in Calculus II have already chosen a STEM-related college course path, as this course is only required of certain majors. Thus, these students must already believe at some level that math is important. Yet individual differences in the extent to which students believe math is important are associated with course performance. Perhaps math majors/minors believe that math is more useful than biology majors do, and math majors/minors may themselves be better at calculus than biology majors simply because they have chosen to pursue a degree in it. Or it is possible that students who can more clearly see the connections between what they are learning in Calculus II and their chosen field do better (i.e., utility value; Wigfield & Eccles, 2000). We were not able to disentangle this nuance here, although this result might lend support to reminding students in Calculus II of the utility of the class in achieving their career goals.

ANS ability is a measure of non-symbolic numerosity, or an intuitive recognition of number. Researchers suggest that, as early as infancy, individuals develop the ability to make approximations and discriminate between large non-symbolic values (e.g., ten versus eight dots, objects, syllables, or shapes; e.g., Halberda et al., 2008). We found that students who were better able to discriminate quickly between close quantities (i.e., who had lower w-scores) did better in the Calculus II class. The ANS is thought to be the foundation on which math ability builds (Verguts & Fias, 2004), although some work has found no significant relation between the ANS and future math performance (e.g., de Smedt, Noël, Gilmore, & Ansari, 2013). A meta-analysis found that there does appear to be a small but stable relation between the ANS and math performance (Chen & Li, 2014), but this relation appears to be non-linear (Bonny & Lourenco, 2013; Purpura & Logan, 2015), with the ANS more related to earlier skills than to later skills (Chu, vanMarle, & Geary, 2015; Libertus, Feigenson, & Halberda, 2013). Given this mixed literature on the role of the ANS in complex math performance, we found it surprising that the ANS was one of the most important predictors of performance in Calculus II.

We also found it surprising that mental rotation was not an important predictor of (or even a strong correlate of) final grade in Calculus II. Spatial skills are undeniably important in math performance. Our best interpretation is that by the time students are enrolled in Calculus II, spatial skills themselves are no longer an important predictor of performance, as these students likely, at the very least, have spatial skills good enough to have gotten that far in math. The average score of this sample on the mental rotation task appears to be higher than that of a general undergraduate sample from the same school with which we have collected the same measure (approximately two more items correct), but at this point this idea is conjecture. We also remind the reader that spatial skills are not necessarily specifically needed for answering the problems in Calculus II (compared to, say, Calculus III or other math courses). Some of the units in this Calculus II class did require more overt spatial skills than others, and we anticipate that performance on those units might be more related to spatial skills than the overall grade.

Beyond the attitudinal and cognitive predictors, we found that two of the online engagement variables were important predictors of Calculus II performance. The total number of posts that a student contributed to the online course discussion board was positively related to their final course grade. This variable is coarse, in that we are not able to say whether the student was generating the root discussion post (e.g., asking the question) or answering other students’ questions and posts. Therefore, we are not sure whether this variable represented engagement in the course, whether students who posted questions were able to receive more help that led to a higher grade, or any number of other possible explanations. Interestingly, this finding replicates previous work that also indicated that the total number of posts to a discussion board was associated with success in an online biology course (Macfadyen & Dawson, 2010). Some have suggested that interacting with the course in this engaged way may deepen comprehension of the course content (Evans & Sabry, 2003), a conclusion that fits with the finding here that more discussion board posting was associated with higher grades.

We also found that students who submitted their grading of peer workshop assignments earlier did better in the course. One might suggest that this variable could be thought of as a “procrastination” variable. Procrastination in general is thought to be negatively associated with course performance, and this might explain our finding (e.g., Tice & Baumeister, 1997). Other studies have found that time management in online courses predicts achievement (Jo et al., 2016; Kwon, 2009; Choi & Choi, 2012). Alternatively, this variable could simply reflect that students who were struggling in the course found it difficult to grade these assignments (i.e., it is hard to determine whether something is right or wrong if you yourself are unsure of the right answer) and therefore took longer to submit them. We are unable to disentangle these possible explanations with the currently available data.

For both of the online engagement variables that were found to be important, we are conjecturing at best as to what these variables represent. But we can say that simply tracking a user through an online course, here with two variables, predicted 8% of the total variance in final grades. To put this in perspective with broader educational research, the variance in individual reading performance directly associated with a child’s teacher has been suggested to be similar in magnitude (Byrne et al., 2010). Although small, this 8% is meaningful in an educational context.

4.1 Limitations

There are three points about which we would like to caution readers. First, any of the predictors listed in Table 3 are likely interchangeable with the four we selected from the most predictive model, as there is no statistical test of the difference between the correlations of the predictor variables included in the dominance analysis and those that were not (Schatschneider, Fletcher, Francis, Carlson, & Foorman, 2004). In particular, math importance is likely interchangeable with some of the other attitudinal predictors, including math interest and math confidence, variables that were fairly highly correlated with math importance. We take from this that including at least one attitudinal variable is important when predicting end-of-course grades; readers interested in using student characteristics in predictive systems for their online courses (e.g., Yukselturk & Bulut, 2007) would likely do well to include at least one rather than none. These attitudinal surveys are easy to administer online, take 5–10 minutes at most, and contribute a non-trivial amount of variance in predicting final grade. Interestingly, previous work examining predictors of success in math classes has also found attitudinal predictors to be important (e.g., math confidence; House, 1995), supporting their role in predicting student performance in online math classes.

Second, the dominance analysis was not convincing regarding a clearly important difference in the strength of the predictors, and consequently we believe that all four predictors are important for the model predicting final grade. Readers should focus on how a combination of four predictors, spanning attitudinal, cognitive, and online student engagement variables, accounted for 17% of the variance in final grade in Calculus II. Although far from 100%, we feel this amount is impressive given that none of the predictors is an obvious indicator of performance in Calculus II. Certainly, prior math performance, including previous grades, would be a dominant predictor. We sought instead to test measurable student characteristic variables that could feasibly be added into prediction software attached to online courses. With that in mind, testing a student’s ANS ability is actually not as feasible as a reader might want, given that the task would either need to be programmed into the course platform, or users would be required to navigate to a third-party website and harvest their response (i.e., our method). We note that our data show we predicted 13% of the variance in student final grade without including ANS, an option that might be considered. Equally important, not all instructors are able to harvest LMS activity data because of privacy concerns; therefore, the full range of student engagement variables might not be possible. Our data show that we predict 12% of the variance in student final grade without including any of the LMS activity data. Though these variables alone do not give the full picture, something is to be gained even with this incomplete information.

Finally, it is also important to point out that the four predictors we determined to be most important in our data might not be equally important in other similar data or for other courses. These analyses are fundamentally sample specific. Therefore, we reiterate that student attitudes, cognitive performance, and online engagement variables are all important to consider when predicting grade performance, and they should be considered together when feasible and ethically possible.

4.2 Conclusion

In conclusion, we sought to predict student final grades in a Calculus II course from a battery of attitudinal, cognitive, and online student engagement variables. We found that a mix of variables across all three categories predicted a non-trivial proportion of variance in final grade. The aim of this work was to determine the most important predictors of student grade, with the end goal of building a recommendation system that could be implemented to help students in this traditionally difficult class. The methods used here could be applied to any class, with the intention of predicting student performance early, potentially allowing an instructor to identify students who may need more intensive help earlier in the semester, when intervention can be more effective.

ACKNOWLEDGEMENTS

This project was made possible by the tireless efforts of Dr. Mika Seppälä, who died before he could see the successful outcome of his work. Mika was a pioneer in online teaching, and was passionate in his efforts to make the undergraduate math curriculum more accessible to all students. We thank Dr. Olga Caprotti and Yahya Almalki for their efforts in continuing Mika’s work, including their important work in harvesting the student engagement variables from the WEPS system for us after Mika died so that this project could continue.

This material is based upon work supported by the National Science Foundation under Grant No. 1338509 and 1450501.

REFERENCES

Ashcraft, M. H. (2002). Math anxiety: Personal, educational, and cognitive consequences. Current Directions in Psychological Science, 11(5), 181–185. https://dx.doi.org/10.1111/1467-8721.00196
Ashcraft, M. H., & Krause, J. A. (2007). Working memory, math performance, and math anxiety. Psychonomic Bulletin & Review, 14, 243–248. http://dx.doi.org/10.3758/BF03194059
Azen, R., & Budescu, D. V. (2003). The dominance analysis approach for comparing predictors in multiple regression. Psychological Methods, 8(2), 129–148. http://dx.doi.org/10.1037/1082-989X.8.2.129
Balduf, M. (2009). Underachievement among college students. Journal of Advanced Academics, 20, 274–294. https://dx.doi.org/10.1177/1932202X0902000204
Bienkowski, M., Feng, M., & Means, B. (2012). Enhancing teaching and learning through educational data mining and learning analytics: An issue brief. US Department of Education, Office of Educational Technology (pp. 1–57).
Bonny, J. W., & Lourenco, S. F. (2013). The approximate number system and its relation to early math achievement: Evidence from the preschool years. Journal of Experimental Child Psychology, 114(3), 375–388. http://dx.doi.org/10.1016/j.jecp.2012.09.015
Broadbent, J., & Poon, W. L. (2015). Self-regulated learning strategies & academic achievement in online higher education learning environments: A systematic review. Internet and Higher Education, 27, 1–13. https://dx.doi.org/10.1016/j.iheduc.2015.04.007
Buckingham Shum, S., & Deakin Crick, R. (2012). Learning dispositions and transferable competencies: Pedagogy, modelling and learning analytics. Proceedings of the 2nd International Conference on Learning Analytics and Knowledge (LAK ʼ12), 29 April–2 May 2012, Vancouver, BC, Canada (pp. 92–101). New York: ACM. http://dx.doi.org/10.1145/2330601.2330629
Budescu, D. V. (1993). Dominance analysis: A new approach to the problem of relative importance of predictors in multiple regression. Psychological Bulletin, 114(3), 542–551. http://dx.doi.org/10.1037/0033-2909.114.3.542
Byrne, B., Coventry, W. L., Olson, R. K., Wadsworth, S. J., Samuelsson, S., Petrill, S. A., Willcutt, E. G., & Corley, R. (2010). “Teacher effects” in early literacy development: Evidence from a study of twins. Journal of Educational Psychology, 102, 37–42. http://dx.doi.org/10.1037/a0017288
Casey, M. B., Nuttall, R., Pezaris, E., & Benbow, C. P. (1995). The influence of spatial ability on gender differences in mathematics college entrance test scores across diverse samples. Developmental Psychology, 31(4), 697–705. http://dx.doi.org/10.1037/0012-1649.31.4.697
Chanlin, L.-J. (2012). Learning strategies in web-supported collaborative project. Innovations in Education and Teaching International, 49(3), 319–331. http://dx.doi.org/10.1080/14703297.2012.703016
Chen, K. C., & Jang, S. J. (2010). Motivation in online learning: Testing a model of self-determination theory. Computers in Human Behavior, 26(4), 741–752. https://dx.doi.org/10.1016/j.chb.2010.01.011
Chen, Q., & Li, J. (2014). Association between individual difference in non-symbolic number acuity and math performance: A meta-analysis. Acta Psychologica, 148, 163–172. https://doi.org/10.1016/j.actpsy.2014.01.016
Cheng, Y. L., & Mix, K. S. (2014). Spatial training improves children’s mathematics ability. Journal of Cognitive Development, 15(1), 2–11. http://dx.doi.org/10.1080/15248372.2012.725186
Choi, J. I., & Choi, J. S. (2012). The effects of learning plans and time management strategies on college students’ self-regulated learning and academic achievement in e-learning. Journal of Educational Studies, 43(4), 221–244.
Chu, F. W., vanMarle, K., & Geary, D. C. (2015). Early numerical foundations of young children’s mathematical development. Journal of Experimental Child Psychology, 132, 205–212. http://dx.doi.org/10.1016/j.jecp.2015.01.006
Cocea, M., & Weibelzahl, S. (2009). Log file analysis for disengagement detection in e-Learning environments. User Modeling and User-Adapted Interaction, 19(4), 341–385. http://dx.doi.org/10.1007/s11257-009-9065-5
Davies, J., & Graff, M. (2005). Performance in e-learning: Online participation and student grades. British Journal of Educational Technology, 36(4), 657–663. http://dx.doi.org/10.1111/j.1467-8535.2005.00542.x
de Barba, P. G., Kennedy, G. E., & Ainley, M. D. (2016). The role of students’ motivation and participation in predicting performance in a MOOC. Journal of Computer Assisted Learning, 32, 218–231. http://dx.doi.org/10.1111/jcal.12130
de Smedt, B., Noël, M.-P., Gilmore, C., & Ansari, D. (2013). How do symbolic and non-symbolic numerical magnitude processing skills relate to individual differences in children’s mathematical skills? A review of evidence from brain and behavior. Trends in Neuroscience and Education, 2(2), 48–55.
Evans, C., & Sabry, K. (2003). Evaluation of the interactivity of web-based learning systems: Principles and process. Innovations in Education and Teaching International, 40, 89–99. http://dx.doi.org/10.1080/1355800032000038787
Fennema, E., & Sherman, J. A. (1976). Fennema-Sherman mathematics attitudes scales: Instruments designed to measure attitudes toward the learning of mathematics by females and males. Journal for Research in Mathematics Education, 7(5), 324–326. http://dx.doi.org/10.2307/748467
Fulford, C. P., & Zhang, S. (1993). Perceptions of interaction: The critical predictor in distance education. American Journal of Distance Education, 7(3), 8–21. http://dx.doi.org/10.1080/08923649309526830
Garrison, D. R., & Kanuka, H. (2004). Blended learning: Uncovering its transformative potential in higher education. The Internet and Higher Education, 7(2), 95–105. https://doi.org/10.1016/j.iheduc.2004.02.001
Gašević, D., Dawson, S., & Siemens, G. (2015). Let’s not forget: Learning analytics are about learning. TechTrends, 59(1), 64–71. https://doi.org/10.1007/s11528-014-0822-x
Gašević, D., Dawson, S., Rogers, T., & Gašević, D. (2016). Learning analytics should not promote one size fits all: The effects of instructional conditions in predicting academic success. Internet and Higher Education, 28, 68–84. https://doi.org/10.1016/j.iheduc.2015.10.002
Ganley, C. M., & Vasilyeva, M. (2014). The role of anxiety and working memory in gender differences in mathematics. Journal of Educational Psychology, 106(1), 105–120. http://dx.doi.org/10.1037/a0034099
Goldstein, P. J., & Katz, R. N. (2005). Academic analytics: The uses of management information and technology in higher education. Educause Center for Applied Research, 1–12. https://net.educause.edu/ir/library/pdf/ecar_so/ers/ers0508/EKF0508.pdf
Halberda, J., & Feigenson, L. (2008). Developmental change in the acuity of the “number sense”: The approximate number system in 3-, 4-, 5-, and 6-year-olds and adults. Developmental Psychology, 44(5), 1457–1465. http://dx.doi.org/10.1037/a0012682
Halberda, J., Mazzocco, M. M. M., & Feigenson, L. (2008). Individual differences in non-verbal number acuity correlate with maths achievement. Nature, 455, 665–668. https://doi.org/10.1038/nature07246
Hays, W. L. (1994). Statistics. London: Harcourt Brace Jovanovich.
Hembree, R. (1990). The nature, effects, and relief of mathematics anxiety. Journal for Research in Mathematics Education, 21(1), 33–46. http://dx.doi.org/10.2307/749455
Ho, H.-Z., Senturk, D., Lam, A. G., Zimme, J. M., Hong, S., Okamoto, Y., Chiu, S.-Y., Nakazawa, Y., & Wang, C.-P. (2000). The affective and cognitive dimensions of math anxiety: A cross-national study. Journal for Research in Mathematics Education, 31(3), 362–379. http://dx.doi.org/10.2307/749811
House, J. D. (1993). Achievement-related expectancies, academic self-concept, and mathematics performance of academically underprepared adolescent students. The Journal of Genetic Psychology, 154(1), 61–71. http://dx.doi.org/10.1080/00221325.1993.9914722
House, J. D. (1995). The predictive relationship between academic self-concept, achievement expectancies, and grade performance in college calculus. Journal of Social Psychology, 135(1), 111–112. http://dx.doi.org/10.1080/00224545.1995.9711411
Huon, G., Spehar, B., Adam, P., & Rifkin, W. (2007). Resource use and academic performance among first year psychology students. Higher Education, 53, 1–27. http://dx.doi.org/10.1007/s10734-005-1727-6
Ingels, S. J., Pratt, D. J., Wilson, D., Burns, L. J., Currivan, D., Rogers, J. E., & Hubbard-Bednasz, S. (2007). Education Longitudinal Study of 2002: Base-Year to Second Follow-up Data File Documentation (NCES 2008-347). U.S. Department of Education. Washington, DC: National Center for Education Statistics.
Jo, I., & Kim, Y. (2013). Impact of learner’s time management strategies on achievement in an e-learning environment: A learning analytics approach. Journal of Korean Association for Educational Information and Media, 19(1), 83–107.
Jo, I. H., Park, Y., Yoon, M., & Sung, H. (2016). Evaluation of online log variables that estimate learners’ time management in a Korean online learning context. The International Review of Research in Open and Distributed Learning, 17(1), 195–213. http://dx.doi.org/10.19173/irrodl.v17i1.2176
Kizilcec, R. F., Piech, C., & Schneider, E. (2013). Deconstructing disengagement: Analyzing learner subpopulations in massive open online courses. Proceedings of the 3rd International Conference on Learning Analytics and Knowledge (LAK ’13), 8–12 April 2013, Leuven, Belgium (pp. 170–179). New York: ACM. http://dx.doi.org/10.1145/2460296.2460330
Köller, O., Baumert, J., & Schnabel, K. (2001). Does interest matter? The relationship between academic interest and achievement in mathematics. Journal for Research in Mathematics Education, 32(5), 448–470.
Kwon, S. (2009). The analysis of differences of learners’ participation, procrastination, learning time and achievement by adult learners’ adherence of learning time schedule in e-Learning environments. Journal of Learner-Centered Curriculum and Instruction, 9(3), 61–86.
Lauría, E. J. M., Baron, J. D., Devireddy, M., Sundararaju, V., & Jayaprakash, S. M. (2012). Mining academic data to improve college student retention: An open source perspective. Proceedings of the 2nd International Conference on Learning Analytics and Knowledge (LAK ʼ12), 29 April–2 May 2012, Vancouver, BC, Canada (pp. 139–142). New York: ACM. http://dx.doi.org/10.1145/2330601.2330637
Lee, U. J., Sbeglia, G. C., Ha, M., Finch, S. J., & Nehm, R. H. (2015). Clicker score trajectories and concept inventory scores as predictors for early warning systems for large STEM classes. Journal of Science Education and Technology, 24, 848–860. http://dx.doi.org/10.1007/s10956-015-9568-2
Libertus, M. E., & Brannon, E. M. (2010). Stable individual differences in number discrimination in infancy. Developmental Science, 13(6), 900–906. http://dx.doi.org/10.1111/j.1467-7687.2009.00948.x
Libertus, M. E., Feigenson, L., & Halberda, J. (2013). Is approximate number precision a stable predictor of math ability? Learning and Individual Differences, 25, 126–133. https://doi.org/10.1016/j.lindif.2013.02.001
Loomis, K. D. (2000). Learning styles and asynchronous learning: Comparing the LASSI model to class performance. Journal of Asynchronous Learning Networks, 4(1), 23–31.
Lust, G., Elen, J., & Clarebout, G. (2013). Students’ tool-use within a web enhanced course: Explanatory mechanisms of students’ tool use pattern. Computers in Human Behavior, 29, 2013–2021. https://dx.doi.org/10.1016/j.chb.2013.03.014
Lust, G., Juarez-Collazo, N. A., Elen, J., & Clarebout, G. (2012). Content management systems: Enriched learning opportunities for all? Computers in Human Behavior, 28(3), 795–808. https://doi.org/10.1016/j.chb.2011.12.009
Lust, G., Vandewaetere, M., Ceulemans, E., Elen, J., & Clarebout, G. (2011). Tool use in a blended undergraduate course: In search of user profiles. Computers & Education, 57, 2135–2144. https://doi.org/10.1016/j.compedu.2011.05.010
Ma, X. (1999). A meta-analysis of the relationship between anxiety toward mathematics and achievement in mathematics. Journal for Research in Mathematics Education, 30(5), 520–540.
Ma, X., & Xu, J. (2004). The causal ordering of mathematics anxiety and mathematics achievement: A longitudinal panel analysis. Journal of Adolescence, 27, 165–179. https://dx.doi.org/10.1016/j.adolescence.2003.11.003
Macfadyen, L. P., & Dawson, S. (2010). Mining LMS data to develop an “early warning system” for educators: A proof of concept. Computers and Education, 54, 588–599. https://doi.org/10.1016/j.compedu.2009.09.008
Maloney, E. A., & Beilock, S. L. (2012). Math anxiety: Who has it, why it develops, and how to guard against it. Trends in Cognitive Sciences, 16, 404–406. https://dx.doi.org/10.1016/j.tics.2012.06.008
Marsh, H. W., Trautwein, U., Lüdtke, O., Köller, O., & Baumert, J. (2005). Academic self-concept, interest, grades, and standardized test scores: Reciprocal effects models of causal ordering. Child Development, 76(2), 397–416. http://dx.doi.org/10.1111/j.1467-8624.2005.00853.x
Meece, J. L., Wigfield, A., & Eccles, J. S. (1990). Predictors of math anxiety and its influence on young adolescents’ course enrollment intentions and performance in mathematics. Journal of Educational Psychology, 82(1), 60–70.
Michinov, N., Brunot, S., Le Bohec, O., Juhel, J., & Delaval, M. (2011). Procrastination, participation, and performance in online learning environments. Computers & Education, 56, 243–252. https://dx.doi.org/10.1016/j.compedu.2010.07.025
Miller, A. J. (2002). Subset selection in regression, 2nd ed. New York: Chapman & Hall.
Milne, J. L., Jeffrey, M., Suddaby, G., & Higgins, A. (2012). Early identification of students at risk of failing. In M. Brown, M. Hartnett, & T. Stewart (Eds.), Proceedings of the 29th Annual Conference of the Australasian Society for Computers in Learning in Tertiary Education (ASCILITE 2012), 25–28 November 2012, Wellington, New Zealand, Volume 1.
Morris, L. V., Finnegan, C., & Wu, S. (2005). Tracking student behavior, persistence, and achievement in online courses. Internet and Higher Education, 8, 221–231. https://dx.doi.org/10.1016/j.iheduc.2005.06.009
Niculescu, A. C., Tempelaar, D., Leppink, J., Dailey-Hebert, A., Segers, M., & Gijselaers, W. (2015). Feelings and performance in the first year at university: Learning-related emotions as predictors of achievement outcomes in mathematics and statistics. Electronic Journal of Research in Educational Psychology, 13(3), 431–462.
Plake, B. S., & Parker, C. S. (1982). The development and validation of a revised version of the Mathematics Anxiety Rating Scale. Educational and Psychological Measurement, 42, 551–557. https://doi.org/10.1177/001316448204200218
PCAST (President’s Council of Advisors on Science and Technology). (2012). Engage to excel: Producing one million additional college graduates with degrees in science, technology, engineering, and mathematics. http://www.whitehouse.gov/sites/default/files/microsites/ostp/pcast-executivereport-final_2-13-12.pdf
Purpura, D. J., & Logan, J. A. (2015). The nonlinear relations of approximate number system and mathematical language to early mathematics development. Developmental Psychology, 51(12), 1717–1724. http://dx.doi.org/10.1037/dev0000055
Ramos, C., & Yudko, E. (2008). “Hits” (not “discussion posts”) predict student success in online courses: A double cross-validation study. Computers & Education, 50(4), 1174–1182. https://dx.doi.org/10.1016/j.compedu.2006.11.003
Randhawa, B. S., Beamer, J. E., & Lundberg, I. (1993). Role of mathematics self-efficacy in the structural model of mathematics achievement. Journal of Educational Psychology, 85, 41–48. http://dx.doi.org/10.1037/0022-0663.85.1.41
Reyes, L. H., & Stanic, G. M. A. (1988). Race, sex, socioeconomic status, and mathematics. Journal for Research in Mathematics Education, 19(1), 26–43.
Richardson, M., Abraham, C., & Bond, R. (2012). Psychological correlates of university students’ academic performance: A systematic review and meta-analysis. Psychological Bulletin, 138, 353–387. http://dx.doi.org/10.1037/a0026838
Romero, C., Lopez, M.-I., Luna, J.-M., & Ventura, S. (2013). Predicting students’ final performance from participation in on-line discussion forums. Computers & Education, 68, 458–472. https://doi.org/10.1016/j.compedu.2013.06.009
Rubin, D. B. (1987). Multiple imputation for nonresponse in surveys. New York: John Wiley & Sons.
Schatschneider, C., Fletcher, J. M., Francis, D. J., Carlson, C. D., & Foorman, B. R. (2004). Kindergarten prediction of reading skills: A longitudinal comparative analysis. Journal of Educational Psychology, 96(2), 265–282. http://dx.doi.org/10.1037/0022-0663.96.2.265
Shute, V. J., & Gluck, K. A. (1996). Individual differences in patterns of spontaneous online tool use. The Journal of the Learning Sciences, 5(4), 329–355. http://dx.doi.org/10.1207/s15327809jls0504_2
Simpkins, S. D., Davis-Kean, P. E., & Eccles, J. S. (2006). Math and science motivation: A longitudinal examination of the links between choices and beliefs. Developmental Psychology, 42(1), 70–83. http://dx.doi.org/10.1037/0012-1649.42.1.70
Sorby, S., Casey, B., Veurink, N., & Dulaney, A. (2013). The role of spatial training in improving spatial and calculus performance in engineering students. Learning and Individual Differences, 26, 20–29. https://dx.doi.org/10.1016/j.lindif.2013.03.010
Speece, D. L., Ritchey, K. D., Silverman, R., Schatschneider, C., Walker, C. Y., & Andrusik, K. N. (2010). Identifying children in middle childhood who are at risk for reading problems. School Psychology Review, 39(2), 258–276.
Tabachnick, B. G., & Fidell, L. S. (2013). Using Multivariate Statistics, 6th ed. New Jersey: Pearson Education.
Tempelaar, D. T., Niculescu, A., Rienties, B., Giesbers, B., & Gijselaers, W. H. (2012). How achievement emotions impact students’ decisions for online learning, and what precedes those emotions. Internet and Higher Education, 15(3), 161–169.
Tempelaar, D. T., Rienties, B., & Giesbers, B. (2015). In search for the most informative data for feedback generation: Learning Analytics in a data-rich context. Computers in Human Behavior, 47, 157–167. https://dx.doi.org/10.1016/j.chb.2014.05.038
Tice, D. M., & Baumeister, R. F. (1997). Longitudinal study of procrastination, performance, stress, and health: The costs and benefits of dawdling. Psychological Science, 8(6), 454–458. https://dx.doi.org/10.1111/j.1467-9280.1997.tb00460.x
Tourangeau, K., Nord, C., Lê, T., Pollack, J. M., & Atkins-Burnett, S. (2006). Early Childhood Longitudinal Study, Kindergarten Class of 1998–99 (ECLS-K), combined user’s manual for the ECLS-K fifth-grade data files and electronic codebooks (NCES 2006–032). U.S. Department of Education. Washington, DC: National Center for Education Statistics.
Vandenberg, S. G., & Kuse, A. R. (1978). Mental rotations, a group test of three-dimensional spatial visualization. Perceptual and Motor Skills, 47, 599–604. https://doi.org/10.2466/pms.1978.47.2.599
Verguts, T., & Fias, W. (2004). Representation of number in animals and humans: A neural model. Journal of Cognitive Neuroscience, 16(9), 1493–1504.
Wigfield, A., & Eccles, J. S. (2000). Expectancy-value theory of achievement motivation. Contemporary Educational Psychology, 25, 68–81. https://doi.org/10.1006/ceps.1999.1015
Wolff, A., Zdrahal, Z., Nikolov, A., & Pantucek, M. (2013). Improving retention: Predicting at-risk students by analysing clicking behaviour in a virtual learning environment. Proceedings of the 3rd International Conference on Learning Analytics and Knowledge (LAK ’13), 8–12 April 2013, Leuven, Belgium (pp. 145–149). New York: ACM. http://dx.doi.org/10.1145/2460296.2460324
Yukselturk, E., & Bulut, S. (2007). Predictors for student success in an online course. Educational Technology & Society, 10(2), 71–83.


_________________________

1 https://geom.mathstat.helsinki.fi/moodle/

2 The Mental Rotation Test is free but must be requested from Dr. Michael Peters.

3 http://panamath.org/expt5_fsu/