
Monday, September 11, 2017

New jobs, new blog

Is digital a thing? As of a week or so ago, I have two new jobs:

Digital Learning Lead
and
Metrics Lead

for the School of Biological Sciences. What all this means (and whether digital is actually a thing) we will see in the next few months. While we work it all out, we have a new blog which we invite you to follow:

Biological Sciences SoTL
https://leicesterbiosotl.blogspot.com


As you know, this blog has been quiet for a long time. Whether that will continue or whether I will start to use this site for personal views and the new blog as a corporate organ ... time will tell.




Tuesday, August 08, 2017

Collaborative testing promotes higher order thinking

I'm not sure I completely understand why, but the English education system instils a fierce sense of competition in its students. Although we try to make them aware that we use criterion- rather than norm-referenced assessment, we just can't seem to convince them that assessment is not a zero-sum game. Consequently, group assessment, especially summative group assessment, generates fireworks. But it turns out that collaborative testing is a valuable exercise, at least if we're interested in higher order skills rather than just cramming students' heads with facts.


The effects of collaborative testing on higher order thinking: Do the bright get brighter? Active Learning in Higher Education 04.08.2017 doi: 10.1177/1469787417723243
Collaborative testing has been shown to enhance student performance compared to individual testing. It is suggested that collaborative testing promotes higher order thinking, but research has yet to explore this assumption directly. The aim of this study was to explore the benefits of collaborative testing on overall performance, as well as performance on higher order thinking questions. It was hypothesised that, compared to individual test results, students would perform better overall and on higher order thinking questions under collaborative testing conditions. It was expected that these differences would be equal when comparing students of different academic abilities (i.e. ‘upper’, ‘middle’ and ‘lower’ performers). Undergraduate students completed an individual followed by a collaborative test as part of summative assessment. Analyses revealed that with the exception of upper performers, students performed better overall on the collaborative test. Additionally, regardless of their academic abilities, students performed better on the higher order thinking questions under collaborative conditions. This improvement was equal across different academic abilities, suggesting that collaborative testing promotes higher order thinking even when taking into account previous academic achievement. The acceptability and application of collaborative testing is discussed.



Wednesday, August 02, 2017

Making claims for assessment [Commentary]

In a recent article, Dylan Wiliam does what Dylan Wiliam does best: takes a sideways but highly practical look at assessment.

The starting point for his reflection is a recent article discussing models of assessment and learning (Baird, J., Andrich, D., Hopfenbeck, T. & Stobart, G. (2017) Assessment and learning: fields apart? Assessment in Education: Principles, Policy & Practice, 24:3, 317-350, DOI: 10.1080/0969594X.2017.1319337). This argues that learning is substantially driven by assessment, and that theories of assessment and theories of learning have, for the most part, proceeded separately. The paper argues that if theories of assessment take into account theories of learning, assessments will somehow be more valid. This is the point that Wiliam takes issue with:

“There is no a priori reason that processes of development would be enhanced by attention to the processes by which the results of that development are assessed, nor that the processes of assessing the results of development would be improved by considering how that development took place. For example, if we consider the case of the 100 m at the Olympic Games, a theory of learning would provide insights into how people improve their sprinting ability, and would, as a result, help us improve the quality of sprinters, so that they record lower times for the 100 m at the Olympic Games. We could also look at ways of improving the accuracy of the measurement of the time taken by sprinters to run 100 m. However, the important point here is that these two processes are entirely separate. Improvements in the measurement of time do not help us improve the performance of athletes, and improvements in the performance of athletes do not contribute to measuring sprint times more accurately. The aetiology of a performance may be entirely irrelevant to the measurement of that performance.”

Wiliam goes on to provide good value:

“… it may be that the introduction of more authentic assessments were driven by a desire to make the assessment more closely aligned to the learning environment. But it is at least as likely that the changes were driven by a desire to extend the kinds of inferences that test outcomes would support – to address construct underrepresentation in the assessment design. In other words, the developments of assessment theory were driven by problems of assessment, not of learning. The assessment problems may have been raised by a concern for learning, but there is little evidence that anything in learning theory prompted the changes. Indeed, the most convincing narrative is, in my view, that developments in each of these fields have been almost completely unaffected by developments in the other. Reasons for such lack of mutual engagement and influence are easy to propose. Academics work in silos, and the pressure to publish creates incentives to make incremental improvements within a narrowly defined field, not least because criteria of quality and rigour in different fields are often very different.”


“The real problem is that measures of student progress are inherently less reliable than status measures, because the former involve subtracting one number that is measured with error from another number that is measured with error (Cronbach & Furby, 1970). The problem is not that the theory is inadequate. The problem is that the theory gives answers that people don’t like.”

“At its heart, as Lee Cronbach pointed out almost 50 years ago, an assessment is a procedure for drawing inferences: ‘One validates, not a test, but an interpretation of data arising from a specified procedure’ (Cronbach, 1971, p. 447, emphasis in original). Any educational assessment is therefore a procedure for making inferences about learning.”


Dylan Wiliam (2017) Assessment and learning: some reflections. Assessment in Education: Principles, Policy & Practice, 24:3, 394-403, DOI: 10.1080/0969594X.2017.1318108
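The Cronbach and Furby point deserves to be made concrete. Here is a minimal simulation (my own sketch, not from either paper; all the numbers are invented) showing that when the same measurement error sits on both tests, a gain score retains far less signal than either status measure:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

true_pre = rng.normal(50, 10, n)    # "true" ability at the first test
growth = rng.normal(5, 3, n)        # real progress between the two tests
true_post = true_pre + growth

err_sd = 8                          # identical measurement error on each test
obs_pre = true_pre + rng.normal(0, err_sd, n)
obs_post = true_post + rng.normal(0, err_sd, n)

# Proxy for reliability: squared correlation of observed with true scores
status_rel = np.corrcoef(obs_post, true_post)[0, 1] ** 2
gain_rel = np.corrcoef(obs_post - obs_pre, growth)[0, 1] ** 2

print(f"status measure reliability ~ {status_rel:.2f}")  # roughly 0.6 here
print(f"gain measure reliability   ~ {gain_rel:.2f}")    # far lower
```

The errors on the two tests are independent, so their variances add in the difference, while most of the true variance cancels out. The theory is doing its job; people just don't like the answer.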


Monday, May 08, 2017

Statement bank feedback: worthwhile?

After a long sleep, time to dust off the old blog. This paper (just published) is highly relevant to research I'm currently engaged in. It should be of interest to you too, given that staff time is the major pressure on feedback, and tricks such as statement banks are only going to grow in prominence. An interesting study, but it uses the language of "assessment literacy", which I don't buy into. My mental model of the "all or nothing" behaviour observed is a different one - student responses to assessment are yet another proxy for engagement, as evidenced by the higher marks of the email responders. Apart from their not reading the feedback or email, we don't know what the non-responders were doing. So if students aren't going to read it, let's save staff time by using statement banks.


Response of students to statement bank feedback: the impact of assessment literacy on performances in summative tasks.
Assessment & Evaluation in Higher Education 07.05.2017, doi: 10.1080/02602938.2017.1324017

Efficiency gains arising from the use of electronic marking tools that allow tutors to select comments from a statement bank are well documented, but how students use this type of feedback remains under explored. Natural science students (N = 161) were emailed feedback reports on a spreadsheet assessment that included an invitation to reply placed at different positions. Outcomes suggest that students either read feedback completely, or not at all. Although mean marks for repliers (M = 75.5%, N = 39) and non-repliers (M = 57.2%, N = 68) were significantly different (p < .01), these two groups possessed equivalent attendance records and similar submission rates and performances in a contemporaneous formatively assessed laboratory report. Notably, average marks for a follow-up summative laboratory report, using the same assessment criteria as the formative task, were 10% higher for students who replied to the original invite. It is concluded that the repliers represent a group of assessment literate students, and that statement bank feedback can foster learning: a simple ‘fire’ analogy for feedback is advanced that advocates high-quality information on progress (fuel) and a curricular atmosphere conducive to learning (oxygen). However, only if students are assessment literate (ignition) will feedback illuminate.
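For anyone who hasn't met one, a statement bank is just a keyed collection of reusable comments that a marking tool assembles into individual reports. A minimal sketch (entirely hypothetical; not the tool used in the study, and the criteria and wording are invented):

```python
# Reusable comments keyed by criterion and judgement. Hypothetical
# content; not the electronic marking tool from the study.
STATEMENT_BANK = {
    ("formulas", "good"): "Formulas are correct and efficiently constructed.",
    ("formulas", "weak"): "Check your absolute versus relative cell references.",
    ("layout", "good"): "The spreadsheet is clearly laid out and well labelled.",
    ("layout", "weak"): "Label your columns and separate raw data from calculations.",
}

def build_feedback(student: str, judgements: dict) -> str:
    """Select one canned statement per criterion and assemble a report."""
    lines = [f"Feedback for {student}:"]
    for criterion, level in judgements.items():
        lines.append("- " + STATEMENT_BANK[(criterion, level)])
    # The study embedded an invitation to reply in each emailed report
    lines.append("If anything above is unclear, please reply to this email.")
    return "\n".join(lines)

print(build_feedback("A. Student", {"formulas": "weak", "layout": "good"}))
```

The efficiency gain is obvious: the tutor makes one judgement per criterion and the boilerplate writes itself. Whether anyone reads the result is, as the paper shows, another matter.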

Wednesday, October 05, 2016

Microsoft Madness #OneNoteFail

YIL that the desktop version of Microsoft OneNote 2016 (but, as far as I can tell, none of the many, many other versions of OneNote) allows students to alter the creation dates of documents.

I can't imagine what the possible justification for this is, but as we were relying on OneNote to datestamp student diaries, it looks as if Microsoft may just have confined OneNote to oblivion as far as we are concerned.

Why Microsoft, why?





Thursday, September 22, 2016

Use of statistics packages

Statistics in science as a whole is a mess. Ecology is no different from the rest of the field, although maybe slightly better than some parts. Statistical analysis is becoming more sophisticated, but one thing is clear - R is winning the race.



The mismatch between current statistical practice and doctoral training in ecology. Ecosphere 17 August 2016. doi: 10.1002/ecs2.1394
Ecologists are studying increasingly complex and important issues such as climate change and ecosystem services. These topics often involve large data sets and the application of complicated quantitative models. We evaluated changes in statistics used by ecologists by searching nearly 20,000 published articles in ecology from 1990 to 2013. We found that there has been a rise in sophisticated and computationally intensive statistical techniques such as mixed effects models and Bayesian statistics and a decline in reliance on approaches such as ANOVA or t tests. Similarly, ecologists have shifted away from software such as SAS and SPSS to the open source program R. We also searched the published curricula and syllabi of 154 doctoral programs in the United States and found that despite obvious changes in the statistical practices of ecologists, more than one-third of doctoral programs showed no record of required or optional statistics classes. Approximately one-quarter of programs did require a statistics course, but most of those did not cover contemporary statistical philosophy or advanced techniques. Only one-third of doctoral programs surveyed even listed an optional course that teaches some aspect of contemporary statistics. We call for graduate programs to lead the charge in improving training of future ecologists with skills needed to address and understand the ecological challenges facing humanity.
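For readers who have never fitted one, here is a minimal sketch of the shift the abstract describes, from a plain t test to a mixed effects model that respects grouped data. It is in Python with statsmodels on invented data (the surveyed ecologists would more likely use R and lme4):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(1)
n_sites, n_per_site = 12, 20
site = np.repeat(np.arange(n_sites), n_per_site)
treatment = np.tile([0, 1], n_sites * n_per_site // 2)
site_effect = rng.normal(0, 2, n_sites)[site]   # variation shared within a site
y = 10 + 1.5 * treatment + site_effect + rng.normal(0, 1, len(site))

df = pd.DataFrame({"y": y, "treatment": treatment, "site": site})

# The naive t test ignores that observations from one site are correlated
t, p = stats.ttest_ind(df.y[df.treatment == 1], df.y[df.treatment == 0])
print(f"t test: t = {t:.2f}, p = {p:.4f}")

# Mixed effects model: fixed effect of treatment, random intercept per site
fit = smf.mixedlm("y ~ treatment", df, groups=df["site"]).fit()
print(fit.summary())
```

The point of the random intercept is that the 240 rows are not 240 independent observations; ignoring that structure is exactly the habit the paper suggests doctoral training has failed to correct.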





Friday, September 16, 2016

Motivation to learn

Five theories:
  1. Expectancy-value
  2. Attribution
  3. Social-cognitive
  4. Goal orientation
  5. Self-determination


Motivation to learn: an overview of contemporary theories. Medical Education 15 September 2016 doi: 10.1111/medu.13074
Objective: To succinctly summarise five contemporary theories about motivation to learn, articulate key intersections and distinctions among these theories, and identify important considerations for future research.
Results: Motivation has been defined as the process whereby goal-directed activities are initiated and sustained. In expectancy-value theory, motivation is a function of the expectation of success and perceived value. Attribution theory focuses on the causal attributions learners create to explain the results of an activity, and classifies these in terms of their locus, stability and controllability. Social-cognitive theory emphasises self-efficacy as the primary driver of motivated action, and also identifies cues that influence future self-efficacy and support self-regulated learning. Goal orientation theory suggests that learners tend to engage in tasks with concerns about mastering the content (mastery goal, arising from a ‘growth’ mindset regarding intelligence and learning) or about doing better than others or avoiding failure (performance goals, arising from a ‘fixed’ mindset). Finally, self-determination theory proposes that optimal performance results from actions motivated by intrinsic interests or by extrinsic values that have become integrated and internalised. Satisfying basic psychosocial needs of autonomy, competence and relatedness promotes such motivation. Looking across all five theories, we note recurrent themes of competence, value, attributions, and interactions between individuals and the learning context.
Conclusions: To avoid conceptual confusion, and perhaps more importantly to maximise the theory-building potential of their work, researchers must be careful (and precise) in how they define, operationalise and measure different motivational constructs. We suggest that motivation research continue to build theory and extend it to health professions domains, identify key outcomes and outcome measures, and test practical educational applications of the principles thus derived.


Nicely contextualizes Carol Dweck's work.