An increasing number of studies have investigated the implementation of Learner-Centred Pedagogy (LCP) in different countries, but there is still limited empirical evidence on what impacts LCP may have on learners and learning. This article summarises the findings of a systematic review of 62 journal articles reporting the outcomes of LCP implementation in low- to middle-income countries. The review found relatively few studies that provided objective evidence of LCP effectiveness. A higher number of studies identified non-objective perspectives on LCP effectiveness, such as teacher and student perceptions, as well as non-cognitive outcomes such as increased student motivation, confidence, and enhanced relationships.
The main message of the review is that there are few studies that deliver objective evidence (e.g., using standardized assessment methods, employing rigorous research designs) that LCP affects learner outcomes positively, while there is an abundance of studies that seem to focus on non-objective outcomes (e.g., teacher-student relationships, student perceptions).
To know what the authors meant by LCP, have a look at the table below. A pretty broad definition, don't you think?
The review was quickly picked up on social media. What struck me is how readily it was used to dismiss practices that align with LCP. But I don't think this review can be used for that at all. It has too many issues to be the final nail in the coffin of LCP. Some of these issues are:
- In their search for studies, the authors looked for the words ‘learner-centred’, ‘student-centred’, or ‘child-centred’ in the title and/or abstract. This means that studies that fit the description of LCP but did not use these words in their title or abstract were not included. The authors justify this choice by noting that including the LCP elements themselves in the search would have dramatically increased the number of hits. While this is understandable, it probably means that they missed a number of relevant studies.
- It also raises the question of how useful it is to do a review on a concept as broad as learner-centred pedagogy. If all studies on peer tutoring fall within this definition, then studies on collaborative learning, problem-based learning, inquiry learning, simulation-based learning, and so on could probably also fall within it. The same goes for studies that adopt self-determination theory to study the effects of learner autonomy on learning outcomes. And what about studies that investigated the effects of Montessori or Jenaplan education? These could also fall within the definition. Meta-analyses and systematic reviews have been criticized for comparing apples to oranges, and in this case that seems like a valid criticism.
- The issue above also leads to strange situations. The authors, for example, discuss a study on peer tutoring in Malaysia. While peer tutoring matches some of the LCP elements the authors describe (e.g., active participation), and it is thus justifiable to include this study in the review, it is strange not to include other studies on peer tutoring, just because they do not mention keywords such as 'learner-centred' in their title or abstract.
- While LCP consists of six elements according to the authors, they do not distinguish between studies that include all six elements of LCP and studies that contain fewer, or even only one. Is it justified to call studies that only address 21st century skills studies of LCP?
- By not investigating whether the number of LCP elements included in an LCP classroom actually matters, the authors miss an opportunity to understand how different LCP implementations affect outcomes differently. The review treats all LCP implementations as equal, and it is questionable whether this is actually the case.
- In most systematic reviews some form of coding is done: authors code various aspects of each study so that they can, for example, distinguish between different outcome measures (e.g., standardized tests vs. teacher-made tests). However, coding also introduces a potential source of bias: coders have their own interpretations, and these can vary. It is good practice to implement measures that counter coder bias, for example by training coders and establishing interrater reliability. It is difficult to know if and how this was done in this study. The authors write that "two researchers comprehensively checked each of the codes", but this does not tell us what the results of these checks were or how often codes were changed.
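To make concrete what "establishing interrater reliability" means, here is a minimal sketch of Cohen's kappa, a common chance-corrected agreement statistic for two coders. The coder labels below are invented for illustration; they are not data from the review.

```python
# Minimal sketch: Cohen's kappa for two coders labelling the same studies.
# Labels are invented for illustration, not taken from the review.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Agreement between two coders, corrected for agreement expected by chance."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: product of each coder's marginal label proportions.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    labels = set(coder_a) | set(coder_b)
    expected = sum(freq_a[lab] * freq_b[lab] for lab in labels) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical outcome-type codes assigned independently by two coders:
coder_a = ["objective", "perception", "objective", "perception", "objective", "perception"]
coder_b = ["objective", "perception", "perception", "perception", "objective", "perception"]
print(round(cohens_kappa(coder_a, coder_b), 2))  # prints 0.67
```

Reporting a value like this (kappa above roughly 0.6 is conventionally read as substantial agreement) would have told readers far more than "the codes were comprehensively checked".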
- Any systematic review will contain both high-quality and lower-quality studies. The lower-quality studies introduce another potential source of bias. It is good practice to determine the quality of each included study, for example using instruments for critical appraisal (e.g., CASP). The reviewer can then check whether the results of the review are similar for lower-quality and higher-quality studies. Usually, higher-quality studies tend to provide more conservative estimates of the outcome than lower-quality studies. It is a missed opportunity that the authors did not assess the quality of the studies they included in their review.
- The review focuses on the use of LCP in low- and middle-income countries and thus excludes results of studies conducted in high-income countries. Although the authors may be right that "there have been very few systematic reviews or meta-analyses examining" LCP in low- and middle-income countries, it is a missed opportunity not to survey the whole body of literature. A better approach would have been to code whether each study was conducted in a low-, middle-, or high-income country and to check whether the effects of LCP differ across these groups.
- As the authors acknowledge, journals have a strong tendency to publish studies demonstrating significant effects (i.e., publication bias). One way to counter this is to explicitly search for studies that were not published in journals, for example by examining conference abstracts and dissertation archives, and by reaching out to researchers to ask whether they have unpublished work relevant to the review. The authors neglected to do this, thus running the risk of introducing yet another source of bias into their results.