When researchers, educators, and partners gathered for the ALiVE research methodology workshop, the atmosphere was marked by both curiosity and resolve. This was not simply a training; it was an invitation to question long-held assumptions. Participants were there not just to learn but to challenge their own thinking, to unlearn rigid old habits, and to embrace new ways of understanding how evidence can meaningfully shape competence-based curriculum (CBC) reforms. Led by the Impact Evaluation Lab under the guidance of Dr. Constantine Manda, the five-day workshop combined guided reflection, rigorous debate, and practical exercises. Together, participants embarked on a humbling journey to rethink what it truly takes to design research that not only measures outcomes but also illuminates the processes, contexts, and lived realities of CBC implementation.

From Reflection to Rethinking Learning Pathways

A turning point came when participants reflected on the state of CBC in their countries, not through numbers alone but through stories of classrooms, teachers, and learners navigating change. The insight was clear: to understand whether CBC is working, we must go beyond measuring outputs to interrogating processes, namely how learners build competencies, how teachers adapt pedagogy, and how schools manage systemic shifts.

From Concepts to Practice

The workshop prioritized doing over discussing. Participants:

- Engaged in hands-on sessions on research design, data analysis, and randomization, building confidence in practical tools such as Excel for experimental design (see the sketch after this section).
- Worked in national and thematic groups to frame research questions, sketch tools, and develop implementation timelines, supported by technical mentorship.

The energy in the room was palpable as debates sparked clarity and collaboration bridged diverse perspectives. Participants discovered that rigorous research thrives when grounded in collective effort.
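To give a flavour of those hands-on randomization sessions, here is a minimal, illustrative sketch in Python (standing in for the Excel exercises used in the room; the function name, effect size, and sample sizes are all made up). It assigns individuals to treatment or control by coin flip and shows how the standard error of the estimated treatment effect shrinks as the sample grows.

```python
import random
import statistics

def simulate_rct(n, true_effect=0.3, seed=7):
    """Randomize n individuals to treatment/control by coin flip and
    estimate the treatment effect as a difference in outcome means."""
    rng = random.Random(seed)
    treatment, control = [], []
    for _ in range(n):
        baseline = rng.gauss(0, 1)      # unobserved individual variation
        if rng.random() < 0.5:          # coin-flip random assignment
            treatment.append(baseline + true_effect)
        else:
            control.append(baseline)
    estimate = statistics.mean(treatment) - statistics.mean(control)
    # Standard error of a difference in means: sqrt(s_t^2/n_t + s_c^2/n_c)
    se = (statistics.variance(treatment) / len(treatment)
          + statistics.variance(control) / len(control)) ** 0.5
    return estimate, se

for n in (100, 1_000, 10_000):
    estimate, se = simulate_rct(n)
    print(f"n={n:>6}: estimated effect = {estimate:+.3f}, SE = {se:.3f}")
```

Because assignment is random, the difference in means is an unbiased estimate of the true effect, and the printed standard errors fall roughly with the square root of the sample size, which is exactly the precision point participants took away.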
Growing a Research Community

By the end, what began as individual reflections had grown into a shared commitment to build an evidence culture that is process-oriented, context-sensitive, and policy-relevant. Draft research tools and timelines were developed, but more importantly, participants left with a renewed confidence that they could generate evidence robust enough to guide and strengthen CBC implementation.

As one participant reflected: "It wasn't about getting the answers; it was about learning to ask the right questions." Another affirmed: "The beauty of this workshop was discovering that collaboration doesn't weaken research; it strengthens it."

"I leave here not only with knowledge, but also with the courage to use it." – Joyce Kahembe, Head of Research, Consultancy & Publication, TIE.

"I benefited from an excellent session led by Prof. Constantine Manda (University of California, Irvine) on core program evaluation methods, including causal inference, randomized designs, regression discontinuity, difference-in-differences, and research ethics safeguarding human subjects. Two ideas stuck with me: the power of randomization in experiments, and how larger samples sharpen precision and data quality. Just as important, the workshop reinforced a simple rule: define constructs clearly, ask causal questions explicitly, and anchor every claim in a credible counterfactual so we don't confuse selection effects or time trends for impact.

"Practical fieldwork takeaways were equally strong: use proper randomizers, recruit local enumerators, anticipate social desirability bias (e.g., with list experiments), and consider mobile surveys where appropriate. Given Kenya's rich tradition of quantitative and mixed-methods studies, I left convinced that, whatever the approach, we should still aim for designs that create credible comparisons and preserve balance between treatment and control groups. Finally, the human lesson: social capital matters. Peer networks open doors to better studies and career opportunities. I'm grateful to ALiVE, the Tanzania Institute of Education (TIE), and the Impact Evaluation Lab for an evidence-focused, hands-on convening. Above all, the experience affirmed that learning, relearning, and unlearning are essential as we pursue process-oriented curriculum implementation research. This was indeed a good opportunity to learn how to ensure our learners across East Africa have the best chance at living meaningful lives and being globally competitive." – Evans Mos Olao, Senior Research and Knowledge Management Officer, KICD.

"Orthogonality in randomization might sound like a heavy term, but it is actually a simple idea with big importance in experimental design. It is about making sure that the treatment and control groups are truly comparable before the experiment begins. Why does this matter? Because we want confidence that any difference we see later on is due only to the treatment. If the groups already differ in important ways, like age, income, or education, then we can't be sure whether the treatment or those differences are driving the results. Orthogonality helps solve that problem by giving us balance.

"A balance test is one way to check this. By collecting data on key characteristics at the start, sometimes through a simple baseline survey or even by using existing secondary data, we can test whether the treatment and control groups are equivalent. If they are, then we know randomization has done its job. And the good news? This often means we can avoid running expensive baseline surveys. With orthogonality in place, the analysis becomes much cleaner: the treatment effect can be read directly from the difference in outcomes between the treatment and control groups. That's the power of getting the basics right from the start." – Martin Ariapa, ALiVE Regional Senior Analyst.
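Here is a minimal sketch of the balance test Martin describes, in Python (the covariates, sample size, and numbers are hypothetical, and scipy's two-sample t-test stands in for whatever tool a team actually uses):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 200  # hypothetical number of schools, randomized roughly 50/50

# Random assignment plus made-up baseline characteristics.
treated = rng.random(n) < 0.5
covariates = {
    "mean pupil age": rng.normal(11.0, 1.0, n),
    "pupil-teacher ratio": rng.normal(45.0, 8.0, n),
    "head teacher experience (yrs)": rng.normal(9.0, 4.0, n),
}

# Balance test: compare treatment and control means on each baseline
# characteristic. Under successful randomization the covariates are
# orthogonal to assignment, so differences should be small and
# p-values large (conventionally above 0.05).
for name, values in covariates.items():
    diff = values[treated].mean() - values[~treated].mean()
    t_stat, p_value = stats.ttest_ind(values[treated], values[~treated])
    print(f"{name:<30} diff = {diff:+6.2f}   p = {p_value:.2f}")
```

If every characteristic passes, the groups are comparable at baseline, and the treatment effect can later be read off as the simple difference in endline outcomes, exactly the cleaner analysis Martin points to.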
"This workshop marked a milestone in building a regional research community that puts life skills and values at the core of education. The methodologies we explored will ripple outward—helping governments track progress, shape interventions, and equip learners with the competencies they need to thrive in a complex and divided world. The journey does not end here. The seeds planted will grow into evidence that informs, questions that challenge, and practices that reimagine how education systems nurture competencies for learning and life." – Akongo Rose Stella, ALiVE Co-PI, Learning Hub.

By Einoth Justine – ALiVE Manager, Tanzania

Reflections from the AEAA 41st Annual Conference, Addis Ababa

By Samson Sitta, ALiVE Senior Program Officer, MZF

This August, education leaders, researchers, and practitioners from across Africa and beyond gathered in Addis Ababa, Ethiopia, for the 41st Annual Conference of the Association for Educational Assessment in Africa (AEAA). The conversations centred on the theme Transforming Educational Assessment: Towards Quality Learning and Informed Decision Making, and the first sub-theme posed the question: how can technology help Africa rethink the way we assess learning?

Beyond Exams: Why Technology Matters

For decades, examinations have been the gateway to opportunity in Africa. But too often, education systems have measured only a fraction of what learners truly know and can do. Now, with rapid advances in technology, new possibilities are emerging. Imagine classrooms where tests are not only faster to administer but also fairer, more inclusive, and more connected to real life. Imagine assessments that recognize creativity, problem-solving, and resilience—skills young people need to thrive in the 21st century. This vision animated the AEAA conference, with speakers from across the continent sharing both challenges and inspiring solutions (Dieteren, 2025; Aminu et al., 2025; Sitta & Marandu, 2025; Mahlet, 2025; Namigadde, 2025).

AI Can Help – But Humans Are Still Key

Dutch testologist Nico Dieteren presented on Testology and Technology: How the Human Factor Can Leverage and Enhance the Use of AI in Making Good Tests. He reminded participants that while artificial intelligence (AI) can make testing more efficient—automating tasks like marking, test assembly, and item generation—it cannot replace human judgment. "AI is strong in speed and scale," he explained, "but unreliable in creativity and ethics." In Africa, where culture and fairness are central, AI must be paired with human expertise to ensure assessments remain meaningful and just (Dieteren, 2025).

[Photo: Samson Sitta at the AEAA Conference 2025]

From Nigeria, Dr. Mohammed Aminu and colleagues presented findings on the Implementation of Inclusive Assessment Practices in Technical Colleges in Southern Nigeria. Their study revealed that while students perceive inclusive assessments as improving participation and learning outcomes, many teachers still rely on traditional tests. Barriers such as limited digital skills, gaps in training, and inadequate funding stood out. Yet, with investment in digital tools, teacher training, and policy support, assessments can be redesigned to celebrate every learner's talents, not just those of the few who excel in exams (Aminu, Stephen, Iluobe & Raymond, 2025).

Digital Literacy: More Than a Tech Skill

From Tanzania and Zanzibar, Samson Sitta and Daniel Marandu presented on behalf of the Action for Life Skills and Values in East Africa (ALiVE) initiative, sharing insights on Leveraging Digital Technologies to Transform Educational Assessments in Africa. Their evidence shows that only 31% of adolescents could easily use digital tools, with girls and poorer adolescents most disadvantaged. Yet adolescents with stronger digital skills demonstrated higher confidence, problem-solving skills, and resilience. They observed that digital literacy is not just a technical ability; it is a life skill that opens doors to learning, work, and empowerment (Marandu & Sitta, 2025).
Ethiopia's Experiment with Structured Pedagogy

From Ethiopia, Mahlet (Luminos Fund) presented on "Scaling Structured Pedagogy in Sidama Using EGRA/EGMA Data." Working with the Ministry of Education, Luminos piloted structured lesson plans combined with tablet-based assessments. The results were striking: children in structured pedagogy classrooms recorded 29+ correct words per minute in literacy and nearly doubled their performance in numeracy compared to peers in traditional programs. This shows that when technology is blended with pedagogy and teacher support, learning outcomes improve dramatically, even in resource-constrained settings (Mahlet, 2025).

Confronting Exam Malpractice in Uganda

From Uganda, Namigadde Salimah of UNEB presented on Examination Malpractice at High-Stakes Primary Leaving Examinations (PLE) in Luweero District. She noted that malpractice, driven by academic pressure, institutional competition, and inadequate preparation, remains a serious threat to the fairness and integrity of assessments. Proposed solutions included CCTV surveillance, biometric verification, and data analytics. However, Namigadde emphasized that sustainable solutions require more than technology: building a culture of honesty and accountability among students, teachers, parents, and communities is critical (Namigadde, 2025).

A Shared Call to Action

Across all the presentations, one message was clear: technology alone cannot transform education. It must be guided by values of fairness, inclusion, and cultural relevance, and powered by people: teachers, learners, parents, and policymakers. As one speaker reflected, "Africa cannot afford to be the missing continent in the digital revolution, but neither can it lose sight of the human factor that ensures education remains meaningful and just." The conference ended not with final answers but with renewed determination: to build an education system where technology helps every child to learn, every talent to shine, and every assessment to count.

@samsonsitta07
