The Malawi Integrated In-Service Teacher Education Project: An Analysis of the Curriculum and Its Delivery in the Colleges (CIE, 2000, 75 p.)

Chapter 2: The Curriculum Strategy

2.1 Introduction
2.2 Aims, general objectives, and underlying philosophy of MIITEP
2.3 Content
2.4 Pedagogy
2.5 Assessment
2.6 Teaching and Learning materials
2.7 Teaching Practice
2.8 The curriculum strategy and its coherence

2.5 Assessment

The official documentation states that candidates will be awarded a certificate if they pass:

English, Maths, Science and Health Education, Foundation Studies and Teaching Practice, plus one other subject from Category A (General Studies, Agriculture, Chichewa and Home Economics) and one from Category B (Music, PE, Creative Arts, R.E.)

All formal written assessment is set by the Malawi National Examinations Board (MANEB) and marked by tutors under their guidance. The regulations are set out below:

Table 2.2: Assessment

Stage                          Assessment
End of residential block       Written examinations in all subjects
During school-based training   12 assignments (1 per subject); Category B subjects: 4 projects. Grades include assignments, projects and TP*
End of course                  Final exams in main and Category A subjects

* A Teaching Practice grade is given during the residential block, for a lesson taught in the demonstration school, but the main TP grade is expected to be given during the field-based part of the course. A moderation team drawn from different TTCs, including staff from MANEB and TDU, visits a sample of trainees to check consistency.

2.5.1 Formative assessment

Within each unit in the Handbooks there are short questions, designed to check recall and understanding. At the end of each unit there is a ‘unit assessment’ which, according to the writers’ guidelines, should comprise an activity for each of the unit objectives, though this is not carried out for all the units. No other guidance is given to tutors for checking students’ on-going learning. In the self-study units there are similar short assessment exercises, with the answers given at the end; no reference is made to the MANEB-set assignments and projects to be done during this time.

In the colleges we found no assessment policy at either the departmental or the institutional level. Examinations Committees exist, but under MIITEP they do not seem to function. This is an intrinsic flaw in the implementation of the course. Tutors are not required to keep any progress records for students; individuals give exercises and tests at their own discretion. Not one tutor was able to produce documentation of any kind showing that students’ progress was being tracked. In their defence, some said they could tell students’ progress from the extent to which they participated in class, but this is implausible given the number of students involved.

Students acknowledged that some individual tutors give and mark exercises and even tests. They found this very helpful, including the remarks made on these exercises. There were also reports of tutors who had never given an exercise or test; examination of student notebooks confirmed this disparity between departments, and even within them. Occasionally, it was said, a department will give the whole cohort a test modelled on the end-of-residential examination. That examination is the only assessment which is formalised. Students dread it, which negatively influences their learning habits, encouraging them to demand notes, to memorise, and to base their studies on past examinations.

2.5.2 Summative Assessment

We carried out an analysis of the written assessment instruments, using an opportunity sample of final exam papers and project requirements for Cohort 1, together with assignment questions for Cohort 1 and 5. As we did not have access to marking schemes or example scripts it was difficult to know exactly what kinds of answers were required. We looked at the coverage of the syllabus, the cognitive demands made, the extent to which the papers focussed on different domains of knowledge and skill, and finally tried to evaluate the relevance of the instruments to the wider aims and objectives.

2.5.3 Exams

The final exam papers followed a common pattern: one third of the questions tested subject-specific content knowledge and two thirds tested pedagogic content knowledge, focussing on methods. Most questions were variations on the short-answer format, requiring the student to write between one and five lines for between 1 and 10 marks, though some subjects required short essays. The cognitive level demanded within the content section was predominantly recall of knowledge or simple comprehension, though in the pedagogic section there were more apparent examples of application, such as ‘draw up a lesson plan on x’. Most of the exams were based closely on the material in the Handbooks. It appears that the end-of-residence tests followed a very similar pattern.

2.5.4 Assignments

Students complete one assignment in each of the 12 subjects during their School-based Training. The formats are identical insofar as the students have to choose one question out of three. Some subjects ask for a structured essay format in which it is indicated what should be covered and how many marks are given for each point; other subjects set out structured questions.

All the topics are covered in the Handbooks, usually but not always in Books 4 and 5; in some subjects all the needed information is given in the units, so that the student only has to copy or paraphrase the text; in others they need to look more widely through the handbooks and/or consult documents relating to the primary school curriculum; occasionally they would need other library sources. In most subjects the focus is on content rather than pedagogic knowledge. Overall the cognitive demands appear to be low, requiring students to find and report information at a fairly simple level of comprehension, with some application where pedagogical knowledge is being tested.

2.5.5 Projects

In four subjects - Creative Arts, Music, Physical Education and Religious Education - the terminal exam is replaced by a project, carried out during the School-based Training period. These projects follow a similar format: students choose one option out of three and write an 8-10 page report on it, following detailed guidelines on both content and structure.

Analysis produced some rather unexpected results. In some ways, these appear far more demanding than the terminal exams, requiring a wide variety of physical and cognitive skills. Examples are: to learn to drum, or to make clay models; to develop a personal programme to enhance football or netball skills, or to organise a community service project; to carry out local research into traditional dances, or ‘spirit possession’ - most of which seem to require a wide range of cognitive, personal and professional skills, including research, for which the college syllabus provides little or no training. There are some anomalies: none of the tasks are directly related to the students’ work in the classroom, and they are assessed merely by written report, with no apparent requirement to produce artefacts, or demonstrate acquired skills.

For both assignments and projects, it was noticeable that the three questions often differed considerably within a paper, both in cognitive demand and in the domain of knowledge addressed, so that students who chose different options were being assessed on different things. Since only one assignment/project is done during the course, this must reduce not only the validity and reliability of the instrument, but also equity as far as the students are concerned.

Although there is uniformity in instrument format across subjects, this hides some substantial discrepancies in content validity, coverage of domains of knowledge, and level of cognitive demand. Below we give some examples of differences between subjects, which are in some ways related to the different approaches outlined earlier.

2.5.6 Foundation Studies

The exam was different from the others in that it used multiple-choice questions, ‘true/false’ items, and ‘filling in blanks’, as well as a short essay. This format allowed it to cover the syllabus widely, but apart from the essay the cognitive demands were very low (over 75% of the items demanded only recall of knowledge), the quality of the test items was very poor, and the relevance of many of the items to the teacher’s professional understanding and competence was very questionable.

For the assignments, there were remarkable differences between those set for Cohort 1, which required students to bring together ideas from several sections of the syllabus and apply them in new ways to their own or an imaginary school, and those set for Cohort 5, which could have been answered simply by referring to specific units in the Handbooks. We have no idea why this should have been so; other subject assignments do not appear to have changed their approach so radically between the two cohorts.

2.5.7 English

The exam papers attempted in the content section to test students’ own knowledge of English, though this is hardly touched on in the Handbooks; some of it may have been quite challenging to these students. The questions did not cover much of the syllabus, but the items were well constructed and relevant to the classroom. Some of the questions appeared to require both real understanding and application, but others could have been answered by reference to examples given in specific Units.

Some assignment questions required the students to work with the pupil textbooks and teachers’ guides. Though many of the questions appear to have practical relevance, students were not asked to apply the ideas to their own classrooms and report back, which would have been a much more valid test of their skill than simply describing ‘the steps taken to teach x.’

2.5.8 Maths

The exam paper had reasonably good coverage, and the quality of the items was judged good. The cognitive demands appeared quite high, and in some items the level of mathematical understanding went beyond what had been taught in the Handbook. In both the assignment and the exam paper, some attention was given to testing students’ knowledge of learners with respect to mathematics, e.g. an understanding of common misconceptions, which increases the relevance of these tests. However, these instruments, like the maths syllabus, use complex language about maths, which may increase the level of difficulty for students with poor linguistic skills. Many students reported problems with maths.

2.5.9 Science

Here the imposed exam format was particularly unfortunate: most of the science syllabus is about content, yet two thirds of the exam questions had to be on pedagogy, so coverage was poor. The cognitive level demanded was mainly recall, particularly as the items apparently requiring comprehension or application often used examples from the Handbooks, which could well have been simply remembered. Similarly, the assignment items could all be answered by summarising or paraphrasing information from the Handbooks.

In sum, this analysis suggests that the current MIITEP assessment instruments test only a narrow range of subject-specific objectives, rather than the general aims and objectives of the programme as a whole. Written exams are inevitably poor vehicles for testing broad competences, but the school-based assignments and projects could have offered opportunities for real application and for assessing the students’ ability to integrate theory and practice. Instead, they were used simply to test the knowledge contained in the self-study Handbooks, as in traditional distance education, and in some cases the instruments were technically defective. While the projects are interesting, they do not seem very suitable for assessing professional practice. The analysis shows particularly how compartmentalised the course is; at no point do the students have to bring together their knowledge in an integrated and holistic way. The assessments may be closely matched to the content and to the teaching materials, but they are ill-suited to evaluating whether this programme is turning out ‘effective’ teachers, according to the broader criteria given in the aims.