HOW CAN WE HELP?
Pre- and Post- Questionnaire Testing
One major part of monitoring and evaluation (M&E) is to assess the impact and sustainability of any project. Nowadays, many projects focus on capacity-building by providing training. While in the past, project managers counted the number of people trained, the number of hours of training provided, and similar metrics, more is needed today.
As part of the current, widely used, results-based monitoring approach, we want to understand how much people have learnt and if the results are attributable to the project in question.
One way to do this is by using pre- and post-testing. Testing is not a new way of assessing the level of acquired knowledge – everyone knows it from school or university.
One major difference from school testing is that we also want to assess the level of knowledge before the training started. Training should be based on needs assessments to understand where gaps exist. Thus, a pre-assessment helps us understand the current level of knowledge as our baseline. We therefore ensure that the training was indeed necessary. It is not helpful for us to pat ourselves on the back that everyone aced the post-test if, in actuality, they all knew the topics even before we started. To be sure we can attribute their knowledge to the training provided, we need a pre-test.
In general, the major questions we get asked are about how to administer this testing with as little effort as possible, since most NGOs have not planned a sufficient budget for testing their capacity-building activities, or they do not have enough staff members available to administer tests. This means that a proper test, graded by a human being, is out of the question. Additionally, the length and depth of training often do not justify such labour- and time-intensive testing.
The pre-testing should always be done right at the beginning of the training (before any content is discussed in detail) to understand the pre-test situation – referred to as the ‘baseline’. The post-test should be done after the training is finished. And often, because we obviously want to make sure we can track down our training participants and have them all in the same room, we do the post-testing right at the end of the training. This is okay, but if we have the opportunity, we should consider doing a post-test later – some weeks, or even months, after the training is completed. This way, we can understand the participants’ retention rate – meaning how much they have kept in their long-term memory, and not just in the short-term.
Testing can be done in many different ways. We will present three different options here, and you can pick whichever works best for the context, your experience, and your budget.
But before we dive in, we are going to discuss some more basics that we consider for testing (all for the sake of finding a good and efficient way of testing). This should be seen as the minimum, and you can always do more.
Testing should always cover all of the topics discussed, so the questions developed should ideally be put together by the trainer. To differentiate between levels of knowledge, there should be both easier and more complicated questions. Because the grading of text-based written answers can take time, we suggest using multiple-choice questions. Again, for the ease of grading, we advise always having only one correct answer out of a list of possible answers. You can, of course, make multiple-choice questions more advanced, but then the grading also becomes more advanced. To really understand the improvement, we suggest that you use the same questions in your pre- and post-test, but you can of course change the order in which the questions appear, etc. It is also important that you not discuss the correct answers to the pre-test.
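The advice above – reuse the same questions but change their order – can be sketched in a few lines. This is a minimal illustration with made-up questions; the question texts, options, and correct-answer indices are all hypothetical:

```python
import random

# Hypothetical question bank: (question text, answer options, index of correct option)
questions = [
    ("What does M&E stand for?",
     ["Monitoring and Evaluation", "Management and Execution",
      "Measurement and Estimation", "Mapping and Enumeration"], 0),
    ("What is a 'baseline'?",
     ["The post-test score", "The situation before the intervention",
      "The project budget", "The training agenda"], 1),
]

# Post-test uses the SAME questions, only in a shuffled order
post_test = questions.copy()
random.shuffle(post_test)

for i, (text, options, correct) in enumerate(post_test, start=1):
    print(f"Q{i}: {text}")
```

Because the content is identical and only the order changes, pre- and post-test scores remain directly comparable.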
Basic Testing Using Paper-based Questionnaires
You can develop a paper-based questionnaire that you can give to the participants to fill out at the beginning and at the end. These can be simple Microsoft Word documents you print and give to the participants. Make sure you provide these questions in a language understood by all the participants. In case of a mixed group, have more than one language available.
Once you have done your testing, we have provided an Excel template file to help you. The file features a note that guides you in detail on how to use it, along with five sheets where you can enter information. You only have to fill in four of the sheets; when done correctly, the fifth will automatically analyse your test results for you.
The first sheet for you to fill in is Questionnaire Pre, which holds your pre-test questions. We have given you space for 15 questions, with up to four answer options for each. You can have more, of course, but then you will need to make sure that all the formulas in the analysis sheet cover your additions (reach out to us if you need help).
Here you can see what the file looks like. We tried to make it quite easy and intuitive. In column C, you write your questions (while theoretically you do not need this, it’s good to have all the questions and answers at hand, as you might not remember a year or two down the line what the test was about, and this way, you have all the relevant information). Then, in column E, you write the answer options. We provide you space for four and would advise you to use all of them. Using fewer than four makes it easier for people to ‘guess’ correctly, and as Hans Rosling showed in his famous tests, you want to beat the chimpanzees (meaning the respondents just guess and get the answer right by chance). Then, in column F, you mark which of your given options is the correct one.
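To see why all four options matter, consider the expected score from pure guessing. This is a small back-of-the-envelope calculation, not part of the template:

```python
# Expected score from pure guessing: with four options per question,
# a 'chimpanzee' answers each question correctly with probability 1/4.
n_questions = 15
n_options = 4

expected_correct = n_questions / n_options
print(f"Expected correct answers by chance: {expected_correct} of {n_questions}")
```

With only two options, the chance level doubles to 7.5 of 15, making it much harder to tell learning apart from luck.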
When you have finished, it looks like this. We created a little colour code – the cell turns green – which helps you quickly confirm that the right answer is marked ‘correct’. Now proceed and do this for all your questions. Then you enter the results from the pre-test in the next sheet, called Pre-test.
Columns B, C and D do not have to be filled in, but if you do (especially C/gender and D/age), additional analysis will be performed. You can administer the test without knowing who provided which answers, or with all the details. Then, in columns E to S, a drop-down list lets you record which answer each person gave.
You can directly see here how well people did, as column T shows you the number of correct answers given, and column U shows you the percentage of correct answers. You can, of course, use fewer than 15 questions here, in which case you just enter the responses for the actual number of questions participants answered. You do the exact same for the post-test. In the Questionnaire Post sheet, we offer you the ability to pose new questions (just in case you decide not to use the same ones used in the pre-test).
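What columns T and U compute can be sketched as follows. The answer key and responses below are invented examples, not data from the template:

```python
# Sketch of the Pre-test sheet's columns T and U: count of correct
# answers and percentage correct per respondent. All data is made up.
answer_key = ["B", "D", "A", "C", "B"]      # correct option per question

respondents = {
    "R1": ["B", "D", "A", "A", "B"],        # 4 of 5 correct
    "R2": ["A", "D", "A", "C", "C"],        # 3 of 5 correct
}

for name, answers in respondents.items():
    correct = sum(given == key for given, key in zip(answers, answer_key))
    pct = 100 * correct / len(answer_key)
    print(f"{name}: {correct} correct ({pct:.0f}%)")
```

Entering fewer than 15 questions simply shortens the answer key; the percentage is always relative to the number of questions actually asked.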
Our three sample students have done well; everyone seems to have learnt something. You can now go over to the Analysis sheet to see how things compare between the pre- and the post-test. First, we look at the descriptive summary. Here, we are just checking the gender (yes, we know this may seem odd to check, but bear with us) and the age of the respondents to see if there were any changes.
For our three samples, all is well: the number of pre- and post-test respondents by gender matches, so we get green results in row 13. If you were to use different numbers, row 13 would look like this.
We added two extra people, so the post-test has five respondents, while the pre-test only has three. Row 13 now turns red, showing us that something does not match. This could be okay, as you may know you had some people missing from the pre-test, but it can also help you spot any errors you may have made in data entry.
The results summary shows us the average percentage of correct answers by gender and in total. It also shows the difference between the pre- and post-test. If you had a target, you can put it in row 29 (the grey area), and then row 33 will show you if you reached the target.
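The target check in rows 29 and 33 boils down to a simple comparison per group. The figures below are invented to mirror the example discussed next (only the “Female” group reaches the target):

```python
# Sketch of the Analysis sheet's target check: compare the pre/post
# improvement per group against a target. All figures are invented.
target = 30  # required improvement, in percentage points

avg_correct = {            # average % of correct answers: (pre, post)
    "Female": (40, 75),
    "Male":   (50, 70),
    "Total":  (45, 72),
}

for group, (pre, post) in avg_correct.items():
    diff = post - pre
    status = "target reached" if diff >= target else "below target"
    print(f"{group}: +{diff} points -> {status}")
```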
With a target of 30% – that is, the share of correct answers needing to rise by 30 percentage points – only the “Female” demographic achieved it. Although the others showed increases, they were below the target.
Lastly, the document provides you with an analysis by question, so you can understand where people struggled (if they struggled).
Instead of using a paper-based version, you can also administer the test by means of digital data collection. For this, we provide you with a simple template (which again can be altered and adjusted as you need). The template uses Open Data Kit (ODK), which works with many different platforms for digital data collection, such as ODK Aggregate, ODK Central, KoboToolbox, ONA, CommCare, SurveyCTO, and others.
The Excel file is already in the correct format for ODK, so if you do not already know how to use XLSForm well, you should stick to making changes only in the areas marked in green or orange. In the green areas, you write your questions and answers. In the sheet titled survey, we have again added 15 questions (you can change this number as you need). Each question needs one row.
We have taken some measures here to allow for less bias, ensuring that the order of the answer options will be randomised. This means that you should not write answers like ‘None of the above’, as this might be listed as the first answer option. Instead, you can write ‘None of the answers is correct’, for example.
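A quick illustration of why position-dependent wording breaks under randomisation. The option texts here are hypothetical; the point is that after shuffling, any option can end up in any slot:

```python
import random

# With randomised option order, 'None of the above' could be shown
# first, which makes no sense. Position-independent wording is safe.
options = [
    "Monitoring and Evaluation",
    "Management and Execution",
    "Measurement and Estimation",
    "None of the answers is correct",   # position-independent wording
]

random.shuffle(options)
for letter, option in zip("ABCD", options):
    print(f"{letter}. {option}")
```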
You need to write the answer options into the second sheet, choices. In column C, you write the answer options in the green areas, and each option gets its own row. In column B (orange), you indicate which answer is correct by writing a ‘1’, and all other options should get a ‘0’. We have inserted some lines to make it easier for you to see which answers belong to which question, but you can also see that by referring to column A.
If you need the survey in more than one language, you can check xlsform to see how to add further languages.
At the beginning of the survey, we have added two questions regarding gender and age, and at the end, we added a question summing up all the answers (that’s why the correct answer is a 1 and the wrong answers are a 0). This system only works when all questions are answered, so we have made them ‘required’ by writing ‘yes’ in column F.
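The scoring trick described above – correct options coded 1, wrong options coded 0, summed at the end – works like this (the selected values below are illustrative):

```python
# Sketch of the ODK template's scoring trick: each selected answer
# option carries a value of 1 (correct) or 0 (wrong), so summing the
# selected values yields the number of correct answers.
selected_values = [1, 0, 1, 1, 0]   # one value per answered question

score = sum(selected_values)
print(f"Correct answers: {score} of {len(selected_values)}")
```

This is also why every question must be required: a skipped question would contribute nothing to the sum and silently understate the score.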
There are three ways to use the ODK tool. You can upload it and use the web-link, which you send to participants who then fill the survey out themselves. This often works for educated people, such as in business incubators or for training government or internal staff. The second option is to load the questions onto tablets and pass these around during the training (of course, at the beginning or the end). Lastly, you can have interviewers ask the questions to people, and the interviewers record the answers – this works well for illiterate or semi-literate participants. But the last option takes a bit of time, especially for bigger groups.
In many countries, we have participants who are semi-literate or illiterate. While what we described above is one method to test people with literacy difficulties, it takes time. Here, we present you with a simpler/quicker version (at least regarding the time to get the results).
From our package, you need to print each of the question files (Q1 – Q15) four times (one for each of the answer options). Then, you need to print the back side of each question page with one of the four icons (apple, mug, pencil, sheep). So in the end, you have four double-sided pages for each question: always with the question (Q1, Q2, etc.) on one side, and one of the icons on the other side (see below for an example using the ‘apple’ icon).
[Example: Side 1 – the question (Q1) | Side 2 – the icon (apple)]
Now comes a bit of do-it-yourself arts and crafts: Cut out each of these 4cm-by-4cm squares (the black boxes give you a bit of guidance), which will result in 28 “apple” responses for question one (Q1). You will need to have a set of four answer options – each with its specific icon – for each question for each participant. If you want to be able to know which of your respondents answered what, you can now write the numbers 1-28 on the bottom of each set of the cards, where it says “Respondents: _____”. Otherwise, your test will be anonymous.
Continue in this manner until you have a set of answer cards for each of the questions for each of your 28 respondents. Yes, this will total 28 sets of 60 small, square answer cards. If you have more than 28 participants, you just need to repeat the above steps. When you do the printing, make sure that the double-sided printing works well and lines up right – this might take a few trial-and-error attempts to get right. After you cut out the answer cards, you can laminate them to make them more durable. Note: You need to laminate them AFTER you cut them out, as otherwise, the lamination will not hold when you cut them. Also, a cutting machine will be your friend here.
Now you need some organised way to store everything. We advise that you keep all questions and answer options separate (if you do not need to keep track of respondents). This means all apple answers of Q1 are kept together, but separate from the sheep answers for that same question, and so on with the other icons.
In the training (here it helps to have an assistant), you will give out the four answer options for each question (let’s start with Q1) – with the icons face up – to each respondent. If you want to track who said what, have a document that tells you which respondent gets which number and keep track of it as you pass each question out (be sure each person gets answers with the same respondent number on them each time).
Next, the trainer reads out the question, and the answer options. Most likely, they will have to do this more than once. Each answer option is associated with one symbol. You ask people to choose the symbol of their chosen answer option. For greater ease and less stress, have one container ready to collect the answer options selected by participants and a separate container for all ‘other’ options, and collect these after each question. It is very helpful to be able to organise them in peace after the training.
Another option is to give each person two containers: one for their chosen answers and one for the answers they did not choose. Then, you collect all answers from each person (without having to write ‘respondent’ numbers on the back). We would advise against this option: it works only if you are very organised, do not mix things up, and can keep track of which container came from which person.
Once you are back in the office, you can use the spreadsheet document introduced above to keep track of the answer options and analyse the training test.
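Back in the office, a quick way to tally the collected cards for one question before entering the counts into the spreadsheet. The card icons below are a made-up example batch:

```python
from collections import Counter

# Sketch of tallying the answer cards collected for one question.
# Each card shows one icon, and each icon maps to one answer option.
cards_q1 = ["apple", "apple", "mug", "sheep", "apple", "pencil", "mug"]

tally = Counter(cards_q1)
for icon, count in tally.most_common():
    print(f"{icon}: {count}")
```

The counts per icon go straight into the pre- or post-test sheet as the number of respondents choosing each answer option.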