As I mentioned in my previous post, we collect feedback from participants on all our courses. We don't (yet) have a generic feedback form, so scenario-based courses ask how each scenario went, courses such as faculty development ask what we did well and what we could do to improve, and so on.
this article from the BBC.) In particular, the tick boxes for "pre-course administration", "administration during the course", "catering", etc. actually give us very little information. The majority of participants tick "very good" or "good", while the occasional "poor" or "very poor" goes unexplained.
Even specific questions such as "Was the duration of the course: a) Too short, b) About right, or c) Too long?" can be difficult to interpret. When people ticked "Too short", I wasn't sure whether they meant that the course on the day itself was too short or that they would have preferred a two-day course. (When I asked anybody who ticked "Too short" to explain what they meant in the comments section, it turned out they would have liked a two-day course; they didn't want to finish at 5:30pm, and they didn't mind the course finishing "early" at 4:45pm.)
Currently we also ask people to fill out the feedback on paper, which our administrator then transcribes into an Excel spreadsheet, quite a time-consuming task.
The temptation, then, is to ask just two questions at the end of the course:
1) What did we do well?
2) What could we do better?
Might these two questions get to the heart of the matter? Or would the lack of numerical data make it difficult to show funders that the courses are value for money?
I would be interested to hear from anybody who has cracked the feedback puzzle.