I briefly talked with members of the Finance class (BUSADMIN 765) last night. They raised some interesting points, especially the desirability of a consistent level of assessment throughout a course. In particular, they expressed a concern that on-course assessments should be at the same level of difficulty as the final (examination) assessments.
Putting aside the difference between formative assessments (those designed to help students develop; e.g. most on-course assessments) and summative assessments (those designed to assess a student's level of achievement; e.g. examinations), the call for consistent difficulty seems reasonable at first blush. However, it is not always desirable (or even achievable).
For example, let’s look at the strategy course (BUSADMIN 768) that I taught earlier in the year (targeted at the same type of students). In that course, where assessment took place each week, I clearly indicated that as students became familiar with the type of assessment (participation in case discussions) the bar would move up each week. Indeed, in such strategy classes, I always tell the students that what might be considered good work in the first week is likely to be considered only average or poor by the standards reached at the end of the course. In such a class, as students become more accomplished at strategising (and at getting their point across), our expectations (and standards) rise.
However, this is not always the case. With different types of assignments, say essays and presentations, some students will be better at one type than the other; this can be related to their preferred learning or communication style. Consequently, a student's performance might vary across different types of assessment. This is often true in examinations, where the examination has a different structure from a take-home assignment. If someone has a more reflective learning style, they may not do well in the time-limited format of an examination.
There is also another factor that needs to be taken into account: it is hard to design assignments that are reliably “difficult”. For example, when designing a multiple-choice question (to be part of a test bank), I will often trial it and analyse the results over several iterations and hundreds of students. I use TestGraf to perform the analysis and ensure that the question works in the intended manner. Even with all my experience in writing multiple-choice questions, I will still (occasionally) produce one that does not work in some way. For example, I have in the past accidentally produced questions that high-performing students get wrong while low-performing students get right. Clearly not the desired outcome.
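To make the idea concrete, here is a minimal sketch of one classical way to spot such a misbehaving question. This is not TestGraf's method (TestGraf uses nonparametric item-response analysis); it is a simpler point-biserial discrimination check, with entirely hypothetical response data. A question that strong students get wrong while weak students get right shows up as a negative correlation between the item score and the total test score.

```python
# Classical item-discrimination check (illustrative only, not TestGraf):
# the point-biserial correlation between a 0/1 item score and the total
# test score is negative when weaker students outperform stronger ones
# on that item -- exactly the failure mode described above.

def point_biserial(item_scores, total_scores):
    """Correlation between a dichotomous (0/1) item and the total score."""
    n = len(item_scores)
    mean_t = sum(total_scores) / n
    sd_t = (sum((t - mean_t) ** 2 for t in total_scores) / n) ** 0.5
    p = sum(item_scores) / n  # proportion answering the item correctly
    if sd_t == 0 or p in (0.0, 1.0):
        return 0.0  # no variance: the item cannot discriminate
    mean_correct = sum(
        t for i, t in zip(item_scores, total_scores) if i
    ) / sum(item_scores)
    return (mean_correct - mean_t) / sd_t * (p / (1 - p)) ** 0.5

# Hypothetical data: rows are students, columns are items (1 = correct).
# Item 3 is deliberately broken: only the weakest students get it right.
responses = [
    [1, 1, 0],
    [1, 1, 0],
    [1, 0, 1],
    [0, 0, 1],
]
totals = [sum(row) for row in responses]

for j in range(3):
    item = [row[j] for row in responses]
    r_pb = point_biserial(item, totals)
    flag = "  <- review: negative discrimination" if r_pb < 0 else ""
    print(f"item {j + 1}: r_pb = {r_pb:+.2f}{flag}")
```

On this toy data, items 1 and 2 discriminate positively while item 3 comes out negative and gets flagged for review. In practice one would want hundreds of responses, as noted above, before trusting the statistic.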
All this means that assessment setting is not an exact science – there are a number of things that may, intentionally or accidentally, get in the way of providing ‘consistent’ assignments. Most of the time this is managed in the marking processes across the whole range of on-course and final examinations. But sometimes it results in students feeling “mistreated” by assignments. Usually, though, when this happens the final spread of marks is still sensible – neither the examiner, nor the assessor, nor the Head of Department who signs off on the final grades would do so if things were wildly out.