Teaching is a hard job, a noble professional and personal commitment. Just like any other profession, its performance is evaluated on a constant basis. In the academic world, one of the most common ways is through course evaluations completed by students at the end of each semester. How effective teaching is can be determined via formative and summative assessment. Formative assessment of learning (directly) and of teaching (indirectly) happens continually throughout the course. This is what course design is about: structuring the delivery of instruction and aligning every instructional chunk with an appropriate assessment technique in order to measure the level of knowledge and skills to be acquired. Summative evaluation is tied to the course learning outcomes and reflects overall student academic achievement per course (on a local scale) and per degree (on a global scale).
On the one hand, test scores and grades are an indicator of student success; on the other, formal feedback (course evaluations) provides the learner perspective on the strengths and weaknesses of the instructional strategies applied within a course or a program. These two indicators cross-check each other and reflect results from actions already completed. This is summative evaluation of learning and teaching.
What about formative evaluation? Teachers assess students’ learning progress as part of their pedagogy. What is not widely adopted is direct evaluation of their own teaching efforts during the run of a course. This could be done by cross-checking results from self-assessment against results from surveying students. Doesn’t it make sense to evaluate how good a teaching practice is, just as we evaluate how well learners master a skill as they complete an assignment? The focus has always been on the direct result of student performance, which is the main indicator of teaching mastery. As a consequence, mandatory formative self-assessment of teaching in class never gained much ground until recently.
Many subject matter experts find out which teaching techniques work and which don’t through their professional experience and in communication with students and colleagues. Over time, the effectiveness of certain instructional strategies stands out and defines SMEs’ teaching methods and preferences. Sometimes, however, strategies that were once implemented successfully no longer work the way they did initially. Faculty may not notice the negative impact until after the end of a course. Or, as happens more and more often nowadays, proactive teachers who implement innovative pedagogical techniques and educational technologies need feedback prior to the overall course evaluation in order to see on the go what does not work as expected and how it can be modified before it is too late.
One way of implementing formative classroom assessment is to use a midway survey. It is a preferred technique among educators who like the informality and immediacy of the feedback they get from students. This method works in two ways: first, it addresses the quality of learning and teaching by determining whether and how well the course design is aligned with the actual learning process. Second, it addresses students’ expectations and concerns. A midway survey provides a channel of communication that may reveal otherwise undetected learning gaps; it gives the teacher an opportunity to identify patterns of logistical or other problems in class that can prompt her to modulate her role as a leader and mentor. The attached template is an adapted example of a midway survey created by Dr. Wayne LaMorte, who teaches both at Boston University Medical School and in the BU online Health Communication program.
Another way of assessing the effectiveness of teaching techniques is a survey with open-ended questions, such as the “critical incident questionnaire” developed by Dr. Stephen Brookfield. It is an anonymous survey that students complete at the end of each class. At the next class, the professor responds to the concerns students have voiced about their learning, or about events in the previous session.
A third option is Robert Marzano’s Exit Tickets, an instructional strategy successfully applied as formative classroom assessment. He developed four exit ticket prompts for constructive feedback:
1) Students’ level of understanding in class: “How would you rate your level of understanding of today’s learning?”
2) Students’ level of effort: “How would you rate your effort in class today? What could you have done differently to help yourself learn better?”
3) Focus on instructional strategy effectiveness: “Which activity did you like the most and which the least in today’s class? Why?”
4) Open communication to the teacher: “What should I [the teacher] be doing differently to improve your understanding of the content?”
Exit tickets are immediate, more informal feedback than course evaluations; they can be used several times a semester or at the end of every class. Completion of the four questions (tickets) is required and can even be counted as class attendance. The exit tickets can be given in any format, e.g., post-it notes on a wall, a one-page paper questionnaire, or an online survey using the free online survey tool Padlet.com.
Exit Tickets provide faculty with valuable insight into students’ meta-cognitive processes. By aligning teacher and student viewpoints, faculty can improve their lesson plans and gain experience sooner and more effectively than they would by analyzing course evaluations post factum. An interesting example was given recently at the Teaching Professor Conference in Boston by presenters Deborah Theiss and Angela Danley, two teachers from the University of Central Missouri. They emphasized the importance of transparency of applied pedagogical methods and of the connection between learning goals, learning objectives, and learning activities. Teachers, they said, should not only focus on the learning processes and steps but also provide more clarity on the overall learning goals: show students the big picture, share with them more than the immediate learning goals, and inspire them to expand their own learning aspirations and efforts. They gave an example of the exit ticket that asks students to self-evaluate their effort. By “effort,” these two teachers meant knowledge application as an end result; for students, however, effort simply meant the time spent on a task. Such survey outcomes can be an eye-opening way for faculty to improve their teaching approaches on a granular level, which may ultimately affect their overall teaching expertise in a positive way.