Peer Observations – Adams Academy
This month in Adams Academy we are chatting about peer observations of teaching practices. The topic is especially relevant for me because we are having similar discussions in another faculty group I am part of.
My college has established recommendations for departments to conduct peer reviews, and now the departments are trying to figure out what observations look like for them. One purpose of peer observation is to provide feedback that supplements student reviews (SIRS); another goal is to increase discussion of teaching-related topics among faculty and to help promote a culture of teaching excellence at the university.
The faculty discussions I have been a part of are strikingly similar to the main points of the articles we read:
- Teaching Academy, University of Wisconsin-Madison. Observing Teaching.
- Chism, Nancy V. (1999). Setting Up a System for Peer Review. In Peer Review of Teaching: A Sourcebook (Chapter 2). Bolton, MA: Anker.
- Cosh, Jill (1998). Peer Observation in Higher Education – A Reflective Approach. Innovations in Education and Training International, 35(2), 171–176.
First is the dilemma of determining whether a peer observation is truly an observation or whether it is evaluative. This distinction can matter a great deal to faculty. An observation is simply a description of what was done in the classroom, whereas an evaluation gives a peer the power of judgment. Depending on the colleague's opinions of (and/or biases about) teaching, content, or the faculty member being reviewed, an evaluation may not be fair. On the other hand, as Cosh (1998) points out, peer evaluations sometimes all end up gushingly positive, which gives them little value.
Closely related to this distinction is determining the audience of the review. Is the information collected being returned to the instructor as a formative assessment for growth and professional development? Or is it being submitted to a chair or dean as part of an annual or promotion review? The focus of the observation and the feedback generated can vary greatly depending on the overall purpose. I believe there should be a developmental piece to these reviews. Faculty could be given the opportunity to discuss with their observers areas of instruction where they would like suggestions for improvement. Observations are perfect platforms for discussion, feedback, and reflection. However, submitting critiques of that nature to administration – pointing out areas that could be modified and giving constructive recommendations – could be interpreted as evidence that the faculty member is not doing a sufficient job in the classroom, even though that is not the intent of the feedback at all.
Time commitment is also a major issue. To conduct a truly successful peer observation, the observer should have at least some training in the rubric being used. In a best-case scenario, pre- and post-observation meetings with the faculty member would also occur; combined with the observation itself and the time needed for a write-up, a peer observation becomes quite the task. Add the fact that a single observation rarely gives a full picture of an instructor's teaching, so best practice calls for multiple observations, and peer observation becomes practically a full-time job.
These are just some of the main points we have been discussing and that I expect to chat about in Adams Academy this month. There are many others (for example, how to create a peer evaluation rubric that fits all class formats – online, in person, lab, etc.), but even just these issues make it difficult to reach enough of a consensus to move forward. However, with the right group of folks and some determination, a successful peer review process can lead to positive outcomes for instructors and departments.