Take out your pens or laptops; it's time for students to unload the grievances they've built up all quarter long.

Towards the end of the quarter, the Evaluation of Instruction Program gives students a survey that asks them to share their experience of a course. Two sections within the survey ask students to rate the instructor based on a given scale. Another section asks an open-ended question about the instructor’s strengths and weaknesses.

Evaluations are intended to be a teaching tool, said Kathy Komar, acting co-director of the Office of Instructional Development. The office is meant to help the instructors learn from their students about the learning environment and their teaching methods.

Constructive criticism comes in many forms, but numerical ratings do little to help instructors improve because they can be interpreted in so many ways. The comment section, meanwhile, is wide open to bias because of its lack of specificity. EIP can improve the evaluations by expanding the open-ended portion with more department-specific questions and cutting back the number of questions that ask for instructor ratings.

EIP checks with faculty to see whether the data it provides is useful, Komar said. Data points let the program see whether something should be explored more deeply, and instructors who receive uneven feedback are encouraged to seek assistance at the OID.

Evaluations are a form of reflection, which is “essential for an educator to be effective.” Genuine constructive criticism is one of the most valuable ways teachers can improve their craft.

A major portion of the evaluations tries to obtain constructive criticism by asking students to rate their instructors numerically, as a measure of their effectiveness. But the numerical ratings make the evaluations far too standardized. Standardized feedback strips out the in-depth detail of what students actually experienced in the course, and constructive criticism doesn't come from standardized responses.

Moreover, standardized responses can be interpreted differently from person to person, causing confusion about what should actually be done to improve the course.

Many instructors agree that the meaning of the numbers isn't clear enough to give them a coherent sense of what improvements to make.

“I’m a little bit more skeptical of the numbers,” said Caitlin Benson, UCLA graduate student and teaching associate in the department of English. “I don’t find myself looking at numbers and knowing what to do about them. When I get comments there’s a reason or explanation for why they felt the way they did.”

Feedback for instructors can be improved through different means of evaluation. Instead of asking students to rate instructors on a scale, EIP should add more questions that push students to seriously analyze their experience in the class.

Replacing the ratings with department-specific questions would keep the time needed to complete the evaluations roughly the same. Students would have no reason to object, because the survey wouldn't become any longer or more painful to endure.

And while ratings are hard for instructors to interpret, written prompts would give them real constructive criticism.

This suggestion, however, raises another problem: if the open-ended questions are left as is, biases can easily creep in. When instructors receive biased feedback, they lose the opportunity for constructive criticism, which stalls their improvement.

“People somehow feel like they can comment on women’s (teaching assistants’) clothing, hairstyle and makeup,” said Christopher Mott, TA coordinator in the English department. “That has nothing to do with their teaching.”

Biased responses in evaluations can, for example, contribute to a lack of diversity among university faculty.

That downside could easily be addressed by adding more department-specific questions that narrow the scope of possible responses. Students should be prompted with questions tailored to the department, because these would help the instructor or department know exactly what to improve.

Department-specific questions would also eliminate the opening for biased answers based on characteristics such as gender, so responses would no longer center solely on the sex or gender of the instructor.

Biased comments could be rooted out if evaluations narrowed what they ask for, compared with the broad strengths-and-weaknesses question they pose now. Simply asking for an instructor's strengths and weaknesses leaves room for all sorts of prejudice to come into play. Department-specific questions, by contrast, would force students to think about the coursework and material rather than the way an instructor dresses.

Examining the biases within UCLA's evaluations is currently under discussion, Komar said.

Evaluations are meant to deliver constructive criticism, and that can only come from open-ended questions. Open-ended but department-specific questions would close the door on gender bias in evaluations. And reducing the number of questions that ask students to rate their instructors would keep the evaluation the same length, sparing students any extra burden.

A more in-depth understanding of students’ experiences campus-wide can only enhance your experience in the classroom. After all, evaluations are meant to be a teaching tool.

Sandra Wenceslao is an Opinion columnist.
