Year Two onward for Probationary Faculty: In this section, reflect on the data collected by the College of Marin from the previous academic year to identify your teaching strengths and areas for improvement within an equity lens. For evidence of teaching effectiveness, you need to consider in your reflection:
A. Student Feedback Form information, including written comments
B. Evaluation feedback from the Evaluation Team
C. Course-level and/or other relevant data from Planning, Research, and Institutional Effectiveness
In addition, you may consider:
A. Grading or assessments
B. Student letters or other feedback
C. Other evidence of teaching effectiveness
Student Feedback (Fall 2025)
Note: I received these evaluations during the third week of the semester and added this section on February 7, 2026, after my portfolio had been submitted.
Student feedback from Fall 2025 provides a consistent picture of my teaching across both lecture and laboratory courses. Students rated me highly on clarity of expectations, subject-matter expertise, inclusivity, enthusiasm, and care for student learning. This semester I taught two large Introduction to Biology (BIOL 110) lecture sections and four Introduction to Biology Lab (BIOL 110L) sections, and scores were stable across course formats and class sizes, suggesting that the strengths students identified reflect the intentional and consistent nature of my teaching approach rather than course-specific factors.
Quantitative Summary
Across 217 enrolled students, 37 responded (a 17.1% response rate). Overall means fall between 3.8 and 4.0 on a 4-point scale for every core teaching dimension:
- Clarity of expectations & grading: Mean = 3.9–4.0, SD ≈ 0.0–0.23
- Subject-matter expertise: Mean = 4.0, SD = 0.00
- Inclusivity & respect: Mean = 4.0, SD = 0.00
- Care for student learning and success: Mean = 3.9–4.0
- Encouraging participation: Mean ≈ 3.8
- Availability for communication: Mean ≈ 3.8
Qualitative Summary
While the overall response rates to my course evaluations were not as high as I expected, the responses themselves were consistent across all sections and reflect broader participation trends in large courses. In reviewing written comments, I was pleased (and in some cases, touched) by the words respondents left for me.
“I look forward to her funny lectures. They always boost my mood.”
I was struck by how frequently students described the classroom environment as welcoming and engaging. Many students highlighted my enthusiasm for the material, the clarity and accessibility of instructional materials, and my approachability as an instructor.
“Fantastic presentations, fun interactive elements.”
It was also extremely gratifying to read several comments that emphasized how students felt supported and comfortable asking questions, as one of the things I work most to achieve is a welcoming, inclusive educational environment.
“She is very understanding… if you’re stressed or overwhelmed she doesn’t pressure you.”
Students’ suggestions for improvement were constructive and pointed to a desire for additional opportunities for interaction or discussion, especially in the large lecture setting. Because I already use a variety of strategies to encourage this type of discussion, I can readily add more activities that help students connect with one another and increase participation, especially in my high-enrollment courses.
Reflection
As I mentioned above, while overall response rates were not as high as I would have liked, I was satisfied on the whole with the feedback I did receive. Based on this round of feedback, I want to take additional steps to encourage higher response rates. Last semester, I reminded all of my classes to complete the evaluations and let them know that I read them and find them genuinely helpful. Even so, only a modest number of students responded. In the future, I will be more intentional in explaining the value of student feedback and why these evaluations are important to me. In addition to asking students for their feedback, I plan to end class early one day during the evaluation period and give them that dedicated time to complete the evaluation (after I have left the room to ensure their privacy). I hope this will increase response rates and allow my evaluations to represent a broader range of student voices.
Methodology and Reflection on Student Learning Outcomes
Because Student Feedback Form data and written feedback from the Evaluation Team are not yet available for this review cycle, my reflection on teaching effectiveness focuses on Student Learning Outcome (SLO) data collected from my lecture and laboratory courses.
To assess student achievement, I analyzed all exams in both courses: BIOL 110 (lecture; two sections) and BIOL 110L (laboratory; four sections). Each exam question was mapped to one or more course SLOs.


I then entered individual student scores by question into the spreadsheet (see below), allowing me to calculate the maximum possible points for each SLO and each student’s earned points relative to that maximum. Using this approach, I converted SLO scores into percentages of the maximum possible points and assigned performance rankings based on the College’s established five-point scale. This method allowed me to examine patterns of student learning across outcomes rather than relying solely on aggregate exam scores or final course grades.
| % of Maximum Points Possible | Rank |
| --- | --- |
| 90–100 | 5 |
| 75–89 | 4 |
| 60–74 | 3 |
| 50–59 | 2 |
| 49 and below | 1 |
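
To make the calculation concrete, here is a minimal sketch (in Python) of the percentage-and-rank conversion described above. The question-to-SLO mapping, point values, and student scores shown are hypothetical and are included only for illustration; the actual analysis was performed in the spreadsheet linked below.

```python
# Illustrative sketch (hypothetical data): compute one student's percentage of
# maximum possible points per SLO and assign a rank on the five-point scale above.

# Map each exam question to the SLO(s) it assesses and its point value.
question_slos = {
    "Q1": {"slos": ["SLO1"], "points": 4},
    "Q2": {"slos": ["SLO1", "SLO3"], "points": 6},
    "Q3": {"slos": ["SLO2"], "points": 5},
}

# One student's earned points per question (hypothetical scores).
student_scores = {"Q1": 4, "Q2": 5, "Q3": 3}


def rank_from_percentage(pct: float) -> int:
    """Convert a percentage of maximum points into a 1-5 rank (per the table above)."""
    if pct >= 90:
        return 5
    if pct >= 75:
        return 4
    if pct >= 60:
        return 3
    if pct >= 50:
        return 2
    return 1


def slo_summary(questions: dict, scores: dict) -> dict:
    """Return {SLO: (percent of max points, rank)} for one student."""
    earned, possible = {}, {}
    for q, info in questions.items():
        for slo in info["slos"]:
            possible[slo] = possible.get(slo, 0) + info["points"]
            earned[slo] = earned.get(slo, 0) + scores.get(q, 0)
    return {
        slo: (round(100 * earned[slo] / possible[slo], 1),
              rank_from_percentage(100 * earned[slo] / possible[slo]))
        for slo in possible
    }


print(slo_summary(question_slos, student_scores))
# e.g. {'SLO1': (90.0, 5), 'SLO3': (83.3, 4), 'SLO2': (60.0, 3)}
```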
Here is a screenshot of a partial sheet for one of my lab classes, along with a link to the full spreadsheet, with student names obscured:

https://docs.google.com/spreadsheets/d/1ytGrtWB8eMBL5IwsCFBo4ddNT6HGtZt6A7XQuSHPOiU/edit?usp=sharing
After entering the scores into eLumen, I used the Results Explorer to view aggregated results for each of the courses I taught. Below is a brief discussion of those results.
Lecture Course (BIOL 110)
SLO data from my lecture course indicate strong student achievement across all assessed outcomes. In each SLO, the majority of students scored at the Proficient or Exceeds Expectations levels, with relatively small percentages demonstrating minimal achievement. When compared with aggregated data across all sections of the course, my students consistently performed at higher levels, particularly in areas related to cell biology, molecular biology, scientific reasoning, and energy flow in ecosystems. Very few students fell into the lowest performance categories, indicating that most students were able to meet core learning expectations.

Because this was my first term entering and analyzing SLO data in eLumen, I did not have strong expectations going in. One pattern that emerged was that SLO performance did not always align neatly with final course grades. In some cases, students who ultimately failed the course due to inconsistent attendance demonstrated relatively high SLO scores, likely because those scores reflect only the exams they completed earlier in the semester, before their attendance declined. Conversely, some students who remained enrolled throughout the term did not always demonstrate SLO performance that matched their final grades.
This discrepancy has prompted me to reflect more carefully on how assessment timing and course participation interact. Moving forward, I plan to be more intentional about balancing SLO coverage across exams, including setting a minimum number of assessment items per outcome. I am also interested in engaging in department-level conversations about how course content is mapped to SLOs, as I do not always find the current SLO structure to be the most intuitive representation of the course. In particular, I would advocate for an explicit SLO focused on evolution and natural selection, which are foundational concepts in biology and central to the course.
Laboratory Course (BIOL 110L)
SLO data from the laboratory course show very strong student achievement across all assessed outcomes. In most cases, 80–97% of students scored at the Proficient or Exceeds Expectations levels, with few to no students demonstrating minimal achievement. When compared with aggregated course-wide data, my students again performed at or above overall averages.

One minor surprise was that a few very strong students did not receive top scores on every SLO, even though their overall performance in the course was high. As a result, I plan to adjust my exams for better coverage of all SLOs. In particular, I will include additional questions or practical checks to ensure that Outcome #3 (“select correct kind of microscope”) is assessed as directly and comprehensively as possible.
As I reflect on this initial round of SLO analysis, I am also mindful of its limitations. I had not completed a similar assessment of individual students at my previous institution, and this semester’s exams were designed prior to explicitly mapping questions to SLOs. As a result, some outcomes were assessed by fewer questions than others. In future iterations of both lecture and lab courses, I plan to be more intentional about aligning assessments to SLOs during exam design and ensuring more balanced coverage across outcomes.
As additional data sources become available, including student feedback, evaluation feedback, and institutional research data, I will have a clearer picture that will help me further strengthen my teaching effectiveness and support equitable learning outcomes.