A culture of testing and assessment has come to dominate public
and higher education. The purposes of exams and the ways we give them have
evolved over time. I decided to contemplate my own experience giving exams as a
university professor. When I gave my first exam to undergraduate students years
ago, I followed the unwritten law that assessment entails providing students
with a set of questions or exercises that they answer themselves, without
access to books, the Internet, or their neighbors. By tradition, their answers
are scored, and the scores become part of the students’ academic records. I saw the exam
as students’ opportunity to demonstrate that they understand the course concepts
and can apply them in both simple and more advanced ways. I included questions
that required them to apply combinations of the concepts of the course in ways
not previously discussed, to assess their critical thinking skills. Typically
the students would cram the night before, often going without sleep. Months
later, at the end of the course, I asked related questions on the final exam,
only to find that many students failed miserably, even when they had responded
correctly the first time.
The next year in the same course, I allowed the students to
bring notecards filled with equations and data from their notes. I found that
as they worked to construct their cards, they reviewed their notes and became
more familiar with the material. However, they still crammed the night before,
soon forgot much of what they had learned, and many of them were unable to
solve problems that required critical thinking.
Since that year, my approach has continued to evolve. First,
I allowed the students to consult the Internet during the exam. Since my
purpose was to assess their individual abilities to use the information at hand,
I required that they not communicate with each other directly, through their
phones, or through the Internet. Their ability to connect to the Internet changed
the culture of my classroom. Students began to emphasize not the facts (which
were readily available to them) but their understandings of key concepts. I
found that their ability to address deeper, concept-based questions had
improved since my first years of teaching.
My most recent exams were not even confined to the
classroom. I am presently teaching a course in applied data analysis to
graduate students. We recently studied Fourier techniques to analyze the
“sizes” in time and space of signals in datasets obtained from measurements of
natural systems. I decided to try an approach to exams completely new to me. I
asked the students to find online a dataset relevant to a topic of interest to them. I
asked them to analyze that dataset using Fourier techniques of their choice,
and to explain their conclusions about the nature of the signals in their
datasets. I did not specify the questions they should address. About a week
later, they e-mailed me their results. Each student’s project was unique. I created a
simple rubric of my expectations in the context of their results, and I managed
to find a satisfactory way to generate a fair grade. I think I learned more about
their understandings of course concepts through this exercise than I would have
learned from a more standard exam format, and I was later able to address their deficiencies in the classroom.
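To give a concrete sense of the kind of analysis the exam asked for, here is a minimal sketch of a Fourier analysis in Python with NumPy. It is my own illustrative reconstruction, not any student’s actual work: the dataset is synthetic (a yearly cycle plus noise standing in for a measurement of a natural system), and every parameter in it is an assumption.

```python
import numpy as np

# Synthetic stand-in for a "natural system" measurement: a yearly cycle
# plus noise, sampled daily for four years (all values are assumptions).
rng = np.random.default_rng(0)
days = np.arange(4 * 365)
signal = np.sin(2 * np.pi * days / 365.0) + 0.3 * rng.standard_normal(days.size)

# The real-input FFT gives the spectrum at non-negative frequencies,
# in cycles per day for a sample spacing of one day.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(signal.size, d=1.0)

# Skip the zero-frequency (mean) bin, then locate the strongest peak;
# its reciprocal frequency is the dominant period of the signal.
peak = 1 + np.argmax(np.abs(spectrum[1:]))
dominant_period = 1.0 / freqs[peak]
print(f"dominant period = {dominant_period:.0f} days")
```

For this synthetic series the dominant period recovered is the 365-day cycle built into the data; a student would run the same steps on a downloaded dataset and then interpret what the recovered scales mean physically.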
My experience suggests that open-ended, student-driven projects
can provide both students and their instructors with a clear assessment of
their understandings and abilities. Exams that simulate the real world may be
harder to grade objectively, but I think they yield insights, beyond mere trivia, that really matter.