Assessment Archives - Teaching and Learning Services, Carleton University

Blog: Their cheatin’ hearts: Assessment online and off
/tls/2020/blog-their-cheatin-hearts-assessment-online-and-off/
Tue, 30 Jun 2020
By Bruce H. Tsuji, Instructor, Department of Psychology

Although cheating is one of the “zombie ideas” that emerge repeatedly in conversations about online teaching (Lalonde, 2020), it is important to keep in mind that academic integrity violations happen in both online and offline environments. Yes, of course, cheating happens in proctored face-to-face assessments too! However, since many Carleton instructors will soon be dealing with online assessments, perhaps for the first time, I thought a list of some of the methods I have used to limit the problem could be useful.

The 15,928 students in my online courses at Carleton since 2014 have taught me much more than I have taught them. Be warned, however: just as there is no silver bullet for academic integrity violations in face-to-face environments, there is no single measure that will suffice online. The following should be viewed as an arsenal of tools for battling cheating, but it is unlikely that one can ever hope to eradicate the beast.

(Also, assessment more generally is a huge topic, so I will not deal here with remote proctoring or plagiarism-detection tools like TurnItIn; these tools are fraught with ethical, technical and possibly even equity issues that deserve a separate and broader discussion. I will also leave out any consideration of cuPortfolio and peer assessment, and I will not belabour the obvious fact that one’s learning outcomes should be paramount when planning assessments.)

First and foremost, talk about cheating in your course! Many students simply do not understand the many types of academic integrity rat holes they can find themselves in. Some specific examples of plagiarism or cheating for your particular assessments can be very useful. One of my students was very surprised when, in repeating one of my courses, her re-submission of a previous assignment resulted in an alleged academic integrity offence. Since that incident I always make clear that copying one’s own work is also considered plagiarism.

I include an honour code to which students must agree before any other elements in cuLearn are opened. The text I use looks like this:

I agree to abide by the following code of conduct in <CourseName>:

  1. My answers to questions, exercises and assignments will be my own work.
  2. I will NOT share questions, answers or assignments with anyone else or post them anywhere on the internet.
  3. I will NOT share course content (videos, lecture slides, or any other material) with anyone else or post them anywhere on the internet.
  4. I am aware of sanctions that may be used if I engage in any activity that will dishonestly improve my results in this course.

You must select an option below, then click Submit to continue:

  • I do not agree
  • I agree

When students click on “sanctions” they are taken to the university’s formal statements on academic integrity. While far from guaranteeing compliance, the honour code repeats and reinforces the seriousness of the topic. Unfortunately, many students (and many of us as well) have become anesthetized to End-User License Agreements (EULAs), and the honour code may end up in a similar mental category. Even so, its brevity, coupled with the fact that students may not progress further without clicking the appropriate choice, may help it to achieve at least some of its goals.

While there are many reasons why high-stakes testing fails to serve the needs of students (see, for example, National Academies of Sciences, Engineering, and Medicine, 2018), one more is that exams worth 40, 50, 60% or more of a final grade encourage academic integrity violations by increasing the potential benefit. Instead, design your course with many small formative tests and a relatively large number of low-stakes summative ones.

To illustrate, my one-semester Intro Psych course has 87 separate assessments! None is worth more than 15% of the final grade, and many are worth 0%. Most students, in calculating their subjective risk/benefit ratios, will be somewhat less inclined to cheat on so many assessments of such low individual value. Moreover, key assessments are “daisy-chained,” or linked together, so that students must complete one or more before moving on to the next. For example, each of my tests is daisy-chained to three quizzes, worth 5% each, which represent three chapters of content. Students must achieve a cumulative score of at least 50% on the three quizzes in order to gain access to the test. This also becomes an important internal check: if students perform surprisingly well on (for example) a set of three quizzes but surprisingly poorly on the summative test, we may be looking at a potential academic integrity violation. The daisy-chain also helps students to understand the formative quizzes as retrieval practice for my summative tests (TeachOnline, 2020).

Where possible, introduce written assignments that require some element of personal information. This may not be feasible in all programs of study, but in psychology I have asked for personal examples of common psychological concepts; personal introductions describing students’ past experiences or future aspirations; or the classic “ice-breaker” of having each student introduce another member of the class (this can still be done in a totally online class!). While assignments like these are not immune to academic integrity violations, the risk is the same whether they are assigned online or in a face-to-face classroom.

Although many have decried multiple-choice questions (MCQ), they continue to be a staple of assessment in large classes and particularly in online settings (Bates, 2019). In my own case, I have had online classes of over 1,000 students supported by two or three TAs, and assessment obviously relies heavily on MCQ. Regardless of your personal stance on this subject, there are a number of ways that MCQ may be designed to at least discourage cheating.

First and foremost, I have developed a test bank of over 6,000 MCQ for my online classes, and the questions are constantly being edited and refurbished. Furthermore, every time a student opens a test or quiz that draws on that bank, cuLearn automatically randomizes both the order of the questions and the order of the alternatives. As a result, even if two students open an exam at the same time, they are unlikely to see the same questions in the same order, or to find the correct alternative (a, b, c, d, etc.) in the same position.
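To illustrate the idea (this is a sketch of the general technique, not cuLearn’s actual implementation), drawing each student’s questions at random and shuffling the alternatives independently might look like this:

```python
import random

def build_exam(bank, n_questions, seed):
    """Draw a random subset of questions from the bank and shuffle
    each question's alternatives, deterministically per student."""
    rng = random.Random(seed)          # per-student seed
    drawn = rng.sample(bank, n_questions)
    exam = []
    for stem, alternatives in drawn:
        opts = list(alternatives)
        rng.shuffle(opts)              # correct answer lands anywhere
        exam.append((stem, opts))
    return exam

# A toy 100-question bank; real stems and alternatives would go here.
bank = [(f"Q{i}", ["a", "b", "c", "d"]) for i in range(100)]
exam1 = build_exam(bank, 30, seed=101)   # student 1
exam2 = build_exam(bank, 30, seed=202)   # student 2
print(len(exam1), len(exam2))            # 30 30
```

Because each student’s draw is seeded separately, two students opening the exam side by side will almost never see the same questions in the same order with the same option positions.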

Recognizing that online tests are de facto open-book (see Cormier, 2020), I hired graduate students a few years ago to help me write a set of 600 case-based MCQ: questions that ask students to apply course concepts to, or derive them from, a paragraph of additional information. They are intentionally written to be relatively resistant to “googling.” Currently, my online tests are conducted via an open-resource protocol: students are allowed to use their texts, their notes and a browser while writing them. My 15% tests normally have a 60-minute time limit (with the exception of PMC accommodations) and include 30 standard MCQ plus another 12 case-based MCQ. The standard MCQ are drawn at random from the pool of approximately 5,400 questions, and the 12 case-based questions are drawn at random from the 600-question pool.

Approximately 8,400 students have written my open-resource assessments since September 2017, and I have distributed a number of surveys to let students provide feedback on the concept. Students are not unanimously in favour of my implementation; the most frequent criticisms are the limited time allowed for the tests and the fact that many of the questions require a nuanced understanding of the content. My response to the time criticism is that I have asked several of my TAs to write the tests, and the 60-minute limit is sufficient if one does not look up each and every question. I also tell my students that many of the questions we must answer in job settings are time limited too, but if they have particular difficulties with English or with cognitive processing, I suggest they consider making an appointment with the PMC.

I am relatively unconcerned with the second criticism, since I have been able to prove to myself that the historical proportions of A’s, B’s, and C’s in my courses without the open-resource protocol are not significantly different from the proportions with it. That said, I am constantly editing the items in my test bank, with a particular focus on questions that fail to discriminate well (i.e., questions for which a correct answer is not well correlated with a high overall grade). I repeatedly survey my students and have also reassured myself that most of them find the open-resource protocol less stressful and less anxiety-provoking, and that they would prefer their other courses to adopt a similar policy.
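The discrimination check described here is, in effect, a point-biserial correlation between answering an item correctly and the overall grade. A minimal sketch follows (assuming scores are available as plain Python lists; this is not the author’s actual analysis code):

```python
import math

def point_biserial(item_correct, total_scores):
    """Correlate a 0/1 'answered correctly' vector with overall grades.
    Items with a low or negative value fail to discriminate and are
    candidates for editing or removal."""
    n = len(item_correct)
    mean_x = sum(item_correct) / n
    mean_y = sum(total_scores) / n
    cov = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(item_correct, total_scores))
    var_x = sum((x - mean_x) ** 2 for x in item_correct)
    var_y = sum((y - mean_y) ** 2 for y in total_scores)
    return cov / math.sqrt(var_x * var_y)

# An item that strong students get right and weak students miss
# discriminates well:
correct = [1, 1, 1, 0, 0, 0]
grades = [90, 85, 80, 60, 55, 50]
print(round(point_biserial(correct, grades), 2))   # 0.96
```

A value near zero, or a negative one, would flag an item for the kind of editing described above.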

Another key to my MCQ is the use of proper names. Once upon a time I used students’ names in my questions as a way of being “cute” and acknowledging students. However, I learned that if students are communicating with each other about particular questions, unique names help them in that endeavour. I now use the same generic names (“Jess” and “Taylor”) repeatedly throughout my MCQ to decrease the probability that someone might successfully ask, “what was your answer for the ‘Bruce’ question?”

Another relatively unpopular choice I have made is to restrict the number of cuLearn questions that are visible at any time and to turn off the ability to browse forward and backward in tests. I also turn off any indication of the correct answer for any given question. These three design decisions reduce students’ ability to take screenshots of my tests and then sell them on the internet (as I discovered to my chagrin in 2016). When students ask why they can’t see the correct answers, I tell them that the evidence is clear: if they are forced to discover the correct answer themselves, they will remember it much longer than if they are simply told. I also encourage them to review their assessments with their TAs or with me as a way of initiating a relationship with a member of the university teaching community.

Although it may not ameliorate cheating per se, I feel that providing a window of time for assessments is an essential element of good online teaching, helping to reduce stress and encourage a sense of agency in students. All of my quizzes (9 x 5%) are available from the beginning of the semester until the end. My tests (3 x 15%) are available for a 64-hour period (8 a.m. on Day 1 until 11:55 p.m. on Day 3) at an appropriate point in the term. These assessments are still time limited (10 minutes for my 10-question MCQ quizzes and 60 minutes for my 42-question MCQ tests), but the semester-long availability of quizzes also helps to reinforce their use as formative retrieval practice. Since I discuss the concept of distributed practice in the course (the idea that cramming is rarely as effective as memory practice that is distributed over time), this arrangement allows students to select quiz and test times to optimize their memory performance and to best accommodate things like part-time jobs, shared computers or internet, and the challenges of participation from different time zones. One feature I have not yet implemented is asking students to explicitly set their own quiz and test times as a way of honing their planning skills; I will try to do that sometime in the future.

Finally, there are many other types of assessment that may help mitigate academic integrity violations in online courses, but I have focused here on those most amenable to large classes and their attendant marking challenges. It is also important to note that many of the reports and logs available in cuLearn can be of great assistance. I make my students aware that I know the IP (internet protocol) address from which they access the course, the time of day, how long they were online, and how many times they “touched” the course over the semester. Although these data are rather blunt instruments, they give me a better perspective on student behaviour than I can ever achieve in a face-to-face classroom.

This brief post barely scratches the surface of the topic, but I hope you might take away one or two ideas. Please let me know what you think at bruce.tsuji@carleton.ca.

Best of luck!

-Bt.

Interested in contributing to our blog? Please email tls@carleton.ca to find out how.

Develop your skills in creating multiple-choice assessments
/tls/2019/develop-your-skills-in-creating-multiple-choice-assessments/
Tue, 02 Jul 2019

Which of the following best describes Zuul crurivastator?

  1. The Gatekeeper in the 1984 film Ghostbusters.
  2. A close friend of Gozer, the Gozerian.
  3. A demigod worshipped by the Mesopotamians.
  4. It’s a swear word that the ancient Romans used when banging their shin against a marble column.
  5. A new type of ankylosaurine dinosaur.

At our Multiple-Choice Retreat on July 24, we’ll reveal the answer to this question – a question that has challenged and puzzled individuals for minutes. 3M teaching fellow and multiple-choice expert, Dr. Anthony Marini, will also help you develop your skills in writing and refining multiple-choice questions.

The hands-on retreat will explore a variety of topics, including:

  • Best practices in item writing
  • Techniques for constructing items that target higher cognitive levels of learning
  • Using item analysis results to refine and enhance your tests

Lunch will be provided.

Top 10 tips for designing online assessments
/tls/2019/top-10-tips-for-designing-online-assessments/
Wed, 13 Mar 2019
By Nathan White, First-Year Master of Journalism student, Carleton University

At first glance, online assessments might seem more daunting than traditional papers, tests and final exams. But they don’t have to be. Whether you’re looking at using formative assessments to assess progress throughout the course, or summative assessments to evaluate competence and the achievement of course outcomes, online assessments provide an avenue for you to expand your existing teaching toolkit.

The EDC’s educational technology and instructional design teams have seen it all when it comes to assessment structures. Educational Technology Development Coordinator, Kim Loenhart, and Instructional Design Supervisor, Maristela Petrovic-Dzerdz, have worked with many instructors to incorporate online assessments into their courses. Below is a compilation of their top tips to help you get the most out of using online assessments.

  1. Assessed online doesn’t always mean “done” online – Online assignments can include experiential learning elements, such as conducting interviews, visiting museums or doing experiments, and then uploading the results online. If you want to brainstorm creative online assignment options, schedule a consultation with an instructional designer at the EDC.
  2. Explore your online tool options – For formative assessments, this could include quizzes, polls, discussion boards or other feedback tools. Summative assessments can incorporate everything from Kaltura Capture videos for visual presentations of research or ideas, to BigBlueButton for group work and presentations, to cuPortfolio for documenting the learning process and reflecting on final outcomes. The EdTech team can help you set up and learn how to use the tools. Schedule a consultation at edtech@carleton.ca.
  3. Low-hanging fruit – Multiple-choice quizzes are popular and efficient. As a formative assessment, they can be a valuable learning tool, providing immediate, automated feedback. To be effective, though, multiple-choice questions have to be designed following research-based best practices. Meeting with an instructional designer or signing up for one of the EDC’s multiple-choice retreats can sharpen your skills in this area. If you’re looking for help with the quiz setup, such as quiz settings, PMC overrides, or managing the question bank, you can consult with a member of the EdTech team.
  4. Clear connections between assessments and course learning outcomes – Whichever assessment you use, consider the course goals and learning outcomes. Tailor assessments to ensure students demonstrate the required knowledge and learning. This may mean going beyond a standard midterm or individual final paper.
  5. Be up front – As with all assessments, assignment instructions need to be clear in every aspect: submission requirements, deadlines, marking guides, etc. Make sure students know what’s expected and give them assessment criteria they can use as a final quality check before submitting. Pro tip: Keep online assignment instructions in one place in your cuLearn course page to avoid confusion if you need to make a change.
  6. Provide choice whenever possible – We all love options. When students have a choice, they might invest more in their work because they are selecting based on their interests, preferred learning modes and prior knowledge. Choice can mean selecting from a group of topics, or from different assignment requirements or formats. The cuLearn choice tool provides an easy-to-visualize way for students to indicate their selections.
  7. Model competence – Students love examples. When they don’t know where to start or how to approach an assessment, they may delay starting until it’s too late. Show them good work, or demonstrate the process you expect them to take to help them overcome that “getting started” hurdle.
  8. Sharing is caring – Technology allows for easy sharing of individual and group work. Knowing that a submission will be visible to the class, and being able to see their peers’ submissions, improves student motivation and usually raises the quality of work.
  9. Group assignments – If one of your course learning outcomes is building and improving collaboration and organizational skills, consider online group assignments. They also lighten the marking load: you mark each group’s submission once, and every student in the group receives the same mark.
  10. Meet with an EdTech consultant or an instructional designer – If you’re trying something new, it’s never a bad idea to talk about the best online tools and strategies that could work for you. Consulting with a member of the EDC team before creating the assessments can help ward off any logistical or pedagogical issues that may arise. Plus, it provides a great opportunity to talk through assessments and solidify your purpose and desired outcomes. Email edc@carleton.ca to set up a consultation.
Assessment fundamentals workshop
/tls/2019/assessment-fundamentals-workshop/
Thu, 31 Jan 2019

Assessment is often one of the most dreaded parts of teaching and learning. But as challenging as it can be, designing sound assessments is critical to the success of your course.

Join us on Feb. 5 from 1-4 p.m. for a workshop that will enable you to:

  • Distinguish between various types of assessment on the basis of their purpose and place in learning
  • Discuss methods and tools for providing effective feedback to support student learning
  • Appreciate the role grading plays in communicating student learning and highlight techniques that can make that communication clear and accurate

You’ll leave this session with refreshed ideas on a range of assessment principles and purposes, and an appreciation of the importance of designing assessment strategies as one of the first stages of course design.

This session is intended for all participants who are involved in designing learning experiences, in any delivery format.

Brush up on assessment fundamentals
/tls/2018/brush-up-on-assessment-fundamentals/
Thu, 11 Oct 2018

Assessment is often one of the most dreaded parts of teaching and learning. But as challenging as it can be, designing sound assessments is critical to the success of your course.

Join us on Oct. 17 from 9 a.m. – noon for a workshop that will enable you to:

  • Distinguish between various types of assessment on the basis of their purpose and place in learning
  • Discuss methods and tools for providing effective feedback to support student learning
  • Appreciate the role grading plays in communicating student learning and highlight techniques that can make that communication clear and accurate

You’ll leave this session with refreshed ideas on a range of assessment principles and purposes, and an appreciation of the importance of designing assessment strategies as one of the first stages of course design.

Develop your skills in writing multiple-choice assessments
/tls/2018/develop-your-skills-in-writing-multiple-choice-assessments/
Thu, 12 Jul 2018

Which of the following is a sign that you’re NOT young anymore?

  1. You choose your cereal for the fiber, not the toy.
  2. You play connect the dots on your liver spots.
  3. Getting the mail is one of the highlights of your day.
  4. You sprinkle tenderizer on your applesauce.
  5. You look both ways before crossing a room.

At our Multiple-Choice Retreat on July 25, we’ll reveal the answer to this question, drawn from Ron Berk’s Test of Testwiseness, which has puzzled generations for years. 3M teaching fellow and multiple-choice expert, Dr. Anthony Marini, will also help you develop your skills in writing and refining multiple-choice questions.

The hands-on retreat will explore a variety of topics, including:

  • Best practices in item writing
  • Techniques for constructing items that target higher cognitive levels of learning
  • Using item analysis results to refine and enhance your tests

Learn more and register here.

Assessment Fundamentals
/tls/2018/assessment-fundamentals/
Thu, 05 Jul 2018

Assessment is often one of the most dreaded parts of teaching and learning. But as challenging as it can be, designing sound assessments is critical to the success of your course.

Join us at the EDC on July 18 from 10 a.m. – noon and get some refreshed ideas on a range of assessment principles and purposes, and an appreciation of the importance of designing assessment strategies as one of the first stages of course design.

This session is intended for all participants who are involved in designing learning experiences, in any delivery format.

Top 10 things to consider when designing assessments
/tls/2018/top-10-things-consider-designing-assessments/
Wed, 28 Feb 2018
By Bianca Chan, TLS staff writer

Assessments are often the bane of many students’ existence. But they can be just as taxing for the instructors tasked with designing them.

“Assessment is challenging for everyone – from novices to people with a lot of teaching experience. It’s always a challenge,” says Anthony Marini, senior teaching development associate at the Educational Development Centre.

Despite their inherent demands, Marini adds, assessments can be made into a simpler task if you know what to focus on. Below is a curated list of what instructors need to know about assessments.

1. The importance of assessments.

“The primary function of assessments is to provide evidence that learning has occurred, for both the instructor and the student,” says Marini. Assessment is not only the end goal; it also helps to shape what the course material will be and how it will be taught.

2. Clear understanding of learning outcomes is a must.

The first step when designing assessments is to have a clear understanding of what the learning outcomes are. Marini says the next steps are to develop an assessment process that best addresses those outcomes, and to create learning activities and content so that students can develop the necessary skills and knowledge.

3. Certain assessments are better suited to different outcomes.

The best form of assessment depends on what you want to achieve, Marini advises. If the content is broad, multiple-choice tests are ideal. “We’re trying to actually find evidence that learning has occurred over a vast area of content and so that format lends itself quite well to that,” he says. If you are looking for the ability to put ideas together or form arguments, short or long answer essays may be a better fit.

4. Feedback is key.

Marini says that feedback is absolutely essential to help students get to the next step, which, he adds, is the primary function of assessments in the first place.

5. Creating productive feedback.

Productive feedback is both actionable and understandable. In other words, use language that students will digest easily, and make critiques that are plausible to implement.

6. Checklists vs. scoring rubrics.

If what you are assessing falls on a continuum, or shows the development of a skill, then scoring rubrics are appropriate. With that in mind, Marini cautions that instructors often misuse scoring rubrics as checklists. “If you’re checking for absences, then checklists are the way to go,” he says.

7. Be vigilant about content validity.

Content validity is, in a basic sense, avoiding the situation where your students ask, ‘where did that come from?’ An assessment has content validity when it reflects the learning landscape that students were exposed to.

8. On the topic of online assessments…

When it comes to online assessments, Marini says the principles of assessment remain the same. With that in mind, multiple choice is widely and successfully used among the online community, where feedback can be immediate.

9. Multiple choice is not limited to recalling information.

It is possible to get into higher forms of thinking with multiple choice. Multiple-choice items can be designed to urge students to think through the correct answers and to construct a response that is more than simple recall of facts.

10. Assessing is a collaborative activity.

In an exercise that is often undertaken alone, Marini says that attending workshops and talking about assessment is one of the best ways to develop a stronger understanding of assessment and a robust sense of confidence.

To learn more about assessments, visit the EDC’s teaching resources page.

Explore assessment fundamentals at the EDC
/tls/2018/explore-assessment-fundamentals-edc/
Fri, 26 Jan 2018

Assessment is often one of the most dreaded parts of teaching and learning. But as challenging as it can be, designing sound assessments is critical to the success of your course.

Join us at the EDC on Feb. 1 from 1-4 p.m. for a workshop that will provide you with refreshed ideas on a range of assessment principles and purposes, and an appreciation of the importance of designing assessment strategies as one of the first stages of course design.

Learn more and register here.

Blog: High-stakes final exams: To give or not to give
/tls/2016/blog-high-stake-final-exams-give-not-give/
Wed, 03 Aug 2016
By Kevin Cheung, Associate Professor, School of Mathematics and Statistics

In some countries, graduation from a degree program requires passing a number of high-stakes exams. Examples of high-stakes exams include the SAT, International Baccalaureate exams, and many licensure exams. Music exams administered by the Royal Conservatory of Music are also high-stakes. These examples lie at the extreme end of high-stakes exams in the sense that the exam grade is all there is for determining success. In a course setting, an exam need only be worth 50 per cent or more to be considered high-stakes, though some might say that any test that causes anxiety is high-stakes regardless of the percentage of the final mark.

Much has been written on the pros and cons of high-stakes exams. Here, I would like to talk about one feature of high-stakes exams that can be considered beneficial in certain contexts: they require exam-takers to prepare themselves to perform on demand.

There are real-life situations where the ability to perform at an acceptable level on demand is required. Performing arts and competitive sports are prominent examples. In finance, health care, emergency response and similar fields, one often needs to have the required knowledge at one’s fingertips to make timely decisions. In these areas, high-stakes exams hold their place as professional requirements. Yet in many other areas, the pressure of high-stakes exams doesn’t seem beneficial. Does it make sense, for instance, for a calculus final exam to be worth 50 per cent or more of the final grade? After all, mathematicians take their time to think through math problems.

I feel that, at least for lower-year math courses, the answer is a qualified yes. An initial hurdle that must be overcome is that such exams must be designed to assess students’ knowledge and ability fairly and accurately. Then, such exams can serve as training opportunities for future high-stakes professional exams.

Also, high-stakes exams make students put in their best effort in preparation (or so I hope). As a result, students will have the opportunity, though under stress, to organize the material in a way that they have not done during the term. Of course, how long they retain what they have prepared is another topic for discussion.

Finally, high-stakes final exams could benefit instructors in the following way: if most students put in their best preparation effort, then the results of such exams can indicate how well the term’s work has served the students. Certainly, the results can be skewed by the cramming effect and test anxiety. But if the class as a whole underperforms significantly, then some soul-searching is in order.
