Helping Students Articulate Knowledge and Skills

Submitted by Leslie Madsen to the 2018-2019 Teaching Issues Writing Consortium

Students, particularly those in the humanities, arts, and social sciences, often struggle to articulate their knowledge and skills to prospective employers.

Your college’s career center may have worked with local employers to identify the skills they most desire in students. Boise State University’s Career Center, for example, maintains a list that includes, among other things, analyzing and interpreting information, collaboration, communication, problem solving, and taking initiative.

These are, of course, all skills students build through course assignments. Near the end of each semester, I co-create, with my students, a list of the skills they have built that semester. We then craft phrases they might use in résumés, cover letters, and interviews. Here are some examples from a recent women’s history course:

  • Located valuable sources when information was difficult to find
  • Conducted primary source research in analog and digital repositories
  • Collaborated with a diverse team on multiple iterations of a project
  • Pivoted a project’s focus when resources proved unavailable
  • Navigated ambiguity; can “think on my feet” when obstacles arise
  • Demonstrated persistence and resilience when identifying and learning new technologies
  • Set realistic goals and timelines
  • Learned who to ask, what to ask for, and how to ask for it
  • Built accessible digital resources

Most students wouldn’t consider a women’s history course vocationally focused, yet this exercise helped them emerge from the class confident they had transferable skills. Chances are your courses are similarly useful to students on the job market, but they might not realize it, let alone know how to describe the knowledge and skills they acquired.

Consider setting aside class time near the end of the term to help students brainstorm their skills so that they, too, can articulate them to potential employers.

Another option is to create an online discussion board where students can post; this could be ongoing throughout the semester or reserved for the end of the course.

Further reading:

A curriculum model for transferable skills development

Analysing student perceptions of transferable skills via undergraduate degree programmes

Humanities and social science degrees ‘develop key employment skills’

Dispelling the myth of the unemployable humanities major

A list of transferable skills undergraduates develop, from Marquette University

Submitted by:

Leslie Madsen – Director, Instructional Design and Educational Assessment (IDEA Shop), Center for Teaching and Learning, Boise State University

Included in:

2018-2019 Teaching Issues Writing Consortium

Revised by:

Judith Littlejohn – updated URLs, edited grammar, added ideas.



Wrapping Up the Semester: Two Ways to Capture Thoughts for Next Time

Thinking Allowed

Now that the semester is winding down, it is important to take a few minutes to think about how your courses went: What worked well? What could use improvement? What new things would you like to try next time?

Here are two ways to capture your feelings and ideas about your courses so that you can recall what changes you would like to make prior to teaching these courses again.

1. Take Notes

The first way is easy – create a document or note and brainstorm ideas related to the course. Since I use Blackboard all the time, I create an “item” called “Notes for Next Time” and list things I would like to change. On this list I put everything from announcements I would like to tweak to clarification of research directions. I keep it at the top of my “Start Here” page, hidden from students, to make sure I see it right away when I roll the course.

“Notes for Next Time” can be left in the course all semester so that you can capture ideas as they occur to you throughout the term.

Here is an example:

Notes for Next Time

2. Reflect

The second way to ensure you capture your ideas for improving your courses is a little more formal.

Spend a few minutes reflecting on your course, mulling over the high points and the rocky roads, and capture those thoughts in a dedicated document.

Guiding questions on the “Instructor’s Course Reflection” form posted below will help you through the process.

Download the PDF, reflect, and respond honestly. Keep it with your syllabus or other course materials you plan to update for the next time you teach the course, and you will be able to easily remind yourself of what you want to change, focus on, or implement.

Teaching, like learning, is a process of continuous improvement. Hopefully these suggestions will help you meet your instructional goals.


Judith Littlejohn

Instructor’s Course Reflection Form


Checklist for Digital Content Accessibility

All digital content – on websites, in courses, in blogs, everywhere – must be accessible.

Accessibility is not difficult; it requires attention to detail and a bit of patience until you grow accustomed to using heading styles and alt tags. It can, however, be stressful if you feel that you have no idea what needs to be done or how to begin.

Here is what to check for in your courses, whether they are online, hybrid, hy-flex, or brick-and-mortar:

Text and Links:

  • All text is in a font size of at least 12 pt.
  • Only sans-serif fonts (such as Calibri or Arial) are used throughout the content.
  • All bulleted or ordered lists are designated using the editor toolbar (not dashes from your keyboard).
  • Text is not underlined unless it is a hyperlink.
  • Hyperlinks use descriptive text to provide meaning and context for links. (Links are not designated with text such as “read more” or “click here.”)
  • Text formatting (shape, color, and styling) is not used exclusively to convey information. Example: do not designate “homework assignments are red, quiz due dates are blue.” Instead, use “homework assignments have red ‘HW’ indicators; quiz due dates have blue ‘Q’ indicators.”


Headings:

  • Headings have been created using heading styles.
  • A logical heading structure has been used so that subheadings have been designated and nested appropriately. (Follow an appropriate outline structure with headings.)


Images:

  • Images do not blink, flash, or use sparkling animation.
  • All pictures, charts, and graphs that contain information or data also have alternate text or a text description that conveys the same information.
  • Images of text have been avoided except where a particular presentation of text as images is essential to the information being conveyed. If that happens, provide a text transcript of the text that is in the image.


Documents:

  • Scanned image PDFs are not used.
  • Proper heading styles and structure have been used throughout all documents.
  • PowerPoint presentations have been created using templates with master slides.
  • Each slide in a deck has a unique title.
  • Accessibility checkers in programs such as Word and PowerPoint indicate that the content follows your intended reading order.
  • Documents (Word, PowerPoint, Excel, etc.) are formatted and saved as HTML or as accessible PDFs.


Tables:

  • Tables are used for tabular data, not for layout purposes.
  • Complex tables with merged or split cells have been broken down into smaller simple tables.
  • Tables include properly identified column and/or row headings.
  • Column headings repeat on each page of multi-page tables.


Navigation:

  • Course can be navigated with only a keyboard.
  • Navigation menu items are consistent throughout the site.


Color:

  • Text and background color have sufficient contrast on all documents and site pages.
  • These color combinations are avoided: red/black, red/green, and blue/yellow.
  • Color alone is not used to indicate meaning. Example: You could not have a list of items and state that the items in red are overdue; they must also have a clear “late” indicator other than color.
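The contrast requirement above can be quantified: WCAG 2.0 defines a contrast ratio computed from the relative luminance of the text and background colors, and requires at least 4.5:1 for normal-size body text (Success Criterion 1.4.3, level AA). A minimal Python sketch of that calculation (the function names are illustrative, not part of any tool mentioned in this checklist):

```python
# WCAG 2.0 contrast ratio between two sRGB colors.
# Formulas follow WCAG 2.0; helper names here are invented for illustration.

def _linearize(c: int) -> float:
    """Convert one sRGB channel (0-255) to its linear value per WCAG 2.0."""
    s = c / 255
    return s / 12.92 if s <= 0.03928 else ((s + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    """Relative luminance of an sRGB color, weighted per WCAG 2.0."""
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    """Contrast ratio between foreground and background, from 1:1 to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# Black text on a white background is the maximum possible ratio, 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```

Tools like the Colour Contrast Analyser mentioned later in this checklist perform the same calculation, so there is rarely a need to compute it by hand; the sketch simply shows what “sufficient contrast” means numerically.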


Audio and Video:

  • All audio content includes transcripts.
  • All videos include synchronized, accurate captions.

Check Your Content:

Once you have completed your course content, here are a few checks you can do to ensure your information is digitally accessible:

  • Try navigating your course with your keyboard. Can you do everything you would need to do as a student? Watch this keyboard accessibility video for more information.
  • Download a browser extension that will run an accessibility check. WebAIM’s WAVE tool works in Blackboard using Chrome or Firefox.
  • For Microsoft Word documents, select “Check Accessibility” to generate a report about the accessibility of your document. Google the version of Word that you are using to get instructions for accessing the tool. Watch this Productivity/Accessibility video from the Office of the Texas Governor for more info.
  • For PowerPoint presentations, select the “Outline” view to see the reading order of the text from your PowerPoint. (Using the pre-made PowerPoint templates typically ensures proper reading order.)
  • Try to highlight some text within your PDF documents. If it highlights, you’ll also want to see the Adobe Accessibility Report to ensure that the reading order in your document is correct.
  • Select the HTML view in your editor toolbar in Blackboard and check the semantic structure of your content. Are all of your headings appropriately identified?
  • Use a tool like the Paciello Group’s Colour Contrast Analyser to ensure that you have sufficient contrast between your text and background.
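The heading-structure check above (viewing the HTML and asking whether headings are appropriately identified) can be partly automated. As a rough sketch, this Python snippet uses the standard library’s html.parser to flag headings that skip a level, such as an h4 directly following an h2 (the class name and example markup are invented for illustration; this is not one of the tools named above):

```python
from html.parser import HTMLParser

class HeadingOrderChecker(HTMLParser):
    """Collects headings that skip levels (e.g., an <h4> right after an <h2>)."""

    def __init__(self) -> None:
        super().__init__()
        self.last_level = 0   # level of the most recent heading seen
        self.problems = []    # human-readable descriptions of skipped levels

    def handle_starttag(self, tag, attrs):
        # Match only h1-h6 tags; ignore everything else (p, hr, etc.).
        if len(tag) == 2 and tag[0] == "h" and tag[1].isdigit():
            level = int(tag[1])
            if self.last_level and level > self.last_level + 1:
                self.problems.append(f"<{tag}> follows <h{self.last_level}>")
            self.last_level = level

# Hypothetical course page: the jump from <h2> to <h4> is flagged.
checker = HeadingOrderChecker()
checker.feed("<h1>Course</h1><h2>Week 1</h2><h4>Readings</h4>")
print(checker.problems)  # ['<h4> follows <h2>']
```

This only catches one kind of structural problem (skipped levels); it does not replace a full checker such as WAVE, which also evaluates missing alt text, contrast, and more.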




Checklist items are derived from Section 504 and Section 508 of the United States Rehabilitation Act, Title II of the Americans with Disabilities Act of 1990, WCAG 2.0 requirements, Office of Civil Rights rulings involving online education, and principles outlined by the National Center on Universal Design for Learning.

Interactive checklist at Angelo State University

Website Accessibility Infographic from Digital Ink

Top image from Digital Ink

About Metacognition

Thinking about One’s Thinking | Putting Metacognition into Practice, by Nancy Chick

Thinking about One’s Thinking

Metacognition is, put simply, thinking about one’s thinking.  More precisely, it refers to the processes used to plan, monitor, and assess one’s understanding and performance. Metacognition includes a critical awareness of a) one’s thinking and learning, and b) oneself as a thinker and learner.

Metacognition was initially studied for its development in young children (Baker & Brown, 1984; Flavell, 1985); researchers soon began to look at how experts display metacognitive thinking and how these thought processes can then be taught to novices to improve their learning (Hatano & Inagaki, 1986).  In How People Learn, the National Academy of Sciences’ synthesis of decades of research on the science of learning, one of the three key findings is the effectiveness of a “‘metacognitive’ approach to instruction” (Bransford, Brown, & Cocking, 2000, p. 18).

Metacognitive practices increase students’ abilities to transfer or adapt their learning to new contexts and tasks (Bransford, Brown, & Cocking, p. 12; Palincsar & Brown, 1984; Scardamalia et al., 1984; Schoenfeld, 1983, 1985, 1991).  Students do this by gaining a level of awareness above the subject matter: they also think about the tasks and contexts of different learning situations, and about themselves as learners in these different contexts.  When Pintrich (2002) asserts that “Students who know about the different kinds of strategies for learning, thinking, and problem solving will be more likely to use them” (p. 222), notice that the students must “know about” these strategies, not just practice them.  As Zohar and David (2009) explain, there must be a “conscious meta-strategic level of H[igher] O[rder] T[hinking]” (p. 179).

Metacognitive practices help students become aware of their strengths and weaknesses as learners, writers, readers, test-takers, group members, etc.  A key element is recognizing the limit of one’s knowledge or ability and then figuring out how to expand that knowledge or extend the ability. Those who know their strengths and weaknesses in these areas will be more likely to “actively monitor their learning strategies and resources and assess their readiness for particular tasks and performances” (Bransford, Brown, & Cocking, p. 67).

The absence of metacognition connects to the research by Dunning, Johnson, Ehrlinger, and Kruger on “Why People Fail to Recognize Their Own Incompetence” (2003).  They found that “people tend to be blissfully unaware of their incompetence,” lacking “insight about deficiencies in their intellectual and social skills.” They identified this pattern across domains: from test-taking, grammatical writing, and logical thinking, to recognizing humor, to hunters’ knowledge about firearms and medical lab technicians’ knowledge of medical terminology and problem-solving skills (pp. 83-84).  In short, “if people lack the skills to produce correct answers, they are also cursed with an inability to know when their answers, or anyone else’s, are right or wrong” (p. 85). This research suggests that increased metacognitive ability—to learn specific (and correct) skills, how to recognize them, and how to practice them—is needed in many contexts.


Putting Metacognition into Practice

In “Promoting Student Metacognition,” Tanner (2012) offers a handful of specific activities for biology classes, but they can be adapted to any discipline. She first describes four assignments for explicit instruction (p. 116):

  • Preassessments—Encouraging Students to Examine Their Current Thinking: “What do I already know about this topic that could guide my learning?”
  • The Muddiest Point—Giving Students Practice in Identifying Confusions: “What was most confusing to me about the material explored in class today?”
  • Retrospective Postassessments—Pushing Students to Recognize Conceptual Change: “Before this course, I thought evolution was… Now I think that evolution is ….” or “How is my thinking changing (or not changing) over time?”
  • Reflective Journals—Providing a Forum in Which Students Monitor Their Own Thinking: “What about my exam preparation worked well that I should remember to do next time? What did not work so well that I should not do next time or that I should change?”

Next are recommendations for developing a “classroom culture grounded in metacognition” (pp. 116-118):

  • Giving Students License to Identify Confusions within the Classroom Culture:  ask students what they find confusing, acknowledge the difficulties
  • Integrating Reflection into Credited Course Work: integrate short reflections (oral or written) that ask students what they found challenging or what questions arose during an assignment/exam/project
  • Metacognitive Modeling by the Instructor for Students: model the thinking processes involved in your field and sought in your course by being explicit about “how you start, how you decide what to do first and then next, how you check your work, how you know when you are done” (p. 118)

To facilitate these activities, she also offers three useful tables:

  • Questions for students to ask themselves as they plan, monitor, and evaluate their thinking within four learning contexts—in class, assignments, quizzes/exams, and the course as a whole (p. 115)
  • Prompts for integrating metacognition into discussions of pairs during clicker activities, assignments, and quiz or exam preparation (p. 117)
  • Questions to help faculty metacognitively assess their own teaching (p. 119)

Weimer’s “Deep Learning vs. Surface Learning: Getting Students to Understand the Difference” (2012) offers additional recommendations for developing students’ metacognitive awareness and improvement of their study skills:

“[I]t is terribly important that in explicit and concerted ways we make students aware of themselves as learners. We must regularly ask, not only ‘What are you learning?’ but ‘How are you learning?’ We must confront them with the effectiveness (more often ineffectiveness) of their approaches. We must offer alternatives and then challenge students to test the efficacy of those approaches.” (emphasis added)

She points to a tool developed by Stanger-Hall (2012, p. 297) for her students to identify their study strategies, which she divided into “cognitively passive” behaviors (“I previewed the reading before class,” “I came to class,” “I read the assigned text,” “I highlighted the text,” et al.) and “cognitively active study behaviors” (“I asked myself: ‘How does it work?’ and ‘Why does it work this way?’” “I wrote my own study questions,” “I fit all the facts into a bigger picture,” “I closed my notes and tested how much I remembered,” et al.).  The specific focus of Stanger-Hall’s study is tangential to this discussion,1 but imagine giving students lists like hers adapted to your course and then, after a major assignment, having students discuss which strategies worked and which types of behaviors led to higher grades. Even further, follow Lovett’s advice (2013) by assigning “exam wrappers,” in which students reflect on their previous exam-preparation strategies, assess those strategies, look ahead to the next exam, and write an action plan for a revised approach to studying. A common assignment in English composition courses is the self-assessment essay, in which students apply course criteria to articulate their strengths and weaknesses within single papers or over the course of the semester. These activities can be adapted to assignments other than exams or essays, such as projects, speeches, discussions, and the like.

As these examples illustrate, for students to become more metacognitive, they must be taught the concept and its language explicitly (Pintrich, 2002; Tanner, 2012), though not in a content-delivery model (simply a reading or a lecture) and not in one lesson. Instead, the explicit instruction should be “designed according to a knowledge construction approach”; that is, students need to recognize, assess, and connect new skills to old ones, “and it needs to take place over an extended period of time” (Zohar & David, p. 187).  This kind of explicit instruction will help students expand or replace existing learning strategies with new and more effective ones, give students a way to talk about learning and thinking, let them compare strategies with their classmates’ and make more informed choices, and render learning “less opaque to students, rather than being something that happens mysteriously or that some students ‘get’ and learn and others struggle and don’t learn” (Pintrich, 2002, p. 223).

Metacognition instruction should also be embedded with the content and activities about which students are thinking.  Why? Metacognition is “not generic” (Bransford, Brown, & Cocking, p. 19) but instead is most effective when it is adapted to reflect the specific learning contexts of a specific topic, course, or discipline (Zohar & David, 2009).  In explicitly connecting a learning context to its relevant processes, learners will be more able to adapt strategies to new contexts, rather than assume that learning is the same everywhere and every time.  For instance, students’ abilities to read disciplinary texts in discipline-appropriate ways would also benefit from metacognitive practice.  A literature professor may read a passage of a novel aloud in class, while also talking about what she’s thinking as she reads: how she makes sense of specific words and phrases, what connections she makes, how she approaches difficult passages, etc.  This kind of modeling is a good practice in metacognition instruction, as suggested by Tanner above. Concepción’s “Reading Philosophy with Background Knowledge and Metacognition” (2004) includes his detailed “How to Read Philosophy” handout (pp. 358-367), which includes the following components:

  • What to Expect (when reading philosophy)
  • The Ultimate Goal (of reading philosophy)
  • Basic Good Reading Behaviors
  • Important Background Information, or discipline- and course-specific reading practices, such as “reading for enlightenment” rather than information, and “problem-based classes” rather than historical or figure-based classes
  • A Three-Part Reading Process (pre-reading, understanding, and evaluating)
  • Flagging, or annotating the reading
  • Linear vs. Dialogical Writing (Philosophical writing is rarely straightforward but instead “a monologue that contains a dialogue” [p. 365].)

What would such a handout look like for your discipline?

Students can even be metacognitively prepared (and then prepare themselves) for the overarching learning experiences expected in specific contexts. Salvatori and Donahue’s The Elements (and Pleasures) of Difficulty (2004) encourages students to embrace difficult texts (and tasks) as part of deep learning, rather than as an obstacle.  Their “difficulty paper” assignment helps students reflect on and articulate the nature of the difficulty and work through their responses to it (p. 9).  Similarly, in courses with sensitive subject matter, a different kind of learning occurs, one that involves complex emotional responses. In “Learning from Their Own Learning: How Metacognitive and Meta-affective Reflections Enhance Learning in Race-Related Courses” (Chick, Karis, & Kernahan, 2009), students were informed about the common reactions to learning about racial inequality (Helms, 1995; Adams, Bell, & Griffin, 1997; see student handout, Chick, Karis, & Kernahan, pp. 23-24) and then regularly wrote about their cognitive and affective responses to specific racialized situations.  The students with the most developed metacognitive and meta-affective practices at the end of the semester were able to “clear the obstacles and move away from” oversimplified thinking about race and racism “to places of greater questioning, acknowledging the complexities of identity, and redefining the world in racial terms” (p. 14).

Ultimately, metacognition requires students to “externalize mental events” (Bransford, Brown, & Cocking, p. 67): articulating what it means to learn, becoming aware of their strengths and weaknesses with specific skills or in a given learning context, planning what is required to accomplish a specific learning goal or activity, identifying and correcting errors, and preparing ahead for learning processes.


1 Students who were tested with short answer in addition to multiple-choice questions on their exams reported more cognitively active behaviors than those tested with just multiple-choice questions, and these active behaviors led to improved performance on the final exam.



Originally written by Nancy Chick, Vanderbilt CFT Assistant Director

Revised by Judith Littlejohn, Instructional Designer, SUNY GCC – Updated URLs, changed images to CC images from Pixabay, added additional resources.

This teaching guide is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License

Test-Enhanced Learning (Retrieval Practice)


Test-enhanced learning: Using retrieval practice to help students learn

Brame, C.J. and Biel, R. – Vanderbilt 

  • What is “test-enhanced learning”?
  • Six things research tells us about the effects of retrieval practice
  • Why is it effective?
  • How can instructors implement test-enhanced learning in their classes?
  • What are important caveats to keep in mind?
  • References

What is “test-enhanced learning”?

In essence, test-enhanced learning is the idea that the process of remembering concepts or facts—retrieving them from memory—increases long-term retention of those concepts or facts. This idea, also known as the testing effect, rests on myriad studies examining the ability of various types of “tests”—prompts to promote retrieval—to promote learning when compared to studying. It is one of the most consistent findings in cognitive psychology (Roediger and Butler 2011; Roediger and Pyc 2012).

In some ways, the terms “test-enhanced learning” and the “testing effect” are misnomers, in that the use of the word “tests” calls up notions of high-stakes summative assessments. In fact, most or all studies elucidating the testing effect examine the impact of low-stakes retrieval practice on a delayed summative assessment. The “testing” that actually enhances learning is the low-stakes retrieval practice that accompanies study in these experiments.

With that caveat in mind, the testing effect can be a powerful tool to add to instructors’ teaching tool kits—and students’ learning tool kits.

In this teaching guide, we provide six observations about the effects of testing from the cognitive psychology literature, summarizing one or two key studies that led to each of these conclusions. We have chosen studies performed with undergraduates learning educationally relevant materials (e.g., text passages as opposed to word pairs).   We also suggest ways to implement test-enhanced learning in your class as well as important caveats to keep in mind.

Six things research tells us about the effects of retrieval practice

1. Repeated retrieval enhances long-term retention in a laboratory setting.

The idea that active retrieval of information from memory improves memory is not a new one: William James proposed this idea in 1890, and Edwina Abbott and Arthur Gates provided support for this idea in the early part of the 20th century (James, 1890; Abbott, 1909; Gates, 1917). During the last decade, however, evidence of the benefits of testing has mounted.

In one influential study, Roediger and Karpicke investigated the effects of single versus multiple testing events on long-term retention using educationally relevant conditions (Roediger and Karpicke, 2006). Their goal was to determine if any connection existed between the number of times students were tested and the size of the testing effect. The investigators worked with undergraduates in a laboratory environment, asking them to read passages about 250 words long. The authors compared three conditions (see Figure 1): students who studied the passages four times for five minutes each (SSSS group); students who studied the passages three times and completed one recall test in which they were given a blank sheet of paper and asked to recall as much of the passage as they could (SSST group); students who studied the passages one time and then performed the recall practice three times (STTT group). Student retention was then tested either five minutes or one week later using the same type of recall test used for retrieval practice.

Figure 1. Study design comparing the effects of study versus retrieval practice from Roediger and Karpicke, 2006.

Interestingly, results differed significantly depending on when the final test was performed. Students who took their final test very soon after their study period (i.e., 5 minutes) benefited from repeated studying, with the SSSS group performing best, the SSST group performing second-best, and the STTT group performing least well. This result suggests that studying is more effective when the information being learned is only needed for a short time. However, when long-term retention is the goal, testing is more effective. The researchers found that when the final test was delayed by a week, the results were reversed, with the STTT group performing about 5% higher than the SSST group and about 21% higher than the SSSS group. Testing had a greater impact on long-term retention than did repeated study, and the participants who were repeatedly tested had increased retention over those who were tested only once.

Figure 2. Effects of repeated studying versus repeated retrieval practice. Derived from Roediger and Karpicke, 2006.

The study described here is one of many making up a rich literature on the testing effect; several recent review articles provide a thorough overview of the work in this area (Roediger and Butler, 2011; Roediger and Karpicke, 2006b; Roediger, Putnam, and Smith, 2011).

2. Various testing formats can enhance learning.

Smith and Karpicke examined whether different types of questions were equally effective at inducing the testing effect (2014). The researchers performed a series of experiments with undergraduate students in a laboratory environment, examining the effects of short answer (SA), multiple choice (MC), and hybrid SA/MC formats for promoting students’ ability to remember information from a text. In one experiment, five groups of students were compared (see Figure 3). Students read four texts, each approximately 500 words long. After each, four groups of students then participated in different types of retrieval practice, while the fifth group was the no-retrieval control.  One week later, the students returned to the lab for a short-answer test on each of the reading passages.

Figure 3. Study design comparing different approaches to promote retrieval. Derived from Smith and Karpicke, 2014.

Confirming other studies, students who had participated in some type of retrieval practice performed much better on the final assessment, getting approximately twice as many questions correct as those who did not have any retrieval practice. This was true both for questions that were directly taken from information in the texts as well as questions that required inference from the text (see Figure 4). Interestingly, there was no significant difference in the benefits conferred by the different types of retrieval practice; multiple-choice, short-answer, and hybrid questions following the reading were equally effective at enhancing the students’ learning. Other experiments in the series essentially replicated these results, although one experiment did find a slight advantage for hybrid retrieval practice (short-answer + multiple-choice) in preparing students for short-answer tests consisting of verbatim questions on short reading passages. These results suggest that the benefits of testing are not tied to a specific type of retrieval practice, but rather retrieval practice in general.

Figure 4. Different question formats can promote test-enhanced learning. Derived from Smith and Karpicke, 2014.

This and other studies suggest that multiple question formats can provide the benefit associated with testing. It appears that the context may determine which question type provides the greatest benefit, with free recall questions, multiple-choice, hybrid free recall/multiple-choice, and cued-recall questions all providing significant benefit over study alone. The most influential studies in the field suggest that free recall provides greater benefit than other question types (see Pyc et al., in press), but the results described here reveal an incompletely answered question.

3. Feedback enhances the benefits of testing.

Considerable work has been done to examine the role of feedback on the testing effect.

Butler and Roediger designed an experiment in which undergraduates studied 12 historical passages and then took multiple-choice tests in a lab setting (Butler and Roediger, 2008). The students either received no feedback, immediate feedback (i.e., following each question), or delayed feedback (i.e., following completion of the 42-item test). One week later, the students returned for a comprehensive cued-recall test. While simply completing multiple-choice questions after reading the passages did improve performance on the final test, corresponding to other reports on the testing effect, feedback provided an additional benefit (see Figure 5). Interestingly, delayed feedback resulted in better final performance than did immediate feedback, although both conditions showed benefit over no feedback.

Figure 5. Feedback enhances the effects of retrieval practice. Derived from Butler and Roediger, 2008.

4. Learning is not limited to rote memory.

One concern that instructors may have with regard to using testing as a teaching and learning strategy is that it may promote only rote memory. While most instructors recognize that memory plays a role in allowing students to perform well within their academic domain, they want their students to do more than simply remember and understand facts; they want them to achieve higher cognitive outcomes (Bloom, 1956). Some studies address this concern and report results suggesting that testing provides benefits beyond improving simple recall. For example, the study by Smith and Karpicke (2014) described above determined the effects of testing on students’ recall of specific facts from reading passages as well as on their ability to answer questions that required inference. In these studies, the authors defined inference as drawing conclusions that were not directly stated within the passages but that could be reached by synthesizing multiple facts within the passage. The investigators observed that testing following reading improved students’ ability to answer both types of questions on a delayed test, thereby providing evidence that the benefits of testing are not limited to answers that require only rote memory.

Karpicke and Blunt sought to directly address the question of whether retrieval practice can promote students’ performance on higher-order cognitive activities in a 2011 study. They investigated the impact of retrieval practice on students’ learning of undergraduate-level science concepts, comparing the effects of retrieval practice to an elaborative study technique, concept mapping (Karpicke and Blunt, 2011). In one experiment, students studied a science text and were then assigned to one of four conditions: a study-once condition, in which they did not interact further with the concepts in the text; a repeated study condition, in which they studied the text four additional times; an elaborative study condition, in which they studied the text one additional time, were trained on concept mapping, and produced a concept map of the concepts in the text; and a retrieval practice condition, in which they completed a free recall test, followed by an additional study period and recall test. All students were asked to complete a self-assessment predicting their recall one week later; students in the repeated study group predicted better recall than students in any of the other groups. Students then returned a week later for a short-answer test consisting of questions that could be answered verbatim from the text and questions that required inferences from the text. Students in the retrieval practice condition performed significantly better on both the verbatim questions and the inference questions than students in any other group. The authors then asked whether the advantage of retrieval practice would persist if the final test consisted of a concept mapping exercise. The authors observed that retrieval practice produced better performance than did elaborative study using concept mapping on both types of final tests (short-answer and concept mapping). When they examined the effects on individual learners, they found that 84% (101/120) of students performed better on the final tests when they used retrieval practice as a study strategy rather than concept mapping.

5. Testing can potentiate further study.

Wissman, Rawson, and Pyc have reported work suggesting that retrieval practice with one set of material may facilitate learning of later material, whether related or unrelated (Wissman, Rawson, and Pyc, 2011). Specifically, they investigated the use of “interim tests.” Undergraduate students were asked to read three sections of a text. The “interim test” group was tested after reading each of the first two sections, typing everything they could remember about the text before advancing to the next section. The “no interim test” group read all three sections with no tests in between. Both groups were tested on Section 3 after reading it. Interestingly, the group that had completed interim tests on Sections 1 and 2 recalled about twice as many “idea units” from Section 3 as the students who did not take interim tests. This result was observed both when Sections 1, 2, and 3 were about different topics and when they were about related topics. Thus, testing may have benefits that extend beyond the target material.

6. The benefits of testing appear to extend to the classroom.

All of the reports described above focused on experiments performed in a laboratory setting. Several studies suggest, however, that the benefits of testing also extend to the classroom.

In 2002, Leeming used an “exam-a-day” approach to teaching an introductory psychology course (Leeming, 2002). He found that students who completed an exam every day rather than exams that covered large blocks of material scored significantly higher on a retention test administered at the end of the semester.

Larsen, Butler, and Roediger asked whether a testing effect was observed for medical residents’ learning about status epilepticus and myasthenia gravis, two neurological disorders, at a didactic conference (Larsen et al., 2009). Specifically, residents participated in an interactive teaching session on the two topics and then were randomly divided into two groups. One group studied a review sheet on myasthenia gravis and took a test on status epilepticus, while the other group took a test on myasthenia gravis and studied a review sheet on status epilepticus. Six months later, the residents completed a test on both topics. The authors observed that the testing condition produced final test scores that averaged 13% higher than the study condition.

Lyle and Crawford examined the effects of retrieval practice on student learning in an undergraduate statistics class (Lyle and Crawford, 2011). In one section of the course, students were instructed to spend the final 5 to 10 minutes of each class period answering two to four questions that required them to retrieve information about the day’s lecture from memory. The students in this section of the course performed about 8% higher on exams over the course of the semester than students in sections that did not use the retrieval practice method, a statistically significant difference.

Other classroom studies have been published by McDaniel, Wildman, and Anderson (2012), Orr and Foster (2013), and Stanger-Hall and colleagues (2011).

Why is it effective?

Several hypotheses have been proposed to explain the effects of testing. The retrieval effort hypothesis suggests that the effort involved in retrieval provides testing benefits (Gardiner, Craik, and Bleasdale, 1973). This hypothesis predicts that tests that require production of an answer, rather than recognition of an answer, would provide greater benefit, a result that has been observed in some studies (Butler and Roediger, 2007; Pyc and Rawson, 2009) but not others (Little and Bjork, 2012; some experiments in Smith and Karpicke, 2014; some experiments in Kang, McDermott, and Roediger 2007).

Bjork and Bjork’s new theory of disuse provides an alternative hypothesis to explain the benefits of testing (Bjork and Bjork, 1992). This theory posits that memory has two components: storage strength and retrieval strength. Retrieval events improve storage strength, enhancing overall memory, and the effects are most pronounced at the point of forgetting—that is, retrieval at the point of forgetting has a greater impact on memory than repeated retrieval when retrieval strength is high. This theory aligns with experiments demonstrating that study is as effective as or more effective than testing when the delay before a final test is very short (see, for example, Roediger and Karpicke, 2006), because the very short delay between study and the final test means that retrieval strength is very high—a phenomenon many students will recognize from cramming. At a greater delay, however, experiences that build retrieval strength (e.g., testing) confer greater benefit than studying.

How can instructors implement test-enhanced learning in their classes?

There are many ways to take advantage of the testing effect, some during class time and some outside of class time. The following are a few suggestions.

  • Incorporating frequent quizzes into a class’s structure may promote student learning. These quizzes can consist of short-answer or multiple-choice questions and can be administered online or face-to-face. Studies investigating the testing effect suggest that providing students the opportunity for retrieval practice—and ideally, providing feedback on their responses—will increase learning of targeted as well as related material.
  • Providing “summary points” during a class to encourage students to recall and articulate key elements of the class. Lyle and Crawford’s study examined the effects of asking students to write the main points of the day’s class during the last few minutes of a class meeting, and observed a significant effect on student recall at the end of the semester (Lyle and Crawford, 2011). Setting aside the last few minutes of a class to ask students to recall, articulate, and organize their memory of the day’s content may provide significant benefits to their later memory of these topics.
  • Pretesting to highlight important information and instructor expectations. Elizabeth Ligon Bjork and colleagues have reported results suggesting that pretesting students’ knowledge of a subject may prime them for learning (Little and Bjork, 2011). By pretesting students prior to a unit or even a day of instruction, an instructor may alert students both to the types of questions they need to be able to answer and to the key concepts and facts they should watch for during study and instruction.
  • Telling students about the testing effect. Instructors may be able to aid their students’ metacognitive abilities by sharing a synopsis of these observations. Telling students that frequent quizzing helps learning—and that effective quizzing can take a variety of forms—can give them a particularly helpful tool to add to their learning toolkit (Stanger-Hall et al., 2011). Adding the potential benefits of pretesting may further empower students to take control of their own learning, such as by using example exams as primers for their learning rather than simply as pre-exam checks on their knowledge.

This list is a starting point. Instructors should use the principles that underlie test-enhanced learning—frequent low-stakes opportunities for students to practice recall—to develop approaches that are well-adapted for their class and context.

What are important caveats to keep in mind?

Keep it low-stakes. The term “testing” evokes a certain response from most of us: the person being tested is being evaluated on his or her knowledge or understanding of a particular area, and will be judged right or wrong, adequate or inadequate, based on the performance given. This implicit definition does not reflect the settings in which the benefits of “test-enhanced learning” have been established. In the experiments done in cognitive science laboratories, the “testing” was simply a learning activity for the students; in the language of the classroom, it could be considered a “no-stakes” formative assessment in which students could evaluate their memory of a particular subject. In most of the studies from classrooms, the “testing” was either no-stakes recall practice (Larsen et al., 2009; Lyle and Crawford, 2011; Stanger-Hall et al., 2011) or low-stakes quizzes (McDaniel et al., 2012; Orr and Foster, 2013). Thus, the term retrieval practice may be a more accurate description of the activity that promoted students’ learning. Implementing test-enhanced learning in a class should therefore involve no-stakes or low-stakes scenarios in which students engage in a recall activity to promote their learning, rather than being repeatedly subjected to high-stakes testing situations.

Share your learning objectives so that students understand their targets. It’s important to note that incorporating testing—or recall practice—as a learning tool in a class should be done in conjunction with other evidence-based teaching practices, such as sharing learning objectives with students, carefully aligning learning objectives with assessments and learning activities, and offering opportunities to practice important skills. If you want students to be able to apply their knowledge, analyze complex situations, and synthesize different points of view, be sure to let them know that retrieval practice will help them learn the basic information they need for these skills—but that retrieval alone is not sufficient.


Abbott EE (1909). On the analysis of the factors of recall in the learning process. Psychological Monographs, 11, 159-177.

Bjork RA (1975). Retrieval as a memory modifier: An interpretation of negative recency and related phenomena. In R.L. Solso (Ed.), Information processing and cognition (pp. 123-144) New York, NY: Wiley.

Bjork RA and Bjork EL (1992). A new theory of disuse and an old theory of stimulus fluctuation. In A. Healy, S. Kosslyn, and R. Shiffrin (Eds.), From learning processes to cognitive processes: Essays in honor of William K. Estes (Vol. 2, pp. 35-67). Hillsdale, NJ: Erlbaum.

Bloom BS (1956). Taxonomy of Educational Objectives: Handbook I: The Cognitive Domain. New York: David McKay Co Inc.

Butler AC (2010). Repeated testing produces superior transfer of learning relative to repeated studying. Journal of Experimental Psychology: Learning, Memory, and Cognition 36, 1118-1133.

Butler AC, Karpicke JD, and Roediger HL III (2008). Correcting a metacognitive error: Feedback increases retention of low-confidence correct responses. Journal of Experimental Psychology: Applied 14, 918-928.

Butler AC and Roediger HL III (2007). Testing improves long-term retention in a simulated classroom setting. European Journal of Cognitive Psychology 19, 514-527.

Butler AC and Roediger HL III (2008). Feedback enhances the positive effects and reduces the negative effects of multiple-choice testing. Memory and Cognition 36, 604-616.

Cantor AD, Eslick AN, Marsh EJ, Bjork RA, and Bjork EL (2014). Multiple-choice tests stabilize access to marginal knowledge. Memory and Cognition, DOI 10.3758/s13421-014-0462-6.

Cohen GL, Garcia J, Apfel N, and Master A (2006). Reducing the racial achievement gap: A social-psychological intervention. Science 313, 1307-1310.

Gardiner JM, Craik FIM, and Bleasdale FA (1973). Retrieval difficulty and subsequent recall. Memory and Cognition 1, 213-216.

Gates AI (1917). Recitation as a factor in memorizing. Archives of Psychology, 6(40).

Hays MJ, Kornell N, and Bjork RA (2013). When and Why a Failed Test Potentiates the Effectiveness of Subsequent Study. Journal of Experimental Psychology: Learning, Memory, and Cognition 39, 290-296.

James W (1890). The principles of psychology. New York: Holt.

Kang SHK, McDermott KB, and Roediger HL III. (2007). Test format and corrective feedback modify the effect of testing on long-term retention. European Journal of Cognitive Psychology 19, 528-558.

Karpicke JD and Blunt JR (2011). Retrieval practice produces more learning than elaborative studying with concept mapping. Science 331, 772-775.

Klionsky DJ (2008). The quiz factor. CBE—Life Sciences Education 7, 265-266.

Larsen DP, Butler AC, and Roediger HL III (2009). Repeated testing improves long-term retention relative to repeated study: a randomized controlled trial. Medical Education 43, 1174-1181.

Leeming FC (2002). The exam-a-day procedure improves performance in psychology classes. Teaching of Psychology 29, 210-212.

Leight H, Saunders, Calkins R, and Withers M (2012). Collaborative testing improves performance but not content retention in a large-enrollment introductory biology class. CBE—Life Sciences Education 11, 392-401.

Little JL and Bjork EL (2011). Pretesting with multiple-choice questions facilitates learning. Presentation at Cognitive Science Society. Retrieved November 15, 2014.

Little JL and Bjork EL (2012). The persisting benefits of using multiple-choice tests as learning events. Presentation at Cognitive Science Society. Retrieved November 11, 2014.

Lyle KB and Crawford NA (2011). Retrieving essential material at the end of lectures improves performance on statistics exams. Teaching of Psychology 38, 94-97.

McDaniel MA and Masson MEJ (1985). Altering memory representations through retrieval. Journal of Experimental Psychology: Learning, Memory, and Cognition 11, 371-385.

McDaniel MA, Wildman KM, and Anderson JL (2012). Using quizzes to enhance summative-assessment performance in a web-based class: An experimental study. Journal of Applied Research in Memory and Cognition 1, 18-26.

Miyake A, Kost-Smith LE, Finkelstein ND, Pollock SJ, Cohen GL, Ito TA (2010). Reducing the gender achievement gap in college science: A classroom study of values affirmation. Science 330, 1234-1237.

Orr R and Foster S (2013). Increasing student success using online quizzing in introductory (majors) biology. CBE—Life Sciences Education 12, 509-514.

Pulfrey C, Buchs C, and Butera F (2011). Why grades engender performance-avoidance goals: The mediating role of autonomous motivation. Journal of Educational Psychology 103, 683-700.

Pyc MA, Agarwal PK, and Roediger HL III (in press). Test-enhanced learning. In V. Benassi, C. Overson, & C. Hakala (Eds.), Applying the science of learning in education: Infusing psychological science into the curriculum. Society for the Teaching of Psychology. Retrieved November 14, 2014.

Pyc MA and Rawson KA (2009). Testing the retrieval effort hypothesis: Does greater difficulty correctly recalling information lead to higher levels of memory? Journal of Memory and Language 60, 437-447.

Roediger HL III, Putnam AL, and Smith MA. (2011). Ten benefits of testing and their applications to educational practice. Psychology of Learning and Motivation, Volume 55: 1-36.

Roediger HL III and Butler AC (2011). The critical role of retrieval practice in long-term retention. Trends in Cognitive Sciences 15, 20-27.

Roediger HL III and Karpicke JD (2006a). Test-enhanced learning: Taking memory tests improves long-term retention. Psychological Science 17, 249-255.

Roediger HL III and Karpicke JD (2006b). The power of testing memory: basic research and implications for educational practice. Perspectives on Psychological Science, 1, 181-210.

Roediger HL III and Pyc MA (2012). Inexpensive techniques to improve education: Applying cognitive psychology to enhance educational practice. Journal of Applied Research in Memory and Cognition 1, 242-248.

Schwartz DL and Bransford JD (1998). A time for telling. Cognition and Instruction 16, 475-522.

Smith MA and Karpicke JD (2014). Retrieval practice with short-answer, multiple-choice, and hybrid tests. Memory 22, 784-802.

Smith MK, Wood WB, Krauter K, and Knight JK (2011). Combining peer discussion with instructor explanation increases student learning from in-class concept questions. CBE—Life Sciences Education 10, 55-63.

Stanger-Hall KF, Shockley FW, and Wilson RE (2011). Teaching students how to study: A workshop on information processing and self-testing helps students learn. CBE—Life Sciences Education 10, 187-198.

Steele CM (2010). Whistling Vivaldi: How stereotypes affect us and what we can do. New York: W.W. Norton & Company.

Tanner KD (2012). Promoting student metacognition. CBE—Life Sciences Education 11, 113-120.

Wissman KT, Rawson KA, and Pyc MA (2011). The interim test effect: Testing prior material can facilitate the learning of new material. Psychonomic Bulletin & Review 18, 1140-1147.

Cite this guide:

Brame, C.J. and Biel, R. (2015). Test-enhanced learning: Using retrieval practice to promote learning. Retrieved July 12, 2018.

This teaching guide is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License

Updated by Judith Littlejohn 10/29/2018 – edited URLs, edited minor grammar, removed incomplete figures (6, 7, and 8), added image of dogs from Pixabay.

About Bloom’s Taxonomy

Background Information

In 1956, Benjamin Bloom with collaborators Max Engelhart, Edward Furst, Walter Hill, and David Krathwohl published a framework for categorizing educational goals: Taxonomy of Educational Objectives. Familiarly known as Bloom’s Taxonomy, this framework has been applied by generations of K-12 teachers and college instructors in their teaching.

The framework elaborated by Bloom and his collaborators consisted of six major categories: Knowledge, Comprehension, Application, Analysis, Synthesis, and Evaluation. The categories after Knowledge were presented as “skills and abilities,” with the understanding that knowledge was the necessary precondition for putting these skills and abilities into practice.

While each category contained subcategories, all lying along a continuum from simple to complex and concrete to abstract, the taxonomy is popularly remembered according to the six main categories.

The Original Taxonomy (1956)

Here are the authors’ brief explanations of these main categories from the appendix of Taxonomy of Educational Objectives (Handbook One, pp. 201-207):

  • Knowledge “involves the recall of specifics and universals, the recall of methods and processes, or the recall of a pattern, structure, or setting.”
  • Comprehension “refers to a type of understanding or apprehension such that the individual knows what is being communicated and can make use of the material or idea being communicated without necessarily relating it to other material or seeing its fullest implications.”
  • Application refers to the “use of abstractions in particular and concrete situations.”
  • Analysis represents the “breakdown of a communication into its constituent elements or parts such that the relative hierarchy of ideas is made clear and/or the relations between ideas expressed are made explicit.”
  • Synthesis involves the “putting together of elements and parts so as to form a whole.”
  • Evaluation engenders “judgments about the value of material and methods for given purposes.”

Barbara Gross Davis, in the “Asking Questions” chapter of Tools for Teaching, provides examples of questions corresponding to the six categories. 

The Revised Taxonomy (2001)

In 2001, a group of cognitive psychologists, curriculum theorists, instructional researchers, and testing and assessment specialists published a revision of Bloom’s Taxonomy with the title A Taxonomy for Learning, Teaching, and Assessing. This title draws attention away from the somewhat static notion of “educational objectives” (in Bloom’s original title) and points to a more dynamic conception of classification.

The authors of the revised taxonomy underscore this dynamism, using verbs and gerunds to label their categories and subcategories (rather than the nouns of the original taxonomy). These “action words” describe the cognitive processes by which thinkers encounter and work with knowledge:

  • Remember
    • Recognizing
    • Recalling
  • Understand
    • Interpreting
    • Exemplifying
    • Classifying
    • Summarizing
    • Inferring
    • Comparing
    • Explaining
  • Apply
    • Executing
    • Implementing
  • Analyze
    • Differentiating
    • Organizing
    • Attributing
  • Evaluate
    • Checking
    • Critiquing
  • Create
    • Generating
    • Planning
    • Producing

In the revised taxonomy, knowledge underlies all six cognitive processes, but the authors created a separate taxonomy of the types of knowledge used in cognition:

  • Factual Knowledge
    • Knowledge of terminology
    • Knowledge of specific details and elements
  • Conceptual Knowledge
    • Knowledge of classifications and categories
    • Knowledge of principles and generalizations
    • Knowledge of theories, models, and structures
  • Procedural Knowledge
    • Knowledge of subject-specific skills and algorithms
    • Knowledge of subject-specific techniques and methods
    • Knowledge of criteria for determining when to use appropriate procedures
  • Metacognitive Knowledge
    • Strategic Knowledge
    • Knowledge about cognitive tasks, including appropriate contextual and conditional knowledge
    • Self-knowledge

Mary Forehand from the University of Georgia provides a guide to the revised version, giving a brief summary of the revised taxonomy and a helpful table of the six cognitive processes and four types of knowledge.

Why Use Bloom’s Taxonomy?

The authors of the revised taxonomy suggest a multi-layered answer to this question, to which the author of this teaching guide has added some clarifying points:

  1. Objectives (learning goals) are important to establish in a pedagogical interchange so that teachers and students alike understand the purpose of that interchange.
  2. Teachers can benefit from using frameworks to organize objectives.
  3. Organizing objectives helps teachers clarify those objectives for themselves and for students.
  4. Having an organized set of objectives helps teachers to:
    • “plan and deliver appropriate instruction”;
    • “design valid assessment tasks and strategies”; and
    • “ensure that instruction and assessment are aligned with the objectives.”

Citations are from A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives.

Further Information

Section III of A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives, entitled “The Taxonomy in Use,” provides over 150 pages of examples of applications of the taxonomy. Although these examples are from the K-12 setting, they are easily adaptable to the university setting.

Section IV, “The Taxonomy in Perspective,” provides information about 19 alternative frameworks to Bloom’s Taxonomy, and discusses the relationship of these alternative frameworks to the revised Bloom’s Taxonomy.

by Patricia Armstrong, former Assistant Director, Vanderbilt Center for Teaching

edited by Judith Littlejohn, GCC – updated links, revised punctuation

This teaching guide is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

The main Bloom’s Taxonomy graphic is released under a Creative Commons Attribution license. You’re free to share, reproduce, or otherwise use it, as long as you attribute it to the Vanderbilt University Center for Teaching. For a higher resolution version, visit Vanderbilt’s Flickr account and look for the “Download this photo” icon.

About Learning Styles


What Are Learning Styles?

The term learning styles is widely used to describe how learners gather, sift through, interpret, organize, come to conclusions about, and “store” information for further use.  As spelled out in VARK (one of the most popular learning styles inventories), these styles are often categorized by sensory approaches:  visual, aural, verbal [reading/writing], and kinesthetic.  Many of the models that don’t resemble the VARK’s sensory focus are reminiscent of Felder and Silverman’s Index of Learning Styles, with a continuum of descriptors for how learners process and organize information:  active-reflective, sensing-intuitive, verbal-visual, and sequential-global.

There are well over 70 different learning styles schemes (Coffield, 2004), most of which are supported by “a thriving industry devoted to publishing learning-styles tests and guidebooks” and “professional development workshops for teachers and educators” (Pashler, et al., 2009, p. 105).

Despite the variation in categories, the fundamental idea behind learning styles is the same: that each of us has a specific learning style (sometimes called a “preference”), and we learn best when information is presented to us in this style.  For example, visual learners would learn any subject matter best if given graphically or through other kinds of visual images, kinesthetic learners would learn more effectively if they could involve bodily movements in the learning process, and so on.  The message thus given to instructors is that “optimal instruction requires diagnosing individuals’ learning style[s] and tailoring instruction accordingly” (Pashler, et al., 2009, p. 105).


Despite the popularity of learning styles and inventories such as the VARK, it is important to know that there is no evidence to support the idea that matching activities to one’s learning style improves learning.  Nor is this simply a case in which “absence of evidence is not evidence of absence”: for years, researchers have actively sought this connection through hundreds of studies.

In 2009, Psychological Science in the Public Interest commissioned cognitive psychologists Harold Pashler, Mark McDaniel, Doug Rohrer, and Robert Bjork to evaluate the research on learning styles to determine whether there is credible evidence to support using learning styles in instruction.  They came to a startling but clear conclusion: “Although the literature on learning styles is enormous,” they “found virtually no evidence” supporting the idea that “instruction is best provided in a format that matches the preference of the learner.” Many of those studies suffered from weak research design, rendering them far from convincing.  Others with an effective experimental design “found results that flatly contradict the popular” assumptions about learning styles (p. 105). In sum,

“The contrast between the enormous popularity of the learning-styles approach within education and the lack of credible evidence for its utility is, in our opinion, striking and disturbing” (p. 117).

Why Are They So Popular?

Pashler and his colleagues point to some reasons to explain why learning styles have gained—and kept—such traction, aside from the enormous industry that supports the concept.  First, people like to identify themselves and others by “type.” Such categories help order the social environment and offer quick ways of understanding each other.  Also, this approach appeals to the idea that learners should be recognized as “unique individuals”—or, more precisely, that differences among students should be acknowledged—rather than treated as a number in a crowd or a faceless class of students (p. 107). Carried further, teaching to different learning styles suggests that “all people have the potential to learn effectively and easily if only instruction is tailored to their individual learning styles” (p. 107).

There may be another reason why this approach to learning styles is so widely accepted: learning styles very loosely resemble the concept of metacognition, the process of thinking about one’s thinking.  For instance, having your students describe which study strategies and conditions worked for them on their last exam and which didn’t is likely to improve their studying for the next exam (Tanner, 2012).  Integrating such metacognitive activities into the classroom—unlike learning styles—is supported by a wealth of research (e.g., Askell Williams, Lawson, & Murray-Harvey, 2007; Bransford, Brown, & Cocking, 2000; Butler & Winne, 1995; Isaacson & Fujita, 2006; Nelson & Dunlosky, 1991; Tobias & Everson, 2002).

Importantly, metacognition is focused on planning, monitoring, and evaluating any kind of thinking about thinking and does nothing to connect one’s identity or abilities to any singular approach to knowledge.

Now What?

Learning styles become a myth when people are told that their preferred learning style is the only way they can learn. It is important to be aware that all people (barring physical or intellectual disabilities) learn in all ways. People learn by using all their senses.

Often children are told that they have a specific learning style and can only learn in that specific way; this both limits the child’s ability to learn and serves as a crutch later in life, when they say things like, “I can’t be expected to learn or do that unless you show me how in my learning style.”

There is, however, something you can take away from these varied approaches to learning—not based on the learner, but instead on the content being learned.  To explore the persistence of the belief in learning styles, Vanderbilt CFT Assistant Director Nancy Chick interviewed Dr. Bill Cerbin, Professor of Psychology and Director of the Center for Advancing Teaching and Learning at the University of Wisconsin-La Crosse and former Carnegie Scholar with the Carnegie Academy for the Scholarship of Teaching and Learning.  He points out that the differences identified by the labels “visual, auditory, kinesthetic, and reading/writing” are more appropriately connected to the nature of the discipline:

“There may be evidence that indicates that there are some ways to teach some subjects that are just better than others, despite the learning styles of individuals…. If you’re thinking about teaching sculpture, I’m not sure that long tracts of verbal descriptions of statues or of sculptures would be a particularly effective way for individuals to learn about works of art. Naturally, these are physical objects and you need to take a look at them, you might even need to handle them.” (Cerbin, 2011, 7:45-8:30)

Pashler and his colleagues agree: “An obvious point is that the optimal instructional method is likely to vary across disciplines” (p. 116). In other words, it makes disciplinary sense to include kinesthetic activities in sculpture and anatomy courses, reading/writing activities in literature and history courses, visual activities in geography and engineering courses, and auditory activities in music, foreign language, and speech courses.  Obvious or not, it aligns teaching and learning with the contours of the subject matter, without limiting the potential abilities of the learners.

Originally by Nancy Chick, Vanderbilt CFT Assistant Director

Revised by Judith Littlejohn, Genesee Community College

This teaching guide is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.


Encouraging Students to Read

“From the moment I picked up your book until I put it down, I was convulsed with laughter. Someday I intend reading it.” –Groucho Marx

Most of us have seen this downward spiral:  We assign reading. Students—inexperienced at academic reading—find it challenging and don’t complete it. During the next session, we encounter blank faces, so we give an ad hoc lecture on the reading instead of leading a planned discussion. We assign more reading.  Students—having concluded that they don’t really need to read—skip the assignment. In class, we again encounter blank faces and again begin summarizing the contents of the reading.

As the spiral continues, we become more frustrated and students lose opportunities to engage in the richness of the course content and to develop the reading skills they need. What to do? Here are three suggestions:

  • Maryellen Weimer (“Eleven Strategies for Getting Students to Read What’s Assigned,” 2010) suggests stopping the downward spiral early.  The first time students show up unprepared, calmly say something like this:  “This article is really quite important. Too bad you aren’t ready to work with it as I had planned,” and move to an alternative activity designed for just that moment. Weimer says no scolding–but no summarizing the reading, either. Going forward, assigning reading responses and requiring that students submit them helps move students toward reading regularly.
  • John Bean (Engaging Ideas, 2011) notes that background knowledge helps students understand a text.  Often we provide that just before a discussion. Bean suggests shifting the overview to the end of the previous class, when we make the assignment. We might point out the central focus of the reading, or alert students to a tricky passage or important term. We can also record these short introductions and post them on the class web site.
  • Norman Eng (Teaching College: The Ultimate Guide to Lecturing, Presenting, and Engaging Students, 2017) proposes an activity he calls QQC for “Question, Quotation, Comment.” As students read, they note a question, select an interesting quotation, or make a comment; the instructor then devotes 10 or 15 minutes to QQCs.  Eng suggests three ways to make QQCs work:
    • Use them regularly and consistently.
    • Call on students randomly rather than waiting for the typical volunteers. Involve many students but avoid deliberately embarrassing the momentarily distracted.
    • Give points for QQC work. This can be done by collecting the students’ questions, quotations, and comments, by having them post them in an online discussion or assignment, or by having them log their QQCs throughout the term in a journal or doc which can be turned in once.

Incorporating one or more of these suggestions can help your students become regular readers, and allow you to use your class time productively.

Want to read more?

Bean, J. C. (2011). Engaging ideas (2nd ed.). San Francisco: Jossey-Bass.

Gonzalez, J. (2017). 5 Ways College Teachers Can Improve Their Instruction.  Cult of Pedagogy.

Weimer, M. (2010). 11 Strategies for Getting Students to Read What’s Assigned.

Submitted to the 2018-2019 Teaching Issues Writing Consortium Teaching Tips by:

Susan Hall, Director, Center for Teaching and Learning, University of the Incarnate Word

Edited by:

Judith Littlejohn, October 8, 2018 – expanded examples, edited formatting, added conclusion.


Assignment Design Checklist

Careful planning and implementation of assignments will help your students produce the evidence you expect to show they have met your learning objectives. Consider using this checklist as a tool to troubleshoot your assignment design and identify possible areas to refine. Other considerations may apply to your specific assignment, but this will give you a great start, no matter what type of assignment you plan to give.

1: Planning

A) When planning the assignment, decide how it can:

  • Fit with main learning objectives for the course, term, and program
  • Relate to previous work done in this course and past courses
  • Be new and different from the type of assignments given in this course and other courses (go beyond another paper)
  • Benefit from an audience other than yourself (peers, community professionals, librarians, others)
  • Use current topics and current resources
  • Be broken into a series of smaller assignments to avoid overwhelming students (scaffold)
  • Be completed – in groups, pairs, or individually
  • Be completed – in the online or hybrid environment
  • Build on students’ previous experience and current skill set
  • Develop important skills for students, both for your course work and beyond (skills for the workplace, skills for life)
  • Require a reasonable amount of work and be successfully completed in the allotted time, given other courses and demands outside of school
  • Have value to you (will be interesting to grade, lead to a research project)
  • Require a level of commitment you can meet (student support, grading and feedback)

B) Consider the support demands students may have:

  • Identify types of assistance students will require to complete the assignment
  • Contact librarians, community professionals, or other people who can assist you and your students in completing the assignment
  • Arrange guest lectures relevant to assignment process (librarian, community professional, colleagues)
  • When possible, use class time for activities to help students complete the assignment (discuss how to write an annotated bibliography, run lab activities to demonstrate a requisite skill, discuss material related to assignment topic)
  • Decide if students are required to meet with you as they complete the assignment and set times and policies for availability to help students avoid procrastinating

C) Make evaluation decisions by choosing the:

  • Assignment length expectations and due dates
  • Type of feedback to give – written, oral, anonymous
  • Evaluators – you, peers, community professional, librarian
  • Type of grade required (check mark, pass/fail, numeric grade)
  • Parts to evaluate – effort, research process, thinking process, progress, sequence of assignments, drafts, final products
  • Weighting of components – how much is each part worth
  • Turnaround times for grading/feedback to make the assignment meaningful for students
  • Policies for possible problems – late or incomplete assignments, missed meetings, poor group work practices, plagiarism

2: Implementing

A) Prepare an assignment description or handout that:

  • Comprises the key parts –  situation (background information, audience, relevance), task (what to do), stages (a timeline for completing key stages of the assignment), and evaluation criteria (specific grading rubric, special policies)
  • Uses plain language – avoids jargon
  • Provides advice from past experiences with the assignment
  • Explains proper citations and acceptable sources for information – be specific and expect to be taken literally

Have a colleague (preferably someone not familiar with your course) read the handout and identify any unclear instructions and jargon, then revise accordingly. As well, do your assignment before giving it to students whenever possible, so you can identify problems before they do. And, when you distribute the handout in class, take time to discuss it and allow for questions and clarifications about the task.

B) Consider giving ongoing support:

  • Share useful student feedback with the class
  • Keep in touch with support people (librarian)
  • Ask for mid-assignment feedback since no news is not necessarily good news
  • Have a backup plan for areas identified as difficult to complete (i.e., if a resource is hard to get, have a copy available on reserve) – but take care not to modify the assignment too much from the handout because this confuses students

3: Follow Up

After all the assignments have been graded and returned:

  • List 5 strengths and 5 weaknesses of the assignment and suggest changes for next time
  • Ask for evaluative feedback from students and support contacts – find out what worked well, what could be improved, where students had the most difficulty, and how you can better facilitate the process next time
  • Use feedback and experiences to modify assignment plan for the next time

Notice that the bulk of the work is in the first section, planning. The more thought and care you put into planning well-constructed assignments, the more opportunities your students will have for success.

Download and use the Assignment Design Checklist.

This Creative Commons license lets others remix, tweak, and build upon our work non-commercially, as long as they credit us and indicate if changes were made. Use this citation format: Assignment Design: checklist. Centre for Teaching Excellence, University of Waterloo.

Remixed by Judith Littlejohn, October 1, 2018. Edits to formatting, wording, and conclusion; created actual checklist.
