Are our assessments wasting time?

February 23, 2023

Using the Active View of Reading to critically evaluate our reading assessments

By Leah Steiner 

[Image: The Active View of Reading model, © 2021 Duke and Cartwright]

After listening to Episode #1 of the To the Classroom podcast featuring Dr. Kelly Cartwright, I became curious about classroom assessments and how they do and don’t align with Duke and Cartwright’s (2021) Active View of Reading (AVR).

As a former classroom teacher, and now literacy consultant, I know from experience and observation how overwhelmed some classrooms are with mandatory assessments, and what a time burden they can be for the teachers who must administer and analyze them. If our assessments serve as a roadmap to guide our instruction, we must ensure that the tools we choose align with current advances in reading research and give us a comprehensive view of all aspects of our readers’ development. To echo John Hattie’s (2009) “know thy impact,” we must pause and reevaluate our efforts to make sure that our assessments give us the information needed to pave the way for responsive instruction, resulting in maximum impact on learning and achievement.

Duke and Cartwright (2021) argue that “not all profiles of reading difficulty are explained by low word recognition and/or language comprehension” (the two components of the Simple View of Reading) and that there are many distinct profiles of reading difficulty. Their model includes a category called “bridging processes,” which explains the coordination between word recognition and language comprehension, and a category called “active self-regulation,” which includes executive function skills, strategy use, motivation, engagement, and more.

Here are some guiding questions to help evaluate which of our current assessments align with the AVR, and where we may want to consider alternative assessment ideas.

Guiding Question 1: How do we assess active self-regulation? What types of tools do we use in the classroom to evaluate it?

Cartwright begins the podcast by highlighting the essential role of self-regulation and executive functioning skills in reading comprehension. She pointedly remarks, “We don’t know about executive functioning skills, yet we expect students to have them.” Cartwright emphasizes that skilled readers are highly active, strategic, and engaged, developing executive skills to manage the reading process (Duke & Cartwright, 2021). This relatively new body of research, which the Simple View of Reading does not address, is critical to understanding how students learn to regulate themselves, use strategies flexibly, and engage with texts (Cartwright, 2015).

Dr. Cartwright’s analogy of the mental process we engage in when shopping at the grocery store is a great illustration of the many skills involved in executive functioning, and of the potential roadblocks that can arise. On the podcast, she explains that, much like shopping, when our students are asked to engage in a reading task, they need the mental skills to manage several complex subtasks in service of a goal.

So, what types of assessment data do we have to evaluate motivation, strategy use, and executive functioning skills? The answer is: not many! When it comes to engagement, one tried-and-true tool is the engagement inventory (Serravallo, 2010, 2015, 2023), which involves observing and recording student behaviors while students read independently. For example, teachers may watch for and record whether a student is flipping quickly through pages, looking out the window, chatting with a friend, or smiling as they turn the pages.

However, observational tools like this should be coupled with information about students’ attitudes toward reading, such as interest inventories and conferences that uncover an individual’s plans, goals, and purposes for reading, as well as opportunities for students to reflect on their own engagement (Serravallo, 2023). This helps ensure we are not conflating outward signs of being on task with true engagement.

Guiding Question 2: How do we assess language comprehension?

Are we asking students to read and answer questions on grade-level texts? Are we choosing the texts for them? Or can students choose a text that is culturally relevant and that they can read accurately and fluently?

Duke and Cartwright’s construct of language comprehension (see Table 2 in the original article) complements some of the reader and text variables that can impact comprehension discussed in Understanding Texts and Readers (Serravallo, 2018).

[Table: Reader and text variables that impact comprehension, © 2018 Serravallo, Understanding Texts and Readers, p. 17]

Now think about your current assessments. What variables are you assessing? Looking at these tables side by side, it becomes quite clear that a single assessment is inadequate; when possible, we should cast a wide net and assess for multiple variables and features. We learn some things about our readers, for example, when asking them to stop and jot responses to comprehension prompts during a read-aloud. Asking students to read self-selected short texts independently may offer different insight. Discussing a student’s understanding of a novel during a one-on-one conference tells us something different still. Looking across these snapshots of student comprehension can give us a more holistic view.

Guiding Question 3: Do our current assessments cover all four components of the AVR? Where are the gaps in our assessments, and how can we combine them to simplify our data analysis and improve our instructional decisions?

If you’re thinking that there are some areas that you don’t yet assess, here are some ideas… 

An Assessment Conference

One approach is an Assessment Conference (Serravallo, 2019), which gives teachers time to meet one-on-one with students to explore most of the goals within the “Reading Goals: Hierarchy of Action” (Serravallo, 2023). During an assessment conference, teachers engage students in conversation, ask open-ended questions, and analyze their responses (Serravallo, 2018).

Through an Assessment Conference, teachers can assess for self-regulation (engagement), word recognition (accuracy), and language comprehension (both fiction and expository comprehension goals), as well as the bridging processes (fluency, print concepts, vocabulary knowledge, etc.). Compared to computer-based or written assessments, I find that assessment conferences provide a safe environment in which students share more, allowing teachers to gather more comprehensive data.

Complete Comprehension Toolkits

Similar to an assessment conference, the Complete Comprehension toolkits are designed to assess students’ understanding of multiple components of the AVR, as well as several of the reader and text variables that impact comprehension. With this assessment kit, teachers have a tool to evaluate students’ language comprehension as they read a whole book of their choosing. In addition to assessing comprehension, the toolkit considers student motivation and engagement by encouraging students to reflect on the assessment experience. This gives the teacher insight into each student’s ability to monitor their own comprehension and helps inform instructional decisions.

What else can we ask? 

These are just a few questions, sparked by the conversation between Jen, Dr. Cartwright, and my colleagues, to encourage us to think about how to align our assessment data with current reading research.

What can we let go of, adopt, or create to better assess our readers and make sure we are not wasting our—or our students’— time with assessments?


References: 

Cartwright, K. B. (2015). Executive skills and reading comprehension. Guilford Press.

Duke, N. K., & Cartwright, K. B. (2021). The science of reading progresses: Communicating advances beyond the simple view of reading. Reading Research Quarterly, 56(S1), S25–S44.

Hattie, J. A. C. (2009). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. Routledge.

Serravallo, J. (2010). Teaching reading in small groups. Heinemann.

Serravallo, J. (2015). The reading strategies book. Heinemann.

Serravallo, J. (2018). Understanding texts and readers. Heinemann.

Serravallo, J. (2019). A teacher’s guide to reading conferences. Heinemann.

Serravallo, J. (2023). The reading strategies book 2.0. Corwin.
