Teaching Archives - The Hechinger Report
https://hechingerreport.org/tags/topic_teaching/
Covering Innovation & Inequality in Education

PROOF POINTS: Two groups of scholars revive the debate over inquiry vs. direct instruction
https://hechingerreport.org/proof-points-two-groups-of-scholars-revive-the-debate-over-inquiry-vs-direct-instruction/
Mon, 22 Jan 2024 11:00:00 +0000


Educators have long debated the best way to teach, especially the subjects of science and math. One side favors direct instruction, where teachers tell students what they need to know or students read it from textbooks. Some call it explicit or traditional instruction. The other side favors inquiry, where students conduct experiments and figure out the answers themselves like a scientist would. It’s also known as exploration, discovery learning or simply “scientific practices.”

The debate reignited among university professors during the pandemic with the 2021 online publication of a commentary in the journal Educational Psychology Review. Combatively titled “There is an Evidence Crisis in Science Educational Policy,” four experts in science education argued that the evidence for inquiry instruction is weak and that proponents of inquiry “exclude” or “mark as irrelevant” high-quality studies, particularly controlled trials, that “overwhelmingly show minimal support” for inquiry learning.  

One of the authors is the prominent Australian psychologist John Sweller, who formulated cognitive load theory, the widely accepted idea that our working memory can process only so much information at once. Other academics took notice. Traditionalists applauded it.

Sweller and his co-authors’ complaints date back to an influential 1996 report of the National Research Council, an arm of the National Academies that shapes science education policy. The report encouraged science teachers to adopt an inquiry-based approach, and it was followed by similar calls from other policymakers. But the authors of the 2021 article said the council’s references for this policy change were “theoretical ideas packaged in conceptual articles rather than empirical evidence.”

The critics say that much of the positive evidence for inquiry comes from classroom studies where there are no control or comparison groups, making it impossible to know if inquiry is really better than alternatives. And they say that this research frequently lumps together inquiry instruction with other teaching practices and interventions, making it hard to disentangle how much the use of inquiry is making a difference. 

Soon after, another group of prominent education researchers issued a rebuttal. In March 2023, 13 scholars led by a Dutch researcher, Ton de Jong, took on the debate in the academic journal Educational Research Review. Titled “Let’s talk evidence – The case for combining inquiry-based and direct instruction,” their article acknowledged that the research is complicated and doesn’t unequivocally point to the superiority of inquiry-based learning. Some studies show inquiry is better. Some studies show direct instruction is better. Many show that students learn the same amount either way.  (As they walked through a series of meta-analyses that summarized hundreds of studies, they pointedly noted that inquiry critics also ignored or mischaracterized some of the research.) 

Their bottom line: “Inquiry-based instruction produces better overall results for acquiring conceptual knowledge than does direct instruction.” 

How could two groups of scholars look at the same body of research and come to opposite conclusions?

The first thing to notice is that the two groups of scholars are arguing about two different things. The inquiry critics argued that inquiry isn’t great at helping students learn content and skills. The inquiry defenders emphasized that inquiry is better at helping students develop conceptual understanding. Different teaching methods may be better for different learning goals.

The second takeaway is that even this group of 13 inquiry defenders argue that teachers should use both approaches, inquiry and direct instruction. That’s because students also need to learn content and procedural skills, which are best taught through direct instruction, and in part because it would be boring to learn only one way all the time. 

Indeed, even the critics of inquiry instruction noted that inquiry lessons and exercises may be better at sparking a love of science. Students often say they enjoy science more or become more interested in the field after an inquiry lesson. Changing students’ attitudes about science is certainly not a compelling reason to teach this way all the time, as students need to learn content too, but even traditionalists admit there’s something to be gained from fun exploration. 

My third observation is that the inquiry defenders listed a bunch of caveats about when inquiry learning has proven to be most effective. Unstructured inquiry lessons where students groped in the dark weren’t successful in building any kind of understanding.

Caveat 1: Students need a strong foundation of knowledge and skills in order for inquiry learning to be successful. In other words, students need some facts and the ability to calculate things in different ways to take advantage of inquiry learning and arrive at deeper conceptual understandings. Complete mastery isn’t a prerequisite, but some familiarity is. The authors suggested, for example, that it can be beneficial to start with some direct instruction before launching into an inquiry lesson. 

Caveat 2: Inquiry learning is far more effective when students receive a lot of guidance and feedback from their teacher during an inquiry lesson. Sometimes the most appropriate guidance is a clear explanation, the authors said, which is the same as direct instruction. (My brain started to hurt, thinking about how direct instruction could be woven into inquiry-based learning. Is it really inquiry learning if you’re also telling students what they need to do or know? At some point, shouldn’t we be labeling it direct instruction with hands-on activities?) 

The 13 authors admitted that each student needs different amounts and types of guidance during an inquiry lesson. Low-achieving students appear to benefit more from guidance than middle- or high-achieving students. But low-achieving students also need more of it. And that can be tough, if not impossible, for a single teacher to manage. I began to wonder if effective inquiry teaching is humanly possible.

Not only can inquiry include a lot of direct instruction, but sometimes direct instruction can resemble an inquiry classroom. While many people may imagine that direct instruction means that students are passively absorbing information through lectures or books, the inquiry defenders explained that students can and should be engaged in activities even when a teacher is practicing direct instruction. Students still solve problems, practice new things independently, build projects and conduct experiments. The core difference can be a subtle one and hinge upon whether the teacher explains the theory to the students first or shows examples before students try it themselves (direct), or if the teacher asks students to figure out the theories and the procedures themselves, but gives them explicit guidance along the way (inquiry).

Like all long-standing academic debates, this one is far from resolved. Some educators prefer inquiry; some prefer direct instruction.  Depending upon your biases, you’re likely to see a complicated, mixed body of research as glass half full or glass half empty.

In December 2023, Sweller and the inquiry critics wrote a response to the rebuttal in the same Educational Research Review journal.  Beyond the academic sniping and nitpicking, the two sides seem to have found some common ground.

“Our view… is that explicit instruction is essential for novices” but that as students gain knowledge, there should be “an increasing emphasis on independent problem-solving practice,” Sweller and his camp wrote.  “To the extent that De Jong et al. (2023) agree that explicit instruction can be important, we appear to have reached some level of agreement.”

The real test will be watching to see whether that consensus makes it to the classroom.

This story about teaching strategies was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for the Proof Points newsletter.

PROOF POINTS: How to get teachers to talk less and students more
https://hechingerreport.org/proof-points-how-to-get-teachers-to-talk-less-and-students-more/
Mon, 15 Jan 2024 11:00:00 +0000

Example of the talk meter shown to Cuemath tutors at the end of the tutoring session. Source: Figure 2 of Demszky et al., “Does Feedback on Talk Time Increase Student Engagement? Evidence from a Randomized Controlled Trial on a Math Tutoring Platform.”

Silence may be golden, but when it comes to learning with a tutor, talking is pure gold. It’s audible proof that a student is paying attention and not drifting off, research suggests. More importantly, the more a student articulates his or her reasoning, the easier it is for a tutor to correct misunderstandings or praise a breakthrough. Those are the moments when learning happens.

One India-based tutoring company, Cuemath, trains its tutors to encourage students to talk more. Its tutors are in India, but many of its clients are American families with elementary school children. The tutoring takes place at home via online video, like a Zoom meeting with a whiteboard, where both tutor and student can work on math problems together. 

The company wanted to see if it could boost student participation so it collaborated with researchers at Stanford University to develop a “talk meter,” sort of a Fitbit for the voice, for its tutoring site. Thanks to advances in artificial intelligence, the researchers could separate the audio of the tutors from that of the students and calculate the ratio of tutor-to-student speech.

In initial pilot tests, the talk meter was posted on the tutor’s video screen for the entire one-hour tutoring session, but tutors found that too distracting. The study was revised so that the meter pops up every 20 minutes or three times during the session. When the student is talking less than 25 percent of the time, the meter goes red, indicating that improvement is needed. When the student is talking more than half the time, the meter turns green. In between, it’s yellow. 
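The color logic described above is simple enough to state in code. This is a minimal sketch under the thresholds the story reports; the function and argument names are hypothetical, not Cuemath’s actual implementation:

```python
def talk_meter_color(student_seconds: float, total_seconds: float) -> str:
    """Map a student's share of talk time to a feedback color.

    Per the story: under 25 percent of talk time is red ("improvement
    needed"), over 50 percent is green, and anything in between is yellow.
    """
    if total_seconds <= 0:
        raise ValueError("total_seconds must be positive")
    share = student_seconds / total_seconds
    if share < 0.25:
        return "red"
    if share > 0.50:
        return "green"
    return "yellow"

# Example: a student who spoke 8 minutes of a 20-minute window (40 percent)
print(talk_meter_color(8 * 60, 20 * 60))  # yellow
```

In the study itself this ratio came from AI speaker separation on the session audio; the sketch only covers the final thresholding step.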

Example of the talk meter shown to tutors every 20 minutes during the tutoring session. Source: Figure 2 of Demszky et al., “Does Feedback on Talk Time Increase Student Engagement? Evidence from a Randomized Controlled Trial on a Math Tutoring Platform.”

More than 700 tutors and 1,200 of their students were randomly assigned to one of three groups: one in which only the tutors were shown the talk meter, another in which both tutors and students were shown it, and a third control group that wasn’t shown the talk meter at all, for comparison.

When just the tutors saw the talk meter, they tended to curtail their explanations and talk much less. But despite their efforts to prod their tutees to talk more, students increased their talking only by 7 percent. 

When students were also shown the talk meter, the dynamic changed. Students increased their talking by 18 percent. Introverts especially started speaking up, according to interviews with the tutors. 

The results show how teaching and learning is a two-way street. It’s not just about coaching teachers to be better at their craft. We also need to coach students to be better learners. 

“It’s not all the teacher’s responsibility to change student behavior,” said Dorottya Demszky, an assistant professor in education data science at Stanford University and lead author of the study. “I think it’s genuinely, super transformative to think of the student as part of it as well.”

The study hasn’t yet been published in a peer-reviewed journal and is currently a draft paper, “Does Feedback on Talk Time Increase Student Engagement? Evidence from a Randomized Controlled Trial on a Math Tutoring Platform,” so it may still be revised. It is slated to be presented in March 2024 at the annual conference of the Society for Learning Analytics Research in Kyoto, Japan.

In analyzing the sound files, Demszky noticed that students tended to work on their practice problems with the tutor more silently in both the control and tutor-only talk meter groups. But students started to verbalize their steps aloud once they saw the talk meter. Students were filling more of the silences.

In interviews with the researchers, students said the meter made the tutoring session feel like a game.  One student said, “It’s like a competition. So if you talk more, it’s like, I think you’re better at it.” Another noted:  “When I see that it’s red, I get a little bit sad and then I keep on talking, then I see it yellow, and then I keep on talking more. Then I see it green and then I’m super happy.” 

Some students found the meter distracting.  “It can get annoying because sometimes when I’m trying to look at a question, it just appears, and then sometimes I can’t get rid of it,” one said.

Tutors had mixed reactions, too. For many, the talk meter was a helpful reminder not to be long-winded in their explanations and to ask more probing, open-ended questions. Some tutors said they felt pressured to reach a 50-50 ratio and that they were unnaturally holding back from speaking. One tutor pointed out that it’s not always desirable for a student to talk so much. When you’re introducing a new concept or the student is really lost and struggling, it may be better for the teacher to speak more. 

Surprisingly, kids didn’t just fill the air with silly talk to move the gauge. Demszky’s team analyzed the transcripts in a subset of the tutoring sessions and found that students were genuinely talking about their math work and expressing their reasoning. The use of math terms increased by 42 percent.
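A transcript analysis like the one behind that 42 percent figure can be sketched simply: count occurrences of domain vocabulary and normalize by transcript length. The term list below is invented for illustration; the story doesn’t describe the team’s actual lexicon or method:

```python
import re

# Illustrative lexicon only; the study's real term list isn't given here.
MATH_TERMS = {"add", "subtract", "multiply", "divide", "fraction",
              "numerator", "denominator", "equation", "sum", "product"}

def math_terms_per_100_words(transcript: str) -> float:
    """Rate of math vocabulary in a transcript, per 100 words."""
    words = re.findall(r"[a-z]+", transcript.lower())
    if not words:
        return 0.0
    hits = sum(1 for word in words if word in MATH_TERMS)
    return 100 * hits / len(words)

print(math_terms_per_100_words("I multiply the numerator by two then divide"))  # 37.5
```

Comparing this rate across the control and talk-meter groups is one way a change in math talk, not just talk, would show up.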

Unfortunately, there are several drawbacks to the study design. We don’t know if students’ math achievement improved because of the talk meter: students of different ages were learning different things in different grades and different countries, and there was no single standardized test to give them all.

Another confounding factor is that students who saw the talk meter were also given extra information sessions and worksheets about the benefits of talking more. So we can’t tell from this experiment if the talk meter made the difference or if the information on the value of talking aloud would have been enough to get them to talk more.

Excerpts from transcribed tutoring sessions in which students are talking about the talk meter. Source: Table 4 of Demszky et al., “Does Feedback on Talk Time Increase Student Engagement? Evidence from a Randomized Controlled Trial on a Math Tutoring Platform.”

Demszky is working on developing a talk meter app that can be used in traditional classrooms to encourage more student participation. She hopes teachers will share talk meter results with their students. “I think you could involve the students a little more: ‘It seems like some of you weren’t participating. Or it seems like my questions were very closed ended? How can we work on this together?’”

But she said she’s treading carefully because she is aware that there can be unintended consequences with measurement apps. She wants to give feedback not only on how much students are talking but also on the quality of what they are talking about. And natural language processing still has trouble with English in foreign accents and background noise. Beyond the technological hurdles, there are psychological ones too.

“Not everyone wants a Fitbit or a tool that gives them metrics and feedback,” Demszky acknowledges.

This story about student participation was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for the Proof Points newsletter.

PROOF POINTS: It’s easy to fool ChatGPT detectors
https://hechingerreport.org/proof-points-its-easy-to-fool-chatgpt-detectors/
Mon, 04 Sep 2023 10:00:00 +0000


A high school English teacher recently explained to me how she’s coping with the latest challenge to education in America: ChatGPT.  She runs every student essay through five different generative AI detectors. She thought the extra effort would catch the cheaters in her classroom. 

A clever series of experiments by computer scientists and engineers at Stanford University indicates that her labors to vet each essay five ways might be in vain. The researchers demonstrated that seven commonly used GPT detectors are so primitive that they are both easily fooled by machine-generated essays and prone to improperly flagging innocent students. Layering several detectors on top of each other does little to solve the problem of false negatives and false positives.

“If AI-generated content can easily evade detection while human text is frequently misclassified, how effective are these detectors truly?” the Stanford scientists wrote in a July 2023 paper, published under the banner, “opinion,” in the peer-reviewed data science journal Patterns. “Claims of GPT detectors’ ‘99% accuracy’ are often taken at face value by a broader audience, which is misleading at best.”

The scientists began by generating 31 counterfeit college admissions essays using ChatGPT 3.5, the free version that any student can use. GPT detectors were pretty good at flagging them. Two of the seven detectors they tested caught all 31 counterfeits. 

But all seven GPT detectors could be easily tricked with a simple tweak. The scientists asked ChatGPT to rewrite the same fake essays with this prompt: “Elevate the provided text by employing literary language.”

Detection rates plummeted to near zero (3 percent, on average). 

I wondered what constitutes literary language in the ChatGPT universe. Instead of college essays, I asked ChatGPT to write a paragraph about the perils of plagiarism. In ChatGPT’s first version, it wrote: “Plagiarism presents a grave threat not only to academic integrity but also to the development of critical thinking and originality among students.” In the second, “elevated” version, plagiarism is “a lurking specter” that “casts a formidable shadow over the realm of academia, threatening not only the sanctity of scholastic honesty but also the very essence of intellectual maturation.”  If I were a teacher, the preposterous magniloquence would have been a red flag. But when I ran both drafts through several AI detectors, the boring first one was flagged by all of them. The flamboyant second draft was flagged by none. Compare the two drafts side by side for yourself. 

Simple prompts bypass ChatGPT detectors: for college admission essays generated by ChatGPT 3.5, the performance of seven widely used detectors declines markedly when a second-round self-edit prompt (“Elevate the provided text by employing literary language”) is applied. Red bars show AI detection rates before the language was made loftier; gray bars, after. Source: Liang, W., et al., “GPT detectors are biased against non-native English writers” (2023)

Meanwhile, these same GPT detectors incorrectly flagged essays written by real humans as AI generated more than half the time when the students were not native English speakers. The researchers collected a batch of 91 practice English TOEFL essays that Chinese students had voluntarily uploaded to a test-prep forum before ChatGPT was invented. (TOEFL is the acronym for the Test of English as a Foreign Language, which is taken by international students who are applying to U.S. universities.) After running the 91 essays through all seven ChatGPT detectors, 89 essays were identified by one or more detectors as possibly AI-generated. All seven detectors unanimously marked one out of five essays as AI authored. By contrast, the researchers found that GPT detectors accurately categorized a separate batch of 88 eighth grade essays, submitted by real American students.

My former colleague Tara García Mathewson brought this research to my attention in her first story for The Markup, which highlighted how international college students are facing unjust accusations of cheating and need to prove their innocence. The Stanford scientists are warning not only about unfair bias but also about the futility of using the current generation of AI detectors. 

Bias in ChatGPT detectors: more than half of the TOEFL (Test of English as a Foreign Language) essays written by non-native English speakers were incorrectly classified as “AI-generated,” while the detectors exhibited near-perfect accuracy on U.S. eighth graders’ essays. Source: Liang, W., et al., “GPT detectors are biased against non-native English writers” (2023)

The reason that the AI detectors fail in both cases – with a bot’s fancy language and with foreign students’ real writing – is the same. And it has to do with how the detectors work. A detector is a machine learning model that analyzes vocabulary choices, syntax and grammar. A widely adopted measure inside numerous GPT detectors is something called “text perplexity,” a calculation of how predictable or banal the writing is. It gauges the degree of “surprise” in how words are strung together in an essay. If the model can predict the next word in a sentence easily, the perplexity is low. If the next word is hard to predict, the perplexity is high.

Low perplexity is a symptom of an AI generated text, while high perplexity is a sign of human writing. My intentional use of the word “banal” above, for example, is a lexical choice that might “surprise” the detector and put this column squarely in the non-AI generated bucket. 

Because text perplexity is a key measure inside the GPT detectors, it becomes easy to game with loftier language. Non-native speakers get flagged because they are likely to exhibit less linguistic variability and syntactic complexity.
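The perplexity idea can be made concrete with a toy model. The sketch below scores text with a bigram model and add-one smoothing, a crude stand-in for the much larger language models inside real detectors; the function name and tiny training corpus are invented for illustration. Predictable word sequences score low, jumbled ones score high:

```python
import math
from collections import Counter

def bigram_perplexity(text: str, corpus: str) -> float:
    """Rough perplexity of `text` under a bigram model fit on `corpus`.

    Lower perplexity means each word was easier to predict from the one
    before it. Add-one (Laplace) smoothing keeps unseen bigrams finite.
    """
    words = corpus.lower().split()
    bigram_counts = Counter(zip(words, words[1:]))
    unigram_counts = Counter(words)
    vocab_size = len(unigram_counts) + 1  # +1 reserves mass for unseen words

    tokens = text.lower().split()
    log_prob = 0.0
    for prev, cur in zip(tokens, tokens[1:]):
        p = (bigram_counts[(prev, cur)] + 1) / (unigram_counts[prev] + vocab_size)
        log_prob += math.log(p)
    n_bigrams = max(len(tokens) - 1, 1)
    return math.exp(-log_prob / n_bigrams)

training = "the cat sat on the mat and the cat slept on the mat"
assert bigram_perplexity("the cat sat on the mat", training) < \
       bigram_perplexity("mat slept sat cat the on", training)
```

Real detectors compute this with neural language models over subword tokens, but the thresholding logic is the same: low perplexity leans “AI,” high perplexity leans “human” – which is exactly why loftier word choices, or a non-native writer’s simpler ones, push the score the wrong way.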

The seven detectors were created by originality.ai, Quill.org, Sapling, Crossplag, GPTZero, ZeroGPT and OpenAI (the creator of ChatGPT). During the summer of 2023, Quill and OpenAI both decommissioned their free AI checkers because of inaccuracies. Open AI’s website says it’s planning to launch a new one.

“We have taken down AI Writing Check,” Quill.org wrote on its website, “because the new versions of Generative AI tools are too sophisticated for detection by AI.” 

The site blamed newer generative AI tools that have come out since ChatGPT launched last year.  For example, Undetectable AI promises to turn any AI-generated essay into one that can evade detectors … for a fee. 

Quill recommends a clever workaround: check students’ Google doc version history, which Google captures and saves every few minutes. A normal document history should show every typo and sentence change as a student is writing. But someone who had an essay written for them – either by a robot or a ghostwriter – will simply copy and paste the entire essay at once into a blank screen. “No human writes that way,” the Quill site says. A more detailed explanation of how to check a document’s version history is here.
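Quill’s heuristic boils down to one signal: did nearly all of the text arrive in a single revision? A hypothetical sketch, assuming an edit log of (timestamp, characters added) pairs rather than any real Google Docs API:

```python
def looks_pasted(revisions: list[tuple[str, int]], threshold: float = 0.9) -> bool:
    """Flag a document whose text arrived almost all at once.

    `revisions` holds (timestamp, characters_added) pairs. Ordinary
    writing spreads characters across many small revisions; a wholesale
    paste concentrates nearly all of them in one. The 90 percent cutoff
    is an arbitrary illustration, not Quill's published rule.
    """
    total = sum(chars for _, chars in revisions)
    if total == 0:
        return False
    largest = max(chars for _, chars in revisions)
    return largest / total >= threshold

# Typed gradually: many small revisions
print(looks_pasted([("9:01", 120), ("9:04", 95), ("9:09", 210)]))  # False
# Pasted wholesale: one revision holds nearly everything
print(looks_pasted([("9:01", 12), ("9:02", 4300)]))                # True
```

Even automated, this only shifts the arms race: a determined cheater can retype generated text by hand, which is why revision history is a screen, not proof.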

Checking revision histories might be more effective, but this level of detective work is ridiculously time consuming for a high school English teacher who is grading dozens of essays. AI was supposed to save us time, but right now, it’s adding to the workload of time-pressed teachers!

This story about ChatGPT detectors was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Proof Points and other Hechinger newsletters. 

PROOF POINTS: The best way to teach might depend on the subject
https://hechingerreport.org/proof-points-the-best-way-to-teach-might-depend-on-the-subject/
Mon, 19 Jun 2023 10:00:00 +0000


What is the best way to teach? Some educators like to deliver clear explanations to students. Others favor discussions or group work. Project-based learning is trendy. But a June 2023 study from England could override all these debates: the most effective use of class time may depend on the subject.

The researchers found that students who spent more time in class solving practice problems on their own and taking quizzes and tests tended to have higher scores in math. It was just the opposite in English class. Teachers who allocated more class time to discussions and group work ended up with higher scorers in that subject. 

“There does seem to be a difference between language and math in the best use of time in class,” said Eric Taylor, an economist who studies education at the Harvard Graduate School of Education and one of the study’s authors. “I think that is contradictory to what some people would expect and believe.”

Indeed, the way that the 250 secondary school teachers in this study taught didn’t differ that much between math and English. For example, math teachers were almost as likely to devote most or all of the hour of class time to group discussions as English teachers were: 35 percent compared to 41 percent. Lectures were one of the least common uses of time in both subjects.

The study, “Teacher’s use of class time and student achievement,” published in the Economics of Education Review, gives us a rare glimpse inside classrooms thanks to a sister experiment in teacher ratings that provided the data: teachers observed their colleagues and filled out surveys on how frequently various instructional activities were taking place.

How teachers in low-income secondary schools in England allocate class time. In this study of 32 English secondary schools, math teachers didn’t allocate class time in a radically different way than English teachers. Source: Appendix of “Teacher’s use of class time and student achievement,” Economics of Education Review, June 2023

The researchers studied 32 high-poverty English secondary schools and looked at how the allocation of classroom time in years 10 and 11 related to the test scores of 7,000 students. Throughout the United Kingdom, including England, where this study took place, 11th-year students take General Certificate of Secondary Education (GCSE) exams, which are akin to high school exit exams. (Years 10 and 11 are equivalent to 9th and 10th grades in the United States.)

Researchers didn’t prove that teachers’ choices on how to spend class time caused GCSE scores to go up. But they were able to control for teacher quality, and they noticed that even among teachers with the same ratings, those who allocated more time to individual practice work had higher student math scores. Similarly, among English teachers with the same quality ratings, those who allocated more time to discussions and group work had higher student English scores. “Better” teachers who received higher ratings from their peers had a slight tendency to allocate time more effectively (that is, more practice work in math and more discussion time in English), but there were plenty of highly rated teachers who didn’t spend class time this way.

The researchers did not theorize about why individual practice work is more important in math than in English. I’ve noticed that doing a lot of practice problems during school hours is a big part of the algebra tutoring programs that have produced strong results for teens. Advocates of project-based learning once tried to develop a curriculum to teach math, but backed off when they struggled to come up with good projects for teaching abstract math concepts and skills. But they had success with English, science and social studies. 

Although the study took place in England, Taylor sees lessons here for U.S. educators on how to spend their class time.  “I suspect that if we repeated this whole setup in high schools in New York or elsewhere in the United States that we would see similar results,” said Taylor. 

In this country many teachers are encouraged to incorporate “math talks” as a way to develop mathematical reasoning and help students see multiple strategies for solving a problem. Progressive math educators might also favor group over individual work. Yet this study found stronger math achievement for students whose teachers devoted less class time to math discussions or group work. 

Critics might complain that test scores shouldn’t be the ultimate goal of mathematics education. Some teachers care more about developing a love of math or inspiring students to pursue math-heavy fields. We cannot tell from this study if teachers who conduct more math discussions produce other long-term benefits for students. 

It’s also unclear from this study exactly what math teachers are doing during the long stretches of independent work time. Some may be milling about offering hints and one-to-one help. Others might be kicking back at their desks, catching up on email or drinking a cup of tea while students complete their homework in class.

Even teachers who devote most of their class time to independent practice work may begin class with five or 10 minutes of lecturing. It’s not as if students are magically teaching themselves math, muddling through on their own, Taylor said.

“It’s not the only thing that’s going on in these classes,” said Taylor. 

I suspect that we’re going to have more information on how good teachers spend their precious minutes of class time in the near future, thanks to improvements in artificial intelligence and learning analytics. I can imagine algorithms more accurately analyzing how class time is spent from audio and video recordings, eliminating the need for human observers to code hours of instructional time. 

“Even if we don’t know exactly the recipe to give to teachers today, I think this study does say, ‘Well, hold on a minute, maybe we should be thinking differently about what’s right if we’re teaching math or language’,” said Taylor. These results, he added, should encourage educators to think more about what works best for each subject.  

This story about math teaching methods was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Proof Points and other Hechinger newsletters. 

The post PROOF POINTS: The best way to teach might depend on the subject appeared first on The Hechinger Report.

PROOF POINTS: 2022 in review https://hechingerreport.org/proof-points-2022-in-review/ https://hechingerreport.org/proof-points-2022-in-review/#respond Mon, 19 Dec 2022 11:00:00 +0000 https://hechingerreport.org/?p=90879

Catching up is hard to do. Several studies marked the pandemic’s toll on student achievement and hinted at challenges for even the most promising solutions. Credit: Allison Shelley for EDUimages

For my year-end post, I’m highlighting 10 of the most important Proof Points stories of 2022. This year, I was proud to write several watchdog stories that use research evidence to highlight poor or ineffective practices in schools. I put a special focus on tutoring – the good and the bad –  as well as test-optional admissions and reading. 

Thank you to everyone who read and commented on my weekly stories about education data and research. I look forward to continuing this conversation with you next year. If you would like to receive my email newsletter and be notified when the column comes out each week, please click here and fill out the form. I’ll be back again on Jan. 2, 2023 with a story about arts education. Happy New Year!

1. PROOF POINTS: Many schools are buying on-demand tutoring but a study finds that few students are using it

Companies market 24/7 online tutoring services as “high-dosage” tutoring to help students catch up from pandemic learning losses. But researchers warn that these products don’t have an evidence base behind them and now a study finds that not many students are using them.

2. PROOF POINTS: Does growth mindset matter? The debate heats up

I took a deep dive inside the scholarly debate over boosting students’ “mindsets” — one of the most popular ideas in education. Dueling meta-analyses conclude it’s either generally ineffective or effective only for low-achievers. 

3. PROOF POINTS: Colleges that ditched test scores for admissions find it’s harder to be fair in choosing students, researcher says

A qualitative study gives us a rare, unvarnished glimpse inside college admissions offices as they struggle to admit students under new “test-optional” policies. Admissions officers often described a “chaotic” and “stressful” process in which they lacked clear guidance on how to select students without SAT or ACT scores. This story came out a couple of weeks before the U.S. Supreme Court heard two affirmative action cases, and it was my most-read story of 2022.

4. PROOF POINTS: Leading dyslexia treatment isn’t a magic bullet, studies find, while other options show promise

New research casts doubt on the most sought-after and expensive way of teaching children with dyslexia to read: the Orton-Gillingham method. 

5. PROOF POINTS: Seven new studies on the impact of a four-day school week

As policymakers debate the schedule switch, some research shows a tiny negative effect on rural students, where the shortened week is most popular. 

6. PROOF POINTS: Researchers say cries of teacher shortages are overblown

I was the first reporter to question the media narrative that teachers were leaving the profession en masse. I discovered that teacher vacancies weren’t much higher than during previous tight labor markets.

7. PROOF POINTS: The paradox of ‘good’ teaching

Researchers find a tradeoff between raising achievement and engaging students. It’s extremely rare for teachers to do both in the classroom. 

8. PROOF POINTS: Debunking the myth that teachers stop improving after five years

Newer research finds that even experienced educators get better, albeit at a slower pace. 

9. PROOF POINTS: Third graders struggling the most to recover in reading after the pandemic

As the coronavirus pandemic ravaged communities and shuttered schools, many educators and parents worried about kindergarteners who were learning online. That concern now appears well-founded as we’re starting to see evidence that remote school and socially distanced instruction were profoundly detrimental to their reading development. 

10. PROOF POINTS: Six puzzling questions from the disastrous NAEP results

How bad were pandemic learning losses among fourth graders? My best analogy is a cross-country road trip. Imagine that students were traveling at 55 miles an hour, ran out of gas and started walking instead. Now they’re back in their cars and humming along again at 55 miles an hour. Some are traveling at 60 miles an hour, catching up slightly, but they’re still far away from the destination that they would have reached if they hadn’t run out of gas.

This story about the top education research stories of 2022 was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for the Hechinger newsletter.

PROOF POINTS: Black and white teachers from HBCUs are better math instructors, study finds https://hechingerreport.org/proof-points-black-and-white-teachers-from-hbcus-are-better-math-instructors-study-finds/ https://hechingerreport.org/proof-points-black-and-white-teachers-from-hbcus-are-better-math-instructors-study-finds/#comments Mon, 26 Sep 2022 10:00:00 +0000 https://hechingerreport.org/?p=88866

Black elementary students in North Carolina tended to score higher on annual math tests when they were taught by an HBCU-trained teacher, but not necessarily a Black teacher, according to an unpublished study from a Stanford University graduate student. Credit: Cheryl Gerber for The Hechinger Report

A large body of research shows that Black students are likely to learn more when they are taught by a Black teacher. Quantitative researchers have found better results for Black students taught by Black teachers in Texas, Florida, Missouri, Tennessee and North Carolina. It’s one of the reasons that many education advocates have called for diversifying the teacher workforce, which is overwhelmingly white.

But a large study of a million elementary school students and nearly 35,000 teachers in North Carolina found that Black teachers aren’t always better for Black students. The race of the teacher didn’t affect the academic achievement of Black students in third through fifth grade across eight school years, from 2009-10 to 2017-18. Almost a quarter of the students were Black and they did just as well on their annual reading and math tests with a white teacher as they did with a Black one. 

Instead, what mattered was where a teacher went to college. Both Black and white teachers trained at a historically Black college or university (HBCU) helped Black students do better in math. Almost one out of 10 teachers in North Carolina graduated from an HBCU, and a quarter of these HBCU-trained teachers were white. During a year that a Black elementary school student had one of these HBCU-trained teachers, his or her math scores were higher. In the following year, if their teacher was trained elsewhere, these same Black students tended to post lower math scores.

“I thought that this has to be wrong somehow because so many papers have found an effect for a Black-teacher Black-student match,” said Lavar Edmonds, a graduate student in economics and education at Stanford University, who conducted the analysis. Edmonds ran the numbers in different ways “over and over again” and kept getting the same results. “I only note a same-race teacher effect for Black students when that teacher went to an HBCU.”

Previous studies weren’t necessarily wrong, but differences in the data can yield different results. For example, one earlier study focused on long-term outcomes, instead of test scores, and found higher college-going rates for Black students taught by Black teachers. Edmonds’s study, “Role Models Revisited: HBCUs, Same Race Teacher Effects and Black Student Achievement,” hasn’t been peer reviewed or published in an academic journal, but an August 2022 draft was publicly posted. Bolstering Edmonds’s results is another unpublished national study of 18,000 students, presented at a September 2022 conference of the Society for Research on Educational Effectiveness. It also failed to find higher achievement in math, reading or science for students taught by a teacher of the same race.

The boost to math achievement for a Black student learning from an HBCU teacher wasn’t terribly large, but it was often larger than the benefit of having a Black teacher in previous studies. The increase in math test scores was equal to about 5 percent of the typical test score gap between Black and white students. White and Hispanic students weren’t penalized; they did just as well with HBCU teachers as they did with non-HBCU teachers. 

It’s worth emphasizing that this HBCU teacher benefit was detected only in math – not in reading. Black children’s reading scores were unaffected by their teacher’s race or university. 

Exactly what HBCUs are doing to train more effective math teachers is an excellent question and Edmonds admits he doesn’t know the answer. There are 11 HBCUs in North Carolina and five of them, such as Fayetteville State University and Elizabeth City State University, produced most of the teachers in this particular study. Historically, many of the nation’s 100 HBCUs were founded as teacher training grounds or “normal” schools. In North Carolina, half of all Black teachers hailed from an HBCU.

At first glance, one might think that HBCUs produce teachers of lower quality. In this study, the HBCU-trained teachers posted much lower scores on their teacher certification exams, called Praxis. “They’re clearly outperforming more ‘qualified’ teachers,” said Edmonds. “At a minimum, this raises the question of what we’re measuring.”

Edmonds doubts that math instructional approaches at HBCUs are dramatically different from those at other teaching programs. “The general concept of adding is going to be more or less the same,” said Edmonds, a former high school math teacher himself. 

Edmonds speculates that HBCU-trained teachers experienced a different culture and climate in college that they replicate in their own classrooms. “Many of my family members went to HBCUs and a recurring theme is how they found it more welcoming,” he said. “They felt more at peace, more at home at an HBCU. Warmer, I would say. I think there is a component of that in how a teacher conveys information to a student. If you’re getting more of that environment, yourself, as a student at these institutions, I think it makes a difference in your disposition as a teacher.”

To be sure, different types of people choose to attend an HBCU in the first place. HBCU students might have had life experiences before college that helped them better connect with Black children in their professional lives. It’s possible that HBCUs aren’t doing anything magical at all, but that the people who attend them are special.

Teacher race remains a big factor when it comes to student discipline. Black boys were more likely to be suspended with white teachers than with Black teachers, according to the study. But once again HBCU training makes a difference here too. Black boys were less likely to be suspended by an HBCU-trained white teacher than a white teacher who trained elsewhere. (HBCU training didn’t make a difference for the suspension rates of Black girls.)

Given that the teaching profession is overwhelmingly white – nearly 80 percent of teachers – it’s heartening to see a study that can perhaps shine a light on how white teachers might become more effective with Black students, even as we try to diversify the ranks. 

Edmonds, who is Black, says the point of his paper is to help the field of education “think more deeply about teacher-student relationships” and what makes them work well in ways that can transcend race. “Not to say that race is not important, but I think if we are overly reliant on these characteristics, it’s a slippery slope, I think, to race essentialism,” he said.

HBCUs are clearly enjoying a renaissance. Applications to HBCUs spiked almost 30 percent from 2018 to 2021 even as the total number of U.S. undergraduate students dropped by almost 10 percent during the pandemic. This study suggests another reason why HBCUs remain relevant and important. 

This story about HBCU teachers was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for the Hechinger newsletter.

PROOF POINTS: The paradox of ‘good’ teaching https://hechingerreport.org/proof-points-the-paradox-of-good-teaching/ https://hechingerreport.org/proof-points-the-paradox-of-good-teaching/#comments Mon, 11 Jul 2022 10:00:00 +0000 https://hechingerreport.org/?p=87749


What is “good” teaching? Ask 10 people and you’ll get 10 different answers. Hollywood celebrates teachers who believe in their students and help them to achieve their dreams. The influential education economist Eric Hanushek, a senior fellow at Stanford University’s Hoover Institution, argues that good teachers raise their students’ achievement. Teachers are expected to impart so many things, from how to study and take notes to how to share and take turns. Deciding what constitutes good teaching is a messy business.

Two researchers from the University of Maryland and Harvard University waded into this mess. They analyzed 53 elementary school teachers who had been randomly assigned to classrooms within their schools, located in four districts along the East Coast. Focusing on math instruction, the researchers compared students’ math scores with surveys that the fourth- and fifth-grade students had filled out as part of an experiment. Students were asked to rate their math classes the way consumers fill out customer satisfaction surveys: “This math class is a happy place for me to be;” “Being in this math class makes me feel sad or angry;” “The things we have done in math this year are interesting;” “Because of this teacher, I am learning to love math;” and “I enjoy math class this year.” 

The academics found that there was often a tradeoff between “good teaching” where kids learn stuff and “good teaching” that kids enjoy. Teachers who were good at raising test scores tended to receive low student evaluations. Teachers with great student evaluations tended not to raise test scores all that much. 

“The teachers and the teaching practices that can increase test scores often are not the same as those that improve student-reported engagement,” said David Blazar, one of the study’s co-authors and an associate professor of education policy at the University of Maryland College Park. 

Blazar’s study, “Challenges and Tradeoffs of ‘Good’ Teaching: The Pursuit of Multiple Educational Outcomes,” was co-written with Cynthia Pollard, a doctoral student at Harvard University’s Graduate School of Education. It was publicly posted in June 2022 as a working paper of the Annenberg Institute at Brown University. 

It’s hard to understand exactly why the tradeoff between achievement and student engagement exists. One theory is that “drill and kill” style rote repetition might be effective in helping students do well on tests but make class dreadfully dull. The researchers watched hours of videotaped lessons of these teachers in classrooms, but they didn’t find statistical evidence that teachers who spent more class time on test prep produced higher test scores. High achievement didn’t seem to be associated with rote instruction. 

Instead, it was teachers who had delivered more cognitively demanding lessons, going beyond procedural calculations to complex understandings, who tended to produce higher math scores. The researchers admitted it was “worrisome” that the kind of cognitively demanding instruction that we want to see “can simultaneously result in decreased student engagement.”  

Other researchers and educators have noted that learning is hard work. It often doesn’t feel good for students when they’re making mistakes and struggling to figure things out. It can feel frustrating during the moments when students are learning the most.

It was rare, but the researchers managed to find six teachers among the 53 in the study who could do both types of good teaching simultaneously. Teachers who incorporated a lot of hands-on, active learning received high marks from students and raised test scores. These teachers often had students working together collaboratively in pairs or groups, using tactile objects to solve problems or play games. For example, one teacher had students use egg cartons and counters to find equivalent fractions.

These doubly “good” teachers had another thing in common: they maintained orderly classrooms that were chock full of routines. Though strict discipline and punishing kids for bad behavior has fallen out of fashion, the researchers noticed that these teachers were proactive in setting up clear behavioral rules at the start of each class. “Teachers appeared quite thoughtful and sophisticated in their use of routines to maintain efficiency and order across the classroom,” the researchers wrote. “The time that teachers did spend on student behavior typically involved short redirections that did not interrupt the flow of the lesson.”

These teachers also had a good sense of pacing and understood the limits of children’s attention spans. Some used timers. One teacher used songs to measure time. “The teachers seemed intentional about the amount of time spent on activities,” the researchers noted. 

Given that it’s not common or easy to engage students and get them to learn math, Blazar was curious to learn which teachers were ultimately better for students in the long run. This experiment actually took place a decade ago in 2012, and the students were tracked afterward. Blazar is currently looking at how these students were doing five and six years later. In his preliminary calculations, he’s finding that the students who had more engaging elementary school teachers subsequently had higher math and reading achievement scores and fewer absences in high school. The students who had teachers who were more effective in raising achievement were generally doing better in high school too, but the long-run benefits faded out somewhat. Though we all want children to learn to multiply and divide, it may be that engaging instruction is ultimately more beneficial. 

Researchers like Blazar dream of developing a “science of teaching,” so that schools of education and school coaches can better train teachers to teach well. But first we need to agree what we want teachers to do and what we want students to achieve.

This story about good teaching was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for the Hechinger newsletter.

PROOF POINTS: College students often don’t know when they’re learning https://hechingerreport.org/proof-points-college-students-often-dont-know-when-theyre-learning/ https://hechingerreport.org/proof-points-college-students-often-dont-know-when-theyre-learning/#comments Mon, 14 Mar 2022 10:00:00 +0000 https://hechingerreport.org/?p=85538


The research evidence is clear. Learning by trying something yourself is superior to passively listening to lectures, especially in science. It’s puzzling why more university professors don’t teach in this more hands-on, interactive way.

Logan McCarty, director of science education at Harvard University, is a prime example. Ten years ago, he told me, he was aware of the anti-lecture studies dating back to the 1980s. But he continued to lecture. Indeed, his title at Harvard was and is “lecturer.” He also happens to be very good at it. A former opera singer, McCarty has a flair for drama and is a natural performer. When I interviewed him by Zoom, his blue-violet hair was styled vertically like a DreamWorks troll (the adorable kind). He makes the intricacies of static electricity comprehensible and fascinating to lay people. Frankly, I would listen to him read the phone book. 

But he changed his classroom approach after 2014, when Canadian Louis Deslauriers joined the physics department. Deslauriers is a proselytizer for teaching by doing, what he calls “active learning,” and promised to show McCarty how to do it. McCarty was a convert. 

The two scratched their heads about why scientists – who teach the scientific method to their students – weren’t heeding the science themselves. So they conducted an experiment together where they each taught both ways and studied what happened.

Half the students in introductory physics classes were randomly assigned to learn the concept of static equilibrium the traditional way, through lectures. The other half worked together in small groups to solve sample problems on static equilibrium without any explanation first. McCarty and Deslauriers, in their respective sections, roamed the room asking questions and offering assistance. After the students attempted each problem, the instructors showed the solution. In total, the instructor talked for only half of the lesson time. 

For the next class, the students swapped. The lecture students learned about fluids through problem sets first. And the active learning students listened to a long lecture on fluids.

At the end of each lesson, students filled out surveys about their perceptions of the class and completed a 12-question multiple-choice test to demonstrate their knowledge. As expected, students mastered the material better when they were actively learning, regardless of whether McCarty or Deslauriers had been their instructor. McCarty’s students did as well as Deslauriers’; it didn’t seem to matter if they had the superstar lecturer or not. 

But the fascinating outcome was that most students felt just the opposite, that they had learned more in the lecture. The lecture students more strongly agreed with statements such as “I enjoyed this lecture,” “I feel like I learned a great deal from this lecture,” “Instructor was effective at teaching,” and  “I wish all my physics courses were taught this way.”

To confirm, McCarty and Deslauriers repeated the experiment the following semester and got the same results. Almost 150 Harvard undergraduates agreed that lectures were more enjoyable and easier to follow, but they were deluding themselves that they were learning more that way.

“When students hear a lecture from a superstar lecturer, they feel, ‘This is good. I am learning.’ But an hour later, they’re not going to remember it,” said Deslauriers. In other words, the feeling of learning is misleading.

The results were published in a 2019 article, “Measuring actual learning versus feeling of learning in response to being actively engaged in the classroom,” in the Proceedings of the National Academy of Sciences. 

In follow-up interviews with some of the students, researchers heard the students complain that active learning felt “disjointed” and that they didn’t like the frequent transitions from group work to instructor feedback. They were worried that their errors during class wouldn’t be corrected. Generally, they felt frustrated and more confused. (Interestingly, none of the students complained about group work itself, even though conventional wisdom suggests that students often don’t like it.)

Two things appear to be going on here. When you’re listening to a great expert explain something well, it’s easy to mistake the speaker’s smooth, easy delivery for your own understanding. If you’ve ever watched a great cooking show and then stumbled to make a béchamel sauce at home, you’ve experienced this. Students often think they’re following along in class, but at home, they don’t know how to do the homework and they struggle in the course.

The second part of the explanation is that real learning is hard work and it often doesn’t feel good. When you’re struggling to solve a problem in an active learning classroom, it may feel frustrating.  Making mistakes and getting feedback to correct misunderstandings is where the learning happens. 

It’s also more challenging to teach this way. “As an instructor, I’m adapting what I’m saying on the fly to what I see when they’re working on the problem,” said McCarty. “So I’m not giving a canned lecture. And that makes it a little bit like a high wire act. But it’s also definitely more cognitively engaging for me because I have to decide in the moment, ‘Okay, I have five minutes to talk about this question, what are the most important things for me to say?’”

I was captivated by this study because I think it not only explains why active learning isn’t more popular in college classrooms, but it also helps to explain why teachers, students and parents often reject the conclusions of well-designed educational experiments. We trust our instincts and gut feelings to tell us when we’re learning, but we don’t know what actual learning feels like. (This study should also make us more skeptical of the veracity of student evaluations, but that’s a different topic.)

I am a huge fan of lectures. They inspire me. When I look back on my undergraduate years, I wouldn’t trade my best professors for more time spent on problem sets in class. McCarty and Deslauriers agree that not every course should be taught through active learning. In physics classes, the goal is to get students to solve the kinds of problems that physicists encounter so it makes sense to spend class time practicing this. 

“Sports and music instruction make this really clear,” McCarty said. “Watching [Roger] Federer play tennis can get you really excited about tennis, but it’s not going to make you a great tennis player.” 

McCarty also co-teaches a class with a biologist called “What is Life? From Quarks to Consciousness.” Inspiration is the goal. Here, McCarty spends much more of his time lecturing. I’m jealous of his students.

This story about lectures was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for the Hechinger newsletter.

PROOF POINTS: Debunking the myth that teachers stop improving after five years https://hechingerreport.org/proof-points-debunking-the-myth-that-teachers-stop-improving-after-five-years/ https://hechingerreport.org/proof-points-debunking-the-myth-that-teachers-stop-improving-after-five-years/#comments Mon, 07 Mar 2022 11:00:00 +0000 https://hechingerreport.org/?p=85483


The idea that teachers stop getting better after their first few years on the job has become widely accepted by both policymakers and the public. Philanthropist and former Microsoft CEO Bill Gates popularized the notion in a 2009 TED Talk when he said “once somebody has taught for three years, their teaching quality does not change thereafter.” He argued that teacher effectiveness should be measured and good teachers rewarded.

The claim that teachers stop improving after three years was, perhaps, an oversimplification, but it was based on sound research at the time. In a 2004 paper, economist Jonah Rockoff, now at Columbia Business School, tracked how teachers improved over their careers and noticed that teachers got better at their jobs by leaps and bounds at first, as measured by their ability to raise their students’ achievement test scores. But then their effectiveness, or productivity, plateaued after three to 10 years on the job. For example, student achievement in their classrooms might increase by the same 50 points every year; the annual jump in their students’ test scores didn’t grow larger. Other researchers, including Stanford University’s Eric Hanushek, found the same.

But now, a new nonprofit organization that seeks to improve teaching, the Research Partnership for Professional Learning, says the conventional wisdom that veteran teachers stop getting better is one of several myths about teaching. The organization says that several groups of researchers have since found that teachers continue to improve, albeit at a slower rate, well into mid-career.

“It’s not true that teachers stop improving,” said John Papay, an associate professor of education and economics at Brown University. “The science has evolved.”

Papay cited his own 2015 study with Matt Kraft, along with a 2017 study of middle school teachers in North Carolina and a 2011 study of elementary and middle school teachers. These analyses all found that teachers continue to improve beyond their first five years. Papay and Kraft calculated that teachers increased student performance by about half as much between their 5th and 15th year on the job as they did during the first five years of their career. The data are unclear after year 15. 

Using test scores to measure teacher quality can be controversial. Papay also looked at other measures of how well teachers teach, such as ratings of their ability to ask probing questions, generate vibrant classroom discussions and handle students’ mistakes and confusion. Again, Papay found that more seasoned teachers were continuing to improve at their profession beyond the first five years of their career. Old dogs do appear to learn new tricks.

The debate over whether teachers get better with experience has had big implications. It has prompted the public to question union pay schedules: why pay veteran teachers more if they’re no better than a third-year teacher? It has encouraged school systems to fire “bad” teachers because ineffective teachers were thought to be unlikely to improve. It has also been a way of justifying high turnover in the field. If there’s no added value to veteran teachers, why bother to hang on to them, or invest more in them? Maybe it’s okay if thousands of teachers leave the profession every year if we can replace them with loads of new ones who learn the job fast.

So, how is it that highly regarded quantitative researchers could be coming to such different conclusions when they add up the numbers?

It turns out that it’s really complicated to calculate how much teachers improve every year. It’s simple enough to look up their students’ test scores and see how much they’ve gone up. But it’s unclear how much of the test score gain we can attribute to a teacher. Imagine a teacher who had a classroom of struggling students one year followed by a classroom of high achievers the next year. The bright, motivated students might learn more no matter who their teacher was; it would be misleading to say this teacher had improved. 
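A deliberately toy calculation makes the attribution problem concrete (every number here is invented for illustration, not drawn from any study): the same teacher can post very different raw classroom gains simply because the incoming students differ.

```python
# Toy illustration with invented numbers: a raw classroom test-score
# gain mixes the teacher's contribution with the growth the incoming
# students would have made under any teacher.
def raw_gain(teacher_effect, class_expected_growth):
    """Observed classroom gain = teacher's contribution plus the
    students' expected growth regardless of teacher."""
    return teacher_effect + class_expected_growth

# The same teacher, contributing 10 points in both years:
year1 = raw_gain(teacher_effect=10, class_expected_growth=20)  # struggling class
year2 = raw_gain(teacher_effect=10, class_expected_growth=45)  # high achievers

print(year1, year2)  # 30 vs. 55 -- the raw numbers suggest the teacher
                     # improved, even though the teacher effect is unchanged
```

Value-added models try to strip out the expected-growth term by controlling for students’ prior scores and other characteristics, which is exactly where the modeling assumptions come in.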

Many other things can affect student test scores from year to year, such as unexpected snow days or natural disasters. We wouldn’t want to say that most teachers in America became worse at their jobs in 2020 and 2021 because test scores declined during the pandemic.  Other changes, such as switching to a new curriculum, can affect test scores too. Broader population changes at the school also complicate the math. If a city is gentrifying, the test scores in a teacher’s classroom might rise a lot because test scores are generally higher in richer neighborhoods. Higher test scores, in this case, would probably not be a sign that the teacher is getting a lot better at teaching. 

Economists need to make assumptions when they try to disentangle how much of a classroom’s test score gain should be attributed to the teacher and how much to everything else that’s going on. In his influential 2004 paper, Rockoff assumed that there were diminishing returns to job experience. It’s a reasonable assumption, given that we all face a steep learning curve when we first learn something new, and annual improvements shrink as we refine our practice. In Rockoff’s data, annual improvements were so tiny by a teacher’s 10th year that he effectively assumed teachers stopped improving and plateaued. Arguably, Rockoff assumed part of what he was trying to study.
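As a minimal sketch of how such an assumption can foreclose the question (a hypothetical model form for illustration, not Rockoff’s actual specification): if a model measures experience as the smaller of actual years and some cap, its predictions are flat beyond the cap by construction, so it can never register late-career improvement.

```python
# Hypothetical model form: experience is capped, so any improvement
# past the cap is invisible to the model by construction.
def predicted_improvement(years, slope=2.0, cap=10):
    # Predicted gain over a rookie teacher, with experience capped at `cap`
    return slope * min(years, cap)

predictions = {y: predicted_improvement(y) for y in (1, 5, 10, 15, 20)}
print(predictions)
# Years 10, 15 and 20 all receive the same prediction (20.0):
# the plateau is an assumption, not a finding.
```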

When Papay and other economists relaxed the assumptions about how much teachers typically improve each year, they found that teachers tended to get better and better well into their mid-careers. But they had to make other assumptions. For example, Papay assumed that new teachers start from the same baseline every year. That is, the cohort of rookie teachers in 2001 was just as effective as the cohort of rookie teachers in 2009. That might not be true if teacher preparation programs have improved.

I emailed Jonah Rockoff to see if he agrees that the science has evolved and that teachers improve throughout their careers.  He told me that he still stands by his 2004 analysis and he generally sees a consensus among researchers, not a debate. According to his reading of the research, everyone is finding the same patterns: student achievement increases the most during the early part of a teacher’s career and tends to plateau after 10 years of experience. Whether teachers plateau or continue to improve at a very sluggish pace isn’t a meaningful difference to him. 

Papay agrees that the story is “nuanced” and that mid-career teachers aren’t showing “tremendous improvement.” 

“It’s not like teachers continue to improve at the same rate that they do early in their career,” Papay said. “It’s more modest.” 

Regardless of whether teachers plateau or slowly improve, the more interesting policy question is whether there are better ways to help teachers improve throughout their careers. Papay and other scholars are trying to pinpoint the kinds of working conditions and on-the-job training that help teachers flourish. For example, Papay is finding promise in pairing teachers together to learn from each other and Kraft has studied whether every teacher should have a coach.

Just because inexperienced teachers improve the fastest doesn’t mean that professional development should be targeted at them. Rockoff thinks it might be “too early in their careers” for them to get much out of some types of training.

And most importantly, teachers late in their careers might be improving student outcomes in ways that test scores – or even classroom observations – cannot capture. They might inspire students to go to college or to become scientists or artists someday. That’s an impact that’s priceless but harder to measure.

This story about teacher improvement was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for the Hechinger newsletter.

PROOF POINTS: Researchers blast data analysis for teachers to help students  https://hechingerreport.org/proof-points-researchers-blast-data-analysis-for-teachers-to-help-students/ https://hechingerreport.org/proof-points-researchers-blast-data-analysis-for-teachers-to-help-students/#respond Mon, 28 Feb 2022 11:00:00 +0000 https://hechingerreport.org/?p=85238

The post PROOF POINTS: Researchers blast data analysis for teachers to help students  appeared first on The Hechinger Report.

The numbers were supposed to shed light on what was happening in public schools. That was the idea behind the No Child Left Behind Act of 2001. It mandated that every third through eighth grade student had to take an annual test to see who was performing at grade level. 

In the years after the law went into effect, the testing and data industries flourished, selling school districts interim assessments to track student progress throughout the year along with flashy data dashboards that translated student achievement into colored circles and red warning flags. Policymakers and advocates said that teachers should study this data to understand how to help students who weren’t doing well. 

Teachers are spending a lot of time talking about student data. In a 2016 survey by Harvard’s Center for Education Policy Research, 94 percent of middle school math teachers said they analyzed student performance on tests from the prior year, and 15 percent said they spent over 40 hours on this kind of data analysis. In high-poverty schools, where student test scores are often low and there is pressure from state and local governments to raise them, data analysis can dominate weekly or monthly meetings among teachers.

Apart from controversies over the use of tests and cheating scandals, researchers are asking another basic question: Has all that time teachers spend studying data helped students learn? The emerging answer from education researchers is no. That conclusion is like dropping a bomb on a big part of what happens at schools today.

“Studying student data seems to not at all improve student outcomes in most of the evaluations I’ve seen,” said Heather Hill, a professor at the Harvard Graduate School of Education, at a February 2022  presentation of the Research Partnership for Professional Learning, a new nonprofit organization that seeks to improve teaching.

“It’s a huge industry and there are major sales to schools,” said Hill in an interview afterward. “The market forces continue to push this on schools even with very, very limited efficacy evidence unfortunately.” 

Hill reviewed 23 student outcomes from 10 different data programs used in schools and found that the majority showed no benefits for students. Only two outcomes were positive, and in one study students were actually worse off.

Another pair of researchers also reviewed studies on the use of data analysis in schools, much of which is produced by assessments throughout the school year, and reached the same conclusion.  “Research does not show that using interim assessments improves student learning,” said Susan Brookhart, professor emerita at Duquesne University and associate editor of the journal Applied Measurement in Education. “The few studies of interim testing programs that do exist show no effects or occasional small effects.”

Two randomized controlled experiments found weak gains in math but not in reading. Two additional studies with control or comparison groups found no significant results. Brookhart co-wrote a chapter summarizing the research in a forthcoming edition of Educational Measurement, an influential text in the field. She and her co-author, Charles DePascale, a retired assessment consultant, provided me with a pre-publication draft of their chapter.

Why doesn’t data analysis work? All three researchers explained that while data is helpful in pinpointing students’ weaknesses, mistakes and gaps, it doesn’t tell teachers what to do about them. Most commonly, teachers review or re-teach the topic the way they did the first time or they give a student a worksheet for more practice drills.

Teachers need to change their approach to address student misunderstandings, Hill said.

The upside is that the data analysis bandwagon has prompted many schools to allocate more time for teachers to meet. And researchers believe this collaboration, apart from the solitary work of classroom teaching, is valuable for teachers in improving their craft. 

“As long as it’s not studying student data,” said Hill.  

The most effective use of teachers’ time together isn’t yet clear, and Hill said she and her colleagues at the Research Partnership for Professional Learning are studying that now.

This story about teacher use of data was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for the Hechinger newsletter.
