Solutions Archives - The Hechinger Report
https://hechingerreport.org/tags/solutions/
Covering Innovation & Inequality in Education

PROOF POINTS: How to get teachers to talk less and students more
https://hechingerreport.org/proof-points-how-to-get-teachers-to-talk-less-and-students-more/
Mon, 15 Jan 2024 11:00:00 +0000

Example of the talk meter shown to Cuemath tutors at the end of the tutoring session. Source: Figure 2 of Demszky et al., “Does Feedback on Talk Time Increase Student Engagement? Evidence from a Randomized Controlled Trial on a Math Tutoring Platform.”

Silence may be golden, but when it comes to learning with a tutor, talking is pure gold. It’s audible proof that a student is paying attention and not drifting off, research suggests. More importantly, the more a student articulates his or her reasoning, the easier it is for a tutor to correct misunderstandings or praise a breakthrough. Those are the moments when learning happens.

One India-based tutoring company, Cuemath, trains its tutors to encourage students to talk more. Its tutors are in India, but many of its clients are American families with elementary school children. The tutoring takes place at home via online video, like a Zoom meeting with a whiteboard, where both tutor and student can work on math problems together. 

The company wanted to see if it could boost student participation, so it collaborated with researchers at Stanford University to develop a “talk meter,” sort of a Fitbit for the voice, for its tutoring site. Thanks to advances in artificial intelligence, the researchers could separate the audio of the tutors from that of the students and calculate the ratio of tutor-to-student speech.

In initial pilot tests, the talk meter was posted on the tutor’s video screen for the entire one-hour tutoring session, but tutors found that too distracting. The study was revised so that the meter pops up every 20 minutes, or three times during the session. When the student is talking less than 25 percent of the time, the meter turns red, indicating that improvement is needed. When the student is talking more than half the time, the meter turns green. In between, it’s yellow.
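As described, the meter reduces to a simple threshold rule on the student’s share of talk time in each window. Here is a minimal sketch of that logic; the function name, the treatment of the thresholds as exact cutoffs, and the zero-speech case are illustrative assumptions, not details taken from the study:

```python
def talk_meter_color(student_seconds: float, tutor_seconds: float) -> str:
    """Map the student's share of total talk time to the meter colors
    described in the study: red below 25 percent, green above 50 percent,
    yellow in between. Names and edge-case handling are illustrative."""
    total = student_seconds + tutor_seconds
    if total == 0:
        return "yellow"  # no speech captured yet; a neutral reading (an assumption)
    share = student_seconds / total
    if share < 0.25:
        return "red"     # student talking too little; improvement needed
    if share > 0.50:
        return "green"   # student talking more than half the time
    return "yellow"      # in between

# In a 20-minute window where the student spoke 200 of 1,200 seconds:
print(talk_meter_color(200, 1000))  # red
```

A real implementation would sit downstream of speaker diarization, which separates the tutor’s audio from the student’s before the seconds are tallied.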

Example of the talk meter shown to tutors every 20 minutes during the tutoring session. Source: Figure 2 of Demszky et al., “Does Feedback on Talk Time Increase Student Engagement? Evidence from a Randomized Controlled Trial on a Math Tutoring Platform.”

More than 700 tutors and 1,200 of their students were randomly assigned to one of three groups: one in which the tutors were shown the talk meter, another in which both tutors and students were shown it, and a third “control” group, which wasn’t shown the talk meter at all, for comparison.

When just the tutors saw the talk meter, they tended to curtail their explanations and talk much less. But despite their efforts to prod their tutees to talk more, students increased their talking only by 7 percent. 

When students were also shown the talk meter, the dynamic changed. Students increased their talking by 18 percent. Introverts especially started speaking up, according to interviews with the tutors. 

The results show how teaching and learning are a two-way street. It’s not just about coaching teachers to be better at their craft; we also need to coach students to be better learners.

“It’s not all the teacher’s responsibility to change student behavior,” said Dorottya Demszky, an assistant professor in education data science at Stanford University and lead author of the study. “I think it’s genuinely, super transformative to think of the student as part of it as well.”

The study hasn’t yet been published in a peer-reviewed journal and is currently a draft paper, “Does Feedback on Talk Time Increase Student Engagement? Evidence from a Randomized Controlled Trial on a Math Tutoring Platform,” so it may still be revised. It is slated to be presented in March 2024 at the annual conference of the Society for Learning Analytics Research in Kyoto, Japan.

In analyzing the sound files, Demszky noticed that students in both the control group and the tutor-only talk meter group tended to work on their practice problems silently. But once students could see the talk meter themselves, they started to verbalize their steps aloud. Students were filling more of the silences.

In interviews with the researchers, students said the meter made the tutoring session feel like a game. One student said, “It’s like a competition. So if you talk more, it’s like, I think you’re better at it.” Another noted: “When I see that it’s red, I get a little bit sad and then I keep on talking, then I see it yellow, and then I keep on talking more. Then I see it green and then I’m super happy.”

Some students found the meter distracting.  “It can get annoying because sometimes when I’m trying to look at a question, it just appears, and then sometimes I can’t get rid of it,” one said.

Tutors had mixed reactions, too. For many, the talk meter was a helpful reminder not to be long-winded in their explanations and to ask more probing, open-ended questions. Some tutors said they felt pressured to reach a 50-50 ratio and that they were unnaturally holding back from speaking. One tutor pointed out that it’s not always desirable for a student to talk so much. When you’re introducing a new concept or the student is really lost and struggling, it may be better for the teacher to speak more. 

Surprisingly, kids didn’t just fill the air with silly talk to move the gauge. Demszky’s team analyzed the transcripts in a subset of the tutoring sessions and found that students were genuinely talking about their math work and expressing their reasoning. The use of math terms increased by 42 percent.

Unfortunately, the study design has several drawbacks. We don’t know whether students’ math achievement improved because of the talk meter: the students were of different ages, learning different things in different grades and different countries, and there was no single standardized test to give them all.

Another confounding factor is that students who saw the talk meter were also given extra information sessions and worksheets about the benefits of talking more. So we can’t tell from this experiment whether the talk meter made the difference or whether the information on the value of talking aloud would have been enough on its own to get them to talk more.

Excerpts from transcribed tutoring sessions in which students are talking about the talk meter. Source: Table 4 of Demszky et al., “Does Feedback on Talk Time Increase Student Engagement? Evidence from a Randomized Controlled Trial on a Math Tutoring Platform.”

Demszky is working on developing a talk meter app that can be used in traditional classrooms to encourage more student participation. She hopes teachers will share talk meter results with their students. “I think you could involve the students a little more: ‘It seems like some of you weren’t participating, or it seems like my questions were very closed-ended. How can we work on this together?’”

But she said she’s treading carefully because she is aware that there can be unintended consequences with measurement apps. She wants to give feedback not only on how much students are talking but also on the quality of what they are talking about. And natural language processing still has trouble with English in foreign accents and background noise. Beyond the technological hurdles, there are psychological ones too.

“Not everyone wants a Fitbit or a tool that gives them metrics and feedback,” Demszky acknowledged.

This story about student participation was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for the Proof Points newsletter.

PROOF POINTS: Four lessons from post-pandemic tutoring research
https://hechingerreport.org/proof-points-four-lessons-from-post-pandemic-tutoring-research/
Mon, 08 Jan 2024 11:00:00 +0000


Research points to intensive daily tutoring as one of the most effective ways to help academically struggling children catch up. There have been about a hundred randomized controlled trials, and one of the most cited is of a tutoring program in Chicago high schools, where ninth and 10th graders learned an extra year or two of math from a daily dose of tutoring. That’s the kind of result that could offset pandemic learning losses, which have remained devastating and stubborn nearly four years after Covid first erupted, and it’s why the Biden Administration has recommended that schools use their $190 billion in federal recovery funds on tutoring.

This tutoring evidence, however, was generated before the pandemic, and I was curious about what post-pandemic research says about how tutoring is going now that almost 40 percent of U.S. public schools say they’re offering high-dosage tutoring and more than one in 10 students (11 percent) are receiving it in the 2023-24 school year. Here are four lessons.

  1. Why timing matters

Scheduling tutoring time during normal school hours and finding classroom space to conduct it are huge challenges for school leaders. The schedule is already packed with other classes and there aren’t enough empty classrooms. The easiest option is to tack tutoring on to the end of the school day as an after-school program.

New Mexico did just that, offering high school students free 45-minute online video sessions three times a week, on evenings and weekends. The tutors were from Saga Education, the same tutoring organization that had produced spectacular results in Chicago. Only about 500 students signed up out of more than 34,000 who were eligible, according to a June 2023 report from MDRC, an outside research organization. Researchers concluded that after-school tutoring wasn’t a “viable solution for making a sizable and lasting impact.” The state has since switched to scheduling tutoring during the school day.

Attendance is spotty too. Many after-school tutoring programs around the country report that even students who sign up don’t attend regularly.

  2. A hiring dilemma

The job of tutor is now the fastest-growing position in the K–12 sector, but 40 percent of schools say they’re struggling to hire tutors. That’s not surprising in a red-hot job market, where many companies say it’s tough to find employees. 

Researchers at MDRC in a December 2023 report wrote about different hiring strategies that schools around the country are using. I was flabbergasted to read that New Mexico was paying online tutors $50 an hour to tutor from their homes. Hourly rates of $20 to $30 are fairly common in my reporting. But at least the state was able to offer tutoring to students in remote, rural areas where it would otherwise be impossible to find qualified tutors.

Tutoring companies are a booming business. Schools are using them because they take away the burden of hiring, training and supervising tutors. However, Fulton County, Georgia, which includes Atlanta, found that a tutoring company’s curriculum might have nothing to do with what children are learning in their classrooms and that there’s too little communication between tutors and classroom teachers. Tutors were quitting at high rates and being replaced with new ones; students weren’t able to form long-term relationships with their tutors, which researchers say is critical to the success of tutoring.

When Fulton County schools hired tutors directly, they were more integrated into the school community. However, schools considered them to be “paraprofessionals” and felt there were more urgent duties than tutoring that they needed to do, from substitute teaching and covering lunch duty to assisting teachers. 

Chicago took the burden off schools and hired the tutors from the central office. But schools preferred tutors who were from the neighborhood because they could potentially become future teachers. The MDRC report described a sort of catch-22. Schools don’t have the capacity to hire and train tutors, but the tutors that are sent to them from outside vendors or a central office aren’t ideal either. 

Oakland, Calif., experienced many of the obstacles that schools are facing when trying to deliver tutoring at a large scale to thousands of students. The district attempted to give kindergarten through second grade students a half hour of reading tutoring a day. As described by a December 2023 case study of tutoring by researchers at the Center for Reinventing Public Education (CRPE), Oakland struggled with hiring, scheduling and real estate. It hired an outside tutoring organization to help, but it too had trouble recruiting tutors, who complained of low pay. Finding space was difficult. Some tutors had to work in the hallways with children. 

The good news is that students who worked with trained tutors made the same gains in reading as those who were given extra reading help by teachers. But the reading gains for students were inconsistent. Some students progressed less in reading than students typically do in a year without tutoring. Others gained almost an additional year’s worth of reading instruction – 88 percent more.

  3. The effectiveness of video tutoring

Bringing armies of tutors into school buildings is a logistical and security nightmare. Online tutoring solves that problem. Many vendors have been trying to mimic the model of successful high-dosage tutoring by scheduling video conferencing sessions many times a week with the same well-trained tutor, who uses a good curriculum with step-by-step methods. But it remains an open question whether students are as motivated to work as hard with video tutoring as they are in person. Everyone knows that 30 hours of Zoom instruction during school closures was a disaster. It’s unclear whether small, regular doses of video tutoring can be effective.

In 2020 and 2021, there were two studies of online video tutoring. A randomized controlled trial in Italy produced good results, especially when the students received tutoring four times a week. The tutoring was less than half as potent when the sessions fell to twice a week, according to a paper published in September 2023. Another study, in Chicago, found no effect from video tutoring. But the tutors were unpaid volunteers, and many students missed out on sessions; both tutors and tutees often failed to show up.

The first randomized controlled trial of a virtual tutoring program for reading was conducted during the 2022-23 school year at a large charter school network in Texas. Kindergartners, first and second graders received 20 minutes of video tutoring four times a week, from September through May, with an early reading tutoring organization called OnYourMark. Despite the logistical challenges of setting up little children on computers with headphones, the tutored children ended the year with higher scores on DIBELS, a measure of reading proficiency for young children, than students who didn’t receive the tutoring. One-to-one video tutoring sometimes produced double the reading gains of video tutoring in pairs, a contrast with in-person tutoring, where larger groups of two and three students can be very effective too. That study was published in October 2023.

Video tutoring hasn’t always been a success. A tutoring program by Intervene K-12, a tutoring company, received high marks from reviewers at Johns Hopkins University, but outside evaluators didn’t find benefits when it was tested on students in Texas. In an unpublished study, the National Student Support Accelerator, a Stanford University organization that is promoting and studying tutoring, found no difference in year-end state test scores between students who received the tutoring and those who received other small group support. Study results can depend greatly on whether the comparison control group is getting nothing or another extra-help alternative.

Matthew Kraft, a Brown University economist who studies tutoring, says there hasn’t been an ideal study that pits online video tutoring directly against in-person tutoring to measure the difference between the two. Existing studies, he said, show some “encouraging signs.” 

The most important thing for researchers to sort out is how many students a tutor can work with online at once. It’s unclear if groups of three or four, which can be effective in person, are as effective online. “The comments we’re getting from tutors are that it’s significantly different to tutor three students online than it is to tutor three students in person,” Kraft said.

In my observations of video tutoring, I have seen several students in groups of three angle their computers away from their faces. I’ve watched tutors call students’ names over and over again, trying to get their attention. To me, students appear far more focused and energetic in one-to-one video tutoring.

  4. How humans and machines could take turns

A major downside to every kind of tutoring, both in-person and online, is its cost. The tutoring that worked so well in Chicago can run $4,000 per student. It’s expensive because students are getting over a hundred hours of tutoring and schools need to pay the tutors’ hourly wages. Several researchers are studying how to lower the costs of tutoring by combining human tutoring with online practice work. 

In one pre-pandemic study, described in a March 2023 research brief by the University of Chicago’s Education Lab, students worked in groups of four with an in-person tutor. The tutor worked closely with two students at a time while the other two worked independently on practice problems in ALEKS, a widely used computerized tutoring system developed by academic researchers and owned by McGraw-Hill. Each day the students switched: the ALEKS kids worked with the tutor and the tutored kids turned to ALEKS. The tutor sat with all four students together, monitoring the ALEKS kids to make sure they were doing their math on the computer.

The math gains nearly matched what the researchers had found in a prior study of human tutoring alone, in which tutors worked with only two students at a time, a model that required twice as many tutors. The cost was $2,000 per student, much less than the usual $3,000 to $4,000 per student price tag of the all-human tutoring program.

Researchers at the University of Chicago have been testing the same model with online video tutoring, instead of in-person, and said they are seeing “encouraging initial indications.” Currently, the research team is studying how many students one tutor can handle at a time, from four to as many as eight students, alternating between humans and ed tech, in order to find out if the sessions are still effective.

Researchers at Carnegie Mellon University conducted a similar study of alternating between human tutoring and practicing math on computers. Instead of ALEKS, this pilot study used Mathia, another computerized tutoring system developed by academic researchers and owned by Carnegie Learning. It was not a randomized controlled trial, but it did take place during the pandemic, in 2020-21. Middle school students doubled the amount of math they learned compared with similar students who didn’t receive the tutoring, according to Ken Koedinger, a Carnegie Mellon professor who was part of the research team.

“AI tutors work when students use them,” said Koedinger. “But if students aren’t using them, they obviously don’t work.” The human tutors are better at motivating the students to keep practicing, he said. The computer system gives each student personalized practice work, targeted to their needs, instant feedback and hints.

Technology can also guide the tutors. With one early reading program, called Chapter One, in-person tutors work with young elementary school children in the classroom. Chapter One’s website keeps track of every child’s progress. The tutor’s screen indicates which student to work with next and what skills that student needs to work on. It also suggests phonics lessons and activities that the tutor can use during the session. A two-year randomized controlled trial, published in December 2023, found that the tutored children – many of whom received short five-minute bursts of tutoring at a time – outperformed children who didn’t receive the tutoring.

The next frontier in tutoring, of course, is generative AI, such as ChatGPT. Researchers are studying how students learn directly from Khan Academy’s Khanmigo, which gives step-by-step, personalized guidance, like a tutor, on how to solve problems. Other researchers are using the technology to help coach human tutors so that they can better respond to students’ misunderstandings and confusion. I’ll be looking out for these studies and will share the results with you.

This story about video tutoring was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for the Hechinger newsletter.

PROOF POINTS: Schools keep buying online drop-in tutoring. The research doesn’t support it
https://hechingerreport.org/proof-points-schools-keep-buying-online-drop-in-tutoring-the-research-doesnt-support-it/
Mon, 16 Oct 2023 10:00:00 +0000


Ever since schools reopened and resumed in-person instruction, districts have been trying to help students catch up from pandemic learning losses. The Biden Administration has urged schools to use tutoring. Many schools have purchased an online version that gives students 24/7 access to tutors. Typically, communication is through text chat, similar to communicating with customer service on a website. Students never see their tutors or hear their voices. 

Researchers estimate that billions have been spent on these online tutoring services, but so far, there’s no good evidence that they are helping many students catch up. And many students need extra help. According to the most recent test scores from spring 2023, 50 percent more students are below grade level than before the pandemic; even higher achieving students remain months behind where they should be.

Low uptake

The main problem is that on-demand tutoring relies on students to seek extra help. Very few do. Some school systems have reported usage rates below 2 percent. A 2022 study by Brown University researchers of an effort to boost usage among 7,000 students at a California charter school network found that students who needed the most help were the least likely to try online tutoring, and only a very small percentage of students used it regularly. Opt-in tutoring could “exacerbate inequalities rather than reduce them,” warned a September 2023 research brief by Brown University’s Annenberg Center, Results for America, a nonprofit that promotes evidence-backed policies, the American Institutes for Research and NWEA, an assessment firm.

In January 2023, the independent research firm Mathematica released a more positive report on students’ math gains with an online tutoring service called UPchieve, which uses volunteers as tutors. It seemed to suggest that high school students could make extraordinary math progress from online homework help.

UPchieve is a foundation-funded nonprofit with a slightly different model. Instead of schools buying the tutoring service from a commercial vendor, UPchieve makes its tutors freely available to any student in grades eight to 12 living in a low-income zip code or attending a low-income high school. Behind the scenes, foundations cover the cost to deliver the tutoring, about $5 per student served. (Those foundations include the Bill & Melinda Gates and the Overdeck Family foundations, which are also among the many funders of The Hechinger Report.)

UPchieve posted findings from the study in large font on its website: “Using UPchieve 9 times caused student test scores to meaningfully increase” by “9 percentile rank points.” If true, that would be equivalent to doubling the amount of math that a typical high school student learns. It would mean that students learned an extra 14 weeks’ worth of math from just a few extra hours of instruction. Not even the most highly regarded and expensive tutoring programs, using professional tutors who follow clear lesson plans, achieve this.

The study garnered a lot of attention on social media and flattering media coverage “for disrupting learning loss in low-income kids.” But how real was this progress? 

Gift card incentives

After I read the study, which was also commissioned by the Gates Foundation, I immediately saw that UPchieve’s excerpts were taken out of context. This was not a straightforward randomized controlled trial, comparing what happens to students who were offered this tutoring with students who were not. Instead, it was a trial of the power of cash incentives and email reminders.

For the experiment, Mathematica researchers had recruited high schoolers who were already logging into the UPchieve tutoring service. These were no ordinary ninth and 10th graders. They were motivated to seek extra help, resourceful enough to find this tutoring website on their own (it was not promoted through their schools) and liked math enough to take extra tests to participate in the study. One group was given extra payments of $5 a week for doing at least 10 minutes of math tutoring on UPchieve, and sent weekly email reminders. The other group wasn’t. Students in both groups received $100 for participating in the study.

The gift cards increased usage by 1.6 hours, or five to six more sessions, over the course of 14 weeks. These incentivized students “met” with a tutor for a total of nine sessions on average; the other students averaged fewer than four sessions. (As an aside, it’s unusual for cash incentives to double usage. Slicing the results another way, only 22 percent of the students in the gift-card group used UPchieve more than 10 times, compared with 14 percent in the other group. That’s more typical.)

At the end of 14 weeks, students took the Renaissance Star math test, an assessment taken by millions of students across the nation. But the researchers did not report those test scores. That’s because they were unlucky in their random assignment of students. By chance, comparatively weaker math students kept getting assigned to receive cash incentives. It wasn’t an apples-to-apples comparison between the two groups, a problem that can happen in a small randomized controlled trial. To compensate, the researchers statistically adjusted the final math scores to account for differences in baseline math achievement. It’s those statistically adjusted scores that showed such huge math gains for the students who had received the cash incentives and used the tutoring service more.

However, the huge 9 percentile point improvement in math was not statistically significant. There were so few students in the study – 89 in total – that the results could have been a fluke. You’d need a much larger sample size to be confident.

A caution from the researcher 

When I interviewed one of the Mathematica researchers, he was cautious about UPchieve and on-demand tutoring in general. “This is an approach to tutoring that has promise for improving students’ math knowledge for a specific subset of students: those who are likely to proactively take up an on-demand tutoring service,” said Greg Chojnacki, a co-author of the UPchieve study. “The study really doesn’t speak to how promising this model is for students who may face additional barriers to taking up tutoring.”

Chojnacki has been studying different versions of tutoring and he says that this on-demand version might prove to be beneficial for the “kid who may be jumping up for extra help the first chance they get,” while other children might first need to “build a trusting relationship” with a tutor they can see and talk to before they engage in learning. With UPchieve and other on-demand models, students are assigned to a different tutor at each session and don’t get a chance to build a relationship. 

Chojnacki also walked back the numerical results in our interview. He told me not to “put too much stock” in the exact amount of math that students learned. He said he’s confident that self-motivated students who used the tutoring service more often learned more math, but the gain could be “anywhere above zero” and not nearly as high as 9 percentile points, an extra three and a half months’ worth of math instruction.

UPchieve defends “magical” results

UPchieve’s founder, Aly Murray, told me that the Mathematica study results initially surprised her, too. “I agree they almost seem magical,” she said by email. While acknowledging that a larger study is needed to confirm the results, she said she believes that online tutoring without audio and video can “lead to greater learning” than in-person tutoring “when done right.”

“I personally believe that tutoring is most effective when the student is choosing to be there and has an acute need that they want to address (two things that are both uniquely true of on-demand tutoring),” she wrote. “Students have told us how helpful it is to get timely feedback and support in the exact moment that they get confused (which is often late at night in their homes while working on their homework). So in general, I believe that on-demand tutoring is more impactful than traditional high-dosage tutoring models on a per tutoring session or per hour of tutoring basis. This could be part of why we were able to achieve such outsized results despite the low number of sessions.”

Murray acknowledged that low usage remains a problem. At UPchieve’s partner schools, only 5 percent of students logged in at least once during the 2022-23 year, she told me. At some schools, usage rates fell below 1 percent. Her goal is to increase usage rates at partner schools to 36 percent. (Any low-income student in grades eight to 12 can use the tutoring service at no cost and their schools don’t pay UPchieve for the tutoring either, but some “partner” schools pay UPchieve to promote and monitor usage.) 

The downside to homework help

Helping students who are stuck on a homework assignment is certainly nice for motivated kids who love school, but relying on homework questions is a poor way to catch up students who are the most behind, according to many tutoring experts. 

“I have a hard time believing that students know enough about what they don’t know,” said Susanna Loeb, a Stanford University economist who founded the National Student Support Accelerator, which aims to bring evidence-based tutoring to more students. 

For students who are behind grade level, homework questions often don’t address their gaps in basic math foundations. “Maybe underneath, they’re struggling with percentages, but they’re bringing an algebra question,” said Loeb. “If you just bring the work of the classroom to the tutor, it doesn’t help students very much.” 

Pre-pandemic research on once-a-week after-school homework help also produced disappointing results for struggling students. Effective tutoring starts with an assessment of students' gaps, Loeb said, followed by consistent, structured lessons.

Schools struggle to offer tutors for all students

With so little evidence, why are schools buying on-demand online tutoring? Pittsburgh superintendent Wayne Walters said he was unable to arrange for in-person tutoring in all of his 54 schools and wanted to give each of his 19,000 students access to something. He signed a contract with Tutor.com for unlimited online text-chat tutoring in 2023-24. 

“I’m going forward with it because it’s available,” Walters said. “If I don’t have something to provide, or even offer, then that limits opportunity and access. If there’s no access, then I can’t even push the needle to address the most marginalized and the most vulnerable.”

Walters hopes to make on-demand tutoring “sexy” and appealing to high schoolers accustomed to texting. But online tutoring is not the same as spontaneous texting between friends. One-minute delays in tutors’ replies to questions can test students’ patience. 

On-demand tutoring can appear to be an economical option. Pittsburgh is able to offer this kind of tutoring, which includes college admissions test prep for high schoolers, to all 19,000 of its students for $600,000. Providing 400 students with a high-dosage tutoring program – the kind that researchers recommend – could cost $1.5 million. There are thousands of Pittsburgh students who are significantly behind grade level. It doesn’t seem fair to deliver high-quality in-person tutoring to only a lucky few.  

However, once you factor in actual usage, the economics of on-demand tutoring look less impressive. In Fairfax County, Va., for example, only 1.6 percent of students used Tutor.com. If Pittsburgh doesn't surpass that rate, only about 300 of its students will be served.
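The arithmetic behind that estimate is simple. Here is a back-of-the-envelope sketch using the figures reported above; Fairfax County's 1.6 percent usage rate is applied to Pittsburgh purely as an illustration, not as a prediction:

```python
# Pittsburgh's Tutor.com contract, using the figures reported above.
contract_cost = 600_000   # dollars per year for district-wide access
enrollment = 19_000       # students covered by the contract
usage_rate = 0.016        # Fairfax County's 1.6 percent, as an illustrative ceiling

students_served = round(enrollment * usage_rate)
cost_per_enrolled = contract_cost / enrollment       # the cost "on paper"
cost_per_served = contract_cost / students_served    # the cost per student who logs in

print(students_served)              # 304
print(round(cost_per_enrolled, 2))  # 31.58
print(round(cost_per_served))       # 1974
```

On paper, the contract costs about $32 per student. But at Fairfax-level usage it works out to nearly $2,000 per student actually served – a meaningful share of the roughly $3,750 per student ($1.5 million for 400 students) that the recommended high-dosage tutoring would cost.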

There are no villains here. School leaders are trying to do the best they can and be fair to everyone. Hopes are raised when research suggests that online on-demand tutoring can work if schools succeed in marketing it to students. But leaders should be skeptical of studies that promise easy solutions before investing precious resources. That money could be better spent on small-group tutoring, which dozens of studies show is more effective for students.

This story about drop-in tutoring was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Proof Points and other Hechinger newsletters.

The post PROOF POINTS: Schools keep buying online drop-in tutoring. The research doesn’t support it appeared first on The Hechinger Report.

PROOF POINTS: Lowering test anxiety in the classroom | https://hechingerreport.org/proof-points-lowering-test-anxiety-in-the-classroom/ | Mon, 25 Sep 2023


In education circles, it’s popular to rail against testing, especially timed exams. Tests are stressful and not the best way to measure knowledge, wrote Adam Grant, an organizational psychologist at the University of Pennsylvania’s Wharton School in a Sept. 20, 2023 New York Times essay.  “You wouldn’t want a surgeon who rushes through a craniectomy, or an accountant who dashes through your taxes.” 

It’s tempting to agree. But there’s another side to the testing story, with a lot of evidence behind it. 

Cognitive scientists argue that testing improves learning. They call it "retrieval practice" or "test-enhanced learning." In layman's language, that means the brain learns new information and skills by being forced to recall them periodically. Remembering consolidates information and helps the brain form long-term memories. Of course, testing is not the only way to accomplish this, but it's easy and efficient in a classroom. 

Several meta-analyses, which summarize the evidence from many studies, have found higher achievement when students take quizzes instead of, say, reviewing notes or rereading a book chapter. “There’s decades and decades of research showing that taking practice tests will actually improve your learning,” said David Shanks, a professor of psychology and deputy dean of the Faculty of Brain Sciences at University College London. 

Still, many students get overwhelmed during tests. Shanks and a team of four researchers wanted to find out whether quizzes exacerbate test anxiety.  The team collected 24 studies that measured students’ test anxiety and found that, on average, practice tests and quizzes not only improved academic achievement, but also ended up reducing test anxiety. Their meta-analysis was published in Educational Psychology Review in August 2023. 

Shanks says quizzes can be a “gentle” way to help students face challenges. 

“It’s not like being thrown into the deep end of a swimming pool,” said Shanks. “It’s like being put very gently into the shallow end. And then the next time a little bit deeper, and then a little bit deeper. And so the possibility of becoming properly afraid just never arises.”

Why test anxiety diminishes is unclear. It could be because students are learning to tolerate testing conditions through repeated exposure, as Shanks described. Or it could be because quizzes are helping students master the material and perform better on the final exam. We tend to be less anxious about things we’re good at. Unfortunately, the underlying studies didn’t collect the data that could resolve this academic debate.

Shanks doesn’t think competency alone reduces test anxiety. “We know that many high achieving students get very anxious,” he said. “So it can’t just be that your anxiety goes down as your performance goes up.” 

To minimize test anxiety, Shanks advises that practice tests be low stakes, either ungraded or ones that students can retake multiple times. He also suggests gamified quizzes to make tests more fun and entertaining. 

Some of this advice is controversial.  Many education experts argue against timed spelling tests or multiplication quizzes, but Shanks recommends both. “We would strongly speculate that there is both a learning benefit from those tests and a beneficial impact on anxiety,” he said. 

Shanks said a lot more research is needed. Many of the 24 existing studies were small experiments and of uneven quality, and measuring test anxiety through surveys is an inexact science. The underlying studies covered a range of school subjects, from math and science to foreign languages, and took place in both classrooms and laboratory settings, studying students as young as third grade and as old as college. Nearly half the studies took place in the United States with the remainder in the United Kingdom, Malaysia, Nigeria, Iran, Brazil, the Netherlands, China, Singapore and Pakistan. 

Shanks cautioned that this meta-analysis should not be seen as a “definitive” pronouncement that tests reduce anxiety, but rather as a summary of early research in a field that is still in its “infancy.” One big issue is that the studies measured average test anxiety for students. There may be a small minority of students who are particularly sensitive to test anxiety and who may be harmed by practice tests. These differences could be the subject of future research. 

Another issue is the tradeoff between boosting achievement and reducing anxiety. The harder the practice test, the more beneficial it is for learning. But the lower the stakes for a quiz, the better it is for reducing anxiety. 

Shanks dreams of finding a Goldilocks “sweet spot” where “the stakes are not so high that the test begins to provoke anxiety, but the stakes are just high enough to get the full benefit of the testing effect. We’re miles away from having firm answers to subtle questions like that.” 

This story about test anxiety was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Proof Points and other Hechinger newsletters.

The post PROOF POINTS: Lowering test anxiety in the classroom appeared first on The Hechinger Report.

PROOF POINTS: The value of one-size-fits-all math homework | https://hechingerreport.org/proof-points-the-value-of-one-size-fits-all-math-homework/ | Mon, 11 Sep 2023


In theory, education technology could redesign school from a factory-like assembly line to an individualized experience. Computers, powered by algorithms and AI, could deliver custom-tailored lessons for each child. Advocates call this concept "personalized learning," but this sci-fi idyll (or dystopia, depending on your point of view) has been slow to catch on in American classrooms.

Meanwhile, one piece of ed tech, called ASSISTments, takes the opposite approach. Instead of personalizing instruction, this homework website for middle schoolers encourages teachers to assign the exact same set of math problems to the entire class. One size fits all. 

Unlike other popular math practice sites, such as Khan Academy, IXL or ALEKS, in which a computer controls the content, ASSISTments keeps the control levers with the teachers, who pick the questions they like from a library of 200,000. Many teachers assign the same familiar homework questions from textbooks and curricula they are already using.

ASSISTments encourages teachers to project anonymized homework results on a whiteboard and review the ones that many students got wrong. Credit: Screenshot provided by ASSISTments.

And this deceptively simple – and free –  tool has built an impressive evidence base and a following among middle school math teachers. Roughly 3,000 teachers and 130,000 students were using it during the 2022-23 school year, according to the husband and wife team of Neil and Cristina Heffernan who run ASSISTments, a nonprofit based at Worcester Polytechnic Institute in Massachusetts, where Neil is a computer science professor.

After Neil built the platform in 2003, several early studies showed promising results, and then a large randomized controlled trial (RCT) in Maine, published in 2016, confirmed them. For 1,600 seventh-grade students whose classrooms were randomly selected to use ASSISTments for math homework, math achievement was significantly higher at the end of the year, equivalent to an extra three-quarters of a year of schooling, according to one estimate. Both groups – treatment and control – were otherwise using the same textbooks and curriculum. 

On the strength of those results, an MIT research organization singled out ASSISTments as one of the rare ed tech tools proven to help students. The Department of Education’s What Works Clearinghouse, which reviews education evidence, said the research behind ASSISTments was so strong that it received the highest stamp of approval: “without reservations.”

Still, Maine is an unusual state with a population that is more than 90 percent white and so small that everyone could fit inside the city limits of San Diego. It had distributed laptops to every middle school student years before the ASSISTments experiment. Would an online math platform work in conditions where computer access is uneven? 

The Department of Education commissioned a $3 million replication study in North Carolina, in which 3,000 seventh graders were randomly assigned to use ASSISTments. The study, set to test how well the students learned math in spring of 2020, was derailed by the pandemic. But a private foundation salvaged it. Before the pandemic, Arnold Ventures had agreed to fund an additional year of the North Carolina study, to see if students would continue to be better at math in eighth grade. (Arnold Ventures is among the many funders of The Hechinger Report.)

Those longer-term results were published in June 2023, and they were good.  Even a year later, on year-end eighth grade math tests, the 3,000 students who had used ASSISTments in seventh grade outperformed 3,000 peers who hadn’t. The eighth graders had moved on to new math topics and were no longer using ASSISTments, but their practice time on the platform a year earlier was still generating dividends. 

Researchers found that the lingering effect of practicing math on ASSISTments was similar in size to the long-term benefits of Saga Education’s intensive, in-person tutoring, which costs $3,200 to $4,800 per year for each student. The cost of ASSISTments is a tiny fraction of that, less than $100 per student. (That cost is covered by private foundations and federal grants. Schools use it free of charge.)

Another surprising result is that students, on average, benefited from solving the same problems, without easier ones being assigned to weaker students and harder ones to stronger students. 

How is it that this rather simple piece of software is succeeding while more sophisticated ed tech has often shown mixed results and failed to gain traction?

The studies aren’t able to explain that exactly. ASSISTments, criticized for its “bland” design and for sometimes being “frustrating,” doesn’t appear to be luring kids to do enormous amounts of homework. In North Carolina, students typically used it for only 18 minutes a week, usually split among two to three sessions. 

From a student’s perspective, the main feature is instant feedback. ASSISTments marks each problem immediately, like a robo grader. A green check appears for getting it right on the first try, and an orange check is for solving it on a subsequent attempt. Students can try as many times as they wish. Students can also just ask for the correct answer. 

Nearly every online math platform gives instant feedback. It’s a well established principle of cognitive science that students learn better when they can see and sort out their mistakes immediately, rather than waiting days for the teacher to grade their work and return it. 

The secret sauce might be in the easy-to-digest feedback that teachers are getting. Teachers receive a simple data report, showing them which problems students are getting right and wrong. 

ASSISTments encourages teachers to project anonymized homework results on a whiteboard and review the ones that many students got wrong. Not every teacher does that. On the teacher’s back end, the system also highlights common mistakes that students are making. In surveys, teachers said it changes how they review homework.

Other math platforms generate data reports too, and teachers ought to be able to use them to inform their instruction. But when 30 students are each working on 20 different, customized problems, it’s a lot harder to figure out which of those 600 problems should be reviewed in class. 

There are other advantages to having a class work on a common set of problems. It allows kids to work together, something that motivates many extroverted tweens and teens to do their homework. It can also trigger worthwhile class discussions, in which students explain how they solved the same problem differently.

ASSISTments has drawbacks. Many students don’t have good internet connections at home and many teachers don’t want to devote precious minutes of class time to screen time. In the North Carolina study, some teachers had students do the homework in school. 

Teachers are restricted to the math problems that Heffernan’s team has uploaded to the ASSISTments library. It currently includes problems from three middle school math curricula:  Illustrative Mathematics, Open Up Resources and Eureka Math (also known as EngageNY). For the Maine and North Carolina studies, the ASSISTments team uploaded math questions that teachers were familiar with from their textbooks and binders. But outside of a study, if teachers want to use their own math questions, they’ll have to wait until next year, when ASSISTments plans to allow teachers to build their own problems or edit existing ones.

Teachers can assign longer open-response questions, but ASSISTments doesn’t give instant feedback on them. Heffernan is currently testing how to use AI to evaluate students’ written explanations. 

There are other bells and whistles inside the ASSISTments system too. Many problems have “hints” to help students who are struggling and can show step-by-step worked out examples. There are also optional “skill builders” for students to practice rudimentary skills, such as adding fractions with unlike denominators.  It is unclear how important these extra features are. In the North Carolina study, students generally didn’t use them.

There’s every reason to believe that students can learn more from personalized instruction, but the research is mixed. Many students don’t spend as much practice time on the software as they should. Many teachers want more control over what the computer assigns to students. Researchers are starting to see good results in using differentiated practice work in combination with tutoring. That could make catching up a lot more cost effective.

I rarely hear about "personalized learning" anymore in a classroom context. One thing we've all learned during the pandemic is that learning has proven to be a profoundly human interaction of give and take between student and teacher and among peers. One-size-fits-all instruction may not be perfect, but it keeps the humans in the picture. 

This story about ASSISTments was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Proof Points and other Hechinger newsletters.

The post PROOF POINTS: The value of one-size-fits-all math homework appeared first on The Hechinger Report.

PROOF POINTS: It’s easy to fool ChatGPT detectors | https://hechingerreport.org/proof-points-its-easy-to-fool-chatgpt-detectors/ | Mon, 04 Sep 2023


A high school English teacher recently explained to me how she’s coping with the latest challenge to education in America: ChatGPT.  She runs every student essay through five different generative AI detectors. She thought the extra effort would catch the cheaters in her classroom. 

A clever series of experiments by computer scientists and engineers at Stanford University indicates that her labors to vet each essay five ways might be in vain. The researchers demonstrated that seven commonly used GPT detectors are so primitive that they are both easily fooled by machine-generated essays and prone to improperly flagging innocent students. Layering several detectors on top of each other does little to solve the problem of false negatives and false positives.

“If AI-generated content can easily evade detection while human text is frequently misclassified, how effective are these detectors truly?” the Stanford scientists wrote in a July 2023 paper, published under the banner, “opinion,” in the peer-reviewed data science journal Patterns. “Claims of GPT detectors’ ‘99% accuracy’ are often taken at face value by a broader audience, which is misleading at best.”

The scientists began by generating 31 counterfeit college admissions essays using ChatGPT 3.5, the free version that any student can use. GPT detectors were pretty good at flagging them. Two of the seven detectors they tested caught all 31 counterfeits. 

But all seven GPT detectors could be easily tricked with a simple tweak. The scientists asked ChatGPT to rewrite the same fake essays with this prompt: “Elevate the provided text by employing literary language.”

Detection rates plummeted to near zero (3 percent, on average). 

I wondered what constitutes literary language in the ChatGPT universe. Instead of college essays, I asked ChatGPT to write a paragraph about the perils of plagiarism. In ChatGPT’s first version, it wrote: “Plagiarism presents a grave threat not only to academic integrity but also to the development of critical thinking and originality among students.” In the second, “elevated” version, plagiarism is “a lurking specter” that “casts a formidable shadow over the realm of academia, threatening not only the sanctity of scholastic honesty but also the very essence of intellectual maturation.”  If I were a teacher, the preposterous magniloquence would have been a red flag. But when I ran both drafts through several AI detectors, the boring first one was flagged by all of them. The flamboyant second draft was flagged by none. Compare the two drafts side by side for yourself. 

Simple prompts bypass ChatGPT detectors. Red bars are AI detection before making the language loftier; gray bars are after. 

For ChatGPT 3.5 generated college admission essays, the performance of seven widely used ChatGPT detectors declines markedly when a second round self-edit prompt (“Elevate the provided text by employing literary language”) is applied. Source: Liang, W., et al. “GPT detectors are biased against non-native English writers” (2023)

Meanwhile, these same GPT detectors incorrectly flagged essays written by real humans as AI-generated more than half the time when the writers were not native English speakers. The researchers collected a batch of 91 practice English TOEFL essays that Chinese students had voluntarily uploaded to a test-prep forum before ChatGPT was invented. (TOEFL is the acronym for the Test of English as a Foreign Language, which is taken by international students who are applying to U.S. universities.) After running the 91 essays through all seven ChatGPT detectors, the researchers found that 89 had been flagged by one or more detectors as possibly AI-generated. All seven detectors unanimously marked one in five essays as AI-authored. By contrast, the detectors accurately categorized a separate batch of 88 eighth grade essays submitted by real American students.

My former colleague Tara García Mathewson brought this research to my attention in her first story for The Markup, which highlighted how international college students are facing unjust accusations of cheating and need to prove their innocence. The Stanford scientists are warning not only about unfair bias but also about the futility of using the current generation of AI detectors. 

Bias in ChatGPT detectors. Leading detectors incorrectly flag a majority of essays written by international students, but accurately classify writing of American eighth graders. 

More than half of the TOEFL (Test of English as a Foreign Language) essays written by non-native English speakers were incorrectly classified as "AI-generated," while the detectors exhibited near-perfect accuracy on U.S. eighth graders' essays. Source: Liang, W., et al. "GPT detectors are biased against non-native English writers" (2023)

The reason the AI detectors fail in both cases – with a bot's fancy language and with foreign students' real writing – is the same, and it has to do with how the detectors work. Each detector is a machine learning model that analyzes vocabulary choices, syntax and grammar. A widely adopted measure inside numerous GPT detectors is something called "text perplexity," a calculation of how predictable or banal the writing is. It gauges the degree of "surprise" in how words are strung together in an essay. If the model can predict the next word in a sentence easily, the perplexity is low. If the next word is hard to predict, the perplexity is high.

Low perplexity is a symptom of an AI generated text, while high perplexity is a sign of human writing. My intentional use of the word “banal” above, for example, is a lexical choice that might “surprise” the detector and put this column squarely in the non-AI generated bucket. 

Because text perplexity is a key measure inside the GPT detectors, it becomes easy to game with loftier language. Non-native speakers get flagged because they are likely to exhibit less linguistic variability and syntactic complexity.
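To make "text perplexity" concrete, here is a toy sketch: a unigram word-frequency model invented purely for illustration, not any detector's actual implementation. Perplexity is the exponential of the average negative log-probability the model assigns to each word, so predictable wording scores low and rare, "surprising" wording scores high:

```python
import math
from collections import Counter

def perplexity(text, probs, floor=1e-6):
    """Exponential of the average negative log-probability per word.
    Unseen words get a tiny floor probability instead of zero."""
    words = text.lower().split()
    neg_log = sum(-math.log(probs.get(w, floor)) for w in words)
    return math.exp(neg_log / len(words))

# Toy "language model": word frequencies from a tiny reference corpus.
corpus = "the cat sat on the mat and the dog sat on the rug".split()
counts = Counter(corpus)
total = sum(counts.values())
probs = {w: c / total for w, c in counts.items()}

print(perplexity("the cat sat on the mat", probs))           # ≈ 6.5: predictable, "AI-like"
print(perplexity("a lurking specter casts shadows", probs))  # ≈ 1e6: surprising, "human-like"
```

Real detectors score text with large neural language models rather than word counts, but the gaming dynamic is the same: prompting ChatGPT to "elevate" its language swaps predictable words for rare ones and drives perplexity up past the human threshold, while a non-native writer's more limited vocabulary drives it down.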

The seven detectors were created by originality.ai, Quill.org, Sapling, Crossplag, GPTZero, ZeroGPT and OpenAI (the creator of ChatGPT). During the summer of 2023, Quill and OpenAI both decommissioned their free AI checkers because of inaccuracies. Open AI’s website says it’s planning to launch a new one.

“We have taken down AI Writing Check,” Quill.org wrote on its website, “because the new versions of Generative AI tools are too sophisticated for detection by AI.” 

The site blamed newer generative AI tools that have come out since ChatGPT launched last year.  For example, Undetectable AI promises to turn any AI-generated essay into one that can evade detectors … for a fee. 

Quill recommends a clever workaround: check students' Google Doc version history, which Google captures and saves every few minutes. A normal document history should show every typo and sentence change as a student is writing. But someone who had an essay written for them – either by a robot or a ghostwriter – will simply copy and paste the entire essay at once into a blank screen. "No human writes that way," the Quill site says. A more detailed explanation of how to check a document's version history is here.

Checking revision histories might be more effective, but this level of detective work is ridiculously time consuming for a high school English teacher who is grading dozens of essays. AI was supposed to save us time, but right now, it’s adding to the workload of time-pressed teachers!

This story about ChatGPT detectors was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Proof Points and other Hechinger newsletters. 

The post PROOF POINTS: It’s easy to fool ChatGPT detectors appeared first on The Hechinger Report.

PROOF POINTS: How important was your favorite teacher to your success? Researchers have done the math | https://hechingerreport.org/proof-points-how-important-was-your-favorite-teacher-to-your-success-researchers-have-done-the-math/ | Mon, 05 Jun 2023


It’s often hard to express exactly why certain teachers make such a difference in our lives. Some push us to work harder than we thought we could. Others give us good advice and support us through setbacks. Students describe how a caring teacher helped them “stay out of trouble” or gave them “direction in life.” What we cherish often has nothing to do with the biology or Bronze Age history we learned in the classroom.

For the lucky among us who have formed connections with a teacher, a school counselor or a coach, their value can seem immeasurable. That has not deterred a trio of researchers from trying to quantify that influence.

“Many of us have had a teacher in our lives that just went above and beyond and was more than a classroom teacher,” said Matthew Kraft, an associate professor of education and economics at Brown University and one of the researchers on a draft working paper circulated in May 2023 by the National Bureau of Economic Research that has not been peer reviewed. “It’s really an underappreciated way in which teachers matter.”

Kraft and two other researchers, from Harvard University and the University of Virginia, turned to the National Longitudinal Study of Adolescent to Adult Health, a survey that has periodically followed 20,000 teens from 1994 into adulthood. One of the questions posed in 2000, when respondents were 18 to 24 years old, was this: Other than your parents or step-parents, has an adult made an important positive difference in your life at any time since you were 14 years old?

Three quarters of the students said they had an adult like this in their lives. Often their most important mentor was another relative, a neighbor or a religious leader. But over 15 percent of the students – more than one out of every seven respondents – said that a teacher, a school counselor or a sports coach was their most important mentor. These school relationships were notably long-lasting; students said that teachers and coaches played important roles in their lives for more than five years, on average.

The researchers compared what happened to the 3,000 students who had mentors at school with the roughly 5,000 students who said they had no mentors at all. The ones with school mentors did moderately better in high school, earning slightly higher grades – a B- versus a C+, for example – and failing fewer classes.

But what was really striking was what happened after high school. Those who had formed a positive relationship with a teacher, a counselor or a coach increased their chances of going to college by at least 9 percentage points. That’s a substantial boost given that only 51 percent of students without a mentor enrolled in college.
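A quick calculation puts that boost in relative terms (treating the gain as exactly 9 percentage points is an assumption here; the study reports "at least 9"):

```python
baseline = 0.51   # college enrollment rate without a school mentor
boost = 0.09      # reported gain in percentage points ("at least 9")

with_mentor = baseline + boost
relative_gain = boost / baseline

print(f"enrollment with a mentor: {with_mentor:.0%}")   # 60%
print(f"relative increase: {relative_gain:.0%}")         # 18%
```

In other words, a 9-point gain on a 51 percent base means roughly one in six students who would otherwise have skipped college enrolled instead.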

Kraft and his colleagues brought the tools of modern applied economics to the question of a teacher’s worth outside the classroom. There are many confounding factors: perhaps the teens who form these relationships with caring adults are different in other ways – maybe more ambitious or more self-confident – and would have gone to college in higher numbers even without a mentor at school. Though it’s impossible to account for every possibility, the researchers crunched the numbers in various ways, arriving at different numerical results each time but consistently finding strong benefits for students who had mentors at school. The pattern held even in comparisons between best friends, romantic partners and twins. For example, the twin with a mentor did better than the twin without, even though they were raised by the same parents and attended the same high school.

Kraft and his colleagues didn’t detect a big difference in college graduation rates between those with and without mentors. The largest difference appears to be in the decision to apply to and enroll in college. For students who are undecided about college, having a school-based mentor seems to carry them over the threshold of the college gates.

Related: Two studies point to the power of teacher-student relationships to boost learning

Students from low-income and less educated families were less likely to have a mentor, but having one was even more beneficial for them than for their higher-income peers: their college-going rates appeared to be dramatically higher. The mentoring itself also looked different for poor and rich students. Lower-income students were more likely to report that their mentors gave them practical, tangible help, along with advice on money. Higher-income students were more likely to report receiving guidance, advice and wisdom.

Being mentored by a sports coach was just as effective as being mentored by a teacher; these young adults experienced the same short-term and long-term benefits. However, female students were more likely to gravitate toward teachers while male students were more likely to bond with a coach. 

Formal mentorship programs, such as Big Brothers Big Sisters, have also produced benefits for young adults, but Kraft said the benefits from the informal relationships studied here appear to be larger.

“We know how to set up formal mentoring programs but not all the relationships are going to pan out,” said Kraft. “We know far less about how to support and cultivate the formation of these voluntary relationships. And we have no control over whether or not it’s the students who might most benefit from them who are able to successfully seek out and form these mentoring relationships.”

But there are some clues in the study as to what schools can do to create the conditions for serendipity. “There is no magic wand for exactly the best way to do it,” Kraft said. “It’s not something we can say, do this and relationships will form. But schools are social organizations and can create environments where they’re more likely to happen.”

The researchers noticed that high schools with smaller class sizes and those where students said they felt a greater “sense of belonging” tended to produce twice as many of these mentoring relationships as schools with larger classes and a less hospitable school environment. “When students say that school is a place where they feel welcome and part of the community,” said Kraft, “you’re much more willing to open up to a teacher or counselor or a coach, and reciprocate when they reach out and say, ‘Hey, I see you’re looking a little down. Do you want to talk about it?’”

Kraft offers two additional suggestions for schools:

  • Hire more Black and Hispanic teachers

White students were substantially more likely to report having a school mentor than their Black and Hispanic peers. That’s likely because the U.S. high school teacher workforce is 79 percent white, 59 percent female and drawn largely from middle- and upper-middle-class backgrounds. “Shared common life experiences increase the likelihood that you’ll develop an informal mentoring relationship because you can talk about things in a common way,” said Kraft. “This adds weight to the pressing need to diversify the teacher workforce.”

The researchers do not know why so many Asian males (more than 20 percent) sought out and built strong relationships with adults at school. Seventeen percent of Asian females had school mentors. Only 10 percent of Black and Hispanic female students had mentors at school while Black and Hispanic males reported slightly higher rates of about 12 percent. Fifteen percent of white students reported having school-based mentors.

  • Create small group moments

Kraft suggests that school leaders can promote these student-teacher relationships by creating more opportunities for students to have multiple, sustained interactions with school personnel in small group settings. This doesn’t necessarily require smaller class sizes; small groups could be advisory periods, club activities or tutoring sessions during the school day.

Is the implication of this study that teachers should be taking on even more responsibilities? Kraft says that’s not his intention. Instead, he wants to recognize what many teachers and other school staffers are already doing. It’s another way, he said, “in which teachers are incredibly important.” 

This story about the importance of teacher-student relationships was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Proof Points and other Hechinger newsletters. 

The post PROOF POINTS: How important was your favorite teacher to your success? Researchers have done the math appeared first on The Hechinger Report.

PROOF POINTS: Inside the perplexing study that’s inspired colleges to drop remedial math | https://hechingerreport.org/proof-points-inside-the-perplexing-study-thats-inspired-college-to-drop-remedial-math/ | Mon, 15 May 2023


When Alexandra Logue served as the chief academic officer of the City University of New York (CUNY) from 2008 to 2014, she discovered that her 25-college system was spending over $20 million a year on remedial classes. Nationwide, the cost of remedial education exceeded $1 billion annually; many colleges operated separate departments of “developmental education,” higher education’s euphemistic jargon for non-credit catch-up classes. “Nobody could tell me if we were doing it the right way,” Logue said.

She suspected they weren’t. More than two-thirds of all community college students and 40 percent of undergraduates in four-year colleges had to start with at least one remedial class, according to a statistical report from the U.S. Department of Education. The majority of these students dropped out without degrees.

An experimental psychologist by training, Logue designed an experiment. She compared remedial math classes with the alternative of letting ill-prepared students proceed straight to a college course accompanied by extra help. The early results of her randomized controlled trial were so extraordinary that her study influenced not only CUNY in 2016 but also California lawmakers in 2017 to start phasing out remedial education in their state.

Over the seven years of Logue’s study, which took place at three of CUNY’s seven two-year community colleges, the results kept getting better. Students who started with college math were passing the course at a fraction of the cost of remediation, getting their math requirements out of the way, completing their degrees faster and earning thousands more in the labor market. Many public colleges, from Nevada and Colorado to Connecticut and Tennessee, have followed suit, phasing out remedial ed.

Other data analyses have also shown benefits to bypassing remedial education, but this was one of the only real-life experiments, like a clinical trial, and so it carried a lot of weight. Most importantly, it studied math, often an insurmountable requirement for many students to complete their college degrees. This study has arguably been one of the most influential attempts to use experimental evidence to change how higher education operates and is now affecting the lives of millions of college students.

“It’s a great feeling of satisfaction,” said Logue, now a research professor at CUNY’s Graduate Center, “because it isn’t just CUNY. It’s across the country, using this really great evidence to help make things better.”

The third and final chapter of this long-term study was published in the January/February 2023 issue of the journal Educational Researcher, and as I pored over this body of research, I became confused about what it proved. The study could be seen as evidence against remedial education, but it could equally be seen as evidence for letting college students meet their math requirements without taking algebra.

The confusion stems from the study design. Instead of testing remedial versus college algebra, which would be a direct test of remedial education, the study compared remedial algebra to college statistics, a sort of apples to oranges comparison. In the experiment, CUNY randomly assigned almost 300 students who failed the algebra portion of a math placement test to an introductory statistics course. In tandem with this college class, students attended an extra two-hour workshop each week where a college classmate who had already passed the class tutored them. Researchers then compared what happened to these stats students with a similar group of almost 300 students who were sent to remedial algebra, the traditional first step for students who fail the algebra subtest. Logue had the same teachers teach sections of both courses – remedial algebra and college stats – so that no one could argue instructional quality was different. Also, only students who struggled with algebra, but not arithmetic, were part of this experiment; students with more severe math difficulties, as measured by the freshman placement test, weren’t asked to attempt the college course and were excluded from the control group.

By all measures, the students who went straight to college stats did better. More than half of the students who bypassed remedial algebra passed the stats class and earned college credit. Ultimately, these students finished their degrees a lot faster than those who started off in remedial algebra. They were 50 percent more likely to complete a two-year associate’s degree within three years and, according to the latest chapter of this seven-year study, they were twice as likely to transfer to a four-year institution and complete a bachelor’s degree within five years. Seven years after bypassing remedial ed, students were earning $4,600 more a year in the workplace, on average, than those who started in remedial math. 

“What we can say is, for students who have been assigned to remediation, put them into statistics with extra help, and you will get a good result,” said Logue. 

Some researchers argue that the shift to statistics might have made the difference. 

“That switch from algebra to stats is a big one for a lot of students,” said Lindsay Daugherty, a senior policy researcher at the RAND Corporation who has studied remedial education and efforts to reform it. She said all the other studies that have looked at replacing remedial classes with college courses plus extra support haven’t produced better graduation rates. “This CUNY study is the only one,” said Daugherty.

The only other randomized controlled trial of remedial education is Daugherty’s Texas experiment, which replaced remedial English courses with college courses plus extra support. Going straight to college courses helped more students earn college credits in English, but it didn’t help them get through college. Dropout rates were the same for students in both the remedial and the “corequisite” courses, as the college-plus-extra-help version is often called.

“We know that the way that we did it before with these standalone [remedial] courses was not helping students, and most states and colleges have made a change and are moving towards corequisites,” said Daugherty.  “But the evidence does not suggest that these corequisite courses are the magic potion that is going to change completion and persistence. It’s going to take a lot more and a lot of other support.”

What we don’t know from this study is how to help students who are behind in math learn college algebra, a course that is similar to intermediate high school algebra, which remains a requirement for many business, health and engineering majors. All the students in this landmark CUNY study had intended to major in non-STEM fields that didn’t require algebra, such as criminal justice and the humanities, and for which college statistics would fulfill their math requirements. 

Logue originally sought to conduct a simpler, cleaner study of only algebra, comparing the remedial prerequisite to the college course plus tutoring support. But she ran into problems with the algebra faculty. (There were too many different versions of college algebra for different majors and across different colleges at CUNY, each covering different topics, she said, and it was impossible to test one version of a basic college algebra course.) Meanwhile, the statistics department was open to the experiment and their introductory courses were very similar from professor to professor.

It’s unclear from this study how essential the weekly tutoring sessions were to helping students pass the statistics course. The experiment didn’t test whether students could pass the normal college stats class without peer tutoring. 

The good news is that the switch from remedial algebra to college stats didn’t seem to harm anyone. Indeed, the students in the statistics group were just as likely to complete advanced math courses, along the algebra-to-calculus track, as students who started with remedial algebra, according to co-author Daniel Douglas, director of social science research at Trinity College in Hartford, Connecticut, who led the data analysis. In the final number crunching, the stats students were just as likely to complete math-intensive degrees that required college algebra. Starting with stats didn’t thwart students from changing their minds about their majors and returning to an algebra-to-calculus track, Douglas said.

The bad news is that a lot of community college students still fell through the cracks. Although there was a 50 percent boost to the number of students who completed an associate’s degree within three years, only a quarter of the statistics students hit this milestone. Almost three-quarters didn’t. And though bypassing math remediation and heading straight to college stats led to a 100 percent increase in the number of bachelor’s degrees, only 14 percent of the statistics students earned a four-year degree. 
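The tension between large relative gains and small absolute numbers can be made concrete with quick arithmetic. The control-group rates below are back-solved from the reported relative gains rather than taken directly from the study:

```python
# Reported rates for the statistics (treatment) group
assoc_stats = 0.25   # associate's degree within three years
bach_stats = 0.14    # bachelor's degree within five years

# Implied control-group rates, back-solved from the relative gains
assoc_control = assoc_stats / 1.5   # "50 percent boost"
bach_control = bach_stats / 2.0     # "100 percent increase"

print(f"associate's: {assoc_control:.0%} -> {assoc_stats:.0%}")  # 17% -> 25%
print(f"bachelor's:  {bach_control:.0%} -> {bach_stats:.0%}")    # 7% -> 14%
```

Doubling a 7 percent bachelor’s completion rate is a real improvement, yet it still leaves the overwhelming majority of students without a four-year degree, which is the paragraph’s point.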

The main benefit of allowing students to bypass remedial classes is speed, according to Douglas. Over the course of seven years, the students who started in remedial algebra eventually caught up and hit many of the same milestones as the students who started with statistics. “At the end of our data collection in the fall of 2020, their degree completion – the elementary algebra group and the stats group – they’re not that different,” said Douglas. As those students enter the workforce and gain experience, it’s quite possible that their wages will catch up too.

A CUNY spokesperson told me that the college system stopped placing new students into remedial classes in the fall of 2022. For students who are behind in math, there are now “corequisite” math classes, where the extra support is more costly and differs from the peer tutoring tested in the study described here. The college-level course now runs two hours longer each week, blurring the line between regular instruction and extra help, and is taught entirely by instructors, not peer tutors. Many instructors who used to teach remedial courses now teach these corequisite courses.

For students who are significantly behind — struggling not only in algebra, but also in basic arithmetic — CUNY operates a separate pre-college program, called CUNY Start, where students take only remedial classes. These students haven’t yet matriculated at the college and don’t pay tuition, and so CUNY doesn’t count them as students. And the numbers of students in this pre-college remedial program had been swelling before the pandemic.* 

Students did better in these newer pre-college remedial classes than those who took traditional remedial classes, according to a separate 2021 study that Logue was also involved in. But these students aren’t necessarily doing better in college and earning more credits, unless they get a lot more advising and counseling support during their college years. Helping more young adults get through college isn’t going to be easy or cheap.

*Clarification: This paragraph has been modified to reflect that the CUNY Start program dates to 2009 and the number of students in it grew during the 2010s.  Enrollment in CUNY Start has decreased in recent years, mirroring the general drop in enrollment at community colleges. An earlier version implied that the CUNY Start program was new and that the number of students in it is still increasing.

This story about remedial math in college was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Proof Points and other Hechinger newsletters. 

PROOF POINTS: Trial finds cheaper, quicker way to tutor young kids in reading | https://hechingerreport.org/proof-points-trial-finds-cheaper-quicker-way-to-tutor-young-kids-in-reading/ | Mon, 13 Mar 2023

After a year of short-burst tutoring, more than double the number of kindergarteners hit an important reading milestone. Researchers are tracking the children to see if the gains from this cheaper and quicker version of high-dosage tutoring are long lasting and lead to more third graders becoming proficient readers. Credit: AP Photo/Elaine Thompson

Education researchers have been urging schools to invest their $120 billion in federal pandemic recovery funds in tutoring. What researchers have in mind is an extremely intensive type of tutoring, often called “high dosage” tutoring, which takes place daily or almost every day. It has produced remarkable results for students in almost 100 studies, but these programs are difficult for schools to launch and operate. 

They involve hiring and training tutors and coming up with tailored lesson plans for each child. Outside organizations can help provide tutors and lessons, but schools still need to overhaul schedules to make time for tutoring, find physical space where tutors can meet with students, and safely allow a stream of adults to flow in and out of school buildings all day long. Tutoring programs with research evidence behind them are also expensive, at least $1,000 per student. Some exceed $4,000. 

One organization has designed a different tutoring model, which gives very short one-to-one tutoring sessions to young children who are just learning to read. The nonprofit organization, Chapter One (formerly Innovations for Learning), calls it “short burst” tutoring. It involves far fewer tutors, less disruption to school schedules and no extra space beyond a desk in the back of a classroom. The price tag, paid by school districts, is less than $500 per student. 

The first-year results of a four-year study of 800 Florida children conducted by a Stanford University research organization are promising. Half the children in 49 kindergarten classrooms were randomly selected to receive Chapter One’s tutoring program during the 2021-22 school year. Almost three-quarters of the students were Black and more than half were low-income – two groups who are more likely to be held back in third grade because of reading difficulties. 

To keep younger children on track, the Broward County school district, where the study took place, wanted all kindergarteners to be able to sound out simple three-letter words by the end of the year and to distinguish similar words such as hit, hot and hut. After one year of short-burst tutoring, more than twice as many kindergarteners hit this milestone: 68 percent of tutored children versus 32 percent of their untutored classmates. Tutored students also scored much higher on a test of oral reading fluency.

“These results are big,” said Susanna Loeb, a Stanford professor of education who was a member of the research team and heads the National Student Support Accelerator, a Stanford research organization that studies tutoring and released this study in February 2023. “What’s so exciting about this study is it shows that you can get a lot of the benefits of high impact tutoring – relationship-based, individualized instruction with really strong instructional materials – at a cost that is doable for most districts in the long run.”

Loeb said the reading gains in this study were at least as large as what has been produced by more expensive tutoring programs. But it remains to be seen whether these short-term benefits will endure, and whether kids without tutoring will eventually catch up. Researchers especially want to learn if these tutored children will become proficient readers at the end of third grade, a crucial marker in academic development. By one measure, a third of U.S. third graders are currently far behind grade level in reading and in need of intensive remediation. 

The 400 children who received the short-burst tutoring in kindergarten in this study are continuing to receive tutoring in first grade during the current 2022-23 academic year. Researchers are tracking all 800 children, with and without tutoring, for an additional two years through third grade.

Loeb cautioned that this short burst model would be unlikely to work with middle or high school students. It might be that short bursts of one-to-one help are particularly suited to the littlest learners.

“We realized at that young age that their attention span runs out somewhere around six or seven minutes if you’re really doing things intensively with them,” said Seth Weinberger, the founder of Chapter One. 

Weinberger stumbled into tutoring after a foray into educational video games. He was originally a lawyer representing video game makers, and collaborated with academics to develop phonics games to teach reading. 

“After about 20 years of honing these computer games, we came to the conclusion that computer games by themselves are just not going to be enough,” said Weinberger. “You really need some combination of computer-assisted instruction and actual real live humans in order to make it work for the kids.”

Weinberger’s tutoring-and-gaming model works like this: a tutor sits at a desk in the back of the classroom during the normal English Language Arts (ELA) period. One child works with a tutor for a short period of time, typically five to seven minutes, rejoins his classmates and another child rotates in. Children work with the same tutor each time, but a single tutor can cycle through eight or more students an hour this way. 

Though it might seem distracting to have an audible tutoring session in the same classroom, kindergarten classes are often a hubbub of noise as children work with classmates at different activity stations. Tutoring can be another noisy station, but I imagine that it can also be a distraction when the teacher is reading a picture book aloud. Weinberger considers it a strength of his program that kids are not pulled out of the classroom for tutoring so that they are not missing much instruction from their main teacher. In disadvantaged schools, children are frequently pulled out of classes for extra services, which is also disruptive.

Technology plays a big role. Behind the scenes, Chapter One’s computers are keeping track of every child’s progress and guiding the tutors on how to personalize instruction. The tutor’s screen indicates which student to work with next and what skills that student needs to work on.  It also suggests phonics lessons and activities that the tutor can use during the session. 

The computer guidance takes the usual guesswork and judgment calls out of reading instruction, which has enabled well-trained laypeople to serve as tutors alongside experienced, certified teachers. (The Stanford team is currently studying whether certified teachers produce much larger reading improvements for children, but those results are not out yet. In the study described here, both laypeople and certified teachers served as tutors.)

Chapter One’s technology also determines how much tutoring each child should get each day and how many times a week. Dosage ranges from a two-minute session every two weeks to as much as 15 minutes a day. More typical is five to seven minutes three to five times a week. Children in the middle who are making good progress get the most. Children at the very top and the very bottom get the least. (Children who are not making progress may have a learning disability and need a different intervention.)
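The dosage logic described above can be sketched as a simple function. Everything here is a hypothetical illustration of the pattern the article reports – middling performers get the most time, the extremes the least – with invented thresholds; it is not Chapter One’s actual algorithm, whose rules are not public.

```python
def weekly_tutoring_minutes(progress_percentile: float) -> float:
    """Illustrative dosage rule: children making middling progress get
    the most tutoring time; the very top and very bottom get the least.
    The percentile input and all cutoffs are hypothetical."""
    if progress_percentile < 10 or progress_percentile > 90:
        return 1.0    # minimal dose, e.g. ~2 minutes every two weeks
    if progress_percentile < 35 or progress_percentile > 65:
        return 15.0   # moderate dose, e.g. ~5 minutes 3x a week
    return 35.0       # maximum dose, e.g. ~7 minutes 5x a week

# The middle of the class gets more time than either extreme
assert weekly_tutoring_minutes(50) > weekly_tutoring_minutes(95)
assert weekly_tutoring_minutes(50) > weekly_tutoring_minutes(5)
```

The inverted-U shape is the notable design choice: unlike most interventions, which concentrate resources on the lowest performers, this model assumes children stuck at the bottom may need a different intervention entirely.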

Technology is also used to reinforce the tutoring with independent practice time on tablets. Chapter One recommends that every child spend 15 minutes a day playing phonics games that are synced to the tutoring instruction and change as the student progresses. The researchers did not yet have data on how much time children actually spent playing these educational games, and how important this independent practice time is in driving the results.

A federal survey of principals estimates that half of U.S. students are behind grade level, far higher than before the pandemic, when a third were behind. But it’s really hard to expand high-dosage tutoring programs rapidly to serve the millions of children who need it. Most of the effective programs are rather small, reaching only a tiny fraction of the students who need help. What’s heartening about this Chapter One study is that the organization is already tutoring 25,000 students in U.S. schools (plus 1,000 students in Canada and the United Kingdom). Now we have a well-designed study – as close as you get in education to the kinds of tests that we do on vaccines and pharmaceuticals – showing that it is effective. 

“It’s not that it has the potential to scale,” said Stanford’s Loeb. “Already 10,000 kids are receiving it in this one district, so we know that it’s actually possible.” 

This story about alternatives to high-dosage tutoring was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for the Hechinger newsletter.

PROOF POINTS: Taking stock of tutoring | https://hechingerreport.org/proof-points-taking-stock-of-tutoring/ | Mon, 27 Feb 2023


Ever since the pandemic shut down schools almost three years ago, I’ve been writing about tutoring as the most promising way to help kids catch up academically. I often get questions about research on tutoring. How effective is tutoring? How many schools are doing it? How is it going so far? In this column, I’m recapping the evidence for tutoring and what we know now about pandemic tutoring. For those who want to learn more, there are links to sources throughout and, at the end, a list of Hechinger stories on tutoring.

Well before the pandemic, researchers were zeroing in on tutoring as a way to help children who were significantly behind grade level. Remedial classes had generally been a failure, and researchers often saw disappointing results from after-school and summer school programs because students didn’t show up or didn’t want to go to school during vacation. 

But evidence for tutoring has been building for more than 30 years, as tutoring organizations designed reading and math programs, partnered with schools and invited in researchers. The results have been striking. In almost 100 randomized controlled trials, where students were randomly assigned to receive tutoring, the average gains were equivalent to moving an average child from the 50th percentile to the 66th percentile. In education, that’s a giant jump. One estimate equated the jump from tutoring to five months of learning beyond a student’s ordinary progress in a school year. There are no magic bullets in education, but tutoring comes as close to one as you get.
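Assuming test scores are normally distributed (the standard assumption behind such percentile conversions), the 50th-to-66th-percentile jump can be translated into a standardized effect size with Python’s statistics module:

```python
from statistics import NormalDist

# An average student (50th percentile) moves to the 66th percentile.
# Under a normal distribution, the implied effect size in standard
# deviation units is simply the z-score of the new percentile.
effect_size = NormalDist().inv_cdf(0.66)
print(f"effect size: {effect_size:.2f} standard deviations")  # 0.41
```

An effect of roughly 0.4 standard deviations is large by education-research standards, where gains of 0.1 to 0.2 are more typical for school interventions.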

What researchers mean when they say “tutoring,” however, is not what many people might imagine. It’s not provided by the kind of tutors that well-to-do families might hire for their children at home. Studies have found that sessions once or twice a week haven’t boosted achievement much, nor has frequent after-school homework help. Instead, tutoring produces outsized gains in reading and math when it takes place daily, using paid, well-trained tutors who are following a proven curriculum or lesson plans that are linked to what the student is learning in class. Effective tutoring sessions are scheduled during the school day, when attendance is mandatory, not after school. Researchers call it “high-dosage” or “high-impact” tutoring. 

Think of it as the difference between outpatient visits and intensive care at a hospital. High-dosage tutoring is more like the latter. It’s expensive to hire and train tutors and this type of tutoring can cost schools $4,000 or more per student annually. (Surprisingly, the tutoring doesn’t have to be one-to-one; researchers have found that well-designed tutoring programs can be very effective when tutors work with two or three students.) 

The Biden administration has urged schools to use their $122 billion in pandemic recovery funds on tutoring. But it’s been hard for schools to launch tutoring operations. For starters, it’s tough to hire tutors amid a strong labor market when there aren’t many people looking for work and “help wanted” signs are everywhere. The logistical issues are complex: tutor training, rescheduling the school day to make time for tutoring periods, finding physical space to hold tutoring sessions and figuring out how to allow a stream of adult tutors to flow in and out of the school buildings all day. There are also tough decisions, such as which students should be tutored, and which curriculums to choose. Educators have to become operations experts and build a whole new organization amid everything else they’re juggling.

So far we have spotty data on how many schools have actually implemented tutoring. Among those who have, it’s unclear how many have launched good high-dosage programs and which students are getting it. 

The U.S. Department of Education estimates that more than four out of five schools were offering some version of tutoring to some of their students during the 2022-23 school year, based on a December 2022 survey of 1,000 schools. The majority said they were delivering “standard” tutoring, such as once-a-week extra-help sessions after school. Only 37 percent said they were delivering “high-dosage” tutoring, and even in those schools, only 30 percent of students were receiving it. That translates into an estimate of roughly 10 percent of public school students nationwide receiving high-dosage tutoring – far less than the need. In the same survey, school principals estimated that half of their students were behind grade level. 

Sixteen states are using $470 million of their federal pandemic recovery funds to launch large tutoring programs that will reach millions of children, according to a separate February 2023 report by the Council of Chief State School Officers, a group of public officials who head state education departments that oversee elementary, middle and high schools. Among them are Arkansas, Colorado, Louisiana and Tennessee. Another four states are sending more than $200 million directly to families to hire their own tutors. Indiana, for example, gives families up to $1,000 per qualifying student to spend on high-impact tutoring. (Local school districts are spending much more than this roughly $700 million total on tutoring; the school officers’ report covers only direct state spending.)

In many cases, tutoring this year is taking place virtually over screens instead of in person. Often, students are texting with tutors and not hearing or seeing one another – akin to a customer service chat session. But there are also tutoring companies that are trying to recreate an in-person tutoring experience through live video and audio. It feels more like a Zoom meeting with a shared whiteboard that both student and teacher can write on.

It remains to be seen whether the outsized academic gains from in-person tutoring can be replicated online. A study of low-income middle schoolers in Chicago was disappointing. The program was riddled with problems: poor attendance, technical glitches and slow recruitment of college student volunteers to serve as tutors. Students who were assigned tutoring didn’t catch up more than those who didn’t get that extra help. But there were some signs of hope, too: kids who started the tutoring sooner made larger academic gains. 

Another pandemic study of virtual tutoring for low-income immigrant middle schoolers in Italy yielded good results when students received four hours a week, but much worse results when they got only two hours a week. When the hours were halved, the academic gains dropped by more than half.  

Saga Education, an organization that has built an impressive track record with in-person tutoring, is currently testing whether its high-dosage model works as well in the virtual world. I am eager to see its data when it comes out. Earlier this month I observed Saga’s virtual tutoring at a New York City high school, where the students sat in a classroom and connected to their algebra tutors through laptops. I noticed how much more engaged students were when a tutor was physically present. Many ninth graders weren’t keen to be seen on camera and angled their laptops away, and it was harder to develop an easy, friendly rapport between student and tutor.

School administrators have told me that it is hard to squeeze in three or more tutoring sessions a week, or make sure that students log in when sessions are scheduled. No-shows are common.

Many schools have purchased unlimited online tutoring from for-profit companies, such as Paper, Tutor.com and Varsity Tutors, where students can log in anytime for homework help. Companies have marketed this voluntary 24/7 tutoring as high-dosage because, in theory, students could use it frequently. And it is much cheaper for schools; it can cost $40 per student instead of $4,000 for in-person, high-dosage tutoring. But several reports, such as this one in Fairfax County, Virginia, find that students aren’t using it very much, and the students who need tutoring the most are the least likely to use these drop-in services. 

Efforts by researchers to increase usage through text nudges convinced only 27 percent of the students at one charter school chain in California to try an online tutor even once. More than 70 percent of the students never logged into the tutoring platform. Among students who needed tutoring the most because they had failed a class with a D or an F, only 12 percent ever logged on. Just 26 of the 7,000 students in the charter network used it three or more times a week, the frequency researchers recommend.

Even though the services are marketed as one-to-one tutoring, some tutoring companies, such as Paper, have their tutors handle multiple students at once. Several tutors explained to me how challenging it is to juggle homework questions from different grades and different subjects simultaneously. Students sometimes have to wait patiently for their tutor to reply to a text while the tutor is texting with others. Relying on students’ homework questions, instead of a structured tutoring curriculum, makes it hard to know whether students are being taught the topics they need to catch up. Part of the magic of tutoring may be forming a long-term relationship with a caring adult, but tutors at several of these companies rarely see the same student twice. It’s no wonder that most students aren’t eager to log in.

Even though there’s good evidence for the effectiveness of intensive tutoring, districts are struggling to build functional programs. The for-profit tutoring services many schools are buying in the meantime don’t make the grade.

Previous Proof Points columns on tutoring:

New federal survey estimates one out of 10 public school students gets high-dosage tutoring

Pandemic recovery strategies at 1,000 schools raise concerns over how quickly students will catch up

The life of an online tutor can resemble that of an assembly line worker

Time pressures and multi-tasking raise questions about the effectiveness of on-demand tutoring

Many schools are buying on-demand tutoring but a study finds that few students are using it

Companies market 24/7 online tutoring services as high-dosage tutoring but researchers warn that these products don’t have an evidence base behind them

Early data on high-dosage tutoring shows schools are sometimes finding it tough to deliver even low doses

Tennessee results for low-income children hint that tutoring benefits may be slow to emerge

Uncertain evidence for online tutoring

It’s a booming business in the wake of the pandemic

Research evidence increases for intensive tutoring

One approach to help students make up for the pandemic year uses recent college graduates as tutors

Takeaways from research on tutoring to address coronavirus learning loss

Research points to frequent sessions and a structured curriculum in helping struggling students catch up

Cheaper human tutors can be highly effective, studies show

Good tutors use step-by-step methods

This story about tutoring research was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for the Hechinger newsletter.

The post PROOF POINTS: Taking stock of tutoring appeared first on The Hechinger Report.
