artificial intelligence Archives - The Hechinger Report
Covering Innovation & Inequality in Education

OPINION: Why artificial intelligence holds great promise for improving student outcomes (Dec. 19, 2023)

The recent rise of ChatGPT and other generative artificial intelligence tools has inspired growing anxiety on college campuses while fueling a national conversation about faculty attempts to thwart students from using the tools to cheat.

But that prevalent narrative around AI and cheating is overshadowing the technology’s true potential: Artificial intelligence holds great promise for dramatically enhancing the reach and impact of postsecondary institutions and improving outcomes for all students.

Last month, President Biden issued a sweeping executive order aimed at better mitigating the risks and harnessing the power of artificial intelligence, while also arguing for the need to “shape AI’s potential to transform education by creating resources to support educators deploying AI-enabled educational tools.”

Biden’s call to action could not have been more timely.

The question now is not whether generative AI can positively transform educational access and attainment, but whether higher education is ready to truly democratize and personalize learning with these tools.

Related: Future of Learning: Teaching with AI, part 1

AI’s transformational potential is perhaps greatest at community colleges, minority-serving institutions and open-access universities. These schools’ diversity necessitates a broader set of supports. Dedicated faculty and staff not only serve a very broad range of students — including first-generation and low-income learners, returning adults, those for whom English is a second language and those balancing academic pursuits with family and work responsibilities — but they do so with fewer resources than instructors at elite and flagship institutions. Generative AI tools can augment critically needed supports such as advising, tutoring and coaching.

Exploring the possibilities of AI is not cheap, however. While some low-cost or free tools can make a difference, the largest impacts will be achieved through more advanced — and costly — tools that are developed with specific learner populations in mind and blend academic material with students’ sociocultural and language contexts rather than providing generic solutions.

Challenges around cost and availability could further disenfranchise the very learners who could gain the most from AI tools by denying them access to the experts, resources and development opportunities they need to benefit from them. Institutions may struggle to bring the true power of AI to bear on addressing their students’ needs.

Similarly, too often, the datasets and algorithms behind AI tools reflect historical inaccuracies and intrinsic biases that only further disenfranchise learners. This will continue to be the case until we collectively confront the inequitable ways that AI systems are designed and resources are distributed.

That’s why we need to think about AI differently, shifting our focus from debates about academic integrity and concerns about cheating to how we can leverage artificial intelligence in equitable ways that will boost college completion for all students.

Related: How college educators are using AI in the classroom

Let’s focus on how AI advances could provide all learners with the kinds of high-touch support already offered to students who attend wealthier institutions. AI tools could have a transformative effect on access, progression and completion for learners who were previously constrained by limitations of time, space and resources.

Imagine if generative AI tutors could provide 24/7 individualized support, along with AI-powered virtual reality tools that would widen access to experiential learning opportunities. What about having adaptive learning tools enabling students to learn at a pace that best suits their level of preparation? And personalized learning materials that reflect their backgrounds and lived experiences?

Such steps could augment engagement and outreach efforts to lower the barriers that prevent students from underserved communities from earning degrees.

This is not a speculative vision of a not-too-distant future, but an emerging reality on some campuses. Arizona State University, for example, has assembled a team of engineers and data scientists to develop AI tools to enhance learning and improve student outcomes.

For now, such experimentation is limited to colleges and universities with the resources for scaling the benefits of the technology and developing the guardrails necessary for mitigating risks to learners.

Related: OPINION: The world is changing fast. Students need data science instruction ASAP

According to a new report from the Brookings Institution, many of the nation’s most selective and affluent colleges and universities are clustered in the same coastal metro areas long home to Big Tech — and now to AI innovation and job growth.

That’s unfortunate. Access to new technology — and the ability to play a role in shaping its design — should not be limited by geography or institutional type. A technology that has incredible potential to help expand access to the many benefits of higher education should not become a mechanism through which inequity is exacerbated.

That’s why the newly convened Complete College America Council on Equitable AI plans to bring together organizations representing over 1,000 access-focused two-year and four-year colleges and universities in January. We hope to influence and initiate policies and practices to encourage equitable engagement of AI technologies.

We hope that college leaders, policymakers and technologists will join us to make sure that AI helps to realize, rather than hinder, higher education’s promise as an engine of equity, prosperity and hope.

Yolanda Watson Spiva is president of Complete College America.

Vistasp M. Karbhari is a professor of engineering at the University of Texas at Arlington, where he also served as president from 2013 to 2020, and is a fellow and board member of Complete College America.

This story about AI in higher education was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Hechinger’s newsletter.

OPINION: Banning tech that will become a critical part of life is the wrong answer for education (Dec. 18, 2023)

Since the introduction of ChatGPT, educators have been considering the impact of generative artificial intelligence (GAI) on education. Different approaches to AI codes of conduct are emerging, based on geography, school size and administrators’ willingness to embrace new technology.

With ChatGPT barely one year old and generative AI developing rapidly, a universally accepted approach to integrating AI has not yet emerged.

Still, the rise of GAI is offering a rare glimpse of hope and promise amid K-12’s historic achievement lows and unprecedented teacher shortages. That’s why many educators are contemplating how to manage and monitor student AI use. Opinions run the gamut, with some educators calling for AI tools to be banned outright.

There is a fine line between “using AI as a tool” and “using AI to cheat,” and many educators are still determining where that line is.

Related: How AI can teach kids to write – not just cheat

In my view, banning tech that will become a critical part of everyday life is not the answer. AI tools can be valuable classroom companions, and educators should write their codes of conduct in a way that encourages learners to adapt.

Administrators should respect teachers’ hesitation about adopting AI, but also create policies that allow tech-forward educators and students to experiment.

A number of districts have publicly discussed their approaches to AI. Early policies seem to fall into three camps:

Zero Tolerance: Some schools have instructed their students that use of AI tools will not be tolerated. For example, Texas’s Tomball ISD updated its code of conduct to include a brief sentence on AI-enhanced work, stating that any work submitted by a student that has been completed using AI “will be considered plagiarism” and penalized as such.

Active Encouragement: Some schools encourage teachers to use AI tools in their classrooms. Michigan’s Hemlock Public School District provides its teachers with a list of AI tools and suggests that teachers explore which tools work best with their existing curriculum and lessons.

Wait-and-See: Many schools are taking a wait-and-see approach to drafting policies. In the meantime, they are allowing teachers and students to freely explore the capabilities and applications of the current crop of tools and providing guidance as issues and questions arise. They will use the data collected during this time to inform policies drafted in the future.

A recent Brookings report highlighted the confusion around policies for these new tools. For example, the Los Angeles Unified School District blocked ChatGPT from all school computers while simultaneously rolling out an AI companion for parents. Because there isn’t yet clear guidance on how AI tools should be used, educators are receiving conflicting advice on both how to use AI themselves and how to guide their students’ use.

New York City public schools banned ChatGPT, then rolled back the ban, noting that their initial decision was hasty, based on “knee-jerk fear,” and didn’t take into account the good that AI tools could do in supporting teachers and students. They also noted that students will need to function and work in a world in which AI tools are a part of daily life and banning them outright could be doing students a disservice. They’ve since vowed to provide educators with “resources and real-life examples” of how AI tools have been successfully implemented in schools to support a variety of tasks across the spectrum of planning, instruction and analysis.

This response is a good indication that the “Zero Tolerance” approach is waning in larger districts as notable guiding bodies, such as ISTE, actively promote AI exploration.

In addition, the federal government’s Office of Educational Technology is working on policies to ensure safe and effective AI use, noting that “Everyone in education has a responsibility to harness the good to serve educational priorities” while safeguarding against potential risks.

Educators must understand how to use these tools, and how they can help students be better equipped to navigate both the digital and real world.

Related: AI might disrupt math and computer science classes – in a good way

Already, teachers and entrepreneurs are experimenting with ways that GAI can make an impact on teacher practice and training, from lesson planning and instructional coaching to personalized feedback.

District leaders must consider that AI can assist teachers in crafting activity-specific handouts, customizing reading materials and formulating assessment, assignment and in-class discussion questions. They should also note how AI can deter cheating by generating unique assessments for each test-taker.

As with many educational innovations, it’s fair to assume that the emergence of student conduct cases within higher education will help guide the development of GAI use policy generally.

All this underscores both the importance and the complication of drafting such GAI policies, leading districts to ask, “Should we create guidelines just for students or for students and teachers?”

Earlier this year, Stanford’s Board on Conduct Affairs addressed the issue in its policies, clarifying that generative AI cannot be used to “substantially” complete an assignment and that its use must be disclosed.

But Stanford also gave individual instructors the latitude to provide guidelines on the acceptable use of GAI in their coursework. Given the relative murkiness of that policy, I predict clearer guidelines are still to come and will have an impact on those being drafted for K-12 districts.

Ultimately, AI codes of conduct that encourage both smart and responsible use of these tools will be in the best interest of teachers and students.

It will, however, not be enough for schools just to write codes of conduct for AI tools. They’ll need to think through how the presence of AI technology changes the way students are assessed, use problem-solving skills and develop competencies.

Questions like “How did you creatively leverage this new technology?” can become part of the rubric.

Teachers’ and students’ exploration will help identify best practices, debunk myths and champion AI’s responsible use. Developing AI policies for K-12 schools is an ongoing conversation.

Embracing experimentation, raising awareness and reforming assessments can help schools ensure that GAI becomes a positive force in supporting student learning responsibly.

Ted Mo Chen is vice president of globalization for the education technology company ClassIn.

This story about AI tools in schools was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Hechinger’s newsletter.

PROOF POINTS: It’s easy to fool ChatGPT detectors (Sept. 4, 2023)

A high school English teacher recently explained to me how she’s coping with the latest challenge to education in America: ChatGPT.  She runs every student essay through five different generative AI detectors. She thought the extra effort would catch the cheaters in her classroom. 

A clever series of experiments by computer scientists and engineers at Stanford University indicates that her labors to vet each essay five ways might be in vain. The researchers demonstrated that seven commonly used GPT detectors are so primitive that they are both easily fooled by machine-generated essays and prone to improperly flagging innocent students. Layering several detectors on top of each other does little to solve the problem of false negatives and false positives.

“If AI-generated content can easily evade detection while human text is frequently misclassified, how effective are these detectors truly?” the Stanford scientists wrote in a July 2023 paper, published under the banner, “opinion,” in the peer-reviewed data science journal Patterns. “Claims of GPT detectors’ ‘99% accuracy’ are often taken at face value by a broader audience, which is misleading at best.”

The scientists began by generating 31 counterfeit college admissions essays using ChatGPT 3.5, the free version that any student can use. GPT detectors were pretty good at flagging them. Two of the seven detectors they tested caught all 31 counterfeits. 

But all seven GPT detectors could be easily tricked with a simple tweak. The scientists asked ChatGPT to rewrite the same fake essays with this prompt: “Elevate the provided text by employing literary language.”

Detection rates plummeted to near zero (3 percent, on average). 
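For readers curious how such an experiment is wired together, here is a minimal sketch of the two-step generation, assuming access to the OpenAI Python client; the model name and prompts are illustrative stand-ins, not the researchers’ exact setup.

```python
# Minimal sketch (not the researchers' code): generate an essay, then ask the
# model to "elevate" it, mimicking the second-round self-edit prompt.
from openai import OpenAI  # assumes the openai package is installed and an API key is configured

client = OpenAI()

def chat(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # stand-in for the free "ChatGPT 3.5" used in the study
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

draft = chat("Write a 300-word college admissions essay about overcoming a challenge.")
elevated = chat("Elevate the provided text by employing literary language:\n\n" + draft)

print(elevated)  # the second, "elevated" draft is the kind of text that slipped past detectors
```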

I wondered what constitutes literary language in the ChatGPT universe. Instead of college essays, I asked ChatGPT to write a paragraph about the perils of plagiarism. In ChatGPT’s first version, it wrote: “Plagiarism presents a grave threat not only to academic integrity but also to the development of critical thinking and originality among students.” In the second, “elevated” version, plagiarism is “a lurking specter” that “casts a formidable shadow over the realm of academia, threatening not only the sanctity of scholastic honesty but also the very essence of intellectual maturation.”  If I were a teacher, the preposterous magniloquence would have been a red flag. But when I ran both drafts through several AI detectors, the boring first one was flagged by all of them. The flamboyant second draft was flagged by none. Compare the two drafts side by side for yourself. 

Chart: Simple prompts bypass ChatGPT detectors. For ChatGPT 3.5-generated college admissions essays, the detection rates of seven widely used ChatGPT detectors decline markedly after a second-round self-edit prompt (“Elevate the provided text by employing literary language”) is applied. Source: Liang, W., et al., “GPT detectors are biased against non-native English writers” (2023).

Meanwhile, these same GPT detectors incorrectly flagged essays written by real humans as AI-generated more than half the time when the writers were not native English speakers. The researchers collected a batch of 91 practice English TOEFL essays that Chinese students had voluntarily uploaded to a test-prep forum before ChatGPT was invented. (TOEFL is the acronym for the Test of English as a Foreign Language, which is taken by international students who are applying to U.S. universities.) When the researchers ran the 91 essays through all seven ChatGPT detectors, 89 essays were flagged by one or more detectors as possibly AI-generated, and all seven detectors unanimously marked one out of five essays as AI-authored. By contrast, the detectors accurately categorized a separate batch of 88 eighth grade essays submitted by real American students.

My former colleague Tara García Mathewson brought this research to my attention in her first story for The Markup, which highlighted how international college students are facing unjust accusations of cheating and need to prove their innocence. The Stanford scientists are warning not only about unfair bias but also about the futility of using the current generation of AI detectors. 

Chart: Bias in ChatGPT detectors. More than half of the TOEFL (Test of English as a Foreign Language) essays written by non-native English speakers were incorrectly classified as “AI-generated,” while the detectors exhibited near-perfect accuracy on U.S. eighth graders’ essays. Source: Liang, W., et al., “GPT detectors are biased against non-native English writers” (2023).

The reason that the AI detectors are failing in both cases – with a bot’s fancy language and with foreign students’ real writing – is the same, and it has to do with how the detectors work. Detectors are machine learning models that analyze vocabulary choices, syntax and grammar. A widely adopted measure inside numerous GPT detectors is something called “text perplexity,” a calculation of how predictable or banal the writing is. It gauges the degree of “surprise” in how words are strung together in an essay. If the model can predict the next word in a sentence easily, the perplexity is low. If the next word is hard to predict, the perplexity is high.

Low perplexity is a symptom of AI-generated text, while high perplexity is a sign of human writing. My intentional use of the word “banal” above, for example, is a lexical choice that might “surprise” the detector and put this column squarely in the non-AI-generated bucket.

Because text perplexity is a key measure inside the GPT detectors, it becomes easy to game with loftier language. Non-native speakers get flagged because they are likely to exhibit less linguistic variability and syntactic complexity.
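Text perplexity is easy to approximate with an open language model. The sketch below uses GPT-2 from the Hugging Face transformers library rather than any detector’s proprietary model, so treat it only as an illustration of the basic recipe: score how predictable each token is and exponentiate the average loss.

```python
# Rough illustration of "text perplexity" using GPT-2 (not any detector's actual model).
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Encode the text and let the model predict each token from the ones before it.
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels makes the model return the average cross-entropy loss.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return math.exp(loss.item())  # lower = more predictable = more "bot-like" to a detector

plain = "Plagiarism presents a grave threat to academic integrity."
ornate = "A lurking specter casts a formidable shadow over the realm of academia."
print(perplexity(plain), perplexity(ornate))
```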

The seven detectors were created by originality.ai, Quill.org, Sapling, Crossplag, GPTZero, ZeroGPT and OpenAI (the creator of ChatGPT). During the summer of 2023, Quill and OpenAI both decommissioned their free AI checkers because of inaccuracies. OpenAI’s website says it’s planning to launch a new one.

“We have taken down AI Writing Check,” Quill.org wrote on its website, “because the new versions of Generative AI tools are too sophisticated for detection by AI.” 

The site blamed newer generative AI tools that have come out since ChatGPT launched last year.  For example, Undetectable AI promises to turn any AI-generated essay into one that can evade detectors … for a fee. 

Quill recommends a clever workaround: check students’ Google Docs version history, which Google captures and saves every few minutes. A normal document history should show every typo and sentence change as a student is writing. But someone who had an essay written for them – either by a robot or a ghostwriter – will simply copy and paste the entire essay at once into a blank screen. “No human writes that way,” the Quill site says. A more detailed explanation of how to check a document’s version history is here.

Checking revision histories might be more effective, but this level of detective work is ridiculously time-consuming for a high school English teacher who is grading dozens of essays. AI was supposed to save us time, but right now, it’s adding to the workload of time-pressed teachers!

This story about ChatGPT detectors was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Proof Points and other Hechinger newsletters. 

PROOF POINTS: A smarter robo-grader (Feb. 21, 2022)

The best kind of expertise might be personal experience.

When the research arm of the U.S. Department of Education wanted to learn more about the latest advances in robo-grading, it decided to hold a competition. In the fall of 2021, 23 teams, many of them Ph.D. computer scientists from universities and corporate research laboratories, competed to see who could build the best automatic scoring model.

One of the six finalists was a team of just two 2021 graduates from the Georgia Institute of Technology. Prathic Sundararajan, 21, and Suraj Rajendran, 22, met during an introductory biomedical engineering class their freshman year and had studied artificial intelligence. To ward off boredom and isolation during the pandemic, they entered a half dozen hackathons and competitions, using their know-how in machine learning to solve problems in prisons, medicine and auto sales. They kept winning.

“We hadn’t done anything in the space of education,” said Sundararajan, who noticed an education competition on the Challenge.Gov website. “And we’ve all suffered through SATs and those standardized tests. So we were like, Okay, this will be fun. We’ll see what’s under the hood, how do they actually do it on the other side?”

The Institute of Education Sciences gave contestants 20 question items from the 2017 National Assessment of Educational Progress (NAEP), a test administered to fourth and eighth graders to track student achievement across the nation. About half the questions on the reading test were open-response, rather than multiple choice, and humans had scored students’ written answers to them.

Rajendran, now a Ph.D. student at Weill Cornell Medicine in New York, thought he might be able to reuse a model he had built for medical records that used natural language processing to decipher doctors’ notes and predict patient diagnoses. That model relied on GloVe, a set of word embeddings developed by scientists at Stanford University.

Together the duo built 20 separate models, one for each open-response question. First they trained their models by having them digest the scores that humans had given to thousands of student responses on these exact same questions. One, for example, was:  “Describe two ways that people care for the violin at the museum that show the violin is valuable.” 

When they tested their robo-graders, the accuracy was poor.

“The education context is different,” said Sundararajan. Words can have different meanings in different contexts and the algorithms weren’t picking that up.

Sundararajan and Rajendran went back to the drawing board to look for other language models. They happened upon BERT.

BERT is a natural language processing model developed at Google in 2018 (yes, they found it by Googling).  It’s what Google uses for search queries but the company shares the model as free, open-source code. Sundararajan and Rajendran also found another model called RoBERTa, a modified version of BERT, that they thought might be better. But they ran out of time before submissions were due on Nov. 28, 2021.

When the prizes were announced on Jan. 21, it turned out that all the winners had selected BERT, too. The technology is a sea change in natural language processing. Think of it like the new mRNA technology that has revolutionized vaccines.  Much the way Moderna and Pfizer achieved similar efficacy rates with their COVID vaccines, the robo-grading results of the BERT users rose to the top. 
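The winners’ exact configurations aren’t described in this story, but the general recipe for a BERT-style scorer is well established: fine-tune a pretrained model on human-scored responses to a single question, treating the score as the label. Here is a hedged sketch using the Hugging Face transformers library; the dataset contents, column names and hyperparameters are assumptions for illustration, not the contestants’ code.

```python
# Generic sketch of fine-tuning BERT to predict human-assigned scores for ONE question.
# Not the contest winners' code; data, column names and hyperparameters are illustrative.
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

# Pretend we have student responses and human scores (0-3) for a single NAEP-style item.
data = Dataset.from_dict({
    "text": ["The museum keeps the violin in a locked case.", "They clean it carefully."],
    "label": [2, 1],
})

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=4)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

data = data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="scorer", num_train_epochs=3, per_device_train_batch_size=8),
    train_dataset=data,
)
trainer.train()  # in the real contest, thousands of human-scored responses per question were available
```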

“We got extremely high levels of accuracy,” said John Whitmer, a senior fellow with the Federation of American Scientists serving at the Institute of Education Sciences. “With the top three, we had this very nice problem that they were so close, as one of our analysts said, you couldn’t really fit a piece of paper between them.”

Essays and open responses are notoriously difficult to score because there are infinite ways to write and even great writers might prefer one style over another. Two well-trained humans typically agreed on a student’s writing score on the NAEP test 90.5 percent of the time. The best robo-grader in this competition, produced by a team from the Durham, N.C.-based testing company Measurement Inc., agreed with human judgment 88.8 percent of the time, only a 1.7 percentage point greater discrepancy than among humans.

Sundararajan and Rajendran’s model was in agreement with the humans 86.1 percent of the time, 4.4 percentage points shy of the human-to-human agreement rate. That earned them a runner-up prize of $1,250. The top three winners each received $15,000.
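Exact-agreement figures like these are simple to compute once human and machine scores sit side by side; the automated-scoring literature also typically reports chance-corrected statistics such as quadratic weighted kappa. A small sketch with scikit-learn, using made-up scores purely for illustration:

```python
# Toy example of the agreement statistics used to compare robo-graders with humans.
from sklearn.metrics import accuracy_score, cohen_kappa_score

human_scores   = [2, 1, 3, 0, 2, 2, 1, 3]   # made-up scores from a human rater
machine_scores = [2, 1, 3, 1, 2, 2, 1, 2]   # made-up scores from an automated model

exact_agreement = accuracy_score(human_scores, machine_scores)               # share of identical scores
qwk = cohen_kappa_score(human_scores, machine_scores, weights="quadratic")   # chance-corrected agreement

print(f"Exact agreement: {exact_agreement:.1%}, quadratic weighted kappa: {qwk:.2f}")
```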

The older generation of robo-grading models tended to focus on specific “features” that we value in writing, such as coherence, vocabulary, punctuation or sentence length. It was easy to game these grading systems by writing gibberish that happened to meet the criteria that the robo-grader was looking for. But a 2014 study found that these “feature” models worked reasonably well. 
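A crude version of such a feature-based grader can be assembled in a few lines: pull out surface features like length, vocabulary variety and punctuation, then fit a simple regression against human scores. The sketch below illustrates the general idea only; the features, essays and scores are invented, and no vendor’s actual system is represented.

```python
# Toy "feature" robo-grader: hand-crafted surface features + a linear model.
# Illustrates the older approach described above, not any commercial product.
import re
from sklearn.linear_model import LinearRegression

def features(essay: str) -> list[float]:
    words = essay.split()
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    return [
        len(words),                                               # essay length
        len(set(w.lower() for w in words)) / max(len(words), 1),  # vocabulary variety
        sum(essay.count(p) for p in ",;:"),                       # punctuation use
        len(words) / max(len(sentences), 1),                      # average sentence length
    ]

essays = ["Short and plain essay.",
          "A longer essay, with varied vocabulary; it uses clauses, commas and more elaborate phrasing."]
human_scores = [1.0, 3.0]  # made-up training scores

model = LinearRegression().fit([features(e) for e in essays], human_scores)
print(model.predict([features("Another essay to score, of moderate length and variety.")]))
```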

BERT is much more accurate. However, its drawback is that it’s like a black box to laypeople. With the feature models, you could see that an essay scored lower because it didn’t have good punctuation, for example. With BERT models, there’s no information on why the essay scored the way it did.  

“If you try to understand how that works, you’ve got to go back and look at the billions of relationships that are made and the billions of inputs in these neural networks,” said Whitmer. 

That makes the model useful for scoring an exam, but not useful for teachers in grading school assignments because it cannot give students any concrete feedback on how to improve their writing.

BERT models also fell short when it came to building a robo-grader that could handle more than a single question. As part of the competition, contestants were asked to build a “generic” model that could score open responses to any question. But the best of these generic models were able to replicate human scoring only half the time. It was not a success.

The upside is that humans are not going away. At least 2,000 to 5,000 human scores are needed to train an automated scoring model for each open-response question, according to Pearson, which has been using automated scoring since 1998. In this competition, contestants had 20,000 human scores to train their models. The time and cost savings kick in when test questions are re-used in subsequent years. The Department of Education currently requires humans to score student writing and it held this competition to help decide whether to adopt automated scoring on future administrations of the NAEP test.

Bias remains a concern with all machine learning models. The Institute of Education Sciences confirmed that Black and Hispanic students weren’t faring any worse with the algorithms than they were with human scores in this competition. The goal, though, was to replicate the human scores, which could still be influenced by human biases. Race, ethnicity and gender aren’t known to human scorers on standardized exams, but it’s certainly possible to make assumptions based on word choices and syntax. By training the computerized models on scores from fallible humans, we could be baking biases into the robo-graders.

Sundararajan graduated in December 2021 and is now working on blood pressure waveforms at a medical technology startup in California. After conquering educational assessment, he and Rajendran turned their attention to other timely challenges. This month, they won first place in a competition run by the Centers for Disease Control. They analyzed millions of tweets to see who had suffered trauma in the past and whether their Twitter communities were serving as a helpful support group or a destructive influence.

This story about robo-grading was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for the Hechinger newsletter.
