Technology for Literacy and Learning

By Anna Lynn Spitzer

Irvine, CA, August 22, 2014 — Professor Mark Warschauer has one foot firmly planted in the world of education and the other in information technology. In addition to holding academic appointments at UC Irvine in both fields, he serves as associate dean of the School of Education and directs its Digital Learning Lab. Warschauer spoke this week to attendees at this summer’s last SURF-IT Symposium about his research, specifically four ongoing projects that use technology to enhance teaching and learning in K-12 classrooms.

The first project uses natural language processing to enhance reading skills. In the last 20 to 30 years, nearly all professional and academic writing has moved from paper to computer screen, he said, and a similar transition is underway in reading instruction. “That creates a lot of possibilities for new forms of scaffolding and supporting reading,” he told the group.

One approach is a system known as visual-syntactic text formatting, or VSTF. The system automatically breaks blocks of text at phrase and clause boundaries into shorter, poetry-like lines. It can also colorize certain words, verbs for example, for easier processing. “The theory is that this more closely matches how the eyes and mind take in information,” Warschauer said.
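
A rough sense of what VSTF-style line breaking involves can be sketched in a few lines of Python. The snippet below is only an illustrative heuristic, not the actual VSTF algorithm (which relies on full syntactic parsing); it splits text at punctuation and common clause markers and wraps the result into short, cascading lines.

```python
import re
import textwrap

# Illustrative heuristic only; real VSTF uses full syntactic parsing
# to locate phrase and clause boundaries.
CLAUSE_MARKERS = r"(?<=[,;:])\s|\s(?=(?:and|but|or|because|which|that|when|while)\b)"

def vstf_like_format(text, max_len=38):
    """Break a block of text into short, poetry-like lines."""
    chunks = re.split(CLAUSE_MARKERS, text.strip())
    lines = []
    for chunk in chunks:
        # Wrap any phrase that is still too long for one display line.
        lines.extend(textwrap.wrap(chunk.strip(), width=max_len))
    # Indent alternating lines to hint at the cascading layout.
    return "\n".join("  " * (i % 3) + line for i, line in enumerate(lines))

sample = ("The system automatically breaks blocks of text at phrase and "
          "clause boundaries into shorter lines, which more closely match "
          "how the eyes and mind take in information.")
print(vstf_like_format(sample))
```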

Studies of the approach have yielded somewhat mixed, yet largely positive, results. College students who read science texts in VSTF performed slightly better on comprehension tests than those who read standard block text. Elementary school children who read English and social studies lessons in VSTF for 50 minutes per day scored higher on the California standardized tests given at year-end. “The theory is that they engaged more with the text; they understood the structure better,” Warschauer said. Researchers also expected the children’s overall reading comprehension to increase, but test results showed little change.

The study did indicate that the children tended to be more engaged with VSTF than with block text. Warschauer is planning a much larger study in which students will read either block-format or VSTF text on Chromebooks and iPads.

His second project investigates automated scoring programs for writing competency. Writing is widely recognized as a key aspect of learning, and feedback is vital for those learning to write. But large classes and complex writing assignments can leave teachers overwhelmed. So researchers have developed machine-learning tools that can evaluate and score word problems or essays and return comments to fledgling writers.

Warschauer is partnering with LightSide, a company that develops these scoring systems. The systems work by comparing a wide range of features in student writing to those in essays already scored by humans, looking for correlations. No single feature stands up to scrutiny on its own, but taken together they can be quite effective. In fact, once the system has been trained on enough human-scored exemplars (the number depends on the difficulty of the prompt), the machine-learning program comes closer to the original human score than a second human rater does. Some writing, world history essays for example, which are very content-laden, can require thousands of exemplars before the machine tools catch up to the human scorers. But after that, research indicates, the two are very comparable.
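
The article does not describe LightSide’s internals, so the following is only a minimal sketch of the general technique it alludes to: fit a model to many weak textual features of essays that humans have already scored, then use it to predict scores for new essays. The essays, scores, and feature choices here are hypothetical, and scikit-learn simply stands in for whatever machinery the real system uses.

```python
# Minimal sketch of machine-scored essays (not LightSide's pipeline):
# learn to predict human-assigned scores from many weak textual
# features, none of which is reliable on its own.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Hypothetical training data: essays already scored by human raters.
essays = ["The French Revolution began in 1789 because ...",
          "Photosynthesis is how plants make food ...",
          "In my opinion the main cause was ..."]
human_scores = [4.0, 3.0, 2.0]

# Word n-grams stand in for the "wide range of components" mentioned
# in the article; a linear model combines their weak signals.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    Ridge(alpha=1.0),
)
model.fit(essays, human_scores)

new_essay = "The revolution started when the king lost power ..."
print("Predicted score:", round(float(model.predict([new_essay])[0]), 2))
```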

LightSide’s systems, says Warschauer, can identify portions of an essay that contribute to a high or low score and provide feedback to the students. He calls the programs “fallible tools,” saying they aren’t perfect but can help students revise their work when a teacher isn’t available. There is great potential for using these systems in MOOCs (massive open online courses), where thousands of papers must be graded.

Warschauer told the audience he hasn’t yet tested the LightSide systems in schools but is applying for funding to do so. “We really want to try to get dialog from students,” he explained. “We want to make it more reflective to emphasize more critical and reflective skills and not just fixing the template.”

Project number three analyzes student writing in Google Docs. In collaboration with ICS professor Judy Olson, Warschauer and a group of students developed a software program called “Scapes,” which compiles data about students’ writing in Google Docs. Admittedly “crude,” the program collects time stamps, numbers of writers, word counts, numbers of revisions and other straightforward data. “It’s pretty simplistic, but it allows us to do quantitative analysis on a large scale,” Warschauer said.
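
Scapes itself is not described in detail, but the kind of “straightforward data” it collects is easy to picture. The sketch below assumes revision metadata has already been exported from a Google Doc into simple records (a hypothetical format, not the actual tool or the Google Drive API) and computes the same sorts of counts the article lists.

```python
from collections import Counter
from datetime import datetime

# Sketch in the spirit of "Scapes" (assumed data format, not the real
# tool): summarize revision records exported from a Google Doc.
revisions = [  # hypothetical export: one record per saved revision
    {"time": "2014-03-01T09:12:00", "author": "student_a", "word_count": 120},
    {"time": "2014-03-01T09:40:00", "author": "student_b", "word_count": 180},
    {"time": "2014-03-02T10:05:00", "author": "student_a", "word_count": 260},
]

def summarize(revs):
    times = [datetime.fromisoformat(r["time"]) for r in revs]
    return {
        "num_revisions": len(revs),
        "num_writers": len({r["author"] for r in revs}),
        "final_word_count": revs[-1]["word_count"],
        "words_added": revs[-1]["word_count"] - revs[0]["word_count"],
        "editing_span_hours": (max(times) - min(times)).total_seconds() / 3600,
        "revisions_per_writer": Counter(r["author"] for r in revs),
    }

print(summarize(revisions))
```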

Currently, the project is analyzing correlations between how much students write, revise and collaborate, and changes in their attitudes toward writing and their writing test scores. Future goals include a dashboard that would let teachers see at a glance the writing and revising their students are doing. Warschauer would like to see the dashboard combined with open-source vocabulary and syntactic analysis tools for even better evaluation. “I feel like we’re in the infancy of all this stuff, and over the next [few decades] there will be a lot of developments.”

Lastly, he discussed a partnership with a Carnegie Mellon professor who is an expert in computer-mediated collaborative learning and automated agents. Students benefit from mediated discussions, in which someone helps them delve deeper, summarize, explain their thoughts, and compare and contrast their ideas, a practice known as academically productive talk.

But this can be challenging in large classrooms. “So the idea is to develop computer-mediated agents that can help” kids with these discussions, Warschauer said. For example, one automated agent that monitors online discussions asks the children at certain points to “revoice” their comments: “Could you summarize that? Can you explain that better? Do you agree or disagree with what [your classmate] just said?”
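
The agents themselves come from the Carnegie Mellon collaborator’s work and are not detailed here, but a toy version of the idea is easy to sketch: a rule that watches a discussion thread and injects a revoicing prompt at certain points. Everything below (the trigger, the prompts, the message format) is hypothetical.

```python
import random

# Toy rule-based sketch of a "revoicing" agent (not the Carnegie
# Mellon system): watch an online discussion and, at certain points,
# prompt a student to restate, explain, or respond to a classmate.
PROMPTS = [
    "Could you summarize that in your own words?",
    "Can you explain that better?",
    "Do you agree or disagree with what {peer} just said? Why?",
]

def agent_response(messages, every_n=3):
    """Return a prompt after every `every_n` student messages, else None."""
    if len(messages) == 0 or len(messages) % every_n != 0:
        return None
    last_author = messages[-1]["author"]
    return random.choice(PROMPTS).format(peer=last_author)

discussion = [
    {"author": "Maya", "text": "I think the cell needs the mitochondria for energy."},
    {"author": "Leo",  "text": "Yeah, it makes ATP."},
    {"author": "Maya", "text": "So without it the cell would just stop?"},
]
print(agent_response(discussion))  # e.g. "Could you summarize that in your own words?"
```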

Warschauer’s group is just beginning a project with its Carnegie Mellon collaborator to develop a one-year online biology course for high school students. Students logging in to the course site will be prompted to identify or explain what they see on the screen, with automated agents mediating those discussions.

“This work is just starting; we’re working on a grant proposal,” Warschauer concluded. “But I wanted to include it in today’s presentation to show the broad ways that text mining, natural language processing and automated agents can be used in literacy and learning instruction.”