China tests AI program to grade school essays

Image: Pixabay.com/CC0

Mostly without the knowledge of parents and students, school essays at 60,000 schools are being graded and annotated by the software

That artificial intelligence can outperform humans at some, probably many, cognitive tasks is by now obvious. Wherever large amounts of data are available for machine learning, AI programs can also outpace experts. As early as late 2017, a program was unveiled in China that had been fed a flood of data from 53 medical textbooks, 2 million medical records and 400,000 medical texts and reports, as well as clinical experience and case diagnoses. The medical licensing exam posed no difficulty for it; now the smart robot is to assist doctors in making diagnoses, whose work could then be reduced to manual skills, unless surgical robots lend a hand there as well (AI program better than humans at understanding natural language).

China, it has long been known, wants to be at the forefront of developing and using artificial intelligence in all areas. There is also massive testing of sinister programs such as social scoring, which is designed to centrally steer people's individual behavior through rewards and punishments, or nudging, via the analysis of personal data wherever it can be collected. The behaviorists' dream of a better world through "positive" behavioral control, as described by Skinner in Walden Two ("Futurum 2" in German), published in 1948, now seems to be becoming a reality in China.

It has now become known through a report in the newspaper SCMP that an AI program developed at Capital Normal University is already being quietly used in China's schools to check and grade school essays written in Chinese or English. This is not a small trial: allegedly 60,000 schools, a quarter of all schools with 120 million students, already use the AI program, which is said to understand the logic and meaning of texts, provide a human-like assessment of quality, grade essays, and make recommendations to students for improving style, structure, and topic. For example, it flags a paragraph that deviates from the topic.
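The report does not say how such off-topic paragraphs are detected. A minimal sketch of one common approach, comparing each paragraph with the assigned topic via TF-IDF cosine similarity, might look like the following (the function name, example texts, and threshold are purely illustrative assumptions, not details from the report):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def flag_off_topic(topic, paragraphs, threshold=0.1):
    """Return indices of paragraphs whose TF-IDF similarity to the
    assigned topic falls below a (hypothetical) threshold."""
    vectorizer = TfidfVectorizer()
    # Fit on the topic plus all paragraphs so they share one vocabulary.
    matrix = vectorizer.fit_transform([topic] + paragraphs)
    topic_vec, para_vecs = matrix[0], matrix[1:]
    similarities = cosine_similarity(topic_vec, para_vecs)[0]
    return [i for i, s in enumerate(similarities) if s < threshold]

# Illustrative usage: the second paragraph drifts away from the topic.
essay = [
    "The Great Wall was built to protect against northern invasions.",
    "My favourite food is noodles with spicy sauce.",
]
print(flag_off_topic("The history of the Great Wall of China", essay))
```

A production system would use far richer text representations, but the principle, measuring how strongly a passage relates to the prescribed topic, is the same.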

Machine for generating conformity

This is probably not much different from what teachers also do to instill "correct" writing in students, but teachers, even with their national curricula, differ individually in how they teach and how they evaluate the performance of the students they encounter in the school day. Now the disciplining is done centrally and nationwide through evaluation and correction by an AI program. This increases comparability and is therefore perhaps fairer, but it demands conformity and punishes deviation. The effect might be that the innovative AI technology simply eradicates innovation or creativity. According to a document obtained by the SCMP, however, the grades given by the AI system and by teachers are said to agree 92 percent of the time.

Of course, as is always said to reassure, the AI technology is not intended to replace human teachers, but only to support them by expanding their capacities. The promise is that teachers will have to spend less time correcting school essays, and that the assessments will be free of "human" mistakes made through inattention or unconscious bias. And such a program would supposedly benefit schoolchildren in remote areas, helping them improve their writing skills more quickly, because, it is claimed, hardly any teachers want to go there.



The AI program was deployed without the knowledge of most of those involved, something that only an authoritarian regime can afford. The secrecy also shows that the government is worried about scaring off citizens by letting smart machines take over the schools. In most schools, parents and children were not informed, only certain staff were allowed to access the program's computers, and the results of the tests were kept strictly secret.

AI programs become a black box

The schools contacted by the SCMP did not want to give any further information, apart from the fact that the AI program still makes mistakes; teachers say that a brilliant text sometimes receives only poor grades. The program is said not to be used yet for grading essays that count toward school-leaving exams.

As usual, the program is based on sifting through masses of school essays and comparing its grades and comments with those of teachers. The program is also said to have built up its own "knowledge bank" independently. One scientist, who did not want to be named, said that the AI program has become a black box, which is increasingly the case with machine learning: "It has continuously evolved and become so complex that we no longer know for sure what it thinks and how it makes an assessment."
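The report gives no technical details, but the approach it describes, learning to reproduce teacher grades from large numbers of already-graded essays, is classic supervised learning. A minimal sketch under that assumption (all example essays, scores, and names below are invented for illustration) could look like this:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Hypothetical training data: essays together with grades assigned by teachers.
essays = [
    "The Great Wall protected China from invasions for centuries...",
    "Pollution in big cities has many causes and consequences...",
    "My holiday was nice. We went somewhere. It was fun.",
]
teacher_grades = [85, 78, 40]  # illustrative scores on a 0-100 scale

# Learn a mapping from text features to the grades teachers gave.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge())
model.fit(essays, teacher_grades)

# Predict a grade for a new, unseen essay in the same way.
new_essay = "The Great Wall is one of the most famous structures in history..."
print(model.predict([new_essay]))
```

Agreement with teachers, such as the 92 percent figure cited above, would then be measured by comparing predictions like these with held-out teacher grades; deep neural models trained this way are considerably more capable, and correspondingly harder to inspect.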

There has long been a debate about AI because one can only control inputs and outputs and optimize results without knowing what is going on inside the black box (The Spirits We Call: Artificial Intelligence Algorithms as the New Alchemy; Psychologists for Artificial Intelligence).