More on the alleged cheating case at the Zadar Open
In a week when the biggest story in the mainstream sports media is Lance Armstrong possibly admitting to doping in an interview with Oprah Winfrey, the chess world has a cheating story of its own that keeps dragging on. Ken Regan, Associate Professor of Computer Science and Engineering, has analyzed the games and written a piece addressing the question "What constitutes evidence of cheating?" Meanwhile, the Association of Chess Professionals (ACP) has started an open petition against cheating.
The discussion about cheating in chess has a long history, and interest in the topic tends to rise and fall in waves. After the alleged cheating incident at the Zadar Open, the subject is on many people's minds again.
To recap: the story concerns Borislav Ivanov, an untitled player rated 2227, who scored 6/9 for a 2697 performance rating, including victories over GMs Bojan Kurajica, Robert Zelcic, Zdenko Kozul and Ivan Saric. Arbiters searched Ivanov, but found no evidence of direct or indirect assistance from a chess engine.
It's clear that many chess fans feel strongly about this topic: the article has generated 264 comments already. In the very first, the famous chess columnist Leonard Barden wrote:
1 Borislav Ivanov is probably the first adult (as opposed to a junior talent) with a confirmed low rating ever to achieve a 2600+ GM norm performance in an event of nine rounds or more, playing highly rated opponents throughout. (...)
2 Borislav Ivanov is the first player ever to successfully cheat at a major tournament over multiple rounds without the cheating mechanism being detected.
In the comments that followed, many seemed to agree with Barden and some of them pointed out a high correlation between moves suggested by strong engines and the ones played by Ivanov.
Five days after we reported about this, the story was picked up by Chessbase as well, and this prompted Bulgarian IM Valeri Lilov to do his own analysis. He posted his work on YouTube – a video of over an hour – in which he speaks of "the cheating incident in Zadar". Lilov also comes to the conclusion that too many moves are similar to the computer's suggestions. (A few days ago Lilov posted another video in which he reacts to feedback.)
The problem here is of course that there is no hard proof. Ivanov was searched, but nothing was found. Therefore, the big question is: what constitutes evidence of cheating, when no electronic device is found and someone plays 400+ Elo points above his level?
This is not a new question either, and in fact it is something that Ken Regan, Associate Professor at the Department of Computer Science and Engineering of the University at Buffalo, has been studying for years. Regan has created a model that falls broadly into the category of predictive analytics. As explained here, the model calculates probabilities for individual moves, based on the skill of the player making them, which makes it possible to predict means and variances for large enough aggregates of moves. The rate of agreement with a computer is one statistic the model can predict for a non-cheating player; the rate of blunders is another.
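Regan's actual model is considerably more sophisticated, but the core idea, comparing an observed engine-match rate against the rate expected for a player of a given strength, can be sketched as a simple binomial significance test. The 55% baseline match rate below is a made-up illustrative figure, not a number taken from Regan's work:

```python
from math import sqrt

def match_rate_zscore(matches, total_moves, expected_rate):
    """Z-score of an observed engine-match rate against the rate
    expected for a non-cheating player of the given strength,
    using the binomial standard error under the null hypothesis."""
    observed = matches / total_moves
    se = sqrt(expected_rate * (1 - expected_rate) / total_moves)
    return (observed - expected_rate) / se

# Illustrative only: suppose a 2300-rated player is expected to match
# the engine's first choice on 55% of moves. Matching on 250 of 300
# moves would then be a roughly ten-sigma outlier.
z = match_rate_zscore(250, 300, 0.55)  # ≈ 9.86
```

A real test must also account for factors Regan emphasizes, such as how forcing the positions are, which is why a raw match percentage on its own proves nothing.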
The Regan test
After he learned of the Ivanov case, Regan decided to run tests. It took him two days to complete the main test with two top engines, Rybka 3 and Houdini 3, along with several supporting analyses and his statistical analyzer. He then spent another week writing a report (PDF here), which he sent to two officials of the Association of Chess Professionals (ACP), Emil Sutovsky and Bartlomiej Macieja. Regan concludes:
My model projects that for a 2300 player to achieve the high computer correspondence shown in the nine tested games, the odds against are almost a million-to-one. The control data and bases for comparison, which are wholly factual, show several respects in which the performance is exceptional even for a 2700-player, and virtually unprecedented for an untitled player.
Regan addressed the ACP with two questions:
1. What procedures should be instituted for carrying out statistical tests for cheating with computers at chess and for disseminating their results? Under whose jurisdiction should they be maintained?
2. How should the results of such tests be valued? Under what conditions can they be regarded as primary evidence? What standards should there be for informing different stages of both investigative and judicial processes?
The danger with analyses like Regan's is that we might end up damaging our sport even more than the cheaters do. At some point nobody will be able to score an exceptional result without being frowned upon, or simply accused of cheating. This point was also raised in the long Slashdot thread about this story:
Lance Armstrong was initially judged by the USADA to have used PEDs based not on testing results, but on the testimony of former teammates, some of whom failed their own tests, and may have had an ax to grind. First, because they feel they may have been singled out because of their association with Armstrong, second because they may have been pressured by Armstrong or the relationship to use PEDs, third because they may actually have witnessed Armstrong either taking PEDs or encouraging it, and fourth ALL of the above. The end result is that no one in cycling at the international level will be able to withstand the mere accusations. Non-analytical positives will become the norm. Every champion will be suspect, unless 100% testing is done, and then, as in Armstrong's case, new tests will be conducted on previously collected samples, in effect finding athletes guilty in arrears for using PEDs not yet known. Eventually coffee and Gatorade will be banned. And this will stain cycling to the point that fans like myself will turn away.
Chess will go this route. No Master of any rank will be allowed to exceed their 'reasonable' ability. Analysis will be conducted, perhaps electronic surveillance will be used to both check for transmissions and as forensics to be subjected to detailed analysis, suspects will be accused, strip-searched, imaged, run through the metal detectors, scrutinized, and judged guilty based on non-analytical positives. Chess will devolve into the meanest of states, blood sport not for the winners, but for the losers. I expect past upsets to be scrutinized for problems and winners discredited, even posthumously.
Regan wrote about this aspect:
I share the worry of many that a few cases of people “being clever” may ruin much pleasure.
In his letter to the ACP he added:
The point of approaching ACP is to determine how the contexts and rules should be set for chess. The goals, shared by Haworth and others I have discussed this with, include:
(a) To deter prospective cheaters by reducing expectations of being able to get away with it.
(b) To test accurately such cases as arise, whether in supporting or primary role, as part of uniform procedures recognized by all as fair.
(c) To educate the playing public about the incidence of deviations that arise by chance, and their dependence on factors such as the forcing or non-forcing quality of their games.
(d) To achieve transparency and reduce the frequency of improper accusations.
(e) Finally, hopefully to avert the need for measures, more extreme than commonly recognized ones, that would tangibly detract from the enjoyment of our game by players and sponsors and fans alike.
Meanwhile, the ACP has started an open petition against cheating.
We, the undersigned chess professionals and regular competitors in FIDE rated events, share the view that computer-assisted cheating is a major problem in chess and ask the ACP to address FIDE in order to take all necessary steps for fighting this plague.
You can sign the petition here.