
How to spot a potential cheat (MEERI News)

A few years ago, the chess website Chess.com temporarily banned US grandmaster Hans Niemann for playing online chess moves that the site suspected had been suggested to him by a computer program. It had reportedly banned his mentor, grandmaster Maxim Dlugy, earlier.

And at the Sinquefield Cup earlier this month, world champion Magnus Carlsen withdrew without comment after a losing game against 19-year-old Niemann. He has since said that this was because he believes Niemann has continued to cheat recently.

Hans Niemann.
Wikipedia, CC BY-SA

Another participant, Russian grandmaster Ian Nepomniachtchi, called Niemann’s performance “more than impressive”. While Niemann has admitted to cheating several times in previous online games, he has denied ever cheating in a live chess tournament.

But how does the world’s largest chess website, Chess.com, determine whether a player may have cheated? It cannot show the world the code it uses, otherwise cheaters would know how to evade detection. The website states:

Although legal and practical considerations prevent Chess.com from revealing the full set of data, metrics and tracking used to evaluate games in our fair-play tool, we can say that at the core of Chess.com’s system is a statistical model that evaluates the probability of a human player matching an engine’s top choices, and surpassing the confirmed clean play of some of history’s greatest chess players.

Fortunately, research can shed light on the kind of approach such a website is likely to be using.

Human vs AI

When AI company DeepMind developed the AlphaGo program, which can play the strategy game Go, it was taught to predict what moves a person would make from any given situation.

Predicting human behaviour is a supervised learning problem – the bread and butter of machine learning. Given many examples of situations from human games (the dataset) and an example of the human move made in each such situation (the labels), machine learning algorithms can be trained to predict the labels on new data points. So DeepMind taught its AI to estimate the probability that a human would make any given move from any given situation.
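This supervised set-up can be sketched with a toy model. Everything here is hypothetical: positions and moves are opaque strings, and a real system would use rich board features and a far more powerful model. The idea is simply to estimate P(move | position) from observed (position, human move) pairs.

```python
from collections import Counter, defaultdict

# Toy supervised "move prediction" model: estimate P(move | position)
# by counting how often humans played each move in each position.
def train(games):
    counts = defaultdict(Counter)
    for position, move in games:
        counts[position][move] += 1

    def predict_proba(position, move):
        c = counts[position]
        total = sum(c.values())
        return c[move] / total if total else 0.0

    return predict_proba

# Hypothetical training data: in position "p1" humans usually play "e4".
data = [("p1", "e4"), ("p1", "e4"), ("p1", "d4"), ("p2", "Nf3")]
model = train(data)
print(model("p1", "e4"))  # 2 of the 3 observed moves in "p1"
```

A frequency table is, of course, only the simplest possible stand-in for the neural networks such systems actually use, but it captures the same contract: position in, probability of each human move out.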

AlphaGo beat the human champion Lee Sedol in 2016. One of the AI’s famous moves in the match was “Move 37”. As lead researcher David Silver noted in the documentary AlphaGo, “AlphaGo said there was a 1/10,000 chance that move 37 would have been played by a human player.”

Sedol’s reaction to move 37.
YouTube, CC BY-SA

So, according to that machine learning model of human Go players, if you saw someone play move 37, it would be evidence that they didn’t come up with the idea themselves. But of course it wouldn’t be proof. Any human could conceivably make that move.

To be highly confident that someone is cheating, you have to look at many moves. Researchers have investigated, for example, how many of a player’s moves can be analysed together to detect anomalies.
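One simple way to combine evidence across moves (a sketch, not Chess.com's actual method) is to add up the "surprise" of each move: the negative log of the probability a human would play it, taken from a move-prediction model like the one above. One improbable move contributes little; a long run of them produces a large total.

```python
import math

# Aggregate surprise over a game: sum of -log(probability that a human
# would play each observed move). A single unlikely move proves nothing,
# but a long sequence of unlikely moves yields a large total surprise.
def total_surprise(move_probs):
    return sum(-math.log(p) for p in move_probs)

# Hypothetical per-move human probabilities for two games.
human_like = [0.5, 0.3, 0.4, 0.6]        # plausible human choices
engine_like = [0.01, 0.005, 0.02, 0.01]  # rare, engine-preferred moves
print(total_surprise(human_like) < total_surprise(engine_like))  # True
```

In practice a detector would compare such a score against the distribution seen in confirmed clean games before flagging anyone.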

Chess.com openly uses machine learning to predict what moves a human might make in any given situation. In fact, it has separate models of individual famous chess players, and you can actually play against them. Presumably, similar models are used to detect cheating.

A recent study suggests that, in addition to predicting how likely a human would be to make a certain move, it is also important to consider how good the move is. This is consistent with Chess.com’s statement that it evaluates whether a player’s moves “surpass … the confirmed clean play” of great players.

But how do you measure which moves are better than others? In theory, a chess position is either “winning” (you can guarantee a win), “losing” (the other player can) or “drawn” (neither), and a good move is one that does not make your position’s status any worse. But in practice, although computers are much better than humans at calculating future moves, for many positions they cannot tell with certainty whether a position is winning, losing or drawn. And they could certainly never prove it – a proof would usually require far too many calculations, examining every leaf of an exponentially growing game tree.
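A quick back-of-the-envelope calculation shows why exhaustive proof is out of reach. The figures below are rough, commonly cited estimates (not exact values): around 30 legal moves per position and games lasting around 80 plies.

```python
# Back-of-the-envelope game-tree size: with a branching factor of
# roughly 30 legal moves per position and games of about 80 plies,
# the number of leaves dwarfs the number of atoms in the observable
# universe (~10^80), so checking every line is hopeless.
branching_factor = 30
plies = 80
leaves = branching_factor ** plies
print(leaves > 10 ** 100)  # True
```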

So people and computers use “heuristics” (rules of thumb) to evaluate the “value” of different positions – predicting which player will win. This, too, can be cast as a machine learning problem: the dataset contains many board positions, the labels record who won, and the algorithm is trained to predict who will win from a given position.
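Here is a deliberately tiny sketch of that idea (all data and the single feature are hypothetical): learn a value estimate from labelled outcomes by tracking the observed winning frequency for each value of one board feature, material difference.

```python
from collections import defaultdict

# Toy "value heuristic" learned from labelled outcomes: for each value
# of a single feature (material difference), estimate the frequency with
# which White won in the dataset. A real model would use many features
# and generalise between them rather than memorise a lookup table.
def fit_value(dataset):
    wins = defaultdict(int)
    totals = defaultdict(int)
    for material_diff, white_won in dataset:
        totals[material_diff] += 1
        wins[material_diff] += white_won

    def value(material_diff):
        t = totals[material_diff]
        return wins[material_diff] / t if t else 0.5  # unseen: even odds

    return value

# Hypothetical labelled positions: (material difference, did White win?)
data = [(3, 1), (3, 1), (3, 0), (0, 1), (0, 0), (-3, 0)]
value = fit_value(data)
print(value(3))  # White won 2 of the 3 such positions
```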

Typically, the machine learning models used for this purpose look ahead through the next few possible moves, consider which positions are reachable for both players, and then use their “gut feeling” about those future positions to inform the assessment of the current position.
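That look-ahead-then-evaluate pattern is classically implemented as depth-limited minimax. The sketch below uses an abstract, hypothetical game: `moves` and `heuristic` are stand-in functions you would replace with real move generation and a learned evaluation.

```python
# Minimal depth-limited minimax: search a few moves ahead, then fall
# back on a heuristic evaluation ("gut feeling") at the horizon.
def minimax(state, depth, maximizing, moves, heuristic):
    children = moves(state)
    if depth == 0 or not children:
        return heuristic(state)
    values = (minimax(c, depth - 1, not maximizing, moves, heuristic)
              for c in children)
    return max(values) if maximizing else min(values)

# Tiny hypothetical game tree encoded as a dict, with leaf scores.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
scores = {"a1": 3, "a2": 5, "b1": -2, "b2": 9}
best = minimax("root", 2, True,
               lambda s: tree.get(s, []),
               lambda s: scores.get(s, 0))
print(best)  # maximiser picks branch "a": min(3, 5) = 3 beats min(-2, 9)
```

Modern engines refine this with pruning and much deeper searches, but the division of labour is the same: explicit calculation near the root, heuristic judgement at the leaves.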

World champion Magnus Carlsen of Norway.
Leszek Szymanski/EPA

But who wins from a given position depends on how good the players are. So the model’s evaluation of a particular position will depend on whose games made it into the training dataset. Generally, when chess commentators talk about the “objective value” of various positions, they mean who would win from a given position if both sides were played by the best chess AI available. But this measure of value isn’t always the most useful when the position will ultimately have to be played out by humans. So it isn’t entirely clear what Chess.com (or anyone else) should consider a “good move”.

If I were cheating at chess and played some moves suggested by a chess engine, it might not even help me win. Those moves might be setting up a brilliant attack that I would never spot, so I’d squander the advantage unless I asked the chess engine to play the rest of the game for me. (Lichess.org tells me I’ve played 3,049 blitz games at the time of writing, and my middling rating of 1632 means you can expect me to miss good moves left and right.)

Cheating is hard to detect. If you’re playing online and wondering whether your opponent is cheating, you won’t really be able to tell with any confidence – because, unlike the models, you haven’t seen millions of human games played in different styles. This is where machine learning models trained on large amounts of data have a huge advantage. Ultimately, they may prove vital to the ongoing integrity of chess.
