I am not sure what the user above is thinking, but to play devil’s advocate:
One thing that modern AI does well is pattern recognition. An AI trained on player behavior, from beginner level all the way up to professional play, could acquire a thorough understanding of what human performance looks like (games have been working on this for a long time, trying to make bots simulate player behavior more accurately).
I remember someone running their own litmus test in Tarkov: they used cheats themselves, but only to observe the patterns of other players who were cheating. There are a lot of tells, a big one being players reacting to opponents who are still obscured by walls. Another is the way aimbots instantly snap and lock onto headshots.
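To make that "snap" tell concrete, here is a rough sketch of how you might flag it from replay data: compute the angular distance the crosshair covers per tick and look for a huge jump followed by an immediate near-freeze. The tick length (~16 ms at 64 tick), the thresholds, and the yaw/pitch input format are all illustrative assumptions, not any game's real telemetry.

```python
import numpy as np

SNAP_DEG_PER_TICK = 30.0   # assumed: >30 degrees in one ~16 ms tick is suspicious
SETTLE_DEG_PER_TICK = 0.5  # assumed: near-zero movement immediately after the snap

def angular_delta(yaw_a, pitch_a, yaw_b, pitch_b):
    """Great-circle angle (degrees) between two view directions given as yaw/pitch."""
    a = np.radians([yaw_a, pitch_a])
    b = np.radians([yaw_b, pitch_b])
    # Convert yaw/pitch to unit vectors, then take the angle between them.
    va = np.array([np.cos(a[1]) * np.cos(a[0]), np.cos(a[1]) * np.sin(a[0]), np.sin(a[1])])
    vb = np.array([np.cos(b[1]) * np.cos(b[0]), np.cos(b[1]) * np.sin(b[0]), np.sin(b[1])])
    return np.degrees(np.arccos(np.clip(np.dot(va, vb), -1.0, 1.0)))

def find_snap_ticks(yaw, pitch):
    """Return tick indices where a huge aim jump is followed by a near-freeze."""
    deltas = [angular_delta(yaw[i], pitch[i], yaw[i + 1], pitch[i + 1])
              for i in range(len(yaw) - 1)]
    suspicious = []
    for i in range(len(deltas) - 1):
        if deltas[i] > SNAP_DEG_PER_TICK and deltas[i + 1] < SETTLE_DEG_PER_TICK:
            suspicious.append(i + 1)
    return suspicious
```

A heuristic like this would only ever be one signal among many, since talented players also flick hard; the point is that the pattern is measurable at all.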
It should be possible to implement a system that flags players whose behavior looks too unlike a normal human's, cross-referencing it with other metadata (account age, region, sudden performance anomalies, etc.) to make a more educated determination about whether someone is likely cheating, without resorting to kernel-level spying or other privacy-invasive methods.
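As a hedged sketch of that server-side idea: fit an unsupervised outlier detector on per-match behavioral stats plus account metadata, and surface the most anomalous accounts for human review rather than auto-banning. The feature names and numbers below are made up for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one player-match. Columns (all illustrative):
# [headshot_ratio, mean_time_to_damage_ms, pct_kills_through_smoke,
#  account_age_days, rank_delta_last_30_days]
X = np.array([
    [0.22, 410.0, 0.01,  900,  1],
    [0.25, 380.0, 0.02,  340,  0],
    [0.81,  95.0, 0.18,    3,  7],   # the kind of row we hope stands out
    [0.19, 450.0, 0.00, 1500, -1],
])

model = IsolationForest(contamination=0.05, random_state=0)
model.fit(X)

scores = model.decision_function(X)   # lower = more anomalous
order = np.argsort(scores)            # most suspicious first
for i in order[:2]:
    print("review candidate:", X[i], "anomaly score:", round(scores[i], 3))
```

Keeping a human in the loop matters here, because an unsupervised model will happily flag unusual-but-legit players (smurfs, returning pros) right alongside cheaters.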
But then this method runs the risk of eventually being outmatched by the same kind of model that powers it: an AI trained on professional human play could accurately simulate human input and behave like a high-performing player, without requiring the tools a human cheater needs.
Cheating humans already perform closely enough to trick such a system. Many cheaters are smart enough to use an aimbot only for a split second to nail the flick. With a tiny bit of random offset, those inputs are indistinguishable from a high-skill player's.
These tricks may make the inputs indistinguishable to a human moderator, but machine learning is actually very good at detecting exactly that kind of pattern. Most companies, though, don't have the expertise, resources, or training data to build a proper model for it.
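One reason this is more tractable than it sounds: even a "humanized" aimbot tends to distort aggregate distributions, e.g. flick durations that cluster unnaturally tight and fast. A simple statistical comparison against a baseline built from known-legit players can surface that. The data below is synthetic and the baseline and cutoff are assumptions, not a real policy.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Flick durations in ms: legit players are slower and more variable,
# while an assisted player's flicks cluster unnaturally tight and fast.
legit_baseline = rng.normal(loc=220, scale=60, size=5000)
player_flicks  = rng.normal(loc=140, scale=15, size=200)

stat, p_value = ks_2samp(player_flicks, legit_baseline)
if p_value < 1e-6:   # illustrative cutoff, not a real ban threshold
    print(f"flick timing deviates from baseline (KS={stat:.2f}, p={p_value:.1e})")
```

The hard part is exactly what the comment says: getting clean, labeled baselines at scale, which is why only a handful of studios can field this well.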