I am not sure what the user above is thinking, but to play devil’s advocate:
One thing modern AI does well is pattern recognition. An AI trained on player behavior, from beginner level all the way up to professional play, could build a thorough picture of what human performance looks like (something game studios have been working toward for a long time anyway, in trying to make bots simulate player behavior more convincingly).
I remember someone running their own litmus test with cheats in Tarkov, where the main goal was just to observe the patterns of other players who were cheating. There are a lot of tells: a big one is reacting to players who are still obscured by walls; another is the way aimbots instantly snap and lock onto heads.
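To make the snap tell concrete, here’s a rough sketch of the kind of heuristic you could run over aim telemetry. The data format, window size, and threshold are all made up for illustration, not taken from any real game’s API:

```python
import math

# Hypothetical per-tick aim telemetry: (timestamp_s, yaw_deg, pitch_deg).
# Field layout and thresholds are illustrative assumptions.
SNAP_DEG_PER_S = 2500.0  # rotation speed rarely produced by a human wrist
WINDOW_S = 0.05          # inspect the 50 ms leading up to a kill

def _wrap(deg):
    """Map an angle difference into [-180, 180) to handle yaw wraparound."""
    return (deg + 180.0) % 360.0 - 180.0

def peak_aim_speed(samples, kill_time):
    """Peak angular speed (deg/s) in the window just before a kill."""
    window = [s for s in samples if kill_time - WINDOW_S <= s[0] <= kill_time]
    peak = 0.0
    for (t0, y0, p0), (t1, y1, p1) in zip(window, window[1:]):
        dt = t1 - t0
        if dt <= 0:
            continue
        dist = math.hypot(_wrap(y1 - y0), p1 - p0)  # degrees moved this tick
        peak = max(peak, dist / dt)
    return peak

def looks_like_snap(samples, kill_time):
    return peak_aim_speed(samples, kill_time) > SNAP_DEG_PER_S
```

A hard threshold like this is trivially evaded (just slow the snap down a little), which is why you’d want statistical flagging like the below rather than any single rule.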
It should be possible to implement a system that flags players whose behavior is too far from normal human play, cross-referencing other metadata (account age, region, sudden performance anomalies, etc.) to make a more educated determination of whether someone is likely cheating, without having to resort to kernel-level spying or other privacy-invasive methods.
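As a toy illustration of that kind of flagging, here’s what an outlier detector over per-player stats could look like with scikit-learn’s IsolationForest. Every feature and number here is invented for the example:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Invented per-player features; a real system would use far richer signals.
# Columns: headshot %, mean reaction time (ms), account age (days),
#          K/D ratio, rating gained over the last 7 days.
rng = np.random.default_rng(0)
legit = rng.normal([35, 250, 900, 1.1, 10], [10, 40, 400, 0.3, 15], (5000, 5))
suspect = np.array([[92.0, 90.0, 3.0, 7.5, 400.0]])  # new account, inhuman stats

model = IsolationForest(contamination=0.01, random_state=0).fit(legit)

# predict() returns -1 for outliers; score_samples() gives a graded score,
# better suited to queueing accounts for human review than to auto-banning.
print(model.predict(suspect))        # [-1] -> flagged as anomalous
print(model.score_samples(suspect))  # lower = more anomalous
```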
But then… this method runs the risk of eventually being outmatched by the very model powering it: an AI trained on professional human behavior could accurately simulate human input and play like a high-performing player, without needing the tools a human cheater does.
Cheating humans already perform closely enough to trick such a system. Many cheaters are smart enough to engage an aimbot only for a split second to nail the flick; with a tiny bit of random offset, those inputs are indistinguishable from a high-skill player’s.
These tricks may make the cheating indistinguishable to a human moderator, but machine learning is actually very good at detecting exactly that kind of micro-pattern. Most companies, though, don’t have the expertise, resources, or training data to build a proper model for it.
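For what it’s worth, such a model doesn’t have to be exotic. Here’s a minimal sketch of a sequence classifier over raw mouse deltas; the window length and architecture are arbitrary choices for illustration. The point is that a humanized flick still leaves micro-structure in the input stream that a model can learn even when a reviewer can’t see it:

```python
import torch
import torch.nn as nn

# Classify fixed-length windows of raw mouse input as human vs. assisted.
# Each sample: 128 ticks x 2 channels (dx, dy mouse deltas). All shapes
# here are illustrative assumptions.
class AimClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # average over the time axis
            nn.Flatten(),
            nn.Linear(64, 1),         # single logit: P(assisted)
        )

    def forward(self, x):  # x: (batch, 2, 128)
        return self.net(x)

model = AimClassifier()
window = torch.randn(4, 2, 128)      # a batch of 4 dummy input windows
print(torch.sigmoid(model(window)))  # per-window probability of assistance
```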
CS:GO used to have Overwatch, an anti-cheat system that let trusted, experienced players review video footage of reported players. Through it I reported blatant spinbotters, wallhackers, and other cheaters, and I was also on the reviewing side, watching back footage of suspected hackers.
If an AI were trained on that verdict data, it might work.
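Roughly, that would just be ordinary supervised learning over the human verdicts. A sketch with stand-in data (in practice, extracting useful features from the demo files is the actual hard part):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Stand-in for features extracted from reviewed demos (hypothetical columns
# like flick speed or through-wall tracking time), paired with the human
# verdict (1 = convicted, 0 = cleared). Data here is synthetic.
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 8))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2000) > 1).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = GradientBoostingClassifier().fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```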
I’m not a fan of this, though, because knowledgeable and experienced players will be better at it than AI.
Only AI will be able to root this out in the future.
That’s an (obviously) unpopular opinion around here but I’ll give you the benefit of the doubt: How would AI be able to do that?
The same way it’s automoderating Reddit to the point that nobody can post anything anymore LOL
Don’t waste your time, they’re either a hardcore AI bootlicker or a shit stirrer - most likely both, looking at their post history.
Keep that AI horseshit out of video games, thanks.