Cube & Cube 2 FORUM

Cheating & open source, revisited
by Aardappel_
on 04/27/2005 07:54, 218 messages, last message: 06/16/2006 17:23, 188317 views, last view: 11/01/2024 15:34

As you all know, cheating is a problem for Cube as an Open Source game. No one likes the current solution of the incompatible binaries, and I am getting to the point where I see the usefulness of having other people continue to work on Cube whenever I don't have the time.. currently that is problematic, and it would be much easier if the source to the official game could be truly open.
Multiplayer continues to be an important aspect of Cube, so we can't ignore cheating and simply hope that people won't change 1 line of code to enable god mode or permanent octa-damage, because they will (that tells you something about human nature and the people on the interweb).
The solution can't come in the form of "cheat protection"; that simply isn't possible with the current Cube, and even if the entire gameplay code were moved serverside, it would still be fragile. Don't even suggest it... make sure you understand the nature of the client/server gameplay code before commenting.
The solution for Cube, I feel, has to be a social one. As you may remember, I designed a solution before:
http://wouter.fov120.com/rants/trusted_communities.html
The problem with this particular design is that it is too complex to set up, and too centralized. I would like to come up with a solution that is simpler, requires less implementation work, and can work with any group of people, centralized or not.
This is the idea I came up with so far:
Every player who wants to play in a cheat-free environment can use a command in Cube to generate a set of key files. A key file is simply a file of, say, 10000 random bytes. The player then hands out these files to players he trusts... or rather, players he wants to trust him (why there are multiple files will become clear later).
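To give an idea, generating the key files could be as simple as this (a sketch only; the function and file names are made up, this is not actual Cube code):

    // keygen sketch: one file of 10000 random bytes per group of friends
    #include <cstdio>
    #include <random>

    void genkeyfile(const char *name, int size = 10000)
    {
        std::random_device rd;              // OS entropy source
        FILE *f = fopen(name, "wb");
        if(!f) return;
        for(int i = 0; i < size; i++) fputc(rd() & 0xFF, f);
        fclose(f);
    }

    int main()
    {
        genkeyfile("aard_clan.key");        // for clan mates
        genkeyfile("aard_irc.key");         // for irc friends
        return 0;
    }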
A server can be either in untrusted mode (the default, which works as before) or in trusted mode. It can be set to trusted mode by the server admin, or voted into it by the players (in which case it stays trusted until the server empties). It will show up in the server browser as trusted.
If players A & B connect to a trusted server, A looks up B's nickname in his folder of key files. If he finds a corresponding key file, he chooses a few random file offsets and reads the bytes there. He then sends a packet to B asking for the bytes at those offsets. If B really is B, he can simply read his own key file and return the values. A compares the values, and if they match, sends an "I trust B" packet to the server. The hud shows which clients you trust and, for each client, how many clients trust him in total. You are now sure that B really is who he says he is.
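In rough C++, A's side of the check could look like this (again just a sketch: the helper names are invented and the packet plumbing is left out):

    // A's side: pick random offsets into the key file it holds for B,
    // remember the bytes found there, then send B only the offsets
    #include <cstdio>
    #include <cstdlib>
    #include <vector>

    struct challenge { int offset; unsigned char expected; };

    std::vector<challenge> makechallenge(const char *keyfile, int n = 8)
    {
        std::vector<challenge> cs;
        FILE *f = fopen(keyfile, "rb");
        if(!f) return cs;
        fseek(f, 0, SEEK_END);
        long size = ftell(f);
        for(int i = 0; i < n; i++)
        {
            challenge c;
            c.offset = rand() % size;       // use a better RNG in practice
            fseek(f, c.offset, SEEK_SET);
            c.expected = fgetc(f);
            cs.push_back(c);
        }
        fclose(f);
        return cs;
    }

    // B's side: answer by reading its own copy of the key file
    unsigned char answerchallenge(const char *keyfile, int offset)
    {
        FILE *f = fopen(keyfile, "rb");
        fseek(f, offset, SEEK_SET);
        unsigned char b = fgetc(f);
        fclose(f);
        return b;
    }

If every returned byte matches the expected one, A sends the trust packet; only the offsets and a handful of bytes ever travel over the wire, so a sniffer learns at most a few bytes of the key per game.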
On a trusted server, people who have gained no trust after the exchange of trust packets can be booted automatically. This allows you to play games with the trusted people in your community, and keeps unknown players from joining the game.
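Server side, the rule could be as dumb as this (a sketch; the names and the grace period are invented, and the real kick hook is stubbed out):

    // boot anyone that nobody vouched for once the exchange settles
    #include <cstdio>
    #include <map>

    std::map<int, int> trustcount;       // clientnum -> trust packets received

    void disconnect_client(int cn)       // stand-in for the real server kick
    {
        printf("booting client %d: no one trusts him\n", cn);
    }

    void gottrustpacket(int truster, int trusted)
    {
        if(truster != trusted) trustcount[trusted]++;
    }

    void checktrust(int cn, float secsconnected)
    {
        const float GRACE = 30.0f;       // let the trust exchange finish
        if(secsconnected > GRACE && trustcount[cn] == 0)
            disconnect_client(cn);
    }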
Asking for random offsets guarantees that untrustworthy clients, or even servers, never get to sniff keys. "Trust" is evaluated locally and for you only, so it can't be spoofed.
The one problem would be handing your key file to someone who later turns out to be untrustworthy. This person could now impersonate you and appear to be you to all your trusted friends. Hence the multiple key files, so you can give a different key file to different people (or groups of people). That way, if the person "goes bad", he can't impersonate you towards your friends, as he doesn't have the keyfile your friends have.
The system is not perfect, of course. You can still have 2 cheaters join together and trust each other. Luckily cheaters hardly ever come in groups, and there are more complicated ways to protect even against this.
The biggest issue is the inconvenience of having to exchange key files, and especially of requiring new players to find existing players on forums/irc before they can sensibly play. I think it is bearable though, as you only need to do it once, and Cube multiplayer is a fairly closed community. And since servers are untrusted by default, you can give newcomers the benefit of the doubt until they behave suspiciously.
What do you all think? Please think it through thoroughly before commenting (I am talking to you, Jean Pierre! :). I am especially interested in "holes" in this system, i.e. ways that cheaters could abuse it if they really wanted to.

#5: Re: addendum
by >driAn<.
on 04/27/2005 17:14, refers to #1

A 'real open' Cube would be great!
"[..] or using some kind of public/private key encryption system, but it has got to be simple."
Yes, I think a pub/priv key setup, kind of a PGP system, would be a lot better for security.

#6: I'm not telling you how to compile it.
by jean pierre
on 04/27/2005 19:51

ON-TOPIC:
But what does the blacklist do?
It just makes people more annoyed: they come back with another IP and curse you (and possibly more hate!)

#7: ..
by Rick|
on 04/27/2005 19:55

Then block a range and he won't come back ;-)
Besides, not everyone is able to just change his IP.

#8: Re: addendum
by tentus
on 04/28/2005 03:02, refers to #1

if you made an "untrust" command it may require a slight bit of menu editing - it'd be all too easy to untrust the wrong player with our current system, so a slight modification to add, say, a confirmation menu with extra data on it would be appropriate. on the same coin, you wouldn't want to make trusting too easy, otherwise people will just find a fellow cheater and go for it.
a possible solution to easy cheating would be to make a temporary version of the getmap function: to play on a server with this enabled, cube would _have_ to load the downloaded level, for only as long as they are playing - no prompt or anything, completely automatic - and if it doesn't follow through then ban them. the problem would be people seeking out the code for this and deleting it, so there'd have to be a way for the server to check that it had actually happened (perhaps check in the base dir that both the original and the temp maps were there?)
well, very promising ideas, i'm very interested in where this will go. good luck explaining all that to newbs though :)

#9: Response
by pushplay
on 04/28/2005 03:22

Basically, (my understanding of) Aard's scheme is summed up in three ideas:
- common servers, which are just what we have now
- elite servers, where I only play with people I trust
- a system for proving people are who they say they are
Personally I haven't found impersonation to be a big issue, and on the rare occasions someone wasn't who they claimed to be, it was pretty obvious. Additionally, I rarely find myself playing with people I recognize any more; it's more often a quick game during lunch with randoms. So I would spend 90% of my time on the common servers.
I'd like to propose an alternate scheme: mostly a vote-kick with timeban system, decided by plurality. One line of text at the top for the person, the current tally, and a kick reason. Also a second line for a response from the person being voted on (which is likely to contain profanity :). The other half would be authenticating with the server to become a trusted client immune from being kicked, and perhaps even a higher level of trust for gaining referee status. A rough sketch of the bookkeeping follows below.
This would create de facto common servers where admins don't have to collect keys from everyone in the community.
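For the bookkeeping I'm imagining something like this (the thresholds and names are made up, and I'm reading "plurality" as a simple majority of connected players here):

    // vote-kick + timeban bookkeeping sketch
    #include <ctime>
    #include <map>
    #include <set>
    #include <string>

    std::map<std::string, std::set<int> > votes;   // target ip -> voter clients
    std::map<std::string, time_t> banned;          // target ip -> ban expiry

    bool votekick(const std::string &targetip, int voter, int numplayers)
    {
        votes[targetip].insert(voter);             // one vote per client
        if((int)votes[targetip].size() * 2 > numplayers)
        {
            banned[targetip] = time(0) + 15 * 60;  // 15 minute timeban
            votes.erase(targetip);
            return true;                           // caller kicks the client
        }
        return false;
    }

    bool isbanned(const std::string &ip)
    {
        std::map<std::string, time_t>::iterator it = banned.find(ip);
        return it != banned.end() && it->second > time(0);
    }

Trusted clients would simply be exempt from ever appearing as a kick target.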
Are there any other open source games out there that have attempted to deal with this? What about that open source fps that was kind of like tribes?

#10: Re: Response
by Pxtl
on 04/28/2005 06:06, refers to #9

The tribes-like game was closed-source. Other FOSS games do have cheating problems (like QuakeWorld), but they usually take a thin-client approach to logic, so the worst cheating you get is aimbots, and aimbots happen in closed-source games too.
Cube has a thick client, so the client can cheat much more powerfully. In an open source Cube, a cheater would be a god - he could kill players with a thought.
The approach that most places seem to use is to force the players to build up an identity. Aard's approach is interesting in that it is masterless, but a more conventional approach would be to follow what Slashdot does and have a central login system. This could be expanded out to the rest of the Slash concepts of karma, moderators, meta-moderators, etc.
Players who care about play stats would be encouraged to take care of their accounts, because getting banned or redlighted would mean having to make a new account. Of course, this means the master server would need to record player stats (frags, games played, wins, etc). Most cheaters would circumvent this by having a "good" account and several "bastard" accounts. The bastard accounts would get banned, but new ones would be created. This means that serious servers would have to require either old accounts or minimum karma points or whatever, which means that new players are relegated to the ghetto of free (cheater) servers.

#11: Re: Response
by pushplay
on 04/28/2005 07:57, refers to #10

Thinking about it, I think there are only two possibilities:
- players are assumed untrustworthy, create a long-term profile, and earn trust
- players are assumed trustworthy, create a short-term profile, and lose trust
And we all agree that no code verification system can work, since a cheater could always run a hacked server with a clean client, observe the proper responses, and pass them along, like a 'man in the middle' attack.

#12: ...
by Aardappel_
on 04/28/2005 08:19

axel: that system doesn't help anything. Now the only thing you know is that someone with MAC address X is cheating. You could blacklist him I suppose, but since the client has full control, he can make up any fake address he wants.
jean pierre: yeah, I am not great.
rick: the default for servers would be untrusted, so it doesn't affect pure newbs just wanting to try it out. It is more for regular players who just want to exclude unknown entities from their games.
Kick/vote & blacklisting is not really a solution. With people able to change nicks and IPs and reconnect really easily, it means more work for those combating cheating than for the cheaters.
tentus: untrust is not as solid as trust, as you can't automatically prove it. An obligatory map download is not going to do anything, because if you control the client, you can stop the map from being stored/loaded after download.
pushplay: impersonation is not the problem. What the system does is allow you to automatically block people whose identity you can't verify - a different thing.
Again, any kind of manual kicking requires a lot of work on the part of the normal players... not a good plan.
Other open source games have less of an issue because they have serverside gameplay, and because multiplayer leagues tend to work with set binaries whose crc's are checked by closed source proxies or dlls (as for example in quakeworld).
pxtl: if we were to go with a web based system, then my previous design would work better than what you suggest. Cheaters don't care about their stats, and will easily make a new account.
I think a fundamental thing to see here is that it is easier to trust the good players than to detect the cheaters.
In general, when thinking of solutions, think from the perspective of the cheater: what could he do to circumvent your idea if he wanted to? Lots of the ideas people come up with are really easy to circumvent if you have full source code.

#13: solutions...
by ciscon_
on 04/28/2005 11:37

i think that the combination of an rcon, vote kicking/banning, and either aard's system or an account-based system would do the trick.
personally i've found that having an active remote console on the server is usually good enough (active admins seem inevitable once you have a popular server that can be remotely controlled, as i've seen with qw and q2). if this isn't enough, then you allow the players to votekick/ban (for a server-specified amount of time). it really just comes down to letting somebody have some sort of control over the server.

#14: Re: ...
by Pxtl
on 04/28/2005 11:45, refers to #12

Well, actually, the failure of my suggestion is the same as yours - new players are left in the ghetto of the "unrecognised". The ultimate problem is that new players must be sponsored into the game, either by a gradual process of karma or by sudden induction through a user key recommendation. The end result is the same. The only reason I suggest the central server approach is that we've all seen it in action - we've seen Slashdot (and the trolls, and the karma whores, etc) and have a vague idea how well/poorly it would work.

#15: Re: ...
by D.plomat
on 04/28/2005 12:35, refers to #12

I think the problem of the "bastard accounts" Pxtl described can be avoided, or at least greatly limited, by having an initial "cost" per account. Not a monetary cost, of course, but something like a registration procedure that takes about 15 minutes and requires a valid e-mail.
A 15-minute cost seems reasonable for a player who wants only one account. Creating fake yahoo/hotmail accounts is possible, of course, but creating fake mail accounts and then fake Cube accounts will probably stop being worth it if it takes more time to create the accounts than the cheater gets to use them.
There will probably be some lamers inclined to spend 5 minutes on a hotmail account plus 15 minutes on account creation for only 5 minutes of godmode playing, but there won't be many, and they won't keep it up for long.
And if someone wants to cheat on a high-trust server, he has to create an account and play honestly for a long time to earn enough karma - all that for an account he will probably get to cheat with only once before it becomes publicly known as untrustworthy.
Maybe this would require some kind of those "antibot" images to prevent robot creation of accounts.
By having a central trust database/server, the database is out of reach of cheaters, and the client-side code's duty is authenticating the good players against this server: the server has the public keys of all players, and each player is the only one with his private key on his machine(s).
The client sends his trust information to the trust server, and the play servers only read trust information from the trust server.
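The connect-time check could then be a sign-a-nonce handshake, roughly like this (a sketch only: i'm using libsodium here because the calls are short, the same is possible with OpenSSL, and none of this is existing Cube code):

    // identity sketch: the trust server stores pk, the player keeps sk
    #include <sodium.h>

    int main()
    {
        if(sodium_init() < 0) return 1;

        // done once at registration
        unsigned char pk[crypto_sign_PUBLICKEYBYTES];
        unsigned char sk[crypto_sign_SECRETKEYBYTES];
        crypto_sign_keypair(pk, sk);

        // at connect time the trust server sends a random nonce...
        unsigned char nonce[32];
        randombytes_buf(nonce, sizeof(nonce));

        // ...the client signs it with his private key...
        unsigned char sig[crypto_sign_BYTES];
        crypto_sign_detached(sig, NULL, nonce, sizeof(nonce), sk);

        // ...and the server verifies against the stored public key:
        // returns 0 for a genuine player, nonzero for an impostor
        return crypto_sign_verify_detached(sig, nonce, sizeof(nonce), pk);
    }

The play servers never see a private key; they only ask the trust server for the verdict.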
Then we need some way of making this appear simple to the user while not adding too much to the client code. I think the trust-setting commands will only really be used if they're built into the Cube client, but maybe some things can stay "manual operations"; i suppose a one-time "drop a certificate file received as a mail attachment into a subdirectory of Cube" is an acceptable complexity for even a total newbie (or maybe not?)
I'll have to do some test coding with the OpenSSL library :)
That's a very interesting project, i think i'll try to make some kind of prototype of this system.
Agree about the IP bans - blocking a range is more likely to block trustworthy players sharing an ISP with a cheater than to really keep cheaters out.

#16: Re: ...
by D.plomat
on 04/28/2005 12:43, refers to #14

I think the key to karma/trust ratings is having people rate wisely, so that a player's overall trust score means the same thing across the whole community, and a server's required trust level can stay a constant number.
So if we have multiple levels of trust, we have to keep ppl from always setting 100% trust / 100% untrust, maybe with menu labels like these (a rough mapping sketch follows the list):
I fully trust ...
I think ... is trustworthy
I don't know ...
I think ... might be cheating
I'm sure ... cheats
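In code, something like the following mapping, with the values invented just to illustrate that the middle answers should carry less weight than the extremes:

    // trust menu labels mapped to rating weights (illustrative values)
    enum trustlevel
    {
        TL_FULLTRUST   = +2,   // "I fully trust ..."
        TL_TRUSTWORTHY = +1,   // "I think ... is trustworthy"
        TL_UNKNOWN     =  0,   // "I don't know ..."
        TL_SUSPECT     = -1,   // "I think ... might be cheating"
        TL_CHEATER     = -2    // "I'm sure ... cheats"
    };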

#17: Re: ...
by D.plomat
on 04/28/2005 12:50, refers to #16

Reading the Trusted Communities page again, this is already addressed by the "false accusation"/"bad trust" system.

#18: Aard is right...........
by jean pierre
on 04/28/2005 14:49

If we just add kick/ban, then the guy could compile Cube himself and remove the kick/ban handling (or something similar), and look what happens: the kick/ban code does not respond, so he can come back and can't be kicked or banned.

#19: Re: ...
by Aardappel_
on 04/28/2005 18:35, refers to #15

Sure, such a system would work better overall, but it is a more complex system. If we're gonna build a more complex system, I would prefer to just build the system referenced in my initial post. The point of this thread was to see if there are simpler solutions; maybe there aren't.

#20: trust system is best
by marco
on 04/29/2005 00:45

Hi, I just stumbled on your page. My $0.02: you want _strong_ identification, so you need a PKI - there is no shortcut. I don't think it is that hard to implement; once you can identify people uniquely, I guess any banning policy will do.

#21: Re: trust system is best
by D.plomat
on 04/29/2005 13:03, refers to #20

PKI is complex, but fortunately there are already many free and well-documented standard tools and libraries available.
> any banning policy will do
Not any: the trusted communities system establishes autobanning (in fact not banning, but restricting access to trusted servers) based on a powerful distributed rating method that can scale very well without constant heavy monitoring by dedicated staff.
So it's precisely this banning system, if we can call it a "banning system", that can be used effectively for free games and community projects.

#22: ..
by sinsky
on 04/30/2005 01:34

You may be surprised, but I've been thinking about that too. Of course I can't really do anything because I completely lack low-level coding skills, so my course of action took an entirely different and very wrong direction. I won't talk hypothetically, because I've already done it (hold on to your chairs - big laugh is coming :).
So. What is it that a cheater gets from the cheating experience? Can't really be sure. It could be anything, but practically it's always something malicious. The guy feels good and everyone else feels bad. He can do the important thing, and the other players, who have invested time, emotions, and in the ideal case money, cannot.
I doubt there's a way to identify with 100% certainty whether a person has malicious intentions; even in real life, people that have known each other for years sometimes have a hard time with this. You see, removing a cheater does not really solve this problem, because someone can always make you feel bad just as easily using the chat; cheat protection just makes that happen more rarely by eliminating the technical avenues. And when something bad happens rarely, more people will join a community in the meantime - but is more ppl = more fun?
I understand that I haven't said anything important, and maybe it's time to apologise for wasting your time so far. So I'll be brief - not long ago I enabled coopedit mode in Orb, my pet Cube project (at home I mean, nothing new on the web yet). Since Orb editing relies heavily on typing console commands, I also made messages received from the chat be treated as console commands. This system is highly inefficient and can survive only between people with a 100% level of trust, and is probably not fit for a community at all. If we have ten people playing, for example, and one of them types "quit" in the chat, all clients will quit and all coopedit work will be lost (unless saved recently). Of course this raises another question - what could a cheater do in a coopedit game, where players can not be hurt physically and only their work on the map can be altered? And also, if a cheater - who I think can merely be referred to as a "malicious person" in this environment - has done work on the map along with other players, what rights does he hold to that work?
Of course it's just a game. No one really cares about a few cubes.. or do they.

#23: ..
by pushplay
on 04/30/2005 03:02

It seems to me that kick-timebanning the occasional punk is far less work than getting a copy of my key to all the people I want to play with (not all of whom speak English or whom I ever see outside the server), and then repeating that process every time I suspect someone who shouldn't have my key has gotten hold of it.

#24: Re: ..
by Pxtl
on 04/30/2005 03:20, refers to #22

I think his point about the "malicious person" is well taken - in any system, you have to wonder where you draw the line. Any kind of griefing behaviour (chatspamming, TKing, etc), or only outright cheating? There is also the problem of what physical method to use to kick someone off a server. For example, if you use group voting, I'm sure it would be trivial for the griefer to just connect 15 fake clients and take over the server. Admins are often absent. This is why I think there's merit to having a Slash-style master server for tracking users - you could let the long-term, experienced, good users have banning privileges across all servers. Of course, then you also need meta-moderators to deal with anyone who abuses that power.