
Cube & Cube 2 FORUM


Cheating & open source, revisited

by Aardappel_ on 04/27/2005 07:54, 218 messages, last message: 06/16/2006 17:23, 188267 views, last view: 11/01/2024 13:18

As you all know, cheating is a problem for Cube being Open Source. No one likes the current solution of the incompatible binaries, and I am getting to the point where I see the usefulness of having other people continue to work on Cube whenever I don't have the time... currently that is problematic, and it would be much easier if the source to the official game could be truly open.

Multiplayer continues to be an important aspect of Cube, so we can't ignore cheating and simply hope that people won't change 1 line of code to enable god mode or permanent octa-damage, because they will (tell me something about human nature and the people on the interweb).

The solution can't come in the form of "cheat protection"; this simply isn't possible with the current Cube, and even if the entire gameplay code were moved serverside, it would still be fragile. Don't even suggest it... make sure you understand the nature of the client/server gameplay code before commenting.

The solution for Cube I feel has to be a social one. As you may remember, I designed a solution before:
http://wouter.fov120.com/rants/trusted_communities.html
The problem with this particular design is that it is too complex to set up, and too centralized. I would like to come up with a solution that is simpler, less implementation work, and can work with any group of people, centralized or not.

This is the idea I came up with so far:

Every player that wants to play in a cheat free environment, can use a command in Cube to generate a set of key files. A key file is simply a file of, say, 10000 random bytes. The player then hands out these files to players he trusts... or rather, players he wants to trust him. (why there are multiple files will become clear later).
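
A minimal sketch of what the key-file generation could look like (the function name, file naming, and PRNG choice are my own illustration, not actual Cube code; a real implementation would want an OS-level source of randomness):

```cpp
// Hypothetical sketch: generate a handful of independent key files of random bytes.
// Names, counts, and the PRNG are illustrative only.
#include <cstdio>
#include <random>
#include <string>

void generatekeyfiles(const std::string &nick, int numfiles = 4, int numbytes = 10000)
{
    std::random_device rd;
    std::mt19937 gen(rd());
    std::uniform_int_distribution<int> byte(0, 255);

    for(int i = 0; i < numfiles; i++)
    {
        std::string fname = nick + "_key" + std::to_string(i) + ".bin";
        FILE *f = fopen(fname.c_str(), "wb");
        if(!f) continue;
        for(int j = 0; j < numbytes; j++) fputc(byte(gen), f);
        fclose(f);
    }
}
```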

A server can either be in untrusted mode (default, works as before), or trusted mode. It can be set to trusted mode by the server admin, or voted by the players until the server empties. It will show up in the server browser as trusted.

If players A & B connect to a trusted server, A looks up B's nickname in his folder of key files. If he finds a corresponding key file, he chooses a few random file offsets and reads the bytes there. A then sends a packet to B asking for the bytes at those offsets. If B really is B, he can simply read his own key file and return the values. A now compares the values, and if they match, sends an "I trust B" packet to the server. The HUD shows which clients you trust, and for each client how many clients trust him in total. You are now sure that B really is who he says he is.
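
A rough sketch of the verifier's side of that exchange, assuming made-up helper names (the actual packet plumbing between A, B, and the server is omitted):

```cpp
// Hypothetical sketch of player A checking player B. readkeybyte(), pickchallenge()
// and verifyresponse() are invented helpers, not real Cube functions.
#include <cstdio>
#include <cstdlib>
#include <vector>

// Read one byte of a key file at a given offset; returns -1 on failure.
int readkeybyte(const char *keyfile, long offset)
{
    FILE *f = fopen(keyfile, "rb");
    if(!f) return -1;
    int c = -1;
    if(fseek(f, offset, SEEK_SET) == 0) c = fgetc(f);
    fclose(f);
    return c;
}

// A picks a few random offsets into the copy of B's key file it holds.
// (A real implementation would want a better random source than rand().)
std::vector<long> pickchallenge(int n, long keysize = 10000)
{
    std::vector<long> offsets;
    for(int i = 0; i < n; i++) offsets.push_back(rand() % keysize);
    return offsets;
}

// B answers by reading its own key file at those offsets; A then compares the
// reply against its local copy and, on a match, tells the server "I trust B".
bool verifyresponse(const char *localcopy, const std::vector<long> &offsets,
                    const std::vector<int> &reply)
{
    if(reply.size() != offsets.size()) return false;
    for(size_t i = 0; i < offsets.size(); i++)
        if(readkeybyte(localcopy, offsets[i]) != reply[i]) return false;
    return true;
}
```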

On a trusted server, people who have gained no trust after the exchange of trust packets can be booted from the server automatically. This allows you to play games with trusted people in your community, and keeps outsiders from joining the game.

Asking for random offsets guarantees that untrustworthy clients, or even servers, never get to sniff whole keys. "Trust" is evaluated locally and for you only, so it can't be spoofed.

The one problem would be handing your key file to someone who later turns out to be untrustworthy. This person could now impersonate you and appear to be you to all your trusted friends. Hence the multiple key files, so you can give a different key file to different people (or groups of people). That way, if the person "goes bad", he can't impersonate you towards your friends, as he doesn't have the keyfile your friends have.

The system is not perfect, of course. You can still have 2 cheaters join together and trust each other. Luckily cheaters hardly ever come in groups, and there are more complicated ways to protect even against this.

The biggest issue is the inconvenience of having to exchange key files, and especially of requiring new players to find existing players on forums/IRC before they can sensibly play. I think it is bearable though, as you only need to do it once, and Cube multiplayer is a fairly closed community. And if servers are untrusted by default, you can give newcomers the benefit of the doubt until they behave suspiciously.

What do you all think? Please think it through thoroughly before commenting (I am talking to you, Jean Pierre! :). I am especially interested in "holes" in this system, i.e. ways that cheaters could abuse it if they really wanted to.


#12: ...

by Aardappel_ on 04/28/2005 08:19

axel: that system doesn't help anything. now the only thing you know is that someone with MAC address X is cheating. You could blacklist him I suppose, but since the client has full control, he can make up any fake address he wants.

jean pierre: yeah I am not great.

rick: the default for servers would be untrusted, so it doesn't affect pure newbs just wanting to try it out. It is more for regular players who just want to exclude unknown entities from their games.

kick/vote & blacklisting is not really a solution. With people being able to change nicks and IPs and reconnect really easily, it means more work for those combating cheating than for the cheaters.

tentus: untrust is not as solid as trust, as you can't automatically prove it. Obligatory map download is not going to do anything, because if you control the client, you can stop the map from being stored/loaded after download.

pushplay: impersonation is not the problem. What the system does is allow you to automatically block people whose identity you can't verify, different thing.

Again, any kind of manual kicking requires a lot of work on the part of the normal players... not a good plan.

Other open source games have less of an issue because they have serverside gameplay, and because multiplayer leagues tend to work with fixed binaries whose CRCs are checked by closed source proxies or DLLs (as for example in QuakeWorld).

pxtl: if we were to go with a web based system, then my previous design would work better than what you suggest. Cheaters don't care about their stats, and will easily make a new account.

I think a fundamental thing to see here is that it is easier to trust the good players, than to detect the cheaters.

In general, when thinking of solutions, think from the perspective of the cheater: what could he do to circumvent your idea if he wanted to? Lots of the ideas people come up with are really easy to circumvent if you have full source code.


#13: solutions...

by ciscon_ on 04/28/2005 11:37

i think that the combination of an rcon, vote kicking/banning, and either aard's system or an account based system would do the trick.
personally i've found that having an active remote console on the server is usually good enough (which seems to be inevitable when you have a popular server that is capable of being remotely controlled, as i've seen with qw and q2). if this isn't enough, then you allow the players to votekick/ban (for a server-specified amount of time). it really just comes down to letting somebody have some sort of control over the server.


#14: Re: ...

by Pxtl on 04/28/2005 11:45, refers to #12

Well, actually, the failure of my suggestion is the same as yours - new players are left in the ghetto of "unrecognised" - the ultimate problem is that new players must be sponsored into the game, either by gradual process of karma or by sudden induction through a user key recommendation. The end result is the same. The only reason that I suggest the central server approach is that we've all seen it in action - we've seen Slashdot (and the trolls, and the karma whores, etc) and have a vague idea how well/poorly it would work.


#15: Re: ...

by D.plomat on 04/28/2005 12:35, refers to #12

I think the problem of the "bastard accounts" Pxtl described can be avoided, or at least very much limited, by having an initial "cost" per account. Of course not a monetary cost, but something like a registration procedure that takes about 15 min and requires a valid e-mail.

A 15-minute cost seems reasonable for a player who wants only one account. Of course creating fake Yahoo/Hotmail accounts is possible, but creating fake mail accounts and then creating fake Cube accounts will probably stop being worth it if it takes more time to create the accounts than a cheater can use them to cheat.

There will probably be some lamers inclined to spend 5 min on Hotmail plus 15 min on account creation for only 5 minutes of godmode playing, but there won't be many, and they won't keep doing it for long.

And if someone wants to cheat on a high trust server, he has to create an account and play honestly for a long time to earn enough karma, all that for an account he will probably get to cheat with only once before it becomes publicly known as untrustworthy.

Maybe this would require one of those "antibot" images to prevent robot creation of accounts.

By having a central trust database/server, this database is out of reach of cheaters, and the client-side code's duty is authenticating the good players against this server. The server has all public keys of all players; each player is the only one holding his private key on his machine(s).
The client sends his trust information to the trust server, and the play servers only read trust information from the trust server.

Then we need to find some way of making this appear simple to the user while not adding too much to the client code. I think the trust-setting commands will only really be used if they're built into the Cube client, but maybe some things can stay a "manual operation"; I suppose something like a one-time "drop a certificate file received as a mail attachment into a subdirectory of Cube" is an acceptable complexity even for a total newbie (or maybe not?)

I'll have to do some test coding with the OpenSSL library :)
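
For what it's worth, a minimal sketch of the signature check such a scheme implies, using the modern OpenSSL EVP interfaces (the function names and all the surrounding plumbing are assumptions, not a worked-out design; error handling is omitted):

```cpp
// Hypothetical sketch: a client signs a random challenge with his private key,
// and the trust server verifies it against the public key it has on file.
#include <cstddef>
#include <openssl/evp.h>

// Client side: sign the challenge with the private key only this player holds.
bool signchallenge(EVP_PKEY *privkey, const unsigned char *challenge, size_t clen,
                   unsigned char *sig, size_t *siglen)
{
    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    bool ok = EVP_DigestSignInit(ctx, NULL, EVP_sha256(), NULL, privkey) == 1
           && EVP_DigestSign(ctx, sig, siglen, challenge, clen) == 1;
    EVP_MD_CTX_free(ctx);
    return ok;
}

// Trust-server side: verify the signature with the stored public key.
bool verifychallenge(EVP_PKEY *pubkey, const unsigned char *challenge, size_t clen,
                     const unsigned char *sig, size_t siglen)
{
    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    bool ok = EVP_DigestVerifyInit(ctx, NULL, EVP_sha256(), NULL, pubkey) == 1
           && EVP_DigestVerify(ctx, sig, siglen, challenge, clen) == 1;
    EVP_MD_CTX_free(ctx);
    return ok;
}
```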

That's a very interesting project, I think I'll try to make some kind of prototype of this system.


Agree about the IP bans; this is more likely to block trustworthy players sharing the same ISP with a cheater than to really keep cheaters out.


#16: Re: ...

by D.plomat on 04/28/2005 12:43, refers to #14

I think the key feature in setting karma/trust is having people rate wisely, so that a player's overall trust score means the same thing across all parts of the community and the trust level of a server can be held to a constant number.

So if we have multiple levels of trust, we have to avoid people always setting 100% trust / 100% untrust, maybe by using menu labels like:

I fully trust ...
I think ... is trustworthy
I don't know ...
I think ... might be cheating
I'm sure ... cheats
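
Purely as an illustration, those labels might map onto rating weights along these lines (the numeric values are made up, not part of any existing design):

```cpp
// Hypothetical mapping of the menu labels above to rating weights.
// The values are arbitrary placeholders.
enum trustrating
{
    TRUST_FULL    =  2,   // "I fully trust ..."
    TRUST_LIKELY  =  1,   // "I think ... is trustworthy"
    TRUST_UNKNOWN =  0,   // "I don't know ..."
    TRUST_SUSPECT = -1,   // "I think ... might be cheating"
    TRUST_CHEATER = -2    // "I'm sure ... cheats"
};
```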


#17: Re: ...

by D.plomat on 04/28/2005 12:50, refers to #16

Reading the Trusted Communities page again, this is already addressed by the "false accusation"/"bad trust" system.


#18: Aard is right...........

by jean pierre on 04/28/2005 14:49

If we just add kick/ban, then the guy could compile Cube himself and remove the kick/ban handling or something similar, and look what happens: the kick/ban code does not respond, so he can come back and can't be kicked/banned.


#19: Re: ...

by Aardappel_ on 04/28/2005 18:35, refers to #15

sure, such a system would work better overall, but it is a more complex system. If we're gonna build a more complex system, I would prefer to just build the system referenced in my initial post. The point of this thread was to see if there are simpler solutions; maybe there aren't.


#20: trust system is best

by marco on 04/29/2005 00:45


Hi, I just stumbled on your page. My $0.02: you want _strong_ identification, so you need a PKI - there is no shortcut. I don't think it is that hard to implement; once you can identify people uniquely, I guess any banning policy will do.





#21: Re: trust system is best

by D.plomat on 04/29/2005 13:03, refers to #20

PKI is something complex but fortunately there are already many available free and well documented standard tools+libraries.

> any banning policy will do

Not any, but the trusted communities system establishes autobanning (in fact not banning, but restricting access to trusted servers) based on a powerful distributed rating method that can scale very well without constant heavy monitoring by dedicated staff.
So it's precisely this banning system, if we can call it a "banning system", that can be used effectively for free games and community projects.


#22: ..

by sinsky on 04/30/2005 01:34

You may be surprised but I've been thinking about that too. Of course I can't really do anything because I lack low-level coding skills completely, therefore my course of action took an entirely different and very wrong direction. I won't talk hypothetically because I've already done it (hold on to your chairs - big laugh is coming :).

So. What is it that a cheater gets from the cheating experience? Can't really be sure. Could be anything, but practically it's always something malicious. The guy feels good and everyone else feels bad. He can do the important thing, and other players, who have invested time, emotions, and in the ideal case money, cannot.

I doubt there's a way to identify 100% whether a person has malicious intentions; even in real life, people who have known each other for years sometimes have a hard time with this. You see, removing a cheater does not really solve this problem, because someone can always make you feel bad just as easily using the chat, and cheat protection only comes in to make this happen rarely by eliminating technical issues. And when something bad happens rarely, more people will join a community in the meantime, but is more people = more fun?

I understand that I haven't said anything important and maybe it's time to apologise for wasting your time so far. So I'll be brief: not long ago I enabled coopedit mode on Orb, my pet Cube project (at home I mean, nothing new on the web yet). Since Orb editing heavily relies on typing console commands, I also made messages received from the chat be treated as console commands. This system is highly inefficient and can survive only between people with a 100% level of trust, and is probably not fit for a community at all. If we have ten people playing, for example, and one of them types "quit" in the chat, all clients will quit and all coopedit work will be lost (unless saved recently). Of course this raises another question: what could a cheater do in a coopedit game, where players can not be hurt physically and only their work on the map can be altered? And also, if a cheater, who I think can merely be referred to as a "malicious person" in this environment, has done work on the map along with other players, what rights does he hold to that work?
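
The mechanism described above is roughly the following (a sketch, not the actual Orb code; execute() here is a stand-in for the engine's console entry point, and the blocklist is only there to illustrate the "quit" problem):

```cpp
// Hypothetical sketch of treating incoming chat text as console commands.
#include <cstdio>
#include <cstring>

// Stand-in for the engine's console entry point (assumed, not real Cube code).
void execute(char *cmd) { printf("executing: %s\n", cmd); }

bool allowedcoopcmd(const char *text)
{
    // Without a filter like this, any client typing "quit" in chat would
    // terminate every connected client and lose unsaved coopedit work.
    const char *blocked[] = { "quit", "disconnect", "map" };
    for(size_t i = 0; i < sizeof(blocked)/sizeof(blocked[0]); i++)
        if(strncmp(text, blocked[i], strlen(blocked[i])) == 0) return false;
    return true;
}

void onchatmessage(char *text)
{
    if(allowedcoopcmd(text)) execute(text);   // run the chat line as a command
}
```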

Of course it's just a game. No one really cares about a few cubes.. or do they.


#23: ..

by pushplay on 04/30/2005 03:02

It seems to me that kick/time-banning the occasional punk is far less work than seeing that the people I want to play with (not all of whom speak English or whom I see outside the server) get a copy of my key, and having to repeat that process every time I suspect someone who shouldn't have it has gotten hold of my key.


#24: Re: ..

by Pxtl on 04/30/2005 03:20, refers to #22

I think his point on the malicious person is well taken - in any system, you have to wonder where you draw the line. Any kind of griefing behaviour (chatspamming, TKing, etc) or only outright cheaters? There is also the problem of what physical method to use to kick someone off a server. For example, if you use group voting, I'm sure it would be trivial for the griefer to just connect 15 fake clients and take over the server. Admins are often absent. This is why I think there's merit to having a Slash-style masterserver for tracking users - you could let the long-term, experienced, good users have banning privileges across all servers. Of course, then you also need meta-moderators to deal with anyone who abuses that power.


#25: ..

by enigma_0Z on 04/30/2005 05:09

Hmm, interesting...

I like the public/private key identification method a lot...

But there's an issue with this too...

If someone has become "untrusted", they can simply delete their key and create a new one... of course then you decide whether or not they're trusted all over again. Unless you associate IP, MAC, and key within a span of, say, thirty minutes (DHCP users would die otherwise)...

What I mean is, if user A with IP 1.2.3.4 and MAC of 40*(whatever) and key of abcdefg becomes untrusted, logs off, and then reconnects to the masterserver with an IP of 1.2.3.4 and the same MAC BUT a key of pqrstuv, then you'd have to realize that it's him...

But wouldn't having these huge keys eat up network time and kill server hard drives?

Hmmm


#26: Re: ..

by enigma_0Z on 04/30/2005 05:17, refers to #25

Some more ideas...

You would need a few things added to cube before this would work...

Namely:

1. A better (much better) voting system...

2. A kick/ban system (duh)

3. An easier way to identify players in game. The text is way too quick otherwise.

4. Serverside controls (kick, ban, slap, freeze, kill, + or - trust)

Another suggestion... perhaps trust should be governed by servers more than clients, e.g. impose restrictions on clients, or make server votes count for more...

Another idea about managing trust...

Perhaps you could have two "trusted" numbers, server trust and player trust...

the server trust could be like 1-1000, and client trust 1-100. That way clients can't all gang up on a single user (preventing trust wars), but servers could have more control over who connects and who doesn't. Furthermore, you could start players at 5001/501, and build (or destroy) their trust from there... You could also make it so that new users have a different trust number from any other users (so servers could judge better)... hmmm
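
As a sketch of that split (the ranges follow the post; the starting values here are illustrative midpoints, and the clamping and names are my own invention):

```cpp
// Hypothetical sketch of keeping server-assigned and player-assigned trust separate.
struct playertrust
{
    int servertrust;   // 1..1000, set by server admins, counts for more
    int clienttrust;   // 1..100, accumulated from other players' votes

    playertrust() : servertrust(500), clienttrust(50) {}   // illustrative midpoints

    void addservervote(int delta)
    {
        servertrust += delta;
        if(servertrust < 1) servertrust = 1;
        if(servertrust > 1000) servertrust = 1000;
    }

    void addclientvote(int delta)
    {
        clienttrust += delta;
        if(clienttrust < 1) clienttrust = 1;
        if(clienttrust > 100) clienttrust = 100;
    }
};
```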


#27: What does the slap command do?

by jean pierre on 04/30/2005 07:45

I never knew about the slap command; what is its effect?


#28: Re: ..

by >driAn<. on 04/30/2005 09:25, refers to #25

"If someone has become "untrusted", they can simply delete they're key and create a new one... of course then you decide whether or not they're trusted all over again."
No, if they do a new key they start again with 0 trust.


#29: ..

by Gilt on 04/30/2005 17:42

I haven't really thoroughly read the thread, but it seems like everybody is talking about different things...

It seems like aard's idea is more of a way to give people who don't have the means or will to set up their own private server the ability to use a temporary semi-private server to play on, in a sense; and that trust keys are per relationship, not per player.

anyway, off the top of my head, I would be kind of worried about bots and hippies who trust everybody, though I don't know how big a problem that would be.


#30: ..

by Gilt on 04/30/2005 18:02

oh, and I bet cheaters will start preying on newbs to get into trust servers.

"Hey newb, you want to be my friend? If you're my friend you can go play on the safe trust servers!" or some other bullshit.


#31: ..

by makkE on 04/30/2005 20:19

I have waited a while to see everyone's opinion before I post mine.

The most problematic issue I see with this "trust" system is that it's pretty complicated. A system like this might make sense in a big community, but, don't get me wrong, I believe Cube will never get too big.

At the moment cheating is really no problem in Cube. I believe that's down to Cube's niche-like state. (The closed netcode doesn't really prevent an ambitious hacker from coding a cheat, does it?)

Someone who likes the simplicity and speed of Cube's gameplay is (at least to me) very often a person who wouldn't cheat anyway. Its brute gameplay, in my opinion, only really attracts people who look for real competition, people who don't mind losing, etc.

The other half (I guess the majority) of people who try Cube will quit playing it after 5 minutes anyway ("too fast, looks crap, too few weapons, no this and that, can't impress my friends with that"...)

So, to get to the point: Cube's gameplay already keeps a lot of "malicious" persons from even bothering to play it and cheat.
For those few morons (excuse me) who really try cheating or get on people's nerves, a simple kick/ban function would be sufficient.

I also believe that very few coders would bother to write a cheat for Cube.
I mean, those people want fame. They won't be able to impress their fellow "evil coder friends" by saying: "Look, I wrote a hack/bot/cheat for Cube."
And the common idiot doesn't code... he'll go back to some CS pub to use his easily downloadable hack/cheat and annoy people there.

Of course, if Cube went fully open source, the possibility to hack it would be there. But I believe none of the coders interested in the Cube code (the people on this board, for example) will ever even consider writing a hack.

To sum my 2 cents up: the ability to kick/ban people on the basis of a simpler ("Vote yes (F1)/no (F2)") voting system would be sufficient.

If Cube should go open source and if after some time problems do arise, the trust system might be reconsidered.

One last word: I personally have met only 1 person I would have wanted to kick (for namefaking and spamming) in over a year of Cube pubbing.



