Cube & Cube 2 FORUM


I cant kill!!

by J-Francois on 01/27/2004 21:23, 55 messages, last message: 06/02/2004 09:41, 32210 views, last view: 05/25/2024 17:48

I can't kill monsters. When I shoot one, it's like shooting at a wall. Please help me!!!

#16: ..

by e:n:i:g:m:a on 05/28/2004 02:25

Does anyone know if this bug is fixed for ATI cards in linux?

#17: ..

by e:n:i:g:m:a on 05/28/2004 02:25

Does anyone know if this bug is fixed for ATI cards in linux?

erm, for the 2004 release that is...

#18: Re: ..

by Thalion on 05/28/2004 16:10, refers to #15

Unfortunately the code was lost when I installed FreeBSD on the partition where Gentoo was =( However, as far as I remember, I just multiplied the Z-value returned by glReadPixels by 256.

This however breaks the rendering for non-buggy drivers, that's why it has to be enabled with a switch.
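
For what it's worth, it would have looked roughly like this (the original code is gone, so treat this as a sketch from memory; the ati16bitdepth flag name is made up):

#include <GL/gl.h>

bool ati16bitdepth = false;   // hypothetical switch for the buggy ATI/Linux drivers

// read back the depth value under pixel (x, y) and correct it if needed
float readdepth(int x, int y)
{
    float d;
    glReadPixels(x, y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &d);
    if(ati16bitdepth) d *= 256.0f;   // undo the 24-bit vs 16-bit mismatch
    return d;
}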

#19: Re: ..

by Aardappel on 05/28/2004 16:18, refers to #18

if you look at the code, that is the fix I attempted... I guess what you mean is that the *256 should happen on the int representation of the float, because doing it on the float itself does not work (the depth value is not linear).

As I said, it doesn't need a switch, I can simply detect the case.

If anyone wants to try this, let me know. They have to be available on IRC a lot, know how to compile the source, and obviously run Linux with an ATI card.

#20: Re: ..

by Thalion on 05/28/2004 16:28, refers to #19

I have SuSE 9.1 installed now (btw, a neat distro!), so I can check that. In 20 minutes or so.

Will you be on IRC?

#21: ..

by Th4lion on 05/28/2004 18:10

(I can't confirm my cookie - it takes longer than 20 minutes for the email to get to me - hence the different login)

Well it took me longer than 20 minutes to do all the stuff, but the end result is: it works. Here's my depthcorrect:

float depthcorrect(float d)
{
    return d * 256;
}

I tried it on several maps, works fine.

#22: Re: ..

by Thalion on 05/28/2004 19:07, refers to #21

Oh BTW if anyone has problems compiling Cube on SUSE - ask, I'll tell you how to fix it =)

#23: ..

by >driAn<. on 05/28/2004 20:15

Thalion:
Yeah, I have a problem compiling Cube. Check
http://www.cubeengine.com/forum.php4?action=display_thread&thread_id=296&start=34
posting #44

It would be great if you know what to do; if not, that doesn't matter because I installed Slackware on my second PC and will try it there =)

#24: Re: ..

by Aardappel on 05/28/2004 20:40, refers to #21

we can meet this weekend somewhere on irc... I am usually on in afternoons or evenings central US time... though saturday I will likely be away a lot.

I tried d*256, I think D.plomat reported it not working. Since it's a question of a 16-bit int presented as a 24-bit one, I didn't think it would work on the float representation. To do it on an int, you would have to trick it like:

*(int *)&d <<= 8;

or something like that. Any other people on linux+ATI that can experiment with the depthcorrect() function?
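
A less hacky way to poke at the bit pattern, if anyone wants to try that route, would be to go through memcpy - just a sketch of the same idea, untested:

#include <cstring>   // memcpy

// shift the raw bit pattern of the float left by 8 bits
// (this is the "treat the float as an int" trick; note it is NOT the
// same thing as multiplying the depth value itself by 256)
float shiftdepthbits(float d)
{
    unsigned int bits;
    std::memcpy(&bits, &d, sizeof bits);   // reinterpret without the aliasing cast
    bits <<= 8;
    std::memcpy(&d, &bits, sizeof bits);
    return d;
}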

#25: Re: ..

by Thalion on 05/29/2004 05:39, refers to #23

drian, check that thread =)

#26: Re: ..

by Thalion on 05/29/2004 05:48, refers to #24

> tried d*256, I think D.plomat reported it not working. Since its a question of a 16bit int presented as a 24bit one, I didn't think it would work on the float representation.

Don't forget that it's a float only as far as OpenGL is concerned. But in video memory, it's an int (16-bit or 24-bit). Now if you remember, the problem is that ATI drivers, when using a 24-bit buffer, don't use the higher 8 bits, thus turning it into a 16-bit one. OpenGL doesn't know about it, and, when trying to convert it to a float in [0..1], simply divides it by 2^24. But since it's effectively 16-bit, one should divide by 2^16 to get a meaningful result. So to fix it we get the original int value back by multiplying by 2^24, and then get the right float value by dividing by 2^16. So in effect we just multiply by 256 and get the right result.
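
To spell the arithmetic out with a concrete value (my own numbers):

// driver stores a 16-bit depth value, e.g. raw = 0x8000 (half the depth range)
// OpenGL normalizes it as if it were 24-bit:  d = 0x8000 / 2^24 ≈ 0.00195
// the value we actually want is:              0x8000 / 2^16 = 0.5
// and indeed:                                 d * 256 ≈ 0.00195 * 256 = 0.5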

#27: ..

by e:n:i:g:m:a on 05/29/2004 06:23

now pardon me if this doesn't refer to what you are talking about...

>16bit int presented as a 24bit one

Then couldn't you simply fill the last 8 bits with nothing, or even better make those sixteen bits the ones closest to the current view and then just ignore the other eight??

#28: Re: ..

by Thalion on 05/29/2004 06:32, refers to #27

The last 8 bits ARE filled with nothing. However, this is the _internal_ representation of video memory used by the video card. When you get this value using OpenGL, it is returned as a float in the range [0..1]. And when OpenGL converts the internal int representation to float, it thinks (rightfully) that all 24 bits are used, and gets the wrong result. So this whole story is about getting the right value from the wrong one glReadPixels returns.

#29: Re: ..

by Aardappel on 05/29/2004 10:24, refers to #26

sure, I know that is the theory... but why then are others getting very different results? The kind of values D.plomat often got were in the range 0.7 to 0.9, not something you can *256 and still get correct values for.

#30: Re: ..

by Aardappel on 05/29/2004 19:32, refers to #29

well, someone else confirmed the *256 works on ATI. So if I am not very much mistaken, this code would make depth work on ANY card:

float depthcorrect(float d)
{
    return (d>=1.0f/256.0f) ? d*256 : d;
}

any objections?

#31: Re: ..

by Aardappel on 05/29/2004 19:33, refers to #30

oops. make that:

float depthcorrect(float d)
{
    return (d<=1.0f/256.0f) ? d*256 : d;
}

#32: patched executable for ATI

by Hal9k on 05/29/2004 22:34

I helped Aardappel with some data gathering which led to the above code fragment. It works great on my end (ATI 9700 and driver version 3.9.0). Some of you may remember that the original fglrx driver (2.9.*) worked fine, so something somehow broke in the newer versions.

I patched the 5/22/04 version of Cube (linux_client executable) so that I could play on the servers. Now the patch only does a blind *256 and therefore is just a stopgap until a new build comes out. (My patch must not be applied if the game is working fine.)

At location 0x32467 in the file there should be a 0x3f. Change that to a 0x43. Or, simply do a:
wget http://www.gate.net/~hew/linux_client
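
For the curious: 0x3f and 0x43 are the top (exponent) bytes of the IEEE-754 singles 1.0f (0x3F800000) and 256.0f (0x43800000), so the patch presumably turns a 1.0f constant in the depth code into 256.0f - that part is my guess. You can check the encodings with something like:

#include <cstdio>
#include <cstring>

int main()
{
    float one = 1.0f, big = 256.0f;
    unsigned int a, b;
    std::memcpy(&a, &one, sizeof a);
    std::memcpy(&b, &big, sizeof b);
    std::printf("1.0f   = 0x%08X\n", a);   // 0x3F800000
    std::printf("256.0f = 0x%08X\n", b);   // 0x43800000
    return 0;
}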

#33: ..

by >driAn<. on 05/29/2004 23:15

argh..
of course, just now I changed to xorg and the 3D support doesn't work :(

Can anyone help me write the xorg.conf correctly for my ATI Radeon 9200?

#34: Re: ..

by Vassili on 05/30/2004 01:33, refers to #33

just run fglrxconfig and move the /etc/X11/XF86Config-4 to /etc/X11/xorg.conf

#35: Re: ..

by Thalion on 05/30/2004 06:28, refers to #30

> any objections?

Maybe only perform the check once and store the result (say, in a static bool)? It's not likely to change while you play, if you know what I mean.
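
Something like this, say (just a sketch of that idea, names are placeholders):

float depthcorrect(float d)
{
    // remember the result of the driver check so it only has to be done once
    // (assumes the very first sample is one where the two cases can be told apart)
    static bool checked = false, buggydriver = false;
    if(!checked) { buggydriver = (d <= 1.0f/256.0f); checked = true; }
    return buggydriver ? d*256 : d;
}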

BTW, does Cube allow for maps wider than 256 (in Z-units)? Because then one could make such a map (and maybe they even exist), and this check would no longer work.
