
Cube & Cube 2 FORUM


gluUnProject

by Thalion on 03/20/2004 17:20, 20 messages, last message: 04/09/2004 11:20, 1727 views, last view: 05/04/2024 23:59

Interesting...

I've been tracking that story about missiles exploding right in front of you, buggy ATI drivers, and so on. So I downloaded the Mesa source code, took the code for gluUnProject out of it (renaming it cube_gluUnproject), put it into renderextras.cpp, and replaced the corresponding gluUnProject call there.

Now it should no longer depend on the drivers, right? But the bug is still there! Which is weird, since I remember it working fine with nVidia drivers... maybe the problem is somewhere else? glReadPixels, perhaps?


#1: idiot

by onvol.network.nyt.fire on 03/21/2004 07:07

You should never use someone else's source, not even with permission. You are a big asshole.


#2: Re: idiot

by pushplay on 03/21/2004 08:07, refers to #1

What the hell?


#3: ..

by staffy02 on 03/21/2004 10:28

???


#4: Re: idiot

by Thalion on 03/21/2004 14:49, refers to #2

Just a troll... now can we pretend it isn't there and get back to the topic? =)


#5: Re: idiot

by pushplay on 03/21/2004 18:27, refers to #4

Are we sure the Mesa code isn't buggy?


#6: Re: idiot

by Thalion on 03/21/2004 20:20, refers to #5

A good question. But if it is used by the ATI drivers, isn't it also used by the nVidia drivers? And the latter do work...

But if you are right and the bug is indeed in Mesa, this is actually good, because then we have the code, so we can find the bug and fix it! =)


#7: Ok

by pushplay on 03/22/2004 02:26

I don't think nVidia is under any obligation to use Mesa's code. I think you need a strategy for nailing down where the error is.


#8: Re: Ok

by Drakker_ on 03/22/2004 03:23, refers to #7

The nVidia driver doesn't use Mesa at all. When you install it, it even installs its own library over the system default to make sure Mesa is not used.


#9: Re: Ok

by Thalion on 03/22/2004 07:56, refers to #8

Aha. And the ATI driver _does_ use Mesa, at least for the GLU functions... on the other hand, when I switched OpenGL to XFree's software rendering (which also uses Mesa), it did work! Slow enough to be unplayable, but the shots were traced correctly.

All in all, this is just weird. A strategy? I'd like to hear any suggestions.


#10: SOLVED!!!

by Thalion on 03/22/2004 09:37

Yes, that's it. I've found the reason. Well, not exactly, but I've found a way to fix it =)

The problem was with glReadPixels. To be exact, the problem is that ATI uses a 24-bit Z-buffer (software Mesa uses 16-bit), which seems to confuse gluUnProject for some reason. To fix it, cursordepth in readdepth() needs to be multiplied by 2^24 and then divided by 2^16 - in other words, multiplied by 256 (thus pretending that the Z-buffer depth is 16-bit). After that, it works.

I'm not sure whether it is a gluUnProject bug or not - I don't know the math well enough to understand how it works anyway =) - but a temporary solution could be to provide some sort of command-line switch to enable this "24-bit to 16-bit" hack.

I really hope to see it in the official version, since self-compiled doesn't work with official servers...
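
For reference, a minimal sketch of the hack. The names follow Cube's readdepth()/cursordepth, but the surrounding details and the depthhack switch are my assumptions, not the actual patch:

#include <GL/gl.h>
#include <algorithm>

bool depthhack = true;      // hypothetical command-line switch
GLfloat cursordepth = 1.0f;

// reads the depth value under the crosshair
void readdepth(int w, int h)
{
    glReadPixels(w/2, h/2, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &cursordepth);
    // the buggy driver returns roughly 1/256 of the expected [0..1] value,
    // so scale by 2^24/2^16 = 256 to compensate (and clamp, to be safe)
    if(depthhack) cursordepth = std::min(cursordepth*256.0f, 1.0f);
}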


#11: Re: SOLVED!!!

by D.plomat on 03/22/2004 13:41, refers to #10

Congrats. What card model did you test this on?

Mine is a Radeon IGP320M (aka Radeon Mobility U1); it's very close to a Radeon 7500.

I have done some tests on this; when Aard saw the results, he also said that the problem is in glReadPixels.

From what I understood, gluUnProject is (at least for consumer video cards) implemented in software, but glReadPixels is much lower-level stuff and dependent on the hardware.

What I wonder is whether it's the same problem on different cards, or at least whether it covers all ATI cards... I'll test this x256 workaround to check, but it's probably a different problem on my machine, as it behaves differently depending on the resolution: sometimes it always explodes at 2-3 "meters" from the player, and sometimes the projectile can pass through a wall.

Also, the driver I use is the radeon driver from XFree with an experimental patch for IGP chips, not one provided by ATI (so at least I can try to fix it in the driver code itself, if I manage to understand it...)


#12: Re: SOLVED!!!

by Thalion on 03/22/2004 15:19, refers to #11

Radeon 9200.

By the way, I don't think glReadPixels returns wrong results. No, it returns correct values, considering that the buffer is 24-bit. However, gluUnProject (or maybe Cube?) is unable to cope with that.


#13: Re: SOLVED!!!

by D.plomat on 03/22/2004 21:12, refers to #12

According to the OpenGL reference,

Depth values are read from the depth buffer. Each component is converted to floating point such that the minimum depth value maps to 0.0 and the maximum value maps to 1.0. Each component is then multiplied by GL_DEPTH_SCALE, added to GL_DEPTH_BIAS, and finally clamped to the range [0,1].

So normally the cursordepth value should be hardware-independent.

Also, it appears that the problem I have with my IGP320 is different: those values are always in ]0..1[, but they seem to have a (very short) "maximum" range and sometimes ignore some cubes.

Something strange: I read in the topic on the OpenGL forum that using GL_DOUBLE might help, so I tested it with
double cursordepth;
...
glReadPixels(w/2, h/2, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &cursordepth);
Then it behaves differently - a bit better, but still buggy (i.e. the rocket can fly across half of the nudist map, but still explodes in the air)... also, another strange thing: sometimes the rocket "sticks" to a wall for some seconds, then explodes.

As for Mesa being buggy, I'll test with the software renderer just to be sure.


#14: ..

by Thalion on 03/23/2004 01:59

You've quoted the reference right, but then missed the point =) It IS hardware-dependent. The value is just clamped to the range [0..1]. However, if your Z-buffer is 16-bit, then a value of 1 means the object's physical (internal, so to say) Z coordinate is 2^16, while in the case of a 24-bit Z-buffer it is 2^24! This is easy to see: add a line to the code that prints the value read by glReadPixels, start Cube (but don't move!), and look at what it prints. For Mesa, it was always higher than 0.9. For the ATI drivers, it was somewhere around 0.003...

GLdouble won't help here, because glReadPixels is told to write GLfloat data (GL_FLOAT): it writes 4 bytes into your 8-byte variable, so if you use a GLdouble you're sure to get glitches.
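
In case anyone wants to reproduce this, a minimal sketch of such a debug line (printcenterdepth is an invented name; call it once per frame after the scene is drawn):

#include <GL/gl.h>
#include <cstdio>

void printcenterdepth(int w, int h)
{
    GLfloat d = 0.0f; // must be a GLfloat, since we ask for GL_FLOAT below
    glReadPixels(w/2, h/2, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &d);
    printf("center depth: %f\n", d); // Mesa: >0.9; buggy ATI driver: ~0.003
}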


#15: Re: ..

by D.plomat on 03/23/2004 11:17, refers to #14

OK, so your driver is really returning a value in [0..1/256]. Maybe this can help Aard with his tests on this bug; on mine I had the values below, but never one as small as 0.00x.

Here are some results I had:

With the correct driver on Windows:

> On Windows, I always get these values, whatever the screen
> resolution: when facing and touching a wall: ~0.86; when pointing at
> the other end of the map: ~0.998

With the buggy experimental driver:

> Also, I've double-checked that, but my estimates of the distances
> were too approximate: 0.981781 really is 1 meter, both on
> Windows/Linux and on my other PC with the GeForce. In fact it's
> another case: 0.994278 gives ~4 meters (I get this one only if I
> launch in 1280x800 and then in anything <= 320x200; looking closer,
> I'd say that in 800x600 it's ~2 meters - nearly at the limit of the
> rocket radius, but it still hurts (maybe some scale value in the
> driver is not reset properly...))

So this clearly isn't the same bug, but the bug you have is probably more frequent, so it's good to have a workaround, because many people want it... as for my bug, well, it's an experimental driver from a patched XFree devel snapshot, so the best thing I can do is probably wait for a stable release ;)


#16: Re: ..

by Thalion on 03/23/2004 11:35, refers to #15

It seems the problem is that the ATI driver uses those 24 bits to scale the value, not to increase precision. I don't really know whether that is correct or not. But I also hope for a workaround.

Oh, by the way: XFree 4.4 was released quite some time ago (February 29).


#17: Re: ..

by D.plomat on 03/23/2004 11:53, refers to #16

Yes, but support for the IGP series chips is 2D-only :(

...looks like this patch didn't make it into the stable XFree branch... maybe for 4.4.1?


#18: Re: ..

by Aardappel on 03/28/2004 22:09, refers to #14

It is interesting to see what the problem is on the ATI cards, but it is STILL a driver bug. Whether the depth value is represented internally as 0 to 2^16 or 0 to 2^24 doesn't matter; both should be mapped to 1.0 when I read it as a float. The latter just gives more precision across its range.

What appears to go wrong is that inside the driver the value is read as 16-bit, then scaled to 1.0 as if it were 24-bit (2^16/2^24 = 1/256, which matches the ~0.003 readings)... this would explain the small scale. It doesn't explain D.plomat's problems yet, however.

A possible solution (hack) I have been thinking about is to make sure the player always spawns at the same spot on the start map, record the distance on the first frame, compare it with a stored value that is known to be correct, and compute a scaling factor from the two. That would at least work for all buggy drivers, but it's rather ugly; a rough sketch follows.
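
(A hypothetical sketch of that calibration; knowndepth and calibratedepth are invented names, and the correct value would have to be recorded beforehand on a non-buggy driver:)

float depthscale = 1.0f;

// called once, on the first frame after spawning at the fixed spot
void calibratedepth(float measured)
{
    const float knowndepth = 0.97f; // assumed correct reading at that spot
    if(measured > 0.0f) depthscale = knowndepth/measured;
}

// ...then every frame: cursordepth = min(cursordepth*depthscale, 1.0f);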

Another workaround is to read the depth as an int rather than a float, and do the float conversion ourselves. Since the depth value is always large, the high bits will be set, so it will be easy to portably detect whether it is a 16-, 24-, or 32-bit value, even if the driver is lying about it; see the sketch below.
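
(A sketch of that idea, under the assumption that a buggy driver hands back raw buffer bits instead of scaling them to the full 32-bit range:)

#include <GL/gl.h>

float readdepthportable(int w, int h)
{
    GLuint z = 0;
    glReadPixels(w/2, h/2, 1, 1, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, &z);
    // a conforming driver scales the value to the full 32 bits; a lying
    // one may return raw 16- or 24-bit values, which (for large depths)
    // we can detect from the highest set bits
    double scale = 4294967296.0;              // 2^32
    if(z < (1u<<16)) scale = 65536.0;         // looks like a raw 16-bit value
    else if(z < (1u<<24)) scale = 16777216.0; // looks like a raw 24-bit value
    return (float)(z/scale);
}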

One interesting thing would be for you two to test what values GL_DEPTH_SCALE and GL_DEPTH_BIAS are set to. They should default to 1 and 0; the driver has no business changing them. But maybe it has set them, in which case a portable fix could be made as well.
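
(For reference, querying them looks like this:)

#include <GL/gl.h>
#include <cstdio>

void printdepthtransfer()
{
    GLfloat scale = 0.0f, bias = 0.0f;
    glGetFloatv(GL_DEPTH_SCALE, &scale); // should be 1
    glGetFloatv(GL_DEPTH_BIAS, &bias);   // should be 0
    printf("GL_DEPTH_SCALE=%f GL_DEPTH_BIAS=%f\n", scale, bias);
}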


#19: Re: ..

by Thalion on 03/29/2004 01:49, refers to #18

> The latter just gives more precision across its range.

Well, it seems that ATI doesn't use it for precision, but rather for scaling.

> One interesting thing would be for you two to test what values GL_DEPTH_SCALE and GL_DEPTH_BIAS are set to.

This was the first thing I did when I realized what was happening =) No, they're correct: 1 and 0, respectively.


#20: I know this is COMPLETELY off the topic...

by Lethedethius on 04/09/2004 11:20

I know this is TOTALLY off topic, but yo Aard, have you gotten the models for the next update of Cube yet? *from dcp...* :) *is waiting to blast people with better graphics :P*...

