"Doom3 is up to 112% slower than the 6600GT"
- Nvidia PR doing the math. The X700 apparently renders at a negative framerate in Doom3.
Kombatant
Friday, July 2, 2004

Excellent as always pal; always eager to try out your new demos

Richteralan
Friday, July 2, 2004

Obviously my problem has been ignored @ Beyond3D.

If I'm being mature, I'd rather not say the first sentence in your demo description.

Wester547
Friday, July 2, 2004

LOL, you're pretty persistent about getting around nVidia's "future-proof" shader technology, aren't you, Humus? That's one damn impressive demo. But keep in mind that the jump from Shader Model 2.0/2.x to Shader Model 3.0 isn't as great as the jump from 1.x to 2.0/2.x, just as that jump wasn't as great as the one from the fixed-function hardware T&L pipeline to the introduction of programmable shaders. So especially with Shader Model 2.x (the enhanced iteration of 2.0 used by the nVidia GeForce FX and ATi Radeon X800 series), a demo like this is still very possible.

Anonymous
Friday, July 2, 2004

nVidia owns ATI this time round...

g__day
Friday, July 2, 2004


Once again Humus - very impressive!

Da3dalus
Friday, July 2, 2004

Very nice demo, Humus

sqrt[-1]
Friday, July 2, 2004

Not that I don't consider this a good demo, but I think you should have demonstrated it in a "real world" scenario (i.e. by using scissoring, etc.).

I also question how well this would hold up if you had to do multiple if statements (or even a loop).
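
Roughly, the trick as I understand it looks like this in plain OpenGL; drawFullscreenQuad() and the shader names here are just hypothetical stand-ins:

    glEnable(GL_STENCIL_TEST);
    glClearStencil(0);
    glClear(GL_STENCIL_BUFFER_BIT);

    // Pass 1: a cheap shader evaluates the branch condition and kills
    // fragments where it fails, so stencil = 1 only where it holds.
    glStencilFunc(GL_ALWAYS, 1, ~0u);
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
    drawFullscreenQuad(conditionShader);

    // Pass 2: the expensive "if" body; early stencil rejection skips
    // every pixel that wasn't tagged, so the shader never runs there.
    glStencilFunc(GL_EQUAL, 1, ~0u);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    drawFullscreenQuad(expensiveShader);

    // Pass 3: the "else" body where stencil is still 0.
    glStencilFunc(GL_EQUAL, 0, ~0u);
    drawFullscreenQuad(cheapShader);

Each extra if/else means another tag pass plus shade pass, which is exactly why I wonder about multiple branches or loops.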

JustJess:
http://www.microsoft.com/whdc/winhec/partners/shadermodel30_NVIDIA.mspx

When Humus can hack a texture lookup in a vertex program THEN I'll really be impressed.
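
For reference, a texture lookup in the vertex stage would look something like this in legacy GLSL (a hypothetical sketch, as a C string one would feed to glShaderSource(); heightMap is an assumed displacement texture):

    const char* displaceVS =
        "uniform sampler2D heightMap;\n"
        "void main() {\n"
        "    // Sample the height map per vertex; texture2DLod is needed\n"
        "    // because the vertex stage has no derivatives for mip selection.\n"
        "    float h = texture2DLod(heightMap, gl_MultiTexCoord0.xy, 0.0).r;\n"
        "    vec4 p = gl_Vertex + vec4(gl_Normal * h, 0.0);\n"
        "    gl_Position = gl_ModelViewProjectionMatrix * p;\n"
        "}\n";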

I hope this is not the start of a trend of demos trying to bash Nvidia....


Anonymous
Friday, July 2, 2004

Erm, hasn't this technique been around for a while? I don't see how this is bad for Nvidia when the same method can be applied to Nvidia cards as well as ATI cards, especially since real dynamic branching is only a small part of SM3.0. (And couldn't this and dynamic branching be used at the same time, even if some instructions would be redundant?) I'd like to see real dynamic branching support added to the demo so we can compare the two methods, and see what happens when both are used at once.
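
For comparison, real dynamic branching lives inside the fragment shader itself, something like this (again a hypothetical legacy-GLSL sketch in a C string; the blur loop just stands in for some expensive path):

    const char* branchFS =
        "uniform sampler2D baseMap;\n"
        "uniform float threshold;\n"
        "void main() {\n"
        "    vec4 base = texture2D(baseMap, gl_TexCoord[0].xy);\n"
        "    if (base.a > threshold) {\n"
        "        // Expensive path: the GPU itself skips this per pixel,\n"
        "        // no stencil passes needed.\n"
        "        vec4 sum = vec4(0.0);\n"
        "        for (int i = 0; i < 16; i++)\n"
        "            sum += texture2D(baseMap,\n"
        "                gl_TexCoord[0].xy + vec2(float(i) * 0.001, 0.0));\n"
        "        gl_FragColor = sum / 16.0;\n"
        "    } else {\n"
        "        gl_FragColor = base; // cheap path\n"
        "    }\n"
        "}\n";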
