"UNIX is simple. It just takes a genius to understand its simplicity."
- Dennis Ritchie
Humus
Monday, April 13, 2009

Rohit,
yes, I will enable it in the Linux code as well. I started on the Linux stuff yesterday and it's up and running now too.

n00body,
well, not necessarily. I can still expose features, even if I don't implement them in OpenGL. Also, with OpenGL 3.1 it's much closer than it was in the past. Things like constant buffers exist in OpenGL now. I'm going to need them to share a common interface, so OpenGL 3.1 will be a requirement. However, in some cases I suppose I may want to expose API-specific features. The framework interface doesn't necessarily have to be 100% identical between APIs. Where it makes sense I may diverge from it.
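For illustration, a minimal sketch of what a "constant buffer" looks like through the GL 3.1 uniform buffer API; the block name "PerFrame", binding point 0, and the program/matrix variables are made up for the example:

    // Create a uniform buffer and attach it to binding point 0.
    GLuint ubo;
    glGenBuffers(1, &ubo);
    glBindBuffer(GL_UNIFORM_BUFFER, ubo);
    glBufferData(GL_UNIFORM_BUFFER, 16 * sizeof(float), matrix, GL_DYNAMIC_DRAW);
    glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);

    // Point the shader's uniform block at the same binding point.
    GLuint index = glGetUniformBlockIndex(program, "PerFrame");
    glUniformBlockBinding(program, index, 0);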

Anon,
in my test the forward-compatible flag is not ignored: it generates INVALID_OPERATION if you use deprecated features.
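A rough sketch of that kind of probe, with glBegin standing in for any deprecated entry point:

    // In a forward-compatible context a deprecated call such as
    // immediate-mode glBegin() fails instead of rendering:
    glBegin(GL_TRIANGLES);
    GLenum err = glGetError();   // expect GL_INVALID_OPERATION here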
I definitely agree on Direct State Access. It's at the top of my wish list right now. If any Khronos members are reading this, please include DSA in OpenGL 3.2.

Anon
Sunday, April 12, 2009

You have Nvidia to thank for this extension. Just like their implementation of the forward-compatible flag (simply ignored). Horrible, just horrible.

The biggest thing lacking from OpenGL right now is DSA. The bind-to-edit model is evil, especially for middleware libraries.
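To spell out the difference (glTextureParameteriEXT being the EXT_direct_state_access version; tex is an arbitrary texture name used for the example):

    // Bind-to-edit: touching an object clobbers the current binding,
    // which a middleware library has to save and restore around the edit.
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    // Direct state access: the object is named explicitly and the
    // current binding is left untouched.
    glTextureParameteriEXT(tex, GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);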

Overlord
Sunday, April 12, 2009

@n00body:
Why would it? Technically speaking, DX11 is not that different from DX10.1, at least not in a dramatic way. I look at it more as an update to DX itself than to the hardware.
OpenGL development may be slower, but it's also faster in some respects, since once a card supports something it is usually exposed directly as an extension. Besides, Nvidia doesn't even have 10.1 hardware out yet, so just take a breather.
Once the hardware is out there, OpenGL will support it in some way.

So supporting 3.1 doesn't mean that one is limiting oneself; it may just be the other way around.

Mave
Saturday, April 11, 2009

I totally agree with you. There is so much crap in the OGL 1.x spec that makes writing an OpenGL driver much more complicated than it should be.

Of course, companies such as nVidia, with their huge amount of resources, don't care that writing an optimized OpenGL driver is incredibly difficult. In fact, it's an advantage for them. They've spent a huge amount of time optimizing all this legacy stuff, so why would they want to lose their edge over competitors?

The problem is that the Khronos group is more or less controlled by nVidia, because they are pretty much the only ones contributing new stuff. nVidia deserves a lot of credit for contributing so much, but it also has a downside: their priority is making money. Their goal is not to write the most elegant specification at the expense of their profit.

And will someone one day contribute some validation tests for hardware vendors? That's another big problem that I hope will be resolved one day.

n00body
Saturday, April 11, 2009

Slow though it may be, progress is progress. Since they've shown they will follow through on their deprecation plan, I'm hopeful, if still a bit skeptical about the future.

Question:
I thought the whole point of framework 4 was to exploit all the new features of DX11. So won't supporting OGL 3.1 severely inhibit that goal?

Rohit
Friday, April 10, 2009

I meant: will you be enabling that bit in the Linux port of your code?

Rohit
Friday, April 10, 2009

Actually, since all the crap is an extension now, won't all code have to put the ARB suffix on all of its gl calls? And BTW, whatever the merits of their decision, they are doing what they said they would do. Now that the deprecated stuff has been moved out to an extension and there is no requirement to implement it, I'd expect these things to be dropped from consumer cards over time, at least in the future. IHVs will have an incentive to do this: they'll make sure that only the workstation cards ship drivers with the legacy crap, so that they can screw those who don't update their code. Some poetic justice, eh?
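For what it's worth, a core 3.x application can check whether a driver still exposes the legacy feature set with something along these lines (a sketch using the GL 3.0 indexed extension query; assumes <string.h> and <stdio.h>):

    GLint count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);
    for (GLint i = 0; i < count; i++)
    {
        const char *ext = (const char *) glGetStringi(GL_EXTENSIONS, i);
        if (strcmp(ext, "GL_ARB_compatibility") == 0)
            printf("Driver still exposes the legacy feature set\n");
    }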

BTW, in your framework, you'll be enabling the forward-compatible flag by default, right?

Humus
Friday, April 10, 2009

From the WGL_ARB_create_context spec:

"If the WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB is set in WGL_CONTEXT_FLAGS_ARB, then a <forward-compatible> context will be created. Forward-compatible contexts are defined only for OpenGL versions 3.0 and later. They must not support functionality marked as <deprecated> by that version of the API, while a non-forward-compatible context must support all functionality in that version, deprecated or not."

So yes, providing this flag disables all the deprecated stuff. I have also found that both the AMD and Nvidia implementations do it right, so deprecated function calls are in fact ignored. I haven't checked whether any errors are generated, but at least no rendering comes out of it.
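To tie the spec language to code, creating such a context looks roughly like this; a sketch that assumes <wglext.h> for the token definitions, a valid device context hdc, and that wglCreateContextAttribsARB has already been fetched through wglGetProcAddress:

    // Request a forward-compatible 3.1 context; per the quoted spec
    // text, all deprecated functionality is then unavailable.
    const int attribs[] =
    {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
        WGL_CONTEXT_MINOR_VERSION_ARB, 1,
        WGL_CONTEXT_FLAGS_ARB, WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB,
        0
    };
    HGLRC rc = wglCreateContextAttribsARB(hdc, NULL, attribs);
    wglMakeCurrent(hdc, rc);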
