New DirectX SDK
Wednesday, June 9, 2010 | Permalink
The June 2010 DirectX SDK
has just been released. I haven't given it a try yet, but there appear to be a few nice things in it, though nothing that qualifies as big news. This release also drops MSVC 2005 support while adding MSVC 2010 support.
[ 1 comments
| Last comment by fmoreira (2010-06-12 22:55:34)
Latest Steam hardware survey
Saturday, June 5, 2010 | Permalink
I've touched on the Steam hardware survey a couple of times before, mostly related to JC2. This month's survey has another very interesting piece of data. Steam has been available for Mac for less than a month, yet MacOS now represents 8.46% of the market. That's an enormous accomplishment for the Steam system to bring on so many Mac users in such a short time! The ratio of Mac users to PC users is already pretty much in line with the Mac vs. PC split in the general market. My next wish is for Valve to expand the system to Linux as well. I don't think a lot of AAA titles would come to Linux anytime soon, but a lot of indie developers would love to explore that market. It may not be large enough to be worth the effort for AAA titles, but for indie games it certainly would be. As someone once explained, being a big fish in a small pond is not so bad.
It's also worth noting that Windows XP keeps dropping. The drop looks a bit disproportionate because MacOS jumped in and took 8.46% of the market, making all Windows versions drop in percentage, but the XP drop is large while the other versions saw only modest drops. If we look at the Windows world only, XP now represents 36.5%, with Windows 7 beating it at 39.2% and Vista at 23.6%.
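The scaling between the two views can be sketched in a few lines. This is just a sanity check of the arithmetic: with MacOS at 8.46% of the overall market, a Windows version's overall share is its Windows-world share scaled by the remaining 91.54% (minor Windows versions are omitted, so the Windows-world shares below don't sum to exactly 100%).

```python
# Windows-world shares quoted in the post, in percent.
mac_share = 8.46
windows_world = {"XP": 36.5, "Windows 7": 39.2, "Vista": 23.6}

# Scale each Windows-world share down by the non-Mac portion of the market
# to recover its approximate share of the overall market.
scale = (100 - mac_share) / 100
overall = {name: share * scale for name, share in windows_world.items()}

for name, share in overall.items():
    print(f"{name}: {share:.1f}% of the overall market")
```

So XP's roughly one-third share of the overall market corresponds to 36.5% of the Windows-only world.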
[ 0 comments
About crunch mode
Wednesday, June 2, 2010 | Permalink
When I was at ATI/AMD a couple of years ago I worked 7.5-hour days. During my more than three years there I can only remember working overtime once. That was because an ISV needed urgent help with what they thought was a driver bug, just an hour or so before the end of the day on a Friday, and they needed it fixed before the weekend. It turned out to be their fault, but anyway, that was the only time I stayed in the office a couple of hours extra. When I decided to switch to the game industry I worried quite a lot about things like overtime, because, you know, I want to have a life too, and the game industry has a really poor record in this area. One of the main reasons I joined Avalanche Studios in particular was that they have an explicit policy basically stating that "overtime is a failure of the management". During my time here I can say that overtime has been rare. Yes, it has happened a few times, in relation to important deliveries. But generally it has been in moderate amounts and has only lasted a couple of weeks at most, with employees able to plan their overtime freely. Since we get paid for overtime work, or alternatively can swap it for vacation at a later time, and the company also provides food during crunch, it's not much of an issue when it does happen.
However, I keep hearing that the situation is not as rosy in many other places. I believe the situation is better among Swedish developers than elsewhere, but I suppose that might have more to do with local laws than a better mentality. I think the situation has improved a lot though, especially after the famous "EA Spouse" story, but I still think there's an immaturity in the industry. Somehow game developers are perceived "from the top" as a bunch of 20-year-old nerds who are all single and would probably stay up all night anyway. Well, I'm 30. Many of my co-workers are married and have kids, and I'd say a majority are in a stable relationship with someone who might care whether they can share dinner at home tonight.
I saw this article today named Why Crunch Mode Doesn't Work: 6 Lessons
. I knew that productivity drops the longer people work. What I didn't know was that the common 40-hour work week was not some kind of arbitrary standard, but the result of research and studies done a century ago. It turns out this is the sweet spot for getting the highest output, at least for industrial jobs. Increase to 10 hours and you not only get less done per hour, you get less done in absolute terms. This raised a question though: should a manager strive for the highest possible output per employee, or the highest output per invested dollar? If the former, the 8-hour day is likely optimal, but if the latter, going down to 7 hours or so might make more sense because of the increased productivity. Say you get perhaps 95% of the output at only 87.5% of the cost. Come to think of it, the 7.5-hour day at ATI was probably not just an arbitrary "bonus half hour free" to be nice to employees; someone probably had a deeper thought there. This schedule applied only to office-type employees, while people working in production had typical 8-hour days. It was probably optimized for different variables by some clever dude.
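The output-vs-cost trade-off above can be illustrated with a toy model. All the numbers here are made up for illustration; the only assumption carried over from the text is that average hourly productivity falls as days get longer, while cost is proportional to hours worked.

```python
def daily_output(hours, base=1.0, decline=0.065):
    # Toy model: average hourly productivity falls linearly with each
    # hour worked beyond the first. The decline rate is invented, chosen
    # only so that a 10-hour day yields less total output than an 8-hour day.
    avg_productivity = base * (1 - decline * (hours - 1))
    return hours * avg_productivity

for h in (7, 7.5, 8, 10):
    out = daily_output(h)
    cost = h  # cost assumed proportional to hours worked
    print(f"{h:>4} h: output {out:.2f}, output per cost unit {out/cost:.3f}")
```

With these made-up parameters the 8-hour day maximizes absolute output, while the shorter days maximize output per dollar, which is exactly the tension the paragraph describes.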
[ 5 comments
| Last comment by Rob L. (2010-06-07 15:28:20)
Saturday, May 29, 2010 | Permalink
It's funny how, as technology develops, what was originally a good idea can become a bad one. EDRAM for video cards is such a thing. AFAIK no video card for the PC ever had it, but it's been used occasionally in game consoles, most recently in the Xbox 360. However, I recall back when Bitboys were trying to enter the PC video card industry with EDRAM-based designs. Unfortunately they never succeeded and none of their products ever saw the light of day. Had they been able to produce one at the time though, chances are it would have worked very well. However, the way frames are rendered today and back in the DX7 era is very different, which makes EDRAM far less ideal now. In the DX7 days you'd probably render to the backbuffer almost all the time, and given no shaders and as many ROPs as there were pipelines, the biggest bottleneck was generally bandwidth to the backbuffer. That's hardly the case anymore. Today shaders are long, bottlenecks are usually ALU or texture fetches, and even if you end up limited at the backend you're normally not bandwidth bound but ROP bound.
Having worked a fair amount with the Xbox 360 over the last 2.5 years, I find that EDRAM mostly stands in the way and rarely provides any benefit. Unfortunately, you can't just render "normally" to a buffer in memory if you prefer; you always have to render to the EDRAM. Once you're done with your rendering, the results have to be resolved to video memory. So even if we assume we are never ROP bound and EDRAM gets to shine with its awesome bandwidth, it really would not buy us much. Each render target operation is immediately followed by a resolve operation copying the data to video memory. During this copying phase the GPU is busy just copying rather than rendering. If the rendering was targeting a video memory buffer to begin with, those writes to memory would be nicely interleaved with the rendering work the GPU does and no resolve would be necessary; once the rendering is done, all that's needed is to flush whatever data resides in the backend cache to memory and you're done.
Sadly, it's not just that it doesn't provide as much of a benefit as it might look on paper; it also alters the rendering model we are all familiar with and adds a bunch of new restrictions. Because EDRAM is still quite expensive in hardware, it's not something we get an awful lot of. The Xbox 360 has 10MB. But if you render at the typical 1280x720 resolution with 2xAA, you need 14MB for the color and depth buffers. This is generally solved by "tiling", which means you render for instance the top of the screen first, then resolve, and then the bottom, and resolve. DirectX 9 for Xbox helps out a bit here by letting you do this fairly automatically by entering a tiling section of the rendering, which is then submitted twice to the hardware, or as many times as necessary depending on how many tiles your render target configuration requires. Sounds fine, huh? Well, until you want to squeeze in another render target operation somewhere in that stream. Say you want to apply some SSAO. You need the complete depth buffer for opaque geometry, and you have to apply the effect before rendering transparent stuff. SSAO can be quite expensive, so instead of using oodles of samples you probably want to take a few, blur the result at half resolution, and then apply that. Well, to blur you need to switch render target, which breaks the model. For everything to work you would need to first resolve everything, do your SSAO passes, then copy that back to EDRAM again and enter a new tiling section. This is of course way too costly, so nobody bothers doing that kind of thing and instead just tries to live with the imposed limitations. So one may attempt to just resolve the depth buffer without switching render target and then apply SSAO in one pass. Unfortunately, not even this is ideal. The problem is that when the top tile enters this code, only the top tile has been rendered, so the depth buffer texture it will use for the effect is incomplete.
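The 14MB figure and the resulting tile count follow from straightforward arithmetic. A back-of-the-envelope sketch (assuming 4 bytes per sample for both the color and depth buffers, with MSAA multiplying the sample count):

```python
import math

def target_size_mb(width, height, msaa, bytes_per_sample=4):
    # Color + depth buffer, each storing `msaa` samples per pixel.
    per_buffer = width * height * msaa * bytes_per_sample
    return 2 * per_buffer / (1024 * 1024)

def tiles_needed(width, height, msaa, edram_mb=10):
    # How many EDRAM tiles a render target configuration requires.
    return math.ceil(target_size_mb(width, height, msaa) / edram_mb)

print(target_size_mb(1280, 720, 2))  # 14.0625 MB, over the 10MB budget
print(tiles_needed(1280, 720, 2))    # 2 tiles (top half, then bottom half)
```

The same arithmetic shows why the budget gets dramatically worse at 1080p with 4xAA and wider HDR formats, as discussed further down.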
So when it samples the neighborhood around pixels close to the tile edge, it will sample over the edge and get data from the previous frame. This often results in visible seams when in motion. It's also common to copy the backbuffer for refraction effects; in many games on the Xbox 360 you'll see visible seams when traveling on water for this reason.
For the next generation of consoles, chances are we want 1080p, full HDR, at least 4xMSAA, and many will probably want additional buffers for deferred rendering or other techniques. I don't think it will be possible to embed enough EDRAM to fit all of that for many games, so if you're designing a future console now and are thinking of using EDRAM, please don't. Or at least let us render directly to memory. Or only let the EDRAM work as a large cache or something, if you really want it.
[ 12 comments
| Last comment by mark (2010-07-27 23:27:20)
Pac-man turns 30!
Friday, May 21, 2010 | Permalink
Happy anniversary Pac-man!
[ 3 comments
| Last comment by fmoreira (2010-05-22 17:10:40)
My other project ...
Friday, April 16, 2010 | Permalink
Some of you may have noticed a bit of a decline in my posting rate on my blog. No DX11 demo yet, little news on Framework4, not much commentary on the state of the industry. Most posts recently have been on Just Cause 2, which of course is natural since we approached and passed the launch of the game. However, the main reason is that there's another project that has taken up a considerable amount of time and certainly occupied my mind a lot over the last half year, which just recently culminated in this:
Yes, those are my fiancée's and my hands. We got engaged on April 4, under the starry sky by the beach on Tenerife. And yesterday we got our rings.
[ 39 comments
| Last comment by rebb (2010-05-21 16:51:20)
Just Cause 2 tops the UK charts
Monday, March 29, 2010 | Permalink
[ 7 comments
| Last comment by vis (2010-04-09 19:38:42)
56.5 percent have DX10+ system
Friday, March 19, 2010 | Permalink
As I touched on before, the lack of XP support in Just Cause 2 is not as bad as it may first have seemed. In fact, now that the demo has been released I haven't heard any more criticism about it, because what we could do with a DX10-only renderer certainly justified the move, and it has now turned more into a selling point of the game.
In addition to this, I find the latest Steam survey interesting. DX10/11 systems are now at 56.5%, which, if we still count about 15% at the bottom of the market as too low-end anyway, means we now reach 66% of the potential customers. In my last post I counted on XP dropping by 2 percentage points per month, but the latest survey noted a drop of 3.5 percentage points in the last month alone.
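The reachable-market figure is just the DX10+ share renormalized against the market that remains after excluding the low-end tail. A quick sketch of that arithmetic:

```python
# Figures from the post: 56.5% of surveyed systems are DX10/11-capable,
# and roughly the bottom 15% of the market is assumed too low-end to be
# potential customers in the first place.
dx10_share = 0.565
too_low_end = 0.15

# Share of the realistic customer base that a DX10-only game can reach.
reachable = dx10_share / (1 - too_low_end)
print(f"{reachable:.1%} of potential customers reached")  # ~66.5%
```

56.5 / 85 comes out at about 66.5%, matching the rounded 66% in the text.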
[ 14 comments
| Last comment by ocsi (2010-05-13 18:00:06)