Ok, so when I wrote the bit about the rock close by and the rock far away being “rendered at the same level of detail”, I didn’t express myself correctly.


As you noted, Luke, they do obviously render objects in the distance with less detail, especially since (as you observed) “[less detailed] things in the distance are less processor heavy…” So when far-away buildings clip into existence they have less detail and fewer pixels, yes, but…


What I was trying to talk about was more nuanced simulation of vision.
You wrote, in effect, “bluish fuzziness, overlapping contours, aerial perspective, and the distribution of light and shade … fuzzing distant objects … this was actually figured out pretty early on.”


My point is that they only THINK they figured it out! Or rather, they developed techniques (e.g. fog) for dealing with the limitations of computing power and rendering, and they are still using those workarounds as the basis for current thinking!


Again, I don’t know all the details and terminology, but my suspicion is that these things are often thought of simplistically – for example, just because you can render many more pixels doesn’t mean that all you have to do to make your game look good is add more “detail” (more pixels)! Or: just because you _can_ work with more detail doesn’t always mean you should. (Battle for Middle Earth comes to mind – as far as I’m concerned it was way too detailed for its own good.)


“An interesting side note on how improved visuals can compensate for lacking in other sensory experience kind of has to do with this, though: volumetric fog…”


Another simplistic idea, to my way of thinking, is fog effects. In most cases I’ve seen, it’s just a cheat-slash-workaround! (I mean, come on, how often do you actually see fog in real life? Even in San Francisco…) Now that we’ve got more graphics power/memory/speed, what they need to do is take the technology behind the “fog”, make it more transparent (stop making it white!), and use it as a layer that makes distant objects fuzzier in a way that more closely simulates how we actually see!
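To make the idea concrete, here’s a toy sketch of what I mean by reusing the fog machinery as a faint distance layer. This isn’t any engine’s actual API – the exponential falloff, the density constant, and the bluish haze colour are all just values I made up for illustration:

```python
import math

def haze(pixel_rgb, depth, haze_rgb=(180, 190, 200), density=0.002):
    """Blend a pixel toward a subtle haze colour based on scene depth.

    Classic fog does exactly this but with an opaque white and a heavy
    density; the suggestion here is a faint, near-transparent blend so
    distant objects lose contrast without disappearing. All constants
    are illustrative, not taken from any real engine.
    """
    # Exponential fog factor: 0.0 = untouched, approaches 1.0 = fully hazed.
    f = 1.0 - math.exp(-density * depth)
    return tuple(round((1 - f) * c + f * h)
                 for c, h in zip(pixel_rgb, haze_rgb))

# A nearby pixel is barely changed; a distant one drifts toward the haze.
near = haze((200, 60, 40), depth=10)
far = haze((200, 60, 40), depth=2000)
```

The point of the low density and the non-white haze colour is that the effect stays invisible as “fog” – it just quietly desaturates and cools things with distance.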


Let me show you some examples of what I mean by “how we see” …


Take this demo image of in-game rendering done with Unreal Engine 3 (okay, this is 2004 technology…):


[Full Size: http://lyberty.com/gallery/how_i_would_do_games/UE3-Terrain-001.jpg]


Yes, it looks good and all, but to me they’re missing nuances of simple techniques of blurring, color, and noise that could be used to change the game world from being “flat” to being a better simulation of reality.

To continue with the fog idea: here, they appear to be using the fog just because they haven’t thought of a better way to make far-away things look different to us in a way that clearly indicates they are far away. (Long-winded, but I think you know what I mean.)


When I was saying that everything is rendered at the same level of detail, what I actually meant was this flat effect. Let me show you some things I would try:


1. They have mathematical representations of distance in virtual space, right? What about a simple, subtle, multi-layered blur repeated at set distances that clips through everything (as if each blur layer were a wall)?

Note the windmill in the original pic. Here’s my improvement:




Now I'll admit this has probably already been thought of, and I think the effect was actually visible in older games, but my point is that they need to do it better. In the example shown here, the back of the tower (with the fan blades) is apparently rendered at the same level of clarity as the front of the tower (the side facing you, the viewer). So the far-off tower is fuzzy; great. (And the shadows help a lot.) But let's add some depth even to things relatively close to you!
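The “blur walls” idea from point 1 can be sketched as a tiny depth-to-blur mapping. The spacing, step size, and cap below are all invented numbers, just to show the shape of the idea:

```python
def blur_layer(depth, layer_spacing=250.0, max_layers=4, step=0.5):
    """Map a fragment's depth to a blur radius via stacked 'blur walls'.

    Every `layer_spacing` units of distance, the fragment passes through
    one more invisible blur plane; each plane adds `step` to the blur
    radius, capped at `max_layers` planes. Because the blur accumulates
    in discrete layers, even things relatively close to the viewer pick
    up a little depth. All constants here are illustrative guesses.
    """
    layers_crossed = min(int(depth // layer_spacing), max_layers)
    return layers_crossed * step

# Close objects get no blur, mid-distance a little, far away caps out.
assert blur_layer(100) == 0.0
assert blur_layer(600) == 1.0
assert blur_layer(5000) == 2.0
```

The layered (rather than continuous) falloff is deliberate: it’s cheap, and it mimics the “wall” behaviour I described, where the effect clips through geometry at fixed distances.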


2. I’d like to see more simulation of FOCUS – we really only see detail in the small space we’re directly looking at. (Most people think they see their whole field of view in focus, but it’s not so – you can observe this by looking across the room and, without moving your eyes, trying to make out the details of the things in your periphery…)

So (again, exaggerated) here’s a view of the world when you’re running, or maybe a “sniper view”:
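As a sketch of the focus idea: blur could ramp up with screen-space distance from wherever the player is “looking”. The sharp radius and falloff below are made-up values standing in for the eye’s narrow fovea, not measurements from anywhere:

```python
import math

def focus_blur(px, py, gaze_x, gaze_y, sharp_radius=60.0, falloff=200.0):
    """Blur amount (0.0–1.0) for a pixel, given the player's gaze point.

    Inside `sharp_radius` pixels of the gaze point everything is crisp;
    beyond it, blur ramps up linearly and saturates at 1.0. The radii
    are arbitrary screen-space numbers chosen for illustration.
    """
    d = math.hypot(px - gaze_x, py - gaze_y)
    if d <= sharp_radius:
        return 0.0
    return min((d - sharp_radius) / falloff, 1.0)

# Looking at the centre of a 1920x1080 screen:
centre = focus_blur(960, 540, 960, 540)      # dead centre: sharp
mid = focus_blur(1120, 540, 960, 540)        # 160 px out: partly blurred
edge = focus_blur(1220, 540, 960, 540)       # 260 px out: fully blurred
```

For the exaggerated “running” or “sniper” views I mentioned, you’d just shrink `sharp_radius` and steepen `falloff`.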



3. They get colors wrong a lot and forget simple things like visual noise. To me, just with the simple addition of a little visual noise, that screenshot gets more depth and appeal. Now, it bears repeating that I’m overdoing the effect so that you can see it clearly, but even so – is it just me, or does the noise actually add depth, even though the image is much less “sharp”?

Full size comparison: http://lyberty.com/gallery/how_i_would_do_games/compare-1.html
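For what it’s worth, the kind of noise I mean is simple monochrome grain, not colour speckle – something like this per-pixel sketch (the strength value is just one I picked; a real implementation would work on the whole frame at once):

```python
import random

def add_grain(pixel_rgb, amount=8, seed=None):
    """Add a little monochrome noise ('film grain') to one pixel.

    The same random offset is applied to R, G, and B, so the grain
    reads as luminance texture rather than colour speckle, and the
    result is clamped to the valid 0-255 range. `amount` is an
    arbitrary strength chosen for illustration.
    """
    rng = random.Random(seed)
    n = rng.randint(-amount, amount)
    return tuple(max(0, min(255, c + n)) for c in pixel_rgb)

# A mid-grey pixel shifts slightly up or down, equally in all channels.
grainy = add_grain((120, 120, 120), seed=1)
```

Applying the same offset across channels is the detail that matters here: colour noise looks like a broken signal, while luminance noise looks like texture – which is (I suspect) why it reads as added depth.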


4. Vision Simulator for Standard Size Monitors


In this model, point at the part of the image that you want to look at. But imagine that this was the standard reticule in a game – in other words, the reticule would be the center of your screen at all times.


Hopefully, it’s obvious that this is exaggerated for effect. The in-game implementation would be much more subtle: the blur would be circular, have fuzzy edges, and would increase as you get further from the center…


(Let me know if the image doesn’t change in your browser as you move your mouse around it: I tested it in Internet Explorer 6 and Opera 9 – JavaScript, CSS, and “ActiveX” have to be enabled…)