Texture mapping
-
@Gothon I was absolutely floored by your texture mapping project in the "game jam", and I have a few questions. I didn't see a dedicated thread for it, so I figured I'd just make one:
- What are the current limitations to your approach? For instance, limits on texture or surface size?
- Would it be difficult to add some kind of distance-based "fading"-- sort of like the effect on the surfaces in the "Doom" engine?
- How tough would it be to do geometric shapes other than cubes-- or is that just crazy talk?
- What about lighting-- or is that even crazier talk?
- How tough would occlusion culling be? I imagine that would be necessary for performance reasons.
To me, your demo was a watershed moment for FUZE, because now one could draw your cubes into a "room" shape, put the camera inside, and voilà-- a full, textured 3D environment.
-
Yeah this honestly seems like something I wanna know.
-
Regarding other shapes - correct me if I'm wrong because I haven't seen the code, BUT...
Let's say that you want to draw a triangle and render an image to the surface. It's not going to end well, because you can't create a triangle-shaped image. So when you use drawQuad to draw the texture image you presumably created, it's going to get very squashed at the "pointy end" of the triangle. Now, if you know this in advance then maybe you can take it into account when you create your image, but that's going to be far from easy.
I may be barking up the wrong tree entirely, because at the end of the day these are all assumptions about how this texture mapping works in the first place. If something is shared I should download it and take a look.
-
These are some good questions @Spacemario
- What are the current limitations to your approach? For instance, limits on texture or surface size?
The largest image buffer I can create with createImage() has the same dimensions as the docked screen (1920x1080), and I can make several of them; I don't know what the maximum is. The bottleneck is more on the geometry side, since I am processing the vertices in the FUZE interpreter. I can actually get over 1000 calls to drawQuad() into a frame, but only if there is zero computation (everything is pre-computed). At the moment I only fully rotate a small number of corner vertices and calculate the rest of the vertex positions using lerp().
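To give a rough idea of the corner-plus-lerp trick, here is a Python-style sketch of the concept (not my actual FUZE code; the function names are made up for illustration):

import math

def rotate_y(p, angle):
    # Rotate a 3D point around the vertical (Y) axis.
    x, y, z = p
    c, s = math.cos(angle), math.sin(angle)
    return (c * x + s * z, y, -s * x + c * z)

def lerp3(a, b, t):
    # Linear interpolation between two 3D points.
    return tuple(a[i] + (b[i] - a[i]) * t for i in range(3))

# Fully rotate only the two end corners of a 16-cube row...
angle = math.radians(30)
corner_a = rotate_y((0.0, 0.0, 0.0), angle)
corner_b = rotate_y((16.0, 0.0, 0.0), angle)

# ...then place the in-between vertices by lerping between those corners.
# Rotation is linear, so lerping the rotated corners gives the same points
# as rotating all 17 vertices individually, with far less per-frame math.
row = [lerp3(corner_a, corner_b, i / 16) for i in range(17)]
print(row[8])  # the midpoint, identical to rotate_y((8, 0, 0), angle)

That is why the interpreter cost stays manageable: the trig only happens for a handful of corners, and everything else is cheap interpolation.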
- Would it be difficult to add some kind of distance-based "fading"-- sort of like the effect on the surfaces in the "Doom" engine?
drawQuad() has a 'tint' parameter which I currently use to apply lighting. It should be easy enough to use it for distance fading, such as a distance fog or simulating being underwater. I don't know about the Doom engine, however (I will have to look that up).
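A rough sketch of the idea (Python-style pseudocode; the 0xRRGGBBAA packing is just an assumption for illustration, not necessarily the tint format FUZE expects):

def fade_tint(distance, near=4.0, far=32.0):
    # Fade factor: 1.0 at 'near' or closer, 0.0 at 'far' or beyond.
    t = (far - distance) / (far - near)
    t = max(0.0, min(1.0, t))
    level = int(255 * t)
    # Grey-scale tint packed as RGBA: white = untouched, black = fully faded.
    return (level << 24) | (level << 16) | (level << 8) | 0xFF

print(hex(fade_tint(4.0)))   # 0xffffffff, no fading
print(hex(fade_tint(18.0)))  # 0x7f7f7fff, half faded
print(hex(fade_tint(40.0)))  # 0xff, faded all the way to black

Fading toward a fog colour instead of black would just mean blending the tint toward that colour rather than scaling it down.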
- How tough would it be to do geometric shapes other than cubes-- or is that just crazy talk?
I actually wrote the current program with the intention of later making games with ramps, half-height blocks, and maybe even half-height ramps. It turns out a quad is actually two triangles stitched together, so if you do some testing, you should be able to create a single triangle by setting the 4th vertex to the same position as one of the other vertices to hide the second triangle. Martin does have a point, however, that there may be some distortion if the math is wrong. A ramp, for example, is sqrt(2) times longer than it is wide, so you may need to adjust the texture to prevent stretched pixels.
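A quick sketch of the degenerate-quad idea (Python-style; it assumes the quad is split into triangles (v1, v2, v3) and (v1, v3, v4), which may not match exactly how drawQuad() splits it):

def triangle_as_quad(a, b, c):
    # Four vertices for a quad call: the 4th repeats c, so the second
    # triangle (a, c, c) has zero area and only (a, b, c) is visible.
    return [a, b, c, c]

print(triangle_as_quad((0, 0), (64, 0), (32, 48)))
# [(0, 0), (64, 0), (32, 48), (32, 48)]

If the split order turns out to be different, you just repeat a different vertex until the second triangle collapses.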
- What about lighting-- or is that even crazier talk?
Doing a light calculation for 1000 faces every frame would be quite expensive. Currently I do a simple light calculation for one cube and apply it to all the terrain cubes, because they are all facing the same direction. This lighting is simple flat shading using the 'tint' parameter; however, it should be possible to do per-texel light mapping using the multiply blend mode (the setBlend() function). Per-texel light calculations are even more expensive of course, so you will need to find a way of pre-computing them (e.g. only do it when a light source is added or removed).
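Something like this is the kind of flat-shading calculation I mean (a Python-style sketch with made-up names, not the actual code): one Lambert term per face orientation, computed once and reused for every cube.

import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def flat_shade(normal, light_dir, ambient=0.3):
    # Lambert term clamped at zero, plus a little ambient so faces that
    # point away from the light are not pure black.
    d = sum(n * l for n, l in zip(normal, normalize(light_dir)))
    return ambient + max(0.0, d) * (1.0 - ambient)

light = (0.5, 1.0, 0.25)
# Computed once for the six axis-aligned cube face normals, then reused
# for every terrain cube (they all share the same six orientations).
face_normals = {
    "+x": (1, 0, 0), "-x": (-1, 0, 0),
    "+y": (0, 1, 0), "-y": (0, -1, 0),
    "+z": (0, 0, 1), "-z": (0, 0, -1),
}
shade = {name: flat_shade(n, light) for name, n in face_normals.items()}
print(shade)  # top faces brightest, bottom faces left at the ambient level

Each shade value would then be turned into a grey tint for the faces with that orientation, which is why the per-frame cost stays tiny.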
- How tough would occlusion culling be? I imagine that would be necessary for performance reasons.
Yes, minimizing the number of faces rendered is vital, but it is also expensive to do every frame. I currently render 3 faces for every cube, since I only have the back faces culled. However, any time two solid cubes are adjacent to each other, the connecting faces aren't visible. Iterating over all the blocks to remove these faces takes some time, but it only needs to be recomputed when blocks are added or removed, and only for the blocks adjacent to the changes.
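The interior-face removal boils down to something like this (Python-style sketch; the grid representation is just for illustration):

DIRS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def visible_faces(solid):
    # 'solid' is a set of (x, y, z) cells that contain a block. A face is
    # only worth drawing when the neighboring cell in that direction is empty.
    faces = []
    for (x, y, z) in solid:
        for dx, dy, dz in DIRS:
            if (x + dx, y + dy, z + dz) not in solid:
                faces.append(((x, y, z), (dx, dy, dz)))
    return faces

blocks = {(0, 0, 0), (1, 0, 0)}    # two cubes touching along +x
print(len(visible_faces(blocks)))  # 10, not 12: the shared pair is culled

Since the result only changes when blocks change, it only needs re-running for the cells around an edit rather than for the whole map.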
To me, your demo was a watershed moment for FUZE, because now one could draw your cubes into a "room" shape, put the camera inside, and voilà-- a full, textured 3D environment.
Bear in mind that what I have made uses 'orthographic projection', and is rendered in far-to-near order by controlling the direction of the loops that iterate over the terrain. It bears a lot more similarity to an isometric game than to a first-person game using 'perspective projection' with a 'depth buffer', which you can get using FUZE's 3D graphics functions (even though these don't yet support geometry with user-defined vertices and textures).
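The far-to-near ordering is essentially a painter's-algorithm approach: pick the loop directions from the view direction so the nearest cubes are drawn last and cover whatever is behind them. A Python-style sketch of the idea (names are illustrative, not the actual code):

def draw_order(size, near_x_is_max, near_z_is_max):
    # Iterate from the far side of the grid toward the side nearest the
    # camera, so nearer cubes are drawn last and overdraw the ones behind.
    xs = range(size) if near_x_is_max else range(size - 1, -1, -1)
    zs = range(size) if near_z_is_max else range(size - 1, -1, -1)
    return [(x, z) for z in zs for x in xs]

# If the camera sits nearest the (max x, max z) corner, (size-1, size-1)
# must come out last:
print(draw_order(3, True, True)[-1])  # (2, 2)

That ordering is what lets it work without a depth buffer, but it is also why it behaves more like an isometric renderer than a free-look first-person one.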