I plan on writing a tile-based game, but would like to have lighting effects, etc., and after seeing the sprite demos of SunBurn it looks like it may prove useful. But I have a few questions:
SunBurn's 2D rendering works very similarly to XNA's SpriteBatch: you can add sprites, which are rendered with the scene, using a very easy-to-use interface.
However SunBurn also allows you to add static sprites, which will continue to be rendered throughout the game (without re-adding them), or until you choose to remove / clear them.
While you can use SunBurn's SceneObject class to contain individual sprites, this is not optimal (due to draw calls, batching, and sprite buffer size). It's often better to create a simplified actor class that emits sprites to SunBurn and allow SunBurn to quickly render all sprites, lighting, and shadows.
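The actor-emitting-sprites pattern above can be sketched roughly like this (a minimal illustration only; the class and method names here are hypothetical, not the actual SunBurn API). Each actor just describes what it wants drawn, and a shared batch collects everything so the renderer can draw it in one go instead of one draw call per object:

```python
# Hypothetical sketch of the "actor emits sprites" pattern.
# None of these names come from SunBurn; they only illustrate the idea.

class SpriteBatch:
    """Collects sprite submissions so they can be drawn in one batch."""
    def __init__(self):
        self.sprites = []

    def add(self, texture, x, y):
        self.sprites.append((texture, x, y))

    def clear(self):
        self.sprites.clear()


class TileActor:
    """A lightweight actor that only emits sprites; it owns no rendering."""
    def __init__(self, texture, x, y):
        self.texture, self.x, self.y = texture, x, y

    def emit(self, batch):
        batch.add(self.texture, self.x, self.y)


batch = SpriteBatch()
actors = [TileActor("grass", col, 0) for col in range(3)]
for actor in actors:
    actor.emit(batch)
print(len(batch.sprites))  # prints 3: all sprites collected for one batched draw
```

The point of the separation is that the batch (and whatever engine sits behind it) decides how to sort, buffer, and draw, while game objects stay simple.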
You can definitely mix 2D and 3D environments. Sprites are not yet editable in the editor or affected by collision, though integration with Farseer is possible (video by Matt Bettcher, one of the developers on the Farseer team):
Let me know if this helps!
Just a few more questions :)
I've watched the following YouTube videos; can you tell me whether they are all 2D or a mixture of 3D and 2D sprites?
I am currently using TorqueX 2D, and they've recently announced that they will no longer be developing TX2D, so I'm on the lookout for another 2D-capable engine to replace it.
All of the SunBurn 2D videos, demos, and examples feature flat 2D sprites (quads with alpha-clipped sprites or sprite-sheets for animation).
Here is an example of the player sprite-sheet diffuse and normal maps in the SunBurn Top-down 2D example:
You can use both SunBurn's forward and deferred rendering with 2D games. Both work equally well, though deferred can render more lights on screen without much of a performance hit.
SunBurn casts top-down (not sideways) 2D shadows. This allows for a more realistic effect and a great deal of perspective (as you can tell by the number of comments on our YouTube videos claiming the sprites are meshes :).
Ideally we're planning for parity between our 3D and 2D features, and certainly some type of 2D scene editing. However, we don't want to get too specific with features; SunBurn already has a lot to offer 2D developers, and we don't want people buying for features that are not yet available.
My aim, at the moment, is to concentrate on a 2D-styled game, and therefore my interest is in the 2D features of the SunBurn engine.
Based on this I was thinking of integrating SunBurn into the game for rendering and writing all the other required features myself. On the other hand, I was also considering waiting to see what further 2D features are bundled into the SunBurn engine and just doing simple 2D rendering for now.
Before making this decision, could you tell me:
I understand these may be too far into the future to provide any concrete answers, but I'm just trying to get an idea of whether to invest now or wait to see how the engine develops on the 2D front.
The new 2D features will almost certainly be part of the SunBurn 2.x line (a minor version update). It's way too early to talk about specific features, but you can imagine things like collision, scene editing, and similar are being considered. :)
You can use SunBurn for rendering now and adopt the new features as they become available (or continue to use your own if already implemented). While we regularly add new features, we do try to keep them modular so developers can choose to use them or not.
It's good to hear that the 2D features will be in the 2.x line.
The sprite sheet resource and its accompanying normal maps in this example appear to have been made from a 3D model.
I've tried making normal maps for a model using Blender by replacing the model's materials with a white, 100%-ambient-lit texture, applying a black fog (or Mist) over the depth of the model, and rendering this from above. This produces a greyscale image suitable for producing normal maps with, for example, the GIMP, Photoshop, or xNormal. Unfortunately the resulting normal maps do not have the same amount of detail as the normal maps you have in this example.
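For reference, the height-to-normal conversion those tools perform is essentially a gradient computation: the greyscale image is treated as a height field, its x/y slopes become the normal's tilt, and the result is packed into RGB. A minimal NumPy sketch of the idea (my own illustration, not the actual GIMP/Photoshop/xNormal implementation):

```python
import numpy as np

def height_to_normal_map(height, strength=1.0):
    """Convert a greyscale height image (2D array in [0, 1]) into a
    tangent-space normal map packed as 8-bit RGB."""
    # Gradients of the height field approximate the surface slope
    # (axis 0 is rows/y, axis 1 is columns/x).
    dy, dx = np.gradient(height)
    # Normal direction is (-dx, -dy, 1/strength): steeper slopes tilt more,
    # and a larger strength exaggerates the effect.
    nz = np.ones_like(height) / max(strength, 1e-6)
    n = np.stack([-dx, -dy, nz], axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    # Repack components from [-1, 1] into [0, 255].
    return np.rint((n * 0.5 + 0.5) * 255).astype(np.uint8)

flat = height_to_normal_map(np.zeros((4, 4)))
print(flat[0, 0])  # a flat surface packs to the familiar (128, 128, 255) blue
```

This also shows why texture-derived maps lack detail: the only information available is the slope of one greyscale channel, whereas a bake from a 3D model samples the true surface normals.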
The normal maps that you have appear to have been made by rendering the model with specific coloured lights. Could you share how this is done?
Creating normal maps from source textures (height, diffuse, or other) is somewhat limited with regard to detail. If you already have a 3D model it's more accurate to bake out a tangent space normal map from the model.
This can be done with a high to low poly bake-down using the 3D model as the high poly and a quad as the low poly. I'm not sure how specifically to do this in Blender, but I'm guessing it's supported as Blender is a very complete modeling package.
We basically took a high poly character model and baked the normal information onto a flat plane. Baking from a 3D object will always give greater normal information than creating one from a texture or heightmap.
Here's a tutorial for baking normal maps in Blender - http://vimeo.com/2936073. I'm not familiar with the program but I assume that our method (baking a high poly object to a plane) would work just the same. We also baked out the diffuse using the same method.
Let me know if that helps,
Hey Alex and John,
Thank you very much for the excellent support. I did try something similar with an nVidia tool, but I guess I must have been doing it wrong. I'll be having another look at this at the weekend; I'll let you know how I get on then.
I have managed to get this working and I'm happy with the results. The video tutorial on baking normal maps was very good, but although I could recreate what was described in the video, I always got a blank normal map when I used my character model joined to the flat plane as the high-poly model. Using the same models with the xNormal program produced excellent normal maps, and I should be able to set up a batch file to automate this. Unfortunately some of the geometry I am using has n-gons in it, and xNormal balks at this. I'm under the impression that triangulating these models will get rid of the n-gons. I'll be giving this another blast next weekend.
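For convex faces, triangulation really is that simple: fan the n-gon out from its first vertex. A minimal sketch of the idea (modelling tools like Blender do this for you, and also handle concave faces, which a plain fan does not):

```python
def triangulate_fan(face):
    """Split a convex n-gon (list of vertex indices) into triangles
    by fanning out from the first vertex."""
    return [(face[0], face[i], face[i + 1]) for i in range(1, len(face) - 1)]

# A quad becomes two triangles; a pentagon becomes three.
print(triangulate_fan([0, 1, 2, 3]))          # [(0, 1, 2), (0, 2, 3)]
print(len(triangulate_fan([0, 1, 2, 3, 4])))  # 3
```

An n-gon with n vertices always yields n - 2 triangles, which is why triangulated exports are safe for tools like xNormal that only accept triangles.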
I was able to bake-down a number of 3D objects to planes / sprites in Blender, and the results are great.
I used this bake-down tutorial and placed the plane (the low poly) above the 3D object, as the bake rays cast down from the plane:
Lol! I had the plane below the 3D object. I'll give it another shot this weekend.
EDIT: I didn't try this weekend. I'm using xNormal for the moment but will have a look at using Blender next weekend.