bouncy-ball
hello world game for g3d development and learning
I worked with game engines like Ogre3D and Irrlicht. These two are not full-blown game engines but rather 3D renderers, since they don't come with an integrated physics or audio system; instead, they have extensions/plugins for enabling physics. In my early game-development phase, I used Ogre with OgreBullet, which incorporated Bullet Physics 2.x. I did this way back, on an HP 630 laptop with a Core i3-2330M, 2 GB RAM, and no dedicated graphics, and I had almost no idea how graphics worked.
I used Blender to export the meshes as *.obj/mtl files. Designing the box was easy, as it's the one that's there by default in a new project. Creating the football was trickier: following a couple of video tutorials, I ended up with a very high-poly model that slowed the engine down too much. The ball in the video has a low polygon count; I believe I used an icosphere and some trial and error to arrive at the best-looking and best-performing mesh.
Ogre3D and Irrlicht are excellent, but new versions don't come out very often. I had been studying design patterns and wanted use cases to apply these patterns and principles, and game engines are among the best and coolest software in which they find application. I searched for a new C++-based game engine with a more active release cycle and an engaged community and came across G3D. It has many excellent features developed by people with years of academic and professional experience in computer graphics. The engine is open-source and looked well designed, with proper documentation.
The engine didn't have integrated physics, which seemed like an opportunity for me to explore. It ships with an old version of PhysX, but I didn't have access to that source code. I had worked with Bullet Physics before and did have its source code, so integrating the two seemed like an excellent exercise.
The game engine and the physics engine have very different responsibilities, and their codebases are maintained separately. The game engine uses more GPU resources, since its job is to draw triangles on the screen, and it abstracts platform-dependent tasks such as window creation and the shader pipeline. Depending on the functionality, it lets users write OpenGL or DirectX shaders and issues the appropriate calls, so the user usually doesn't have to deal with raw graphics calls or with binding vertex attributes, textures, or custom parameters. It runs a loop, traditionally called the game loop, inside which the screen is drawn repeatedly; based on the camera's position, each object is projected accordingly. Objects/nodes are inserted into a scene graph whose root node sits at the origin, with all other nodes attached to it. To represent an object faithfully in 3-D space, we need 6 degrees of freedom (6DOF), represented by the composition of a 3x3 rotation matrix and a 3x1 position vector, [R | t]. For now, this is enough information for the upcoming sections to make sense.
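To make the [R | t] representation concrete, here is a minimal sketch using stand-in types (not G3D's actual classes): a 3x3 rotation matrix plus a 3x1 translation vector covers all 6 degrees of freedom of a solid object.

```cpp
#include <array>

// Stand-in 3-D vector type (hypothetical, for illustration only).
struct Vec3 { double x, y, z; };

// A rigid transformation [R | t]: rotation plus translation.
struct RigidTransform {
    std::array<std::array<double, 3>, 3> R;  // rotation part
    Vec3 t;                                  // translation part

    // Transform a point: p' = R * p + t
    Vec3 apply(const Vec3& p) const {
        return { R[0][0] * p.x + R[0][1] * p.y + R[0][2] * p.z + t.x,
                 R[1][0] * p.x + R[1][1] * p.y + R[1][2] * p.z + t.y,
                 R[2][0] * p.x + R[2][1] * p.y + R[2][2] * p.z + t.z };
    }
};
```

Both G3D (CoordinateFrame) and Bullet (btTransform) store essentially this pair, which is what makes synchronizing the two feasible.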
The physics engine has an entirely different task: it simulates physical phenomena such as gravity, applied forces, and collisions between objects. There are different types of objects, such as RigidBodies, which can simulate contact responses and rotations, or SoftBodies, which can model deformable objects such as rubber or cloth. Internally, each object's position is again an [R | t] matrix, called the rigid transformation matrix. A RigidBody is non-deformable and therefore doesn't always represent real-world objects very faithfully; FEM-based physics engines solve this problem and could be integrated, but that's a project for another day. This project attempts to implement the fundamental functionality needed to support trigger objects and force dynamics.
To integrate the game and physics engines, we need some way of synchronizing the two worlds. Both use the same data structure to represent position but operate on different types of objects, so we must maintain a mapping between these entities. When a game object is instantiated, its counterpart is created in the physics engine. As seen in the figure, the game engine runs its loop with a specific time-step; this time-step is used to evolve the physical state. Then the rigid transformation of every game object with a physics counterpart is updated, and the cycle repeats.
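The cycle above can be sketched with stand-in types (the real code pairs G3D entities with Bullet bodies): step the physics world by the frame's time-step, then copy every counterpart's updated state back into its game object.

```cpp
#include <vector>

// Physics-side state: a body falling under gravity (1-D for brevity).
struct Body { double y = 10.0, vy = 0.0; };

struct PhysicsWorld {
    std::vector<Body> bodies;
    void step(double dt) {                    // evolve the physical state
        for (Body& b : bodies) { b.vy -= 9.8 * dt; b.y += b.vy * dt; }
    }
};

// Game-side object; bodyIndex encodes the mapping to its counterpart.
struct GameObject { double y = 10.0; int bodyIndex = -1; };

// One game-loop iteration: physics first, then transform sync.
void simulateFrame(PhysicsWorld& world, std::vector<GameObject>& objects, double dt) {
    world.step(dt);
    for (GameObject& o : objects)
        if (o.bodyIndex >= 0) o.y = world.bodies[o.bodyIndex].y;
}
```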
The Scene class manages the scene graph in G3D, and the inserted nodes are instances of the Entity class. Two basic entity types are provided, VisibleEntity and MarkerEntity, and these are subclassed for our use case. As the names suggest, a VisibleEntity is rendered on the screen, while a MarkerEntity is invisible to the camera and can serve as a trigger object. I didn't modify the G3D source code, which is what shaped this scheme. Luckily, G3D is designed to allow many custom extensions and operations, so making it work was not too hard.
As for the physics engine, I didn't want to tie the design to just one, so I wrote an interface that accepts G3D objects and can be queried for position updates. In my design, the objects are responsible for fetching the updates themselves, which introduces some extra function calls and a slight performance overhead; however, the overhead is minimal and can be ignored. The interface is realized by a BulletPhysics implementation, which holds a pointer to the btDynamicsWorld into which Bullet objects are inserted, and maintains a HashMap for constant-time lookup so that entities can fetch position updates in the onSimulation method. Initially, I envisioned many entity types with different behaviors. For example, a TeleporterEntity containing a GhostEntity could query the physics engine for overlapping objects and, based on additional rules (distance from the center, or time spent in the overlapped state), transport those objects to another coordinate. Or we could imagine a TimeTravelEntity that stores its past 6DOF positions for a specific duration and can travel back in time on some trigger.
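A hedged sketch of that interface (the names here are illustrative, not G3D's or Bullet's API): entities call fetchPose from their onSimulation method, and the lookup behind it is a constant-time hash map.

```cpp
#include <unordered_map>

struct Pose { double x = 0, y = 0, z = 0; };   // stand-in for [R | t]

// Engine-agnostic interface that G3D-side objects talk to.
class PhysicsInterface {
public:
    virtual ~PhysicsInterface() = default;
    virtual void insertEntity(int entityId) = 0;       // create counterpart
    virtual Pose fetchPose(int entityId) const = 0;    // query updated pose
};

// A Bullet-backed implementation would also hold a btDynamicsWorld
// pointer; here the map alone stands in for the entity -> body lookup.
class SketchBulletPhysics : public PhysicsInterface {
    std::unordered_map<int, Pose> bodies_;
public:
    void insertEntity(int entityId) override { bodies_[entityId] = Pose{}; }
    Pose fetchPose(int entityId) const override { return bodies_.at(entityId); }
};
```

Swapping physics engines then means providing another implementation of the same interface, leaving the G3D side untouched.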
For object construction, I came up with a generic method mapping a GO (G3D Object) to an EO (physics Engine Object); depending on the type of object and the type of engine, an implementer can specialize these template arguments to construct the appropriate objects. Likewise, there are templates to create GD (G3D Datatypes) from ED (Engine Datatypes) and vice versa. These helper methods eliminate a lot of trivial, repeated code.
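The datatype helpers can be sketched like this (hypothetical names and stand-in types): a primary function template is declared once, and each GD/ED pair gets its own specialization, so the trivial glue code is written exactly once.

```cpp
// Stand-ins for a G3D datatype and an engine datatype, respectively.
struct G3DVector { float x, y, z; };
struct EngineVector { float v[3]; };

// Primary template: convert a GD (G3D Datatype) into an ED.
template <typename ED, typename GD>
ED toEngine(const GD& g);

// Specialization for the vector pair; other pairs (quaternions,
// transforms, ...) would each get their own specialization.
template <>
EngineVector toEngine<EngineVector, G3DVector>(const G3DVector& g) {
    return EngineVector{{g.x, g.y, g.z}};
}
```

At a call site only the target type needs spelling out, e.g. `toEngine<EngineVector>(v)`; the source type is deduced.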
G3D supports scene creation from a dictionary-like structure called an AnyFile. It's a human-readable format through which the scene graph can be loaded and saved. It contains information about the models, lighting, and entities in the scene, and we can specify parameters like the position and orientation of entities in the graph; G3D parses this file and constructs the objects accordingly. To make new entity types loadable, we have to implement reading from and writing to Any for them.
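For flavor, a rough illustration of what a scene fragment looks like in the Any format (the field names and values here are illustrative and vary by entity type and G3D version):

```
// Hypothetical scene fragment in G3D's Any format.
entities = {
    ball = VisibleEntity {
        model = "ballModel";
        frame = CFrame::fromXYZYPRDegrees(0, 1, 0, 0, 0, 0);
    };
};
```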
For the physics engine, there are primitive shapes for collision detection: plane, box, sphere, capsule, etc. To create these shapes we need a corresponding G3D object; G3D provides a base class, Shape, which is subclassed to represent the different primitives. To extend this functionality and support some other features, I came up with an AShape class that is composed of a Shape. This helps determine the type of object written in the AnyFile and also allows changing shapes while the scene is in edit mode.
Similarly, rigid objects have parameters like rolling friction and angular velocity, so another object is needed to set them. Luckily, the Entity object already contains some parameters, such as mass and frame information, which can be used directly to construct a btRigidBody. To support the remaining parameters, I created a Solid object that gives finer control over parameterization. Both of these objects derive from a PropertyChain object, whose links can be added or removed. For now I only added rolling friction, since that's what I needed, but the chain is easy to extend. I went with this because I expected more properties in the future, though a simple composition would have worked just fine.
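A sketch of the PropertyChain idea (class and member names are my assumptions, based on the description above): each link carries one extra physics parameter, and links can be added or removed without touching the Entity itself.

```cpp
#include <memory>
#include <string>

// One link per property; the chain is a singly linked list.
struct PropertyChain {
    std::string name;
    double value = 0.0;
    std::unique_ptr<PropertyChain> next;

    // Prepend a new property to the chain and return the new head.
    static std::unique_ptr<PropertyChain> add(std::unique_ptr<PropertyChain> head,
                                              const std::string& name, double value) {
        auto node = std::make_unique<PropertyChain>();
        node->name = name;
        node->value = value;
        node->next = std::move(head);
        return node;
    }

    // Walk the chain; return the link holding `key`, or nullptr.
    const PropertyChain* find(const std::string& key) const {
        if (name == key) return this;
        return next ? next->find(key) : nullptr;
    }
};
```

Adding a new physics parameter then means prepending a link, rather than widening every entity class.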
For custom entities, we need to implement methods that enable GUI operations while the scene is in edit mode; the method that achieves this is makeGUI, which can be overridden for the desired effect. Loading from an Any file is great, but it gets complicated and slows development as the game's complexity grows. Editing parameters via a GUI and then exporting the scene is a better workflow; it is one of the key features of a fully-fledged game engine and reduces development time.
Building on the previous work and the design principle of delegating behavior to entities, we can add more objects based on the two we created earlier. Consider an entity that holds prototype information about a PhysicsEntity and should spawn a copy of this prototype every 2 seconds; we also need a spawn point from which these copies are generated. The SpawnerEntity, which inherits from GhostEntity, does exactly that: it has private members and logic in its onSimulation method to achieve the desired effect. Because of the way the AnyFile system currently works, we have to define a new entity for each new behavior, since loading/unloading needs a concrete prototype.
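The timer logic inside onSimulation reduces to something like this simplified stand-in (outside any real engine): accumulate elapsed time and emit one copy of the prototype every `interval` seconds.

```cpp
// Simplified stand-in for SpawnerEntity's spawn timer.
struct SpawnerSketch {
    double interval = 2.0;   // seconds between spawns
    double elapsed = 0.0;
    int spawned = 0;         // the real entity clones its prototype here

    void onSimulation(double dt) {
        elapsed += dt;
        while (elapsed >= interval) {   // catch up after a long frame
            elapsed -= interval;
            ++spawned;
        }
    }
};
```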
Likewise, suppose we want an entity that simply grabs and holds on to objects that trespass into it. I came up with an AttractorEntity for this behavior; it maintains a list of Constraints and binds an invisible anchor to each trespassing object inside the physics engine. I only implemented a 6DOF constraint, which specifies that the origins of the two objects must coincide. With a single trespasser this works fine, but with multiple objects there is a lot of jittery motion, as seen in the video. I haven't figured out how to resolve this while keeping the black-hole-like effect; adding spring constraints might do the trick.
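The net effect can be illustrated with a simplified stand-in: each frame, every captured object is pulled a fraction of the way toward the anchor. A real build expresses this through Bullet constraints instead (e.g. btGeneric6DofConstraint with coincident pivot points), but the observable behavior is similar.

```cpp
#include <vector>

struct Point { double x, y, z; };   // stand-in for an object's origin

// Simplified stand-in for AttractorEntity's per-frame update.
struct AttractorSketch {
    Point anchor{0, 0, 0};
    std::vector<Point*> captured;   // objects that trespassed the trigger

    void onSimulation(double stiffness) {
        for (Point* p : captured) {
            // Move each captured object a fraction toward the anchor.
            p->x += (anchor.x - p->x) * stiffness;
            p->y += (anchor.y - p->y) * stiffness;
            p->z += (anchor.z - p->z) * stiffness;
        }
    }
};
```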
We can see everything in action in this video. It's not the best demonstration, but given the time and resources put into this, it illustrates the extensibility of this design well.
Plenty of features are still missing from this implementation; a complete integration should target the following points as well.
In debug mode, it could be helpful to visualize the underlying collision shapes the physics engine is using. This helps the developer find bugs in the game faster.
I only implemented one type of constraint in this project, but a general-purpose physics engine offers several standard constraint types that are used almost universally. A complete integration should support all the well-known constraints.
In games, we often need to know the exact points at which objects are colliding. These can be used to create particle effects like explosions, dust rising from footsteps, or even the burn marks left on the road by a drifting car. This is a well-known problem, and physics engines provide mechanisms to report the exact points of collision; the game logic can use this feedback to implement the desired effects.
I worked on this for about a month, spending around 2-3 hours per day. The project is stalled at the moment because I returned to my hometown during the pandemic and couldn't bring the PC on which everything was integrated. Moreover, there is little sense in pursuing it further given the lack of general use cases; a better approach would be a direct integration of G3D with a physics engine, done in collaboration with others. This project gave me enough insight into the implementation side, and I would like to learn and implement more.