Category: BGE

“Why is the Blender Game Engine difficult to maintain?” This question is getting thrown around some more now (especially with talks of overhauling or replacing the BGE). I’d like to take some time to point out a few of the many issues I’ve stumbled across in the BGE. I will keep to mostly current issues, but I will occasionally talk about previous issues that have been fixed if I need to highlight a certain point. Also, be aware that this might get a bit technical.

Overview of BGE Code

In order to facilitate understanding some of the issues, it is first good to understand a little about how the game engine is organized. Inside Blender’s source code you’ll find the folder source/gameengine, which contains the following folders: BlenderRoutines, Converter, Expressions, GameLogic, GamePlayer, Ketsji, Network, Physics, SceneGraph, and VideoTexture. These loosely correspond to the various modules/libraries of the BGE. There are also various prefixes used as pseudo-namespaces. I will discuss these prefixes as I talk about the relevant folder. However, note how we’ve got files with prefixes in folders where they don’t really belong (a first hint that things are amiss).


BlenderRoutines

Prefixes: BL, KX

BL files are responsible for interacting with Blender code/data. The BlenderRoutines folder contains mostly code for the embedded player (the code that runs when you press P in the viewport).


Converter

Prefixes: BL, KX

This folder handles converting Blender data to BGE data. The BGE (for the most part) does not use any Blender data directly; however, I have advocated for more direct use of Blender data in the past.


Expressions

Prefix: None
Here we have a lot of various data structures and utility classes (similar to Blender’s BLI code). The name Expressions comes from the various classes used to handle properties and how to make expressions with them (e.g., how to add two FloatValues together). Overall, I feel a lot of the code in this folder is the result of over-engineering. The main class for our Python wrapper (PyObjectPlus) also resides in this folder.


GameLogic

Prefix: SCA
Initially this was meant to contain logic bricks. Any logic bricks that do not rely on other parts of the BGE are prefixed with SCA (sensor, controller, and actuator). Any logic bricks that require access to other parts of the engine (e.g., mesh data) are prefixed either KX or KX_SCA.


GamePlayer

This folder contains two subfolders:

  • common (prefix: GPC)
  • ghost (prefix: GPG)

The common folder contains code that should be common to multiple game players. The ghost folder contains a game player based on GHOST (Blender’s windowing and input library). It should be noted that the embedded player is not defined here, nor does it use any of the code in common. Instead, it duplicates code and resides in the earlier mentioned BlenderRoutines folder.


Ketsji

Prefixes: BL, KX, KX_SCA, KX_SG

Ketsji more or less contains the “core” of the BGE. However, this core is rather bloated and contains a lot of code that should probably be split out into separate modules (e.g., navmeshes, audio, etc.). The large list of prefixes alone shows that this folder has gotten a little out of hand.


Network

Prefix: NG
This folder is for the BGE’s networking implementation. The only actual implementation is a loopback networking interface, which is used for message sensors/actuators. This is another example of over-engineering: we shouldn’t have coded a networking interface until we actually had a need for one.


Physics

This folder contains three subfolders:

  • Bullet (prefix: Ccd)
  • common (prefix: PHY)
  • Dummy (prefix: Dummy)

The common folder contains the interface for the physics subsystem. Dummy is just an empty implementation of the physics interface that does nothing. Bullet contains a physics implementation that makes use of the Bullet physics library.


SceneGraph

Prefix: SG
Here resides the scenegraph, which handles tasks such as storing hierarchical transform data for nodes (game objects). Scenegraphs are also usually responsible for culling, but the BGE’s scenegraph does not handle it. Instead, our KX_Scene class from Ketsji does, and Bullet is also used for culling. So, this one task is scattered throughout the codebase.


VideoTexture

Prefix: None
This contains the code for the VideoTexture Python module.

Examples of Problems

If the overview was not enough to make one question the quality of the BGE codebase, let’s look at some specific parts of the code. The following sections are examples of various issues in the BGE codebase (this list is not exhaustive).

Rendering Code has Broken Encapsulation

In general, our rasterizer is quite messy. Why not simply clean it up then? The problem we face is that we have rendering (i.e., OpenGL) code scattered all around the codebase. For example, here is a list of the earlier mentioned modules that contain OpenGL code:

  • Rasterizer (good)
  • BlenderRoutines (bad)
  • GamePlayer (bad)
  • Ketsji (bad)
  • VideoTexture (bad)

The physics code also contained OpenGL code until recently. Why would a physics engine need to use OpenGL? You would probably guess it was for the “Physics Visualization” option, but that is properly handled through the rasterizer interface. Instead, our physics code was using OpenGL in its view-frustum and occlusion culling tests. That’s right: our physics engine determines what is drawn, not our rasterizer or scenegraph. This should make a few people scratch their heads. The thing is, Bullet provides data structures for handling dynamic scenes and culling them, and Bullet can do this a lot faster (logarithmic time complexity) than what we were previously using (linear time complexity). I can see why this optimization was done. However, the culling code is completely separate from the physics code. A GraphicsController was added to our physics interface to allow physics engines to do culling tests. Instead, we should have just created some code to allow the scenegraph to use Bullet to do culling (neatly encapsulated in some fashion).
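To make the point concrete, here is a rough pure-Python sketch (hypothetical names, not actual BGE classes) of what “neatly encapsulated” could look like: the scenegraph owns a culling interface, and Bullet is just one interchangeable backend behind it.

```python
class CullingBackend:
    """Interface the scenegraph talks to; implementations are swappable."""
    def cull(self, frustum, nodes):
        raise NotImplementedError

class LinearCullingBackend(CullingBackend):
    """Naive O(n) test of every node against the frustum."""
    def cull(self, frustum, nodes):
        return [n for n in nodes if frustum.intersects(n.bounds)]

class BulletCullingBackend(CullingBackend):
    """Would delegate to Bullet's dynamic AABB tree (O(log n) queries).
    Stubbed here; the point is that the physics library becomes an
    implementation detail instead of the owner of culling."""
    def __init__(self, broadphase):
        self.broadphase = broadphase
    def cull(self, frustum, nodes):
        return self.broadphase.query_frustum(frustum)

class SceneGraph:
    """Culling lives with the scenegraph, where it conceptually belongs."""
    def __init__(self, backend):
        self.backend = backend
        self.nodes = []
    def visible_nodes(self, frustum):
        return self.backend.cull(frustum, self.nodes)
```

With this shape, swapping the linear test for a Bullet-backed one never touches the physics interface.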

Now, back to the rasterizer itself. Let’s say I want to change how we use shaders (maybe to reduce the number of state changes). The most obvious spot to look for this code is in the rasterizer. However, the rasterizer does not know about shaders. For that matter, it doesn’t know about materials or textures either, and, until recently, did not even know about lights. A fun note on lights: the fixed-function (i.e., Multitexture Material) lighting code used to be handled by the players (e.g., the embedded player or the stand-alone Blenderplayer), which meant we had different fixed-function code for the different players.

Overall, the fact that our rendering code is all over the engine makes it very difficult to maintain. We cannot add a new rasterizer for things like OpenGL 3+ or OpenGL ES (mobile) until we get everything properly encapsulated behind our current rasterizer, which will take time.

Duplicate Code in VideoTexture

The VideoTexture module sits almost completely apart from the rest of the engine. At first glance this is a good thing (loosely coupled, few dependencies, etc.). However, taking a deeper look at the module shows that it achieves this lack of dependencies by copying large chunks of the render setup code from KX_KetsjiEngine (the core engine class). This creates a large chunk of duplicate code. Now, if a developer wants to fix/change something like how camera projections are handled (e.g., fix something related to orthographic cameras), they might find the code in KX_KetsjiEngine, make the change, and call it a day. However, they have just introduced a bug to the render-to-texture functionality offered by VideoTexture! Overall, I would like to see the VideoTexture features better integrated into the BGE and the VideoTexture module itself phased out. For example, we should just add “native” support for Blender’s movie texture type.

A further issue with the VideoTexture module is that it is only exposed through the Python API. The rest of the engine does not know it exists. This makes it difficult to use the VideoTexture module in other parts of the engine (e.g., for the earlier mentioned support for movie texture types). Ideally, the features in the VideoTexture module would be parts of the engine that are exposed to the Python API (like all of the other engine code).

Multiple Material Classes

In the BGE materials are handled via the following classes:

  • RAS_IPolygonMaterial – This is an interface in the rasterizer for handling materials
  • KX_BlenderMaterial – This is a concrete implementation of RAS_IPolygonMaterial that handles materials for the BGE (note that it’s in Ketsji!). This includes code for Multitexture and GLSL materials mixed together in the same file (breaking the single responsibility principle).
  • BL_Material – Essentially a copy of Blender’s material data made during conversion. This is made since the BGE tries to avoid using Blender data. However, one of the fields in this class is the Blender material, which various parts of the engine access. This invalidates the need for having BL_Material in the first place! Really, BL_Material is not needed and is causing extra clutter and confusion.
  • BL_Texture – Handles textures in Multitexture materials and custom shaders. GLSL mode uses Blender’s texture handling code (i.e., not code in the engine itself). Again, this isn’t in the rasterizer.
  • BL_Shader – This is for custom shaders created via the Python API. It should be noted that, despite the BL prefix, this class does not (directly) touch Blender data or code. It does make use of BL_Material and BL_Texture, which ultimately do interact with Blender code, but BL_Shader doesn’t need to know where the data came from. In other words, BL_Shader does not care what Blender does, yet it still has the BL prefix, which is confusing to new developers.
  • BL_BlenderShader – Another shader class, but instead is used for Blender’s generated GLSL shaders. This means that custom-made user shaders have a completely different code path than the builtin shaders. Nice.

We used to also have a KX_PolygonMaterial for Singletexture mode. The general idea behind many of these classes is not bad, but, as mentioned, they have a lot of oddities.

Logic System Dependent on Logic Bricks

The game engine cannot currently run without logic bricks. The entire logic system is dependent on logic brick code. For example, we cannot even run a Python script without getting logic bricks involved. Logic bricks are useful to have, but they should be properly encapsulated and organized. There are a few things that need to be done here:

  1. Logic bricks should be put into their own folder (GameLogic) and not be part of the core engine module.
  2. Logic bricks should be implemented as a logic system that is separate from the engine itself. This allows other forms of logic (e.g., Python and HIVE) to be integrated in a smoother manner. This could be achieved by having a list of logic systems that the game engine could iterate over and tell to update.
  3. Logic bricks should expose functionality, not contain it. A lot of this has been cleaned up, but we’ve had cases where features such as animations were coded directly into a logic brick. These types of features should be in the engine code, and a logic brick should merely interface with that code. This allows other systems (e.g., Python, HIVE, other parts of the engine itself) to make use of these features. We still have some features, such as 2D filters, that only logic bricks have access to. We also have cases such as the Near and Radar bricks, which create collision shapes in the physics engine in order to work. Where is this in the Python API?
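Point 2 above can be sketched in a few lines of pure Python (hypothetical names, nothing from the actual codebase): the engine only knows about a list of logic systems and tells each one to update, so logic bricks, plain scripts, and HIVE would all be peers.

```python
class LogicSystem:
    """Hypothetical interface each logic flavor would implement."""
    def update(self, dt):
        raise NotImplementedError

class LogicBrickSystem(LogicSystem):
    """Sensors/controllers/actuators would live behind this."""
    def __init__(self):
        self.bricks = []
    def update(self, dt):
        for brick in self.bricks:
            brick.trigger(dt)

class PythonScriptSystem(LogicSystem):
    """Plain per-frame Python callbacks, no bricks required."""
    def __init__(self):
        self.scripts = []
    def update(self, dt):
        for script in self.scripts:
            script(dt)

class Engine:
    """The core engine iterates the registered systems and nothing more."""
    def __init__(self):
        self.logic_systems = []
    def step(self, dt):
        for system in self.logic_systems:
            system.update(dt)
```

The engine core never needs to know what a “sensor” is; removing or adding a logic flavor is just editing the list.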

Inconsistent Logic Bricks

While on the topic of logic bricks: logic bricks are rather inconsistent, which makes using or developing them awkward. For example, handling pulses is left up to the actuator (not some higher-level system). This means that actuators can choose how to respond to positive or negative pulses. This sounds like a good idea until you realize two different patterns have emerged:

  • Actuators that only perform tasks while receiving a positive pulse.
  • Actuators that start performing tasks on a positive pulse and only stop when they receive a negative pulse.

This is just confusing to both new users (why do the actuators behave differently?) and developers (how should a new actuator behave?). What is worse is that this inconsistency cannot be fixed without breaking existing games.
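The inconsistency can be seen in miniature with a small pure-Python model (hypothetical, not BGE code) of the two patterns reacting to the same pulse stream:

```python
class WhilePositiveActuator:
    """Pattern 1: performs its task only on frames with a positive pulse."""
    def __init__(self):
        self.active = False
    def frame(self, pulse):  # pulse: True, False, or None (no pulse)
        self.active = (pulse is True)

class LatchingActuator:
    """Pattern 2: a positive pulse switches the task on; it stays on
    until an explicit negative pulse switches it off."""
    def __init__(self):
        self.active = False
    def frame(self, pulse):
        if pulse is True:
            self.active = True
        elif pulse is False:
            self.active = False
        # None: no pulse this frame, keep the current state

def run(actuator, pulses):
    """Return which frames the actuator performed its task on."""
    states = []
    for p in pulses:
        actuator.frame(p)
        states.append(actuator.active)
    return states
```

Feeding both a stream of positive, none, none, negative pulses shows the first actuator acting for a single frame while the second acts for three, even though they were wired identically.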

Other Issues

I’ve just given a sampling of potential issues. Some other issues I haven’t gone in depth on are (among others):

  • KX_GameObject starting to fall into the trap of the god object anti-pattern
  • Physics being a mess in general (CcdPhysicsEnvironment.cpp contains a lot of classes, bad timestep, etc.)
  • Logic bricks being tied to framerate
  • The terrible inheritance structure used for deformers

I may write some articles on these problems in the future.

Patch Example: LoD

I would like to end this discussion with an example of how these types of issues affect actual development. For this example, we’ll take a look at the recently added LoD feature. Overall, this was a fairly simple patch (self-contained, necessary changes were obvious, etc.). However, even after being reviewed by three developers, we still had a few bugs creep in from it. One example was T39053. The bug was that using replace mesh via an actuator did not properly change the mesh material. The first sign of a design issue is that replace mesh works differently on the actuator than in the Python/engine API. This shows that the actuator is doing more than simply exposing an interface. This bug took me way longer to fix than it should have. Between git bisects, head scratching and digging through code, I think I spent around 1~2 hours fixing the bug. The fix was committed as 6c9dd1. The LoD patch changed a little bit how mesh conversion worked, but this change was reviewed and was deemed to be fine. What we failed to notice is that the replace mesh actuator converts a mesh a second time instead of using the already converted mesh data. The actuator should never have done conversion like this (again, breaking the single responsibility principle).

I hope that this article gives people at least a small glimpse into the problems facing BGE developers and what makes the engine difficult to maintain.

In Ton’s recent blog post, he discussed a roadmap for Blender 2.7, 2.8 and beyond, which included a tighter integration between Blender and the BGE. While this initially caused quite the stir in the BGE community, with some thinking this meant dropping the BGE entirely, I see it more as a desire to get the two to share more code. Blender has smoke, fluids and particles; why shouldn’t we use those in the BGE? Too slow? Then let’s speed them up and make Blender users happier in the process. The way I see it, the BGE can benefit from new features in Blender, and Blender can benefit from performance improvements from the BGE. But how do we get there? That’s what I aim to discuss in this article.

Sharing Blender Data

The first major problem that needs to be tackled is how the BGE handles Blender data. Currently, one of the BGE’s major design decisions is to never modify Blender data. While the BGE does modify Blender data in a few places (most notably lights), we’ve mostly stuck to this design principle, which has helped prevent numerous bugs and potential corruption of users’ data. However, in doing so, we’ve had to recreate most of Blender’s data structures and convert all Blender data to BGE data. This also limits how we can interact with existing Blender tools. Blender has a lot of powerful mesh editing tools, but we can’t use those in the BGE because they require a Blender Mesh object while the BGE has a RAS_MeshObject, and using the original Blender Mesh would cause that data to change.

If we want a tighter integration between Blender and the BGE, we need to allow the BGE to have more direct control over Blender data. This means we need to find a way to allow the BGE to modify and use Blender data without changing the original data. The most obvious method is to give the BGE a copy of all of the data and then just trash the copy when the BGE is done. However, I think there is a more elegant solution to the problem. If you look at the existing code base, you can see that the Blenderplayer doesn’t actually have to worry about modifying Blender data as long as it never saves the Blendfile it reads. Only the embedded player has issues, because it uses the Blender data already loaded in Blender. So, why not have the embedded player read from disk like the Blenderplayer? When the embedded player starts, the current Blendfile could be saved to disk and then loaded by the BGE. There are some details that have to be worked out here, though, such as where we save the file. A temporary location (e.g., /tmp)? That would cause path issues in larger games. Instead, I see two feasible locations: the original file, or the original file appended with a “~”. The first would behave like a compiler: you save before running your program. It is the approach I prefer, but it changes the current behavior, which might upset some users.

A more long-term solution to the problem of modifying Blender data is to drop the embedded player. As I mentioned before, the Blenderplayer doesn’t run into issues using Blender data since it doesn’t share a memory space with Blender. And, since the Blenderplayer supports being embedded into other applications, we can still have games running in what appears to be the viewport. In other words, we would not lose features! Some benefits to this approach:

  • Get rid of a lot of code (the whole source/gameengine/BlenderRoutines folder)
  • A lot less duplicate code
  • Smaller Blender runtime size (all BGE code would only be in the Blenderplayer, and not Blender)
  • Playing the game in the viewport and the Blenderplayer would be guaranteed to be the same (right now small differences exist)
  • The ability to modify Blender data without breaking Blender
  • A BGE crash won’t affect Blender since they will be in separate processes (like Chrome tabs)

However, there are some downsides, which include:

  • It will be more difficult to affect the BGE from Blender. At the moment this isn’t a problem, but if we want some goodies like Unity offers with adjusting the game using the editor while the game is running, we’d need to develop some inter-process communication protocol to get Blender and the BGE communicating.
  • We currently don’t allow embedding on OS X. I’m not sure if this is a limitation of OS X itself, or a lack of development effort on our part.

Using Blender Data

So, we’ve got some ways to minimize the issues of the BGE using Blender data, but what do we do with it? First off, I’d start to clean up the BGE code to use DNA data as storage and then shift the focus of the various BGE classes to act as wrappers around that storage. Where possible, the member functions of those classes could delegate to the various Blender kernel (BKE) functions. Once that is done, we can look into what Blender goodies we can start adding to the BGE using these new classes.
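As a rough sketch of that wrapper idea (all names here are hypothetical stand-ins; the real DNA structs and BKE functions are C), the BGE class would hold no duplicate data of its own and would delegate behavior to kernel functions:

```python
class DNAObject:
    """Stand-in for Blender's DNA Object struct: plain storage only."""
    def __init__(self, name, loc):
        self.name = name
        self.loc = list(loc)

def BKE_object_translate(ob, delta):
    """Stand-in for a Blender kernel (BKE) function operating on DNA."""
    ob.loc = [a + b for a, b in zip(ob.loc, delta)]

class KX_GameObject:
    """Sketch: the BGE class is a thin wrapper; storage lives in DNA,
    and member functions delegate to BKE-style helpers."""
    def __init__(self, dna):
        self._dna = dna

    @property
    def name(self):
        return self._dna.name

    @property
    def position(self):
        return tuple(self._dna.loc)

    def translate(self, delta):
        BKE_object_translate(self._dna, delta)
```

Because the wrapper never copies the data, there is nothing to convert and nothing to fall out of sync.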

Viewport Drawing

While the BGE and Blender already share a fair amount of viewport drawing code (especially in GLSL Mode), this area could be much improved. The first task here is to get all of the OpenGL (and any calls to bf_gpu) into the Rasterizer, and only the Rasterizer. This requires moving material and lighting data out of Ketsji and into the Rasterizer. Once this is done, we can worry about how the BGE handles its drawing. The Rasterizer should have two modes (possibly implemented as two Rasterizers): a fixed-function pipeline and a programmable pipeline. To do this, I would propose dropping Singletexture and making the Multitexture code the basis for the fixed-function Rasterizer, while GLSL mode would be the basis for the programmable Rasterizer. The programmable Rasterizer could have an OpenGL minimum of 2.1, as Ton suggested for his proposed roadmap, but I’d keep the fixed-function Rasterizer as compatible with older hardware as possible.
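The two-Rasterizer split could look something like the following sketch (these classes don’t exist in the codebase, and the minimum GL versions for the fixed-function path are illustrative assumptions):

```python
class RAS_IRasterizer:
    """Hypothetical common interface the rest of the engine talks to."""
    def material_requirements(self):
        raise NotImplementedError

class FixedFunctionRasterizer(RAS_IRasterizer):
    """Would build on the Multitexture path: no shaders, old GL."""
    def material_requirements(self):
        return {'shaders': False, 'min_gl': (1, 4)}  # assumed floor

class ProgrammableRasterizer(RAS_IRasterizer):
    """Would build on GLSL mode, assuming at least OpenGL 2.1."""
    def material_requirements(self):
        return {'shaders': True, 'min_gl': (2, 1)}

def create_rasterizer(mode):
    """Hypothetical factory a player could call once at startup."""
    if mode == 'GLSL':
        return ProgrammableRasterizer()
    return FixedFunctionRasterizer()
```

Everything past this factory call would be mode-agnostic, which is exactly what the current code is missing.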

After we have the Rasterizer cleaned up, we can start offloading as many tasks as possible from the Rasterizer to the bf_gpu module, which the viewport code also uses. The more we can put into this module, the more Blender and the BGE can share viewport drawing. Ideally, the Rasterizer would not have any OpenGL code and would rely entirely on bf_gpu, maximizing code reuse and sharing.


Using the ideas outlined in this article, we’d have two main points of interaction between Blender and the BGE: BKE and bf_gpu. We could certainly look into more ways to increase integration between Blender and the BGE, but what I have discussed here will give us more than enough work for the foreseeable future. Also, please note that this is only a proposal and a listing of ideas, and by no means a definitive plan. Discussion and feedback are much encouraged and appreciated.

BGE Profile Stats and What They Mean

Many people are familiar with the “Show Framerate and Profile” option in the BGE and the mess of text it displays on their screen. However, not as many people truly know what the different statistics mean. This article aims to help improve people’s understanding of the profile stats and how to change your game to get those numbers down (less time spent is better for performance). Aside from the FPS, the profile shows ten stats: Physics, Logic, Animations (only in newer versions of Blender), Network, Scenegraph, Rasterizer, Services, Overhead, Outside, and GPU Latency (only in newer versions of Blender). To get the most accurate readings, I recommend turning off “Use Frame Rate” and using your graphics card drivers (or the UI option in the render properties in newer versions of Blender) to force vsync off.


Physics

This represents the time spent on physics code. These days the BGE only uses Bullet for physics, so this stat mostly represents the time spent in Bullet. To reduce the time, you’ll need to simplify your physics so Bullet doesn’t have to do as much work. This can include using simpler physics shapes for objects. For example, if you have a complicated mesh for a character and you set the physics type to Convex Hull or Triangle Mesh (the default if no other bound type is explicitly set), Bullet has to do physics calculations with the complicated mesh, which is just a waste of time. Instead, try to see if something simpler like a sphere or box can do the trick. If not, at least set up a “proxy” by creating a simple, invisible version of your mesh that is used for calculations instead of the complicated mesh that is used for rendering.


Logic

Time spent on logic is time spent on logic bricks and Python code (excluding code run through KX_Scene.pre_draw and KX_Scene.post_draw; those times show up under the Rasterizer). If you want to reduce this, you’ll need to simplify/optimize your logic bricks and Python code. I’m not going to give a tutorial on optimizing Python code, but this talk by Mike Fletcher (known for PyOpenGL) describes profiling Python code and some tips for optimizing. Remember, always profile your code before attempting to optimize it! As a last resort, you can also try moving some of your Python code to C/C++.
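As a starting point for that profiling, Python’s standard cProfile module works on game scripts too. This self-contained example (the hot function is made up for illustration) shows how to capture a report of where the time goes:

```python
import cProfile
import io
import pstats

def slow_distance(a, b):
    # deliberately naive per-call work so the profiler has something to show
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def update():
    # stand-in for a per-frame callback doing too much work
    pos = (0.0, 0.0, 0.0)
    for i in range(1000):
        slow_distance(pos, (float(i), float(i), float(i)))

profiler = cProfile.Profile()
profiler.enable()
update()
profiler.disable()

# Sort by cumulative time and keep the top offenders
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats('cumulative').print_stats(10)
report = stream.getvalue()
print(report)
```

In a real game you would wrap a suspect update function the same way, look at the report, and only then start rewriting anything.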


Animations

Under animations you have the time spent in Blender’s animation code, which the BGE makes use of. This includes things such as looking up pose data and interpolating keyframes. However, be warned that sometimes things like calculating IK can show up under the scenegraph when calculating bone parents. Also, this category does not include the time spent doing the actual mesh deformation; that time is recorded under the Rasterizer category. To reduce the time spent on animation, try to reduce the bone count in your armatures. You can also try switching your armatures over to iTaSC (set to simulation) for IK solving instead of the Legacy solver. iTaSC can be faster than the Legacy solver; in my tests I’ve seen 1.25~1.5x speed improvements when using iTaSC, but I’ve heard that 4x is not unreasonable.


Network

This might come as a surprise to some, but the BGE actually has some networking code. However, this feature was never really developed, so now it is mostly a stub that can send messages over a loopback interface. This is how Message actuators and sensors (and the corresponding Python API features) work. It’s doubtful that this category will ever be a time sink, but if you’re having problems, take a look at the number of messages you’re sending and see if you can reduce them.


Scenegraph

The scenegraph keeps track of objects’ position, orientation and scale (and probably a few other things I’m not thinking of at the moment). This also includes updating parent-child relationships (e.g., bone parents). As mentioned earlier, the time for bone parents can include getting updated pose data, which possibly means calculating IK. If the scenegraph time is really high, try reducing the number of objects in your scene. You can also try using iTaSC (mentioned under Animations). The scenegraph also handles culling (frustum and occlusion) calculations.


Rasterizer

The rasterizer is responsible for actually rendering the game. This includes rendering geometry, shaders, and 2D filters. Since the BGE makes use of double buffering, the rasterizer also has to swap the buffers, which can give really high readings if vsync is enabled (SwapBuffers() blocks while waiting for a screen refresh). This time is now represented in the GPU Latency category. To reduce the time spent in the rasterizer (or the GPU latency), you can try to simplify your geometry and materials. Also make sure you don’t have too many lights casting dynamic shadows. Each shadow cast requires the scene to be rendered again. So, if you have three spot lights casting shadows, the scene is rendered four times (three times for shadows and once for the actual scene)! 2D filters can also suck up some time, so even if that bloom, depth of field and SSAO look nice, you might want to consider removing them or reducing the number of samples they use.
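The shadow arithmetic above is simple but easy to forget, so as a trivial worked example:

```python
def render_passes(shadow_casting_lights):
    """One depth pass per shadow-casting lamp, plus the final scene pass."""
    return shadow_casting_lights + 1
```

Three shadow-casting spot lights means four full scene renders per frame, which is why trimming shadow casters is usually the first thing to try when the rasterizer number climbs.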


Services

This is the time spent processing various system devices (keyboard, mouse, etc). You shouldn’t have a problem with this category taking up time.


Overhead

This is probably one of the most misleading category names. The “overhead” is all the text drawn on top of the game screen in the top left corner. This includes the framerate, profile, and debug properties. So, the time spent on this category is reclaimed when running your game in a more “release” configuration (i.e., when you’re not drawing all that debug/profiling text to the screen). If you want to reduce the time spent here while profiling, try reducing the number of debug properties you display.


Outside

This is time spent outside of the BGE’s main loop. In other words, something is taking time away from the BGE. You really have no control over this area. If you have a lot of other programs running, you can try to close some.

GPU Latency

This category is new to r59097, and will be in Blender 2.69. This category represents the time spent waiting on the GPU. This time used to be counted entirely within the Rasterizer category, so the same tips from there apply here. However, time spent waiting for vsync will now show up here instead of in the Rasterizer category. Also, this category is a bit different from the others in that it is idle time (the CPU is just waiting on the GPU). This means it is time that can be used by the CPU (e.g., physics, animations, logic, etc.) without affecting the framerate. It also means that if the GPU Latency is high, trying to optimize CPU time is pointless, as it will not affect the framerate either. If this value is low, it is still possible to be GPU bound. Various OpenGL calls (usually some form of glGet) can cause a sync event in which the CPU has to wait on the GPU. These sync events can cause odd profiler readings depending on which part of the codebase they occur in. For example, if Overhead is suddenly taking up a large amount of time, odds are that the font rendering triggered a sync.
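A crude model makes the “optimizing the CPU is pointless when GPU-bound” point concrete (the millisecond figures below are illustrative, not measurements):

```python
def frame_ms(cpu_ms, gpu_ms):
    """Crude model: CPU and GPU work overlap, so a frame takes as long
    as the slower of the two."""
    return max(cpu_ms, gpu_ms)

def gpu_latency_ms(cpu_ms, gpu_ms):
    """The idle time the profiler would report as GPU Latency."""
    return max(0.0, gpu_ms - cpu_ms)
```

With 5 ms of CPU work against a 20 ms GPU frame, the profiler shows 15 ms of GPU Latency; tripling the CPU work to 15 ms changes the latency reading but leaves the frame time, and thus the framerate, untouched.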

I hope people find this useful.


BGE Python Components

I introduce to you the (hopefully) next step forward in BGE logic: components. I was playing around with Unity and I liked how their component system worked, so I scrapped my previous game object class work and decided to go with components instead.

So, what exactly is a component?

The idea of a component is a simple one. They are modules that can be attached to game objects. You can attach as many as you want, and each one serves a specific purpose such as third person character movement with WASD. After a component has been attached to an object, it can have various exposed settings that you can edit. In the case of a third person movement component, this could be things such as movement speed and turn speed.

So, how do I make components?

Components are Python classes that subclass KX_PythonComponent. For settings, they have a class-level “args” dictionary, with the keys being the names of the properties/settings and the values being the default values for those settings. The type of the value also determines what type the property will have (e.g., a value of 2.6 means the property will be a float property). Currently, only integer, float, string and boolean are supported. A component also has a start() method which accepts a dictionary. This dictionary matches the args dictionary, but with values from the user instead of the defaults. Every frame the engine is running, a component’s update() method is called. This is where the bulk of most components’ work happens; in the movement example, this includes checking for user input and acting accordingly. And lastly, an example component:

import bge

class ThirdPerson(bge.types.KX_PythonComponent):
	"""Basic third person controls

	W: move forward
	A: turn left
	S: move backward
	D: turn right
	"""

	args = {
		"Move Speed": 10,
		"Turn Speed": 0.04,
	}

	def start(self, args):
		self.move_speed = args['Move Speed']
		self.turn_speed = args['Turn Speed']

	def update(self):
		keyboard = bge.logic.keyboard.events

		move = 0
		rotate = 0

		if keyboard[bge.events.WKEY]:
			move += self.move_speed
		if keyboard[bge.events.SKEY]:
			move -= self.move_speed

		if keyboard[bge.events.AKEY]:
			rotate += self.turn_speed
		if keyboard[bge.events.DKEY]:
			rotate -= self.turn_speed

		self.object.setLinearVelocity((0, move, 0), True)
		self.object.applyRotation((0, 0, rotate), True)

Cool, have a video or something showing this?

Yup, right here.

Nice, when can we expect this in trunk?

Short answer: when it’s ready.

Longer answer: More tests need to be run and the code needs to be cleaned up. From there I will have to get someone to review this, as it’s a larger patch. And from there, I don’t know if it will make it in before 2.6, as there is a semi-freeze in effect for the SVN. The focus is supposed to be on stabilizing and bug fixing, not adding new toys. If the patch only affected BGE code, it would be easier to get in, but it has Blender code changes, which people are a bit stricter about.

Well, do you have any tests builds then?

Nope, not at this time. But you can grab the patch in its raw form from here. I say raw because I haven’t gone back over it to remove no-longer-needed stuff (commented-out code, debug prints, etc).

Stay tuned for updates and possibly some builds!


EDIT: The previous patch was missing files. I’ve uploaded a newer version.


As more people start to play around with Blender 2.5x, I’ve seen an increase in questions which basically boil down to: how do I get my 2.49 BGE scripts to work in 2.5? After having ported a few scripts, I’ve found that the process breaks down into these three steps:

  1. Replace deprecated methods
  2. Update to changes in 2.5 API
  3. Update to changes in Python 3.x

Replace deprecated methods

The first step is to make sure your script works in Blender 2.49 without any deprecation warnings. All methods deprecated in 2.49 have been removed in 2.5. Make sure Game -> Ignore Deprecation Warnings is not checked and then run the script/game. The deprecation warnings take the following form:

old_way is deprecated, please use new_way instead.

For example, if you use the deprecated getOwner() method of SCA_PythonController, you’ll get the following:

Method getOwner() is deprecated, please use the owner property instead.

In this case, replace the getOwner() method with the owner property. Thus this:

own = cont.getOwner()

becomes this:

own = cont.owner
Another change in 2.49 was how custom game properties are stored on objects. Prior to 2.49, users would use attribute-style access for these properties (e.g., my_ob.foo). However, in 2.49 this was deprecated in favor of dictionary-style access (my_ob['foo']). By keeping the properties in a dictionary, we avoid potential conflicts with built-in attributes. For example, I can now use my_ob['position'] without having to worry about the built-in my_ob.position, or safely use my_ob['velocity'] knowing that if a my_ob.velocity were ever added, my property wouldn’t conflict. This also changes how a user checks whether a game object has a certain property. Prior to 2.49, the following was used:

# Check to see if my_ob has the property 'foo'
if hasattr(my_ob, 'foo'):

Some might remember the use of has_key() to find properties in 2.49. However, since this was already deprecated in Python, it was quickly dropped in favor of using the “in” keyword like so:

# Check to see if my_ob has the property 'foo'
if 'foo' in my_ob:
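Outside the engine, a plain dict can stand in for a game object's property mapping to try these patterns (the property names here are just illustrative):

```python
# A plain dict mimicking KX_GameObject's dictionary-style properties.
my_ob = {'foo': 42, 'position': 'my own data'}

print('foo' in my_ob)     # True -- the membership test used from 2.49 on
print('bar' in my_ob)     # False
print(my_ob['position'])  # no clash with a built-in .position attribute
```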

After everything is working fine in 2.49 without any deprecation warnings, we are ready to move on to 2.5.

Update to changes in 2.5 API

While Blender’s API underwent a major script-breaking change, the goal for the BGE was to try really hard not to break old scripts. However, there is one change that does break scripts. Prior to 2.5 when referencing objects by name, you had to use the “OB” prefix. For example:

my_ob = GameLogic.getCurrentScene().objects['OBmy_ob']

While the prefix is still used internally, it is no longer used in the API. This makes things a little nicer, but it does break old scripts. Fortunately, it’s easy to update; all you have to do is remove the prefix like so:

my_ob = GameLogic.getCurrentScene().objects['my_ob']
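If you have a pile of old scripts to update, a tiny helper (my own sketch, not part of the BGE) can strip the legacy prefix from object names:

```python
def strip_ob_prefix(name):
    # Drop the legacy "OB" prefix that 2.49 scripts used in object lookups.
    return name[2:] if name.startswith('OB') else name

print(strip_ob_prefix('OBmy_ob'))  # my_ob
print(strip_ob_prefix('my_ob'))    # my_ob (already clean)
```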

Another change in the 2.5 API is the introduction of aliases for all of the BGE modules. Both the old and the new names work in 2.5. However, GameLogic is no longer automatically imported; if you were relying on that, you’ll need to add “import GameLogic” to the top of your scripts. The aliases are as follows:

2.5              2.49
bge              N/A
bge.logic        GameLogic
bge.render       Rasterizer
bge.events       GameKeys
bge.constraints  PhysicsConstraints
bge.types        GameTypes
bge.texture      VideoTexture
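Since the old names still work in 2.5, renaming is optional, but a mechanical rewrite based on the table above is straightforward. This helper is my own sketch, not part of the BGE:

```python
# 2.49 -> 2.5 module name aliases, taken from the table above.
ALIASES = {
    'GameLogic': 'bge.logic',
    'Rasterizer': 'bge.render',
    'GameKeys': 'bge.events',
    'PhysicsConstraints': 'bge.constraints',
    'GameTypes': 'bge.types',
    'VideoTexture': 'bge.texture',
}

def modernize(line):
    """Rewrite 2.49-style module references to their 2.5 aliases."""
    for old, new in ALIASES.items():
        line = line.replace(old, new)
    return line

print(modernize('Rasterizer.showMouse(1)'))  # bge.render.showMouse(1)
```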

Update to changes in Python 3.x

Blender 2.5 uses Python 3.1, which scripts need to be updated for. For a complete list of the changes, look here. I won’t go over all of them; however, there are two changes that most often affect BGE scripts. First off, print is now a function instead of a statement. So instead of:

print "foo"

you now write:

print("foo")

Also, the division operator (/) now uses “real” division for two integers as opposed to integer division. So, / will always give you a float now. To use integer division use // instead.
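Both changes are easy to verify in any Python 3 interpreter:

```python
# print is a function in Python 3:
print("foo")

# "/" is true division even for two ints; "//" is integer division:
print(7 / 2)   # 3.5
print(7 // 2)  # 3
```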

BGE 2.5 Python API

Docs on the BGE’s Python API can be found here:

From the BGE you have access to the “Standalone Modules” and the “Game Engine Modules”.


And that should do the trick! See, not as bad as you thought, right?


PCF Soft Shadows in the BGE

Last night I decided to take a look at what it would take to get some better-looking shadows in the BGE. After doing some reading on soft shadows, I implemented PCF (percentage-closer filtering) soft shadows in the BGE. Here are my results:

PCF Soft Shadows in the BGE

PCF Soft Shadows in the BGE

Currently, the only user control over the soft shadows is the sample size, which is controlled by the softness value. The number of samples is hard-coded to 16 (4×4).
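To illustrate the idea (this is a sketch of the technique, not the actual GLSL in the patch): PCF replaces a single binary shadow-map test with an average over a small neighborhood, so fragments near a shadow edge get a fractional value, which reads as a soft border.

```python
# Minimal PCF sketch: average a 4x4 grid (16 samples) of binary depth
# tests. `softness` scales the sample spacing, like the softness value.
def pcf_shadow(depth_map, x, y, fragment_depth, softness=1):
    lit = 0
    for dy in range(-2, 2):      # 4 rows ...
        for dx in range(-2, 2):  # ... x 4 cols = 16 samples
            sy = min(max(y + dy * softness, 0), len(depth_map) - 1)
            sx = min(max(x + dx * softness, 0), len(depth_map[0]) - 1)
            lit += fragment_depth <= depth_map[sy][sx]
    return lit / 16.0  # 1.0 = fully lit, 0.0 = fully shadowed

# A fragment behind every stored depth is fully shadowed:
occluded = [[0.5] * 8 for _ in range(8)]
print(pcf_shadow(occluded, 4, 4, 0.9))  # 0.0
```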

Here is the patch:


Custom Game Object Classes For The BGE

Yay, a post that’s not a status report!

In response to this thread:

I started poking around at the BGE, in particular focusing on “replacing” the built-in KX_GameObject with a user-defined subclass. All I currently have is that the user-defined class’s main() method gets called every frame. This allows users to simply create a class for an object without even needing to touch logic bricks. A video showing this can be found here:

And a Windows build with the patch can be found here:

Since posting the video and build, I’ve done a bit more playing around, and I now have collision callbacks working too. Here is the class I’m currently testing with:

import bge # New top level module in 2.5

class Player(bge.types.KX_GameObject):
	def main(self):
		# NOTE: the original key constant was lost from this post;
		# SPACEKEY here is illustrative
		if (bge.events.SPACEKEY, bge.logic.KX_INPUT_ACTIVE) in bge.logic.keyboard.events:
			self.applyRotation((0, 0, .05), True)

	def on_collision(self, other):
		self.applyForce((0, 0, 500), True)


And we have geometry shaders!

In my latest commit to my branch (r28848), I got geometry shaders working in the viewport and the BGE. Here is a screenshot of a tessellation shader I got from Martinsh (thanks, Martinsh!):

If you look at the right 3D view, you’ll see that the plane only has a single face. However, in the left 3D view you can see the effects of the geometry shader, which tessellates the mesh (adds more geometry).

For the time being, GL_GEOMETRY_INPUT_TYPE_EXT is set to GL_TRIANGLES and GL_GEOMETRY_OUTPUT_TYPE_EXT is set to GL_TRIANGLE_STRIP. I’ll add the ability to change these later. Also, I’ll soon be working on passing uniform values around.



Initial bgui beta release (0.01)

I’ve been working on a GUI library to be used with the Blender Game Engine. It is licensed under a zlib-style license and uses the BGE’s VideoTexture module for handling image data, BGL for drawing, and BLF for font data and text rendering. So far there are only two widgets available: Image and Label. My main purpose for making a release now is to get feedback on the API. Also, a note on the API: it’s not “stable” and is likely to change in the future.


Requirements:

  • Recent Blender 2.5 build (rev 28590 or later)

Here is a download link:

First Google Summer of Code commit

I’ve committed the first bits of my Google Summer of Code project to my Blender branch. Here is my message from the commit log:

Getting a start on my gsoc project:
* bf_gpu can now use custom shaders
* filenames for the shaders are currently being stored on the material
* RNA has access to the filenames of the custom shaders
* I’ve added a quick ui to the custom shaders for experimenting, it can be found in the material panel
* I’ve added stuff for geometry shaders as well, but they currently aren’t working right.
Here is a screenshot where I use a simple toon shader:
Me using a toon shader
Here is the vertex shader:
#version 120

varying vec3 varnormal, varpos;

void main()
{
	gl_Position = ftransform();
	gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;
	varnormal = (gl_NormalMatrix * gl_Normal);
	varpos = (gl_ModelViewMatrix * gl_Vertex).xyz;
}
And the fragment shader:
#version 120

varying vec3 varnormal, varpos;

void main()
{
	vec3 n, lightDir;
	float intensity, factor;

	vec4 colorMap = vec4(0.0, 0.0, 0.8, 1.0);

	n = normalize(varnormal);
	lightDir = normalize(gl_LightSource[0].position.xyz - varpos);
	intensity = max(dot(n, lightDir), 0.0);

	if (intensity > 0.60)
		factor = 1.0;
	else if (intensity > 0.25)
		factor = 0.7;
	else
		factor = 0.0;

	gl_FragColor = colorMap * vec4(factor, factor, factor, 1);
}

And the only material settings I used were the newly added custom shader options. 🙂
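The three-band quantization in that fragment shader is just a step function; the same banding can be checked in plain Python:

```python
# The toon "factor" banding from the fragment shader, in Python.
def toon_factor(intensity):
    if intensity > 0.60:
        return 1.0
    elif intensity > 0.25:
        return 0.7
    return 0.0

print(toon_factor(0.8))  # 1.0
print(toon_factor(0.4))  # 0.7
print(toon_factor(0.1))  # 0.0
```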