
“Why is the Blender Game Engine difficult to maintain?” This question is getting thrown around some more now (especially with talks of overhauling or replacing the BGE). I’d like to take some time to point out a few of the many issues I’ve stumbled across in the BGE. I will keep to mostly current issues, but I will occasionally talk about previous issues that have been fixed if I need to highlight a certain point. Also, be aware that this might get a bit technical.

Overview of BGE Code

In order to facilitate understanding some of the issues, it is first good to understand a little about how the game engine is organized. Inside Blender’s source code you’ll find the folder source/gameengine, which contains the following folders: BlenderRoutines, Converter, Expressions, GameLogic, GamePlayer, Ketsji, Network, Physics, Rasterizer, SceneGraph, and VideoTexture. These loosely correspond to the various modules/libraries of the BGE. There are also various prefixes used as pseudo-namespaces. I will discuss these prefixes as I talk about the relevant folder. However, note how we’ve got files with prefixes in folders where they don’t really belong (a first hint that things are amiss).

BlenderRoutines

Prefixes: BL, KX

BL files are files that are responsible for interacting with Blender code/data. The BlenderRoutines folder contains mostly code for the embedded player (the code that runs when you press P in the viewport).

Converter

Prefixes: BL, KX

This folder handles converting Blender data to BGE data. The BGE (for the most part) does not use any Blender data directly; however, I have advocated for more direct use of Blender data in the past.

Expressions

Prefix: None
Here we have a variety of data structures and utility classes (similar to Blender’s BLI code). The name Expressions comes from the various classes used to handle properties and how to make expressions with them (e.g., how to add two FloatValues together). Overall, I feel a lot of the code in this folder is the result of over-engineering. The main class for our Python wrapper (PyObjectPlus) also resides in this folder.

GameLogic

Prefix: SCA
Initially this was meant to contain logic bricks. Any logic bricks that do not rely on other parts of the BGE are prefixed with SCA (sensor, controller, and actuator). Any logic bricks that require access to other parts of the engine (e.g., mesh data, etc.) are prefixed either KX or KX_SCA.

GamePlayer

This folder contains two subfolders:

  • common (prefix: GPC)
  • ghost (prefix: GPG)

The common folder contains code that should be common to multiple game players. The ghost folder contains a game player built on GHOST (Blender’s windowing and system abstraction library). It should be noted that the embedded player is not defined here, nor does it use any of the code in common. Instead, it duplicates code and resides in the earlier mentioned BlenderRoutines folder.

Ketsji

Prefixes: BL, KX, KX_SCA, KX_SG

Ketsji more or less contains the “core” of the BGE. However, this core is rather bloated and contains a lot of code that should probably be split out into separate modules (e.g., navmeshes, audio, etc.). It can also be seen from the large list of prefixes that this folder has gotten a little out of hand.

Network

Prefix: NG
This folder is for the BGE’s networking implementation. The only actual implementation is a loopback networking interface which is used for message sensors/actuators. In other words, another example of over-engineering: we shouldn’t have coded a networking interface until we actually had a need for one.

Physics

This folder contains three subfolders:

  • Bullet (prefix: Ccd)
  • common (prefix: PHY)
  • Dummy (prefix: Dummy)

The common folder contains the interface for the physics subsystem. Dummy is just an empty implementation of the physics interface that does nothing. Bullet contains a physics implementation that makes use of the Bullet physics library.

SceneGraph

Prefix: SG
Here resides the scenegraph, which handles tasks such as storing hierarchical transform data for nodes (game objects). Scenegraphs are also usually responsible for culling, but the BGE’s scenegraph is not. Instead, our KX_Scene class from Ketsji handles it. In addition, Bullet is also used for culling. So, this one task is scattered throughout the codebase.

VideoTexture

Prefix: None
This contains the code for the VideoTexture Python module.

Examples of Problems

If the overview was not enough to make one already question the quality of the BGE codebase, let’s look at some specific parts of the code. The following sections are examples of various issues in the BGE codebase (this list is not exhaustive).

Rendering Code has Broken Encapsulation

In general, our rasterizer is quite messy. Why not simply clean it up then? The problem we face is that we have rendering (i.e., OpenGL) code scattered all around the code base. For example, here is a list of the earlier mentioned modules that contain OpenGL code:

  • Rasterizer (good)
  • BlenderRoutines (bad)
  • GamePlayer (bad)
  • Ketsji (bad)
  • VideoTexture (bad)

The physics code also contained OpenGL code until recently. Why would a physics engine need to use OpenGL? You would probably guess for the “Physics Visualization” option, but that is being properly handled through the rasterizer interface. Instead, our physics code was using OpenGL in its view-frustum and occlusion culling tests. That’s right, our physics engine determines what is drawn, not our rasterizer or scenegraph. This should make a few people scratch their heads. The thing is, Bullet provides data structures for handling dynamic scenes and culling those dynamic scenes, and Bullet can do this a lot faster (logarithmic time complexity) than what we were previously using (linear time complexity). I can see why this optimization was done. However, the culling code is completely separate from the physics code. A GraphicsController was added to our physics interface to allow physics engines to do culling tests. Instead, we should have just created some code to allow the scenegraph to use Bullet to do culling (neatly encapsulated in some fashion).
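
To sketch what I mean, here is roughly what a scenegraph-owned culling interface could look like. Everything here is illustrative (SG_CullingBackend and SG_BulletCullingBackend do not exist in the BGE); the point is that the broadphase-backed culling would live behind the scenegraph instead of behind the physics interface:

class SG_Node;

// Illustrative sketch only; not actual BGE code.
class SG_CullingBackend
{
public:
    virtual ~SG_CullingBackend() {}

    // Test every registered node against the view frustum and call the
    // visitor for each node that is (potentially) visible.
    virtual void CullFrustum(const float planes[6][4],
                             void (*visit)(SG_Node *node, void *user),
                             void *user) = 0;
};

// One implementation could wrap Bullet's dynamic AABB tree, keeping the
// logarithmic-time culling we get today, but without the physics
// interface needing a GraphicsController at all.
class SG_BulletCullingBackend : public SG_CullingBackend
{
public:
    virtual void CullFrustum(const float planes[6][4],
                             void (*visit)(SG_Node *node, void *user),
                             void *user)
    {
        // Walk the Bullet tree with the frustum planes and invoke
        // visit() for every leaf whose bounds intersect the frustum.
        (void)planes; (void)visit; (void)user;
    }
};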

Now, back to the rasterizer itself. Let’s say I want to change how we use shaders (maybe to reduce the number of state changes). The most obvious spot to look for this code is in the rasterizer. However, the rasterizer does not know about shaders. For that matter, it doesn’t know about materials or textures either, and, until recently, did not even know about lights. A fun note on lights: the fixed-function (i.e., Multitexture Material) lighting code used to be handled by the players (e.g., the embedded player or stand-alone blenderplayer), which meant we had different fixed-function lighting code for the different players.
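
To illustrate the kind of encapsulation I am talking about: the rasterizer interface (RAS_IRasterizer) could be the single choke point for shader and material state. The two methods below are hypothetical additions for the sake of the sketch, not the current API, and RAS_IShader is a made-up name:

class RAS_IShader;           // hypothetical
class RAS_IPolygonMaterial;  // existing interface, forward declared here

// Trimmed-down, illustrative sketch; not the real RAS_IRasterizer.
class RAS_IRasterizer
{
public:
    virtual ~RAS_IRasterizer() {}

    // All shader/material changes go through the rasterizer, so callers
    // never touch OpenGL directly...
    virtual void BindShader(RAS_IShader *shader) = 0;
    virtual void BindMaterial(RAS_IPolygonMaterial *material) = 0;

    // ...and the rasterizer can skip redundant state changes, e.g. by
    // remembering which shader is already bound.
};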

Overall, the fact that our rendering code is all over the engine makes it very difficult to maintain. We cannot add a new rasterizer for things like OpenGL 3+ or OpenGL ES (mobile) until we get everything properly encapsulated behind our current rasterizer, which will take time.

Duplicate Code in VideoTexture

The VideoTexture module sits almost completely apart from the rest of the engine. At first glance this is a good thing (loosely coupled, few dependencies, etc.). However, taking a deeper look at the module shows that it achieves this lack of dependencies by copying large chunks of the render setup code from KX_KetsjiEngine (the core engine class). This creates a large chunk of duplicate code. Now, if a developer wants to fix/change something like how camera projections are handled (e.g., fix something related to orthographic cameras), they might find the code in KX_KetsjiEngine, make the change, and call it a day. However, they have just introduced a bug to the render-to-texture functionality offered by VideoTexture! Overall, I would like to see the VideoTexture features better integrated into the BGE and the VideoTexture module itself phased out. For example, we should just add “native” support for Blender’s movie texture type.
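
As a sketch of the alternative, both code paths could call one shared helper for their projection setup, so a fix for (say) orthographic cameras only has to be made once. The helper below is purely illustrative; nothing like RAS_ProjectionHelper exists in the BGE today:

class RAS_IRasterizer;
class KX_Camera;

// Illustrative only: a single, shared place for projection setup that
// both the main render loop and VideoTexture's render-to-texture path
// would call, instead of each keeping its own copy of the code.
class RAS_ProjectionHelper
{
public:
    // One place to fix orthographic/perspective projection bugs.
    static void SetupProjection(RAS_IRasterizer *rasty,
                                KX_Camera *camera,
                                int viewport_width,
                                int viewport_height);
};

// Main engine render loop   -> RAS_ProjectionHelper::SetupProjection(...)
// VideoTexture's ImageRender -> RAS_ProjectionHelper::SetupProjection(...)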

A further issue with the VideoTexture module is that it is only exposed through the Python API. The rest of the engine does not know it exists. This makes it difficult to use the VideoTexture module in other parts of the engine (e.g., for the earlier mentioned support for movie texture types). Ideally, the features in the VideoTexture module would be part of the engine and exposed to the Python API (like all of the other engine code).

Multiple Material Classes

In the BGE materials are handled via the following classes:

  • RAS_IPolygonMaterial – This is an interface in the rasterizer for handling materials
  • KX_BlenderMaterial – This is a concrete implementation of RAS_IPolygonMaterial that handles materials for the BGE (note that it’s in Ketsji!). This includes code for Multitexture and GLSL materials mixed together in the same file (breaking the single responsibility principle).
  • BL_Material – Essentially a copy of Blender’s material data made during conversion. This copy is made because the BGE tries to avoid using Blender data. However, one of the fields in this class is the Blender material, which various parts of the engine access. This defeats the purpose of having BL_Material in the first place! Really, BL_Material is not needed and just causes extra clutter and confusion.
  • BL_Texture – Handles textures in Multitexture materials and custom shaders. GLSL mode uses Blender’s texture handling code (i.e., not code in the engine itself). Again, this isn’t in the rasterizer.
  • BL_Shader – This is for custom shaders created via the Python API. It should be noted that, despite the BL prefix, this class does not (directly) touch Blender data or code. It does make use of BL_Material and BL_Texture, which ultimately do interact with Blender code, but BL_Shader doesn’t need to know where the data came from. In other words, BL_Shader does not care what Blender does, but it still has the BL prefix, which is confusing to new developers.
  • BL_BlenderShader – Another shader class, but instead is used for Blender’s generated GLSL shaders. This means that custom-made user shaders have a completely different code path than the builtin shaders. Nice.

We used to also have a KX_PolygonMaterial for Singletexture mode. The general idea behind many of these classes is not bad, but, as mentioned, they have a lot of oddities.

Logic System Dependent on Logic Bricks

The game engine cannot currently run without logic bricks. The entire logic system is dependent on logic brick code. For example, we cannot even run a Python script without getting logic bricks involved. Logic bricks are useful to have, but they should be properly encapsulated and organized. There are a few things that need to be done here:

  1. Logic bricks should be put into their own folder (GameLogic) and not be part of the core engine module.
  2. Logic bricks should be implemented as a logic system that is separate from the engine itself. This allows other forms of logic (e.g., Python and HIVE) to be integrated in a smoother manner. This could be achieved by having a list of logic systems that the game engine could iterate over and tell to update (see the sketch after this list).
  3. Logic bricks should expose functionality, not contain it. A lot of this has been cleaned up, but we’ve had cases where features such as animations were coded directly into a logic brick. These types of features should be in the engine code, and then a logic brick can interface with that code. This allows other systems (e.g., Python, HIVE, other parts of the engine itself) to make use of these features. We’ve still got some features that only logic bricks have access to, such as 2D filters. We also have cases such as the Near and Radar bricks, which have to create collision shapes in the physics engine in order to work. Where is this in the Python API?
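
To make the second point more concrete, here is a rough sketch of what a pluggable logic system interface could look like. None of these class names exist in the BGE today:

class KX_ILogicSystem
{
public:
    virtual ~KX_ILogicSystem() {}
    virtual void Update(double curtime) = 0;
};

// Logic bricks become just one system among several.
class KX_LogicBrickSystem : public KX_ILogicSystem
{
public:
    virtual void Update(double curtime) { /* run sensors, controllers, actuators */ }
};

class KX_PythonLogicSystem : public KX_ILogicSystem
{
public:
    virtual void Update(double curtime) { /* run registered Python callbacks */ }
};

// The engine's main loop would then simply iterate over whatever
// systems are registered and tell them to update:
//
//   for (size_t i = 0; i < m_logicSystems.size(); ++i)
//       m_logicSystems[i]->Update(curtime);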

Inconsistent Logic Bricks

While on the topic of logic bricks: logic bricks are rather inconsistent, which makes using or developing them awkward. For example, handling pulses is left up to the actuator (not some higher-level system). This means that actuators can choose how to respond to positive or negative pulses. This sounds like a good idea until you realize two different patterns have emerged:

  • Actuators that only perform tasks while receiving a positive pulse.
  • Actuators that start performing tasks on a positive pulse and only stop when they receive a negative pulse.

This is just confusing to both new users (why do the actuators behave differently?) and developers (how should a new actuator behave?). What is worse is that this inconsistency cannot be fixed without breaking existing games.
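
To show the difference in code terms, here are the two patterns side by side. These are illustrative classes, not actual BGE actuators:

// Pattern 1: only does work while it keeps receiving positive pulses.
class PulsePatternOneActuator
{
public:
    bool Update(bool positive, bool negative)
    {
        (void)negative;    // this pattern ignores negative pulses
        if (!positive)
            return false;  // nothing to do without a positive pulse
        DoWork();
        return true;
    }
private:
    void DoWork() {}
};

// Pattern 2: a positive pulse starts the work, which keeps running
// until an explicit negative pulse turns it off again.
class PulsePatternTwoActuator
{
public:
    PulsePatternTwoActuator() : m_active(false) {}
    bool Update(bool positive, bool negative)
    {
        if (positive) m_active = true;
        if (negative) m_active = false;
        if (m_active)
            DoWork();
        return m_active;
    }
private:
    void DoWork() {}
    bool m_active;
};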

Other Issues

I’ve just given a sampling of potential issues. Some other issues I haven’t gone in depth on are (among others):

  • KX_GameObject starting to fall into the trap of the god object anti-pattern
  • Physics being a mess in general (CcdPhysicsEnvironment.cpp contains a lot of classes, bad timestep, etc.)
  • Logic bricks being tied to framerate
  • The terrible inheritance structure used for deformers

I may write some articles on these problems in the future.

Patch Example: LoD

I would like to end this discussion with an example of how these types of issues affect actual development. For this example, we’ll take a look at the recently added LoD feature. Overall, this was a fairly simple patch (self contained, necessary changes were obvious, etc.). However, after being reviewed by three developers, we still had a few bugs creep in from it. One example was T39053. The bug was that using replace mesh via an actuator did not properly change the mesh material. The first sign of a design issue is that replace mesh works differently on the actuator than through the Python/engine API. This shows that the actuator is doing more than simply exposing an interface. This bug took me way longer to fix than it should have. Between git bisects, head scratching and digging through code, I think I spent around 1~2 hours fixing the bug. The fix was committed as 6c9dd1. The LoD patch slightly changed how mesh conversion worked, but this change was reviewed and was deemed to be fine. What we failed to notice is that the replace mesh actuator converts a mesh a second time instead of using the already converted mesh data. The actuator should never have done conversion like this (again, breaking the single responsibility principle).

I hope that this article gives people at least a small glimpse into the problems facing BGE developers and what makes the engine difficult to maintain.

In Ton’s recent blog post, he discussed a roadmap for Blender 2.7, 2.8 and beyond, which included a tighter integration between Blender and the BGE. While this initially caused quite the stir in the BGE community, with some thinking this meant dropping the BGE entirely, I see it more as a desire to get the two to share more code. Blender has smoke, fluids and particles, so why shouldn’t we use those in the BGE? Too slow? Then let’s speed them up and make Blender users happier in the process. The way I see it, the BGE can benefit from new features in Blender and Blender can benefit from performance improvements from the BGE. But, how do we get there? That’s what I aim to discuss in this article.

Sharing Blender Data

The first major problem that needs to be tackled is how the BGE handles Blender data. Currently, one of the BGE’s major design decisions is to never modify Blender data. While the BGE does modify Blender data in a few places (most notably lights), we’ve mostly stuck to this design principle, which has helped prevent numerous bugs and potentially corrupting users’ data. However, in doing so, we’ve had to recreate most of Blender’s data structures and convert all Blender data to BGE data. This also limits how we can interact with existing Blender tools. Blender has a lot of powerful mesh editing tools, but we can’t use those in the BGE because they require a Blender Mesh object while the BGE has a RAS_MeshObject, and using the original Blender Mesh would cause that data to change.

If we want a tighter integration between Blender and the BGE, we need to allow the BGE to have more direct control over Blender data. This means we need to find a way to allow the BGE to modify and use Blender data without changing the original data. The most obvious method is to give the BGE a copy of all of the data and then just trash the copy when the BGE is done. However, I think there is a more elegant solution to the problem. If you look at the existing code base, you can see that the Blenderplayer actually doesn’t have to worry about modifying Blender data as long as it never saves the Blendfile it reads. Only the embedded player has issues, because it is using the Blender data already loaded in Blender. So, why not have the embedded player read from disk like the Blenderplayer? When the embedded player starts, the current Blendfile could be saved to disk and then loaded by the BGE. There are some details that have to be worked out here though, such as where do we save the file? A temporary location (e.g., /tmp)? That will cause path issues in larger games. Instead, I see two feasible locations: the original file, or the original file appended with a “~”. The first would behave like a compiler: you save before running your program. This is the approach I prefer. However, it changes the current behavior, which might upset some users.
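
As a rough sketch of the flow I have in mind (the two helper functions are placeholders for this sketch, not existing Blender functions):

#include <string>

// Placeholders for this sketch only.
void save_main_to_disk(const char *path);
void launch_embedded_player(const char *path);

void StartEmbeddedGame(const char *blendfile_path)
{
    // Write the in-memory data next to the original file, e.g.
    // "game.blend~", so relative paths inside the game keep working.
    std::string runtime_path = std::string(blendfile_path) + "~";
    save_main_to_disk(runtime_path.c_str());

    // The player then reads the copy from disk, exactly like the
    // stand-alone Blenderplayer does, and can modify that data freely.
    launch_embedded_player(runtime_path.c_str());
}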

A more long term solution to the problem of modifying Blender data is to drop the embedded player. As I mentioned before, the Blenderplayer doesn’t run into issues using Blender data since it doesn’t share a memory space with Blender. And, since the Blenderplayer supports being embedded into other applications, we can still have games running in what appears to be the viewport. In other words, we would not lose features! Some benefits to this approach:

  • Get rid of a lot of code (the whole source/gameengine/BlenderRoutines folder)
  • A lot less duplicate code
  • Smaller Blender runtime size (all BGE code would only be in the Blenderplayer, and not Blender)
  • Playing the game in the viewport and the Blenderplayer would be guaranteed to be the same (right now small differences exist)
  • The ability to modify Blender data without breaking Blender
  • A BGE crash won’t affect Blender since they will be in separate processes (like Chrome tabs)

However, there are some downsides, which include:

  • It will be more difficult to affect the BGE from Blender. At the moment this isn’t a problem, but if we want some goodies like Unity offers with adjusting the game using the editor while the game is running, we’d need to develop some inter-process communication protocol to get Blender and the BGE communicating.
  • We currently don’t allow embedding on OS X. I’m not sure if this is a limitation of OS X itself, or a lack of development effort on our part.

Using Blender Data

So, we’ve got some ways to minimize the issues of the BGE using Blender data, but what do we do with it? First off, I’d start to clean up the BGE code to use DNA data as storage and then shift the focus of the various BGE classes to act as wrappers around that storage. Where possible, the member functions of those classes could delegate to the various Blender kernel (BKE) functions. Once that is done, we can look into what Blender goodies we can start adding to the BGE using these new classes.
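
As a very rough sketch of what “wrappers around DNA storage” could look like (this is illustrative only, and the BKE call in the comment is just an example of the kind of kernel function such a wrapper would delegate to):

struct Object;  // Blender DNA object

// Stripped-down, hypothetical version of what a BGE game object
// wrapper could look like if DNA were used as the storage.
class KX_GameObject
{
public:
    explicit KX_GameObject(Object *ob) : m_blenderObject(ob) {}

    void UpdateTransform()
    {
        // Rather than keeping its own copy of the transform, the
        // wrapper would delegate to Blender kernel code, e.g. something
        // like BKE_object_where_is_calc(scene, m_blenderObject).
    }

private:
    Object *m_blenderObject;  // the storage lives in DNA, not in the BGE
};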

Viewport Drawing

While the BGE and Blender already share a fair amount of viewport drawing code (especially in GLSL mode), this area could be much improved. The first task here is to get all of the OpenGL (and any calls to bf_gpu) into the Rasterizer, and only the Rasterizer. This requires moving material and lighting data out of Ketsji and into the Rasterizer. Once this is done, we can worry about how the BGE handles its drawing. The Rasterizer should have two modes (possibly implemented as two Rasterizers): fixed function pipeline and programmable pipeline. To do this, I would propose dropping Singletexture and making the Multitexture code the basis for the fixed function Rasterizer, while GLSL mode would be the basis for the programmable Rasterizer. The programmable Rasterizer could have an OpenGL minimum of 2.1, as Ton suggested for his proposed roadmap, but I’d keep the fixed function Rasterizer as compatible with older hardware as possible.
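
In code terms, the split could look something like this (the two derived class names are illustrative; only RAS_IRasterizer exists today, and its real interface is much larger):

// Trimmed-down sketch of the existing rasterizer interface.
class RAS_IRasterizer
{
public:
    virtual ~RAS_IRasterizer() {}
    virtual void BeginFrame() = 0;
    virtual void EndFrame() = 0;
};

// Multitexture-style drawing, kept as compatible with old hardware
// as possible.
class RAS_FixedFunctionRasterizer : public RAS_IRasterizer
{
public:
    virtual void BeginFrame() { /* set up fixed-function state */ }
    virtual void EndFrame()   { /* restore state, swap, etc. */ }
};

// GLSL-based drawing, assuming an OpenGL 2.1 minimum.
class RAS_ProgrammableRasterizer : public RAS_IRasterizer
{
public:
    virtual void BeginFrame() { /* bind shaders, set uniforms */ }
    virtual void EndFrame()   { /* unbind shaders, swap, etc. */ }
};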

After we have the Rasterizer cleaned up, we can start offloading as many tasks as possible from the Rasterizer to the bf_gpu module, which the viewport code also uses. The more we can put into this module, the more Blender and the BGE can share viewport drawing. Ideally, the Rasterizer would not have any OpenGL code and would rely entirely on bf_gpu, maximizing code reuse and sharing.

Conclusion

Using the ideas outlined in this article, we’d have two main points of interaction between Blender and the BGE: BKE and bf_gpu. We could certainly look into more ways to increase integration between Blender and the BGE, but what I have discussed here will give us more than enough work for the foreseeable future. Also, please note that this is only a proposal and a listing of ideas, and by no means a definitive plan. Discussion and feedback are much encouraged and appreciated.

Bgui 0.08 Released

I’m looking to make some breaking changes to Bgui’s API, so I figured it was about time for a 0.08 release (especially since 0.07 was released over a year ago). Some cool changes that people might like include font outlines, a better animation system, and more control over font theming for the TextBlock and TextInput widgets. For more details on the changes, take a look at the changelog.

You can grab the new version from here.

Requirements:

  • Blender 2.6+ (tested against 2.66a)

Haven’t tried out Bgui yet? There is a Getting Started guide in the wiki.

Cheers,
Moguri

Well, this is it, my last report for GSoC 2012:

 

What did you do this week?

I cleaned up the Pre-Z code a bit more. For now I’m only doing the Pre-Z pass for GLSL materials, since the goal is to minimize time spent on fragment shaders. A side effect of this is that Multitexture and Singletexture are working again when the patch is applied. I also got custom vertex shaders partially working; the problem is I need to figure out how/when to switch back to the Pre-Z vertex shader after using a custom one. I have also been playing with GPU PerfStudio and APITrace to try and get some more info about the GPU. So far, I don’t have any new optimizations that give any noticeable results, but I believe the bottleneck is still in the fragment shaders.

Tracker stats:

New: 0
Closed: 1 (1 by me)
Net Change: -1
Current: 153

For those that are interested, in total, I have closed around 65 bug reports as part of this year’s summer of code.

What do you plan to do next week?
Next Monday is the pencils down date, so I’ll start getting some patches ready to send off to code review.

Are there any problems that will require extra attention and what impact will they have on your proposed schedule?

Nope, things are going smoothly.

Are there any builds available?

There are some on GraphicAll.

Cheers,
Moguri

What did you do this week?

While I fixed a couple of bugs, I spent way more time on optimizing this week. I spent some time with AMD’s GPU PerfStudio 2 on an AMD card. After a lot of profiling and toying around, I found that fragment shaders were really slowing down the Necrosys map. I did some research and decided to try implementing a depth pre-pass/Pre-Z pass to reduce overdraw and the amount of time needed to process fragments by culling them with a depth test. For more information on early depth testing, there is this article from AMD, which I found very helpful. This yielded a 60% increase in the fps of the Necrosys map (going from about 63fps to about 104fps) on my system. I started a thread on Blender Artists to try and collect more data on how this optimization affects other scenes.
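
For those curious what a depth pre-pass looks like at the OpenGL level, here is a minimal sketch (the two draw functions are placeholders, not BGE code):

#include <GL/gl.h>

void draw_scene_depth_only();  // cheapest possible shaders, depth only
void draw_scene_shaded();      // the normal, expensive fragment shaders

void render_with_prez()
{
    // Pass 1: lay down depth only; color writes are disabled so the
    // per-fragment work is as cheap as possible.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_TRUE);
    glDepthFunc(GL_LESS);
    draw_scene_depth_only();

    // Pass 2: shade normally, but only fragments that match the depth
    // buffer survive, so occluded fragments never run the expensive
    // shaders. GL_LEQUAL is a safer choice than GL_EQUAL if the two
    // passes cannot guarantee identical depth values.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_FALSE);
    glDepthFunc(GL_EQUAL);
    draw_scene_shaded();

    // Restore the usual depth state.
    glDepthMask(GL_TRUE);
    glDepthFunc(GL_LESS);
}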

I also implemented display lists for shadows, but this gave no noticeable performance difference since the bottleneck was not in transferring vertices.

Tracker stats:

New: 4
Closed: 2 (2 by me)
Net Change: +2
Current: 154

What do you plan to do next week?
Next Monday is the “suggested ‘pencils down’ date,” but I don’t think it affects my project a whole lot this year since most of what I’m doing is cleaning and scrubbing. I have some more test files that I will take a look at for optimization. Maybe I can also look into closing enough reports to get back down to three pages in the tracker.

Are there any problems that will require extra attention and what impact will they have on your proposed schedule?

Nope, things are going smoothly.

Are there any builds available?

There are some on GraphicAll.

Cheers,
Moguri

What did you do this week?

Recently I’ve been profiling the BGE’s VBO code, and I’ve been rather disappointed with it. Even after some cleanup/optimization (including a nice speedup to skinned meshes and other frequently updating meshes), I could not get VBOs as fast as vertex arrays with display lists. I’ve also come to find out that Nvidia has a particularly nice display list compiler, which will make it difficult to get Nvidia cards running faster with VBOs than with display lists. I was, however, hoping to at least get ATI and Intel cards to run faster. However, on those two cards, VBOs are still running slower than vertex arrays alone (no display lists!). I might try to get more gains out of VBOs, but I think I might want to start looking elsewhere.
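
For context, here is roughly what the two upload paths being compared look like in plain OpenGL (not BGE code; the buffer-object entry points assume OpenGL 1.5+ headers or an extension loader such as GLEW):

#include <GL/gl.h>

// Display list: the draw commands themselves are baked once and
// replayed with glCallList(), which Nvidia's drivers optimize heavily.
GLuint build_display_list(const float *verts, int vert_count)
{
    GLuint list = glGenLists(1);
    glNewList(list, GL_COMPILE);
    glBegin(GL_TRIANGLES);
    for (int i = 0; i < vert_count; i++)
        glVertex3fv(&verts[i * 3]);
    glEnd();
    glEndList();
    return list;
}

// VBO: only the vertex data lives on the GPU; draw calls still have to
// be issued every frame (bind the buffer, set pointers, glDrawArrays).
GLuint build_vbo(const float *verts, int vert_count)
{
    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, vert_count * 3 * sizeof(float),
                 verts, GL_STATIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    return vbo;
}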

Taking a look at components, I got the branch to compile again and I improved reloading of components; they now update their properties instead of being recreated (meaning you don’t lose your settings). While looking at the code, I realized that it had some style issues, so I cleaned it up to better match what I could remember from Blender’s style guide (C-style comments in C code, K&R bracing for loops and ifs).

As for the bug tracker, I managed to close the following bugs this week:

  • Action actuator doesn’t finish playing if frame rate drops {fixed r49349}
  • BGE Vertex deformer optimized method does not work properly {fixed r49371}
  • Character physics type colliding with sensor type {fixed r49373}

I also fixed a couple of bugs that were reported to me outside of the tracker:

  • Performance regression with 2D Filters {fixed r49326}
  • Restrict Animation Updates option not framerate independent {fixed r49732}

Tracker stats:

New: 6
Closed: 3 (3 by me)
Net Change: +3
Current: 152

And we’re back to four pages. 😦

What do you plan to do next week?
More optimizing and bug fixing, with more of an emphasis on optimizing.

Are there any problems that will require extra attention and what impact will they have on your proposed schedule?

Nope, things are going smoothly.

Are there any builds available?

There are some on GraphicAll.

Cheers,
Moguri

What did you do this week?

I finally managed to get Nvidia Nsight and started profiling some OpenGL usage. I cleaned up some of the BGE’s OpenGL usage (eliminated some glGet and glIsEnabled calls), which got me a few fps in the Necrosys map. I’m hoping for more gains, but I’m still learning how to best use the tool. I also managed to speed up loading on Dalai’s project by using glGenerateMipmap() instead of gluBuild2DMipmaps() on hardware that supports glGenerateMipmap(). This greatly reduced the delay between when you could start hearing sounds and when you could start seeing the level.
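
For reference, the change is essentially this (plain OpenGL/GLU, not the actual BGE code; glGenerateMipmap() comes from OpenGL 3.0 / the framebuffer-object extension, so an extension loader may be needed to reach it):

#include <GL/gl.h>
#include <GL/glu.h>

void upload_texture_rgba(const void *pixels, int width, int height,
                         bool has_generate_mipmap)
{
    if (has_generate_mipmap) {
        // Upload once and let the GPU build the mipmap chain.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        glGenerateMipmap(GL_TEXTURE_2D);
    }
    else {
        // Old path: GLU rescales every mip level on the CPU, which is
        // what caused the long loading delay.
        gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGBA, width, height,
                          GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    }
}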

In an effort to start getting things merged into trunk (if I’m lucky for 2.64), I’ve submitted a patch with only my changes to Swiss for code review.

Furthermore, I’ve finally gotten around to recreating the ge_components branch with my component code. I haven’t done much testing with it, but I’d like to start poking around and seeing what needs to be done.

I also managed to close the following bugs this week:

  • Incorrect physics for LibLoaded dupligroups {fixed r49237}
  • SubSurf in BGE suggestion {rejected}
  • Action Actuator in Loop End stops updating the Frame Property after no longer receives positive signal {fixed r49189}
  • Unable to modify KX_LightObject in BGE {fixed r49154}
  • light distance not adressable in GE GLSL mode {fixed r49154}
  • Overlay scene gets transparent when motion blur is enabled {r49128}

Tracker stats:

New: 6
Closed: 6 (6 by me)
Net Change: +0
Current: 149

What do you plan to do next week?
Hopefully I can start getting some reviews from the code review. I will continue trying to make the Necrosys map run faster and fix more bug reports.

Are there any problems that will require extra attention and what impact will they have on your proposed schedule?

Nope, things are going smoothly.

Are there any builds available?

There are some on GraphicAll.

Cheers,
Moguri

What did you do this week?

I started off this week looking at multi-uv bugs to see which ones are actually fixed in Swiss. I’ve verified that bugs #18146 and #17927 are fixed in Swiss. #20281 and #37775 should be solved after I get a bit of clarification on some Blender code.

There are also a couple of bugs about changing light values in realtime not having any graphical effect. These bugs were fixed in Cucumber, so I’m going to see if I can bring that code over to Swiss or trunk.

Other than bugs, I managed to get lib loaded materials to not compile their shaders twice. This gets rid of an error message when using the async option, and it offers a small speed up. I have also gotten my Swiss code into a working copy of trunk to start looking at the possibility of merging with trunk.

Now time for some tracker stats:

New: 3
Closed: 1 (1 by me)
Net Change: +2
Current: 149

What do you plan to do next week?
Once Nvidia gets their developer site back up, I’d like to try some of their programs to profile the BGE’s OpenGL usage to try and get the Necrosys map to run better. I finally got Dalai’s files working, so I can also start trying to optimize for those. Overall, a scene change spends about two seconds on scene conversion; hopefully I can get it down to around one second or better. I’ll also see about the possibility of merging some of my Swiss changes into trunk so the multi-uv bug reports can be closed.

Are there any problems that will require extra attention and what impact will they have on your proposed schedule?

Nope, things are going smoothly.

Are there any builds available?

There are some on GraphicAll.

Cheers,
Moguri

What did you do this week?

This week was a rather slow week for me as I was busy with other things and waiting on files. However, I’ve fixed some memory leaks in both trunk and Swiss that I found using the handy Visual Leak Detector. I also wrote up a Game Engine release log for the 2.64 test builds release, and I’ve been monitoring response to the release in this BA thread to try and find any regressions. So far, the only regression I’ve confirmed is one involving DDS/DXT textures and needing to flip the compressed textures. This regression (as well as the reason for the regression and possible fixes) is noted in the release log.

Of the bugs I fixed, one notable one was a regression (found prior to the 2.64 release) caused by the character physics type, which made Radar and Near sensors collide with objects. I say this is notable because I think it was one of the few (if not the only) 2.64 BGE regressions in the tracker. Hopefully this means 2.64 won’t break too many 2.63 games. Another bug worth mentioning is enable/disable rigid body not working with Bullet. Now that I’ve fixed this, I don’t think there are any old features (i.e., Sumo features) lying around that do not work with Bullet.

Now time for some tracker stats:

New: 4
Closed: 11 (8 by me)
Net Change: -7
Current: 147

And we are down to three pages in the tracker!

What do you plan to do next week?
I got a map from the Necrosys guys that they are letting me profile, so I’m going to see what I can do with that. So far, it loads about 30% faster in Swiss. Dalai also has a game/walkthrough for me to profile, but we’re having issues running it on Windows. Once those get resolved, I’ll also profile that.

Are there any problems that will require extra attention and what impact will they have on your proposed schedule?

Nope, things are going smoothly.

Are there any builds available?

There are some on GraphicAll.

Cheers,
Moguri

What did you do this week?

I finally implemented an interface to work with asynchronous lib loading. I settled on a future object that can register callbacks as mentioned in this post on my feedback thread. As for the tracker, I managed to close eleven reports, but five new ones came in. The good news is we are now only four reports away from finally getting back to three pages.

Tracker stats:

New: 5
Closed: 11 (11 by me)
Net Change: -6
Current: 154

4 more to go until we’re back down to three pages!

What do you plan to do next week?
I should be able to get the tracker under three pages this next week. As for what non-bug-hunting task I will work on, I think I will need to consult Dalai. I might work on an actuator for LibLoad, but with the Hive GSoC, it might be better to wait on adding any new logic bricks. I could start looking at cleaning up components, or look into more converter optimizations.

Are there any problems that will require extra attention and what impact will they have on your proposed schedule?

Nope, things are going smoothly.

Are there any builds available?

There are some on GraphicAll.

Cheers,
Moguri