Touching base

I am a developer at a production company that creates extraordinary experiences using the latest in technology.

It is a very exciting place to work despite tight deadlines and long hours, or perhaps because of them. And we get to play with the latest technology on offer, including virtual reality, augmented reality, and the newest display and interaction devices.

Last week was one of those weeks where I had to complete three separate projects by Friday: two VR experiences and a multi-touch table application. All were completed, with a slight hiccup on one, but it occurred to me that I often face problems to which very few people seem to have the answers, and that it would be useful to record those problems, and hopefully their solutions, somewhere. Hence this blog.

The two VR experiences needed only minor changes and a little time, which I did not really have, as I was facing a very large problem with the multi-touch table project.

The problem I faced was converting a Microsoft Surface 2.0 application to run on a MultiTaction table. Both recognise touches, shapes, and markers or tags placed on the table. However, each uses a completely different API. Microsoft Surface has since been renamed PixelSense, but I shall continue to refer to it as Surface as that is how it is referred to in the code.

My original idea was to take the existing code and simply replace the Surface components with TUIO code, TUIO being an open source framework for multi-touch applications that can also recognise markers and blobs.

Unfortunately, on examining the code in a little more detail, it became apparent that the Surface components were embedded throughout the application, and I had only a cursory knowledge of Windows Presentation Foundation (WPF), in which Surface applications are written.

No problem, someone must have written a TUIO API for WPF, and indeed they have: several are available, along with reference code implementing a TUIO client in several languages. All could handle multi-touch; none could handle markers. On top of that, not all the Surface controls had an equivalent in any of the APIs I looked at.

I had a week to do this.

After some experimentation with the various APIs, I kept coming back to a statement that a MultiTaction table could run a Microsoft Surface 2.0 application. This was clearly not entirely true, as every time I tried, the application would run without displaying a window. The only table I had access to was one used for demonstrations and previous development projects, so a clean system might have worked, but since I could not reinstall the entire system, I went back to working out why this application was failing on this particular table.

I installed the MS Surface 2.0 SDK on my desktop PC and was pleasantly surprised to find that the application did indeed run, but this did not help me discover why it would not on the table itself. I created a bare-bones Surface 2.0 application, and again it would run on my PC but not show a window on the table. As the only control was the SurfaceWindow control, and SurfaceWindow inherits from System.Windows.Window, I decided to try replacing it with the plain Window control. And would you believe it, it worked!

OK, the window was now just a normal Windows window, but a few changes to its properties restored it to a full-screen application, with no apparent loss of functionality in the Surface 2.0 controls. Round 1 to me.
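For the record, the change amounted to something like the sketch below. The class and file names are hypothetical, but the properties are standard System.Windows.Window members, which is what makes the swap from SurfaceWindow possible in the first place:

```csharp
using System.Windows;

// Sketch: after replacing <s:SurfaceWindow> with <Window> in the XAML,
// the code-behind base class changes and a few properties restore the
// chromeless, full-screen look of a Surface application.
public partial class MainWindow : Window   // was: SurfaceWindow
{
    public MainWindow()
    {
        InitializeComponent();
        WindowStyle = WindowStyle.None;      // no title bar or borders
        WindowState = WindowState.Maximized; // fill the table's screen
        ResizeMode  = ResizeMode.NoResize;   // a table app never resizes
    }
}
```

The Surface 2.0 controls inside the window are unaffected, since they only require a WPF visual tree, not a SurfaceWindow ancestor.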

Unfortunately, while I had restored the multi-touch functionality, I still had to restore marker recognition and the ability to track the position of those markers on the table. This bit of magic was not something the MultiTaction table was able to reproduce, as it uses its own marker tracking methods.

Now, remember earlier I mentioned that I had planned to use TUIO to replace the multi-touch and marker recognition. While I no longer needed the multi-touch, I realised I could still use the marker recognition, and as I was unable to find a suitable WPF API, I went back to the original reference C# implementation. Running the demo program showed that markers were being recognised by the pure C# TUIO code, so all I needed to do was incorporate it into the application.
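As a rough sketch of what incorporating it looks like: the reference C# implementation (as I recall it from tuio.org) exposes a TuioListener interface whose object callbacks correspond to fiducial markers; the exact member names below are from memory and may differ slightly between TUIO versions:

```csharp
using TUIO; // reference C# implementation

// Sketch, assuming the reference TuioListener interface: physical
// markers (fiducials) arrive as TuioObject callbacks, touches as
// TuioCursor callbacks (unused here but required by the interface).
public class MarkerTracker : TuioListener
{
    public void addTuioObject(TuioObject obj)
    {
        // Marker placed: obj.getSymbolID() identifies which marker,
        // obj.getX() / obj.getY() give its normalised (0..1) position.
    }
    public void updateTuioObject(TuioObject obj) { /* marker moved */ }
    public void removeTuioObject(TuioObject obj) { /* marker lifted */ }

    public void addTuioCursor(TuioCursor cur) { }
    public void updateTuioCursor(TuioCursor cur) { }
    public void removeTuioCursor(TuioCursor cur) { }
    public void refresh(TuioTime time) { } // end of a message bundle
}

// Wiring it up (TUIO trackers send OSC on UDP port 3333 by default):
//   var client = new TuioClient(3333);
//   client.addTuioListener(new MarkerTracker());
//   client.connect();
```

The crucial detail, which bit me shortly afterwards, is that the client raises these callbacks on its own receive thread.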

This is where my lack of experience with WPF really showed. I took the code, wrote it into the window's code-behind, and added some code to display the marker on the screen where the physical marker was placed. With some trepidation, I ran the application on the table, and nothing. I was confused.

I wrote some more code to create a log file so I could see what was happening, and I was happy to see that a marker placed on the table was definitely being registered and its position reported correctly, but the application would then just stop and not recognise any further markers. I was stumped.

It was now Friday, and my boss wanted to see this working before the end of the day. I was sweating. After taking a moment to panic, I sat down to figure out what was happening. Checking the reference C# code again, I realised that an object was created and locked when a marker event occurred, then released after the event was processed. And it occurred to me that the TUIO listener was running in its own thread.

Further research revealed that WPF controls can only be altered by code running on the UI thread, and due to the way the TUIO code registered its events through delegates, the event handling was occurring on the listener's thread instead. This meant I had to find a way to run my WPF manipulation code on the UI thread.

After reading this post, I wrapped my UI manipulation code with:

Application.Current.Dispatcher.Invoke(new Action(() => { /* Your code here */ }));
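In context, that means the marshalling happens inside each TUIO callback. A minimal sketch, assuming the listener from the reference implementation; ShowMarker and markerCanvas are hypothetical names standing in for whatever draws the on-screen marker:

```csharp
// Sketch: the TuioClient raises this on its own receive thread, so
// any WPF element must be touched via the application's Dispatcher.
public void addTuioObject(TuioObject obj)
{
    Application.Current.Dispatcher.Invoke(new Action(() =>
    {
        // Now safe to touch WPF elements: convert the normalised TUIO
        // coordinates to pixels and draw the marker where it was placed.
        ShowMarker(obj.getSymbolID(),
                   obj.getX() * markerCanvas.ActualWidth,
                   obj.getY() * markerCanvas.ActualHeight);
    }));
}
```

Note that Invoke blocks the TUIO thread until the UI work completes; Dispatcher.BeginInvoke is the non-blocking alternative if holding up the listener (and any locks it has taken) is a concern.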

Markers were now being recognised, tracked and removed as intended, and I breathed a large sigh of relief. It wasn't even 5pm, and I could fully enjoy my bank holiday weekend.