Category Archives: Windows Store apps

(Universal Windows apps)

The great majority of apps built for Windows 8.1/Windows Phone 8.1 work on Windows 10 as-is – no changes required whatsoever. But what if you want to leverage the new APIs provided by Windows 10, such as the inking API, while still supporting the Windows 8.1 version of your app? Or you might be among the few unfortunate ones who have been using an API deprecated on Windows 10; the UserInformation class no longer works on Windows 10, and you have to use the User class instead. How do you do that without duplicating the code base and maintaining two completely separate app projects? In this article I’ll describe two approaches.

Shared code and assets in portable project

The first approach is to move all the shared code (in practice that can be almost all of your code) into a separate portable project in your Windows 8.1 solution. First you need to create the project: right-click your solution in the Solution Explorer, hover over Add and select New Project…

Adding a new project to a solution

Use Class Library as the project type, name it and hit OK.

Creating a class library project

Drag all the code and asset files you want to share between the Windows 8.1 and Windows 10 apps to the newly created Class Library project.

Note that if you have a solution that supports both Windows 8.1 and Windows Phone 8.1, you have to keep at least a partial main page (the page you navigate to at start-up) in the original Windows 8.1 and Windows Phone 8.1 projects. This is because you can’t add a reference to your Class Library project in the Shared (Windows 8.1/Windows Phone 8.1) project where your App class lives. And without the reference you can’t make your app navigate to a page defined in your Class Library project at start-up. Makes sense? OK, cool, let’s carry on…

Now that we have the code moved to the Class Library project, we must add it as a reference to the other projects so that we can access its classes and assets. Right-click References under the projects in the Solution Explorer and select Add Reference…

Adding references to a project

On the Projects tab you should now find the Class Library project. Check the checkbox and click OK.

Adding a project in the solution to another as a reference

Now fix any minor problems you may have, and once your app builds and runs, it is time to move on to the Windows 10 solution. Create a new universal Windows 10 application project and add the Class Library project containing the shared code to the Windows 10 solution as an existing project:

Adding an existing project to a solution

Add the Class Library project as a reference to your main Windows 10 project (as explained before), make your main project use the shared code, and you’re all set! Fine – I realize it’s rarely this simple, and you may need to do some tweaking to get all the other dependencies working, but these are the first steps to take.

If you now want to extend the app on Windows 10 by utilizing the cool new APIs, you need to add that specific code to the main project. You can’t, of course, access the code in the main project from the shared code (for many reasons, one being that this would create a circular dependency), but one solution is to define interfaces in the shared code and provide the implementations from the main project. See my example, namely the IUserInformationHelper interface in the Class Library, the Windows 10 UserInformationHelper implementation and App.xaml.cs, where the implementation is provided.
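As a rough sketch of this interface pattern (the member name GetDisplayName and the PlatformServices locator are illustrative assumptions, not the actual sample code):

```csharp
using System;

// Shared Class Library: the interface the shared code programs against.
public interface IUserInformationHelper
{
    string GetDisplayName();
}

// Shared Class Library: a simple locator the platform projects populate
// (the name PlatformServices is hypothetical).
public static class PlatformServices
{
    public static IUserInformationHelper UserInformationHelper { get; set; }
}

// Windows 10 main project: the real implementation would call the new
// Windows.System.User API here; a stub stands in so the sketch is
// self-contained.
public class UserInformationHelper : IUserInformationHelper
{
    public string GetDisplayName() => "Jane Doe";
}

public static class Program
{
    public static void Main()
    {
        // In App.xaml.cs, provide the platform-specific implementation
        // to the shared code.
        PlatformServices.UserInformationHelper = new UserInformationHelper();

        // Shared code now uses the Windows 10 implementation without
        // referencing the main project.
        Console.WriteLine(PlatformServices.UserInformationHelper.GetDisplayName());
    }
}
```

The shared code only ever sees IUserInformationHelper; each head project decides at start-up which implementation backs it.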

Pros

  • Allows management of the shared code as a single project

Cons

  • Other dependencies (NuGet packages and such) may cause problems, e.g. if they aren’t as universal and work only on Windows 8.1 and not on Windows 10
  • You cannot use conditional preprocessing blocks (#if) in the shared code to target a specific platform, since the compilation symbols are also shared

Conditional compilation symbols in project properties (WINDOWS_UWP is for Windows 10 apps)

Shared code and asset files as links

Another way of sharing code between solutions is to add the code and asset files as links. Using links, you don’t have to change your existing solution. Simply create a new – in this case Windows 10 – application project and start adding the files from your existing Windows 8.1 solution. Right-click your new project in the Solution Explorer, hover over Add and select Existing Item… Then browse to the Windows 8.1 solution folder containing the files you want to add, select the files and click Add As Link:

Adding files as links

The files are now shown in your Solution Explorer. However, they are not physically in your new project but remain in the Windows 8.1 application project folder. Any changes you make to these files will appear in both projects.

While adding the files individually can be tedious, the benefit here is that you can take advantage of conditional preprocessing blocks in C# code:

#if WINDOWS_UWP
    // Have your Windows 10 specific code here
#else
    // Have your Windows 8.1 specific code here
#endif

Pros

  • Conditional preprocessing blocks and compilation symbols can be used
  • Dependencies on additional libraries and NuGet packages are easier to maintain
  • Adding platform-specific features, e.g. new Windows 10 APIs, is trivial

Cons

  • Adding/removing shared code and asset files needs to be done in both solutions separately

Sample code

An example using both approaches featured in this article can be found on GitHub.


Tracking Objects from Video Feed Part III: Detecting Object Displacement

In the previous part we provided one solution for detecting and identifying a stationary object of a certain shape in a video feed. In this part we focus on tracking the object and try to analyze a simple path of a moving object. By simple, I mean *really* simple: we try to detect the “from” and “to” positions of the object – where it started and where it ended up.

When milliseconds count

Compared to detecting objects in a static image or frame, detecting object displacement presents a new, tough requirement: we have to analyze the frames in real time, and thus performance is key. We cannot simply use all the methods described earlier since, especially on mobile devices, they simply take too much time to compute. Ideally, depending on the frame rate and the estimated speed of the moving object relative to our field of view (FoV), our operation for tracking the object should take less than 10 milliseconds per frame. It is quite obvious that the complexity of any algorithm we use is relative to the frame size – the fewer pixels we have to analyze, the faster the operation.

Instead of using all the methods described earlier (chroma filter, object mapping, convex hull etc.) to track the object, we utilize them to “lock” the target object. In other words, we identify the object we want to track, and after that we can use far lighter methods to track its position. We don’t have to process the full frame, but only the area of the object with some margin. This helps us reduce the resolution and run our operations much more quickly.

Since our target object can be expected not to change color (unless we’re tracking a chameleon), we can do the following:

  1. Once we have detected the object in the image/frames and we know its position and size (the number of pixels horizontally and vertically where the object is thickest), we can define a rectangular cropped area with the object in the center and with a margin of e.g. 15%.
  2. Apply a chroma filter to this cropped area for each frame and keep track of the position, which is defined by the intersection point of virtual lines placed where we have the most pixels horizontally and vertically. Figure 9 illustrates tracking the locked target object.
    • If the center point displacement exceeds our predefined delta value, we move to the next phase, where we analyze the object movement.
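The tracking step above could be sketched along these lines, assuming the chroma filter output for the cropped region is represented as a boolean mask (all names here are illustrative, not the project’s actual code):

```csharp
using System;

public static class ChromaTracker
{
    // Estimate the object center in the cropped region as the intersection
    // of the column and row with the most chroma-filtered ("true") pixels.
    public static (int X, int Y) LocateCenter(bool[,] mask)
    {
        int width = mask.GetLength(0);
        int height = mask.GetLength(1);
        int bestX = 0, bestY = 0, bestColumnCount = -1, bestRowCount = -1;

        for (int x = 0; x < width; x++)
        {
            int count = 0;
            for (int y = 0; y < height; y++) if (mask[x, y]) count++;
            if (count > bestColumnCount) { bestColumnCount = count; bestX = x; }
        }
        for (int y = 0; y < height; y++)
        {
            int count = 0;
            for (int x = 0; x < width; x++) if (mask[x, y]) count++;
            if (count > bestRowCount) { bestRowCount = count; bestY = y; }
        }
        return (bestX, bestY);
    }

    // True when the tracked center has moved more than the predefined delta.
    public static bool ExceedsDelta((int X, int Y) from, (int X, int Y) to, double delta)
    {
        double dx = to.X - from.X, dy = to.Y - from.Y;
        return Math.Sqrt(dx * dx + dy * dy) > delta;
    }

    public static void Main()
    {
        // A 3x3 "object" in an 8x8 cropped region.
        var mask = new bool[8, 8];
        for (int x = 2; x <= 4; x++)
            for (int y = 3; y <= 5; y++)
                mask[x, y] = true;

        var center = LocateCenter(mask);
        Console.WriteLine($"{center.X},{center.Y}");
        Console.WriteLine(ExceedsDelta(center, (center.X + 5, center.Y), 3.0)); // True
    }
}
```

Counting pixels per row and column is a single cheap pass over the cropped region, which is what keeps this step within the per-frame time budget.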

Figure 9. Target object locked, and tracking limited to the region marked by the green rectangle.

It moved, but where did it go?

How do we implement the next phase then? It seems that for more accurate analysis of the object movement, we must use more complex methods than we used for detecting the initial displacement of the object. What if we record the frames for later analysis? Since we may not know or be able to forecast when the object is going to move, depending on the frame size, the video we record might be huge! Fortunately, there is a way to store the frames while still keeping the required size fixed: a ring buffer (also known as a circular buffer). In short, a ring buffer is a fixed-size buffer; when you reach the end, you start again from the beginning and replace the frames recorded earlier. See this article about buffering video frames by Juhana Koski to learn more. Because we observe the initial displacement of the object in real time, we can record a few more frames (the estimated time until the object exits our FoV) and then stop. After this we no longer have the real-time requirement, and we can take our time analyzing what happened to the object after its initial displacement.
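A minimal ring buffer for frames might look like this (a sketch; the real implementation would store camera frame buffers rather than a generic payload):

```csharp
using System;
using System.Collections.Generic;

// Fixed-capacity ring buffer: once full, the oldest frame is overwritten.
public class FrameRingBuffer<TFrame>
{
    private readonly TFrame[] _frames;
    private int _next;  // index where the next frame is written
    private int _count; // number of frames stored so far (<= capacity)

    public FrameRingBuffer(int capacity)
    {
        _frames = new TFrame[capacity];
    }

    public void Add(TFrame frame)
    {
        _frames[_next] = frame;
        _next = (_next + 1) % _frames.Length;
        if (_count < _frames.Length) _count++;
    }

    // Stored frames from oldest to newest.
    public IEnumerable<TFrame> Chronological()
    {
        int start = (_next - _count + _frames.Length) % _frames.Length;
        for (int i = 0; i < _count; i++)
            yield return _frames[(start + i) % _frames.Length];
    }
}

public static class Program
{
    public static void Main()
    {
        var buffer = new FrameRingBuffer<int>(3);
        for (int frame = 1; frame <= 5; frame++) buffer.Add(frame);
        Console.WriteLine(string.Join(" ", buffer.Chronological())); // prints: 3 4 5
    }
}
```

The memory footprint stays constant no matter how long the app waits for the object to move; only the last N frames survive.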

Let’s say that we want to get the last frame showing the object before it leaves the FoV. We could use the following algorithm:

  1. Start iterating from the last recorded frame towards the frame of the initial displacement:
    1. Treat each frame as we did in the beginning when we found the desired object from the image using chroma filter, object map, convex hull and shape analysis.
    2. If we find an object satisfying our criteria, we stop, expecting it to be the object we were tracking.
  2. We now have the object position from the beginning of its movement to the last known position in our FoV (see figure 10). This means we can at least calculate the angle and relative velocity of the object.
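The angle and relative velocity calculation at the end can be sketched as follows (illustrative names; velocity is expressed in pixels per frame):

```csharp
using System;
using System.Globalization;

public static class MotionAnalysis
{
    // Direction of travel (degrees) and relative velocity (pixels per frame)
    // between the initial displacement and the last known position in the FoV.
    public static (double AngleDegrees, double PixelsPerFrame) Analyze(
        (double X, double Y) from, (double X, double Y) to,
        int fromFrame, int toFrame)
    {
        double dx = to.X - from.X;
        double dy = to.Y - from.Y;
        double angleDegrees = Math.Atan2(dy, dx) * 180.0 / Math.PI;
        double pixelsPerFrame = Math.Sqrt(dx * dx + dy * dy) / (toFrame - fromFrame);
        return (angleDegrees, pixelsPerFrame);
    }

    public static void Main()
    {
        // Object moved from (0, 0) in frame 0 to (30, 40) in frame 10.
        var motion = Analyze((0, 0), (30, 40), 0, 10);
        Console.WriteLine(motion.AngleDegrees.ToString("F2", CultureInfo.InvariantCulture));   // 53.13
        Console.WriteLine(motion.PixelsPerFrame.ToString("F2", CultureInfo.InvariantCulture)); // 5.00
    }
}
```

If the frame rate is known, pixels per frame can be converted to pixels per second by multiplying with the frames-per-second value.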

Figure 10. Object (USB cannon projectile wrapped with pink sticker) motion captured.

Challenges and future development

Lighting challenges are typical with image pattern recognition solutions. Changes in lighting conditions affect the perceived color, and that makes the selection of parameters (YUV value and threshold) for chroma filtering difficult. Camera hardware and its settings play a significant role here: the longer the exposure time, the easier it is to detect the object properly. However, with a long exposure time, it’s harder to capture the object movement. The object in motion will have a distorted shape and its color will blend with the background. Thus, it becomes more difficult to find the object in the frames when it’s moving. On the other hand, if we use a short exposure time, we get less light per frame and the color difference between the object and the background might be insufficient.

The current implementation of the solution relies on manual parameter setting for both color and threshold. In the future, we could try to at least partially automate the parameter setting. We would still have to roughly know the shape and size of the object we want to find. We could apply edge detection algorithms to boost the color filter and get more accurate results with stationary objects. Of course, when an object is moving fast, the edges may blur. However, since the current implementation provides us with the frame of the initial object displacement, we can compare that to the later frames and see the changes in e.g. chroma. The moving object will leave a trace even if it’s blurred into the background or distorted.
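As a hint of what such edge detection could build on, here is a minimal Sobel gradient magnitude sketch, a classic edge detection operator (this is not part of the current implementation):

```csharp
using System;

public static class EdgeBooster
{
    // Sobel gradient magnitude over a grayscale image; strong responses
    // mark edges that could be used to sharpen the chroma filter result.
    public static double[,] SobelMagnitude(double[,] gray)
    {
        int width = gray.GetLength(0), height = gray.GetLength(1);
        var edges = new double[width, height];

        for (int x = 1; x < width - 1; x++)
        {
            for (int y = 1; y < height - 1; y++)
            {
                // Horizontal and vertical Sobel kernels applied at (x, y).
                double gx = -gray[x - 1, y - 1] + gray[x + 1, y - 1]
                            - 2 * gray[x - 1, y] + 2 * gray[x + 1, y]
                            - gray[x - 1, y + 1] + gray[x + 1, y + 1];
                double gy = -gray[x - 1, y - 1] - 2 * gray[x, y - 1] - gray[x + 1, y - 1]
                            + gray[x - 1, y + 1] + 2 * gray[x, y + 1] + gray[x + 1, y + 1];
                edges[x, y] = Math.Sqrt(gx * gx + gy * gy);
            }
        }
        return edges;
    }

    public static void Main()
    {
        // A vertical step edge: left half dark, right half bright.
        var gray = new double[6, 6];
        for (int x = 3; x < 6; x++)
            for (int y = 0; y < 6; y++)
                gray[x, y] = 255.0;

        var edges = SobelMagnitude(gray);
        // Strong response at the step, none in the flat region.
        Console.WriteLine(edges[2, 2] > 0 && Math.Abs(edges[1, 2]) < 1e-9); // True
    }
}
```

Combined with the chroma filter, such a gradient map could reject color matches that lie in flat regions far from any edge of the target object.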

And then there was the code…

The related code project is hosted in GitHub: https://github.com/tompaana/object-tracking-demo

See the README.md file delivered with the project to learn more. The project is freely licensed, so you can utilize any bits of the code any way you like. Have fun!

…or is it?

EDIT: Turns out that’s not all folks. See how everything turns out here.

Staying in control

When working on an app even a bit more complex than a couple of views, you quickly find yourself needing to create either custom UI components (user controls in Windows development terms) or at the very least composite components. Whenever I find myself in this kind of situation, I try to generalize them and make them as self-contained as possible. In my opinion this approach has two benefits: first, obviously, I can easily use them later in other projects. Second, it makes for better architecture and makes it easier to have more instances of the component in the same project.

I find this to be a standard practice; in most cases when you run into this situation, you find a ready-made solution to your problem on Stack Overflow. Usually it’s in the form of a snippet that you just copy into your project, and sometimes you get the complete user control from some project. I recently worked on an app and found there were no solutions to a couple of my problems, so I thought I would fill in the gap and provide them here. Both solutions are quite trivial (except for the first one, if you target a specific platform/framework version), but the thought of saving the precious time of any developer facing the same problem makes me happy.

Sliding panel user control


So, to the point. Behold, a sliding panel! With buttons, text, bells and whistles! The user can drag it or animate it by tapping an icon or a button. The problem here isn’t the composite nature nor the way it is manipulated (dragging), but the performance when your project is built on a specific framework, namely Windows Phone Silverlight. I try to always work with the latest frameworks, but sometimes it’s just not possible. The performance trick used here is very conventional: render the whole layout into a bitmap and then animate that. The nice thing about this component is that it works in both Silverlight and Windows universal apps. It is fully self-contained.

See more detailed description and get the source code from my GitHub project.

Ticker text user control

In the image: three ticker text user controls in a StackPanel layout.

This is a quite trivial and rarely needed UI component. To be honest, I was quite surprised not to find a version of it anywhere on the interwebs. I suppose one must exist somewhere and my search engine skills are just below average. Or it could be that this component is so trivial that no one bothers to even look for a ready-made solution. In any case, since I lack all discretion, I dumped my user control here (or actually on GitHub) anyway.

In case the “ticker” does not ring any bells, it’s the component with scrolling text on it, much like you see in the lower part of your television screen when watching the news. My control, like the sliding panel, is supported in Windows Phone Silverlight and in Windows universal apps.

Find the code in GitHub.