Hello everyone!

Last year I uploaded a video in which I developed a small WPF application that consumed Cognitive Services to calculate the distance of a person captured by the Azure Kinect camera. I received a lot of questions about the code, and it made me feel great that I could help developers around the world. Many of you asked me to consume the Face API with the device, so here it is!

So let's begin.

1. Create a WPF .NET application. Before adding the NuGet packages, there are some things you have to configure for the project to work with our sensor.
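
(If you are targeting .NET Core 3.1 or later, you can also create the project from the command line; the project name below is just an example.)

dotnet new wpf -n KinectFaceApp
cd KinectFaceApp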

2. Configure the Debug mode for x64. If you don't have that option available, open the Configuration Manager and add it.

3. Right-click your project in the Solution Explorer and select "Properties".

Once the Properties panel is open, uncheck the "Prefer 32-bit" option and check "Allow unsafe code".
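
If you prefer editing the project file by hand, these are the equivalent entries in the .csproj (standard MSBuild properties, shown here in isolation):

<PropertyGroup>
  <PlatformTarget>x64</PlatformTarget>
  <Prefer32Bit>false</Prefer32Bit>
  <AllowUnsafeBlocks>true</AllowUnsafeBlocks>
</PropertyGroup>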

4. Now it is time to install the NuGet packages. To do this, right-click your project > Manage NuGet Packages, and then install:

  • Microsoft.Azure.Kinect.Sensor v1.4

  • Microsoft.Azure.CognitiveServices.Vision.Face

If you don't see some of these versions, make sure you have the Include Prerelease option checked. (The equivalent Package Manager Console commands are sketched below.)
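
If you prefer the Package Manager Console, the commands are roughly these (version numbers may have moved on since this was written):

Install-Package Microsoft.Azure.Kinect.Sensor -Version 1.4.1
Install-Package Microsoft.Azure.CognitiveServices.Vision.Face -IncludePrerelease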

5. Let's include all the references we need; I will list them here. (These are in addition to the usings the default WPF template already generates, such as System.Windows.Media.Imaging and System.Threading.Tasks, which the code below also relies on.)

using Microsoft.Azure.Kinect.Sensor;
using Microsoft.Azure.CognitiveServices.Vision.Face;
using Microsoft.Azure.CognitiveServices.Vision.Face.Models;
using System.IO;
using System.ComponentModel;

6. Let's first connect our Azure Kinect device and open our camera. So let's declare our variables.

private readonly Device device = null;

7. Now that we have our Device, we will configure and open our camera. So inside our MainWindow() constructor, let's start our camera.

device = Device.Open();
device.StartCameras(new DeviceConfiguration
{
    ColorFormat = ImageFormat.ColorBGRA32,   // 32-bit BGRA, matching the WriteableBitmap we create later
    ColorResolution = ColorResolution.R720p,
    DepthMode = DepthMode.NFOV_2x2Binned,
    SynchronizedImagesOnly = true            // only deliver captures that contain both color and depth
});

8. We need to create the image that we will use to display our color camera. For this, we need to declare the following variables.

private readonly WriteableBitmap bitmap = null;
private readonly int colorWidth = 0;
private readonly int colorHeight = 0;

9. Right after starting the camera, we will create the image.

colorWidth = device.GetCalibration().ColorCameraCalibration.ResolutionWidth;
colorHeight = device.GetCalibration().ColorCameraCalibration.ResolutionHeight;

bitmap = new WriteableBitmap(colorWidth, colorHeight, 96.0, 96.0, PixelFormats.Bgra32, null);

this.DataContext = this; // lets the XAML bindings (ImageSource, StatusText) resolve against this window

10. In our MainWindow.xaml we will wire up two events: Closing and Loaded.
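
As a sketch, the Window element would look something like this (the handler names are my assumption; use whatever names Visual Studio generates for you):

<Window ...
        Loaded="Window_Loaded"
        Closing="Window_Closing">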

11. In our Closing event we will write the following. This stops the capture loop and disposes our device.

running = false; // stop the capture loop (see step 12)

if (device != null)
{
    device.Dispose();
}

12. We will now get our color camera frames and display them in our app. In our Window Loaded event, let's add the following code. Note that the loop relies on a class-level bool running flag (declare private bool running = true; next to your other variables) and that the Loaded handler must be marked async, since we await inside it.

while (running)
{
    using (Capture capture = await Task.Run(() => { return device.GetCapture(); }))
    {
        this.bitmap.Lock();

        var color = capture.Color;
        var region = new Int32Rect(0, 0, color.WidthPixels, color.HeightPixels);

        unsafe
        {
            // copy the BGRA color frame straight into the WriteableBitmap's back buffer
            using (var pin = color.Memory.Pin())
            {
                bitmap.WritePixels(region, (IntPtr)pin.Pointer, (int)color.Size, color.StrideBytes);
            }
        }

        bitmap.AddDirtyRect(region);
        bitmap.Unlock();
    }
}
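
For reference, the surrounding handler looks something like this; the name matches the sketch from step 10, and the async keyword is required because we await inside the loop:

private async void Window_Loaded(object sender, RoutedEventArgs e)
{
    // ... the capture loop above goes here ...
}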

13. Finally, we need to include our Image control and bind it. So let's add the image to our XAML and the backing property to our .cs file.

<Image Source="{Binding ImageSource}" Stretch="Fill"/>

—————————————————————————–

public ImageSource ImageSource
{
    get
    {
        return this.bitmap;
    }
}

14. If we test our app, we can now see our color camera feed.

Connect our app to the Cognitive Services Face API.

1. Declare the variables you are going to use.

private string statusText = null;

private const string faceClientKey = "/* INCLUDE YOUR SERVICE KEY HERE */";
private FaceClient faceclient = null;
private DetectedFace detectedFace = null;
private static readonly List<FaceAttributeType> attributes = new List<FaceAttributeType>()
{
    FaceAttributeType.Emotion // we only request the Emotion attribute; others (e.g. FaceAttributeType.Age) could be added here
};

2. In our MainWindow method, initialize the service. Replace faceClientKey above and the endpoint below with the key and endpoint of your own Face resource from the Azure portal.

faceclient = new FaceClient(new ApiKeyServiceClientCredentials(faceClientKey))
{
    Endpoint = "https://YOURAPPNAME.cognitiveservices.azure.com/"
};

3. Here comes the fun part. To send our image to the service, we need to create a Stream. That's why you will add this method, which encodes the current bitmap as a JPEG.

private Stream StreamFromBitmapSource(BitmapSource bitmap)
{
    Stream jpeg = new MemoryStream();

    BitmapEncoder enc = new JpegBitmapEncoder();
    enc.Frames.Add(BitmapFrame.Create(bitmap));
    enc.Save(jpeg);
    jpeg.Position = 0; // rewind so the Face client reads from the beginning

    return jpeg;
}

4. We will also add a StatusBarItem to display the emotion that our service reports. So we can add the following in our XAML and code-behind.

<StatusBar Grid.Row="2" HorizontalAlignment="Center" Name="statusBar" VerticalAlignment="Center" Background="#0971ce" Foreground="White">
    <StatusBarItem Content="{Binding StatusText}" FontFamily="Segoe UI" FontSize="18"/>
</StatusBar>

———————————————————————————

public string StatusText
{
    get
    {
        return this.statusText;
    }

    set
    {
        if (this.statusText != value)
        {
            this.statusText = value;

            if (this.PropertyChanged != null)
            {
                this.PropertyChanged(this, new PropertyChangedEventArgs("StatusText"));
            }
        }
    }
}
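
For the {Binding StatusText} to actually refresh, MainWindow must implement INotifyPropertyChanged and declare the PropertyChanged event that the setter above raises. A minimal sketch:

public partial class MainWindow : Window, INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    // ... fields, constructor, and the properties shown above ...
}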

5. We will now add another method that returns the emotion with the highest score.

static (string Emotion, double Value) getEmotion(Emotion emotion)
{
    // walk every scored property of the Emotion object (Anger, Happiness, Sadness, ...) via reflection
    var emotionProperties = emotion.GetType().GetProperties();
    (string Emotion, double Value) highestEmotion = ("Anger", emotion.Anger);
    foreach (var e in emotionProperties)
    {
        if (((double)e.GetValue(emotion, null)) > highestEmotion.Value)
        {
            highestEmotion.Emotion = e.Name;
            highestEmotion.Value = (double)e.GetValue(emotion, null);
        }
    }
    return highestEmotion;
}
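
As a quick usage sketch: if the service scores Happiness at 0.98 for a face, the call below returns the tuple ("Happiness", 0.98), and its ToString() renders as "(Happiness, 0.98)", which is the text we will push into the status bar in the next step:

var top = getEmotion(detectedFace.FaceAttributes.Emotion); // detectedFace holds the latest Face API result
this.StatusText = top.ToString();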

6. Replace the Window Loaded code with this one.

int count = 0; // frame counter: we only call Cognitive Services every 30 frames
while (running)
{
    using (Capture capture = await Task.Run(() => { return device.GetCapture(); }))
    {
        count++;

        this.bitmap.Lock();

        var color = capture.Color;
        var region = new Int32Rect(0, 0, color.WidthPixels, color.HeightPixels);

        unsafe
        {
            using (var pin = color.Memory.Pin())
            {
                bitmap.WritePixels(region, (IntPtr)pin.Pointer, (int)color.Size, color.StrideBytes);
            }
        }

        if (detectedFace != null)
        {
            // display the result of the getEmotion method we coded before
            this.StatusText = getEmotion(detectedFace.FaceAttributes.Emotion).ToString();
        }

        bitmap.AddDirtyRect(region);
        bitmap.Unlock();

        if (count % 30 == 0)
        {
            var stream = StreamFromBitmapSource(this.bitmap);

            // fire-and-forget: send the current frame to the Face API and remember the last detected face
            _ = faceclient.Face.DetectWithStreamAsync(stream, true, false, MainWindow.attributes).ContinueWith(responseTask =>
            {
                try
                {
                    foreach (var face in responseTask.Result)
                    {
                        detectedFace = face;
                    }
                }
                catch (System.Exception ex)
                {
                    this.StatusText = ex.ToString();
                }
            }, TaskScheduler.FromCurrentSynchronizationContext());
        }
    }
}

Hope this code helps you!