Introduction
Azure Face API is a cloud-based service that detects and analyzes human faces in images. Facial recognition matters in many scenarios, such as security and visual content analysis.
In this post, we will learn how to provision the Azure Face API, interact with it, detect the number of faces in an image, and locate facial landmarks.
Check out the completed Azure Face Detection service implemented in a Blazor Web App.
Face Detection vs. Face Recognition
Face detection is the process of locating human faces in images and detecting facial landmarks, such as the tip of the nose. Face recognition, on the other hand, is a method of verifying that two faces are similar or that they belong to the same person.
Setting Up Azure Face API
You need an Azure account, an active subscription, and access to the Azure Face API.
First of all, go to the Azure Portal, navigate to “Create a resource”, search for “Face”, select “Face”, and click “Create”.
Fill in the required details and click “Review + create”, as shown below.

Once the resource has been created and deployed, navigate to the resource.
Under “Resource Management”, open the “Keys and Endpoint” section and note down the Endpoint and Key, as shown below. The endpoint typically looks like https://&lt;your-resource-name&gt;.cognitiveservices.azure.com/.

Creating the Azure Face Solution and Adding a Class Library Project
Open Visual Studio and create a blank solution named AzureFace. Then add a new class library named Dnc.Services.FaceDetection to the solution, selecting .NET 8 (Long Term Support) as the target framework.
Implementing the Azure Face Detection Client
In this section we will create a custom face detection client. It will locate faces in images and return the face rectangle coordinates and the nose tip landmark for each detected face.
First, create a new folder in the Dnc.Services.FaceDetection project named Models. Then add a new class file called Face that contains four classes: Face, FaceRectangle, FaceLandmarks, and NoseTip, as shown below.
namespace Dnc.Services.FaceDetection.Models
{
    public class Face
    {
        public string FaceId { get; set; }
        public FaceRectangle FaceRectangle { get; set; }
        public FaceLandmarks FaceLandmarks { get; set; }
    }
    public class FaceRectangle
    {
        public int Top { get; set; }
        public int Left { get; set; }
        public int Width { get; set; }
        public int Height { get; set; }
    }
    public class FaceLandmarks
    {
        public NoseTip NoseTip { get; set; }
    }
    public class NoseTip
    {
        public double X { get; set; }
        public double Y { get; set; }
    }
}
Next, create a new folder named Clients; it will contain the client that interacts with the Azure Face API over HTTP.
Add a new interface file named IAzureFaceDetectionClient to the Clients folder.
using Dnc.Services.FaceDetection.Models;

namespace Dnc.Services.FaceDetection.Clients
{
    public interface IAzureFaceDetectionClient
    {
        Task<IEnumerable<Face>> DetectFacesInBinaryImage(byte[] imageBytes);
        Task<IEnumerable<Face>> DetectFacesWithImageUrl(string imageUrl);
    }
}
The IAzureFaceDetectionClient interface has two methods for detecting faces: one takes an image URL and the other takes binary image data.
Lastly, add a new class file to the Clients folder named AzureFaceDetectionClient, which implements the IAzureFaceDetectionClient interface.
using System.Globalization;
using System.Net.Http.Headers;
using System.Text;
using Dnc.Services.FaceDetection.Models;
using Newtonsoft.Json;

namespace Dnc.Services.FaceDetection.Clients
{
    public class AzureFaceDetectionClient : IAzureFaceDetectionClient
    {
        private const string DetectEndpoint =
            "face/v1.0/detect?returnFaceId=true&returnFaceLandmarks=true&detectionModel=detection_01";
        private readonly HttpClient httpClient;

        public AzureFaceDetectionClient(HttpClient httpClient)
        {
            this.httpClient = httpClient;
        }

        public async Task<IEnumerable<Face>> DetectFacesInBinaryImage(byte[] imageBytes)
        {
            // The image bytes go directly into the request body as an octet stream.
            var request = new HttpRequestMessage(HttpMethod.Post, DetectEndpoint);
            using var content = new ByteArrayContent(imageBytes);
            content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
            request.Content = content;
            return await SendAsync(request);
        }

        public async Task<IEnumerable<Face>> DetectFacesWithImageUrl(string imageUrl)
        {
            // The detect endpoint expects a POST with the image URL in a JSON body,
            // not a GET with the URL in the query string.
            var request = new HttpRequestMessage(HttpMethod.Post, DetectEndpoint);
            var json = JsonConvert.SerializeObject(new { url = imageUrl });
            request.Content = new StringContent(json, Encoding.UTF8, "application/json");
            return await SendAsync(request);
        }

        private async Task<IEnumerable<Face>> SendAsync(HttpRequestMessage request)
        {
            var response = await httpClient.SendAsync(request);
            var responseBody = await response.Content.ReadAsStringAsync();
            if (!response.IsSuccessStatusCode)
            {
                throw new Exception(responseBody);
            }
            return JsonConvert.DeserializeObject<IEnumerable<Face>>(
                responseBody,
                new JsonSerializerSettings { Culture = CultureInfo.InvariantCulture });
        }
    }
}
The AzureFaceDetectionClient receives an instance of the HttpClient class through constructor injection and uses it to send HTTP requests to the Azure Face API and to read the responses.
The query string is crucial because it defines the data that the Azure Face API returns. As illustrated below, the query string sets returnFaceId and returnFaceLandmarks to true.
face/v1.0/detect?returnFaceId=true&returnFaceLandmarks=true
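For reference, the JSON the detect endpoint returns looks roughly like the sample below, trimmed to the landmark we use and with made-up values. The properties are camelCase, but Newtonsoft.Json maps them onto our PascalCase model classes because its property-name matching is case-insensitive by default.

[
  {
    "faceId": "00000000-0000-0000-0000-000000000000",
    "faceRectangle": { "top": 54, "left": 261, "width": 111, "height": 111 },
    "faceLandmarks": {
      "noseTip": { "x": 316.3, "y": 125.8 }
    }
  }
]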
Also read https://dotnetcoder.com/azure-service-bus-queue-trigger-azure-function/
Implementing the Azure Face Detection Service
After detecting faces, the next step is to create an abstraction that allows you to decouple your domain model from the client model.
To begin, create a new class file in the Dnc.Services.FaceDetection project called BoundingFace, which represents information about a detected face within an image.
namespace Dnc.Services.FaceDetection
{
    public class BoundingFace
    {
        public string Id { get; set; }
        public int Top { get; set; }
        public int Left { get; set; }
        public int Width { get; set; }
        public int Height { get; set; }
        public double NoseTipX { get; set; }
        public double NoseTipY { get; set; }
    }
}
In the Dnc.Services.FaceDetection project, add a new interface file called IAzureFaceDetectionService, as shown below.
namespace Dnc.Services.FaceDetection
{
    public interface IAzureFaceDetectionService
    {
        Task<IEnumerable<BoundingFace>> DetectFacesInBinaryImage(byte[] imageData);
        Task<IEnumerable<BoundingFace>> DetectFacesWithImageUrl(string imageUrl);
    }
}
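Because callers depend only on this interface, you can swap in a stub while building or testing the UI and avoid calling Azure at all. Here is a minimal, hypothetical sketch; the class name and canned values below are mine, not part of the post's code.

namespace Dnc.Services.FaceDetection
{
    // Hypothetical stub for tests and UI work: returns one canned face instead of calling Azure.
    public class FakeFaceDetectionService : IAzureFaceDetectionService
    {
        public Task<IEnumerable<BoundingFace>> DetectFacesInBinaryImage(byte[] imageData) =>
            Task.FromResult<IEnumerable<BoundingFace>>(new[]
            {
                new BoundingFace { Id = "fake-id", Top = 10, Left = 10, Width = 100, Height = 100, NoseTipX = 60, NoseTipY = 60 }
            });

        public Task<IEnumerable<BoundingFace>> DetectFacesWithImageUrl(string imageUrl) =>
            DetectFacesInBinaryImage(Array.Empty<byte>());
    }
}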
Next, add a new class file to the project named AzureFaceDetectionService, which implements the IAzureFaceDetectionService interface.
using Dnc.Services.FaceDetection.Clients;
using Dnc.Services.FaceDetection.Models;

namespace Dnc.Services.FaceDetection
{
    public class AzureFaceDetectionService : IAzureFaceDetectionService
    {
        private readonly IAzureFaceDetectionClient azureFaceDetectionClient;

        public AzureFaceDetectionService(IAzureFaceDetectionClient azureFaceDetectionClient)
        {
            this.azureFaceDetectionClient = azureFaceDetectionClient;
        }

        public async Task<IEnumerable<BoundingFace>> DetectFacesInBinaryImage(byte[] imageData)
        {
            var faces = await azureFaceDetectionClient.DetectFacesInBinaryImage(imageData);
            return faces.Select(face => MapToBoundingFace(face));
        }

        public async Task<IEnumerable<BoundingFace>> DetectFacesWithImageUrl(string imageUrl)
        {
            var faces = await azureFaceDetectionClient.DetectFacesWithImageUrl(imageUrl);
            return faces.Select(face => MapToBoundingFace(face));
        }

        // Maps the client model returned by the Azure Face API to our domain model.
        private static BoundingFace MapToBoundingFace(Face face) => new BoundingFace
        {
            Id = face.FaceId,
            Top = face.FaceRectangle.Top,
            Left = face.FaceRectangle.Left,
            Width = face.FaceRectangle.Width,
            Height = face.FaceRectangle.Height,
            NoseTipX = face.FaceLandmarks.NoseTip.X,
            NoseTipY = face.FaceLandmarks.NoseTip.Y
        };
    }
}
Every method in the AzureFaceDetectionService returns an IEnumerable of BoundingFace, the class that represents information about a detected face within an image, including both the face's identifier and its location within the image.
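To see the service in isolation before wiring it into Blazor, here is a minimal console sketch, assuming a valid endpoint and key; the endpoint, key, and image URL below are placeholders.

using Dnc.Services.FaceDetection;
using Dnc.Services.FaceDetection.Clients;

// Placeholders: use the real endpoint and key from the portal.
var httpClient = new HttpClient { BaseAddress = new Uri("https://YOUR-RESOURCE.cognitiveservices.azure.com/") };
httpClient.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "YOUR KEY");

IAzureFaceDetectionService service =
    new AzureFaceDetectionService(new AzureFaceDetectionClient(httpClient));

var faces = await service.DetectFacesWithImageUrl("https://example.com/photo.jpg");
foreach (var face in faces)
{
    Console.WriteLine(
        $"Face {face.Id}: {face.Width}x{face.Height} at ({face.Left},{face.Top}), nose tip ({face.NoseTipX},{face.NoseTipY})");
}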
Using the Azure Face Detection service in a Blazor Web App
In this part we will learn how to use our custom Azure Face Detection service in a Blazor application.
First, add a new Blazor Web App project named Dnc.FaceDetection.WebApp to the solution. Select .NET 8 (Long Term Support) as the target framework and set the project as the startup project.

Next, reference the Dnc.Services.FaceDetection project from the Dnc.FaceDetection.WebApp project, and add the endpoint and key to your appsettings.Development.json.
{
  "Endpoint": "YOUR ENDPOINT",
  "SubscriptionKey": "YOUR KEY"
}
In a real application, manage the key securely with Azure Key Vault instead of storing it in configuration files or code.
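As a rough sketch of that approach, assuming the Azure.Identity and Azure.Extensions.AspNetCore.Configuration.Secrets NuGet packages and a vault that already holds a SubscriptionKey secret, you could pull the vault into the configuration pipeline in Program.cs; the vault URI below is a placeholder.

using Azure.Identity;

// Loads Key Vault secrets into IConfiguration, so "SubscriptionKey"
// resolves from the vault instead of appsettings.
builder.Configuration.AddAzureKeyVault(
    new Uri("https://YOUR-VAULT-NAME.vault.azure.net/"),
    new DefaultAzureCredential());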
To configure the Azure Face Detection service, we need to update Program.cs, as follows.
// Typed HttpClient for the low-level Azure Face API client.
builder.Services.AddHttpClient<IAzureFaceDetectionClient, AzureFaceDetectionClient>(httpClient =>
{
    httpClient.BaseAddress = new Uri(builder.Configuration.GetValue<string>("Endpoint"));
    httpClient.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", builder.Configuration.GetValue<string>("SubscriptionKey"));
});

// Register the service itself; Home.razor injects IAzureFaceDetectionService.
builder.Services.AddScoped<IAzureFaceDetectionService, AzureFaceDetectionService>();
Finally, insert the following code into Home.razor and start the application. Note that the InputFile OnChange handler only runs on an interactive page; assuming you created the project with the Server interactive render mode, the @rendermode InteractiveServer directive below opts the page in (it is redundant if interactivity is global).
@page "/"
@using Dnc.Services.FaceDetection
<PageTitle>Home</PageTitle>
<div class="container">
<div class="row">
<h3 class="my-5">Face services Api : Face Detection</h3>
@if (!Loading)
{
<div class="col-6">
@if (Image != null)
{
<div>
<img src="@Image">
</div>
}
else
{
<div class="image-empty"></div>
}
</div>
<div class="col-6">
@if (boundingFaces == null)
{
<div class="error-message">
No faces detected on the image
</div>
}
@if (boundingFaces != null && boundingFaces.Count() > 0)
{
<div class="error-message">
Faces detected in the image : @boundingFaces.Count() (face/faces)
</div>
var x = 1;
@foreach (var face in boundingFaces)
{
<span style="color:#0f8c98">Face (@x)</span>
<table>
<tr>
<td>Face ID:</td>
<td>@face.Id</td>
</tr>
<tr>
<td>Top:</td>
<td>@face.Top</td>
</tr>
<tr>
<td>Left:</td>
<td>@face.Left</td>
</tr>
<tr>
<td>Width:</td>
<td>@face.Width</td>
</tr>
<tr>
<td>Height:</td>
<td>@face.Height</td>
</tr>
<tr>
<td>Nose tip X: </td>
<td>@face.NoseTipX</td>
</tr>
<tr>
<td>Nose tip Y: </td>
<td>@face.NoseTipY</td>
</tr>
</table>
x++;
}
}
</div>
}
else
{
<div class="fa-x3 d-flex justify-content-center align-items-center">
<div>
<i class="fas fa-circle-notch fa-spin"></i>
<div>
<span>Loading...</span>
</div>
</div>
</div>
}
</div>
<div class="my-3">
<label for="upload">
<span style="cursor:pointer;color:#2c52fd;text-decoration:underline;text-transform:uppercase" aria-hidden="true">Upload</span>
<InputFile type="file" id="upload" OnChange="@UploadPhoto" style="display:none" />
</label>
</div>
</div>
@code{
protected string Image {get;set;}
protected bool Loading {get;set;}
protected IEnumerable<BoundingFace> boundingFaces{get;set;}
protected BoundingFace BoundingFace { get; set; }
[Inject]
public IAzureFaceDetectionService AzureFaceDetectionService { get; set; }
public async Task UploadPhoto(InputFileChangeEventArgs e)
{
Loading = true;
var file = e?.File;
try
{
if (file != null)
{
var imageBytes = await ConvertFileToByte(file);
boundingFaces = await AzureFaceDetectionService.DetectFacesInBinaryImage(imageBytes);
var base64String = await ConvertToBase64StringAsync(file);
Image = string.Format("data:image/jpeg;base64,{0}", base64String);
}
}catch(Exception ex)
{
Console.WriteLine(ex.Message);
}
finally
{
Loading = false;
}
}
private static async Task<byte[]> ConvertFileToByte(IBrowserFile file)
{
var buffer = new byte[file.Size];
using (var stream = file.OpenReadStream())
{
await stream.ReadAsync(buffer, 0, (int)file.Size);
}
return buffer;
}
private async Task<string> ConvertToBase64StringAsync(IBrowserFile file)
{
using (var memoryStream = new MemoryStream())
{
await file.OpenReadStream().CopyToAsync(memoryStream);
byte[] fileBytes = memoryStream.ToArray();
return Convert.ToBase64String(fileBytes);
}
}
}
I did not walk through the Blazor markup and CSS in detail because they are off topic for this post.
Test with any image to detect faces in it and get bounding face information, as shown below.

Test with another image that has more than one face.

Conclusion
In this post, we created an Azure Face Detection service that can be integrated into various projects. We began by explaining what the Azure Face API is and clarifying the difference between face detection and face recognition. Finally, we demonstrated the practical use of the Face Detection service.
The code for the Azure Face Detection service can be found here.
Also read https://dotnetcoder.com/options-pattern-in-asp-net-core/