.NET: Unit Testing Azure Service Bus

Unit testing an Azure Service Bus consumer is fairly easy. In this tutorial, I will use .NET 6, Moq (v4), and Fluent Assertions. The difficult part is constructing the ServiceBusReceivedMessage because it has only internal constructors and no interface. However, there is a factory class that can create these objects.

GitHub

I’ve included unit testing in this project.

https://github.com/mrjamiebowman-blog/microservices-application-insights

https://github.com/mrjamiebowman-blog/microservices-application-insights/blob/main/test/MrJB.MS.Common.Tests/Services/ConsumerAzureServiceBusTests.cs

What should be unit tested?

There are two areas that I focus on when testing a consumer service: message processing and error handling.

Message Handler

This is the main method for processing incoming data from Azure Service Bus. It can be tested to make sure data is properly deserialized and dispatched to the correct services.

Error Handler

The Error Handler method can handle specific cases where the service bus client fails. It may be useful to unit test this code if there are certain actions that need to happen during a failure.

ConsumerAzureServiceBus Service Class

To put things in context, this is the code that we will be testing.

Mocking the Service Bus Client

I've tried this several ways and I think this is the simplest. Moq can only mock public and virtual methods. An easy and effective way to mock the Service Bus client is to create a public virtual method that returns it. I'm also excluding this method from code coverage since it exists specifically for unit testing.

In the StartReceivingMessageAsync() method I can get the client doing this.
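A minimal sketch of the idea, assuming Azure.Messaging.ServiceBus and System.Diagnostics.CodeAnalysis are referenced; the _config properties and handler names here are placeholders, not the exact ones from the repository:

// excluded from code coverage because it exists purely so tests can substitute a mock
[ExcludeFromCodeCoverage]
public virtual ServiceBusClient GetServiceBusClient()
    => new ServiceBusClient(_config.ConnectionString);

public async Task StartReceivingMessageAsync(CancellationToken cancellationToken = default)
{
    // Moq can override this public virtual method and hand back a mocked client
    ServiceBusClient client = GetServiceBusClient();

    ServiceBusProcessor processor = client.CreateProcessor(_config.QueueName);
    processor.ProcessMessageAsync += MessageHandlerAsync;
    processor.ProcessErrorAsync += ErrorHandlerAsync;

    await processor.StartProcessingAsync(cancellationToken);
}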

Why injecting the ServiceBusClient didn’t work out.

Someone recommended injecting the ServiceBusClient and I thought that was worth trying. In the end it doesn't work, for several reasons. There isn't a method to connect or reconnect that I know of, and the only way a connection is created is by instantiating the ServiceBusClient. When the processor object is disposed, it causes the service bus client to close, which no longer maintains a connection and causes the service to fail.

ServiceBusModelFactory Factory Class

The Service Bus Model Factory can be used to mock the ServiceBusReceivedMessage class.
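For example, a message with a JSON body can be created roughly like this (the body, message id, and content type shown here are made up for illustration):

// ServiceBusModelFactory lives in Azure.Messaging.ServiceBus
var body = BinaryData.FromString("{\"orderId\": 123}");

ServiceBusReceivedMessage message = ServiceBusModelFactory.ServiceBusReceivedMessage(
    body: body,
    messageId: "test-message-1",
    contentType: "application/json");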

Message Handler Tests

The general idea here is that we want to test the flow of the message handler to make sure it’s passing and accepting the correct arguments. Typically after consumption either a message is saved in a database or passed to another service for further processing. We want to make sure that everything happens here as expected.
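A sketch of what such a test could look like with Moq (Fluent Assertions comes into play when asserting on captured values); the consumer constructor, data service, model, and handler names are placeholders since the real ones live in the repository linked above, and this assumes xUnit-style [Fact] attributes:

[Fact]
public async Task MessageHandlerAsync_ValidMessage_PassesDeserializedOrderToDataService()
{
    // arrange: a fake received message built with the model factory
    var message = ServiceBusModelFactory.ServiceBusReceivedMessage(
        body: BinaryData.FromString("{\"orderId\": 123}"),
        messageId: "test-message-1");

    var args = new ProcessMessageEventArgs(message, Mock.Of<ServiceBusReceiver>(), CancellationToken.None);

    var dataService = new Mock<IOrderDataService>();
    var consumer = new ConsumerAzureServiceBus(dataService.Object /*, other mocked dependencies */);

    // act
    await consumer.MessageHandlerAsync(args);

    // assert: the message was deserialized and handed to the downstream service
    dataService.Verify(x => x.SaveOrderAsync(It.Is<Order>(o => o.OrderId == 123)), Times.Once);
}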

Error Handler Tests

This is just a sample of what could be tested.

Full Unit Tests

Again, all of this is on my GitHub under the Microservice: Application Insights repository.

https://github.com/mrjamiebowman-blog/microservices-application-insights/blob/main/test/MrJB.MS.Common.Tests/Services/ConsumerAzureServiceBusTests.cs

.NET Health Checks: Azure Key Vault

There isn't much documentation on how to use the Azure Key Vault health checks, so I thought I would demonstrate and teach what I've learned using C#. I typically use Azure App Configuration and reference Key Vault entries from microservices. It's not very intuitive, but the way this works is that the Key Vault health check can verify that Secrets, Certificates, and Keys exist. I find this very useful, especially if the environment you are using lives in a CI/CD pipeline that creates and destroys itself rather frequently. It's also a very practical thing to check at startup. This is a one-off article, but I will write a much larger article covering everything I've learned about Health Checks with .NET microservices.

GitHub

I will include a GitHub sample project with the larger Health Checks project.

Configuration

I recommend creating a configuration class of some sort to keep track of which Key Vault references need to be monitored and the URI.

AppHealthChecksConfiguration.cs

The two important values here are KeyVaultUri and the list of strings AzureKeyVaultSecrets.
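A minimal sketch of what such a class could look like; any member beyond those two values (like the section name) is an assumption:

using System.Collections.Generic;

public class AppHealthChecksConfiguration
{
    // configuration section name, e.g. "HealthChecks" (assumption)
    public const string Position = "HealthChecks";

    // URI of the Key Vault, e.g. https://YOUR_KEY_VAULT_NAME.vault.azure.net/
    public string KeyVaultUri { get; set; }

    // names of the Key Vault secrets that should exist
    public List<string> AzureKeyVaultSecrets { get; set; } = new();
}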

Sample values might look like:

KeyVaultUri = "https://YOUR_KEY_VAULT_NAME.vault.azure.net/"

AzureKeyVaultSecrets = new () { "secret-test" }

Health Check Extension

Static Tags Class

I'm also creating a nested static Tags class that provides string constants for the tags associated with the health checks. If you're not aware, tags associate each health check with a certain type of check. In Kubernetes, there are probes for Startup, Readiness, and Liveness.
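Something along these lines; the tag values themselves are arbitrary strings, and these are the ones I'd assume from the description:

public static class Tags
{
    public const string Startup = "startup";
    public const string Readiness = "readiness";
    public const string Liveness = "liveness";
}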

CustomAzureKeyVaultExtension.cs

Using a list of strings as a configuration property, we can iterate through the secrets we want to check. This is also beneficial for adding and removing Key Vault checks without redeploying code, since the list is read from Azure App Configuration or the appsettings.json file.
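A sketch of the extension method, assuming the AspNetCore.HealthChecks.AzureKeyVault package and the configuration class sketched above; the method and check names are illustrative:

using System;
using Azure.Identity;
using Microsoft.Extensions.DependencyInjection;

public static class CustomAzureKeyVaultExtension
{
    public static IHealthChecksBuilder AddCustomAzureKeyVault(
        this IHealthChecksBuilder builder,
        AppHealthChecksConfiguration config)
    {
        return builder.AddAzureKeyVault(
            new Uri(config.KeyVaultUri),
            new DefaultAzureCredential(),   // reads AZURE_TENANT_ID / AZURE_CLIENT_ID / AZURE_CLIENT_SECRET
            options =>
            {
                // each configured secret name becomes part of the health check,
                // so checks can be added or removed without redeploying code
                foreach (var secret in config.AzureKeyVaultSecrets)
                {
                    options.AddSecret(secret);
                }
            },
            name: "azure-key-vault",
            tags: new[] { Tags.Startup, Tags.Readiness });
    }
}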

Important Note (DefaultAzureCredential): Typically, I do this in a microservice running in Docker, so the appropriate values for AZURE_TENANT_ID, AZURE_CLIENT_ID, and AZURE_CLIENT_SECRET are set as environment variables so Azure Identity can authenticate. This is how Azure Key Vault would authorize in a containerized microservice. The article below is for Express.js, but it covers the general idea of how this works.

https://docs.microsoft.com/en-us/azure/developer/javascript/how-to/with-web-app/use-secret-environment-variables#configure-expressjs-required-environment-variables-to-use-azure-identity

Map Endpoints

This code will create two endpoints that listen on /hc/startup and /hc/live to monitor health checks. This can easily be integrated with Kubernetes.

UIResponseWriter.WriteHealthCheckUIResponse

This line isn't necessary, but it sets the ResponseWriter to an implementation that outputs a detailed response that can be read by the Health Check UI. This is also compatible with Kubernetes health checks. The response will also include an error if a health check fails; it will likely be a large piece of text containing HTML explaining that either the URI was invalid or the check failed.
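Putting the two together, the endpoint mapping might look roughly like this, assuming the HealthChecks.UI.Client package for the response writer and the Tags class from earlier:

using HealthChecks.UI.Client;
using Microsoft.AspNetCore.Diagnostics.HealthChecks;

app.UseEndpoints(endpoints =>
{
    // startup probe: run everything tagged "startup"
    endpoints.MapHealthChecks("/hc/startup", new HealthCheckOptions
    {
        Predicate = check => check.Tags.Contains(Tags.Startup),
        ResponseWriter = UIResponseWriter.WriteHealthCheckUIResponse
    });

    // liveness probe: only checks tagged "liveness"
    endpoints.MapHealthChecks("/hc/live", new HealthCheckOptions
    {
        Predicate = check => check.Tags.Contains(Tags.Liveness),
        ResponseWriter = UIResponseWriter.WriteHealthCheckUIResponse
    });
});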

Startup.cs

Further Reading

https://docs.microsoft.com/en-us/dotnet/api/overview/azure/security.keyvault.secrets-readme-pre

https://azuresdkdocs.blob.core.windows.net/$web/dotnet/Azure.Security.KeyVault.Secrets/4.0.0/api/index.html

Terraform Crash Course

I'll teach you how to get started with Terraform very quickly in this article. My main goal is to focus on a few basic concepts of Terraform so you can install it and deploy your very first Terraform script today. This is going to be straight to the point and focused on using Terraform on Windows with Azure. I will demonstrate this in the simplest of ways so that you can grasp the concepts and see it work end to end. Before you start, you should also create an empty folder to use as the workspace for this crash course.

Git Repo: Terraform Crash Course

Installation

I typically create a C:\bin folder and add that to my System Environment path variables. This makes it easy for me to maintain static binary tools that I download like Terraform.

https://www.terraform.io/downloads.html

Steps

  • Create the C:\bin folder.
  • Add C:\bin to the PATH environment variable.
  • Download and extract the 64-bit Terraform binary into the bin folder.

Verify Installation

To verify the installation, you should be able to run a basic Terraform command to see if Windows recognizes the bin path and finds Terraform.

terraform version

Azure CLI

You will need to have the Azure CLI installed and working. Before you can run any of the Terraform commands below, you will need to log in and set your subscription to the one you want to use. Use the commands below to accomplish this.

Logging In

You’ll need to be authenticated against Azure to do any of this so make sure you run this command below.

az login --use-device-code

List Accounts

az account list -o table

Set the Account Context

To tell Azure CLI and Terraform which Tenant and Subscription this will run against use the command below. Replace {GUID} with your Subscription ID from the output above.

az account set --subscription {GUID}

Providers

Providers are simple to understand if you start with this thought: they are the gateway through which Terraform talks to a service to create infrastructure. For example, the azurerm provider allows Terraform to create infrastructure within Azure. The provider supplies general settings and the resource library used in Terraform scripts.

https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs

Create a providers.tf file

This is the bare minimum code that is needed in the providers.tf file to tell Terraform which version of azurerm it should use to communicate with Azure.

providers.tf
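Something like the following is enough; the version constraint here is just an example, so pin whatever is current when you read this:

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }
}

provider "azurerm" {
  features {}
}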

HashiCorp Configuration Language (HCL)

HashiCorp Configuration Language (HCL) is similar in spirit to YAML: it is a declarative language built around blocks, identifiers, arguments, data sources, outputs, and variables, and it provides functions for mapping values into blocks.

https://www.terraform.io/docs/language/syntax/configuration.html

Create a main.tf file

The main.tf file will be our main Terraform file for creating an Azure Kubernetes Service (AKS). This script will create a Resource Group and an Azure Kubernetes Service (AKS) instance.

You can reference the documentation in the Terraform Registry for azurerm.

https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster

main.tf
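A sketch of a minimal main.tf for this demo; the resource names, location, and dns_prefix are placeholders you can change:

resource "azurerm_resource_group" "rg" {
  name     = "rg-terraform-crash-course"
  location = "East US"

  tags = {
    Environment = var.environment
  }
}

resource "azurerm_kubernetes_cluster" "aks" {
  name                = "aks-terraform-crash-course"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  dns_prefix          = "tfcrashcourse"

  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_B2s"
  }

  identity {
    type = "SystemAssigned"
  }

  tags = {
    Environment = var.environment
  }
}

output "resource_group_name" {
  value = azurerm_resource_group.rg.name
}

output "aks_cluster_name" {
  value = azurerm_kubernetes_cluster.aks.name
}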

Properties (node_count, vm_size)

Worth noting, Azure Kubernetes Service (AKS) uses a Virtual Machine Scale Set, and the VM Size (vm_size) tells Azure which virtual machine SKU to use. I recommend using the cheapest (Standard_B2s) since we are just playing around; it's possible you may leave this running, and I certainly have before. The Node Count (node_count) is how many Kubernetes nodes we want created. For redundancy, it's normal to use at least 3, but for this demo we will use only 1.

Referencing Values from Other Blocks

If you look at the AKS block, you will see that the “location” property for “azurerm_kubernetes_cluster” is being set to location = azurerm_resource_group.rg.location. The name of the resource block is “rg” and by using the block we can get the location from that resource. This is great for keeping things consistent. For example, it’s common to pass a variable value in for the Resource Group’s location and then inherit that value throughout the script by referencing that block’s value.

Outputs

After the script runs, outputs print these values to the terminal. Terraform outputs can also be referenced elsewhere, but that's more advanced and will not be covered in this tutorial.

Variables

Variables allow configuration values to be injected into the process. This is incredibly useful for deploying to multiple environments or decoupling values from the process. It's very common to use a Continuous Integration (CI) pipeline to deploy Terraform to multiple environments. In this example, we will make the Environment tag a variable so that it can be easily changed.

Create a variables.tf file

This creates a variables.tf file with a default environment tag of “Prod”. If you were to leave out the default value, Terraform would prompt you to enter one before the script can run. These values are often supplied by CI pipelines.

variables.tf
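A minimal variables.tf along those lines:

variable "environment" {
  type        = string
  description = "Value used for the Environment tag on all resources."
  default     = "Prod"
}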

Terraform Commands

Now for the fun part: we can actually deploy and destroy Terraformed infrastructure and see it work. I'm going to teach you the most basic commands that you will use to create infrastructure through Terraform. There's a lot more, but this is the bare minimum to get started.

https://www.terraform.io/docs/cli/commands/index.html

Init

The terraform init command will download providers and modules and prepare Terraform to be ready to plan and apply infrastructure changes.

terraform init

Plan

Terraform plan allows you to see what changes will be applied to the infrastructure before committing to the change.

terraform plan

Apply

The terraform apply command applies the infrastructure changes. You will have to type “yes” in order to commit these changes.

terraform apply

After typing “yes”, Terraform will begin to create the infrastructure.

Destroy

Terraform destroy will remove all infrastructure that is defined in the Terraform code. So, this will remove the Azure Kubernetes Service (AKS) cluster and the resource group, but any resources not managed by this Terraform code will not be removed.

terraform destroy

Further Studying

I can’t recommend this course enough. I know Derek Morgan personally through friends in IT and he’s an incredibly talented person who focuses on automation and IoT. I already knew quite a bit of Terraform and found his Udemy course to be extremely well put together and I was able to pick up a lot of new tricks.

https://www.udemy.com/course/terraform-certified/
https://spacelift.io/blog/terraform-functions-expressions-loops

Merge Kube Config in PowerShell

Merging Kubernetes config files is a rather cumbersome process. You can manually copy in the values, but that doesn't always go as planned. I was looking for a way to merge kube config files when I ran across a great post that wasn't geared toward PowerShell users.

Key Takeaways

  • You must use full paths; relative paths did not work.
  • $ENV:KUBECONFIG was an empty value before I set it.
  • --flatten is used to flatten the kubeconfig file into a self-contained output.

For this demo, my newly created Kubernetes Config file will be called config-devbox.

I have to thank Jacob Tomlinson for his Medium post “How to merge Kubernetes kubectl config files”; however, his post was not geared toward Windows PowerShell users. If you're looking for the Bash equivalent, I suggest you visit his post!

PowerShell
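A minimal sketch of the PowerShell steps, assuming both config files live under $HOME\.kube; adjust the paths to match your machine:

# back up the existing config first
Copy-Item "$HOME\.kube\config" "$HOME\.kube\config.bak"

# KUBECONFIG must contain full paths; on Windows, entries are separated with a semicolon
$ENV:KUBECONFIG = "$HOME\.kube\config;$HOME\.kube\config-devbox"

# --flatten merges everything referenced by KUBECONFIG into one self-contained file
kubectl config view --flatten | Set-Content "$HOME\.kube\config_merged"

# replace the original config with the merged version
Move-Item "$HOME\.kube\config_merged" "$HOME\.kube\config" -Force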

Clean Up

To remove old clusters by name see the commands below.
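Roughly, kubectl can remove the stale entries by name; old-cluster and old-user below are placeholders for the names shown by kubectl config get-contexts:

kubectl config get-contexts
kubectl config delete-context old-cluster
kubectl config delete-cluster old-cluster
kubectl config unset users.old-user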

Further Reading

https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/
https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands

Connecting Kafka Tool to Azure Event Hubs

I find connecting Kafka Tool, or any tool, to Kafka to be a cumbersome process at times. It seems like each service has some nuance around security, ports, or connecting that makes it tricky or confusing. I'll show you how to easily connect Kafka Tool to Azure Event Hubs so you can see the data in your Event Hubs (topics).

Make Sure the Kafka Surface is Enabled

If you are using the Standard tier or higher of Azure Event Hubs, the Kafka surface will be enabled. It is not supported by the Basic tier of Azure Event Hubs.

Get Configuration Values from Azure Event Hubs

Under the Azure Event Hubs namespace there is a menu item for “Shared access policies”. Here you can create a policy or use the RootManageSharedAccessKey to connect to the Kafka surface. You will only need the connection string, so copy that.

Kafka Tool Configuration for Azure Event Hubs

I’ll post pictures of the screens and explain what the values should be here.

Properties

Broker Security

Advanced

Be sure to set the port correctly. Azure Event Hubs uses port 9093, not Kafka's default port of 9092.

JAAS Config

You can read more on Microsoft’s site here: https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-for-kafka-ecosystem-overview#shared-access-signature-sas
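The JAAS config follows the SASL/PLAIN pattern described in the Microsoft doc above: the username is the literal string $ConnectionString and the password is the connection string you copied earlier (the namespace and key below are placeholders):

org.apache.kafka.common.security.plain.PlainLoginModule required
  username="$ConnectionString"
  password="Endpoint=sb://YOUR_NAMESPACE.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=YOUR_KEY";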

Connect

It may take a few seconds to connect, but once it does, you're good.

Further Reading

https://docs.microsoft.com/en-us/azure/event-hubs/apache-kafka-developer-guide

Identity Server 4 ClientCredentials with POSTMAN

This tutorial will demonstrate how to set up security within microservices using Identity Server 4 with OpenID Connect. It will use a client ID and secret for microservice-to-microservice (machine-to-machine) communication so that a compromised microservice can't interact with resources it isn't authorized to use. I will also demonstrate how to use POSTMAN to get tokens, inspect tokens, and verify things are set up correctly.

GitHub

GitHub: Identity Server 4 OpenID POSTMAN

What is OpenID Connect (OIDC)?

OpenID Connect is a protocol that standardizes authentication on top of OAuth and adds capabilities to make it easier and more universal. OAuth is still responsible for handling authorization, while OpenID Connect is an extension that specifically standardizes authentication by providing login and profile information through a token known as a JWT.

Setting Up the Identity Server

This section will walk through creating the Identity Server 4 project and how to configure it for OpenID Connect.

Creating the Project

First you’ll need to install the Identity Server 4 templates.

dotnet new -i identityserver4.templates

After the templates are installed, you can create an Identity Server project using dotnet new. If you want to authenticate users interactively, you can also add the UI, which includes MVC controllers and views; that is not necessary for this tutorial.
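For example, the in-memory template works well for this walkthrough; the project name is up to you, and the optional is4ui template layers the quickstart UI on top if you want interactive logins:

dotnet new is4inmem -n IdentityServer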

Setting up API Resources and Clients in Config.cs
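A sketch of what Config.cs could contain for the client credentials flow; the resource name, scope, client ID, and secret values are placeholders:

using System.Collections.Generic;
using IdentityServer4.Models;

public static class Config
{
    public static IEnumerable<ApiScope> ApiScopes =>
        new[] { new ApiScope("microservice.read", "Read access to the microservice API") };

    public static IEnumerable<ApiResource> ApiResources =>
        new[]
        {
            new ApiResource("microservice-api", "Microservice API")
            {
                Scopes = { "microservice.read" }
            }
        };

    public static IEnumerable<Client> Clients =>
        new[]
        {
            new Client
            {
                ClientId = "microservice-client",
                // machine-to-machine: no user involved, just client id + secret
                AllowedGrantTypes = GrantTypes.ClientCredentials,
                ClientSecrets = { new Secret("super-secret".Sha256()) },
                AllowedScopes = { "microservice.read" }
            }
        };
}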

Startup.cs

Under the ConfigureServices() method you will also need to add the Config.ApiResources to register them with the Identity Server.

Add this line of code .AddInMemoryApiResources(Config.ApiResources).
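In full, the registration might look something like this; a sketch assuming in-memory stores and the Config class sketched above:

public void ConfigureServices(IServiceCollection services)
{
    var builder = services.AddIdentityServer()
        .AddInMemoryApiScopes(Config.ApiScopes)
        .AddInMemoryApiResources(Config.ApiResources)   // registers the audience used by the microservice
        .AddInMemoryClients(Config.Clients);

    // dev-only signing credential; use a real certificate outside of demos
    builder.AddDeveloperSigningCredential();
}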

Microservice API

The goal here is to protect a microservice API from internal abuse within the cluster by configuring JWT bearer verification. The general idea of protecting microservice-to-microservice (machine-to-machine) communication is to limit the amount of damage a bad actor could do if they hypothetically gained access to the cluster. This could be a hacker compromising a container and attempting to interact with other microservices in an effort to pivot or gain information.

Install NuGet Package

Microsoft.AspNetCore.Authentication.JwtBearer

Configuring JWT Bearer Verification

There are several ways to set up JWT validation through .NET. I will demonstrate how to do this using the ASP.NET middleware libraries.

Startup.cs – ConfigureServices()

To apply authentication to the entire API it’s easiest to create an Authorization Policy and apply it as an AuthorizeFilter.
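A sketch of the ConfigureServices() wiring; the Identity Server address and audience below are assumptions that should match your setup and the Config.cs values:

public void ConfigureServices(IServiceCollection services)
{
    // requires the Microsoft.AspNetCore.Authentication.JwtBearer package
    services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
        .AddJwtBearer(options =>
        {
            options.Authority = "https://localhost:5010";   // the Identity Server (assumed port)
            options.Audience = "microservice-api";          // must match the ApiResource name
        });

    // require an authenticated caller on every controller by default
    var policy = new AuthorizationPolicyBuilder()
        .RequireAuthenticatedUser()
        .Build();

    services.AddControllers(options => options.Filters.Add(new AuthorizeFilter(policy)));
}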

Startup.cs – Configure()

Add in the code below to the Configure() method. If you skip this step or comment this out the Identity Server will create a valid JWT token but it will still return unauthorized when using that valid token against the API service.
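The important part is that UseAuthentication() and UseAuthorization() sit between UseRouting() and UseEndpoints(); a minimal sketch:

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    app.UseRouting();

    // without these two calls, a perfectly valid JWT still results in 401s
    app.UseAuthentication();
    app.UseAuthorization();

    app.UseEndpoints(endpoints => endpoints.MapControllers());
}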

Using POSTMAN to Verify OpenID/OAuth Works

POSTMAN is a great tool for interacting with APIs and has full support for OpenID/OAuth.

Creating a GET Request

We will first need to create a GET request to https://localhost:5011/WeatherForecast. You may need to update the port here; this is the microservice's port and can be found in the project's Properties. The request should fail with a 401 Unauthorized HTTP status response, which is expected.

(401 Unauthorized)

Getting a Token

In order to get a JWT (Bearer) token, you will need to select the Authorization tab, set the type to OAuth 2.0, and configure it with the settings below. You may have to scroll down on the right to find the "Configure New Token" section. Be sure to set the Client ID, Client Secret, and the requested Scopes.

The next window that pops up shows that the token has been generated and authorization with the Identity Server was successful. Click "proceed" or just wait…

On the next screen you should copy the JWT Token so you can inspect it at jwt.io.

Great Success!

At this point the Bearer Token (JWT) is being sent along with the request to get WeatherForecasts and is successfully authenticating.

Inspecting Tokens

Inspecting the JWT Token is easy. Hop over to jwt.io and paste in the JWT Token and you can see the properties passed with it.

https://jwt.io/

Reserved Claims

These are the reserved claims that are used for OpenID to process and authenticate JWT tokens.

  • iss – Issuer of the JWT
  • sub – Subject of the JWT
  • aud – Audience (defined under ApiResources in Identity Server)
  • exp – Expiration time, in epoch time
  • nbf – Not-before time
  • iat – Issued-at time
  • jti – Unique identifier for the JWT (JWT ID)

Further Reading

https://developer.okta.com/blog/2019/10/21/illustrated-guide-to-oauth-and-oidc
https://openid.net/
https://auth0.com/docs/tokens/json-web-tokens/json-web-token-claims

Kafka Log Compaction

This tutorial is to demonstrate how Log Compaction works in Apache Kafka. For this tutorial, I will use .NET and a local Docker image to run Apache Kafka. This example will demonstrate how to create distributable settings that could be stored in a Kafka topic. This is similar to how Kafka Connect saves its configuration for connectors (if I understood that correctly).

GitHub

GitHub: Kafka Log Compaction

Getting Started

There is a startup.ps1 script that will start a Docker image of Kafka. It will basically run a Docker Compose command with an environment variables file. You will need to enter values into the .env file.

From the root folder you can either run the startup.ps1 or run this command to start the Kafka Docker image.

docker-compose -f docker/docker-compose.yml --env-file ./.env up -d

Create a Kafka Topic with Log Compaction

We will exec into the Docker container in order to use the Kafka binaries and create a topic with log compaction enabled.

Note: Apache ZooKeeper is being phased out of Kafka, and the --zookeeper flag in the administrative tools is deprecated.
KIP-604: Remove ZooKeeper Flags from the Administrative Tools

Create Topic
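A sketch of the commands, assuming the broker container is named kafka and listens on localhost:9092; the topic name and configuration values match the sections below:

docker exec -it kafka bash

kafka-topics --bootstrap-server localhost:9092 --create \
  --topic app.settings \
  --partitions 1 \
  --replication-factor 1 \
  --config cleanup.policy=compact \
  --config delete.retention.ms=100 \
  --config segment.ms=100 \
  --config min.cleanable.dirty.ratio=0.01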

Topic Configuration

I recommend reading Javier Holguera's blog post "Kafka quirks: tombstones that refuse to disappear"; it helped me a lot. These settings are not optimal for a topic that continuously receives a lot of volume, because log compaction configured like this runs more frequently and affects the performance of the topic.

Confluent: Topic Configuration

cleanup.policy=compact

This setting is rather straightforward. Setting the cleanup policy to "compact" enables log compaction. However, that alone is not enough for an optimal configuration.

delete.retention.ms=100

The delete retention setting is the amount of time to retain tombstone markers for log compacted topics.

segment.ms=100

The segment configuration value controls the period of time after which Kafka forces the log to roll, which triggers log compaction.

min.cleanable.dirty.ratio=0.01

This setting affects how often the log compactor will attempt to clean the log.

.NET App

The .NET App will create and read distributed settings from the Kafka topic app.settings.

Consuming Data

On initial startup the application will read the processed settings values from the Kafka topic.

The call var consumeResult = consumer.Consume(TimeSpan.FromSeconds(5)); passes a TimeSpan of 5 seconds. This causes the consumer to poll, and if there are no offsets to consume within that window it returns null and the application moves on to the next task. This is common during the initial run.

I'm also setting setting.LastProcessed = consumeResult.Message.Timestamp.UtcDateTime; so that, in theory, I could filter and select the most recent setting based on the timestamp.
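A trimmed-down sketch of the consuming side with Confluent.Kafka; the DistributedSetting type, group id, and bootstrap server are assumptions, and the repository's surrounding plumbing is omitted:

using System;
using Confluent.Kafka;

var config = new ConsumerConfig
{
    BootstrapServers = "localhost:9092",
    GroupId = "app-settings-reader",
    AutoOffsetReset = AutoOffsetReset.Earliest
};

using var consumer = new ConsumerBuilder<string, string>(config).Build();
consumer.Subscribe("app.settings");

// returns null if nothing arrives within 5 seconds (common on the first run)
var consumeResult = consumer.Consume(TimeSpan.FromSeconds(5));

if (consumeResult != null)
{
    var setting = System.Text.Json.JsonSerializer.Deserialize<DistributedSetting>(consumeResult.Message.Value);
    setting.LastProcessed = consumeResult.Message.Timestamp.UtcDateTime;
}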

Producing Data

As the application runs, it will update the last processed time every 20 seconds. The expectation is that there will be a single record with a unique key.

The key thing to point out here is that the GetProducer() method takes <string, string>, where the first type is the key and the second type is the value.
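And the producing side, again as a sketch with placeholder names; the key is what log compaction uses to collapse older records:

using System;
using Confluent.Kafka;

var config = new ProducerConfig { BootstrapServers = "localhost:9092" };
using var producer = new ProducerBuilder<string, string>(config).Build();

// the same key is produced every time, so compaction keeps only the latest value
var setting = new DistributedSetting { LastProcessed = DateTime.UtcNow };

var message = new Message<string, string>
{
    Key = "app-settings",
    Value = System.Text.Json.JsonSerializer.Serialize(setting)
};

await producer.ProduceAsync("app.settings", message);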

Verifying Compaction using Kafka Tool

We will use the Kafka Tool to inspect the data on the Kafka log to make sure there are no duplicates and that there is only one value that is being updated.

You can see that log compaction happens periodically and reduces records based on the key. Unfortunately, I could not get it to compact down to a single record, which was rather disappointing.

BUG: Tombstones can Survive Forever

Kafka is still young and has a lot of growing to do. It appears that there is a bug where tombstoned records can persist even after they have been marked for deletion.

It’s also important to know that tombstoned records will come back as NULLs.

Apache.org: Tombstones can survive forever

Further Reading

https://kafka.apache.org/documentation/
Kafka quirks: tombstones that refuse to disappear
Confluent: Topic Configuration

Connecting Kafka Tool to Confluent Cloud

Kafka Tool is a great tool to use to see data that is being put on the Kafka Log. I use it regularly to verify topics, data, and configuration within a Kafka cluster. Confluent Cloud provides a cloud solution that can run on Azure, AWS, and Google Cloud. Their platform provides a lot of useful features like a UI that can create topics, monitor data, and consumer lag.

You can get started with Confluent Cloud for FREE! They give you a $200 credit for one week, and you can also apply other promo codes.

https://www.confluent.io/
https://www.kafkatool.com/

Setting up a User API Key in Confluent Cloud

First, you’re going to have to set up an API key to access the Confluent Cloud cluster.

Step 1

Click API Access on the left navigation and then click “Create key”.

Step 2

You’re going to want to select the default selection, “Global access”. This will grant your user full access to the cluster.

Step 3

On the final screen, you can get your API key and secret. You will need these to set up Kafka Tool. Save this information, because once you click "Save" it will no longer be available and you will have to repeat the process. I also put my name in the description; it makes life easier.

Configuring Kafka Tool for Confluent Cloud

Before getting started, make sure Kafka Tool is up to date. You will need the latest version because Confluent Cloud keeps its clusters close to the latest releases of Apache Kafka. As I'm writing this post, they are on version 2.6 and Kafka Tool supports 0.11 and up. I'm using Kafka Tool 2.0.9 as of 1/21/2021.

Step 1: Create a New Connection

Click “File” -> “Add New Connection” or right-click “Clusters” and select “Add New Connection”. Your values should look similar to this.

Note: I expect Zookeeper will go away and will not matter in the near future. Ping does not work here.
KIP-555: Deprecate Direct Zookeeper access in Kafka Administrative Tools

Step 2: Security

Click the “Security” tab and set the Type to “SASL SSL”. You will not need to set any values here.

Step 3: Advanced

Next, click the “Advanced” tab and set the bootstrap servers, SASL Mechanism to PLAIN, and check the Offset Topic option to make use of a background thread.

Step 4: JAAS Config

OK, so this part was a bit confusing for me until it finally worked. You will need to put your API key as the username and your API secret as the password (the values you saved earlier from Confluent Cloud).
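The JAAS config ends up looking like this, with placeholders where your API key and secret go:

org.apache.kafka.common.security.plain.PlainLoginModule required
  username="YOUR_API_KEY"
  password="YOUR_API_SECRET";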

Step 5: Verify it Works!

Using Confluent Cloud’s UI I was able to easily create a topic and produce sample data to that topic. This made verification easy.

Conclusion

It took me a lot of time and several tries to become familiar with this and figure it out! I hope this helps someone out. If it helped you, be sure to comment below. Thanks!

Kubernetes: Recreating Pods

Sometimes it's practical to re-create a pod because it needs to reload configuration or restart. Here are some tricks for deleting and re-creating pods.

Delete All Pods By Label

To get the labels of the pods, run this command against the appropriate namespace.

kubectl get pods -o wide --show-labels

kubectl delete pods -l app=kubernetes-tutorial -n mrjamiebowman

Delete Each Pod Individually

The key thing here is to leave one running and delete the others.

kubectl delete pod/podname -n mrjamiebowman

Scale to 0…3

kubectl scale deployment kubernetes-tutorial -n mrjamiebowman --replicas=0

then scale back up…

kubectl scale deployment kubernetes-tutorial -n mrjamiebowman --replicas=3

Rolling Restart (Production)

A more production-friendly way of doing this is to configure the Deployment to handle a rolling restart. That is a tutorial for another day; I just want people to be aware of it.

kubectl rollout restart deployment kubernetes-tutorial

Kubernetes ConfigMaps with .NET Core

This tutorial will cover how to mount a Kubernetes ConfigMap into a pod and how to read that configuration in .NET. I will also demonstrate the different ways a .NET application can read configuration from an appsettings.json file. One demonstration will reload configuration on change and show how to listen for those changes; that does not work with Kubernetes ConfigMaps, but it is great for local development.

GitHub

You can download the source code for the application here.
GitHub: Kubernetes Tutorial: ConfigMap in .NET

Kubernetes ConfigMaps

Kubernetes provides the ability to map configuration into pods. This information is typically not sensitive and can be either a JSON configuration file or a collection of key-value pairs. In this tutorial, I will show how to map in a JSON configuration file.

Kubernetes: ConfigMaps

Why use a ConfigMap over Environment Variables?

Environment Variables are great for passing in single values but if you have an entire configuration file like the appsettings.json file then you should use either a ConfigMap or Azure App Configuration.

Creating a Kubernetes ConfigMap

To install this ConfigMap into the Kubernetes cluster you will need to run this command. You could also download this configuration and edit it yourself.

Note: The deploy.ps1 script will execute all of the kubectl commands below.

To start with, I'm creating a Kubernetes namespace titled "mrjamiebowman".

kubectl create namespace mrjamiebowman

ConfigMap

kubectl apply -f k8s/configmap.yaml
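A sketch of what k8s/configmap.yaml might look like; the ConfigMap name, section, and values are hypothetical, and the important part is that the appsettings.json key becomes a file when the ConfigMap is mounted into the pod:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kubernetes-tutorial-config
  namespace: mrjamiebowman
data:
  appsettings.json: |
    {
      "Values": {
        "Value1": "from the ConfigMap"
      }
    }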

ConfigMap using Files.Get

Another variation would be to use .Files.Get to load an appsettings.json file, which might make it easier to maintain. I haven't figured out how to read the file from a relative path either.

Deployment

kubectl apply -f k8s/deployment.yaml

Service

In most cloud providers, a Service of type LoadBalancer will automatically provision a load balancer that is immediately capable of accepting traffic on an IP address.

kubectl apply -f k8s/service.yaml

Exploring Configuration Options with .NET Core

There are several ways configuration can be injected into the controller. With Kubernetes ConfigMaps, the configuration will not be reloaded unless the pod is re-created, either by deleting it or by a rolling restart; you can't edit it inside the pod either. The standard injection method below works fine if you are using ConfigMaps. Another option is the IOptionsMonitor pattern, which may ease local development. Knowing the difference is important.

Controller

In this tutorial, we inject the configuration twice using two different methods. The _valuesConfiguration field uses standard injection, while _valuesMonitoredConfiguration uses the IOptionsMonitor pattern.

Standard Injection

This is a very simple way of binding the configuration to an object and injecting that into controllers. This will work fine with Kubernetes ConfigMaps.

Startup.cs – ConfigureServices(IServiceCollection services)
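A sketch of the registration, assuming a ValuesConfiguration class bound to a "Values" section of appsettings.json (both names are placeholders):

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers();

    // binds the "Values" section so it can be injected as IOptions<ValuesConfiguration>
    services.Configure<ValuesConfiguration>(Configuration.GetSection("Values"));
}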

Get Method

This is as simple as it gets. Configuration values can be returned from the injected configuration object.
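Roughly like this, assuming the controller received IOptions<ValuesConfiguration> in its constructor as _valuesConfiguration and the class has a Value1 property (again, placeholder names):

[HttpGet]
public IActionResult Get()
{
    // the value is fixed for the lifetime of the pod when it comes from a ConfigMap
    return Ok(_valuesConfiguration.Value.Value1);
}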

IOptionsMonitor

Using IOptionsMonitor, you can easily monitor your configuration file for changes, and I'm going to demonstrate how that works. However, this does not work with Kubernetes ConfigMaps; pods must be re-created for new changes to apply.

Retrieving data with the IOptionsMonitor pattern is slightly different: you have to read CurrentValue to get the updated configuration setting.

Startup.cs – ConfigureServices(IServiceCollection services)

Configuration injection here is slightly different.

Get Method (IOptionsMonitor)
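A sketch showing both the injection and the action for this variant; the controller, configuration, and property names are placeholders:

private readonly IOptionsMonitor<ValuesConfiguration> _valuesMonitoredConfiguration;

public ValuesController(IOptionsMonitor<ValuesConfiguration> valuesMonitoredConfiguration)
{
    _valuesMonitoredConfiguration = valuesMonitoredConfiguration;
}

[HttpGet("monitored")]
public IActionResult GetMonitored()
{
    // CurrentValue reflects the latest reload when the underlying file changes
    // (works with reloadOnChange locally, but not with a mounted ConfigMap)
    return Ok(_valuesMonitoredConfiguration.CurrentValue.Value1);
}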

Reloading Configuration in Kubernetes

There are several ways to do this, assuming the replica count has been increased to something like 3 so there are now 3 pods running. Since this tutorial is meant to teach, I expect you would be doing this in a DEV environment; most of these techniques are not something you would want to do in production because they would cause downtime. Here are some ideas for how to re-create pods.

Delete All Pods By Label

kubectl delete pods -l app=kubernetes-tutorial -n mrjamiebowman

Delete Each Pod Individually

The key thing here is to leave one running and delete the others.

kubectl delete pod/podname -n mrjamiebowman

Scale to 0…3

kubectl scale deployment kubernetes-tutorial -n mrjamiebowman --replicas=0

then scale back up…

kubectl scale deployment kubernetes-tutorial -n mrjamiebowman --replicas=3

Rolling Restart (Production)

A more production-friendly way of doing this is to configure the Deployment to handle a rolling restart. That is a tutorial for another day; I just want people to be aware of it.

kubectl rollout restart deployment kubernetes-tutorial

Other Configuration Management Options

While this defeats the purpose of using a ConfigMap, it is worth knowing that there are more premium options for managing configuration. The downside is that Azure App Configuration starts at about $36 per month for the Standard tier of service.

Azure App Configuration

Azure App Configuration allows for easy importing of .NET JSON configuration files. The values can be managed in the Azure Portal, and it even includes functionality like feature flags ("Features"). Convenience and distributed configuration are prime reasons to use Azure App Configuration, and it can also link values from Azure Key Vault.

Microsoft: Azure App Configuration

Azure Key Vault

Azure Key Vault is great for managing API keys and sensitive information that may need updating periodically. It's actually very easy because you can set expiration and start dates so that configuration can change smoothly. I've found that using Azure App Configuration with Azure Key Vault is extremely powerful and useful for managing configuration across multiple applications.

Microsoft: Azure Key Vault

Further Reading

https://docs.microsoft.com/en-us/aspnet/core/fundamentals/configuration/options?view=aspnetcore-5.0
https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/
https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
https://www.bluematador.com/blog/kubernetes-deployments-rolling-update-configuration
