Rasmus Olsson

Setting up DB migrations with .NET running in Kubernetes

December 15, 2023

In this guide, we'll start by developing a universal .NET application designed for managing database migrations with DbUp. Our goal is to encapsulate this application within a Docker container and transform it into a reusable Docker image. This image will be engineered to handle various database migrations across different domains, requiring a connection string and SQL scripts folder path as inputs.

Following the creation of this generic image, we'll show how it can serve as a foundation for constructing additional images tailored to execute migrations for distinct databases and domains.

The final step involves deploying and running this as a Kubernetes job.

Let's start by setting up a Postgres database that will live inside Kubernetes:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:latest
          resources:
            limits:
              cpu: "1"
              memory: "1Gi"
            requests:
              cpu: "0.5"
              memory: "500Mi"
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_USER
              value: postgres
            - name: POSTGRES_PASSWORD
              value: YourStrong!Passw0rd
            - name: POSTGRES_DB
              value: YourDatabaseName
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  selector:
    app: postgres
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
  type: ClusterIP

then apply it by running:

kubectl apply -f postgres-deployment.yaml

Let's verify that everything is set up by connecting to Postgres.

Note: Because the service is of type ClusterIP, we first have to port-forward the Kubernetes service before we can connect:

kubectl port-forward service/postgres-service 5432:5432

Run the following to connect to the Postgres database:

psql -h localhost -p 5432 -U postgres -d YourDatabaseName

Output:

psql (16.2)
Type "help" for help.

YourDatabaseName=#

Perfect, the database connection works!

Now let's create the universal DB migration job with DbUp.

Program.cs:

using DbMigrator;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

var host = Host.CreateDefaultBuilder(args)
    .ConfigureAppConfiguration((_, config) =>
    {
        config.SetBasePath(Directory.GetCurrentDirectory());
        config.AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
            .AddCommandLine(args);
    })
    .ConfigureServices((hostContext, services) =>
    {
        var configuration = hostContext.Configuration;
        var scriptDirectory = configuration.GetSection("DbMigrationSettings")
            .GetValue<string>("ScriptDirectory");
        var connectionString = configuration.GetConnectionString("PgConnection");
        services.AddSingleton(new DbMigrationService(connectionString!, scriptDirectory!));
    })
    .Build();

var databaseMigrationService = host.Services.GetRequiredService<DbMigrationService>();
databaseMigrationService.MigrateDatabase();

DbMigrationService.cs:

using DbUp;

namespace DbMigrator;

public class DbMigrationService(string connectionString, string scriptDirectory)
{
    public void MigrateDatabase()
    {
        EnsureDatabase.For.PostgresqlDatabase(connectionString);

        var upgrader = DeployChanges.To
            .PostgresqlDatabase(connectionString)
            .WithScriptsFromFileSystem(scriptDirectory)
            .LogToConsole()
            .Build();

        var result = upgrader.PerformUpgrade();

        if (!result.Successful)
        {
            Console.WriteLine(result.Error);
        }
        else
        {
            Console.WriteLine("Success!");
        }
    }
}
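For reference, a minimal DbMigrator.csproj tying this together might look roughly like the sketch below. The package names are real NuGet packages, but the exact versions (and the copy-to-output entries for local runs) are assumptions:

```xml
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net8.0</TargetFramework>
    <Nullable>enable</Nullable>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="dbup-postgresql" Version="5.0.*" />
    <PackageReference Include="Microsoft.Extensions.Hosting" Version="8.0.0" />
  </ItemGroup>

  <ItemGroup>
    <!-- make appsettings.json and the sql scripts available next to the binary for local runs -->
    <None Update="appsettings.json" CopyToOutputDirectory="PreserveNewest" />
    <None Update="sql/**/*.sql" CopyToOutputDirectory="PreserveNewest" />
  </ItemGroup>

</Project>
```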

We have an appsettings.json that we can use for local testing, but we will override this configuration later.

{
  "ConnectionStrings": {
    "PgConnection": "Server=localhost;Database=YourDatabaseName;User Id=postgres;Password=YourStrong!Passw0rd;"
  },
  "DbMigrationSettings": {
    "ScriptDirectory": "sql"
  }
}
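Because Program.cs wires in AddCommandLine(args), and Host.CreateDefaultBuilder already adds environment variables as a configuration source, both values can be overridden without editing the file. For example, when testing locally (the database name here is just a placeholder):

```shell
# override via command-line arguments (':' separates configuration sections)
dotnet run -- --ConnectionStrings:PgConnection "Server=localhost;Database=SomeOtherDb;User Id=postgres;Password=YourStrong!Passw0rd;"

# or via an environment variable ('__' stands in for ':')
ConnectionStrings__PgConnection="Server=localhost;Database=SomeOtherDb;User Id=postgres;Password=YourStrong!Passw0rd;" dotnet run
```

The double-underscore convention is what lets docker-compose and Kubernetes inject these settings later on.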

And then a simple SQL file:

CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    name VARCHAR(100) NOT NULL,
    email VARCHAR(150) NOT NULL UNIQUE,
    joined_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP
);

The project structure looks like this:

DbMigrator
│   appsettings.json
│   DbMigrationService.cs
│   DbMigrator.csproj
│   Dockerfile
│   Program.cs
│
└───sql
        Examples.sql

Let's first run it locally to see that everything works.

Output:

DbMigrator/bin/Debug/net8.0/DbMigrator.exe
Master ConnectionString => Host=localhost;Database=postgres;Username=postgres;Password=******
Beginning database upgrade
Checking whether journal table exists..
Journal table does not exist
Executing Database Server script 'Examples.sql'
Checking whether journal table exists..
Creating the "schemaversions" table
The "schemaversions" table has been created
Upgrade successful
Success!

Perfect, it works!

Next we will build an image out of this:

FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY ["DbMigrator.csproj", "./"]
RUN dotnet restore "DbMigrator.csproj"
COPY . .
RUN dotnet build "DbMigrator.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "DbMigrator.csproj" -c Release -o /app/publish /p:UseAppHost=false

FROM mcr.microsoft.com/dotnet/runtime:8.0 AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "DbMigrator.dll"]

Then build the image:

docker build -t db-migrator .

Now that we have the universal db migrator image, we have the infrastructure in place to reuse it across multiple projects. Let's have a look at how we can do that:

Begin by creating a new directory within the specific project requiring a database migration. Within this directory, create a subfolder, possibly named 'sql', and include both a docker-compose file and a Dockerfile looking like this:

db
│   docker-compose.yml
│   Dockerfile
│
└───sql
        payments.sql

Explanation:

  • docker-compose.yml: Not strictly required, but helpful for testing locally and for ensuring that all parts function correctly before building the final image.
  • Dockerfile: Used to customize the db migrator to fit the specific requirements of the project.
  • sql folder: Contains the SQL migration scripts for the database.

Let's test the docker-compose file:

version: '3.8'
services:
  db-migrator:
    image: db-migrator:latest
    volumes:
      - .:/app/DbMigrations
    environment:
      ConnectionStrings__PgConnection: 'Server=host.docker.internal;Database=YourDatabaseName;User Id=postgres;Password=YourStrong!Passw0rd;'
      DbMigrationSettings__ScriptDirectory: 'DbMigrations/sql'

Note that from inside the compose container we reach the port-forwarded database via host.docker.internal. Then run:

docker-compose up

Output:

payment> docker-compose up
[+] Running 1/0
 ✔ Container payment-db-migrator-1  Recreated
Attaching to db-migrator-1
db-migrator-1  | Master ConnectionString => Host=host.docker.internal;Database=postgres;Username=postgres;Password=******
db-migrator-1  | Beginning database upgrade
db-migrator-1  | Checking whether journal table exists..
db-migrator-1  | Fetching list of already executed scripts.
db-migrator-1  | Executing Database Server script 'payments.sql'
db-migrator-1  | Checking whether journal table exists..
db-migrator-1  | Upgrade successful
db-migrator-1  | Success!
db-migrator-1 exited with code 0

Good!

Next, we will create the project-specific image. We will reuse the universal image and include the script folder inside it.

FROM db-migrator:latest

# Copy the sql directory into the image
COPY ./sql /app/DbMigrations/sql

Then build it:

docker build -t payment-db-migrator .
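If you are running a local cluster, the freshly built image also has to be made visible to the cluster's container runtime, since it only exists in your local Docker daemon. How exactly depends on your setup; for example:

```shell
# minikube
minikube image load payment-db-migrator:latest

# kind
kind load docker-image payment-db-migrator:latest
```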

Now that we have the final image, we will wrap it inside a Kubernetes job:

apiVersion: batch/v1
kind: Job
metadata:
  name: payment-db-migration-job
spec:
  template:
    spec:
      containers:
        - name: db-migration
          image: payment-db-migrator:latest
          imagePullPolicy: IfNotPresent
          env:
            - name: ConnectionStrings__PgConnection
              value: "Server=postgres-service;Database=YourDatabaseName;User Id=postgres;Password=YourStrong!Passw0rd;"
            - name: DbMigrationSettings__ScriptDirectory
              value: "DbMigrations/sql"
      restartPolicy: Never
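A couple of optional Job fields can make a migration job better behaved: backoffLimit caps how many times Kubernetes retries a failed migration, and ttlSecondsAfterFinished cleans up the finished Job automatically. The values below are just suggestions:

```yaml
spec:
  backoffLimit: 2                # retry a failed migration at most twice
  ttlSecondsAfterFinished: 600   # delete the finished Job after 10 minutes
```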

Note: For production applications, make sure to handle passwords securely.
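One way to do that is to move the connection string into a Kubernetes Secret and reference it from the Job instead of hard-coding it in the manifest. A sketch, assuming a Secret named db-credentials:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  pg-connection: "Server=postgres-service;Database=YourDatabaseName;User Id=postgres;Password=YourStrong!Passw0rd;"
```

The Job's env entry then becomes:

```yaml
env:
  - name: ConnectionStrings__PgConnection
    valueFrom:
      secretKeyRef:
        name: db-credentials
        key: pg-connection
```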

Let's apply the Kubernetes job by running:

kubectl apply -f payment-db-migration-job.yml

Output:

job.batch/payment-db-migration-job created
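In a deployment pipeline, it is often useful to block until the migration has actually finished before rolling out the application that depends on it. kubectl can wait on the Job's completion condition directly:

```shell
kubectl wait --for=condition=complete job/payment-db-migration-job --timeout=120s
```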

Let's find the pod by running:

kubectl get pods

Output:

payment> kubectl get pods
NAME                                  READY   STATUS      RESTARTS
payment-db-migration-job-qc7gq        0/1     Completed   0
postgres-deployment-f847cbbb4-8n2jp   1/1     Running     0

And then check the logs to see if it executed the DB migration by running:

kubectl logs payment-db-migration-job-qc7gq

Output:

Master ConnectionString => Host=postgres-service;Database=postgres;Username=postgres;Password=******
Beginning database upgrade
Checking whether journal table exists..
Journal table does not exist
Executing Database Server script 'payments.sql'
Checking whether journal table exists..
Creating the "schemaversions" table
The "schemaversions" table has been created
Upgrade successful
Success!

Perfect, it works!

Happy coding!
