DB Migration Checker

Overview#

DB Migration Checker is a component of Database Lab Engine (DLE) that enables integration with CI/CD tools to automatically test database migrations in CI/CD pipelines.

Key features#

  • Automated: DB migration testing in CI/CD pipelines
  • Realistic: test results are realistic because real or close-to-real (the same size but no personal data) databases are used, thin-cloned in seconds, and destroyed after testing is done
  • Fast and inexpensive: a single machine with a single disk can operate dozens of independent thin clones
  • Well-tested DB changes to avoid deployment failures: DB Migration Checker automatically detects (and prevents!) dangerous long-lasting locks that could bring your production systems down
  • Secure: DB Migration Checker runs all tests in a secure environment: data cannot be copied outside the secure container
  • Lots of helpful data points: collect useful artifacts (such as pg_stat_*** system views) and use them to empower your DB changes review process

How to set up a DB migration checker#

  • Make sure that the Database Lab Engine is running

  • Copy the contents of the configuration example config.example.run_ci.yaml from the Database Lab repository to ~/.dblab/run_ci.yaml:

    mkdir -p ~/.dblab
    curl https://gitlab.com/postgres-ai/database-lab/-/raw/master/configs/config.example.run_ci.yaml \
      --output ~/.dblab/run_ci.yaml
  • Configure the DB migration checker file run_ci.yaml

  • Launch DB Migration Checker

    docker run --name dblab_ci_checker --rm -it \
      --publish 2500:2500 \
      --volume /var/run/docker.sock:/var/run/docker.sock \
      --volume /tmp/ci_checker:/tmp/ci_checker \
      --volume ~/.dblab/run_ci.yaml:/home/dblab/configs/run_ci.yaml \
      postgresai/dblab-ci-checker:2.4.1
  • [optional] Run localtunnel (or an analog) - use it only for debugging, to make the DB Migration Checker instance accessible to a CI pipeline:

    lt --port 2500

  • Prepare a new repository with your DB migrations (Flyway, Sqitch, Liquibase, etc.)

    • add secrets:
      • DLMC_CI_ENDPOINT - an endpoint of your Database Lab Migration Checker service. For example, https://ci-checker.example.com/, or, when debugging, the endpoint provided by localtunnel.
      • DLMC_VERIFICATION_TOKEN - verification token for the Database Lab Migration Checker API
  • Configure a new workflow in the created repository (see an example of configuration)

  • Push a commit to the repository
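
The "Configure the DB migration checker file" step above involves editing run_ci.yaml. The sketch below is illustrative only — the key names shown here are assumptions, so treat config.example.run_ci.yaml from the repository as the authoritative reference:

```yaml
# Illustrative run_ci.yaml sketch (key names are assumptions; see
# config.example.run_ci.yaml in the Database Lab repository).
app:
  port: 2500                       # port the checker listens on (published by docker run)
  verificationToken: "secret"      # token CI pipelines present as DLMC_VERIFICATION_TOKEN
dle:
  url: "https://dblab.example.com" # address of the running Database Lab Engine
  verificationToken: "dle_token"   # token used to authenticate with the DLE API
```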

How it works#

After configuring all components, the repository is ready to start a CI workflow. On receiving a push event, GitHub starts the workflow with the Database Lab DB migration action. The Database Lab DB migration action makes a request to the DB Migration Checker. The DB Migration Checker requests the DLE to create a new clone, starts a new container with migration tools, and then runs the commands from the GitHub action against this clone.
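
To make the flow above concrete, here is a minimal Python sketch of the kind of request body a CI action could assemble for the checker, using the input parameters documented below. The endpoint path, field names, and nesting are assumptions for illustration only — consult the checker's API for the exact contract:

```python
import json

def build_migration_request(commands, dbname="", migration_envs=None,
                            observation_interval=10, max_lock_duration=10,
                            max_duration=3600):
    """Assemble a hypothetical JSON body for a request to the
    DB Migration Checker service (field names are illustrative)."""
    return {
        "commands": commands,                  # migration commands to run in the clone
        "dbname": dbname,                      # target database inside the clone
        "migration_envs": migration_envs or [],  # env vars set during migrations
        "observation_config": {
            "observation_interval": observation_interval,  # seconds between metric samples
            "max_lock_duration": max_lock_duration,        # fail if a lock lasts longer
            "max_duration": max_duration,                  # overall observation limit
        },
    }

payload = build_migration_request(["sqitch deploy"], dbname="test")
print(json.dumps(payload, indent=2))
```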

Take a look at the communication scheme of the DB migration service.

Available input params#

  • commands (list, required) - list of commands needed to run database migrations
  • dbname (string, optional, default: "") - database that the workflow is running with
  • migration_envs (list, optional) - list of environment variables that will be set while migrations run
  • download_artifacts (string, optional, default: "false") - whether to store observation artifacts
  • observation_interval (string, optional, default: "10") - interval of metric gathering and output (in seconds)
  • max_lock_duration (string, optional, default: "10") - maximum allowed duration for locks (in seconds)
  • max_duration (string, optional, default: "3600") - maximum allowed duration for observation (in seconds)
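
The parameters above map to the `with:` inputs of the workflow step. The snippet below is a hypothetical GitHub Actions workflow fragment — the action reference `postgres-ai/migration-testing-action@v1` is a placeholder, so use the action name from the official configuration example:

```yaml
# Hypothetical workflow step (action name/version are placeholders).
- name: Check DB migrations
  uses: postgres-ai/migration-testing-action@v1
  with:
    dbname: test
    commands: |
      sqitch deploy
    download_artifacts: "true"
    observation_interval: "10"
    max_lock_duration: "10"
    max_duration: "3600"
  env:
    DLMC_CI_ENDPOINT: ${{ secrets.DLMC_CI_ENDPOINT }}
    DLMC_VERIFICATION_TOKEN: ${{ secrets.DLMC_VERIFICATION_TOKEN }}
```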

Supported DB migration tools#

Prepared Docker images with DB migration tools are available.

How to extend#

Proposals to extend this list are welcome in the project repository: https://gitlab.com/postgres-ai/custom-images.

tip

If needed, you can specify any custom Docker image with a DB migration tool in the Database Lab Migration Checker configuration. There is one requirement: such an image must remain running during the execution of all commands.

Use these Dockerfiles as examples. You can extend or modify them for your needs.
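
As a sketch of the "must remain running" requirement from the tip above, a custom image can simply keep a long-lived process as its default command. The base image and version here are illustrative assumptions, not a recommendation:

```dockerfile
# Hypothetical custom migration-tool image. The one hard requirement from
# this page: the container must stay running while all commands execute.
FROM flyway/flyway:9

# Override the tool's default entrypoint so the container does not exit
# immediately; the checker then runs migration commands inside it.
ENTRYPOINT []
CMD ["tail", "-f", "/dev/null"]
```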