
---
title: "Argocd Dynamic Environment Per Branch: Part 1"
date: 2023-02-25T14:00:00+01:00
draft: true
ShowToc: true
cover:
  image: cover.png
  caption: "Argocd Dynamic Environment Per Branch Part 1"
  relative: false
  responsiveImages: false
---

[Do you remember?]({{< ref "dont-use-argocd-for-infrastructure" >}})

> And using helmfile, I will install ArgoCD to my clusters, of course, because it's an awesome tool, without any doubt. But don't manage your infrastructure with it, because it's a part of your infrastructure, and it's a service that you provide to other teams. And I'll talk about it in one of the next posts.

Yes, I have written 4 posts where I was almost absolutely negative about ArgoCD. But I was talking about infrastructure then. I've got some ideas about how to describe it in a better way, but I think I'll save that for another post.

Here, I want to talk about dynamic (preview) environments, and I'm going to describe how to create them using my blog as an example. My blog is a pretty simple application: from a Kubernetes perspective, it's just a container with some static content. And here you can already notice that static is the opposite of dynamic, so that's the first problem I'll have to tackle: turning static content into dynamic. My blog consists of markdown files that Hugo uses to generate web pages.

Initially I was using `hugo server` to serve the static content, but it needs way more resources than nginx, so I've decided in favor of nginx.

I think that I'll write 2 or 3 posts about it, because it's too much to cover in only one. So here, I'll share how I was preparing my blog to be ready for dynamic environments.

This is what my workflow looked like before I decided to use dynamic environments:

- I'm editing hugo content while running `hugo server` locally
- Pushing changes to a non-main branch
- When everything is ready, I'm uploading pictures to the minio storage
- And merging the non-main branch into main
- Drone-CI downloads images from minio and builds a docker image with the `latest` tag
  - The first step is to generate the static content with hugo
  - The second step is to put that static content into an nginx container
- Drone-CI pushes the new image to my registry
- Keel spots that the image was updated and pulls it
- The pod with the static content is recreated, and my blog serves the new content
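The old pipeline above can be sketched roughly like this. This is a hedged reconstruction, not the real config: the step names and the rclone remote are assumptions based on the snippets shown later in this post.

```yaml
# Sketch of the old Drone-CI pipeline (step names and the rclone
# remote are assumptions; only the registry paths come from this post)
kind: pipeline
type: docker
name: build-blog

trigger:
  branch:
    - main

steps:
  # Pull pictures and other media from minio before the build
  - name: download-media
    image: rclone/rclone:latest
    commands:
      - rclone copy badhouseplants-public:/badhouseplants-static static

  # Build the docker image (hugo generates the static content,
  # which is then baked into an nginx image) and push it as :latest
  - name: build-and-push
    image: plugins/docker
    settings:
      registry: git.badhouseplants.net
      repo: git.badhouseplants.net/allanger/badhouseplants-net
      tags: latest
```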

What don't I like about it? I can't test anything unless it's in production. And when I started to work on adding comments (which is still WIP), I understood that I'd like to have a real environment where I can test everything before firing the main pipeline. Even though a static development environment would be fine for me, because I'm the only one doing development here, I don't like the concept of static envs, and I want to be able to work on different posts at the same time. Also, adding a new static environment for development purposes is about the same amount of work as implementing a solution for deploying them dynamically.

Before I can start deploying them, I have to prepare the application for that. At first glance, the changes look like this:

1. Containers must not contain any static content
2. I can't use only `latest` tags anymore
3. The Helm chart has a lot of hardcoded stuff
4. CI pipelines must be adjusted
5. The deployment process should be rethought

## Static Container

Static content doesn't play well with dynamic environments. I'd even say it doesn't play at all. So at the very least, I must stop baking my blog's hostname into the image at build time: one container should be able to run anywhere with the same result. I've decided that instead of putting the generated static content into the nginx container at the build stage, I need to ship a container with the source code to Kubernetes, generate the static content there, and serve it from an nginx container. Before, my deployment looked like this:

```yaml
spec:
  containers:
    - image: git.badhouseplants.net/allanger/badhouseplants-net:latest
      imagePullPolicy: Always
      name: badhouseplants-net
```

And that was enough. Now it looks like this:

```yaml
spec:
  containers:
    - image: nginx:latest
      imagePullPolicy: Always
      name: nginx
      ports:
        - containerPort: 80
          name: http
          protocol: TCP
      resources: {}
      volumeMounts:
        - mountPath: /var/www
          name: public-content
          readOnly: true
        - mountPath: /etc/nginx/conf.d
          name: nginx-config
          readOnly: true
  initContainers:
    - args:
        - --baseURL
        - https://dynamic-charts-dev.badhouseplants.net/
      image: git.badhouseplants.net/allanger/badhouseplants-net:d727a51c0443eb4194bdaebf8ab0e94c0f228b06
      imagePullPolicy: Always
      name: badhouseplants-net
      resources: {}
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
        - mountPath: /src/static
          name: s3-data
          readOnly: true
        - mountPath: /src/public
          name: public-content
  restartPolicy: Always
  volumes:
    - emptyDir:
        sizeLimit: 1Gi
      name: public-content
    - configMap:
        defaultMode: 420
        name: nginx-config
      name: nginx-config
```

So in the init container I'm generating the static content (the `--baseURL` flag is templated with Helm) and putting the result into a directory that is mounted as an emptyDir volume. Then I'm mounting this folder into the nginx container. Now I can use my docker image wherever I'd like with the same result: it doesn't depend on a hostname fixed during the build.
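In the chart, that `--baseURL` argument can be rendered from values. A minimal sketch of what the deployment template might look like, using the `hugo.baseURL` and `hugo.image.tag` values shown later in this post (the real template likely differs):

```yaml
# deployment.yaml (sketch): the base URL and image tag come from
# values, so the same chart works for any preview environment
initContainers:
  - name: badhouseplants-net
    image: "git.badhouseplants.net/allanger/badhouseplants-net:{{ .Values.hugo.image.tag }}"
    args:
      - --baseURL
      - "{{ .Values.hugo.baseURL }}"
```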

## No more latest

Since I want to have my envs updated on each commit, I can't push only `latest` anymore. So I've decided to use the commit SHA as the tag for my images. But that means I'll have a lot of them now, and having 300 MB of pictures and other media in each one becomes very painful. So I need to stop putting media directly into the container during the build. Instead of using rclone to get data from minio in a drone pipeline, I'm adding another init container to my deployment:

```yaml
initContainers:
  - args:
      - -c
      - rclone copy -P badhouseplants-public:/badhouseplants-static /static
    command:
      - sh
    env:
      - name: RCLONE_CONFIG
        value: /tmp/rclone.conf
    image: rclone/rclone:latest
    imagePullPolicy: Always
    name: rclone
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
      - mountPath: /tmp
        name: rclone-config
        readOnly: true
      - mountPath: /static
        name: s3-data
volumes:
  - name: rclone-config
    secret:
      defaultMode: 420
      secretName: rclone-config
  - emptyDir:
      sizeLimit: 1Gi
    name: s3-data
```

And I'm also mounting the s3-data volume into the hugo container, so it can generate my blog with all the images.
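The `rclone-config` secret holds a standard rclone configuration file. It could look something like this; the remote name matches the `rclone copy` command above, but the endpoint and credentials here are placeholders, not the real setup:

```yaml
# Secret (sketch): an rclone config with an S3-compatible remote
# pointing at minio; endpoint and keys are placeholder values
apiVersion: v1
kind: Secret
metadata:
  name: rclone-config
stringData:
  rclone.conf: |
    [badhouseplants-public]
    type = s3
    provider = Minio
    endpoint = https://minio.example.com
    access_key_id = <access-key>
    secret_access_key = <secret-key>
```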

## Helm chart should be more flexible

I had to find all the values that should differ between environments. It turned out there weren't many:

1. Istio VirtualService hostnames (or the Ingress hostname, if you don't use Istio)
2. The image tag for the container with the source code
3. The hostname that should be passed to hugo as a base URL
4. Whether drafts are built, because preview environments should display pages that are still drafts

So I've put all of that into `values.yaml`:

```yaml
istio:
  hosts:
    - badhouseplants.net
hugo:
  image:
    tag: $COMMIT_SHA
  baseURL: https://badhouseplants.net/
  buildDrafts: false
```
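These values can then drive the templates. For example, the VirtualService hostnames might be rendered like this; this is only a sketch of the idea, and the real template is likely different:

```yaml
# virtual-service.yaml (sketch): hostnames come from values,
# so each preview environment can get its own host
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: badhouseplants-net
spec:
  hosts:
    {{- range .Values.istio.hosts }}
    - {{ . }}
    {{- end }}
```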

## CI pipelines

Now I need to push a new image on each commit instead of pushing only once the code has made it to the main branch. But I also don't want completely broken images in my registry, because I'm self-hosting it, and therefore I care about storage. So before building and pushing an image, I need to test it:

```yaml
# ---------------------------------------------------------------
# -- My Dockerfile is very small and easy, so it's not a problem
# --  to duplicate its logic in a job. But I think that
# --  a better way to implement this would be to build an image
# --  with the Dockerfile, run it, and push it if everything is fine
# ---------------------------------------------------------------
- name: Test a build
  image: klakegg/hugo
  commands:
    - hugo

- name: Build and push the docker image
  image: plugins/docker
  settings:
    registry: git.badhouseplants.net
    username: allanger
    password:
      from_secret: GITEA_TOKEN
    repo: git.badhouseplants.net/allanger/badhouseplants-net
    tags: ${DRONE_COMMIT_SHA}
```

Now, if my code isn't completely broken, I'll have an image for each commit. And when I merge my branch into main, I can use the tag from the latest preview build for the production instance. So I'm almost sure that what I've tested before is what a visitor will see.
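For the production release, that means pinning the image tag to the commit SHA that was already built and tested on the branch. With helmfile, which I use for my clusters, that could look roughly like this; a sketch only, since the actual production setup isn't shown in this post (the chart path is a placeholder, and the SHA is the example tag from the deployment above):

```yaml
# helmfile.yaml (sketch): pin the production release to a commit SHA
# that was already built and tested on a branch
releases:
  - name: badhouseplants-net
    chart: ./chart  # placeholder path
    values:
      - hugo:
          image:
            tag: d727a51c0443eb4194bdaebf8ab0e94c0f228b06
```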