---
title: "ArgoCD vs Helmfile: Applications"
date: 2023-02-13T12:14:09+01:00
draft: false
cover:
  image: cover.png
  caption: ArgoCD
  relative: false
  responsiveImages: false
ShowToc: true
---

So as promised in [the previous ArgoCD post]({{< ref "dont-use-argocd-for-infrastructure" >}}), I'll try to show a simple example of Pull Requests for different kinds of setups. This is the first part; putting everything in the same post seemed like too much.

## Intro

I've created three main branches and three branches for installing two applications. I assume we have two production clusters (if you've read the previous post, you know that by saying 'production', I mean production for the SRE team, so they can be dev/stage/whatever for other teams) and one test cluster (the one where the SRE team can test anything without affecting other teams).

You can already check all of them here: https://git.badhouseplants.net/allanger/helmfile-vs-argo/pulls

I've decided to install Vertical Pod Autoscaler on both prod clusters and Goldilocks on only one of them. Therefore, I have to add both to the test cluster as well. Also, I promised that I'd implement the CI/CD for all of those solutions, but I think it's going to be enough to just describe the logic. If you really want to see different implementations of CI/CD, you can shoot me a message, and I will write another post then.

## Applications (An App of Apps)

So here is the PR for installing applications with Application manifests: https://git.badhouseplants.net/allanger/helmfile-vs-argo/pulls/2/files

I've chosen to follow the app of apps pattern, because it includes the changes that would have to be made for a "direct" application installation anyway. So let's have a look at the main manifests; here you can see the base: https://git.badhouseplants.net/allanger/helmfile-vs-argo/src/branch/argo-apps-main

Initially I thought of using only one "Big Application" manifest for all three clusters, but I found out that it's not so easy when your clusters don't have exactly the same infrastructure. Even with multi-source apps, you will probably have to use an additional tool for templating/substituting, for example like this:

```yaml
# app-of-apps.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: app-of-apps
  namespace: argo-system
spec:
  destination:
    namespace: argo-system
    server: https://kubernetes.default.svc
  project: system
  sources:
    - path: ./manifests/$CLUSTER
      repoURL: git@git.badhouseplants.net:allanger/helmfile-vs-argo.git
      targetRevision: argo-apps-main
    - path: ./manifests/common
      repoURL: git@git.badhouseplants.net:allanger/helmfile-vs-argo.git
      targetRevision: argo-apps-main
```

and then, in a pipeline do something like this:

```shell
export CLUSTER=cluster1
envsubst < app-of-apps.yaml | kubectl apply -f - # I haven't tested this out, so the command may not work, but I hope you get the point.
```

So it's either additional files, or an additional logic in CI/CD.
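The substitution step above can be sketched end to end. Here's a minimal, self-contained version: it uses `sed` instead of `envsubst` to stay dependency-free, writes the rendered manifests to files instead of applying them (since `kubectl apply` needs a live cluster), and the inlined manifest is a stripped-down stand-in, not the real one:

```shell
#!/bin/sh
# Sketch: render the app-of-apps manifest once per cluster, then (in a
# real pipeline) apply it. The $CLUSTER placeholder follows the example
# above; the file contents here are a minimal stand-in.
set -eu

# A minimal stand-in for the real app-of-apps.yaml
cat > app-of-apps.yaml <<'EOF'
spec:
  sources:
    - path: ./manifests/$CLUSTER
EOF

for CLUSTER in cluster1 cluster2; do
  # envsubst would work too; sed keeps the sketch dependency-free
  sed "s|\$CLUSTER|$CLUSTER|g" app-of-apps.yaml > "rendered-$CLUSTER.yaml"
  # kubectl apply -f "rendered-$CLUSTER.yaml"  # the real apply step
done
```

In a real pipeline the loop body would run once per job, with `$CLUSTER` coming from the pipeline matrix.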

Also, about the helm-freeze thing: I wanted to vendor charts, because in this example it's required, but my Gitea instance can't preview file changes when there are 9000+ lines of code updated, so I had to remove them.

But the logic would be like this:

- Manual part:
  - Update `helm-freeze.yaml`
  - Run `helm-freeze sync`
  - Add a new application to the `manifests/$CLUSTER` dir
  - Push
- CI/CD:
  - Since it needs to be GitOps, you need to check that the charts in the vendor dir are up-to-date with `helm-freeze.yaml`, because if you updated `helm-freeze.yaml` and forgot to execute `helm-freeze sync`, you'd have a contradiction between the actual and desired states. That's one of the reasons why I don't like this kind of vendoring: it's either an additional step in CI that verifies the manual step was done, or additional work for the reviewer. You could also add an action that executes it within the pipeline and pushes the result to your branch, but I'm completely against that (something for another post, maybe).
  - Then, depending on the branch:
    - If not main: run `argocd diff` for the production clusters and deploy the changes to the test cluster.
    - If main: deploy to all clusters.
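The branch logic above can be sketched like this. It's a minimal sketch: `deploy` and `diff_prod` are placeholders for the real argocd/kubectl calls, and `CI_COMMIT_BRANCH` is just an assumption about what the CI system exposes:

```shell
#!/bin/sh
# Sketch of the CI branch logic described above. The functions are
# placeholders; a real pipeline would call argocd / kubectl here.
set -eu

deploy()    { echo "deploying to $1"; }        # placeholder for the real sync
diff_prod() { echo "argocd diff against $1"; } # placeholder for argocd app diff

BRANCH="${CI_COMMIT_BRANCH:-feature/my-change}" # variable name is hypothetical

if [ "$BRANCH" = "main" ]; then
  # main: deploy everywhere
  for cluster in cluster1 cluster2 test-cluster; do
    deploy "$cluster"
  done
else
  # feature branch: diff against production, deploy only to the test cluster
  diff_prod cluster1
  diff_prod cluster2
  deploy test-cluster
fi
```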

## So let's try to do it

So we create the first app-of-apps manifest:

```yaml
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: app-of-apps
  namespace: argo-system
spec:
  destination:
    namespace: argo-system
    server: https://kubernetes.default.svc
  project: default
  source:
    path: ./manifests/cluster2/
    repoURL: ssh://git@git.badhouseplants.net/allanger/helmfile-vs-argo.git
    targetRevision: argo-apps-updated
```

Then we need to create the apps:

```yaml
# ./manifests/cluster2/vpa.yaml
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: vpa
  namespace: argo-system
spec:
  destination:
    namespace: vpa-system
    server: https://kubernetes.default.svc
  project: default
  source:
    helm:
      releaseName: vpa
      valueFiles:
        - ../../values/vpa.common.yaml
    path: ./vendor/vpa
    repoURL: ssh://git@git.badhouseplants.net/allanger/helmfile-vs-argo.git
    targetRevision: argo-apps-updated
```

Here we have different options:

- Sync everything automatically (app-of-apps and applications). It doesn't look too fail-safe to me, and we can't see the diff, because whatever differs will be applied immediately. So it's 👎
- Sync only the app-of-apps automatically, and sync applications with the argocd CLI. That sounds better, because then we can run diff on the applications and see the difference between the desired state and the real state, so it's closer to 👍
- Sync applications automatically, but the app-of-apps with the CLI. Doesn't sound too bad, does it? Maybe not as flexible as the previous option, but still not too bad. So it's closer to 👍 too.
- Sync everything with the CLI. I'd say it gives you the best control, but it means additional steps in the pipeline. I don't think that's hard to implement, though, so let's say it's "closer to 👍" too.

I don't consider the first option a reliable one, so I won't even discuss it. You can try, of course, but your changes won't be visible until they are deployed, so it's the "test in production" thing.

Let's have a look at the second one. We'll try adding some values to the vpa release and installing Goldilocks (assuming it wasn't installed).

VPA values:

```yaml
# ./values/vpa.common.yaml
# I've just changed `false` to `true`
updater:
  enabled: true # <- here
```

Goldilocks app:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: goldilocks
  namespace: argo-system
spec:
  destination:
    namespace: vpa-system
    server: https://kubernetes.default.svc
  project: default
  source:
    helm:
      releaseName: goldilocks
    path: ./vendor/goldilocks
    repoURL: ssh://git@git.badhouseplants.net/allanger/helmfile-vs-argo.git
    targetRevision: argo-apps-updated
```

And I pushed it to the repo.

So now let's see what I've got in the UI: Changes in UI

This is how diffs for VPA look in the UI: Diff in UI

{{< details "Here you can find all the diffs from the UI as text" >}}

```diff
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+   labels:
+     app.kubernetes.io/component: updater
+     app.kubernetes.io/instance: vpa
+     app.kubernetes.io/managed-by: Helm
+     app.kubernetes.io/name: vpa
+     app.kubernetes.io/version: 0.11.0
+     argocd.argoproj.io/instance: vpa
+     helm.sh/chart: vpa-1.6.0
+   name: vpa-updater
+   namespace: vpa-system
+ spec:
+   replicas: 1
+   selector:
+     matchLabels:
+       app.kubernetes.io/component: updater
+       app.kubernetes.io/instance: vpa
+       app.kubernetes.io/name: vpa
+   template:
+     metadata:
+       labels:
+         app.kubernetes.io/component: updater
+         app.kubernetes.io/instance: vpa
+         app.kubernetes.io/name: vpa
+     spec:
+       containers:
+         - env:
+             - name: NAMESPACE
+               valueFrom:
+                 fieldRef:
+                   fieldPath: metadata.namespace
+           image: 'k8s.gcr.io/autoscaling/vpa-updater:0.11.0'
+           imagePullPolicy: Always
+           livenessProbe:
+             failureThreshold: 6
+             httpGet:
+               path: /health-check
+               port: metrics
+               scheme: HTTP
+             periodSeconds: 5
+             successThreshold: 1
+             timeoutSeconds: 3
+           name: vpa
+           ports:
+             - containerPort: 8943
+               name: metrics
+               protocol: TCP
+           readinessProbe:
+             failureThreshold: 120
+             httpGet:
+               path: /health-check
+               port: metrics
+               scheme: HTTP
+             periodSeconds: 5
+             successThreshold: 1
+             timeoutSeconds: 3
+           resources:
+             limits:
+               cpu: 200m
+               memory: 1000Mi
+             requests:
+               cpu: 50m
+               memory: 500Mi
+           securityContext: {}
+       securityContext:
+         runAsNonRoot: true
+         runAsUser: 65534
+       serviceAccountName: vpa-updater
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRoleBinding","metadata":{"annotations":{},"labels":{"argocd.argoproj.io/instance":"vpa"},"name":"vpa-actor"},"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"ClusterRole","name":"vpa-actor"},"subjects":[{"kind":"ServiceAccount","name":"vpa-recommender","namespace":"vpa-system"}]}
  labels:
    argocd.argoproj.io/instance: vpa
  managedFields:
    - apiVersion: rbac.authorization.k8s.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:labels':
            .: {}
            'f:argocd.argoproj.io/instance': {}
        'f:roleRef': {}
        'f:subjects': {}
      manager: argocd-application-controller
      operation: Update
      time: '2023-02-13T20:58:02Z'
    - apiVersion: rbac.authorization.k8s.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:annotations':
            .: {}
            'f:kubectl.kubernetes.io/last-applied-configuration': {}
      manager: argocd-controller
      operation: Update
      time: '2023-02-13T20:58:02Z'
  name: vpa-actor
  resourceVersion: '34857'
  uid: 71958267-68b4-4923-b2bb-eaf7b3c1a992
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: vpa-actor
subjects:
  - kind: ServiceAccount
    name: vpa-recommender
    namespace: vpa-system
+  - kind: ServiceAccount
+    name: vpa-updater
+    namespace: vpa-system
+ apiVersion: rbac.authorization.k8s.io/v1
+ kind: ClusterRoleBinding
+ metadata:
+   labels:
+     argocd.argoproj.io/instance: vpa
+   name: vpa-evictionter-binding
+ roleRef:
+   apiGroup: rbac.authorization.k8s.io
+   kind: ClusterRole
+   name: vpa-evictioner
+ subjects:
+   - kind: ServiceAccount
+     name: vpa-updater
+     namespace: vpa-system
+ apiVersion: rbac.authorization.k8s.io/v1
+ kind: ClusterRoleBinding
+ metadata:
+   labels:
+     argocd.argoproj.io/instance: vpa
+   name: vpa-status-reader-binding
+ roleRef:
+   apiGroup: rbac.authorization.k8s.io
+   kind: ClusterRole
+   name: vpa-status-reader
+ subjects:
+   - kind: ServiceAccount
+     name: vpa-updater
+     namespace: vpa-system
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRoleBinding","metadata":{"annotations":{},"labels":{"argocd.argoproj.io/instance":"vpa"},"name":"vpa-target-reader-binding"},"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"ClusterRole","name":"vpa-target-reader"},"subjects":[{"kind":"ServiceAccount","name":"vpa-recommender","namespace":"vpa-system"}]}
  labels:
    argocd.argoproj.io/instance: vpa
  managedFields:
    - apiVersion: rbac.authorization.k8s.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:labels':
            .: {}
            'f:argocd.argoproj.io/instance': {}
        'f:roleRef': {}
        'f:subjects': {}
      manager: argocd-application-controller
      operation: Update
      time: '2023-02-13T20:58:02Z'
    - apiVersion: rbac.authorization.k8s.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:annotations':
            .: {}
            'f:kubectl.kubernetes.io/last-applied-configuration': {}
      manager: argocd-controller
      operation: Update
      time: '2023-02-13T20:58:02Z'
  name: vpa-target-reader-binding
  resourceVersion: '34855'
  uid: 30261740-ad5d-4cd9-b043-0ff18daaf3aa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: vpa-target-reader
subjects:
  - kind: ServiceAccount
    name: vpa-recommender
    namespace: vpa-system
+  - kind: ServiceAccount
+    name: vpa-updater
+    namespace: vpa-system
```

{{< /details >}}

And for Goldilocks: Goldilocks Application

All the diffs are also there, and they look good.

But to see them, I had to push to the target branch. And we want to see changes without pushing.

```yaml
# main
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: app-of-apps
  namespace: argo-system
spec:
  destination:
    namespace: argo-system
    server: https://kubernetes.default.svc
  project: default
  source:
    path: ./manifests/cluster2/
    repoURL: ssh://git@git.badhouseplants.net/allanger/helmfile-vs-argo.git
    targetRevision: argo-apps-main
```

Then we need to create the apps:

```yaml
# ./manifests/cluster2/vpa.yaml
# feature branch
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: vpa
  namespace: argo-system
spec:
  destination:
    namespace: vpa-system
    server: https://kubernetes.default.svc
  project: default
  source:
    helm:
      releaseName: vpa
      valueFiles:
        - ../../values/vpa.common.yaml
    path: ./vendor/vpa
    repoURL: ssh://git@git.badhouseplants.net/allanger/helmfile-vs-argo.git
    targetRevision: argo-apps-main
```

App of apps in the main

So currently the app of apps doesn't know about what's happening in my new branch, and I can't just run `argocd app diff vpa`. So what should I do?

```shell
argocd app diff --help
...
Usage:
  argocd app diff APPNAME [flags]
...
```

That means I can't use it for the new apps that exist only in my branch, because I need to pass an app name, and since the app isn't installed yet, I get something like:

```shell
argocd app diff vpa
FATA[0000] rpc error: code = NotFound desc = error getting application: applications.argoproj.io "vpa" not found
```

There is a `--local` option, but it still requires a name (why, if there is a name in the manifests? 🙃🙃🙃)

```shell
# Just testing out
argocd app diff vpa --local ./manifests/cluster2/
FATA[0000] rpc error: code = NotFound desc = error getting application: applications.argoproj.io "vpa" not found # 🤪
```

Ok, then we can check the app-of-apps:

```shell
argocd app diff app-of-apps --local ./cluster-1.yaml
Warning: local diff without --server-side-generate is deprecated and does not work with plugins. Server-side generation will be the default in v2.7.FATA[0000] error while parsing source parameters: stat cluster-1.yaml/.argocd-source.yaml: not a directory

argocd app diff app-of-apps --local ./cluster-1.yaml --server-side-generate
FATA[0000] rpc error: code = Unknown desc = failed to get app path: ./manifests/cluster2/: app path does not exist

argocd app diff app-of-apps --local ./cluster-2.yaml --server-side-generate --loglevel debug
FATA[0000] rpc error: code = Unknown desc = failed to get app path: ./manifests/cluster2/: app path does not exist
# I can't get it, maybe anybody could tell me what I'm doing wrong?

argocd app diff app-of-apps --local ./cluster-2.yaml
Warning: local diff without --server-side-generate is deprecated and does not work with plugins. Server-side generation will be the default in v2.7.FATA[0000] error while parsing source parameters: stat cluster-2.yaml/.argocd-source.yaml: not a directory
```


```shell
mkdir /tmp/argo-test
cp cluster-2.yaml /tmp/argo-test
argocd app diff app-of-apps --local /tmp/argo-test --loglevel debug

Warning: local diff without --server-side-generate is deprecated and does not work with plugins. Server-side generation will be the default in v2.7.
===== argoproj.io/Application /app-of-apps ======
0a1,15
> apiVersion: argoproj.io/v1alpha1
> kind: Application
> metadata:
>   labels:
>     argocd.argoproj.io/instance: app-of-apps
>   name: app-of-apps
> spec:
>   destination:
>     namespace: argo-system
>     server: https://kubernetes.default.svc
>   project: default
>   source:
>     path: manifests/cluster2/
>     repoURL: ssh://git@git.badhouseplants.net/allanger/helmfile-vs-argo.git
>     targetRevision: argo-apps-main

# If I change the branch for the app of apps target to the current one

cat cluster-2.yaml
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: app-of-apps
  namespace: argo-system
spec:
  destination:
    namespace: argo-system
    server: https://kubernetes.default.svc
  project: default
  source:
    path: ./manifests/cluster2/
    repoURL: ssh://git@git.badhouseplants.net/allanger/helmfile-vs-argo.git
    targetRevision: argo-apps-updated

kubectl apply -f cluster-2.yaml
cp cluster-2.yaml /tmp/argo-test
argocd app diff app-of-apps --local /tmp/argo-test --loglevel debug
Warning: local diff without --server-side-generate is deprecated and does not work with plugins. Server-side generation will be the default in v2.7.
===== argoproj.io/Application /app-of-apps ======
0a1,15
> apiVersion: argoproj.io/v1alpha1
> kind: Application
> metadata:
>   labels:
>     argocd.argoproj.io/instance: app-of-apps
>   name: app-of-apps
> spec:
>   destination:
>     namespace: argo-system
>     server: https://kubernetes.default.svc
>   project: default
>   source:
>     path: ./manifests/cluster2/
>     repoURL: ssh://git@git.badhouseplants.net/allanger/helmfile-vs-argo.git
>     targetRevision: argo-apps-updated
```

I don't really understand what this means. Most probably, I'm just stupid. But what I see is that with `--server-side-generate` it fails with an error I can't decipher, while at the same time telling me not to run it without the flag, because that way is deprecated. And without the flag, it gives me a strange output that I don't know what to do with.

So as I see it, to get a proper diff, you need to apply first. But that doesn't look like a fail-safe or scalable way to work.

I said we could check different options for syncing, but as I see it now, the other workflows won't give me a better overview of what's happening, so I don't think it makes a lot of sense. If I find a way to see a proper diff without applying manifests first, I'll come back to this topic and write one more post.

## Maybe it's because of the App of Apps layer

Let's try installing apps directly. Remove the app-of-apps from k8s and use the manifests from `/manifests/cluster2/` directly. As I see it, diffing won't work anyway for applications that aren't installed yet. So you can check the ones that are already installed, but I couldn't make that work either: I was changing values to check if they would show up in the diff, but they didn't. Again, I could simply have screwed up, so if you have a positive experience with that, don't hesitate to let me know, I'm willing to change my mind.

## Conclusion

So you can check the PR here: https://git.badhouseplants.net/allanger/helmfile-vs-argo/pulls/2/files

I like that values can be handled as normal values files. (Though for handling secrets you might have to add a CMP, which means additional work and maintenance.) But even if adding a CMP is fine, I couldn't get proper diffs for my changes, which means I can't see what's happening without applying manifests. And applying manifests means other team members can't work on other tickets within the same scope, so it looks like a bottleneck to me.

But I don't like that you need to add a lot of manifests to manage all the applications. We have only two manifests, copied from folder to folder, so we have a lot of repeated code, and repeated code is never good. So I would write a tool that lets you choose applications from the list of all applications and the clusters where they need to be deployed, so that the config looks like this:

```yaml
app_path: ./manifests/common
clusters:
  - cluster: cluster1
    applications:
      - vpa
  - cluster: cluster2
    applications:
      - vpa
      - goldilocks
  - cluster: cluster3
    applications:
      - vpa
      - goldilocks
```
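Such a tool could be a thin generator that renders one Application manifest per (cluster, application) pair. Here's a minimal shell sketch of the idea: the manifest fields mirror the examples earlier in the post, the config is inlined in a simplified `cluster app app...` form instead of the YAML above, and every name here (paths, `targetRevision`, namespaces) is hypothetical:

```shell
#!/bin/sh
# Hypothetical generator: emit one Application manifest per (cluster, app)
# pair. A real tool would parse the YAML config; here it's inlined in a
# simplified line-based form.
set -eu

gen_app() {
  cluster="$1"; app="$2"
  mkdir -p "manifests/$cluster"
  cat > "manifests/$cluster/$app.yaml" <<EOF
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: $app
  namespace: argo-system
spec:
  destination:
    namespace: $app-system
    server: https://kubernetes.default.svc
  project: default
  source:
    path: ./manifests/common/$app
    repoURL: ssh://git@git.badhouseplants.net/allanger/helmfile-vs-argo.git
    targetRevision: main
EOF
}

# "cluster app app..." lines mirroring the config above
while read -r cluster apps; do
  for app in $apps; do
    gen_app "$cluster" "$app"
  done
done <<EOF
cluster1 vpa
cluster2 vpa goldilocks
cluster3 vpa goldilocks
EOF
```

Running it produces `manifests/cluster1/vpa.yaml`, `manifests/cluster2/{vpa,goldilocks}.yaml`, and so on, which is exactly the repetition we'd otherwise write by hand.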

But I think that with the whole GitOps pull concept, it would be a hard thing to implement. And in the end it looks like helmfile, so ... 🤷‍♀️🤷‍♀️🤷‍♀️

I can only say that I see no benefit in using Argo like this. It seems like either a very complicated setup (you can most probably implement anything you need; the question is how much time you'll spend on it) or a crippled, incomplete one.

And if you compare the number of lines updated to install these apps as Applications against the helmfile version, it's going to be ~100 vs ~30. That's something else I don't like.

In the next post I will try doing the same with ApplicationSets, and we'll see if it looks better or not.

Thanks,

Oi!

{{< comments >}}