Init Commit

Start following the GitFlow
commit 8c825f7c84 by Nikolai Rodionov, 2023-02-17 15:19:49 +01:00
50 changed files with 4153 additions and 0 deletions

.dockerignore
@@ -0,0 +1 @@
node_modules

.drone.yml
@@ -0,0 +1,84 @@
---
kind: pipeline
type: kubernetes
name: Build badhouseplants.net
steps:
  - name: Publish the Helm chart
    when:
      branch:
        - main
    image: alpine/helm
    environment:
      GITEA_TOKEN:
        from_secret: GITEA_TOKEN
    commands:
      - helm plugin install https://github.com/chartmuseum/helm-push
      - helm package chart -d chart-package
      - helm repo add --username allanger --password $GITEA_TOKEN allanger-charts https://git.badhouseplants.net/api/packages/allanger/helm
      - helm cm-push "./chart-package/$(ls chart-package)" allanger-charts
  - name: Init git submodules
    image: alpine/git
    when:
      branch:
        - main
    commands:
      - git submodule update --init --recursive
  - name: Get static content
    image: rclone/rclone:latest
    when:
      branch:
        - main
    environment:
      RCLONE_CONFIG_CONTENT:
        from_secret: RCLONE_CONFIG_CONTENT
      RCLONE_CONFIG: /tmp/rclone.conf
    commands:
      - echo "$RCLONE_CONFIG_CONTENT" > $RCLONE_CONFIG
      - rclone copy -P badhouseplants-public:/badhouseplants-static static
  - name: Build and push the docker image
    when:
      branch:
        - main
    image: plugins/docker
    settings:
      registry: git.badhouseplants.net
      username: allanger
      password:
        from_secret: GITEA_TOKEN
      repo: git.badhouseplants.net/allanger/badhouseplants-net
      tags: latest
    depends_on:
      - Init git submodules
      - Get static content
---
kind: pipeline
type: kubernetes
name: CV Builder
trigger:
  branch:
    - main
steps:
  - name: Build the CV
    image: ghcr.io/puppeteer/puppeteer
    commands:
      - cp -R ./content/cv/* $HOME
      - cd $HOME
      - npm install md-to-pdf
      - npx md-to-pdf index.md
      - mkdir $DRONE_WORKSPACE/cv
      - mv index.pdf $DRONE_WORKSPACE/cv/n.rodionov.pdf
  - name: Upload the CV
    image: rclone/rclone:latest
    environment:
      RCLONE_CONFIG_CONTENT:
        from_secret: RCLONE_CONFIG_CONTENT_PRIVATE
      RCLONE_CONFIG: /tmp/rclone.conf
    commands:
      - echo "$RCLONE_CONFIG_CONTENT" > $RCLONE_CONFIG
      - rclone copy -P $DRONE_WORKSPACE/cv badhouseplants-minio:/public-download

.gitignore
@@ -0,0 +1,3 @@
node_modules
static
content/cv/index.pdf

.gitmodules
@@ -0,0 +1,6 @@
[submodule "themes/ananke"]
	path = themes/ananke
	url = https://github.com/theNewDynamic/gohugo-theme-ananke
[submodule "themes/papermod"]
	path = themes/papermod
	url = https://github.com/adityatelange/hugo-PaperMod.git

.hugo_build.lock (empty file)

Dockerfile
@@ -0,0 +1,13 @@
FROM alpine:latest AS builder
WORKDIR /src
ARG GOHUGO_LINK=https://github.com/gohugoio/hugo/releases/download/v0.110.0/hugo_0.110.0_linux-amd64.tar.gz
RUN apk update && apk add curl tar
RUN curl -LJO ${GOHUGO_LINK} && tar -xf hugo_0.110.0_linux-amd64.tar.gz
RUN chmod +x /src/hugo
FROM alpine:latest
WORKDIR /src
COPY --from=builder /src/hugo /usr/bin/hugo
COPY . /src
ENTRYPOINT ["/usr/bin/hugo"]
CMD ["--help"]

Makefile
@@ -0,0 +1,5 @@
upload_static:
	rclone copy -P static badhouseplants-minio:/badhouseplants-static
get_static:
	rclone copy -P badhouseplants-public:/badhouseplants-static static

README.md
@@ -0,0 +1,4 @@
# Badhouseplants NET
## Static content
Storing static content in the repo is painful, because those files are massive. That's why I'm storing them in an S3 bucket that is publicly available for downstream operations
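The bucket is accessed with `rclone` (see the `Makefile`). A minimal remote definition for it might look like this in `rclone.conf` (a sketch only; the provider and endpoint are assumptions, not the actual config):

```INI
# Hypothetical rclone remote for the public static-content bucket
[badhouseplants-public]
type = s3
provider = Minio
endpoint = https://s3.badhouseplants.net
```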

archetypes/default.md
@@ -0,0 +1,12 @@
---
title: "{{ replace .Name "-" " " | title }}"
date: {{ .Date }}
draft: true
ShowToc: true
cover:
  image: "cover.png"
  caption: "{{ replace .Name "-" " " | title }}"
  relative: false
  responsiveImages: false
---

chart/.helmignore
@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

chart/Chart.yaml
@@ -0,0 +1,6 @@
apiVersion: v2
name: badhouseplants-net
description: A Helm chart for Kubernetes
type: application
version: 0.1.12
appVersion: "1.16.0"

chart/templates/NOTES.txt
@@ -0,0 +1,22 @@
1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range $host := .Values.ingress.hosts }}
  {{- range .paths }}
  http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ .path }}
  {{- end }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
  export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "badhouseplants-net.fullname" . }})
  export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
  echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
     NOTE: It may take a few minutes for the LoadBalancer IP to be available.
           You can watch the status of it by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "badhouseplants-net.fullname" . }}'
  export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "badhouseplants-net.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
  echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
  export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "badhouseplants-net.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
  export CONTAINER_PORT=$(kubectl get pod --namespace {{ .Release.Namespace }} $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8080:$CONTAINER_PORT
{{- end }}

@@ -0,0 +1,62 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "badhouseplants-net.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "badhouseplants-net.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "badhouseplants-net.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "badhouseplants-net.labels" -}}
helm.sh/chart: {{ include "badhouseplants-net.chart" . }}
{{ include "badhouseplants-net.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "badhouseplants-net.selectorLabels" -}}
app.kubernetes.io/name: {{ include "badhouseplants-net.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "badhouseplants-net.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "badhouseplants-net.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}

@@ -0,0 +1,58 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "badhouseplants-net.fullname" . }}
  labels:
    {{- include "badhouseplants-net.labels" . | nindent 4 }}
  {{- with .Values.deployAnnotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "badhouseplants-net.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "badhouseplants-net.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: {{ .Values.service.port }}
              protocol: TCP
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          command:
{{ toYaml .Values.command | indent 12 }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}

@@ -0,0 +1,61 @@
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "badhouseplants-net.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
{{- if and .Values.ingress.className (not (semverCompare ">=1.18-0" .Capabilities.KubeVersion.GitVersion)) }}
  {{- if not (hasKey .Values.ingress.annotations "kubernetes.io/ingress.class") }}
  {{- $_ := set .Values.ingress.annotations "kubernetes.io/ingress.class" .Values.ingress.className}}
  {{- end }}
{{- end }}
{{- if semverCompare ">=1.19-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1
{{- else if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
    {{- include "badhouseplants-net.labels" . | nindent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  {{- if and .Values.ingress.className (semverCompare ">=1.18-0" .Capabilities.KubeVersion.GitVersion) }}
  ingressClassName: {{ .Values.ingress.className }}
  {{- end }}
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: {{ .path }}
            {{- if and .pathType (semverCompare ">=1.18-0" $.Capabilities.KubeVersion.GitVersion) }}
            pathType: {{ .pathType }}
            {{- end }}
            backend:
              {{- if semverCompare ">=1.19-0" $.Capabilities.KubeVersion.GitVersion }}
              service:
                name: {{ $fullName }}
                port:
                  number: {{ $svcPort }}
              {{- else }}
              serviceName: {{ $fullName }}
              servicePort: {{ $svcPort }}
              {{- end }}
          {{- end }}
    {{- end }}
{{- end }}

@@ -0,0 +1,15 @@
apiVersion: v1
kind: Service
metadata:
  name: {{ include "badhouseplants-net.fullname" . }}
  labels:
    {{- include "badhouseplants-net.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
  selector:
    {{- include "badhouseplants-net.selectorLabels" . | nindent 4 }}

chart/values.yaml
@@ -0,0 +1,73 @@
replicaCount: 1
image:
  repository: git.badhouseplants.net/allanger/badhouseplants-net
  pullPolicy: Always
  tag: latest
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
deployAnnotations:
  keel.sh/trigger: poll
  keel.sh/policy: 'force'
podSecurityContext: {}
  # fsGroup: 2000
command:
  - "/bin/sh"
  - "-c"
  - "hugo server --bind 0.0.0.0 -p 80 -b https://badhouseplants.net/ --appendPort=false"
securityContext: {}
  # capabilities:
  #   drop:
  #     - ALL
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000
service:
  type: ClusterIP
  port: 80
ingress:
  enabled: false
  annotations:
    kubernetes.io/ingress.class: istio
  hosts:
    - host: badhouseplants.net
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: badhouseplants-wildcard-tls
      hosts:
        - badhouseplants.net
resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi
autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80
  # targetMemoryUtilizationPercentage: 80
nodeSelector: {}
tolerations: []
affinity: {}

config.yaml
@@ -0,0 +1,65 @@
baseURL: 'https://badhouseplants.net/'
languageCode: 'en-us'
title: 'Bad Houseplants'
theme: 'papermod'
menu:
  main:
    - name: Posts
      url: /posts
      weight: 10
    - name: Music
      url: /music
      weight: 11
    - name: Beats
      url: /beats
      weight: 12
    - name: About
      url: /about
      weight: 13
    - name: Search
      url: /search
      weight: 14
taxonomies:
  tag: tags
params:
  ShowBreadCrumbs: true
  ShowReadingTime: true
  ShowPostNavLinks: true
  ShowCodeCopyButtons: true
  profileMode:
    enabled: true
    title: "Bad Houseplants"
    subtitle: "... by allanger"
    imageUrl: "Finish.png"
    imageWidth: 150
    imageHeight: 150
    buttons:
      - name: Source
        url: "https://git.badhouseplants.net/allanger/badhouseplants-net"
      - name: My Music
        url: "https://funkwhale.badhouseplants.net/library/artists"
  socialIcons:
    - name: "telegram"
      url: "https://t.me/allanger"
    - name: "twitter"
      url: "https://twitter.com/_allanger"
    - name: "mastodon"
      url: "https://mastodon.social/@allanger"
    - name: github
      url: 'https://github.com/allanger'
    - name: email
      url: 'mailto:allanger@zohomail.com'
  ShowShareButtons: true
  ShareButtons: ["telegram", "twitter", "reddit", "linkedin"]
  env: production
  title: Bad Houseplants
  description: "...by allanger"
  keywords: [Blog, Portfolio]
  author: allanger
  DateFormat: "January 2, 2006"
  defaultTheme: auto
outputs:
  home:
    - HTML
    - RSS
    - JSON

content/about/_index.md
@@ -0,0 +1,47 @@
---
title: About
date: 2023-01-24T09:26:52+01:00
draft: false
---
> It was supposed to be just yet another web page with reviews of musical releases, but after trying to write something about them, I found out that I'm not good at it. So it's just a blog where I'm talking about everything that comes to my mind.
[![Build Status](https://drone.badhouseplants.net/api/badges/allanger/badhouseplants-net/status.svg?ref=refs/heads/main)](https://drone.badhouseplants.net/allanger/badhouseplants-net/latest)
### Who am I?
> If you're hiring, you can find [my CV here]({{< ref "cv" >}})
I'm a musician and a geek who works full-time as a DevOps engineer, whatever that means. Thanks to my job, I know how to run self-hosted services pretty well, and that helps me achieve my goal of bringing indie culture everywhere I can. I'm trying to separate myself from global companies as a user as much as possible in my daily life.
Also, I'm a Linux lover, which doesn't really correlate with my will to make music. I hope that one day developers will see that Linux is a real OS that can be used as a daily driver, and that building software for Linux is just as important as building for macOS and Windows. I hope that we will be able to use not only open-source solutions on Linux, but also closed-source proprietary ones.
### Music, Beats and Arrangements
## Music
> I always thought I was a musician
[Check out what I've got](https://funkwhale.badhouseplants.net)
You can find everything I consider ready enough to be shown on my [FunkWhale](https://funkwhale.badhouseplants.net/library) instance. Also, my music can be found on many streaming services, and yes, I know that it's not a very independent way of doing things, but it's one of many exceptions 🙃.
All of my beats are waiting for somebody to do something with them. I'm giving them all away for a donation, so if you happen to like any, just shoot me a message. I can re-arrange and remix them as much as needed. I can also mix your tracks, and I really want to do that; it doesn't matter what kind of music it is, I'm ready to work with everything, if I like it *(at least a little bit)*.
## IT
> I'm a DevOps after all
[Visit my gitea](https://git.badhouseplants.net)
Since I'm a DevOps engineer, I work a lot with Kubernetes, containers, Linux, etc. And that's the root of my intention to move to Linux completely.
I hope I will make my contribution to the world of Linux music production too. I'm hosting my own Gitea instance, where you will be able to find all my code (or almost all of it).
If you made it here, you might think that this is the point of all of this existing: a self-hosted blog, a music streaming service, and git. **This guy is just a fucking geek!**
And yes, you're partially right. But the main reason it exists is that I'm trying to follow and promote `indie/punk` culture, which doesn't only apply to the arts. And that's going to be covered in my posts, I hope.
---
### If you're still here,
I'm looking for people with the same mindset as me, to make music or to code together, or anything. So I would be happy to get connections on [Mastodon](https://mastodon.social/@allanger)

content/beats/_index.md
@@ -0,0 +1,53 @@
---
title: Beats
date: 2023-01-24T09:26:52+01:00
draft: false
---
>I don't lease my beats. If you happen to like anything, just shoot me a message and we will come to an agreement. And if you decide to use any of my beats, you'll be the only one using it (legally).
---
### Easy Money
{{< rawhtml >}}
<iframe width="100%" height="150" scrolling="no" frameborder="no" src="https://funkwhale.badhouseplants.net/front/embed.html?&amp;type=track&amp;id=18"></iframe>
{{< /rawhtml >}}
### Phantom Limb
{{< rawhtml >}}
<iframe width="100%" height="150" scrolling="no" frameborder="no" src="https://funkwhale.badhouseplants.net/front/embed.html?&amp;type=track&amp;id=19"></iframe>
{{< /rawhtml >}}
### Ark
{{< rawhtml >}}
<iframe width="100%" height="150" scrolling="no" frameborder="no" src="https://funkwhale.badhouseplants.net/front/embed.html?&amp;type=track&amp;id=21"></iframe>
{{< /rawhtml >}}
### Tremor
{{< rawhtml >}}
<iframe width="100%" height="150" scrolling="no" frameborder="no" src="https://funkwhale.badhouseplants.net/front/embed.html?&amp;type=track&amp;id=24"></iframe>
{{< /rawhtml >}}
### Empty Cubicles
{{< rawhtml >}}
<iframe width="100%" height="150" scrolling="no" frameborder="no" src="https://funkwhale.badhouseplants.net/front/embed.html?&amp;type=track&amp;id=23"></iframe>
{{< /rawhtml >}}
### Body Drop
{{< rawhtml >}}
<iframe width="100%" height="150" scrolling="no" frameborder="no" src="https://funkwhale.badhouseplants.net/front/embed.html?&amp;type=track&amp;id=20"></iframe>
{{< /rawhtml >}}
### Broken Piano
{{< rawhtml >}}
<iframe width="100%" height="150" scrolling="no" frameborder="no" src="https://funkwhale.badhouseplants.net/front/embed.html?&amp;type=track&amp;id=22"></iframe>
{{< /rawhtml >}}
### Dead Wings
{{< rawhtml >}}
<iframe width="100%" height="150" scrolling="no" frameborder="no" src="https://funkwhale.badhouseplants.net/front/embed.html?&amp;type=track&amp;id=25"></iframe>
{{< /rawhtml >}}
### Trapped
{{< rawhtml >}}
<iframe width="100%" height="150" scrolling="no" frameborder="no" src="https://funkwhale.badhouseplants.net/front/embed.html?&amp;type=track&amp;id=17"></iframe>
{{< /rawhtml >}}

content/cv/index.md
@@ -0,0 +1,97 @@
---
title: "Curriculum Vitae (CV)"
date: 2023-02-11T18:29:00+01:00
draft: false
ShowToc: true
---
# Nikolai Rodionov
```
> Location: Düsseldorf, Germany
> Email: allanger@zohomail.com (preferred)
> Phone: 015223284008
> Github: https://github.com/allanger
```
---
## About me
<p align="center">
<img src="./myself.jpeg" alt="drawing" width="30%"/>
</p>
I'm a DevOps engineer (or SRE if you wish) with 5+ years of hands-on experience with a decent amount of tools that are most probably used or going to be used in your company. One of the most important tools that I love working with, and want to continue working with, is Kubernetes. At least while I don't see any better alternative to it. I think that containers themselves are one of the coolest inventions in development, and I'm trying to use them as much as possible. Also, I believe that every routine must be automated, because routine is a boring job that makes people lose focus and make mistakes.
I think that there are several things that a good SRE or DevOps engineer must be able to do:
- To build reliable and stable infrastructure
- Keep this infrastructure up-to-date
- Keep all the source and instructions of this infrastructure clean and simple
- Avoid a human factor as long as possible
- And when it's not possible to avoid it, not to be afraid to take responsibility
Also, I think it's important that, before implementing anything, an engineer understands all the requirements and checks which tools can fulfil them. I often see how people pick a tool for its name rather than its functionality, and hence they have to do a lot of additional work and deal with compromises. And if nothing really can fulfil those requirements, you needn't be afraid of writing something new *and open-sourcing it*.
<div class="page-break"></div>
## Experience
**Klöckner-i**: DevOps Engineer
> 01.2022 - until now
```
| GCloud - Microsoft Azure
| Linux - Containers - Kubernetes
| Helm - Helmfile
| Percona Mysql - Postgresql
| Bash
| Prometheus - Grafana - Elasticsearch - Kibana
| ArgoCD - Gitlab CI - Github Actions
| Sops
| Ansible
```
---
**Itigris**: DevOps Engineer
> 07.2019 - 12.2021
```
| AWS - Yandex Cloud
| Linux - Containers - Kubernetes
| Helm - Helmfile - Kustomize
| Bash
| Gitlab CI - Drone - ArgoCD
| Postgresql - Redis
| Java - JS - Go
| Ansible - Terraform
| Prometheus - Grafana - Loki - Elasticsearch - Kibana
```
---
**Etersoft**: DevOps Engineer
> 03.2017 - 06.2019
```
| Bare metal - Proxmox - Virtual Box
| Linux - Containers - Networks
| Bash - Perl
| Mysql - Postgresql
| Minio - Ceph
| Gitlab CI
| Ansible
```
<div class="page-break"></div>
## A little bit more about me
- I love to work with `Kubernetes`, but not with `yaml`.
- I'm a huge fan of [Helmfile](https://github.com/helmfile/helmfile).
- I have written several small cli tools in Rust, that you might find in my [GitHub profile pins](https://github.com/allanger) (they are not perfect, but I'm working on it).
- I'm contributing to [db-operator](https://github.com/kloeckner-i/db-operator).
- I'm trying to automate everything, as long as I don't lose control over what's automated.
- I love Perl, although I don't even remember how to write code in it, but I would be thrilled to have an opportunity to work with it in production.
- I also think that everything is better in Rust, or at least in Go *(if Bash is not enough)*.
I have a blog (written-as-code) that is deployed to K8s (<https://badhouseplants.net/>), with the source code stored in a self-hosted Gitea that is also deployed to K8s, alongside the CI/CD system where this blog is built and published. This CV is also built as part of the CI process and then uploaded to `minio` storage that is also ~~surprisingly~~ running in this cluster. So you can download the latest version of my CV here: <https://s3.badhouseplants.net/public-download/n.rodionov.pdf>
> But I can't guarantee 100% availability, because it's a one-node k8s, and sometimes I need to do maintenance work

content/cv/myself.jpeg (binary file, 104 KiB)

content/music/index.md
@@ -0,0 +1,49 @@
---
title: "Music"
date: 2023-01-31T13:52:43+01:00
draft: false
ShowToc: true
---
Everything that's created by me can be found on my [funkwhale instance](https://funkwhale.badhouseplants.net). But I'm only uploading `lossy` files there. I was trying to upload lossless, but then it either doesn't really work with my Android app, or it's hard to manage. And it needs way more disk space that way. So if you want to listen to lossless, go to my [Bandcamp](https://allanger.bandcamp.com/). *A lot of tracks are still not there, but they will be soon*. I also have a [SoundCloud account](https://soundcloud.com/allanger) and I try to publish everything there.
---
### allanger
[Spotify](https://open.spotify.com/artist/1VPAs75xrhaXhCIIHsgF02) - [Apple Music](https://music.apple.com/us/artist/allanger/1617855325) - [Deezer](https://www.deezer.com/us/artist/117780712) - [SoundCloud](https://soundcloud.com/allanger) - [Bandcamp](https://allanger.bandcamp.com/) - [Funkwhale](https://funkwhale.badhouseplants.net/library/artists/3/)
#### Anymore
> In this song, I'm using samples from a YouTube video, so I'm not sure that I can distribute it on all platforms. That's why it exists only on SoundCloud and Funkwhale
>![Cover](/music/allanger-Anymore.jpg)
>Release Date: 2018-12-26
>
>Genre: Indie
>
> Sub Genre: Lo-Fi Indie
[SoundCloud](https://soundcloud.com/allanger/anymore) - [Funkwhale](https://funkwhale.badhouseplants.net/library/albums/11/)
### Oveleane
> It's another project of mine; I just thought that the electronic stuff wouldn't fit well in allanger's profile, so I decided to separate them. But it's still allanger, you know...
[Spotify](https://open.spotify.com/artist/2PKE1XvwP82LCacM5q6rCx?si=hJyJWcEgR4mZLkjbCso45A) - [Apple Music](https://music.apple.com/us/artist/oveleane/1654951021) - [Deezer](https://www.deezer.com/us/artist/190392997)
#### Four Steps Behind
>![Cover](/music/Oveleane%20-%20Four%20Steps%20Behind.jpg)
>Release Date: 2022-12-05
>
>Genre: Electronic
>
>Sub Genre: IDM/Experimental
[Spotify](https://open.spotify.com/album/1RjB1xLoD2JXmWuBjGegCN?si=fIsGrOfoQRaeKu9f-Oh0dw) - [Apple Music](https://music.apple.com/us/album/1654953305) - [Deezer](https://www.deezer.com/us/album/377293977) - [Funkwhale](https://funkwhale.badhouseplants.net/library/albums/1/)
{{< rawhtml >}}
<iframe width="100%" height="330" scrolling="no" frameborder="no" src="https://funkwhale.badhouseplants.net/front/embed.html?&amp;type=album&amp;id=1"></iframe>
{{< /rawhtml >}}

content/posts/_index.md (empty file)

(binary image file, 292 KiB)

@@ -0,0 +1,574 @@
---
title: "ArgoCD vs Helmfile: Applications"
date: 2023-02-13T12:14:09+01:00
draft: false
cover:
  image: "cover.png"
  caption: "ArgoCD"
  relative: false
  responsiveImages: false
ShowToc: true
---
> So as promised in [the previous ArgoCD post]({{< ref "dont-use-argocd-for-infrastructure" >}}), I'll try to show a simple example of Pull Requests for different kinds of setups. This is the first part. Putting everything in the same post seems kind of too much.
# Intro
I've created three `main` branches and three branches for installing two applications. I assume we have two production clusters (if you've read the previous post, you know that by saying 'production', I mean production for the SRE team, so they can be dev/stage/whatever for other teams) and one test cluster (the one where the SRE team can test anything without affecting other teams)
You can already check all of them here: <https://git.badhouseplants.net/allanger/helmfile-vs-argo/pulls>
I've decided to install [Vertical pod autoscaler](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler) on both prod clusters and [goldilocks](https://github.com/FairwindsOps/goldilocks) on only one of them. Therefore, I have to add both to the test cluster as well. Also, I've promised that I'd implement the CI/CD for all of those solutions, but I think it's going to be enough just to describe the logic. If you really want to see different implementations of CI/CD, you can shoot me a message, and I will write another post then.
# Applications (An App of Apps)
So here is the PR for installing applications with `Application` manifests.
<https://git.badhouseplants.net/allanger/helmfile-vs-argo/pulls/2/files>
I've chosen to follow the `App of apps` pattern, because it covers the changes you'd have to make both with a "direct" application installation and with `app of apps`. So let's have a look at the main manifests; here you can see the base: <https://git.badhouseplants.net/allanger/helmfile-vs-argo/src/branch/argo-apps-main>
Initially, I thought of using only one "Big Application" manifest for all three clusters, but I found out that it's not so easy when your clusters don't have exactly the same infrastructure. Even with multi-source apps, you will probably have to use an additional tool for templating/substituting, for example like this:
```YAML
# app-of-apps.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: app-of-apps
  namespace: argo-system
spec:
  destination:
    namespace: argo-system
    server: https://kubernetes.default.svc
  project: system
  sources:
    - path: ./manifests/$CLUSTER
      repoURL: git@git.badhouseplants.net:allanger/helmfile-vs-argo.git
      targetRevision: argo-apps-main
    - path: ./manifests/common
      repoURL: git@git.badhouseplants.net:allanger/helmfile-vs-argo.git
      targetRevision: argo-apps-main
```
and then, in a pipeline do something like this:
```BASH
export CLUSTER=cluster1
envsubst < app-of-apps.yaml | kubectl apply -f - # substitute $CLUSTER in the manifest, then pipe it to kubectl
```
So it's either additional files or additional logic in CI/CD.
Also, the `helm-freeze` thing. I wanted to vendor the charts, because this example requires it, but my Gitea instance can't preview file changes when there are 9000+ lines of code updated, so I had to remove them.
But the logic would be like this:
- Manual part:
- Update `helm-freeze.yaml`
- Run `helm-freeze sync`
- Add a new application to the `manifests/$CLUSTER` dir
- Push
- CI/CD
- Since it needs to be `GitOps`, you need to check that the charts in the `vendor` dir are up-to-date with `helm-freeze.yaml`. *Because if you updated `helm-freeze.yaml` and forgot to execute `helm-freeze sync`, you will have a contradiction between the actual and desired states. That's one of the reasons why I don't like this kind of vendoring: it's either an additional step in CI that verifies the manual step was done, or additional work for the reviewer. You could also add an action that executes it within the pipeline and pushes to your branch, but I'm completely against that (something for another post, maybe).*
- Then depending on a branch:
- If not `main`
> Then you need to run `argocd diff` for the production clusters, and deploy the changes to the test cluster
- If `main`
> Deploy to all clusters
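As a sketch, the CI logic above could be expressed as Drone steps like these (illustrative only; the image names, app names, and step wiring are assumptions, not a tested pipeline):

```YAML
# Hypothetical steps: verify the vendored charts, diff against prod on every
# branch, and only sync on main.
steps:
  - name: Verify vendored charts
    image: alpine/helm # assuming helm-freeze is available in the step
    commands:
      - helm-freeze sync
      - git diff --exit-code -- vendor/ # fails the build if vendor/ drifted
  - name: Diff applications
    image: argoproj/argocd # illustrative image
    commands:
      - argocd app diff app-of-apps
  - name: Sync applications
    when:
      branch:
        - main
    image: argoproj/argocd
    commands:
      - argocd app sync app-of-apps
```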
So let's try to do it.
First, we create the `app-of-apps` manifest:
```YAML
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: app-of-apps
  namespace: argo-system
spec:
  destination:
    namespace: argo-system
    server: https://kubernetes.default.svc
  project: default
  source:
    path: ./manifests/cluster2/
    repoURL: ssh://git@git.badhouseplants.net/allanger/helmfile-vs-argo.git
    targetRevision: argo-apps-updated
```
Then we need to create apps
```YAML
# ./manifests/cluster2/vpa.yaml
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: vpa
  namespace: argo-system
spec:
  destination:
    namespace: vpa-system
    server: https://kubernetes.default.svc
  project: default
  source:
    helm:
      releaseName: vpa
      valueFiles:
        - ../../values/vpa.common.yaml
    path: ./vendor/vpa
    repoURL: ssh://git@git.badhouseplants.net/allanger/helmfile-vs-argo.git
    targetRevision: argo-apps-updated
```
Here we have different options.
- Sync everything automatically (app-of-apps and applications). It doesn't look too fail-safe to me, and we can't know the diff, because whatever is different will be applied immediately. So it's 👎
- Sync only the `app-of-apps` automatically, and then sync applications with the `argocd` CLI. This sounds better, because then we can run diff on applications and know the difference between the desired state and the real state, so it's closer to 👍
- Sync applications automatically, but the app-of-apps with the CLI. Doesn't sound too bad, does it? Maybe not as flexible as the previous option, but still not too bad. So it's closer to 👍 too.
- Sync everything with the CLI. I would say it gives you the best control, but it means additional steps in the pipeline. I don't think that's hard to implement though, so let's say it's closer to 👍 too.
I don't consider the **first** option a reliable one, so I won't even discuss it. You can try, of course, but your changes won't be visible until they are deployed. So it's like the "test in production" thing.
Now let's have a look at the **second** one. Let's try adding some values to the `vpa` release and installing Goldilocks (assuming it wasn't installed).
VPA values:
```YAML
# ./values/vpa.common.yaml
# I've just changed `false` to `true`
updater:
enabled: true # <- here
```
Goldilocks app:
```YAML
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: goldilocks
namespace: argo-system
spec:
destination:
namespace: vpa-system
server: https://kubernetes.default.svc
project: default
source:
helm:
releaseName: goldilocks
path: ./vendor/goldilocks
repoURL: ssh://git@git.badhouseplants.net/allanger/helmfile-vs-argo.git
targetRevision: argo-apps-updated
```
And I pushed it to the repo.
Now let's see what I've got in the UI:
![Changes in UI](/argocd-vs-helmfile/update-in-ui.png)
This is how `diffs` for VPA look in the UI:
![Diff in UI](/argocd-vs-helmfile/diff-in-ui.png)
{{< details "Here you can find all the diffs from the UI as text" >}}
```diff
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ labels:
+ app.kubernetes.io/component: updater
+ app.kubernetes.io/instance: vpa
+ app.kubernetes.io/managed-by: Helm
+ app.kubernetes.io/name: vpa
+ app.kubernetes.io/version: 0.11.0
+ argocd.argoproj.io/instance: vpa
+ helm.sh/chart: vpa-1.6.0
+ name: vpa-updater
+ namespace: vpa-system
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app.kubernetes.io/component: updater
+ app.kubernetes.io/instance: vpa
+ app.kubernetes.io/name: vpa
+ template:
+ metadata:
+ labels:
+ app.kubernetes.io/component: updater
+ app.kubernetes.io/instance: vpa
+ app.kubernetes.io/name: vpa
+ spec:
+ containers:
+ - env:
+ - name: NAMESPACE
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.namespace
+ image: 'k8s.gcr.io/autoscaling/vpa-updater:0.11.0'
+ imagePullPolicy: Always
+ livenessProbe:
+ failureThreshold: 6
+ httpGet:
+ path: /health-check
+ port: metrics
+ scheme: HTTP
+ periodSeconds: 5
+ successThreshold: 1
+ timeoutSeconds: 3
+ name: vpa
+ ports:
+ - containerPort: 8943
+ name: metrics
+ protocol: TCP
+ readinessProbe:
+ failureThreshold: 120
+ httpGet:
+ path: /health-check
+ port: metrics
+ scheme: HTTP
+ periodSeconds: 5
+ successThreshold: 1
+ timeoutSeconds: 3
+ resources:
+ limits:
+ cpu: 200m
+ memory: 1000Mi
+ requests:
+ cpu: 50m
+ memory: 500Mi
+ securityContext: {}
+ securityContext:
+ runAsNonRoot: true
+ runAsUser: 65534
+ serviceAccountName: vpa-updater
```
```DIFF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: >
{"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRoleBinding","metadata":{"annotations":{},"labels":{"argocd.argoproj.io/instance":"vpa"},"name":"vpa-actor"},"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"ClusterRole","name":"vpa-actor"},"subjects":[{"kind":"ServiceAccount","name":"vpa-recommender","namespace":"vpa-system"}]}
labels:
argocd.argoproj.io/instance: vpa
managedFields:
- apiVersion: rbac.authorization.k8s.io/v1
fieldsType: FieldsV1
fieldsV1:
'f:metadata':
'f:labels':
.: {}
'f:argocd.argoproj.io/instance': {}
'f:roleRef': {}
'f:subjects': {}
manager: argocd-application-controller
operation: Update
time: '2023-02-13T20:58:02Z'
- apiVersion: rbac.authorization.k8s.io/v1
fieldsType: FieldsV1
fieldsV1:
'f:metadata':
'f:annotations':
.: {}
'f:kubectl.kubernetes.io/last-applied-configuration': {}
manager: argocd-controller
operation: Update
time: '2023-02-13T20:58:02Z'
name: vpa-actor
resourceVersion: '34857'
uid: 71958267-68b4-4923-b2bb-eaf7b3c1a992
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: vpa-actor
subjects:
- kind: ServiceAccount
name: vpa-recommender
namespace: vpa-system
+ - kind: ServiceAccount
+ name: vpa-updater
+ namespace: vpa-system
```
```DIFF
+ apiVersion: rbac.authorization.k8s.io/v1
+ kind: ClusterRoleBinding
+ metadata:
+ labels:
+ argocd.argoproj.io/instance: vpa
+ name: vpa-evictionter-binding
+ roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: vpa-evictioner
+ subjects:
+ - kind: ServiceAccount
+ name: vpa-updater
+ namespace: vpa-system
```
```DIFF
+ apiVersion: rbac.authorization.k8s.io/v1
+ kind: ClusterRoleBinding
+ metadata:
+ labels:
+ argocd.argoproj.io/instance: vpa
+ name: vpa-status-reader-binding
+ roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: vpa-status-reader
+ subjects:
+ - kind: ServiceAccount
+ name: vpa-updater
+ namespace: vpa-system
```
```DIFF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: >
{"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRoleBinding","metadata":{"annotations":{},"labels":{"argocd.argoproj.io/instance":"vpa"},"name":"vpa-target-reader-binding"},"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"ClusterRole","name":"vpa-target-reader"},"subjects":[{"kind":"ServiceAccount","name":"vpa-recommender","namespace":"vpa-system"}]}
labels:
argocd.argoproj.io/instance: vpa
managedFields:
- apiVersion: rbac.authorization.k8s.io/v1
fieldsType: FieldsV1
fieldsV1:
'f:metadata':
'f:labels':
.: {}
'f:argocd.argoproj.io/instance': {}
'f:roleRef': {}
'f:subjects': {}
manager: argocd-application-controller
operation: Update
time: '2023-02-13T20:58:02Z'
- apiVersion: rbac.authorization.k8s.io/v1
fieldsType: FieldsV1
fieldsV1:
'f:metadata':
'f:annotations':
.: {}
'f:kubectl.kubernetes.io/last-applied-configuration': {}
manager: argocd-controller
operation: Update
time: '2023-02-13T20:58:02Z'
name: vpa-target-reader-binding
resourceVersion: '34855'
uid: 30261740-ad5d-4cd9-b043-0ff18daaf3aa
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: vpa-target-reader
subjects:
- kind: ServiceAccount
name: vpa-recommender
namespace: vpa-system
+ - kind: ServiceAccount
+ name: vpa-updater
+ namespace: vpa-system
```
{{< /details >}}
And for Goldilocks
![Goldilocks Application](/argocd-vs-helmfile/goldilocks-ui.png)
All the diffs are also there, and they look good.
But to see them, I had to push to the target branch. And we want to see changes without pushing.
```YAML
# main
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: app-of-apps
namespace: argo-system
spec:
destination:
namespace: argo-system
server: https://kubernetes.default.svc
project: default
source:
path: ./manifests/cluster2/
repoURL: ssh://git@git.badhouseplants.net/allanger/helmfile-vs-argo.git
targetRevision: argo-apps-main
```
Then we need to create apps
```YAML
# ./manifests/cluster2/vpa.yaml
# feature branch
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: vpa
namespace: argo-system
spec:
destination:
namespace: vpa-system
server: https://kubernetes.default.svc
project: default
source:
helm:
releaseName: vpa
valueFiles:
- ../../values/vpa.common.yaml
path: ./vendor/vpa
repoURL: ssh://git@git.badhouseplants.net/allanger/helmfile-vs-argo.git
targetRevision: argo-apps-main
```
![App of apps in the `main`](/argocd-vs-helmfile/app-of-apps-main.png)
So currently the app of apps doesn't know about what's happening in my new branch, and I can't just run `argocd app diff vpa`. So what should I do?
```BASH
argocd app diff --help
...
Usage:
argocd app diff APPNAME [flags]
...
```
That means that I can't use it for those new apps that exist only in my branch, because I need to pass an app name, and since the app is not installed yet, I get something like
```BASH
argocd app diff vpa
FATA[0000] rpc error: code = NotFound desc = error getting application: applications.argoproj.io "vpa" not found
```
There is a `--local` option, but it still requires a name ~~(why, if there is a name in the manifests 🙃🙃🙃)~~
```BASH
# Just testing out
argocd app diff vpa --local ./manifests/cluster2/
FATA[0000] rpc error: code = NotFound desc = error getting application: applications.argoproj.io "vpa" not found # 🤪
```
OK, then let's check the app-of-apps
```BASH
argocd app diff app-of-apps --local ./cluster-1.yaml
Warning: local diff without --server-side-generate is deprecated and does not work with plugins. Server-side generation will be the default in v2.7.FATA[0000] error while parsing source parameters: stat cluster-1.yaml/.argocd-source.yaml: not a directory
argocd app diff app-of-apps --local ./cluster-1.yaml --server-side-generate
FATA[0000] rpc error: code = Unknown desc = failed to get app path: ./manifests/cluster2/: app path does not exist
argocd app diff app-of-apps --local ./cluster-2.yaml --server-side-generate --loglevel debug
FATA[0000] rpc error: code = Unknown desc = failed to get app path: ./manifests/cluster2/: app path does not exist
# I can't get it, maybe anybody could tell me what I'm doing wrong?
argocd app diff app-of-apps --local ./cluster-2.yaml
Warning: local diff without --server-side-generate is deprecated and does not work with plugins. Server-side generation will be the default in v2.7.FATA[0000] error while parsing source parameters: stat cluster-2.yaml/.argocd-source.yaml: not a directory
mkdir /tmp/argo-test
cp cluster-2.yaml /tmp/argo-test
argocd app diff app-of-apps --local /tmp/argo-test --loglevel debug
Warning: local diff without --server-side-generate is deprecated and does not work with plugins. Server-side generation will be the default in v2.7.
===== argoproj.io/Application /app-of-apps ======
0a1,15
> apiVersion: argoproj.io/v1alpha1
> kind: Application
> metadata:
> labels:
> argocd.argoproj.io/instance: app-of-apps
> name: app-of-apps
> spec:
> destination:
> namespace: argo-system
> server: https://kubernetes.default.svc
> project: default
> source:
> path: manifests/cluster2/
> repoURL: ssh://git@git.badhouseplants.net/allanger/helmfile-vs-argo.git
> targetRevision: argo-apps-main
# If I change the app-of-apps target branch to the current one
cat cluster-2.yaml
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: app-of-apps
namespace: argo-system
spec:
destination:
namespace: argo-system
server: https://kubernetes.default.svc
project: default
source:
path: ./manifests/cluster2/
repoURL: ssh://git@git.badhouseplants.net/allanger/helmfile-vs-argo.git
targetRevision: argo-apps-updated
kubectl apply -f cluster-2.yaml
cp cluster-2.yaml /tmp/argo-test
argocd app diff app-of-apps --local /tmp/argo-test --loglevel debug
Warning: local diff without --server-side-generate is deprecated and does not work with plugins. Server-side generation will be the default in v2.7.
===== argoproj.io/Application /app-of-apps ======
0a1,15
> apiVersion: argoproj.io/v1alpha1
> kind: Application
> metadata:
> labels:
> argocd.argoproj.io/instance: app-of-apps
> name: app-of-apps
> spec:
> destination:
> namespace: argo-system
> server: https://kubernetes.default.svc
> project: default
> source:
> path: ./manifests/cluster2/
> repoURL: ssh://git@git.badhouseplants.net/allanger/helmfile-vs-argo.git
> targetRevision: argo-apps-updated
```
I don't really understand what it means. *Most probably, I'm just stupid.* What I see is that with `--server-side-generate` it fails with an error I can't really understand, while without the flag it warns that this way of running it is deprecated, and then gives me a strange output that I don't know how to use.
So as I see it, to have a proper diff, you need to apply first. But that doesn't look like a fail-safe and scalable way to work.
I said we could check different options for syncing, but as I see now, the other workflows won't give me a better overview of what's happening, so I don't think it makes a lot of sense. If I find a way to see a proper diff without applying manifests first, I will come back to this topic and write one more post.
## Maybe it's because of the App of Apps layer
Let's try installing apps directly. Remove the app-of-apps from k8s and use the manifests from `/manifests/cluster2/` directly. As I see it, diffing won't work anyway for applications that are not installed yet. So you can check the ones that are already installed, but I couldn't make that work either. I was changing values to check if the changes would show up, but they didn't. *Again, I could simply have screwed up, and if you have a positive experience with this, don't hesitate to let me know, I'm willing to change my mind.*
## Conclusion
So you can check the PR here: <https://git.badhouseplants.net/allanger/helmfile-vs-argo/pulls/2/files>
I like that `values` can be handled as normal values files. (But for handling secrets you might have to add a [CMP](https://argo-cd.readthedocs.io/en/stable/user-guide/config-management-plugins/), which means additional work and maintenance.) But even if adding a CMP is fine, I couldn't get proper `diffs` for my changes, which means I can't see what's happening without applying manifests. And applying manifests means that other team members won't be able to work on other tickets within the same scope, so it looks like a bottleneck to me.
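For illustration, a secrets-handling CMP could be declared roughly like this (a hedged sketch of the sidecar plugin format; the plugin name and the `helm secrets` command are assumptions, not something from this setup):

```yaml
# Hypothetical sidecar plugin definition, mounted as plugin.yaml in the CMP container
apiVersion: argoproj.io/v1alpha1
kind: ConfigManagementPlugin
metadata:
  name: helm-secrets
spec:
  generate:
    command: [sh, -c]
    args: ["helm secrets template . -f values.yaml"]
```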
But I don't like that you need to add a lot of manifests to manage all the applications. We have only 2 manifests that are copied from folder to folder, so we have a lot of repeated code. And repeated code is never good. So I would write a tool that lets you choose applications from the list of all applications and pick the clusters where they need to be deployed, so that the config looks like this:
```YAML
app_path: ./manifests/common
clusters:
- cluster: cluster1
applications:
- vpa
- cluster: cluster2
applications:
- vpa
- goldilocks
- cluster: cluster3
applications:
- vpa
- goldilocks
```
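The idea behind such a tool can be sketched in plain shell (purely illustrative: the cluster/application pairs are hardcoded instead of being parsed from the YAML config above, and only a fragment of each Application is emitted):

```shell
#!/bin/sh
# Emit one Application manifest fragment per "cluster application" pair.
generate_apps() {
  while read -r cluster app; do
    # Unquoted heredoc so $app and $cluster expand per pair.
    cat <<EOF
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: $app
  namespace: argo-system
  labels:
    cluster: $cluster
spec:
  source:
    path: ./manifests/common/$app
EOF
  done
}

generate_apps > /tmp/generated-apps.yaml <<'PAIRS'
cluster1 vpa
cluster2 vpa
cluster2 goldilocks
PAIRS
cat /tmp/generated-apps.yaml
```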
But I think that with the whole GitOps pull concept it would be a hard thing to implement. And in the end it looks like helmfile, so ... 🤷‍♀️🤷‍♀️🤷‍♀️
I can only say that I see no profit in using Argo like this. It seems like either a very complicated setup (most probably you will be able to implement anything you need, the question is how much time you will spend on it), or a ~~crippled~~ incomplete setup.
And if you compare the number of lines updated to install these apps as `Applications` with the helmfile stuff, it's going to be ~100 vs ~30. That's something else I don't like.
In the next post I will try doing the same with `ApplicationSets`, and we'll see, if it looks better or not.
Thanks,
Oi!

---
title: "ArgoCD vs Helmfile: ApplicationSet"
date: 2023-02-15T10:14:09+01:00
draft: false
cover:
image: "cover.png"
caption: "ArgoCD"
relative: false
responsiveImages: false
ShowToc: true
---
This is the second post about *"argocding"* your infrastructure. [The first one can be found here]({{< ref "argocd-vs-helmfile-application" >}}).
There I tried using `Applications` for deploying. Here I will show an example with `ApplicationSets`. As in the previous article, I will be installing [VPA](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler) and [Goldilocks](https://github.com/FairwindsOps/goldilocks)
So let's prepare a base. We have 3 clusters:
- cluster-1
- cluster-2
- cluster-3
> With `ApplicationSets` you have an incredible number of ways to deploy stuff. So what I'm doing may look super different from what you would do
I'm creating 3 manifests, one for each cluster.
```YAML
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
name: helm-releases
namespace: argo-system
spec:
syncPolicy:
preserveResourcesOnDeletion: true
generators:
- git:
repoURL: https://git.badhouseplants.net/allanger/helmfile-vs-argo.git
revision: argo-applicationset-main
files:
- path: "cluster2/*"
- git:
repoURL: https://git.badhouseplants.net/allanger/helmfile-vs-argo.git
revision: argo-applicationset-main
files:
- path: "common/*"
template:
metadata:
name: "{{ argo.application }}"
namespace: argo-system
spec:
project: "{{ argo.project }}"
source:
helm:
valueFiles:
- values.yaml
values: |-
{{ values }}
repoURL: "{{ chart.repo }}"
targetRevision: "{{ chart.version }}"
chart: "{{ chart.name }}"
destination:
server: "{{ argo.cluster }}"
namespace: "{{ argo.namespace }}"
```
Manifests in a setup like this differ in only one value, so we could create just one manifest that would look like this:
```YAML
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
name: helm-releases
namespace: argo-system
spec:
syncPolicy:
preserveResourcesOnDeletion: true
generators:
- git:
repoURL: https://git.badhouseplants.net/allanger/helmfile-vs-argo.git
revision: argo-applicationset-main
files:
- path: "$CLUSTER/*"
- git:
repoURL: https://git.badhouseplants.net/allanger/helmfile-vs-argo.git
revision: argo-applicationset-main
files:
- path: "common/*"
template:
metadata:
name: "{{ argo.application }}"
namespace: argo-system
spec:
project: "{{ argo.project }}"
source:
helm:
valueFiles:
- values.yaml
values: |-
{{ values }}
repoURL: "{{ chart.repo }}"
targetRevision: "{{ chart.version }}"
chart: "{{ chart.name }}"
destination:
server: "{{ argo.cluster }}"
namespace: "{{ argo.namespace }}"
```
And add a step in the `CI` pipeline where we substitute the correct value for the variable. But since I'm not really implementing a CI, I will create 3 manifests.
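That substitution step might look like this as a Drone step (a sketch only: the image, the file name, and installing `gettext` for `envsubst` are assumptions):

```yaml
- name: Render and apply the ApplicationSet
  image: alpine/k8s:1.26.0 # assumed image that ships kubectl
  environment:
    CLUSTER: cluster2
  commands:
    - apk add --no-cache gettext # provides envsubst
    - envsubst '$CLUSTER' < applicationset.yaml | kubectl apply -f -
```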
Then I need to add `generators` in the feature branch:
```YAML
#/common/vpa.yaml
---
argo:
cluster: https://kubernetes.default.svc
application: vpa
project: default
namespace: vpa-system
chart:
version: 1.6.0
name: vpa
repo: https://charts.fairwinds.com/stable
values: |
updater:
enabled: false
```
```YAML
#/cluster2/goldilocks.yaml
---
argo:
cluster: https://kubernetes.default.svc
application: goldilocks
project: default
namespace: vpa-system
chart:
version: 6.5.0
name: goldilocks
repo: https://charts.fairwinds.com/stable
values: |
```
And the main problem here is that values are passed as a string. So you can't separate them into different files, use secrets, or share common values. That can be solved with multi-source apps that came with ArgoCD 2.6, but I can't say they are production-ready yet. Also, I've read that `ApplicationSets` can be used to separate values and charts, but it seemed way too complicated to me back then, and I think that with ArgoCD 2.7 this problem will be completely solved, so I'm not sure it makes sense to check that approach now.
The next thing is that Git generators point to a specific branch, so I have two problems: how to test changes on the `cluster-test`, and how to view diffs.
### Test changes
This problem is solvable. I will show it with the cluster-2 example, because I don't have 3 clusters running locally, but the same logic applies to a test cluster.
After you add new generator files, you need to deploy them to the `test cluster`, and you also need to avoid overriding what's being tested by other team members. So the best option that I currently see is to get the `ApplicationSet` manifest that is already deployed to `k8s` and add the new generators to it. So it looks like this:
```YAML
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
name: helm-releases
namespace: argo-system
spec:
syncPolicy:
preserveResourcesOnDeletion: true
generators:
- git:
repoURL: https://git.badhouseplants.net/allanger/helmfile-vs-argo.git
revision: argo-applicationset-main
files:
- path: "cluster2/*"
- git:
repoURL: https://git.badhouseplants.net/allanger/helmfile-vs-argo.git
revision: argo-applicationset-main
files:
- path: "common/*"
    # This should be added within CI and removed once the branch is merged
- git:
repoURL: https://git.badhouseplants.net/allanger/helmfile-vs-argo.git
revision: argo-applicationset-updated
files:
- path: "common/*"
- git:
repoURL: https://git.badhouseplants.net/allanger/helmfile-vs-argo.git
revision: argo-applicationset-updated
files:
- path: "cluster2/*"
template:
metadata:
name: "{{ argo.application }}"
namespace: argo-system
spec:
project: "{{ argo.project }}"
source:
helm:
valueFiles:
- values.yaml
values: |-
{{ values }}
repoURL: "{{ chart.repo }}"
targetRevision: "{{ chart.version }}"
chart: "{{ chart.name }}"
destination:
server: "{{ argo.cluster }}"
namespace: "{{ argo.namespace }}"
```
After applying this change, this is what I've got:
![ApplicationSet](/argocd-vs-helmfile/applicationset-test.png)
Those applications should be deployed automatically within a pipeline. So the steps in your pipeline would look like this:
- Get current `ApplicationSet` manifest from Kubernetes
- Add new generators
- Sync applications with argocd cli
But I'm not sure what's going to happen if two different pipelines run at the same time. Probably, changes will be overwritten by the pipeline that is a little bit slower. But I think that can be solved without a lot of additional problems. And it's not a situation you will have to face very often, so you can just rerun your pipeline after all.
### Diffs
Diffs are not supported for `ApplicationSets` at the moment, and I'm not sure when they will be: <https://github.com/argoproj/argo-cd/issues/10895>
~~And with the diffing situation from the previous article, I think that they will not work the way I'd like them to work.~~
But I think that the easiest way to deal with this right now would be to add `git generators` not only to the test cluster but to all clusters, add an additional label to those applications (e.g. `test: true`), and sync only the applications that don't have this label. So the whole pipeline for a branch would look like this:
Feature branch
- Get current `ApplicationSet` manifests from Kubernetes (each cluster)
- Add new generators (each cluster)
- Sync applications with argocd cli (only test cluster)
Main branch (merged)
- Get current `ApplicationSet` manifests from Kubernetes (each cluster)
- Remove obsolete generators (each cluster)
- Sync applications with argocd cli (each cluster and filter by label not to sync those, that are not merged yet)
> But I'm not sure exactly how to manage these `test` labels. They can be added manually to the generator files, but then you can't be sure that no one will forget to do it. So I think that, if possible, they should be added to the generators inside the `ApplicationSet` manifest, or added to the applications right after they are created by the `ApplicationSet`. The second way is not the best, because if the `main` pipeline is faster than the feature one, you will have the app installed in a production cluster.
## Conclusion
I like this way a lot more than plain `Applications`, especially with multi-source applications. I think the main problem with this approach is complicated CI/CD pipelines. And I don't like that for diffing you need to have something added to prod clusters. Diff must be safe, and if you add 1000 generator files and push them, you will have 1000 new applications in your ArgoCD. I'm not sure how it's going to handle that. And since ArgoCD is something that manages your whole infrastructure, I bet you want it to work like a charm; you don't want to doubt whether it will survive situations like this.
The amount of changes is not big, pretty close to helmfile, I'd say. And the more common stuff you have, the less you need to copy-paste. You can see the PR here: <https://git.badhouseplants.net/allanger/helmfile-vs-argo/pulls/3>
Thanks,
Oi!
