Add a spellchecker tool to CI and fix all existing typos.
continuous-integration/drone/push: Build is passing

Nikolai Rodionov 2023-05-02 14:47:34 +02:00
parent 77447a18da
commit 67ff2bbdfd
Signed by: allanger
GPG Key ID: 906851F91B1DA3EF
14 changed files with 241 additions and 28 deletions


@@ -46,15 +46,15 @@ steps:
- name: Test a build
image: git.badhouseplants.net/badhouseplants/hugo-builder
depends_on:
- clone
commands:
- hugo
- name: Build and push the docker image
image: git.badhouseplants.net/badhouseplants/badhouseplants-builder:80ffd53372652576fa3c36a56b351b448a025c6a
privileged: true
depends_on:
- Test a build
environment:
GITEA_TOKEN:
@@ -64,7 +64,7 @@ steps:
- name: Sync pictures from lfs to Minio
image: git.badhouseplants.net/badhouseplants/badhouseplants-builder:80ffd53372652576fa3c36a56b351b448a025c6a
depends_on:
- Test a build
environment:
RCLONE_CONFIG_CONTENT:
@@ -119,7 +119,7 @@ steps:
from_secret: GITHUB_OAUTH_KEY
ARGO_GOOGLE_OAUTH_KEY:
from_secret: GOOGLE_OAUTH_KEY
commands:
- mkdir $HOME/.kube
- echo $KUBECONFIG_CONTENT | base64 -d > $HOME/.kube/config
- export ARGO_APP_CHART_VERSION=`cat chart/Chart.yaml | yq '.version'`
@@ -168,3 +168,28 @@ steps:
commands:
- echo "$RCLONE_CONFIG_CONTENT" > $RCLONE_CONFIG
- ./scripts/cleanup.pl
---
kind: pipeline
type: kubernetes
name: Spell-Checker
trigger:
event:
- push
clone:
disable: true
steps:
- name: clone
image: alpine/git
environment:
GIT_LFS_SKIP_SMUDGE: 1
commands:
- git clone $DRONE_REMOTE_URL --recurse-submodules .
- git checkout $DRONE_BRANCH
- name: Spell-Checker
image: node
commands:
- npm i markdown-spellcheck -g
- mdspell "**/*.md" -n -r
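One small, hedged refinement of the step above: pinning the npm package version would keep CI runs reproducible. This is not part of the commit, just a sketch (the version below is illustrative, check npm for the actual one):

```YAML
- name: Spell-Checker
  image: node
  commands:
    # pinning the version is an illustration, not part of the original commit
    - npm i markdown-spellcheck@1.3.1 -g
    # -n ignores numbers, -r produces a report instead of prompting interactively
    - mdspell "**/*.md" -n -r
```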

.spelling (new file, 143 lines)

@@ -0,0 +1,143 @@
# markdown-spellcheck spelling configuration file
# Format - lines beginning # are comments
# global dictionary is at the start, file overrides afterwards
# one word per line, to define a file override use ' - filename'
# where filename is relative to this configuration file
WIP
envs
anymore
hostname
hostnames
Dockerfile
helmfile
k8s
env
dir
dev'n'stages
oi
minio
ArgoCD
setups
SRE
autoscaler
gitea
vendoring
cli
vpa
ok
cmp
config
GitOps
argo
argocding
cluster-1
cluster-2
cluster-3
kubernetes
argocd
helmfiles
plugin
helmfile.yaml
cleanup
serie
backend
js
frontend
ShowToc
cover.png
allanger
DIY
DevOps
funkwhale
PSY
bandcamp
soundcloud
spotify
deezer
prog-rock
oveleane
IDM
gitlab
bitwarden
elasticsearch
grafana
lifecycle
auditable
the-first-production-grafaba
grafana
applicationsets
helm-releases-v2
yaml
CR
minecraft
github
ddosed
VST
Xfer
plugins
yabridgectl
yabridge
DAW
polyverse
camelcrusher
standalone
url
glitchmachines
MacBook
configs
behaviour
FreqEcho
DSP
Supermassive
Ableton
Softube
center
iZotope
V2
auth
README.md
TAL-Chorus-LX
badhouseplants
Deelay
Gatelab
Filterstep
Panflow
PaulXStretch
Audiomodern
Kushview
JUCE
Melda
MDrummer
GBs
laggy
MDrumReplacer
MPowerSynth
MGuitarArchitect
u-he
TyrellN6
Tyrell
install.sh
MGuitarArchitect
Amazona.de.
Bitwig
ProjectSAM
Pendulate
Protoverb
Eurorack
S3
XT
Ruina
VCV
LFO
- themes/papermod/README.md
PaperMod
hugo-paper
og
ExampleSite
exampleSite
pipelining
Fuse.js
webpack
nodejs
Pagespeed
Highlight.js
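The override syntax described in the header comments can be illustrated with a small sketch (the file and words below are made up, not taken from this dictionary): words listed after a ` - filename` line apply only to that file, while everything before the first such line is global.

```shell
# Sketch: how ' - filename' sections scope words in a .spelling-style file
tmp=$(mktemp -d)
cat > "$tmp/spelling" <<'EOF'
# comment line
globalword
 - docs/README.md
fileonlyword
EOF
# Print each word together with the scope it belongs to
awk '/^ - /{scope=$2; next} NF && !/^#/ {print (scope ? scope : "global") ": " $1}' "$tmp/spelling"
# prints:
# global: globalword
# docs/README.md: fileonlyword
```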


@@ -37,7 +37,7 @@ params:
... by allanger.\
I'm looking for projects to work on for my sound engineering portfolio.\
Currently, I can mix your tracks for free. Just shoot me a message and we'll figure it out.\
- Oi!
+ Oi!ne
imageUrl: "main-logo.png"
imageWidth: 150
imageHeight: 150


@@ -5,7 +5,7 @@ draft: false
ShowToc: true
---
- Everything that's created by me, can be found on my [funkwhale instance](https://funkwhale.badhouseplants.net). But I'm only uploading `lossy` there. I was trying to upload lossless, but then it either doesn't really work with my Android App, or it's hard to manage. And it needs a way more disk that way. So if you want to listnen to lossless, go to my [Bandcamp](https://allanger.bandcamp.com/). *A lot of tracks are still not there, but they will be there soon*. I also have a [SoundCloud account](https://soundcloud.com/allanger) and I try to publish everything there.
+ Everything that's created by me, can be found on my [funkwhale instance](https://funkwhale.badhouseplants.net). But I'm only uploading `lossy` there. I was trying to upload lossless, but then it either doesn't really work with my Android App, or it's hard to manage. And it needs way more disk space that way. So if you want to listen to lossless, go to my [Bandcamp](https://allanger.bandcamp.com/). *A lot of tracks are still not there, but they will be there soon*. I also have a [SoundCloud account](https://soundcloud.com/allanger) and I try to publish everything there.
---

content/posts/.spelling (new file, 45 lines)

@@ -0,0 +1,45 @@
# markdown-spellcheck spelling configuration file
# Format - lines beginning # are comments
# global dictionary is at the start, file overrides afterwards
# one word per line, to define a file override use ' - filename'
# where filename is relative to this configuration file
WIP
envs
anymore
hostname
hostnames
Dockerfile
helmfile
k8s
env
dir
dev'n'stages
oi
minio
ArgoCD
setups
SRE
autoscaler
gitea
vendoring
cli
vpa
ok
cmp
config
GitOps
argo
argocding
cluster-1
cluster-2
cluster-3
kubernetes
argocd
helmfiles
plugin
helmfile.yaml
cleanup
serie
backend
js
frontend


@@ -13,11 +13,11 @@ cover:
[Do you remember?]({{< ref "dont-use-argocd-for-infrastructure" >}})
> And using `helmfile`, I will install `ArgoCD` to my clusters, of course, because it's an awesome tool, without any doubts. But don't manage your infrastructure with it, because it's a part of your infrastructure, and it's a service that you provide to other teams. And I'll talk about it in one of the next posts.
- Yes, I have written 4 posts where I was almost absuletely negative about `ArgoCD`. But I was talking about infrastructure then. I've got some ideas about how to describe it in a better way, but I think I will write another post about it.
+ Yes, I have written 4 posts where I was almost absolutely negative about `ArgoCD`. But I was talking about infrastructure then. I've got some ideas about how to describe it in a better way, but I think I will write another post about it.
Here, I want to talk about dynamic *(preview)* environments, and I'm going to describe how to create them using my blog as an example. My blog is a pretty easy application. From the `Kubernetes` perspective, it's just a container with some static content. And here, you can already notice that static is the opposite of dynamic, so that's the first problem that I'll have to tackle: turning static content into dynamic. So my blog consists of `markdown` files that are used by `hugo` for web page generation.
- >Initially I was using `hugo` server to serve the static, but it needs way more resources than `nginx`, so I've decided in favor of `nginx`.
+ >Initially I was using `hugo` server to serve the static, but it needs way more resources than `nginx`, so I've decided in favour of `nginx`.
I think that I'll write 2 or 3 posts about it, because it's too much to cover in only one. So here, I'll share how I was preparing my blog to be ready for dynamic environments.
@@ -33,7 +33,7 @@ So this is how my workflow looked like before I decided to use dynamic environme
- `Keel` spots that the image was updated and pulls it.
- The pod with the static content is recreated, and I have my blog with new content
- What I don't like about it? I can't test something unless it's in `production`. And when I stated to work on adding comments (that is still WIP) I've understood that I'd like to have a real environemnt where I can test everything before firing the main pipeline. Even though having a static development environment would be fine for me, because I'm the only one who do the development here, I don't like the concept of static envs, and I want to be able to work on different posts in the same time. Also, adding a new static environemnt for development purposes it kind of the same amount of work as implementing a solution for deploying them dynamically.
+ What I don't like about it? I can't test something unless it's in `production`. And when I started to work on adding comments (that is still WIP) I've understood that I'd like to have a real environment where I can test everything before firing the main pipeline. Even though having a static development environment would be fine for me, because I'm the only one who does the development here, I don't like the concept of static envs, and I want to be able to work on different posts at the same time. Also, adding a new static environment for development purposes is kind of the same amount of work as implementing a solution for deploying them dynamically.
Before I can start deploying them, I have to prepare the application for that. At first glance, the changes look like this:
@@ -507,7 +507,7 @@ And then I'm setting an env var `HUGO_PARAMS_GITBRANCH`. And now badge is lookin
## Some kind of conclusion
- Even though my application is just a simple blog, I still believe that creating dynamic environments is a great idea that should totally replace static dev'n'stages. And it's not only my blog, I've created dynamic envs for. Two biggest pains *as I think* are `Static content` and `Persistent data` (I think, there are more, but these two are most obvious). I've already shown an example how you can handle the first one, and the second is also a big pain in the ass. In my case this data is the one coming from the `Minio` and I'm not doing anything about it, *but I'll write one more post, when it's solved*, other, in my opinion, more obvious example, are databases. You need it to contain all the data that's required for testing, but you also may want it not to be huge, and it most probably should not contain any sensible personal data. So maybe you could stream a database from the production through some kind of anonymizer, clean it up, so it's not too big. And it doesn't sound easy already. But if I'll have to add something like that to my blog once, I'll try to describe it.
+ Even though my application is just a simple blog, I still believe that creating dynamic environments is a great idea that should totally replace static dev'n'stages. And it's not only my blog, I've created dynamic envs for. Two biggest pains *as I think* are `Static content` and `Persistent data` (I think, there are more, but these two are most obvious). I've already shown an example how you can handle the first one, and the second is also a big pain in the ass. In my case this data is the one coming from the `Minio` and I'm not doing anything about it, *but I'll write one more post, when it's solved*, another, in my opinion, more obvious example is databases. You need it to contain all the data that's required for testing, but you also may want it not to be huge, and it most probably should not contain any sensitive personal data. So maybe you could stream a database from the production through some kind of anonymiser, clean it up, so it's not too big. And it doesn't sound easy already. But if I ever have to add something like that to my blog, I'll try to describe it.
Thanks,


@@ -16,7 +16,7 @@ First, I'd like to show how I fixed the Minio issue from the previous post.
>I'm using Minio as a storage for pictures, and currently all pictures (and other files) are stored in one folder regardless of the environment.
- I've started using `git lfs` for media data. But I still want to have small docker images so I've decided to add some logi around pushing media files to `Minio`. So I've added a Drone job:
+ I've started using `git lfs` for media data. But I still want to have small docker images so I've decided to add some logic around pushing media files to `Minio`. So I've added a Drone job:
```YAML
- name: Sync pictures from lfs to Minio
image: rclone/rclone:latest
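  # (the step is truncated by the diff view; what follows is a hypothetical
  #  continuation, based on the cleanup step later in this pipeline.
  #  The bucket path and the `rclone sync` arguments are assumptions.)
  environment:
    RCLONE_CONFIG_CONTENT:
      from_secret: RCLONE_CONFIG_CONTENT
    RCLONE_CONFIG: /tmp/rclone.conf
  commands:
    - echo "$RCLONE_CONFIG_CONTENT" > $RCLONE_CONFIG
    - rclone sync ./static minio:/badhouseplants-net/public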


@@ -65,7 +65,7 @@ But logic would be like this
- Add a new application to the `manifests/$CLUSTER` dir
- Push
- CI/CD
- - Since it needs to be `GitOps`, you need to check that charts in the `vendor` dir are up-to-date with `helm-freeze.yaml`. *Because if you updated helm-freeze and forgot to execute `helm-freeze sync`, you will have a contradiction between actual and desired states. That's one of the reasons, why I don't like this kind of vendoring. Either it's an addition step in CI, that is verifying that the manual step was done, or it's an additional work for reviewer. You also can add an action that is going to execute it withing the pipeline and push to your branch, but I'm completely against it. (something for another post maybe)*
+ - Since it needs to be `GitOps`, you need to check that charts in the `vendor` dir are up-to-date with `helm-freeze.yaml`. *Because if you updated helm-freeze and forgot to execute `helm-freeze sync`, you will have a contradiction between actual and desired states. That's one of the reasons, why I don't like this kind of vendoring. Either it's an additional step in CI that verifies that the manual step was done, or it's additional work for the reviewer. You also can add an action that is going to execute it within the pipeline and push to your branch, but I'm completely against it. (something for another post maybe)*
- Then depending on a branch:
- If not `main`
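The vendoring check from the first bullet can be sketched as a small CI guard. This is a sketch under assumptions: `helm-freeze` itself is not run here, a plain file write stands in for `helm-freeze sync`; a real pipeline would run the tool and then diff the `vendor` dir against what is committed.

```shell
# Sketch of a CI guard: re-generate the vendored charts, then fail the
# pipeline if the result differs from what is committed.
tmp=$(mktemp -d)
mkdir -p "$tmp/vendor"
echo "chart: v1" > "$tmp/vendor/Chart.yaml"      # what is committed
echo "chart: v1" > "$tmp/vendor/Chart.yaml.new"  # what `helm-freeze sync` would write (stand-in)
if diff "$tmp/vendor/Chart.yaml" "$tmp/vendor/Chart.yaml.new" >/dev/null; then
  echo "vendor dir matches helm-freeze.yaml"
else
  echo "vendor dir is stale, run helm-freeze sync" >&2
  exit 1
fi
```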
@@ -441,7 +441,7 @@ argocd app diff vpa
FATA[0000] rpc error: code = NotFound desc = error getting application: applications.argoproj.io "vpa" not found
```
- There is a `--local` option, but it still requires a name ~~(why if there is a name in manfiests 🙃🙃🙃)~~
+ There is a `--local` option, but it still requires a name ~~(why if there is a name in manifests 🙃🙃🙃)~~
```BASH
# Just testing out
argocd app diff vpa --local ./manifests/cluster2/
@@ -543,7 +543,7 @@ Let's try installing apps directly. Remove an app-of-apps from k8s. And let's us
## Conclusion
So you can check the PR here: <https://git.badhouseplants.net/allanger/helmfile-vs-argo/pulls/2/files>
- I like that `values` can be handled as normal values files. (But for handling secrets you might have to add a [CMP](https://argo-cd.readthedocs.io/en/stable/user-guide/config-management-plugins/), that means an additional work and maintenance) But even if adding CMP is fine, I couldn't get proper `diffs` for my changes, that means that I can't see what's happening without applying manifests. And applying manifests will mean that other team members will not be work on other tickets withing the same scope, so it looks like a bottleneck to me.
+ I like that `values` can be handled as normal values files. (But for handling secrets you might have to add a [CMP](https://argo-cd.readthedocs.io/en/stable/user-guide/config-management-plugins/), which means additional work and maintenance) But even if adding CMP is fine, I couldn't get proper `diffs` for my changes, which means that I can't see what's happening without applying manifests. And applying manifests will mean that other team members will not be able to work on other tickets within the same scope, so it looks like a bottleneck to me.
But I don't like that you need to add a lot of manifests to manage all the applications. We have only 2 manifests that are copied from folder to folder. So we have a lot of repeating code. And repeating code is never good. So I would write a tool that can let you choose applications from the list of all applications and choose clusters where they need to be deployed. So the config looks like this:
```YAML
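# (the config is truncated by the diff view; what follows is a hypothetical
#  shape matching the description above: pick applications from a list and
#  the clusters they should be deployed to. Names are made up.)
applications:
  - name: vpa
    clusters:
      - cluster-1
      - cluster-2
  - name: metrics-server
    clusters:
      - cluster-1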
@@ -565,7 +565,7 @@ But I think that with the whole GitOps pulling concept it will be a hard thing t
I can only say that I see no profit in using argo like this. It only seems like either a very complicated setup (most probably you will be able to implement anything you need, the question is how much time you will spend on that), or a ~~crippled~~ not complete setup.
- And if you compare an amount of lines that area updadated to install these apps as `Applications` to the helmfile stuff, it's going to be ~100 vs ~30. And that's what I also don't like.
+ And if you compare the number of lines that are updated to install these apps as `Applications` to the helmfile stuff, it's going to be ~100 vs ~30. And that's what I also don't like.
In the next post I will try doing the same with `ApplicationSets`, and we'll see, if it looks better or not.


@@ -64,7 +64,7 @@ spec:
```
- Manifests with a setup like thos have only one values that is really different, so we could create just one manifest that would look like that:
+ Manifests with a setup like this have only one value that is really different, so we could create just one manifest that would look like this:
```YAML
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
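# (the manifest is truncated by the diff view; what follows is a hypothetical
#  continuation showing the generator idea described above. The application
#  name is made up; cluster names follow the ones used elsewhere in the post.)
metadata:
  name: vpa
spec:
  generators:
    - list:
        elements:
          - cluster: cluster-1
          - cluster: cluster-2
  template:
    metadata:
      name: 'vpa-{{cluster}}'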
@@ -209,7 +209,7 @@ Those applications should be deployed automatically within a pipeline. So steps
- Add new generators
- Sync applications with argocd cli
- But I'm not sure what going to happen if you have two different pipelines at the same time. Probably, changes will be overwriten but the pipeline that is a little bit slower. But I think that it can be solved without a lot of additional problems. And also I don't think that it's a situation that you will have to face very often, so you can just rerun your pipeline after all.
+ But I'm not sure what's going to happen if you have two different pipelines at the same time. Probably, changes will be overwritten by the pipeline that is a little bit slower. But I think that it can be solved without a lot of additional problems. And also I don't think that it's a situation that you will have to face very often, so you can just rerun your pipeline after all.
### Diffs
Diffs are not supported for `ApplicationSets` at the moment, and I'm not sure when they will be: <https://github.com/argoproj/argo-cd/issues/10895>


@@ -78,7 +78,7 @@ helmfiles:
- path: {{.Environment.Name }}/helmfile.yaml
```
- It's going to import helmfiles that are not used across all clusters. So if you want to install something to `cluster-2` accross, you will add it to the `/cluster2/helmfile.yaml` and sync the main helmfile. I will show an example later.
+ It's going to import helmfiles that are not used across all clusters. So if you want to install something only to `cluster-2`, you will add it to the `/cluster2/helmfile.yaml` and sync the main helmfile. I will show an example later.
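A minimal sketch of what such an imported per-cluster file could contain (the release and chart names below are made up, not taken from the repo):

```YAML
# /cluster2/helmfile.yaml (hypothetical content, picked up only for cluster-2)
releases:
  - name: some-extra-app
    namespace: extra
    chart: some-repo/some-extra-app
    version: 1.0.0
```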
So we're all set and ready to begin installing new stuff.
@@ -1575,7 +1575,7 @@ Yeah, it's huge, but you can see everything that's going to happen. So I'd say i
It's a short article, because I think the whole setup is super easy, CI is easy too. You still have a full `GitOps` (or almost full) but you also have control. I love this setup and would like to use it for my infrastructure.
Why do I think it's better than `ArgoCD`?
- With `ArgoCD` I either have a lot of `yaml` to install things, or I have complicated setups with `ApplicationSets` that are most probably very special and won't be reused in other companies. I need to care about how `ArgoCD` will handle a lot of applications that are added there only for diffing. I need additional applications installed im my clusters not only as a part of infrastructure itself, but also as a service that I'm providing other teams with. Because I want to manage applications that are being developed by other teams with `Argo`, so I'm mixing a lot of different kinds of applications here.
+ With `ArgoCD` I either have a lot of `yaml` to install things, or I have complicated setups with `ApplicationSets` that are most probably very special and won't be reused in other companies. I need to care about how `ArgoCD` will handle a lot of applications that are added there only for diffing. I need additional applications installed in my clusters not only as a part of infrastructure itself, but also as a service that I'm providing other teams with. Because I want to manage applications that are being developed by other teams with `Argo`, so I'm mixing a lot of different kinds of applications here.
Helmfile lets me separate infra from applications. `ArgoCD` can be provided as a service that other teams can use, because it's making k8s easier for those who don't need to understand it so deeply. Also, helmfile lets me use helm-secrets to encrypt values. I can do it with Argo too, but then I need to either have a custom `ArgoCD` image or support a CMP plugin that will handle SOPS.


@@ -10,7 +10,7 @@ cover:
responsiveImages: false
---
- Well, after I've posted my argo serie, I've found out that I couldn't really make myself understood. So now I want to talk more not about the way of implementation, but rather about the consequinces of different implementations. And maybe I will e able to finally make a point about why I don't like Terraform and why I think that ArgoCD is mostly mis-used by almost any SRE team I know.
+ Well, after I've posted my argo series, I've found out that I couldn't really make myself understood. So now I want to talk not so much about the way of implementation, but rather about the consequences of different implementations. And maybe I will be able to finally make a point about why I don't like Terraform and why I think that ArgoCD is mostly misused by almost any SRE team I know.
But first I'll try to describe how I see myself as a part of a team, the team as a part of a bigger team, and all the teams across different companies as links in the bigger chain.
@@ -18,7 +18,7 @@ This is how I used to see development teams before:
![Chain](/posts/design-a-scalable-system/chain-1.png)
- The whole team is using something as a service, for example `AWS`, the whole team is working together and producing something that is passed to a customer. But apparently this approach is only applicable to small teams, and I think it's working just fine. But there is a problem. Teams tend to grow without an understanding that they are growing, hence they keep acting like they're small but in the same time they don't change the workflow, and brick-by-brick they are building something that eventually is something unscalable at first and later unmaintanble.
+ The whole team is using something as a service, for example `AWS`, the whole team is working together and producing something that is passed to a customer. But apparently this approach is only applicable to small teams, and I think it's working just fine. But there is a problem. Teams tend to grow without an understanding that they are growing, hence they keep acting like they're small, but at the same time they don't change the workflow, and brick-by-brick they are building something that eventually is unscalable at first and later unmaintainable.
Example of an evolution like this:


@@ -179,7 +179,7 @@ They are working but there is one UI glitch
We probably have different system configs, so maybe it's only possible to reproduce this bug with the set of configs and packages I'm using on my Linux. So if you don't face this issue, lucky you!
- It's not very annoying to me, but to avoid this kind of behavior, I can wrap these plugins with **Carla.**
+ It's not very annoying to me, but to avoid this kind of behaviour, I can wrap these plugins with **Carla.**
![Glitchmachines with Carla](/posts/vst-on-linux-1/glitchmaker-carla.gif)
It's working perfectly with Carla *(it's not that buggy in real life, only on the record)*


@@ -117,7 +117,7 @@ But when I add **Filterstep**, Ardour stops responding. I'm sure it's possible t
I was tired after the **Audiomodern** plugins, because they were freezing my Ardour, and I had to log out of my system and log in again, because otherwise Ardour wouldn't start.
- But **PaulXStrech** has a native Linux version too, and it has given me a strength to finish with this top.
+ But **PaulXStretch** has a native Linux version too, and it has given me the strength to finish this top.
So I'm just installing it with a package manager.


@@ -22,7 +22,7 @@ All of them are covered in [the first post]({{< ref "vst-on-linux-1" >}})
## Before we begin
- In the previous post, I was trying to run paulxstretch on Linux, and using it as a plugin in a DAW didn't work out. I've tried to update the JUCE library in the source code, and now it's working. You can find the code here: [https://git.badhouseplants.net/badhouseplants/paulxstretch](https://git.badhouseplants.net/badhouseplants/paulxstretch)
+ In the previous post, I was trying to run PaulXStretch on Linux, and using it as a plugin in a DAW didn't work out. I've tried to update the JUCE library in the source code, and now it's working. You can find the code here: [https://git.badhouseplants.net/badhouseplants/paulxstretch](https://git.badhouseplants.net/badhouseplants/paulxstretch)
To build, refer to the official build doc or use the `/build_docker.sh` script
@@ -155,7 +155,7 @@ Downloading a Windows version again.
{{< video "/posts/vst-on-linux-3/eventide-pendulate.mp4" "video-9" >}}
- Runnin just fine
+ Running just fine
As you can see, this is a pretty interesting synth. I have enough synths for everything already, but this one may join the ranks too.
## VCV Rack 👍
@@ -174,9 +174,9 @@ I didn't have enough time to learn it yet, so that's what I could do with it
Protoverb is a reverb created by u-he. It has native Linux support
- Download the **Linux** version and install it by running a script. You can finfd everything [here](https://u-he.com/products/protoverb/)
+ Download the **Linux** version and install it by running a script. You can find everything [here](https://u-he.com/products/protoverb/)
- ## Paulstretch 👍
+ ## PaulXStretch 👍
It's already covered in the previous article. But since then, one thing has changed. As you could see at the very beginning of the post, I've updated the JUCE library in the source code, and now it's running as a VST plugin. If you missed it, try reading the beginning one more time.