First Commit

The main feature of the operator is implemented, but it's not ready
for real use yet.
Nikolai Rodionov 2023-11-24 18:42:45 +01:00
commit e857c359e0
Signed by: allanger
GPG Key ID: 19DB54039EBF8F10
21 changed files with 3381 additions and 0 deletions

.containerignore Normal file (+1)

@ -0,0 +1 @@
target

.gitignore vendored Normal file (+2)

@ -0,0 +1,2 @@
/target
image.tar

Cargo.lock generated Normal file (+2499)

File diff suppressed because it is too large

Cargo.toml Normal file (+35)

@ -0,0 +1,35 @@
[package]
name = "shoebill-operator"
version = "0.1.0"
edition = "2021"
default-run = "shoebill"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[[bin]]
doc = false
name = "shoebill"
path = "src/main.rs"
[lib]
name = "controller"
path = "src/lib.rs"
[dependencies]
tokio = { version = "1.32.0", features = ["macros", "rt-multi-thread"] }
k8s-openapi = { version = "0.20.0", features = ["latest"] }
serde = { version = "1.0.185", features = ["derive"] }
serde_json = "1.0.105"
serde_yaml = "0.9.25"
anyhow = "1.0.75"
clap = { version = "4.4.8", features = ["derive", "env"] }
kube = { version = "0.87.1", features = ["derive", "runtime", "client"] }
schemars = { version = "0.8.12", features = ["chrono"] }
chrono = { version = "0.4.26", features = ["serde"] }
futures = "0.3.29"
thiserror = "1.0.50"
actix-web = "4.4.0"
log = "0.4.20"
env_logger = "0.10.1"
base64 = "0.21.5"
handlebars = "4.5.0"
kube-client = "0.87.1"

Containerfile Normal file (+9)

@ -0,0 +1,9 @@
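# Two-stage build: compile a release binary against musl in the Rust builder image, then copy only the resulting binary into a clean Alpine image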
FROM rust:1.74.0-alpine3.18 AS builder
RUN apk update && apk add --no-cache musl-dev
WORKDIR /src
COPY . /src
RUN cargo build --release
FROM alpine
COPY --from=builder /src/target/release/shoebill /shoebill
ENTRYPOINT /shoebill

README.md Normal file (+106)

@ -0,0 +1,106 @@
# Shoebill
## !! Be careful !!
The code is not ready for real use yet. It can only create secrets, many errors are not handled (so they crash the controller), and there is no clean-up at all. Reconciliation also doesn't work the way I would like it to yet. I hope to release the first production-ready version soon.
## What's that?
It's a **Kubernetes** operator that lets you build new **Secrets** and **ConfigMaps**, using ones that already exist as inputs for templates.
## Why does that exist?
I'm one of the maintainers of [db-operator](https://github.com/db-operator/db-operator), where we have implemented a feature we call **templated credentials**: it lets users define templates that are used to add new entries to the **Secrets** and **ConfigMaps** managed by the operator. Sometimes you need more than just credentials, because your application may require, for example, a custom connection string. Since this feature doesn't exist in any other operator, I've created one for exactly that.
Let's say you have an operator, **some-operator**, that runs something your application needs, and when you apply its CR, the operator creates something that results in a **Secret** like this:
```yaml
kind: Secret
metadata:
  name: some-secret
stringData:
  password: really-strong-one
```
and a **ConfigMap**:
```yaml
kind: ConfigMap
metadata:
  name: some-configmap
data:
  username: application-user
  hostname: some.app.rocks
```
But to use it, your application requires an environment variable in a format like this:
```bash
SOME_CONNECTION_STRING=${USERNAME}:${PASSWORD}@${HOSTNAME}
```
What are your options?
- You can get the data from the **Secret** and **ConfigMap** to build a new **Secret** manually and add it as an env var to your application **Deployment**
- You can write an `initContainer` that gets the data from those sources and creates a formatted connection string, which is later somehow set as an environment variable for your main workload
- You can run a watcher that checks those sources and modifies your workload object, setting the desired env var
- _Or maybe you can use something that exists already, but I wanted to try writing an operator in Rust, so I don't care too much_
With this operator, you can create a **Custom Resource** called **ConfigSet**, which in our case would look like this:
```yaml
kind: ConfigSet
spec:
  inputs:
    - name: PASSWORD
      from:
        kind: Secret
        name: some-secret
        key: password
    - name: USERNAME
      from:
        kind: ConfigMap
        name: some-configmap
        key: username
    - name: HOSTNAME
      from:
        kind: ConfigMap
        name: some-configmap
        key: hostname
  targets:
    - name: app-some-creds
      target:
        kind: Secret
        name: app-some-creds
  templates:
    - name: SOME_CONNECTION_STRING
      template: "{{USERNAME}}:{{PASSWORD}}@{{HOSTNAME}}"
      target: app-some-creds
```
After you apply it, a new Secret will be created (or the existing one will be modified), and it will contain:
```yaml
kind: Secret
metadata:
  name: app-some-creds
stringData:
  SOME_CONNECTION_STRING: application-user:really-strong-one@some.app.rocks
```
Now you can simply mount that newly created secret to your workload, and that's it.
## How can I start using it?
Once it's production-ready, I'll start distributing it as a **helm** chart. For now, since it should only be used by those who are developing it, the workflow looks like this:
- build an image
- import that image into your K8s cluster
- build the tool locally (or use the image too)
- run `shoebill manifests > /tmp/manifests.yaml`; it will generate all the manifests required for the quick start
- apply those manifests, and check if the controller is up
- prepare your secrets and configmaps (or go to the `./yaml/example` folder and use the manifests from there)
- create your `ConfigSet` manifest and apply it too; an example can also be found in the `./yaml/example` dir
## Why Shoebill?
There is no real connection between the project and the name; I just always wanted to have a project called **Shoebill** because I really like those birds.

src/api/mod.rs Normal file (+1)

@ -0,0 +1 @@
pub mod v1alpha1;

src/api/v1alpha1/configsets_api.rs Normal file (+75)

@ -0,0 +1,75 @@
use futures::StreamExt;
use kube::api::ListParams;
use kube::runtime::controller::Action;
use kube::runtime::watcher::Config;
use kube::runtime::Controller;
use kube::{Api, Client, CustomResource};
use log::*;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use std::sync::Arc;
use std::time::Duration;
use thiserror::Error;
/// ConfigSet is the main CRD of the shoebill-operator.
/// During the reconciliation, the controller will get the data
/// from Secrets and ConfigMaps defined in inputs, use them for
/// building new variables, that are defined in templates, and
/// put them to target Secrets or ConfigMaps
#[derive(CustomResource, Deserialize, Serialize, Clone, Debug, JsonSchema)]
#[cfg_attr(test, derive(Default))]
#[kube(
kind = "ConfigSet",
group = "shoebill.badhouseplants.net",
version = "v1alpha1",
namespaced
)]
#[kube(status = "ConfigSetStatus", shortname = "confset")]
pub struct ConfigSetSpec {
pub targets: Vec<TargetWithName>,
pub inputs: Vec<InputWithName>,
pub templates: Vec<Templates>,
}
#[derive(Deserialize, Serialize, Clone, Debug, JsonSchema)]
pub struct ConfigSetStatus {
ready: bool,
}
#[derive(Deserialize, Serialize, Clone, Debug, JsonSchema)]
pub struct TargetWithName {
pub name: String,
pub target: Target,
}
#[derive(Deserialize, Serialize, Clone, Debug, JsonSchema)]
pub struct Target {
pub kind: Kinds,
pub name: String,
}
#[derive(Deserialize, Serialize, Clone, Debug, JsonSchema)]
pub struct InputWithName {
pub name: String,
pub from: Input,
}
#[derive(Deserialize, Serialize, Clone, Debug, JsonSchema)]
pub enum Kinds {
Secret,
ConfigMap,
}
#[derive(Deserialize, Serialize, Clone, Debug, JsonSchema)]
pub struct Input {
pub kind: Kinds,
pub name: String,
pub key: String,
}
#[derive(Deserialize, Serialize, Clone, Debug, JsonSchema)]
pub struct Templates {
pub name: String,
pub template: String,
pub target: String,
}
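To make the relationship between these types and the YAML in the README concrete, here is a minimal sketch (not part of this commit) that could live as a test module at the bottom of this file; it assumes only the kube, schemars, and serde_yaml versions already declared in Cargo.toml:

```rust
#[cfg(test)]
mod tests {
    use super::*;
    use kube::CustomResourceExt;

    #[test]
    fn spec_matches_the_readme_shape() {
        // The derive macros produce the full CRD for the
        // shoebill.badhouseplants.net/v1alpha1 API group.
        let crd = ConfigSet::crd();
        assert_eq!(crd.spec.group, "shoebill.badhouseplants.net");

        // A spec built in Rust serializes to the same layout as yaml/example/example.yaml.
        let spec = ConfigSetSpec {
            inputs: vec![InputWithName {
                name: "PASSWORD".to_string(),
                from: Input {
                    kind: Kinds::Secret,
                    name: "database-secret".to_string(),
                    key: "PASSWORD".to_string(),
                },
            }],
            targets: vec![TargetWithName {
                name: "app-connection-string".to_string(),
                target: Target {
                    kind: Kinds::Secret,
                    name: "app-connection-string".to_string(),
                },
            }],
            templates: vec![Templates {
                name: "CONNECTION".to_string(),
                template: "{{ USERNAME }}:{{ PASSWORD }}".to_string(),
                target: "app-connection-string".to_string(),
            }],
        };
        println!("{}", serde_yaml::to_string(&spec).unwrap());
    }
}
```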

src/api/v1alpha1/mod.rs Normal file (+1)

@ -0,0 +1 @@
pub mod configsets_api;

src/cmd/controller.rs Normal file (+9)

@ -0,0 +1,9 @@
use clap::Args;
#[derive(Args)]
pub(crate) struct ControllerArgs {
/// Use this flag if you want to let shoebill
/// update secrets that already exist in the cluster
#[arg(long, default_value_t = false, env = "SHOEBILL_ALLOW_EXISTING")]
pub(crate) allow_existing: bool,
}

src/cmd/manifests.rs Normal file (+7)

@ -0,0 +1,7 @@
use clap::{Args, Command, Parser, Subcommand};
#[derive(Args)]
pub(crate) struct ManifestsArgs {
#[arg(long, short, default_value = "default")]
pub(crate) namespace: String,
}

src/cmd/mod.rs Normal file (+23)

@ -0,0 +1,23 @@
use clap::{command, Parser, Subcommand};
use self::controller::ControllerArgs;
use self::manifests::ManifestsArgs;
pub(crate) mod controller;
pub(crate) mod manifests;
#[derive(Parser)]
#[command(author, version, about, long_about = None)]
#[command(propagate_version = true)]
pub(crate) struct Cli {
#[command(subcommand)]
pub(crate) command: Commands,
}
#[derive(Subcommand)]
pub(crate) enum Commands {
/// Start the controller
Controller(ControllerArgs),
/// Generate manifests for quick install
Manifests(ManifestsArgs),
}

src/controllers/configsets_controller.rs Normal file (+308)

@ -0,0 +1,308 @@
use crate::api::v1alpha1::configsets_api::ConfigSet;
use futures::StreamExt;
use handlebars::Handlebars;
use k8s_openapi::api::core::v1::{ConfigMap, Secret};
use k8s_openapi::ByteString;
use kube::api::{ListParams, PostParams};
use kube::core::{Object, ObjectMeta};
use kube::error::ErrorResponse;
use kube::runtime::controller::Action;
use kube::runtime::watcher::Config;
use kube::runtime::Controller;
use kube::{Api, Client, CustomResource};
use log::*;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use std::collections::{BTreeMap, HashMap};
use std::str::{from_utf8, Utf8Error};
use std::sync::Arc;
use std::time::Duration;
use thiserror::Error;
#[derive(Error, Debug)]
pub enum Error {
#[error("SerializationError: {0}")]
SerializationError(#[source] serde_json::Error),
#[error("Kube Error: {0}")]
KubeError(#[source] kube::Error),
#[error("Finalizer Error: {0}")]
// NB: awkward type because finalizer::Error embeds the reconciler error (which is this)
// so boxing this error to break cycles
FinalizerError(#[source] Box<kube::runtime::finalizer::Error<Error>>),
#[error("IllegalDocument")]
IllegalDocument,
}
pub type Result<T, E = Error> = std::result::Result<T, E>;
impl Error {
pub fn metric_label(&self) -> String {
format!("{self:?}").to_lowercase()
}
}
// Context for our reconciler
#[derive(Clone)]
pub struct Context {
/// Kubernetes client
pub client: Client,
}
async fn reconcile(csupstream: Arc<ConfigSet>, ctx: Arc<Context>) -> Result<Action> {
let cs = csupstream.clone();
info!(
"reconciling {} - {}",
cs.metadata.name.clone().unwrap(),
cs.metadata.namespace.clone().unwrap()
);
match cs.metadata.deletion_timestamp {
Some(_) => return cs.cleanup(ctx).await,
None => return cs.reconcile(ctx).await,
}
}
/// Initialize the controller and shared state (given the crd is installed)
pub async fn setup() {
info!("starting the configset controller");
let client = Client::try_default()
.await
.expect("failed to create kube Client");
let docs = Api::<ConfigSet>::all(client.clone());
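// A single list call works as a readiness check: if the ConfigSet CRD is not
// installed, fail fast instead of running a controller against a missing API.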
if let Err(e) = docs.list(&ListParams::default().limit(1)).await {
error!("{}", e);
std::process::exit(1);
}
let ctx = Arc::new(Context { client });
Controller::new(docs, Config::default().any_semantic())
.shutdown_on_signal()
.run(reconcile, error_policy, ctx)
.filter_map(|x| async move { std::result::Result::ok(x) })
.for_each(|_| futures::future::ready(()))
.await;
}
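// Any reconcile error currently just schedules a retry in five minutes; there is no backoff or per-error handling yet.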
fn error_policy(doc: Arc<ConfigSet>, error: &Error, ctx: Arc<Context>) -> Action {
Action::requeue(Duration::from_secs(5 * 60))
}
impl ConfigSet {
// Reconcile (for non-finalizer related changes)
async fn reconcile(&self, ctx: Arc<Context>) -> Result<Action> {
/*
* First we need to get inputs and write them to the map
* Then use them to build new values with templates
* And then write those values to targets
*/
let mut inputs: HashMap<String, String> = HashMap::new();
for input in self.spec.inputs.clone() {
info!("populating data from input {}", input.name);
match input.from.kind {
crate::api::v1alpha1::configsets_api::Kinds::Secret => {
let secrets: Api<Secret> = Api::namespaced(
ctx.client.clone(),
self.metadata.namespace.clone().unwrap().as_str(),
);
let secret: String = match secrets.get(&input.from.name).await {
Ok(s) => from_utf8(&s.data.clone().unwrap()[input.from.key.as_str()].0)
.unwrap()
.to_string(),
Err(err) => {
error!("{err}");
return Err(Error::KubeError(err));
}
};
inputs.insert(input.name, secret);
}
crate::api::v1alpha1::configsets_api::Kinds::ConfigMap => {
let configmaps: Api<ConfigMap> = Api::namespaced(
ctx.client.clone(),
self.metadata.namespace.clone().unwrap().as_str(),
);
let configmap: String = match configmaps.get(&input.from.name).await {
Ok(cm) => {
let data = &cm.data.unwrap()[input.from.key.as_str()];
data.to_string()
}
Err(err) => {
error!("{err}");
return Err(Error::KubeError(err));
}
};
inputs.insert(input.name, configmap);
}
}
}
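// Make sure every target Secret/ConfigMap exists, creating an empty one if needed,
// so the templates below always have an object to write into.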
let mut target_secrets: HashMap<String, Secret> = HashMap::new();
let mut target_configmaps: HashMap<String, ConfigMap> = HashMap::new();
for target in self.spec.targets.clone() {
match target.target.kind {
crate::api::v1alpha1::configsets_api::Kinds::Secret => {
let secrets: Api<Secret> = Api::namespaced(
ctx.client.clone(),
self.metadata.namespace.clone().unwrap().as_str(),
);
match secrets.get_opt(&target.target.name).await {
Ok(sec_opt) => match sec_opt {
Some(sec) => target_secrets.insert(target.name, sec),
None => {
let empty_data: BTreeMap<String, ByteString> = BTreeMap::new();
let new_secret: Secret = Secret {
data: Some(empty_data),
metadata: ObjectMeta {
name: Some(target.target.name),
namespace: self.metadata.namespace.clone(),
..Default::default()
},
..Default::default()
};
match secrets.create(&PostParams::default(), &new_secret).await {
Ok(sec) => target_secrets.insert(target.name, sec),
Err(err) => {
error!("{err}");
return Err(Error::KubeError(err));
}
}
}
},
Err(err) => {
error!("{err}");
return Err(Error::KubeError(err));
}
};
}
crate::api::v1alpha1::configsets_api::Kinds::ConfigMap => {
let configmaps: Api<ConfigMap> = Api::namespaced(
ctx.client.clone(),
self.metadata.namespace.clone().unwrap().as_str(),
);
match configmaps.get_opt(&target.target.name).await {
Ok(cm_opt) => match cm_opt {
Some(cm) => target_configmaps.insert(target.name, cm),
None => {
let empty_data: BTreeMap<String, String> = BTreeMap::new();
let new_configmap: ConfigMap = ConfigMap {
data: Some(empty_data),
metadata: ObjectMeta {
name: Some(target.target.name),
namespace: self.metadata.namespace.clone(),
..Default::default()
},
..Default::default()
};
match configmaps
.create(&PostParams::default(), &new_configmap)
.await
{
Ok(cm) => target_configmaps.insert(target.name, cm),
Err(err) => {
error!("{err}");
return Err(Error::KubeError(err));
}
}
}
},
Err(err) => {
error!("{err}");
return Err(Error::KubeError(err));
}
};
}
}
}
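// Render each template with Handlebars against the collected inputs and stage the
// result in the matching target object.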
let mut templates: HashMap<String, String> = HashMap::new();
for template in self.spec.templates.clone() {
let reg = Handlebars::new();
info!("building template {}", template.name);
let var = reg
.render_template(template.template.as_str(), &inputs)
.unwrap();
info!("result is {}", var);
match self
.spec
.targets
.iter()
.find(|target| target.name == template.target)
.unwrap()
.target
.kind
{
crate::api::v1alpha1::configsets_api::Kinds::Secret => {
let sec = target_secrets.get_mut(&template.target).unwrap();
let mut byte_var: ByteString = ByteString::default();
byte_var.0 = var.as_bytes().to_vec();
let mut existing_data = match sec.clone().data {
Some(sec) => sec,
None => BTreeMap::new(),
};
existing_data.insert(template.name, byte_var);
sec.data = Some(existing_data);
}
crate::api::v1alpha1::configsets_api::Kinds::ConfigMap => {
let cm = target_configmaps.get_mut(&template.target).unwrap();
let mut existing_data = match cm.clone().data {
Some(cm) => cm,
None => BTreeMap::new(),
};
existing_data.insert(template.name, var);
cm.data = Some(existing_data);
}
}
}
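// Finally, write the updated Secrets and ConfigMaps back to the cluster.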
for (_, value) in target_secrets {
let secrets: Api<Secret> = Api::namespaced(
ctx.client.clone(),
self.metadata.namespace.clone().unwrap().as_str(),
);
match secrets
.replace(
value.metadata.name.clone().unwrap().as_str(),
&PostParams::default(),
&value,
)
.await
{
Ok(sec) => {
info!("secret {} is updated", sec.metadata.name.unwrap());
}
Err(err) => {
error!("{}", err);
return Err(Error::KubeError(err));
}
};
}
for (_, value) in target_configmaps {
let configmaps: Api<ConfigMap> = Api::namespaced(
ctx.client.clone(),
self.metadata.namespace.clone().unwrap().as_str(),
);
match configmaps
.replace(
value.metadata.name.clone().unwrap().as_str(),
&PostParams::default(),
&value,
)
.await
{
Ok(cm) => {
info!("configmap {} is updated", cm.metadata.name.unwrap());
}
Err(err) => {
error!("{}", err);
return Err(Error::KubeError(err));
}
};
}
Ok::<Action, Error>(Action::await_change())
}
// Finalizer cleanup (the object was deleted, ensure nothing is orphaned)
async fn cleanup(&self, ctx: Arc<Context>) -> Result<Action> {
info!("removing, not installing");
Ok::<Action, Error>(Action::await_change())
}
}
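The core of the reconcile loop above is plain Handlebars rendering over the collected inputs. Here is a self-contained sketch of that step (not part of the commit, using only the handlebars crate already listed in Cargo.toml), reproducing the README's SOME_CONNECTION_STRING example:

```rust
use std::collections::HashMap;

use handlebars::Handlebars;

fn main() {
    // Values as they would be collected from the input Secrets/ConfigMaps.
    let mut inputs: HashMap<String, String> = HashMap::new();
    inputs.insert("USERNAME".to_string(), "application-user".to_string());
    inputs.insert("PASSWORD".to_string(), "really-strong-one".to_string());
    inputs.insert("HOSTNAME".to_string(), "some.app.rocks".to_string());

    // Same call the controller makes for every entry in spec.templates.
    let reg = Handlebars::new();
    let rendered = reg
        .render_template("{{USERNAME}}:{{PASSWORD}}@{{HOSTNAME}}", &inputs)
        .unwrap();

    assert_eq!(rendered, "application-user:really-strong-one@some.app.rocks");
    println!("{rendered}");
}
```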

src/controllers/mod.rs Normal file (+1)

@ -0,0 +1 @@
pub(crate) mod configsets_controller;

src/helpers/manifests.rs Normal file (+166)

@ -0,0 +1,166 @@
use std::{collections::BTreeMap, default};
use k8s_openapi::{
api::{
apps::v1::{Deployment, DeploymentSpec},
core::v1::{Container, EnvVar, PodSpec, PodTemplate, PodTemplateSpec, ServiceAccount},
rbac::v1::{ClusterRole, ClusterRoleBinding, PolicyRule, Role, RoleRef, Subject},
},
apimachinery::pkg::apis::meta::v1::LabelSelector,
};
use kube::{core::ObjectMeta, CustomResourceExt, ResourceExt};
use crate::api::v1alpha1::configsets_api::ConfigSet;
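/// Print the CRD, ClusterRole, ServiceAccount, ClusterRoleBinding, and controller
/// Deployment as a single multi-document YAML stream on stdout (this is what
/// `shoebill manifests` emits).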
pub fn generate_kube_manifests(namespace: String) {
print!("---\n{}", serde_yaml::to_string(&ConfigSet::crd()).unwrap());
print!(
"---\n{}",
serde_yaml::to_string(&prepare_cluster_role(namespace.clone())).unwrap()
);
print!(
"---\n{}",
serde_yaml::to_string(&prepare_service_account(namespace.clone())).unwrap()
);
print!(
"---\n{}",
serde_yaml::to_string(&prepare_cluster_role_binding(namespace.clone())).unwrap()
);
print!(
"---\n{}",
serde_yaml::to_string(&prepare_deployment(namespace.clone())).unwrap()
)
}
fn prepare_cluster_role(namespace: String) -> ClusterRole {
let rules: Vec<PolicyRule> = vec![
PolicyRule {
api_groups: Some(vec!["shoebill.badhouseplants.net".to_string()]),
resources: Some(vec!["configsets".to_string()]),
verbs: vec![
"get".to_string(),
"list".to_string(),
"patch".to_string(),
"update".to_string(),
"watch".to_string(),
],
..Default::default()
},
PolicyRule {
api_groups: Some(vec!["shoebill.badhouseplants.net".to_string()]),
resources: Some(vec!["configsets/finalizers".to_string()]),
verbs: vec![
"get".to_string(),
"list".to_string(),
"patch".to_string(),
"update".to_string(),
"watch".to_string(),
"create".to_string(),
"delete".to_string(),
],
..Default::default()
},
PolicyRule {
api_groups: Some(vec!["".to_string()]),
resources: Some(vec!["secrets".to_string(), "configmaps".to_string()]),
verbs: vec![
"get".to_string(),
"list".to_string(),
"watch".to_string(),
"update".to_string(),
"create".to_string(),
"delete".to_string(),
],
..Default::default()
},
];
ClusterRole {
metadata: ObjectMeta {
name: Some("shoebill-controller".to_string()),
namespace: Some(namespace),
..Default::default()
},
rules: Some(rules),
..Default::default()
}
}
fn prepare_service_account(namespace: String) -> ServiceAccount {
ServiceAccount {
metadata: ObjectMeta {
name: Some("shoebill-controller".to_string()),
namespace: Some(namespace),
..Default::default()
},
..Default::default()
}
}
fn prepare_cluster_role_binding(namespace: String) -> ClusterRoleBinding {
ClusterRoleBinding {
metadata: ObjectMeta {
name: Some("shoebill-controller".to_string()),
namespace: Some(namespace.clone()),
..Default::default()
},
role_ref: RoleRef {
api_group: "rbac.authorization.k8s.io".to_string(),
kind: "ClusterRole".to_string(),
name: "shoebill-controller".to_string(),
},
subjects: Some(vec![Subject {
kind: "ServiceAccount".to_string(),
name: "shoebill-controller".to_string(),
namespace: Some(namespace.clone()),
..Default::default()
}]),
}
}
fn prepare_deployment(namespace: String) -> Deployment {
let mut labels: BTreeMap<String, String> = BTreeMap::new();
labels.insert("container".to_string(), "shoebill-controller".to_string());
Deployment {
metadata: ObjectMeta {
name: Some("shoebill-controller".to_string()),
namespace: Some(namespace.clone()),
..Default::default()
},
spec: Some(DeploymentSpec {
replicas: Some(1),
selector: LabelSelector {
match_labels: Some(labels.clone()),
..Default::default()
},
template: PodTemplateSpec {
metadata: Some(ObjectMeta {
labels: Some(labels.clone()),
..Default::default()
}),
spec: Some(PodSpec {
automount_service_account_token: Some(true),
containers: vec![Container {
command: Some(vec!["/shoebill".to_string()]),
args: Some(vec!["controller".to_string()]),
image: Some("shoebill".to_string()),
image_pull_policy: Some("Never".to_string()),
name: "shoebill-controller".to_string(),
env: Some(vec![EnvVar {
name: "RUST_LOG".to_string(),
value: Some("info".to_string()),
..Default::default()
}]),
..Default::default()
}],
service_account_name: Some("shoebill-controller".to_string()),
..Default::default()
}),
},
..Default::default()
}),
..Default::default()
}
}

src/helpers/mod.rs Normal file (+1)

@ -0,0 +1 @@
pub(crate) mod manifests;

src/lib.rs Normal file (+24)

@ -0,0 +1,24 @@
use thiserror::Error;
#[derive(Error, Debug)]
pub enum Error {
#[error("SerializationError: {0}")]
SerializationError(#[source] serde_json::Error),
#[error("Kube Error: {0}")]
KubeError(#[source] kube::Error),
#[error("Finalizer Error: {0}")]
// NB: awkward type because finalizer::Error embeds the reconciler error (which is this)
// so boxing this error to break cycles
FinalizerError(#[source] Box<kube::runtime::finalizer::Error<Error>>),
#[error("IllegalDocument")]
IllegalDocument,
}
pub type Result<T, E = Error> = std::result::Result<T, E>;
impl Error {
pub fn metric_label(&self) -> String {
format!("{self:?}").to_lowercase()
}
}

src/main.rs Normal file (+55)

@ -0,0 +1,55 @@
#![allow(unused_imports, unused_variables)]
use std::process::exit;
use actix_web::{
get, middleware, web::Data, App, HttpRequest, HttpResponse, HttpServer, Responder,
};
use clap::{Args, Command, Parser, Subcommand};
use cmd::{Cli, Commands};
use controllers::configsets_controller;
use log::*;
mod api;
mod cmd;
mod controllers;
mod helpers;
#[get("/")]
async fn index(req: HttpRequest) -> impl Responder {
let d = "Shoebill";
HttpResponse::Ok().json(&d)
}
#[tokio::main]
async fn main() -> anyhow::Result<()> {
env_logger::init();
let cli = Cli::parse();
match &cli.command {
Commands::Manifests(args) => {
helpers::manifests::generate_kube_manifests(args.namespace.clone())
}
Commands::Controller(args) => {
// Initialize Kubernetes controller state
let controller = configsets_controller::setup();
// Start web server
let server =
match HttpServer::new(move || App::new().service(index)).bind("0.0.0.0:8080") {
Ok(server) => server.shutdown_timeout(5),
Err(err) => {
error!("{}", err);
exit(1)
}
};
// Both runtimes implement graceful shutdown, so poll until both are done
match tokio::join!(controller, server.run()).1 {
Ok(_) => info!("server stopped gracefully"),
Err(err) => {
error!("{}", err);
exit(1)
}
};
}
}
Ok(())
}


@ -0,0 +1,6 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: database-configmap
data:
  PROTOCOL: postgresql

yaml/example/example.yaml Normal file (+44)

@ -0,0 +1,44 @@
---
apiVersion: shoebill.badhouseplants.net/v1alpha1
kind: ConfigSet
metadata:
  name: test
spec:
  targets:
    - name: app-connection-string
      target:
        kind: Secret
        name: app-connection-string
  inputs:
    - name: PASSWORD
      from:
        kind: Secret
        name: database-secret
        key: PASSWORD
    - name: USERNAME
      from:
        kind: Secret
        name: database-secret
        key: USERNAME
    - name: DATABASE
      from:
        kind: Secret
        name: database-secret
        key: DATABASE
    - name: PROTO
      from:
        kind: ConfigMap
        name: database-configmap
        key: PROTOCOL
  templates:
    - name: CONNECTION
      template: "{{ PROTO }}:{{ USERNAME }}:{{ PASSWORD }}/{{ DATABASE }}"
      target: app-connection-string
    - name: IS_POSTGRES
      template: |
        {{#if (eq PROTO "postgresql") }}
        true
        {{ else }}
        false
        {{/if}}
      target: app-connection-string

yaml/example/secret.yaml Normal file (+8)

@ -0,0 +1,8 @@
apiVersion: v1
kind: Secret
metadata:
  name: database-secret
stringData:
  PASSWORD: 123123!!
  USERNAME: real_root
  DATABASE: postgres