Write a Kubernetes operator using Kubebuilder

Dilip Kumar
32 min read · Jan 7, 2025


Let’s write a Kubernetes operator to manage AWS S3 buckets.

CRD vs CR vs Controller

CRD

CRDs Define Structure: When you create a Custom Resource Definition (CRD), you’re essentially extending the Kubernetes API with a new kind of resource. You’re defining a schema for data — what fields it has, what data types those fields are, and so on.

CRs as Data Objects: Once a CRD is defined, you can create Custom Resources (CRs) of that type. These CRs are just data objects stored in etcd, the Kubernetes database. They're similar to built-in resources like Pods or Deployments, but they represent your own custom concepts.

Passive Existence: Without a controller, a CR is just a piece of data. It doesn’t do anything on its own. It’s like a record in a database table that just sits there until something queries or modifies it.

Example: Imagine you define a CRD for a Website resource. Each Website CR might contain fields like domainName, template, and content. You can create multiple Website CRs, and they will be stored in etcd, but nothing will happen to them unless another component interacts with them.

In this scenario, CRs are simply a way to:

  • Extend the Kubernetes API: Introduce new types of objects relevant to your domain.
  • Store structured data: Leverage Kubernetes’ built-in storage (etcd) and API machinery to manage your custom data.

Operators: CRs + Controllers = Automation

Active Management: An Operator combines a CRD (and its associated CRs) with a custom controller. This controller is the key difference. It’s a piece of software that actively watches for CRs of a specific type and takes action based on their contents.

Reconciliation Loop: The controller typically runs a reconciliation loop. It continuously compares the desired state (specified in the CR) with the actual state of the system and takes steps to make the actual state match the desired state.

Automation: This is where the automation magic happens. The controller can do anything it’s programmed to do: create other Kubernetes resources, call external APIs, modify configurations, etc.

Example: Building on the Website example, a Website Operator would include a controller that watches for Website CRs. When a new Website CR is created, the controller might:

  1. Provision a new server (e.g., using a cloud provider API).
  2. Deploy a web server container to that server.
  3. Configure DNS records to point to the new server.
  4. Fetch content from a specified source and populate the website.

In this scenario, CRs become:

  • Declarative Instructions: The CR acts as a set of instructions that the controller follows.
  • Triggers for Automation: Creating, updating, or deleting a CR triggers the controller’s reconciliation loop, leading to automated actions.

Write CustomResourceDefinition yaml file

Writing a CustomResourceDefinition (CRD) YAML file is how you tell Kubernetes about your new, custom resource type. It’s like defining a blueprint for the objects you’ll create to manage your application or infrastructure.

Here’s a breakdown of the key elements and how to structure your CRD YAML:

# website-crd.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # Name of the CRD (must be in the format: <plural>.<group>)
  name: websites.example.com
spec:
  # Group name for the API (e.g., example.com)
  group: example.com
  # Version(s) of the API
  versions:
    - name: v1
      # Is this version served (available via the API)
      served: true
      # Is this version the storage version (only one version can be the storage version)
      storage: true
      # Schema definition for the resource
      schema:
        openAPIV3Schema:
          type: object
          properties:
            apiVersion:
              type: string
            kind:
              type: string
            metadata:
              type: object
            spec:
              type: object
              properties:
                domainName:
                  type: string
                template:
                  type: string
                  enum:
                    - blog
                    - shop
                    - portfolio
                content:
                  type: string
                replicas:
                  type: integer
                  minimum: 1
                  maximum: 10
              # Required fields for spec
              required:
                - domainName
                - template
                - replicas
            status:
              type: object
              properties:
                state:
                  type: string
                  enum:
                    - Pending
                    - Active
                    - Failed
                message:
                  type: string
      # Optional: Subresources (e.g., status, scale)
      subresources:
        status: {}
  # Scope of the CRD (Namespaced or Cluster)
  scope: Namespaced
  # Names for the resource
  names:
    # Plural name used in the URL: /apis/<group>/<version>/<plural>
    plural: websites
    # Singular name used as an alias on the CLI and for display
    singular: website
    # Kind is normally the CamelCased singular type. Your resource manifests use this.
    kind: Website
    # Short names allow shorter string to match your resource on the CLI
    shortNames:
      - ws

To apply the CRD to Kubernetes, we can run the following command.

kubectl apply -f website-crd.yaml

Custom Resource

# website-cr.yaml
apiVersion: example.com/v1
kind: Website
metadata:
  name: my-website
  namespace: default
spec:
  domainName: mywebsite.com
  template: blog
  replicas: 3

  • Validation: The openAPIV3Schema provides validation. Kubernetes will reject Website resources that don't conform to the schema.
  • Controller (Operator): This CRD only defines the data structure. To make it functional, you would need to develop a controller (Operator) that watches for Website resources and takes actions based on their spec.
  • Storage Version: Choose the storage version carefully. Changing it later requires a migration process.

To apply this resource, run the following command.

kubectl apply -f website-cr.yaml

Operator scaffolding

The following is a sample command to initialize the controller boilerplate code.

$ kubebuilder init --domain learning.k8.com --repo github.com/dilipkumardk/S3ObjectStoreController

The kubebuilder init --domain command is your starting point for creating a new Kubernetes operator project using the Kubebuilder framework. It initializes a new project directory with the essential scaffolding and configuration files.

Scaffolding: The command generates a basic project structure with the following:

  • config: Contains Kustomize manifests for deploying your operator.
  • Dockerfile: A Dockerfile for building your operator image.
  • hack: Scripts and tools for development.
  • cmd/main.go: The main entry point for your operator.
  • Makefile: A Makefile with common build and deployment tasks.
  • internal/controller: A directory where your controllers will be placed.
  • api: A directory for your API definitions (CRDs).

Initializes Go Modules:

  • Go Modules: kubebuilder init automatically initializes a Go module for your project, managing dependencies for your operator.

Create API

$ kubebuilder create api --group custom.operator.k8.com --version v1alpha1 --kind S3ObjStore

This command scaffolds the basic structure for a new API in your Kubernetes project. It generates the necessary Go code and YAML manifests to define a new custom resource (CR) and its corresponding controller. This allows you to extend the Kubernetes API to manage your own application-specific resources.

What API it creates

  • Custom Resource Definition (CRD)
  • A sample Custom Resource (CR) under config/samples
  • A Controller skeleton

Key options for kubebuilder create api

  • --group <group>: The API group for your CRD (e.g., cache.example.com).
  • --version <version>: The API version for your CRD (e.g., v1alpha1).
  • --kind <kind>: The kind of your CRD (e.g., Memcached).

It modifies a few existing files and also creates new files, as shown below.

modified:  PROJECT
new file:  api/v1alpha1/groupversion_info.go
new file:  api/v1alpha1/s3objstore_types.go
new file:  api/v1alpha1/zz_generated.deepcopy.go
modified:  cmd/main.go
new file:  config/crd/kustomization.yaml
new file:  config/crd/kustomizeconfig.yaml
modified:  config/default/kustomization.yaml
modified:  config/rbac/kustomization.yaml
new file:  config/rbac/s3objstore_editor_role.yaml
new file:  config/rbac/s3objstore_viewer_role.yaml
new file:  config/samples/custom.operator.k8.com_v1alpha1_s3objstore.yaml
new file:  config/samples/kustomization.yaml
new file:  internal/controller/s3objstore_controller.go
new file:  internal/controller/s3objstore_controller_test.go
new file:  internal/controller/suite_test.go

Purpose of zz_generated.deepcopy.go

This file contains automatically generated code that provides deep copy functions for your custom resource (CR) types. Deep copying is essential in Kubernetes to ensure that operations on objects don’t unintentionally modify the original objects.

Why Deep Copy is Important

  • Immutability: Kubernetes encourages immutability. Modifying an object directly can lead to unexpected behavior and make it harder to track changes.
  • Concurrency: In a concurrent environment like Kubernetes, multiple controllers or components might access and modify objects simultaneously. Deep copies prevent race conditions and data corruption.
  • Predictability: Deep copies ensure that operations on objects are predictable and don’t have side effects on other parts of the system.

How zz_generated.deepcopy.go Works

  • Code Generation: When you run make generate, Kubebuilder uses a tool called controller-gen to analyze your API type definitions and generate the deep copy functions in zz_generated.deepcopy.go.
  • DeepCopy Functions: The generated code includes DeepCopy, DeepCopyInto, and DeepCopyObject functions for each of your CR types. These functions create true copies of your objects, including all nested fields and structures.

Benefits

  • Reduced Boilerplate: You don’t have to write tedious deep copy code manually.
  • Consistency: The generated code ensures consistent and correct deep copy implementation across your project.
  • Maintainability: When you update your CRD definitions, running make regenerates the zz_generated.deepcopy.go file, keeping it in sync.
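For intuition, the generated functions for the S3ObjStore type look roughly like the sketch below (illustrative only; the real file is produced by controller-gen and assumes the usual k8s.io/apimachinery/pkg/runtime import).

// Sketch of the generated deep copy functions for S3ObjStore (illustrative).
func (in *S3ObjStore) DeepCopyInto(out *S3ObjStore) {
    *out = *in
    out.TypeMeta = in.TypeMeta
    in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
    out.Spec = in.Spec
    out.Status = in.Status
}

// DeepCopy creates a brand new S3ObjStore with the same contents.
func (in *S3ObjStore) DeepCopy() *S3ObjStore {
    if in == nil {
        return nil
    }
    out := new(S3ObjStore)
    in.DeepCopyInto(out)
    return out
}

// DeepCopyObject satisfies runtime.Object, which lets the generic client and
// cache machinery copy the resource without knowing its concrete type.
func (in *S3ObjStore) DeepCopyObject() runtime.Object {
    if c := in.DeepCopy(); c != nil {
        return c
    }
    return nil
}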

Update Custom Resource definition

We can add the following two fields to the s3objstore_types.go file.

type S3ObjStoreSpec struct {
    // Name is the name of the S3ObjectStore we want to create.
    Name string `json:"name"`
    // Locked prevents the deletion of S3 object store.
    Locked bool `json:"locked"`
}

After that, we need to run the following command.

$ make manifests

It updates the CRD definition as shown below.

---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  annotations:
    controller-gen.kubebuilder.io/version: v0.16.4
  name: s3objstores.custom.operator.k8.com.learning.k8.com
spec:
  group: custom.operator.k8.com.learning.k8.com
  names:
    kind: S3ObjStore
    listKind: S3ObjStoreList
    plural: s3objstores
    singular: s3objstore
  scope: Namespaced
  versions:
    - name: v1alpha1
      schema:
        openAPIV3Schema:
          description: S3ObjStore is the Schema for the s3objstores API.
          properties:
            apiVersion:
              description: |-
                APIVersion defines the versioned schema of this representation of an object.
                Servers should convert recognized schemas to the latest internal value, and
                may reject unrecognized values.
                More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
              type: string
            kind:
              description: |-
                Kind is a string value representing the REST resource this object represents.
                Servers may infer this from the endpoint the client submits requests to.
                Cannot be updated.
                In CamelCase.
                More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
              type: string
            metadata:
              type: object
            spec:
              description: S3ObjStoreSpec defines the desired state of S3ObjStore.
              properties:
                locked:
                  description: Locked prevents the deletion of S3 object store.
                  type: boolean
                name:
                  description: Name is the name of the S3ObjectStore we want to create.
                  type: string
              required:
                - locked
                - name
              type: object
            status:
              description: S3ObjStoreStatus defines the observed state of S3ObjStore.
              type: object
          type: object
      served: true
      storage: true
      subresources:
        status: {}

Install the CRD into the Kubernetes cluster

$ make install

It will install the CRD into the currently configured Kubernetes cluster.

Note: Since Kubebuilder manages the CRD manifests with Kustomize, we can't simply run kubectl apply -f <file_path> on a single generated file. Instead, make install builds the Kustomize output and applies it for us behind the scenes.
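For reference, the install target in the generated Makefile typically boils down to piping the Kustomize output into kubectl, roughly like this (the exact recipe varies by Kubebuilder version):

install: manifests kustomize
	$(KUSTOMIZE) build config/crd | kubectl apply -f -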

We can run the following command to get the list of installed CRDs.

$ kubectl get crd

Now let’s update the generated sample S3ObjStore template file as below.

apiVersion: custom.operator.k8.com.learning.k8.com/v1alpha1
kind: S3ObjStore
metadata:
  labels:
    app.kubernetes.io/name: s3objectstorecontroller
    app.kubernetes.io/managed-by: kustomize
  name: s3objstore-aws-bucket
spec:
  name: demo-operator-aws-s3-bucket-01 # This is the AWS S3 bucket name, so it needs to be globally unique
  locked: false

Now we can apply this template to Kubernetes as below.

$ kubectl apply -f config/samples/custom.operator.k8.com_v1alpha1_s3objstore.yaml

Now we can query to get the running S3ObjStore resource.

$ kubectl get S3ObjStore
NAME                AGE
s3objstore-sample   78s

So what just happened?

So far we have only installed the CRD and created an S3ObjStore resource. Nothing actually happens yet, because we haven't updated the controller logic.

Add Status to the CRD to showcase controller impact

We need to modify s3objstore_types.go as below.

const (
    PENDING_STATE  = "PENDING"
    CREATED_STATE  = "CREATED"
    CREATING_STATE = "CREATING"
    ERROR_STATE    = "ERROR"
)

// S3ObjStoreStatus defines the observed state of S3ObjStore.
type S3ObjStoreStatus struct {
    State string `json:"state"`
}
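Note: for kubectl get to show the state as its own column (as we will see shortly), the S3ObjStore type also carries a printcolumn marker. A sketch, assuming the type names scaffolded into s3objstore_types.go:

// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:name="State",type="string",JSONPath=".status.state"

// S3ObjStore is the Schema for the s3objstores API.
type S3ObjStore struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`

    Spec   S3ObjStoreSpec   `json:"spec,omitempty"`
    Status S3ObjStoreStatus `json:"status,omitempty"`
}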

Then modify s3objstore_controller.go as below.

func (r *S3ObjStoreReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    logger := log.FromContext(ctx)

    instance := &customoperatork8comv1alpha1.S3ObjStore{}
    if err := r.Get(ctx, req.NamespacedName, instance); err != nil {
        logger.Error(err, "Unable to get resource")
        return ctrl.Result{}, client.IgnoreNotFound(err)
    }
    if instance.Status.State == "" {
        instance.Status.State = customoperatork8comv1alpha1.PENDING_STATE
        r.Status().Update(ctx, instance)
    }

    return ctrl.Result{}, nil
}

Then we need to run the following commands.

$ make
$ make manifests
$ make generate
$ make install
$ kubectl apply -f config/samples/custom.operator.k8.com_v1alpha1_s3objstore.yaml
$ kubectl get S3ObjStore
NAME                STATE
s3objstore-sample   PENDING

It shows the newly added State column.

Kubebuilder markers (comments)

What are Kubebuilder Markers?

Kubebuilder markers are special comments in your Go code that instruct Kubebuilder’s code generator (controller-gen) to create or modify Kubernetes resources, API definitions (CRDs), and other related components. They act as annotations that define metadata and behavior for your custom controllers and operators.

Basic Structure of a Marker

A Kubebuilder marker starts with +kubebuilder: followed by the marker name and optional parameters. Parameters are usually specified in the format key=value or just value if the key can be inferred.

// +kubebuilder:markerName:key=value

Simple Example

Let’s say you have a custom resource called MyApp with a field replicas. You want to ensure that the replicas field is always between 1 and 5.

package v1

import (
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// MyAppSpec defines the desired state of MyApp
type MyAppSpec struct {
    // +kubebuilder:validation:Minimum=1
    // +kubebuilder:validation:Maximum=5
    Replicas int `json:"replicas"`
}

// MyApp is the Schema for the myapps API
// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
type MyApp struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`

    Spec   MyAppSpec   `json:"spec,omitempty"`
    Status MyAppStatus `json:"status,omitempty"`
}

  • // +kubebuilder:validation:Minimum=1: This marker tells Kubebuilder to add validation to the replicas field in the generated CRD, ensuring that the minimum value is 1.
  • // +kubebuilder:validation:Maximum=5: Similarly, this marker sets the maximum allowed value to 5.
  • // +kubebuilder:object:root=true: This marker indicates that MyApp is a root-level object in the API, meaning it can be created, updated, deleted, etc. directly.
  • // +kubebuilder:subresource:status: This marker enables the /status subresource for the MyApp resource. This subresource is used to update only the status field of your custom resource without affecting the spec, which is a common pattern for controllers.

CRD Generation Markers

  • // +kubebuilder:object:root=true: Indicates that a type is a root object for the API (a resource).
  • // +kubebuilder:subresource:status: Enables a /status subresource for the resource, allowing status updates separately from spec updates.
  • // +kubebuilder:printcolumn: Defines additional columns to be displayed by kubectl get.
// +kubebuilder:printcolumn:name="Age",type="date",JSONPath=".metadata.creationTimestamp"
  • // +kubebuilder:resource:path=foos,shortName=fo: Specifies the plural name (foos) and optional short name (fo) for the resource in the API.
// +kubebuilder:resource:path=foos,shortName=fo,categories=all

Field Validation Markers

  • // +kubebuilder:validation:Required: Marks a field as required.
  • // +kubebuilder:validation:Minimum=<value>: Sets the minimum value for a numeric field.
  • // +kubebuilder:validation:Maximum=<value>: Sets the maximum value for a numeric field.
  • // +kubebuilder:validation:MinLength=<value>: Sets the minimum length for a string field.
  • // +kubebuilder:validation:MaxLength=<value>: Sets the maximum length for a string field.
  • // +kubebuilder:validation:Pattern=<regex>: Defines a regular expression pattern that a string field must match.
  • // +kubebuilder:validation:Enum={val1,val2,val3}: Restricts a field to a set of allowed values.
  • // +kubebuilder:default=<value>: Sets the default value for a field.
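As an illustration, several of these validation markers combined on a hypothetical WidgetSpec type might look like this (the type and field names are made up for the example):

// WidgetSpec is a hypothetical spec used only to demonstrate validation markers.
type WidgetSpec struct {
    // +kubebuilder:validation:Required
    // +kubebuilder:validation:MinLength=3
    // +kubebuilder:validation:MaxLength=63
    // +kubebuilder:validation:Pattern=`^[a-z0-9-]+$`
    Name string `json:"name"`

    // +kubebuilder:validation:Enum=small;medium;large
    // +kubebuilder:default=small
    Size string `json:"size,omitempty"`

    // +kubebuilder:validation:Minimum=1
    // +kubebuilder:validation:Maximum=10
    Replicas int32 `json:"replicas"`
}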

RBAC Markers (for generating RBAC rules in config/rbac/)

  • // +kubebuilder:rbac:groups=<group>,resources=<resource>,verbs=<verb1;verb2>: Specifies RBAC permissions required by your controller. Example:
// +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete

Webhook Markers

  • // +kubebuilder:webhook:path=/mutate-...,mutating=true,failurePolicy=fail,groups=...,resources=...,verbs=create;update,versions=...,name=... : Configures a mutating webhook.
  • // +kubebuilder:webhook:path=/validate-...,mutating=false,failurePolicy=fail,groups=...,resources=...,verbs=create;update,versions=...,name=...: Configures a validating webhook.

Here is how RBAC markers typically appear in a controller source file, together with the reconciler and its manager setup:

package controllers

import (
    "context"

    appsv1 "k8s.io/api/apps/v1"
    "k8s.io/apimachinery/pkg/runtime"
    ctrl "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/client"
    "sigs.k8s.io/controller-runtime/pkg/log"

    mygroupv1 "my-operator/api/v1"
)

// MyAppReconciler reconciles a MyApp object
type MyAppReconciler struct {
    client.Client
    Scheme *runtime.Scheme
}

//+kubebuilder:rbac:groups=mygroup.example.com,resources=myapps,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=mygroup.example.com,resources=myapps/status,verbs=get;update;patch
//+kubebuilder:rbac:groups=mygroup.example.com,resources=myapps/finalizers,verbs=update
//+kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete

func (r *MyAppReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    _ = log.FromContext(ctx)

    // Your reconcile logic here

    return ctrl.Result{}, nil
}

// SetupWithManager sets up the controller with the Manager.
func (r *MyAppReconciler) SetupWithManager(mgr ctrl.Manager) error {
    return ctrl.NewControllerManagedBy(mgr).
        For(&mygroupv1.MyApp{}).
        Owns(&appsv1.Deployment{}).
        Complete(r)
}

Kubebuilder comes with a tool called controller-gen. Run it to generate the Kubernetes manifests and other code based on your markers:

$ make manifests

Go modules used to write the controller

Before we update our controller, let's go through a few Go modules to understand how a controller is written.

k8s.io/api/core/v1

The k8s.io/api/core/v1 package is a Go library that provides the Go type definitions for the core v1 Kubernetes API objects. These are the canonical types the API server itself uses, which keeps the API server and client libraries consistent.

Key Components (Types) within k8s.io/api/core/v1

This module houses a vast number of types. Here are some of the most commonly used ones:

  • Pod: Represents a single instance of an application running in your cluster. It’s the smallest deployable unit in Kubernetes.
  • Service: An abstraction that defines a logical set of Pods and a policy by which to access them (often called a micro-service).
  • Node: A worker machine in Kubernetes, either a physical or a virtual machine.
  • Namespace: A way to divide cluster resources between multiple users (via resource quota).
  • ConfigMap: Holds configuration data that can be consumed by Pods.
  • Secret: Holds sensitive information, such as passwords, OAuth tokens, and SSH keys.
  • PersistentVolume (PV): A piece of storage in the cluster that has been provisioned by an administrator.
  • PersistentVolumeClaim (PVC): A request for storage by a user.
  • ReplicationController: Ensures that a specified number of Pod replicas are running at any one time. (Note: This is largely superseded by Deployments, but still present in the API.)
  • ResourceQuota: Provides constraints that limit aggregate resource consumption per namespace.
  • ServiceAccount: Provides an identity for processes that run in a Pod.
  • Endpoints: Represents the list of endpoints implementing a service. Usually populated by Kubernetes automatically, along with a Service, so long as a selector is defined.

Usage Examples

To use these types effectively, you’ll typically interact with them through a Kubernetes client library like k8s.io/client-go. Here's how you might work with some of these core types:

Creating a Pod

package main

import (
    "context"
    "fmt"
    "os"
    "path/filepath"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/util/homedir"
)

func main() {
    // Use the current context in kubeconfig
    kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
    if envvar := os.Getenv("KUBECONFIG"); len(envvar) > 0 {
        kubeconfig = envvar
    }
    config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    if err != nil {
        panic(err.Error())
    }

    // Create a Kubernetes clientset
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err.Error())
    }

    // Define the Pod
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{
            Name:      "my-nginx-pod",
            Namespace: "default",
        },
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{
                {
                    Name:  "nginx",
                    Image: "nginx:latest",
                    Ports: []corev1.ContainerPort{
                        {
                            ContainerPort: 80,
                        },
                    },
                },
            },
        },
    }

    // Create the Pod in the cluster
    result, err := clientset.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
    if err != nil {
        panic(err.Error())
    }
    fmt.Printf("Created pod %q.\n", result.GetObjectMeta().GetName())
}

Listing Pods in a Namespace

package main

import (
    "context"
    "fmt"
    "os"
    "path/filepath"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/util/homedir"
)

func main() {
    // ... (same kubeconfig and clientset setup as above) ...

    // List Pods in the "default" namespace
    pods, err := clientset.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        panic(err.Error())
    }
    fmt.Printf("There are %d pods in the cluster in the default namespace\n", len(pods.Items))
    for _, pod := range pods.Items {
        fmt.Printf(" * %s\n", pod.Name)
    }
}

Creating a ConfigMap

package main

import (
"context"
"fmt"
"os"
"path/filepath"

corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"
"k8s.io/client-go/util/homedir"
)

func main() {
// ... (same kubeconfig and clientset setup as above) ...

// Define the ConfigMap
configMap := &corev1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Name: "my-configmap",
Namespace: "default",
},
Data: map[string]string{
"config.yaml": "key1: value1\nkey2: value2",
"example.txt": "Hello, Kubernetes!",
},
}

// Create the ConfigMap in the cluster
result, err := clientset.CoreV1().ConfigMaps("default").Create(context.TODO(), configMap, metav1.CreateOptions{})
if err != nil {
panic(err.Error())
}
fmt.Printf("Created ConfigMap %q.\n", result.GetObjectMeta().GetName())
}

Get a Secret

package main

import (
"context"
"fmt"
"os"
"path/filepath"

metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"
"k8s.io/client-go/util/homedir"
)

func main() {
// ... (same kubeconfig and clientset setup as above) ...

// Get the secret named "my-secret"
secret, err := clientset.CoreV1().Secrets("default").Get(context.TODO(), "my-secret", metav1.GetOptions{})
if err != nil {
panic(err.Error())
}

// Access the data in the secret
password, ok := secret.Data["password"]
if ok {
fmt.Printf("The password is: %s\n", string(password)) // Remember to decode base64 if needed
} else {
fmt.Println("Password not found in secret.")
}
}

For more details, please refer to https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.28/.

k8s.io/apimachinery

k8s.io/apimachinery is a Go library that serves as a collection of shared code and data structures used throughout the Kubernetes codebase, particularly in the API definitions (k8s.io/api) and the client libraries (k8s.io/client-go). It's designed to promote consistency, reusability, and maintainability when working with Kubernetes objects.

Key Components and Usage Examples

The k8s.io/apimachinery module is quite extensive. Here, we will categorize its main parts and provide examples:

pkg/apis/meta/v1 (Metadata and Object Management)

  • ObjectMeta: Embedded in almost all Kubernetes objects, it provides metadata like name, namespace, labels, annotations, creation timestamp, etc.
  • TypeMeta: Specifies the API version and kind of an object.
  • ListMeta: Metadata for list objects (used when retrieving multiple resources).
  • LabelSelector: Defines a query to select objects based on their labels.
  • OwnerReference: Represents an owning object (e.g., a ReplicaSet owning a Pod).
package main

import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"fmt"
)

func main() {
// Example of creating an ObjectMeta
objectMeta := metav1.ObjectMeta{
Name: "my-object",
Namespace: "default",
Labels: map[string]string{
"app": "my-app",
"tier": "frontend",
},
Annotations: map[string]string{
"description": "An example object",
},
OwnerReferences: []metav1.OwnerReference{
{
APIVersion: "apps/v1",
Kind: "ReplicaSet",
Name: "my-replicaset",
UID: "some-uid", // Replace with actual UID if needed
},
},
}

// Example of creating a TypeMeta
typeMeta := metav1.TypeMeta{
Kind: "Pod",
APIVersion: "v1",
}

// Example of creating labelSelector
labelSelector := &metav1.LabelSelector{
MatchLabels: map[string]string{"app": "my-app"},
MatchExpressions: []metav1.LabelSelectorRequirement{
{
Key: "environment",
Operator: metav1.LabelSelectorOpIn,
Values: []string{"dev", "test"},
},
},
}
fmt.Println(objectMeta)
fmt.Println(typeMeta)
fmt.Println(labelSelector)
}

pkg/api/errors (API Errors)

  • errors.StatusError: Represents an error returned by the Kubernetes API server. It implements the standard Go error interface and exposes the underlying metav1.Status through the package's APIStatus interface.
  • metav1.StatusReason: Constants defining various reasons for API errors (e.g., metav1.StatusReasonNotFound, metav1.StatusReasonAlreadyExists, metav1.StatusReasonTimeout, etc.).
  • Helper functions to create specific error types (e.g., errors.NewNotFound, errors.NewAlreadyExists, errors.NewTimeoutError, etc.).
package main

import (
    "context"
    "fmt"
    "os"
    "path/filepath"

    apierrors "k8s.io/apimachinery/pkg/api/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/runtime/schema"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/util/homedir"
)

func main() {
    // Use the current context in kubeconfig
    kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
    if envvar := os.Getenv("KUBECONFIG"); len(envvar) > 0 {
        kubeconfig = envvar
    }
    config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    if err != nil {
        panic(err.Error())
    }

    // Create a Kubernetes clientset
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err.Error())
    }

    // Try to get a non-existent Pod
    _, err = clientset.CoreV1().Pods("default").Get(context.TODO(), "non-existent-pod", metav1.GetOptions{})
    if err != nil {
        if apierrors.IsNotFound(err) {
            fmt.Println("Pod not found.")
        } else if apierrors.IsTimeout(err) {
            fmt.Println("Timeout fetching pod.")
        } else if apierrors.IsAlreadyExists(err) {
            fmt.Println("Pod already exists.")
        } else {
            fmt.Println("Unknown error:", err)
        }
    }

    // Example of creating an error
    notFoundErr := apierrors.NewNotFound(schema.GroupResource{Group: "", Resource: "pods"}, "my-pod")
    fmt.Println("Custom Error:", notFoundErr)
}

sigs.k8s.io/controller-runtime

sigs.k8s.io/controller-runtime is a Go library that provides high-level APIs and utilities for building Kubernetes controllers. It aims to abstract away much of the boilerplate code typically involved in controller development, making it easier to write robust and maintainable controllers. It is part of the Kubernetes SIGs (Special Interest Groups) and is actively maintained.

Key Concepts and Components

Here’s a breakdown of the core concepts and components you’ll encounter in controller-runtime:

Manager: The central component. It’s responsible for managing all registered controllers, setting up shared caches, clients, schemes, and starting the controllers. It ensures proper initialization and graceful shutdown.

package main

import (
    "log"

    "k8s.io/client-go/rest"
    "sigs.k8s.io/controller-runtime/pkg/manager"
    "sigs.k8s.io/controller-runtime/pkg/manager/signals"
)

func main() {
    // Get a config to talk to the apiserver
    cfg, err := rest.InClusterConfig()
    if err != nil {
        log.Fatalf("Failed to get config: %v", err)
    }

    // Create a new manager to provide shared dependencies and start components
    mgr, err := manager.New(cfg, manager.Options{
        Scheme:             scheme, // Your scheme, e.g., the one from your API package
        MetricsBindAddress: "0",    // Disable metrics for this example
    })
    if err != nil {
        log.Fatalf("Failed to create manager: %v", err)
    }

    // ... (Add your controllers here) ...

    // Start the manager (and controllers)
    log.Print("Starting manager")
    if err := mgr.Start(signals.SetupSignalHandler()); err != nil {
        log.Fatalf("Failed to run manager: %v", err)
    }
}

Controller: Implements the core reconciliation logic. It watches for changes to specific Kubernetes resources and takes actions to converge the actual state towards the desired state specified in the resource’s specification.

package main

import (
// ... other imports ...
"github.com/my-org/my-operator/controllers"
appsv1 "k8s.io/api/apps/v1"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/builder"
"sigs.k8s.io/controller-runtime/pkg/handler"
"sigs.k8s.io/controller-runtime/pkg/source"
"k8s.io/apimachinery/pkg/runtime"
clientgoscheme "k8s.io/client-go/kubernetes/scheme"
)
var (
scheme = runtime.NewScheme()
)

func init() {
_ = clientgoscheme.AddToScheme(scheme)
_ = appsv1.AddToScheme(scheme) // Add your API types to the scheme
// +kubebuilder:scaffold:scheme
}

func main() {
// ... manager initialization ...
// Setup all Controllers
if err := controllers.AddToManager(mgr); err != nil {
log.Fatalf("Failed to add controllers to manager: %v", err)
}

// ... start manager ...
}

func AddToManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
For(&appsv1.Deployment{}).
Complete(&controllers.DeploymentReconciler{
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
})
}

Reconciler: An interface that you implement to define the reconciliation logic for your controller. It has a single method, Reconcile, which is called by the controller whenever a relevant event occurs.

package controllers

import (
    "context"

    appsv1 "k8s.io/api/apps/v1"
    "k8s.io/apimachinery/pkg/api/errors"
    "k8s.io/apimachinery/pkg/runtime"
    ctrl "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/client"
    "sigs.k8s.io/controller-runtime/pkg/log"
)

// DeploymentReconciler reconciles a Deployment object
type DeploymentReconciler struct {
    client.Client
    Scheme *runtime.Scheme
}

// +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete

func (r *DeploymentReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    log := log.FromContext(ctx)

    // Fetch the Deployment instance
    deployment := &appsv1.Deployment{}
    err := r.Get(ctx, req.NamespacedName, deployment)
    if err != nil {
        if errors.IsNotFound(err) {
            // Deployment not found, could have been deleted after reconcile request.
            // Owned objects are automatically garbage collected. For additional cleanup logic use finalizers.
            // Return and don't requeue
            log.Info("Deployment resource not found. Ignoring since object must be deleted")
            return ctrl.Result{}, nil
        }
        // Error reading the object - requeue the request.
        log.Error(err, "Failed to get Deployment")
        return ctrl.Result{}, err
    }

    log.Info("Reconciling Deployment", "Deployment.Name", deployment.Name)

    // Your custom logic goes here
    // Example: Check if the deployment has a specific annotation
    if deployment.Annotations["example.com/my-annotation"] == "true" {
        // Do something specific if the annotation is present
        log.Info("Deployment has the special annotation")
    }

    return ctrl.Result{}, nil
}

Client: An interface that provides methods for reading and writing Kubernetes objects. controller-runtime provides a cached client that automatically reads from the cache and falls back to the API server when necessary.

// Inside your Reconcile function:

// Get a Deployment
deployment := &appsv1.Deployment{}
err := r.Get(ctx, client.ObjectKey{Namespace: "default", Name: "my-deployment"}, deployment)

// List Pods in a namespace
podList := &corev1.PodList{}
err = r.List(ctx, podList, client.InNamespace("default"))

// Create a new Pod
newPod := &corev1.Pod{
    // ... set pod spec ...
}
err = r.Create(ctx, newPod)

// Update a Deployment
deployment.Spec.Replicas = pointer.Int32(5) // pointer is k8s.io/utils/pointer
err = r.Update(ctx, deployment)

// Delete a Pod
err = r.Delete(ctx, podToDelete)

// Patch a Deployment (using a strategic merge patch)
patch := client.MergeFrom(originalDeployment.DeepCopy())
originalDeployment.Annotations["example.com/updated"] = "true"
err = r.Patch(ctx, originalDeployment, patch)

// Apply a raw patch to a Deployment.
rawPatch := []byte(`{"spec":{"replicas": 3}}`)
err = r.Patch(ctx, deployment, client.RawPatch(types.StrategicMergePatchType, rawPatch))

Cache: An interface for an informer cache that stores Kubernetes objects locally. It’s used by the controller and client to reduce load on the API server.

// Get an informer for Pods (used internally by the cache)
podInformer, err := r.Cache.GetInformer(ctx, &corev1.Pod{})

// Add an event handler to the informer (e.g., to update an internal data structure)
podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
    AddFunc: func(obj interface{}) {
        // Handle new Pod
    },
    UpdateFunc: func(oldObj, newObj interface{}) {
        // Handle updated Pod
    },
    DeleteFunc: func(obj interface{}) {
        // Handle deleted Pod
    },
})

Scheme: A registry that maps Go types to Kubernetes GroupVersionKinds (GVKs). The controller-runtime manager sets up a scheme that is used for serialization and deserialization of objects.

Webhook: controller-runtime helps you set up and manage validating and mutating admission webhooks, which allow you to intercept and modify requests to the Kubernetes API server.

package main

import (
    "context"
    "net/http"

    corev1 "k8s.io/api/core/v1"
    "sigs.k8s.io/controller-runtime/pkg/webhook"
    "sigs.k8s.io/controller-runtime/pkg/webhook/admission"
)

func main() {
    // ... manager setup ...

    // Create a new webhook server
    hookServer := mgr.GetWebhookServer()

    // Register the validating webhook for Pods
    hookServer.Register("/validate-v1-pod", &webhook.Admission{
        Handler: &podValidator{},
    })

    // ... start manager ...
}

// podValidator implements admission.Handler
type podValidator struct {
    decoder *admission.Decoder
}

var _ admission.DecoderInjector = &podValidator{}

// InjectDecoder injects the decoder.
func (v *podValidator) InjectDecoder(d *admission.Decoder) error {
    v.decoder = d
    return nil
}

// Handle handles admission requests.
func (v *podValidator) Handle(ctx context.Context, req admission.Request) admission.Response {
    pod := &corev1.Pod{}

    err := v.decoder.Decode(req, pod)
    if err != nil {
        return admission.Errored(http.StatusBadRequest, err)
    }

    // Validate the Pod
    if pod.Annotations["example.com/validated"] != "true" {
        return admission.Denied("Pod must have the 'example.com/validated' annotation set to 'true'")
    }

    return admission.Allowed("")
}

Leader Election: Provides utilities to ensure that only one instance of your controller is actively reconciling objects at a time, even when multiple replicas are running. This is crucial for controllers that cannot have concurrent modifications.

mgr, err := manager.New(config, manager.Options{
    // ...
    LeaderElection:   true,
    LeaderElectionID: "my-operator-lock", // A unique ID for your operator
})

Metrics: Integrates with Prometheus to expose metrics about your controller’s performance and health.

k8s.io/client-go/kubernetes/scheme

k8s.io/client-go/kubernetes/scheme is a crucial component when working with the Kubernetes API in Go. It's responsible for managing the various schemes (serialization and deserialization formats) used to communicate with the Kubernetes API server.

It provides the following:

Serialization and Deserialization: Kubernetes objects are represented as Go structs. To send these objects to the API server (or receive them from the server), they need to be converted to and from formats like JSON or YAML. The scheme package handles this conversion process.

Built-in Types: It includes the built-in Kubernetes API types (Pods, Services, Deployments, etc.) and their corresponding serialization formats. This allows you to easily work with these standard Kubernetes objects in your Go code.

Custom Resource Definitions (CRDs): When you create custom resources, you need to register their types with the scheme so that the Kubernetes client can properly serialize and deserialize them. This is essential for working with CRDs in your Go applications.

Codecs: The scheme provides codecs (encoder/decoder) that handle the actual conversion between Go structs and the wire formats (JSON, YAML).
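A minimal sketch of building one scheme that knows about both the built-in types and our custom types (the import path below assumes the repo we initialized earlier with --repo; adjust it to your module path):

package main

import (
    "k8s.io/apimachinery/pkg/runtime"
    utilruntime "k8s.io/apimachinery/pkg/util/runtime"
    clientgoscheme "k8s.io/client-go/kubernetes/scheme"

    v1alpha1 "github.com/dilipkumardk/S3ObjectStoreController/api/v1alpha1" // assumed module path
)

var scheme = runtime.NewScheme()

func init() {
    // Register the built-in Kubernetes types (Pods, Deployments, ...).
    utilruntime.Must(clientgoscheme.AddToScheme(scheme))
    // Register our custom S3ObjStore types so clients can encode/decode them.
    utilruntime.Must(v1alpha1.AddToScheme(scheme))
}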

k8s.io/apimachinery/pkg/util/runtime

k8s.io/apimachinery/pkg/util/runtime is a utility package in Kubernetes that provides functions for handling common runtime operations, particularly those related to error handling and panics. It's designed to help you write more robust and reliable Kubernetes applications.

Here’s a breakdown of its key functionalities:

Error Handling

  • Must(err error): This function panics if the error is non-nil, effectively halting the program. It's often used in situations where an error should never occur (for example, registering types into a scheme), and if it does, it indicates a critical issue.
  • HandleError(err error): This function logs an error using the configured logger. It's a convenient way to handle errors without interrupting the program's flow.
  • PanicHandlers: This is a slice of functions that are called when a panic occurs. You can add your own custom panic handlers to perform actions like logging, cleanup, or graceful shutdown.

Goroutine Management

  • Goroutine safety: A common pattern is to defer runtime.HandleCrash() at the top of a goroutine so that a panic inside it is passed to the panic handlers (and logged) instead of silently taking down background work.
  • HandleCrash(additionalHandlers ...func(interface{})): Meant to be deferred; it recovers a panic in the surrounding function and calls the registered PanicHandlers plus any additional handlers you pass in.

Other Utilities

  • SetLogger(logger Logger): Sets the logger used by the HandleError function.
  • GetCaller(): Retrieves information about the caller of a function.

The following is an example.

package main

import (
    "fmt"

    "k8s.io/apimachinery/pkg/util/runtime"
)

func doWork() {
    // HandleCrash must be deferred: if this function panics, the panic value
    // is handed to the registered panic handlers plus the one supplied here.
    defer runtime.HandleCrash(func(r interface{}) {
        fmt.Printf("Recovered from panic: %v\n", r)
    })

    // ... your code that might panic ...
}

func main() {
    doWork()
}

sigs.k8s.io/controller-runtime/pkg/log/zap

sigs.k8s.io/controller-runtime/pkg/log/zap provides a logging implementation for your Kubernetes controllers using the popular zap logging library.

Following are important features of zap logger.

Structured Logging: zap is known for its structured logging capabilities. This means log messages include key-value pairs (contextual information) that make it much easier to filter, search, and analyze logs, especially in complex environments like Kubernetes.

Performance: zap is designed for performance. It's highly optimized for speed and efficiency, which is important in a Kubernetes environment where controllers need to handle events quickly.

Flexibility: zap offers a lot of flexibility in terms of log formatting, output destinations, and log levels. You can customize it to fit your specific logging needs.

Integration with controller-runtime: This package integrates zap with the controller-runtime framework, making it easy to use zap for logging in your controllers.

Sample code to use the logger:

import (
    ctrl "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/log/zap"
)

log := zap.New(zap.UseDevMode(true)) // Use development mode for more verbose logs
ctrl.SetLogger(log)                  // Set the logger for controller-runtime

log.Info("Reconciling object", "name", req.NamespacedName)

k8s.io/client-go/plugin/pkg/client/auth

k8s.io/client-go/plugin/pkg/client/auth provides a plugin mechanism for authenticating with a Kubernetes cluster.

Here’s why it’s important:

Variety of Authentication Methods: Kubernetes supports various authentication methods, such as:

  • Service accounts
  • X.509 client certificates
  • Static token files
  • OpenID Connect (OIDC)
  • And more…

Extensibility: This package allows you to easily add new authentication methods or customize existing ones without modifying the core Kubernetes client code.

Pluggable Architecture: It defines a set of interfaces (AuthProvider) that authentication plugins must implement. This makes it easy to swap in different authentication mechanisms as needed.

Client-Side Authentication: It focuses on authenticating client applications (like your controller or kubectl) that need to access the Kubernetes API.
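In practice, the Kubebuilder scaffold simply blank-imports this package in cmd/main.go so that all supported auth providers are compiled into the manager binary:

import (
    // Load all client auth plugins (GCP, Azure, OIDC, exec, ...) so the
    // manager can authenticate against clusters that rely on them.
    _ "k8s.io/client-go/plugin/pkg/client/auth"
)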

sigs.k8s.io/controller-runtime/pkg/healthz

sigs.k8s.io/controller-runtime/pkg/healthz is a handy package that helps you add health checks to your Kubernetes controllers. Health checks are a way to monitor the status and readiness of your controllers, allowing Kubernetes to know if they're functioning correctly and able to handle requests.

Here’s why it’s useful:

Liveness Probe: You can use healthz to create a liveness probe endpoint for your controller. Kubernetes periodically checks this endpoint to see if your controller is alive and responding. If the probe fails, Kubernetes will restart the controller to try to recover it.

Readiness Probe: You can also create a readiness probe endpoint. This tells Kubernetes whether your controller is ready to serve traffic. If the readiness probe fails, Kubernetes won’t send traffic to the controller until it becomes healthy again.

Simple Setup: The healthz package provides simple functions for creating health check endpoints with minimal code.

Integration with controller-runtime: It integrates seamlessly with the controller-runtime framework, making it easy to add health checks to your controllers.

How it’s used

import "sigs.k8s.io/controller-runtime/pkg/healthz"

// A simple health check that always returns healthy
func healthzHandler(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusOK)
w.Write([]byte("ok"))
}

// A readiness check that could check dependencies, etc.
func readinessHandler(w http.ResponseWriter, r *http.Request) {
// Check if the controller is ready (e.g., database connection)
if isReady() {
w.WriteHeader(http.StatusOK)
w.Write([]byte("ready"))
} else {
w.WriteHeader(http.StatusServiceUnavailable)
w.Write([]byte("not ready"))
}
}

mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
// ... other options ...
HealthProbeBindAddress: ":8081", // Address to bind health probes
})
if err != nil {
// Handle error
}

if err := mgr.AddHealthzCheck("healthz", healthz.Ping); err != nil {
// Handle error
}
if err := mgr.AddReadyzCheck("readyz", healthz.Ping); err != nil {
// Handle error
}

Go modules to interact with the AWS S3 bucket

Let's review a few Go modules we will use to interact with AWS S3 buckets.

github.com/aws/aws-sdk-go/aws/credentials

github.com/aws/aws-sdk-go/aws/credentials is responsible for managing the credentials that your Go applications use to authenticate with AWS services.

The following are its key functionalities.

Credential Providers: It provides various ways to obtain AWS credentials:

  • Environment Variables: Reads credentials from environment variables (e.g., AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY).
  • Shared Credentials File: Reads credentials from the shared credentials file (~/.aws/credentials).
  • EC2 Instance Roles: Retrieves credentials from an IAM role associated with an EC2 instance.
  • Web Identity Tokens: Uses web identity tokens (e.g., from OIDC providers) to fetch credentials.
  • STS (Security Token Service): Assumes IAM roles or retrieves temporary credentials from STS.

Credential Chain: It implements a credential chain that tries different providers in a specific order until it finds valid credentials. This allows you to configure multiple credential sources for flexibility.

Custom Providers: It lets you create your own custom credential providers if you have unique authentication requirements.

Caching: It caches credentials to improve performance and reduce the number of API calls to AWS.

Expiration Handling: It handles credential expiration and automatically refreshes them when necessary.

The following is sample code to use it.

import (
    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/credentials"
    "github.com/aws/aws-sdk-go/aws/session"
)

creds := credentials.NewEnvCredentials() // Use environment variables
// or
creds = credentials.NewSharedCredentials("", "profile_name") // Use the shared credentials file

// Use the credentials provider with an AWS session:
sess, _ := session.NewSession(&aws.Config{
    Credentials: creds,
})

github.com/aws/aws-sdk-go/aws

github.com/aws/aws-sdk-go/aws is a core package within the AWS SDK for Go. It provides essential functionalities and configurations that are used across all the different AWS services, including S3, EC2, DynamoDB, and others.

This is a foundational package upon which the service-specific packages (like github.com/aws/aws-sdk-go/service/s3) are built.

The following is an example of how to use it.

creds := credentials.NewStaticCredentials("your_access_key_id", "your_secret_access_key", "")
sess, _ := session.NewSession(&aws.Config{
    Credentials: creds,
})

github.com/aws/aws-sdk-go/service/s3

github.com/aws/aws-sdk-go/service/s3 is the Go package that provides the official AWS SDK client for interacting with Amazon S3. It offers the following.

API Client: It provides a client (s3.S3) that you can use to make API requests to Amazon S3. This client handles the underlying communication with the S3 service.

Types and Structures: It defines all the necessary data structures for working with S3, such as GetObjectInput, PutObjectInput, CreateBucketInput etc.

Methods for S3 Operations: The s3.S3 client provides methods that correspond to the various actions you can perform on S3, such as GetObject, PutObject, DeleteObject, etc.

The following is sample code to use it.

import "github.com/aws/aws-sdk-go/service/s3"
//....
//....
sess, err := session.NewSession()
if err != nil {
    // Handle error
}
svc := s3.New(sess) // Create client
result, err := svc.GetObject(&s3.GetObjectInput{
    Bucket: aws.String("your-bucket-name"),
    Key:    aws.String("your-object-key"),
})
if err != nil {
    // Handle error
}
// Process the result

Other relevant Go packages used

flag

The flag package in Go's standard library is the go-to tool for handling command-line flags in Go programs. It provides a simple and convenient way to define flags, parse them from the command line, and use their values in your code.

Defining Flags

  • flag.String(): Defines a string flag.
var name = flag.String("name", "Guest", "a string var")
  • flag.Int(): Defines an integer flag.
var age = flag.Int("age", 25, "an int var")
  • flag.Bool(): Defines a boolean flag.
var debug = flag.Bool("debug", false, "a bool var")
  • And more: flag supports various other types like float64, duration, and even custom types.

Parsing Flags: flag.Parse() parses the command-line arguments and sets the values of the defined flags.

Using the flag variables: After calling flag.Parse(), you can access the values of the flags directly through the variables you defined.

package main

import (
    "flag"
    "fmt"
)

func main() {
    var name = flag.String("name", "Guest", "a string var")
    var age = flag.Int("age", 25, "an int var")
    var debug = flag.Bool("debug", false, "a bool var")

    flag.Parse()

    fmt.Println("Name:", *name)
    fmt.Println("Age:", *age)
    fmt.Println("Debug:", *debug)
}

Running the program

$ go run main.go -name="Dilip" -age=30 -debug

errors package

It provides basic error handling functionalities. Here’s why we need the errors package:

  • Creating Errors: The errors.New() function allows you to create new error values with custom error messages. This is essential for signaling that something went wrong in your code.
err := errors.New("something went wrong")
  • Error Wrapping: The fmt.Errorf() function with the %w verb lets you wrap an error with additional context. This helps you create a chain of errors, preserving the original error while adding more information.
err := errors.New("failed to connect to database") 
wrappedErr := fmt.Errorf("unable to process request: %w", err)
  • Error Unwrapping: The errors.Unwrap() function allows you to access the underlying error in a wrapped error. This is useful for inspecting the root cause of an error.
if errors.Unwrap(wrappedErr) != nil {
    // Handle the unwrapped error
}
  • Error Comparison: The errors.Is() function helps you compare errors, even if they are wrapped. This is particularly useful for checking for specific error types.
if errors.Is(err, io.EOF) {
    // Handle end-of-file error
}
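A small self-contained example combining wrapping, errors.Is, and errors.Unwrap (the file path is illustrative):

package main

import (
    "errors"
    "fmt"
    "io/fs"
    "os"
)

func loadConfig(path string) error {
    if _, err := os.ReadFile(path); err != nil {
        // Wrap the underlying error with extra context.
        return fmt.Errorf("loading config %q: %w", path, err)
    }
    return nil
}

func main() {
    err := loadConfig("/nonexistent/config.yaml")
    // errors.Is walks the wrap chain, so it still matches fs.ErrNotExist.
    if errors.Is(err, fs.ErrNotExist) {
        fmt.Println("config file does not exist:", errors.Unwrap(err))
    } else if err != nil {
        fmt.Println("unexpected error:", err)
    }
}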

The os package

The os package in Go is your gateway to interacting with the operating system.

File Operations

  • os.Open(): Opens a file for reading.
  • os.Create(): Creates a new file or truncates an existing one.
  • os.Read(): Reads data from a file.
  • os.Write(): Writes data to a file.
  • os.Close(): Closes a file.
  • os.Remove(): Deletes a file.
  • os.Rename(): Renames a file.
  • os.Stat(): Retrieves information about a file (e.g., size, permissions).

Directory Operations

  • os.Mkdir(): Creates a new directory.
  • os.MkdirAll(): Creates a directory and any necessary parent directories.
  • os.Remove(): Deletes an empty directory.
  • os.RemoveAll(): Deletes a directory and its contents.
  • os.ReadDir(): Reads the contents of a directory.

Process Management

  • os.Getpid(): Returns the process ID of the current process.
  • os.Getppid(): Returns the parent process ID.
  • os.StartProcess(): Starts a new process.
  • os.Exit(): Terminates the current process.
  • os.Process.Signal(): Sends a signal to a process (e.g., interrupt, kill).

Environment Variables

  • os.Getenv(): Retrieves the value of an environment variable.
  • os.Setenv(): Sets the value of an environment variable.
  • os.Environ(): Returns a list of all environment variables.

Other functionalities

  • os.Args: Access command-line arguments.
  • os.Stdin, os.Stdout, os.Stderr: Standard input, output, and error streams.
  • os.Hostname(): Retrieves the hostname of the machine.

Example

package main

import (
    "fmt"
    "os"
)

func main() {
    // Open a file for reading
    file, err := os.Open("my_file.txt")
    if err != nil {
        panic(err)
    }
    defer file.Close()

    // Get the current working directory
    cwd, err := os.Getwd()
    if err != nil {
        panic(err)
    }
    fmt.Println("Current directory:", cwd)
}

kustomize

kustomize lets you customize raw, template-free YAML files for multiple purposes, leaving the original YAML untouched and usable as is.

Kustomize has two key concepts, Base and Overlays. With Kustomize we can reuse the base files (common YAMLs) across all environments and overlay (patches) specifications for each of those environments.

Overlaying is the process of creating a customized version of the manifest file (base manifest + overlay manifest = customized manifest file).

All customization specifications are contained within a kustomization.yaml file.

First, you need to understand the following Key Kustomize concepts.

  1. kustomization.yaml file
  2. Base and Overlays
  3. Transformers
  4. Patches

kustomization.yaml file

The kustomization.yaml file is the main file used by the Kustomize tool.

When you execute Kustomize, it looks for the file named kustomization.yaml. This file contains a list of all of the Kubernetes resources (YAML files) that should be managed by Kustomize. It also contains all the customizations that we want to apply to generate the customized manifest.

Here is an example kustomization.yaml file.

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - deployment.yaml
  - service.yaml
  - configmap.yaml
  - secret.yaml

commonLabels:
  app: my-java-app

configMapGenerator:
  - name: my-java-app-config
    files:
      - config/application.properties

secretGenerator:
  - name: my-java-app-secret
    literals:
      - username=admin
      - password=secretpassword

images:
  - name: my-java-app-image
    newName: my-registry/my-java-app
    newTag: v1.0.0

Base and Overlays

The Base folder represents the config that is going to be identical across all the environments. We put all the common Kubernetes manifests in the Base, with default values that we can overwrite.

On the other side, the Overlays folder allows us to customize the behavior on a per-environment basis. We create an Overlay for each environment and specify only the properties and parameters that we want to overwrite or change.

Basically, Kustomize uses patch directives to introduce environment-specific changes on top of the existing Base Kubernetes config files without disturbing them. We will look at patches in a bit.

Transformers

As the name indicates, transformers are something that transforms one config into another. Using Transformers, we can transform our base Kubernetes YAML configs. Kustomize has several built-in transformers. Let’s see some common transformers:

  1. commonLabels – adds a label to all Kubernetes resources
  2. namePrefix – adds a common prefix to all resource names
  3. nameSuffix – adds a common suffix to all resource names
  4. namespace – adds a common namespace to all resources
  5. commonAnnotations – adds an annotation to all resources
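For illustration, a kustomization.yaml that applies a few of these transformers might look like this (the names are hypothetical):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: staging
namePrefix: staging-
commonLabels:
  app: my-java-app
commonAnnotations:
  owner: platform-team

resources:
  - deployment.yaml
  - service.yaml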

Patches (Overlays)

Patches or overlays provide another method to modify Kubernetes configs. It provides more specific sections to change in the configuration. There are 3 parameters we need to provide:

  1. Operation Type: add or remove or replace
  2. Target: Resource name which we want to modify
  3. Value: Value name that will either be added or replaced. For the remove operation type, there would not be any value.

There are two ways to define a patch:

1. JSON 6902 patching

With this approach we provide two details: the target, and the patch itself, i.e. the operation, path, and new value.

patches:
  - target:
      kind: Deployment
      name: web-deployment
    patch: |-
      - op: replace
        path: /spec/replicas
        value: 5

2. Strategic merge patching

With this approach, the patch looks like a standard Kubernetes config: it mirrors the original manifest file, and we just include the fields that need to be modified.

Here is an example of an inline strategic merge patch.

patches:
  - patch: |-
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: web-deployment
      spec:
        replicas: 5

Here is the directory structure for using Kustomize.

kustomize
├── base
│   ├── deployment.yaml
│   ├── service.yaml
│   └── kustomization.yaml
└── overlays
    ├── dev
    │   ├── deployment-dev.yaml
    │   ├── service-dev.yaml
    │   └── kustomization.yaml
    └── prod
        ├── deployment-prod.yaml
        ├── service-prod.yaml
        └── kustomization.yaml
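The dev overlay's kustomization.yaml then references the base and lists its patches, roughly like this (a sketch matching the tree above; newer Kustomize versions accept the patches field shown here):

# overlays/dev/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../base

patches:
  - path: deployment-dev.yaml
  - path: service-dev.yaml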

We can deploy the customized manifest using the following command.

$ kustomize build overlays/dev | kubectl apply -f -

You can also use the following kubectl command.

$ kubectl apply -k overlays/dev

Finalizer concept in Kubernetes

Q. What are Finalizers in Kubernetes?

In essence, finalizers are keys added to a Kubernetes resource’s metadata that instruct Kubernetes to wait before fully deleting the resource. They act as pre-delete hooks, allowing controllers to perform cleanup tasks before a resource is permanently removed from the system.

Q. How Finalizers Work During Resource Deletion?

Deletion Request: A user or a controller initiates a deletion request for a resource (e.g., using kubectl delete pod my-pod).

metadata.deletionTimestamp is Set: Instead of immediately removing the resource, Kubernetes marks the resource for deletion by setting its metadata.deletionTimestamp to the current time. The resource enters a "terminating" state.

Finalizer Check: The Kubernetes API server checks if the resource has any entries in its metadata.finalizers field.

Controller Intervention (if finalizers are present):

  • If finalizers are present, responsible controllers (which are constantly watching for resources) notice that the deletionTimestamp is set, and the resource's finalizers field is not empty.
  • Each controller associated with a finalizer in the list performs its designated cleanup tasks.
  • Crucially: Once a controller completes its cleanup, it removes its corresponding finalizer from the metadata.finalizers list.

Resource Removal (when finalizers are gone): When the metadata.finalizers list becomes empty (meaning all controllers have completed their cleanup and removed their finalizers), the Kubernetes API server finally deletes the resource from etcd (the Kubernetes data store). The resource is then permanently removed.
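Concretely, while an S3ObjStore is terminating it looks something like this (the timestamp and the finalizer key below are illustrative):

apiVersion: custom.operator.k8.com.learning.k8.com/v1alpha1
kind: S3ObjStore
metadata:
  name: s3objstore-aws-bucket
  deletionTimestamp: "2025-01-07T10:15:00Z"   # set by the API server on delete
  finalizers:
    - s3.custom.operator.k8.com/finalizer     # removed by the controller after cleanup
spec:
  name: demo-operator-aws-s3-bucket-01
  locked: false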

Update the controller to interact with the AWS S3 bucket

Update S3ObjStoreReconciler to add the S3 API client

// S3ObjStoreReconciler reconciles a S3ObjStore object
type S3ObjStoreReconciler struct {
    client.Client
    Scheme *runtime.Scheme
    S3svc  *s3.S3
}

Update the main method to read the AWS access key and secret

We would like to read the AWS access key and secret from environment variables. The following is code for reference.

id, ok := os.LookupEnv("AWS_ACCESS_KEY_ID")
if !ok {
    setupLog.Error(errors.New("load aws access key failed"), "unable to load environment")
    os.Exit(2)
}
secret, ok := os.LookupEnv("AWS_SECRET_ACCESS_KEY")
if !ok {
    setupLog.Error(errors.New("load aws secret access key failed"), "unable to load environment")
    os.Exit(2)
}

Update the main method to build the AWS S3 session

sess, err := session.NewSession(&aws.Config{
    Region:      aws.String("us-west-2"),
    Credentials: credentials.NewStaticCredentials(id, secret, ""),
})

Update the main method to build the Reconciler with the S3 client

if err = (&controller.S3ObjStoreReconciler{
    Client: mgr.GetClient(),
    Scheme: mgr.GetScheme(),
    S3svc:  s3.New(sess),
}).SetupWithManager(mgr); err != nil {
    setupLog.Error(err, "unable to create controller", "controller", "S3ObjStore")
    os.Exit(1)
}

Update the Reconciler to handle both create and delete events.

func (r *S3ObjStoreReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    log := log.FromContext(ctx)

    instance := &customoperatork8comv1alpha1.S3ObjStore{}
    if err := r.Get(ctx, req.NamespacedName, instance); err != nil {
        return ctrl.Result{}, client.IgnoreNotFound(err)
    }
    if instance.ObjectMeta.DeletionTimestamp.IsZero() {
        // The object is not being deleted: creation/update flow.
        if instance.Status.State == "" {
            instance.Status.State = customoperatork8comv1alpha1.PENDING_STATE
            r.Status().Update(ctx, instance)
        }
        // finalizer is a package-level constant holding the finalizer key.
        controllerutil.AddFinalizer(instance, finalizer)
        if err := r.Update(ctx, instance); err != nil {
            return ctrl.Result{}, err
        }
        if instance.Status.State == customoperatork8comv1alpha1.PENDING_STATE {
            log.Info("starting to create resources")
            if err := r.createResources(ctx, instance); err != nil {
                instance.Status.State = customoperatork8comv1alpha1.ERROR_STATE
                r.Status().Update(ctx, instance)
                log.Error(err, "error creating bucket")
                return ctrl.Result{}, err
            }
        }
    } else {
        // The object is being deleted: clean up external resources, then remove the finalizer.
        log.Info("deletion flow")
        if err := r.deleteResources(ctx, instance); err != nil {
            instance.Status.State = customoperatork8comv1alpha1.ERROR_STATE
            r.Status().Update(ctx, instance)
            log.Error(err, "error deleting bucket")
            return ctrl.Result{}, err
        }
        controllerutil.RemoveFinalizer(instance, finalizer)
        if err := r.Update(ctx, instance); err != nil {
            return ctrl.Result{}, err
        }
    }
    return ctrl.Result{}, nil
}

Helper method to create the S3 bucket and its ConfigMap.

func (r *S3ObjStoreReconciler) createResources(ctx context.Context, objStore *customoperatork8comv1alpha1.S3ObjStore) error {
    // Update the status first.
    objStore.Status.State = customoperatork8comv1alpha1.CREATING_STATE
    err := r.Status().Update(ctx, objStore)
    if err != nil {
        return err
    }
    // Create the bucket.
    b, err := r.S3svc.CreateBucket(&s3.CreateBucketInput{
        Bucket:                     aws.String(objStore.Spec.Name),
        ObjectLockEnabledForBucket: aws.Bool(objStore.Spec.Locked),
    })
    if err != nil {
        return err
    }
    // Wait for it to be created.
    err = r.S3svc.WaitUntilBucketExists(&s3.HeadBucketInput{Bucket: aws.String(objStore.Spec.Name)})
    if err != nil {
        return err
    }
    // Now create the ConfigMap that exposes the bucket details.
    data := make(map[string]string, 0)
    data["bucketName"] = objStore.Spec.Name
    data["location"] = *b.Location
    // configMapName is a package-level format string for the ConfigMap name.
    configmap := &v1.ConfigMap{
        TypeMeta: metav1.TypeMeta{
            Kind:       "ConfigMap",
            APIVersion: "v1",
        },
        ObjectMeta: metav1.ObjectMeta{
            Name:      fmt.Sprintf(configMapName, objStore.Spec.Name),
            Namespace: objStore.Namespace,
        },
        Data: data,
    }
    err = r.Create(ctx, configmap)
    if err != nil {
        return err
    }
    objStore.Status.State = customoperatork8comv1alpha1.CREATED_STATE
    err = r.Status().Update(ctx, objStore)
    if err != nil {
        return err
    }
    return nil
}

Helper method to delete the S3 bucket.

func (r *S3ObjStoreReconciler) deleteResources(ctx context.Context, objStore *customoperatork8comv1alpha1.S3ObjStore) error {
    // Delete the bucket first.
    _, err := r.S3svc.DeleteBucket(&s3.DeleteBucketInput{Bucket: aws.String(objStore.Spec.Name)})
    if err != nil {
        return err
    }
    // Now delete the ConfigMap.
    configmap := &v1.ConfigMap{}
    err = r.Get(ctx, client.ObjectKey{
        Name:      fmt.Sprintf(configMapName, objStore.Spec.Name),
        Namespace: objStore.Namespace,
    }, configmap)
    if err != nil {
        return err
    }
    err = r.Delete(ctx, configmap)
    if err != nil {
        return err
    }
    return nil
}
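To try the whole flow locally, one option (a sketch assuming the standard Kubebuilder Makefile targets and the sample file from earlier) is to export the AWS credentials read by main() and run the manager outside the cluster:

# Export the credentials the manager reads in main()
export AWS_ACCESS_KEY_ID=<your-access-key-id>
export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>

# Regenerate manifests, install the CRD, and run the controller locally
make manifests install run

# In another terminal: create the S3ObjStore, watch the state change,
# then delete it and let the finalizer clean up the bucket
kubectl apply -f config/samples/custom.operator.k8.com_v1alpha1_s3objstore.yaml
kubectl get s3objstore
kubectl delete s3objstore s3objstore-aws-bucket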

Reference

This post is based on a course taught by Frank P Moley III on LinkedIn.

Happy learning Kubernetes :-)
