Develop a simple operator to deploy a web application using the Go Operator SDK

This blog post is an extended cheat sheet on how to start an operator implementation with the Go Operator SDK; it also contains some details on how to define Kubernetes deployments, secrets, and services.

The post references an example GitHub project called Example Tenancy Frontend Operator, which contains the source code for the example operator.

The operator has only one simple objective:

Deploy the example frontend application of the Open-Source Multi-Cloud Asset to build SaaS project to a minikube instance on the local computer.

For that, the Example Tenancy Frontend Operator creates the following Kubernetes resources:

  • A deployment
  • A service
  • Some secrets

The source code of the example frontend application is available in the open-source GitHub project multi-tenancy-frontend.

So, the objective is only to get the frontend web application running, without authentication, a backend connection, or specific values in a custom resource.

Again, just get the frontend application running on minikube using an operator!

This is an image of the simple web frontend we will deploy later with our operator. Just a web application with a headline.

If you are interested in the details of the deployed application, just take a look into that source code.


Remember: as said before, this is only an extended cheat sheet, not a workshop with detailed steps and explanations for the developed example operator.


If you want to reproduce some of the steps, ensure you have the Operator SDK, Go, Docker, and minikube installed. To get the source code, just clone the Example Tenancy Frontend Operator project to your local machine.
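A quick check that the prerequisites are in place (the version numbers on your machine will differ):

operator-sdk version
go version
docker version
minikube version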

The following, very simplified diagram shows what the locally running example frontend operator basically does:

  1. Observe whether a custom resource (an instance of the TenancyFrontend CRD) exists
  2. Create a deployment, secrets, and services

I did a live stream related to this blog post, in which I deployed the frontend application to a free IBM Cloud Kubernetes cluster. An extended PDF of the slides used in the live stream is available.

These are the sections of the blog post:

  • 1 Generate your own Operator API using the Go SDK
  • 2 Get a basic understanding of some parts in the operator implementation
    • 2.1 Let’s start with the setup of the example on the local machine
    • 2.2 Let’s understand the existing code of the project.
      • 2.2.1 How to define content for a custom resource definition
      • 2.2.2 Understand the controller a bit
      • 2.2.3 Remember the basics of the reconcile function
      • 2.2.4 Ensure that a container image is available for the deployment
    • 2.3 How to define the deployments?
      • 2.3.1 Understand the actual deployment of the frontend application
      • 2.3.2 Understand the implementation of the deployment definition
    • 2.4 Services and secrets definitions
      • 2.4.1 Understand the service definition for NodePort and ClusterIP
      • 2.4.2 Implement the NodePort and ClusterIP service definitions
  • 3 Run the operator locally and verify if the frontend is deployed and accessible on minikube
    • 3.1 Recreate the needed operator manifests
    • 3.2 Run the operator on minikube

If you only want to run the example “Frontend Operator” you can skip this section and start with “2 Get a basic understanding of some parts in the operator implementation”.

1. Generate your own Operator API using the Go SDK

This section contains steps you can follow on your computer to create your own operator API; here you should get a basic understanding of the folder structure and the created files.

The created operator API project is just a temporary project, which we can delete later. In section “2 Get a basic understanding of some parts in the operator implementation” we will clone the existing Example Tenancy Frontend Operator to our local computer.

Step 1: Create a project folder called frontendOperator

That folder name will later be reused in the PROJECT file as the project name.

mkdir frontendOperator
cd frontendOperator

Step 2: Init a new operator project

The --repo parameter sets the module name in the go.mod file.

  • The module name: module github.com/thomassuedbroecker/multi-tenancy-frontend-operator
operator-sdk init --domain example.net --repo github.com/thomassuedbroecker/multi-tenancy-frontend-operator
  • Example output:

This is only example output; you don’t need to execute any of the commands shown here.

Writing kustomize manifests for you to edit...
Writing scaffold for you to edit...
Get controller runtime:
...
Next: define a resource with:
$ operator-sdk create api
  • Verify the created folders and files:
tree .
  • Example output:
.
├── Dockerfile
├── Makefile
├── PROJECT
├── config
│   ├── default
│   │   ├── kustomization.yaml
│   │   ├── manager_auth_proxy_patch.yaml
│   │   └── manager_config_patch.yaml
│   ├── manager
│   │   ├── controller_manager_config.yaml
│   │   ├── kustomization.yaml
│   │   └── manager.yaml
│   ├── manifests
│   │   └── kustomization.yaml
│   ├── prometheus
│   │   ├── kustomization.yaml
│   │   └── monitor.yaml
│   ├── rbac
│   │   ├── auth_proxy_client_clusterrole.yaml
│   │   ├── auth_proxy_role.yaml
│   │   ├── auth_proxy_role_binding.yaml
│   │   ├── auth_proxy_service.yaml
│   │   ├── kustomization.yaml
│   │   ├── leader_election_role.yaml
│   │   ├── leader_election_role_binding.yaml
│   │   ├── role_binding.yaml
│   │   └── service_account.yaml
│   └── scorecard
│       ├── bases
│       │   └── config.yaml
│       ├── kustomization.yaml
│       └── patches
│           ├── basic.config.yaml
│           └── olm.config.yaml
├── go.mod
├── go.sum
├── hack
│   └── boilerplate.go.txt
└── main.go

10 directories, 29 files

Step 3: Create a new Operator API

operator-sdk create api --group multitenancy --version v1alpha1 --kind TenancyFrontend --resource --controller

  • Example output:

This is only example output; don’t execute any of the commands shown here.

Writing kustomize manifests for you to edit...
Writing scaffold for you to edit...
...
Next: implement your new API and generate the manifests (e.g. CRDs,CRs) with:
$ make manifests

  • Verify the additional created folders and files:
tree .
  • Example output:
.
├── Dockerfile
├── Makefile
├── PROJECT
├── api
│   └── v1alpha1
│       ├── groupversion_info.go
│       ├── tenancyfrontend_types.go
│       └── zz_generated.deepcopy.go
├── bin
│   └── controller-gen
├── config
│   ├── crd
│   │   ├── kustomization.yaml
│   │   ├── kustomizeconfig.yaml
│   │   └── patches
│   │       ├── cainjection_in_tenancyfrontends.yaml
│   │       └── webhook_in_tenancyfrontends.yaml
│   ├── default
│   │   ├── kustomization.yaml
│   │   ├── manager_auth_proxy_patch.yaml
│   │   └── manager_config_patch.yaml
│   ├── manager
│   │   ├── controller_manager_config.yaml
│   │   ├── kustomization.yaml
│   │   └── manager.yaml
│   ├── manifests
│   │   └── kustomization.yaml
│   ├── prometheus
│   │   ├── kustomization.yaml
│   │   └── monitor.yaml
│   ├── rbac
│   │   ├── auth_proxy_client_clusterrole.yaml
│   │   ├── auth_proxy_role.yaml
│   │   ├── auth_proxy_role_binding.yaml
│   │   ├── auth_proxy_service.yaml
│   │   ├── kustomization.yaml
│   │   ├── leader_election_role.yaml
│   │   ├── leader_election_role_binding.yaml
│   │   ├── role_binding.yaml
│   │   ├── service_account.yaml
│   │   ├── tenancyfrontend_editor_role.yaml
│   │   └── tenancyfrontend_viewer_role.yaml
│   ├── samples
│   │   ├── multitenancy_v1alpha1_tenancyfrontend.yaml
│   │   └── kustomization.yaml
│   └── scorecard
│       ├── bases
│       │   └── config.yaml
│       ├── kustomization.yaml
│       └── patches
│           ├── basic.config.yaml
│           └── olm.config.yaml
├── controllers
│   ├── suite_test.go
│   └── tenancyfrontend_controller.go
├── go.mod
├── go.sum
├── hack
│   └── boilerplate.go.txt
└── main.go

17 directories, 43 files

Step 4: Install missing components or versions, if needed

go get k8s.io/client-go@latest

  • Example output:
go get k8s.io/client-go@latest
go: downloading k8s.io/client-go v0.23.3
...
go get: upgraded golang.org/x/net v0.0.0-20210825183410-e898025ed96a => v0.0.0-20211209124913-491a49abca63
...
go get: upgraded k8s.io/utils v0.0.0-20210930125809-cb0fa318a74b => v0

This relates to the definitions in your go.mod file: your local installation must provide the required packages defined there.

require (
    github.com/onsi/ginkgo v1.16.5
    github.com/onsi/gomega v1.17.0
    k8s.io/apimachinery v0.23.3
    k8s.io/client-go v0.23.3
    sigs.k8s.io/controller-runtime v0.11.0
)
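Note: If the packages in your local module cache and the go.mod file get out of sync, running go mod tidy usually resolves this:

go mod tidy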

2. Get a basic understanding of some parts in the operator implementation with the Go SDK

In this section we will clone the existing project to your local computer and then walk through different topics of the implementation, just by viewing the existing example code.

We will not work with the project created before!

That project was only meant to give us a basic understanding of how project and API creation work with the Operator SDK and which files and folders are created.

This section will give us some insights into the implementation of the existing example “Frontend Operator”.


2.1 Let’s start with the setup of the example on the local machine

In these steps we set up the existing example on the local machine.

Step 1: Create a new folder on your machine:

mkdir example
cd example

Step 2: Clone the operator code into the “example” folder:

git clone https://github.com/thomassuedbroecker/multi-tenancy-frontend-operator.git

Step 3: Navigate to the frontendOperator folder of the cloned project

cd multi-tenancy-frontend-operator/frontendOperator

Step 4: Add the folder frontendOperator to your Visual Studio Code workspace

If you just want to run the frontendOperator, you can skip this section and move on to “3 Run the operator locally and verify if the frontend is deployed and accessible on minikube”.


2.2 Let’s understand the existing code of the project.

2.2.1 How to define content for a custom resource definition in the frontendOperator/api/v1alpha1/tenancyfrontend_types.go file?

In this file we define the entries of our custom resource definition (CRD). The file is used later, when we execute the two commands make generate and make manifests, which create the CRD definition for our operator.

As you can see, the spec struct below contains only two fields, which we will use in the example operator to create the frontend application instance.

This is just an example entry: Size int32 `json:"size"`

type TenancyFrontendSpec struct {
    // INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
    // Important: Run "make" to regenerate code after modifying this file

    // Size is an example field of TenancyFrontend. Edit tenancyfrontend_types.go to remove/update
    // The field DisplayName will be used in the frontend application as the headline of the web application.
    Size        int32  `json:"size"`
    DisplayName string `json:"displayname,omitempty"`
}
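The example project leaves these fields unvalidated. As an illustrative sketch (not part of the repository code), kubebuilder validation markers could be added above the fields before running make generate and make manifests:

    // Illustrative only: controller-gen turns these markers into
    // OpenAPI validation rules in the generated CRD.

    // +kubebuilder:validation:Minimum=1
    Size int32 `json:"size"`

    // +kubebuilder:validation:MaxLength=64
    DisplayName string `json:"displayname,omitempty"`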

  • The TenancyFrontend custom resource definition will be automatically created for you. You can find the created YAML file for the TenancyFrontend custom resource definition here.

An example for creating a custom resource later is available in the samples folder of the project, frontendOperator/config/samples/multitenancy_v1alpha1_tenancyfrontend.yaml. We will use this file later to create an instance (deployment) of the frontend application called tenancyfrontend-sample.

apiVersion: multitenancy.example.net/v1alpha1
kind: TenancyFrontend
metadata:
  name: tenancyfrontendsample
spec:
  # TODO(user): Add fields here
  size: 1
  displayname: MyFrontendDisplayname

2.2.2 Understand the controller a bit

The tenancyfrontend_controller.go file contains the controller loop implementation.

That tenancyfrontend_controller.go file contains a Reconcile function that is responsible for ensuring that the desired state of our operator is achieved: for every given custom resource instance, the frontend application will be deployed to minikube.

At the beginning of that file we find imports of existing Go packages we need to interact with Kubernetes.

These are some of the imported Go Kubernetes packages:

    // Add to read error from Kubernetes
    "k8s.io/apimachinery/pkg/api/errors"

    // Add to read deployments from Kubernetes
    appsv1 "k8s.io/api/apps/v1"
    "k8s.io/apimachinery/pkg/types"

    // Add to define the own deployment 'yaml' configuration
    "github.com/thomassuedbroecker/multi-tenancy-frontend-operator/api/v1alpha1"
    multitenancyv1alpha1 "github.com/thomassuedbroecker/multi-tenancy-frontend-operator/api/v1alpha1"
    corev1 "k8s.io/api/core/v1"
    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

2.2.3 Remember the basics of the reconcile function

The Reconcile function is responsible for ensuring that the desired state of our operator is achieved: for all the given custom resource instances (YAMLs that have been applied to minikube), the reconciler needs to verify that all frontend applications are deployed.

These are the major steps of the example code extract below.

Note: This is just a simpler and older extraction of the example code. You don’t need to change the code you have cloned from the repository.

  1. Set up logging
  2. Verify if a custom resource (TenancyFrontend) exists
  3. Verify if there is an existing deployment for that custom resource
  4. If no deployment exists, create a new one
func (r *TenancyFrontendReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    // Setup logging
    logger := log.FromContext(ctx)

    // "Verify if a CRD of TenancyFrontend exists"
    logger.Info("Verify if a CRD of TenancyFrontend exists")
    tenancyfrontend := &multitenancyv1alpha1.TenancyFrontend{}
    err := r.Get(ctx, req.NamespacedName, tenancyfrontend)

    if err != nil {
        if errors.IsNotFound(err) {
            // Request object not found, could have been deleted after reconcile request.
            // Owned objects are automatically garbage collected. For additional cleanup logic use finalizers.
            // Return and don't requeue
            logger.Info("TenancyFrontend resource not found. Ignoring since object must be deleted")
            return ctrl.Result{}, nil
        }
        // Error reading the object - requeue the request.
        logger.Error(err, "Failed to get TenancyFrontend")
        return ctrl.Result{}, err
    }

    // Check if the deployment already exists, if not create a new one
    logger.Info("Verify if the deployment already exists, if not create a new one")

    found := &appsv1.Deployment{}
    err = r.Get(ctx, types.NamespacedName{Name: tenancyfrontend.Name, Namespace: tenancyfrontend.Namespace}, found)

    if err != nil && errors.IsNotFound(err) {
        // Define a new deployment
        dep := r.deploymentForTenancyFronted(tenancyfrontend, ctx)
        logger.Info("Creating a new Deployment", "Deployment.Namespace", dep.Namespace, "Deployment.Name", dep.Name)
        err = r.Create(ctx, dep)
        if err != nil {
            logger.Error(err, "Failed to create new Deployment", "Deployment.Namespace", dep.Namespace, "Deployment.Name", dep.Name)
            return ctrl.Result{}, err
        }
        // Deployment created successfully - return and requeue
        return ctrl.Result{Requeue: true}, nil
    } else if err != nil {
        logger.Error(err, "Failed to get Deployment")
        return ctrl.Result{}, err
    }

    logger.Info("Just return nil")
    return ctrl.Result{}, nil
}

We also need to ensure that this function has the access rights to modify the Kubernetes resources on minikube. Therefore, the following two lines are added above the Reconcile function.

//+kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=core,resources=pods,verbs=get;list;watch
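Since the operator also creates services and secrets, matching markers for those resources are needed as well. The following two lines are a sketch of what they would look like (not copied from the repository):

//+kubebuilder:rbac:groups=core,resources=services,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=core,resources=secrets,verbs=get;list;watch;create;update;patch;delete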

2.2.4 Ensure that a container image is available for the deployment

We need to ensure that a container image for our frontend is available in a public container registry (just for simplification). In the current situation we can use a container image on quay.io that I created.

If you want to create your own container image for the frontend, you can clone the code from the Multi Tenancy Frontend GitHub project and push a container image to a public registry.

Later you need to change the image location in the deploymentForTenancyFronted function, as shown after the commands below.

  • Example commands:
docker login quay.io
docker build -t "quay.io/tsuedbroecker/service-frontend:latest" -f Dockerfile .
docker push  "quay.io/tsuedbroecker/service-frontend:latest"
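The related change in the deploymentForTenancyFronted function is the Image field of the container definition. With a placeholder account name it would look like this (YOUR_ACCOUNT is a placeholder, not a real path):

    // YOUR_ACCOUNT is a placeholder for your own registry account
    Image: "quay.io/YOUR_ACCOUNT/service-frontend:latest",
    Name:  "service-frontend",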

2.3 How to define the deployments?

To define the deployment inside the operator, we created a function called deploymentForTenancyFronted.


2.3.1 Understand the actual deployment of the frontend application

Before we start to implement the deploymentForTenancyFronted function, we need to understand the structure of the existing frontend application deployment. We can do this by examining the given Kubernetes deployment in YAML format; in this case you can use the deployment.yaml from the Multi Tenancy Frontend GitHub project, which contains the source code of the frontend.

Here are some of the important values:

  • image: IMAGE_NAME The container image.
  • Definitions for environment variables:
    • Some of the variables are defined via secrets (secretKeyRef with, for example, name: appid.discovery-endpoint) and some are not.
  • Custom definitions for liveness and readiness probes
  • containerPort
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: service-frontend
  template:
    metadata:
      labels:
        app: service-frontend # the operator replaces this value with the custom resource name (tenancyfrontendsample)
    spec:
      containers:
      - name: service-frontend
        image: IMAGE_NAME
        env:
        - name: VUE_APPID_DISCOVERYENDPOINT
          valueFrom:
            secretKeyRef:
              name: appid.discovery-endpoint
              key: VUE_APPID_DISCOVERYENDPOINT
        - name: VUE_APPID_CLIENT_ID
          valueFrom:
            secretKeyRef:
              name: appid.client-id-frontend
              key: VUE_APPID_CLIENT_ID       
        - name: VUE_APP_API_URL_CATEGORIES
          value: "VUE_APP_API_URL_CATEGORIES_VALUE" 
        - name: VUE_APP_API_URL_PRODUCTS
          value: "VUE_APP_API_URL_PRODUCTS_VALUE" 
        - name: VUE_APP_API_URL_ORDERS
          value: "VUE_APP_API_URL_ORDERS_VALUE" 
        - name: VUE_APP_CATEGORY_NAME
          value: "VUE_APP_CATEGORY_NAME_VALUE" 
        - name: VUE_APP_HEADLINE
          value: "VUE_APP_HEADLINE_VALUE" 
        - name: VUE_APP_ROOT
          value: "/" 
        ports:
        - containerPort: 8080
        livenessProbe:
          exec:
            command: ["sh", "-c", "curl -s http://localhost:8080"]
          initialDelaySeconds: 20
        readinessProbe:
          exec:
            command: ["sh", "-c", "curl -s http://localhost:8080"]
          initialDelaySeconds: 40
      restartPolicy: Always

2.3.2 Understand the implementation of the deployment definition

The deploymentForTenancyFronted function returns a value of type *appsv1.Deployment, which is used in the Reconcile function to create a deployment for the frontend application in Kubernetes.

Later on we will define the liveness and readiness probes; for that, we first define the execution command they will use. Here you see how a Go slice is used to define the command for the liveness and readiness probes that we saw in the YAML above.

    // Just reflect the command in the deployment.yaml
    // for the ReadinessProbe and LivenessProbe
    // command: ["sh", "-c", "curl -s http://localhost:8080"]
    mycommand := make([]string, 3)
    mycommand[0] = "sh"
    mycommand[1] = "-c"
    mycommand[2] = "curl -s http://localhost:8080"
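As a side note on the design: the same command can be written as a Go slice literal, which is shorter and avoids indexing mistakes:

    // Equivalent slice literal for the probe command
    mycommand := []string{"sh", "-c", "curl -s http://localhost:8080"}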

An important part of the deployment implementation is the definition of the environment variables, which are defined using either secrets or plain values. The implementations of the liveness and readiness probes are also interesting.

The following Go source code shows the implementation of the deployment definition for the deployment YAML above; it uses the imported Kubernetes packages listed earlier.

You can see that the specification of the custom resource is used in that function.

Two examples:

  • The size defines the replica count: replicas := frontend.Spec.Size
  • The displayname defines the headline of the frontend web application: Value: frontend.Spec.DisplayName

// deploymentForTenancyFronted returns a tenancyfrontend Deployment object
func (r *TenancyFrontendReconciler) deploymentForTenancyFronted(frontend *v1alpha1.TenancyFrontend, ctx context.Context) *appsv1.Deployment {
	logger := log.FromContext(ctx)
	ls := labelsForTenancyFrontend(frontend.Name, frontend.Name)
	replicas := frontend.Spec.Size

	// Just reflect the command in the deployment.yaml
	// for the ReadinessProbe and LivenessProbe
	// command: ["sh", "-c", "curl -s http://localhost:8080"]
	mycommand := make([]string, 3)
	mycommand[0] = "/bin/sh"
	mycommand[1] = "-c"
	mycommand[2] = "curl -s http://localhost:8080"

	// Using the context to log information
	logger.Info("Logging: Creating a new Deployment", "Replicas", replicas)
	message := "Logging: (Name: " + frontend.Name + ") \n"
	logger.Info(message)
	message = "Logging: (Namespace: " + frontend.Namespace + ") \n"
	logger.Info(message)

	for key, value := range ls {
		message = "Logging: (Key: [" + key + "] Value: [" + value + "]) \n"
		logger.Info(message)
	}

	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{
			Name:      frontend.Name,
			Namespace: frontend.Namespace,
		},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{
				MatchLabels: ls,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{
					Labels: ls,
				},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Image: "quay.io/tsuedbroecker/service-frontend:latest",
						Name:  "service-frontend",
						Ports: []corev1.ContainerPort{{
							ContainerPort: 8080,
							Name:          "nginx-port",
						}},
						Env: []corev1.EnvVar{{
							Name: "VUE_APPID_DISCOVERYENDPOINT",
							ValueFrom: &corev1.EnvVarSource{
								SecretKeyRef: &v1.SecretKeySelector{
									LocalObjectReference: corev1.LocalObjectReference{
										Name: "appid.discovery-endpoint",
									},
									Key: "VUE_APPID_DISCOVERYENDPOINT",
								},
							}},
							{Name: "VUE_APPID_CLIENT_ID",
								ValueFrom: &corev1.EnvVarSource{
									SecretKeyRef: &corev1.SecretKeySelector{
										LocalObjectReference: corev1.LocalObjectReference{
											Name: "appid.client-id-frontend",
										},
										Key: "VUE_APPID_CLIENT_ID",
									},
								}},
							{Name: "VUE_APP_API_URL_CATEGORIES",
								Value: "VUE_APP_API_URL_CATEGORIES_VALUE",
							},
							{Name: "VUE_APP_API_URL_PRODUCTS",
								Value: "VUE_APP_API_URL_PRODUCTS_VALUE",
							},
							{Name: "VUE_APP_API_URL_ORDERS",
								Value: "VUE_APP_API_URL_ORDERS_VALUE",
							},
							{Name: "VUE_APP_CATEGORY_NAME",
								Value: "VUE_APP_CATEGORY_NAME_VALUE",
							},
							{Name: "VUE_APP_HEADLINE",
								Value: frontend.Spec.DisplayName,
							},
							{Name: "VUE_APP_ROOT",
								Value: "/",
							}}, // End of Env listed values and Env definition
						ReadinessProbe: &corev1.Probe{
							ProbeHandler: corev1.ProbeHandler{
								Exec: &corev1.ExecAction{Command: mycommand},
							},
							InitialDelaySeconds: 20,
						},
						LivenessProbe: &corev1.Probe{
							ProbeHandler: corev1.ProbeHandler{
								Exec: &corev1.ExecAction{Command: mycommand},
							},
							InitialDelaySeconds: 20,
						},
					}}, // Container
				}, // PodSpec
			}, // PodTemplateSpec
		}, // Spec
	} // Deployment

	// Set TenancyFrontend instance as the owner and controller
	ctrl.SetControllerReference(frontend, dep, r.Scheme)
	return dep
}
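The labelsForTenancyFrontend helper used at the top of this function is not shown in this post. A minimal sketch of what such a helper could return (an assumption; the label keys in the repository code may differ):

// labelsForTenancyFrontend returns the labels for the deployment selector
// and the pod template; the service selectors must match these labels.
func labelsForTenancyFrontend(name string, instance string) map[string]string {
	return map[string]string{"app": name, "instance": instance}
}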

That deployment is created in the Reconcile function with the Create function provided by the reconciler instance. For more details, visit the documentation.

err = r.Create(ctx, dep)


2.4 Services and secrets definitions

The frontend application normally needs a backend service and a configured IBM Cloud AppID service to run, but we will not provide that information.

2.4.1 Understand the service definition for NodePort and ClusterIP

First, understand the existing Kubernetes service definition formats in the given YAML, which you can find in the Multi Tenancy Frontend GitHub source code project.

  • NodePort
apiVersion: v1
kind: Service
metadata:
  name: service-frontend
  labels:
    app: service-frontend
spec:
  type: NodePort
  ports:
    - port: 8080
      name: http
  selector:
    app: service-frontend # the operator replaces this value with the custom resource name (tenancyfrontendsample)

  • ClusterIP
apiVersion: v1
kind: Service
metadata:
  name: service-frontend-cip
  labels:
    app: service-frontend-cip
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: service-frontend # the operator replaces this value with the custom resource name (tenancyfrontendsample)

2.4.2 Implement the NodePort and ClusterIP service definitions

  • Define a NodePort service
// Create Service NodePort definition
func defineServiceNodePort(name string, namespace string) (*corev1.Service, error) {
	// Define map for the selector
	mselector := make(map[string]string)
	key := "app"
	value := name
	mselector[key] = value

	// Define map for the labels
	mlabel := make(map[string]string)
	key = "app"
	value = "service-frontend"
	mlabel[key] = value

	var port int32 = 8080

	return &corev1.Service{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Service"},
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: namespace, Labels: mlabel},
		Spec: corev1.ServiceSpec{
			Type: corev1.ServiceTypeNodePort,
			Ports: []corev1.ServicePort{{
				Port: port,
				Name: "http",
			}},
			Selector: mselector,
		},
	}, nil
}

  • Define a ClusterIP service
// Create Service ClusterIP definition
func defineServiceClust(name string, namespace string) (*corev1.Service, error) {
	// Define map for the selector
	mselector := make(map[string]string)
	key := "app"
	value := name
	mselector[key] = value

	// Define map for the labels
	mlabel := make(map[string]string)
	key = "app"
	value = "service-frontend"
	mlabel[key] = value

	var port int32 = 80
	var targetPort int32 = 8080
	var clustserv = name + "clusterip"

	return &corev1.Service{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Service"},
		ObjectMeta: metav1.ObjectMeta{Name: clustserv, Namespace: namespace, Labels: mlabel},
		Spec: corev1.ServiceSpec{
			Type: corev1.ServiceTypeClusterIP,
			Ports: []corev1.ServicePort{{
				Port:       port,
				TargetPort: intstr.IntOrString{IntVal: targetPort},
			}},
			Selector: mselector,
		},
	}, nil
}
  • Define a secret
func defineSecret(name string, namespace string, key string, value string) (*corev1.Secret, error) {
    m := make(map[string]string)
    m[key] = value

    return &corev1.Secret{
        TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Secret"},
        ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: namespace},
        Immutable:  new(bool),
        Data:       map[string][]byte{},
        StringData: m,
        Type:       "Opaque",
    }, nil
}

In the reconcile function, the Get function provided by the reconciler is used to check whether the services and secrets already exist; if they don’t, they are created. For more details, visit the documentation.

err = r.Get(context.TODO(), types.NamespacedName{Name: targetServPort.Name, Namespace: targetServPort.Namespace}, servPort)
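Putting it together, the reconcile logic for a service can follow the same Get-then-Create pattern as the deployment. This is a minimal sketch under that assumption; the helper name ensureNodePortService is hypothetical and not taken from the repository:

// Sketch: ensure that the NodePort service for the custom resource exists.
func (r *TenancyFrontendReconciler) ensureNodePortService(ctx context.Context, frontend *multitenancyv1alpha1.TenancyFrontend) error {
	logger := log.FromContext(ctx)

	// Build the desired service definition
	target, err := defineServiceNodePort(frontend.Name, frontend.Namespace)
	if err != nil {
		return err
	}

	// Check whether the service already exists
	found := &corev1.Service{}
	err = r.Get(ctx, types.NamespacedName{Name: target.Name, Namespace: target.Namespace}, found)
	if err != nil && errors.IsNotFound(err) {
		// Let Kubernetes garbage-collect the service together with the custom resource
		ctrl.SetControllerReference(frontend, target, r.Scheme)
		logger.Info("Creating a new Service", "Service.Namespace", target.Namespace, "Service.Name", target.Name)
		return r.Create(ctx, target)
	}
	return err
}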


3. Run the operator locally and verify if the frontend is deployed and accessible on minikube

If you want to run the example operator on your local machine, follow the next steps.

3.1 Recreate the needed operator manifests

This is just to ensure your environment is working.

Step 1: Regenerate the API code

cd frontendOperator
make generate

Step 2: Create the manifests (all needed YAMLs)

make manifests

3.2 Run the operator on minikube

Step 1: Start Docker Desktop

Step 2: Start minikube

minikube start --driver=docker

minikube status

kubectl get ns

Step 3: Open the minikube dashboard

minikube dashboard

Step 4: Run the operator locally

This runs the operator locally and connects it to the minikube instance we have running on the local machine.

That means the operator isn’t installed in minikube, but observes the Kubernetes API of minikube. With that in mind, we will later create a frontend application instance in minikube by applying a custom resource that our operator reacts to.

make install run

  • Example output:

In the output, we see that the operator is connected to minikube and observes the Kubernetes API.

1.645016132941799e+09   INFO    controller-runtime.metrics      Metrics server is starting to listen        {"addr": ":8080"}
1.645016132942167e+09   INFO    setup   starting manager
1.645016132942423e+09   INFO    Starting server {"kind": "health probe", "addr": "[::]:8081"}
1.6450161329424288e+09  INFO    Starting server {"path": "/metrics", "kind": "metrics", "addr": "[::]:8080"}
1.645016132942475e+09   INFO    controller.tenancyfrontend      Starting EventSource    {"reconciler group": "multitenancy.example.net", "reconciler kind": "TenancyFrontend", "source": "kind source: *v1alpha1.TenancyFrontend"}
1.6450161329425201e+09  INFO    controller.tenancyfrontend      Starting Controller     {"reconciler group": "multitenancy.example.net", "reconciler kind": "TenancyFrontend"}
1.6450161330429811e+09  INFO    controller.tenancyfrontend      Starting workers        {"reconciler group": "multitenancy.example.net", "reconciler kind": "TenancyFrontend", "worker count": 1}

In the Kubernetes dashboard we can see that nothing has been deployed to minikube by our operator so far, as shown in the image below.

Note: If you get an error during the execution of the make install run command, you can try to resolve it with the go mod tidy command.

Step 5: Deploy a custom resource to create a frontend application instance

Open a new terminal and apply the example definition of our custom resource.
Note: Ensure you are in the frontendOperator project folder.

kubectl apply -f config/samples/multitenancy_v1alpha1_tenancyfrontend.yaml 

Step 6: Verify the output in the terminal where the operator runs

Now we see an example of a deployed secret, appid.client-id-frontend, in minikube.

The log line “Target secret appid.client-id-frontend exists” shows that the secret is deployed.

1.6450307442006469e+09  INFO    controller.tenancyfrontend      Target secret appid.client-id-frontend exists, updating it now      {"reconciler group": "multitenancy.example.net", "reconciler kind": "TenancyFrontend", "name": "tenancyfrontend-sample", "namespace": "default"}
1.645030744206499e+09   INFO    controller.tenancyfrontend      Target secret appid.discovery-endpoint exists, updating it now      {"reconciler group": "multitenancy.example.net", "reconciler kind": "TenancyFrontend", "name": "tenancyfrontend-sample", "namespace": "default"}
1.6450307442127218e+09  INFO    controller.tenancyfrontend      Just return nil {"reconciler group": "multitenancy.example.net", "reconciler kind": "TenancyFrontend", "name": "tenancyfrontend-sample", "namespace": "default"}

In the following GIF we see the resources for the frontend web application that we created with our operator. Remember: the operator runs only locally and doesn’t run in minikube.

Step 7: Access the example application

Open a new terminal.

a) Try to use NodePort

That will not work with minikube on our local computer, because with the Docker driver the minikube node IP is typically not reachable directly from the host, but we can verify the needed steps.

Step 1: Get the node port of the service-frontend

export NODEPORT=$(kubectl get svc tenancyfrontendsample -o=jsonpath='{.spec.ports[0].nodePort}')
echo $NODEPORT

Step 2: Get minikube IP

export MINIKUBEIP=$(minikube ip)
echo $MINIKUBEIP

Note: In case you have a Kubernetes cluster running on IBM Cloud, you can get a worker node IP like this:

workernodeip=$(ibmcloud ks workers --cluster YOUR_CLUSTER | awk '/Ready/ {print $2;exit;}')
echo $workernodeip

Step 3: Open the example application in the browser

You will notice this will not work!

open http://$MINIKUBEIP:$NODEPORT
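As a side note, not used in this post: with the Docker driver, minikube can also create a reachable URL for a NodePort service itself, which is often the easiest way to test:

minikube service tenancyfrontendsample --url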

b) Use a minikube tunnel

The tunnel allows access to the example frontend application; we will create an additional service for the frontend application by exposing the tenancyfrontend-sample deployment.

Step 1: Create a tunnel

minikube tunnel

Step 2: Get the deployments

Open a new terminal and get the deployments.

kubectl get deployments | grep "tenancyfrontend"

  • Example output:
tenancyfrontend-sample   1/1     1            1           125m

Step 3: Expose the deployment (which creates a new service)

kubectl expose deployment tenancyfrontend-sample --type=LoadBalancer --port=8080

Step 4: Get the newly created service

kubectl get svc tenancyfrontend-sample

  • Example output:

Now you can see the external IP and the port mapping for that service:

NAME                     TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
tenancyfrontend-sample   LoadBalancer   10.103.234.205   127.0.0.1     8080:32104/TCP   17m

Step 5: Access the example application

Open the application in your browser.

http://127.0.0.1:8080


Summary

This really is an extended cheat sheet, but from my perspective it covers very essential topics differently than the memcached example of the Operator SDK does.


I hope this was useful for you, and let’s see what’s next.

Greetings,

Thomas

#operator, #go, #operatorsdk, #minikube, #kubernetes, #buildlabs4saas, #operatorlearningjourney

10 thoughts on “Develop a simple operator to deploy a web application using the Go Operator SDK”


  1. this is the best GO + operator-sdk tutorial I’ve found.
    good job!

    just one question, when i do operator-sdk init, and I don’t have git-repo with go modules, what should I do?


    1. Hi Tor,

      thanks for your feedback.

      Here my feedback to your interesting question:

      You need the GitHub repository as the reference from which your Go code can import packages, for example for the API version specification.

      Just take a look into the tenancyfrontend_controller.go file and verify the import statements: https://github.com/thomassuedbroecker/multi-tenancy-frontend-operator/blob/main/frontendOperator/controllers/tenancyfrontend_controller.go.

      Overall, this is related to the Go implementation and how packages are imported (https://go.dev/doc/code).

      So, if you don’t have a GitHub project but your Go code can find your package, you are good to ‘go’. The easiest way for me was just to create one, and that is a common way.

      You can also find the repo information in the PROJECT file, https://github.com/thomassuedbroecker/multi-tenancy-frontend-operator/blob/main/frontendOperator/PROJECT, and as you can see, this file also includes the API information.

      In the future you maybe will have different versions of your operator.

      I hope this helps to understand why you need the repository information.

      Regards,

      Thomas


  2. i kept failing using “go + operator-sdk” to deploy an application for several days until i met this blog, and it finally worked!!!!

    thank you very much!

