
Deploying the sample guestbook application
In this chapter, you will deploy the classic guestbook sample Kubernetes application. You will mostly follow the steps from https://kubernetes.io/docs/tutorials/stateless-application/guestbook/ with some modifications. These modifications demonstrate additional concepts, such as ConfigMaps, that are not present in the original sample.
The sample guestbook application is a simple, multi-tier web application. The different tiers in this application will have multiple instances. This is beneficial for both high availability and for scale. The guestbook's front end is a stateless application because the front end doesn't store any state. The Redis cluster in the back end is stateful as it stores all the guestbook entries.
You will be using this application as the basis for testing out the scaling of the back end and the front end, independently, in the next chapter.
Before we get started, let's consider the application that we'll be deploying.
Introducing the application
The application stores and displays guestbook entries. You can use it to record the opinion of all the people who visit your hotel or restaurant, for example. Along the way, we will explain Kubernetes concepts such as deployments and ReplicaSets.
The application uses a PHP front end, which will be deployed with multiple replicas. For data storage, it uses Redis, an in-memory key-value database that is most often used as a cache. Redis is among the most popular container images according to https://www.datadoghq.com/docker-adoption/.

Figure 3.2: High-level overview of the guestbook application
We will begin deploying this application by deploying the Redis master.
Deploying the Redis master
In this section, you are going to deploy the Redis master. You will learn about the YAML syntax that is required for this deployment. In the next section, you will make changes to this YAML. Before making changes, let's start by deploying the Redis master.
Perform the following steps to complete the task:
- Open your friendly Cloud Shell, as highlighted in Figure 3.3:
Figure 3.3: Opening the Cloud Shell
- If you have not cloned the GitHub repository for this book, please do so now by using the following commands:
git clone https://github.com/PacktPublishing/Hands-On-Kubernetes-on-Azure---Second-Edition Hands-On-Kubernetes-on-Azure
cd Hands-On-Kubernetes-on-Azure/Chapter03/
- Enter the following command to deploy the master:
kubectl apply -f redis-master-deployment.yaml
It will take some time for the application to download and start running. While you wait, let's understand the command you just typed and executed. Let's start by exploring the content of the YAML file that was used:
1  apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
2  kind: Deployment
3  metadata:
4    name: redis-master
5    labels:
6      app: redis
7  spec:
8    selector:
9      matchLabels:
10       app: redis
11       role: master
12       tier: backend
13   replicas: 1
14   template:
15     metadata:
16       labels:
17         app: redis
18         role: master
19         tier: backend
20     spec:
21       containers:
22       - name: master
23         image: k8s.gcr.io/redis:e2e # or just image: redis
24         resources:
25           requests:
26             cpu: 100m
27             memory: 100Mi
28         ports:
29         - containerPort: 6379
Let's dive deeper into the code to understand the provided parameters:
- Line 2: This states that we are creating a Deployment. As explained in Chapter 1, Introduction to Docker and Kubernetes, a deployment is a wrapper around Pods that makes it easy to update and scale Pods.
- Lines 4-6: Here, the Deployment is given a name, which is redis-master.
- Lines 8-12: These lines specify which Pods this Deployment will manage. In this example, the Deployment selects and manages all Pods whose labels match the selector (app: redis, role: master, and tier: backend). This selector exactly matches the labels provided in lines 17-19.
- Line 13: Tells Kubernetes that we need exactly one copy of the running Redis master. This is a key aspect of the declarative nature of Kubernetes. You provide a description of the containers your applications need to run (in this case, only one replica of the Redis master), and Kubernetes takes care of it.
- Lines 14-19: These lines add labels to the Pods created from this template so that they can be grouped and connected to other objects. We will discuss later how these labels are used.
- Line 22: Gives this container a name, which is master. In the case of a multi-container Pod, each container in a Pod requires a unique name.
- Line 23: This line indicates the Docker image that will be run. In this case, it is the redis image tagged with e2e (the latest Redis image that successfully passed its end-to-end [e2e] tests).
- Lines 28-29: These two lines indicate that the container is going to listen on port 6379.
- Lines 24-27: These lines set the CPU and memory resources requested for the container. In this case, the request is 0.1 CPU, which is equal to 100m and is also often referred to as 100 millicores. The memory requested is 100Mi, or 104,857,600 bytes, which is roughly 105MB (https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/). You can also set CPU and memory limits in a similar way. Limits are caps on what a container can use. If your Pod hits the CPU limit, it gets throttled, whereas if it hits the memory limit, it gets restarted. Setting requests and limits is a best practice in Kubernetes.
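The selector behavior in lines 8-12 amounts to a subset match: the Deployment manages every Pod whose labels contain all of the matchLabels pairs, even if the Pod carries extra labels. A minimal Python sketch of that rule (an illustration, not the actual Kubernetes implementation):

```python
def matches(selector: dict, pod_labels: dict) -> bool:
    # A Pod matches when every key/value pair in matchLabels is present
    # in the Pod's labels; additional labels on the Pod are allowed.
    return all(pod_labels.get(k) == v for k, v in selector.items())

selector = {"app": "redis", "role": "master", "tier": "backend"}

# Matches: all three selector pairs are present.
print(matches(selector, {"app": "redis", "role": "master", "tier": "backend"}))  # True
# Does not match: role differs.
print(matches(selector, {"app": "redis", "role": "slave", "tier": "backend"}))   # False
```

This is why the selector in lines 8-12 must agree with the template labels in lines 17-19: if they diverged, the Deployment would create Pods it cannot select.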
Note
The Kubernetes YAML definition is similar to the arguments given to Docker to run a particular container image. If you had to run this manually, you would define this example in the following way:
# Run a container named master, listening on port 6379, with 100M of memory and 0.1 CPU, using the redis:e2e image.
docker run --name master -p 6379:6379 -m 100M --cpus 0.1 -d k8s.gcr.io/redis:e2e
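The quantities in the resource request are plain strings with units; the conversions behind them (100m is 0.1 CPU, 100Mi is 104,857,600 bytes) can be checked with a short Python sketch (a simplified parser, not the full Kubernetes quantity grammar):

```python
def cpu_to_cores(quantity: str) -> float:
    # "100m" means 100 millicores; a bare number means whole cores.
    if quantity.endswith("m"):
        return int(quantity[:-1]) / 1000
    return float(quantity)

def mem_to_bytes(quantity: str) -> int:
    # Binary suffixes Ki, Mi, Gi are powers of 1024.
    units = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3}
    for suffix, factor in units.items():
        if quantity.endswith(suffix):
            return int(quantity[:-2]) * factor
    return int(quantity)

print(cpu_to_cores("100m"))   # 0.1
print(mem_to_bytes("100Mi"))  # 104857600
```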
In this section, you have deployed the Redis master and learned about the syntax of the YAML file that was used to create this deployment. In the next section, you will examine the deployment and learn about the different elements that were created.
Examining the deployment
The redis-master deployment should be complete by now. Continue in the Azure Cloud Shell that you opened in the previous section and type the following:
kubectl get all
You should get the output displayed in Figure 3.4:

Figure 3.4: Output displaying the objects that were created by your deployment
You can see that we have a deployment named redis-master. It controls a ReplicaSet named redis-master-&lt;random id&gt;. On further examination, you will also find that the ReplicaSet is controlling a Pod, redis-master-&lt;replica set random id&gt;-&lt;random id&gt;. Figure 3.1 has a graphical representation of this relationship.
More details can be obtained by executing the kubectl describe <object> <instance-name> command, as follows:
kubectl describe deployment/redis-master
This will generate an output as follows:

Figure 3.5: Output of describing the deployment
You have now launched a Redis master with the default configuration. Typically, you would launch an application with an environment-specific configuration.
In the next section, we will introduce a new concept called ConfigMaps and then recreate the Redis master. So, before proceeding, we need to clean up the current version, and we can do so by running the following command:
kubectl delete deployment/redis-master
Executing this command will produce the following output:
deployment.apps "redis-master" deleted
In this section, you examined the Redis master deployment you created. You saw how a deployment relates to a ReplicaSet and how a ReplicaSet relates to Pods. In the following section, you will recreate this Redis master with an environment-specific configuration provided via a ConfigMap.
Redis master with a ConfigMap
There was nothing wrong with the previous deployment. In practical use cases, it would be rare that you would launch an application without some configuration settings. In this case, we are going to set the configuration settings for redis-master using a ConfigMap.
A ConfigMap is a portable way of configuring containers without having specialized images for each configuration. It has a key-value pair for data that needs to be set on a container. A ConfigMap is used for non-secret configuration. Kubernetes has a separate object called a Secret. A Secret is used for configurations that contain critical data such as passwords. This will be explored in detail in Chapter 10, Securing your AKS cluster of this book.
In this example, we are going to create a ConfigMap. In this ConfigMap, we will configure redis-config as the key and the value will be:
maxmemory 2mb
maxmemory-policy allkeys-lru
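The allkeys-lru policy tells Redis to evict the least recently used key once maxmemory is reached. The idea behind the policy can be sketched with a tiny Python cache (an illustration of LRU eviction only; real Redis uses an approximated, sampling-based LRU):

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = OrderedDict()  # insertion order tracks recency

    def set(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)       # refresh recency on overwrite
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)    # evict the least recently used key

    def get(self, key):
        if key in self.data:
            self.data.move_to_end(key)       # reads also refresh recency
        return self.data.get(key)

cache = LRUCache(2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")       # "a" is now the most recently used
cache.set("c", 3)    # capacity exceeded: "b" is evicted
print(list(cache.data))  # ['a', 'c']
```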
Now, let's create this ConfigMap. There are two ways to create a ConfigMap:
- Creating a ConfigMap from a file
- Creating a ConfigMap from a YAML file
We will explore each one in detail.
Creating a ConfigMap from a file
The following steps will help us create a ConfigMap from a file:
- Open the Azure Cloud Shell code editor by typing code redis-config in the terminal. Copy and paste the following two lines and save it as redis-config:
maxmemory 2mb
maxmemory-policy allkeys-lru
- Now you can create the ConfigMap using the following code:
kubectl create configmap example-redis-config --from-file=redis-config
- You should get an output as follows:
configmap/example-redis-config created
- You can use the same command to describe this ConfigMap:
kubectl describe configmap/example-redis-config
- The output will be as shown in Figure 3.6:
Figure 3.6: Output of describing the ConfigMap
In this example, you created the ConfigMap by referring to a file on disk. A different way to deploy ConfigMaps is by creating them from a YAML file. Let's have a look at how this can be done in the following section.
Creating a ConfigMap from a YAML file
In this section, you will recreate the ConfigMap from the previous section using a YAML file:
- To start, delete the previously created ConfigMap:
kubectl delete configmap/example-redis-config
- Copy and paste the following lines into a file named example-redis-config.yaml, and then save the file:
apiVersion: v1
data:
  redis-config: |-
    maxmemory 2mb
    maxmemory-policy allkeys-lru
kind: ConfigMap
metadata:
  name: example-redis-config
  namespace: default
- You can now recreate your ConfigMap via the following command:
kubectl create -f example-redis-config.yaml
- You should get an output as follows:
configmap/example-redis-config created
- Next, run the following command:
kubectl describe configmap/example-redis-config
- This command returns the same output as the previous one:
Name:         example-redis-config
Namespace:    default
Labels:       &lt;none&gt;
Annotations:  &lt;none&gt;

Data
====
redis-config:
----
maxmemory 2mb
maxmemory-policy allkeys-lru

Events:  &lt;none&gt;
As you can see, using a YAML file, you were able to create the same ConfigMap.
Note:
kubectl get has the useful option -o, which can be used to get the output of an object in either YAML or JSON. This is very useful in cases where you have made manual changes to a system and want to see the resulting object in YAML format. You can get the current ConfigMap in YAML using the following command:
kubectl get -o yaml configmap/example-redis-config
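Because YAML is a superset of JSON, the same ConfigMap manifest can also be generated programmatically and passed to kubectl create -f. A sketch using only Python's standard library (the example-redis-config.json file name is our own choice):

```python
import json

config_data = "maxmemory 2mb\nmaxmemory-policy allkeys-lru"

# The same ConfigMap as the YAML file, built as a plain dictionary.
manifest = {
    "apiVersion": "v1",
    "kind": "ConfigMap",
    "metadata": {"name": "example-redis-config", "namespace": "default"},
    "data": {"redis-config": config_data},
}

manifest_json = json.dumps(manifest, indent=2)
print(manifest_json)
# Saving this string as example-redis-config.json would let you run:
#   kubectl create -f example-redis-config.json
```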
Now that you have the ConfigMap defined, let's use it.
Using a ConfigMap to read in configuration data
In this section, you will reconfigure the redis-master deployment to read its configuration from the ConfigMap:
- To start, modify redis-master-deployment.yaml to use the ConfigMap as follows. The changes you need to make will be explained after the source code:
Note
If you downloaded the source code accompanying this book, there is a file in Chapter 3, Application deployment on AKS, called redis-master-deployment_Modified.yaml, which has the necessary changes applied to it.
1  apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
2  kind: Deployment
3  metadata:
4    name: redis-master
5    labels:
6      app: redis
7  spec:
8    selector:
9      matchLabels:
10       app: redis
11       role: master
12       tier: backend
13   replicas: 1
14   template:
15     metadata:
16       labels:
17         app: redis
18         role: master
19         tier: backend
20     spec:
21       containers:
22       - name: master
23         image: k8s.gcr.io/redis:e2e
24         command:
25         - redis-server
26         - "/redis-master/redis.conf"
27         env:
28         - name: MASTER
29           value: "true"
30         volumeMounts:
31         - mountPath: /redis-master
32           name: config
33         resources:
34           requests:
35             cpu: 100m
36             memory: 100Mi
37         ports:
38         - containerPort: 6379
39       volumes:
40       - name: config
41         configMap:
42           name: example-redis-config
43           items:
44           - key: redis-config
45             path: redis.conf
Let's dive deeper into the code to understand the different sections:
- Lines 24-26: These lines introduce a command that will be executed when your Pod starts. In this case, this will start the redis-server pointing to a specific configuration file.
- Lines 27-29: These lines show how to pass configuration data to your running container using environment variables. In Docker form, this would be equivalent to docker run -e "MASTER=true" --name master -p 6379:6379 -m 100M --cpus 0.1 -d k8s.gcr.io/redis:e2e. This sets the environment variable MASTER to true. Your application can read the environment variable settings for its configuration.
- Lines 30-32: These lines mount the volume called config (this volume is defined in lines 39-45) on the /redis-master path on the running container. It will hide whatever exists on /redis-master on the original container.
In Docker terms, it would be equivalent to docker run -v config:/redis-master -e "MASTER=true" --name master -p 6379:6379 -m 100M --cpus 0.1 -d k8s.gcr.io/redis:e2e.
- Line 40: Gives the volume the name config. This name will be used within the context of this Pod.
- Lines 41-42: Declare that this volume should be loaded from the example-redis-config ConfigMap. This ConfigMap should already exist in the system. You have already defined this, so you are good.
- Lines 43-45: Here, you are loading the value of the redis-config key (the two-line maxmemory settings) as a redis.conf file.
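The items list in lines 43-45 projects selected ConfigMap keys onto file paths under the mount point. That projection can be sketched as a simple dictionary transform (illustrative only; the kubelet performs the real file mounts):

```python
def project(configmap_data: dict, items: list, mount_path: str) -> dict:
    # Each item maps a ConfigMap key to a relative file path under the mount,
    # yielding absolute-path -> file-contents pairs.
    return {
        f"{mount_path}/{item['path']}": configmap_data[item["key"]]
        for item in items
    }

data = {"redis-config": "maxmemory 2mb\nmaxmemory-policy allkeys-lru"}
items = [{"key": "redis-config", "path": "redis.conf"}]

files = project(data, items, "/redis-master")
print(list(files))  # ['/redis-master/redis.conf']
```

This is how the two-line value of the redis-config key ends up as /redis-master/redis.conf inside the container, which is exactly the path passed to redis-server in lines 24-26.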
- Let's create this updated deployment:
kubectl create -f redis-master-deployment_Modified.yaml
- This should output the following:
deployment.apps/redis-master created
- Let's now make sure that the configuration was successfully applied. First, get the Pod's name:
kubectl get pods
- Then exec into the Pod and verify that the settings were applied:
kubectl exec -it redis-master-&lt;pod-id&gt; -- redis-cli
127.0.0.1:6379&gt; CONFIG GET maxmemory
1) "maxmemory"
2) "2097152"
127.0.0.1:6379&gt; CONFIG GET maxmemory-policy
1) "maxmemory-policy"
2) "allkeys-lru"
127.0.0.1:6379&gt; exit
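The value 2097152 returned by CONFIG GET is simply 2mb converted to bytes: in redis.conf, suffixes such as kb, mb, and gb are powers of 1,024, while k, m, and g are powers of 1,000. A quick Python sketch of that convention:

```python
def redis_size_to_bytes(quantity: str) -> int:
    # redis.conf convention: kb/mb/gb are powers of 1024, k/m/g powers of 1000.
    quantity = quantity.lower()
    units = {"kb": 1024, "mb": 1024**2, "gb": 1024**3,
             "k": 1000, "m": 1000**2, "g": 1000**3}
    # Check two-letter suffixes before one-letter ones.
    for suffix in ("kb", "mb", "gb", "k", "m", "g"):
        if quantity.endswith(suffix):
            return int(quantity[:-len(suffix)]) * units[suffix]
    return int(quantity)

print(redis_size_to_bytes("2mb"))  # 2097152
```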
To summarize, you have just performed an important and tricky part of configuring cloud-native applications. You will also have noticed that the applications themselves have to be written to read their configuration dynamically. After setting up your app with this configuration, you accessed a running container to verify the running configuration.
Note
Connecting to a running container is useful for troubleshooting and doing diagnostics. Due to the ephemeral nature of containers, you should never connect to a container to do additional configuration or installation. This should either be part of your container image or configuration you provide via Kubernetes (as we just did).
In this section, you configured the Redis Master to load configuration data from a ConfigMap. In the next section, we will deploy the end-to-end application.