Modernizing Web Development Workflows: A Tutorial on Containerizing with Docker and Deploying with Kubernetes

Welcome to modern-css.com! Today, we're going to take a deep dive into containerization with Docker and deployment with Kubernetes. Containerization streamlines development and deployment, allowing us to create scalable, efficient, and resilient fullstack applications while keeping our workflow clean and organized. By the end of this tutorial, you'll be well on your way to modernizing your web development workflow.

What is Containerization and Why Use Docker?

Containerization is the process of packaging an application with its dependencies and libraries into a single, self-contained unit called a container. Containers allow us to run our applications consistently across different environments, making it easier to maintain and deploy them. Docker is one of the most popular containerization platforms that provides us with a convenient and efficient way to create, manage, and distribute containers.

How to Install Docker

Before we get started, we need to install Docker. Docker provides a straightforward installation process for most platforms. For detailed installation instructions for your specific platform, you can visit the official Docker documentation.

Windows: Download Docker Desktop from the Docker Desktop for Windows page. Follow the installation instructions and make sure virtualization is enabled on your machine; Docker Desktop can use either the WSL 2 backend or Hyper-V.

macOS: Download Docker Desktop for Mac from the Docker Desktop for Mac page. Follow the installation instructions, and you'll have Docker installed and running in no time.

Linux: Docker is available for most Linux distributions. You can follow the installation instructions for your particular distribution on the Docker documentation website.

Creating a Docker Container

Now that we've installed Docker, let's begin by creating a basic Docker container for a Node.js application. We'll start with a simple Hello World program so that we can see how Docker builds and runs containers.

Setting up a Basic Hello World Node.js Application

For this tutorial, we'll build a simple "Hello World" application with Node.js. We'll write a server that listens on port 3000 and responds with the message "Hello, World!" when we visit the page.


    const http = require('http');
    
    const server = http.createServer((req, res) => {
      res.writeHead(200, {'Content-Type': 'text/plain'});
      res.end('Hello, World!');
    });
    
    server.listen(3000, () => {
      console.log('Server running on port 3000');
    });
  

Save the file as index.js in a new directory called hello-world.
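
The Dockerfile we'll write in a moment copies a package.json into the container and starts the app with npm start, so the project also needs a minimal package.json next to index.js. Something like the following works; the name, version, and description values are just placeholders:


    {
      "name": "hello-world",
      "version": "1.0.0",
      "description": "A simple Hello World server",
      "main": "index.js",
      "scripts": {
        "start": "node index.js"
      }
    }
  

With this in place, you can sanity-check the app locally by running npm start (or node index.js) and visiting http://localhost:3000 before containerizing it.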

Creating a Dockerfile

Next, we need to create a Dockerfile that will define how to build our container. Dockerfiles are simple text files that contain a set of instructions for Docker to follow when building an image. An image is a read-only template, while a container is a running instance of that image.


    FROM node:alpine
    WORKDIR /app
    
    # Copy package.json and package-lock.json into the container
    COPY package*.json ./
    RUN npm install
    COPY . .
    
    EXPOSE 3000
    CMD [ "npm", "start" ]
  
  • FROM: Specifies a base image to use for our container. In this case, we're using the official Node.js image from Docker Hub, specifically the alpine variant, which is built on the lightweight Alpine Linux distribution and produces a much smaller image.
  • WORKDIR: Sets the working directory inside the container where our application will be stored.
  • COPY: Copies our local package.json and package-lock.json files into the container. This allows Docker to install dependencies and run our application inside the container in an isolated environment.
  • RUN npm install: Installs all the necessary dependencies specified in our package.json file.
  • COPY . .: Copies all remaining files in our local directory into the container.
  • EXPOSE 3000: Documents that the container listens on port 3000. On its own, EXPOSE doesn't publish the port; we'll publish it with the -p flag when we run the container.
  • CMD [ "npm", "start" ]: Specifies the command to run when the container starts, in this case, running our application with the npm start command.

Save this Dockerfile in the same directory as our index.js file.
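
One optional but helpful addition: because COPY . . copies everything in the build context, a local node_modules folder (if you've run npm install on your machine) would overwrite the dependencies installed inside the image. A small .dockerignore file in the same directory avoids that; the entries below are just a sensible starting point:


    node_modules
    npm-debug.log
  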

Building a Docker Image

Now it's time to build our Docker image from the Dockerfile, using the current directory as the build context. To do this, navigate to the directory where the Dockerfile and application files are located in a terminal window.


    docker build -t hello-world .
  

The docker build command tells Docker to build an image from a Dockerfile, and the -t flag tags our image with a name and optionally a tag in the name:tag format. In our case, we're naming the image hello-world, and the trailing . tells Docker to use the current working directory as the build context.
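
If you want to confirm that the build succeeded, you can list your local images and look for hello-world in the output:


    docker images
  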

Running a Docker Container

Now that we've built our Docker image, we can run it as a container using the docker run command.


    docker run -p 3000:3000 hello-world
  

The -p flag maps a port on our host machine to a port inside the container. In our case, we're mapping port 3000 on our host to port 3000 inside the container so that we can access our application at http://localhost:3000.

By now, you should see your "Hello, World!" message displayed when you access http://localhost:3000. Congratulations, you've successfully created and run a Docker container!
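
When you're done, press Ctrl+C to stop the container. You can also run it in the background with the -d flag and give it a name so it's easy to stop later; the container name used here is just an example:


    docker run -d -p 3000:3000 --name hello-world-app hello-world
    docker stop hello-world-app
  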

Deploying with Kubernetes

Now that we've seen how to create and run a Docker container locally, let's take it to the next level by deploying our application using Kubernetes. Kubernetes is an open-source platform that provides powerful container orchestration functionality, allowing us to manage our containers and containerized applications at scale.

What is Kubernetes?

Kubernetes, also known as K8s, is a powerful platform for managing containerized applications. It provides features such as automatic scaling, load balancing, rolling updates, and more, allowing us to build resilient and scalable applications. A Kubernetes cluster is made up of a control plane (historically called the master node) that manages a set of worker nodes, and those worker nodes are where our containers actually run. Kubernetes provides a declarative API that lets us define our desired state, and it takes care of the nitty-gritty details of managing the system to meet that desired state.

Setting up a Kubernetes Cluster

To use Kubernetes, we first need to set up a cluster. There are many ways to do this, ranging from local development environments to cloud-based solutions. For this article, we'll use Minikube, a popular tool for running a single-node Kubernetes cluster locally.

Setting up Minikube

To run Minikube, you need a driver it can use to create the cluster. A popular choice is the VirtualBox virtualization layer, which is what we'll use here (Minikube also supports other drivers, such as Docker).

  • Windows, macOS, and Linux:
    1. Download and install the latest version of VirtualBox from the official VirtualBox website.
    2. Download and install the latest version of Minikube from the official Minikube website.
    3. Open a terminal window (a command prompt on Windows) and run minikube start. You can pass --driver=virtualbox to tell Minikube explicitly which driver to use.
    Minikube should be installed and running after these steps have been completed.
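
Before moving on, it's worth confirming that the cluster is actually up. minikube status reports on the cluster components, and kubectl get nodes should list a single node in the Ready state (minikube start configures kubectl's context for you; if you don't have kubectl installed, recent Minikube versions let you run it via minikube kubectl --):


    minikube status
    kubectl get nodes
  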

Creating a Kubernetes Deployment

Now that we've set up a cluster, we're ready to deploy our application. A deployment defines how many replicas of a container we want to run, and how those replicas should be scaled and updated.

Creating a Deployment Manifest File

Create a new file called deployment.yaml in the root directory of your project with the following content:


    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hello-world-deployment
      labels:
        app: hello-world
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: hello-world
      template:
        metadata:
          labels:
            app: hello-world
        spec:
          containers:
          - name: hello-world
            image: hello-world
            # Use the locally built image rather than pulling one from a registry
            imagePullPolicy: Never
            ports:
            - containerPort: 3000
  

This manifest file describes the desired state of our deployment. The deployment is called hello-world-deployment and carries an app: hello-world label so that it matches the service definition we'll create next. We ask for three replicas of our container, and the selector matches the label defined in the pod template, ensuring that all replicas are managed as part of this deployment. The template specifies a container called hello-world built from the hello-world image we created earlier, exposing port 3000 inside the container. We also set imagePullPolicy: Never so that Kubernetes uses our locally built image instead of trying to pull an image named hello-world from Docker Hub; the note below explains how to make that image available to Minikube.
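
One important detail when using Minikube: the hello-world image we built earlier lives in our local Docker daemon, not inside the Minikube cluster, so Kubernetes wouldn't find it there (and without imagePullPolicy: Never it would try to pull the unrelated official hello-world image from Docker Hub). The simplest fix is to point your shell at Minikube's Docker daemon and rebuild the image inside it; the commands below assume a Unix-like shell:


    # Make the docker CLI talk to Minikube's Docker daemon
    eval $(minikube docker-env)
    
    # Rebuild the image so it exists inside the cluster
    docker build -t hello-world .
  

On recent Minikube versions, minikube image load hello-world is an alternative way to copy an already-built local image into the cluster.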

Creating a Service

The last piece of the puzzle is creating a service manifest file, which defines how Kubernetes should handle incoming network requests.

Create a new file called service.yaml in the root directory of your project with the following content:


    apiVersion: v1
    kind: Service
    metadata:
      name: hello-world-service
      labels:
        app: hello-world
    spec:
      selector:
        app: hello-world
      ports:
      - protocol: TCP
        port: 80
        targetPort: 3000
      type: LoadBalancer
  

This manifest file specifies that our service is called hello-world-service, with an app: hello-world label to match the deployment. The selector routes incoming requests to any pod carrying the app: hello-world label, which covers all the replicas in our deployment. The service listens on port 80 and forwards traffic to targetPort 3000 on the containers. The LoadBalancer type asks the cluster for an externally reachable, load-balanced entry point; on Minikube we'll reach it with the minikube service command shown below.

Deploying to Kubernetes

We're now ready to deploy our application to Kubernetes. In your terminal window, navigate to the root directory of your project, where the deployment.yaml and service.yaml files are located.


    kubectl apply -f deployment.yaml
    kubectl apply -f service.yaml
  

The kubectl apply command applies the manifest files and creates the necessary resources on the cluster. The -f flag tells kubectl which manifest file to apply.

After running these commands, you should see output similar to the following indicating that the deployment and service have been created:


    deployment.apps/hello-world-deployment created
    service/hello-world-service created
  

You can verify that everything is working as expected by running the following command:


    minikube service hello-world-service
  

This command tells Minikube to open a browser window and connect to our service. You should see your "Hello, World!" message displayed in your web browser.
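
If the page doesn't load right away, or you'd just like to see what Kubernetes created, kubectl's get commands show the deployment, its three pods, and the service at a glance:


    kubectl get deployments
    kubectl get pods
    kubectl get service hello-world-service
  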

Congratulations!

You've just learned the fundamentals of containerization with Docker and deployment with Kubernetes. Using these tools, you can streamline your web development workflow, create scalable, efficient, and resilient fullstack applications, and manage your containers and containerized applications at scale.