Automating Push-Based Deployment on Kubernetes

Replacing manual activities with automation is key to streamlining operations. Automating Docker builds and Kubernetes orchestration boosts delivery confidence while improving operational efficiency. However, true automation isn't straightforward to achieve: many factors influence the automation flow and introduce unavoidable overheads. In this article, let's look at some solutions for automating push-based deployment on Kubernetes.

Why Push-Based Deployments?

Version control systems (VCS) like GitHub play a crucial role in connecting code with the services that drive business operations. Most organizations rely on a VCS to host and maintain their code, serving as a single source of truth. However, enabling seamless code execution across various platforms requires additional effort and resources. This is where orchestration platforms come into play: they track changes in code repositories using a push-based change capture mechanism to automate processes and trigger downstream tasks. In the context of managing containerized workloads, the push-based approach is particularly valuable because:

  • Docker images need to be stored in registries, ready to be deployed.
  • Kubernetes processes run on remote nodes, adding layers of complexity.
  • Applications often wait for parent processes to complete before they can start.
  • Each deployment requires pulling the image from the registry to the Kubernetes cluster, which can sometimes result in Kubernetes ImagePullBackOff errors.
  • Additional issues, such as cluster unavailability, node crashes, or pod failures like CreateContainerConfigError, further complicate the process.

By using push-based deployments, these complexities are managed more effectively, ensuring that code changes are automatically and consistently propagated across all platforms.

Push-Based Solution and Its Significance

To ensure containerized applications are deployed and executed successfully, they must pass through several checks and services. During the development stage, maintaining operational services like Jenkins or OpenShift is crucial for orchestrating and applying changes. After each commit, developers manually trigger builds and monitor the execution flow on the orchestration platform. This process requires a deep understanding of the tools and expertise in interpreting logs, stdout, and stderr outputs. With GitKube, this complex workflow is simplified through:

  • Minimal dependencies and plug-and-play integration, making setup easier.
  • RBAC capabilities that ensure secure and controlled access.
  • The ability for developers to deploy new changes with a simple git push command.
  • Streamlined monitoring of the execution flow and status through events, reducing the need for deep expertise in logs and outputs.

This push-based approach enables rapid development and deployment, significantly improving efficiency.
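Before wiring up any workflows, GitKube itself must be installed on the cluster. A minimal sketch is shown below; the manifest URL follows the GitKube project README, and the exact release and namespace may differ in your setup.

# Install the GitKube CRDs and the gitkubed controller
# (manifest URL per the GitKube README; verify against the current release)
kubectl create -f https://storage.googleapis.com/gitkube/gitkube-setup-stable.yaml

# Confirm the controller is running (installed into kube-system by the default manifest)
kubectl get pods -n kube-system | grep gitkubed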

The Automation Use Case

A business wants to optimize its development processes as part of an operational efficiency initiative. We are tasked with implementing solutions using free and open-source services to automate deployments while triggering alerts and status reports. We have a managed Kubernetes environment that orchestrates and runs business operations.

We will set up a GitKube job that triggers the deployment process and monitors the job status. Depending on the outcome, we will notify the dev teams or trigger alerts according to criticality.

Solution Implementation

  1. We will automate the push-based provisioning of a Rust application on remote Kubernetes clusters.
  2. We will create a sample Rust HTTP application exposing generic REST endpoints.
  3. We will package and build the Rust code using Cargo and containerize it for deployment.
  4. Finally, we will implement a GitHub Actions job to automate the deployment and trigger status alerts upon code commit.

Application Dependencies

We will use standard Rust crates to develop and containerize our minimalistic app. Let us add reqwest and tokio for REST functionality and the async runtime, actix-web to spin up the HTTP server, and serde/serde_json to serialize and deserialize JSON payloads.

[dependencies]
actix-web = "4.0"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
reqwest = { version = "0.11", features = ["json"] }
tokio = { version = "1", features = ["full"] }

Rust Application as a REST Server

The REST server will accept GET requests and respond with a simple JSON message. The server functionality is handled by actix_web under the hood.

use actix_web::{web, App, HttpServer, Responder, HttpResponse};
use serde::Serialize;

// Response payload, serialized to JSON by serde
#[derive(Serialize)]
struct K8sResponse {
    message: String,
    status: u16,
}

// Handler for GET /api/get
async fn get_res() -> impl Responder {
    let response = K8sResponse {
        message: String::from("Hurray, The API call was successfully processed"),
        status: 200,
    };
    HttpResponse::Ok().json(response)
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        App::new()
            .route("/api/get", web::get().to(get_res))
    })
    // Bind on all interfaces so the server is reachable from outside the container
    .bind("0.0.0.0:3000")?
    .run()
    .await
}
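With the server running locally via cargo run, a quick smoke test confirms the endpoint responds as expected (the output shown is illustrative):

# Start the server in one terminal
cargo run

# Query the endpoint from another terminal
curl http://127.0.0.1:3000/api/get
# {"message":"Hurray, The API call was successfully processed","status":200}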

Docker Definition

Our Dockerfile uses a multi-stage build with a dedicated builder stage. Caching the dependency build separately from the application sources ensures that each container build captures modified changes while consuming minimal resources.

FROM rust:latest AS builder

# Create a skeleton project so dependency builds can be cached
# (assumes the package is named rust_app in Cargo.toml)
RUN USER=root cargo new --bin rust_app
WORKDIR /rust_app

COPY Cargo.toml Cargo.lock ./

# Build dependencies against the skeleton sources, then discard them
RUN cargo build --release
RUN rm src/*.rs

# Copy the real sources and force a rebuild of the application code only
COPY src ./src
RUN rm -f target/release/deps/rust_app*
RUN cargo build --release

FROM debian:sid-slim

COPY --from=builder /rust_app/target/release/rust_app /usr/local/bin/rust_app

EXPOSE 3000

# Run the compiled binary; cargo is not available in the slim runtime image
CMD ["rust_app"]
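To validate the image locally before handing the build over to GitKube, a quick sketch follows (the image tag is illustrative):

# Build the image from the project root
docker build -t rust_app:local .

# Run it, mapping the exposed port to the host, and test the endpoint
docker run --rm -p 3000:3000 rust_app:local
curl http://127.0.0.1:3000/api/get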

Remote Configs for Kubernetes Deployment

A remote.yml file is essential to deploy and host a containerized application on a remote Kubernetes cluster. In push-based deployment scenarios, it's crucial to capture and deploy the latest version of the code. Building a container image once and hosting it on a registry won't suffice, as the cluster would always pull a static image version. Instead, the image must be built on the remote instance for each run, with GitKube handling the publishing of the Docker image to the registry. Additionally, registry credentials need to be included in the remote.yml file to ensure proper access during deployment.

apiVersion: gitkube.sh/v1alpha1
kind: Remote
metadata:
  name: rust-remote-server
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rust-remote-server
  authorizedKeys:
    - "ssh-rsa your-ssh-public-key"
  registry:
    url: "hub.docker.com/k8sUser"
    credentials:
      secretRef: "***********"
  deployments:
    - name: rust-remote-server
      containers:
        - name: rust-remote-server
          dockerfile: /Dockerfile
      ports:
        - containerPort: 3000

Before the Remote definition can take effect, it must be applied to the cluster along with the registry credentials it references; a sketch follows. GitKube then generates a remote URL that we can use to deploy the application on the remote instance.
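The sketch below assumes a Docker Hub account and an illustrative secret name; substitute your own values and make sure the secret name matches the secretRef in remote.yml.

# Create the registry secret referenced by remote.yml (names are illustrative)
kubectl create secret docker-registry regsecret \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=k8sUser \
  --docker-password=<your-password>

# Register the Remote resource with the cluster
kubectl apply -f remote.yml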

GitKube Trigger

kubectl get remote rust-remote-server -o yaml

This will return an SSH remoteUrl endpoint that we can register as a git remote, as sketched below. With the remote in place, every push sends the code to the cluster and deploys the application.
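The endpoint is registered as an ordinary git remote. The URL format below is illustrative; use the exact remoteUrl value returned by the kubectl command above.

# Register the GitKube endpoint as a git remote (URL shown is illustrative)
git remote add rust-remote-server ssh://default-rust-remote-server@<cluster-ip>/~/git/default-rust-remote-server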

git push rust-remote-server master

When we push the code to the rust-remote-server remote, the Rust application is deployed to the remote Kubernetes cluster, where the image is built and provisioned. If everything runs as expected, the changes can be committed to a feature branch and later merged into master/main.

GitHub Actions to Monitor Job Status and Trigger Alerts

We will implement a GitHub Actions workflow to automate the monitoring and alerting stages. The job checks the cluster status every two minutes and sends a Slack alert when a failure is encountered.

name: Kubernetes Cluster Monitor

on:
  push:
    branches:
      - remote

jobs:
  check-cluster-status:
    runs-on: ubuntu-latest

    steps:
      - name: actions checkout
        uses: actions/checkout@v3

      - name: kubectl Provisioning
        uses: azure/setup-kubectl@v3
        with:
          version: 'latest'

      - name: kubectl Configure
        env:
          KUBE_CONFIG: ${{ secrets.KUBE_CONFIG }}
        run: |
          mkdir -p $HOME/.kube
          echo "$KUBE_CONFIG" | base64 -d > $HOME/.kube/config

      - name: cluster status check every 2 mins
        run: |
          max_attempts=30
          attempt=0
          alert_sent=false

          while [ $attempt -lt $max_attempts ] && [ "$alert_sent" = false ]
          do
            # Collect the Ready condition of every node in the cluster
            status=$(kubectl get nodes -o json | jq -r '.items[].status.conditions[] | select(.type=="Ready") | .status')
            if [[ $status == *"False"* ]]; then
              echo "CLUSTER_STATUS=erroneous" >> $GITHUB_ENV
              alert_sent=true
            else
              echo "Cluster is stable and running.."
              sleep 120
            fi
            attempt=$((attempt+1))
          done

          if [ "$alert_sent" = false ]; then
            echo "CLUSTER_STATUS=Ok" >> $GITHUB_ENV
          fi

      - name: Send alert
        if: env.CLUSTER_STATUS == 'erroneous'
        env:
          SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
        run: |
          curl -X POST -H 'Content-type: application/json' \
            --data '{"text":"Alert: Deployments failed, Please take Action!"}' $SLACK_WEBHOOK

      - name: Final status
        run: |
          if [ "$CLUSTER_STATUS" = "Ok" ]; then
            echo "Cluster Lifecycle - Ok"
          else
            echo "Alert was sent due to unhealthy cluster status."
          fi

We have outlined the key tasks involved in building the use case, assuming that the installation and setup of Kubernetes dependencies and cloud infrastructure have been completed. This guide demonstrates how to replace manual deployments with robust, resilient automation practices, streamlining your processes for greater efficiency.

Conclusion

Rapid development and deployment cycles are essential for turning ideas into reality. Manual tasks between these stages often lead to delays and inefficiencies. To enhance business continuity and ensure accurate, efficient outcomes, adopting strong automation practices is critical. GitKube offers a streamlined solution for accelerating Kubernetes development and validation cycles, requiring minimal effort to achieve faster, more reliable results.
