End-to-End Deployment of an application on Kubernetes — Devops Guide 2025

Ahmed Shaikh

Hey DevOps learners! In this blog I will share the first end-to-end project I worked on early in my DevOps career, when I joined my company as a trainee. This project gave me a clear flow and the right path to start my DevOps journey, and showed me how to build infrastructure on AWS to deploy applications.

In this guide, you will find a complete step-by-step walkthrough of how to build the infrastructure from scratch, deploy our application on Kubernetes, and then automate the entire process using a pipeline.

So let's get started without wasting any more time.

Tools and AWS services used:

  1. AWS EKS — the managed Kubernetes service where we will deploy our application in an EKS cluster.
  2. Jenkins — for CI/CD
  3. Docker
  4. SonarQube — for static code analysis and security scanning
  5. Trivy — for scanning Docker images before we deploy our microservices to Kubernetes
  6. Prometheus & Grafana — for monitoring
  7. ArgoCD — for GitOps-based deployment to Kubernetes
  8. Helm
  9. Terraform — we will create our entire infrastructure using Terraform.

Note: I have added all the Terraform code I used to build this infrastructure to my GitHub repo; if you want, you can clone the repo and start working on this project.

GitHub repo: https://github.com/Ahmedgit7/Netflix-DevSecOps-Project

Before we go ahead and deploy our Netflix-like application on the cloud, let me explain the architecture.

Architecture:

[Architecture diagram: end-to-end deployment of an application on Kubernetes]

We start by creating an EC2 instance and deploying our application locally using a Docker container. Once the application is running, we will integrate security using SonarQube and Trivy. After completing this manually, we will automate the process using a CI/CD tool, Jenkins. Jenkins will automate the creation of the Docker image, ensuring it is secure before uploading it to Docker Hub.

Once this process is automated, we will integrate monitoring using Prometheus and Grafana. Prometheus and Grafana will monitor our EC2 instance as well as Jenkins, checking various metrics like CPU usage, RAM usage, successful jobs, and failed jobs. Additionally, if any job fails or succeeds, we will receive notifications via email using SMTP.

After automating this, we will deploy the application on Kubernetes using Argo CD. Monitoring will be set up for our Kubernetes cluster, which will be installed using Helm charts.

Although I used Terraform for the infrastructure setup of this project, I will still walk you through it step by step.

Phase 1: Initial Setup and Deployment

Step 1: Launch EC2 (Ubuntu 22.04):

Provision an EC2 instance on AWS with Ubuntu 22.04.

Connect to the instance using SSH.
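For example, assuming your key pair file is named netflix-key.pem (a placeholder — use your own key and your instance's public IP):

ssh -i netflix-key.pem ubuntu@<ec2-public-ip>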

Step 2: Clone the Code:

Update all the packages and then clone the code.
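For example:

sudo apt update && sudo apt upgrade -y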

Clone your application’s code repository onto the EC2 instance:

git clone https://github.com/Ahmedgit7/Netflix-DevSecOps-Project.git

Step 3: Install Docker and Run the App Using a Container:

Set up Docker on the EC2 instance:

sudo apt-get update

sudo apt-get install docker.io -y

sudo usermod -aG docker $USER # $USER resolves to the logged-in user, e.g. 'ubuntu'

newgrp docker

sudo chmod 777 /var/run/docker.sock # convenient for a lab setup, but too permissive for production

Docker is now installed and ready to use.

Step 4: Get the API Key:

We need to go to the TMDB (The Movie Database) website to get an API key for the movie data. After getting the key, we can either add it to the Dockerfile or pass it as a variable in the docker build command.

Open a web browser and navigate to TMDB (The Movie Database) website.

Click on “Login” and create an account.

Once logged in, go to your profile and select “Settings.”

Click on “API” from the left-side panel.

Create a new API key by clicking “Create” and accepting the terms and conditions.

Provide the required basic details and click “Submit.”

You will receive your TMDB API key.

Now build the Docker image, passing in your API key, and run the container:

docker build --build-arg TMDB_V3_API_KEY=<your-api-key> -t netflix .

docker run -d --name netflix -p 8081:80 netflix:latest
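For reference, here is a minimal sketch of how a Dockerfile consumes such a build argument (a hypothetical excerpt — the exact variable names in the repo's Dockerfile may differ):

# Hypothetical excerpt: declare the build argument and expose it to the app build as an env var
ARG TMDB_V3_API_KEY
ENV VITE_APP_TMDB_V3_API_KEY=$TMDB_V3_API_KEY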

Now access the application in a browser at http://<ec2-public-ip>:8081.

Phase 2: Security

Install SonarQube and Trivy on the EC2 instance to scan for vulnerabilities.

To install SonarQube:

docker run -d --name sonar -p 9000:9000 sonarqube:lts-community

To install Trivy:

sudo apt-get install wget apt-transport-https gnupg lsb-release

wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | sudo apt-key add -

echo deb https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main | sudo tee -a /etc/apt/sources.list.d/trivy.list

sudo apt-get update

sudo apt-get install trivy

To scan an image with Trivy, run:

trivy image <imageid>
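For example, to scan the netflix image we built earlier:

trivy image netflix:latest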

Phase 3: CI/CD Setup

Install Jenkins on the EC2 instance to automate deployment:

Install Java:

sudo apt update

sudo apt install fontconfig openjdk-17-jre

java -version

Install Jenkins:

sudo wget -O /usr/share/keyrings/jenkins-keyring.asc \
https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key

echo deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] \
https://pkg.jenkins.io/debian-stable binary/ | sudo tee \
/etc/apt/sources.list.d/jenkins.list > /dev/null

sudo apt-get update

sudo apt-get install jenkins

sudo systemctl start jenkins

sudo systemctl enable jenkins

Access Jenkins in a web browser using the public IP of your EC2 instance on its default port 8080: http://<ec2-public-ip>:8080
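To unlock Jenkins on first login, print the initial admin password on the instance:

sudo cat /var/lib/jenkins/secrets/initialAdminPassword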

Install Necessary Plugins in Jenkins:

Go to Manage Jenkins → Plugins → Available Plugins and install:

1. Eclipse Temurin Installer (install without restart)

2. SonarQube Scanner (install without restart)

3. NodeJS Plugin (install without restart)

Configure Java and Node.js in Global Tool Configuration:

Go to Manage Jenkins → Tools → install JDK (17) and NodeJS (16) → click on Apply and Save.

SonarQube

Create the token: generate a user token in SonarQube (e.g., under Administration → Security → Users).

Then go to Jenkins Dashboard → Manage Jenkins → Credentials → Add Credentials and store the token as Secret text with the ID Sonar-token (the pipeline below references this ID).

Click on Apply and Save.

The Configure System option in Jenkins is used to configure different servers — this is where we will point Jenkins at the SonarQube server.

Global Tool Configuration is used to configure the different tools that we install using plugins.

We will add the sonar scanner in the tools section.
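Based on the pipeline below (which calls withSonarQubeEnv('sonar-server')), the SonarQube server should be added in Configure System under "SonarQube servers" with the name sonar-server, the URL http://<ec2-public-ip>:9000, and the token credential created above. Note that the Quality Gate stage (waitForQualityGate) also expects a webhook in SonarQube (Administration → Configuration → Webhooks) pointing to http://<jenkins-ip>:8080/sonarqube-webhook/ so SonarQube can report the gate result back to Jenkins.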

Install Dependency-Check and Docker Tools in Jenkins

Install Dependency-Check Plugin:

Go to “Dashboard” in your Jenkins web interface.

Navigate to “Manage Jenkins” → “Manage Plugins.”

Click on the “Available” tab and search for “OWASP Dependency-Check.”

Check the checkbox for “OWASP Dependency-Check” and click on the “Install without restart” button.

Configure Dependency-Check Tool:

After installing the Dependency-Check plugin, you need to configure the tool.

Go to “Dashboard” → “Manage Jenkins” → “Global Tool Configuration.”

Find the section for “OWASP Dependency-Check.”

Add the tool’s name, e.g., “DP-Check.”

Save your settings.

Install Docker Tools and Docker Plugins:

Install the Docker-related plugins (e.g., Docker and Docker Pipeline) from Manage Plugins in the same way.

Add DockerHub Credentials:

Go to Manage Jenkins → Credentials and add a Username with password credential holding your DockerHub login, with the ID docker (the pipeline below references this ID). Click "OK" to save your DockerHub credentials.

Now, you have installed the Dependency-Check plugin, configured the tool, and added Docker-related plugins along with your DockerHub credentials in Jenkins. You can now proceed with configuring your Jenkins pipeline to include these tools and credentials in your CI/CD process.

Create a CI/CD pipeline in Jenkins to automate your application deployment.

pipeline {
    agent any
    tools {
        jdk 'jdk17'
        nodejs 'node16'
    }
    environment {
        SCANNER_HOME = tool 'sonar-scanner'
    }
    stages {
        stage('Clean Workspace') {
            steps {
                cleanWs()
            }
        }
        stage('Checkout from Git') {
            steps {
                git branch: 'main', url: 'https://github.com/N4si/DevSecOps-Project.git'
            }
        }
        stage("Sonarqube Analysis") {
            steps {
                withSonarQubeEnv('sonar-server') {
                    sh '''
                        $SCANNER_HOME/bin/sonar-scanner \
                        -Dsonar.projectName=Netflix \
                        -Dsonar.projectKey=Netflix
                    '''
                }
            }
        }
        stage("Quality Gate") {
            steps {
                script {
                    waitForQualityGate abortPipeline: false, credentialsId: 'Sonar-token'
                }
            }
        }
        stage('Install Dependencies') {
            steps {
                sh "npm install"
            }
        }
        stage('OWASP FS SCAN') {
            steps {
                dependencyCheck additionalArguments: '--scan ./ --disableYarnAudit --disableNodeAudit', odcInstallation: 'DP-Check'
                dependencyCheckPublisher pattern: '**/dependency-check-report.xml'
            }
        }
        stage('TRIVY FS SCAN') {
            steps {
                sh "trivy fs . > trivyfs.txt"
            }
        }
        stage("Docker Build & Push") {
            steps {
                script {
                    withDockerRegistry(credentialsId: 'docker', toolName: 'docker') {
                        sh '''
                            docker build --build-arg TMDB_V3_API_KEY=<your-api-key> -t netflix .
                            docker tag netflix ahmed7860/netflix:latest
                            docker push ahmed7860/netflix:latest
                        '''
                    }
                }
            }
        }
        stage("TRIVY") {
            steps {
                sh "trivy image ahmed7860/netflix:latest > trivyimage.txt"
            }
        }
        stage('Deploy to Container') {
            steps {
                sh 'docker run -d -p 8081:80 ahmed7860/netflix:latest'
            }
        }
    }
    post {
        always {
            emailext (
                attachLog: true,
                subject: "${currentBuild.result}",
                body: """Project: ${env.JOB_NAME}<br/>
                Build Number: ${env.BUILD_NUMBER}<br/>
                URL: ${env.BUILD_URL}<br/>""",
                to: 'ahmedhshaikh786@gmail.com',
                attachmentsPattern: 'trivyfs.txt, trivyimage.txt'
            )
        }
    }
}

The pipeline has finished successfully.

The image has also been pushed to Docker Hub.

Now we will use this image to deploy our application to the Kubernetes cluster using Argo CD.

Phase 4: Monitoring

Install Prometheus and Grafana:

Installing Prometheus:

First, create a dedicated user for Prometheus and download Prometheus:

sudo useradd --system --no-create-home --shell /bin/false prometheus

wget https://github.com/prometheus/prometheus/releases/download/v2.47.1/prometheus-2.47.1.linux-amd64.tar.gz

Extract Prometheus files, move them, and create directories:

tar -xvf prometheus-2.47.1.linux-amd64.tar.gz

cd prometheus-2.47.1.linux-amd64/

sudo mkdir -p /data /etc/prometheus

sudo mv prometheus promtool /usr/local/bin/

sudo mv consoles/ console_libraries/ /etc/prometheus/

sudo mv prometheus.yml /etc/prometheus/prometheus.yml

Set ownership for directories:

sudo chown -R prometheus:prometheus /etc/prometheus/ /data/

Create a systemd unit configuration file for Prometheus:

sudo nano /etc/systemd/system/prometheus.service

Add the following content to the prometheus.service file:

[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target
StartLimitIntervalSec=500
StartLimitBurst=5

[Service]
User=prometheus
Group=prometheus
Type=simple
Restart=on-failure
RestartSec=5s
ExecStart=/usr/local/bin/prometheus \
  --config.file=/etc/prometheus/prometheus.yml \
  --storage.tsdb.path=/data \
  --web.console.templates=/etc/prometheus/consoles \
  --web.console.libraries=/etc/prometheus/console_libraries \
  --web.listen-address=0.0.0.0:9090 \
  --web.enable-lifecycle

[Install]
WantedBy=multi-user.target

A brief explanation of the key parts of this prometheus.service file: the service runs as the dedicated prometheus user and group, Restart=on-failure restarts Prometheus automatically if it crashes, --config.file and --storage.tsdb.path point to the configuration file and data directory we created, and --web.enable-lifecycle lets us reload the configuration over HTTP without restarting the service (we will use this later).

Enable and start Prometheus:

sudo systemctl enable prometheus

sudo systemctl start prometheus

Verify Prometheus’s status:

sudo systemctl status prometheus

You can access Prometheus in a web browser using your server's IP and port 9090: http://<ec2-public-ip>:9090

Installing Node Exporter:

Create a system user for Node Exporter and download Node Exporter:

sudo useradd --system --no-create-home --shell /bin/false node_exporter

wget https://github.com/prometheus/node_exporter/releases/download/v1.6.1/node_exporter-1.6.1.linux-amd64.tar.gz

Extract Node Exporter files, move the binary, and clean up:

tar -xvf node_exporter-1.6.1.linux-amd64.tar.gz

sudo mv node_exporter-1.6.1.linux-amd64/node_exporter /usr/local/bin/

rm -rf node_exporter*

Create a systemd unit configuration file for Node Exporter:

sudo nano /etc/systemd/system/node_exporter.service

Add the following content to the node_exporter.service file:

[Unit]
Description=Node Exporter
Wants=network-online.target
After=network-online.target
StartLimitIntervalSec=500
StartLimitBurst=5

[Service]
User=node_exporter
Group=node_exporter
Type=simple
Restart=on-failure
RestartSec=5s
ExecStart=/usr/local/bin/node_exporter --collector.logind

[Install]
WantedBy=multi-user.target

Replace --collector.logind with, or add, any other collector flags as needed.

Enable and start Node Exporter:

sudo systemctl enable node_exporter

sudo systemctl start node_exporter

Verify the Node Exporter’s status:

sudo systemctl status node_exporter

You can access Node Exporter metrics in Prometheus.

Configure Prometheus Plugin Integration:

Integrate Jenkins with Prometheus to monitor the CI/CD pipeline.

Prometheus Configuration:

To configure Prometheus to scrape metrics from Node Exporter and Jenkins, you need to modify the prometheus.yml file. Here is an example prometheus.yml configuration for your setup:

global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'node_exporter'
    static_configs:
      - targets: ['localhost:9100']

  - job_name: 'jenkins'
    metrics_path: '/prometheus'
    static_configs:
      - targets: ['<your-jenkins-ip>:<your-jenkins-port>']

Make sure to replace <your-jenkins-ip> and <your-jenkins-port> with the appropriate values for your Jenkins setup.

Check the validity of the configuration file:

promtool check config /etc/prometheus/prometheus.yml

Reload the Prometheus configuration without restarting:

curl -X POST http://localhost:9090/-/reload

You can access Prometheus targets at:

http://<your-prometheus-ip>:9090/targets

Grafana

Install Grafana on Ubuntu 22.04 and Set it up to Work with Prometheus

Step 1: Install Dependencies:

First, ensure that all necessary dependencies are installed:

sudo apt-get update

sudo apt-get install -y apt-transport-https software-properties-common

Step 2: Add the GPG Key:

Add the GPG key for Grafana:

wget -q -O - https://packages.grafana.com/gpg.key | sudo apt-key add -

Step 3: Add Grafana Repository:

Add the repository for Grafana stable releases:

echo "deb https://packages.grafana.com/oss/deb stable main" | sudo tee -a /etc/apt/sources.list.d/grafana.list

Step 4: Update and Install Grafana:

Update the package list and install Grafana:

sudo apt-get update

sudo apt-get -y install grafana

Step 5: Enable and Start Grafana Service:

To automatically start Grafana after a reboot, enable the service:

sudo systemctl enable grafana-server

Then, start Grafana:

sudo systemctl start grafana-server

Step 6: Check Grafana Status:

Verify the status of the Grafana service to ensure it’s running correctly:

sudo systemctl status grafana-server

Step 7: Access Grafana Web Interface:

Open a web browser and navigate to http://<ec2-public-ip>:3000 (Grafana's default port). You'll be prompted to log in to Grafana. The default username is "admin," and the default password is also "admin."

Step 8: Change the Default Password:

When you log in for the first time, Grafana will prompt you to change the default password for security reasons. Follow the prompts to set a new password.

Step 9: Add Prometheus Data Source:

To visualize metrics, you need to add a data source. Follow these steps:

Click on the gear icon (⚙️) in the left sidebar to open the “Configuration” menu.

Select “Data Sources.”

Click on the “Add data source” button.

Choose “Prometheus” as the data source type.

In the “HTTP” section:

Set the “URL” to http://localhost:9090 (assuming Prometheus is running on the same server).

Click the “Save & Test” button to ensure the data source is working.

Step 10: Import a Dashboard:

To make it easier to view metrics, you can import a pre-configured dashboard. Follow these steps:

Click on the “+” (plus) icon in the left sidebar to open the “Create” menu.

Select “Dashboard.”

Click on the “Import” dashboard option.

Enter the ID of the dashboard you want to import (e.g., 1860, the Node Exporter Full dashboard).

Click the “Load” button.

Select the data source you added (Prometheus) from the dropdown.

Click on the “Import” button.

You should now have a Grafana dashboard set up to visualize metrics from Prometheus.

Complete the Jenkins–Prometheus Integration:

To finish the Jenkins integration we configured in prometheus.yml earlier, install the Prometheus metrics plugin in Jenkins (Manage Jenkins → Plugins); it exposes Jenkins metrics on the /prometheus endpoint that our scrape configuration targets.

You can then monitor Jenkins in a Grafana dashboard.

Phase 5: Notification

Implement Notification Services:

Set up email notifications in Jenkins or other notification mechanisms.

post {
    always {
        emailext (
            attachLog: true,
            subject: "${currentBuild.result}",
            body: """Project: ${env.JOB_NAME}<br/>
            Build Number: ${env.BUILD_NUMBER}<br/>
            URL: ${env.BUILD_URL}<br/>""",
            to: 'ahmedhshaikh786@gmail.com',
            attachmentsPattern: 'trivyfs.txt, trivyimage.txt'
        )
    }
}

Add this to the pipeline in Jenkins.
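For the emailext step to actually send mail, Jenkins needs an SMTP server configured. A typical setup (assuming Gmail): under Manage Jenkins → Configure System, set the Extended E-mail Notification SMTP server to smtp.gmail.com, port 465 with SSL, and authenticate with a Gmail app password stored as a Jenkins credential (regular account passwords do not work with Gmail SMTP).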

Phase 6: Kubernetes

Create Kubernetes Cluster with Nodegroups

In this phase, you’ll set up a Kubernetes cluster with node groups. This will provide a scalable environment to deploy and manage your applications.
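If you are not using the Terraform code from the repo, one common way to create the cluster and node group is with eksctl — a minimal sketch (the cluster name, region, instance type, and node count below are assumptions; adjust them to your needs):

eksctl create cluster \
  --name netflix-cluster \
  --region us-east-1 \
  --nodegroup-name netflix-nodes \
  --node-type t3.medium \
  --nodes 2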

Install ArgoCD:

You can install ArgoCD on your Kubernetes cluster by following the instructions provided in the EKS Workshop documentation.

To install Argo CD, apply the installation manifest provided by the Argo Project:

kubectl create namespace argocd

kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.4.7/manifests/install.yaml

After running the above commands, check whether the argocd namespace has been created:

kubectl get ns

Then use this command to check whether the Argo CD pods are running:

kubectl get all -n argocd

Now, to expose Argo CD in the browser through a load balancer, patch the argocd-server service:

kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'

After running this, a load balancer will be created and you can access the Argo CD dashboard using the load balancer's DNS name.

To get the initial admin password, run the commands below (note that these are PowerShell commands):

$encodedPassword = kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}"

$ARGO_PWD = [System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($encodedPassword))

[System.Environment]::SetEnvironmentVariable('ARGO_PWD', $ARGO_PWD, 'Process')

echo $ARGO_PWD
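If you are working from a Linux shell instead of PowerShell, an equivalent one-liner is:

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d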

Set Your GitHub Repository as a Source:

After installing ArgoCD, you need to set up your GitHub repository as a source for your application deployment. This typically involves configuring the connection to your repository and defining the source for your ArgoCD application. The specific steps will depend on your setup and requirements.
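As a rough sketch, an Argo CD Application for this repo could look like the following (the manifest path and target namespace are assumptions — adjust them to match where your Kubernetes manifests actually live):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: netflix
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/Ahmedgit7/Netflix-DevSecOps-Project
    targetRevision: main
    path: Kubernetes            # assumption: folder holding the manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: default          # assumption: target namespace
  syncPolicy:
    automated: {}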

Create an App:

In the Argo CD UI, click "NEW APP", point it at your Git repository and the path containing your Kubernetes manifests, choose the destination cluster and namespace, and sync.

Once the sync completes, access the application using the node IP and port 30007.
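Port 30007 implies the application is exposed through a NodePort service. A minimal sketch of such a manifest, with hypothetical names and labels that must match your Deployment:

apiVersion: v1
kind: Service
metadata:
  name: netflix-app        # assumption: service name
spec:
  type: NodePort
  selector:
    app: netflix-app       # assumption: must match the Deployment's pod labels
  ports:
    - port: 80             # the container serves on port 80, as in the docker run above
      targetPort: 80
      nodePort: 30007      # the node port used to access the app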
