Setting Up Kafdrop on Ubuntu EC2 Instance with SSL Configuration

by: Ashish Sharma

June 14, 2024


Setting up a reliable and user-friendly interface to manage your Kafka cluster is crucial for efficient data streaming operations. Kafdrop provides a web UI for viewing Kafka topics and managing consumer groups. In this guide, we'll demonstrate how to deploy Kafdrop on an Ubuntu EC2 instance with SSL configuration for secure communication. To streamline the process, we'll also provide a Terraform configuration to automate the setup.

Prerequisites

  1. Kafka Cluster: Ensure you have a Kafka cluster running with SSL enabled.
  2. Java 17: Make sure Java 17 is installed on your EC2 instance.
  3. EC2 Instance: An Ubuntu EC2 instance with internet access.

Step-by-Step Guide

1. Install Java 17

First, update your package list and install Java 17:

sudo apt update
sudo apt install openjdk-17-jdk -y
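
You can confirm the installation before moving on; the following command should report a 17.x runtime:

java -version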

2. Download Kafdrop

Download the Kafdrop JAR file using curl:

curl -L -o /home/ubuntu/kafdrop.jar https://github.com/obsidiandynamics/kafdrop/releases/download/3.31.0/kafdrop-3.31.0.jar

3. Copy the JDK Certs to a Temporary Location

Copy the Java cacerts file to a temporary location; it will serve as the client truststore that Kafdrop uses when connecting to the brokers over SSL:

sudo cp /usr/lib/jvm/java-17-openjdk-amd64/lib/security/cacerts /tmp/kafka.client.truststore.jks
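
The JDK cacerts file already trusts the common public certificate authorities, which is typically sufficient for brokers whose certificates are signed by a public CA (such as Amazon MSK). If your brokers instead use certificates signed by a private CA, one option is to import that CA certificate into the copied truststore with keytool. The ca.pem file name and kafka-ca alias below are placeholders, and changeit is the default password of the JDK cacerts file:

# Import a private CA certificate into the copied truststore (ca.pem is a placeholder)
sudo keytool -importcert -noprompt \
  -alias kafka-ca \
  -file ca.pem \
  -keystore /tmp/kafka.client.truststore.jks \
  -storepass changeit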

4. Create client.properties

Create a client.properties file with the necessary SSL configuration:

sudo nano /home/ubuntu/client.properties

Add the following content to the file:

security.protocol=SSL
ssl.truststore.location=/tmp/kafka.client.truststore.jks
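
If Kafdrop fails to open the truststore, you may also need to provide its password explicitly; for the copied JDK cacerts file the default password is changeit:

ssl.truststore.password=changeit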

5. Export Environment Variables

Set the environment variables required for Kafdrop. KAFKA_PROPERTIES_FILE points Kafdrop at the SSL client configuration created above, and SERVER_SERVLET_CONTEXTPATH sets the base path under which the web UI is served (it appears in the URLs used later):

export KAFKA_PROPERTIES_FILE=/home/ubuntu/client.properties
export SERVER_SERVLET_CONTEXTPATH="/kafdrop"
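
These exports only last for the current shell session. If you want them to survive a new login (for example, when restarting Kafdrop later), one option is to append them to the ubuntu user's profile:

# Persist the variables for future login sessions of the ubuntu user
echo 'export KAFKA_PROPERTIES_FILE=/home/ubuntu/client.properties' >> /home/ubuntu/.profile
echo 'export SERVER_SERVLET_CONTEXTPATH="/kafdrop"' >> /home/ubuntu/.profile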

6. Run Kafdrop

Run Kafdrop with the specified Kafka broker endpoints:

nohup java -Djava.net.preferIPv4Stack=true -Xmx1g -Xms1g -jar /home/ubuntu/kafdrop.jar --kafka.brokerConnect=<kafka-broker> &

Replace <kafka-broker> with your actual Kafka broker address; multiple brokers can be supplied as a comma-separated list of host:port pairs.
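
Because the process is started in the background with nohup, it's worth confirming that it is running and following the startup log (nohup writes to nohup.out in the directory the command was launched from):

# Confirm the Kafdrop process is running and follow its startup log
pgrep -af kafdrop.jar
tail -f nohup.out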

7. Verify Access

By default, Kafdrop runs on port 9000. Open your web browser and go to:

http://<EC2_PUBLIC_IP>:9000/kafdrop

Replace <EC2_PUBLIC_IP> with the public IP address or DNS name of your EC2 instance.
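
If you are logged in to the instance and don't have the public IP at hand, it can be read from the EC2 instance metadata service (the simple form below assumes IMDSv1 is allowed; with IMDSv2 enforced, a session token is required):

# Query the instance's public IPv4 address from the EC2 metadata service
curl -s http://169.254.169.254/latest/meta-data/public-ipv4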

8. Check Health Endpoint

To verify the health of the application, access the actuator health endpoint:

http://<EC2_PUBLIC_IP>:9000/kafdrop/actuator/health
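
You can also query the same endpoint from the command line; Kafdrop is a Spring Boot application, so a healthy instance should respond with a small JSON document whose status is UP:

curl http://<EC2_PUBLIC_IP>:9000/kafdrop/actuator/health
# A healthy instance responds with JSON similar to: {"status":"UP"}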

Automating with Terraform

You can automate the setup of the EC2 instance and the installation of Kafdrop using Terraform. Below is an example Terraform configuration that accomplishes this.

Terraform Configuration

  1. Create a Terraform Configuration File: Save the following content in a file named main.tf.
provider "aws" {
  region = "ap-south-1"
}

resource "aws_instance" "kafdrop" {
  ami                    = "ami-0dc2d3e4c0f9ebd18" # Ubuntu 20.04 LTS AMI
  instance_type          = "t2.micro"

  # Attach the security group defined below so ports 22 and 9000 are reachable
  vpc_security_group_ids = [aws_security_group.kafdrop_sg.id]

  tags = {
    Name = "Kafdrop"
  }

  key_name = "your-key-pair" # Replace with your key pair name

  provisioner "remote-exec" {
    inline = [
      "sudo apt update",
      "sudo apt install -y openjdk-17-jdk",
      "curl -L -o /home/ubuntu/kafdrop.jar https://github.com/obsidiandynamics/kafdrop/releases/download/3.31.0/kafdrop-3.31.0.jar",
      "sudo cp /usr/lib/jvm/java-17-openjdk-amd64/lib/security/cacerts /tmp/kafka.client.truststore.jks",
      "echo 'security.protocol=SSL' > /home/ubuntu/client.properties",
      "echo 'ssl.truststore.location=/tmp/kafka.client.truststore.jks' >> /home/ubuntu/client.properties",
      "export KAFKA_PROPERTIES_FILE=/home/ubuntu/client.properties",
      "export SERVER_SERVLET_CONTEXTPATH=/kafdrop",
      "nohup java -Djava.net.preferIPv4Stack=true -Xmx1g -Xms1g -jar /home/ubuntu/kafdrop.jar --kafka.brokerConnect=<kafka-broker> &"
    ]

    connection {
      type        = "ssh"
      user        = "ubuntu"
      private_key = file("path/to/your-private-key.pem")
      host        = self.public_ip
    }
  }

  provisioner "file" {
    source      = "path/to/your-private-key.pem"
    destination = "/home/ubuntu/private-key.pem"

    connection {
      type        = "ssh"
      user        = "ubuntu"
      private_key = file("path/to/your-private-key.pem")
      host        = aws_instance.kafdrop.public_ip
    }
  }
}

resource "aws_security_group" "kafdrop_sg" {
  name_prefix = "kafdrop-sg"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 9000
    to_port     = 9000
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

output "public_ip" {
  value = aws_instance.kafdrop.public_ip
}
  2. Initialize Terraform: Run the following command to initialize Terraform in your working directory:
terraform init
  3. Apply the Terraform Configuration: Run the following command to create the resources defined in the configuration file:
terraform apply
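
If you'd like to preview the changes before creating anything, terraform plan shows them without applying, and terraform destroy removes the resources when you are finished with the instance:

terraform plan
terraform destroy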

This configuration creates an Ubuntu EC2 instance with the security group attached, installs Java 17, downloads Kafdrop, configures SSL, and starts it against your Kafka brokers. Replace the key pair name, private key path, and <kafka-broker> placeholders with your own values before applying.

Troubleshooting

If you encounter any issues, follow these steps:

  1. Ensure the Truststore File Exists: Verify that the truststore file has been copied to the correct location and has the correct permissions:
    ls -l /tmp/kafka.client.truststore.jks
  2. Check Logs for Errors: Review the nohup.out file to see if there are any errors:
    cat nohup.out
  3. Security Group Configuration: Ensure that the security group associated with your EC2 instance allows inbound traffic on port 9000.
  4. Enable Debug Logging: For more detailed output, enable debug logging:
    nohup java -Djava.net.preferIPv4Stack=true -Xmx1g -Xms1g -jar /home/ubuntu/kafdrop.jar --kafka.brokerConnect=<kafka-broker> --logging.level.org.springframework=DEBUG &
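
If the logs point to SSL handshake failures, one further check (not covered in the steps above) is to connect to a broker's SSL listener directly with openssl; the host and port below are placeholders for your broker's SSL endpoint:

# Verify the broker presents a certificate and completes a TLS handshake
openssl s_client -connect <kafka-broker-host>:<ssl-port> </dev/null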

Conclusion

Setting up Kafdrop on an Ubuntu EC2 instance with SSL configuration ensures secure communication with your Kafka cluster. By following these steps and using the provided Terraform configuration, you can easily monitor your Kafka topics and consumer groups through a user-friendly web interface. If you encounter any issues, the troubleshooting steps should help you resolve common problems. Happy monitoring!

