Securing Grafana and Prometheus with SSH tunnels instead of public ports
In this article, I'll show how I deploy the API service for class decompilation that powers jar.tools, alongside a full monitoring stack (Prometheus for metrics collection, Grafana for dashboards, Node Exporter for system metrics, and cAdvisor for container monitoring), without exposing any dashboards to the public internet.
The Problem: VPS Providers Don't Give You Security Out of the Box
When I spin up a fresh VPS from DigitalOcean, Linode, or any other cloud provider, the server is essentially wide open. By default, most VPS providers give you:
- Root access via password authentication
- SSH accessible from any IP address worldwide
- No firewall configured
- Default system services running
- Basic Ubuntu installation with minimal security hardening
Initially, I access my fresh VPS like this:
ssh root@server-ip
# Password: the temporary password emailed by the provider
This is convenient for initial setup, but it's also a security nightmare. Bots are constantly scanning the internet for servers with weak root passwords, default configurations, and exposed services. Your server can be compromised within hours of going online.
That's why the first thing I do - before installing any applications - is create a proper foundation.
Step 1: Setting Up the Server Foundation
Before deploying any applications, I start by setting up the basic server environment. My script handles the initial setup while still logged in as root:
# Update system packages
apt-get update -qq
apt-get install -y curl wget git jq openssl zip unzip ufw fail2ban
# Add 1GB swap for memory stability
fallocate -l 1G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
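# Persist the swap file across reboots (a small addition; without this the swap is lost on restart)
echo '/swapfile none swap sw 0 0' >> /etc/fstab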
# Install Docker
curl -fsSL https://get.docker.com | sh
systemctl enable docker
systemctl start docker
The key step is creating a service user that will run everything:
# Create service user with Docker access
useradd -m -s /bin/bash -G docker jarapp
# Configure limited sudo access for system tasks
echo "jarapp ALL=(ALL) NOPASSWD: /usr/bin/systemctl, /usr/sbin/nginx, /bin/chown, /bin/chmod, /usr/bin/tee, /usr/sbin/ufw" > /etc/sudoers.d/jarapp
This creates a dedicated user (jarapp) that can manage Docker containers and perform essential system tasks, but can't do everything root can.
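To double-check the grant, I can list exactly what sudo allows the new user (run as root):
# Show the sudo rules that apply to jarapp
sudo -l -U jarapp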
Step 2: Configuring SSH Keys for the Service User
Next, I set up SSH key authentication for the service user and test it before applying any hardening:
# Copy SSH public key to jarapp user
install -d -m 700 -o jarapp -g jarapp /home/jarapp/.ssh
echo "$PUBLIC_KEY" > /home/jarapp/.ssh/authorized_keys
chown jarapp:jarapp /home/jarapp/.ssh/authorized_keys
chmod 600 /home/jarapp/.ssh/authorized_keys
I always test this immediately:
ssh -i ~/.ssh/id_rsa jarapp@server-ip "echo 'jarapp SSH OK'"
This verification step is crucial - if SSH key access fails here, you'll be locked out when we disable root login later.
Step 3: Deploying My Decompiler API
With the foundation ready, I deploy my containerized decompilation service:
services:
  decompiler-api:
    build: .
    container_name: decompiler-api
    user: "1000:1000"
    read_only: true
    tmpfs:
      - /tmp
      - /app/temp
    deploy:
      resources:
        limits:
          memory: 400M
          cpus: "0.8"
    cap_drop:
      - ALL
    cap_add:
      - SETUID
      - SETGID
    ports:
      - "127.0.0.1:3000:8080" # only accessible locally
    networks:
      - shared-network
networks:
  shared-network:
    external: true # created once with: docker network create shared-network
My service exposes /v1/decompile, /healthz, and /logs endpoints. The container runs with strict security: read-only filesystem, minimal capabilities, memory limits, and only binds to localhost.
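Once the container is up, a quick inspection (a sanity check, not part of the deploy script) confirms the hardening took effect:
# Verify read-only rootfs and dropped capabilities
docker inspect decompiler-api --format 'ReadOnly={{.HostConfig.ReadonlyRootfs}} CapDrop={{.HostConfig.CapDrop}}'
# Verify the port is published on localhost only
docker port decompiler-api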
Step 4: Setting Up the Monitoring Stack
I deploy monitoring with Docker Compose and isolate it on a shared network:
# docker-compose.monitoring.yml
services:
  prometheus:
    image: prom/prometheus:latest
    ports:
      - "127.0.0.1:9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    networks:
      - shared-network
  grafana:
    image: grafana/grafana:latest
    ports:
      - "127.0.0.1:3001:3000"
    environment:
      GF_SECURITY_ADMIN_USER: admin
      GF_SECURITY_ADMIN_PASSWORD: SecureP@ssw0rd2024!
    networks:
      - shared-network
  node-exporter:
    image: prom/node-exporter:latest
    ports:
      - "127.0.0.1:9100:9100"
    networks:
      - shared-network
  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    ports:
      - "127.0.0.1:8081:8080"
    networks:
      - shared-network
networks:
  shared-network:
    external: true # shared with the API's compose project, created once up front
Every service binds to 127.0.0.1 only, making it unreachable from outside my server.
⚠️ Security Note: The Grafana password shown here is just an example. In production, always use a strong, unique password and never share it publicly or commit it to version control. Consider using environment variables or Docker secrets for sensitive credentials.
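On the server, I double-check that every listener really is bound to loopback:
# All monitoring ports should show 127.0.0.1, never 0.0.0.0
ss -tlnp | grep -E '3000|3001|9090|9100|8081'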
I configure Prometheus (metrics collector) to scrape both system and application metrics:
# prometheus.yml
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'node-exporter'
    static_configs:
      - targets: ['node-exporter:9100']
  - job_name: 'cadvisor'
    static_configs:
      - targets: ['cadvisor:8080']
  - job_name: 'decompiler-api'
    metrics_path: '/stats'
    static_configs:
      - targets: ['decompiler-api:8080']
The key is attaching my decompiler API to the same Docker network so Prometheus can scrape its metrics. I create the network once before bringing up either stack; docker network connect attaches a container that's already running:
docker network create shared-network
docker network connect shared-network decompiler-api
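Afterwards, docker network inspect shows which containers share the network:
# List the containers attached to shared-network
docker network inspect shared-network --format '{{range .Containers}}{{.Name}} {{end}}'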
Step 5: Applying Security Hardening
Now comes the critical security lockdown. I configure SSH hardening in /etc/ssh/sshd_config:
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
MaxAuthTries 3
AllowUsers jarapp
ClientAliveInterval 300
ClientAliveCountMax 2
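Before applying these settings, I validate the file and restart the daemon while keeping my current session open as a fallback (assuming Ubuntu, where the unit is named ssh):
# Validate sshd_config syntax first; a typo here can lock you out
sshd -t
# Restart the daemon; the existing session stays alive as a safety net
systemctl restart ssh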
I set up the firewall to only allow essential ports:
ufw default deny incoming
ufw default allow outgoing
ufw allow 22/tcp comment 'SSH'
ufw allow 80/tcp comment 'HTTP - Nginx reverse proxy'
ufw --force enable
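A quick status check confirms the policy:
# Show active rules and default policies
ufw status verbose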
I configure Nginx as a reverse proxy with Basic Auth:
server {
    listen 80;
    server_name _;

    # Health check endpoint (public, no authentication)
    location = /healthz {
        proxy_pass http://127.0.0.1:3000/healthz;
    }

    # Block internal metrics from external access
    location /stats { return 404; }

    # Main API endpoints with authentication
    location / {
        auth_basic "JAR Decompiler API";
        auth_basic_user_file /etc/nginx/.htpasswd;
        proxy_pass http://127.0.0.1:3000;
    }
}
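The .htpasswd file referenced above is created with the htpasswd tool from apache2-utils (apiuser is just an example username):
# Create the Basic Auth credentials file (-c creates it; prompts for a password)
apt-get install -y apache2-utils
htpasswd -c /etc/nginx/.htpasswd apiuser
# Validate the config and reload nginx
nginx -t && systemctl reload nginx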
Finally, I enable fail2ban for intrusion prevention:
systemctl enable fail2ban
systemctl start fail2ban
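The Debian/Ubuntu package enables an sshd jail out of the box; if I want to tune it, a minimal /etc/fail2ban/jail.local looks something like this (thresholds are illustrative, not my exact values):
# Write a minimal override for the sshd jail
tee /etc/fail2ban/jail.local > /dev/null <<'EOF'
[sshd]
enabled = true
maxretry = 3
bantime = 1h
findtime = 10m
EOF
systemctl restart fail2ban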
After applying hardening, I always test SSH access to make sure I didn't lock myself out:
ssh -i ~/.ssh/id_rsa jarapp@server-ip "echo 'SSH access verified'"
Step 6: How I Access Everything via SSH Tunneling
I never hit Grafana or Prometheus directly. Instead, I open tunnels:
# Grafana
ssh -L 3001:127.0.0.1:3001 jarapp@server-ip
# Prometheus
ssh -L 9090:127.0.0.1:9090 jarapp@server-ip
# cAdvisor
ssh -L 8081:127.0.0.1:8081 jarapp@server-ip
Then I open http://localhost:3001 for Grafana (visualization dashboard).
This makes my dashboards available only to authenticated SSH users.
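When I want everything at once, I can combine the forwards into a single session; -N keeps it tunnel-only with no remote shell:
# Forward Grafana, Prometheus, and cAdvisor over one SSH session
ssh -N \
  -L 3001:127.0.0.1:3001 \
  -L 9090:127.0.0.1:9090 \
  -L 8081:127.0.0.1:8081 \
  jarapp@server-ip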
Step 7: Running Health Checks
My deployment script runs comprehensive health checks to verify everything works:
# Test API health endpoint (public)
curl -f http://server-ip/healthz
# Test Grafana (internal via SSH)
curl -f http://127.0.0.1:3001/api/health
# Test Prometheus (internal via SSH)
curl -f http://127.0.0.1:9090/-/healthy
# Verify container networking
docker exec grafana nslookup prometheus
These checks confirm the services are healthy from the inside; to verify that Grafana (the dashboard service) isn't accidentally exposed, I also probe the public interface from outside and expect those connections to fail.
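From my laptop, the negative test looks like this (server-ip is a placeholder):
# These should fail: nothing but SSH and HTTP is reachable from outside
curl -m 5 http://server-ip:3001 && echo "PROBLEM: Grafana is public!"
curl -m 5 http://server-ip:9090 && echo "PROBLEM: Prometheus is public!"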
Other Common Use Cases
While I use SSH tunneling to keep my monitoring dashboards private, the same technique is widely used in other scenarios.
Databases – Developers often forward localhost:5432 for PostgreSQL or localhost:3306 for MySQL so they can connect from their laptop without opening ports to the internet.
ssh -L 5432:127.0.0.1:5432 user@server
Internal Web Apps – Admin tools or staging environments that should never be public can be safely accessed through a tunnel.
ssh -L 8080:127.0.0.1:8080 user@server
Kubernetes Dashboards – kubectl port-forward works on the same principle as SSH tunneling, forwarding traffic securely from a cluster service to your local machine.
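For example (service and namespace names are illustrative):
# Forward a cluster service to localhost, same idea as an SSH -L tunnel
kubectl port-forward svc/grafana 3000:80 -n monitoring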
In short, SSH tunneling is a way of creating a private, encrypted pipe for traffic. You only open it when needed, and it closes when your session ends.
My Automated Deployment Script
Everything runs from a single Bash script that I created to:
- Check prerequisites and test SSH connection
- Bootstrap the server and create service user
- Configure SSH keys and verify access
- Deploy the decompiler app in Docker with hardening
- Deploy monitoring stack with shared networking
- Apply comprehensive security hardening
- Run health checks and generate deployment report
The script includes retry logic, error handling, and colored output to make deployment reliable and easy to debug.
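As an illustration of the retry logic (a sketch, not my actual script):
#!/usr/bin/env bash
set -euo pipefail

# Re-run a flaky command up to 3 times with a short pause between attempts
retry() {
  local attempt
  for attempt in 1 2 3; do
    "$@" && return 0
    echo "attempt $attempt failed: $*" >&2
    sleep 5
  done
  return 1
}

# Example: wait for SSH to come up on a freshly provisioned server
retry ssh -o ConnectTimeout=5 jarapp@server-ip true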
Security Benefits I Get
This design provides me:
- No public attack surface for Grafana (dashboards) or Prometheus (metrics)
- End-to-end encryption with SSH tunnels
- Auditable access via SSH logs and fail2ban
- Compliance-friendly approach with no internet-exposed dashboards
- Operational simplicity with no certificates or VPN required
My Real-World Admin Flow
For me as an operator, using this system is simple:
# Open tunnel
ssh -L 3001:127.0.0.1:3001 jarapp@server-ip
# Access dashboard
open http://localhost:3001
# When done
exit
Everything else runs automatically in the background.
Conclusion
Instead of exposing monitoring tools on public ports, I keep them bound to localhost and only reachable via SSH tunnels. Combined with my automated deployment scripts, this approach gives me enterprise-grade monitoring security with minimal overhead.
If you are running APIs in production, avoid putting Grafana on the internet. Wrap it in SSH tunneling and automation to keep monitoring invisible to attackers while effortless for your team.