Jenkins
CI/CD automation server, pipeline engine, and plugin ecosystem
Overview
Jenkins is an open-source automation server written in Java. It is among the most widely deployed CI/CD tools in the world, with over 2,000 plugins that integrate with virtually every tool in the software delivery ecosystem. Jenkins automates building, testing, and deploying software through pipelines defined as code.
History
Jenkins began as Hudson, created by Kohsuke Kawaguchi at Sun Microsystems in 2004 and first released in February 2005. After Oracle acquired Sun in 2010, a trademark dispute over the Hudson name led the community to vote overwhelmingly to rename and fork the project as Jenkins in January 2011. Hudson faded into obscurity while Jenkins became the dominant CI/CD server. The project is now governed by the Continuous Delivery Foundation (CDF), a Linux Foundation project.
Core Automation Server
Jenkins runs as a Java web application (typically on port 8080). It provides a web UI, REST API, and CLI for managing jobs and pipelines. The core is lightweight — almost all functionality comes from plugins.
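The REST API exposes every job and build as JSON under a predictable URL scheme: each path segment (including folders) is prefixed with `/job/`. A small Python sketch of that convention — the helper name is my own, not part of any Jenkins client library:

```python
# Build the JSON API URL for a Jenkins job, including jobs nested in folders.
# URL convention: each path segment is prefixed with "/job/", then "/api/json".
# The helper itself is illustrative, not part of Jenkins or any official client.

def job_api_url(base_url: str, job_path: str) -> str:
    """'folder/myapp' -> '<base>/job/folder/job/myapp/api/json'."""
    segments = [s for s in job_path.strip("/").split("/") if s]
    return base_url.rstrip("/") + "".join(f"/job/{s}" for s in segments) + "/api/json"

print(job_api_url("http://jenkins:8080", "platform/backend"))
# http://jenkins:8080/job/platform/job/backend/api/json
```

The same scheme works for builds (`.../job/myapp/42/api/json`) and for the queue and node endpoints.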
Core Plugin Ecosystem
Over 2,000 community-maintained plugins covering SCM integration, build tools, cloud providers, notification systems, security, and more. Jenkins without plugins is a bare scheduler; plugins make it a CI/CD platform.
Pipeline Jenkinsfile
Pipelines defined in a Jenkinsfile stored in your repository. Version-controlled, reviewable, and portable. Supports both declarative and scripted syntax.
UI Blue Ocean
A modern UI for Jenkins pipelines with visual pipeline editors, per-branch dashboards, and GitHub/Bitbucket integration. No longer actively developed — receives only security patches. Still functional but not recommended for new setups.
Jenkins vs modern alternatives
| Tool | Model | Trade-offs |
|---|---|---|
| Jenkins | Self-hosted, plugin-based | Maximum flexibility, huge ecosystem. But requires maintenance, plugin compatibility issues, Groovy complexity. |
| GitHub Actions | SaaS, YAML workflows | Zero infrastructure, tight GitHub integration. Limited for non-GitHub repos, vendor lock-in, runner costs at scale. |
| GitLab CI/CD | Built into GitLab | Unified platform (SCM + CI + registry + deploy). Requires GitLab, less flexible than Jenkins for exotic workflows. |
| Tekton | Kubernetes-native | Cloud-native, CRD-based pipelines. Steep learning curve, requires K8s cluster, less mature ecosystem. |
| CircleCI / Travis CI | SaaS | Easy setup, good caching. Cost at scale. CircleCI offers self-hosted runners (machine and container); many organizations have abandoned Travis CI in favor of GitHub Actions. |
Jenkins remains the best choice when you need full control over your CI/CD infrastructure, run complex multi-branch pipelines with custom toolchains, or operate in air-gapped / on-premises environments. For teams already on GitHub or GitLab, the built-in CI systems are often simpler to start with.
Architecture
Jenkins uses a controller-agent architecture (historically called master-slave). The controller orchestrates pipelines and serves the UI; agents execute the actual build steps.
Key concepts
Core Controller
The central Jenkins process. Manages configuration, schedules builds, dispatches work to agents, and stores build artifacts and logs. Should not run builds itself in production — reserve it for orchestration only.
Core Agents
Separate machines (physical, VM, or container) that execute build steps. Each agent has one or more executors (concurrent build slots). Agents connect via SSH, inbound TCP/WebSocket (formerly JNLP), or the Kubernetes plugin.
Concept Executors
A slot on a node that can run one build at a time. A node with 4 executors can run 4 concurrent builds. Set executor count based on CPU cores and workload type (CPU-bound builds need fewer executors).
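A rough sizing rule of thumb (my own heuristic, not an official Jenkins formula): CPU-bound agents get about one executor per core, while I/O-bound workloads, which spend most of their time waiting on network or disk, can oversubscribe:

```python
# Rough executor-count heuristic for an agent. Illustrative rule of thumb only,
# not an official Jenkins recommendation: CPU-bound builds saturate one core
# each; I/O-bound builds mostly wait, so the node can run more concurrently.

def suggested_executors(cores: int, workload: str) -> int:
    if workload == "cpu-bound":
        return max(1, cores)       # one build per core
    if workload == "io-bound":
        return max(1, cores * 2)   # oversubscribe: builds spend most time waiting
    return max(1, cores)           # unknown workload: be conservative

print(suggested_executors(4, "cpu-bound"))  # 4
print(suggested_executors(4, "io-bound"))   # 8
```

Measure real utilization before tuning; memory per build is often the binding constraint, not CPU.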
Concept Labels
Tags assigned to agents that let pipelines target specific node types. Example: agent { label 'linux && docker' } runs only on agents with both labels. Use labels for OS, toolchain, or capability matching.
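To illustrate the matching semantics, here is a simplified re-implementation handling only the `&&` operator (Jenkins's real parser also supports `||`, `!`, and parentheses — this sketch is not that parser):

```python
# Simplified illustration of Jenkins label matching, handling only '&&'
# conjunctions like 'linux && docker'. Jenkins's actual label expression
# language also supports ||, !, and parentheses; this is a teaching sketch.

def matches(expression: str, agent_labels: set[str]) -> bool:
    required = [term.strip() for term in expression.split("&&")]
    return all(term in agent_labels for term in required)

print(matches("linux && docker", {"linux", "docker", "jdk21"}))  # True
print(matches("linux && docker", {"linux"}))                     # False
```

A build whose expression matches no online agent sits in the queue, so keep label taxonomies small and documented.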
Agent connection methods
| Method | How it works | When to use |
|---|---|---|
| SSH | Controller SSHes into the agent and launches the agent process. Controller-initiated. | Persistent Linux/macOS agents. Most common for static infrastructure. |
| Inbound (TCP / WebSocket) | Agent initiates connection to the controller via TCP (port 50000) or WebSocket. Formerly called JNLP. Agent process runs as a service or container. | Agents behind firewalls/NAT, Windows agents, Docker-based agents. WebSocket is preferred for modern setups as it works through HTTP proxies without opening a dedicated TCP port. |
| Kubernetes plugin | Controller dynamically provisions pods in a K8s cluster. Pod is the agent; destroyed after the build. | Ephemeral, auto-scaling agents. Best for cloud-native Jenkins. |
Set the controller's executor count to 0 in production. All builds should run on agents. This keeps the controller responsive, reduces its attack surface, and prevents a rogue build from destabilizing Jenkins itself.
Pipeline as Code
Jenkins Pipelines are defined in a Jenkinsfile — a Groovy-based DSL stored in your repository's root. There are two syntax flavors: declarative (structured, opinionated) and scripted (full Groovy, maximum flexibility).
Declarative pipeline
// Jenkinsfile (Declarative)
pipeline {
    agent { label 'linux && docker' }
    options {
        timeout(time: 30, unit: 'MINUTES')
        disableConcurrentBuilds()
        buildDiscarder(logRotator(numToKeepStr: '10'))
    }
    environment {
        REGISTRY = 'registry.example.com'
        IMAGE = "${REGISTRY}/myapp:${env.BUILD_NUMBER}"
    }
    stages {
        stage('Checkout') {
            steps {
                checkout scm
            }
        }
        stage('Build & Test') {
            parallel {
                stage('Unit Tests') {
                    steps {
                        sh 'npm ci'
                        sh 'npm test -- --coverage'
                    }
                    post {
                        always {
                            junit 'test-results/*.xml'
                            publishHTML(target: [
                                reportDir: 'coverage',
                                reportFiles: 'index.html',
                                reportName: 'Coverage Report'
                            ])
                        }
                    }
                }
                stage('Lint') {
                    steps {
                        sh 'npm run lint'
                    }
                }
            }
        }
        stage('Docker Build') {
            steps {
                sh "docker build -t ${IMAGE} ."
            }
        }
        stage('Push') {
            when {
                branch 'main'
            }
            steps {
                withCredentials([usernamePassword(
                    credentialsId: 'registry-creds',
                    usernameVariable: 'REG_USER',
                    passwordVariable: 'REG_PASS'
                )]) {
                    // Single-quoted: the shell expands the variables, so the
                    // secret is never interpolated into a Groovy string
                    sh 'echo "$REG_PASS" | docker login "$REGISTRY" -u "$REG_USER" --password-stdin'
                    sh 'docker push "$IMAGE"'
                }
            }
        }
        stage('Deploy') {
            when {
                branch 'main'
                beforeAgent true
            }
            input {
                message 'Deploy to production?'
                ok 'Deploy'
            }
            steps {
                sh "./deploy.sh ${IMAGE}"
            }
        }
    }
    post {
        success {
            slackSend(channel: '#deploys', message: "SUCCESS: ${env.JOB_NAME} #${env.BUILD_NUMBER}")
        }
        failure {
            slackSend(channel: '#deploys', color: 'danger', message: "FAILED: ${env.JOB_NAME} #${env.BUILD_NUMBER}")
        }
        always {
            cleanWs()
        }
    }
}
Scripted pipeline
// Jenkinsfile (Scripted)
node('linux && docker') {
    try {
        stage('Checkout') {
            checkout scm
        }
        stage('Build') {
            sh 'npm ci'
            sh 'npm run build'
        }
        stage('Test') {
            sh 'npm test'
            junit 'test-results/*.xml'
        }
        if (env.BRANCH_NAME == 'main') {
            stage('Deploy') {
                withCredentials([string(credentialsId: 'deploy-token', variable: 'TOKEN')]) {
                    // Single-quoted so the shell, not Groovy, expands $TOKEN
                    sh 'curl -X POST -H "Authorization: Bearer $TOKEN" https://deploy.example.com/trigger'
                }
            }
        }
        currentBuild.result = 'SUCCESS'
    } catch (Exception e) {
        currentBuild.result = 'FAILURE'
        throw e
    } finally {
        cleanWs()
    }
}
Declarative vs Scripted
| Aspect | Declarative | Scripted |
|---|---|---|
| Syntax | Structured pipeline { ... } block | Full Groovy inside node { ... } |
| Learning curve | Lower — constrained structure guides you | Higher — requires Groovy knowledge |
| Validation | Syntax checked before execution | No pre-validation; errors at runtime |
| Flexibility | Covers 90% of use cases; use script { } blocks for escape hatches | Unlimited — any Groovy code |
| Post actions | Built-in post { always / success / failure } | Manual try/catch/finally |
| Recommendation | Preferred for most teams | Use when declarative is genuinely insufficient |
Plugins
Jenkins's power comes from its plugin ecosystem. The core provides a scheduler and web UI; plugins add SCM integration, build tools, cloud agents, credentials, notifications, and more. Managing plugins well is critical to a stable Jenkins installation.
Essential plugins
Must-have Pipeline
The Pipeline plugin suite (workflow-aggregator) enables Jenkinsfile-based pipelines. Includes Pipeline: Declarative, Pipeline: Groovy, Pipeline: Stage View, and more. This is the foundation of modern Jenkins.
Must-have Git
SCM integration for Git repositories. Supports polling, webhooks, branch discovery, sparse checkout, and credentials for SSH/HTTPS authentication.
Must-have Credentials / Credentials Binding
Secure storage for secrets. Credentials Binding exposes secrets as environment variables in pipeline steps via withCredentials(). Supports username/password, SSH keys, secret text, certificates, and files.
Must-have Docker Pipeline
Run pipeline stages inside Docker containers with agent { docker { image 'node:20' } }. Build, push, and run Docker images from pipeline steps.
Recommended Job DSL
Define Jenkins jobs programmatically in Groovy. Seed jobs generate other jobs from DSL scripts. Essential for managing hundreds of jobs as code.
Recommended Blue Ocean
Modern pipeline visualization with branch-aware dashboards, visual pipeline creation, and GitHub/Bitbucket integration. No longer actively developed by CloudBees; receives security patches only. The classic UI's Pipeline Stage View plugin now offers similar visualization.
Plugin management
# Install plugins via the CLI (requires authentication against a running controller)
java -jar jenkins-cli.jar -s http://localhost:8080/ -auth admin:API_TOKEN \
    install-plugin git pipeline-model-definition docker-workflow

# plugins.txt — pinned plugin list (used when baking Docker images)
pipeline-model-definition:2.2175.v76a_fff0a_2618
git:5.2.1
docker-workflow:572.v950f58993843
credentials-binding:677.vdc9c38cb_254d
job-dsl:1.87
configuration-as-code:1775.v810dc950b_514

# Install from the list with jenkins-plugin-cli (available in the official Docker image)
jenkins-plugin-cli --plugin-file /usr/share/jenkins/ref/plugins.txt
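Because unpinned entries make controller image builds non-reproducible, it can be worth lint-checking plugins.txt in CI. A sketch (the name:version line format is what jenkins-plugin-cli expects; the helper itself is hypothetical):

```python
# Lint a jenkins-plugin-cli plugins.txt: every entry should be pinned to an
# explicit version ('name:version'). Returns the names of unpinned entries.
# The 'name:version' line format matches jenkins-plugin-cli --plugin-file;
# the checker itself is an illustrative helper, not a Jenkins tool.

def unpinned_plugins(plugins_txt: str) -> list[str]:
    bad = []
    for line in plugins_txt.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        name, _, version = line.partition(":")
        if not version:
            bad.append(name)
    return bad

sample = """\
git:5.2.1
job-dsl:1.87
docker-workflow
"""
print(unpinned_plugins(sample))  # ['docker-workflow']
```

Failing the image build when this list is non-empty keeps "latest" plugins from sneaking into production.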
Plugins run with full access to the Jenkins controller. A malicious or vulnerable plugin can read all credentials, modify any job, and execute arbitrary code. Only install plugins from the official Jenkins Update Center, review changelogs before updating, and remove plugins you no longer use. Subscribe to Jenkins security advisories.
Credentials & Security
Jenkins stores secrets in an encrypted credentials store and provides fine-grained access control. Security is critical because Jenkins has access to source code, deployment keys, and production infrastructure.
Credential types
| Type | Use case | Pipeline access |
|---|---|---|
| Username with password | Registry logins, API auth | usernamePassword(credentialsId: 'id', usernameVariable: 'U', passwordVariable: 'P') |
| SSH Username with private key | Git clone over SSH, SSH deploy | sshUserPrivateKey(credentialsId: 'id', keyFileVariable: 'KEY') |
| Secret text | API tokens, webhook secrets | string(credentialsId: 'id', variable: 'TOKEN') |
| Secret file | Kubeconfig, service account JSON | file(credentialsId: 'id', variable: 'FILE_PATH') |
| Certificate (PKCS#12) | Client TLS certificates | certificate(credentialsId: 'id', keystoreVariable: 'KS', passwordVariable: 'KP') |
// Using credentials in a pipeline
pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                withCredentials([
                    usernamePassword(credentialsId: 'docker-hub', usernameVariable: 'DOCKER_USER', passwordVariable: 'DOCKER_PASS'),
                    string(credentialsId: 'slack-webhook', variable: 'SLACK_URL'),
                    file(credentialsId: 'prod-kubeconfig', variable: 'KUBECONFIG')
                ]) {
                    // Single-quoted sh strings: the shell expands the variables,
                    // so secrets are never interpolated into Groovy strings
                    sh 'echo "$DOCKER_PASS" | docker login -u "$DOCKER_USER" --password-stdin'
                    sh 'kubectl --kubeconfig="$KUBECONFIG" apply -f k8s/'
                    sh 'curl -X POST "$SLACK_URL" -d \'{"text": "Deployed successfully"}\''
                }
            }
        }
    }
}
Access control
Recommended Role-Based Strategy
The Role-based Authorization Strategy plugin provides global roles, project roles (regex-matched), and agent roles. Define roles like developer (build/read), admin (full access), viewer (read-only). The most practical RBAC solution for most teams.
Alternative Matrix Authorization
Built-in fine-grained permission matrix. Assign individual permissions (Job/Build, Job/Configure, Overall/Administer, etc.) per user or group. Powerful but tedious to manage at scale.
Security hardening
- LDAP / Active Directory — integrate Jenkins with your corporate directory. Use the LDAP or AD plugin. Avoid local Jenkins accounts in production.
- CSRF protection — enabled by default since Jenkins 2.0, and the option to disable it was removed entirely in Jenkins 2.222.x. API calls require a crumb token or API token authentication.
- Script approval — the Groovy Sandbox blocks unapproved method calls. Review pending approvals carefully — approving java.lang.Runtime.exec gives pipeline authors arbitrary command execution.
- Agent-to-controller security — "Agent → Controller Access Control" is always enabled and cannot be disabled since Jenkins 2.326. It prevents agents from reading files or executing commands on the controller. On older versions, verify it is enabled in Manage Jenkins → Security.
- HTTPS — always run Jenkins behind a reverse proxy (Nginx, Caddy) with TLS. Never expose the Jenkins UI over plain HTTP.
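With CSRF protection active, API clients must first fetch a crumb from /crumbIssuer/api/json and send it back as a request header on every POST. A sketch of turning the crumb response into headers (parsing only — no HTTP request is made here):

```python
import json

# Convert a Jenkins /crumbIssuer/api/json response into the header a client
# must attach to POST requests while CSRF protection is active. The response's
# 'crumbRequestField' names the header (usually 'Jenkins-Crumb').

def crumb_headers(crumb_response_json: str) -> dict[str, str]:
    data = json.loads(crumb_response_json)
    return {data["crumbRequestField"]: data["crumb"]}

response = '{"crumb": "abc123", "crumbRequestField": "Jenkins-Crumb"}'
print(crumb_headers(response))  # {'Jenkins-Crumb': 'abc123'}
```

Authenticating with a user API token instead of a session skips the crumb dance entirely, which is simpler for scripts.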
Docker Integration
The Docker Pipeline plugin lets you run pipeline stages inside Docker containers, build images, and push to registries — all from your Jenkinsfile.
Running steps in a Docker container
pipeline {
    agent none
    stages {
        stage('Build') {
            agent {
                docker {
                    image 'node:20-alpine'
                    args '-v $HOME/.npm:/root/.npm' // Cache npm packages
                }
            }
            steps {
                sh 'npm ci'
                sh 'npm run build'
                stash includes: 'dist/**', name: 'build-output'
            }
        }
        stage('Test') {
            agent {
                docker {
                    image 'node:20-alpine'
                }
            }
            steps {
                sh 'npm ci'
                sh 'npm test'
            }
        }
        stage('Build Image') {
            agent { label 'docker' }
            steps {
                unstash 'build-output'
                script {
                    def img = docker.build("registry.example.com/myapp:${env.BUILD_NUMBER}")
                    docker.withRegistry('https://registry.example.com', 'registry-creds') {
                        img.push()
                        img.push('latest')
                    }
                }
            }
        }
    }
}
Docker-in-Docker vs socket mounting
DinD Docker-in-Docker
Run a full Docker daemon inside the build container using docker:dind. Fully isolated but requires --privileged mode. Slower due to nested storage drivers. Used in GitLab CI runners.
- Complete isolation between builds
- Requires --privileged (security risk)
- Layer cache not shared between builds
Socket Docker Socket Mounting
Mount the host's /var/run/docker.sock into the build container. Builds run on the host daemon — shared layer cache, no --privileged needed, but no isolation between builds.
- Shared layer cache (fast builds)
- No --privileged required
- Build containers can see/affect other containers on the host
Kaniko (daemonless image building)
// Build Docker images without Docker daemon (Kubernetes-friendly)
stage('Build with Kaniko') {
    agent {
        kubernetes {
            yaml '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:debug
    command: ["sleep"]
    args: ["infinity"]
    volumeMounts:
    - name: docker-config
      mountPath: /kaniko/.docker
  volumes:
  - name: docker-config
    secret:
      secretName: registry-credentials
'''
        }
    }
    steps {
        container('kaniko') {
            sh """
                /kaniko/executor \\
                    --context=dir://. \\
                    --destination=registry.example.com/myapp:${env.BUILD_NUMBER} \\
                    --cache=true
            """
        }
    }
}
For Kubernetes-based Jenkins agents, use Kaniko for building container images. It does not require a Docker daemon or privileged mode, executing each Dockerfile command entirely in userspace. This makes it safer and more compatible with restricted pod security policies and multi-tenant clusters.
Agents & Scaling
Jenkins scales horizontally by adding agents. Agents can be static (always-on VMs), dynamic (provisioned on demand from clouds), or ephemeral (Kubernetes pods that exist only for the duration of a build).
Kubernetes plugin
The Jenkins Kubernetes plugin dynamically provisions pods as build agents. Each build gets a fresh pod with one or more containers. Pods are destroyed after the build completes, ensuring clean environments and automatic scaling.
pipeline {
    agent {
        kubernetes {
            yaml '''
apiVersion: v1
kind: Pod
metadata:
  labels:
    jenkins: agent
spec:
  containers:
  - name: jnlp
    image: jenkins/inbound-agent:latest
    resources:
      requests:
        cpu: 500m
        memory: 512Mi
  - name: maven
    image: maven:3.9-eclipse-temurin-21
    command: ["sleep"]
    args: ["infinity"]
    resources:
      requests:
        cpu: "1"
        memory: 1Gi
  - name: docker
    image: docker:24-cli
    command: ["sleep"]
    args: ["infinity"]
    volumeMounts:
    - name: docker-sock
      mountPath: /var/run/docker.sock
  volumes:
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock
'''
        }
    }
    stages {
        stage('Build') {
            steps {
                container('maven') {
                    sh 'mvn clean package -DskipTests'
                }
            }
        }
        stage('Test') {
            steps {
                container('maven') {
                    sh 'mvn test'
                }
            }
        }
        stage('Docker Build') {
            steps {
                container('docker') {
                    sh "docker build -t myapp:${env.BUILD_NUMBER} ."
                }
            }
        }
    }
}
Cloud agent types
Recommended Kubernetes Pods
Ephemeral pods provisioned per build. Zero idle cost, perfect isolation, auto-scaling built in. Requires a K8s cluster. The modern standard for Jenkins at scale.
Cloud EC2 / Azure VM / GCE
Cloud plugins (EC2 Plugin, Azure VM Agents, Google Compute Engine) launch VMs on demand. Slower to provision than pods (minutes vs seconds) but support any OS and toolchain.
Docker Docker Plugin
Provisions Docker containers as agents on a Docker host. Lighter than VMs but less isolated than K8s pods. Good middle ground for teams without Kubernetes.
Static Permanent Agents
Always-on VMs or bare-metal machines. Simple to set up but waste resources when idle. Still needed for specialized hardware (GPU, macOS for iOS builds, Windows for .NET).
Use Kubernetes pods for the majority of builds (Linux, Docker, standard toolchains). Keep a small pool of static agents for workloads that cannot run in containers (macOS, Windows, GPU). Set resource requests/limits on pod templates to prevent build pods from starving the cluster.
Configuration as Code
Managing Jenkins through the UI does not scale. JCasC (Jenkins Configuration as Code) manages the Jenkins system configuration in YAML. Job DSL manages job definitions in Groovy. Together, they let you rebuild a Jenkins instance entirely from Git.
JCasC (jenkins.yaml)
# jenkins.yaml - Jenkins Configuration as Code
jenkins:
  systemMessage: "Jenkins configured via JCasC"
  numExecutors: 0  # No builds on the controller
  mode: EXCLUSIVE
  securityRealm:
    ldap:
      configurations:
        - server: ldap.example.com
          rootDN: dc=example,dc=com
          userSearchBase: ou=People
          userSearch: uid={0}
          groupSearchBase: ou=Groups
          managerDN: cn=jenkins,ou=ServiceAccounts,dc=example,dc=com
          managerPasswordSecret: "${LDAP_MANAGER_PASSWORD}"
  authorizationStrategy:
    roleBased:
      roles:
        global:
          - name: admin
            permissions:
              - "Overall/Administer"
            entries:
              - group: jenkins-admins
          - name: developer
            permissions:
              - "Overall/Read"
              - "Job/Build"
              - "Job/Read"
              - "Job/Workspace"
            entries:
              - group: developers
          - name: viewer
            permissions:
              - "Overall/Read"
              - "Job/Read"
            entries:
              - group: everyone
  clouds:
    - kubernetes:
        name: k8s
        namespace: jenkins
        jenkinsUrl: http://jenkins:8080
        jenkinsTunnel: jenkins-agent:50000
        containerCapStr: "20"
        podLabels:
          - key: jenkins
            value: agent
unclassified:
  location:
    url: https://jenkins.example.com/
  slackNotifier:
    teamDomain: myteam
    tokenCredentialId: slack-token
credentials:
  system:
    domainCredentials:
      - credentials:
          - usernamePassword:
              scope: GLOBAL
              id: github-creds
              username: jenkins-bot
              password: "${GITHUB_TOKEN}"
          - string:
              scope: GLOBAL
              id: slack-token
              secret: "${SLACK_TOKEN}"
Job DSL (seed job)
// seed-job.groovy - generates jobs from DSL
// This runs as a "seed job" in Jenkins

// Multi-branch pipeline for every repo in the org
['frontend', 'backend', 'api-gateway', 'worker'].each { repo ->
    multibranchPipelineJob("${repo}") {
        displayName(repo.capitalize())
        branchSources {
            github {
                id("${repo}-github")
                repoOwner('myorg')
                repository(repo)
                scanCredentialsId('github-creds')
            }
        }
        orphanedItemStrategy {
            discardOldItems {
                numToKeep(10)
            }
        }
        triggers {
            periodicFolderTrigger {
                interval('5m')
            }
        }
    }
}

// Freestyle job for infrastructure tasks
job('backup-jenkins') {
    description('Nightly Jenkins backup')
    triggers {
        cron('H 2 * * *')
    }
    steps {
        shell('''
            tar czf /backups/jenkins-$(date +%Y%m%d).tar.gz /var/jenkins_home
            find /backups -name "jenkins-*.tar.gz" -mtime +30 -delete
        ''')
    }
    publishers {
        slackNotifier {
            notifyFailure(true)
            room('#ops')
        }
    }
}

// Pipeline job with parameters
pipelineJob('deploy-production') {
    parameters {
        stringParam('IMAGE_TAG', '', 'Docker image tag to deploy')
        choiceParam('ENVIRONMENT', ['staging', 'production'], 'Target environment')
    }
    definition {
        cpsScm {
            scm {
                git {
                    remote {
                        url('https://github.com/myorg/deploy-scripts.git')
                        credentials('github-creds')
                    }
                    branches('*/main')
                }
            }
            scriptPath('Jenkinsfile.deploy')
        }
    }
}
Store both jenkins.yaml (JCasC) and seed job DSL scripts in a Git repository. Apply JCasC on Jenkins startup (mount as a volume or use the CASC_JENKINS_CONFIG environment variable). Run the seed job on startup or via webhook to keep job definitions in sync with Git. This gives you a fully reproducible Jenkins instance.
Docker Deployment
Running Jenkins in Docker is the most common deployment method. It provides isolation, reproducibility, and easy upgrades. A typical production setup includes the Jenkins controller, one or more agents, and persistent volumes for data.
Docker Compose setup
# docker-compose.yml
services:
  jenkins:
    image: jenkins/jenkins:lts-jdk21
    container_name: jenkins
    restart: unless-stopped
    ports:
      - "8080:8080"    # Web UI
      - "50000:50000"  # Inbound agent port (TCP)
    environment:
      JAVA_OPTS: >-
        -Xmx2g -Xms1g
        -Djenkins.install.runSetupWizard=false
        -Dhudson.model.DirectoryBrowserSupport.CSP=
      CASC_JENKINS_CONFIG: /var/jenkins_home/casc_configs
    volumes:
      - jenkins_home:/var/jenkins_home
      - ./casc_configs:/var/jenkins_home/casc_configs:ro
      - ./plugins.txt:/usr/share/jenkins/ref/plugins.txt:ro
      - /var/run/docker.sock:/var/run/docker.sock
    user: root  # Needed for Docker socket access; note this runs Jenkins as root (security trade-off)

  agent-1:
    image: jenkins/inbound-agent:latest-jdk21
    container_name: jenkins-agent-1
    restart: unless-stopped
    environment:
      JENKINS_URL: http://jenkins:8080
      JENKINS_AGENT_NAME: agent-1
      JENKINS_SECRET: "${AGENT_1_SECRET}"
      JENKINS_AGENT_WORKDIR: /home/jenkins/agent
    volumes:
      - agent1_workspace:/home/jenkins/agent
      - /var/run/docker.sock:/var/run/docker.sock
    depends_on:
      - jenkins

volumes:
  jenkins_home:
  agent1_workspace:
Custom Jenkins image with pre-installed plugins
# Dockerfile
FROM jenkins/jenkins:lts-jdk21
# Skip setup wizard
ENV JAVA_OPTS="-Djenkins.install.runSetupWizard=false"
ENV CASC_JENKINS_CONFIG=/usr/share/jenkins/ref/casc_configs
# Install plugins
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN jenkins-plugin-cli --plugin-file /usr/share/jenkins/ref/plugins.txt
# Copy JCasC configuration under /usr/share/jenkins/ref — the base image declares
# /var/jenkins_home as a VOLUME, so files copied there at build time are discarded
COPY casc_configs/ /usr/share/jenkins/ref/casc_configs/
# Install Docker CLI (for Docker Pipeline plugin)
USER root
RUN apt-get update && \
apt-get install -y ca-certificates curl gnupg && \
install -m 0755 -d /etc/apt/keyrings && \
curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg && \
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian $(. /etc/os-release && echo $VERSION_CODENAME) stable" > /etc/apt/sources.list.d/docker.list && \
apt-get update && apt-get install -y docker-ce-cli && \
rm -rf /var/lib/apt/lists/*
RUN usermod -aG docker jenkins
USER jenkins
Backup strategies
Recommended Volume Backup
Back up the jenkins_home volume regularly. Contains all job configs, build history, credentials, and plugin data. Use docker run --rm -v jenkins_home:/data -v $(pwd):/backup alpine tar czf /backup/jenkins-backup.tar.gz -C /data .
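The retention half of a backup job (the `find ... -mtime +30 -delete` idiom) can be expressed explicitly; a sketch mirroring that logic in Python, assuming backup names of the form jenkins-YYYYMMDD.tar.gz (the helper itself is hypothetical, not a Jenkins feature):

```python
from datetime import date, timedelta

# Retention sketch mirroring `find /backups -name "jenkins-*.tar.gz" -mtime +30
# -delete`: given names like 'jenkins-20240101.tar.gz', return those older than
# keep_days relative to `today`. Illustrative helper, not part of Jenkins.

def backups_to_delete(names: list[str], today: date, keep_days: int = 30) -> list[str]:
    stale = []
    for name in names:
        stamp = name.removeprefix("jenkins-").removesuffix(".tar.gz")
        backed_up = date(int(stamp[:4]), int(stamp[4:6]), int(stamp[6:8]))
        if today - backed_up > timedelta(days=keep_days):
            stale.append(name)
    return stale

print(backups_to_delete(["jenkins-20240101.tar.gz", "jenkins-20240310.tar.gz"],
                        today=date(2024, 3, 15)))  # ['jenkins-20240101.tar.gz']
```

Whatever tool does the pruning, always verify the most recent archive restores cleanly before deleting older ones.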
Recommended Configuration as Code
With JCasC + Job DSL + plugin list in Git, you only need to back up build history and credentials. The Jenkins configuration itself is reproducible from code. This is the ideal state.
Plugin ThinBackup
Jenkins plugin that creates differential backups of JENKINS_HOME. Scheduled backups, retention policies, and restore from the UI. Good for teams not yet on JCasC.
Avoid Snapshot only
VM or volume snapshots alone are not sufficient. They capture a point-in-time but do not protect against logical corruption (a bad plugin update corrupting configs). Combine with file-level backups.
Pipeline Best Practices
Keep pipelines fast
- Parallelize — run independent stages (lint, unit tests, integration tests) in parallel using parallel { } blocks.
- Cache dependencies — mount volume caches for npm, Maven, Gradle. Use Docker layer caching for image builds.
- Avoid unnecessary checkouts — use sparse checkout for monorepos. Only clone what you need.
- Set timeouts — always use options { timeout(time: 30, unit: 'MINUTES') } to prevent hung builds from consuming executors indefinitely.
Artifact management
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'npm run build'
                // Stash files to pass between stages/nodes
                stash includes: 'dist/**', name: 'build-artifacts'
            }
        }
        stage('Test') {
            steps {
                unstash 'build-artifacts'
                sh 'npm test'
            }
            post {
                always {
                    // Archive test results for Jenkins UI
                    junit 'test-results/*.xml'
                    // Archive build artifacts for download
                    archiveArtifacts artifacts: 'dist/**', fingerprint: true
                }
            }
        }
    }
}
Pipeline patterns
Pattern Input gates
Use input directives at stage level for manual approval before production deploys. The stage-level input pauses before allocating an agent, so no executor is held. Use beforeAgent true inside when to evaluate conditions before agent allocation.
stage('Deploy Prod') {
    when {
        branch 'main'
        beforeAgent true
    }
    input {
        message 'Deploy to production?'
        submitter 'deployers'
    }
    steps {
        sh './deploy.sh prod'
    }
}
Pattern Notifications
Send build status to Slack, email, or webhooks in post { } blocks. Always notify on failure; optionally on success.
post {
    failure {
        slackSend(
            channel: '#builds',
            color: 'danger',
            message: "FAILED: ${env.JOB_NAME}"
        )
        emailext(
            to: 'team@example.com',
            subject: "Build Failed: ${env.JOB_NAME}",
            body: "Check: ${env.BUILD_URL}"
        )
    }
}
Avoid common mistakes
- Do not use sh 'git clone ...' — use the checkout scm step, which handles credentials, shallow clones, and branch detection correctly.
- Do not store secrets in Jenkinsfiles — always use withCredentials() to access secrets. Jenkins masks credential values in logs automatically.
- Do not use shared mutable state — global variables or shared workspaces between parallel stages cause race conditions. Use stash/unstash to pass data.
- Do not rely on workspace persistence — workspaces may be reused or cleaned between builds. Use archiveArtifacts or external storage for anything that must persist.
- Clean up — always include cleanWs() in a post { always { } } block to prevent workspace accumulation on agents.
Production Checklist
- Zero executors on controller — set controller executors to 0. All builds run on agents.
- HTTPS everywhere — put Jenkins behind a reverse proxy with TLS. Never expose HTTP to the network.
- LDAP / SSO authentication — integrate with corporate identity provider. Disable local accounts except for a break-glass admin.
- Role-based access control — use the Role Strategy plugin. Grant minimum required permissions per team.
- Credentials in the credentials store — never hardcode secrets in Jenkinsfiles, environment variables, or job configurations.
- Agent-to-controller security — always enabled and not disableable since Jenkins 2.326. On older versions, verify it is enabled to prevent agents from accessing controller files.
- CSRF protection enabled — always on since Jenkins 2.222.x (the option to disable was removed). For older versions, verify it is enabled. Do not disable for API convenience.
- JCasC for configuration — manage Jenkins system config in YAML, stored in Git. Reproducible and auditable.
- Job DSL for job definitions — no manual job creation through the UI. Seed jobs generate all pipelines from code.
- Pin plugin versions — use a plugins.txt with explicit versions. Test plugin updates in staging before production.
- Backup jenkins_home — automated daily backups of the Jenkins home directory. Test restore procedures regularly.
- Build timeouts — set timeouts on all pipelines to prevent hung builds from consuming executors.
- Log rotation — configure buildDiscarder(logRotator(...)) on all jobs. Old builds consume disk and slow the UI.
- Monitoring — export Jenkins metrics to Prometheus (Prometheus plugin). Alert on queue length, executor utilization, and failed builds.
- Ephemeral agents — use Kubernetes or cloud agents for auto-scaling. Minimize static agent infrastructure.
- Shared libraries — extract common pipeline logic into versioned shared libraries. Reduce duplication across Jenkinsfiles.
- Script approval review — regularly audit the script approval list. Remove unnecessary approvals. Understand what each approved signature allows.