MinIO
High-performance S3-compatible object storage for production workloads
Overview
MinIO is a high-performance, S3-compatible object storage system. It is designed for large-scale data infrastructure — data lakes, machine learning pipelines, backup targets, and artifact repositories. MinIO ships as a single binary with zero dependencies, runs on bare metal, VMs, containers, and Kubernetes, and is released under the GNU AGPL v3 license.
In May 2025, MinIO removed the full admin console from the community edition (leaving only an object browser). In December 2025, the open-source repository entered maintenance mode with no new features or PRs accepted. In February 2026, the repository was archived (read-only). MinIO now steers users toward AIStor, its commercial product. Existing community builds remain usable under AGPL v3, but will receive no further updates or security patches. Evaluate alternatives such as Garage, SeaweedFS, or Ceph RGW for new deployments requiring active community maintenance.
Core S3 Compatible
MinIO implements the AWS S3 API natively. Any application, SDK, or tool that speaks S3 works with MinIO without modification — aws-cli, Terraform, Spark, Presto, and every major language SDK.
Core Single Binary
No JVM, no runtime dependencies. Download the minio binary, point it at a directory, and you have object storage. This simplicity extends to upgrades — replace the binary and restart.
Performance Hardware-Speed I/O
MinIO is written in Go and optimized for NVMe drives. MinIO's published benchmarks report aggregate throughput of 300+ GiB/s reads and 100+ GiB/s writes on multi-node NVMe clusters. It uses all available CPU cores for erasure coding and inline hashing.
Ecosystem MinIO Console
Built-in web UI for bucket management, user administration, monitoring, and configuration. Accessible on port 9001 by default. No separate installation needed. Note: As of May 2025, the full admin console was removed from the community edition and is only available in AIStor. The community edition retains a basic object browser only.
Common use cases
- Data lake storage — back-end for Spark, Trino, Hive, and other analytics engines via S3A connector.
- Backup & archive — target for Veeam, Commvault, Velero (Kubernetes), and restic with S3 backend.
- CI/CD artifact storage — store build artifacts, Docker layers, Terraform state, and ML model checkpoints.
- Application storage — replace AWS S3 in self-hosted deployments. Upload/download files via presigned URLs.
- Log aggregation — long-term storage tier for Loki, Thanos, and Cortex.
Architecture
MinIO is designed for distributed, fault-tolerant object storage. The architecture revolves around erasure coding, server pools, and a shared-nothing design where every node is equal — no special coordinator or metadata server.
Erasure coding
MinIO splits each object into data and parity shards using Reed-Solomon erasure coding. At the maximum parity setting of N/2 data shards and N/2 parity shards (where N is the number of drives in an erasure set), MinIO can lose up to half the drives and still reconstruct every object; the default standard storage class uses a lower parity (EC:4) on larger erasure sets. This provides redundancy without the overhead of full replication.
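The capacity/durability trade-off can be sketched numerically. This is illustrative arithmetic only, assuming a 16-drive erasure set; MinIO's real set sizing and parity selection are more involved.

```python
# Sketch: storage efficiency and fault tolerance of one erasure set.
# Drive counts and shard math here are illustrative, not MinIO internals.

def erasure_profile(drives: int, parity: int) -> dict:
    """Summarize an erasure set of `drives` drives with `parity` parity shards."""
    data = drives - parity
    return {
        "data_shards": data,
        "parity_shards": parity,
        # Reads survive losing up to `parity` drives in the set.
        "read_tolerance": parity,
        # Fraction of raw capacity that stores actual object data.
        "efficiency": data / drives,
    }

# Maximum parity (N/2) on a 16-drive set: 8 data + 8 parity.
print(erasure_profile(16, 8))   # efficiency 0.5, tolerates 8 lost drives

# A lighter EC:4 profile on the same set: 12 data + 4 parity.
print(erasure_profile(16, 4))   # efficiency 0.75, tolerates 4 lost drives
```

Higher parity buys more tolerable drive failures at the cost of usable capacity; full 3x replication, by comparison, would give only ~0.33 efficiency.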
Distributed mode
In distributed mode, MinIO runs across multiple nodes (servers). Each node contributes drives to erasure sets. The minimum production deployment is 4 nodes with 1 drive each, but typical setups use 4–16 nodes with multiple drives per node.
Concept Server Pools
A server pool is a group of MinIO nodes that form an independent erasure coding unit. You can expand capacity by adding new server pools without disrupting existing pools. Objects are distributed across pools based on a deterministic hash.
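The "deterministic hash" idea can be sketched as follows. This is a simplification: MinIO's actual placement algorithm (a size-weighted hash over pools) differs, and SHA-256 here is just a convenient stand-in hash. The point is that every node can compute an object's pool independently, with no metadata server lookup.

```python
# Sketch of deterministic pool placement: hash the object key, pick a pool.
import hashlib

def pick_pool(object_key: str, num_pools: int) -> int:
    """Map an object key to a pool index deterministically."""
    digest = hashlib.sha256(object_key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_pools

# The same key resolves to the same pool on every node, every time:
assert pick_pool("data/2024/report.pdf", 3) == pick_pool("data/2024/report.pdf", 3)
print(pick_pool("data/2024/report.pdf", 3))
```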
Concept Erasure Sets
Within a server pool, drives are grouped into erasure sets (typically 4–16 drives each). Each object lives entirely within one erasure set. MinIO automatically calculates the optimal erasure set size based on the total number of drives.
Feature Healing
When a failed drive is replaced, MinIO automatically heals the data by reconstructing missing shards from the surviving shards in each erasure set. Healing runs in the background without impacting availability.
Feature Bitrot Protection
Every shard is checksummed with HighwayHash on write and verified on read. Silent data corruption (bitrot) is detected and automatically repaired from healthy shards during the healing process.
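The verify-on-read pattern can be sketched in a few lines. SHA-256 stands in for HighwayHash here (HighwayHash is not in the Python standard library); the shapes of `write_shard`/`read_shard` are illustrative, not MinIO's API.

```python
# Sketch: checksum on write, verify on read, flag bitrot on mismatch.
import hashlib

def write_shard(data: bytes) -> tuple:
    """Store a shard together with its checksum."""
    return data, hashlib.sha256(data).hexdigest()

def read_shard(data: bytes, checksum: str) -> bytes:
    """Verify the checksum on read; a mismatch signals silent corruption."""
    if hashlib.sha256(data).hexdigest() != checksum:
        raise IOError("bitrot detected: shard checksum mismatch")
    return data

shard, digest = write_shard(b"object-shard-bytes")
read_shard(shard, digest)           # clean read passes
corrupted = b"object-shard-bytEs"   # one flipped byte
try:
    read_shard(corrupted, digest)
except IOError as e:
    print(e)                        # detected; healing would then repair it
```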
MinIO vs traditional storage
| Aspect | MinIO | Traditional (NFS/SAN) |
|---|---|---|
| Access protocol | S3 API (HTTP/HTTPS) | NFS, SMB, iSCSI, Fibre Channel |
| Scalability | Horizontal — add nodes/pools | Vertical — buy bigger appliance |
| Data protection | Erasure coding per object | RAID at disk level |
| Metadata | Stored inline with object data | Centralized metadata server |
| Cost | Commodity hardware + open source | Proprietary appliance + licensing |
| Cloud integration | Native S3 — drop-in for AWS S3 | Requires gateways or adapters |
S3 API Compatibility
MinIO implements the most commonly used subset of the AWS S3 API. Any S3 client can connect by changing the endpoint URL from s3.amazonaws.com to your MinIO server address.
Supported S3 operations
| Category | Operations |
|---|---|
| Object | PutObject, GetObject, DeleteObject, CopyObject, HeadObject, ListObjectsV2 |
| Multipart | CreateMultipartUpload, UploadPart, CompleteMultipartUpload, AbortMultipartUpload, ListParts |
| Bucket | CreateBucket, DeleteBucket, ListBuckets, HeadBucket, GetBucketLocation |
| Versioning | GetBucketVersioning, PutBucketVersioning, ListObjectVersions |
| Locking | PutObjectLockConfiguration, GetObjectRetention, PutObjectLegalHold |
| Policy | PutBucketPolicy, GetBucketPolicy, DeleteBucketPolicy |
| Notifications | PutBucketNotification, GetBucketNotification |
| Lifecycle | PutBucketLifecycle, GetBucketLifecycle, DeleteBucketLifecycle |
Presigned URLs
Generate time-limited URLs that grant temporary access to private objects without requiring credentials. Useful for file downloads/uploads in web applications.
# Generate a presigned GET URL (valid for 12 hours)
mc share download --expire 12h myminio/my-bucket/report.pdf
# Generate a presigned PUT URL (valid for 1 hour)
mc share upload --expire 1h myminio/my-bucket/uploads/
Using aws-cli with MinIO
# Configure aws-cli for MinIO
aws configure set aws_access_key_id minioadmin
aws configure set aws_secret_access_key minioadmin
aws configure set default.region us-east-1
# List buckets
aws --endpoint-url http://minio.example.com:9000 s3 ls
# Upload a file
aws --endpoint-url http://minio.example.com:9000 \
s3 cp backup.tar.gz s3://backups/daily/
# Sync a directory
aws --endpoint-url http://minio.example.com:9000 \
s3 sync ./data/ s3://data-lake/raw/
Python (boto3) example
import boto3
s3 = boto3.client(
"s3",
endpoint_url="http://minio.example.com:9000",
aws_access_key_id="minioadmin",
aws_secret_access_key="minioadmin",
)
# Upload a file
s3.upload_file("local-file.csv", "my-bucket", "data/file.csv")
# Generate a presigned URL (expires in 1 hour)
url = s3.generate_presigned_url(
"get_object",
Params={"Bucket": "my-bucket", "Key": "data/file.csv"},
ExpiresIn=3600,
)
print(url)
mc CLI Tool
The MinIO Client (mc) is a command-line tool that provides Unix-like commands for object storage. It works with MinIO, AWS S3, Google Cloud Storage, and any S3-compatible service. Think of it as ls, cp, rm, and find for buckets.
Setting up aliases
# Install mc
curl -O https://dl.min.io/client/mc/release/linux-amd64/mc
chmod +x mc && sudo mv mc /usr/local/bin/
# Add a MinIO alias
mc alias set myminio http://minio.example.com:9000 minioadmin minioadmin
# Add an AWS S3 alias
mc alias set aws https://s3.amazonaws.com AKIAIOSFODNN7EXAMPLE wJalrXUtnFEMI
# List all configured aliases
mc alias list
File operations
# List buckets
mc ls myminio
# List objects in a bucket
mc ls myminio/my-bucket/data/
# Copy a file to MinIO
mc cp ./report.pdf myminio/reports/2024/report.pdf
# Copy between buckets
mc cp myminio/source-bucket/file.txt myminio/dest-bucket/file.txt
# Recursive copy (like cp -r)
mc cp --recursive ./local-dir/ myminio/my-bucket/backups/
# Mirror a local directory to MinIO (like rsync)
mc mirror ./data/ myminio/data-lake/
# Mirror with delete (remove objects not in source)
mc mirror --remove ./data/ myminio/data-lake/
# Find objects matching a pattern
mc find myminio/my-bucket --name "*.log" --older-than 30d
# Show diff between local and remote
mc diff ./data/ myminio/data-lake/
# Remove objects
mc rm myminio/my-bucket/old-file.txt
# Remove recursively with force
mc rm --recursive --force myminio/my-bucket/temp/
Admin commands
# Server info
mc admin info myminio
# View real-time server logs
mc admin trace myminio
# Service restart
mc admin service restart myminio
# Manage users
mc admin user add myminio newuser newpassword
mc admin user list myminio
mc admin user disable myminio newuser
# Attach a policy to a user
mc admin policy attach myminio readwrite --user newuser
# List available policies
mc admin policy list myminio
# Create a custom policy
mc admin policy create myminio my-policy policy.json
# Manage groups
mc admin group add myminio developers alice bob
mc admin group list myminio
Bucket Management
Buckets are the top-level containers for objects in MinIO. Bucket configuration controls versioning, retention, lifecycle management, and event notifications.
Creating and configuring buckets
# Create a bucket
mc mb myminio/my-bucket
# Create with object locking enabled (must be set at creation time)
mc mb --with-lock myminio/compliance-bucket
# Enable versioning
mc version enable myminio/my-bucket
# Check versioning status
mc version info myminio/my-bucket
Object locking (WORM)
Object locking provides Write-Once-Read-Many (WORM) protection. Once locked, objects cannot be deleted or overwritten until the retention period expires. This is essential for regulatory compliance (SEC 17a-4, HIPAA, GDPR).
# Set default retention on a bucket (governance mode, 365 days)
mc retention set --default governance 365d myminio/compliance-bucket
# Set compliance mode (cannot be overridden, even by root)
mc retention set --default compliance 2555d myminio/legal-hold-bucket
# Apply legal hold to a specific object
mc legalhold set myminio/compliance-bucket/contract.pdf
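The retention durations above translate into a retain-until timestamp on each object; until that moment, deletes and overwrites are refused. A minimal sketch of the date math (illustrative only, not MinIO's implementation):

```python
# Sketch: compute the retain-until date for an object-lock retention period.
from datetime import datetime, timedelta, timezone

def retain_until(days, now=None):
    """Objects stay immutable until this timestamp."""
    now = now or datetime.now(timezone.utc)
    return now + timedelta(days=days)

# Governance-mode 365d retention applied on 2024-01-01:
start = datetime(2024, 1, 1, tzinfo=timezone.utc)
print(retain_until(365, start))   # 2024-12-31 00:00:00+00:00 (2024 is a leap year)
```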
Lifecycle rules
Lifecycle rules automate object expiration and transition. Use them to delete old objects, move infrequently accessed data to cheaper tiers, or clean up incomplete multipart uploads.
{
"Rules": [
{
"ID": "expire-old-logs",
"Status": "Enabled",
"Filter": { "Prefix": "logs/" },
"Expiration": { "Days": 90 }
},
{
"ID": "transition-to-cold",
"Status": "Enabled",
"Filter": { "Prefix": "archive/" },
"Transition": {
"Days": 30,
"StorageClass": "COLD_TIER"
}
},
{
"ID": "cleanup-multipart",
"Status": "Enabled",
"Filter": { "Prefix": "" },
"AbortIncompleteMultipartUpload": {
"DaysAfterInitiation": 7
}
}
]
}
# Apply lifecycle rules (mc ilm import is deprecated)
mc ilm rule import myminio/my-bucket < lifecycle.json
# List lifecycle rules (mc ilm ls is deprecated)
mc ilm rule ls myminio/my-bucket
# Remove all lifecycle rules (mc ilm rm is deprecated)
mc ilm rule rm --all myminio/my-bucket
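Conceptually, rules like the ones above are evaluated per object: match by prefix, then act on age. The sketch below is a simplification of the real scanner (which also handles versioning, tags, and tiers), using the same rule JSON shape:

```python
# Sketch: decide what a lifecycle policy does with one object.
def lifecycle_action(key: str, age_days: int, rules: list) -> str:
    for rule in rules:
        if rule.get("Status") != "Enabled":
            continue
        if not key.startswith(rule.get("Filter", {}).get("Prefix", "")):
            continue
        exp = rule.get("Expiration", {}).get("Days")
        if exp is not None and age_days >= exp:
            return "expire"
        tr = rule.get("Transition", {})
        if tr and age_days >= tr.get("Days", 0):
            return "transition:" + tr["StorageClass"]
    return "keep"

rules = [
    {"ID": "expire-old-logs", "Status": "Enabled",
     "Filter": {"Prefix": "logs/"}, "Expiration": {"Days": 90}},
    {"ID": "transition-to-cold", "Status": "Enabled",
     "Filter": {"Prefix": "archive/"},
     "Transition": {"Days": 30, "StorageClass": "COLD_TIER"}},
]

print(lifecycle_action("logs/app.log", 120, rules))        # expire
print(lifecycle_action("archive/old.parquet", 45, rules))  # transition:COLD_TIER
print(lifecycle_action("data/live.csv", 400, rules))       # keep
```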
Bucket notifications
MinIO can send event notifications when objects are created, accessed, or deleted. Supported targets include webhooks, Kafka, AMQP (RabbitMQ), MQTT, NATS, Redis, PostgreSQL, MySQL, and Elasticsearch.
# Configure a webhook notification target
mc admin config set myminio notify_webhook:my_hook \
endpoint="https://app.example.com/minio-events" \
auth_token="Bearer my-secret-token"
# Restart to apply
mc admin service restart myminio
# Add event notification for object creation
mc event add myminio/my-bucket arn:minio:sqs::my_hook:webhook \
--event put --prefix uploads/ --suffix .pdf
# List configured events
mc event list myminio/my-bucket
# Configure Kafka notification target
mc admin config set myminio notify_kafka:my_kafka \
brokers="kafka1:9092,kafka2:9092" \
topic="minio-events"
Bucket notifications are essential for building event-driven architectures. For example, trigger image processing when a file is uploaded, send audit events to Kafka, or update a search index when documents change.
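On the receiving end, a webhook consumer parses S3-style event records out of the POST body. A minimal parsing sketch, assuming the sample payload below (illustrative, not captured from a live server; note that object keys arrive URL-encoded):

```python
# Sketch: extract (event, bucket, key) tuples from an S3-style notification.
import json
from urllib.parse import unquote_plus

def parse_event(payload: str) -> list:
    event = json.loads(payload)
    out = []
    for rec in event.get("Records", []):
        s3 = rec["s3"]
        out.append((
            rec.get("eventName", ""),
            s3["bucket"]["name"],
            unquote_plus(s3["object"]["key"]),  # keys are URL-encoded
        ))
    return out

sample = json.dumps({
    "Records": [{
        "eventName": "s3:ObjectCreated:Put",
        "s3": {"bucket": {"name": "my-bucket"},
               "object": {"key": "uploads%2Freport.pdf", "size": 1024}},
    }]
})
print(parse_event(sample))  # [('s3:ObjectCreated:Put', 'my-bucket', 'uploads/report.pdf')]
```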
Security
MinIO security covers authentication (who are you), authorization (what can you do), encryption (protect data in transit and at rest), and auditing (what happened). All production deployments should enable TLS and use IAM policies to enforce least-privilege access.
Access keys and secret keys
MinIO uses S3-style access key / secret key pairs for authentication. The root credentials are set at server startup. Additional credentials are created via IAM users or service accounts.
# Set root credentials (environment variables)
export MINIO_ROOT_USER=minio-admin
export MINIO_ROOT_PASSWORD=SuperSecretPassword123
# Start MinIO server
minio server /data --console-address ":9001"
Never use the default minioadmin/minioadmin credentials in production. Set strong root credentials via MINIO_ROOT_USER and MINIO_ROOT_PASSWORD environment variables. The root account should only be used for initial setup — create dedicated IAM users for all other access.
IAM policies
MinIO IAM policies use the same JSON syntax as AWS IAM. They define which actions are allowed or denied on which resources.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:PutObject",
"s3:DeleteObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::app-data",
"arn:aws:s3:::app-data/*"
]
},
{
"Effect": "Deny",
"Action": [
"s3:DeleteBucket",
"s3:PutBucketPolicy"
],
"Resource": "arn:aws:s3:::*"
}
]
}
# Create and attach the policy
mc admin policy create myminio app-rw app-policy.json
mc admin policy attach myminio app-rw --user app-service
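The evaluation semantics behind a policy like the one above follow AWS IAM: an explicit Deny always wins over Allow, and no matching statement means implicit deny. A simplified evaluator sketch (ignores conditions, principals, and most wildcard subtleties):

```python
# Sketch: deny-overrides policy evaluation for one action/resource pair.
import fnmatch

def evaluate(policy: dict, action: str, resource: str) -> str:
    decision = "ImplicitDeny"
    for stmt in policy["Statement"]:
        actions = stmt["Action"] if isinstance(stmt["Action"], list) else [stmt["Action"]]
        resources = stmt["Resource"] if isinstance(stmt["Resource"], list) else [stmt["Resource"]]
        if not (any(fnmatch.fnmatch(action, a) for a in actions)
                and any(fnmatch.fnmatch(resource, r) for r in resources)):
            continue
        if stmt["Effect"] == "Deny":
            return "Deny"          # explicit deny always wins
        decision = "Allow"
    return decision

policy = {
    "Statement": [
        {"Effect": "Allow",
         "Action": ["s3:GetObject", "s3:PutObject"],
         "Resource": ["arn:aws:s3:::app-data/*"]},
        {"Effect": "Deny",
         "Action": ["s3:DeleteBucket"],
         "Resource": ["arn:aws:s3:::*"]},
    ]
}

print(evaluate(policy, "s3:GetObject", "arn:aws:s3:::app-data/file.csv"))  # Allow
print(evaluate(policy, "s3:DeleteBucket", "arn:aws:s3:::app-data"))        # Deny
print(evaluate(policy, "s3:ListBucket", "arn:aws:s3:::app-data"))          # ImplicitDeny
```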
Bucket policies
Bucket policies control anonymous and public access at the bucket level. They follow the same JSON format as AWS S3 bucket policies.
# Make a bucket publicly readable
mc anonymous set download myminio/public-assets
# Set a custom bucket policy from JSON
mc anonymous set-json bucket-policy.json myminio/my-bucket
# Remove public access
mc anonymous set none myminio/public-assets
TLS / SSL
# Place certificates in the default MinIO cert directory
mkdir -p ~/.minio/certs
cp public.crt ~/.minio/certs/public.crt
cp private.key ~/.minio/certs/private.key
# MinIO auto-detects certs and enables HTTPS
minio server /data --console-address ":9001"
# For distributed mode, each node needs certs
# CA cert goes in ~/.minio/certs/CAs/
Encryption at rest
MinIO supports server-side encryption using three methods:
SSE-S3 Server-Managed Keys
MinIO manages the encryption keys internally. Each object is encrypted with a unique key derived from a master key. Simplest to set up — configure a master key via environment variable.
SSE-KMS External KMS
Encryption keys are managed by an external Key Management Service (HashiCorp Vault, AWS KMS, GCP KMS, Thales CipherTrust). Recommended for production — provides key rotation, audit trails, and FIPS 140-2 compliance. Note: The community KES (Key Encryption Service) was deprecated and archived in mid-2025. Enterprise users should migrate to AIStor KMS.
# Enable SSE-S3 with a master key (for testing/dev)
export MINIO_KMS_SECRET_KEY=my-key:Y2hhbmdlIG1lIGJ1dCB0aGlzIGtleSBtdXN0IGJlIDMyYg==
# Enable auto-encryption for all objects in a bucket
mc encrypt set sse-s3 myminio/sensitive-data
Identity Management
MinIO provides a built-in IAM system and integrates with external identity providers for enterprise authentication. All access is governed by policies — no policy means no access (deny by default).
Built-in IAM
# Create a user
mc admin user add myminio alice StrongPassword123
# Create a group and add users
mc admin group add myminio data-engineers alice bob
# Attach policy to group (all members inherit it)
mc admin policy attach myminio readwrite --group data-engineers
# Create a service account (for applications)
mc admin user svcacct add myminio alice \
--access-key app-svc-key \
--secret-key app-svc-secret \
--policy app-policy.json
# List service accounts for a user
mc admin user svcacct list myminio alice
LDAP / Active Directory
MinIO integrates with LDAP for centralized user authentication. Users authenticate with their LDAP credentials, and MinIO maps LDAP groups to MinIO policies.
# Configure LDAP via environment variables
export MINIO_IDENTITY_LDAP_SERVER_ADDR=ldap.example.com:636
export MINIO_IDENTITY_LDAP_LOOKUP_BIND_DN="cn=admin,dc=example,dc=com"
export MINIO_IDENTITY_LDAP_LOOKUP_BIND_PASSWORD="ldap-password"
export MINIO_IDENTITY_LDAP_USER_DN_SEARCH_BASE_DN="ou=users,dc=example,dc=com"
export MINIO_IDENTITY_LDAP_USER_DN_SEARCH_FILTER="(uid=%s)"
export MINIO_IDENTITY_LDAP_GROUP_SEARCH_BASE_DN="ou=groups,dc=example,dc=com"
export MINIO_IDENTITY_LDAP_GROUP_SEARCH_FILTER="(&(objectclass=groupOfNames)(member=%d))"
export MINIO_IDENTITY_LDAP_TLS_SKIP_VERIFY=off
# Map an LDAP group to a MinIO policy
mc admin policy attach myminio readwrite \
--group "cn=data-engineers,ou=groups,dc=example,dc=com"
OpenID Connect (OIDC)
MinIO supports OpenID Connect for SSO integration with providers like Keycloak, Okta, Auth0, Microsoft Entra ID (formerly Azure AD), and Google Workspace.
# Configure OIDC
export MINIO_IDENTITY_OPENID_CONFIG_URL="https://keycloak.example.com/realms/minio/.well-known/openid-configuration"
export MINIO_IDENTITY_OPENID_CLIENT_ID="minio"
export MINIO_IDENTITY_OPENID_CLIENT_SECRET="client-secret"
export MINIO_IDENTITY_OPENID_SCOPES="openid,profile,email"
export MINIO_IDENTITY_OPENID_CLAIM_NAME="policy"
# Users log in via the MinIO Console, which redirects to the OIDC provider.
# The OIDC token's "policy" claim maps to a MinIO policy name.
Policy-based access control
Built-in Canned Policies
- readonly — read-only access to all buckets
- readwrite — full read/write access to all buckets
- writeonly — write-only (upload only, no read)
- diagnostics — server health and metrics access
- consoleAdmin — full Console admin access
Best Practice Least Privilege
- Create custom policies scoped to specific buckets and prefixes
- Use service accounts for applications (not user accounts)
- Attach policies to groups, not individual users
- Use Deny statements to explicitly block dangerous actions
- Regularly audit policy assignments with mc admin policy entities
Replication & Tiering
MinIO provides multiple replication strategies for disaster recovery and data locality, plus tiering to offload cold data to cheaper cloud storage.
Site replication (multi-site)
Site replication synchronizes everything across multiple MinIO deployments — buckets, objects, IAM policies, users, and groups. All sites are active (read/write), providing true multi-site active-active replication.
# Add sites to a replication group
mc admin replicate add myminio1 myminio2 myminio3
# Check replication status
mc admin replicate status myminio1
# View replication info
mc admin replicate info myminio1
Bucket replication
Bucket replication copies objects from a source bucket to a target bucket, potentially on a different MinIO cluster or AWS S3. Supports one-way and two-way (bidirectional) replication. Since RELEASE.2022-12-24, mc replicate add handles remote target creation directly — the older mc admin bucket remote add command is deprecated.
# Enable replication (one-way) — remote target is specified inline
mc replicate add myminio/source-bucket \
--remote-bucket https://accesskey:secretkey@remote-minio.example.com/target-bucket \
--replicate "delete,delete-marker,existing-objects"
# Enable two-way replication (run on both sides)
mc replicate add myminio/bucket-a \
--remote-bucket https://accesskey:secretkey@remote-minio.example.com/bucket-a \
--replicate "delete,delete-marker,existing-objects"
# Check replication status
mc replicate status myminio/source-bucket
Tiering to cloud storage
MinIO can tier (transition) cold data to cheaper storage: AWS S3, Google Cloud Storage, or Azure Blob Storage. Objects are transparently accessible — MinIO proxies GET requests to the tier.
# Add an S3 tier
mc ilm tier add s3 myminio WARM_S3 \
--endpoint https://s3.amazonaws.com \
--access-key AKIAIOSFODNN7EXAMPLE \
--secret-key wJalrXUtnFEMI \
--bucket my-archive-bucket \
--prefix minio-tier/ \
--region us-east-1
# Add a GCS tier
mc ilm tier add gcs myminio COLD_GCS \
--credentials-file /path/to/gcs-credentials.json \
--bucket my-gcs-archive \
--prefix cold-tier/
# Add an Azure tier
mc ilm tier add azure myminio ARCHIVE_AZURE \
--account-name mystorageaccount \
--account-key base64key \
--bucket my-container \
--prefix archive/
# Create ILM rule to transition after 90 days
mc ilm rule add myminio/my-bucket \
--transition-days 90 \
--transition-tier WARM_S3 \
--prefix "logs/"
# Verify tiers
mc ilm tier list myminio
Tiering only moves the object data to the remote tier. The metadata and namespace remain in MinIO. Users and applications continue to access tiered objects normally via the S3 API — MinIO transparently retrieves them from the remote tier on demand.
Monitoring
MinIO exposes Prometheus-compatible metrics, supports audit logging, and provides real-time tracing via the mc CLI. Production deployments should integrate MinIO metrics into their existing monitoring stack.
Prometheus metrics
MinIO exposes v2 metrics at /minio/v2/metrics/cluster and /minio/v2/metrics/node. Newer releases also offer v3 metrics under /minio/metrics/v3, which provides more granular, per-bucket scraping and a unified endpoint structure. Both versions require authentication via a bearer token generated from the MinIO configuration. New deployments should prefer v3.
# Generate a Prometheus scrape config
mc admin prometheus generate myminio
# prometheus.yml - add to scrape_configs
scrape_configs:
- job_name: 'minio-cluster'
metrics_path: /minio/v2/metrics/cluster
scheme: https
bearer_token: eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9...
static_configs:
- targets:
- minio1.example.com:9000
- minio2.example.com:9000
- minio3.example.com:9000
- minio4.example.com:9000
- job_name: 'minio-node'
metrics_path: /minio/v2/metrics/node
scheme: https
bearer_token: eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9...
static_configs:
- targets:
- minio1.example.com:9000
- minio2.example.com:9000
- minio3.example.com:9000
- minio4.example.com:9000
Key metrics to monitor
Capacity Storage
- minio_cluster_capacity_usable_total_bytes
- minio_cluster_capacity_usable_free_bytes
- minio_bucket_usage_total_bytes
- minio_bucket_usage_object_total
Performance Requests
- minio_s3_requests_total
- minio_s3_requests_errors_total
- minio_s3_time_ttfb_seconds (time to first byte)
- minio_node_drive_latency_us
Health Drives & Nodes
- minio_cluster_nodes_online_total
- minio_cluster_nodes_offline_total
- minio_cluster_drive_online_total
- minio_cluster_drive_offline_total
Healing Repair
- minio_heal_objects_total
- minio_heal_objects_errors_total
- minio_heal_time_last_activity_nano_seconds
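If you need these gauges outside of Prometheus (an ad-hoc script, a smoke test), they can be pulled straight from the text exposition format the metrics endpoint returns. A stdlib-only parsing sketch; the sample lines are illustrative, not captured output:

```python
# Sketch: parse Prometheus text format ("name{labels} value") into a dict.
def parse_metrics(text: str) -> dict:
    """Map metric name -> value, ignoring labels and comment lines."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name_part, value = line.rsplit(" ", 1)
        name = name_part.split("{", 1)[0]
        metrics[name] = float(value)
    return metrics

sample = """\
# HELP minio_cluster_nodes_online_total Total number of online nodes.
minio_cluster_nodes_online_total{server="minio1"} 4
minio_cluster_drive_offline_total{server="minio1"} 0
minio_cluster_capacity_usable_free_bytes{server="minio1"} 9.6e+12
"""
m = parse_metrics(sample)
print(m["minio_cluster_nodes_online_total"])   # 4.0
print(m["minio_cluster_drive_offline_total"])  # 0.0 -- alert if this grows
```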
mc admin trace
Real-time tracing of all S3 API calls, useful for debugging and auditing.
# Trace all S3 requests
mc admin trace myminio
# Trace only errors
mc admin trace --errors myminio
# Trace specific call categories (s3, internal, storage, healing, scanner, ilm, os, etc.)
mc admin trace --call s3 myminio
# Filter by HTTP method or function name using --filter-request/--filter-response
mc admin trace --filter-request "PUT" myminio
# Trace with verbose output (headers, response)
mc admin trace -v myminio
Audit logging
# Enable audit logging to a webhook
mc admin config set myminio audit_webhook:primary \
endpoint="https://audit.example.com/minio" \
auth_token="Bearer audit-token"
# Enable audit logging to Kafka
mc admin config set myminio audit_kafka:primary \
brokers="kafka1:9092,kafka2:9092" \
topic="minio-audit"
# Restart to apply
mc admin service restart myminio
Health checks
# Liveness check (is the server responding?)
curl -f http://minio.example.com:9000/minio/health/live
# Readiness check (same as live unless etcd is configured)
curl -f http://minio.example.com:9000/minio/health/ready
# Cluster health (are enough drives online for read/write?)
curl -f http://minio.example.com:9000/minio/health/cluster
# Cluster health with distributed verification (fan-out to all peers)
curl -f "http://minio.example.com:9000/minio/health/cluster?verify"
# Detailed cluster status via mc
mc admin info myminio
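The same endpoints can back a scripted probe. A minimal sketch using only the standard library; the endpoint paths come from above, and the 200-healthy/503-unhealthy interpretation is the usual convention for these checks:

```python
# Sketch: poll a MinIO health endpoint and classify the result.
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

def interpret(status: int) -> str:
    """200 means healthy; anything else (e.g. 503, no quorum) is not."""
    return "healthy" if status == 200 else "unhealthy"

def probe(base_url: str, check: str = "live") -> str:
    """check is one of: live, ready, cluster."""
    try:
        with urlopen(f"{base_url}/minio/health/{check}", timeout=5) as resp:
            return interpret(resp.status)
    except HTTPError as e:       # server answered, but unhealthy (e.g. 503)
        return interpret(e.code)
    except URLError:             # connection refused, DNS failure, timeout
        return "unreachable"

# probe("http://minio.example.com:9000", "cluster")  # needs a live server
print(interpret(200), interpret(503))  # healthy unhealthy
```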
Docker Deployment
MinIO runs as a single container for development or as a multi-node distributed deployment for production. The official image is minio/minio on Docker Hub (also available at quay.io/minio/minio).
Single-node (development / testing)
# docker-compose.yml - single node MinIO
services:
minio:
image: minio/minio:latest
container_name: minio
command: server /data --console-address ":9001"
ports:
- "9000:9000"
- "9001:9001"
environment:
MINIO_ROOT_USER: minio-admin
MINIO_ROOT_PASSWORD: SuperSecretPassword123
volumes:
- minio-data:/data
healthcheck:
test: ["CMD", "mc", "ready", "local"]
interval: 10s
timeout: 5s
retries: 5
restart: unless-stopped
volumes:
minio-data:
Distributed mode (4-node production)
# docker-compose.yml - distributed MinIO (4 nodes, 4 drives each)
x-minio-common: &minio-common
image: minio/minio:RELEASE.2024-06-13T22-53-53Z
command: server --console-address ":9001" http://minio{1...4}/data{1...4}
environment:
MINIO_ROOT_USER: minio-admin
MINIO_ROOT_PASSWORD: SuperSecretPassword123
MINIO_PROMETHEUS_AUTH_TYPE: public
healthcheck:
test: ["CMD", "mc", "ready", "local"]
interval: 10s
timeout: 5s
retries: 5
restart: unless-stopped
services:
minio1:
<<: *minio-common
hostname: minio1
volumes:
- minio1-data1:/data1
- minio1-data2:/data2
- minio1-data3:/data3
- minio1-data4:/data4
minio2:
<<: *minio-common
hostname: minio2
volumes:
- minio2-data1:/data1
- minio2-data2:/data2
- minio2-data3:/data3
- minio2-data4:/data4
minio3:
<<: *minio-common
hostname: minio3
volumes:
- minio3-data1:/data1
- minio3-data2:/data2
- minio3-data3:/data3
- minio3-data4:/data4
minio4:
<<: *minio-common
hostname: minio4
volumes:
- minio4-data1:/data1
- minio4-data2:/data2
- minio4-data3:/data3
- minio4-data4:/data4
nginx:
image: nginx:1.28-alpine
ports:
- "9000:9000"
- "9001:9001"
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf:ro
depends_on:
- minio1
- minio2
- minio3
- minio4
restart: unless-stopped
volumes:
minio1-data1:
minio1-data2:
minio1-data3:
minio1-data4:
minio2-data1:
minio2-data2:
minio2-data3:
minio2-data4:
minio3-data1:
minio3-data2:
minio3-data3:
minio3-data4:
minio4-data1:
minio4-data2:
minio4-data3:
minio4-data4:
Nginx load balancer config
# nginx.conf for distributed MinIO
events { worker_connections 1024; }
http {
upstream minio_s3 {
least_conn;
server minio1:9000;
server minio2:9000;
server minio3:9000;
server minio4:9000;
}
upstream minio_console {
least_conn;
server minio1:9001;
server minio2:9001;
server minio3:9001;
server minio4:9001;
}
server {
listen 9000;
server_name _;
client_max_body_size 0;
proxy_buffering off;
proxy_request_buffering off;
location / {
proxy_pass http://minio_s3;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_connect_timeout 300;
proxy_http_version 1.1;
chunked_transfer_encoding off;
}
}
server {
listen 9001;
server_name _;
location / {
proxy_pass http://minio_console;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
}
For production Docker deployments, always pin the MinIO image to a specific release tag (e.g., RELEASE.2024-06-13T22-53-53Z), never use :latest. Use host-path bind mounts or local SSDs instead of Docker volumes for better I/O performance. Place an Nginx or HAProxy load balancer in front of the MinIO nodes for both S3 API and Console traffic.
Backup Strategies
While MinIO's erasure coding protects against drive failures, it does not protect against accidental deletion, ransomware, or site-level disasters. A proper backup strategy is essential for production deployments.
mc mirror for backup
The simplest backup approach: mirror your MinIO data to a second MinIO instance or cloud S3 bucket.
# One-time backup: mirror entire bucket to a backup MinIO
mc mirror myminio/production-data backup-minio/production-data
# Continuous watch mode (mirrors changes in real time)
mc mirror --watch myminio/production-data backup-minio/production-data
# Mirror to AWS S3
mc mirror myminio/production-data aws/backup-bucket/minio-backup/
# Scheduled backup via cron (daily at 2 AM)
# crontab -e
0 2 * * * /usr/local/bin/mc mirror --overwrite \
myminio/production-data backup-minio/production-data \
>> /var/log/minio-backup.log 2>&1
Versioning for point-in-time recovery
With versioning enabled, every overwrite or delete creates a new version instead of replacing the object. You can recover any previous version of any object.
# Enable versioning on a bucket
mc version enable myminio/critical-data
# List all versions of an object
mc ls --versions myminio/critical-data/config.json
# Restore a specific version
mc cp --version-id "version-uuid" \
myminio/critical-data/config.json \
myminio/critical-data/config.json
# Remove delete markers to "undelete" objects
mc rm --versions --force myminio/critical-data/deleted-file.txt
Snapshot consistency
MinIO does not support atomic snapshots across the entire cluster. If you need point-in-time consistency for backup, use bucket replication to a dedicated backup cluster with versioning enabled. This ensures that even if objects are modified during the backup window, all versions are preserved. For filesystem-level snapshots, use LVM or ZFS snapshots on the underlying drives.
Kubernetes backup with Velero
Velero backs up Kubernetes resources and persistent volumes. MinIO serves as the S3-compatible backend for storing Velero backups.
# Install Velero with MinIO as the backup target
velero install \
--provider aws \
--plugins velero/velero-plugin-for-aws:v1.14.0 \
--bucket velero-backups \
--secret-file ./minio-credentials \
--backup-location-config \
region=us-east-1,s3ForcePathStyle=true,s3Url=https://minio.example.com:9000
# Create a backup
velero backup create daily-backup --include-namespaces production
# Restore from backup
velero restore create --from-backup daily-backup
Strategy 3-2-1 Rule
Keep 3 copies of data, on 2 different storage types, with 1 copy offsite. Example: primary MinIO cluster + replicated MinIO at a second site + tiered cold data to AWS S3 Glacier.
Testing Restore Drills
Backups are worthless if you cannot restore from them. Schedule quarterly restore drills. Verify that backed-up objects are complete, uncorrupted, and that your team knows the restore procedure.
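A restore drill usually boils down to comparing object checksums between the primary and the restored copy. A sketch of that verification step; the listings here are stand-in dicts, where in practice you would build them from `mc ls --json` or an S3 ListObjects call:

```python
# Sketch: flag objects that are missing or corrupted in a restored copy.
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_restore(primary: dict, restored: dict) -> list:
    """Return the keys whose restored checksum is absent or wrong."""
    return [key for key, digest in primary.items()
            if restored.get(key) != digest]

primary  = {"reports/q1.pdf": checksum(b"q1"), "reports/q2.pdf": checksum(b"q2")}
restored = {"reports/q1.pdf": checksum(b"q1"), "reports/q2.pdf": checksum(b"XX")}
print(verify_restore(primary, restored))   # ['reports/q2.pdf'] -> drill failed
```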
Production Checklist
- Change default credentials — set MINIO_ROOT_USER and MINIO_ROOT_PASSWORD to strong, unique values. Never deploy with minioadmin/minioadmin.
- Enable TLS — all traffic (S3 API and Console) must be encrypted. Place certs in ~/.minio/certs/ or terminate TLS at a reverse proxy.
- Use distributed mode — minimum 4 nodes for production. Single-node MinIO has no fault tolerance.
- Pin the release version — use a specific MinIO release tag, not :latest. Test upgrades in staging first.
- Enable versioning — on all buckets that store important data. Protects against accidental deletion and overwrites.
- Configure lifecycle rules — set expiration on temporary data (logs, uploads). Clean up incomplete multipart uploads.
- Set up IAM users and policies — do not use root credentials for application access. Follow least-privilege principle.
- Enable encryption at rest — use SSE-KMS with an external KMS (HashiCorp Vault) for production. SSE-S3 for simpler deployments.
- Configure monitoring — scrape Prometheus metrics, set up Grafana dashboards, and alert on offline drives/nodes.
- Enable audit logging — send audit logs to a webhook or Kafka for compliance and forensics.
- Set up replication — use site replication or bucket replication for disaster recovery. At minimum, mirror critical buckets to a second site.
- Configure backup — use mc mirror or bucket replication to maintain offsite copies. Follow the 3-2-1 backup rule.
- Health check endpoints — integrate /minio/health/live and /minio/health/cluster with your load balancer and orchestrator.
- Use dedicated drives — MinIO performs best on XFS-formatted drives dedicated exclusively to MinIO. Avoid shared filesystems.
- Network configuration — use a dedicated network for inter-node traffic. Ensure low latency (<1ms) between nodes in the same server pool.
- Test disaster recovery — regularly test failover scenarios: kill a node, pull a drive, restore from backup. Know your RTO and RPO.
- Evaluate licensing and support — the community edition repository was archived in February 2026. For ongoing security patches and features, consider AIStor (commercial) or evaluate S3-compatible alternatives.