SUSE Manager Production Guide
Enterprise Linux systems management — patching, lifecycle management, compliance & automation at scale (now SUSE Multi-Linux Manager as of 5.1)
Overview
SUSE Manager (rebranded as SUSE Multi-Linux Manager starting with version 5.1) is an enterprise systems management solution for large-scale Linux infrastructure. It provides patch management, configuration management, compliance auditing, and system provisioning from a single web-based console. Built on the Uyuni open-source upstream (which itself descends from Spacewalk), SUSE Manager adds enterprise support, certified content, and tight integration with the SUSE Linux Enterprise ecosystem. Starting with version 5.0, both the server and proxy are delivered as containers managed by mgradm and mgrpxy tools respectively.
Organizations use SUSE Manager to manage hundreds or thousands of Linux systems across datacenters, cloud environments, and edge locations. It supports SUSE Linux Enterprise Server (SLES), SUSE Linux Micro, RHEL, CentOS, Ubuntu, Debian, Rocky Linux, AlmaLinux, Oracle Linux, Amazon Linux, and openSUSE — making it a viable single-pane-of-glass for heterogeneous Linux estates.
Core capabilities
- Patch management — Centralized errata and package updates across all managed systems with scheduled maintenance windows
- Configuration management — Deploy configuration files, Salt states, and formulas to groups of systems
- Compliance auditing — OpenSCAP integration for security benchmarks (CIS, DISA STIG) with automated scanning and reporting
- System provisioning — Bare-metal and VM provisioning with AutoYaST, Kickstart, and PXE boot support
- Content lifecycle management — Promote patches through dev/test/prod environments before production deployment
- CVE auditing — Track which systems are affected by specific CVEs and their patch status
Strengths
- Single console for multi-distro Linux estate management
- Mature patch management with approval workflows
- Content lifecycle environments (dev/test/prod promotion)
- Salt-based automation — modern, scalable, agentless-capable
- OpenSCAP compliance scanning built in
- Proxy servers for remote sites and DMZ deployments
- Free upstream available (Uyuni)
Considerations
- Server requires significant resources (CPU, RAM, disk for repository mirrors)
- Initial channel synchronization can take hours/days depending on content
- PostgreSQL database can grow very large in big environments
- Learning curve for content lifecycle and channel management
- Web UI can be slow with very large system inventories
- Requires SUSE subscription for the SUSE Manager server host OS (SL Micro or SLES)
Architecture
SUSE Manager follows a hub-and-spoke architecture. The central SUSE Manager Server handles all management operations, content synchronization, and the web interface. Proxy nodes extend reach to remote sites. Managed systems run the Salt minion agent (or legacy traditional client) and connect to the server or a proxy.
Core components
- SUSE Manager Server — The central management host. Runs the web UI (Java/Tomcat), API layer, task scheduler (Taskomatic), and Salt master. As of version 5.0, delivered as containers running on SL Micro 5.5 or SLES 15 SP6+ as the host OS, managed via the mgradm tool.
- PostgreSQL database — Stores system inventory, patch data, channel metadata, audit logs, and scheduling information. This is the single most critical data store.
- Salt master — Embedded in the server. All Salt minion communication flows through this. Uses ZeroMQ for transport by default.
- Taskomatic — Java-based task scheduler that handles channel sync, errata cache updates, report generation, and recurring jobs.
- Proxy servers — Lightweight caching proxies for remote sites. They cache packages and act as a Salt broker. In version 5.0+, proxies are containerized and managed via the mgrpxy tool (running on Podman or K3s).
- Salt minions — Agents on managed systems. Lightweight, with a persistent connection to the Salt master via ZeroMQ.
Hardware requirements
Server (up to 1,000 systems)
- 4 CPU cores (8+ recommended)
- 16 GB RAM (32 GB recommended)
- 200 GB disk for /var/spacewalk (repository data; in containerized 5.0+ deployments, this maps to /var/lib/containers/storage/volumes)
- 50 GB disk for /var/lib/pgsql (database)
- Dedicated disk for /var/cache
Server (1,000+ systems)
- 8–16 CPU cores
- 64–128 GB RAM
- 1 TB+ for /var/spacewalk (multiple distro channels)
- 200 GB+ for PostgreSQL data
- SSD/NVMe strongly recommended for database
- Consider SUSE Manager Hub for multi-server architectures
SUSE Manager 4.3 deprecated the legacy "traditional" client (based on rhnlib/osad), and SUSE Manager 5.0 removed traditional client support entirely. All deployments must now use Salt minions exclusively. If migrating from 4.3, all traditional clients must be converted to Salt before upgrading to 5.0.
Content Lifecycle Management
Content Lifecycle Management (CLM) is SUSE Manager's mechanism for controlling how patches and packages flow from vendor repositories to production systems. Instead of applying vendor updates directly, CLM lets you create filtered snapshots of content and promote them through a sequence of environments.
Key concepts
- Content project — A named project that defines which source channels to include, what filters to apply, and which environments to promote through
- Source channels — The upstream vendor channels (e.g., SLES 15 SP5 Pool, SLES 15 SP5 Updates) that feed into the project
- Filters — Rules that include or exclude specific patches, packages, or errata types (security, bugfix, enhancement). Filters can match by name, CVE ID, advisory type, or date
- Environments — Ordered stages (e.g., dev → test → prod) that represent points in your release pipeline. Each environment gets its own set of cloned channels
- Build & Promote — Building a project creates a snapshot. Promoting moves that snapshot to the next environment
Typical workflow
CLM pipeline example
- Sync vendor channels — SUSE Manager mirrors SLES 15 SP5 Pool + Updates from SCC (SUSE Customer Center)
- Create content project — Add SLES 15 SP5 channels as sources. Add filter to exclude kernel patches (these follow a separate approval process)
- Build to DEV — Click "Build" to create a frozen snapshot. DEV systems automatically see the new content
- Test in DEV — Run zypper patch on DEV systems and validate application compatibility
- Promote to TEST — After validation, promote the same snapshot to TEST. No new content is added — TEST gets exactly what DEV had
- Promote to PROD — After TEST sign-off, promote to PROD. Production systems now have access to the validated content
# Building and promoting content projects through the contentmanagement
# XML-RPC namespace, using spacecmd's generic "api" command
# (exact quoting of -A arguments may vary between versions)
# Build the project (creates a snapshot in the first environment)
spacecmd -- api contentmanagement.buildProject -A '["sles15sp5-standard"]'
# Promote from DEV to TEST
spacecmd -- api contentmanagement.promoteProject -A '["sles15sp5-standard", "dev"]'
# Promote from TEST to PROD
spacecmd -- api contentmanagement.promoteProject -A '["sles15sp5-standard", "test"]'
# List all content projects
spacecmd -- api contentmanagement.listProjects
Always build CLM projects on a fixed schedule (e.g., monthly patch Tuesday + 3 days). This gives vendor patches time to stabilize before you snapshot them. Use filters to exclude kernel and driver updates that require separate reboot-window planning. Keep your environment count small — three stages (dev/test/prod) is the sweet spot for most organizations.
Patch Management
Patch management is the primary reason most organizations deploy SUSE Manager. The system tracks errata (vendor advisories) across all managed systems, shows which systems are affected by which CVEs, and provides scheduling and approval workflows for applying patches at scale.
How patching works
- Channel sync — Taskomatic periodically syncs vendor channels from SUSE Customer Center (SCC). New errata and packages are downloaded and cached locally.
- Errata cache update — After sync, Taskomatic recalculates which systems need which patches based on their installed package versions.
- Dashboard visibility — The web UI shows patch counts per system, per group, and per severity (critical, important, moderate, low).
- Scheduling — Administrators schedule patch actions for individual systems, groups, or via recurring actions. Patches can be applied immediately or during defined maintenance windows.
- Execution — The Salt master sends patch commands to minions. Minions execute zypper patch (SLES) or dnf update/yum update (RHEL/CentOS — dnf on RHEL 8+, yum on RHEL 7) and report results back.
- Reporting — Success/failure status is recorded per system. Failed patches can be retried or investigated.
CVE auditing
SUSE Manager includes a dedicated CVE Audit page. Given a CVE ID (e.g., CVE-2024-1234), it shows every managed system and its status:
Patched
System has the patched package version installed. No action needed.
Affected
System is running a vulnerable version. Patch is available in the assigned channel.
Patch pending
Patch action has been scheduled but not yet applied.
Not affected
System does not have the vulnerable package installed.
Maintenance windows
SUSE Manager supports maintenance schedules that restrict when patches can be applied to a system. This is critical for production environments where patching must happen during approved change windows.
# Maintenance schedules are backed by iCalendar (.ics) data. They can be
# created in the web UI (Schedule -> Maintenance Windows) or through the
# "maintenance" XML-RPC namespace; a sketch using spacecmd's generic "api"
# command (calendar/schedule names are examples; argument shapes vary
# between versions — check the API reference for your release):
spacecmd -- api maintenance.createCalendar \
    -A '["prod-cal", "<iCalendar data defining the recurring window>"]'
spacecmd -- api maintenance.createSchedule \
    -A '["prod-patch-window", "single", "prod-cal"]'
# Assign systems (by system ID) to the maintenance schedule
spacecmd -- api maintenance.assignScheduleToSystems \
    -A '["prod-patch-window", [1000010000, 1000010001], []]'
After syncing channels, the errata cache update can take significant time on large installations (30+ minutes for 5,000+ systems). Do not schedule patch actions immediately after a sync — wait for Taskomatic to finish the cache refresh, or patch counts will be inaccurate. Monitor the Taskomatic task status in the Admin → Task Schedules page.
Salt Integration
SUSE Manager uses Salt as its core management backend. Every managed system runs a Salt minion that maintains a persistent connection to the Salt master embedded in the SUSE Manager server. This architecture enables real-time command execution, event-driven automation, and declarative configuration management.
Salt concepts in SUSE Manager
- States — Declarative YAML files that describe the desired state of a system (packages installed, services running, files present). SUSE Manager can apply Salt states to systems and groups.
- Grains — Static data about a minion (OS, CPU, memory, network). SUSE Manager uses grains for hardware/software inventory and system grouping.
- Pillars — Confidential or system-specific data (passwords, configuration values) that is delivered only to the targeted minion. Stored on the Salt master.
- Formulas — Parameterized Salt states that can be configured through the SUSE Manager web UI. SUSE provides a catalog of pre-built formulas (monitoring, DHCP, BIND, etc.).
- Remote execution — Run arbitrary Salt commands on one or many systems in real time. Equivalent to salt '*' cmd.run 'uptime' but through the web UI.
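To make pillars concrete, here is a minimal sketch of how targeted pillar data looks on a plain Salt master (file paths, minion name, and values are illustrative; SUSE Manager itself stores formula pillar data in its database rather than in flat files):

```yaml
# /srv/pillar/top.sls — map pillar files to minions (illustrative)
base:
  'db01.example.com':
    - database

# /srv/pillar/database.sls — delivered only to db01.example.com
database:
  user: appuser
  password: s3cret   # confidential value; other minions never receive it
```

The key property is targeting: unlike grains (reported by the minion), pillar data is compiled on the master and only the matched minion ever sees it.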
Salt vs. Traditional management
| Aspect | Salt (default) | Traditional (deprecated) |
|---|---|---|
| Agent | salt-minion (ZeroMQ persistent connection) | rhnsd + osad (polling-based) |
| Command execution | Real-time (seconds) | Polling interval (minutes to hours) |
| Configuration management | Salt states, formulas, pillars | Configuration channels (file deployment only) |
| Scalability | Thousands of minions per master | Limited by polling architecture |
| Event system | Event bus, reactors, beacons | None |
| Future support | Actively developed | Removed in SUSE Manager 5.0 |
# Example Salt state managed through SUSE Manager
# /srv/salt/webserver/init.sls
apache_installed:
pkg.installed:
- name: apache2
apache_running:
service.running:
- name: apache2
- enable: True
- require:
- pkg: apache_installed
apache_config:
file.managed:
- name: /etc/apache2/httpd.conf
- source: salt://webserver/files/httpd.conf
- user: root
- group: root
- mode: 644
- watch_in:
- service: apache_running
SUSE Manager exposes the full Salt API through its web interface, but you can also use the salt CLI directly on the server for troubleshooting. The Salt master configuration lives at /etc/salt/master.d/susemanager.conf. Be careful editing this directly — SUSE Manager regenerates parts of the Salt configuration on service restart.
System Registration
Registering (or "bootstrapping") systems into SUSE Manager is the first step in managing them. SUSE Manager supports multiple registration methods, with the bootstrap script being the most common for Salt minions.
Registration methods
- Web UI bootstrap — Enter the system's hostname/IP in the SUSE Manager UI. The server pushes Salt minion installation and configuration via SSH.
- Bootstrap script — Generate a shell script from SUSE Manager, copy it to the target system, and run it. The script installs the Salt minion, configures it to point to the SUSE Manager server, and registers the system.
- Manual Salt minion install — Install salt-minion manually, point it at the SUSE Manager server in /etc/salt/minion.d/susemanager.conf, and accept the key on the server.
- AutoYaST / Kickstart — Include registration in the automated installation profile for new system deployments.
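For the manual method, the minion-side configuration is a small YAML drop-in pointing at the server. A minimal sketch (hostname and key name are examples; the generated bootstrap script writes a similar file, and passing the activation key as a grain is how the server knows which key to apply):

```yaml
# /etc/salt/minion.d/susemanager.conf (illustrative sketch)
master: suma-server.example.com
grains:
  susemanager:
    activation_key: "1-sles15sp5-prod"
```

After restarting salt-minion, accept the new key on the server (or let SUSE Manager's onboarding do it) and the system appears in the inventory.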
Activation keys
Activation keys are the mechanism for assigning channels, configuration, and entitlements to systems at registration time. Every system should be registered with an activation key.
Activation key components
- Base channel — The primary OS channel (e.g., SLES 15 SP5 Pool)
- Child channels — Additional channels (Updates, SDK, modules)
- System groups — Automatically assign to groups on registration
- Configuration channels — Deploy config on first checkin
- Contact method — Salt (default) or SSH push; the traditional contact method was removed in 5.0
Naming convention
- 1-sles15sp5-dev — SLES 15 SP5, DEV environment
- 1-sles15sp5-prod — SLES 15 SP5, PROD environment
- 1-rhel8-prod — RHEL 8, PROD environment
- 1-ubuntu2204-test — Ubuntu 22.04, TEST environment
- Prefix with org ID, include OS and environment
# Generate and run bootstrap script
# On SUSE Manager server:
mgr-bootstrap --activation-keys=1-sles15sp5-prod \
--hostname=suma-server.example.com \
--ssl-cert=/srv/www/htdocs/pub/RHN-ORG-TRUSTED-SSL-CERT
# On target system:
curl -Sks https://suma-server.example.com/pub/bootstrap/bootstrap.sh | bash
# Or via the API:
spacecmd -- system_bootstrap -H client01.example.com \
-u root -a 1-sles15sp5-prod
System groups
System groups organize managed systems by function, location, or environment. Groups are used for targeting patch actions, applying Salt states, generating reports, and delegating management to team-specific administrators.
Create a group hierarchy that maps to your organization: by OS (sles15sp5, rhel8), by environment (dev, test, prod), and by function (webservers, databases, app-servers). Use activation keys to auto-assign systems to groups at registration time. This avoids manual group assignments that drift over time.
Proxy Servers
SUSE Manager Proxy is a lightweight component deployed at remote sites, in DMZs, or at branch offices. It caches packages locally, brokers Salt communication, and reduces bandwidth consumption between remote sites and the central SUSE Manager server.
When to use a proxy
- Remote sites with WAN links — Avoid pulling the same package over a slow link for every system. The proxy caches it locally after the first download.
- DMZ deployments — Place a proxy in the DMZ so managed systems never need direct access to the internal SUSE Manager server.
- Branch offices — Each branch gets its own proxy, reducing central bandwidth and improving patch install speed.
- Scale-out — Distribute Salt minion connections across proxies to reduce load on the central Salt master.
Proxy architecture
What the proxy does
- Caches RPM/DEB packages on local disk (squid-based in 4.3; containerized caching in 5.0+)
- Forwards Salt communication between minions and the central master
- Serves bootstrap repositories for client registration
- TFTP/PXE boot server for provisioning at remote sites
- Handles SSL termination for client connections
What the proxy does NOT do
- No web UI — all management is done on the central server
- No database — system data lives only on the server
- No independent operation — proxy requires connectivity to the server
- No content filtering — it caches whatever the server provides
- Cannot run Taskomatic or schedule tasks
# SUSE Manager 5.0+ (containerized proxy with mgrpxy)
# 1. Install SL Micro 5.5 or SLES 15 SP6+ on the proxy host
# 2. Install the mgrpxy tool
# 3. Generate the proxy configuration archive (config.tar.gz) on the
#    SUSE Manager server, copy it to the proxy host, then deploy:
mgrpxy install podman /path/to/config.tar.gz
# Legacy (SUSE Manager 4.3 and earlier):
# 1. Register the system as a regular Salt minion first
# 2. Install proxy pattern
zypper in -t pattern suma_proxy
# 3. Run the proxy setup script (interactive — it prompts for the parent
#    SUSE Manager server and SSL certificate details; answers can be
#    pre-seeded with an answer file)
configure-proxy.sh
# 4. Activate the proxy in SUSE Manager UI
# Systems -> proxy01 -> Details -> Proxy -> Activate
A proxy serving up to 500 minions needs 2 CPU cores, 4 GB RAM, and enough disk for the package cache (typically 100–200 GB depending on how many channels are used). For larger sites, increase to 4 cores and 8 GB RAM. The cache size depends on the number of unique packages pulled — monitor /var/cache/rhn usage and adjust the Squid cache size accordingly.
Configuration Management
SUSE Manager provides configuration management through two mechanisms: Salt states (for Salt-managed systems) and configuration channels (legacy, for traditional clients). For all new deployments, Salt states and formulas are the recommended approach.
Salt states management
Salt states are YAML files that declare the desired state of a system. SUSE Manager can store, version, and deploy states to individual systems or groups.
- State catalog — Browse and assign states from the SUSE Manager UI under Configuration → Salt States
- Highstate — Apply all assigned states to a system. Can be run on-demand or scheduled.
- Custom states — Place custom .sls files in /srv/salt/ on the SUSE Manager server. They appear in the UI automatically.
- Git integration — Use a gitfs backend to pull Salt states from a Git repository, enabling version control and CI/CD workflows.
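The gitfs backend is configured on the Salt master side. A minimal sketch (the repository URL and branch are placeholders; since SUSE Manager regenerates parts of its own master configuration, keep this in a separate drop-in file):

```yaml
# /etc/salt/master.d/gitfs.conf (illustrative)
fileserver_backend:
  - roots     # keep SUSE Manager's local file roots first
  - gitfs
gitfs_remotes:
  - https://git.example.com/salt-states.git:
      - base: main    # branch served as the "base" environment
```

Note that gitfs requires a Git provider library (pygit2 or GitPython) on the server, and the salt-master service must be restarted after the change.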
Formulas
Formulas are parameterized Salt states with a form-based UI for configuration. SUSE provides a growing catalog of pre-built formulas:
Available formulas
- Monitoring — Prometheus node exporter, Grafana configuration
- DHCP — ISC DHCP server configuration
- BIND — DNS server management
- Locale — System locale and timezone
- Yomi — Bare-metal OS provisioning
- Virtual Host Manager — VMware/KVM discovery
Custom formulas
- Place the form definition under /usr/share/susemanager/formulas/metadata/<name>/ and the states under /usr/share/susemanager/formulas/states/<name>/
- Define form.yml for the UI input fields
- Write Salt states that consume pillar data from the form
- Assign via UI — per system or per group
- Pillar values are stored in SUSE Manager database
# Example formula form.yml
# /usr/share/susemanager/formulas/metadata/myapp/form.yml
myapp:
$type: group
app_port:
$type: number
$default: 8080
$help: "Port the application listens on"
enable_ssl:
$type: boolean
$default: true
allowed_hosts:
$type: edit-group
$itemtype: text
$help: "List of allowed hostnames"
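A matching state that consumes the form's values might look like the following sketch (the myapp formula, its paths, and the config file layout are all hypothetical, continuing the form.yml example above; values entered in the web UI arrive as pillar data keyed by the formula name):

```yaml
# /usr/share/susemanager/formulas/states/myapp/init.sls (illustrative)
{% set myapp = pillar.get('myapp', {}) %}

myapp_config:
  file.managed:
    - name: /etc/myapp/config.yml
    - user: root
    - group: root
    - mode: 640
    - contents: |
        port: {{ myapp.get('app_port', 8080) }}
        ssl: {{ myapp.get('enable_ssl', true) }}
        allowed_hosts: {{ myapp.get('allowed_hosts', []) }}
```

Defaults in the Jinja lookups mirror the $default values in form.yml, so the state still renders sensibly if the formula has not been configured for a system yet.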
Use formulas for standardized configurations (monitoring agents, NTP, SSH hardening) and custom Salt states for application-specific logic. Store custom states in Git and use gitfs to pull them into SUSE Manager. This gives you version history, code review, and rollback capability that the built-in state editor lacks.
Monitoring & Reporting
SUSE Manager provides built-in reporting capabilities and integrates with external monitoring stacks for infrastructure observability.
Prometheus integration
SUSE Manager includes the Prometheus monitoring formula that deploys and configures the Prometheus node exporter on managed systems. The formula can also configure the Prometheus server's scrape targets, creating a fully automated monitoring pipeline.
- Deploy
prometheus-node_exporterto all systems via formula - Auto-generate Prometheus scrape configuration based on system groups
- Pre-built Grafana dashboards for SUSE Manager system inventory
- Alert on patch compliance, system health, and Salt minion connectivity
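The generated scrape configuration amounts to a standard Prometheus job per system group. A hand-written equivalent, as a sketch (job name and targets are examples; the node exporter listens on port 9100 by default):

```yaml
# prometheus.yml fragment (illustrative)
scrape_configs:
  - job_name: suma-webservers        # one job per SUSE Manager system group
    static_configs:
      - targets:
          - 'web01.example.com:9100'
          - 'web02.example.com:9100'
```

Using the formula instead of hand-maintained target lists means new systems joining a group are picked up automatically.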
Built-in reports
System inventory
- Hardware inventory (CPU, RAM, disk, network interfaces)
- Installed packages and versions
- Installed patches and errata
- System group membership
- Last checkin time and Salt minion status
Compliance reports
- OpenSCAP scan results (CIS, DISA STIG profiles)
- CVE audit reports — which systems are affected
- Patch compliance — percentage of systems fully patched
- Subscription compliance — entitlement usage
- Exportable to CSV for external reporting tools
OpenSCAP integration
SUSE Manager can schedule OpenSCAP scans on managed systems and store the results centrally. This provides automated compliance checking against industry benchmarks.
# Schedule OpenSCAP scan via SUSE Manager
# Using the system.scap XML-RPC namespace via spacecmd's generic "api"
# command (system ID, datastream path, and profile ID are examples):
spacecmd -- api system.scap.scheduleXccdfScan \
    -A '[1000010000, "/usr/share/xml/scap/ssg/content/ssg-sle15-ds.xml", "--profile xccdf_org.ssgproject.content_profile_cis"]'
# Results are viewable in:
# Systems -> server01 -> Audit -> OpenSCAP -> Scan Results
Monitor the SUSE Manager server itself with Prometheus. Key metrics to watch: Taskomatic task queue depth, PostgreSQL connection count and query duration, Salt master event bus throughput, and repository sync duration. Alert when sync times increase significantly — this often indicates storage performance degradation.
Licensing
Understanding the licensing model is critical for budgeting and architecture decisions. SUSE Manager is a commercial product with an open-source upstream (Uyuni).
SUSE Manager vs. Uyuni
| Aspect | SUSE Manager | Uyuni |
|---|---|---|
| License | Commercial (SUSE subscription) | Open source (GPL) |
| Support | SUSE enterprise support (L1–L3) | Community only |
| Certified content | SUSE-tested and signed patches | Community repositories |
| SCC integration | Full — mirrors from SUSE Customer Center | Limited — no SCC credentials |
| Server OS | SL Micro 5.5 or SLES 15 SP6+ (containerized in 5.0+) | openSUSE Leap or openSUSE Leap Micro (containerized) |
| Release cycle | Aligned with SLES, long-term support | Rolling releases, faster features |
| Multi-distro management | SLES, RHEL, Ubuntu, Debian, CentOS, Rocky, Alma, Oracle Linux, Amazon Linux | Same distro support |
Subscription model
What requires subscription
- SUSE Manager server — Requires a SUSE subscription (SL Micro or SLES) + SUSE Manager entitlement
- SLES managed clients — Each SLES system needs a SLES subscription to access SUSE update channels
- SUSE Manager Proxy — Requires its own SUSE host OS subscription + proxy entitlement
- Lifecycle Management+ — Advanced CLM features may require additional entitlements
What is free
- RHEL/CentOS/Rocky/Alma management — No SUSE subscription needed for these clients (they use their own vendor repos)
- Ubuntu management — No SUSE subscription for Ubuntu clients
- Salt minion agent — Open source, no per-agent fee
- Uyuni — Completely free alternative (without SUSE support)
A common misconception: SUSE Manager does not include SLES subscriptions for managed clients. You need a SUSE Manager subscription plus individual SLES subscriptions for each SLES client. For non-SUSE clients (RHEL, Ubuntu, CentOS), you only need the SUSE Manager management entitlement — the client OS subscription comes from its own vendor. Always validate entitlement counts with SUSE before procurement.
Consultant's Checklist
Use this checklist when scoping, deploying, or auditing a SUSE Manager environment.
Pre-deployment
- Inventory all Linux distros and versions in scope
- Confirm SLES + SUSE Manager subscription counts with SUSE
- Size the server: CPU, RAM, and disk (repository mirrors dominate)
- Plan network: firewall rules for Salt (4505/4506 TCP), HTTPS (443), and Cobbler (if provisioning)
- Identify proxy locations for remote sites / DMZ
- Decide on content lifecycle environments (dev/test/prod)
- Define activation key naming convention
- Plan PostgreSQL backup strategy (pg_dump or streaming replication)
Day-1 deployment
- Install SUSE Manager on dedicated SL Micro 5.5 or SLES 15 SP6+ system (containerized in 5.0+; SLES 15 SP4+ for legacy 4.3)
- Run
mgradm install podmanfor initial configuration (5.0+) oryast susemanager_setup(4.3) - Add SCC credentials and sync initial channels
- Create activation keys for each OS + environment combination
- Create system groups (by OS, environment, function)
- Configure CLM content projects with dev/test/prod stages
- Bootstrap pilot systems (5–10) and validate
- Set up Taskomatic schedules for channel sync and errata cache
Day-2 operations
- Build CLM projects on a monthly schedule
- Promote content through environments with testing at each stage
- Schedule patch windows aligned with change management process
- Run CVE audits after major vulnerability disclosures
- Monitor Taskomatic task queue and PostgreSQL performance
- Review Salt minion connectivity (Systems → Bootstrapping page)
- Run OpenSCAP compliance scans quarterly (minimum)
Common pitfalls
- Undersizing
/var/spacewalk— repository data grows fast - Not waiting for errata cache refresh before scheduling patches
- Using traditional clients instead of Salt (traditional support removed entirely in 5.0)
- Skipping CLM and applying vendor patches directly to production
- No PostgreSQL backup — the database is the hardest thing to rebuild
- Forgetting to sync child channels (SDK, modules, updates)
- Not monitoring Taskomatic — silent failures block patching
- Mixing activation keys across environments (dev key on prod system)