This topic is a step-by-step guide on how to set up XL Deploy in a production-ready environment. It describes how to configure the product, environment, and server resources to get the most out of the product. This topic is structured in three sections:

  • Preparation: Describes the prerequisites for the prescribed XL Deploy setup.
  • Installation: Covers setup and configuration procedures for each of the components.
  • Administration/Operation: Provides an overview of best practices to maintain and administer the system once it is in production.

Important: Proper configuration of I/O subsystems (database, file system, network) is critical for the optimal performance and operation of XL Deploy. This guide provides best practices and sizing recommendations.

Production environment setup

XL Deploy Production Configuration


In the first phase of setting up the production environment, you need to determine the correct hardware requirements and obtain the necessary prerequisites. A production-ready XL Deploy setup is a clustered, multi-node active/hot-standby setup, so you will need multiple machines.

Obtaining XL Deploy servers

The Requirements for installing XL Deploy topic describes the minimum system requirements. Here are the recommended requirements for each XL Deploy production machine:

  • 3+ GHz machine with 2 quad-core CPUs (8 cores in total) or better
  • 16 GB RAM or more
  • 500 GB hard disk space

Note: All of the XL Deploy cluster nodes must reside in the same network segment. This is required for the clustering protocol to function optimally. For best performance and to minimize network latency, it is also recommended that your database server be located in the same network segment.

Obtaining the XL Deploy distribution

Download the XL Deploy ZIP package from the XebiaLabs Software Distribution site (requires customer log-in).

For information about the supported versions of XL Deploy, see Supported XebiaLabs product versions.

Choosing a database server

A production setup requires an external clustered database to store the XL Deploy data. The supported external databases are described in Configure the XL Deploy SQL repository.

For more information about hardware requirements, see the database server supplier documentation.

Artifacts storage location

XL Deploy can be configured to store and retrieve artifacts in three different storage repository formats:

  • Repository Manager: The artifacts are managed by an external tool such as Nexus or Artifactory.
  • Database: The artifacts are stored in and retrieved from a relational database management system (RDBMS).
  • File system: The artifacts are stored on and retrieved from the file system.

XebiaLabs recommendations are, in order of preference:

  1. A repository management system, with artifacts referenced using the fileUri property.
  2. A clustered database, if the RDBMS supports the size of the artifacts.
  3. A shared file system.

XL Deploy can only use one local artifact repository at any time. The configuration option xl.repository.artifacts.type can be set to either file or db to select the storage repository.
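For illustration, the selection could look like the following fragment. This is a sketch that assumes the HOCON-style XL Deploy configuration file; check the configuration reference for your version for the exact file and syntax.

```
# Select the local artifact storage repository: "db" or "file".
# Only one local artifact repository can be used at any time.
xl.repository.artifacts.type = db
```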

Choosing a load balancer

To run XL Deploy in a high-availability (HA) setup, you must front the installation with a load balancer, so that users do not need to know which of the clustered nodes they are being routed to. This topic uses HAProxy as an example. You can use any HTTP(S) load balancer that supports the following features:

  • SSL offloading
  • Checking a custom HTTP endpoint for node availability

Here are examples of load balancers that support this feature set:

Choosing an authentication provider

XL Deploy supports a number of single-sign on (SSO) authentication providers, including Secure LDAP (LDAPS) and OIDC providers. Many cloud providers support authentication through OIDC:

If you do not want to use a cloud provider, or if your SSO solution is not compatible with OIDC, you can integrate your SSO with Keycloak, which can act as an OIDC bridge.

For more information, see Configure OpenID Connect (OIDC) Authentication for XL Deploy or Connect XL Deploy to your LDAP or Active Directory.

Choosing a monitoring and alerting solution

For a production installation, make sure you set up a monitoring system that tracks system and product performance for the components comprising your installation. XL Deploy exposes internal and system metrics over JMX. Any monitoring system that can read JMX data can be used to monitor the installation.
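As a sketch, remote JMX access can be enabled with the standard JDK system properties below, added to the server's JVM options. The port number and password file path are illustrative, not XL Deploy defaults; always enable SSL and authentication in production.

```
# Standard JDK flags for remote JMX access (illustrative port and path).
-Dcom.sun.management.jmxremote.port=9010
-Dcom.sun.management.jmxremote.ssl=true
-Dcom.sun.management.jmxremote.authenticate=true
-Dcom.sun.management.jmxremote.password.file=/opt/xebialabs/jmxremote.password
```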

Common monitoring and alerting tools include:

Choosing a forensic data gathering toolchain

In addition to monitoring, ensure that system-related data gathering is active and available. You can analyze the gathered forensic data at a later point in time and perform root cause analysis for outages. You can also use forensic data to determine usage patterns and peak load patterns.

For this kind of monitoring, you can use a time series database. Common tools include:

You can graph and analyze the gathered data using a tool such as Grafana.

It is also recommended that you set up log file monitoring. The industry-standard toolchain for log file monitoring is the ELK stack:

These tools allow log files to be read and indexed while they are being written, so you can monitor for errant behavior during operation and perform analysis after outages.

Database server configuration

The basic database setup procedure, including schemas and privileges, is described in Configure the XL Deploy SQL repository. For various databases, additional configuration options are required to use them with XL Deploy or to achieve better performance.

MySQL or MariaDB

Important: MariaDB is not an officially supported database for XL Deploy, but you can use it as a drop-in replacement for MySQL.

The default installation of MySQL is not tuned to run on a dedicated high-end machine. It is recommended that you change the following MySQL settings to improve its performance. These settings can be set in the MySQL options file. See the MySQL documentation to locate this file on your operating system.

  • innodb_buffer_pool_size: Set this to at most 70-75% of the available RAM of the database server. This setting controls the size of the database structure that can be kept in memory; a larger size provides better performance for the application due to caching at the database level.
  • innodb_log_file_size: Set this to 256M. This sets the size of the redo logs that MySQL keeps. If you set this to a large value, MySQL can process peak loads by keeping the transactions in the redo log.
  • innodb_thread_concurrency: Set this to 2 * the number of CPU cores of the database server. For example, for a 2 CPU quad-core machine, the optimal setting is 2 CPUs * 4 cores * 2 = 16.
  • max_allowed_packet: Set this to 16M. This is the maximum size of a packet transmitted from the server to the client. Because some columns in the XL Deploy database contain BLOBs, this setting is better than the default of 1M.
  • open_files_limit: XebiaLabs recommends setting this value to 10000 for large installations. This setting controls the number of file descriptors that the MySQL database can keep open. It cannot be set to a higher value than the output of ulimit -n on a Linux/Unix system; if that limit is lower than the recommended value, see the documentation of your operating system.
  • innodb_flush_log_at_trx_commit: Advanced: the default setting of this option is 1, meaning every transaction is flushed to disk on commit, which ensures full ACID compliance. Setting it to 0 (only flush the transaction buffer to the transaction log once per second) or 2 (directly write the transaction to the transaction log, and flush the log to disk once per second) can cause the loss of up to one second's worth of transactions. Note: when using a battery-backed disk cache, this setting can be set to 2 to prevent direct flushes to disk; the battery-backed disk cache ensures that the cache is flushed to disk before the power fails.
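Collected in the MySQL options file, the recommendations above could look like this sketch for a dedicated server with 16 GB RAM and 8 cores; adjust the sizes to your own machine.

```ini
[mysqld]
innodb_buffer_pool_size   = 11G   # ~70% of 16 GB RAM
innodb_log_file_size      = 256M
innodb_thread_concurrency = 16    # 2 CPUs * 4 cores * 2
max_allowed_packet        = 16M
open_files_limit          = 10000
# Advanced: only with a battery-backed disk cache:
# innodb_flush_log_at_trx_commit = 2
```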


PostgreSQL

There are a number of settings in a default installation of PostgreSQL that can be tuned for better performance on higher-end systems. These configuration options can be set in the PostgreSQL configuration file. See the PostgreSQL documentation to locate this file on your operating system.

  • shared_buffers: Set to 30% of the available RAM of the database server. This setting controls the size of the memory allocated to PostgreSQL for caching data.
  • effective_cache_size: Set to 50% of the available RAM of the database server. This setting provides an estimate of the memory size available for disk caching; the PostgreSQL query planner uses it to determine whether query plan results fit in memory.
  • checkpoint_segments: Set to 64. This setting controls how often the Write Ahead Log (WAL) is check-pointed. The WAL is written in 16MB segments; with a value of 64, the WAL is check-pointed once every 64 * 16MB = 1024MB, or once per 5 minutes, whichever is reached first.
  • default_statistics_target: Set to 250. This setting controls the amount of information stored in the statistics tables for optimizing query execution.
  • work_mem: Set to 0.2% of the available RAM of the database server. This setting controls the memory size available per connection for performing in-memory sorts and joins of query results. In a 100-connection scenario, this amounts to 20% of the available RAM in total.
  • maintenance_work_mem: Set to 2% of the available RAM. This setting controls the amount of memory available to PostgreSQL for maintenance operations such as VACUUM and ANALYZE.
  • synchronous_commit: Advanced: the default setting of this option is on, which guarantees full ACID compliance and no data loss on power failure. If you have a battery-backed disk cache, you can switch this setting to off to increase the number of transactions per second.
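Collected in the PostgreSQL configuration file, the recommendations above could look like this sketch for a dedicated server with 16 GB RAM; adjust the sizes to your own machine.

```
# Illustrative values for a dedicated server with 16 GB RAM.
shared_buffers = 4800MB            # 30% of RAM
effective_cache_size = 8GB         # 50% of RAM
checkpoint_segments = 64
default_statistics_target = 250
work_mem = 32MB                    # ~0.2% of RAM
maintenance_work_mem = 320MB       # 2% of RAM
# Advanced: only with a battery-backed disk cache:
# synchronous_commit = off
```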

Security settings

It is important to harden the XL Deploy environment against abuse. There are many industry-standard practices to ensure that an application runs in a sandboxed environment. At a minimum, take the following actions:

  1. Run XL Deploy in a VM or a container. There are officially supported Docker images for XL Deploy.
  2. Run XL Deploy on a read-only file system. XL Deploy needs to write to several directories during operation, notably its conf/, export/, log/, repository/, and work/ subdirectories. The rest of the file system can be made read-only.
  3. Enable SSL on JMX on the XL Deploy server and satellites (see Using JMX counters for XL Satellite; as of 8.6.x, see conf/xl-deploy.conf.example).
  4. Configure secure communications between XL Deploy and satellites: 8.5.x and earlier, or 8.6.x and later.
  5. Change the default Derby database to a production-ready database, as described above.
  6. Do not enable SSL on the XL Deploy server itself, since the load balancer will offload SSL (see below).

Operating system

XL Deploy can run on both Microsoft Windows (64-bit) and Linux/Unix operating systems. Ensure that you maintain these systems with the latest security updates.

Java version

Important: XL Deploy requires Java 8. Running XL Deploy on Java 9 or later is not supported.

XL Deploy can run on the Oracle JDK or JRE, as well as OpenJDK. Always run the latest patch level of the JDK or JRE, unless otherwise instructed. For more information on Java requirements see Requirements for installing XL Deploy.

Installation and execution

To install XL Deploy on the machines with the minimum required permissions:

  1. Create a dedicated non-root user called xl-deploy. This ensures that you can lock down the operating system and prevent accidental privilege escalations.
  2. Create a directory under /opt called xebialabs, where the xl-deploy user has read access.
  3. Extract the downloaded version of XL Deploy in the /opt/xebialabs directory.
  4. Change the ownership of the installed product to xl-deploy and grant the user read access to the installation directory.
  5. Grant the xl-deploy user write access to the conf/ and log/ subdirectories underneath the /opt/xebialabs/xl-deploy-<version>-server/ folder. The user must also either be able to create new subdirectories there, or have write access to the export/, repository/, and work/ subdirectories.
  6. Copy your license file to the /opt/xebialabs/xl-deploy-<version>-server/conf directory. You can download your license file from the XebiaLabs Software Distribution site (requires customer log-in).
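The steps above can be sketched as a shell session, run as root. The version number, ZIP file name, and license file path are placeholders for your own downloaded files.

```shell
# 1. Dedicated non-root user
useradd --system --create-home xl-deploy

# 2-3. Installation directory and extraction
mkdir -p /opt/xebialabs
unzip xl-deploy-x.y.z-server.zip -d /opt/xebialabs

# 4. Ownership and read access for the xl-deploy user
chown -R xl-deploy: /opt/xebialabs/xl-deploy-x.y.z-server
chmod -R u+rX /opt/xebialabs/xl-deploy-x.y.z-server

# 5. Write access where XL Deploy needs it
cd /opt/xebialabs/xl-deploy-x.y.z-server
mkdir -p export repository work
chmod u+w conf log export repository work

# 6. Copy the license into place
cp /path/to/your-license.lic conf/
```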

Configure the SQL repository

For a clustered production setup, XL Deploy requires an external database, as described in Configure the XL Deploy SQL repository.

Configure XL Deploy clustering

In the hot-standby cluster mode used in a production setup, only one node is active at any given moment; the other node(s) are marked as offline in the load balancer and will not receive any HTTP traffic. See Configure active/hot-standby mode to configure XL Deploy in a clustered active/hot-standby setup.

Configure user authentication

Set up a secure process for authenticating users. For production setups, you can use an OIDC provider, use Keycloak as an OIDC bridge, or use an LDAP directory system over the LDAPS protocol.

For more information, see:

Configure XL Deploy Java virtual machine (JVM) options

To optimize XL Deploy performance, you can adjust JVM options to modify the runtime configuration of XL Deploy. To increase performance, add or change the following settings in the conf/xld-wrapper-linux.conf or the conf/xld-wrapper-windows.conf file.

  • -server: Instructs the JVM to run in the server profile.
  • -Xms8192m: Instructs the JVM to reserve a minimum of 8 GB of heap space.
  • -Xmx8192m: Instructs the JVM to reserve a maximum of 8 GB of heap space.
  • -XX:+UnlockExperimentalVMOptions: Instructs the JVM to unlock experimental options.
  • -XX:MaxMetaspaceSize=1024m: Instructs the JVM to assign 1 GB of memory to the metaspace region (the off-heap memory region for loading classes and native libraries).
  • -Xss1024k: Instructs the JVM to limit the stack size to 1 MB.
  • -XX:+UseG1GC: Instructs the JVM to use the G1 (Garbage First) garbage collector. As of Java 9, this is the default garbage collector.
  • -XX:+HeapDumpOnOutOfMemoryError: Instructs the JVM to dump the heap to a file in case of an OutOfMemoryError. This is useful for debugging purposes if the XL Deploy process crashes.
  • -XX:HeapDumpPath=log/: Instructs the JVM to store generated heap dumps in the log/ directory of the XL Deploy server.
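In the wrapper configuration files, these options could look like the sketch below. This assumes a Tanuki-style wrapper file where each JVM option is a numbered wrapper.java.additional entry; match the numbering and syntax to your existing file.

```
# Illustrative Tanuki-style wrapper syntax; continue from the numbering
# already used in conf/xld-wrapper-linux.conf or conf/xld-wrapper-windows.conf.
wrapper.java.additional.10=-server
wrapper.java.additional.11=-Xms8192m
wrapper.java.additional.12=-Xmx8192m
wrapper.java.additional.13=-XX:+UnlockExperimentalVMOptions
wrapper.java.additional.14=-XX:MaxMetaspaceSize=1024m
wrapper.java.additional.15=-Xss1024k
wrapper.java.additional.16=-XX:+UseG1GC
wrapper.java.additional.17=-XX:+HeapDumpOnOutOfMemoryError
wrapper.java.additional.18=-XX:HeapDumpPath=log/
```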

Configure the task execution engine

Deployment tasks are executed by the XL Deploy task execution engine. Based on your deployment task, the task execution engine generates a deployment plan that contains steps that XL Deploy will carry out to deploy an application. You can tune the XL Deploy task execution engine with the settings described in Tuning the task execution engine in XL Deploy.

Finalize the node configuration and start the server

After the node(s) are configured for production use, you can finalize the configuration.

To start the XL Deploy server, run the startup script in /opt/xebialabs/xl-deploy-<version>-server/bin/ (Linux) or C:\xebialabs\xl-deploy-<version>-server\bin\run.cmd (Windows) on a single node.

Because this is the initial installation, XL Deploy prompts you with a series of questions. See below for the questions, recommended responses, and considerations.

  • Do you want to use the simple setup? Answer: no. Some properties need to be changed for production scenarios.
  • Please enter the admin password. Choose a strong and secure admin password.
  • Do you want to generate a new password encryption key? Answer: yes. You should generate a random, unique password encryption key for the production environment.
  • Please enter the password you wish to use for the password encryption key. If you want to start XL Deploy as a service on system boot, do not add a password to the password encryption key, because a password on the encryption key prevents automated startup. If your enterprise security compliance demands it, you can add a password in this step.
  • Would you like to enable SSL? Answer: no. SSL offloading is done on the load balancer, so it is not required to enable SSL on the XL Deploy servers.
  • What HTTP bind address would you like the server to listen to? Enter an address that listens on all interfaces, such as the wildcard address 0.0.0.0. If you only want to listen on a single IP address/interface, specify that one.
  • What HTTP port number would you like the server to listen on? Answer: 4516. This is the default port; you can enter a different port number.
  • Enter the web context root where XL Deploy will run. Answer: /. By default, XL Deploy runs on the / context root (in the root of the server).
  • Enter the public URL to access XL Deploy. Answer: https://LOADBALANCER_HOSTNAME. For XL Deploy to correctly rewrite all the URLs, it must know how it can be reached. Enter the hostname configured on the load balancer instead of the IP address (and port) of the XL Deploy server itself; the protocol is https.
  • Enter the minimum number of threads for the HTTP server. Answer: 30. Unless otherwise instructed, the default value can be used.
  • Enter the maximum number of threads for the HTTP server. Answer: 150. Start with the default value; if monitoring points to thread pool saturation, this number can be increased.
  • Do you agree with these settings? Answer: yes. Type yes after reviewing all settings.

After you answer yes to the final question, the XL Deploy server will boot up. During the initialization sequence, it will initialize the database schemas and display the following message:

You can now point your browser to https://<IP_OF_LOADBALANCER>/

Stop the XL Deploy server. Edit the conf/deployit.conf file and change these configuration options to a hardened setting:

  • hide.internals: Set to true. This hides exception messages from end users and shows only a key; the XL Deploy administrator can use this key to find the exception.
  • client.session.timeout.minutes: Defines the session idle timeout in minutes (the default is 20). Set this to the number of minutes defined by your enterprise security compliance officer.
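Together, the hardened settings in conf/deployit.conf look like this:

```
# conf/deployit.conf (hardened settings)
hide.internals=true
client.session.timeout.minutes=20
```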

Copy the conf/repository-keystore.jceks and conf/deployit.conf files to the other nodes so that all nodes run with the same settings.

All nodes are now fully configured and can be booted up.

Boot sequence

Start the nodes:

  1. Start the first node.
  2. Wait until the node is reachable at http://<node_ip_address>:4516/.
  3. Once the node is reachable, boot the other node(s).
  4. Check that only the first node reports success on a GET request to http://<node_ip_address>:4516/ha/health. All other nodes should report HTTP status code 503 Service Unavailable.
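The health check in step 4 can be performed with curl; the node addresses below are placeholders for your own nodes.

```shell
# The active node should return HTTP 200; all standby nodes should return 503.
curl -s -o /dev/null -w "%{http_code}\n" http://NODE1_IP:4516/ha/health
curl -s -o /dev/null -w "%{http_code}\n" http://NODE2_IP:4516/ha/health
```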

Configure the additional tools

Set up the load balancer

This example shows how to use HAProxy to set up a load balancer configuration. You can download this full HAProxy configuration file. The sections below show how to set up the routing and health checks for the load balancer. This configuration can be used for XL Deploy in hot-standby cluster mode.

frontend xl-http # <1>
  bind *:80
  reqadd X-Forwarded-Proto:\ http
  default_backend xl-backend

frontend xl-https # <3>
  bind *:443 ssl crt /etc/ssl/certs/certificate.pem # <4>
  reqadd X-Forwarded-Proto:\ https
  option httplog
  log global
  default_backend xld-backend # <5>

backend xl-backend # <2>
  redirect scheme https if !{ ssl_fc }

backend xld-backend # <6>
  option httpchk GET /ha/health # <7>
  # Add one "server" line per XL Deploy node; the names and addresses are placeholders.
  server xld-node1 XLD_NODE1_IP:4516 check
  server xld-node2 XLD_NODE2_IP:4516 check
  1. The xl-http front end routes all HTTP requests coming in on port 80 to the xl-backend backend.
  2. The xl-backend back end will redirect all requests to HTTPS if the front connection was not made using an SSL transport layer.
  3. The xl-https front end will handle all incoming SSL requests on port 443.
  4. Ensure you have a properly signed certificate to ensure a hardened configuration.
  5. Every incoming request on HTTPS will be routed to the xld-backend back end.
  6. The xld-backend will handle the actual load balancing for the XL Deploy nodes.
  7. Every XL Deploy node is checked on the /ha/health endpoint to verify whether it is up. If this endpoint returns a non-success status code, the node is taken out of the load balancer until it is back up.

Administration and operation

This section describes how to maintain the running system and what to do if monitoring shows any issues in the system.

Back up XL Deploy

To prevent inadvertent loss of data, ensure you regularly back up your production database as described in Back up XL Deploy.

Set up monitoring

Set up the desired metrics

Ensure that you monitor the following statistics for the systems that comprise your XL Deploy environment, including the load balancer, the XL Deploy nodes, and the database servers:

  • Network I/O
  • Disk I/O
  • RAM usage
  • CPU usage

Add monitoring to XL Deploy

You can monitor JMX remotely, add a Java agent such as the Dynatrace agent, or use a tool such as collectd to push monitoring statistics to a central collectd server.

In general, it is not recommended that you add Java agents to the Java process. Testing has shown that the Java agents can adversely influence the performance characteristics of the XL Deploy system. You should also make sure to not expose insecure or unauthenticated JMX over the network, as it can be used to execute remote procedure calls on the JVM.

The optimal solution is to set up collectd to aggregate the statistics on the XL Deploy server and push them to a central collecting server that can graph them. To do this, you must install the following tools on the XL Deploy server:

After these tools are installed, you can download this sample collectd.conf file, which is preconfigured to monitor relevant XL Deploy application and system statistics. To use this file, replace two placeholder values in the configuration:

  • IP_ADDRESS_HERE: Enter the IP address of the central collectd server
  • NETWORK_INTERFACE_HERE: Enter the network interface over which XL Deploy communicates
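As an illustration of where these placeholders appear, a minimal collectd configuration that gathers host statistics and pushes them to a central collectd server could look like this sketch (the exact plugin set in the downloadable sample file may differ):

```
# Minimal collectd sketch: gather host metrics, push to a central server.
LoadPlugin cpu
LoadPlugin memory
LoadPlugin interface
LoadPlugin network

<Plugin interface>
  Interface "NETWORK_INTERFACE_HERE"
</Plugin>

<Plugin network>
  Server "IP_ADDRESS_HERE"
</Plugin>
```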


XL Deploy network communication

This section reviews how XL Deploy traverses your network to communicate with middleware application servers to perform deployment operations. Because XL Deploy is agentless, communication is done using the standard SSH or WinRM protocols.

Standard XL Deploy connectivity

In this example, XL Deploy, using the Overthere plugin, connects to the target server using either SSH or WinRM.

XL Deploy connects using SSH or WinRM

For more information, review the following:

Standard XL Deploy connectivity using Jumpstation

  1. XL Deploy, using the Overthere plugin, connects to the jumpstation server using SSH. Nothing is installed on the jumpstation server.
  2. A connection is then made from the jumpstation, using SSH or WinRM, to the target server.

XL Deploy connects using jumpstation to target server

For more information, see:

  • Jumpstation details
  • Connect XL Deploy through an SSH jumpstation or HTTP proxy

Standard XL Deploy connectivity using Satellite

  1. XL Deploy communicates with the XL Satellite application using TCP.
  2. The deployment workload is moved from the XL Deploy JVM to XL Satellite.
  3. XL Satellite, using the Overthere plugin, connects to the target server using SSH or WinRM.

XL Deploy, first moves workload to XL Satellite through TCP, then SSH or WinRM to target server

For more information, see getting started with the satellite module.

Communication protocols and capabilities

  • XL Deploy Server: makes outbound connections to target servers using SSH or WinRM; supports deployment tasks, control tasks, and UI extensions.
  • XL Satellite: XL Deploy connects to it using TCP; it makes outbound connections to target servers using SSH or WinRM; supports deployment tasks, but not control tasks or UI extensions.
  • Jumpstation: XL Deploy connects to it using SSH; it forwards connections to target servers using SSH or WinRM; deployment tasks, control tasks, and UI extensions are not applicable, because it only tunnels connections.

Jumpstation details


  • Uses standard SSH encryption
  • Can use PKI or User credentials
  • Uses a TLS-encrypted satellite connection
  • Will create a tunnel to allow direct communication to a target host
  • Port (configurable): 22 by default
  • The port is only open during communications; the pipe is closed after the task is executed
  • Is bi-directional during activity

XL Deploy, connecting to DMZ using SSH, connects to Jumpstation

With this setup, you then need to:

  1. Establish connectivity method (credentials, or signed certificate).
  2. Define a firewall port.
  3. Generate iptables rules.

For more details, see connecting to jumpstation.
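As an illustrative sketch of step 3, iptables rules on the jumpstation could restrict inbound SSH to the XL Deploy server only; XLD_SERVER_IP is a placeholder for your XL Deploy server's address.

```shell
# Allow SSH (port 22) only from the XL Deploy server; drop other SSH traffic.
iptables -A INPUT -p tcp -s XLD_SERVER_IP --dport 22 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP
```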