[SOL-132] General system requirements for Exasol clusters Created: 17.08.2014  Updated: 20.08.2020  Resolved: 20.08.2020

Status: Obsolete
Project: Solution Center
Component/s: EXASolution Cluster
Affects Version/s: EXASolution 4.2.1, EXASolution 5.0, EXASOL 6.0.0
Fix Version/s: None

Type: Explanation
Reporter: Captain EXASOL Assignee: Captain EXASOL
Labels: None

Attachments: PNG File logical_exasuite_net_priv_public.png    
Issue Links:

Note: This solution is no longer maintained. For the latest information, please visit our Documentation:



This article outlines the general system and database resources that are required to support Exasol.

1. Network

Every Exasol cluster node is wired to two distinct network types:

  • the "private" network is used for cluster-internal communication
  • the "public" network is used for client-access connections

It is possible to use multiple private networks, either in active/passive mode for failover or in active/active mode for link bundling at the application level.

Multiple public networks can likewise be used in active/passive mode for failover, but not in active/active mode for link bundling.
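On Linux, active/passive bundling of two private links can be sketched with an `active-backup` bond. This is purely illustrative — EXASuite configures its networks through its own administration tools, and the interface names `eth2`/`eth3` are assumptions:

```shell
# create an active/passive bond (mode=active-backup) from two private-network NICs;
# miimon=100 checks link state every 100 ms
ip link add bond0 type bond mode active-backup miimon 100
ip link set eth2 down && ip link set eth2 master bond0
ip link set eth3 down && ip link set eth3 master bond0
ip link set bond0 up   # one link carries traffic; the other takes over on failure
```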

a. Private network

Isolation: It is essential that every private network is fully separated from every other network. No traffic may pass in or out, and only the dedicated interfaces of the cluster nodes may be wired to this network. In particular, a cluster must never exchange traffic with the private networks of other clusters.

Directness: The private network is used to boot and configure the nodes, to continuously exchange vitality and configuration information, and to synchronize the database payload. The nodes must therefore be directly connected to this layer 2 network (VLAN), and traffic on it must not be filtered.

Network management: The private network is managed through the cluster administration interface. IP addresses are assigned automatically through DHCP from a reserved, fixed address space.

b. Public network

Public networks are used by clients to access the database instances (ODBC, JDBC, EXAplus) and the administration interfaces.

IP addresses for a cluster should be assigned consecutively and without gaps. For example (illustrative addresses):

  • 10.0.0.10 = license server
  • 10.0.0.11 = db node 1
  • 10.0.0.12 = db node 2
  • 10.0.0.13 = db node 3

Clusters may share a public network as long as the IP addresses differ.
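The consecutive-assignment rule above can be sketched in a few lines of Python; the starting address and node count are placeholder values:

```python
import ipaddress

def assign_cluster_ips(first_ip: str, num_db_nodes: int) -> dict:
    """Assign consecutive IPs: the license server first, then the db nodes."""
    base = ipaddress.ip_address(first_ip)
    plan = {"license server": str(base)}
    for i in range(1, num_db_nodes + 1):
        plan[f"db node {i}"] = str(base + i)  # IPv4Address supports int offsets
    return plan

for role, ip in assign_cluster_ips("10.0.0.10", 3).items():
    print(f"{ip} = {role}")
```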

If the servers are equipped with out-of-band management interfaces, it is best practice to make them available on the public network (layer 3) and to integrate them into the web administration interface of the cluster.

c. Switch configuration

The following features must be disabled on the switch ports connected to the cluster:

  • Energy-Efficient Ethernet (EEE) ("no interface A1-A24 energy-efficient-ethernet")
  • Flow-Control ("no interface A1-A24 flow-control")


2. Server Hardware

A list of standard and certified hardware is referenced in the EXASOL knowledge center at http://www.exasol.com/en/knowledge-center/systemdb-administration/hardware/.

General requirements
  • CPU
EXASuite runs only on 64-bit Intel platforms with CPUs that support SSSE3 (Xeon Woodcrest or later).
  • Firmware Interface
EXASuite currently supports only the classic Basic Input/Output System (BIOS) firmware interface. Deactivate the Unified Extensible Firmware Interface (UEFI) if necessary.
  • Network
    • Depending on the network topology as explained above, the servers need to be equipped with two or more dedicated 1- or 10-Gbit Ethernet network interface cards.
    • Intel network chipsets are known to work best although chipsets from other manufacturers may work as well.
  • Storage Subsystem
    • SAS (Serial Attached SCSI) is the preferred bus technology and hard drive interface
    • SSDs (Solid-State Drives) are not supported
    • Use of hardware RAID controllers with RAID-1 pairs (mirroring) is recommended
  • Out-of-band Management
    • The preferred protocol to connect to Lights-out-Management interfaces is IPMI (Intelligent Platform Management Interface, version 2.0).
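As an illustration, a typical IPMI 2.0 session with the widely used `ipmitool` CLI looks like this (host address and credentials are placeholders):

```shell
# query the power state of a node over the LAN ("lanplus" = IPMI v2.0 / RMCP+)
ipmitool -I lanplus -H 10.0.0.110 -U admin -P secret chassis power status

# power-cycle a hung node remotely
ipmitool -I lanplus -H 10.0.0.110 -U admin -P secret chassis power cycle
```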

Types of Cluster Nodes

Exasol clusters are composed of one or more database nodes and at least one management node (called the "license server"). Database nodes are the powerhouse of a cluster and operate both the Exasol database instances and the EXAStorage volumes.

Database nodes are expected to be equipped with homogeneous hardware. The Exasol database and EXAStorage are designed and optimized to distribute processing load and database payload equally across the cluster. Heterogeneous hardware (especially differences in RAM and disk sizes) may cause undesired effects ranging from poor performance to service disruptions.

License servers are the only nodes of a cluster that boot from local disk. They are installed from an EXASuite installation ISO image.

Database nodes are installed and booted from the license server over the network via PXE (DHCP + TFTP).
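A minimal PXE boot service combining DHCP and TFTP can be sketched with dnsmasq. This is purely illustrative — EXASuite ships and manages its own boot services, and the interface name, address range, and paths here are assumptions:

```
# /etc/dnsmasq.conf (sketch)
interface=eth2                        # private-network interface of the license server
dhcp-range=10.1.0.11,10.1.0.200,12h   # reserved, fixed address pool
dhcp-boot=pxelinux.0                  # boot loader handed to PXE clients
enable-tftp
tftp-root=/srv/tftp                   # must contain pxelinux.0 and the boot images
```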

a. License Server

Minimum hardware requirements:

  • Single-socket quad-core Intel CPU
  • 4 GB RAM
  • 2x 300 GB hard disks
  • 2x network adapters with at least 1 Gbit/s
  • DVD drive for installation (alternatively: virtual media on lights-out management)
  • Recommended:
    • Hardware RAID controller
    • Lights-out management interface

BIOS settings

  • Boot order: DVD, hard disk drive
  • Power management: maximum performance (static)
  • Processor options: hyper-threading enabled

b. Database Node

Minimum hardware requirements:

  • 2x quad-core Intel CPU (Xeon)
  • 16 GB RAM (32 GB RAM recommended)
  • Hardware RAID controller
    • 2x 128 GB SAS hard disks in RAID-1 (operating system)
    • 4x 300 GB SAS hard disks in RAID-1 pairs (database payload)
  • 2x network adapters with at least 1 Gbit/s (PXE enabled on NET-0)
  • Recommended: lights-out management interface
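Since RAID-1 mirroring halves the raw disk capacity, the usable space of the minimum layout above works out as a simple back-of-envelope calculation:

```python
def usable_capacity_gb(disks: int, disk_size_gb: int) -> int:
    """Usable capacity of disks grouped into RAID-1 (mirrored) pairs."""
    if disks % 2 != 0:
        raise ValueError("RAID-1 needs disks in pairs")
    return (disks // 2) * disk_size_gb

# minimum database-node layout from the list above
os_space = usable_capacity_gb(2, 128)       # operating system: 128 GB usable
payload_space = usable_capacity_gb(4, 300)  # database payload: 600 GB usable
print(os_space, payload_space)  # → 128 600
```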

BIOS settings

  • Boot order: PXE (network boot) from NET-0
  • Power management: maximum performance (static)
  • Processor options: hyper-threading enabled, virtualisation enabled

Note on CPU: The clock speed and number of cores you need for your system depend heavily on the planned workload. An analytic batch system benefits from faster CPUs, while a higher number of cores is recommended for systems with many parallel processes (concurrent connections).

Category 1: Cluster Administration - Hardware
Category 2: Cluster Administration - Installation
Generated at Tue Sep 28 21:44:19 CEST 2021 using Jira 7.13.18#713018-sha1:e1230154f8ff8cc9272975bf568fc732e806fd68.