User Guide

System Installation

In this section you will find guidelines to download, install, and set up NetEye in different environments: as a Single Node or as a Cluster, and on Satellites if necessary.

NetEye 4 is available as an ISO image. Please check section Acquiring NetEye ISO Image for download instructions. The remainder of this section consists of installation directions and is organized as follows.

Section Install on VMware guides you through the installation of the NetEye ISO image in the most popular virtualisation environments; Section Single Nodes and Satellites provides the basic configuration for Single Nodes and Satellites; Section Cluster Nodes gives instructions for NetEye Clusters; finally, Section Satellite Nodes Only contains directions specific to Satellite Nodes.

If your NetEye Node does not have direct access to the Internet and must instead pass through a proxy to reach it, you need to configure the software running on NetEye accordingly, as explained in Section Nodes behind a Proxy.

Acquiring NetEye ISO Image

All the NetEye 4 ISO images can be found on the NetEye download site. To be sure you have downloaded a valid image, follow the verification procedure below.

Import the public GPG key

Go to the Downloads section of the NetEye blog, then move to NetEye -> GPG public key and save public-gpg-key. From the CLI, extract the archive and then import the key with the following command:

# gpg --import public.gpg

Verify now the imported key:

# gpg --fingerprint net.support@wuerth-phoenix.com

If the fingerprint matches the one reported on the NetEye blog, you have the right key installed on your system.
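Comparing the fingerprint by eye is error-prone, so you may want a small helper. The following is a convenience sketch of our own (the function name is not part of NetEye); it normalizes spacing and letter case before comparing, so the grouped form shown on the blog matches the one printed by gpg:

```shell
# fingerprints_match EXPECTED ACTUAL
# Compare two GPG fingerprints, ignoring spacing and letter case.
fingerprints_match() {
    local a b
    a=$(printf '%s' "$1" | tr -d ' ' | tr 'a-f' 'A-F')
    b=$(printf '%s' "$2" | tr -d ' ' | tr 'a-f' 'A-F')
    [ "$a" = "$b" ]
}

# Example usage (the expected value must come from the NetEye blog):
# fingerprints_match "<fingerprint from the blog>" "<fingerprint printed by gpg>" \
#     && echo "key OK"
```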

Download and Verify the ISO

NetEye is shipped as ISO images. After the contract is signed, the Support Team will provide a link to the Würth Phoenix repository containing the images and supporting material.

From the link provided you will need to download the following files.

  • The desired ISO file (always download the most recent one)

  • The sha256sum.txt.asc file

Save both files in the same directory, then proceed to check that they are not corrupted: from the CLI, move to the directory where the files have been saved, then execute the following commands.

To verify the sha256sum.txt.asc file execute:

# gpg --verify sha256sum.txt.asc

The output will look something like this:

gpg: Signature made Tue 29 Sep 2020 03:50:01 PM CEST
gpg:               using RSA key B6777D151A0C0C60
gpg: Good signature from "Wuerth Phoenix <net.support@wuerth-phoenix.com>" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg:          There is no indication that the signature belongs to the owner.
Primary key fingerprint: E610 174A 971E 5643 BC89  A4C2 B677 7D15 1A0C 0C60

Once you have verified the signature of the sha256sum.txt.asc file, you can then verify the ISO file with the following command:

# sha256sum -c sha256sum.txt.asc 2>&1 | grep OK

The output will look something like this:

# ./neteye4.23-rhel8.stable.iso: OK

At this point the ISO file is verified and it is ready to be used. In case the output is different from OK, the ISO image may be corrupted and needs to be downloaded once more.

Note

Both the ISO and the sha256sum.txt.asc files are generated nightly, so make sure to always download both of them each time you need them.

See also

More details about GPG and the verification process can be found on our blog:

https://www.neteye-blog.com/2020/12/how-to-verify-the-integrity-of-your-neteye-4-iso-image/

Install on Physical Server

To install NetEye on a physical server, burn the ISO to a DVD-ROM or write it to a USB stick, then turn the server on and follow the instructions in the next sections to install a Single Node, a Satellite, or a Cluster Node. In the case of a Satellite, also follow the additional directions in Section Satellite Nodes Only.

Install on VMware

To create a virtual machine and install NetEye, start VMware Workstation, click File / New Virtual Machine, follow these steps, and click Next whenever necessary.

  1. Select Custom (advanced) and leave the defaults as they are

  2. Select ISO image and then the NetEye ISO you want to install. You might see the warning Could not detect which operating system is in this image. You will need to specify which operating system will be installed: simply ignore it.

  3. Select Linux as the Guest OS, and specify Red Hat Linux in the dropdown menu

  4. Name the VM as you prefer, and select the location to store it

  5. Specify the number of processors

  6. Specify the amount of memory

  7. Select the type of connection according to your needs

  8. Select VMware Paravirtual as SCSI controller

  9. Select SATA or SCSI as virtual disk type

  10. Select Create a new virtual disk

  11. Specify the disk capacity

  12. Rename the disk to a name you prefer

  13. Review the configuration you just created, deselect Automatically start the VM, and click Finish

You can now proceed to section Powering up the VM.

Install on KVM

To create a KVM virtual machine and install NetEye, start the Virtual Machine Manager, click File / New Virtual Machine, follow these steps, and click Forward whenever necessary.

  1. Select Local install media

  2. Choose the NetEye ISO to install, uncheck Automatically detect from the installation media/source under Choose the operating system you are installing, and then select RedHat 8 for the OS

    Hint

    You can also start typing in the text box to see the available OSs, or run osinfo-query os in your terminal to see all available variants.

  3. Specify the amount of memory and the number of processors

  4. Specify the disk capacity

  5. Give the VM the name you prefer, review the configuration and untick Customize configuration before install

  6. In the configuration panel that appears, go to Boot Options and check that Disk1 and CDRom are both selected

  7. In the next configuration panel that appears, go to VirtIO Disk 1, expand the Advanced options, and change the disk bus to SATA

  8. Click on Apply to propagate your changes

  9. Click on Begin installation to start the NetEye installation

You can now proceed to section Powering up the VM.

Install on HyperV

To create a Hyper-V virtual machine and install NetEye on it, start Hyper-V Manager, select File / New Virtual Machine, follow these steps, and click Next whenever necessary.

  1. Specify the name of your new VM and where to store it

  2. Leave the defaults for Specify Generation

  3. Specify the amount of memory

  4. Select Default switch as the connection adapter

  5. Specify the disk capacity

  6. Specify the ISO that you want to install

  7. Review your settings, then click Finish

  8. Before firing your new VM up, look at the list of startup media in the BIOS settings. Be sure that the CD is in the list

  9. Click Action / Start to start the virtual machine

You should now proceed to section Powering up the VM.

Powering up the VM

At this point, your VM has been successfully created, and you can power it up. After a few seconds, the NetEye logo will appear, and a countdown to automatically initiate the installation will start.

After ten seconds, if no key is pressed, the installation process starts. The installation process will take several minutes to complete, after which the VM will reboot from the internal hard disk.

At the end of the boot process, you will be prompted to enter your credentials (root/admin). If the login is successful, you can now start to configure your NetEye VM.

Single Nodes and Satellites

This section describes how to set up your NetEye virtual machine from scratch, and presents the NetEye 4 monitoring environment.

System Setup

Once NetEye is installed, you will need to access the VM via a terminal or ssh. The first time you log in, you will be required to change your password to a non-trivial one. To maintain a secure system, you should do this as soon as possible. The next steps are to configure your network, update NetEye, and complete the installation.

This procedure is split into two parts: the first part applies to both Single Nodes and Satellite Nodes, while the second applies to Single Node installations only. To complete the setup of Satellite Nodes, refer to Configuration of a Satellite.

Note

Curly braces ({ }) mark values that must be inserted according to the local infrastructure. For example, in {hostname.domain}, hostname should be replaced with the actual hostname given to the node and domain with the local domain.

Part 1: Single Nodes and Satellite Nodes

Step 1: Define the host name for the NetEye instance

From NetEye 4.25 onwards, when you run neteye_secure_install the hostname is validated against the set of requirements listed below.

Note

A hostname identifies and allows contacting the host at the network level. It should not be confused with the Satellite name, which replicates the name of the configuration file created on the Master when a new Satellite is created, and follows its own naming convention. Unlike the hostname, a Satellite name is a unique host identifier within a Tenant, and is used in NetEye to identify a particular Satellite.

Existing NetEye installations whose hostname does not meet the requirements below will only receive a warning during neteye_secure_install and do not require a hostname change:

  • Only letters are allowed in the first position

  • The hostname must consist only of letters [A-Za-z], numbers [0-9], hyphens (-), and dots (.)

  • The hostname must end with a letter [A-Za-z] or a number [0-9]

  • The hostname must not contain two consecutive dots

  • The hostname must not contain two hyphens in the third and fourth position:

    • valid hostnames: neteye-test-01, neteye-production, neteye-node1, master, master.lan

    • invalid hostnames: ne--teye, -neteye-, neteye_01, 3neteye, @neteye, neteye-.com, master..lan

# hostnamectl set-hostname {hostname.domain}
# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
{ip}        {hostname.domain} {hostname}
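The hostname rules above can be checked locally before running neteye_secure_install. The following is a sketch of our own that mirrors the documented rules; it is not the exact validation code NetEye runs, and NetEye may apply additional checks:

```shell
# validate_hostname NAME
# Returns 0 if NAME satisfies the documented hostname rules, 1 otherwise.
validate_hostname() {
    case "$1" in
        [!A-Za-z]*)       return 1 ;;  # first character must be a letter
        *[!A-Za-z0-9.-]*) return 1 ;;  # only letters, numbers, hyphens, dots
        *[!A-Za-z0-9])    return 1 ;;  # must end with a letter or a number
        *..*)             return 1 ;;  # no two consecutive dots
        ??--*)            return 1 ;;  # no "--" in third and fourth position
    esac
    return 0
}

# Example:
# validate_hostname neteye-test-01 && echo valid
```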

Step 2: Define the DNS configuration

# vim /etc/resolv.conf
; generated by /usr/sbin/dhclient-script
search {domain}
nameserver {ip1}
nameserver {ip2}

Step 3: Configure the network

# vim /etc/sysconfig/network-scripts/ifcfg-{interface}
# Generated by parse-kickstart
IPV6INIT="yes"
DHCP_HOSTNAME="{hostname}"
IPV6_AUTOCONF="yes"
BOOTPROTO="static" # To configure according to the client
DEVICE="{interface}"
ONBOOT="yes"
IPADDR={ip}  # Configure these three only if static
NETMASK={mask}
GATEWAY={gw}

Step 4: Generate dnf mirror configuration

Starting from NetEye 4.30, every repository requires an internal mirrorlist. You have to create them by running:

# neteye rpmmirror apply

Step 5: Set the node properties for the Red Hat registration

Warning

During this step you’ll need to insert the Red Hat Customer ID, the Contract number, the node type and the node deployment (dev, prod, etc.). If you don’t have this information or are not sure what to insert, please contact NetEye support. To learn more about this command, please refer to neteye node tags set.

# neteye node tags set

Step 6: Define an SSH key for secure communications with Satellites

# ssh-keygen -t rsa

Step 7: Set the local time zone

Find the time zone that best matches your location:

# timedatectl list-timezones

Set it system-wide using the following command:

# timedatectl set-timezone {Region}/{City}

Then update PHP to use that same location:

  • Create a file named /neteye/local/php/conf/php.d/30-timezone.ini

  • Insert the following text in that file: date.timezone = {Region}/{City}

  • Restart the php-fpm service: # systemctl restart php-fpm.service
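The PHP part of this step can be scripted. A minimal sketch of our own follows (the helper name is hypothetical; the target path is the one listed above, and Europe/Rome is a placeholder time zone):

```shell
# set_php_timezone ZONE [FILE]
# Write the PHP date.timezone setting; FILE defaults to the NetEye php-fpm
# drop-in path mentioned above.
set_php_timezone() {
    local zone="$1"
    local file="${2:-/neteye/local/php/conf/php.d/30-timezone.ini}"
    printf 'date.timezone = %s\n' "$zone" > "$file"
}

# Example:
# timedatectl set-timezone Europe/Rome
# set_php_timezone Europe/Rome
# systemctl restart php-fpm.service
```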

Note

If you are setting up a NetEye Satellite, skip the next section and make sure to carry out the steps in section Satellite Nodes Only.

Part 2: Single Nodes Only

To complete the setup of Satellite Nodes, jump to section Configuration of a Satellite.

Step 8: Make sure required services are running

# systemctl start grafana-server.service mariadb.service

Step 9: Complete NetEye setup

Run the secure install script:

# /usr/sbin/neteye_secure_install

If you would like to verify that NetEye is correctly installed, you can bring up all services and check their current status with the following commands.

# neteye start

# neteye status

Step 10: Update NetEye

To finalize the process, go to Update Procedure to update your installation.

Root User Password

When NetEye is first installed, the system generates a unique, random password to use when logging in to the web interface. The password is saved in a hidden file in the root directory of the machine: /root/.pwd_icingaweb2_root.

The first time you log in to the NetEye web interface, you will need to insert the following credentials:

  • User:   root

  • Password: The password you will find inside the file

    /root/.pwd_icingaweb2_root.

We suggest that you change the root password to a strong one, with at least the following characteristics:

  • At least eight characters long (the more characters, the stronger the password)

  • A combination of letters, numbers and symbols (@, #, $, %, etc.).

  • Both uppercase and lowercase letters

To change your password, click on the “gear” icon at the bottom left of NetEye, enter and confirm the new password, then click the Update Account button.

Cluster Nodes

NetEye 4’s clustering service is based on the RedHat 8 High Availability Clustering technologies:

  • Corosync: Provides group communication between a set of nodes, application restart upon failure, and a quorum system.

  • Pacemaker: Provides cluster management, lock management, and fencing.

  • DRBD: Provides data redundancy by mirroring devices (hard drives, partitions, logical volumes, etc.) between hosts in real time.

Cluster resources are typically quartets consisting of an internal floating IP, a DRBD device, a filesystem, and a (systemd) service.

Once you have installed clustering services according to the information on this page, please turn to the Cluster Architecture page for more information on configuration and how to update.

See also

For more information about RedHat Cluster, check the official RedHat’s documentation on High Availability Clusters.

Prerequisites

A NetEye 4 cluster must consist of between 2 and 16 identical servers (“Nodes”) running RHEL 8; each node must satisfy the following requirements:

  • Networking:

    • Bonding across NICs must be configured

    • A dedicated cluster network interface, named exactly the same on each node

    • One external static IP address which will serve as the external Cluster IP

    • One IP Address for each cluster node (i.e., N addresses)

    • One virtual (internal) subnet for internal floating service IPs (this subnet MUST NOT be reachable from any machine except cluster nodes, as it poses a security risk otherwise)

    • All nodes must know the internal IPs (Virtual IP) of all other nodes, which must be stored in file /etc/hosts

    • All nodes must be reachable over the internal network

    • The Corporate Network’s NIC must be in firewall zone public, while the Heartbeat Network’s NIC must be in firewall zone trusted

  • Storage:

    • At least one volume group with enough free storage to host all service DRBD devices defined in Services.conf

  • In general, each node in a NetEye Cluster…

    • must have SSH keys generated for the root user

    • must store the SSH keys of all nodes in file /root/.ssh/authorized_keys

    • needs Internet connectivity, including the ability to reach repositories of Würth Phoenix and Red Hat

    • must have the dnf group neteye installed

    • must have the tags set with the command neteye node tags set. To know more about this command please refer to neteye node tags set

    • must be subscribed with a valid Red Hat Enterprise Linux license. This can be done with the command neteye node register. To know more about this command please refer to neteye node register

    • must have the latest operating system and NetEye 4 updates installed

    • if a virtual Cluster Node, its RAM memory must be completely reserved

    • requirements for characters that can be used in the hostnames are the same for Single and Satellite Nodes and can be checked in the installation procedure

See also

Section Cluster Requirements and Best Practices contains more detailed requirements for NetEye cluster installation.

Installation Procedure

The first step of a NetEye Cluster installation is to install the NetEye ISO image; then, for each Node, follow Part 1: Single Nodes and Satellite Nodes of the installation procedure. Next, copy the SSH key of each node into the /root/.ssh/authorized_keys file of all the other Nodes. To accomplish this, you can run the following command on each node:

cluster# ssh-copy-id -i /root/.ssh/id_rsa.pub root@172.27.0.3

Repeat the command for each Node, replacing 172.27.0.3 with the IP address of each of the other Nodes.
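Instead of repeating the command by hand, the distribution can be wrapped in a small loop. This is a sketch of our own (the function name and the IPs in the example are placeholders, not part of NetEye):

```shell
# distribute_keys NODE_IP...
# Run ssh-copy-id against every other node; COPY_CMD can be overridden
# (e.g. COPY_CMD=echo for a dry run).
distribute_keys() {
    local copy_cmd="${COPY_CMD:-ssh-copy-id -i /root/.ssh/id_rsa.pub}"
    local node
    for node in "$@"; do
        $copy_cmd "root@${node}"
    done
}

# Example (placeholder IPs):
# distribute_keys 172.27.0.3 172.27.0.4 172.27.0.5
```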

Once done, depending on the type of nodes you are installing in your cluster, select one of the following procedures: Cluster Services Configuration, NetEye Service Configuration, or Single Purpose Nodes.

Once done, if your NetEye Cluster setup includes satellites, please make sure to carry out the steps in section Satellite Nodes Only after each Satellite Node’s installation.

Basic Cluster Installation

This task consists of two steps:

  1. Copy the cluster configuration JSON template from /usr/share/neteye/cluster/templates/ClusterSetup.conf.tpl to /etc/neteye-cluster and edit it to match your intended setup. You will be required to fill in the following fields:

    Key               Type    Description
    ----------------  ------  --------------------------------------------------
    ClusterInterface  str     The name of the internal cluster network interface
    Hostname          str     Cluster’s FQDN that will resolve to ClusterIp
    ClusterIp         str     Floating IP address reserved for the cluster
    ClusterCIDR       int     Netmask in CIDR notation (8-32)
    Nodes             list    List of Operative nodes (must be at least 2)
    VotingOnlyNode    object  (Optional) Definition of the Voting-only node
    ElasticOnlyNodes  list    (Optional) List of Elastic-only nodes

    All the nodes specified in Nodes, VotingOnlyNode and ElasticOnlyNodes must have all of the following fields:

    Key           Type  Description
    ------------  ----  --------------------------------------------------
    addr          str   The internal IP address of the node
    hostname      str   Internal FQDN of the node
    hostname_ext  str   External FQDN of the node
    id            int   A unique, progressive number (not required for
                        ElasticOnlyNodes)

    Note

    Take into account that the first node defined in the Nodes array in the /etc/neteye-cluster file will act as the NetEye Active Node during the update and upgrade procedures.
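For reference, a filled-in /etc/neteye-cluster might look like the following sketch; all hostnames, IP addresses and the interface name are placeholders to be replaced with your own values:

```json
{
  "ClusterInterface": "eth1",
  "Hostname": "my-neteye.example.com",
  "ClusterIp": "192.168.47.10",
  "ClusterCIDR": 24,
  "Nodes": [
    {
      "addr": "192.168.47.1",
      "hostname": "my-neteye-01.neteyelocal",
      "hostname_ext": "my-neteye-01.example.com",
      "id": 1
    },
    {
      "addr": "192.168.47.2",
      "hostname": "my-neteye-02.neteyelocal",
      "hostname_ext": "my-neteye-02.example.com",
      "id": 2
    }
  ],
  "VotingOnlyNode": {
    "addr": "192.168.47.3",
    "hostname": "my-neteye-03.neteyelocal",
    "hostname_ext": "my-neteye-03.example.com",
    "id": 3
  }
}
```

The VotingOnlyNode and ElasticOnlyNodes entries can be omitted if your cluster has none.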

  2. Run the cluster setup command neteye cluster install to install a basic Corosync/Pacemaker cluster with a floating ClusterIP. If any issue prevents the script from completing correctly, you can run the same command again adding the --force option to override; note that this will destroy any existing cluster on the nodes.

    cluster# neteye cluster install
    

    Note

    Any unrecognised option given to the neteye cluster install command will be passed to the internal Ansible installation command.

  3. At this point, all cluster nodes must be online; hence, as a last step, verify that the Cluster installation was completed successfully by running the command:

    cluster# pcs status | grep -A4 "Node List"
    

    This command returns something like:

    Node List:
    * Online: [ my-neteye-01.example.com my-neteye-02.example.com ]
    

    If the installation includes also a Voting-only Node, check that it is online by running:

    cluster# pcs quorum status
    

    The bottom part of the output is similar to the following snippet:

    Membership information
    ----------------------
    Nodeid Votes Qdevice Name
    1   1  A,V,NMW my-neteye-01.example.com (local)
    2   1  A,V,NMW my-neteye-02.example.com
    0   1  Qdevice
    

    The last line shows that the Voting-only Node is correctly online.

Cluster Fencing Configuration

This section describes the procedures you can use to configure, test, and manage the fence devices in a cluster. Fencing is useful when a node becomes unresponsive but may still be accessing data. The only way to be certain that your data is safe is to fence the node using STONITH. STONITH is an acronym for “Shoot The Other Node In The Head”, and it protects your data from being corrupted by rogue nodes or concurrent access. Using STONITH, you can be certain that a node is truly offline before allowing the data to be accessed from another node.

See also

For more complete general information on fencing and its importance in a Red Hat High Availability cluster, see Fencing in a Red Hat High Availability Cluster.

  1. Initial Setup

    • Fencing is enabled by setting a cluster property. However, it is recommended to keep fencing disabled until it is configured properly:

      pcs property set stonith-enabled=false
      pcs stonith cleanup
      
    • Install ipmilan fence device on each node

      yum install fence-agents-ipmilan
      
    • Test that the IDRAC interface is reachable on port 623 on each node

      nmap -sU -p623 10.255.6.106
      
  2. IDRAC Configuration

    • Enable IPMI access to IDRAC: IDRAC Settings > Connectivity > Network > IPMI Settings

      • Enable IPMI Over LAN: Enable

      • Channel Privilege Level Limit: Administrator

      • Encryption Key: <mandatory random string; even 00000000 is accepted>

    • Create a new user with username and password of your choice, Read-only privileges on console but administrative privileges on IPMI. (IDRAC Settings > Users > Local Users > Add)

      • User Role: Read Only

      • Login to IDRAC: enable

    • Advanced Settings

      • LAN Privilege Level: Administrator

    To test that the settings were properly applied to the new user, you can check the status from the NetEye machine:

    ipmitool -I lanplus -H <IDRAC IP> -U <your_IPMI_username> -P <your_IPMI_password> -y <your_encryption_key> -v chassis status
    
  3. PCS Configuration

    To obtain information about your fence device run:

    pcs stonith list
    pcs stonith describe fence_idrac
    

    Create a fence device

    The following command creates a fence device.

    pcs stonith create <fence_device_name> fence_idrac ipaddr="<ip or fqdn>" pcmk_delay_base="5" lanplus="1" login="IPMI_username" passwd="IPMI_password" method="onoff" pcmk_host_list="<host_to_be_fenced>"
    

    Where:

    • fence_device_name: device name of your choice (e.g. idrac_node1)

    • fencing_agent: in this case fence_idrac, you can obtain this with pcs stonith list

    • ipaddr: IDRAC IP or FQDN

    • pcmk_delay_base: defaults to 0; it must differ between nodes by 5 seconds or more, depending on how fast IDRAC can initiate a shutdown

    • lanplus: always set to 1, otherwise it will not connect

    • login: the IPMI username (created before)

    • passwd: the IPMI password (created before)

    • passwd_script: an alternative to passwd; if available, you should use this instead of a plain-text password

    • method: you should usually use ‘onoff’ if available; otherwise a restart (power off/power on) is not guaranteed

    • pcmk_host_list: the list of hosts controlled by this fence device

    Warning

    In a 2-node cluster it may happen that both nodes are unable to communicate and each tries to fence the other, causing a reboot of both nodes. To avoid this, set a different pcmk_delay_base parameter for each fence device; this way one of the nodes takes priority over the other.

    It is strongly suggested to set this parameter for EVERY cluster regardless of the number of its nodes.

    Note

    If possible, use passwd_script instead of passwd, as anybody with access to PCS can see the IPMI password. A password script is a simple bash script that echoes the password; it is also helpful to avoid escaping problems, e.g.:

    #!/bin/bash
    echo "my_secret_psw"

    Only the root user should have read privileges on it (chmod 500). You must put this script on all nodes, e.g. in /usr/local/bin.

    Example:

    pcs stonith create idrac_node1 fence_idrac ipaddr="idrac-neteye06.intra.tndigit.it" lanplus="1" login="neteye_fencing" passwd_script="/usr/local/bin/fencing_passwd.sh" method="onoff" pcmk_host_list="node1.neteyelocal"
    

    If your fence device has been properly configured, running pcs status should show the fencing device in status Stopped; otherwise, check /var/log/messages.

    pcs stonith show <fence device> lets you view the current setup of the device.

    Now you have to create a fence device for each node of your cluster (remember to increase the delay for each one).
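To illustrate the staggered delays, the following sketch of our own prints one creation command per node, increasing pcmk_delay_base by 5 seconds each time. It only prints the commands so you can review them before running them; all device names, IDRAC addresses and credentials are placeholders:

```shell
# make_fence_cmds HOST...
# Print a "pcs stonith create" command per host, staggering pcmk_delay_base.
make_fence_cmds() {
    local delay=5 i=1 host
    for host in "$@"; do
        printf 'pcs stonith create idrac_node%d fence_idrac ipaddr="idrac-node%d.example.com" pcmk_delay_base="%d" lanplus="1" login="neteye_fencing" passwd_script="/usr/local/bin/fencing_passwd.sh" method="onoff" pcmk_host_list="%s"\n' \
            "$i" "$i" "$delay" "$host"
        delay=$((delay + 5))
        i=$((i + 1))
    done
}

# Example:
# make_fence_cmds node1.neteyelocal node2.neteyelocal
```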

    Note

    If you need to update a fence device’s properties, use the update command, e.g.:

    pcs stonith update <fence device> property="value"
    
  4. Only for ‘onoff’ method

    Edit the power key setting in /etc/systemd/logind.conf:

    HandlePowerKey=ignore
    

    To do it programmatically:

    sed -i 's/#HandlePowerKey=poweroff/HandlePowerKey=ignore/g' /etc/systemd/logind.conf
    
  5. Increase totem token timeout

    Increasing the totem token timeout to at least 5 seconds will avoid unwanted fencing (the default is 1s); on clusters with virtual nodes it should be set to 10 seconds. Setting the timeout to more than 10 seconds is not recommended.

    pcs cluster config update totem token=10000
    

    To check if the value has been updated:

    corosync-cmapctl | grep totem.token
    

    Warning

    Stonith acts after the totem token expires, so it may take 30-40 seconds to fence a node

  6. Testing

    To fence a device you can use the following command:

    pcs stonith fence <node1.neteyelocal>
    

    Warning

    The host will be shut down. Fencing should be tested on a node in standby.

  7. Enable fencing

    To enable fencing, set the property to true:

    pcs property set stonith-enabled=true
    pcs stonith cleanup
    

    Warning

    If fencing fails, the cluster freezes and resources will not be relocated to a different node. Always disable fencing during updates/upgrades, and disable fencing on virtual machines before shutting them down: a fence device may otherwise restart a VM that has been shut down. A restart of a physical node may require several minutes, so please be patient.

Cluster Services Configuration

  • For each service that you want to run on a NetEye Cluster, adjust all necessary options, including IPs, ports, DRBD devices, sizes, in the various *.conf.tpl files found in directory /usr/share/neteye/cluster/templates/. In a typical configuration you need to update only the following options.

    • ip_pre, the prefix of the IP (e.g., 192.168.1 for 192.168.1.0/24), which will be used to generate the virtual IP for the resource

    • cidr_netmask, the CIDR of the internal subnet used by IP resources (e.g., 24 for 192.168.1.0/24).

  • Run the cluster_service_setup.pl script on each *.conf.tpl file starting from Services-core.conf.tpl:

    # cd /usr/share/neteye/scripts/cluster
    # ./cluster_service_setup.pl -c Services-core.conf.tpl
    

    The cluster_service_setup.pl script is designed to report the last command executed in case there were any errors. If you manually fix an error, you will need to remove the successfully configured resource template from Services.conf and re-run that command. Then you should re-execute the cluster_service_setup.pl script in order to finalize the configuration.

NetEye Service Configuration

  • Move all resources to a single node by running pcs node standby on all other nodes. This is only a first-time requirement, as many services require local write access during the initial setup procedure.

  • Run the neteye_secure_install script on the single active node

  • Run the neteye_secure_install script on every other node

  • Take all nodes out of standby by running

    cluster# pcs node unstandby --all
    
  • Set up the Director field API user on slave nodes (Director / Icinga Infrastructure / Endpoints)

Single Purpose Nodes

This section applies only if you are going to set up a Single Purpose Node, i.e., an Elastic-only or a NetEye Voting-only Node.

Both Elastic-only and Voting-only nodes have the same prerequisites and follow the same installation procedure as a standard NetEye Cluster Node.

After installation, a Single Purpose Node needs to be configured as Elastic-only or Voting-only: please refer to Section General Elasticsearch Cluster Information for guidelines.

Configuration of Tenants

This section will explain how you can configure the NetEye Tenants that are part of your infrastructure.

NetEye ships with a default master Tenant, which is defined even if no Tenants are used. All the data generated by the NetEye Master belongs to this Tenant.

You can create additional NetEye Tenants with the following command:

neteye# neteye tenant config create <tenant_name> --display-name "Tenant Name"

The --display-name is a mandatory option of the tenant creation command and specifies a user-friendly representation of the <tenant_name>.

For command options and advanced usage, including the modification of existing NetEye Tenants, please refer to neteye tenant.

For every newly created Tenant, a new role will appear in the Roles section of the NetEye Access Control. By default, the new role is automatically named following the syntax neteye_tenant_<tenant_name>, and inherits the Module permission settings of the Tenant it was created for.

Note

It is important to remember that this role should not be altered. In order to assign a user to the Tenant, first create a new role that inherits the properties of the Tenant role, set the necessary permissions, and then assign this role to the user.

Satellite Nodes Only

A Satellite is a NetEye instance which depends on a main NetEye installation, the Master, and carries out tasks such as:

  • execute Icinga 2 checks and forward results to the Master

  • collect logs and forward them to the Master

  • forward data through NATS

  • collect data through Tornado Collectors and forward them to the Master to be processed by Tornado

The remainder of this section will list the prerequisites for Satellites, then lead you through the steps needed to configure a new NetEye Satellite.

For further information about NetEye Satellites, refer to Section Master-Satellite Architecture.

Prerequisites

The following list contains hard prerequisites that must be satisfied for the Satellite to operate properly.

  • Both the Master and the Satellite must be equipped with the same NetEye version

  • Satellites can be arranged in Tenants

  • Ensure that the Networking Requirements for NATS Leaf Nodes are satisfied

  • The Satellite must be able to reach the NetEye repositories at the URL https://repo.wuerth-phoenix.com/. Due to infrastructure limitations, in some cases a Satellite node can’t access them. To check this, run the following command on your Satellite:

    curl -I https://repo.wuerth-phoenix.com/rhel8/neteye-$DNF0
    

    If the status code received in the output is anything other than 200 OK, please open a ticket on the support portal.

  • The NATS connection between Master and Satellite is always initiated by the Satellite, not by the Master

  • If you are in a NetEye cluster environment, check that all resources are in Started status before proceeding with the Satellite configuration procedure

  • Never run the neteye_secure_install command on Satellite Nodes, because this would remove all Satellite configuration. You will therefore end up with a NetEye Single Node instead!

  • The NetEye Tenant which the Satellite belongs to must be present on the NetEye Master. Please refer to Configuration of Tenants for more information on how to create it.
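The repository reachability check from the list above can also be scripted. The following is a sketch of our own (the helper names are not part of NetEye); the parsing step is kept separate so it can be exercised without network access:

```shell
# http_status_code HEADERS
# Print the status code from the first line of raw HTTP response headers.
http_status_code() {
    printf '%s\n' "$1" | awk 'NR==1 {print $2}'
}

# check_repo URL
# Succeed only when the repository answers HTTP 200.
check_repo() {
    local headers
    headers=$(curl -sI "$1") || return 1
    [ "$(http_status_code "$headers")" = "200" ]
}

# Example:
# check_repo "https://repo.wuerth-phoenix.com/rhel8/neteye-$DNF0" || echo "open a ticket"
```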

Configuration of a Satellite

The configuration of a Satellite is carried out in two phases. Phase one consists of the basic networking setup, which can be carried out by following steps 1 to 4 (i.e., Part 1) of System Setup. Phase two consists of the remainder of this section.

We will use the following notation when configuring a NetEye Satellite.

  • the domain is example.com

  • the Tenant is called tenant_A

  • the Satellite is called acmesatellite

This notation will be used also for the Configuration of a Second Satellite in Existing Icinga2 Zone and whenever a NetEye Satellite configuration is mentioned.

In order to create a new Satellite (acmesatellite), on the Master create the configuration file /etc/neteye-satellites.d/tenant_A/acmesatellite.conf (the folder /etc/neteye-satellites.d/tenant_A/ is already present if you configured tenant_A as explained in Configuration of Tenants).

Note

The basename of the file, without the trailing .conf (i.e., acmesatellite), will be used as the Satellite’s unique identifier within the Tenant.

In single-Tenant environments where multi-tenancy is not expected to be introduced in the near future, you may want to create the Satellite in the master Tenant. Satellites belonging to the master Tenant belong to the same Tenant as the NetEye Master.

The configuration, including all optional parameters, should look similar to this excerpt.

{
  "fqdn": "acmesatellite.example.com",
  "name": "acmesatellite",
  "ssh_port": "22",
  "ssh_enabled": true,
  "icinga2_zone": "acme_zone",
  "icinga2_wait_for_satellite_connection": false,
  "icinga2_tenant_in_zone_name": true,
  "proxy": {
    "ssl_protocol": "TLSv1.2 TLSv1.3",
    "ssl_cipher_suite": "HIGH:!aNULL:!MD5:!3DES"
  }
}
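
Since the configuration file is plain JSON, a quick syntax check before generating the Satellite configuration catches typos early. The following is a minimal sketch, assuming python3 is available (it ships with RHEL 8); validate_conf is a hypothetical helper name:

```shell
#!/usr/bin/env bash
# Hypothetical helper: verify that a Satellite .conf file is valid JSON
# before feeding it to 'neteye satellite config create'.
validate_conf() {
  local conf="$1"
  if python3 -m json.tool "$conf" > /dev/null 2>&1; then
    echo "valid JSON: $conf"
  else
    echo "invalid JSON: $conf" >&2
    return 1
  fi
}

# Example, using this guide's notation:
# validate_conf /etc/neteye-satellites.d/tenant_A/acmesatellite.conf
```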

The configuration file of the Satellite must contain the following attributes:

  • fqdn: the Satellite’s Fully Qualified Domain Name

  • name: the Satellite name, which must coincide with the configuration file name

  • ssh_port: the port to use for SSH communication from Master to Satellite

    Hint

    You can specify a port different from the default 22 in case of custom SSH configurations.

  • ssh_enabled: if set to true, SSH connections from the Master to the Satellite can be established; otherwise they are not allowed and the Satellite’s configuration files must be copied manually from the Master

  • icinga2_zone: the Satellite’s Icinga 2 high availability zone. This parameter is optional; its default value is <tenant>_<satellite name>

  • icinga2_wait_for_satellite_connection: if set to false, the Satellite waits for the Master to open the connection. This parameter is optional; its default value is true.

  • icinga2_tenant_in_zone_name: if set to false, the Tenant’s name is not prepended to the Icinga 2 zone name. This parameter is optional; its default value is true.

    Note

    This parameter should be used only for existing multi-tenant installations. For this reason, its usage is strongly discouraged for new installations. If a multi-tenant installation is not required, please use the special Tenant master instead.

  • proxy.ssl_protocol: the set of protocols allowed in NGINX. This parameter is optional and its default value is TLSv1.2 TLSv1.3. Change this value to either improve security or to allow older protocols (for backward compatibility only).

  • proxy.ssl_cipher_suite: the cipher suite allowed in NGINX. This parameter is optional and its default value is HIGH:!aNULL:!MD5:!3DES. Change this value to either improve security or to allow older ciphers (for backward compatibility only).

  • icinga2_endpoint_log_duration: the amount of time for which replay logs are kept on connection loss. It corresponds to log_duration when defining Icinga 2 endpoints, as described in the Icinga 2 official documentation. This parameter is optional; if not set, the Icinga 2 default applies (i.e., 1d or 86400s).

Note

Remember to append the FQDN of the Satellite in /etc/hosts. If you are in a cluster environment you must change the /etc/hosts on each node of the cluster.
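
With this guide's notation, appending the entry could look like the sketch below; 192.0.2.10 is a placeholder IP address and add_satellite_host is a hypothetical helper (the grep guard avoids duplicate entries on repeated runs):

```shell
#!/usr/bin/env bash
# Hypothetical helper: append the Satellite to a hosts file unless an
# entry for its FQDN already exists. 192.0.2.10 is a placeholder IP.
add_satellite_host() {
  local ip="$1" fqdn="$2" name="$3" hosts_file="${4:-/etc/hosts}"
  if ! grep -q "$fqdn" "$hosts_file"; then
    echo "$ip  $fqdn  $name" >> "$hosts_file"
  fi
}

# Example: add_satellite_host 192.0.2.10 acmesatellite.example.com acmesatellite
```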

If you are installing a Satellite within a cluster, run the following command to synchronise the files /etc/neteye-satellites.d/* and /etc/neteye-cluster on all cluster nodes:

cluster# neteye config cluster sync

Generate the Satellite Configuration Archive and Configure the Master

To generate the configuration files for the acmesatellite Satellite and to reconfigure Master services, run the command below on the Master. It generates all the required configuration files for the Satellite and stores them in /root/satellite-setup/config/<neteye_release>/tenant_A-acmesatellite-satellite-config.

Warning

On a cluster, this command will temporarily put all the cluster resources in unmanaged state. This means that pcs will not take care of handling clustered services until a valid configuration is successfully created. If an error occurs during the execution of the neteye satellite config create command, the cluster is left in unmanaged state to avoid downtime. If this happens, you are required to:

  • fix the errors

  • run the command again

master# neteye satellite config create acmesatellite

The command executes the Master autosetup scripts located in /usr/share/neteye/secure_install_satellite/master/, automatically reconfiguring services to allow the interaction with the Satellite.

For example, the NATS Server is reconfigured to authorize leaf connections from acmesatellite, while streams coming from the Satellite are exported in order to be accessible from Tornado or Telegraf consumers.

In case the same name is used for more than one Satellite in different Tenants, the --tenant switch must be used to specify the desired Tenant.

master# neteye satellite config create acmesatellite --tenant tenant_A

Note

The command neteye satellite config create computes the resulting Icinga 2 Zone name at run-time, also validating the name in the process.

The resulting Zone, which can be different from the one specified via the icinga2_zone attribute, must be unique across all Tenants. In case the property is not satisfied, the neteye satellite config create command triggers an error, stopping the Satellite configuration.

A new Telegraf local consumer is also automatically started and configured for each Tenant, to consume metrics coming from the Satellites through NATS and to write them to InfluxDB. In our example, the Telegraf instance is called telegraf-local@neteye_consumer_influxdb_tenant_A.

Note

If you are in a cluster environment, an instance of the Telegraf local consumer is started on each node of the cluster, to exploit the NATS built-in load balancing feature called distributed queue. For more information about this feature, see the official NATS documentation.

The command also creates an archive containing all the configuration files, in order to easily move them to the Satellite. The archive can be found at /root/satellite-setup/config/<neteye_release>/tenant_A-acmesatellite-satellite-config.tar.gz.

Alternatively, configurations can be generated for all the Satellites of all Tenants defined in /etc/neteye-satellites.d/ by typing:

master# neteye satellite config create --all

Synchronize the Satellite Configuration Archive

The Satellite configuration is synchronised automatically if the attribute ssh_enabled is set to true, using the port defined by the attribute ssh_port, or the default SSH port (22) otherwise.

Note

If SSH is not enabled, the configuration archive must be manually copied before proceeding, see below.

To synchronize the configuration files between the Master and the acmesatellite Satellite, provided ssh_enabled is set to true, run the following command on the Master:

master# neteye satellite config send acmesatellite

In case the same name is used for more than one Satellite in different Tenants, the --tenant option has to be used to specify the destination Tenant.

master# neteye satellite config send acmesatellite --tenant tenant_A

The command uses the unique ID of the Satellite to retrieve the connection attributes from the Satellite configuration file /etc/neteye-satellites.d/tenant_A/acmesatellite.conf, and uses them to send the archive tenant_A-acmesatellite-satellite-config.tar.gz to the Satellite.

Alternatively, configuration archives can be sent to all Satellites defined in /etc/neteye-satellites.d/ by typing:

master# neteye satellite config send --all

The configuration archives for each Satellite belonging to a specific Tenant will be sent to the related Satellite using the following command:

master# neteye satellite config send --tenant tenant_A

Satellite Setup

After the configuration has been generated and sent to a Satellite, use the following command on the Satellite itself to complete its configuration:

sat# neteye satellite setup

This command performs three actions:

  • Copies the configuration files to the correct places, overriding current configurations, if any.

  • Creates a backup of the configuration for future use in /root/satellite-setup/config/<neteye_release>/satellite-config-backup-<timestamp>/

  • Executes autosetup scripts located in /usr/share/neteye/secure_install_satellite/satellite/

To execute this command, the configuration archive must be located at /root/satellite-setup/config/<neteye_release>/satellite-config.tar.gz. Use the neteye satellite config send command, or copy the archive manually if no SSH connection is available.

Note

Configuration provided by the Master is not user-customizable: any change will be overwritten by the new configuration when running neteye satellite setup.

Nodes behind a Proxy

Some software installed on the NetEye Nodes needs access to resources on the Internet. In certain environments, though, the NetEye Nodes do not have direct Internet access and instead need to pass through a proxy which forwards the requests to the wider web. In these cases you need to configure some parts of NetEye so that the software can access the proxy.

Assume your NetEye Nodes need to pass through a proxy which has the settings below:

  1. The proxy hostname is proxy.example.com

  2. The proxy listens on port 12345

  3. The proxy uses basic authentication and valid credentials are:

    1. username: myuser

    2. password: mypassword

Configuring the environment

To add the proxy configuration for software like curl, wget, Python, etc., the environment variables need to be set accordingly. Set them by creating the file /etc/profile.d/neteye-proxy.sh with the following lines:

#!/usr/bin/env bash

export http_proxy=http://<proxy_username>:<proxy_password>@<proxy_ip>:<proxy_port>
export https_proxy=https://<proxy_username>:<proxy_password>@<proxy_ip>:<proxy_port>
export no_proxy=localhost,127.0.0.1,127.0.0.2,::1,neteyelocal # in a cluster, append: ,<cluster_node_ips>

If the node is part of a cluster you also need to add the IPs of each node in the cluster, so that the internal cluster traffic is not sent across the proxy. Be aware that no_proxy does not support network masks, so the IPs need to be added individually.
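
For instance, the individual node IPs can be appended to no_proxy with a small loop. This is a sketch; the 192.0.2.x addresses are placeholders for your actual cluster node IPs:

```shell
#!/usr/bin/env bash
# Build a no_proxy value listing every cluster node IP individually,
# since no_proxy does not support network masks. Placeholder IPs below.
CLUSTER_NODE_IPS="192.0.2.1 192.0.2.2 192.0.2.3"
no_proxy="localhost,127.0.0.1,127.0.0.2,::1,neteyelocal"
for ip in $CLUSTER_NODE_IPS; do
  no_proxy="$no_proxy,$ip"
done
export no_proxy
echo "$no_proxy"
```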

To refresh the current session run source /etc/profile.d/neteye-proxy.sh afterwards.

With the configuration mentioned above, when the node is not part of a cluster, the file could look like this:

#!/usr/bin/env bash

export http_proxy=http://myuser:mypassword@proxy.example.com:12345
export https_proxy=https://myuser:mypassword@proxy.example.com:12345
export no_proxy=localhost,127.0.0.1,127.0.0.2,::1,neteyelocal

Configuring Subscription Manager to use the proxy

This is an example of how to set the proxy directives in the Subscription Manager configuration. These commands modify the /etc/rhsm/rhsm.conf file.

# subscription-manager config --server.proxy_hostname "proxy.example.com"
# subscription-manager config --server.proxy_scheme "http"
# subscription-manager config --server.proxy_port "12345"

The following commands are needed only if the proxy is protected by authentication.

# subscription-manager config --server.proxy_user "myuser"
# subscription-manager config --server.proxy_password "mypassword"

Configuring DNF

Configure DNF to pass through the proxy. To do this, add your proxy settings to the [main] section of the file /etc/dnf/dnf.conf. The resulting file should be similar to the one below (refer to the DNF manual for more information on the dnf.conf options):

[main]
gpgcheck=1
installonly_limit=3
clean_requirements_on_remove=True
best=True
skip_if_unavailable=False
proxy="http://proxy.example.com:12345"
proxy_username="myuser"
proxy_password="mypassword"
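
A quick way to confirm that the proxy directive ended up in the file is to grep for it. The following is a sketch with a parameterized path so it can be tried anywhere; check_dnf_proxy is a hypothetical helper name:

```shell
#!/usr/bin/env bash
# Hypothetical helper: report whether a dnf.conf contains a proxy directive.
check_dnf_proxy() {
  local conf="${1:-/etc/dnf/dnf.conf}"
  if grep -q '^proxy=' "$conf"; then
    echo "proxy configured in $conf"
  else
    echo "no proxy directive found in $conf" >&2
    return 1
  fi
}
```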