
Cluster Upgrade from 4.42 to 4.43

This guide will lead you through the steps specific to upgrading a NetEye Cluster installation from version 4.42 to 4.43.

During the upgrade, individual nodes will be put into standby mode, so overall performance will be degraded until the upgrade is completed and all nodes are taken out of standby mode. Provided that the environment connectivity is stable, the upgrade procedure may take up to 30 minutes per node.

Warning

Remember that you must upgrade sequentially without skipping versions: an upgrade to 4.43 is possible only from 4.42. For example, if you have version 4.27, you must first upgrade to 4.28, then to 4.29, and so on.

Breaking Changes

Grafana upgrade to 12.0.2

Breaking changes

  • Data source UID format enforcement: The UID format for data sources is now enforced more strictly. This may affect existing data sources whose UIDs do not comply with the new format; please refer to the Grafana 12.0 release notes for more details. To avoid issues, we added a NetEye upgrade prerequisite that prevents the upgrade if any data source UID does not comply with the new format.

  • Plugin compatibility: Some plugins may not be compatible with Grafana 12.0.2. Please check the compatibility of your plugins before upgrading. For more information, see the Grafana Plugins page, where you can find compatibility information for each plugin.

Kibana Multi-Instance

During the upgrade to NetEye 4.43, Kibana will be moved from a PCS-managed resource to a distributed multi-instance architecture. This improvement not only enhances the reliability of the Kibana service on NetEye clusters, but also allows for a more efficient workload distribution across the nodes. For system consistency, the Kibana migration will also be performed on NetEye single-node installations, where Kibana will benefit from infrastructure and security improvements.

Directory Location Change

With the migration to Kibana multi-instance, the Kibana directory has been moved from /neteye/shared/kibana to /neteye/local/kibana. This is a breaking change that affects any custom backup scripts targeting the old location, monitoring configurations checking the old path, and custom integrations or automations accessing logs/configuration in the old location.

If you have any scripts, monitoring, or configurations that reference the old path, please update them to use the new location to ensure proper functionality after the upgrade.
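
As a first step, you may want to locate references to the old directory in your own scripts and configuration. A minimal sketch, assuming your customizations live under /etc and /usr/local/bin (adjust the search paths to your environment):

cluster# grep -rl "/neteye/shared/kibana" /etc /usr/local/bin 2>/dev/null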

Kibana Configuration File Change

With the new Kibana multi-instance architecture, user configuration has been moved from the old file kibana/conf/kibana.yml to a file dedicated to user customizations, /neteye/local/kibana/conf/kibana_user_customization.yml. The original configuration file kibana/conf/kibana.yml is now managed by NetEye and should not be modified directly.

Additionally, since the configuration in the kibana_user_customization.yml file is applied to all Kibana instances in the cluster, an additional configuration file, /neteye/local/kibana/conf/kibana_local_customization.yml, has been introduced to allow for local customizations that may differ on each node.
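
To illustrate the split between the two files, a minimal sketch is shown below; the settings used here (elasticsearch.requestTimeout, logging.root.level) are just examples of standard Kibana options, not values recommended by NetEye:

# /neteye/local/kibana/conf/kibana_user_customization.yml
# Settings here are applied to every Kibana instance in the cluster.
elasticsearch.requestTimeout: 60000

# /neteye/local/kibana/conf/kibana_local_customization.yml
# Settings here apply only to the Kibana instance on this node.
logging.root.level: warn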

For more information on how to configure Kibana, please refer to the Architecture section.

Kibana systemd service renaming

The systemd service for Kibana has been renamed from kibana-logmanager.service to kibana.service. Any custom scripts, monitoring tools, or systemd configurations that reference the old kibana-logmanager.service unit will need to be updated to use the new service name kibana.service.

Keep in mind that kibana.service will run on all the nodes configured for Kibana, and no longer on a single node as was the case with the previous kibana-logmanager.service.
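
A quick way to spot leftover references to the old unit name and to verify the new service is sketched below; the searched paths are only examples, so adapt them to wherever your custom scripts and drop-in units live:

cluster-node-N# grep -rl "kibana-logmanager.service" /etc/systemd/system /usr/local/bin 2>/dev/null
cluster-node-N# systemctl status kibana.service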

Kibana volume

With the new Kibana multi-instance architecture, the Kibana DRBD volume will be removed. Although all the content of the Kibana volume will be migrated to the new location /neteye/local/kibana, note that no dedicated logical volume will be created for the /neteye/local/kibana directory; its content will instead reside on the /neteye LV.

If you still want to use a custom logical volume for Kibana, you can create it manually and mount it to the new path /neteye/local/kibana.
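
If you go down this route, a minimal sketch using standard LVM commands is shown below; the volume group name vg_neteye and the size are placeholders, the Kibana service should be stopped beforehand, any existing content of /neteye/local/kibana must be copied onto the new volume, and a matching /etc/fstab entry is needed to make the mount persistent:

cluster-node-N# lvcreate -n lv_kibana -L 10G vg_neteye
cluster-node-N# mkfs.xfs /dev/vg_neteye/lv_kibana
cluster-node-N# mount /dev/vg_neteye/lv_kibana /neteye/local/kibana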

Kibana port on local interface

To prepare for the Kibana multi-instance migration, ensure that port 5602 is free and available on the local interface of all NetEye nodes, both in single-node and cluster installations.
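
As a quick sanity check, you can verify that nothing is already listening on that port; a minimal sketch (no output means the port is free):

cluster-node-N# ss -tlnp | grep ':5602'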

Upgrade and Migration procedure

The NetEye upgrade procedure to 4.43 will automatically migrate the existing PCS-managed Kibana instance to a new multi-instance architecture based on Nginx and Keepalived, which maximizes the availability of the service and allows for a more efficient workload distribution across the nodes. The migration is performed automatically during the upgrade process; no manual intervention is required as long as all the prerequisites are met.

Before proceeding with the upgrade, specify in the /etc/neteye-cluster file which nodes will host the Kibana instances, by adding the role “kibana” to the roles field of the nodes that you want to assign to the Kibana service.
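
For illustration only, a node entry might look like the sketch below; the file is assumed to be JSON, and every field except roles is a placeholder, so refer to your existing /etc/neteye-cluster for the exact structure:

{
  "hostname" : "neteye02.example.com",
  "roles" : [ "kibana" ]
}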

Once all the prerequisites are met, the upgrade procedure will run through the following steps:

  1. Initialization of Kibana instances on non-PCS nodes: All the nodes that should host a Kibana instance, except the one currently hosting the PCS-managed Kibana instance, will be initialized with a new Kibana instance. During this phase, the DRBD volume will be gracefully removed on all PCS nodes (excluding the one running the old Kibana PCS instance).

  2. PCS-node migration: The last Kibana instance to join the cluster will be the node currently running the PCS-managed Kibana instance. During this phase, the following steps are performed:

    • Stop Kibana PCS instance: The Kibana service running on the PCS-managed node is stopped.

    • Delete all the PCS resources: All the PCS resources related to the Kibana service are removed, including the DRBD volume and the Kibana virtual IP.

    • Kibana Node Initialization: If the PCS node is selected as a Kibana instance host, it will be initialized with a new Kibana instance.

    • Keepalived and Nginx infrastructure restart: The Keepalived and Nginx services are restarted to ensure that the new Kibana instances are correctly integrated into the load balancing and high availability setup.

The migration process is designed to minimize downtime and to ensure a smooth transition to the new Kibana multi-instance architecture.

Prerequisites

Before starting the upgrade, carefully read the latest release notes on NetEye’s blog and check which features will be changed or deprecated after the upgrade.

  1. All NetEye packages installed on a currently running version must be updated according to the update procedure prior to running the upgrade.

  2. NetEye must be up and running in a healthy state.

  3. Disk Space required (a quick check is sketched after this list):

    • 3GB for / and /var

    • 150MB for /boot

  4. If the NetEye Elastic Stack module is installed:

    • The rubygems.org domain needs to be reachable by the NetEye Master only during the update/upgrade procedure. It is used to update additional Logstash plugins, and is therefore required only if you manually installed any Logstash plugin that is not present by default.
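
A quick way to verify the disk space and reachability prerequisites above is sketched below; the curl check is only relevant if the Elastic Stack module is installed and you manually added Logstash plugins:

cluster-node-N# df -h / /var /boot
cluster# curl -sI https://rubygems.org | head -n 1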

1. Run the Upgrade

The Cluster Upgrade is carried out by running the following command:

cluster# (nohup neteye upgrade &) && tail --retry -f nohup.out

Warning

If the NetEye Elastic Stack feature module is installed and a new version of Elasticsearch is available, please note that the procedure will upgrade one node at a time and wait for the Elasticsearch cluster health status to turn green before proceeding with the next node. For more information, please consult the dedicated section.

After the command has been executed, the output will indicate whether the upgrade was successful:

  • In case of a successful upgrade, you might need to restart the nodes to properly apply the upgrades. If no reboot is needed, please skip the next step.

  • In case the command fails, refer to the troubleshooting section.

2. Reboot Nodes

Restart each node, one at a time, to apply the upgrades correctly.

  1. Run the reboot command

    cluster-node-N# neteye node reboot
    
  2. In case of a standard NetEye node, put it back online once the reboot is finished

    cluster-node-N# pcs node unstandby --wait=300
    

You can now reboot the next node.

3. Cluster Reactivation

At this point you can proceed to restore the cluster to high availability operation.

  1. Bring all cluster nodes back out of standby with this command on the last standard node

    cluster# pcs node unstandby --all --wait=300
    cluster# echo $?
    
    0
    

    If the exit code is different from 0, some nodes have not been reactivated, so please make sure that all nodes are active before proceeding.

  2. Run the checks in the section Checking that the Cluster Status is Normal. If any of the above checks fail, please contact our service and support team before proceeding. A minimal status check is sketched after this list.

  3. Re-enable fencing on the last standard node, if it was enabled prior to the upgrade:

    cluster# pcs property set stonith-enabled=true
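
As anticipated in step 2, a minimal status check might look like the following; the section Checking that the Cluster Status is Normal remains the authoritative reference:

cluster# pcs status
cluster# pcs quorum status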
    

4. Additional Tasks

Migration to NetworkManager

Starting from NetEye 4.44, the deprecated network-scripts will be removed in favour of NetworkManager. Before upgrading to NetEye 4.44, you will be required to migrate the network configuration to NetworkManager.

To carry this out, a new command dedicated to this purpose is available: neteye node network migrate. This command migrates the network configuration from the old network-scripts to NetworkManager.

Note

The future upgrade procedure will expect the NetworkManager.service systemd service to be enabled and running, and the network.service to be disabled and stopped. If your system already meets these requirements, you can skip the migration step.
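
To check whether your system already meets these requirements, a minimal sketch using standard systemctl queries is shown below; the first two commands should report enabled and active, while the last two should report disabled (or a missing unit) and inactive:

cluster-node-N# systemctl is-enabled NetworkManager.service
cluster-node-N# systemctl is-active NetworkManager.service
cluster-node-N# systemctl is-enabled network.service
cluster-node-N# systemctl is-active network.service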

Warning

You can decide to carry out the migration manually or to use the provided command. If the latter is used, it is highly recommended to check the dedicated command documentation for more information on how to do it safely.