
Cluster Upgrade from 4.48 to 4.49

This guide will lead you through the steps specific for upgrading a NetEye Cluster installation from version 4.48 to 4.49.

Provided that the environment connectivity is seamless, the upgrade procedure may take up to 30 minutes per node.

Warning

Remember that you must upgrade sequentially without skipping versions; an upgrade to 4.49 is therefore possible only from 4.48. For example, if you have version 4.27, you must first upgrade to 4.28, then to 4.29, and so on.

Breaking Changes

SLM Filters migration to Icinga DB

Starting from NetEye 4.48, Objects Filters configured in SLM Contracts accept only Icinga DB filter expressions and no longer support the old IDO filter syntax.

Existing Object Filters will be automatically migrated to the new syntax during the upgrade process. At the same time, any new SLM Contract created after the upgrade must use the new syntax for Object Filters. To create Object Filters with the new syntax, it is recommended to use the search filter builder in the Icinga DB Overview and then copy the generated expression.

For example, the old filter expression host_name=neteye* will now need to be written in the new syntax as host.name~neteye*.

Monitoring module removed

Starting from NetEye 4.48, the Monitoring module will be removed. All modules that previously relied on the Monitoring module must be fully migrated to Icinga DB before upgrading. The previously selected roles and permissions of the Monitoring module have been migrated to the equivalent Icinga DB roles and permissions.

IDO Support removed

Starting from NetEye 4.48, the IDO backend is no longer supported, so all monitoring data and integrations must be migrated to Icinga DB before proceeding with the upgrade. As a consequence, since IDO is no longer used, idoreports will also be removed in favor of Icinga DB reports. Before proceeding, ensure a backup of the IDO database has been created.
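Such a backup can be taken, for example, with mysqldump (a sketch assuming local root access to MariaDB; the target path is an arbitrary example, adjust it as needed):

```shell
cluster# mysqldump icinga > /root/icinga-ido-backup.sql
```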

Log Manager module removed

Starting from NetEye 4.48, the deprecated Log Manager module and its UI will be removed. As a result, the file /neteye/shared/rsyslog/conf/rsyslog.d/logmanager-hosts.conf, which maps the IP addresses received from Rsyslog to hostnames and host groups, will no longer be managed via the UI. Any future additions or changes to these mappings will therefore need to be made manually by updating the file directly.

During the upgrade procedure, the logmanager database will be dropped. If you want to preserve its data, we recommend backing it up before running the upgrade.

Additionally, the retention-policy-neteyelocal service will be removed during the upgrade.

The Log Manager module was responsible for managing the retention of the log files written by Rsyslog. After the upgrade, the Logscleaner service handles log retention by compressing log files and deleting them after 30 days. For more information, including how to customize the log retention policy, see the Rsyslog section.
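The retention behavior described above can be illustrated with standard tools. The following is purely a sketch of the effect, not the actual Logscleaner implementation, and the log directory path is an assumption:

```shell
cluster# find /neteye/shared/rsyslog/data -name '*.log' -mtime +1 -exec gzip {} \;
cluster# find /neteye/shared/rsyslog/data -name '*.log.gz' -mtime +30 -delete
```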

telegraf-local User Customization

During the Upgrade process, the system will perform an optimization of the current telegraf-local configuration. Currently, every NetEye tenant creates a separate Telegraf Consumer instance, reading metrics from the tenant’s NATS topic and writing them into the tenant’s InfluxDB database. If the Alyvix module is installed and enabled for the tenant, an additional Telegraf Consumer per tenant is created. For example, the Telegraf Consumer for Telegraf metrics is created under /neteye/local/telegraf/conf/neteye_consumer_influxdb_<tenant>.conf, with its own dropin directory in /neteye/local/telegraf/conf/neteye_consumer_influxdb_<tenant>.d/.

With NetEye 4.48, there will be only one Telegraf Consumer (two if Alyvix is installed) per PCS node, reading from all tenants' NATS topics and writing the data into the respective tenant's InfluxDB database. This significantly reduces the number of Telegraf Consumers and optimizes resource usage, especially in environments with many tenants. Any customization made for a specific tenant's Telegraf Consumer will be automatically backed up under /root/telegraf_local_consumer_conf_backup/ and deleted during the Upgrade process.

If you have made any such customizations, you will need to adapt your previous custom configuration to the new, tenant-agnostic architecture, and place the updated customization into /neteye/local/telegraf/conf/neteye_consumer_telegraf_metrics.d (or /neteye/local/telegraf/conf/neteye_consumer_alyvix_metrics.d for Alyvix). In the new architecture, for each InfluxDB node (the default influxdb.neteyelocal and all the ones created under InfluxDBOnlyNodes in case of a Cluster), there is one Telegraf Output plugin configured to dynamically write into the correct tenant's database, using the tagpass feature to filter the metrics by tenant. Two tags are now added by Telegraf processors: tenant_influxdb_db, which states the tenant's InfluxDB database to write into, and tenant, which states the tenant name. The tenant name is derived from the name of the NATS subject, which is in the format <tenant>.<module>.<data_type> (e.g. acme_tenant.telegraf.metrics refers to the subject for the acme_tenant tenant).

To migrate your previous custom configuration, you will need to use the same tagpass feature: for inputs, enrich the metrics with the tenant and tenant_influxdb_db tags; for processors and outputs, use tagpass to filter the metrics by tenant. Below is an example of all three configurations:

# Example of custom configuration for a tenant named `acme_tenant` with InfluxDB database `acme_tenant-alyvix`

# Input plugin configuration, with the addition of the tenant tags
[[inputs.cpu]]
  # other configuration options...

  # Add the tenant tags to the metrics
  [inputs.cpu.tags]
    tenant = "acme_tenant"
    tenant_influxdb_db = "acme_tenant-alyvix"


# Processor plugin configuration, with the addition of the tagpass to filter by tenant
[[processors.override]]
  # other configuration options...

  # Add the tagpass to filter the metrics by tenant
  [processors.override.tagpass]
    tenant = ["acme_tenant"]

# Output plugin configuration, with the addition of the tagpass to filter by tenant
[[outputs.influxdb]]
  # other configuration options...

  # Optionally, you can also make use of the `tenant_influxdb_db` tag yourself
  database_tag = "tenant_influxdb_db"

  # Add the tagpass to filter the metrics by tenant
  [outputs.influxdb.tagpass]
    tenant = ["acme_tenant"]

Warning

You must now ensure that any custom configuration for the Telegraf Consumer is compatible with the new setup, where a single Consumer handles metrics for all tenants. To check before the Upgrade, you can use the telegraf --config ./telegraf.conf --config-directory ./telegraf.d --test command, which validates the configuration and prints any errors found.

Once you have ensured that your current configuration is compatible with the new setup, perform the following steps:

  1. First, prepare the new configuration for the Telegraf Consumers: create the drop-in directory /neteye/local/telegraf/conf/neteye_consumer_telegraf_metrics.d (or /neteye/local/telegraf/conf/neteye_consumer_alyvix_metrics.d for Alyvix) and place your custom configuration files there, after adapting them to the new setup as explained above. In case of a Cluster, do this on all PCS nodes.

  2. Update the owner and mode of the new configuration files to match the previous ones: telegraf:telegraf and 0640 for the files, and telegraf:telegraf and 0750 for the drop-in directories.

  3. Then, enable the acknowledgement flag in the UI, specifically under Configuration > Modules > neteye > Configuration.
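As a sketch, steps 1 and 2 for the Telegraf metrics Consumer could look as follows (use neteye_consumer_alyvix_metrics.d instead for Alyvix, and repeat on all PCS nodes in a Cluster; my_custom_input.conf is a hypothetical adapted configuration file):

```shell
cluster# mkdir -p /neteye/local/telegraf/conf/neteye_consumer_telegraf_metrics.d
cluster# cp my_custom_input.conf /neteye/local/telegraf/conf/neteye_consumer_telegraf_metrics.d/
cluster# chown -R telegraf:telegraf /neteye/local/telegraf/conf/neteye_consumer_telegraf_metrics.d
cluster# chmod 0750 /neteye/local/telegraf/conf/neteye_consumer_telegraf_metrics.d
cluster# chmod 0640 /neteye/local/telegraf/conf/neteye_consumer_telegraf_metrics.d/*.conf
```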

Note

There is no need to remove the previous configuration, since it will be automatically disabled and cleaned up during the Upgrade.

During the Upgrade, tenant-specific read-only InfluxDB users will be created. If you have previously created read-only users for visualization, such as for a Grafana Data Source, and named them <db_name>_ro, an upgrade prerequisite check will fail, asking you to add these users' passwords to the corresponding password files, following this pattern: /root/.pwd_influxdb_${influxdb_host}_${influxdb_user}, where ${influxdb_host} is the hostname of the InfluxDB node (e.g. influxdb.neteyelocal) and ${influxdb_user} is the username of the read-only user (e.g. acme_tenant_ro). Passwords will be validated against the InfluxDB node configured for the tenant; if no such user is found, the passwords will be generated automatically.
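For example, the password file path for the example acme_tenant_ro user on influxdb.neteyelocal would be built as follows (a sketch; adjust the names to your environment):

```shell
# Build the password file path for an existing read-only user
# (example names taken from the text above)
influxdb_host="influxdb.neteyelocal"   # hostname of the tenant's InfluxDB node
influxdb_user="acme_tenant_ro"         # name of the existing read-only user
pwd_file="/root/.pwd_influxdb_${influxdb_host}_${influxdb_user}"
echo "$pwd_file"
# Then store the user's current password in that file, e.g.:
#   printf '%s' "$password" > "$pwd_file" && chmod 600 "$pwd_file"
```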

OCS Inventory Module removed

Starting from NetEye 4.48, the deprecated OCS Inventory module will be removed in favor of the GLPI Asset Management module, which provides more advanced features and better integration with the rest of the NetEye ecosystem. This only applies to installations with the neteye-asset Feature Module installed.

Warning

All of the configuration related to OCS Inventory will be automatically removed, including the shared directories (/neteye/shared/ocsinventory-server and /neteye/shared/ocsinventory-ocsreports), the OCS Inventory database (ocsweb) and all the PCS and DRBD resources if on a cluster. If you want to preserve any of these, we recommend backing them up before running the upgrade.
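If you want to keep any of this data, a backup could be taken along these lines (a sketch assuming local root access; the target paths are arbitrary examples):

```shell
cluster# mysqldump ocsweb > /root/ocsweb-backup.sql
cluster# tar -czf /root/ocsinventory-backup.tar.gz /neteye/shared/ocsinventory-server /neteye/shared/ocsinventory-ocsreports
```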

Finally, you will be asked to confirm the removal of OCS Inventory by enabling the acknowledgement flag in the UI, specifically under Configuration > Modules > assetmanagement > Configuration, before proceeding with the upgrade.

Elastic Stack upgrade to 9.4.1

In NetEye 4.48, Elastic Stack upgrades from version 9.3.3 to 9.4.1. To ensure compatibility, review the official Elastic breaking changes for this version.

Prerequisites

Before starting the upgrade, carefully read the latest release notes on NetEye’s blog and check the features that will change or be deprecated.

  1. All NetEye packages installed on a currently running version must be updated according to the update procedure prior to running the upgrade.

  2. NetEye must be up and running in a healthy state.

  3. Disk Space required:

    • 3GB for / and /var

    • 150MB for /boot

  4. If the NetEye Elastic Stack module is installed:

    1. The rubygems.org domain should be reachable by the NetEye Master only during the update/upgrade procedure. This domain is needed to update additional Logstash plugins and thus is required only if you manually installed any Logstash plugin that is not present by default.

    2. There is a number of configuration items that should not be modified in order to avoid issues during the update/upgrade of your instance. Please check out Protected Configuration Items for details.

  5. Make sure you have migrated all your monitoring data from IDO to Icinga DB, because it’s a mandatory requirement before upgrading to NetEye 4.48. The migration is performed using the neteye cluster upgrade-prerequisites ido-migration command.

  6. If idoreports is in use, run icingacli reporting migrate idoreports on the node where Icinga Web 2 is running to migrate to Icinga DB reports before proceeding with the upgrade. It is highly recommended to first run icingacli reporting migrate idoreports --dry-run to verify compatibility with your existing report filters without applying any changes.

  7. Before starting the upgrade, you must set the corresponding flags under Configuration / Modules / neteye / Configuration to disable the IDO DB, IDO reports and Monitoring module. You can proceed with the upgrade only after selecting these flags.

  8. To confirm that you read the breaking changes regarding the removal of the Log Manager module, set the flag under Configuration / Modules / logmanager / Configuration.
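The disk space requirement from prerequisite 3 can be verified, for example, with:

```shell
cluster# df -h / /var /boot
```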

1. Run the Upgrade

The Cluster Upgrade is carried out by running the following command:

cluster# (nohup neteye upgrade &) && tail --retry -f nohup.out

Warning

If the NetEye Elastic Stack feature module is installed and a new version of Elasticsearch is available, please note that the procedure may take a while to upgrade the Elasticsearch cluster. For more information on the Elasticsearch cluster upgrade and how to customize the upgrade process, please consult the dedicated section.

After the command has been executed, the output will inform you whether the upgrade was successful:

  • In case of a successful upgrade, you might need to restart the nodes to properly apply the upgrades. If a reboot is not needed, please skip the next step.

  • In case the command fails refer to the troubleshooting section.

2. Reboot Nodes

Restart each node, one at a time, to apply the upgrades correctly.

  1. Run the reboot command

    cluster-node-N# neteye node reboot
    
  2. In case of a standard NetEye node, put it back online once the reboot is finished

    cluster-node-N# pcs node unstandby --wait=300
    

You can now reboot the next node.

3. Cluster Reactivation

At this point you can proceed to restore the cluster to high availability operation.

  1. Run the checks in the section Checking that the Cluster Status is Normal. If any of the above checks fail, please contact our service and support team before proceeding.

  2. Re-enable fencing on the last standard node, if it was enabled prior to the upgrade:

    cluster# pcs property set stonith-enabled=true
    

4. Additional Tasks

The IDO database is not removed automatically during the upgrade, so if you want to delete it you have to run the following commands:

mysql -e "DROP DATABASE icinga;"
mysql -e "DROP USER 'icinga'@'localhost';"

If you have the Elastic Stack installed, the retention-policy-neteyelocal service will be removed during the upgrade procedure. After the upgrade is complete, you will need to perform a Director deploy to make the changes effective.