User Guide

El Proxy Configuration

Enabling El Proxy

El Proxy receives streams of log files from Logstash, signs them in real time into a blockchain, and forwards them to Elasticsearch. For more information, please refer to the section El Proxy.

To make the use of El Proxy easier, several default pipelines are provided: input_beats, auditbeat, filebeat, winlogbeat, elastic_agent and ebp_persistent. The input_beats pipeline redirects logs to the main pipeline or to the beat-specific pipelines (auditbeat, filebeat, winlogbeat). If El Proxy is enabled for a specific host and the log matches the conditions specified in the beat-specific pipelines, logs are also redirected to the ebp_persistent pipeline.

Note

If the logs to be signed by El Proxy are collected by Elastic Agent, make sure to follow the section Signing Elastic Agent Logs in order to configure the Elastic Agent instance properly.

Check out Sending custom logs to El Proxy for more information on redirecting other types of events to El Proxy.

Fig. 205 NetEye Logstash El Proxy architecture

Specifically, the ebp_persistent pipeline enables disk persistence, extracts client certificate details and redirects data to El Proxy.

Please note that you need enough space in /neteye/shared/logstash/data/ for disk persistence. The ebp_persistent pipeline is configured with three parameters:

  • queue.type: specifies whether the queue is kept in memory or persisted to disk. If set to persisted, the queue is disk-persisted.

  • path.queue: the path where the events will be persisted, by default /neteye/shared/logstash/data/ebp

  • queue.max_bytes: the maximum amount of data the queue can store, by default 512MB. Exceeding this limit may lead to a loss of events.

You can check and adjust the parameters for the queue in the file /neteye/shared/logstash/conf/pipelines.yml.
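
For reference, the entry for the ebp_persistent pipeline in pipelines.yml may look similar to the following sketch; the pipeline.id and path.config values shown here are illustrative and may differ on your installation:

- pipeline.id: ebp_persistent
  path.config: "/neteye/shared/logstash/conf/conf.ebp.d/*"
  queue.type: persisted
  path.queue: "/neteye/shared/logstash/data/ebp"
  queue.max_bytes: 512mb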

The user can customize the ebp_persistent pipeline by adding custom Logstash filters in the form of .filter files in the directory /neteye/shared/logstash/conf/conf.ebp.d. Please note that the user must neither add .input or .output files nor modify existing configuration files.
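
For example, a hypothetical file /neteye/shared/logstash/conf/conf.ebp.d/2_f20000_add_tag.filter (the file name and the tag value are purely illustrative) could add a tag to all events passing through the pipeline:

filter {
  mutate {
    add_tag => ["custom_ebp_tag"]
  }
}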

Warning

Enabling El Proxy via the variable EBP_ENABLED in the file /neteye/shared/logstash/conf/sysconfig/logstash-user-customization is no longer supported. Please enable it per host as described below.

El Proxy can be enabled per host via Icinga Director. A Host Template called logmanager-blockchain-host is made available for this purpose.

To enable El Proxy on a host (we’ll call the host ACME), we strongly suggest first creating a dedicated host template which imports logmanager-blockchain-host. Then, configure the host ACME to inherit from this host template.

You can refer to Sections Host Templates and Adding a Host for further information about how to manage host templates and hosts in the Icinga Director.

As soon as you use the new dedicated template, the following fields are shown in the Host Configuration Panel under Custom properties:

  • Enable Logging: specifies whether logging for the current host must be enabled. If set to Yes, log collection is enabled.

  • Blockchain Enable: appears only if Enable Logging is enabled. Specifies whether log signing must be enabled for the current host. If set to Yes, El Proxy is enabled for the host.

  • Blockchain Filter: becomes available only if Blockchain Enable is enabled. This field allows you to configure which types of logs should be signed by El Proxy. For example, if for some host you select Only Authentication Logs, the logs of this host will be signed by El Proxy only if they have category “authentication”.

  • Blockchain Retention: becomes available only if Blockchain Enable is enabled. The retention policy applied to the logs of the current host (default value is 2 years, i.e. 730 days). Refer to Section Elasticsearch Templates and Retentions for further information on retention policies.

Note

Enable Logging works in combination with Blockchain Enable: both properties must be set to Yes to fully enable log collection and log signing.

Log and Blockchain properties of a host are regularly checked by the logmanager-director-es-index-neteyelocal service, which automatically stores them in Elasticsearch, where they can be queried and accessed when needed.

For additional information about El Proxy refer to section El Proxy.

By default, El Proxy uses the common name (CN) specified in the Beats certificate as the customer name. If the CN is not available, El Proxy uses a default customer name as a fallback. The user can configure a custom default customer name by setting the variable EBP_DEFAULT_CUSTOMER in the file /neteye/shared/logstash/conf/sysconfig/logstash-user-customization, as follows:

EBP_DEFAULT_CUSTOMER="mycustomer"

If the variable EBP_DEFAULT_CUSTOMER is not set, El Proxy will use the value “neteye” as default customer. You must restart Logstash for the changes to take effect.
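
For example, on a single NetEye node you can typically restart Logstash as follows (on a NetEye Cluster, restart the corresponding cluster resource instead):

systemctl restart logstash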

Sending custom logs to El Proxy

By default, only Beats and Elastic Agent events are sent to El Proxy, if enabled. NetEye, however, also provides an output in the main pipeline to redirect events to El Proxy.

Logstash sends logs to El Proxy when the field [EBP_METADATA][event][module] is set to elproxysigned; such logs are redirected to the ebp_persistent pipeline.

For example, to send all syslog logs to El Proxy, you can use a filter similar to the following:

filter {
  if [type] == "syslog" {
    if [EBP_METADATA][event][module] {
      mutate {
        replace => {"[EBP_METADATA][event][module]" => "elproxysigned"}
      }
    } else {
      mutate {
        add_field => {"[EBP_METADATA][event][module]" => "elproxysigned"}
      }
    }
  }
}

All logs passed through the ebp_persistent pipeline will be disk-persisted.

El Proxy Commands

El Proxy is operated through CLI commands that give access to the provided functionality. Running the executable without any arguments returns a list of all available commands and the global options that apply to every command.

Available commands:

  • acknowledge : Acknowledges a corruption in a blockchain.

  • acknowledge-range : Acknowledges all the corruptions in a blockchain given a time range.

  • dlq recover : Recovers signed logs inside the Dead Letter Queue.

  • export-logs : Exports the signed logs from Elasticsearch to a local file.

  • serve : Starts El Proxy server ready for processing incoming requests.

  • verify : Verifies the validity of a blockchain.

  • verify-cpu-bench : Benchmarks the underlying hardware for verification.

El Proxy configuration is partly based on configuration files and partly on command line parameters. The location of the configuration files in the file system is determined at startup based on the provided CLI options. In addition, each command can require specific CLI arguments.

Global command line parameters:

  • config-dir: (Optional) The filesystem folder from which the configuration is read. The default path is /neteye/shared/elastic-blockchain-proxy/conf/.

  • static-config-dir: (Optional) The filesystem folder from which the static configuration (i.e. configuration files that the final user cannot modify) is read. The default path is /usr/share/elastic-blockchain-proxy.

Below you will find available command line parameters for all El Proxy commands.

acknowledge command

  • key-file: The path to the file that contains the iteration 0 signature key

  • corruption-id: The CorruptionId produced by the verify command to be acknowledged

  • description: (Optional) A description of the blockchain corruption. It will be prompted if not provided

  • reason: (Optional) The reason why the corruption happened. It will be prompted if not provided

  • es-auth-method: (Optional) The method used to authenticate to Elasticsearch. This can be:

    • none: (Default) the command does not authenticate to Elasticsearch

    • basicauth: Username and password are used to authenticate. If this method is specified, the following parameter is required (and a password will be prompted during the execution):

      • es-user: the name of the Elasticsearch user used to perform authentication

    • pemcertificatepath: PKI user authentication is used. If this method is specified, the following parameters are required:

      • es-client-cert: path to the client certificate. A client certificate suitable for the acknowledge command is present, in each container run on the DPO machine, under /root/elproxy-verification/conf/certs/ElasticBlockchainProxyAcknowledge.crt.pem

      • es-client-key: path to the private key of the client certificate. A client private key suitable for the acknowledge command is present, in each container run on the DPO machine, under /root/elproxy-verification/conf/certs/private/ElasticBlockchainProxyAcknowledge.key.pem

      • es-ca-cert: path to the CA certificate to be trusted during the requests to Elasticsearch

Note

The following parameters are now deprecated:

  • elasticsearch-authentication-method: Use es-auth-method instead

  • elasticsearch-username: Use es-user instead

  • elasticsearch-client-cert: Use es-client-cert instead

  • elasticsearch-client-private-key: Use es-client-key instead

  • elasticsearch-ca-cert: Use es-ca-cert instead

For example, if you want to acknowledge the corruption with corruption-id abc123, you can execute the following command:

elastic_blockchain_proxy acknowledge \
--key-file '/path/to/secret/key' \
--corruption-id 'abc123' \
--es-auth-method 'pemcertificatepath' \
--es-client-cert '/root/elproxy-verification/conf/certs/ElasticBlockchainProxyAcknowledge.crt.pem' \
--es-client-key '/root/elproxy-verification/conf/certs/private/ElasticBlockchainProxyAcknowledge.key.pem'

Note

This command should be executed from one of the tenant-specific containers of the DPO machine. Furthermore, the command is intended to be used for the acknowledgment of one corruption at a time and is not optimized for the acknowledgment of multiple corruptions. If you want to acknowledge multiple corruptions, please refer to the acknowledge-range command.

acknowledge-range command

  • tenant: Tenant of the blockchain

  • retention: The retention name of the blockchain

  • tag: The tag of the blockchain

  • key-file: The path to the file that contains the iteration 0 signature key

  • from: The inclusive date in RFC3339 format from which the corruptions should be acknowledged

    Note

    In case the provided from timestamp does not match the timestamp of an actual log, the first log prior to that timestamp is taken as the starting point of the acknowledge-range command. This ensures that the cases in which the from timestamp falls in a hole of the blockchain, i.e. in correspondence of missing logs, are correctly recognised and acknowledged.

  • to: The exclusive date in RFC3339 format until which the corruptions should be acknowledged

  • batch-size: (Optional) The size of each read/verify operation. Default is 500.

  • concurrent-batches: (Optional) The number of concurrent threads verifying the blockchain, to discover the corruptions to acknowledge. Default is 2.

  • description: (Optional) A description of the blockchain corruption, which will be associated with all acknowledged corruptions. It will be prompted if not provided

  • reason: (Optional) The reason why the corruption happened, which will be associated with all acknowledged corruptions. It will be prompted if not provided

  • es-read-auth-method: (Optional) The method used to authenticate to Elasticsearch in order to read from it. This can be:

    • none: (Default) the command does not authenticate to Elasticsearch

    • basicauth: Username and password are used to authenticate. If this method is specified, the following parameter is required (and a password will be prompted during the execution):

      • es-read-user: the name of the Elasticsearch user with read permissions used to perform authentication

    • pemcertificatepath: PKI user authentication is used. If this method is specified, the following parameters are required:

      • es-read-client-cert: path to the client certificate with read permissions

      • es-read-client-key: path to the private key of the client certificate with read permissions

      • es-read-ca-cert: path to the CA certificate to be trusted during the requests to Elasticsearch

  • es-write-auth-method: (Optional) The method used to authenticate to Elasticsearch in order to write to it. This can be:

    • none: (Default) the command does not authenticate to Elasticsearch

    • basicauth: Username and password are used to authenticate. If this method is specified, the following parameter is required (and a password will be prompted during the execution):

      • es-write-user: the name of the Elasticsearch user with write permissions used to perform authentication

    • pemcertificatepath: PKI user authentication is used. If this method is specified, the following parameters are required:

      • es-write-client-cert: path to the client certificate with write permissions

      • es-write-client-key: path to the private key of the client certificate with write permissions

      • es-write-ca-cert: path to the CA certificate to be trusted during the requests to Elasticsearch

Note

The following parameters are now deprecated:

  • index-name: Use tenant, retention and tag instead.

For example, if you want to acknowledge all the corruptions from January 1st 2023 to July 31st 2023 (UTC time) on the index pattern *-*-elproxysigned-mytenant-6_months-0, you can execute the following command:

elastic_blockchain_proxy acknowledge-range \
--tenant 'mytenant' \
--retention '6_months' \
--tag '0' \
--key-file '/path/to/secret/key' \
--from '2023-01-01T00:00:00Z' \
--to '2023-07-31T00:00:00Z' \
--es-read-auth-method 'pemcertificatepath' \
--es-read-client-cert '/root/elproxy-verification/conf/certs/neteye_elproxy_verify_mycustomer.crt.pem' \
--es-read-client-key '/root/elproxy-verification/conf/certs/private/neteye_elproxy_verify_mycustomer.key.pem' \
--es-write-auth-method 'pemcertificatepath' \
--es-write-client-cert '/root/elproxy-verification/conf/certs/ElasticBlockchainProxyAcknowledge.crt.pem' \
--es-write-client-key '/root/elproxy-verification/conf/certs/private/ElasticBlockchainProxyAcknowledge.key.pem'

If your infrastructure is set to the Italian timezone (with daylight saving time in effect, i.e. UTC+2) and you would like to express the time range in that timezone, you can execute the following command:

elastic_blockchain_proxy acknowledge-range \
--tenant 'mytenant' \
--retention '6_months' \
--tag '0' \
--key-file '/path/to/secret/key' \
--from '2023-01-01T00:00:00+02:00' \
--to '2023-07-31T00:00:00+02:00' \
--es-read-auth-method 'pemcertificatepath' \
--es-read-client-cert '/root/elproxy-verification/conf/certs/neteye_elproxy_verify_mycustomer.crt.pem' \
--es-read-client-key '/root/elproxy-verification/conf/certs/private/neteye_elproxy_verify_mycustomer.key.pem' \
--es-write-auth-method 'pemcertificatepath' \
--es-write-client-cert '/root/elproxy-verification/conf/certs/ElasticBlockchainProxyAcknowledge.crt.pem' \
--es-write-client-key '/root/elproxy-verification/conf/certs/private/ElasticBlockchainProxyAcknowledge.key.pem'

Note

This command should be executed from one of the tenant-specific containers of the DPO machine.

dlq recover command

  • dlq-dir: (Optional) The path to the Dead Letter Queue folder. The path defaults to the dlq_dir entry of the configuration file /neteye/shared/elastic-blockchain-proxy/conf/elastic_blockchain_proxy.toml. The dlq recover command expects a specific file structure inside the dlq-dir folder: it must contain only subfolders, and those subfolders must contain only NDJSON files with specific names.

  • dlq-recovered-dir: (Optional) The path to the folder where the successfully recovered logs will be stored. The path defaults to the dlq_recovered_dir entry of the configuration file /neteye/shared/elastic-blockchain-proxy/conf/elastic_blockchain_proxy.toml.

  • batch-size: (Optional) The size of each read/write operation. Default is 500.

  • es-auth-method: (Optional) The method used to authenticate to Elasticsearch. This can be:

    • none: (Default) the command does not authenticate to Elasticsearch

    • basicauth: Username and password are used to authenticate. If this method is specified, the following parameter is required (and a password will be prompted during the execution):

      • es-user: the name of the Elasticsearch user used to perform authentication

    • pemcertificatepath: PKI user authentication is used. If this method is specified, the following parameters are required:

      • es-client-cert: path to the client certificate. A client certificate suitable for the dlq recover command is present under /neteye/shared/elastic-blockchain-proxy/conf/certs/ElasticBlockchainProxyRecover.crt.pem

      • es-client-key: path to the private key of the client certificate. A client private key suitable for the dlq recover command is present under /neteye/shared/elastic-blockchain-proxy/conf/certs/private/ElasticBlockchainProxyRecover.key.pem

      • es-ca-cert: path to the CA certificate to be trusted during the requests to Elasticsearch

Note

The following parameters are now deprecated:

  • elasticsearch-authentication-method: Use es-auth-method instead

  • elasticsearch-username: Use es-user instead

  • elasticsearch-client-cert: Use es-client-cert instead

  • elasticsearch-client-private-key: Use es-client-key instead

  • elasticsearch-ca-cert: Use es-ca-cert instead

In the most common scenarios you can run the following command, which will recover the DLQ logs that are present under the default directory where El Proxy stores DLQ logs:

elastic_blockchain_proxy dlq recover \
--es-auth-method 'pemcertificatepath' \
--es-client-cert '/neteye/shared/elastic-blockchain-proxy/conf/certs/ElasticBlockchainProxyRecover.crt.pem' \
--es-client-key '/neteye/shared/elastic-blockchain-proxy/conf/certs/private/ElasticBlockchainProxyRecover.key.pem'

If instead you want to recover DLQ logs stored under a custom directory /my/custom/dlq-dir and let the successfully recovered logs be stored under /my/custom/dlq-recovered-dir, you can execute the following command:

elastic_blockchain_proxy dlq recover \
--dlq-dir '/my/custom/dlq-dir' \
--dlq-recovered-dir '/my/custom/dlq-recovered-dir' \
--es-auth-method 'pemcertificatepath' \
--es-client-cert '/neteye/shared/elastic-blockchain-proxy/conf/certs/ElasticBlockchainProxyRecover.crt.pem' \
--es-client-key '/neteye/shared/elastic-blockchain-proxy/conf/certs/private/ElasticBlockchainProxyRecover.key.pem'

export-logs command

  • output-file: The output file path where the exported logs are written

  • tenant: Tenant of the blockchain

  • retention: The retention name of the blockchain

  • tag: The tag of the blockchain

  • format: (Optional) The output file format.

  • from-date: (Optional) An inclusive ‘From’ date limit in RFC3339 format

  • to-date: (Optional) An exclusive ‘To’ date limit in RFC3339 format

  • batch-size: (Optional) The size of each read/write operation. Default is 500.

  • es-auth-method: (Optional) The method used to authenticate to Elasticsearch. This can be:

    • none: (Default) the command does not authenticate to Elasticsearch

    • basicauth: Username and password are used to authenticate. If this method is specified, the following parameter is required (and a password will be prompted during the execution):

      • es-user: the name of the Elasticsearch user used to perform authentication

    • pemcertificatepath: PKI user authentication is used. If this method is specified, the following parameters are required:

      • es-client-cert: path to the client certificate

      • es-client-key: path to the private key of the client certificate

      • es-ca-cert: path to the CA certificate to be trusted during the requests to Elasticsearch

Note

The following parameters are now deprecated:

  • from: Use from-date instead

  • to: Use to-date instead

  • elasticsearch-authentication-method: Use es-auth-method instead

  • elasticsearch-username: Use es-user instead

  • elasticsearch-client-cert: Use es-client-cert instead

  • elasticsearch-client-private-key: Use es-client-key instead

  • elasticsearch-ca-cert: Use es-ca-cert instead

  • index-name: Use tenant, retention and tag instead.
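
For example, the following sketch exports the logs of the blockchain of tenant mytenant, retention 6_months and tag 0 for the first half of 2023 (UTC) to a local file; the output path is arbitrary, and the certificate paths follow the conventions of the previous examples, so they may differ on your installation:

elastic_blockchain_proxy export-logs \
--output-file '/tmp/exported_logs' \
--tenant 'mytenant' \
--retention '6_months' \
--tag '0' \
--from-date '2023-01-01T00:00:00Z' \
--to-date '2023-07-01T00:00:00Z' \
--es-auth-method 'pemcertificatepath' \
--es-client-cert '/root/elproxy-verification/conf/certs/neteye_elproxy_verify_mycustomer.crt.pem' \
--es-client-key '/root/elproxy-verification/conf/certs/private/neteye_elproxy_verify_mycustomer.key.pem'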

verify command

  • tenant: Tenant of the blockchain

  • retention: The retention name of the blockchain

  • tag: The tag of the blockchain

  • key-file: The path to the file that contains the iteration 0 signature key

  • batch-size: (Optional) The size of each read/verify operation. Default is 500.

  • concurrent-batches: (Optional) The number of concurrent threads verifying the blockchain. Default is 2.

  • skip-iterations-before: (Optional) The inclusive iteration number from which the verification has to be performed. Default is 0.

    Note

    It is not recommended to use this parameter when verifying the blockchain in production as it is intended for testing purposes only.

    We highlight that this parameter is retention-aware: it is taken into account only if the specified iteration is still present at verification time, given the configured retention. If, for example, it refers to an older iteration that Elasticsearch has already deleted when applying the retention, the specified iteration will be ignored.

  • es-auth-method: (Optional) The method used to authenticate to Elasticsearch. This can be:

    • none: (Default) the command does not authenticate to Elasticsearch

    • basicauth: Username and password are used to authenticate. If this method is specified, the following parameter is required (and a password will be prompted during the execution):

      • es-user: the name of the Elasticsearch user used to perform authentication

    • pemcertificatepath: PKI user authentication is used. If this method is specified, the following parameters are required:

      • es-client-cert: path to the client certificate

      • es-client-key: path to the private key of the client certificate

      • es-ca-cert: path to the CA certificate to be trusted during the requests to Elasticsearch

  • elasticsearch-indexing-delay: (Optional) The expected maximum time in seconds spent by Elasticsearch to index a document. Useful when verifying recently created logs that may need some time to be indexed by Elasticsearch. Default is 60 seconds.

Note

The following parameters are now deprecated:

  • elasticsearch-authentication-method: Use es-auth-method instead

  • elasticsearch-username: Use es-user instead

  • elasticsearch-client-cert: Use es-client-cert instead

  • elasticsearch-client-private-key: Use es-client-key instead

  • elasticsearch-ca-cert: Use es-ca-cert instead

  • from-iteration: Use skip-iterations-before instead.

  • index-name: Use tenant, retention and tag instead.

For example, given a blockchain which refers to:

  • module: elproxysigned

  • tenant: mytenant

  • retention: 6_months

  • blockchain_tag: 0

You can verify the blockchain with the command:

elastic_blockchain_proxy verify \
--tenant 'mytenant' \
--retention '6_months' \
--tag '0' \
--key-file '/path/to/secret/key' \
--es-auth-method 'pemcertificatepath' \
--es-client-cert '/root/elproxy-verification/conf/certs/neteye_elproxy_verify_mycustomer.crt.pem' \
--es-client-key '/root/elproxy-verification/conf/certs/private/neteye_elproxy_verify_mycustomer.key.pem'

Note

This command should be executed from one of the tenant-specific containers of the DPO machine.

verify-cpu-bench command

  • batch-size: (Optional) The size of each read/verify operation. Default is 1000.

  • n-logs: (Optional) The number of logs to verify. Default is 1,000,000.

  • log-size: (Optional) The size of each log in KB. Default is 1KB.

  • concurrent-batches: (Optional) The number of concurrent threads for the benchmark. Default is 2.

You can check how much data throughput your server can handle, based on the number of threads, with the following command:

elastic_blockchain_proxy verify-cpu-bench
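
All parameters are optional; for example, to run a quicker benchmark with more threads you could execute the following (the values are illustrative only):

elastic_blockchain_proxy verify-cpu-bench \
--n-logs 100000 \
--concurrent-batches 4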

Besides these parameters, additional configuration entries are available in the following files:

  • /neteye/shared/elastic-blockchain-proxy/conf/elastic_blockchain_proxy.toml

  • /neteye/shared/elastic-blockchain-proxy/conf/elastic_blockchain_proxy_fields.toml

  • /neteye/shared/elastic-blockchain-proxy/conf/elastic_blockchain_proxy.d/verification_history.toml

To customize the available options, described below, one or more files must be created in the /neteye/shared/elastic-blockchain-proxy/conf/elastic_blockchain_proxy.d folder, specifying on each line an option and its value in the OPTION = VALUE format, e.g. message_queue_size = 20000.
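
For example, to increase the in-memory queue size you could create a file /neteye/shared/elastic-blockchain-proxy/conf/elastic_blockchain_proxy.d/custom.toml (the file name is arbitrary) with the following content:

message_queue_size = 20000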

The /neteye/shared/elastic-blockchain-proxy/conf/elastic_blockchain_proxy.toml file contains the following configuration entries:

  • logger:

    • level: The Logger level filter; valid values are trace, debug, info, warn, and error. The logger level filter can specify a list of comma separated per-module specific levels, for example: warn,elastic_blockchain_proxy=debug

  • failure_max_retries: A predefined maximum amount of retry attempts for writing logs to Elasticsearch and for writing DLQ files. A value of 0 means that no retries will be attempted.

  • failure_sleep_ms_between_retries: A fixed amount of milliseconds to sleep between each retry attempt.

  • data_dir: The path to the folder that contains the key.json file. If the file is not present, El Proxy will generate one when needed.

  • data_backup_dir: The path where El Proxy creates a backup copy of the automatically generated keys. For security reasons, the user is in charge of moving these copies to a protected place as soon as possible.

  • dlq_dir: The path where the Dead Letter Queue with the failed logs is saved.

  • dlq_recovered_dir: The path where the successfully recovered logs are saved after running the dlq recover command.

  • message_queue_size: The size of the in-memory queue where messages are stored while waiting to be processed

  • web_server:

    • address: The address where El Proxy Web Server will listen for HTTP requests

    • tls: TLS options to enable HTTPS (See the Enabling TLS section below)

  • elasticsearch:

    • url: The URL of the Elasticsearch server

    • timeout_secs: The timeout in seconds for a connection to Elasticsearch

    • ca_certificate_path: Path to the CA certificate needed to verify the identity of the Elasticsearch server

    • auth: The authentication method to connect to the Elasticsearch server (See the Elasticsearch authentication section below)
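
For illustration only, an excerpt of elastic_blockchain_proxy.toml combining some of the entries above could look like the following sketch (all values, including the listen address, are examples, not necessarily the shipped defaults):

failure_max_retries = 3
failure_sleep_ms_between_retries = 1000
message_queue_size = 10000

[logger]
level = "warn,elastic_blockchain_proxy=debug"

[web_server]
address = "0.0.0.0:4748"

[elasticsearch]
url = "https://elasticsearch.neteyelocal:9200"
timeout_secs = 30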

The /neteye/shared/elastic-blockchain-proxy/conf/elastic_blockchain_proxy_fields.toml file contains the following configuration entry:

  • include_fields: List of fields of the log that will be included in the signature process. Every field not included in this list will be ignored. The dot symbol is used as a nested-field expander; for example, the field “name1.name2” refers to the “name2” field nested inside “name1”.
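
A minimal sketch of this entry, with example field names only:

include_fields = ["@timestamp", "host.name", "event.category", "message"]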

The /neteye/shared/elastic-blockchain-proxy/conf/elastic_blockchain_proxy.d/blockchain_state_history.toml file contains the following configuration entry:

  • blockchain_state_history_dir: path to the folder that contains the Blockchain State History (BSH) files used by the verification process.

Elasticsearch Templates and Retentions

Several ILM policies are provided by default and should cover most of the retention use cases. A set of Index Templates installed by El Proxy matches indexes against index patterns to map each index to one ILM policy.

For example, the data stream with name filebeat-7.17.4-elproxysigned-exampletenant-6_months-exampletag will match the index pattern filebeat-7.17.4-elproxysigned-*-6_months-* of the template filebeat-7.17.4-el-proxy-6_months-retention, which assigns the 6_months_retention lifecycle policy to the index.

The retention policies available by default are the following:

  • 3_months : delete index after 90 days from creation

  • 6_months : delete index after 180 days from creation

  • 1_year : delete index after 365 days from creation

  • 2_years : delete index after 730 days from creation

  • default (2_years) : delete index after 730 days from creation

  • infinite : never delete the index

Some of the Index Templates installed by El Proxy are composable Index Templates (with the associated Component Templates), while others are Legacy Index Templates.

The main difference between composable Index Templates and Legacy Index Templates is that when multiple Legacy Index Templates match an index they are merged together, using the templates’ order as a discriminant to address conflicting settings. On the contrary, if multiple composable Index Templates match an index, only one of them is applied, based on its priority. Moreover, when a composable Index Template matches an index, it completely overrides all the Legacy Index Templates that match the index.

This means that if a composable Index Template has index pattern filebeat-7.17.4-elproxysigned-*-6_months-* while a Legacy Index Template has index pattern filebeat-*-elproxysigned-*, an index with name filebeat-7.17.4-elproxysigned-exampletenant-6_months-exampletag will only receive the settings of the composable Index Template.

Starting from Elastic 7.17, the Index Templates for Filebeat are shipped as composable Index Templates. To allow El Proxy indices generated by Beats agents (at the moment Filebeat, Winlogbeat and Auditbeat) to have both the settings coming from the Beats Index Template and the settings needed by El Proxy, El Proxy by default installs a set of composable Index Templates that only import the following Component Templates:

  1. neteye-autoexpand-replicas: sets the index.auto_expand_replicas to 0-1.

  2. el-proxy-mappings: defines static and dynamic mapping for El Proxy objects EBP_METADATA and ES_BLOCKCHAIN.

  3. <beats_agent_name>-<beats_agent_version>@settings: configures the settings for the Beats agent <beats_agent_name> with version <beats_agent_version>. This Component Template is generated by NetEye and is the transformation of the settings of the Composable Index Template <beats_agent_name>-<beats_agent_version> into a Component Template.

  4. <beats_agent_name>-<beats_agent_version>@mappings: configures the mappings for the Beats agent <beats_agent_name> with version <beats_agent_version>. It is constructed in a similar way as the <beats_agent_name>-<beats_agent_version>@settings Component Template.

  5. el-proxy-<retention_name>-retention: sets the lifecycle of the index to the ILM policy <retention_name>_retention.

  6. <index_template_name>@custom: contains possible User customizations for the current Index Template, where <index_template_name> represents the name of the current Index Template.

All default Index Templates, Component Templates (with the exception of the @custom Component Templates) and ILM policies are automatically overwritten during every neteye_secure_install. For this reason you must never modify them.

Generating Templates for Additional Beats

El Proxy ships by default the composable Index Templates for the Filebeat, Auditbeat and Winlogbeat agents.

In case you install templates for any additional Beats agents (e.g. Packetbeat), you must create El Proxy composable Index Templates for this Beats agent. If you do not perform this operation, the El Proxy retention policy and mappings will not be correctly applied to that Beats agent’s indices.

Supposing you want to create the El Proxy Index Templates for the Beats agent packetbeat with version 8.8.2, you can execute the script as follows:

bash /usr/share/neteye/elasticsearch/scripts/elasticsearch-generate-el-proxy-beats-template.sh \
-b "packetbeat" -v "8.8.2"

Hint

If for any reason the composable Index Template for the Beats agent has a name different from the standard <beats_agent_name>-<beats_agent_version>, you can call the script with the argument -t <index_template_name>.
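
For example, assuming the hypothetical template name my-packetbeat-template, the call would become:

bash /usr/share/neteye/elasticsearch/scripts/elasticsearch-generate-el-proxy-beats-template.sh \
-b "packetbeat" -v "8.8.2" -t "my-packetbeat-template"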

Customizing the Composable Index Templates

As described in Section Elasticsearch Templates and Retentions, the composable Index Templates installed by El Proxy specify no settings; they only import the settings from a set of Component Templates.

Since all the Index Templates installed by El Proxy are overwritten during every neteye_secure_install, you must not add your customizations by modifying the Index Templates in place.

Instead, supposing you need to customize the composable Index Template filebeat-7.17.4-el-proxy-1_year-retention, you can simply add your customizations to the Component Template filebeat-7.17.4-el-proxy-1_year-retention@custom. This Component Template is empty by default, is included by default in the related Index Template and will not be overwritten during the neteye_secure_install.
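
As a sketch, such a customization could be applied from the Kibana Dev Tools console with a standard Elasticsearch API call; the number_of_shards setting here is just an example of a possible customization, not a recommended value:

PUT _component_template/filebeat-7.17.4-el-proxy-1_year-retention@custom
{
  "template": {
    "settings": {
      "index": {
        "number_of_shards": 1
      }
    }
  }
}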

Configuring El Proxy Retentions

El Proxy retention policies are to be configured via the Icinga Director. Retention policies are host-based, so a blockchain retention should be set as one of the Custom properties in the Host Configuration Panel, as described in Enabling El Proxy.

Some particular logs may require a special, custom retention configuration to be applied. Below you will find information on how to customize Logstash pipelines to apply different retention policies in El Proxy.

To specify which ILM policy to use among the available ones, you can set the field EBP_METADATA.retention in a Logstash filter.

For example, if you want to set a 6-month policy for authentication logs, you can create a new filter in the directory /neteye/shared/logstash/conf/conf.ebp.d called 1_f10000_auth_retention.filter, which contains the following condition:

filter {
  if "authentication" in [event][category] {
    mutate {
      replace => {"[EBP_METADATA][retention]" => "6_months" }
    }
  }
}

If the index pattern does not match any valid retention policy name, the default (2_years) ILM policy will be applied.

Changing the retention policy for an existing blockchain results in the creation of a new blockchain: a new key will be generated and must be saved in a safe place. All new logs will be saved in the new blockchain while the old blockchain will not receive any new log. For the verification process you need to use the appropriate key for each blockchain.

The retention policy change is not retroactive: the old blockchain will maintain the original retention period.

Adding Custom Retentions to El Proxy

As described in Section Elasticsearch Templates and Retentions, El Proxy ships by default a set of retention policies and maintains a mapping between their names and the corresponding duration in days in the file /neteye/shared/elastic-blockchain-proxy/conf/retention_policies.toml.

During the El Proxy Verification Setup, the /etc/neteye-dpo file should specify the retention of a blockchain to be verified.

Apart from the set of retention policies El Proxy ships by default, you can also add a custom retention of your own choice.

For instance, if you would like to use a custom retention policy for a blockchain, such as a 1 month retention, you have to:

  • Execute the script to add a custom retention for El Proxy by specifying a name for the retention (only alphanumeric characters and underscore are valid) and a duration (in days):

    /usr/share/neteye/scripts/elastic-blockchain-proxy/add_el_proxy_retention.py --name 1_month --duration-in-days 30
    
  • Create a new retention entry in the Icinga Director:

    • In the Icinga Director, find the Data lists tab under Provide Data Lists

    • In the Blockchain Retention List, add a new retention entry; specify as key the exact same name you used at the step above.

    • The new blockchain retention will now be available in the selection of retentions in the Host Configuration Panel.

You must never replace the deletion period with a shorter one, otherwise logs will be missing in the middle of the blockchain and the verification will fail.

Tuning El Proxy blockchain rollover

When dealing with El Proxy data streams that receive an unusual amount of logs (either fewer or more than normal), you may want to tune the default rollover of the data streams underlying the blockchain in order to avoid oversharding.

This configuration tuning is meant to be done only by advanced Elastic users, so many underlying concepts are taken for granted.

Let’s say, for example, that the blockchain *-*-elproxysigned-master-6_months-0 is receiving so few logs that you’d like to roll over its data streams every 30 days (instead of the default 14 days).

To achieve this you should:

  1. Create a new ILM

  2. Create a new Index Template matching the blockchain and applying the newly created ILM

Below you will find how to perform these two steps in order to comply with El Proxy requirements.

Create a new ILM

In Kibana Stack Management, enter the edit mode of the ILM policy corresponding to the retention of the blockchain, in this case 6_months_retention.

Duplicate the ILM policy by enabling the “Save as new policy” flag, and give it a meaningful name, for example elproxysigned-master-6_months-0.

Modify the rollover settings as you prefer (in our case, setting Maximum age to 30 days), leave the other settings as they are and save.

Create a new Index Template

In Kibana Stack Management, clone the El Proxy template corresponding to the retention of the blockchain, in this case el-proxy-6_months_retention.

At this point you should:

  1. Choose an appropriate name, for example elproxysigned-master-6_months-0

  2. Set the index pattern to the one of your blockchain, in our case *-*-elproxysigned-master-6_months-0

  3. Set the priority to 1000 to ensure that your template takes priority over the other templates

  4. Unset the Add metadata flag since this template is not managed by NetEye

  5. Under Component template remove the El Proxy component template that sets the ILM for the index, in our case el-proxy-6_months-retention

  6. Under Index settings apply the ILM you created above, in our case:

    {
      "lifecycle": {
        "name": "elproxysigned-master-6_months-0"
      }
    }
    
  7. Proceed to create the template

Handling Logs in Dead Letter Queue

Logs which could not be indexed in Elasticsearch are written to the DLQ of El Proxy.

Since events written in the DLQ are not part of the secured blockchain indices, the NetEye Administrator needs to recover or explicitly acknowledge the presence of the events in the DLQ.

As soon as any log ends up in the DLQ, the Icinga 2 service logmanager-blockchain-creation-neteyelocal will enter the CRITICAL state, indicating that some log could not be indexed in the blockchain.

To recover the logs, the NetEye admin can use the dlq recover command, which will try to write the DLQ logs on Elasticsearch in their original Data Stream (see How the data stream name is determined). If some of the logs cannot be recovered, they will remain in the DLQ.

Warning

To ensure that recovered logs are not kept in Elasticsearch longer than their retention period, please execute the dlq recover command as soon as logs end up in DLQ.

The retention of the logs starts from the moment the logs are written to Elasticsearch, which means that if, for example, you recover logs from the DLQ after one month, those logs will be kept in Elasticsearch one month longer than the expected retention period.

If for any reason you cannot recover the DLQ via the dlq recover command, you can acknowledge the presence of logs in the DLQ by:

  1. Ensuring that the logs in the DLQ do not contain any sign of malicious activities.

    To do so, please inspect the content of the DLQ log files /neteye/shared/elastic-blockchain-proxy/data/dlq/<customer>/<filename>, where <customer> is the name of the customer who generated the log, and <filename> is the name of the log file.

  2. Moving all the DLQ log files /neteye/shared/elastic-blockchain-proxy/data/dlq/<customer>/<filename> to the path /neteye/shared/elastic-blockchain-proxy/data/dlq_archived/<customer>/<filename>

    Hint

    To move all the DLQ files to the archive folder, you can execute the following CLI commands:

    mkdir -p /neteye/shared/elastic-blockchain-proxy/data/dlq_archived/
    mv /neteye/shared/elastic-blockchain-proxy/data/dlq/* /neteye/shared/elastic-blockchain-proxy/data/dlq_archived/
    

How to Setup the Automatic Verification of Blockchains

This chapter illustrates the best practices for the setup of a secure and automatic verification of El Proxy blockchains. We will set up an environment where the verification of a specific blockchain is performed every day. Moreover, we will describe how to report the blockchain verification results in Icinga 2 via Tornado Webhook Collectors and Tornado rules.

This guide is organized as follows: we first introduce the Prerequisites and then describe the NetEye setup. After that, we provide indications on how to test the deployed configuration.

Prerequisites

  1. To start, you will need the file containing the initial key of the blockchain to be verified

  2. The automated verification needs to be set up on a machine which is external to the NetEye installation and accessible only by you, referred to as the DPO machine. The DPO machine must meet the following requirements:

    • Run a Linux distribution

    • Support Docker

    • The DPO machine must be reachable from the NetEye Master on port 22 for the SSH connection during the setup procedure

      Note

      The SSH connection will be necessary only when executing the setup command and will be performed using password authentication. Credentials to access the DPO machine are never stored inside NetEye.

    • The DPO machine must reach the NetEye Master on port 9200 for contacting Elasticsearch.

    • The DPO machine must reach the webhook_collector_host on port 443 to connect to the Tornado Webhook Collector.

    • The DPO machine must reach DockerHub to be able to pull El Proxy container images

  3. The DPO machine must be a known host on the NetEye Master. This is needed for the setup procedure to safely connect to the DPO machine through SSH. Run the following command in order to add the DPO machine to the list of known hosts on the NetEye Master: ssh-keyscan <dpo-hostname> >> /root/.ssh/known_hosts.

  4. The DPO machine and the NetEye installation must be configured in the same timezone

  5. The backup of the DPO machine must be performed on the customer’s part

Note

If you would like to set up the automatic verification of the blockchain on a Windows system, you can download the Docker image from DockerHub and perform a manual setup of the automatic verification. For more information, please refer to the official channels: sales, consultants, or the support portal.

Verification Setup

Step 1. Configure the blockchain verification

Configuration is to be carried out on the NetEye Master in the /etc/neteye-dpo JSON-formatted file.

As an example, we can configure, using the root user on the dpo-host DPO machine, the verification of a blockchain having a 6_months retention. The blockchain belongs to the test_tenant tenant, and we would like the verification to run each day at 7 PM, sending the results to a Tornado Webhook Collector hosted at satellite1 with the test_token token. In this case, the configuration in the /etc/neteye-dpo file is built as follows:

{
  "dpo_host": "dpo-host",
  "dpo_user": "root",
  "blockchains_verification": [
    {
      "tenant": "test_tenant",
      "retention": "6_months",
      "tag": "0",
      "webhook_host": "satellite1",
      "webhook_token": "test_token",
      "cron_scheduling": {
        "minute": "0",
        "hour": "19",
        "day": "*",
        "month": "*",
        "week_day": "*"
      }
    }
  ]
}

For a full list of configuration file attributes, please consult the neteye dpo setup section.

Step 2. Run the neteye dpo setup command

The command should be run from the NetEye Master. Upon running the command, you will be prompted to provide the password of the user specified in the configuration, which is used to connect to the DPO machine, as well as the initial key of the blockchain.

Note

If you are not using the root account, you will also be asked for the password to be used to perform some actions with sudo. If this password is the same as the one used to connect via SSH to the DPO machine, you can press Enter to reuse it for the commands execution.

The command will connect to the DPO machine and set up the verification, launching a container for each configured verification, which will then be performed at the specified schedule.

Note

Among the different actions performed by the command, it also tries to install Docker on the system. However, the automatic installation may fail on some operating systems which require a more specific installation procedure. If the command fails due to this error, please manually install Docker on the system and then run the command with the --skip-tags install_docker_packages option:

neteye dpo setup --skip-tags install_docker_packages

Step 3. Configure a Webhook

The Tornado Webhook Collector is to be configured on either the NetEye Master or a NetEye Satellite. It will take care of receiving the El Proxy blockchain verification results from the DPO machine and forwarding them to Tornado, which will then set an Icinga 2 status.

  1. On the node where the Tornado Webhook Collector is running, create the file /neteye/shared/tornado_webhook_collector/conf/webhooks/elproxy_verification.json

  2. Set its content to:

    {
      "id": "elproxy_verification",
      "token": "<webhook_token>",
      "collector_config": {
        "event_type": "elproxy_verification",
        "payload": {
          "data": "${@}"
        }
      }
    }
    
  3. Restart the Tornado Webhook Collector service to load the webhook

Step 4. Configure a Rule in Tornado to set a status in Icinga 2

Via NetEye Tornado GUI, create a Rule that matches the Events with type elproxy_verification and executes an Action of type SMART_MONITORING_CHECK_RESULT, where we set as check_result -> exit_status the value event.payload.data.exit_status and as check_result -> plugin_output the value event.payload.data.output.

Moreover, if you are sending the results of the verification of multiple blockchains to the same Tornado Webhook Collector, you can filter them by using the event.payload.data.tenant, event.payload.data.retention and event.payload.data.tag values.

Configure the rest of the Rule as you prefer.

Testing the Configuration

Now everything should be configured correctly. To test your configuration, on the DPO machine, you can Manually trigger a pre-configured automatic verification.

Manually trigger a pre-configured automatic verification

You may want to manually trigger a verification even though the automatic verification has already been configured, for example to immediately report the status of the blockchain to the management.

  1. Connect to the DPO machine using the same credentials specified when performing the setup of the automatic verification.

    Note

    If you connected to the DPO machine with a user different from root, run the following commands using sudo

  2. Open a shell in the verification container you would like to trigger manually. To do so, run docker exec -it elproxy-verify-<tenant>-<retention>-<tag> bash, where the tenant, retention and tag match the properties of your blockchain.

    Hint

    To check which verification containers are running on the DPO machine, you can execute the docker ps command

  3. Extract the verification command from the cron file defining its schedule, located at /etc/cron.d/elproxy-verify-<tenant>-<retention>-<tag>. For example, for the blockchain defined in automatic verification, the file, located at /etc/cron.d/elproxy-verify-test_tenant-6_months-0, will have the following content:

    0 19 * * * root /bin/elproxy_verify_blockchain_and_send_result test_tenant 6_months 0 satellite1 test_token 1000
    

    In this case, the verification command will be /bin/elproxy_verify_blockchain_and_send_result test_tenant 6_months 0 satellite1 test_token 1000

  4. Run the extracted verification command.

    Expected results:

    The verification is performed and the result is sent to the configured Tornado Webhook Collector. The output of the command is saved in the /root/elproxy-verification/logs folder, with the file being identified by the verified blockchain and the current timestamp.

Manual Blockchain Verification

This section defines the guidelines for the manual verification of a blockchain. The following naming convention is used throughout the guidelines:

  • <tenant>: The tenant (also referred to as “customer” in the previous chapters) of the blockchain that needs to be verified

  • <retention>: the retention of the blockchain to be verified

  • <tag>: the tag of the blockchain to be verified

Note

While following this guide and executing commands, remember to always substitute the placeholders <…> with their real value.

The guide is based on two examples, which should cover most of the use cases.

For both examples, we assume the following:

  1. El Proxy has been correctly set up by following the NetEye Setup section

  2. Every month has exactly 30 days (to facilitate the date calculations)

Manual verification with existing BSH file

In this example, we assume the following:

  1. The current date is 2022-08-30

  2. Stored on the NetEye machine, there is a non-empty blockchain created on 2022-05-01 with the following properties:

    1. TENANT: exampletenant

    2. RETENTION: 3_months

    3. TAG: exampletag

  3. The BSH file is stored under its default path on the DPO machine and contains 90 entries (one entry for each day) from 2022-05-01 to 2022-07-30

Note that, on the current date, the blockchain is already missing some logs. Since the retention policy is three months and the blockchain was created on 2022-05-01, the logs indexed from 2022-05-01 to 2022-05-30 have been automatically deleted by Elasticsearch.

In order to start the verification manually, the DPO admin needs to connect to the running verification container, see Manually trigger a pre-configured automatic verification, and then run:

/usr/bin/elastic_blockchain_proxy verify \
--key-file "/root/elproxy-verification/data/keys/elproxysigned-exampletenant-3_months-exampletag_key.json" \
--tenant "exampletenant" \
--retention "3_months" \
--tag "exampletag" \
--es-auth-method pemcertificatepath \
--es-client-cert /root/elproxy-verification/conf/certs/neteye_ebp_verify_exampletenant.crt.pem \
--es-client-key /root/elproxy-verification/conf/certs/private/neteye_ebp_verify_exampletenant.key.pem

Internally, the verify command tries to retrieve the iteration from which to start the verification; based on the current date and the retention policy of the blockchain, El Proxy expects the first iteration of the blockchain to be the first log indexed on 2022-06-01.

This date was calculated as follows:

date = current_date - retention_policy_in_days + 1

Where date is the resulting date, current_date is the current date, and retention_policy_in_days is the retention policy in days of the blockchain. For completeness, the final + 1 prevents the resulting date from being the date on which Elasticsearch applies the retention policy.
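
Applied to this example, and recalling the simplifying assumption that every month has 30 days: date = 2022-08-30 - 90 days + 1 day = 2022-06-01.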

To retrieve the actual first iteration from which to start the verification, El Proxy reads the BSH file, searching for the entry corresponding to date. As an example, a section of the BSH file could look like the following one:

"2022-05-30": {
    "first_iteration": 15
    "last_iteration": 20
},
"2022-06-01": {
    "first_iteration": 21
    "last_iteration": 23
},
"2022-06-02": {
    "first_iteration": 24
    "last_iteration": 30
}

In this case, the first iteration from which to start the verification is iteration 21. At this point, El Proxy can start verifying the logs from the retrieved first iteration. If the verification is successful, the old BSH file will be overwritten with a new one containing one entry for each day from 2022-06-01 to the current date.

Manual verification of a newly created blockchain

In this example, we suppose the following:

  1. The current date is 2022-09-01

  2. Stored on the NetEye machine, there is a non-empty blockchain created on the current date with the following properties:

    1. TENANT: exampletenant

    2. RETENTION: 3_months

    3. TAG: exampletag

  3. The BSH file does not exist

To start the verification manually, the DPO admin needs to connect to the running verification container, see Manually trigger a pre-configured automatic verification, and then run:

/usr/bin/elastic_blockchain_proxy verify \
--key-file "/root/elastic-blockchain-proxy/data/keys/elproxysigned-exampletenant-3_months-exampletag_key.json" \
--tenant "exampletenant" \
--retention "3_months" \
--tag "exampletag" \
--es-auth-method pemcertificatepath \
--es-client-cert /root/elproxy-verification/conf/certs/neteye_ebp_verify_exampletenant.crt.pem \
--es-client-key /root/elproxy-verification/conf/certs/private/neteye_ebp_verify_exampletenant.key.pem

Internally, the verify command tries to retrieve the iteration from which to start the verification. Since the BSH file is missing, El Proxy tries to get the first existing log by querying Elasticsearch.

In this example, since the blockchain is new and all its logs are still present, Elasticsearch will return the log with iteration zero as the first log in the blockchain.

El Proxy will then start the verification from the log with iteration zero. If the verification succeeds, a BSH file will be created and stored on the DPO machine.

Handling Blockchain Corruptions

The correct state of a blockchain can be checked by running the verification process, which provides, at the end of its execution, a report about the consistency of the blockchain.

If everything is fine, the verification process will complete successfully; otherwise, the execution report will contain a list of all the errors encountered. For example:

--------------------------------------------------------
--------------------------------------------------------
Verify command execution completed.
Blockchain verification process completed with 2 errors.
--------------------------------------------------------
Errors detail:
--------------------------------------------------------
Log verification error 0 details:
 - Error code: Log Verification Bad Iteration
 - Error type: Missing logs
 - Missing logs from iteration 0 (inclusive) to iteration 1 (inclusive)
 - CorruptionId: eyJmcm9tX2l0ZXJhdGlvbiI6MCwidG9faXRlcmF0aW9uIjoxLCJwcmV2aW91c19oYXNoIjpudWxsLCJuZXh0X2hhc2giOiI3YzQyNmU1OWM1ZGQ0MTkwODJkZGQ0ZWY1Mzk3ZDdhMGM5NmJiYjQxNmVkMWNkOTg3OTMyNDFiYjg5NTY4YjBjIiwiYWNrbm93bGVkZ2VfYmxvY2tjaGFpbl9pZCI6ImFja25vd2xlZGdlLWVscHJveHlzaWduZWQtbXljdXN0b21lcjItaW5maW5pdGUtMCJ9
--------------------------------------------------------
Log verification error 1 details:
- Error code: Log Verification Wrong Hash
- Error type: Wrong Log Hash
- Failed iteration_id: 16
- Expected hash: 34f5bd40d5042ba289d4c5032c75a426306a57e41c0703df4c7698df104f75ed
- Found hash   : 593bcd654fd80091105a21f548c1e6a8dd07c80380e72ceeeb1a3e7b126d26bb
- CorruptionId: eyJmcm9tX2l0ZXJhdGlvbiI6MTYsInRvX2l0ZXJhdGlvbiI6MTYsInByZXZpb3VzX2hhc2giOiIwNTE0YmY3YTBmNmRmNmNhMjg1YTIwYTM2OGFiNTA5M2I5NjgxMWZkZWFmMmQ1YThhYjFkOTYwYzgyNDRiNzJlIiwiYWNrbm93bGVkZ2VfYmxvY2tjaGFpbl9pZCI6ImFja25vd2xlZGdlLWVscHJveHlzaWduZWQtbmV0ZXllLW9uZV93ZWVrLTAifQ==
--------------------------------------------------------
--------------------------------------------------------

For more details about possible errors that can be reported by the El Proxy verification and information on the recommended actions, please review the associated errors table.

Errors different from the Empty Blockchain Corruption error are provided with a CorruptionId that uniquely identifies them. The CorruptionId is a base64 encoded JSON that contains data to identify a specific corruption of a blockchain.

Once the admin has reviewed and investigated the conditions that led to the error, it is possible to fix the corruptions in one of the following ways:

  1. Recover the logs via the dlq recover command, in case the corruption was caused by logs that ended up in DLQ

  2. Acknowledge the corruptions via the acknowledge or acknowledge-range commands

Acknowledging Blockchain Corruptions

The acknowledge or acknowledge-range commands create an acknowledgment for a specific CorruptionId so that, when the verify command is executed next time, the linked error in the blockchain will be considered as resolved.

The acknowledgment data is persisted in a dedicated Elasticsearch index whose name is generated from the name of the corrupted data stream. For example, if the corrupted data stream is *-*-elproxysigned-neteye-one_week-0, then the name of the acknowledge blockchain will be acknowledge-elproxysigned-neteye-one_week-0.

For instance, we can acknowledge the first error reported in the above example by connecting to the tenant specific container on the DPO machine, as described here, and executing the following command:

elastic_blockchain_proxy acknowledge \
--key-file /path/to/secret/key \
--es-auth-method 'pemcertificatepath' \
--es-client-cert '/root/elproxy-verification/conf/certs/ElasticBlockchainProxyAcknowledge.crt.pem' \
--es-client-key '/root/elproxy-verification/conf/certs/private/ElasticBlockchainProxyAcknowledge.key.pem' \
--corruption-id=eyJmcm9tX2l0ZXJhdGlvbiI6MCwidG9faXRlcmF0aW9uIjoxLCJwcmV2aW91c19oYXNoIjpudWxsLCJuZXh0X2hhc2giOiI3YzQyNmU1OWM1ZGQ0MTkwODJkZGQ0ZWY1Mzk3ZDdhMGM5NmJiYjQxNmVkMWNkOTg3OTMyNDFiYjg5NTY4YjBjIiwiYWNrbm93bGVkZ2VfYmxvY2tjaGFpbl9pZCI6ImFja25vd2xlZGdlLWVscHJveHlzaWduZWQtbXljdXN0b21lcjItaW5maW5pdGUtMCJ9

The acknowledge command is intended to be used for the acknowledgment of a single corruption and it is not optimized for the acknowledgment of multiple corruptions. If we want to acknowledge multiple corruptions, we can use the acknowledge-range command.

For example, assuming all the corruptions reported in the example above occurred from January 1st 2023 to July 31st 2023 (UTC time) on the blockchain of tenant mycustomer, retention 6_months and tag 0, we can run, always from the blockchain specific container on the DPO machine, the following command:

elastic_blockchain_proxy acknowledge-range \
--tenant 'mycustomer' \
--retention '6_months' \
--tag '0' \
--key-file '/path/to/secret/key' \
--from '2023-01-01T00:00:00Z' \
--to '2023-07-31T00:00:00Z' \
--es-read-auth-method 'pemcertificatepath' \
--es-read-client-cert '/root/elproxy-verification/conf/certs/neteye_elproxy_verify_mycustomer.crt.pem' \
--es-read-client-key '/root/elproxy-verification/conf/certs/private/neteye_elproxy_verify_mycustomer.key.pem' \
--es-write-auth-method 'pemcertificatepath' \
--es-write-client-cert '/root/elproxy-verification/conf/certs/ElasticBlockchainProxyAcknowledge.crt.pem' \
--es-write-client-key '/root/elproxy-verification/conf/certs/private/ElasticBlockchainProxyAcknowledge.key.pem'

For more information about the acknowledge and acknowledge-range commands please refer to the El Proxy Commands section.

El Proxy Metrics

El Proxy performance metrics can be useful for inspecting the workload, identifying bottlenecks, and debugging issues. For more info about the individual metrics collected by El Proxy, please refer to the El Proxy REST Endpoints section.

You can configure the parameters for retrieving and storing metrics via the GUI under Configuration / Modules / kibana / Configuration. There are two parameters available:

  • El Proxy Monitoring Retention Policy: defines the number of days for which metrics are retained in InfluxDB and defaults to 7 days, after which data will no longer be available.

  • El Proxy Monitoring Polling Interval: sets how often the Telegraf agent queries El Proxy APIs to gather metrics and defaults to 2 seconds.

To apply changes, you can either run neteye_secure_install for both options or execute /usr/share/neteye/kibana/scripts/apply_elproxy_monitoring_retention_policy.sh or /usr/share/neteye/kibana/scripts/apply_elproxy_monitoring_polling_interval.sh, depending on the parameter to be changed.

Note

On a NetEye Cluster, execute the command on any PCS node.

NetEye also provides an out-of-the-box solution for visualizing the metrics, consisting of a set of dashboards that allow you to analyze the work of El Proxy. To check out the dashboards, click on the ITOA module and navigate to the El Proxy folder of the Grafana Organization Main Org.. Here you will find dashboards covering El Proxy troubleshooting and performance.

El Proxy dashboards rely on the ElProxyMonitoring Grafana Data Source, which is also automatically created by NetEye; you should never modify it manually.

Note

The dashboards present in the El Proxy folder and the ElProxyMonitoring Data Source are managed by NetEye, and you should never modify them manually because they will be overwritten by NetEye itself. If you would like to modify one of the dashboards, first copy it and then modify the copy.