Integrate Elasticsearch and Logstash

As of PMUL 22.1, you can route events to Elasticsearch or Logstash instances located either on the customer premises or in the cloud.

You can use HTTP and HTTPS communication for both destination types. The HTTPS communication options for on-premises instances are different from their cloud counterparts.

The destination types support authenticated connections. Credentials are stored in the file /opt/pbul/dbs/pbelkcred.db.

Configure Elasticsearch and Logstash

There are three ways to configure Elasticsearch and Logstash instances:

  • Elasticsearch over HTTP (on-premises instance)
  • Elasticsearch over HTTPS (on-premises instance)
  • Elasticsearch over HTTPS (cloud instance)

Elasticsearch over HTTP (On-Premises Instance)

To use this configuration, add elkinstances to /etc/pb.settings on the log server:

### Elasticsearch
elkinstances        elasticsearch=http://<elk-host-name>:9200

In this scenario, an Elasticsearch instance is configured on elk-host-name to listen on TCP:9200. When making a similar change in your setup, do the following:

  1. Stop the pblighttpd service (pblighttpd_svc.sh stop).
  2. Add your Elasticsearch server to /etc/pb.settings.
  3. Start the pblighttpd service (pblighttpd_svc.sh start).
  4. Run a pbrun <command> on any server to create and write events to your log server.

After performing the steps above, verify that the Accept and Finish events were sent to Elasticsearch. To do this, query the Elasticsearch host using a command like the following:

curl -X GET "http://<elk-host-name>:9200/pmul-eventlog-*/_search?pretty" \
-H 'Content-Type: application/json' -d'
{
  "query": {
    "match_all": { }
  },
  "sort": [
    {
      "@timestamp": "desc"
    }
  ]
}
'

You will see a lot of output data, most of which is for the Accept event. If you don't see data, check the /var/log/pbrest.log file to see whether a problem is reported.

Elasticsearch over HTTPS (On-Premises Instance)

To configure an Elasticsearch on-premises instance over HTTPS:

  • Create certificates on the Elasticsearch machine.
  • Configure Elasticsearch to use the certificates.
  • Configure PMUL to use HTTPS communication with the generated client certificates.
  • (Optional) Configure Elasticsearch and PMUL to support authentication.

Create the Certificate File (certs.zip) on the Elasticsearch System

To create a certificate, you can use the following website for guidance: https://www.elastic.co/blog/configuring-ssl-tls-and-https-to-secure-elasticsearch-kibana-beats-and-logstash#create-ssl. Details are summarized in this section.

Change <elk-host-name> to the name of your Elasticsearch server.

# cd /usr/share/elasticsearch
# service elasticsearch stop
# mkdir -p tmp/certs
# vim tmp/certs/instances.yml
# cat tmp/certs/instances.yml
instances:
  - name: 'elk-host-name'
    dns: [ 'elk-host-name' ]
  - name: 'client'
    dns: [ 'client' ]
# ./bin/elasticsearch-certutil cert --keep-ca-key --pem --in tmp/certs/instances.yml \
--out tmp/certs/certs.zip
# cd tmp/certs
# unzip certs.zip
# mkdir /etc/elasticsearch/certs
# cp ca/ca.crt /etc/elasticsearch/certs
# cp <elk-host-name>/<elk-host-name>.crt /etc/elasticsearch/certs
# cp <elk-host-name>/<elk-host-name>.key /etc/elasticsearch/certs

Configure Elasticsearch to Use Certificates

  1. Based on the above commands, add the following lines to /etc/elasticsearch/elasticsearch.yml to use HTTPS:
    #
    # -------------------------------- Security ------------------------------------
    #
    xpack.security.http.ssl.enabled: true
    xpack.security.http.ssl.key: certs/elk-host-name.key
    xpack.security.http.ssl.certificate: certs/elk-host-name.crt
    xpack.security.http.ssl.certificate_authorities: certs/ca.crt
    
  2. Run service elasticsearch start as the root user. If that fails, run service elasticsearch status and follow the suggested steps.
  3. When Elasticsearch starts correctly, verify insecure HTTPS connectivity from the log server using a command like the following; expect a response similar to the one shown:
    $ curl --insecure -X GET https://<elk-host-name>:9200/?pretty \
     -H 'Content-Type: application/json'
    {
      "name" : "elk-host-name",
      "cluster_name" : "elasticsearch",
      "cluster_uuid" : "u1EW_94MRCGnQkvoO2tYdQ",
      "version" : {
        "number" : "7.14.1",
        ...
      },
      "tagline" : "You Know, for Search"
    }
    
  4. After --insecure works, use the certificate authority (CA) certificate. Copy the CA certificate from the Elasticsearch server to the PMUL log server using commands similar to the following:
    # mkdir /opt/pbul/certs
    # cd /opt/pbul/certs
    # scp root@<elk-host-name>:/usr/share/elasticsearch/tmp/certs/ca/ca.crt .
    
  5. Use curl with the CA certificate:
    $ curl -X GET https://<elk-host-name>:9200/?pretty \
     --cacert /opt/pbul/certs/ca.crt -H 'Content-Type: application/json'
    

The output from the --cacert command is the same as the output from the --insecure command.

Deploy the Client Certificates to PMUL

  1. With insecure communication established, and the CA certificate shown to work, migrate the created client certificates to PMUL, and then test with curl. The curl command returns the same data as shown in the previous procedure.
    # scp root@<elk-host-name>:/usr/share/elasticsearch/tmp/certs/client/client* .
    # curl -X GET https://<elk-host-name>:9200/?pretty \
     --cacert /opt/pbul/certs/ca.crt --cert /opt/pbul/certs/client.crt \
     --key /opt/pbul/certs/client.key -H 'Content-Type: application/json'
    
  2. Modify the Elasticsearch section of /etc/pb.settings as shown:
    ### Elasticsearch
    elkinstances              elasticsearch=https://<elk-host-name>:9200
    elkcafile                 /opt/pbul/certs/ca.crt
    elkcertfile               /opt/pbul/certs/client.crt
    elkkeyfile                /opt/pbul/certs/client.key
    

You must ensure that the host name in the elkinstances URL exactly matches the name in the instances.yml file from which the certificates were created. Communication does not work if you mix and match FQDNs and short names: for example, <elk-host-name>.pmul.net in pb.settings but <elk-host-name> in instances.yml.

  3. Restart pblighttpd (pblighttpd_svc.sh restart) and run pbrun. To verify that the Accept and Finish events are delivered, view the last few lines of /var/log/pbrest.log to confirm that delivery succeeded and to obtain the last few characters of the event's unique ID.
  4. Run the following command to ensure the event was delivered to Elasticsearch:
    $ curl -X GET "https://<elk-host-name>:9200/pmul-eventlog-*/_search?pretty" \
    --cacert /opt/pbul/certs/ca.crt --cert /opt/pbul/certs/client.crt \
    --key /opt/pbul/certs/client.key -H 'Content-Type: application/json' -d'
    {
      "query": { "match_all": { } },
      "sort": [ { "@timestamp": "desc" } ]
    }
    ' | grep <unique_id>-
    

The output displays two lines with the IDs of the Accept and Finish events. To display more information, eliminate the grep.

Enable Authentication (Optional)

This step can be done before deploying client certificates. Assuming certificates are in place, temporarily disable certificates and HTTPS and configure authentication over HTTP.

  1. On the Elasticsearch machine, stop Elasticsearch: service elasticsearch stop.
  2. Modify the Security settings in /etc/elasticsearch/elasticsearch.yml:
    #
    # -------------------------------- Security ------------------------------------
    #
    xpack.security.enabled: true
    # xpack.security.http.ssl.enabled: true
    # xpack.security.http.ssl.key: certs/<elk-host-name>.key
    # xpack.security.http.ssl.certificate: certs/<elk-host-name>.crt
    # xpack.security.http.ssl.certificate_authorities: certs/ca.crt
    
  3. Restart Elasticsearch: service elasticsearch start, and then create some authenticated users and passwords:
    # cd /usr/share/elasticsearch
    # bin/elasticsearch-setup-passwords interactive
    
  4. You are prompted to enter passwords for several built-in Elasticsearch users (including the elastic user). Make a note of the passwords.
  5. Use curl to test authentication from the log server. In the case where the password for the elastic user is elastic, you can use the following command:
    $ curl -X GET "http://<elk-host-name>:9200/pmul-eventlog-*/_search?pretty" \
    -u elastic:elastic -H 'Content-Type: application/json' -d'
    {
      "query": { "match_all": { } },
      "sort": [ { "@timestamp": "desc" } ]
    }
    '
    
  6. Any messages from Elasticsearch previously stored in the database are displayed.
  7. Configure PMUL to authenticate (initially via HTTP) to deliver events to Elasticsearch. Add an Elasticsearch credential to PMUL:
    # pbrestcall -l -X PUT -a <appid> -k <key> \
    https://localhost:24351/REST/elkcred/elastic_basic -d \
    '{"id": "elastic_basic", "type": "basic", "username": "elastic", \
    "password": "elastic"}'
    
  8. The command replies with a JSON message that includes { "status": 0 }.
  9. Set up PMUL to use the credential. Modify the /etc/pb.settings file on the log server as shown:
    ### Elasticsearch
    elkinstances              elasticsearch=http://<elk-host-name>:9200
    elkcredential             elastic_basic
    # elkcafile               /opt/pbul/certs/ca.crt
    # elkcertfile             /opt/pbul/certs/client.crt
    # elkkeyfile              /opt/pbul/certs/client.key
    
  10. Restart the pblighttpd service (pblighttpd_svc.sh restart) and execute a pbrun command that sends an event to the log server.
  11. Inspect /var/log/pbrest.log to ensure that no error occurred. If no error occurred, then query the Elasticsearch server to ensure the event was delivered there. Obtain the last few characters of the event’s unique ID and issue a command such as the following:
    $ curl -X GET "http://<elk-host-name>:9200/pmul-eventlog-*/_search?pretty" \
    -u elastic:elastic -H 'Content-Type: application/json' -d'
    {
      "query": { "match_all": { } },
      "sort": [ { "@timestamp": "desc" } ]
    }
    ' | grep <uniqueid>-
    
  12. Add HTTPS and certificates back into the configuration. To start, go to the Elasticsearch server and modify the Security settings in /etc/elasticsearch/elasticsearch.yml to re-enable secure communications:
    #
    # -------------------------------- Security ------------------------------------
    #
    xpack.security.enabled: true
    xpack.security.http.ssl.enabled: true
    xpack.security.http.ssl.key: certs/<elk-host-name>.key
    xpack.security.http.ssl.certificate: certs/<elk-host-name>.crt
    xpack.security.http.ssl.certificate_authorities: certs/ca.crt
    
  13. Restart Elasticsearch (service elasticsearch restart) and check the communications configuration from the log server side:
    $ curl -X GET "https://<elk-host-name>:9200/pmul-eventlog-*/_search?pretty" \
    --cacert /opt/pbul/certs/ca.crt --cert /opt/pbul/certs/client.crt \
    --key /opt/pbul/certs/client.key -u elastic:elastic \
    -H 'Content-Type: application/json' -d'
    {
      "query": { "match_all": { } },
      "sort": [ { "@timestamp": "desc" } ]
    }
    ' | grep <unique_id>-
    
  14. Reconfigure PMUL on the log server to use certificates with authentication. Modify /etc/pb.settings with the Elasticsearch configuration as shown:
    ### Elasticsearch
    elkinstances              elasticsearch=https://<elk-host-name>:9200
    elkcredential             elastic_basic
    elkcafile                 /opt/pbul/certs/ca.crt
    elkcertfile               /opt/pbul/certs/client.crt
    elkkeyfile                /opt/pbul/certs/client.key
    
  15. Restart pblighttpd (pblighttpd_svc.sh restart) on the log server and issue a pbrun command that generates a log event. Obtain that event ID from /var/log/pbrest.log and ensure that it was delivered to Elasticsearch. You can do this with the curl command shown above but grepping for a different unique_id. You will see two lines of output: one for the Accept event and the other for the Finish event.

Add Token and API Key Credentials

Elasticsearch supports authentication with tokens and API keys.

  • Token authentication: The user submits a username and password, and Elasticsearch replies with a token that is used in subsequent requests without the username and password.
  • API key requests: The Elasticsearch administrator supplies the API ID and key.

Adding a token credential is like adding a basic credential, but requires an additional endpoint argument as shown:

# pbrestcall -l -X PUT -a <appid> -k <key> \
https://localhost:24351/REST/elkcred/elastic_token -d \
'{"id": "elastic_token", "type": "token", "username": "elastic", \
"password": "elastic", "endpoint": "_security/oauth2/token"}'

Adding an API key (type apikey) credential is like adding a basic credential, but the arguments are different:

# pbrestcall -l -X PUT -a <appid> -k <key> \
https://localhost:24351/REST/elkcred/elastic_apikey -d \
'{"id": "elastic_apikey", "type": "apikey", "apiid": "<apiid>", \
"apikey": "<apikey>"}'

The values of <apiid> and <apikey> are provided by the Elasticsearch system administrator. It is possible to obtain the keys for testing using curl:

# curl -u <username>:<password> -X POST <elastic_url>/_security/api_key?pretty \
-H 'Content-Type: application/json' -d '{ "name": "elastic_apikey" }'

{
  "id" : "bTltoX0BTJT5yB8Hshto",
  "name" : "elastic_apikey",
  "api_key" : "6Y2qXmHgSsKaQN-pB6D5TQ"
}

Cloud Instance (Elasticsearch over HTTPS)

You need the following information from the cloud provider or someone in your organization:

  • URL (for example, https://d6a3bf3dezr92180de2182410be.us-east-1.aws.found.io)
  • Username
  • Password

Elasticsearch Keywords

elkinstances

  • Version 22.1 and later: elkinstances setting available.

Events are sent to the URLs in elkinstances, in the order listed, until an attempt succeeds. No subsequent URLs are attempted following a successful send.

Syntax
elkinstances    [elasticsearch|logstash]=url1,…,urlN

where the URL items are http or https URLs that specify the endpoint up to and including the port number.

Example
elkinstances    elasticsearch=https://elastic.io,http://localhost:9200
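The failover behavior described above can be sketched as follows. This is a minimal illustration of the documented rule, not PMUL's implementation; deliver and send_event are hypothetical names.

```python
def deliver(urls, event, send_event):
    """Try each configured URL in order; stop at the first success."""
    for url in urls:
        try:
            send_event(url, event)   # hypothetical delivery function
            return url               # a success ends the attempts
        except ConnectionError:
            continue                 # fall through to the next URL
    raise ConnectionError("event could not be delivered to any instance")

# Simulated sender: the first URL is unreachable, so delivery falls back.
def send_event(url, event):
    if "elastic.io" in url:
        raise ConnectionError("unreachable")

used = deliver(["https://elastic.io", "http://localhost:9200"],
               {"event": "Accept"}, send_event)
print(used)  # http://localhost:9200
```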

elkcredential

  • Version 22.1 and later: elkcredential setting available.

Set the identifier of a stored credential to authenticate to the endpoints in elkinstances. The credential, if specified, is applied to all the endpoints.

Syntax
elkcredential <credential_id>

elkcafile

  • Version 22.1 and later: elkcafile setting available.

The name of a file containing a copy of the certificate authority (CA) certificate, in PEM format, used by the remote Elasticsearch or Logstash server. This is used only when communicating with on-premises instances.

Syntax
elkcafile    <filename>

elkcertfile

  • Version 22.1 and later: elkcertfile setting available.

A client certificate, in PEM format, that is the counterpart of the server certificate used by Elasticsearch or Logstash. This is used only when communicating with on-premises instances.

Syntax
elkcertfile    <filename>

elkkeyfile

  • Version 22.1 and later: elkkeyfile setting available.

The client key file, in PEM format, associated with elkcertfile. This differs from the private key present on the Elasticsearch or Logstash server. This is used only when communicating with on-premises instances.

Syntax
elkkeyfile    <filename>

elkdatatypes

  • Version 22.2 and later: elkdatatypes setting available.

Set one or more types of data to deliver to Elasticsearch. Current options are eventlog (Accept, Reject, Finish, Keystroke) and iolog.

The default is to send only eventlog data to Elasticsearch if PMUL is configured to communicate with Elasticsearch (that is, elkinstances and elkcredential are configured).

Syntax
elkdatatypes eventlog iolog

elkiologfieldsizekb

  • Version 22.2 and later: elkiologfieldsizekb setting available.

The size, in kilobytes, of the amount of iolog session data that pbreplay writes to each chunk, in cases where pbreplay sends data to Elasticsearch in multiple chunks. The default is 1024, in which case pbreplay sends iolog data to Elasticsearch in 1MB (1024 * 1024) chunks.

Syntax
elkiologfieldsizekb N

Acceptable values for N range from 8 (for 8 KB) to 65536 (for 64 MB).

Default Value

1024
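The chunking arithmetic is simple; this sketch (an illustration of the described behavior, not pbreplay's code) shows how many chunks a session of a given size would produce:

```python
import math

def iolog_chunks(session_bytes: int, elkiologfieldsizekb: int = 1024) -> int:
    """Number of chunks for a session, given the setting's value in KB."""
    chunk_bytes = elkiologfieldsizekb * 1024  # the setting is in kilobytes
    return max(1, math.ceil(session_bytes / chunk_bytes))

# A 2.5 MB session with the default 1024 KB (1 MB) chunk size yields 3 chunks.
print(iolog_chunks(int(2.5 * 1024 * 1024)))  # 3
```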

elasticsearchidxtemplate

  • Version 22.2 and later: elasticsearchidxtemplate setting available.

The path to a template file that can be used to specify both default handling and handling of specific variables that PMUL sends to Elasticsearch.

Syntax
elasticsearchidxtemplate [absolute or relative file path]
Default Value

/opt/pbul/elk/etc/pbelasticsearchtemplate.json

elkecsconfiguration

  • Version 22.2 and later: elkecsconfiguration setting available.

The main role of this file (the default is /opt/pbul/elk/pbelkecsconfiguration.json) is to set the mapping of PMUL field names to Elasticsearch Elastic Common Schema (ECS) field names. This is done in the mappedFields JSON object in the file. The objects in the file are described here:

  • "version": <number>

    The internal version number for the JSON file. The current value is 1. It might be used in later versions of PMUL.

  • "mappedFields": { … }

    Contains entries that map PMUL fields to one or more ECS fields. For example, the entry:

  • "user": ["related.user", "user.name"]

    maps the PMUL variable user to the ECS variables related.user and user.name; both ECS fields are populated with the value of the PMUL user variable. An ECS variable typically receives the value of the last PMUL variable written to it during processing. There are, however, three special ECS variables, related.user, related.ip, and related.hosts, that store all PMUL values sent to them in arrays. The related.hosts variable, for example, might contain the values of the PMUL variables logserver, masterhost, runhost, and submithost.

  • "subsetFields": { … }

    Most PMUL fields are not written to Elasticsearch for iolog Finish events that have a corresponding Accept event, because most such fields duplicate their values in the Accept event. However, you might want a subset of the fields to be included in the iolog event sent to Elasticsearch anyway. An entry such as:

  • "logserver": true

    ensures the variable logserver is sent to Elasticsearch for both Accept and Finish events.

  • "excludeFields": { … }

    Use this object to exclude unwanted fields from the Elasticsearch record; excluded fields are not populated. For example, you can exclude the PMUL fields true and false with the entries:

  • "true": true,
  • "false": true

    in which case they are not forwarded to Elasticsearch.
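The interplay of subsetFields and excludeFields can be sketched as follows. This is a hypothetical illustration of the behavior described above, not PMUL's implementation; fields_to_send is an invented name.

```python
def fields_to_send(pmul_fields, subset, exclude, finish_with_accept):
    """Apply excludeFields and subsetFields to a PMUL event's fields."""
    out = {}
    for name, value in pmul_fields.items():
        if exclude.get(name):
            continue  # excludeFields: never forwarded to Elasticsearch
        if finish_with_accept and not subset.get(name):
            continue  # on such Finish events, only subsetFields are repeated
        out[name] = value
    return out

fields = {"logserver": "log1", "user": "alice", "true": "1"}
print(fields_to_send(fields, subset={"logserver": True}, exclude={"true": True},
                     finish_with_accept=True))
# {'logserver': 'log1'}
```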

Default Configuration File

/opt/pbul/elk/etc/pbelkecsconfiguration.json

elkdeliverytimeout

  • Version 22.1 and later: elkdeliverytimeout setting available.

The number of seconds within which the event must be delivered to Elasticsearch or Logstash. The default is 30. The minimum is 0 (no timeout) and the maximum is 120.

Syntax
elkdeliverytimeout    <value>

elkindexpattern

  • Version 22.1 and later: elkindexpattern setting available.

Use an index pattern to set default index suffixes for event and iolog messages delivered to Elasticsearch.

  • %fieldN% is used to define substitutions: %year%, %month%, and %day% are placeholders for today's year, month and day.
  • %<pmul-field-name>% is a substitute for a PMUL field value, e.g., %user% for the value of the field user.

In cases where a non-existent or missing variable is used, the index defaults to one relevant to the current date (e.g., pmul-eventlog-ecs-20220531). Similar action is taken in cases when the syntax is used incorrectly.

We do not recommend using a pattern for eventlog: a field available in the Accept event (for example, user) might not exist in the Finish event, in which case Elasticsearch cannot combine the Accept and Finish events into a single object.

Syntax
elkindexpattern    iolog=%field1%..%fieldX% eventlog=%field1%..%fieldY%
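The substitution and fallback behavior described above can be sketched as follows. This is hypothetical code illustrating the documented rule, not PMUL's implementation.

```python
import re
from datetime import date

def index_suffix(pattern: str, fields: dict, today: date) -> str:
    """Expand %...% placeholders; fall back to a date-based index on failure."""
    builtins = {"year": f"{today.year:04d}",
                "month": f"{today.month:02d}",
                "day": f"{today.day:02d}"}
    default = f"pmul-eventlog-ecs-{today:%Y%m%d}"
    def sub(m):
        name = m.group(1)
        if name in builtins:
            return builtins[name]
        if name in fields:
            return str(fields[name])
        raise KeyError(name)  # missing variable: use the default index
    try:
        return re.sub(r"%([^%]+)%", sub, pattern)
    except KeyError:
        return default

print(index_suffix("pmul-iolog-%user%-%year%%month%", {"user": "alice"},
                   date(2022, 5, 31)))
# pmul-iolog-alice-202205
print(index_suffix("pmul-iolog-%nosuch%", {}, date(2022, 5, 31)))
# pmul-eventlog-ecs-20220531
```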

elkoptions

  • Version 22.1 and later: elkoptions setting available.

Configure additional options.

Valid Values
  • batchsize: Sets the batch size for reading rows from the SQLite cache database.
Example
elkoptions batchsize=20

siemcachedb

  • Version 22.1 and later: siemcachedb setting available.

The siemcachedb setting specifies the absolute path of the SQLite database created by the message router to store events that did not reach Elasticsearch or Logstash. The database is rotated into the <dequeuedatabasedir>/mrsiem directory when it reaches the limit set by the siemcachedblimit setting.

Default Value
/opt/pbul/dbs/pbsiemcache.db

Where /opt/pbul/dbs is the value from the setting databasedir.

siemcachedblimit

  • Version 22.1 and later: siemcachedblimit setting available.

The siemcachedblimit setting sets the maximum size of the siemcachedb database and the interval at which siemcachedb is rotated into the <dequeuedatabasedir>/mrsiem directory.

Syntax
siemcachedblimit size=<size-n>[K|M|G] limit=<time-x>[m|h]

Where:

  • size-n is the maximum size of the siemcachedb database in KB [K], MB [M], or GB [G].
  • time-x is the interval, in minutes [m] or hours [h], at which the siemcachedb database is rotated.
Examples
When size=0K, no size limit is set on the database, but the database is rotated every 10 minutes:
siemcachedblimit    size=0K limit=10m
In this example, the database is rotated on reaching the size limit of 1 GB or the time interval of 2 hours, whichever occurs first:
siemcachedblimit    size=1G limit=2h
Default Value
siemcachedblimit    size=0K limit=10m
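The rotation rule can be expressed as follows. This is a sketch of the described behavior with hypothetical names, not PMUL code.

```python
def should_rotate(db_bytes, minutes_since_rotation, size_kb=0, limit_minutes=10):
    """Rotate when either threshold is reached; size=0K means no size limit."""
    size_hit = size_kb > 0 and db_bytes >= size_kb * 1024
    time_hit = minutes_since_rotation >= limit_minutes
    return size_hit or time_hit

# With the defaults (size=0K limit=10m), rotation is time-driven only.
print(should_rotate(db_bytes=5_000_000, minutes_since_rotation=4))   # False
print(should_rotate(db_bytes=5_000_000, minutes_since_rotation=10))  # True
```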

siemdbencryption

  • Version 22.1 and later: siemdbencryption setting available.

The siemdbencryption setting specifies the encryption for siemcachedb and the dequeue databases created by Privilege Management for Unix and Linux.

Syntax
siemdbencryption <algorithm-1>:<keyfile=/fullpath/data-file-1>[:<startdate=yyyy/mm/dd>:<enddate=yyyy/mm/dd>] <algorithm-2>:<keyfile=/fullpath/data-file-2>[:<startdate=yyyy/mm/dd>:<enddate=yyyy/mm/dd>] ...

Where:

  • algorithm-n is the name of the algorithm type.
  • /fullpath/data-file (optional) specifies the full path and file name of the data file, which is used to dynamically derive the encryption key.
  • startdate=yyyy/mm/dd specifies the earliest date on which this algorithm is used.
  • enddate=yyyy/mm/dd specifies the latest date on which this algorithm is used.
Default Value

None

dequeuedatabasedir

  • Version 22.1 and later: dequeuedatabasedir setting available.

The dequeuedatabasedir setting is the absolute path of the directory in which the message router creates the mrsiem subdirectory. The siemcachedb database is rotated into this directory. A scheduled pbconfigd process attempts to dequeue the databases in dequeuedatabasedir/mrsiem at every siemdqrefreshtime interval.

The default value of dequeuedatabasedir depends on the basedir setting.

Default Value
dequeuedatabasedir  /opt/pbul/dequeuedbs

siemdqrefreshtime

  • Version 22.1 and later: siemdqrefreshtime setting available.

The siemdqrefreshtime setting specifies the interval in minutes between the scheduled pbconfigd database dequeue tasks.

Default Value
siemdqrefreshtime  10

Credential Store

Starting in PMUL 22.1, a credential store is available. The credential store is an encrypted SQLite database stored in the file /opt/pbul/dbs/pbelkcred.db. Initially, the file has only one table with the following schema:

CREATE TABLE elkcreds (id TEXT PRIMARY KEY, value TEXT UNIQUE NOT NULL)

The id field is a simple string referenced by the text that optionally follows a URL in the elkinstances setting described previously. The value field contains a JSON fragment that describes one of the following credential types:

  • Basic
  • Token
  • API key
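The table schema and JSON value format can be illustrated with plain SQLite. This sketch is for illustration only: the real pbelkcred.db is encrypted, and credentials should be added with pbrestcall as shown elsewhere in this document.

```python
import json
import sqlite3

# Plain (unencrypted) in-memory SQLite, using the documented schema.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE elkcreds (id TEXT PRIMARY KEY, value TEXT UNIQUE NOT NULL)")

# The value column holds a JSON fragment describing one credential.
cred = {"id": "elastic_basic", "type": "basic",
        "username": "elastic", "password": "elastic"}
con.execute("INSERT INTO elkcreds VALUES (?, ?)", (cred["id"], json.dumps(cred)))

row = con.execute("SELECT value FROM elkcreds WHERE id = 'elastic_basic'").fetchone()
print(json.loads(row[0])["type"])  # basic
```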

Basic Authentication

This is a simple combination of username and password, with the format:

{ "id": "<id>", "type": "basic", "username": "<username>", "password": "<password>" }

It can be used to authenticate to both cloud and on-premises instances, but most likely will only be used for cloud instances. The username and password are included (though typically within HTTPS) with every event submitted to Elasticsearch or Logstash.

Elasticsearch and Logstash support basic authentication.

Token Authentication

With token authentication, PMUL does not send the username and password with every request. Instead, it sends the username and password only to obtain a token, which is used for subsequent requests until the token expires. The expiration is given in the reply to the query that includes the username and password.

The format of the value field for token authentication:

{ "id": "<id>", "type": "token", "username": "<username>",
"password": "<password>", "endpoint": "<url_or_path>" }

The endpoint can either be a complete URL (starts with https://) or a path to be appended to the URL specified in the elkinstances field described above (for example, /_security/oauth2/token).

Token authentication is typically used with cloud instances but can be used with on-premises instances configured for token support.

Logstash does not support token authentication.
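The endpoint resolution rule (full URL versus appended path) can be illustrated as follows. This is a hypothetical helper, not PMUL code.

```python
def token_url(instance_url: str, endpoint: str) -> str:
    """A complete URL (starts with https://) is used as-is; otherwise the
    endpoint is treated as a path appended to the elkinstances URL."""
    if endpoint.startswith("https://"):
        return endpoint
    return instance_url.rstrip("/") + "/" + endpoint.lstrip("/")

print(token_url("https://elk-host:9200", "/_security/oauth2/token"))
# https://elk-host:9200/_security/oauth2/token
```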

API Key Authentication

API key authentication is similar to token authentication but also provides the ability to request the expiration time.

The API key authentication format:

{ "id": "<id>", "type": "apikey", "username": "<username>", "password": "<password>",
"endpoint": "<url_or_path>", "expiration": "<valid_expiration>" }

The valid_expiration field can be a value like 1d to indicate that the returned API key is valid for one day. Supply this field; otherwise, created API keys are permanent.

Logstash does not support API key authentication.

Integrate I/O Logging and Elasticsearch

Starting in PMUL 22.2, the pbreplay program can be configured to send messages to Elasticsearch and Logstash as well as Solr.

The pbreplay program cannot send events to both Elasticsearch and Solr. If a log server has a valid configuration for sending iologs (that is, /etc/pb.settings contains a valid value for the elkinstances setting and includes iolog in its elkdatatypes setting), then iologs are sent to Elasticsearch only, regardless of whether Solr is configured correctly.

Requirements

The destination Elasticsearch system must be version 7.10 or later.

PMUL to Elasticsearch Field Mappings

The file pbelkecsconfiguration.json contains standard mappings of PMUL fields to Elastic Common Schema (ECS) fields.

The iolog session text is not mapped to an ECS field. These sessions can contain a lot of data and therefore must be of the wildcard type. Because the session data can be quite large, a single iolog session may result in many individual messages sent to Elasticsearch. In such cases, the unique ID (mapped to the ECS field event.id) has a chunk number added in the format $$$$$$<chunk-number>$$$.
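A sketch of how a chunked event ID could be formed from the marker format described above. This is an assumption for illustration (that the marker is appended to the ID), not PMUL's code.

```python
def chunk_event_id(unique_id: str, chunk_number: int) -> str:
    """Append the chunk marker using the $$$$$$<chunk-number>$$$ format."""
    return f"{unique_id}$$$$$${chunk_number}$$$"

print(chunk_event_id("c4ca4238a0", 2))  # c4ca4238a0$$$$$$2$$$
```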

Mapping of PMUL Fields to ECS Fields

Fields are mapped in the mappedFields setting in the elkecsconfiguration file.

The PMUL field on the left maps to one or more ECS fields, as indicated by arrays, on the right.

"mappedFields": {
        "browserhost":          ["related.hosts"],
        "browserip":            ["related.ip"],
        "clienthost":           ["related.hosts"],
        "event":                ["event.action"],
        "eventlog":             ["log.file.path"],
        "group":                ["group.name"],
        "host":                 ["related.hosts"],
        "logdversion":          ["agent.version"],
        "loghostip":            ["related.ip"],
        "loghostname":          ["agent.name", "log.logger", "related.hosts"],
        "logserver":            ["agent.name", "log.logger", "related.hosts"],
        "masterhost":           ["related.hosts", "server.domain"],
        "masterhostip":         ["related.ip", "server.address", "server.ip"],
        "pbclientkerberosuser": ["related.user"],
        "pblogdrelease":        ["os.kernel"],
        "pblogdsysname":        ["os.name"],
        "pbrisklevel":          ["event.risk_score"],
        "requestuser":          ["related.user"],
        "runargv":              ["process.args"],
        "runcommand":           ["process.name"],
        "runconfirmuser":       ["related.user"],
        "runeffectiveuser":     ["related.user"],
        "runhost":              ["host.hostname", "host.name", "destination.domain",
                                "related.hosts"],
        "runhostip":            ["host.ip", "destination.ip", "destination.address",
                                "related.ip"],
        "runpid":               ["process.pid"],
        "runuser":              ["related.user", "user.effective.name"],
        "runutmpuser":          ["related.user"],
        "submithost":           ["client.domain", "related.hosts", "source.domain"],
        "submithostip":         ["related.ip", "client.address", "client.ip",
                                 "source.ip", "source.address"],
        "subprocuser":          ["related.user"],
        "timezone":             ["event.timezone"],
        "user":                 ["related.user", "user.name"]
    },

There are some PMUL fields (particularly in Accept events) with no direct mapping to an ECS field. These fields are mapped to a non-ECS field named beyondtrust_com.pmul.<fieldname>; for example, beyondtrust_com.pmul.optimizedrunmode for the PMUL optimizedrunmode field.

PMUL fields that are mapped to ECS fields are also mapped to a non-ECS beyondtrust_com.pmul… field.
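The array behavior of the three related.* fields, versus last-writer-wins for ordinary ECS fields, can be sketched as follows. This is an illustration of the described mapping rules with hypothetical names, not PMUL's implementation.

```python
RELATED = {"related.user", "related.ip", "related.hosts"}  # accumulate arrays

def apply_mapping(pmul_fields: dict, mapped_fields: dict) -> dict:
    """Map PMUL fields to ECS fields per a mappedFields-style table."""
    out = {}
    for name, value in pmul_fields.items():
        for ecs in mapped_fields.get(name, []):
            if ecs in RELATED:
                out.setdefault(ecs, []).append(value)  # store all values
            else:
                out[ecs] = value  # ordinary ECS field keeps the last value
    return out

mapping = {"runhost": ["related.hosts"], "submithost": ["related.hosts"],
           "user": ["related.user", "user.name"]}
print(apply_mapping({"runhost": "web1", "submithost": "ws9", "user": "alice"},
                    mapping))
# {'related.hosts': ['web1', 'ws9'], 'related.user': ['alice'], 'user.name': 'alice'}
```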

Hard-coded Mappings to ECS Fields

  • PMUL event time (submit_utc, logaccept_utc, logreject_utc, logfinish_utc, logkeystroke_utc, runfinish_utc) maps to @timestamp.
  • The hardcoded string "logserver" maps to agent.type.
  • The ECS schema version (hardcoded) maps to ecs.version.
  • The iolog file name maps to log.file.path.
  • The number of command line arguments maps to process.args_count.
  • The PMUL event finish time (runfinish_utc, logfinish_utc, or the last timestamp in the iolog) maps to process.end.
  • The PMUL event start time (submit_utc, keystroke_utc, logaccept_utc, logreject_utc) maps to process.start.

Migrate Existing PMUL Field Values to ECS Format

Starting with version 22.1, event data can be sent to Elasticsearch and Logstash. That data, however, was not delivered in the Elastic Common Schema (ECS) format.

Starting with version 22.2, event logs (and optionally iologs) are delivered in ECS format.

You can use the btelkecsconvert migration tool to convert existing non-ECS documents added in version 22.1 to the ECS format supported in version 22.2 and later.

Install and Run the Migration Tool

The migration tool is only available through BeyondTrust Technical Support.

The migration tool file is a self-contained executable with no external dependencies. The migration tool is available for Windows and Linux x86_64 operating systems.

Run the command against the Elasticsearch instance, as shown here:

C:\...\btelkecsconvert> btelkecsconvert -d yes -p pmul -u **:** http://host-name.pmul.net:9200
INFO: 2022/06/13 10:20:06 btelkecsconvert.go:370: found 1 index(es) with non-ECS document(s)
INFO: 2022/06/13 10:20:06 btelkecsconvert.go:388: Migrating 1 non-ECS document(s) from index
pmul-eventlog-20220613 to index pmul-eventlog-ecs-20220613
INFO: 2022/06/13 10:20:07 btelkecsconvert.go:668: reply: created: 1, updated: 0
INFO: 2022/06/13 10:20:09 btelkecsconvert.go:682: deleted index pmul-eventlog-20220613 and its
contained document(s)
C:\...\btelkecsconvert>

Arguments

-d, --delete-old-indexes

Required. Valid options are yes and no.

  • Enter yes to delete old indexes whose documents were all successfully converted to ECS format.
  • Enter no to retain the old indexes and documents.

-p, --product

Required. Enter the name as pmul.

-u, --user <credentials in username:password format>

Optional. Enter login credentials for the Elasticsearch instance.

URL

Required. The HTTP URL of the Elasticsearch instance (for example, http://elkhost:9200).

-v, --version

Optional. Prints the version.

-h, --help

Optional. Displays the usage message to the console.