AI-Hunter Support FAQ

ALL ITEMS BY CATEGORY

Active-Flow

Yes, as many as you want! All records will show up in a single AI-Hunter database. If you need to separate them into individual databases, you’ll need two Active-Flow docker instances (which must be on separate physical systems).

 

Direct Link to this FAQ Item: https://portal.activecountermeasures.com/support/faq/?Display_FAQ=2725

Category: Active-Flow

Yes. You can have one or more Active-Flow systems and one or more Zeek systems feeding a single AI-Hunter instance. Each one feeds a different database named “hostname__ipaddress-rolling”, so you can distinguish between them.

Note: you can’t have Zeek and the Active-Flow module running on the same system; they both use /opt/bro/logs/ for their output.

 

Direct Link to this FAQ Item: https://portal.activecountermeasures.com/support/faq/?Display_FAQ=2724

Category: Active-Flow

Is the Active-Flow docker instance running?

On the Active-Flow system run:

sudo docker ps

The output should include one line that ends in “active-flow”, indicating that the instance is currently up and forwarding UDP port 2055. Example:

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
237ae0c61325 ai-hunter/flow "/home/flow/active-f…" 6 weeks ago Up 6 weeks 0.0.0.0:2055->2055/udp active-flow

 

Are inbound UDP port 2055 packets allowed by the firewall?

On the Active-Flow system get a firewall listing with:

sudo iptables -L INPUT -nxv

If the INPUT chain has no rules and a policy of ACCEPT (like the following):

Chain INPUT (policy ACCEPT 3130808 packets, 1218284392 bytes)
pkts bytes target prot opt in out source destination
$

that means all incoming traffic is allowed. If you do have rules in this chain and need help interpreting whether that port is open, please send the above output to [email protected].

 

Are Netflow packets arriving on UDP port 2055 on the Active-Flow system?

The tcpdump program can show a single line output for each received packet. Here’s a sample command to report on received netflow packets, assuming that the primary network interface to the Internet is eth0:

sudo tcpdump -i eth0 -qtnp -c 10 'udp port 2055'

If Netflow records are arriving on that port, you’ll see output similar to:

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
IP a.b.c.d.57001 > e.f.g.h.2055: UDP, length 264
IP a.b.c.d.57001 > e.f.g.h.2055: UDP, length 120
IP a.b.c.d.57001 > e.f.g.h.2055: UDP, length 120
IP a.b.c.d.57001 > e.f.g.h.2055: UDP, length 312
IP a.b.c.d.57001 > e.f.g.h.2055: UDP, length 120
IP a.b.c.d.57001 > e.f.g.h.2055: UDP, length 312
IP a.b.c.d.57001 > e.f.g.h.2055: UDP, length 76
IP a.b.c.d.57001 > e.f.g.h.2055: UDP, length 1080
IP a.b.c.d.57001 > e.f.g.h.2055: UDP, length 1032
IP a.b.c.d.57001 > e.f.g.h.2055: UDP, length 456
10 packets captured
10 packets received by filter
0 packets dropped by kernel

This shows that the router at a.b.c.d is sending Netflow records to the Active-Flow system at e.f.g.h.

If the tcpdump command prints “listening on eth0…” and then stops, producing no more output, you may want to check your router configuration to make sure it’s feeding Netflow records to the right address and port. Please see the end of this document for a suggested configuration for Cisco ISR routers.

 

Are records making it out to log files?

Active-Flow saves its output to the “/opt/bro/logs/” directory tree. In particular, the currently generated logs are in “/opt/bro/logs/flow-spool/”.

To confirm that Active-Flow is saving records to disk, run the following on the Active-Flow system:

cd /opt/bro/logs/flow-spool/
ls -al
tail -f conn.log

Within 60 seconds you should see new lines being added to this file. (Note: in the first 10 minutes after rebooting Active-Flow’s system or restarting Active-Flow, you may not see entries being added until the router sends the first template. Either wait for 15 minutes to pass, or run “sudo docker logs -f active-flow --tail=20” and look for lines like:

time="2020-03-03T21:47:37Z" level=error msg="Could not decode incoming data" error="No info template 2615 found for and domain id 256" fatal=false

to confirm that this is why you’re not yet getting logs. The issue should disappear by the time the system has been up for 15 minutes.)

 

Are the compressed logs getting sent to /opt/bro/logs/yyyy-mm-dd each hour?

At the end of each hour the active logs are compressed and moved to a directory for today’s date. To see them, run:

ls -al /opt/bro/logs/`date +%Y-%m-%d`/

With the exception of the hour right after midnight, you should see multiple files with the extension “.log.gz”. If you don’t, check with [email protected].

 

Are ssh connections allowed from Active-Flow to AI-Hunter?

Can you ssh from Active-Flow to [email protected] without supplying a password?

Can you run bro_log_transport.sh and push logs to AI-Hunter?

Is the cron job set up to automatically transfer logs?

Are the logs showing up on the AI-Hunter system in /opt/bro/remotelogs/sensor-name/yyyy-mm-dd ?

The above 5 questions are covered in the FAQ at https://portal.activecountermeasures.com/support/faq/?Display_FAQ=861

 

Direct Link to this FAQ Item: https://portal.activecountermeasures.com/support/faq/?Display_FAQ=2722

Category: Active-Flow

The following are the lines relevant to enabling Netflow in a Cisco ISR:

What to Collect

flow record MyNetflow
match ipv4 tos
match ipv4 protocol
match ipv4 source address
match ipv4 destination address
match transport source-port
match transport destination-port
collect transport tcp flags
collect counter bytes long
collect counter packets long
collect timestamp absolute last
collect flow end-reason
collect timestamp absolute first
!

Where to Send the Data

flow exporter MyNetflow
destination destination.ip.goes.here
source GigabitEthernet0/0/0
transport udp 2055
template data timeout 60
!

Tie Them Together

flow monitor MyNetflow
exporter MyNetflow
cache timeout active 60
record MyNetflow
!

The “ip flow monitor” lines associate this interface with sending Netflow records:

interface GigabitEthernet0/0/1
ip flow monitor MyNetflow input
ip flow monitor MyNetflow output
ip address xxx.xxx.xxx.xxx xxx.xxx.xxx.xxx
ip nat inside
!

For more information about configuring Cisco routers, see:
https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/fnetflow/configuration/xe-16/fnf-xe-16-book/fnf-ipv4-uni.html

 

Direct Link to this FAQ Item: https://portal.activecountermeasures.com/support/faq/?Display_FAQ=2726

Category: Active-Flow

Data Management & Logs

To check that your span port is correctly feeding data to Bro, first install tcpdump.

Installing tcpdump (This package is provided in all supported Linux distributions.)

On Debian and Ubuntu Linux, run:

sudo apt-get -y install tcpdump

On Centos, RHEL, or Fedora Linux, run:

sudo yum -y install tcpdump

Then run the following, replacing {ethernet_port} with the name of the network card on which Bro is listening:

sudo tcpdump -i {ethernet_port} -c 100 -qtnp

If you see no output at all, press ctrl-c to kill the program and check that the network card is correctly connected to the span port on your switch.


Here’s how you can switch the docker storage location.

First, stop the docker daemon.

sudo systemctl stop docker

Next, move your docker directory. In this example we are moving the directory to /hunt/docker but you can choose your own location as long as you change the directory in the subsequent steps as well.

sudo mv /var/lib/docker /hunt/docker

Create /etc/docker/daemon.json and make it look like this:

{
"data-root": "/hunt/docker"
}

Or, if the file already exists, add the “data-root” line immediately after the opening brace, like this:

{
"data-root": "/hunt/docker",
...existing contents
}
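A syntax error in daemon.json will keep the Docker daemon from starting back up, so it’s worth validating the file before the restart. One quick check, assuming python3 is installed on the system:

```shell
# Exits non-zero (and prints the parse error) if daemon.json is not valid JSON
python3 -m json.tool /etc/docker/daemon.json >/dev/null && echo "daemon.json is valid JSON"
```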

Then start the docker daemon again:

sudo systemctl start docker

At this point you should be able to access AI-Hunter through the web interface, but if not you can try starting it manually using:

sudo ~/AIH-source/AI-Hunter-latest/hunt up -d


Automated approach

These steps perform the same tasks as all the steps under “Manual approach”.

  1. Log in to your Zeek sensor as a user that can read the Zeek logs and can run commands under sudo
  2. Run the following command.  You’ll need to replace “my.aihunter.system” with the hostname or ip address of your AI-Hunter system.  “[/zeek/log/top/dir/]” is an optional parameter pointing at the top level directory under which your Zeek logs can be found.  You only need to specify this if it’s not automatically detected.
    curl -fsSL https://raw.githubusercontent.com/activecm/zeek-log-transport/master/connect_sensor.sh -o - | /bin/bash -s my.aihunter.system [/zeek/log/top/dir/]
  3. If you wish to send your logs to a second AI-Hunter system, repeat step 2 using the second system name or IP address.

 

 

Manual approach

Generally yes.

1.) Pick a non-root account on your Bro sensor that can read the Bro logs. For these instructions, we’ll call that account “senduser”; if you use another account name, please replace “senduser” with that account name below.

 

2.) We need to run a script on your Bro sensor that will copy the logs across to the Rita system. The script is called zeek_log_transport.sh and can be found at

https://raw.githubusercontent.com/activecm/bro-install/master/zeek_log_transport.sh .

Copy this to /usr/local/bin/ on your new Bro sensor. Note that from “sudo curl” to “-O” is a single command line even if line-wrapped here:

cd /usr/local/bin/
sudo curl -s https://raw.githubusercontent.com/activecm/bro-install/master/zeek_log_transport.sh -O
cd -
sudo chown root:root /usr/local/bin/zeek_log_transport.sh
sudo chmod 755 /usr/local/bin/zeek_log_transport.sh

 

3.) The following command automatically installs rsync if it’s not already installed.

 

rsync --version >/dev/null 2>&1 || sudo apt-get -y install rsync >/dev/null 2>&1 || sudo yum -y install rsync >/dev/null 2>&1
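This one-liner relies on the shell’s `||` operator: each alternative runs only if everything before it failed, so the package managers are only tried when rsync isn’t already present. The idiom in miniature:

```shell
# The command after || runs only when the one before it fails
true  || echo "skipped: previous command succeeded"
false || echo "fallback: previous command failed"
```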

 

4.) You will need the ssh private key for transferring logs: ~/.ssh/id_rsa_dataimport (the file with the same name but a “.pub” extension is not sufficient). This can be found on the system that started the installation, the Rita/AI-Hunter system, and/or any existing Bro sensors (in the ~/.ssh/ directory). Note that “~senduser” below represents the home directory of the “senduser” user; replace “senduser” with your actual username.

 

Copy this to:

~senduser/.ssh/

 

On the Bro sensor:

chown -R senduser ~senduser/.ssh/
chmod go-rwx ~senduser/.ssh/

 

5.) Test the script

 

On the Bro sensor as the “senduser” user, run the following to make sure we can connect to and transfer logs to the AI-Hunter system:

ssh [email protected] -i "$HOME/.ssh/id_rsa_dataimport" 'echo Successfully connected.'

 

You may be asked to confirm that the host key is correct; please do so. Then run the transfer script:

/usr/local/bin/zeek_log_transport.sh --dest AIH.IP.ADDRESS --localdir /opt/bro/logs/

 

Replace AIH.IP.ADDRESS with the address of the Rita/AI-Hunter system. If your logs are stored somewhere other than /opt/bro/logs/ on this sensor, adjust that too (*). This should start sending logs over to the Rita/AI-Hunter system. It’s OK to leave this running; any files you successfully transfer now will not be resent later. You can leave this running while you switch to a new ssh connection to perform the next step.

6.) Install the script into cron as root on the Bro Sensor:

 

Edit /etc/cron.d/zeek_log_transport and add the following line (all on one line even though it may be line-wrapped here):

5 * * * * senduser /usr/local/bin/zeek_log_transport.sh --dest AIH.IP.ADDRESS --localdir /opt/bro/logs/

 

Replace “senduser” with the name of the account sending logs from this system, and AIH.IP.ADDRESS with the address of the Rita/AI-Hunter system. If your logs are stored somewhere other than /opt/bro/logs/ on this sensor, adjust that too (*).

Run both of the following:

sudo service cron reload 2>/dev/null
sudo service crond reload 2>/dev/null

 

* Note: common directories that hold Bro logs include:

/opt/bro/logs/ #Bro as installed by Rita
/usr/local/bro/logs/ #Bro default
/var/lib/docker/volumes/var_log_bro/_data/ #Blue Vector
/nsm/bro/logs #Security Onion
/storage/bro/logs/

 


By default we name a sensor “hostname__ipaddress”. If you want to force a name for a sensor, edit /etc/rita/agent.yaml on the Bro/Zeek sensor. Here are the commands to use as the file and its parent directory may not exist:

sudo mkdir -p /etc/rita
sudo vim /etc/rita/agent.yaml

Feel free to use any editor in place of vim, above. You’ll need to add a line to that file of this form:

Name: custom_sensor_name

The only characters you can use for the name are upper and lowercase letters, digits, the underscore, caret, plus and equals. The entire name needs to be 52 characters or less.
Here’s a sample:

sudo cat /etc/rita/agent.yaml
Name: bro_sensor_A17
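If you want to check a candidate name against those rules before writing it to agent.yaml, here’s a quick sketch (the NAME value is just an example):

```shell
NAME="bro_sensor_A17"
# Accept only letters, digits, underscore, caret, plus, and equals, 52 chars max
if printf '%s' "$NAME" | grep -Eq '^[A-Za-z0-9_^+=]{1,52}$'; then
    echo "valid sensor name"
else
    echo "invalid sensor name"
fi
```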


To use this feature, you must be using version 3.4.0 or higher (we strongly recommend 3.4.1 or higher).

Overview: To create your own blacklist, you’ll create a file (“/etc/AI-Hunter/blacklist/ips.txt”) on the Rita/AI-Hunter system with the ipv4 and ipv6 addresses listed one per line, instruct Rita to use this file by editing “rita.yaml”, and load these new addresses into Mongo. Once this is done, these addresses will be tagged as blacklisted on new data imported from Bro (though old Bro logs will not be modified).

Detailed steps:

1. Add the following block to “/etc/AI-Hunter/rita.yaml” , verbatim. We don’t recommend changing the filename in this release.

BlackListed:
    # Lists containing both IPv4 and IPv6 addresses are acceptable
    CustomIPBlacklists: ["/etc/AI-Hunter/blacklist/ips.txt"]

2. Create “/etc/AI-Hunter/blacklist/ips.txt” and add your IPs, one per line.

3. After creating this file – and every time you make a change to it – run the following commands:

If you’re running AI-Hunter 4.0.0 or higher:

hunt run --rm db_client mongo_cmd.sh 'db.getSiblingDB("rita-bl").dropDatabase()'
hunt up -d --force-recreate

 

If you’re running AI-Hunter 3.8.0 or lower:

cd ~/AIH-source/AI-Hunter-latest/
./hunt run --rm db_client mongo_cmd.sh 'db.getSiblingDB("rita-bl").dropDatabase()'
./hunt up -d --force-recreate

 

The ip addresses you’ve placed in ips.txt will be tagged as blacklisted in log files imported from this point on. Logs that were imported previously will not show these IP addresses as blacklisted.

The following steps will delete any Bro logs older than 4 days on the AI-Hunter system. Note that this will not delete any AI-Hunter databases, just the raw Bro log files that were imported. It will also not delete them from your actual sensors, just the copies that were sent to the AI-Hunter system, so if you ever needed them again you could manually copy them from the originals on your Bro sensors.

To see what files would be deleted by this command, you can run the following under the “dataimport” account on your AI-Hunter server (run “sudo su - dataimport” if you’re not already logged in as that user, and then run):

find "/opt/bro/remotelogs/" -type f -mtime +4 -print0 | xargs -0 -r -n 20 echo

To set up a daily automatic delete, add the following line to /etc/cron.d/delete_old_bro_logs . Example command, though you can feel free to use any editor you like:

sudo vi /etc/cron.d/delete_old_bro_logs

Everything from “0 3” to “-f'” is one line. Please be especially careful when typing the path “/opt/bro/remotelogs/”; make sure there are no spaces between the first and last slash. The quotes on this line (before find and after -f) are single quotes (the key below the double quote on a US keyboard), and the log directory itself is wrapped in double quotes.

0 3 * * * dataimport /bin/bash -c 'find "/opt/bro/remotelogs/" -type f -mtime +4 -print0 | xargs -0 -r -n 20 rm -f'

Run both of the following:

sudo service cron reload 2>/dev/null
sudo service crond reload 2>/dev/null

Side note: You should not reduce the +4 in the above command; if you deleted logs that were 1, 2, or 3 days old, you’d run the risk of deleting files that bro_log_transport would then have to copy over again (it sends any new files from the previous 3 days worth of logs every time it runs).
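To see how the -mtime +4 test behaves without touching your real logs, you can rehearse it in a scratch directory (the file names here are made up):

```shell
tmp=$(mktemp -d)
touch -d '6 days ago' "$tmp/old.log.gz"   # older than 4 days: selected
touch -d '1 day ago'  "$tmp/new.log.gz"   # recent: left alone
# Same find/xargs pipeline as the cron job, with echo in place of rm -f
find "$tmp" -type f -mtime +4 -print0 | xargs -0 -r -n 20 echo
rm -r "$tmp"
```

Only old.log.gz is printed, confirming that recent files are never touched.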


If the underlying Linux system runs out of space, that can lead to processes dying unexpectedly or tasks not completing. To check if you’re running low, run:

df -h

This shows the amount of free space on each of your partitions. Example (your output will be different):

df -h
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 60G 9.9G 51G 17% /
/dev/vda2 1.0T 101G 899G 10% /var
devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 180K 1.9G 1% /dev/shm
tmpfs 1.9G 193M 1.7G 11% /run
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
tmpfs 380M 0 380M 0% /run/user/1001
tmpfs 380M 0 380M 0% /run/user/0

The important lines are the ones associated with the root filesystem (“/”), the home partition (“/home/”, if there is one), and “/var/”, if there is one. If any of these are very low on space, you’ll need to free some up.

A common tool for this is delete-databases.sh , available on the Bro/Rita and AI-Hunter systems. When run with no command line parameters, it lists the available databases.

Once you’ve found some you no longer need to keep, run the tool again with the name(s) of one or more of them to remove. We recommend running the same command on both systems to remove them from both.


cd ~/AIH-source/AI-Hunter-latest
sudo ./hunt logs web
sudo ./hunt logs api
sudo ./hunt logs db
sudo ./hunt logs auth

 


The earliest releases of AI-Hunter ran RITA every 2 hours. If you’ve had AI-Hunter for a long time and have upgraded it in place, that setting may still be there.

Current releases of AI-Hunter run RITA every hour so you can see your data with less delay. To make this change:

  • Log in to the AI-Hunter host.
  • Edit /etc/AI-Hunter/config.yaml with your preferred editor:
sudo vim /etc/AI-Hunter/config.yaml
  • Locate the Schedule line under the RITA: section (note: there are multiple “Schedule:” lines in this file). If it’s currently set to run every two hours, it will look like:
Schedule: "0 20 0-23/2 * * *"

(If it doesn’t have the “/2” following 0-23, RITA is already run every hour and you can stop here.)

  • Remove the “/2” from that line so it now looks like:
Schedule: "0 20 0-23 * * *"

Be careful not to change the number of spaces at the beginning of that line. Save your changes and exit.

  • Now load these changes into AI-Hunter with the following commands.
cd $HOME/AIH-source/AI-Hunter-latest
sudo ./hunt down
sudo ./hunt up -d --force-recreate
  • As a side note, when AI-Hunter is restarted, RITA is automatically run.

 

Direct Link to this FAQ Item: https://portal.activecountermeasures.com/support/faq/?Display_FAQ=2769

I have a Corelight sensor. What do I do with it to make it work with AI-Hunter?

1. Set the time zone on the sensor to UTC/GMT. This can be found in the “Connect” submenu.

2. Configure the Corelight sensor to export its Bro logs to your AI-Hunter box over SFTP.

a. When asked for a hostname to send the logs to, put in the IP address or hostname of your AI-Hunter box. (The sensor should be able to place outgoing ssh (tcp port 22) connections to this hostname/IP).

b. The Username to use is “dataimport”.

c. Ask the Corelight sensor to send the logs to /opt/bro/remotelogs/sensorname/ , where sensorname is the name of this Corelight sensor, made up of the following characters and 52 characters or less: a-z A-Z 0-9 _ ^ + =

d. The Bro log format to use is “Standard Bro format (TSV)”.

e. The Rotation interval should be 1 hour.

f. The sensor will generate an SSH key to use; append the key to the file /home/dataimport/.ssh/authorized_keys on your AI-Hunter box.

g. Once your Corelight sensor has sent over one set of logs, find the directory that holds the logs on the AI-Hunter box. For example, if we ask for the files to be placed under /opt/bro/remotelogs/kirklab/ , Corelight will actually place them under /home/dataimport/opt/bro/remotelogs/kirklab/logs/ . To make them show up in the right directory, edit /etc/fstab with the following (substitute your favorite editor):

sudo vi /etc/fstab

Add the following line, replacing both instances of the sensorname and making sure the first directory matches where corelight sends the logs:

/home/dataimport/opt/bro/remotelogs/kirklab/logs/ /opt/bro/remotelogs/kirklab/ none defaults,bind 0 0

Save the file, exit, and reboot – the reboot step is required.

h. If you’re not able to do the above for some reason, contact [email protected] and ask them to connect the upload directory to the directory where the logs are being placed.

 

Yes there is! Run all the following commands on your AI-Hunter system.

Set the following variables to your own values:

export PCAP_FILE=/absolute/path/to/file.pcap
export BRO_DIR=/absolute/path/you/want/bro/logs/
export DATABASE=yourdatabasename

Convert your pcap to Bro logs:

sudo docker run --rm --volume "$PCAP_FILE:/capture.pcap" --volume "$BRO_DIR:/pcap" --env BRO_DNS_FAKE=true blacktop/bro:2.5 -r /capture.pcap local

Import the Bro logs into AI-Hunter using RITA:

~/AIH-source/AI-Hunter-latest/rita import $BRO_DIR $DATABASE

At this point your database should be visible in AI-Hunter.


If you have trouble running delete-databases.sh , such as:

./delete-databases.sh
exception: login failed
No pattern specified on the command line (such as ./delete-databases.sh '2018-05-31', so we will just list databases available to delete. Press enter when ready to see list.


2019-04-25T11:46:18.589-0400 Error: 18 { ok: 0.0, errmsg: "auth failed", code: 18, codeName: "AuthenticationFailed" } at src/mongo/shell/db.js:1292
Exiting.

This may be a result of a mismatch between the mongo command line client and the mongo server software. To check, run:

mongo --version

and see if you have version 3.6. If not, follow the instructions to upgrade the client at https://docs.mongodb.com/v3.6/administration/install-on-linux/ . It amounts to adding a third party package repo and then installing the “mongodb-org-shell” package.

You can tell if you were successful by running:

mongo --version

and checking that the MongoDB shell version is v3.6.x.


From the AI-Hunter system, run:

sudo docker image prune



The logs for each machine get placed in a different directory on the AI-Hunter system for each Bro sensor. They should be under /opt/bro/remotelogs/{bro_sensor_name}/ , with additional directories under that for each calendar day (such as /opt/bro/remotelogs/bro1__1921681213/2019-04-28/).

Please log in to your Bro system as the user under which you installed Bro and make sure you can ssh to the AI-Hunter system with:

ssh [email protected] -i "$HOME/.ssh/id_rsa_dataimport" 'echo Successfully connected.'

The “Successfully connected.” response should come back without having to enter a password; if you are asked for a password there’s something wrong with the ssh key setup.

As that same user on the Bro sensor, please run:

/usr/local/bin/bro_log_transport.sh --dest AIH.IP.ADDRESS --localdir /opt/bro/logs/

Replace AIH.IP.ADDRESS with the address of the Rita/AI-Hunter system. If your logs are stored somewhere other than /opt/bro/logs/ on this sensor, adjust that too. This should start sending logs over to the Rita/AI-Hunter system. It’s OK to leave this running; any files you successfully transfer now will not be resent later.

Please check the file that initiates sending logs:

cat /etc/cron.d/bro_log_transport

It should look like the following:

5 * * * * senduser /usr/local/bin/bro_log_transport.sh --dest AIH.IP.ADDRESS --localdir /opt/bro/logs/

“senduser” will need to be the account name on this system under which you did the installation, “AIH.IP.ADDRESS” should be the AI-Hunter system’s IP, and “/opt/bro/logs/” will need to be the directory where you have Bro logs on this system.


The whitelist is stored in json format, an industry standard for sharing data. Here’s a small part of the top of the default whitelist:

[
  {
    "Name": "8075",
    "Type": "asn",
    "Modules": [
      {
        "Name": "Beacons",
        "Src": false,
        "Dst": true
      }
    ],
    "Comment": "Microsoft patching and time servers"
  },
  {
    "Name": "41231",
    "Type": "asn",
    "Modules": [
      {
        "Name": "Beacons",
        "Src": false,
        "Dst": true
      }
    ],
    "Comment": "Ubuntu patching servers"
  },
...

 

You have the ability to edit this file to add new entries, take out existing entries, or modify entries. If you do, here are a few notes about the formatting in this file:

You must use double quotes for all strings; single quotes, backquotes, and the “smart quotes” used in word processors are not valid. For example, "Name" and "8075" from above are valid; ‘Name’ and `8075` are not.

Whenever using true, false, or null as values, these must be all lowercase.

Inside each matched pair of square brackets (“[” and “]”, json lists) and inside each matched pair of curly braces (“{” and “}”, json dictionaries), the entries are separated by commas, but there is no comma after the final entry. For example:

{
  "Name": "Beacons",
  "Src": false,
  "Dst": true
}
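Both rules can be checked quickly from the command line using Python’s standard json module (assuming python3 is installed; the jq check described further below behaves the same way):

```shell
# Valid: double quotes, lowercase true/false
printf '{"Name": "Beacons", "Src": false, "Dst": true}\n' | python3 -m json.tool >/dev/null \
    && echo "valid json"
# Invalid: single quotes are rejected by the parser
printf "{'Name': 'Beacons'}\n" | python3 -m json.tool >/dev/null 2>&1 \
    || echo "not valid json"
```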

 

Most Linux distributions and the Mac OS offer a tool called jq (“json query”) that allows you to extract data from and modify json files. It’s not commonly installed by default, but should be available in your package manager. Once installed you can do the following:

To see the full contents in pretty-printed format (like the example above where entries are indented according to how deep they are), use:

cat edited-whitelist.json | jq . | less

 

To check whether a json file is in a valid format, run:

$ cat edited-whitelist.json | jq . >/dev/null
$

 

When you’re returned to a prompt directly, that means the format appears correct. If the file is not valid json, such as this one where I used single quotes instead of double quotes:

$ cat malformed-whitelist.json | jq . >/dev/null
parse error: Invalid numeric literal at line 1, column 16
$

 

you’ll get back some kind of error.

The default json output format (pretty-printing, as seen above) takes a lot of lines to display, especially when you have a large whitelist. To give each whitelist entry a single line run the following (all on one line, even if wrapped in this document):

( echo '[' ; cat edited-whitelist.json | jq -c '.[]' | sed -e '$!s/$/,/' ; echo ']' ) >whitelist-perline.json

 

The whitelist-perline.json contains the same content and is still a valid json file, but shows the whitelist entries one per line, such as:

[
{"Name":"8075","Type":"asn","Modules":[{"Name":"Beacons","Src":false,"Dst":true}],"Comment":"Microsoft patching and time servers"},
{"Name":"41231","Type":"asn","Modules":[{"Name":"Beacons","Src":false,"Dst":true}],"Comment":"Ubuntu patching servers"},
{"Name":"16625","Type":"asn","Modules":[{"Name":"Beacons","Src":false,"Dst":true}],"Comment":"Akamai CDN"}
]
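The heavy lifting in that pipeline is done by the sed expression '$!s/$/,/', which appends a comma to every line except the last one ($! means “on every line but the final one”). A minimal demonstration:

```shell
# Append a comma to all lines except the last one
printf 'first\nsecond\nthird\n' | sed -e '$!s/$/,/'
# prints:
#   first,
#   second,
#   third
```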

 


Error Messages

EXAMPLE:

./install_acm.sh: line 177: /dev/stderr: Permission denied

This is a permission problem, most often seen on Centos or Red Hat Enterprise Linux. It shows up if you log in as one user, then use sudo or su to switch to another before running install_acm.sh .

FIX:

SSH to the target system as the same user that will run install_acm.sh, rather than switching users after logging in.


Category: Error Messages

While running the AI-Hunter install script, the error “This processor does not have SSE4.2 support needed for AI Hunter” is returned. The processor on your AI-Hunter box must be new enough (roughly 2008 or newer) to include a set of instructions that can perform multiple math operations at once. The specific name for these features is “SSE4.2”. To check if your system supports these, run this command at the command line (as any user; one does not have to be root to check this):

grep '^flags.*sse4_2' /proc/cpuinfo

If your processor can run these instructions, you’ll see one or more lines that look something like:

flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush
dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon
pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64
monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt
tsc_deadline_timer xsave avx lahf_lm ida arat epb xsaveopt pln pts dtherm tpr_shadow
vnmi flexpriority ept vpid

(Your output doesn’t have to match this; it just needs “sse4_2” somewhere on the line.) If your processor doesn’t support these instructions, you won’t see any output; you’ll be returned to a command prompt, like:

$ grep '^flags.*sse4_2' /proc/cpuinfo
$

If your processor is out of date and does not support these instructions, you will need to use a system with an upgraded processor.


Category: Error Messages

EXAMPLE:

An Error message along the lines of “Unable to load libz.so.1”.

The installer script has attempted to run programs from inside /tmp or /var/tmp/; Centos, RHEL, and possibly Fedora systems block running programs from temporary directories mounted with the “noexec” option.

FIX:

Edit /etc/fstab as root (or with sudo) and change the “noexec” option to “exec” on just the lines mounting “/tmp” and “/var/tmp”, if they exist and they have that option. When done, reboot or run:

mount -o remount,exec /tmp >/dev/null 2>&1

mount -o remount,exec /var/tmp >/dev/null 2>&1


Category: Error Messages

I get an “unrecoverable problem” or “unrecoverable error” while running the installation script.

Most installation problems occur due to a failure in authentication. Please try the following:

SSH from the system from which you are performing the install to the system that will be running Bro and RITA. Log in with whichever account you are using. If this fails, you have a problem with the SSH server or login credentials on that system.

While still logged in via SSH, try to run a command via sudo. Something like:

sudo ifconfig

If the command fails, you have a problem with your sudo setup. Make sure sudo is configured properly and that the account you are using is part of the sudo group. Repeat the above tests on the system where you are installing AI-Hunter.


Category: Error Messages

Installation, Upgrades & Configuration

If you're using AI-Hunter 4.0.0 or higher, run:
sudo manage_web_user.sh add -u '[email protected]' -p 'newuserpassword'
If you're using AI-Hunter 3.8.0 or lower, run:

sudo ~/AIH-source/AI-Hunter-latest/scripts/manage_web_user.sh add -u '[email protected]' -p 'newuserpassword'

 

Note: The Username (-u) is required to be in the format of an email address.

 


This isn’t supported in AI-Hunter versions earlier than v3, because RITA and AI-Hunter need independent databases for their operation. The advantage of using 2 different systems is that no matter how much data is being processed by RITA, AI-Hunter will continue to operate smoothly.

In versions 3.x and later, AI-Hunter and RITA have been optimized to share the same system.


We’ve done the majority of our early testing on Ubuntu 16.04, and that’s why we encourage its use. If that’s not appropriate for you, the following list shows our recommendations, starting with the most likely to work at the top and working down to the choices most likely to take additional time and troubleshooting:

1) Ubuntu Linux 16.04
2) Centos Linux 7.x
3) Fedora Linux 25 or newer or recent Debian Linux

We don’t recommend Windows or Mac OS as platforms on which to run these packages, but heartily encourage their use by the analyst for running a web browser (Chrome or Firefox recommended) to review results.


sudo ~/AIH-source/AI-Hunter-latest/scripts/manage_web_user.sh reset -u '[email protected]' -p 'newpassword'

Note: The Username (-u) is required to be in the format of an email address.


(NOTE: This only applies to AI-Hunter version 3.8.0; 3.8.0 was the first version with BeaKer included.  Versions 4.0.0 and higher have this bug fixed.  To find out your AI-Hunter version, go to the Dashboard, Settings (gear in the upper right), and “About”.)

On your AI-Hunter system, look at /etc/AI-Hunter/config.yaml . Near the bottom of that file you should have a line starting with “BeakerHost:”, like one of the following forms:

BeakerHost: "https://14.96.107.22:5601"
or
BeakerHost: "https://beakerhostname.example.com:5601"
or
BeakerHost: "https://[2604:a340:206:d94::13:8001]:5601"
or
BeakerHost: "https://2604:a340:206:d94::13:8001:5601"

Each of these forms has the IP address or hostname of the BeaKer server, as well as the port on which that server is run (5601 by default).

The first three forms are fine – they tell AI-Hunter how to reach the BeaKer server when it’s on an IPv4 address, hostname, or IPv6 address, respectively. If you have one of these first three, this FAQ entry doesn’t apply to you; if you’re still having trouble, contact [email protected] for help.

The fourth form is almost identical to the third but is missing the square brackets around the IPv6 address. These brackets are required, so if your “BeakerHost” line is missing them, do the following:

  • While still logged in to your AI-Hunter server, edit the file /etc/AI-Hunter/config.yaml (substitute your favorite editor for vim):
sudo vim /etc/AI-Hunter/config.yaml
  • Scroll down to the BeakerHost line.
  • Edit that line and add a left square bracket immediately after “://”
  • Add a right square bracket immediately before “:5601”
  • While the IPv6 address inside the brackets will be different, your line should look like:
BeakerHost: "https://[2604:a340:206:d94::13:8001]:5601"
  • Save the file and exit
  • Run the following commands:
cd ~/AIH-source/AI-Hunter-latest/
sudo ./hunt down
sudo ./hunt up -d --force-recreate

Go back to your AI-Hunter console, force a reload of the page with shift-ctrl-R, and try clicking on the BeaKer icon again – a new tab should be opened with the BeaKer console.
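If you prefer, the bracket edit can be scripted. A sketch with sed; it assumes the value is wrapped in plain double quotes and uses the default port 5601, as in the examples above:

```shell
# Add the required square brackets around the IPv6 address in a
# BeakerHost line. Assumes plain double quotes and the default port 5601.
line='BeakerHost: "https://2604:a340:206:d94::13:8001:5601"'
echo "$line" | sed -E 's|://|://[|; s|:5601"|]:5601"|'
# prints: BeakerHost: "https://[2604:a340:206:d94::13:8001]:5601"
```

To change the file in place, run the same substitutions with sed -i against /etc/AI-Hunter/config.yaml after making a backup copy.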

 

Direct Link to this FAQ Item: https://portal.activecountermeasures.com/support/faq/?Display_FAQ=2757

You can reset your password from the command line. SSH into the system running AI-Hunter. Go to the home directory of the user account that was used to install AI-Hunter. Run the command:

sudo ~/AIH-source/AI-Hunter-latest/scripts/manage_web_user.sh reset -u '[email protected]' -p 'newpassword'

Replace [email protected] and newpassword with the existing username to reset and the new password you wish to use. Once complete, this account will now be able to login via the Web interface.

Note: the username (-u) must be in the format of an email address.


This script will show the status of both the AI-Hunter and Bro/Zeek systems. Example call:

aih_status.sh | less

If you’d prefer to create a compressed version of this output ready to attach to a tech support email thread, run the following, all on one line:

TF=$(mktemp -q /tmp/aihstat.$(date +%Y%m%d%H%M%S).XXXXXX) ; aih_status.sh >"$TF" 2>&1 ; gzip -9 "$TF"

The resulting file in /tmp/ whose name starts with aihstat and ends with gz is ready to send back as part of a support request.


Run:

sudo iptables -L -nxv | less -S

cd ~/AIH-source/AI-Hunter-latest/

In the following, everything from “sudo” through “grep email” is one line:

sudo ./hunt run --rm db_client mongo_cmd.sh "db.getSiblingDB('users').user.find({},{_id:0,email:1,active:1})" | grep email


This package is provided in all supported Linux distributions.

On Debian and Ubuntu Linux, run:

sudo apt-get -y install tcpdump

On CentOS, RHEL, or Fedora Linux, run:

sudo yum -y install tcpdump


This approach only works if you have created an actual DNS hostname for the AI-Hunter system and access it with a URL like https://aihunter.mydomain.com, as opposed to accessing it with an IP address such as https://1.2.3.4.

On the AI-Hunter system, make a backup of the original key and certificate with:

sudo cp -p /etc/AI-Hunter/private.key /etc/AI-Hunter/private.key.orig
sudo cp -p /etc/AI-Hunter/public.crt /etc/AI-Hunter/public.crt.orig

Create the keys for the hostname you use. To use the built-in openssl command on the AI-Hunter system, ssh to it and run:

openssl req -new -newkey rsa:2048 -nodes -keyout SERVER_NAME.key -out SERVER_NAME.csr

Send this “.csr” (Certificate Signing Request) file and any other requested information to your chosen Certificate Authority and pay to have it signed. They’ll return a signed certificate file.

Please save a copy of the key, csr, and crt files in a different system.

Copy the key you generated above to /etc/AI-Hunter/private.key on the AI-Hunter system.

Download the certificate you received from the CA to /etc/AI-Hunter/public.crt on the AI-Hunter server.

As the user under which AI-Hunter was installed, run:

sudo chown root /etc/AI-Hunter/public.crt /etc/AI-Hunter/private.key
sudo chmod 644 /etc/AI-Hunter/public.crt /etc/AI-Hunter/private.key
sudo ~/AIH-source/AI-Hunter-latest/hunt up -d --force-recreate web

Now go back to your web browser and reload the AI-Hunter interface with Shift-Ctrl-R .

From this point on you should no longer see the warning about an unsigned certificate when starting AI-Hunter. To confirm that the new certificate is being used, go to https://aihunter.mydomain.com (substituting your hostname) and click on the lock to the left of the URL (the steps to see certificate details vary between browsers). You should be able to see the details of your new certificate there; if you still see a certificate with the Organization set to either “OffensiveCounterMeasures” or “Active Countermeasures”, retry these steps or check with support.
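The certificate details can also be inspected from the command line with openssl. A sketch – it generates a throwaway self-signed certificate so it runs anywhere; on your AI-Hunter server, point the final command at /etc/AI-Hunter/public.crt instead:

```shell
# Generate a throwaway self-signed cert purely for demonstration;
# on the AI-Hunter server, inspect /etc/AI-Hunter/public.crt instead.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/O=Example/CN=aihunter.mydomain.com" \
    -keyout demo.key -out demo.crt 2>/dev/null

# Show who the certificate was issued to/by and when it expires.
openssl x509 -in demo.crt -noout -subject -issuer -enddate
```

The “notAfter” line in the output is the expiration date to put on your calendar reminder.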

We recommend setting a yearly reminder to replace the certificate before it expires.


If you’re not getting syslog events, start by checking the “Threshold:” line in /etc/AI-Hunter/config.yaml (under Alert: and Syslog:). The default setting is 150, so you won’t get alerts unless a score crosses that number; you may wish to lower it. If you change this, you’ll need to run:

sudo ~/AIH-source/AI-Hunter-latest/hunt up -d --force-recreate

for it to take effect.

Confirm that your syslog server is ready to accept incoming syslog entries over UDP. On that system, run:

sudo netstat -anp | grep '^udp.*514'

You should get back something like:

udp 0 0 0.0.0.0:514 0.0.0.0:* 3609/rsyslogd
udp 0 0 :::514 :::* 3609/rsyslogd

The process ID and even the name of the syslog daemon at the end of the line may differ, but you should have at least one line showing a process listening on UDP port 514. If you don’t, consult your syslog server’s documentation on how to listen on that port:

For rsyslog and syslog-ng, see: https://raymii.org/s/tutorials/Syslog_config_for_remote_logservers_for_syslog-ng_and_rsyslog_client_server.html

For syslog, edit /etc/sysconfig/syslog and add -r inside the quotes on the SYSLOGD_OPTIONS= line and restart syslog with:

sudo service syslog restart
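With the server listening, you can generate a one-off test message from the AI-Hunter system using the logger tool. A sketch; it assumes the util-linux logger (which supports the -n/-P/-d options), and 127.0.0.1 is a stand-in for your syslog server’s IP:

```shell
# Send a single test message over UDP port 514.
# Replace 127.0.0.1 with the IP address of your syslog server.
logger -n 127.0.0.1 -P 514 -d "AI-Hunter syslog delivery test"
```

If the message shows up in the server’s logs (typically /var/log/messages or /var/log/syslog), basic delivery is working and you can move on to checking the AI-Hunter alert configuration.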

On the AI-Hunter system, listen for outbound syslog traffic. If it’s not already installed, install tcpdump with “sudo yum -y install tcpdump” (on RHEL/Fedora/CentOS) or “sudo apt-get -y install tcpdump” (on Ubuntu/Debian). Find the network interface that leads out of this system by running:

ip route get 8.8.8.8 | grep dev | sed -e 's/^.*dev //' -e 's/ .*//'

which will return something like:

eth0

Now start tcpdump with this command line, substituting the interface you just found for eth0 in the following command:

sudo tcpdump -i eth0 -qtnp -c 20 'udp port 514'

Leave this command running while you set up tcpdump on the syslog server in the next section.

Now look for incoming syslog traffic on the syslog server. As above, install tcpdump if needed with “sudo yum -y install tcpdump” (on RHEL/Fedora/CentOS) or “sudo apt-get -y install tcpdump” (on Ubuntu/Debian). Find the network interface that leads out of this system by running:

ip route get 8.8.8.8 | grep dev | sed -e 's/^.*dev //' -e 's/ .*//'

which will return something like:

eth0

Now start tcpdump with this command line, substituting the interface you just found for eth0 in the following command:

sudo tcpdump -i eth0 -qtnp -c 20 'udp port 514'

Leave the tcpdump commands running for a while. If you used the defaults of sending alerts at 10 minutes and 40 minutes past each hour, make sure you listen across at least one of these times.

If you see outbound traffic from the AI-Hunter server but no corresponding inbound traffic on the syslog server, it could be that 1) the AI-Hunter server can’t reach the syslog server (can you ping syslog from AI-Hunter?), 2) there’s an outbound firewall on AI-Hunter that’s blocking outbound syslog, 3) there’s an inbound firewall blocking syslog traffic on the syslog server, or 4) a router or firewall in the middle is blocking syslog traffic.

If you do see both outbound and inbound syslog traffic, see if your syslog server is routing these messages to a non-default file.

If you don’t see any syslog traffic from either copy of tcpdump, the configuration for sending syslog messages may not be correct. Please recheck the “Configuring Alerting” steps in the AI-Hunter Install Guide, checking the quoting (use double quotes) and the indentation (make sure you’re using spaces, not tabs).

If these steps don’t solve the problem please get back to us at [email protected] .


sudo vim /etc/AI-Hunter/rita.yaml

Edit the values after AlwaysInclude and/or InternalSubnets; be careful to use double quotes, leave no spaces inside the square brackets, and always put a /32 after individual IP addresses (or the appropriate subnet size after network blocks). Once saved, activate the changes by running:

sudo ~/AIH-source/AI-Hunter-latest/hunt up -d --force-recreate
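For example, following the quoting rules above, an InternalSubnets line covering the RFC 1918 ranges plus one individual address might look like this (a sketch; the addresses are placeholders for your own networks):

```yaml
# /etc/AI-Hunter/rita.yaml (excerpt) -- double quotes, no spaces
# inside the brackets, /32 on individual addresses
InternalSubnets: ["10.0.0.0/8","172.16.0.0/12","192.168.0.0/16","198.51.100.7/32"]
```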


To use a proxy, run these commands before the installer:

export http_proxy='http://1.2.3.4:3128/'
export https_proxy='http://1.2.3.4:3128/'
git config --global http.proxy http://1.2.3.4:3128
git config --global https.proxy http://1.2.3.4:3128

(replacing http:// with https:// if you must make an https connection to the proxy, 1.2.3.4 with the IP of the proxy, and 3128 with the port used by the proxy). With these set, wget and git will use that proxy automatically with no changes to the installer. The “git config” commands only need to be run once for that user and are stored permanently. The “export” commands need to be run once per terminal, or can be placed in ~/.bash_profile to run at each login.
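The export commands work because exported variables are inherited by any child process started from that shell, including wget and git. A quick sketch (1.2.3.4:3128 is a placeholder proxy):

```shell
# Exported variables are visible to any command started from this shell.
export http_proxy='http://1.2.3.4:3128/'
export https_proxy='http://1.2.3.4:3128/'
sh -c 'echo "child sees: $http_proxy"'
# prints: child sees: http://1.2.3.4:3128/
```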

What outbound connections are needed?

– Outbound connections to raw.githubusercontent.com, https port 443

– Outbound connections to github.com, https port 443

– Outbound connections to your Linux distribution’s patch servers, likely over http or https

– Outbound connections to download.docker.com, https port 443


When using aih_status.sh for diagnostics or delete_databases.sh to remove old databases:

– Outbound ping to 8.8.8.8

– Outbound connections to keyserver.ubuntu.com, http port 80

– Outbound connections to repo.mongodb.org, https port 443


Future versions of AI-Hunter may use outbound https connections to retrieve reputation information. Details about this access will be added as these features are included.


Run the following commands on the AI-Hunter system:

cd $HOME/AIH-source/AI-Hunter-latest/
cat ./VERSION
./hunt run --rm api rita --version
./hunt run --rm db mongo --version

The “cat” command returns the version of AI-Hunter, while the two hunt commands report the versions of RITA and MongoDB on the system.


IPFIX Systems

To check that your router is correctly feeding IPFIX/NetFlow data to IPFIX-RITA, first install tcpdump.

Installing tcpdump (This package is provided in all supported Linux distributions.)

On Debian and Ubuntu Linux, run:

sudo apt-get -y install tcpdump

On CentOS, RHEL, or Fedora Linux, run:

sudo yum -y install tcpdump

then run the following, replacing {ethernet_port} with the name of the network interface on which the IPFIX/NetFlow packets arrive:

sudo tcpdump -i {ethernet_port} -c 100 -qtnp 'udp port 2055'

If you see no output at all, press ctrl-c to kill the program and check that:

– Your router is configured to send IPFIX or NetFlow packets to the IP address on this network card, via UDP port 2055.
– Your router is able to send packets to this network card; the intermediate routers know how to get to your IPFIX-RITA box and there are no firewalls along the way or on the IPFIX-RITA box itself discarding the traffic.


Category: IPFIX Systems
sudo /opt/ipfix-rita/bin/ipfix-rita logs | tail

Category: IPFIX Systems
sudo ipfix-rita stop && sudo ipfix-rita start

Additional commands:

Clear backlog of packets:

ipfix-rita down -v

Stop:

ipfix-rita stop

Start:

ipfix-rita start

Start and show status:

ipfix-rita up


Category: IPFIX Systems

Restarting Systems

On the AI-Hunter system, run:

sudo hunt up -d --force-recreate

 

If you’re running AI-Hunter 3.8.0 or below, use this instead:

sudo ~/AIH-source/AI-Hunter-latest/hunt up -d --force-recreate


sudo /opt/bro/bin/broctl stop && sudo /opt/bro/bin/broctl deploy


Still looking for answers? Check our documentation:

AI-Hunter Documentation

Need help from our technical support team?

AI-Hunter Support Request