Archive

Posts Tagged ‘argus’

My bleeding heart: Dear argus, I miss you.

April 9, 2014

Since I started a new job, I’ve got a lot of stuff to master before I revisit implementing flow data.

With all the Heartbleed reaction craze, I noticed that some Snort defs were released the other day, and that means there are likely IOCs that can be found in historical flow data.

Carter looks like he’s going to start a write up shortly, so keep an eye on the mailing list.


A more effective monitoring architecture

December 31, 2013



After having a conversation with Carter Bullard of argus fame about six months ago, two points stuck with me (loosely quoted):

  • “You throttle ICMP?! Why?! ICMP has a lot of useful data for everyone!”
  • “Why are you so focused on using argus data for security? Focus on using it to monitor performance. It’ll give you something to deliver to your manager so they don’t think you’re wasting your time and their money. Then focus on security.”

But how? Well, quite easily. At boundaries, use an argus probe to:

  • watch for ICMP status that isn’t a successful ECHO-ECHO REPLY:
    ra -S 127.0.0.1:561 -s ltime saddr daddr smac dmac spkts dpkts flgs state inode - "icmp and (dst pkts eq 0 or not echo)"
    
  • watch for no “heartbeat” (needs tuning):
    rabins -S 127.0.0.1:561 -B 15s -M 5m - src bytes lt 1 or dst bytes lt 1 or src rate lt 1 or dst rate lt 1
    
  • watch for `loss`:
    rabins -S 127.0.0.1:561 -B 15s -M 5s - ploss gt 0
    
  • watch for protocol indicated problems:
    rabins -S 127.0.0.1:561 -B 15s -M 5s - frag or retrans or outoforder or winshut
    
  • watch for performance degradation below a threshold:
    #requires at least argus-clients-3.0.7.19
    rabins -S 127.0.0.1:561 -B 15s -M 5s - src jit gt N or dst jit gt N or src intpkt gt N or dst intpkt gt N
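The checks above can be driven from a small wrapper. A minimal Python sketch, assuming the probe listens at 127.0.0.1:561, that ra is run with a `-c ,` field delimiter, and that the field list below matches your rarc (all of these are assumptions to adjust to your setup):

```python
import logging
import shlex
import subprocess

# Hypothetical invocation of the first check above: ICMP flows that are not
# a clean ECHO/ECHO REPLY pair.  The delimiter and field list are assumed.
RA_CMD = ('ra -S 127.0.0.1:561 -c , '
          '-s ltime saddr daddr spkts dpkts flgs state '
          '- icmp and (dst pkts eq 0 or not echo)')

FIELDS = ['ltime', 'saddr', 'daddr', 'spkts', 'dpkts', 'flgs', 'state']

def parse_ra_line(line):
    """Split one delimited ra output line into a field dict."""
    return dict(zip(FIELDS, line.rstrip('\n').split(',')))

def watch():
    """Stream matching flows from the probe and log each one as a warning."""
    proc = subprocess.Popen(shlex.split(RA_CMD),
                            stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        rec = parse_ra_line(line)
        logging.warning('icmp anomaly: %s -> %s state=%s',
                        rec['saddr'], rec['daddr'], rec['state'])
```

From here the warnings can go wherever your alerting lives (syslog, logstash, nagios passive checks).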
    

If you want to filter in certain addresses, use a pipeline:

ra -S 127.0.0.1:561 -w - - icmp | rafilteraddr -r - -f raaddrfilter.txt -s ltime saddr daddr sbytes dbytes flgs state

Nagios et al. are certainly useful for gathering resource statistics via SNMP. They are also better than logstash at managing alerts (specifically schedules!).

The architecture would be like this:
[Figure: monitoring architecture diagram]

Nagios output from logstash is already coded.

Icinga et al. should still be used to send pings to devices, but no NOTICEs should be sent on these unreachable events, as the argus probe should be taking care of reachability monitoring.

I believe the bulk of the challenge will be in processing the argus data, but it is quite doable. See: Using elasticsearch for logs (I will probably run logstash or logstash-forwarder (aka lumberjack) on the local argus box for caching).
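For the argus-to-elasticsearch leg, the record shipping can be sketched with the stdlib alone. The index name, field names, and @timestamp convention below are assumptions, mirroring what a logstash pipeline would typically produce:

```python
import json
import time

def argus_to_es_doc(rec, index='argus-flows'):
    """Render one parsed argus record as an elasticsearch bulk-index pair.

    `rec` is a dict of flow fields (e.g. from ra's delimited output); the
    index name and the @timestamp convention are assumptions here.
    """
    action = {'index': {'_index': index}}
    doc = dict(rec)
    # Reuse the flow's last time as the event timestamp when present.
    doc['@timestamp'] = rec.get('ltime', time.strftime('%Y-%m-%dT%H:%M:%S'))
    return json.dumps(action) + '\n' + json.dumps(doc) + '\n'
```

The returned string is one action/document pair for the bulk API; concatenate pairs and POST them to `_bulk`.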

This consolidates performance monitoring into a single dashboard, whose backend can be utilized for SIEM when the time comes. Producing reports should be very easy, and a ton of work has already been done on layman-friendly statistics over elasticsearch data, so this is great.

Processing icinga service and host check_results into elasticsearch should be very easy. Look at:

  • service_perfdata_file_template (very important for your logstash grok definition)
  • service_perfdata_file_mode
  • service_perfdata_file_processing_interval
  • service_perfdata_file_processing_command
  • service_perfdata_command
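A hedged sketch of what the logstash-side parsing amounts to, assuming a tab-delimited service_perfdata_file_template such as `[SERVICEPERFDATA]\t$TIMET$\t$HOSTNAME$\t$SERVICEDESC$\t$SERVICEPERFDATA$` (the template is an assumption; whatever you configure dictates the real grok pattern):

```python
def parse_perfdata_line(line):
    """Parse one icinga/nagios perfdata line under the assumed template:
    [SERVICEPERFDATA] <TAB> timet <TAB> host <TAB> service <TAB> perfdata
    """
    tag, timet, host, service, perfdata = line.rstrip('\n').split('\t', 4)
    metrics = {}
    for item in perfdata.split():
        # Each item looks like label=value[;warn;crit;min;max].
        label, _, rest = item.partition('=')
        metrics[label] = rest.split(';')[0]  # keep the value, drop thresholds
    return {'time': int(timet), 'host': host, 'service': service,
            'metrics': metrics}
```

Whatever grok you write for logstash should recover exactly these fields.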

How I use argus to learn about outlying bandwidth consumers

December 19, 2013

flow-inspector and ntopng are very useful for this sort of thing, as they generally take care of visualizing all the traffic stats; but you can utilize ragraph all the same, as follows. Since I’ve yet to implement ntopng (specifically its historic feature), I’m relying on flow-inspector.

Graph of bytes per second (like load, bps) downloads initiated internally:

ra -r * -w - -t 10:50:00-11:00:00 - src net 192.168.100.0/24 | ragraph saddr dbytes -M 1s -r -

This should indicate the most downloading-ist client.

Then you can drill further into this client:

ra -r * -w - -t 10:50:00-11:00:00 - src host 192.168.100.46 | ragraph dbytes daddr -title 'dbytes per second requested by 192.168.100.46' -M 1s -r -

Then you can actually see what’s going on. This will give you the resulting load by destination address:

ra -r * -w - -t 10:50:00-11:00:00 - src host 192.168.100.46 | racluster -M daddr -r - -w - | rasort -M byte load -s saddr daddr load:15 -r - | less

And also, this will give you the load of all transactions between a src host and a dst host, per second:

ra -r * -w - -t 10:50:00-11:00:00 - src host 192.168.100.46 and dst host 128.122.215.45 | rabins -M 1s -s stime ltime saddr daddr load:15 -r - | less

which can also be expressed…

ra -r * -w - -t 10:50:00-11:00:00 - src host 192.168.100.46 and dst host 128.122.215.45 | ragraph dbytes -title 'dbytes initiated by 192.168.100.46 downloaded from 128.122.215.45' -M 1s -r -
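The same top-talker drill-down can be approximated offline. A sketch assuming ra is run with `-c ,` and `-s saddr dbytes` (the field layout is an assumption):

```python
from collections import defaultdict

def top_talkers(lines, n=5):
    """Sum dbytes per source address from comma-delimited ra output.

    Each line is assumed to be 'saddr,dbytes' (e.g. from
    `ra -r * -c , -s saddr dbytes`); returns the n biggest downloaders
    as (saddr, total_bytes) pairs, largest first.
    """
    totals = defaultdict(int)
    for line in lines:
        saddr, dbytes = line.strip().split(',')
        totals[saddr] += int(dbytes)
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]
```

This is essentially what the racluster/rasort pipeline above does, without the pretty graphs.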

Hackers in my network.

August 27, 2013

I had a dream the other night after one too many whiskeys with the brother-in-law.

I was in what appeared to be a business, but it reminded me of my high school computer lab, CRTs and all.

I most definitely was responsible for the entire network.

I began to see input on a terminal screen (sorry, monochrome white, not green or amber) that appeared to be file copy jobs.
I killed the copy process. Then it appeared on all of the other computers.

I then killed the process again.

I typed “Who’s this?” into the terminal.
The terminal typed back “Hi! Sorry about this.”

Then they started the copy job again. I killed the copy. They started the copy again. Repeat maybe 10 times.

I gave up, walked away. Told my wife (who was there for some reason) “We’re being hacked… Sorry can’t talk right now. In fact, I don’t know what to do.”

I physically stopped in my tracks, took a deep breath and thought the following:
1) Oh my god. I can’t do this. I can’t stop these guys. I can’t stop the data leak.
2) Oh my god. I’m going to get fired, because I can’t stop these guys.
3) Oh my god. Wait… Hey… kill the firewall. Unplug the network cables from the machines. Damn it. I’m going to have to wipe the machines; that is going to take a lot of time.

Then I woke up in a cold sweat, took a deep breath. Thought to myself, “thank god that was just a dream. I guess that was actually a nightmare.” Stumbled to the kitchen to grab some water, then went back to sleep.

argus wasn’t present. :(

Writing DNS lookup stuff to a DB using argus-client’s radump() and python

July 25, 2013

A few times on the mailing list, the question of how to archive DNS lookup stuff has come up.

I spent a few days writing a python script that takes the output of radump(), parses and writes it to a DB. radump() is an argus-client example that takes the binary argus user data and prints it using protocol printers.

Creating the DB structure:
You must import the DB structure into the DB.
I have also posted a gist of the db structure.

For example:

cd
curl https://gist.github.com/mbrownnycnyc/6083357/raw/069c5f6b782c5623dc0d671a076c53b301193a6a/argus_dnsdb.sql > argus_dnsdb.sql
mysql -uroot -p < argus_dnsdb.sql

Create a user:
Here is some quick SQL syntax to create a restricted user, which you can import as above (change newpassword):

use mysql;
GRANT SELECT, INSERT ON argus_dnsdb.* TO 'argusdns'@'localhost' IDENTIFIED BY 'newpassword';
FLUSH PRIVILEGES;
SHOW GRANTS FOR 'argusdns'@'localhost';

The DB Writer:
You can then use the DB writer.
I have posted a gist of the db writer.

For example (change newpassword):

cd
curl https://gist.github.com/mbrownnycnyc/6158144/raw/068f20728b116b977c670aed9539273f91693276/radump_to_dns_db.py > radump_to_dns_db.py
sed s/\"passwordhere\"/\"newpassword\"/ -i radump_to_dns_db.py

Processing DNS data from argus flow binary data:
To import from a file and see the output of the command (where argus.file is your file):

grep -v ^# /root/.rarc | grep -v ^$ > ~/for_dnsdb.rarc && if grep ^RA_TIME_FORMAT ~/for_dnsdb.rarc > /dev/null ; then sed s/^RA_TIME_FORMAT/#RA_TIME_FORMAT/g -i ~/for_dnsdb.rarc && echo -e "RA_TIME_FORMAT=\"%Y-%m-%d %T.%f\"\nRA_PRINT_LABELS=-1\nRA_FIELD_DELIMITER='^'" >> ~/for_dnsdb.rarc ; fi
radump -F ~/for_dnsdb.rarc -r argus.file -s seq ltime saddr daddr suser:1024 duser:1024 - port domain | python radump_to_dns_db.py

To connect to an argus server and not see the output (where 127.0.0.1:561 is your server):

grep -v ^# /root/.rarc | grep -v ^$ > ~/for_dnsdb.rarc && if grep ^RA_TIME_FORMAT ~/for_dnsdb.rarc > /dev/null ; then sed s/^RA_TIME_FORMAT/#RA_TIME_FORMAT/g -i ~/for_dnsdb.rarc && echo -e "RA_TIME_FORMAT=\"%Y-%m-%d %T.%f\"\nRA_PRINT_LABELS=-1\nRA_FIELD_DELIMITER='^'" >> ~/for_dnsdb.rarc ; fi
nohup radump -F ~/for_dnsdb.rarc -S 127.0.0.1:561 -s seq ltime saddr daddr suser:1024 duser:1024 - port domain | python radump_to_dns_db.py > /dev/null &

Rotating the DB:
If you wish to rotate the log, you may want to create a MySQL EVENT.

I have confirmed that this is a safe procedure: the EVENT will fail immediately if an INSERT fails, so as to avoid the destructive DELETE.

This EVENT runs at 00:00:05 every day.
It takes any record whose query time occurred before the current day’s midnight, and places it into a table named for the previous day’s date as ‘%Y%m%d’.

use argus_dnsdb;
DELIMITER |
CREATE EVENT `dnsdb_rotator`
ON SCHEDULE
EVERY 1 DAY
STARTS date_format(now(), '%Y-%m-%d 00:00:05')
ON COMPLETION NOT PRESERVE
ENABLE
DO BEGIN
set @target_table_name=CONCAT('`argus_dnsdb`.`',date_format(date_sub(now(),interval 1 day), '%Y%m%d'),'`');
set @create_table_stmt_str = CONCAT('CREATE TABLE ',@target_table_name,' like `argus_dnsdb`.`main`;');
PREPARE create_table_stmt FROM @create_table_stmt_str;
EXECUTE create_table_stmt;
DEALLOCATE PREPARE create_table_stmt;
set @a=unix_timestamp(date_format(now(), '%Y-%m-%d 00:00:00'));
set @insert_stmt_str = CONCAT('INSERT INTO ',@target_table_name,' SELECT * FROM `argus_dnsdb`.`main` WHERE qtime < ',@a,' ;');
PREPARE insert_stmt FROM @insert_stmt_str;
EXECUTE insert_stmt;
DEALLOCATE PREPARE insert_stmt;
DELETE FROM `argus_dnsdb`.`main` WHERE qtime < @a ;
END;
|
DELIMITER ;

Client is upcoming:
I will be writing a client and updating this post with the gist. The client will be able to take in the following:

`dnsdb_query.py` [-c [separator char]] -T [timespan WHERE clause injection] -atype [regex] -qtype [regex] -type [regex] -qhost [regex] -ahost [regex] -host [regex] -nsserver [regex] -nsclient [regex]

You can perform counts using `| wc -l` for instance.
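A sketch of the planned option surface using argparse (the option names come from the spec above; defaults and behavior are assumptions until the real client lands):

```python
import argparse

def build_parser():
    """Argument skeleton for the planned dnsdb_query.py client.

    This is a sketch of the interface spec above, not the real client:
    all defaults and the separator convention are assumptions.
    """
    p = argparse.ArgumentParser(description='Query the argus DNS DB.')
    p.add_argument('-c', metavar='CHAR', default='\t',
                   help='output field separator character')
    p.add_argument('-T', metavar='WHERE', default='',
                   help='timespan WHERE clause injection')
    # Regex filters from the spec; single-dash long options as written there.
    for opt in ('atype', 'qtype', 'type', 'qhost', 'ahost',
                'host', 'nsserver', 'nsclient'):
        p.add_argument('-' + opt, metavar='REGEX', default=None)
    return p
```

Each regex option would become a REGEXP predicate in the generated SQL against the `main` table.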

How to build an raservices().conf file effectively

July 17, 2013

After further investigation into the nDPI libs, it became clear that there was very little data to pull byte patterns out of. The majority of the definitions consider MANY more aspects essential to classifying a flow.

Therefore, to actually generate an raservices().conf file effectively, I would say get a very large data set:
1) replay it against nDPI
2) replay it against libprotoident
3) replay it against rauserdata() -M printer="encode32"

You will then be able to align protocol definitions.

There is no reason why efforts can’t be cumulative. As far as Carter is concerned, I’m sure he’d be happy to append a larger std.sig file to the distro.

So, although it was fun, it became clear that my work was going to fall short of the goal at the reliability I had wished for.

Generating raservices().conf files from the nDPI libs

July 16, 2013

 

Understanding the raservices() conf file:

Let’s take an example config file and break down the lines: ../argus-clients-*/support/Config/std.sig

Service: http            tcp port 80    n =    34 src = "50524F5046494E44202F737973766F6C"  dst = "485454502F312E312034303420526573"

This declares:

Each attribute and its parameter in the example line:

  • service as defined in /etc/services (or not): http
  • protocol as defined in /etc/services (or not): tcp
  • port as defined in /etc/services (or not): 80
  • occurrences of the src and dst patterns in the source data that assisted in determining them (“n”): 34
  • data portion sent from the client to the server (“src”; if you are unsure, leave a space): 50524F5046494E44202F737973766F6C
  • data portion sent from the server to the client (“dst”; if you are unsure, leave a space): 485454502F312E312034303420526573
  • string indicating whether this data is considered to be encrypted, likely a boolean considered during processing in some other way (not present in this example): “encrypted”

In this example, the data that has been generated (“src” and “dst”) is 16 bytes.

Let’s take the “src” and take a look.

#[offset]    [byte values]
0000    50 52 4F 50 46 49 4E 44 20 2F 73 79 73 76 6F 6C

Remember:
– 16 bytes in length = 128 bits in length
– each byte is written as two hex digits (0x00 through 0xFF)
– 0xFF = 1111 1111 = 8 bits, and 0x00 = 0000 0000

Or we can display this as follows using the same display notation:

#[offset]    [byte values]
00    50
01    52
02    4F
03    50
04    46
05    49
06    4E
07    44
08    20
09    2F
0A    73
0B    79
0C    73
0D    76
0E    6F
0F    6C

Now we understand what falls where.

nDPI protocol definition conversion:

We know that the printer expected by rauserdata() is "encode32" (normally written as -M printer="encode32"), which is an included function.

ArgusEncode32() does the following:
1) takes each individual byte (provided at a memory address, via a pointer)
2) treats it as a numeric value in the 0-255 (aka 0x0-0xFF) range
3) generates the two hex digits that represent this value in base 16
4) outputs the result as a string of hex digits
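Those four steps amount to plain hex encoding. A Python sketch of the behavior as described above (not the actual C source):

```python
def encode32(data):
    """Hex-encode raw bytes as the ArgusEncode32() description outlines:
    each byte (0-255) becomes two uppercase hex digits."""
    return ''.join('%02X' % b for b in data)
```

Encoding the example client data reproduces the “src” pattern from std.sig: `encode32(b'PROPFIND /sysvol')` yields `50524F5046494E44202F737973766F6C`.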

Given the example from afp.c from nDPI [https://svn.ntop.org/svn/ntop/trunk/nDPI/src/lib/protocols/afp.c]:

#define get_u_int16_t(X,O)  (*(u_int16_t *)(((u_int8_t *)X) + O))

if (get_u_int16_t(packet->payload, 0) == htons(0x0004)) {
//do something
return;
}
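In Python, the macro’s effect (given that the comparison is against an htons() value) can be mirrored with struct, reading in network byte order; a sketch:

```python
import struct

def get_u16(payload, offset):
    """Python equivalent of nDPI's get_u_int16_t as used above: read 16
    bits at a byte offset, in network byte order (which is what comparing
    raw payload bytes against htons(...) amounts to)."""
    return struct.unpack_from('!H', payload, offset)[0]
```

So `get_u16(payload, 0) == 0x0004` matches the same packets as the C comparison.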

Cutting out the middle man: because the comparison value is wrapped in htons(), the payload bytes are matched in network byte order (big-endian), which is also how the binary Argus data sits in memory as accessed by raservices(), so we can compare the values directly (the purpose of raservices()).

In this example, we are sending the 16 bits (2 bytes) of memory that can be expressed as `0x0004` into ArgusEncode32(), for which the output as a string of hex characters is "0004".
We can see from the above C macro (as noted by "#define") that O is the byte offset, here 0.

Let’s take the whole definition of “AFP: DSI OpenSession detected” and convert it for use with raservices:

if:
packet->payload_packet_len >= 22 &&
get_u_int16_t(packet->payload, 0) == htons(0x0004) &&  //the 16 bits starting at byte-offset 0 (bits 0-15) of the payload equal the 16-bit network-byte-order value 0x0004, and...
get_u_int16_t(packet->payload, 2) == htons(0x0001) &&  //the 16 bits starting at byte-offset 2 (bits 16-31) equal the 16-bit network-byte-order value 0x0001, and...
get_u_int32_t(packet->payload, 4) == 0 && //the 32 bits starting at byte-offset 4 (bits 32-63) equal 0, and...
get_u_int32_t(packet->payload, 8) == htonl(packet->payload_packet_len - 16) && //the 32 bits at byte-offset 8 (bits 64-95) equal a 32-bit network-byte-order value of the payload length minus 16 [a length check of sorts], and...
get_u_int32_t(packet->payload, 12) == 0 && //the 32 bits at byte-offset 12 (bits 96-127) equal 0, and...
get_u_int16_t(packet->payload, 16) == htons(0x0104)) //the 16 bits at byte-offset 16 (bits 128-143) equal 0x0104
then:
this flow has the attribute "AFP: DSI OpenSession detected"
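As a sanity check, the condition above can be sketched in Python, reading multi-byte fields in network byte order (which is what the htons()/htonl() comparisons amount to):

```python
import struct

def is_afp_dsi_opensession(payload):
    """Sketch of the nDPI 'AFP: DSI OpenSession' condition above."""
    if len(payload) < 22:
        return False
    u16 = lambda o: struct.unpack_from('!H', payload, o)[0]
    u32 = lambda o: struct.unpack_from('!I', payload, o)[0]
    return (u16(0) == 0x0004 and          # bits 0-15
            u16(2) == 0x0001 and          # bits 16-31
            u32(4) == 0 and               # bits 32-63
            u32(8) == len(payload) - 16 and  # bits 64-95: length check
            u32(12) == 0 and              # bits 96-127
            u16(16) == 0x0104)            # bits 128-143
```

Feeding synthetic payloads through this makes it easy to verify the byte layout before committing it to a signature.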

1) Because raservices() only considers the attributes mentioned in the previous section Understanding the raservices() conf file, we can toss the payload length out.
2) Next we’ll build out the entire 144 bits of data:

#[offset in hex]    [byte values for raservices().conf]
00    04
01    00
02    01
03    00
04    00
05    00
06    00
07    00
08    
09    
0A    
0B    
0C    00
0D    00
0E    00
0F    00
10    04
11    01

Then the raservices().conf line would be:

Service: afpovertcp            tcp port 548    n =  5000 src="010400000000        0000000000010004"
Service: afpovertcp            tcp port 548    n =  5000 dst="010400000000        0000000000010004"

I’ve given these definitions an arbitrary weight of 5000. I am not sure how the algorithm takes this weight into account.
