
Posts Tagged ‘network’

Writing DNS lookup stuff to a DB using argus-client’s radump() and python

July 25, 2013

A few times on the mailing list, the question of how to archive DNS lookup data has come up.

I spent a few days writing a python script that takes the output of radump(), parses it, and writes it to a DB. radump() is an argus-client example program that takes the binary argus user data and prints it using protocol printers.

Creating the DB structure:
First, import the DB structure into your MySQL server.
I have also posted a gist of the DB structure.

For example:

cd
curl https://gist.github.com/mbrownnycnyc/6083357/raw/069c5f6b782c5623dc0d671a076c53b301193a6a/argus_dnsdb.sql > argus_dnsdb.sql
mysql -uroot -p < argus_dnsdb.sql

Create a user:
Here is some quick SQL to create a restricted user, which you can import the same way as above (change newpassword):

use mysql;
GRANT SELECT, INSERT ON argus_dnsdb.* TO 'argusdns'@'localhost' IDENTIFIED BY 'newpassword';
FLUSH PRIVILEGES;
SHOW GRANTS FOR 'argusdns'@'localhost';

The DB Writer:
You can then use the DB writer; I have posted a gist of it.

For example (change newpassword):

cd
curl https://gist.github.com/mbrownnycnyc/6158144/raw/068f20728b116b977c670aed9539273f91693276/radump_to_dns_db.py > radump_to_dns_db.py
sed s/\"passwordhere\"/\"newpassword\"/ -i radump_to_dns_db.py
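
For a sense of what the writer does, here is a minimal sketch (this is not the gist itself): it reads the '^'-delimited records that radump emits on stdin and INSERTs them into argus_dnsdb. The column names and the simple split() are assumptions for illustration only; the real parsing of the DNS payload in radump_to_dns_db.py is more involved, so use the gist.

#!/usr/bin/env python
# Minimal sketch only -- not radump_to_dns_db.py. Reads '^'-delimited
# radump records (seq, ltime, saddr, daddr, suser, duser) from stdin and
# inserts them into argus_dnsdb.main. Column names here are hypothetical;
# see argus_dnsdb.sql for the real schema.
import sys
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="argusdns",
                       passwd="newpassword", db="argus_dnsdb")
cur = conn.cursor()

for line in sys.stdin:
    fields = line.rstrip("\n").split("^")
    if len(fields) < 6:
        continue  # skip blank or malformed lines
    seq, ltime, saddr, daddr, suser, duser = fields[:6]
    cur.execute(
        "INSERT INTO main (seq, ltime, saddr, daddr, query, response) "
        "VALUES (%s, %s, %s, %s, %s, %s)",
        (seq, ltime, saddr, daddr, suser, duser))
    conn.commit()  # commit per record; the radump pipe may run indefinitely

conn.close()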

Processing DNS data from argus binary flow data:
To import from a file and see the output of the command (where argus.file is your file):

grep -v ^# /root/.rarc | grep -v ^$ > ~/for_dnsdb.rarc && if grep ^RA_TIME_FORMAT ~/for_dnsdb.rarc > /dev/null ; then sed s/^RA_TIME_FORMAT/#RA_TIME_FORMAT/g -i ~/for_dnsdb.rarc && echo -e "RA_TIME_FORMAT=\"%Y-%m-%d %T.%f\"\nRA_PRINT_LABELS=-1\nRA_FIELD_DELIMITER='^'" >> ~/for_dnsdb.rarc ; fi
radump -F ~/for_dnsdb.rarc -r argus.file -s seq ltime saddr daddr suser:1024 duser:1024 - port domain | python radump_to_dns_db.py

To connect to an argus server and not see the output (where 127.0.0.1:561 is your server):

grep -v ^# /root/.rarc | grep -v ^$ > ~/for_dnsdb.rarc && if grep ^RA_TIME_FORMAT ~/for_dnsdb.rarc > /dev/null ; then sed s/^RA_TIME_FORMAT/#RA_TIME_FORMAT/g -i ~/for_dnsdb.rarc && echo -e "RA_TIME_FORMAT=\"%Y-%m-%d %T.%f\"\nRA_PRINT_LABELS=-1\nRA_FIELD_DELIMITER='^'" >> ~/for_dnsdb.rarc ; fi
nohup radump -F ~/for_dnsdb.rarc -S 127.0.0.1:561 -s seq ltime saddr daddr suser:1024 duser:1024 - port domain | python radump_to_dns_db.py > /dev/null &

Rotating the DB:
If you wish to rotate the log, you may want to create a MySQL EVENT (note that the MySQL event scheduler must be enabled, e.g. SET GLOBAL event_scheduler = ON, for EVENTs to fire).

I have confirmed that this is a safe procedure: the EVENT will fail immediately if an INSERT fails, so as to avoid the destructive DELETE.

This EVENT runs at 00:00:05 every day.
It takes any record whose query time occurred before the current day's midnight and places it into a table created with the previous day's date, formatted as '%Y%m%d', as its name.

use argus_dnsdb;
DELIMITER |
CREATE EVENT `dnsdb_rotator`
ON SCHEDULE
EVERY 1 DAY
STARTS date_format(now(), '%Y-%m-%d 00:00:05')
ON COMPLETION NOT PRESERVE
ENABLE
DO BEGIN
set @target_table_name=CONCAT('`argus_dnsdb`.`',date_format(date_sub(now(),interval 1 day), '%Y%m%d'),'`');
set @create_table_stmt_str = CONCAT('CREATE TABLE ',@target_table_name,' like `argus_dnsdb`.`main`;');
PREPARE create_table_stmt FROM @create_table_stmt_str;
EXECUTE create_table_stmt;
DEALLOCATE PREPARE create_table_stmt;
set @a=unix_timestamp(date_format(now(), '%Y-%m-%d 00:00:00'));
set @insert_stmt_str = CONCAT('INSERT INTO ',@target_table_name,' SELECT * FROM `argus_dnsdb`.`main` WHERE qtime < ',@a,' ;');
PREPARE insert_stmt FROM @insert_stmt_str;
EXECUTE insert_stmt;
DEALLOCATE PREPARE insert_stmt;
DELETE FROM `argus_dnsdb`.`main` WHERE qtime < @a ;
END;
|
DELIMITER ;

Client is upcoming:
I will be writing a client and updating this post with the gist. The client will accept the following options:

`dnsdb_query.py` [-c [separator char]] -T [timespan WHERE clause injection] -atype [regex] -qtype [regex] -type [regex] -qhost [regex] -ahost [regex] -host [regex] -nsserver [regex] -nsclient [regex]

You can perform counts using `| wc -l` for instance.
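
In the meantime, purely as an illustration (this is not the actual client), an argparse skeleton matching the options listed above might start out like this; all of the query logic against argus_dnsdb is omitted:

#!/usr/bin/env python
# Illustrative skeleton only -- not the real dnsdb_query.py.
# It just mirrors the option list above; the SELECT-building logic is omitted.
import argparse

parser = argparse.ArgumentParser(description="Query the argus_dnsdb database")
parser.add_argument("-c", metavar="SEP", help="output field separator")
parser.add_argument("-T", metavar="TIMESPAN",
                    help="timespan WHERE clause injection")
for opt in ("atype", "qtype", "type", "qhost", "ahost",
            "host", "nsserver", "nsclient"):
    parser.add_argument("-" + opt, metavar="REGEX",
                        help="regex filter on %s" % opt)

args = parser.parse_args()
# ...build and run the SELECT against argus_dnsdb.main here...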


Keeping an eye out for stable ntopng

June 26, 2013

It appears that I missed the partial desublimation of ntopng, a newer version of ntop. I've yet to build it, but it looks great. Maybe it will replace the unstable flow-inspector that I am currently using. Unfortunately, I cannot give it the time it deserves to contribute code at this point.

I will be keeping an eye on the ntop user’s mailing list that I am subscribed to via feedburner.

Anomaly detection methodology with argus data

June 21, 2013

Monitoring policy packet and byte hit counts on a Fortigate via SNMP

June 14, 2013

Within the MIB for the Fortigate, there are two OIDs that contain the policy hit counts:

fgFwPolPktCount (1.3.6.1.4.1.12356.101.5.1.2.1.1.2)
Number of packets matched to policy (passed or blocked, depending on policy action). Count is from the time the policy became active.
1.3.6.1.4.1.12356.101.5.1.2.1.1.2.V.P = policy packet count for policy ID P, in VDOM V

fgFwPolByteCount (1.3.6.1.4.1.12356.101.5.1.2.1.1.3)
Number of bytes in packets matching the policy. See fgFwPolPktCount.
1.3.6.1.4.1.12356.101.5.1.2.1.1.3.V.P = policy byte count for policy ID P, in VDOM V

I just created a DENY policy for a variety of geographic regions, a feature of the Fortigate. Although I am also monitoring destination country code information with argus, I have not yet integrated argus into an IDS platform. Until I do, I can quickly set up an icinga/nagios service to query this value and report when it increases above 0. I am logging policy violations on the Fortigate so that I can quickly review the source there, pivot to argus, and then to the workstation itself.
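
A minimal sketch of what such a check could look like (not a finished plugin): shell out to net-snmp's snmpget, read the counter, and map it to nagios/icinga exit codes. The host, community string, and the .V.P suffix on the OID below are placeholders you would substitute for your own Fortigate, VDOM, and policy ID.

#!/usr/bin/env python
# Rough nagios/icinga-style check: CRITICAL if a deny policy's packet counter
# is above 0. HOST, COMMUNITY, and the .V.P suffix below are placeholders.
import subprocess
import sys

HOST = "192.0.2.1"                               # Fortigate address (placeholder)
COMMUNITY = "public"                             # SNMP v2c community (placeholder)
OID = "1.3.6.1.4.1.12356.101.5.1.2.1.1.2.1.42"   # fgFwPolPktCount for VDOM 1, policy 42 (placeholders)

try:
    out = subprocess.check_output(
        ["snmpget", "-v", "2c", "-c", COMMUNITY, "-Oqv", HOST, OID])
except (OSError, subprocess.CalledProcessError) as err:
    print("UNKNOWN - snmpget failed: %s" % err)
    sys.exit(3)

count = int(out.strip().split()[-1])  # tolerate a type prefix in the output

if count > 0:
    print("CRITICAL - policy hit count is %d" % count)
    sys.exit(2)
print("OK - policy hit count is 0")
sys.exit(0)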

Effectively parsing PCAPs with python’s querycsv

Here is a quick method I used today, before implementing an argus probe in a sustainable way, to create and parse PCAPs to identify high-bandwidth offenders.

Create PCAPs:

ifconfig eth0 promisc
netstat -i | grep eth #check for the P flag
mkdir /pcaps/
nohup tcpdump -i 1 -w /pcaps/pcap -C 20 -W 250 'host 192.168.100.27' -Z root & #not really secure, look at -Z

Note that querycsv.py/sqlite3 doesn't like file names containing underscores or dashes, or file names made up only of numbers.

This will generate /pcaps/pcapNNN…

Process PCAPs on a Windows box with tshark.exe:

echo echo ipsrc,ipdst,dstporttcp,srcporttcp,len ^> %1.csv > %temp%\tshark_len.bat
echo ".\Wireshark\tshark.EXE" -r %1 -T fields -e ip.src -e ip.dst -e tcp.dstport -e tcp.srcport -e frame.len -E separator=, -E quote=d ^>^> %1.csv >> %temp%\tshark_len.bat
for /r . %G in (pcap*) do %temp%\tshark_len.bat %G

`tshark` on Linux has relatively the same syntax.

Query CSVs with SQL statements:
Use a python module to quickly return calculations (http://pythonhosted.org/querycsv/):

pip install querycsv

Per-file operations:
Find the total bytes incoming to host:

querycsv.py -i test.csv "select ipsrc, ipdst, dstporttcp, srcporttcp, sum(len) as 'len_in_bytes' from test group by ipsrc"


Find the total bytes outgoing from host:

querycsv.py -i test.csv "select ipsrc, ipdst, dstporttcp, srcporttcp, sum(len) as 'len_in_bytes' from test group by ipdst"

For all files:

echo ipsrc,ipdst,dstporttcp,srcporttcp,len > mergedpcap.csv
cat *.csv | grep -v "ipsrc,ipdst,dstporttcp,srcporttcp,len" >> mergedpcap.csv

Find the total bytes incoming to host:

querycsv.py -i mergedpcap.csv "select distinct dstporttcp, ipsrc, ipdst, sum(len) as 'len_in_bytes' from mergedpcap group by ipsrc" > incoming.log

The record where ipsrc is the targeted host (in this case 192.168.100.27) will return the TOTAL length of all packets sent from the targeted host (all uploaded, yes UPloaded, bytes).


Find the total bytes outgoing from host:

querycsv.py -i mergedpcap.csv "select distinct dstporttcp, ipsrc, ipdst, sum(len) as 'len_in_bytes' from mergedpcap group by ipdst" > outgoing.log

The record where ipdst is the targeted host (in this case 192.168.100.27) will return the TOTAL length of all packets sent to the targeted host (all downloaded, yes DOWNloaded, bytes).

Other solutions:
You’ll notice that querycsv first imports the CSV data into an in-memory sqlite3 DB. This makes offering a full set of SQL queries and functions trivial.
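
As an illustration of that point, the same kind of aggregation can be done directly with Python's standard csv and sqlite3 modules. This is a minimal sketch; it assumes the mergedpcap.csv built above with the ipsrc,ipdst,dstporttcp,srcporttcp,len header:

#!/usr/bin/env python
# Sketch: load mergedpcap.csv into an in-memory sqlite3 DB and sum bytes per
# source, roughly what querycsv does under the hood.
import csv
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mergedpcap "
             "(ipsrc, ipdst, dstporttcp, srcporttcp, len INTEGER)")

with open("mergedpcap.csv") as f:
    reader = csv.reader(f)
    next(reader)  # skip the header row
    conn.executemany("INSERT INTO mergedpcap VALUES (?, ?, ?, ?, ?)", reader)

for row in conn.execute("SELECT ipsrc, sum(len) AS len_in_bytes "
                        "FROM mergedpcap GROUP BY ipsrc"):
    print(row)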

There exist other options to solve this sort of situation:
1) PCAP to SQL platforms: pcap2sql and c5 sigma.
2) SQL querying PCAP directly: PacketQ (which lacks some SQL queries and functions, see here). Here is a neat example of displaying some results.
3) robust solutions like pcapr-local, with integration to mu studio.

Using ipfixFlowInspectorExporter with VERMONT merge-features branch

This write-up is a companion to Configure VERMONT (VERsatile MONitoring Toolkit) on CentOS6.

 

As of May 1st, 2013, Lothar Braun has yet to merge the module that exports from VERMONT to the flow-inspector redis queue, called ipfixFlowInspectorExporter, into the mainline git repo for VERMONT; it is located in a branch called merge-features.

I will be continuing from the point on the Configure VERMONT page that first references this page.

(Diagram: the general data flow from packets to the flow-inspector web UI.)

Clone the VERMONT repo and check out the merge-features branch:

git clone http://github.com/constcast/vermont.git
cd ./vermont
git branch merge-features origin/merge-features
git checkout merge-features
cmake -DSUPPORT_SCTP=OFF -DSUPPORT_REDIS=ON -DWITH_TOOLS=OFF .
make
#find and remove the following section from ./CMakeLists.txt
#INSTALL(FILES ipfix-config-schema.xsd
#        DESTINATION share/vermont
#)
make install

You may continue back to section “Download and Build the VERMONT Manager web UI” (if you wish) on Configure VERMONT (VERsatile MONitoring Toolkit) on CentOS6.
or…

Start dumping flows to the redis queue:

The binary is located at /usr/local/bin/vermont.
The flow-inspector config is located at /usr/local/share/vermont/configs/flowinspector_exporter.xml.

cp /usr/local/share/vermont/configs/flowinspector_exporter.xml /usr/local/share/vermont/configs/flowinspector_exporter_original.xml

Refer to Understanding a Vermont config file and modify `flowinspector_exporter.xml` as necessary.

Start the VERMONT probe:

/usr/local/bin/vermont -f /usr/local/share/vermont/configs/flowinspector_exporter.xml

Patience…
As Lothar describes in detail in his write-up on waiting for data, the following occurs:

1) VERMONT has a timeout of 10 minutes on active flows and 5 minutes on inactive flows. This means it doesn't push flows to the redis queue until those timeouts expire. You will see “Processed 0 flows” in the output of preprocess.py while no flows are being pushed.

2) flow-inspector also caches data in the redis queue for five minutes before flushing it to the backend DB to be presented over the web UI. You will see “Live import. Flushing caches …” in the output of preprocess.py when this process occurs.

New Page: Implement flow-inspector on CentOS6

April 30, 2013

I have published a new page called Implement flow-inspector on CentOS6.

I’m going to circle back to Vermont, as it natively supports pushing flow data to the redis queue used by flow-inspector.
