
Inserting records into a DB with argus-client rasqlinsert

If you’re here, you are probably interested in this so that you can graphically present your flows through client-side rendering with d3.js.

This post will cover some of the technicalities of rasqlinsert() so that you can insert records into an SQL DB. I’ll also cover structuring a basic database to store some records. This post will serve as a streamlined summary (while making some sense independently), and will be much less verbose than the argus database intro page, this thread about mode (-M) parameters and the general functioning of rasqlinsert, and this thread about more aggregation functions.

Later posts will cover how to properly design a database to support historic data dumps, and how to use rasqltimeindex() and rasql() effectively in a production environment.

After we populate a DB for about a day, we will get an idea of the size/count of our flows as well as have some real data to work with.

If you’re impatient, you can always use some pcaps from a large variety of places to populate your DB.

Which fields do we want?
There are many fields (called DSRs, data set records, in the argus docs) that argus exposes for each flow. All of these fields are listed starting on page 3 of the ra() man page. Additionally, keep in mind that argus will happily process netflow data.

I’ll choose a few that interest me, and perform a simple query using the ra command to see how the SQL record entries will look:

ra -b -S localhost:561 -s stime dur flgs proto saddr sport dir daddr dport spkts dpkts sbytes dbytes state

Looks pretty good and gives us information we wish to record.

Building the database structure
Let’s start by rolling up a quick database. We’ll worry about planning for production later (such as planning for truncation and rolling the tables).

Install and configure MySQL:

yum -y install mysql mysql-server
service mysqld start
mysql_secure_installation
#enter for no root password (none is set yet)
#Y to set root password
#enter a password
#Y to remove anonymous users
#Y to disallow root login remotely
#Y to remove test database and access to it
#Y to reload privilege tables now

Create the database and the dedicated argus SQL user:

echo "create database argus;" > ~/argus_db.sql
echo "create user 'argus'@'localhost' identified by 'argus_DBUSERPASSWORD';" >> ~/argus_db.sql
echo "grant all privileges on argus.* to 'argus'@'localhost';" >> ~/argus_db.sql
mysql -p < ~/argus_db.sql

You may also want to create a remote administrative user and use some remote GUI tools.
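If you do create that remote administrative account, the SQL follows the same pattern as above. A sketch, with placeholder names: the `argus_admin` user, its password, and the `%` wildcard host are all assumptions you should tighten to your management subnet.

```shell
# Hypothetical remote admin account -- restrict the host pattern in production.
cat > ~/argus_admin.sql <<'EOF'
create user 'argus_admin'@'%' identified by 'CHANGE_THIS_PASSWORD';
grant all privileges on argus.* to 'argus_admin'@'%';
flush privileges;
EOF
# mysql -p < ~/argus_admin.sql
```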

Create and populate the DB:
You can populate the DB with all the argus flows using the following:

rasqlinsert -S localhost:561 -s stime dur flgs proto saddr sport dir daddr dport spkts dpkts sbytes dbytes state -w mysql://argus:argus_DBUSERPASSWORD@localhost/argus/argusTable -m none -d

`-m none` stops rasqlinsert from truncating the table and re-populating it with the current set of aggregated flows; instead, every record seen is appended as a new row. `-d` runs it as a daemon.
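Since the credentials appear in several commands, it can help to assemble the `-w` target from variables when scripting this. A small sketch using the user and table names from above (the variable names are mine):

```shell
# Assemble the rasqlinsert write target from its parts.
DB_USER=argus
DB_PASS=argus_DBUSERPASSWORD
DB_HOST=localhost
DB_NAME=argus
DB_TABLE=argusTable
WRITE_URL="mysql://${DB_USER}:${DB_PASS}@${DB_HOST}/${DB_NAME}/${DB_TABLE}"
echo "$WRITE_URL"
# rasqlinsert -S localhost:561 -s stime dur flgs proto saddr sport dir daddr dport \
#             spkts dpkts sbytes dbytes state -w "$WRITE_URL" -m none -d
```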

Let it populate for a day or so.
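To watch how large the table gets while it populates, a standard `information_schema` query works; a sketch, assuming the `argus` database created above (note that `table_rows` is an estimate for InnoDB tables):

```shell
# Report estimated row count and on-disk size (MB) for each table in the argus schema.
cat > ~/argus_size.sql <<'EOF'
SELECT table_name,
       table_rows,
       ROUND((data_length + index_length) / 1024 / 1024, 1) AS size_mb
FROM information_schema.tables
WHERE table_schema = 'argus';
EOF
# mysql -u argus -p argus < ~/argus_size.sql
```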

Retaining records
rasqlinsert() will insert a column called ‘record’ that contains binary data: the entire argus record. This is very cool, in that you lose no data upon insert, and it may prompt you to adjust your front end and database design.

rasql() can take the binary data that’s stored in the ‘record’ column and translate it back to useful data.
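For example, rasql() accepts the same mysql:// URL scheme for reading that rasqlinsert uses for writing; something along these lines should print flows back out of the table (treat the exact flags as a sketch and check the rasql() man page):

```shell
# The same URL scheme works in reverse; rasql decodes the binary 'record' column.
READ_URL="mysql://argus:argus_DBUSERPASSWORD@localhost/argus/argusTable"
echo "$READ_URL"
# rasql -r "$READ_URL" -s stime proto saddr sport dir daddr dport
```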

It’s quite clear that the ability to pull other data out of the full record at a later time might prove useful. So, I suggest storing both your desired fields and the binary ‘record,’ as the argus developers do (since you need to explicitly note that you do not want to store the binary ‘record’).

The amount of storage you want to dedicate will dictate your retention design. Here is a basic scheme (given the age of a record) that I will test and adjust for growth, enforced by running a maintenance script daily:

0-15 days = full data fields + the binary ‘record’
15-30 days = full data fields + the binary ‘record’ stored compressed
30-90 days = full data fields stored compressed + the binary ‘record’ stored compressed
90-365 days = full data fields stored compressed
365+ days = truncate or archive to slow second-tier storage/server
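The scheme above can be driven by a daily cron job. As a sketch only: the table name, the use of MySQL’s COMPRESS() for the in-place compression tiers, and the cutoffs are all assumptions to adapt, and a real script should mark rows it has already compressed so they are not compressed twice (rasql also cannot read a ‘record’ until it is UNCOMPRESS()ed again).

```shell
# Daily maintenance sketch -- adjust table name, cutoffs, and archival target.
cat > ~/argus_maint.sql <<'EOF'
-- 15-30 days: compress the binary record in place (MySQL COMPRESS()).
-- NOTE: a production script must track already-compressed rows to avoid
-- double-compression on subsequent runs.
UPDATE argusTable
   SET record = COMPRESS(record)
 WHERE stime <  UNIX_TIMESTAMP(NOW() - INTERVAL 15 DAY)
   AND stime >= UNIX_TIMESTAMP(NOW() - INTERVAL 30 DAY);

-- 365+ days: truncate (or first archive to second-tier storage).
DELETE FROM argusTable
 WHERE stime < UNIX_TIMESTAMP(NOW() - INTERVAL 365 DAY);
EOF
# mysql -u argus -p argus < ~/argus_maint.sql   # run from cron once a day
```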
