Bind Query Logging to Splunk from pfSense

I wanted to add a secondary DNS server (NS2) to my home network as a backup to the primary (NS1), providing redundancy in case there is a connectivity issue with the primary. I installed the Bind package on my pfSense gateway via the web GUI and configured it as a Slave server. You need to configure a 'View' on the Bind server for zone transfers or lookups to work.

View settings. A simple, get-it-to-work setup

After configuring a View and a new Zone, zone transfers from my primary DNS server started to work along with queries. At this point, and after a minor tweak to the DHCP server, I had accomplished what I needed.
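
For reference, the View and slave Zone configured through the GUI boil down to named.conf stanzas roughly like this (the view name, zone name, and master address below are placeholders, not my actual setup):

view "lan" {
    match-clients { any; };              // who is allowed to query this view

    zone "home.example.net" {
        type slave;
        masters { 192.0.2.10; };         // NS1, the primary server
        file "slave/home.example.net.db";
    };
};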

Then I thought I'd take it a step further. I'm already logging queries from NS1 to Splunk, so why not log NS2 queries as well? This way I can monitor when NS2 is being used and which devices are making queries. Logging on pfSense is done simply with syslogd and is not very configurable via the web GUI, so I needed to get creative with how I set up Bind logging, since pfSense is already sending firewall events to Splunk over standard UDP 514 (that is another blog post in itself).

There are two major issues I needed to overcome to make this work. First, I needed to get syslogd working inside the Bind (named) jail that pfSense creates. This was as simple as adding '-l /cf/named/var/run/log' to syslogd_flags in /etc/defaults/rc.conf. On FreeBSD, '-l' specifies where syslogd should create additional log sockets, which is required when a daemon logging to syslog runs inside a chroot jail.

syslogd_flags="-s -l /cf/named/var/run/log" # Flags to syslogd (if enabled).

Second, Bind needed to be configured to log via syslog, but because the pfSense web GUI is responsible for generating named.conf, editing the file from the command line is not recommended. The Bind settings screen allows additional custom options to be inserted into the options section of the configuration file. By adding a "};" before the logging stanza, you effectively close the options stanza and can insert a logging stanza (or any other configuration). The web GUI adds the final closing "};", which is why it is omitted from the example below.
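
Something along these lines in the custom options box does the trick. The channel name and the categories logged are my own choices here rather than anything the package requires; the local6 facility matches the syslog.conf entry described below.

};                                        // closes the options stanza the GUI opened

logging {
    channel splunk_queries {
        syslog local6;                    // hand messages to syslogd on facility local6
        severity info;
        print-category yes;
        print-severity yes;
    };
    category queries { splunk_queries; }; // directing the queries category enables query logging
                                          // the GUI appends the final "};" that closes logging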


To complete the setup, I added an entry to syslog.conf to pass all local6 log entries to the Splunk server.
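
The entry is a single line; the hostname below is a placeholder for my Splunk server, and UDP 514 is used by default when no port is given:

# forward Bind (named) query logs to the Splunk server
local6.*                                        @splunk.example.com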




Updating Splunk DHCP App MAC Address List


UPDATE: I discovered that the IEEE OUI list is much more comprehensive. I've updated the script to parse this file instead.

I've been using Splunk and the Linux DHCP app to monitor my DHCP server for some time now. It provides good insight into the devices connecting to my network and how often IP addresses are being requested. The one issue I noticed with the app was its short list of MAC OUIs (Organizationally Unique Identifier, the first 24 bits of a MAC address). The app uses a CSV file mapping MAC prefixes to the assigned organization, but this file is not comprehensive and is missing many manufacturers, which causes some inaccurate graphs when using the app.

I wrote a Python script to take the IEEE manufacturer database and convert it to a correctly formatted CSV file for the Linux DHCP app. The script outputs dhcpd_mac-vendorname.csv, which can be placed in $SPLUNK_HOME/etc/apps/dhcpd/lookups for use by the Splunk app.

You can grab the Python script from GitHub https://github.com/alxhrck/public/blob/master/ieee-oui-parse.py.
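
The script linked above is the real thing; the sketch below just shows the idea. The column header and the prefix format it writes are assumptions, so match them to whatever the app's existing lookup file uses.

#!/usr/bin/env python
# Rough sketch of the idea behind ieee-oui-parse.py (the real script is on GitHub).
# Reads the IEEE OUI list (oui.txt) and writes a two-column CSV mapping
# OUI prefix -> organization name.

import csv
import re

OUI_FILE = "oui.txt"                      # downloaded from the IEEE
OUT_FILE = "dhcpd_mac-vendorname.csv"     # drop into $SPLUNK_HOME/etc/apps/dhcpd/lookups

# Lines of interest look like: "00-22-72   (hex)\t\tAmerican Micro-Fuel Device Corp."
HEX_LINE = re.compile(r"^([0-9A-F]{2})-([0-9A-F]{2})-([0-9A-F]{2})\s+\(hex\)\s+(.+)$")

with open(OUI_FILE) as src, open(OUT_FILE, "w", newline="") as dst:
    writer = csv.writer(dst)
    writer.writerow(["mac", "vendor_name"])             # header is a guess; check the app's CSV
    for line in src:
        match = HEX_LINE.match(line.strip())
        if match:
            oui = ":".join(match.group(1, 2, 3)).lower()  # e.g. "00:22:72"; format is an assumption
            writer.writerow([oui, match.group(4).strip()])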



Crashing Android Messaging App

Recently, I decided to downgrade my phone from CyanogenMod 10.1 (Jelly Bean) to CyanogenMod 9.1 (ICS). I wanted to keep a few things, like camera pictures and SMS. Android stores MMS and SMS messages in a SQLite database located at /data/data/com.android.providers.telephony/databases/mmssms.db, so I simply copied the mmssms.db file off my device before installing CM9.1. A problem arose when the Messaging app started crashing after I replaced the mmssms.db in the freshly installed OS. Running logcat and opening the Messaging app generated this error:

E/DatabaseUtils( 2527): android.database.sqlite.SQLiteException: Can't upgrade read-only database from version 57 to 55: /data/data/com.android.providers.telephony/databases/mmssms.db
E/DatabaseUtils( 2527):     at android.database.sqlite.SQLiteOpenHelper.getReadableDatabase(SQLiteOpenHelper.java:244)
E/DatabaseUtils( 2527):     at com.android.providers.telephony.MmsSmsProvider.query(MmsSmsProvider.java:286)
E/DatabaseUtils( 2527):     at android.content.ContentProvider$Transport.query(ContentProvider.java:178)
E/DatabaseUtils( 2527):     at android.content.ContentProviderNative.onTransact(ContentProviderNative.java:112)
E/DatabaseUtils( 2527):     at android.os.Binder.execTransact(Binder.java:338)
E/DatabaseUtils( 2527):     at dalvik.system.NativeStart.run(Native Method)

After some investigation I found the answer on StackOverflow: http://stackoverflow.com/questions/8030779/change-sqlite-database-version-number

You can manually set the user_version of the SQLite database by running PRAGMA user_version = 55; against the file. After I made this change, the Messaging app opened normally and there were no errors when watching logcat. Simple fix.
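
The whole fix, from the sqlite3 command line against the database file, looks something like this (the version numbers come from the logcat error above):

$ sqlite3 mmssms.db
sqlite> PRAGMA user_version;
57
sqlite> PRAGMA user_version = 55;
sqlite> .quit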

Some additional information about SQLite versions: https://www.sqlite.org/pragma.html#pragma_schema_version



Geolocate SSH Brute Force Attempts

Running a public-facing server is an interesting endeavor. Whether you’re running SSH, web, FTP, or any number of other services, your system is constantly being bombarded by service-scanning tools. Most of these tools have malicious intent and are testing hundreds, if not thousands, of other public servers for weak SSH passwords or vulnerable web applications. Since I started running alex.hrck.net, I’ve always been interested in the SSH scans I see in the server logs. Sometimes there are hundreds of login attempts for users that don’t exist on the system.

Jan 12 19:46:15 regulus sshd[20524]: Invalid user brad from 101.44.1.135
Jan 12 19:46:17 regulus sshd[20526]: Invalid user remote from 101.44.1.135
Jan 12 19:46:19 regulus sshd[20528]: Invalid user internet from 101.44.1.135
Jan 12 19:46:21 regulus sshd[20530]: Invalid user postmaster from 101.44.1.135
Jan 12 19:46:23 regulus sshd[20532]: Invalid user squid from 101.44.1.135
Jan 12 19:46:25 regulus sshd[20534]: Invalid user ldap from 101.44.1.135
Jan 12 19:46:27 regulus sshd[20536]: Invalid user marcus from 101.44.1.135
Jan 12 19:46:29 regulus sshd[20538]: Invalid user newsletter from 101.44.1.135

Eight login attempts in under 15 seconds

The geographical location of the scans' origins is particularly interesting to me. It highlights the interconnectedness of the Internet and shows that even a single, personal web server on the public Internet is exposed to some level of real risk. A larger organization may find the geographical location of attacks useful when compiling threat profiles or when implementing restrictions on incoming network traffic.

To better understand where these SSH login attempts were originating, I wrote a Python script to look up the location of each IP address and map them on Google Maps. I originally got the concept for this from a friend of mine, Chris Long, and his website (www.cl0ng.com). The script, creatively named ip_geolocate.py, uses IPInfoDB.com to fetch the location data for each IP read in from a text file. It then parses that information and creates a .kml (Keyhole Markup Language) file that can be read by Google Maps.

My current iteration of this script relies on reading from a static file called ip.txt. This file is generated by a simple bash one-liner that searches the server authentication logs for every instance of a failed SSH login.

/bin/cat /var/log/secure* | egrep '(failed|Invalid)' | egrep -o '([[:digit:]]{1,3}\.){3}[[:digit:]]{1,3}' > ip.txt

Bash script to generate ip.txt

The output from ip_geolocate.py is a .kml file called location_data.kml. This file can be imported into Google Earth or read by the Google Maps API. A cron job runs the bash one-liner and ip_geolocate.py daily.
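
A stripped-down sketch of the idea is below; the full script is linked at the bottom of this post. The IPInfoDB endpoint, response fields, and API key shown here are assumptions, so check their API documentation before relying on them.

#!/usr/bin/env python
# Simplified sketch of what ip_geolocate.py does: read IPs from ip.txt,
# ask IPInfoDB for coordinates, and write a KML file of placemarks that
# Google Earth / the Google Maps API can display.

import json
import urllib.request

API_KEY = "YOUR_IPINFODB_KEY"   # placeholder; sign up at IPInfoDB.com

def geolocate(ip):
    """Return (latitude, longitude) for an IP via IPInfoDB (endpoint assumed)."""
    url = ("http://api.ipinfodb.com/v3/ip-city/?key={}&ip={}&format=json"
           .format(API_KEY, ip))
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    return data["latitude"], data["longitude"]   # field names are assumptions

placemarks = []
with open("ip.txt") as f:
    for ip in {line.strip() for line in f if line.strip()}:   # de-duplicate the IPs
        lat, lon = geolocate(ip)
        placemarks.append(
            "  <Placemark><name>{0}</name><Point><coordinates>{1},{2}"
            "</coordinates></Point></Placemark>".format(ip, lon, lat))  # KML wants longitude first

with open("location_data.kml", "w") as out:
    out.write('<?xml version="1.0" encoding="UTF-8"?>\n')
    out.write('<kml xmlns="http://www.opengis.net/kml/2.2">\n<Document>\n')
    out.write("\n".join(placemarks))
    out.write("\n</Document>\n</kml>\n")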

Google Maps example: http://hrck.net/brute_map/ Source: view-source:http://hrck.net/brute_map/

Download: ip_geolocate.py
