2011-12-05

Enabling the RabbitMQ Management Plugin

Prior to the 2.7.x versions of RabbitMQ it was necessary to manually install the plug-ins that provided the management interface [as well as their dependencies]. Now in the 2.7.x series the management interface plug-in and its dependencies are included - but not enabled by default.  The management plug-in must be toggled into the enabled state using the new rabbitmq-plugins command.  Enabling a plug-in will automatically enable any other plug-ins it depends on.  Whenever you enable or disable a plug-in you must restart the server.
If you have a brand new 2.7.x instance installed, turn on the plug-in with:
service rabbitmq-server stop
rabbitmq-plugins enable rabbitmq_management
service rabbitmq-server restart
Running the rabbitmq-plugins command should have produced output like the following:

The following plugins have been enabled:
  mochiweb
  webmachine
  rabbitmq_mochiweb
  amqp_client
  rabbitmq_management_agent
  rabbitmq_management
Your management interface should now be available at TCP/55672.  The initial login and password are both "guest"; you will want to change those.
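For example, the guest password can be changed with the rabbitmqctl command [the new password here is just a placeholder]:
rabbitmqctl change_password guest SOME-NEW-PASSWORD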

2011-12-03

Idjit's Guide To Installing RabbitMQ on openSUSE 12.1

The RabbitMQ team provides a generic SUSE RPM which works on openSUSE 11.x, openSUSE 12.1, and I presume on the pay-to-play versions of SuSE Enterprise Server. About the only real dependency for RabbitMQ is the Erlang platform, which is packaged in the erlang language repo. So the only real trick is getting the RabbitMQ package itself [from this page].  Then installing and starting it is as simple as:
zypper ar http://download.opensuse.org/repositories/devel:/languages:/erlang/openSUSE_12.1 erlang
zypper in erlang
wget http://www.rabbitmq.com/releases/rabbitmq-server/v2.8.1/rabbitmq-server-2.8.1-1.suse.noarch.rpm
rpm -Uvh rabbitmq-server-2.8.1-1.suse.noarch.rpm
service rabbitmq-server start
Your RabbitMQ instance is now up and running; you will probably want to do some configuration and provisioning using the rabbitmqctl command.
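As a sketch, provisioning a dedicated user and virtual host for an application might look like this [the user, password, and vhost names are only examples]:
rabbitmqctl add_user myapp MY-PASSWORD
rabbitmqctl add_vhost /myapp
rabbitmqctl set_permissions -p /myapp myapp ".*" ".*" ".*"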

Update 2012-04-10: Updated these instructions to install RabbitMQ 2.8.1.  The later 2.7.x releases have start-up script issues, as those scripts use the "runuser" command which is not present on openSUSE.  Running the latest RabbitMQ is generally a good idea in any case; recent versions have corrected several memory leaks and manage resources more efficiently.

2011-11-27

All those SQLite databases...

Many current applications use the SQLite database for tracking information; these include F-Spot, Banshee, Hamster, Evolution, and others.  Even the WebKit component uses SQLite [you might be surprised to discover ~/.local/share/webkit/databases].  It is wonderfully efficient that there is one common local data-storage technique all these applications can use, especially since it is one that is manageable using a universally known dialect [SQL]. But there is a dark side to SQLite.  Much like the old dBase databases it needs to be vacuumed.  And how reliably are all those applications providing their little databases with the required affection?  Also, do you trust those lazy developers to have dealt with the condition of a corrupted database?   If an application hangs, or is slow, or doesn't open... maybe that little database is corrupted?
Aside: As a system administrator for almost two decades I do not trust developers. They still put error messages in applications like "File not found!".  Argh!
On the other hand SQLite provides a handy means of performing an integrity check on databases - the "PRAGMA integrity_check" command.  I've watched a few of these little databases and discovered that (a) they often aren't all that little, and (b) manually performing a VACUUM may dramatically reduce their on-disk size.  Both these facts indicate that developers are lazy and should not be trusted.
Note: in at least one of these cases the application has subsequently been improved. Developers do respond rather quickly when offered a blend of compliments spiced with bug reports.  No, I'm not going to name offending applications as that is too easily used as fodder by nattering nabobs.  And even the laziest Open Source developer is working harder than their proprietary brothers.
In light of this situation my solution is a hack - a Python script [download] that crawls around looking for SQLite databases.  First the script attempts to open the database in exclusive mode, then it performs an integrity check, and if that succeeds it performs a vacuum operation.  Currently it looks for databases in "~/.local/share" [where it will find databases managed by application appropriately following the XDG specification], "~/.cache", "~/.pki", "~/.rcc", and "~/.config".
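The heart of that process is simple; here is a minimal sketch of the per-database logic [the function name and path are illustrative, not taken from the actual script]:

import sqlite3

def check_and_vacuum(path):
    # Use exclusive locking so another application cannot write to
    # the database while we are checking and vacuuming it.
    connection = sqlite3.connect(path)
    connection.execute("PRAGMA locking_mode = EXCLUSIVE")
    # integrity_check returns a single row containing "ok" for a healthy database
    result = connection.execute("PRAGMA integrity_check").fetchone()
    if result[0] == "ok":
        connection.execute("VACUUM")
    else:
        print("%s may be corrupted: %s" % (path, result[0]))
    connection.close()

check_and_vacuum("/home/user/.local/share/example/example.db")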
Download the script and run it. Worst thing that happens is that it accomplishes nothing.  On the other hand it might recover some disk space, improve application performance, or reveal a busted database.

2011-10-31

Implementing Queue Expiration w/RabbitMQ

The latest versions of RabbitMQ support a feature whereby idle queues can be automatically deleted from the server.  For queues used in an RPC or workflow model this can save a lot of grief, as the consumers of these queues typically vanish leaving the queues behind. Over time these unused queues accumulate and consume resources on the server(s). If you are using pyamqplib, setting the expiration on a queue is as simple as:

import amqplib.client_0_8 as amq
connection = amq.Connection(host="localhost:5672", userid="USERNAME", password="PASSWORD", virtual_host="/", insist=False)
channel = connection.channel()
# The x-expires argument is specified in milliseconds.
queue = channel.queue_declare(queue="testQueue", durable=True, exclusive=False, auto_delete=False, arguments={'x-expires': 9000})
channel.exchange_declare(exchange='testExchange', type="fanout", durable=False, auto_delete=False)
channel.queue_bind(queue="testQueue", exchange='testExchange')

Now if that queue goes unused for 9 seconds it will be dropped by the server [the value is in milliseconds]. So long as the queue has consumers it will persist, but once the last consumer has disconnected and no further operations have occurred - poof, you get your resources back.

2011-10-13

Finding Address Coordinates using Python, SOAP, & the Bing Maps API

Bing Maps provides a SOAP API that can be easily accessed via the Python suds module.  Using the API it is trivial to retrieve the coordinates of a postal address.  The only requirement is to acquire a Bing API application key; the process is free, quick, and simple.


import sys, urllib2, suds

if __name__ == '__main__':  
    url = 'http://dev.virtualearth.net/webservices/v1/geocodeservice/geocodeservice.svc?wsdl'
    
    client = suds.client.Client(url)
    client.set_options(port='BasicHttpBinding_IGeocodeService')
    request = client.factory.create('GeocodeRequest')

    credentials = client.factory.create('ns0:Credentials')
    credentials.ApplicationId = 'YOUR-APPLICATION-KEY'
    request.Credentials = credentials

    #Address
    address = client.factory.create('ns0:Address')
    address.AddressLine = "535 Shirley St. NE"
    address.AdminDistrict = "Michigan"
    address.Locality = "Grand Rapids"      
    address.CountryRegion = "United States"
    request.Address = address

    try:        
        response = client.service.Geocode(request)    
    except suds.client.WebFault, e:        
        print "ERROR!"        
        print(e)
        sys.exit(1)

    locations = response['Results']['GeocodeResult'][0]['Locations']['GeocodeLocation']
    for location in locations:        
        print(location)


If you need to make the request via an HTTP proxy server expand the line client = suds.client.Client(url) to:


    proxy = urllib2.ProxyHandler({'http': 'http://YOUR-PROXY-SERVER:3128'})
    transport = suds.transport.http.HttpTransport()
    transport.urlopener = urllib2.build_opener(proxy)
    client = suds.client.Client(url, transport=transport)


The results will be Bing API GeocodeLocation objects that have Latitude and Longitude properties.  Note that you may receive multiple coordinates for an address as there are multiple mechanisms for locating an address; the method corresponding to the coordinates is a string in the CalculationMethod property of each GeocodeLocation object.
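With that in mind, the loop at the end of the script could be expanded to summarize each result; a sketch relying only on the properties described above:

    for location in locations:
        print("%s: %s, %s" % (location.CalculationMethod, location.Latitude, location.Longitude))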

2011-08-04

Suppressing SNMP Connection Messages

You have, of course, done the responsible sys-admin thing and set up an NMS (be it ZenOSS, OpenNMS, Nagios, whatever...).  Then there is the concomitant action of configuring SNMP services on all the relevant hosts.  All is good.  But running SNMP on several distributions churns out log messages; when you go to the logs to research a problem you have to filter out and sort through thousands upon thousands of pointless messages like:
Aug  1 16:08:38 flask-yellow snmpd[1976]: Connection from UDP: [192.168.1.38]:52021
Aug  1 16:08:38 flask-yellow last message repeated 24 times
Argh.  Detailed logging is good, but pointless noise is not.  The solution isn't very well documented, but you can bring this to a stop.

Step 1.) Make sure you have net-snmp 5.3.2.2 or later.  This should not be a problem as even RHEL5/CentOS5 provide this version via update.
    $ rpm -q net-snmp
    net-snmp-5.3.2.2-9.el5_5.1
Step 2.) Edit /etc/sysconfig/snmpd.options or your system's equivalent, making sure you do not pass the "-a" option to the SNMP daemon.  The "-a" option enables the logging of the source IP addresses of all incoming requests.  If you want to know about these kinds of events, iptables and ulog are more reliable and efficient methods for capturing that information.
    # OPTIONS="-Lsd -Lf /dev/null -p /var/run/snmpd.pid -a"
    OPTIONS="-Lsd -Lf /dev/null -p /var/run/snmpd.pid"
Step 3.) Edit /etc/snmp/snmpd.conf, verifying you have the dontLogTCPWrappersConnects directive set to 1 (true).  Add this directive to the configuration file if it is not present.
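The resulting line in snmpd.conf looks like:
    dontLogTCPWrappersConnects 1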
Step 4.) Restart the SNMP service.

Now when you go to look into the log files it is again possible to hear the breeze, the singing of the birds, and the distant growling of that guy from Kazakhstan who is trying to crack your SSH daemon.

2011-05-17

Automated Backup of IOS Router Configuration

Who hasn't had the experience of remoting to a router and making a configuration change... and not saving that change? Inevitably that is the weekend the facility will experience a power outage long enough to deplete its UPS. And then you get that dreaded text message from NetOps that a facility is down. Argh! Fortunately Cisco IOS 12.x and later supports a cron-like service known as "kron". One of the handiest uses for kron is to configure automatic backup of the router's configuration to a TFTP server.

kron occurrence backup at 0:00 Thu recurring
 policy-list backup
!
kron policy-list backup
 cli write
 cli show running-config | redirect tftp://192.168.1.38/brtgate.config
!

This creates a batch of commands named "backup" [where in typical IOS fashion everything is referred to as a "policy"] that will be executed every Thursday morning. This batch saves the running configuration to the startup configuration ["cli write"] and copies the running configuration to the specified TFTP server ["cli show running-config | redirect tftp://192.168.1.38/brtgate.config"]. The rather odd looking use of "redirect" is because the IOS "copy" command is interactive, and interactive commands cannot be run via "kron".

Remember that the file on the TFTP server has to exist, even if zero-sized, and be world-writable; otherwise the redirect will fail with a permission-denied error.
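Once the policy is in place you can sanity-check what the router has scheduled with the standard "show" command:
show kron schedule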

2011-04-20

block_dump logging

There are lots of tools for studying the system's use of CPU and memory, but I/O is generally harder to track down.  A useful trick is available via the kernel's block_dump setting.  Setting the value to "1" turns on block-access logging to the kernel ring-buffer [aka dmesg] and a value of "0" turns it back off.  This means it can be turned on by a simple:
echo "1" > /proc/sys/vm/block_dump
This logs the accesses to the block storage as:
[ 2032.934178] postmaster(11528): READ block 5058592 on dm-3 (16 sectors)
[ 2032.934200] postmaster(11528): READ block 5058624 on dm-3 (32 sectors)
[ 2032.934240] postmaster(11528): READ block 3172800 on dm-3 (16 sectors)
[ 2032.945328] banshee-1(11267): dirtied inode 1051864 (banshee.db-journal) on dm-0
[ 2032.945336] banshee-1(11267): dirtied inode 1051864 (banshee.db-journal) on dm-0
[ 2033.042671] python(11518): READ block 9017928 on dm-2 (32 sectors)
[ 2033.055771] python(11518): dirtied inode 267260 (expatbuilder.pyc) on dm-2
[ 2033.055808] python(11518): READ block 9017960 on dm-2 (40 sectors)
[ 2033.412972] nautilus(11078): dirtied inode 410492 (dav:host=127.0.0.1,port=8080,ssl=false) on dm-0
[ 2033.413001] nautilus(11078): READ block 50855560 on dm-0 (40 sectors)
[ 2033.431011] nautilus(11078): dirtied inode 410596 (dav:host=127.0.0.1,port=8080,ssl=false-ab9de673.log) on dm-0
[ 2033.431044] nautilus(11078): READ block 50855736 on dm-0 (64 sectors)
[ 2034.221831] jbd2/dm-2-8(386): WRITE block 21261800 on dm-2 (8 sectors)
[ 2034.221887] jbd2/dm-2-8(386): WRITE block 21261808 on dm-2 (8 sectors)
Handy.
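When you are done investigating, turn the logging back off with:
echo "0" > /proc/sys/vm/block_dump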