2011-12-12

Querying Connectivity

Your application almost always needs to know whether there is a working network connection.  This is typically handled by wrapping the connection attempt in a try...except block.  That works, but it can be slow, and it means the UI can't really adapt to the current level of connectivity.  A much better solution is to query NetworkManager [used by every mainstream distribution] over the system D-Bus for the current connectivity state.  This method is used by many applications, from GNOME's Evolution to Mozilla's Firefox - but it doesn't seem to get much press coverage.  So here is a simple example of querying connectivity via Python [assuming NetworkManager 0.9 or later]:

#!/usr/bin/env python
import dbus

NM_BUS_NAME       = 'org.freedesktop.NetworkManager'
NM_OBJECT_PATH    = '/org/freedesktop/NetworkManager'
NM_INTERFACE_NAME = 'org.freedesktop.NetworkManager'
NM_STATE_INDEX = {  0: 'Unknown',
                   10: 'Asleep', 
                   20: 'Disconnected',
                   30: 'Disconnecting',
                   40: 'Connecting',
                   50: 'Connected (Local)',
                   60: 'Connected (Site)',
                   70: 'Connected (Global)' }

if __name__ == "__main__":
    bus = dbus.SystemBus()
    manager   = bus.get_object(NM_BUS_NAME, NM_OBJECT_PATH)
    interface = dbus.Interface(manager, NM_INTERFACE_NAME)

    state = interface.state()
    if state in NM_STATE_INDEX:
        print('Current Network State: {0}'.format(NM_STATE_INDEX[state]))
    else:
        print('Network Manager state not recognized.')
FYI: if you search the interwebz for the NetworkManager API specification, every search engine will send you to the wrong place; either just wrong or to documentation for an older version of the API. The current API specification is here.

GNOME3 Journal Extension

Now that's what I'm talking about!  A new extension just showed up on extensions.gnome.org that adds a "Journal" tab to the already awesome GNOME3 overview.  It integrates with Zeitgeist to provide access to recently or heavily used categories of items - sort of like "Recent" but all grown up and with college smarts.  And installing it is as easy as clicking "On" [assuming you have Zeitgeist already installed].
Journal tab in Overview
A very handy addition that builds on the same concept as the gnome-activity-journal [which is packaged for openSUSE, BTW].

2011-12-05

Enabling the RabbitMQ Management Plugin

Prior to the 2.7.x versions of RabbitMQ it was necessary to manually install the plug-ins that provided the management interface [as well as their dependencies]. In the 2.7.x series the management plug-in and its dependencies are included - but not enabled by default.  The management plug-in must be toggled into the enabled state using the new rabbitmq-plugins command.  Enabling a plug-in will automatically enable any other plug-ins that the specified plug-in depends on.  Whenever you enable or disable a plug-in you must restart the server.
If you have a brand new 2.7.x instance installed, turn on the plug-in with:
service rabbitmq-server stop
rabbitmq-plugins enable rabbitmq_management
service rabbitmq-server restart
When you run the rabbitmq-plugins command you should see the following output:

The following plugins have been enabled:
  mochiweb
  webmachine
  rabbitmq_mochiweb
  amqp_client
  rabbitmq_management_agent
  rabbitmq_management
Your management interface should now be available at TCP/55672.  The initial login and password are both "guest".  You want to change those.

2011-12-03

Idjit's Guide To Installing RabbitMQ on openSUSE 12.1

The RabbitMQ team provides a generic SUSE RPM which works on openSUSE 11.x, openSUSE 12.1, and I presume on the pay-to-play versions of SUSE Linux Enterprise Server. About the only real dependency for RabbitMQ is the Erlang platform, which is packaged in the erlang language repo. So the only real trick is getting the RabbitMQ package itself [from this page].  Then installing and starting it is as simple as:
zypper ar http://download.opensuse.org/repositories/devel:/languages:/erlang/openSUSE_12.1 erlang
zypper in erlang
wget http://www.rabbitmq.com/releases/rabbitmq-server/v2.8.1/rabbitmq-server-2.8.1-1.suse.noarch.rpm
rpm -Uvh rabbitmq-server-2.8.1-1.suse.noarch.rpm
service rabbitmq-server start
Now you probably want to do some configuration and provisioning using the rabbitmqctl command; but your RabbitMQ instance is up and running.

Update 2012-04-10: Updated these instructions to install RabbitMQ 2.8.1.  The later 2.7.x releases have start-up script issues, as those scripts use the "runuser" command, which is not present on openSUSE.  Running the latest RabbitMQ is generally a good idea in any case; recent versions have corrected several memory leaks and manage resources more efficiently.

2011-12-01

Using gedit to make a list of values into a set.

gedit is awesome;  the flexibility of the tool continues to impress me.  One problem I'm frequently faced with is a list of id values from some query, or utility, or e-mail message... and I want to do something with them.  So, for example I have:
10731
10732
10733
10734
10735
10736
10737
10738
10739
but what I need is those id values as a sequence such as for use in an SQL IN expression or to assign to a Python set or list.  What I want is:
('10731', '10732', '10733', '10734', '10735', '10736', '10737',
 '10738', '10739')
Reformatting a few numbers by hand isn't too hard - but what if I have a list of hundreds of id values? The answer, of course, is provided by gedit.  Under Tools -> Manage External Tools the user can build filters that can be applied to documents and have the results returned to gedit.  If I create a new external tool that takes the "Current document" as input and has "Replace current document" as output, then gedit will replace the contents of the current document with the results of the filter [pretty obvious;  and if it doesn't work I can always Ctrl-Z].  The body of the filter can be any script - a Python script is perfectly valid. Like this hack:
#!/usr/bin/env python
import sys

count = 0
line_length = 0
for text in sys.stdin:
    text = text.strip()
    if not text:
        continue
    # open the set on the first value, otherwise separate with a comma
    sys.stdout.write('(' if count == 0 else ', ')
    # wrap to roughly 80 characters per line
    if line_length > 74:
        sys.stdout.write('\n ')
        line_length = 0
    sys.stdout.write("'{0}'".format(text))
    line_length += len(text) + 4
    count += 1
sys.stdout.write(')')
sys.stdout.flush()
The current document becomes the standard input of the script, and the standard output of the script replaces the current document. The above hack reads in a list of lines and returns them as a set enumeration nicely wrapped to 80 characters per line. External tools are saved under names; I saved this one as "IN-Clause-Filter".
Now that I've set up the external tool, every time I paste a list of id values into gedit I can simply select Tools -> External Tools -> IN-Clause-Filter and my list is instantly turned into a set enumeration.
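For the curious, the same transformation can be written more compactly with the standard library. This is an alternative sketch, not the filter above [the function name is mine]:

```python
import textwrap

def to_in_clause(lines, width=80):
    # Quote each non-empty line and join into a parenthesized,
    # comma-separated enumeration, wrapped to `width` columns.
    ids = [ln.strip() for ln in lines if ln.strip()]
    clause = '(' + ', '.join("'{0}'".format(i) for i in ids) + ')'
    return '\n'.join(textwrap.wrap(clause, width=width))

# As a gedit filter body you would wire it to stdin/stdout:
#   import sys
#   sys.stdout.write(to_in_clause(sys.stdin))
```

For example, to_in_clause(['10731', '10732', '10733']) returns ('10731', '10732', '10733').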

2011-11-27

Overlooked Content From 2011

Yes, it is almost 2012.
Looking back for overlooked content I'm compelled to mention the 2011 annual "LDAPCon" conference focused on directory services and LDAP; the organizers/hosts have made available some excellent videos and papers [index page]. These are an unrivaled source of information on the current state and future of directory services.
The same is also true of the annual SambaXP conference in relation to all things Samba and CIFS [2011 content index].
Podcast content in general tends to be light and aimless fare;  those sources provide some meat for the meal.

All those SQLite databases...

Many current applications use the SQLite database for tracking information; this includes F-Spot, Banshee, Hamster, Evolution, and others.  Even the WebKit component uses SQLite [you might be surprised to discover ~/.local/share/webkit/databases].  It is wonderfully efficient that there is one common local data-storage technique all these applications can use,  especially since it is one that is manageable using a universally known dialect [SQL]. But there is a dark side to SQLite.  Much like old dBase databases it needs to be vacuumed.  And how reliably are all those applications providing their little databases with the required affection?  Also, do you trust those lazy developers to have dealt with the condition of a corrupted database?   If an application hangs, or is slow, or doesn't open... maybe that little database is corrupted?
Aside: As a system administrator for almost two decades I do not trust developers. They still put error messages in applications like "File not found!".  Argh!
On the other hand SQLite provides a handy means of performing an integrity check on databases - the "PRAGMA integrity_check" command.  I've watched a few of these little databases and discovered that (a) they aren't often all that little, and (b) manually performing a VACUUM may dramatically reduce their on-disk size.  Both these facts indicate that developers are lazy and should not be trusted.
Note: in at least one of these cases the application has subsequently been improved. Developers do respond rather quickly when offered a blend of compliments spiced with bug reports.  No, I'm not going to name offending applications as that is too easily used as fodder by nattering nabobs.  And even the laziest Open Source developer is working harder than their proprietary brothers.
In light of this situation my solution is a hack - a Python script [download] that crawls around looking for SQLite databases.  First the script attempts to open the database in exclusive mode, then it performs an integrity check, and if that succeeds it performs a vacuum operation.  Currently it looks for databases in "~/.local/share" [where it will find databases managed by applications appropriately following the XDG specification], "~/.cache", "~/.pki", "~/.rcc", and "~/.config".
Download the script and run it. Worst thing that happens is that it accomplishes nothing.  On the other hand it might recover some disk space, improve application performance, or reveal a busted database.

2011-11-25

gnome-tweak-tool

GNOME3 simplified many things.  In the process some settings and preferences got removed from the primary user interface.  A side-effect of that is a fair number of BLOG posts like "Dude where’s my settings?".  Many of these BLOG posts are informative; they explain how to get to the preference value via dconf, either with the command-line gsettings tool or the GUI dconf-editor [which replaces GNOME2's gconf-editor].  Both of those are good methods. Knowing how to use gsettings in case of an epic-fail situation is a very useful skill to have. On the other hand - most of the settings discussed in these posts can be easily tweaked using the appropriately named gnome-tweak-tool.  gnome-tweak-tool provides a friendly GUI for a variety of maybe-you-shouldn't-mess-with-this-but-here-we-are kinds of preferences.  The gnome-tweak-tool package is available in the standard repositories for openSUSE 12.1.

GAJ, Zeitgeist, & openSUSE 12.1

In openSUSE 12.1 the GNOME Activity Journal and Zeitgeist data hub are only a package install away -
zypper in gnome-activity-journal
Now the GNOME Activity Journal is available;  an excellent productivity tool.  Hopefully more data providers will appear soon.

Ctrl-Alt-Shift-V ... Pasting Happiness

If you cut or copy text from an application [especially a web browser] and then paste it into LibreOffice what you often get is formatted text, or at least some approximation of the text's original formatting.  This is awful.  When using LibreOffice appropriately all formatting is managed via the excellent support for styles. All you want is the text - and nothing about the text.  I've used the brute force solution in the past of bouncing my cut-n-paste through gedit.  Until I discovered Ctrl-Alt-Shift-V.  Yes, all four keys at once.  Ctrl-Alt-Shift-V is un-formatted paste [it pastes nothing but the text].  Pasting happiness!

2011-11-01

Converting M4B's to MP3

I ended up with some M4B audio files; these are "MPEG v4 system, iTunes AAC-LC" files.  In order to reliably manage these files along with every other audio file [all of which are MP3], the simplest solution is just to convert them to MP3.  To accomplish that I dumped them back out to WAV using mplayer and re-encoded them to MP3 using lame.  Both lame and mplayer are available for openSUSE from the Packman repositories, so you can easily install them via zypper.

mplayer "filename.m4b" -nojoystick -ao pcm:file=tmp.wav
lame -b 128 -q1 tmp.wav tmp.mp3

The "-nojoystick" option for mplayer isn't required, but it prevents mountains of output about mplayer being unable to read the joystick device [most likely because I don't have a joystick].  I left the bit-rate for lame at 128 since there is no point in re-encoding a file at a higher bit-rate than that of the original - these files are mono-channel human speech, not high-fidelity audio.

"samba-vscan" Is Dead, Long Live "samba-virusfilter"!

Noticed an interesting message on the samba-technical list today.  The Samba VFS module "samba-vscan", which has long been used to build integrated malware detection into Samba, is no longer supported for the 3.6.x series.  SATOH Fumiyasu has been patching samba-vscan for 3.3 up through 3.5; but with 3.6.1 he has created a new module named samba-virusfilter.  His message is here.

2011-10-31

Where and what is /var/run/named?

# service named start
Starting name server BIND
checkproc: Can not stat /var/run/named/named.pid: Too many levels of symbolic links
Warning: /var/run/named/named.pid exists!
start_daemon: Can not stat /var/run/named/named.pid: Too many levels of symbolic links
                                                                     done

Eh?  Somehow I messed up wherever /var/run/named is supposed to be.  This happened while changing a root-jailed DNS server to a non-jailed server [this named is meant to integrate with Samba4].  After toggling the NAMED_RUN_CHROOTED value to "no" in /etc/sysconfig/named, starting named produces the complaint above.  Seems strange.  Once you try to restart named after this change /var/run/named is automatically created as a directory - but it doesn't work.  The fix is to stop named and create the correct symbolic link:
ln -s /var/lib/named/var/run/named /var/run/named
Not sure how this situation happens; but now the fix/gotcha is here for the search engines to crawl.

Reformatting an iPod

I have an iPod whose content seems to have gone wonky.  Deletes fail, play-lists won't sync, etc... I wanted to start over.  Unfortunately the iPod itself doesn't have any useful "reset" feature.  According to the interwebs you need Apple's iTunes application in order to reformat an iPod.  Alright, time to find another solution; and the winner is:

mkfs.vfat -F 32 -I -n "iPod Name" /dev/sdb1
A good old-school reformat. After reformatting and resetting [hold down select and play for 15 seconds] the device is back to its original state.

Implementing Queue Expiration w/RabbitMQ

The latest versions of RabbitMQ support a feature where idle queues can be automatically deleted from the server.  For queues used in an RPC or workflow model this can save a lot of grief - as the consumers for these queues typically vanish, leaving the queues behind. Over time these unused queues accumulate and consume resources on the server(s). If you are using py-amqplib, setting the expiration on a queue is as simple as:

import amqplib.client_0_8 as amq
# 'USERNAME' and 'PASSWORD' are placeholders for your credentials
connection = amq.Connection(host="localhost:5672", userid='USERNAME', password='PASSWORD', virtual_host="/", insist=False)
channel = connection.channel()
queue = channel.queue_declare(queue="testQueue", durable=True, exclusive=False, auto_delete=False, arguments={'x-expires': 9000})
channel.exchange_declare(exchange='testExchange', type="fanout", durable=False, auto_delete=False)
channel.queue_bind(queue="testQueue", exchange='testExchange')

Now if that queue goes unused for 9 seconds it will be dropped by the server [the x-expires value is in milliseconds]. As long as the queue has consumers it will persist; but once the last consumer has disconnected and no further operations occur - poof, you get your resources back.

2011-10-21

Changing Terminal Services License Mode

You are provisioning a Windows 2008R2 server for remote desktop services; you've configured the terminal services license manager in one mode [ device | user ].  But when you receive the license documentation you discover that the CALs purchased were for the other mode.  The Windows terminal license server manager tells you to change the mode of the license server... but there is no obvious way to change the mode [because Windows is user-friendly!].  One option is to go old-school - hack the registry!

First, stop the license server. Then in regedit change the value of the "LicensingMode" key under "HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Terminal Server\RCM\LicensingCore".  A value of "2" indicates per-device licensing and a value of "4" indicates per-user licensing.  Then reboot.
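The same change can be captured as a .reg file for repeatability [key path and value name as given above; dword:00000004 selects per-user mode, dword:00000002 per-device]:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Terminal Server\RCM\LicensingCore]
"LicensingMode"=dword:00000004
```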

2011-10-13

Finding Address Coordinates using Python, SOAP, & the Bing Maps API

Bing Maps provides a SOAP API that can be easily accessed via the Python suds module.  Using the API it is trivial to retrieve the coordinates of a postal address.  The only requirement is to acquire a Bing API application key; the process is free, quick, and simple.


import sys, urllib2
import suds.client

if __name__ == '__main__':  
    url = 'http://dev.virtualearth.net/webservices/v1/geocodeservice/geocodeservice.svc?wsdl'
    
    client = suds.client.Client(url)
    client.set_options(port='BasicHttpBinding_IGeocodeService')
    request = client.factory.create('GeocodeRequest')

    credentials = client.factory.create('ns0:Credentials')
    credentials.ApplicationId = 'YOUR-APPLICATION-KEY'
    request.Credentials = credentials

    #Address
    address = client.factory.create('ns0:Address')
    address.AddressLine = "535 Shirley St. NE"
    address.AdminDistrict = "Michigan"
    address.Locality = "Grand Rapids"      
    address.CountryRegion = "United States"
    request.Address = address

    try:
        response = client.service.Geocode(request)
    except suds.client.WebFault, e:
        print 'ERROR!'
        print(e)
        sys.exit(1)

    locations = response['Results']['GeocodeResult'][0]['Locations']['GeocodeLocation']
    for location in locations:        
        print(location)


If you need to make the request via an HTTP proxy server, expand the line client = suds.client.Client(url) to [this also requires import suds.transport.http]:


    proxy = urllib2.ProxyHandler({'http': 'http://YOUR-PROXY-SERVER:3128'})
    transport = suds.transport.http.HttpTransport()
    transport.urlopener = urllib2.build_opener(proxy)
    client = suds.client.Client(url, transport=transport)


The results will be Bing API GeocodeLocation objects that have Longitude and Latitude properties.  Note that you may receive multiple coordinates for an address, as there are multiple mechanisms for locating an address; the method corresponding to the coordinates is a string in the CalculationMethod property of the GeocodeLocation object.

2011-08-04

Suppressing SNMP Connection Messages

You have, of course, done the responsible sys-admin thing and set up an NMS (be it ZenOSS, OpenNMS, Nagios, whatever...).  Then there is the concomitant action of configuring SNMP services on all the relevant hosts.  All is good.  But running SNMP on several distributions churns out log messages; when you go to the logs to research a problem you have to filter out and sort through thousands upon thousands of pointless messages like:
Aug  1 16:08:38 flask-yellow snmpd[1976]: Connection from UDP: [192.168.1.38]:52021
Aug  1 16:08:38 flask-yellow last message repeated 24 times
Argh.  Detailed logging is good, but pointless noise is not.  The solution isn't very well documented, but you can bring this to a stop.

Step 1.) Make sure you have net-snmp 5.3.2.2 or later.  This should not be a problem as even RHEL5/CentOS5 provide this version via update.
    $ rpm -q net-snmp
    net-snmp-5.3.2.2-9.el5_5.1
Step 2.) Edit /etc/sysconfig/snmpd.options, or your system's equivalent, making sure you do not pass the "-a" option to the SNMP daemon.  The "-a" option enables logging of the source IP address of every incoming request.  If you want to know about those kinds of events, iptables and ulog are more reliable and efficient methods for capturing that information.
    # OPTIONS="-Lsd -Lf /dev/null -p /var/run/snmpd.pid -a"
    OPTIONS="-Lsd -Lf /dev/null -p /var/run/snmpd.pid"
Step 3.) Edit /etc/snmp/snmpd.conf, verifying that the dontLogTCPWrappersConnects directive is set to 1 (true).  Add the directive to the configuration file if it is not present:
    dontLogTCPWrappersConnects 1
Step 4.) Restart the SNMP service.

Now when you go to look into the log files it is again possible to hear the breeze, the singing of the birds, and the distant growling of that guy from Kazakhstan who is trying to crack your SSH daemon.

2011-05-17

Changing Your FreeNode Password

There is no shortage of documentation for common IRC operations like registering your nick and managing channels. But I had quite the time figuring out the syntax for changing my password; turns out it's simple:
/ns set password new_password
There is no password confirmation - so make sure you type it correctly.

On a related note, for anyone new to IRC, it is possible to bind multiple nicks to your FreeNode account. Just switch to the nick you want to bind and then issue the "group" command.
/nick alternate_nick
/msg nickserv group

You can now authenticate as this alternate nick using the same password as that of your master nick.

A Fortnight With GNOME3

I'm a skeptic of "revolutionary" change. Most [all?] revolutionary changes result in epic-fail; everyone who has been in IT for more than a decade knows this. And there has been no shortage of predictions that GNOME3 will face this same fate. KDE endured this storm recently with version 4. Anyone unfortunate enough to be on the openSUSE user lists remembers the swarm of incessant nattering nabobs that was kicked up when the distro switched from KDE3 to KDE4. But change seems to be in the wind: KDE with version 4, Canonical deciding to hoe a proprietary row with Unity, and GNOME developers finally launching GNOME3.

I've been to several GNOME3 talks at Ohio LINUX given by GNOME team members; I've seen it demonstrated and I understood, at least vaguely, the ideological premise. As a hard-core groupware guy the idea of focusing on the actual workflow of the user was like a marketing pitch designed with people like me as the target audience. But ideology is a dangerous master; ideologues usually end up skidding face-first across the rough ground of reality. Take away the maximize and minimize buttons? Remove, or even refactor, the task bar? Get rid of the system / notification / whatever-junk tray? Do that and the hue and cry will be so loud you'll never get a chance to explain your ideology [as though anyone but you cared in the first place].

But now it seems that hue and cry will fall on deaf ears; there isn't much of a refuge. Canonical, KDE, and GNOME - everyone is moving. So it is time to move. I installed GNOME3. I bit-the-bullet and used it.

Now a fortnight later... I like it. It is better. Performance is improved and common operations are smoother. Doing things like navigating applications, which was previously a combination of launchers, menus, and third-party components like GNOME-Do, is significantly more intuitive. I should say: they are more intuitive once you get over the habituation of how you were doing it before. I can see the designers' intent. With GNOME2 my desktop usage was efficient, but I'd made it that way; my desktop was different from every other GNOME2 desktop [and conversely their desktops were different from mine]. Installing a modern LINUX distribution like openSUSE 11.4 gives me 99.44% of the applications I need; but I still had to go about making the desktop configuration efficient for me with launchers, etc... No more with GNOME3. GNOME3 provides the functionality of GNOME-Do and other GNOME2 third-party enhancements and drops the cruft I was always, unknowingly, fighting against.

Yes, some changes seem a bit arbitrary, like removal of the maximize and minimize buttons [you can turn them back on BTW, using the very nice GNOME Tweak Tool]. But how often do I really minimize anything anymore? Something close to never.

[UPDATE#1: I should have included a link to sloshy's very helpful post "How To Tweak GNOME 3 To Your Needs". So I've now rectified that error. Please note that I don't actually tweak GNOME3 much or install the various GNOME Shell extensions which are available. I recommend you really give GNOME3 vanilla an honest try. His post "10 Things I Love About GNOME 3" is also an interesting read - and helps explain some of the GNOME3 ideology. I should also point out that there are at least two ways to maximize a window in vanilla GNOME3 so the removal of maximize button seems reasonable. Minimization in GNOME3, lacking a task bar, is awkward - so removal of the minimize button in order to discourage the behavior seems reasonable as well. As I said originally: I never minimize anyway.]

Yes, the task bar is gone. This I was certain I would notice. But after a few days I didn't. The much improved Alt-Tab (application switch) and Alt-` (window switch) is far more productive than the task bar [which required use of the mouse]. Do I need a list to remind me of which applications are running? No.

Marking an application as a favorite, creating the GNOME3 equivalent of a launcher, is intuitive. This is a big improvement over trying to find an application in the menus and then awkwardly dragging it to some empty space on the toolbar. In GNOME3 it is also possible to drag an application icon into a specific workspace to start it on that workspace - which is nothing short of elegant.

There are also long needed improvements. GNOME has always had excellent screen-capture capability. GNOME3 now provides integrated screencast capture. The need for third-party tools like GNOME-Do for launching and the tracker applet for searching have been eliminated. It is all built in, as it should be.

So after my fortnight I look at overwhelmingly negative articles and I wonder... what desktop environment are they talking about? Because I don't see their criticisms in GNOME3. Perhaps they are booting it up and just test-driving it for a few hours? That would certainly be frustrating. But quotes like "No matter how you look at GNOME Shell ... you are going to do a lot of clicking" are just incorrect. I do far less "clicking" in GNOME3. By the end of that article I don't recognize the DE he is talking about; it certainly isn't GNOME3.

None of this is to say that there aren't valid criticisms of GNOME3.

Ideologically, the emphasis on making a one-size-fits-all DE is misguided. Talking about one DE for both my large-screen i7 laptop and a small low-power mobile device does not make sense. Comparing some facebook/gmail jockey's usage of a tablet to someone doing real work is nonsensical. But I have confidence that ideology will be tempered by the reality of these very different use-cases. GNOME Shell seems flexible enough to accommodate both; thus being the same while being different.

Technically, there are some dot-zero kind of warts. The network manager interface isn't nearly as robust or as feature complete as the excellent interface provided in GNOME2; the absence of VPN support is particularly painful. Not all applications currently support startup-notification so dragging a launcher to a specific workspace doesn't always work.


[UPDATE#2: Our web developer pointed out that VPN support is now available in the openSUSE 11.4 GNOME3 Network Manager. Sure enough - it works. The only option I don't see is how to enable proxyarp for a PPTP connection. That takes care of one of the biggest negatives; now I can't think of a compelling reason not to recommend GNOME3.]

Maybe GNOME3, most specifically because of its weak Network Manager, isn't ready for your desktop quite yet. But surf on over to the excellent GNOME3 website and take a look. Go into GNOME3 with an open mind and I think you'll discover that you like it.

Automated Backup of IOS Router Configuration

Who hasn't had the experience of remoting into a router and making a configuration change... and not saving that change. Inevitably that is the weekend the facility will experience a power outage long enough to deplete its UPS. And then you get that dreaded text message from NetOps that a facility is down. Argh! Fortunately Cisco IOS 12.x and later supports a cron-like service known as "kron". One of the handiest uses for kron is to configure automatic backup of the router's configuration to a TFTP server.

kron occurrence backup at 0:00 Thu recurring
 policy-list backup
!
kron policy-list backup
 cli write
 cli show running-config | redirect tftp://192.168.1.38/brtgate.config
!

This creates a batch of commands named "backup" [where, in typical IOS fashion, everything is referred to as a "policy"] that will be executed every Thursday morning. This batch commits the running configuration to flash memory ["cli write"] and copies the running configuration to the specified TFTP server ["cli show running-config | redirect tftp://192.168.1.38/brtgate.config"]. The rather odd-looking use of "redirect" is because the IOS "copy" command is interactive, and interactive commands cannot be run via "kron".

Remember that the file on the TFTP server has to exist, even if zero sized, and be world writable; otherwise the redirect will fail with a permission denied error.
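Pre-creating that target file can be scripted on the TFTP server; a sketch [the "./tftp-root" directory is a stand-in for your actual TFTP root, commonly /srv/tftp or /tftpboot]:

```shell
# Create a zero-length, world-writable target for the router's redirect;
# replace ./tftp-root with your real TFTP root directory.
TFTP_ROOT=./tftp-root
mkdir -p "$TFTP_ROOT"
touch "$TFTP_ROOT/brtgate.config"
chmod 666 "$TFTP_ROOT/brtgate.config"
```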

2011-04-22

Compression & Decompress Of A Stream

So far in Python I had not found a good method / module for performing compression and decompression of data as streams;  most tools require complete files to compress, which has some obvious limitations.  But then I saw a mention of pyLZMA roll by. It supports compression and decompression of streams using the Lempel–Ziv–Markov chain algorithm. The license of the module is LGPL-2.1; not MIT, but at least it is "Lesser" GPL'd.  I've taken it for a spin and it seems to successfully compress and decompress all the data I've thrown at it (remember to always checksum your data).

import pylzma, hashlib

# Calculate the SHA checksum for our input file
i = open('Brighton.jpg', 'rb')
h1 = hashlib.sha1()
while True:
    tmp = i.read(1024)
    if not tmp: break
    h1.update(tmp)
h1 = h1.hexdigest()
print 'Input SHA Checksum: {0}'.format(h1)
    
# Compress the input file (as a stream) to a file (as a stream)
o = open('compressed.lzma', 'wb')
i.seek(0)
s = pylzma.compressfile(i)
while True:
    tmp = s.read(1)
    if not tmp: break
    o.write(tmp)
o.close()
i.close()

# Decompress the file (as a stream) to a file (as a stream)
i = open('compressed.lzma', 'rb')
o = open('decompressed.raw', 'wb')
s = pylzma.decompressobj()
while True:
    tmp = i.read(1)
    if not tmp: break
    o.write(s.decompress(tmp))
o.close()
i.close()

# Check the decompressed file
i = open('decompressed.raw', 'rb')
h2 = hashlib.sha1()
while True:
    tmp = i.read(1024)
    if not tmp: break
    h2.update(tmp)
h2 = h2.hexdigest()
print 'Result SHA Checksum: {0}'.format(h2)
if (h1 == h2): print 'OK!'

Of course a JPEG file doesn't compress much, but that makes it an even better test case.

2011-04-20

block_dump logging

There are lots of tools for studying the system's use of CPU and memory, but I/O is generally harder to track down.  A useful trick is available via block_dump.  Setting the value to "1" turns on block-access logging to the kernel ring-buffer [aka dmesg] and a value of "0" turns it back off.  This means it can be turned on with a simple:
echo "1" > /proc/sys/vm/block_dump
This logs the accesses to the block storage as:
[ 2032.934178] postmaster(11528): READ block 5058592 on dm-3 (16 sectors)
[ 2032.934200] postmaster(11528): READ block 5058624 on dm-3 (32 sectors)
[ 2032.934240] postmaster(11528): READ block 3172800 on dm-3 (16 sectors)
[ 2032.945328] banshee-1(11267): dirtied inode 1051864 (banshee.db-journal) on dm-0
[ 2032.945336] banshee-1(11267): dirtied inode 1051864 (banshee.db-journal) on dm-0
[ 2033.042671] python(11518): READ block 9017928 on dm-2 (32 sectors)
[ 2033.055771] python(11518): dirtied inode 267260 (expatbuilder.pyc) on dm-2
[ 2033.055808] python(11518): READ block 9017960 on dm-2 (40 sectors)
[ 2033.412972] nautilus(11078): dirtied inode 410492 (dav:host=127.0.0.1,port=8080,ssl=false) on dm-0
[ 2033.413001] nautilus(11078): READ block 50855560 on dm-0 (40 sectors)
[ 2033.431011] nautilus(11078): dirtied inode 410596 (dav:host=127.0.0.1,port=8080,ssl=false-ab9de673.log) on dm-0
[ 2033.431044] nautilus(11078): READ block 50855736 on dm-0 (64 sectors)
[ 2034.221831] jbd2/dm-2-8(386): WRITE block 21261800 on dm-2 (8 sectors)
[ 2034.221887] jbd2/dm-2-8(386): WRITE block 21261808 on dm-2 (8 sectors)
Handy.

2011-03-17

Printing Via LPR

If you have a Python app, or almost any kind of app, the accepted manner of printing is to use some kind of subprocess to invoke a command-line utility to submit the print job.  Of course this requires that the underlying system is aware of printers [and thus runs a printer subsystem].  It also assumes the name of the command-line utility, that permissions are adequate to execute it, and all manner of other things.  To put it simply: this is terrible!  Why does my web server, workflow server, etc... need to run a print service?  Why can't my environment just support submitting a job to an actual print server?  Good question!  To answer it, OpenGroupware Coils now has a simple implementation of the LPR/LPD protocol.  This code is under the MIT/X11 license so you are free to copy-and-paste it into your own application.  Sending a job to the server is as simple as:

f = open('Draft.ps', 'rb')
lpr = LPR('crew.mormail.com', user='adam')
lpr.connect()
lpr.send_stream('cisps', 'test job', f, job_name='my awesome job')
lpr.close()
f.close()

This sends the contents of the file "Draft.ps" to the printer "cisps" on the LPD server "crew.mormail.com" with user name "adam", job name "my awesome job" and job file name "test job".

The code for the LPR class can be found here.
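The wire format itself [RFC 1179] is line-oriented and easy to emit. As an illustration, here is a rough sketch of building the control file an LPR client sends for a single data file [the function name and the choice of commands are mine, not the actual Coils API]:

```python
def build_control_file(host, user, job_name, data_file_name):
    # An RFC 1179 control file is a series of lines, each a single-letter
    # command code followed immediately by its value.
    lines = [
        'H' + host,            # H - originating host
        'P' + user,            # P - responsible user
        'J' + job_name,        # J - job name shown in the queue
        'l' + data_file_name,  # l - print the data file without filtering
        'U' + data_file_name,  # U - unlink the data file when done
        'N' + data_file_name,  # N - name of the source file
    ]
    return '\n'.join(lines) + '\n'
```

The client sends this control file, plus the data file itself, over a TCP connection to port 515, with each transfer framed by a command byte and acknowledged by a single zero byte from the server.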