App-Diskd 0.01 (released 23 Sep 2013 21:46:42 UTC)
diskd - An example POE-based, peer-to-peer disk finder/announcer
    $ ./diskd -d &   # run in network daemon mode
    $ ./diskd        # start local client
    => help          # get help
    => list          # show information about known disks
    => ...
    => <EOF>         # ^D exits client
This program is intended as an example of:
- 1. using multicast to send and receive data among several peers
- 2. communicating with local clients via a Unix domain socket
- 3. using POE to achieve both of the above
- 4. using POE to periodically run an external program without blocking
- 5. encapsulating a data structure that can be accessed and updated by the above
The information shared between peers in this example is the list of disks that are currently attached to each system. The "blkid" program is used to gather this information. It reports on all disks attached, regardless of whether the disk (or partition) is currently mounted or not.
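Turning blkid's output into per-device data is a small parsing job. The sketch below is illustrative only (the helper name and the exact fields handled are assumptions, not taken from the module), but it shows the common `DEVICE: KEY="value" ...` format being converted into an attribute hash:

```perl
use strict;
use warnings;

# Hypothetical parser for one line of `blkid` output, which commonly
# looks like:  /dev/sda1: UUID="abcd-1234" TYPE="ext4" LABEL="data"
# The real program's parsing may differ in detail.
sub parse_blkid_line {
    my ($line) = @_;
    my ($dev, $rest) = $line =~ /^(\S+):\s*(.*)$/
        or return;                            # not a device line
    my %attr = $rest =~ /(\w+)="([^"]*)"/g;   # KEY="value" pairs
    return { device => $dev, %attr };
}
```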
A copy of diskd should be run on each of the peer machines. The first thing the daemon does is join a pre-defined multicast channel. It then collects the list of disks attached to the system and schedules that collection to repeat periodically. It also sets up a periodic event that sends details of the locally-attached disks to the other peers on the multicast channel, and it listens on the channel for incoming multicast messages from other peers, using them to update its record of which disks are attached to each peer. As a result, each daemon builds up a full list of which disks are available in the peer network and which machine each is attached to. Thus the primary function of the program is to locate disks, no matter which machine they are currently attached to.
The diskd program can also be run in client mode on any machine that has a running diskd daemon. The client connects via a local Unix domain socket and, provided the connection succeeds, it can then pass commands to the daemon. Currently the only useful command implemented is 'list', which prints a list of all the disks that the daemon knows about. More commands could be added quite easily.
POINTS OF INTEREST

The reason for writing this program was to explore three key areas:
- 1. Multicast (and peer-to-peer) networking
- 2. Daemons and methods of communicating with them
- 3. Using POE to develop a non-trivial program with a focus on asynchronous, event-based operation
As I write this, the program is significantly less than 1,000 lines (not including this documentation), yet it implements a reasonably complex network daemon. In all, it took about an evening's work to code and to eliminate most of the major bugs. Both the small size and the quick development time can be attributed to the combination of Perl and POE. Despite this being my first program written using POE, the speed of development was not down to amazing programming skill on my part. Rather, it boiled down to one factor: almost all of the POE code here was based, in one way or another, on example code hosted on the POE Cookbook site.
Since I had already read up sufficiently on POE (and the examples in the cookbook) and knew in general how I wanted my daemon to work, selecting the relevant recipes and reworking them was a pretty straightforward process. Based on this experience, I would definitely recommend that other Perl programmers consider POE for programs of this sort (network daemons), as well as for any other task where an event-based approach is suitable.
From the outset, I had decided that I would modularise the code and use different objects (classes) for each functional part of the overall program. Besides being a reasonable approach in general, it also turned out that this was a good practical fit with the POE way of doing things since I could use a separate POE session for each class. Using separate classes meant that, for example, I could have the same event name across several different sessions/classes without needing to worry about whether they would interfere with each other. This was a boon considering that most of my POE code started as cut and paste from other examples.
For the remainder of this section, I would like to simply go through each of the classes used in the program and give some brief notes. I have attempted to comment the code to make it easier to read and understand, but the notes here give some extra context and extra levels of detail.
The Info class simply encapsulates the data structures that are collected locally and shared among nodes. A distinction is made between the two so that calling classes have a convenient interface for updating only local data (eg, DiskWatcher) or querying globally-shared data (eg, a client running a 'list' command).
The Info class does not have an associated POE session, though a reference to the Info object is passed to every class/POE session that needs to access/update it. So even though it doesn't use POE itself, it is basically the glue that holds all the POE sessions together and gives them meaning.
The current implementation simply keeps all the data in memory, though it would be simple enough to either:
- provide a routine, called at program startup, to read in saved data from a file or other backing storage (along with a complementary routine to save the data when the program shuts down); or
- interface with a database to act as a permanent storage medium (POE provides mechanisms for doing this asynchronously, which might be appropriate here)
Internally, this class also uses YAML to pack and unpack (serialise and deserialise) the stored data. This is used by the MulticastServer class to safely transmit and receive data within the UDP packets. It could also be used to load/save the data to local storage between program runs (ie, provide persistence of data).
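The pack/validate/unpack pattern can be sketched as follows. The module itself uses YAML; to keep this sketch dependency-free I use the core JSON::PP module instead, but the safety argument is the same: the codec only ever produces plain data (never code), and a shape check rejects anything that does not look like a disk list. The function names here are illustrative, not taken from the module:

```perl
use strict;
use warnings;
use JSON::PP;   # core module; App-Diskd itself uses YAML in the same role

my $codec = JSON::PP->new->canonical;

# Serialise the per-host disk list for transmission in a UDP packet.
sub pack_disks {
    my ($disks) = @_;
    return $codec->encode($disks);
}

# Deserialise a received payload. decode() yields only plain data, and
# the shape check below discards anything that is not a hash, so a
# malformed or hostile packet is simply ignored.
sub unpack_disks {
    my ($payload) = @_;
    my $data = eval { $codec->decode($payload) };
    return undef unless ref $data eq 'HASH';
    return $data;
}
```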
The DiskWatcher class sets up a POE session that periodically calls the external 'blkid' program. It uses POE::Wheel::Run to do this in the background so that the parent program does not block while waiting for the child program to run to completion.
In some cases, blkid can hang (such as if a device has disappeared without being cleanly unmounted or disconnected) or fail altogether (such as the user not having sufficient rights, or the program not being present on the system). This class handles both cases gracefully.
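POE::Wheel::Run keeps the event loop responsive while the child runs; the hang-handling itself amounts to enforcing a deadline on the child. The following is a synchronous, core-Perl sketch of that guard only (the helper name and timeout are made up for illustration, and the real module does this asynchronously under POE):

```perl
use strict;
use warnings;

# Run an external command, but give up if it hangs past $timeout seconds
# or exits with an error. Returns ($output, undef) on success, or
# (undef, $error) otherwise.
sub run_with_timeout {
    my ($timeout, @cmd) = @_;
    my $pid = open(my $fh, '-|', @cmd) or return (undef, "spawn failed: $!");
    my $out = '';
    eval {
        local $SIG{ALRM} = sub { die "timeout\n" };
        alarm $timeout;
        local $/;                 # slurp everything the child writes
        $out = <$fh> // '';
        alarm 0;
    };
    if ($@) {
        kill 'KILL', $pid;        # reap a stuck child (eg, blkid hanging)
        close $fh;
        return (undef, 'timed out');
    }
    close $fh;
    return (undef, "exited with status $?") if $? != 0;
    return ($out, undef);
}
```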
The MountWatcher class is not implemented, but the idea is that, in addition to peers announcing and tracking which disks are attached to which machines, they would also share information about which of those disks are currently mounted.
A simple implementation would simply call the system 'mount' command in a similar way that 'blkid' is called in the DiskWatcher class.
If implemented, it might also make sense (subject to security considerations) to allow clients to issue commands to mount (and possibly unmount) selected disks. This would make it easier for other applications to search for a disk and, if it is found, issue the command for the machine to which the disk is attached to mount it before the remote host tries to mount it (with something like nfs or sshfs, for example). The point here would be to provide a relatively location-independent way of doing remote mounts.
The MulticastServer class is responsible for sending and receiving packets on a specific multicast channel. It begins by joining the multicast channel and then sets up:
- a listener which receives updates from other peers; and
- a periodic event that sends information about locally-attached disks to all peers
All packets are sent using UDP, so there is no acknowledgement process. Because packets are sent using multicast, a single packet should find its way to all members of the multicast group.
A "ttl" ("time to live") option is provided so that if peers are on different subnets, a multicast-aware router can forward the packets to any subnet that has a subscribed peer. I have tested this and confirmed that it works, at least for peers separated by a single router hop. Simply set the value to (maximum number of hops + 1).
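Setting the TTL comes down to a single socket option. A minimal core-Perl sketch of the idea (the group address, port and TTL value below are illustrative, not the module's actual settings):

```perl
use strict;
use warnings;
use Socket qw(AF_INET SOCK_DGRAM IPPROTO_IP IP_MULTICAST_TTL
              sockaddr_in inet_aton);

# Open a UDP socket and raise the multicast TTL so announcements can
# cross a multicast-aware router. TTL 2 = (1 router hop + 1), following
# the rule above.
socket(my $sock, AF_INET, SOCK_DGRAM, 0) or die "socket: $!";
my $ttl = 2;
setsockopt($sock, IPPROTO_IP, IP_MULTICAST_TTL, pack('C', $ttl))
    or die "setsockopt: $!";

# An announcement would then be sent with something like:
#   send($sock, $payload, 0, sockaddr_in($port, inet_aton('239.255.0.1')));
# where the group address and port are whatever the peers agreed on.
```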
The MulticastServer object relies on the Info object to provide (de-)serialisation of the data. As currently implemented (using YAML plus some extra checking on the received data structure), this prevents a rogue peer from joining the network and sending specially-crafted data packets that execute arbitrary Perl code (ie, receiving arbitrary data should not present a security risk). Whether broadcasting (multicasting) information about attached disks is itself a security risk is left to the user to decide.
Using standard OO terminology, the UnixSocketServer class is a "Factory" that creates UnixSocketServer::Session objects. The "Factory" class listens for new connections on a private Unix-domain socket (basically, a file in the user's home directory that only that user can access, which acts like a local socket). When a new connection comes in, it creates a new UnixSocketServer::Session object. Multiple connections can be created, with a new Session object created for each one.
Once it is up and running, a UnixSocketServer::Session object then responds to commands like "help", "list" and so on that come through the socket.
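The factory pattern can be sketched with plain blocking code: listen on a private Unix-domain socket, then serve each incoming connection. The real classes do all of this asynchronously under POE and can handle many sessions at once; here the socket path and reply text are illustrative, and a forked child stands in for a connecting client:

```perl
use strict;
use warnings;
use IO::Socket::UNIX;
use File::Temp qw(tempdir);

my $dir  = tempdir(CLEANUP => 1);
my $path = "$dir/diskd.sock";       # stand-in for the per-user socket file

my $server = IO::Socket::UNIX->new(
    Type   => SOCK_STREAM(),
    Local  => $path,
    Listen => 1,
) or die "listen: $!";
chmod 0600, $path;                  # private to this user

my $pid = fork // die "fork: $!";
if ($pid == 0) {
    # Child: accept one connection ("create one Session") and serve it.
    my $conn = $server->accept or exit 1;
    my $cmd  = <$conn>;
    chomp $cmd;
    print {$conn} ($cmd eq 'list' ? "no disks known\n" : "unknown command\n");
    close $conn;
    exit 0;
}

# Parent acts as a client, much as ./diskd does in client mode.
my $client = IO::Socket::UNIX->new(Type => SOCK_STREAM(), Peer => $path)
    or die "connect: $!";
print {$client} "list\n";
my $reply = <$client>;
waitpid($pid, 0);
print $reply;
```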
A simple enough extension of the current program would be to implement a command (in UnixSocketServer::Session) that causes the daemon to multicast the current list of locally-attached disks to all peers, regardless of the current timeout value. Similar commands could cause the daemon to trigger the DiskWatcher or MountWatcher classes to refresh their data.
A slightly more complicated extension would be a "ping"-like command. The Session object would recognise it and then send out a message to all peers requesting that they send their list of local disks again. In order to prevent this from being abused (eg, a rogue peer on the network using it to flood the network with traffic and cause a Denial of Service attack), you might want to implement some form of rate limiting in the MulticastServer class: basically, it would limit the number of "ping" requests it would send answers to, so that any excess ping requests in a given time period would be ignored.
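The rate limiting suggested above could be as simple as a sliding-window counter. A hedged sketch (the class name, limit and window values are all made up for illustration; nothing like this exists in the module yet):

```perl
use strict;
use warnings;

# Hypothetical sliding-window limiter: answer at most {max} "ping"
# requests per {window} seconds; anything over the limit is ignored.
package PingLimiter;

sub new {
    my ($class, %args) = @_;
    return bless {
        max    => $args{max},
        window => $args{window},
        stamps => [],            # times of recently-answered pings
    }, $class;
}

sub allow {
    my ($self, $now) = @_;       # caller passes the current time
    my $cutoff = $now - $self->{window};
    @{ $self->{stamps} } = grep { $_ > $cutoff } @{ $self->{stamps} };
    return 0 if @{ $self->{stamps} } >= $self->{max};
    push @{ $self->{stamps} }, $now;
    return 1;
}

package main;
```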
The client class is the counterpart to the UnixSocketServer and UnixSocketServer::Session classes. It takes commands typed in by the user, sends them to the server and displays the output.
This client incorporates ReadLine support (for editing of command lines, as well as a history buffer) and graceful shutdown (on the client side, at least; the server side must close the Session down when it sees that the client has closed the socket connection).
It should be noted that this class is, strictly speaking, not necessary. By passing the correct parameters to the "telnet" program, it should be possible to communicate with the local daemon directly. However, telnet does not generally have ReadLine support, whereas this class does. Given the size of the class (150 lines, including copious comments) and the fact that it can be adapted to connect to many different kinds of server, it does seem to be worth including here.
SEE ALSO

(insert links here)
Declan Malone, <email@example.com>
COPYRIGHT AND LICENSE

Copyright (C) 2013 by Declan Malone
This program is free software; you can redistribute it and/or modify it under the terms of version 2 (or, at your discretion, any later version) of the "GNU General Public License" ("GPL").
Please refer to http://www.gnu.org/licenses/gpl.html for the full text of this license.
This package is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the "GNU General Public License" for more details.