Below we describe how to build and run these components.
This document is still a work in progress, so if something is unclear or buggy, please email us at dqe at the domain nms.csail.mit.edu.
Get the packages on which the enforcer depends:
To get and install sfslite, follow these instructions; you'll get the code either with a command like:
$ cvs -z5 -d :pserver:anoncvs@cvs.pdos.lcs.mit.edu:/cvs co -P sfslite1
or else by pulling a tarball from this location.
Getting decent performance with DQE depends on a few small performance improvements to SFS. To get these improvements, apply the patch dqe/enforcer/sfs-changes.patch to the sfslite source tree.
When you run configure to get the build directories set up for sfslite, you should pass the following options:
Next, edit the Makefile for the enforcer so that DQE builds properly. In dqe/enforcer/rules.Makefile, set
Now you're ready to build dqe:
$ cd dqe/storer
$ gmake
or
$ cd dqe/storer
$ gmake mode={optmz,profile}
At the end, you should have a binary named dqm_node. Try doing:
$ ./dqm_node
The result should be a usage message.
The config file also specifies which UDP ports the enforcer node uses. Three UDP ports are relevant:

- The portal_port is the port that an enforcer client contacts (see the section below on running enforcer clients) to run TEST or SET operations.
- The storer_port is a second port on which the enforcer node listens; other enforcer nodes contact this port to run PUT or GET operations.
- The third port is the source UDP port that the enforcer node uses when invoking PUT or GET at other enforcer nodes.

(We use separate ports so that requests can be prioritized by type; see the paper for details.)
To run an enforcer node, you should choose three unused ports per enforcer node, with the understanding that the portal_port will be the "external" port that enforcer clients use.
There are a few ways to generate config files:
Decide on which three ports you are going to use. Let's call those ports portal_port, internal_port_1, and internal_port_2.
Manually. Construct a file with lines like:
<node_hostname>:<portal_port>:<internal_port_1>:<internal_port_2>
(Note that the node_hostname should ideally be a fully-qualified domain name, or at least a domain name that every other enforcer node can resolve.)
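As a purely illustrative sketch, here is how a config line of this form might be parsed (the function name and return shape are our own; DQE's actual parser may differ):

```python
def parse_config_line(line):
    """Parse a '<host>:<portal>:<internal_1>:<internal_2>' line.

    Hypothetical helper, not part of DQE itself.
    """
    host, portal, int1, int2 = line.strip().split(":")
    return host, int(portal), int(int1), int(int2)

# Example: a node named node0 with portal port 8101
host, portal_port, internal_1, internal_2 = parse_config_line(
    "node0:8101:8201:8301")
```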
Automatically, using a simple bash script:
$ cd dqe/enforcer/eg_conf
$ chmod u+x gen-node-list.sh
$ ./gen-node-list.sh 32 8101 8201 8301 > node-list.txt
The output of gen-node-list in this case specifies 32 nodes named nodeX (for X between 0 and 31), each with portal port 8101 and internal ports 8201 and 8301.
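For reference, the effect of gen-node-list.sh can be sketched in Python (the function name is ours; the real script is a small bash script along the same lines):

```python
def gen_node_list(n, portal, int1, int2):
    """Emit one 'nodeX:<portal>:<int1>:<int2>' line per node,
    mirroring what gen-node-list.sh prints to stdout."""
    return ["node%d:%d:%d:%d" % (x, portal, int1, int2)
            for x in range(n)]

# Equivalent of: ./gen-node-list.sh 32 8101 8201 8301
for line in gen_node_list(32, 8101, 8201, 8301):
    print(line)
```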
Run the script, gen-config.py, that actually outputs config files, given a list like node-list.txt. For example:
$ cd dqe/storer/eg_conf
$ mkdir /tmp/config_out_dir
$ python gen-config.py [-r <repl_factor>] node-list.txt /tmp/config_out_dir
The directory /tmp/config_out_dir should now contain a set of configuration files corresponding to the specifications in node-list.txt.
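We don't reproduce gen-config.py here, but its role can be sketched roughly as follows. The assumption that each node's config file simply lists all nodes, and the .conf file naming, are ours alone; the real script also handles the -r replication factor:

```python
import pathlib

def write_configs(node_lines, out_dir):
    """Write one config file per node into out_dir.

    Hypothetical sketch: each node gets a file listing every
    node's <host>:<portal>:<internal_1>:<internal_2> line.
    """
    out = pathlib.Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for line in node_lines:
        name = line.split(":")[0]
        (out / (name + ".conf")).write_text("\n".join(node_lines) + "\n")
```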
By this point you should have config files for the various enforcer nodes that you want to run.
Finally, run the enforcers themselves. On each enforcer node:
$ cd dqe/enforcer
$ ./dqm_node <config_file>
(To see a list of available command-line options for dqm_node, run it without a config file argument.) At this point, an enforcer client should be able to do TEST and SET operations at the enforcer nodes. See the sections below for details on how to get enforcer clients running and talking to enforcer nodes.
To build and use the experimental client:
$ cd dqe/u
$ make
You should get a binary called u2. To run u2, you'll need to pass a number of command-line options, some of which should be self-explanatory.
Most of the enforcer client is written in Python, so there isn't much compiling, except for a Python wrapper of some crypto functions that are implemented in C++.
Here are the steps to get the Python client going:
Install the packages:
Build the crypto wrapper:
$ cd dqe/dqm/cryptkey_wrap
$ make
Set your $PYTHONPATH environment variable appropriately, e.g.:
$ cd dqe
$ MYDQEDIR=`pwd`
$ PYTHONPATH=$PYTHONPATH:$MYDQEDIR
$ export PYTHONPATH
To do simple TEST and SET operations using a bare-bones client, do something like this:
$ cd dqe/dqm/user
$ ./portal-client.py myenforcerportal:8101 zebra set
$ ./portal-client.py myenforcerportal:8101 zebra test
$ ./portal-client.py myenforcerportal:8101 monkey test
In the commands above, portal-client.py uses "zebra" and "monkey" as strings to hash into key-value pairs, and it uses the host myenforcerportal at destination port 8101 as its portal to the enforcer. With a "real" DQE client, described immediately below, the key-value pairs are actual stamps.
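To make the "strings to hash" point concrete, here is a minimal sketch of how a test key-value pair might be derived. The use of SHA-1 and the value construction are our assumptions for illustration; the client's actual hash and stamp format may differ:

```python
import hashlib

def string_to_kv(s):
    """Derive a (key, value) pair from a string.

    Illustrative only: key is the SHA-1 of the string, value is
    the SHA-1 of the key; real stamps carry cryptographic content.
    """
    key = hashlib.sha1(s.encode()).hexdigest()
    value = hashlib.sha1(key.encode()).hexdigest()
    return key, value

# The pair that a command like './portal-client.py ... zebra set'
# would conceptually SET at the enforcer:
k_zebra, v_zebra = string_to_kv("zebra")
```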
To print, verify, and cancel actual stamps, you need to run, respectively:
$ dqe/dqm/user/stamp-printer.py
$ dqe/dqm/user/stamp-verifier.py
$ dqe/dqm/user/stamp-canceller.py
Each of these programs needs a config file that points the program to an enforcer portal and also gives the program the public key of a trusted quota allocator. Getting these programs to work and integrating them with a mail setup requires some additional configuration; see here for slightly outdated instructions. Please feel free to send us email with questions. Our email address is dqe at the domain nms.csail.mit.edu.