
The old PROOF benchmark utilities (obsolete)

NB: this page describes the benchmark scripts under $ROOTSYS/test/ProofBench. These macros are provided for legacy reasons and are no longer supported. The new benchmark suite, steered by TProofBench, is described here.



Introduction

A set of utilities to test a PROOF cluster is available under the ProofBench sub-directory of $ROOTSYS/test. The tests use files containing trees of the Event structure defined in $ROOTSYS/test/Event.h and $ROOTSYS/test/Event.cxx. A shared library with the Event class can be created in the following way:

$ cd $ROOTSYS; source bin/thisroot.sh
$ cd test; gmake Event
g++ -g -Wall -fPIC -pthread -m32 -I/home/ganis/local/root/cvs/root/include -c Event.cxx
Generating dictionary EventDict.cxx...
g++ -g -Wall -fPIC -pthread -m32 -I/home/ganis/local/root/cvs/root/include -c EventDict.cxx
g++ -shared -g -m32 Event.o EventDict.o -o  libEvent.so
libEvent.so done
g++ -g -Wall -fPIC -pthread -m32 -I/home/ganis/local/root/cvs/root/include -c MainEvent.cxx
g++ -g -m32 MainEvent.o /home/ganis/local/root/cvs/root/test/libEvent.so -L/home/ganis/local/root/cvs/root/lib
-lCore -lCint -lRIO -lNet -lHist -lGraf -lGraf3d -lGpad -lTree -lRint -lPostscript -lMatrix -lPhysics -pthread
-lm -ldl -rdynamic  -o Event
Event done
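
As an optional sanity check, the library can be loaded into a plain ROOT session and an Event object created; this sketch assumes the session is started from $ROOTSYS/test so that libEvent.so is found:

root [] gSystem->Load("libEvent")
root [] Event *e = new Event()
root [] delete e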


It is advisable to always take the latest version of the utilities from the SVN trunk:

$ svn co https://root.cern.ch/svn/root/trunk/test/ProofBench ProofBench


Benchmarking PROOF

Setup for a benchmark

See the installation page for instructions to install and configure PROOF.

After creating libEvent.so as explained in the introduction, create the 'event' PAR file by executing:

$ ./make_event_par.sh

Create a PROOF session from a ROOT shell and enable the 'event' package:

root [] TProof *p = TProof::Open("master")
root [] p->UploadPackage("event")
root [] p->EnablePackage("event")
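
Optionally, one can verify that the package reached the cluster and is active with the TProof package-inspection calls (nothing is assumed here beyond the session p opened above):

root [] p->ShowPackages()
root [] p->ShowEnabledPackages()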

The files on the nodes of the cluster can be created by running the make_event_trees.C macro. The first argument is the directory in which the files will be created, the second is the number of events per file, and the last is the number of files on each node (not the number of files per worker!):

root [] .x make_event_trees.C("/data1/tmp", 100000, 4)

To create the TDSet for the files created by make_event_trees.C, the macro make_tdset.C is provided:

root [] .L make_tdset.C
root [] TDSet *d = make_tdset("/data1/tmp",4)
root [] d->Print("a")
OBJ: TDSet      type TTree      EventTree       in /    elements 2
TDSetElement file='root://gluon.local//data1/tmp/event_tree_gluon.local_1.root' dir='' obj='' first=0 num=-1
TDSetElement file='root://gluon.local//data1/tmp/event_tree_gluon.local_3.root' dir='' obj='' first=0 num=-1
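
For reference, a TDSet over such files can also be assembled by hand. The sketch below is illustrative only: the host name and file paths are placeholders taken from the example output above, and this is not the actual code of make_tdset.C:

root [] TDSet *d = new TDSet("TTree", "EventTree")
root [] d->Add("root://gluon.local//data1/tmp/event_tree_gluon.local_1.root")
root [] d->Add("root://gluon.local//data1/tmp/event_tree_gluon.local_3.root")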

To test the system, run a simple command:

root [] d->Draw("fTemperature")

You are now ready to run the benchmark!

Performance Monitoring

A set of performance histograms and a trace tree can be generated for detailed studies of a query. The way to generate this information is described on a dedicated page.
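
As a minimal sketch (the dedicated page remains the reference; the parameter names below are assumed from the PROOF statistics machinery of that ROOT generation), the histograms and the trace tree are requested by setting input parameters on the session before running the query:

root [] p->SetParameter("PROOF_StatsHist", "")
root [] p->SetParameter("PROOF_StatsTrace", "")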

Run the Benchmark

The benchmark provides three selectors, each reading a different amount of data (a generic sketch of their structure is shown after the list):

EventTree_NoProc.C: reads no data
EventTree_ProcOpt.C: reads 25% of the data
EventTree_Proc.C: reads all the data
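
For orientation, the sketch below shows the general shape such a selector takes. It is a simplified, hypothetical example (the class and histogram names are invented, and the branch name 'event' is assumed to match the one used in $ROOTSYS/test/MainEvent.cxx); it is not the actual content of EventTree_Proc.C:

// EventTree_Sketch.h -- hypothetical, stripped-down PROOF selector
#include "TSelector.h"
#include "TTree.h"
#include "TH1F.h"
#include "Event.h"

class EventTree_Sketch : public TSelector {
public:
   TTree *fChain;    // tree being processed
   Event *fEvent;    // event object filled from the branch
   TH1F  *fNtrkHist; // example output histogram

   EventTree_Sketch() : fChain(0), fEvent(0), fNtrkHist(0) { }
   virtual ~EventTree_Sketch() { }
   virtual Int_t Version() const { return 2; }

   virtual void Init(TTree *tree) {
      // Called when a (new) tree is attached: connect the event branch
      fChain = tree;
      fChain->SetBranchAddress("event", &fEvent);
   }
   virtual void SlaveBegin(TTree *) {
      // Book the output histogram and register it for merging
      fNtrkHist = new TH1F("hNtrack", "Number of tracks", 100, 0., 700.);
      fOutput->Add(fNtrkHist);
   }
   virtual Bool_t Process(Long64_t entry) {
      // Read the entry and fill; how much of the event is actually read
      // is what distinguishes the three benchmark selectors
      fChain->GetTree()->GetEntry(entry);
      if (fEvent) fNtrkHist->Fill(fEvent->GetNtrack());
      return kTRUE;
   }

   ClassDef(EventTree_Sketch, 0);
};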

First, make sure the PAR file is up to date and enabled:

root [] p->UploadPackage("event")
root [] p->EnablePackage("event")

Request dynamic feedback of some of the monitoring histograms:

root [] p->AddFeedback("PROOF_ProcTimeHist")
root [] p->AddFeedback("PROOF_LatencyHist")
root [] p->AddFeedback("PROOF_EventsHist")

Create a TDrawFeedback object to automatically draw these histograms:

root [] TDrawFeedback fb(p)

Finally, request the timing of each command:

root [] gROOT->Time()

Running one of the provided selectors is straightforward, using the TDSet that was created earlier:

root [] p->Load("EventTree_Proc.C+")
root [] d->Process("EventTree_Proc", "")

The monitoring histograms should appear shortly after processing starts. The histogram produced by the selector is also drawn at the end. Loading the selector before processing is not strictly needed, but it works around a problem with file distribution that was present in some versions, including 5.22/00 and 5.22/00a.

The above set of commands is included in the script Run_Simple_Test.C.
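
Once the query has finished, the objects produced by the selector, and the performance trace tree if it was enabled, can be inspected from the output list. A short sketch (the name 'PROOF_PerfStats' is assumed to be the conventional name of the trace tree returned by the PROOF statistics code):

root [] TList *out = p->GetOutputList()
root [] out->Print()
root [] TTree *perf = (TTree *) out->FindObject("PROOF_PerfStats")
root [] if (perf) perf->Print()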

Extra Scripts Included

The script Draw_Time_Hists.C can be used to create the timing histograms from the trace tree and draw them on screen.

The script Run_Node_Tests.C can be used to run a full sequence of tests. The results can be presented graphically using Draw_PerfProfiles.C.

The script Draw_Slave_Access.C will draw a graph depicting the number of workers accessing a file serving node as a function of time.


Adapted from M. Ballintijn's $ROOTSYS/test/ProofBench/README