Hi Georges,
The solution is simply to add
fHists[i].SetDirectory(0);
after the statement
fHists[i].Streamer(buf);
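
i.e. the read loop in your streamer becomes something like:

  fHists = new TH1F[Length()];
  for (int i = 0; i < Length(); ++i) {
     fHists[i].Streamer(buf);
     fHists[i].SetDirectory(0);  // detach from the current directory so the
                                 // TFile/gDirectory will not try to delete it
  }
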
If you take the responsibility for the bookkeeping, it is your
responsibility to delete the corresponding objects.
Rene Brun
On Fri, 11 Aug 2000, George Heintzelman wrote:
>
> Rooters,
>
> I recently started running into many errors when exiting ROOT, of the
> form:
>
> #Fatal in <operator delete>: unreasonable size (1110139808)
> #aborting...
>
> I assumed that this was my problem and tried to hunt it down. Well, I
> finally did, but it's not just my problem. Here's the problem, as best
> I can describe it.
>
> I have a class that deals with many histograms all at once. This class
> has a data member fHists which is an array of TH1F allocated with
> operator new[]:
>
> TH1F *fHists; // In class definition
> fHists = new TH1F[Length()]; // In constructor
>
> When an object of this class is destroyed, it correctly calls operator
> delete[] on fHists to clean up after itself. So far, so good. But, in
> the streamer, there is code like this:
>
> // Read in the length
> fHists = new TH1F[Length()];
> for (int i = 0; i < Length(); ++i) {
>    fHists[i].Streamer(buf);
> }
>
> and, as readers of roottalk will be aware, that Streamer call will put
> the TH1F into the gDirectory (usually the file/subdirectory being read
> from). Now comes the gotcha: if you fail to delete one of these objects
> (say, you read it in from a file and did some interactive work with it),
> then when ROOT shuts down it will go through its still-open gDirectory
> and delete all the undeleted objects -- but it will use operator delete,
> NOT operator delete[]! This in turn causes ROOT's built-in operator
> new/delete protections to activate, causing the fatal error above.
>
> This is even more of a problem because it doesn't happen only when
> shutting down ROOT -- you can also trigger it by doing a Close() on the
> TFile.
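>
> Concretely, the sequence that triggers it looks something like this
> (MyHistSet, the file name and the key name are just stand-ins for my
> actual code):
>
>    TFile f("hists.root");                     // f becomes gDirectory
>    MyHistSet *hs = (MyHistSet*)f.Get("hs");   // the streamer above registers
>                                               // each TH1F in f
>    // ... interactive work; hs never gets deleted ...
>    f.Close();   // f deletes each registered TH1F with operator delete,
>                 // but they all live inside one new[]'d block, so ROOT's
>                 // new/delete protection aborts with the fatal error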
>
> At the moment, I can see two potential solutions, both of which have
> drawbacks, but from which I will probably have to choose.
> 1) Protect the TH1F's from being put in the directory by setting
> gDirectory to 0 around the read-in (sketched below, after option 2). My
> interactive users don't like this option, because they like the
> convenience of being able to refer to some of the histograms in this
> array by name, and will probably in at least some cases pull the
> histograms explicitly into a TDirectory, which causes more problems.
> 2) Add the object containing the TH1F's also to the gDirectory (ahead
> of its members) so that it gets cleaned up before its members, since it
> knows how to clean them up correctly. This doesn't completely solve the
> problem either, since I need to know all the implementation details of
> TH1F in order to know all the places where I will have to add/remove
> this object from a TDirectory in order to protect its members. And, it's
> still possible for users to circumvent this protection when the
> top-level object isn't (yet) in the directory.
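>
> For option 1, the protection I have in mind is roughly the following,
> wrapped around the read-in (assuming gDirectory can simply be saved and
> restored here):
>
>    TDirectory *saved = gDirectory;
>    gDirectory = 0;                  // TH1F's streamed in now do not get
>                                     // registered anywhere
>    fHists = new TH1F[Length()];
>    for (int i = 0; i < Length(); ++i) {
>       fHists[i].Streamer(buf);
>    }
>    gDirectory = saved;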
>
> The best solution, of course, would be for the ROOT team to either:
> a) decide that TDirectories should not own (and therefore delete) their
> contents, since it is just too easy to Add an object to a directory,
> then delete it and forget to Remove it. I don't see a huge problem with
> this solution, but it is a potentially significant change deep in the
> guts of ROOT...
> b) make it possible for TDirectories to tell the difference between new
> and new[]'d objects added to the directory, and not delete the latter
> (or delete only the zeroth element in the array).
> c) supply functionality for 'all of an array' of TObject things to be
> pulled into a TDirectory which will then do the delete[] correctly, in
> which case I can put protective code around the read-in and explicitly
> put the array into gDirectory myself.
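>
> For (c), the usage I imagine would be something like this (AppendArray is
> a made-up name, not an existing TDirectory method):
>
>    // after a read-in protected as in option 1 above, hand the whole
>    // array to the directory as a single unit that it will delete []:
>    gDirectory->AppendArray(fHists, Length());   // hypothetical API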
>
> George Heintzelman
> gah@bnl.gov