> Aaron Dominguez wrote:
> > Instead of the for() loop for reading/writing the std::vector, I've been using
> > ReadFastArray/WriteFastArray, like so:
> >
> > if (R__b.IsReading()) {
> >   int n;
> >   R__b >> n;
> >   fCoeff.resize(n);
> >   R__b.ReadFastArray(&fCoeff[0],n);
> > }
> > else {
> >   R__b.WriteVersion(MyClass::IsA());
> >   R__b << (int)fCoeff.size();   // cast: size() returns size_t, but it is read back as int
> >   R__b.WriteFastArray(&fCoeff[0],fCoeff.size());
> > }
> >
> > I'm pretty sure that this is ok since I believe that std::vectors are supposed
> > to be contiguous in memory from beginning to end. At least this is working
> > for us at CDF with ROOT v3.01/06a, KCC v4.0f or gcc v3. And we've always had
> > great success speeding up streamers with Read/WriteFastArray.
>
> I'm doing just the same, and it speeds things up quite a bit. To the best
> of my knowledge, the original STL specification did not guarantee that the
> elements of a vector are stored in contiguous memory. However, all
> implementations I've seen do this, and AFAIK it is being discussed to
> explicitly require this in the future. But one has to keep in mind that
> this applies only to vector, and not, for example, to deque.
The original C++ standard did not guarantee that vector elements were
contiguous. However, as you say, all implementations I know of
implemented it this way, and it was clearly the intent of the
standard-writers. This requirement has gone past being 'discussed' at
this point. It is part of the first Technical Corrigendum to the
standard that has been voted on and accepted by the C++ Standards
Committee. So I would say it is safe to depend on it -- for vector
only, as you quite correctly point out. I don't think it is possible to
implement the other containers in contiguous memory and still satisfy
the performance requirements of the standard...
George Heintzelman
georgeh@aya.yale.edu
This archive was generated by hypermail 2b29 : Tue Jan 01 2002 - 17:51:10 MET