TUnfold.cxx
1// Author: Stefan Schmitt
2// DESY, 13/10/08
3
4// Version 17.9, add new methods GetDF(), GetSURE(), ScanSURE(), GetSqrtEvEmatrix()
5//
6// History:
7// Version 17.8, add new method GetDXDY() for histograms
8// Version 17.7, change finite() -> TMath::Finite()
9// Version 17.6, updated doxygen-style comments, add one argument for scanLCurve
10// Version 17.5, fix memory leak with fVyyInv, bugs in GetInputInverseEmatrix(), GetInput(), bug in MultiplyMSparseMSparseTranspVector
11// Version 17.4, in parallel to changes in TUnfoldBinning
12// Version 17.3, in parallel to changes in TUnfoldBinning
13// Version 17.2, bug fix with GetProbabilityMatrix
14// Version 17.1, bug fixes in GetFoldedOutput, GetOutput
15// Version 17.0, option to specify an error matrix with SetInput(), new ScanRho() method
16// Version 16.2, in parallel to bug-fix in TUnfoldSys
17// Version 16.1, fix bug with error matrix in case kEConstraintArea is used
18// Version 16.0, fix calculation of global correlations, improved error messages
19// Version 15, simplified L-curve scan, new tau definition, new error calc., area preservation
20// Version 14, with changes in TUnfoldSys.cxx
21// Version 13, new methods for derived classes and small bug fix
22// Version 12, report singular matrices
23// Version 11, reduce the amount of printout
24// Version 10, more correct definition of the L curve, update references
25// Version 9, faster matrix inversion and skip edge points for L-curve scan
26// Version 8, replace all TMatrixSparse matrix operations by private code
27// Version 7, fix problem with TMatrixDSparse,TMatrixD multiplication
28// Version 6, replace class XY by std::pair
29// Version 5, replace run-time dynamic arrays by new and delete[]
30// Version 4, fix new bug from V3 with initial regularisation condition
31// Version 3, fix bug with initial regularisation condition
32// Version 2, with improved ScanLcurve() algorithm
33// Version 1, added ScanLcurve() method
34// Version 0, stable version of basic unfolding algorithm
35
36
37/** \class TUnfold
38An algorithm to unfold distributions from detector to truth level
39
40TUnfold is used to decompose a measurement y into several sources x,
41given the measurement uncertainties and a matrix of migrations A.
42The method can be applied to a large number of problems,
43where the measured distribution y is a linear superposition
44of several Monte Carlo shapes. Beyond such a simple template fit,
45TUnfold has an adjustable regularisation term and also supports an
46optional constraint on the total number of events.
47
48<b>For most applications, it is better to use the derived class
49TUnfoldDensity instead of TUnfold. TUnfoldDensity adds various
50features to TUnfold, such as:
51background subtraction, propagation of systematic uncertainties,
52complex multidimensional arrangements of the bins. For innocent
53users, the most notable improvement of TUnfoldDensity over TUnfold is
54the getter functions. For TUnfold, histograms have to be booked by the
55user and the getter functions fill the histogram bins. TUnfoldDensity
56simply returns a new, already filled histogram.</b>
57
58If you use this software, please consider the following citation
59<br/>
60<b>S.Schmitt, JINST 7 (2012) T10003 [arXiv:1205.6201]</b>
61<br/>
62Detailed documentation and updates are available on
63http://www.desy.de/~sschmitt
64
65Brief recipe to use TUnfold:
66<ul>
67<li>a matrix (truth,reconstructed) is given as a two-dimensional histogram
68 as argument to the constructor of TUnfold</li>
69<li>a vector of measurements is given as one-dimensional histogram using
70 the SetInput() method</li>
71<li>The unfolding is performed
72<ul>
73<li>either once with a fixed parameter tau, method DoUnfold(tau)</li>
74<li>or multiple times in a scan to determine the best choice of tau,
75 method ScanLCurve()</li>
76</ul>
77<li>Unfolding results are retrieved using various GetXXX() methods, as illustrated in the sketch below</li>
78</ul>
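A minimal usage sketch is given here; the histogram names, the truth
binning and the number of scan points are assumptions for illustration
only, and error handling is omitted.
\code
// histMigration: TH2 with truth bins on the x axis, detector bins on the y axis
// histData: TH1 with the measured distribution
TUnfold unfold(histMigration,TUnfold::kHistMapOutputHoriz);
unfold.SetInput(histData);
TGraph *lCurve=nullptr;
unfold.ScanLcurve(30,0.,0.,&lCurve);     // scan tau and pick the L-curve optimum
TH1D *hUnfolded=histMigration->ProjectionX("unfolded"); // book output with the truth binning
hUnfolded->Reset();
unfold.GetOutput(hUnfolded);             // fill the unfolded result and its errors
\endcode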
79
80Basic formulae:<br/>
81&chi;<sup>2</sup><sub>A</sub>=(Ax-y)<sup>T</sup>V<sub>yy</sub><sup>-1</sup>(Ax-y)<br/>
82&chi;<sup>2</sup><sub>L</sub>=(x-f*x<sub>0</sub>)<sup>T</sup>L<sup>T</sup>L(x-f*x<sub>0</sub>)<br/>
83&chi;<sup>2</sup><sub>unf</sub>=&chi;<sup>2</sup><sub>A</sub>+&tau;<sup>2</sup>&chi;<sup>2</sup><sub>L</sub>+&lambda;&Sigma;<sub>i</sub>(Ax-y)<sub>i</sub><br/>
84x:result, A:probabilities, y:data, V<sub>yy</sub>:data
85covariance, f:bias scale, x<sub>0</sub>:bias, L:regularisation conditions,
86&tau;:regularisation strength, &lambda;:Lagrangian multiplier<br/>
87Without area constraint, &lambda; is set to zero, and
88&chi;<sup>2</sup><sub>unf</sub> is minimized to determine x. With area
89constraint, both x and &lambda; are determined.
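<br/>
Without the area constraint, minimising &chi;<sup>2</sup><sub>unf</sub>
with respect to x gives the standard Tikhonov solution (a sketch in the
notation defined above, matching the expressions evaluated in DoUnfold()):<br/>
E<sup>-1</sup>=A<sup>T</sup>V<sub>yy</sub><sup>-1</sup>A+&tau;<sup>2</sup>L<sup>T</sup>L,
&nbsp; x=E(A<sup>T</sup>V<sub>yy</sub><sup>-1</sup>y+&tau;<sup>2</sup>f&nbsp;L<sup>T</sup>Lx<sub>0</sub>)<br/>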
90*/
91
92
93/*
94 This file is part of TUnfold.
95
96 TUnfold is free software: you can redistribute it and/or modify
97 it under the terms of the GNU General Public License as published by
98 the Free Software Foundation, either version 3 of the License, or
99 (at your option) any later version.
100
101 TUnfold is distributed in the hope that it will be useful,
102 but WITHOUT ANY WARRANTY; without even the implied warranty of
103 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
104 GNU General Public License for more details.
105
106 You should have received a copy of the GNU General Public License
107 along with TUnfold. If not, see <http://www.gnu.org/licenses/>.
108*/
109
110#include <iostream>
111#include <TGraph.h>
112#include <TMatrixD.h>
113#include <TMatrixDSparse.h>
114#include <TMatrixDSym.h>
115#include <TMatrixDSymEigen.h>
116#include <TMath.h>
117#include "TUnfold.h"
118
119#include <map>
120#include <vector>
121
122//#define DEBUG
123//#define DEBUG_DETAIL
124//#define FORCE_EIGENVALUE_DECOMPOSITION
125
127
129{
130 // delete all data members
131
132 DeleteMatrix(&fA);
133 DeleteMatrix(&fL);
134 DeleteMatrix(&fVyy);
135 DeleteMatrix(&fY);
136 DeleteMatrix(&fX0);
137 DeleteMatrix(&fVyyInv);
138
139 ClearResults();
140}
141
142
143////////////////////////////////////////////////////////////////////////
144/// initialize data members, for use in constructors
146{
147 // reset all data members
148 fXToHist.Set(0);
149 fHistToX.Set(0);
150 fSumOverY.Set(0);
151 fA = nullptr;
152 fL = nullptr;
153 fVyy = nullptr;
154 fY = nullptr;
155 fX0 = nullptr;
156 fTauSquared = 0.0;
157 fBiasScale = 0.0;
160 // output
161 fX = nullptr;
162 fVyyInv = nullptr;
163 fVxx = nullptr;
164 fVxxInv = nullptr;
165 fAx = nullptr;
166 fChi2A = 0.0;
167 fLXsquared = 0.0;
168 fRhoMax = 999.0;
169 fRhoAvg = -1.0;
170 fNdf = 0;
171 fDXDAM[0] = nullptr;
172 fDXDAZ[0] = nullptr;
173 fDXDAM[1] = nullptr;
174 fDXDAZ[1] = nullptr;
175 fDXDtauSquared = nullptr;
176 fDXDY = nullptr;
177 fEinv = nullptr;
178 fE = nullptr;
179 fEpsMatrix=1.E-13;
180 fIgnoredBins=0;
181}
182
183////////////////////////////////////////////////////////////////////////
184/// delete matrix and invalidate pointer
185///
186/// \param[inout] m pointer to a matrix-pointer
187///
188/// if the matrix pointer is non-zero, the matrix is deleted. The matrix pointer is set to zero.
190{
191 if(*m) delete *m;
192 *m=nullptr;
193}
194
195////////////////////////////////////////////////////////////////////////
196/// delete sparse matrix and invalidate pointer
197///
198/// \param[inout] m pointer to a matrix-pointer
199///
200/// if the matrix pointer is non-zero, the matrix is deleted. The matrix pointer is set to zero.
202{
203 if(*m) delete *m;
204 *m=nullptr;
205}
206
207////////////////////////////////////////////////////////////////////////
208/// reset all results
210{
211 // delete old results (if any)
212 // this function is virtual, so derived classes may implement their own
213 // method to flag results as non-valid
214
215 // note: the inverse of the input covariance is not cleared
216 // because it does not change until the input is changed
217
221 for(Int_t i=0;i<2;i++) {
224 }
230 fChi2A = 0.0;
231 fLXsquared = 0.0;
232 fRhoMax = 999.0;
233 fRhoAvg = -1.0;
234}
235
236////////////////////////////////////////////////////////////////////////
237/// only for use by root streamer or derived classes
238///
240{
241 // set all matrix pointers to zero
242 InitTUnfold();
243}
244
245////////////////////////////////////////////////////////////////////////
246/// core unfolding algorithm
248{
249 // main unfolding algorithm. Declared virtual, because other algorithms
250 // could be implemented
251 //
252 // Purpose: unfold y -> x
253 // Data members required:
254 // fA: matrix to relate x and y
255 // fY: measured data points
256 // fX0: bias on x
257 // fBiasScale: scale factor for fX0
258 // fVyy: covariance matrix for y
259 // fL: regularisation conditions
260 // fTauSquared: regularisation strength
261 // fConstraint: whether the constraint is applied
262 // Data members modified:
263 // fVyyInv: inverse of input data covariance matrix
264 // fNdf: number of degrees of freedom
265 // fEinv: inverse of the matrix needed for unfolding calculations
266 // fE: the matrix needed for unfolding calculations
267 // fX: unfolded data points
268 // fDXDY: derivative of x wrt y (for error propagation)
269 // fVxx: error matrix (covariance matrix) on x
270 // fAx: estimate of distribution y from unfolded data
271 // fChi2A: contribution to chi**2 from y-Ax
272 // fChi2L: contribution to chi**2 from L*(x-x0)
273 // fDXDtauSquared: derivative of x wrt tau
274 // fDXDAM[0,1]: matrix parts of derivative x wrt A
275 // fDXDAZ[0,1]: vector parts of derivative x wrt A
276 // fRhoMax: maximum global correlation coefficient
277 // fRhoAvg: average global correlation coefficient
278 // return code:
279 // fRhoMax if(fRhoMax>=1.0) then the unfolding has failed!
280
281 ClearResults();
282
283 // get pseudo-inverse matrix Vyyinv and NDF
284 if(!fVyyInv) {
285 GetInputInverseEmatrix(nullptr);
287 fNdf--;
288 }
289 }
290 //
291 // get matrix
292 // T
293 // fA fV = mAt_V
294 //
296 //
297 // get
298 // T
299 // fA fVyyinv fY + fTauSquared fBiasScale Lsquared fX0 = rhs
300 //
303 if (fBiasScale != 0.0) {
307 }
308
309 //
310 // get matrix
311 // T
312 // (fA fV)fA + fTauSquared*fLsquared = fEinv
315
316 //
317 // get matrix
318 // -1
319 // fEinv = fE
320 //
321 Int_t rank=0;
323 if(rank != GetNx()) {
324 Warning("DoUnfold","rank of matrix E %d expect %d",rank,GetNx());
325 }
326
327 //
328 // get result
329 // fE rhs = x
330 //
332 fX = new TMatrixD(*xSparse);
335
336 // additional correction for constraint
339 TMatrixDSparse *epsilon=nullptr;
340 TMatrixDSparse *Eepsilon=nullptr;
342 // calculate epsilon: vector of efficiencies
347 for(Int_t i=0;i<A_rows[fA->GetNrows()];i++) {
348 epsilonNosparse(A_cols[i],0) += A_data[i];
349 }
350 epsilon=new TMatrixDSparse(epsilonNosparse);
351 // calculate vector EE*epsilon
353 // calculate scalar product epsilon#*Eepsilon
355 Eepsilon);
356 // if epsilonEepsilon is zero, nothing works...
357 if(epsilonEepsilon->GetRowIndexArray()[1]==1) {
358 one_over_epsEeps=1./epsilonEepsilon->GetMatrixArray()[0];
359 } else {
360 Fatal("Unfold","epsilon#Eepsilon has dimension %d != 1",
361 epsilonEepsilon->GetRowIndexArray()[1]);
362 }
364 // calculate sum(Y)
366 for(Int_t iy=0;iy<fY->GetNrows();iy++) {
367 y_minus_epsx += (*fY)(iy,0);
368 }
369 // calculate sum(Y)-epsilon#*X
370 for(Int_t ix=0;ix<epsilonNosparse.GetNrows();ix++) {
371 y_minus_epsx -= epsilonNosparse(ix,0) * (*fX)(ix,0);
372 }
373 // calculate lambda_half
375 // calculate final vector X
376 const Int_t *EEpsilon_rows=Eepsilon->GetRowIndexArray();
377 const Double_t *EEpsilon_data=Eepsilon->GetMatrixArray();
378 for(Int_t ix=0;ix<Eepsilon->GetNrows();ix++) {
379 if(EEpsilon_rows[ix]<EEpsilon_rows[ix+1]) {
380 (*fX)(ix,0) += lambda_half * EEpsilon_data[EEpsilon_rows[ix]];
381 }
382 }
383 }
384 //
385 // get derivative dx/dy
386 // for error propagation
387 // dx/dy = E A# Vyy^-1 ( = B )
389
390 // additional correction for constraint
392 // transposed vector of dimension GetNy() all elements equal 1/epseEeps
393 Int_t *rows=new Int_t[GetNy()];
394 Int_t *cols=new Int_t[GetNy()];
395 Double_t *data=new Double_t[GetNy()];
396 for(Int_t i=0;i<GetNy();i++) {
397 rows[i]=0;
398 cols[i]=i;
400 }
402 (1,GetNy(),GetNy(),rows,cols,data);
403 delete[] data;
404 delete[] rows;
405 delete[] cols;
406 // B# * epsilon
408 // temp- one_over_epsEeps*Bepsilon
411 // correction matrix
413 DeleteMatrix(&temp);
414 // determine new derivative
415 AddMSparse(fDXDY,1.0,corr);
417 }
418
420
421 //
422 // get error matrix on x
423 // fDXDY * Vyy * fDXDY#
426
428
429 //
430 // get result
431 // fA x = fAx
432 //
434
435 //
436 // calculate chi**2 etc
437
438 // chi**2 contribution from (y-Ax)V(y-Ax)
441
442 const Int_t *VyyinvDy_rows=VyyinvDy->GetRowIndexArray();
443 const Double_t *VyyinvDy_data=VyyinvDy->GetMatrixArray();
444 fChi2A=0.0;
445 for(Int_t iy=0;iy<VyyinvDy->GetNrows();iy++) {
446 if(VyyinvDy_rows[iy]<VyyinvDy_rows[iy+1]) {
447 fChi2A += VyyinvDy_data[VyyinvDy_rows[iy]]*dy(iy,0);
448 }
449 }
452 const Int_t *LsquaredDx_rows=LsquaredDx->GetRowIndexArray();
453 const Double_t *LsquaredDx_data=LsquaredDx->GetMatrixArray();
454 fLXsquared = 0.0;
455 for(Int_t ix=0;ix<LsquaredDx->GetNrows();ix++) {
456 if(LsquaredDx_rows[ix]<LsquaredDx_rows[ix+1]) {
458 }
459 }
460
461 //
462 // get derivative dx/dtau
464
467 Double_t f=0.0;
468 if(temp->GetRowIndexArray()[1]==1) {
470 } else if(temp->GetRowIndexArray()[1]>1) {
471 Fatal("Unfold",
472 "epsilon#fDXDtauSquared has dimension %d != 1",
473 temp->GetRowIndexArray()[1]);
474 }
475 if(f!=0.0) {
477 }
478 DeleteMatrix(&temp);
479 }
480 DeleteMatrix(&epsilon);
481
484
485 // calculate/store matrices defining the derivatives dx/dA
486 fDXDAM[0]=new TMatrixDSparse(*fE);
487 fDXDAM[1]=new TMatrixDSparse(*fDXDY); // create a copy
488 fDXDAZ[0]=VyyinvDy; // instead of deleting VyyinvDy
489 VyyinvDy=nullptr;
490 fDXDAZ[1]=new TMatrixDSparse(*fX); // create a copy
491
493 // add correction to fDXDAM[0]
495 (Eepsilon,Eepsilon,nullptr);
498 // add correction to fDXDAZ[0]
499 Int_t *rows=new Int_t[GetNy()];
500 Int_t *cols=new Int_t[GetNy()];
501 Double_t *data=new Double_t[GetNy()];
502 for(Int_t i=0;i<GetNy();i++) {
503 rows[i]=i;
504 cols[i]=0;
505 data[i]=lambda_half;
506 }
508 (GetNy(),1,GetNy(),rows,cols,data);
509 delete[] data;
510 delete[] rows;
511 delete[] cols;
512 AddMSparse(fDXDAZ[0],1.0,temp2);
514 }
515
517
518 rank=0;
520 if(rank != GetNx()) {
521 Warning("DoUnfold","rank of output covariance is %d expect %d",
522 rank,GetNx());
523 }
524
529 for (int ix = 0; ix < fVxxInv->GetNrows(); ix++) {
530 for(int ik=VxxInv_rows[ix];ik<VxxInv_rows[ix+1];ik++) {
531 if(ix==VxxInv_cols[ik]) {
532 VxxInvDiag(ix)=VxxInv_data[ik];
533 }
534 }
535 }
536
537 // maximum global correlation coefficient
541
543 Double_t rho_sum = 0.0;
544 Int_t n_rho=0;
545 for (int ix = 0; ix < fVxx->GetNrows(); ix++) {
546 for(int ik=Vxx_rows[ix];ik<Vxx_rows[ix+1];ik++) {
547 if(ix==Vxx_cols[ik]) {
549 1. - 1. / (VxxInvDiag(ix) * Vxx_data[ik]);
552 if(rho_squared>0.0) {
554 n_rho++;
555 }
556 break;
557 }
558 }
559 }
561 fRhoAvg = (n_rho>0) ? (rho_sum/n_rho) : -1.0;
562
563 return fRhoMax;
564}
565
566////////////////////////////////////////////////////////////////////////
567/// create a sparse matrix, given the nonzero elements
568///
569/// \param[in] nrow number of rows
570/// \param[in] ncol number of columns
571/// \param[in] nel number of non-zero elements
572/// \param[in] row row indexes of non-zero elements
573/// \param[in] col column indexes of non-zero elements
574/// \param[in] data non-zero elements data
575///
576/// return pointer to a new sparse matrix
577///
578/// shortcut to new TMatrixDSparse() followed by SetMatrixArray()
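///
/// a short usage sketch (the element values are illustrative only):
/// \code
/// Int_t row[]={0,1};
/// Int_t col[]={1,0};
/// Double_t data[]={2.5,-1.0};
/// // 3x3 sparse matrix with elements (0,1)=2.5 and (1,0)=-1.0
/// TMatrixDSparse *m=CreateSparseMatrix(3,3,2,row,col,data);
/// \endcode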
580(Int_t nrow,Int_t ncol,Int_t nel,Int_t *row,Int_t *col,Double_t *data) const
581{
582 // create a sparse matrix
583 // nrow,ncol : dimension of the matrix
584 // nel: number of non-zero elements
585 // row[nel],col[nel],data[nel] : indices and data of the non-zero elements
587 if(nel>0) {
588 A->SetMatrixArray(nel,row,col,data);
589 }
590 return A;
591}
592
593////////////////////////////////////////////////////////////////////////
594/// multiply two sparse matrices
595///
596/// \param[in] a sparse matrix
597/// \param[in] b sparse matrix
598///
599/// returns a new sparse matrix a*b.
600/// <br/>
601/// A replacement for:
602/// new TMatrixDSparse(a,TMatrixDSparse::kMult,b)
603/// the root implementation had problems in older versions of root
605 const TMatrixDSparse *b) const
606{
607 if(a->GetNcols()!=b->GetNrows()) {
608 Fatal("MultiplyMSparseMSparse",
609 "inconsistent matrix col/ matrix row %d !=%d",
610 a->GetNcols(),b->GetNrows());
611 }
612
613 TMatrixDSparse *r=new TMatrixDSparse(a->GetNrows(),b->GetNcols());
614 const Int_t *a_rows=a->GetRowIndexArray();
615 const Int_t *a_cols=a->GetColIndexArray();
616 const Double_t *a_data=a->GetMatrixArray();
617 const Int_t *b_rows=b->GetRowIndexArray();
618 const Int_t *b_cols=b->GetColIndexArray();
619 const Double_t *b_data=b->GetMatrixArray();
620 // maximum size of the output matrix
621 int nMax=0;
622 for (Int_t irow = 0; irow < a->GetNrows(); irow++) {
623 if(a_rows[irow+1]>a_rows[irow]) nMax += b->GetNcols();
624 }
625 if((nMax>0)&&(a_cols)&&(b_cols)) {
626 Int_t *r_rows=new Int_t[nMax];
627 Int_t *r_cols=new Int_t[nMax];
629 Double_t *row_data=new Double_t[b->GetNcols()];
630 Int_t n=0;
631 for (Int_t irow = 0; irow < a->GetNrows(); irow++) {
632 if(a_rows[irow+1]<=a_rows[irow]) continue;
633 // clear row data
634 for(Int_t icol=0;icol<b->GetNcols();icol++) {
635 row_data[icol]=0.0;
636 }
637 // loop over a-columns in this a-row
638 for(Int_t ia=a_rows[irow];ia<a_rows[irow+1];ia++) {
639 Int_t k=a_cols[ia];
640 // loop over b-columns in b-row k
641 for(Int_t ib=b_rows[k];ib<b_rows[k+1];ib++) {
643 }
644 }
645 // store nonzero elements
646 for(Int_t icol=0;icol<b->GetNcols();icol++) {
647 if(row_data[icol] != 0.0) {
648 r_rows[n]=irow;
649 r_cols[n]=icol;
651 n++;
652 }
653 }
654 }
655 if(n>0) {
656 r->SetMatrixArray(n,r_rows,r_cols,r_data);
657 }
658 delete[] r_rows;
659 delete[] r_cols;
660 delete[] r_data;
661 delete[] row_data;
662 }
663
664 return r;
665}
666
667////////////////////////////////////////////////////////////////////////
668/// multiply a transposed Sparse matrix with another Sparse matrix
669///
670/// \param[in] a sparse matrix (to be transposed)
671/// \param[in] b sparse matrix
672///
673/// returns a new sparse matrix a<sup>T</sup>*b
674/// <br/>
675/// this is a replacement for the root constructors
676/// new TMatrixDSparse(TMatrixDSparse(TMatrixDSparse::kTransposed,*a),TMatrixDSparse::kMult,*b)
677
679(const TMatrixDSparse *a,const TMatrixDSparse *b) const
680{
681 if(a->GetNrows() != b->GetNrows()) {
682 Fatal("MultiplyMSparseTranspMSparse",
683 "inconsistent matrix row numbers %d!=%d",
684 a->GetNrows(),b->GetNrows());
685 }
686
687 TMatrixDSparse *r=new TMatrixDSparse(a->GetNcols(),b->GetNcols());
688 const Int_t *a_rows=a->GetRowIndexArray();
689 const Int_t *a_cols=a->GetColIndexArray();
690 const Double_t *a_data=a->GetMatrixArray();
691 const Int_t *b_rows=b->GetRowIndexArray();
692 const Int_t *b_cols=b->GetColIndexArray();
693 const Double_t *b_data=b->GetMatrixArray();
694 // maximum size of the output matrix
695
696 // matrix multiplication
697 typedef std::map<Int_t,Double_t> MMatrixRow_t;
698 typedef std::map<Int_t, MMatrixRow_t > MMatrix_t;
700
701 for(Int_t iRowAB=0;iRowAB<a->GetNrows();iRowAB++) {
702 for(Int_t ia=a_rows[iRowAB];ia<a_rows[iRowAB+1];ia++) {
703 for(Int_t ib=b_rows[iRowAB];ib<b_rows[iRowAB+1];ib++) {
704 // this creates a new row if necessary
707 if(icol!=row.end()) {
708 // update existing row
709 (*icol).second += a_data[ia]*b_data[ib];
710 } else {
711 // create new row
712 row[b_cols[ib]] = a_data[ia]*b_data[ib];
713 }
714 }
715 }
716 }
717
718 Int_t n=0;
720 irow!=matrix.end();irow++) {
721 n += (*irow).second.size();
722 }
723 if(n>0) {
724 // pack matrix into arrays
725 Int_t *r_rows=new Int_t[n];
726 Int_t *r_cols=new Int_t[n];
728 n=0;
730 irow!=matrix.end();irow++) {
731 for(MMatrixRow_t::const_iterator icol=(*irow).second.begin();
732 icol!=(*irow).second.end();icol++) {
733 r_rows[n]=(*irow).first;
734 r_cols[n]=(*icol).first;
735 r_data[n]=(*icol).second;
736 n++;
737 }
738 }
739 // pack arrays into TMatrixDSparse
740 if(n>0) {
741 r->SetMatrixArray(n,r_rows,r_cols,r_data);
742 }
743 delete[] r_rows;
744 delete[] r_cols;
745 delete[] r_data;
746 }
747
748 return r;
749}
750
751////////////////////////////////////////////////////////////////////////
752/// multiply sparse matrix and a non-sparse matrix
753///
754/// \param[in] a sparse matrix
755/// \param[in] b matrix
756///
757/// returns a new sparse matrix a*b.
758/// <br/> A replacement for:
759/// new TMatrixDSparse(a,TMatrixDSparse::kMult,b)
760/// the root implementation had problems in older versions of root
762 const TMatrixD *b) const
763{
764 if(a->GetNcols()!=b->GetNrows()) {
765 Fatal("MultiplyMSparseM","inconsistent matrix col /matrix row %d!=%d",
766 a->GetNcols(),b->GetNrows());
767 }
768
769 TMatrixDSparse *r=new TMatrixDSparse(a->GetNrows(),b->GetNcols());
770 const Int_t *a_rows=a->GetRowIndexArray();
771 const Int_t *a_cols=a->GetColIndexArray();
772 const Double_t *a_data=a->GetMatrixArray();
773 // maximum size of the output matrix
774 int nMax=0;
775 for (Int_t irow = 0; irow < a->GetNrows(); irow++) {
776 if(a_rows[irow+1]-a_rows[irow]>0) nMax += b->GetNcols();
777 }
778 if(nMax>0) {
779 Int_t *r_rows=new Int_t[nMax];
780 Int_t *r_cols=new Int_t[nMax];
782
783 Int_t n=0;
784 // fill matrix r
785 for (Int_t irow = 0; irow < a->GetNrows(); irow++) {
786 if(a_rows[irow+1]-a_rows[irow]<=0) continue;
787 for(Int_t icol=0;icol<b->GetNcols();icol++) {
788 r_rows[n]=irow;
789 r_cols[n]=icol;
790 r_data[n]=0.0;
791 for(Int_t i=a_rows[irow];i<a_rows[irow+1];i++) {
792 Int_t j=a_cols[i];
793 r_data[n] += a_data[i]*(*b)(j,icol);
794 }
795 if(r_data[n]!=0.0) n++;
796 }
797 }
798 if(n>0) {
799 r->SetMatrixArray(n,r_rows,r_cols,r_data);
800 }
801 delete[] r_rows;
802 delete[] r_cols;
803 delete[] r_data;
804 }
805 return r;
806}
807
808
809////////////////////////////////////////////////////////////////////////
810/// calculate a sparse matrix product M1*V*M2<sup>T</sup> where the diagonal matrix V is
811/// given by a vector
812///
813/// \param[in] m1 pointer to sparse matrix with dimension I*K
814/// \param[in] m2 pointer to sparse matrix with dimension J*K
815/// \param[in] v pointer to vector (matrix) with dimension K*1
816///
817/// returns a sparse matrix R with elements
818/// r<sub>ij</sub>=&Sigma;<sub>k</sub>M1<sub>ik</sub>V<sub>k</sub>M2<sub>jk</sub>
819
821(const TMatrixDSparse *m1,const TMatrixDSparse *m2,
822 const TMatrixTBase<Double_t> *v) const
823{
824 if((m1->GetNcols() != m2->GetNcols())||
825 (v && ((m1->GetNcols()!=v->GetNrows())||(v->GetNcols()!=1)))) {
826 if(v) {
827 Fatal("MultiplyMSparseMSparseTranspVector",
828 "matrix cols/vector rows %d!=%d!=%d or vector rows %d!=1\n",
829 m1->GetNcols(),m2->GetNcols(),v->GetNrows(),v->GetNcols());
830 } else {
831 Fatal("MultiplyMSparseMSparseTranspVector",
832 "matrix cols %d!=%d\n",m1->GetNcols(),m2->GetNcols());
833 }
834 }
835 const Int_t *rows_m1=m1->GetRowIndexArray();
836 const Int_t *cols_m1=m1->GetColIndexArray();
837 const Double_t *data_m1=m1->GetMatrixArray();
838 Int_t num_m1=0;
839 for(Int_t i=0;i<m1->GetNrows();i++) {
840 if(rows_m1[i]<rows_m1[i+1]) num_m1++;
841 }
842 const Int_t *rows_m2=m2->GetRowIndexArray();
843 const Int_t *cols_m2=m2->GetColIndexArray();
844 const Double_t *data_m2=m2->GetMatrixArray();
845 Int_t num_m2=0;
846 for(Int_t j=0;j<m2->GetNrows();j++) {
847 if(rows_m2[j]<rows_m2[j+1]) num_m2++;
848 }
849 const TMatrixDSparse *v_sparse=dynamic_cast<const TMatrixDSparse *>(v);
850 const Int_t *v_rows=nullptr;
851 const Double_t *v_data=nullptr;
852 if(v_sparse) {
853 v_rows=v_sparse->GetRowIndexArray();
854 v_data=v_sparse->GetMatrixArray();
855 }
857 Int_t *row_r=new Int_t[num_r];
858 Int_t *col_r=new Int_t[num_r];
860 num_r=0;
861 for(Int_t i=0;i<m1->GetNrows();i++) {
862 for(Int_t j=0;j<m2->GetNrows();j++) {
863 data_r[num_r]=0.0;
866 while((index_m1<rows_m1[i+1])&&(index_m2<rows_m2[j+1])) {
869 if(k1<k2) {
870 index_m1++;
871 } else if(k1>k2) {
872 index_m2++;
873 } else {
874 if(v_sparse) {
876 if(v_index<v_rows[k1+1]) {
878 * v_data[v_index];
879 }
880 } else if(v) {
882 * (*v)(k1,0);
883 } else {
885 }
886 index_m1++;
887 index_m2++;
888 }
889 }
890 if(data_r[num_r] !=0.0) {
891 row_r[num_r]=i;
892 col_r[num_r]=j;
893 num_r++;
894 }
895 }
896 }
899 delete[] row_r;
900 delete[] col_r;
901 delete[] data_r;
902 return r;
903}
904
905////////////////////////////////////////////////////////////////////////
906/// add a sparse matrix, scaled by a factor, to another scaled matrix
907///
908/// \param[inout] dest destination matrix
909/// \param[in] f scaling factor
910/// \param[in] src matrix to be added to dest
911///
912/// a replacement for
913/// (*dest) += f * (*src)
914/// which suffered from a bug in old root versions
915
917 const TMatrixDSparse *src) const
918{
919 const Int_t *dest_rows=dest->GetRowIndexArray();
920 const Int_t *dest_cols=dest->GetColIndexArray();
921 const Double_t *dest_data=dest->GetMatrixArray();
922 const Int_t *src_rows=src->GetRowIndexArray();
923 const Int_t *src_cols=src->GetColIndexArray();
924 const Double_t *src_data=src->GetMatrixArray();
925
926 if((dest->GetNrows()!=src->GetNrows())||
927 (dest->GetNcols()!=src->GetNcols())) {
928 Fatal("AddMSparse","inconsistent matrix rows %d!=%d OR cols %d!=%d",
929 src->GetNrows(),dest->GetNrows(),src->GetNcols(),dest->GetNcols());
930 }
931 Int_t nmax=dest->GetNrows()*dest->GetNcols();
935 Int_t n=0;
936 for(Int_t row=0;row<dest->GetNrows();row++) {
938 Int_t i_src=src_rows[row];
939 while((i_dest<dest_rows[row+1])||(i_src<src_rows[row+1])) {
940 Int_t col_dest=(i_dest<dest_rows[row+1]) ?
941 dest_cols[i_dest] : dest->GetNcols();
942 Int_t col_src =(i_src <src_rows[row+1] ) ?
943 src_cols [i_src] : src->GetNcols();
944 result_rows[n]=row;
945 if(col_dest<col_src) {
948 } else if(col_dest>col_src) {
951 } else {
954 }
955 if(result_data[n] !=0.0) {
957 Fatal("AddMSparse","Nan detected %d %d %d",
958 row,i_dest,i_src);
959 }
960 n++;
961 }
962 }
963 }
964 if(n<=0) {
965 n=1;
966 result_rows[0]=0;
967 result_cols[0]=0;
968 result_data[0]=0.0;
969 }
970 dest->SetMatrixArray(n,result_rows,result_cols,result_data);
971 delete[] result_data;
972 delete[] result_rows;
973 delete[] result_cols;
974}
975
976////////////////////////////////////////////////////////////////////////
977/// get the inverse or pseudo-inverse of a positive, sparse matrix
978///
979/// \param[in] A the sparse matrix to be inverted, has to be positive
980/// \param[inout] rankPtr if zero, suppress calculation of pseudo-inverse
981/// otherwise the rank of the matrix is returned in *rankPtr
982///
983/// return value: 0 or a new sparse matrix
984/// <ul>
985/// <li>if(rankPtr==nullptr) return the inverse if it exists, or return 0</li>
986/// <li>else return a (pseudo-)inverse and store the rank of the matrix in
987/// *rankPtr </li>
988/// </ul>
989///
990/// the matrix inversion is optimized in performance for the case
991/// where a large submatrix of A is diagonal
992
994(const TMatrixDSparse *A,Int_t *rankPtr) const
995{
996
997 if(A->GetNcols()!=A->GetNrows()) {
998 Fatal("InvertMSparseSymmPos","inconsistent matrix row/col %d!=%d",
999 A->GetNcols(),A->GetNrows());
1000 }
1001
1002 Bool_t *isZero=new Bool_t[A->GetNrows()];
1003 const Int_t *a_rows=A->GetRowIndexArray();
1004 const Int_t *a_cols=A->GetColIndexArray();
1005 const Double_t *a_data=A->GetMatrixArray();
1006
1007 // determine diagonal elements
1008 // Aii: diagonals of A
1009 Int_t iDiagonal=0;
1010 Int_t iBlock=A->GetNrows();
1012 TVectorD aII(A->GetNrows());
1013 Int_t nError=0;
1014 for(Int_t iA=0;iA<A->GetNrows();iA++) {
1015 for(Int_t indexA=a_rows[iA];indexA<a_rows[iA+1];indexA++) {
1017 if(iA==jA) {
1018 if(!(a_data[indexA]>=0.0)) nError++;
1019 aII(iA)=a_data[indexA];
1020 }
1021 }
1022 }
1023 if(nError>0) {
1024 Fatal("InvertMSparseSymmPos",
1025 "Matrix has %d negative elements on the diagonal", nError);
1026 return nullptr;
1027 }
1028
1029 // reorder matrix such that the largest block of zeros is swapped
1030 // to the lowest indices
1031 // the result of this ordering:
1032 // swap[i] : for index i point to the row in A
1033 // swapBack[iA] : for index iA in A point to the swapped index i
1034 // the indices are grouped into three groups
1035 // 0..iDiagonal-1 : these rows only have diagonal elements
1036 // iDiagonal..iBlock : these rows contain a diagonal block matrix
1037 //
1038 Int_t *swap=new Int_t[A->GetNrows()];
1039 for(Int_t j=0;j<A->GetNrows();j++) swap[j]=j;
1040 for(Int_t iStart=0;iStart<iBlock;iStart++) {
1041 Int_t nZero=0;
1042 for(Int_t i=iStart;i<iBlock;i++) {
1043 Int_t iA=swap[i];
1044 Int_t n=A->GetNrows()-(a_rows[iA+1]-a_rows[iA]);
1045 if(n>nZero) {
1046 Int_t tmp=swap[iStart];
1047 swap[iStart]=swap[i];
1048 swap[i]=tmp;
1049 nZero=n;
1050 }
1051 }
1052 for(Int_t i=0;i<A->GetNrows();i++) isZero[i]=kTRUE;
1053 Int_t iA=swap[iStart];
1054 for(Int_t indexA=a_rows[iA];indexA<a_rows[iA+1];indexA++) {
1056 isZero[jA]=kFALSE;
1057 if(iA!=jA) {
1059 }
1060 }
1061 if(isDiagonal) {
1062 iDiagonal=iStart+1;
1063 } else {
1064 for(Int_t i=iStart+1;i<iBlock;) {
1065 if(isZero[swap[i]]) {
1066 i++;
1067 } else {
1068 iBlock--;
1069 Int_t tmp=swap[iBlock];
1070 swap[iBlock]=swap[i];
1071 swap[i]=tmp;
1072 }
1073 }
1074 }
1075 }
1076
1077 // for tests uncomment this:
1078 // iBlock=iDiagonal;
1079
1080
1081 // conditioning of the iBlock part
1082#ifdef CONDITION_BLOCK_PART
1083 Int_t nn=A->GetNrows()-iBlock;
1084 for(int inc=(nn+1)/2;inc;inc /= 2) {
1085 for(int i=inc;i<nn;i++) {
1086 int itmp=swap[i+iBlock];
1087 int j;
1088 for(j=i;(j>=inc)&&(aII(swap[j-inc+iBlock])<aII(itmp));j -=inc) {
1089 swap[j+iBlock]=swap[j-inc+iBlock];
1090 }
1091 swap[j+iBlock]=itmp;
1092 }
1093 }
1094#endif
1095 // construct array for swapping back
1096 Int_t *swapBack=new Int_t[A->GetNrows()];
1097 for(Int_t i=0;i<A->GetNrows();i++) {
1098 swapBack[swap[i]]=i;
1099 }
1100#ifdef DEBUG_DETAIL
1101 for(Int_t i=0;i<A->GetNrows();i++) {
1102 std::cout<<" "<<i<<" "<<swap[i]<<" "<<swapBack[i]<<"\n";
1103 }
1104 std::cout<<"after sorting\n";
1105 for(Int_t i=0;i<A->GetNrows();i++) {
1106 if(i==iDiagonal) std::cout<<"iDiagonal="<<i<<"\n";
1107 if(i==iBlock) std::cout<<"iBlock="<<i<<"\n";
1108 std::cout<<" "<<swap[i]<<" "<<aII(swap[i])<<"\n";
1109 }
1110 {
1111 // sanity check:
1112 // (1) sub-matrix swapped[0]..swapped[iDiagonal]
1113 // must not contain off-diagonals
1114 // (2) sub-matrix swapped[0]..swapped[iBlock-1] must be diagonal
1115 Int_t nDiag=0;
1116 Int_t nError1=0;
1117 Int_t nError2=0;
1118 Int_t nBlock=0;
1119 Int_t nNonzero=0;
1120 for(Int_t i=0;i<A->GetNrows();i++) {
1121 Int_t row=swap[i];
1122 for(Int_t indexA=a_rows[row];indexA<a_rows[row+1];indexA++) {
1124 if(i==j) nDiag++;
1125 else if((i<iDiagonal)||(j<iDiagonal)) nError1++;
1126 else if((i<iBlock)&&(j<iBlock)) nError2++;
1127 else if((i<iBlock)||(j<iBlock)) nBlock++;
1128 else nNonzero++;
1129 }
1130 }
1131 if(nError1+nError2>0) {
1132 Fatal("InvertMSparseSymmPos","sparse matrix analysis failed %d %d %d %d %d",
1134 }
1135 }
1136#endif
1137#ifdef DEBUG
1138 Info("InvertMSparseSymmPos","iDiagonal=%d iBlock=%d nRow=%d",
1140#endif
1141
1142 //=============================================
1143 // matrix inversion starts here
1144 //
1145 // the matrix is split into several parts
1146 // D1 0 0
1147 // A = ( 0 D2 B# )
1148 // 0 B C
1149 //
1150 // D1,D2 are diagonal matrices
1151 //
1152 // first, the D1 part is inverted by calculating 1/D1
1153 // dim(D1)= iDiagonal
1154 // next, the other parts are inverted using Schur complement
1155 //
1156 // 1/D1 0 0
1157 // Ainv = ( 0 E G# )
1158 // 0 G F^-1
1159 //
1160 // where F = C + (-B/D2) B#
1161 // G = (F^-1) (-B/D2)
1162 // E = 1/D2 + (-B#/D2) G)
1163 //
1164 // the inverse of F is calculated using a Cholesky decomposition
1165 //
1166 // Error handling:
1167 // (a) if rankPtr==nullptr: user requests inverse
1168 //
1169 // if D1 is not strictly positive, return NULL
1170 // if D2 is not strictly positive, return NULL
1171 // if F is not strictly positive (Cholesky decomposition failed)
1172 // return NULL
1173 // else
1174 // return Ainv as defined above
1175 //
1176 // (b) if rankPtr !=nullptr: user requests pseudo-inverse
1177 // if D2 is not strictly positive or F is not strictly positive
1178 // calculate singular-value decomposition of
1179 // D2 B#
1180 // ( ) = O D O^-1
1181 // B C
1182 // if some eigenvalues are negative, return NULL
1183 // else calculate pseudo-inverse
1184 // *rankPtr = rank(D1)+rank(D)
1185 // else
1186 // calculate pseudo-inverse of D1 (D1_ii==0) ? 0 : 1/D1_ii
1187 // *rankPtr=rank(D1)+nrow(D2)+nrow(C)
1188 // return Ainv as defined above
1189
1190 Double_t *rEl_data=new Double_t[A->GetNrows()*A->GetNrows()];
1191 Int_t *rEl_col=new Int_t[A->GetNrows()*A->GetNrows()];
1192 Int_t *rEl_row=new Int_t[A->GetNrows()*A->GetNrows()];
1193 Int_t rNumEl=0;
1194
1195 //====================================================
1196 // invert D1
1197 Int_t rankD1=0;
1198 for(Int_t i=0;i<iDiagonal;++i) {
1199 Int_t iA=swap[i];
1200 if(aII(iA)>0.0) {
1201 rEl_col[rNumEl]=iA;
1202 rEl_row[rNumEl]=iA;
1203 rEl_data[rNumEl]=1./aII(iA);
1204 ++rankD1;
1205 ++rNumEl;
1206 }
1207 }
1208 if((!rankPtr)&&(rankD1!=iDiagonal)) {
1209 Fatal("InvertMSparseSymmPos",
1210 "diagonal part 1 has rank %d != %d, matrix can not be inverted",
1212 rNumEl=-1;
1213 }
1214
1215
1216 //====================================================
1217 // invert D2
1219 TMatrixDSparse *D2inv=nullptr;
1220 if((rNumEl>=0)&&(nD2>0)) {
1222 Int_t *D2inv_col=new Int_t[nD2];
1223 Int_t *D2inv_row=new Int_t[nD2];
1224 Int_t D2invNumEl=0;
1225 for(Int_t i=0;i<nD2;++i) {
1226 Int_t iA=swap[i+iDiagonal];
1227 if(aII(iA)>0.0) {
1231 ++D2invNumEl;
1232 }
1233 }
1234 if(D2invNumEl==nD2) {
1236 D2inv_data);
1237 } else if(!rankPtr) {
1238 Fatal("InvertMSparseSymmPos",
1239 "diagonal part 2 has rank %d != %d, matrix can not be inverted",
1240 D2invNumEl,nD2);
1241 rNumEl=-2;
1242 }
1243 delete [] D2inv_data;
1244 delete [] D2inv_col;
1245 delete [] D2inv_row;
1246 }
1247
1248 //====================================================
1249 // invert F
1250
1251 Int_t nF=A->GetNrows()-iBlock;
1252 TMatrixDSparse *Finv=nullptr;
1253 TMatrixDSparse *B=nullptr;
1254 TMatrixDSparse *minusBD2inv=nullptr;
1255 if((rNumEl>=0)&&(nF>0)&&((nD2==0)||D2inv)) {
1256 // construct matrices F and B
1257 Int_t nFmax=nF*nF;
1260 Int_t *F_col=new Int_t[nFmax];
1261 Int_t *F_row=new Int_t[nFmax];
1262 Int_t FnumEl=0;
1263
1264 Int_t nBmax=nF*(nD2+1);
1266 Int_t *B_col=new Int_t[nBmax];
1267 Int_t *B_row=new Int_t[nBmax];
1268 Int_t BnumEl=0;
1269
1270 for(Int_t i=0;i<nF;i++) {
1271 Int_t iA=swap[i+iBlock];
1272 for(Int_t indexA=a_rows[iA];indexA<a_rows[iA+1];indexA++) {
1275 if(j0>=iBlock) {
1276 Int_t j=j0-iBlock;
1277 F_row[FnumEl]=i;
1278 F_col[FnumEl]=j;
1280 FnumEl++;
1281 } else if(j0>=iDiagonal) {
1283 B_row[BnumEl]=i;
1284 B_col[BnumEl]=j;
1286 BnumEl++;
1287 }
1288 }
1289 }
1290 TMatrixDSparse *F=nullptr;
1291 if(FnumEl) {
1292#ifndef FORCE_EIGENVALUE_DECOMPOSITION
1294#endif
1295 }
1296 delete [] F_data;
1297 delete [] F_col;
1298 delete [] F_row;
1299 if(BnumEl) {
1301 }
1302 delete [] B_data;
1303 delete [] B_col;
1304 delete [] B_row;
1305
1306 if(B && D2inv) {
1308 if(minusBD2inv) {
1309 Int_t mbd2_nMax=minusBD2inv->GetRowIndexArray()
1310 [minusBD2inv->GetNrows()];
1311 Double_t *mbd2_data=minusBD2inv->GetMatrixArray();
1312 for(Int_t i=0;i<mbd2_nMax;i++) {
1313 mbd2_data[i] = - mbd2_data[i];
1314 }
1315 }
1316 }
1317 if(minusBD2inv && F) {
1322 }
1323
1324 if(F) {
1325 // cholesky decomposition with preconditioning
1326 const Int_t *f_rows=F->GetRowIndexArray();
1327 const Int_t *f_cols=F->GetColIndexArray();
1328 const Double_t *f_data=F->GetMatrixArray();
1329 // cholesky-type decomposition of F
1330 TMatrixD c(nF,nF);
1331 Int_t nErrorF=0;
1332 for(Int_t i=0;i<nF;i++) {
1333 for(Int_t indexF=f_rows[i];indexF<f_rows[i+1];indexF++) {
1334 if(f_cols[indexF]>=i) c(f_cols[indexF],i)=f_data[indexF];
1335 }
1336 // calculate diagonal element
1337 Double_t c_ii=c(i,i);
1338 for(Int_t j=0;j<i;j++) {
1339 Double_t c_ij=c(i,j);
1340 c_ii -= c_ij*c_ij;
1341 }
1342 if(c_ii<=0.0) {
1343 nErrorF++;
1344 break;
1345 }
1347 c(i,i)=c_ii;
1348 // off-diagonal elements
1349 for(Int_t j=i+1;j<nF;j++) {
1350 Double_t c_ji=c(j,i);
1351 for(Int_t k=0;k<i;k++) {
1352 c_ji -= c(i,k)*c(j,k);
1353 }
1354 c(j,i) = c_ji/c_ii;
1355 }
1356 }
1357 // check condition of dInv
1358 if(!nErrorF) {
1359 Double_t dmin=c(0,0);
1361 for(Int_t i=1;i<nF;i++) {
1362 dmin=TMath::Min(dmin,c(i,i));
1363 dmax=TMath::Max(dmax,c(i,i));
1364 }
1365#ifdef DEBUG
1366 std::cout<<"dmin,dmax: "<<dmin<<" "<<dmax<<"\n";
1367#endif
1369 }
1370 if(!nErrorF) {
1371 // here: F = c c#
1372 // construct inverse of c
1373 TMatrixD cinv(nF,nF);
1374 for(Int_t i=0;i<nF;i++) {
1375 cinv(i,i)=1./c(i,i);
1376 }
1377 for(Int_t i=0;i<nF;i++) {
1378 for(Int_t j=i+1;j<nF;j++) {
1379 Double_t tmp=-c(j,i)*cinv(i,i);
1380 for(Int_t k=i+1;k<j;k++) {
1381 tmp -= cinv(k,i)*c(j,k);
1382 }
1383 cinv(j,i)=tmp*cinv(j,j);
1384 }
1385 }
1389 }
1390 DeleteMatrix(&F);
1391 }
1392 }
1393
1394 // here:
1395 // rNumEl>=0: diagonal part has been inverted
1396 // (nD2==0)||D2inv : D2 part has been inverted or is empty
1397 // (nF==0)||Finv : F part has been inverted or is empty
1398
1400 if(rNumEl>=0) {
1401 if( ((nD2==0)||D2inv) && ((nF==0)||Finv) ) {
1402 // all matrix parts have been inverted, compose full matrix
1403 // 1/D1 0 0
1404 // Ainv = ( 0 E G# )
1405 // 0 G F^-1
1406 //
1407 // G = (F^-1) (-B/D2)
1408 // E = 1/D2 + (-B#/D2) G)
1409
1410 TMatrixDSparse *G=nullptr;
1411 if(Finv && minusBD2inv) {
1413 }
1414 TMatrixDSparse *E=nullptr;
1415 if(D2inv) E=new TMatrixDSparse(*D2inv);
1416 if(G && minusBD2inv) {
1419 if(E) {
1422 } else {
1424 }
1425 }
1426 // pack matrix E to r
1427 if(E) {
1428 const Int_t *e_rows=E->GetRowIndexArray();
1429 const Int_t *e_cols=E->GetColIndexArray();
1430 const Double_t *e_data=E->GetMatrixArray();
1431 for(Int_t iE=0;iE<E->GetNrows();iE++) {
1432 Int_t iA=swap[iE+iDiagonal];
1433 for(Int_t indexE=e_rows[iE];indexE<e_rows[iE+1];++indexE) {
1435 Int_t jA=swap[jE+iDiagonal];
1436 rEl_col[rNumEl]=iA;
1437 rEl_row[rNumEl]=jA;
1439 ++rNumEl;
1440 }
1441 }
1442 DeleteMatrix(&E);
1443 }
1444 // pack matrix G to r
1445 if(G) {
1446 const Int_t *g_rows=G->GetRowIndexArray();
1447 const Int_t *g_cols=G->GetColIndexArray();
1448 const Double_t *g_data=G->GetMatrixArray();
1449 for(Int_t iG=0;iG<G->GetNrows();iG++) {
1450 Int_t iA=swap[iG+iBlock];
1451 for(Int_t indexG=g_rows[iG];indexG<g_rows[iG+1];++indexG) {
1453 Int_t jA=swap[jG+iDiagonal];
1454 // G
1455 rEl_col[rNumEl]=iA;
1456 rEl_row[rNumEl]=jA;
1458 ++rNumEl;
1459 // G#
1460 rEl_col[rNumEl]=jA;
1461 rEl_row[rNumEl]=iA;
1463 ++rNumEl;
1464 }
1465 }
1466 DeleteMatrix(&G);
1467 }
1468 if(Finv) {
1469 // pack matrix Finv to r
1470 const Int_t *finv_rows=Finv->GetRowIndexArray();
1471 const Int_t *finv_cols=Finv->GetColIndexArray();
1472 const Double_t *finv_data=Finv->GetMatrixArray();
1473 for(Int_t iF=0;iF<Finv->GetNrows();iF++) {
1474 Int_t iA=swap[iF+iBlock];
1475 for(Int_t indexF=finv_rows[iF];indexF<finv_rows[iF+1];++indexF) {
1477 Int_t jA=swap[jF+iBlock];
1478 rEl_col[rNumEl]=iA;
1479 rEl_row[rNumEl]=jA;
1481 ++rNumEl;
1482 }
1483 }
1484 }
1486 } else if(!rankPtr) {
1487 rNumEl=-3;
1488 Fatal("InvertMSparseSymmPos",
1489 "non-trivial part has rank < %d, matrix can not be inverted",
1490 nF);
1491 } else {
1492 // part of the matrix could not be inverted, get eigenvalue
1493 // decomposition
1494 Int_t nEV=nD2+nF;
1495 Double_t epsilonEV2=fEpsMatrix /* *nEV*nEV */;
1496 Info("InvertMSparseSymmPos",
1497 "cholesky-decomposition failed, try eigenvalue analysis");
1498#ifdef DEBUG
1499 std::cout<<"nEV="<<nEV<<" iDiagonal="<<iDiagonal<<"\n";
1500#endif
1502 for(Int_t i=0;i<nEV;i++) {
1503 Int_t iA=swap[i+iDiagonal];
1504 for(Int_t indexA=a_rows[iA];indexA<a_rows[iA+1];indexA++) {
1507 if(j0>=iDiagonal) {
1509#ifdef DEBUG
1510 if((i<0)||(j<0)||(i>=nEV)||(j>=nEV)) {
1511 std::cout<<" error "<<nEV<<" "<<i<<" "<<j<<"\n";
1512 }
1513#endif
1514 if(!TMath::Finite(a_data[indexA])) {
1515 Fatal("InvertMSparseSymmPos",
1516 "non-finite number detected element %d %d\n",
1517 iA,jA);
1518 }
1519 EV(i,j)=a_data[indexA];
1520 }
1521 }
1522 }
1523 // EV.Print();
1525#ifdef DEBUG
1526 std::cout<<"Eigenvalues\n";
1527 // Eigen.GetEigenValues().Print();
1528#endif
1529 Int_t errorEV=0;
1530 Double_t condition=1.0;
1531 if(Eigen.GetEigenValues()(0)<0.0) {
1532 errorEV=1;
1533 } else if(Eigen.GetEigenValues()(0)>0.0) {
1534 condition=Eigen.GetEigenValues()(nEV-1)/Eigen.GetEigenValues()(0);
1535 }
1536 if(condition<-epsilonEV2) {
1537 errorEV=2;
1538 }
1539 if(errorEV) {
1540 if(errorEV==1) {
1541 Error("InvertMSparseSymmPos",
1542 "Largest Eigenvalue is negative %f",
1543 Eigen.GetEigenValues()(0));
1544 } else {
1545 Error("InvertMSparseSymmPos",
1546 "Some Eigenvalues are negative (EV%d/EV0=%g epsilon=%g)",
1547 nEV-1,
1548 Eigen.GetEigenValues()(nEV-1)/Eigen.GetEigenValues()(0),
1549 epsilonEV2);
1550 }
1551 }
1552 // compose inverse matrix
1553 rankD2Block=0;
1554 Double_t setToZero=epsilonEV2*Eigen.GetEigenValues()(0);
1556 for(Int_t i=0;i<nEV;i++) {
1557 Double_t x=Eigen.GetEigenValues()(i);
1558 if(x>setToZero) {
1559 inverseEV(i,0)=1./x;
1560 ++rankD2Block;
1561 }
1562 }
1563 TMatrixDSparse V(Eigen.GetEigenVectors());
1565 (&V,&V,&inverseEV);
1566
1567 // pack matrix VDVt to r
1568 const Int_t *vdvt_rows=VDVt->GetRowIndexArray();
1569 const Int_t *vdvt_cols=VDVt->GetColIndexArray();
1570 const Double_t *vdvt_data=VDVt->GetMatrixArray();
1571 for(Int_t iVDVt=0;iVDVt<VDVt->GetNrows();iVDVt++) {
1572 Int_t iA=swap[iVDVt+iDiagonal];
1576 Int_t jA=swap[jVDVt+iDiagonal];
1577 rEl_col[rNumEl]=iA;
1578 rEl_row[rNumEl]=jA;
1580 ++rNumEl;
1581 }
1582 }
1584 }
1585 }
1586 if(rankPtr) {
1588 }
1589
1590
1593 DeleteMatrix(&B);
1595
1596 delete [] swap;
1597 delete [] swapBack;
1598 delete [] isZero;
1599
1600 TMatrixDSparse *r=(rNumEl>=0) ?
1602 rEl_row,rEl_col,rEl_data) : nullptr;
1603 delete [] rEl_data;
1604 delete [] rEl_col;
1605 delete [] rEl_row;
1606
1607#ifdef DEBUG_DETAIL
1608 // sanity test
1609 if(r) {
1613
1614 TMatrixD ar(*Ar);
1615 TMatrixD a(*A);
1616 TMatrixD ara(*ArA);
1617 TMatrixD R(*r);
1618 TMatrixD rar(*rAr);
1619
1620 DeleteMatrix(&Ar);
1621 DeleteMatrix(&ArA);
1622 DeleteMatrix(&rAr);
1623
1624 Double_t epsilonA2=fEpsMatrix /* *a.GetNrows()*a.GetNcols() */;
1625 for(Int_t i=0;i<a.GetNrows();i++) {
1626 for(Int_t j=0;j<a.GetNcols();j++) {
1627 // ar should be symmetric
1628 if(TMath::Abs(ar(i,j)-ar(j,i))>
1629 epsilonA2*(TMath::Abs(ar(i,j))+TMath::Abs(ar(j,i)))) {
1630 std::cout<<"Ar is not symmetric Ar("<<i<<","<<j<<")="<<ar(i,j)
1631 <<" Ar("<<j<<","<<i<<")="<<ar(j,i)<<"\n";
1632 }
1633 // ara should be equal a
1634 if(TMath::Abs(ara(i,j)-a(i,j))>
1635 epsilonA2*(TMath::Abs(ara(i,j))+TMath::Abs(a(i,j)))) {
1636 std::cout<<"ArA is not equal A ArA("<<i<<","<<j<<")="<<ara(i,j)
1637 <<" A("<<i<<","<<j<<")="<<a(i,j)<<"\n";
1638 }
1639 // ara should be equal a
1640 if(TMath::Abs(rar(i,j)-R(i,j))>
1641 epsilonA2*(TMath::Abs(rar(i,j))+TMath::Abs(R(i,j)))) {
1642 std::cout<<"rAr is not equal r rAr("<<i<<","<<j<<")="<<rar(i,j)
1643 <<" r("<<i<<","<<j<<")="<<R(i,j)<<"\n";
1644 }
1645 }
1646 }
1647 if(rankPtr) std::cout<<"rank="<<*rankPtr<<"\n";
1648 } else {
1649 std::cout<<"Matrix is not positive\n";
1650 }
1651#endif
1652 return r;
1653
1654}
1655
1656////////////////////////////////////////////////////////////////////////
1657/// Get bin name of an output bin
1658///
1659/// \param[in] iBinX bin number
1660///
1661/// Return value: name of the bin
1662/// <br/>
1663/// For TUnfold and TUnfoldSys, this function simply returns the bin
1664/// number as a string. This function really only makes sense in the
1665/// context of TUnfoldDensity, where binning schemes are implemented
1666/// using the class TUnfoldBinning, and non-trivial bin names are
1667/// returned.
1669{
1670 return TString::Format("#%d",iBinX);
1671}
1672
1673////////////////////////////////////////////////////////////////////////
1674/// Set up response matrix and regularisation scheme
1675///
1676/// \param[in] hist_A matrix of MC events that describes the migrations
1677/// \param[in] histmap mapping of the histogram axes
1678/// \param[in] regmode (default=kRegModeSize) global regularisation mode
1679/// \param[in] constraint (default=kEConstraintArea) type of constraint
1680///
1681/// Treatment of overflow bins in the matrix hist_A
1682/// <ul>
1683/// <li>Events reconstructed in underflow or overflow bins are counted
1684/// as inefficiency. They have to be filled properly.</li>
1685/// <li>Events where the truth level is in underflow or overflow bins are
1686/// treated as a part of the generator level distribution.
1687/// The full truth level distribution (including underflow and
1688/// overflow) is unfolded.</li>
1689/// </ul>
1690/// If unsure, do the following:
1691/// <ul>
1692/// <li>store events where the truth is in underflow or overflow
1693/// (sometimes called "fakes") in a separate TH1. Ensure that the
1694/// truth-level underflow and overflow bins of hist_A are all zero.</li>
1695/// <li>the fakes are background to the
1696/// measurement. Use the classes TUnfoldSys and TUnfoldDensity instead
1697/// of the plain TUnfold for subtracting background</li>
1698/// </ul>
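///
/// A filling sketch for hist_A (histogram, binning and variable names are
/// illustrative assumptions; truth on the x axis, i.e. histmap=kHistMapOutputHoriz):
/// \code
/// TH2D *histA=new TH2D("histA",";x_{true};x_{rec}",nGen,genBins,nRec,recBins);
/// if(isReconstructed) histA->Fill(xTrue,xReco);
/// else histA->Fill(xTrue,recBins[0]-1.); // detector underflow: counted as inefficiency
/// \endcode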
1699
1701 EConstraint constraint)
1702{
1703 // data members initialized to something different from zero:
1704 // fA: filled from hist_A
1705 // fDA: filled from hist_A
1706 // fX0: filled from hist_A
1707 // fL: filled depending on the regularisation scheme
1708 InitTUnfold();
1709 SetConstraint(constraint);
1710 Int_t nx0, nx, ny;
1712 // include overflow bins on the X axis
1713 nx0 = hist_A->GetNbinsX()+2;
1714 ny = hist_A->GetNbinsY();
1715 } else {
1716 // include overflow bins on the X axis
1717 nx0 = hist_A->GetNbinsY()+2;
1718 ny = hist_A->GetNbinsX();
1719 }
1720 nx = 0;
1721 // fNx is expected to be nx0, but the input matrix may be ill-formed
1722 // -> all columns with zero events have to be removed
1723 // (because y does not contain any information on that bin in x)
1724 fSumOverY.Set(nx0);
1725 fXToHist.Set(nx0);
1726 fHistToX.Set(nx0);
1727 Int_t nonzeroA=0;
1728 // loop
1729 // - calculate bias distribution
1730 // sum_over_y
1731 // - count those generator bins which can be unfolded
1732 // fNx
1733 // - histogram bins are added to the lookup-tables
1734 // fHistToX[] and fXToHist[]
1735 // these convert histogram bins to matrix indices and vice versa
1736 // - the number of non-zero elements is counted
1737 // nonzeroA
1738 Int_t nskipped=0;
1739 for (Int_t ix = 0; ix < nx0; ix++) {
1740 // calculate sum over all detector bins
1741 // excluding the overflow bins
1742 Double_t sum = 0.0;
1743 Int_t nonZeroY = 0;
1744 for (Int_t iy = 0; iy < ny; iy++) {
1745 Double_t z;
1747 z = hist_A->GetBinContent(ix, iy + 1);
1748 } else {
1749 z = hist_A->GetBinContent(iy + 1, ix);
1750 }
1751 if (z != 0.0) {
1752 nonzeroA++;
1753 nonZeroY++;
1754 sum += z;
1755 }
1756 }
1757 // check whether there is any sensitivity to this generator bin
1758
1759 if (nonZeroY) {
1760 // update mapping tables to convert bin number to matrix index
1761 fXToHist[nx] = ix;
1762 fHistToX[ix] = nx;
1763 // add overflow bins -> important to keep track of the
1764 // non-reconstructed events
1765 fSumOverY[nx] = sum;
1767 fSumOverY[nx] +=
1768 hist_A->GetBinContent(ix, 0) +
1769 hist_A->GetBinContent(ix, ny + 1);
1770 } else {
1771 fSumOverY[nx] +=
1772 hist_A->GetBinContent(0, ix) +
1773 hist_A->GetBinContent(ny + 1, ix);
1774 }
1775 nx++;
1776 } else {
1777 nskipped++;
1778 // histogram bin pointing to -1 (non-existing matrix entry)
1779 fHistToX[ix] = -1;
1780 }
1781 }
1783 for (Int_t ix = 0; ix < nx; ix++) {
1784 Int_t ibinx = fXToHist[ix];
1785 if(ibinx<1) underflowBin=1;
1787 if(ibinx>hist_A->GetNbinsX()) overflowBin=1;
1788 } else {
1789 if(ibinx>hist_A->GetNbinsY()) overflowBin=1;
1790 }
1791 }
1792 if(nskipped) {
1793 Int_t nprint=0;
1794 Int_t ixfirst=-1,ixlast=-1;
1796 int nDisconnected=0;
1797 for (Int_t ix = 0; ix < nx0; ix++) {
1798 if(fHistToX[ix]<0) {
1799 nprint++;
1800 if(ixlast<0) {
1801 binlist +=" ";
1802 binlist +=ix;
1803 ixfirst=ix;
1804 }
1805 ixlast=ix;
1806 nDisconnected++;
1807 }
1808 if(((fHistToX[ix]>=0)&&(ixlast>=0))||
1809 (nprint==nskipped)) {
1810 if(ixlast>ixfirst) {
1811 binlist += "-";
1812 binlist += ixlast;
1813 }
1814 ixfirst=-1;
1815 ixlast=-1;
1816 }
1817 if(nprint==nskipped) {
1818 break;
1819 }
1820 }
1822 Info("TUnfold","underflow and overflow bin "
1823 "do not depend on the input data");
1824 } else {
1825 Warning("TUnfold","%d output bins "
1826 "do not depend on the input data %s",nDisconnected,
1827 static_cast<const char *>(binlist));
1828 }
1829 }
1830 // store bias as matrix
1831 fX0 = new TMatrixD(nx, 1, fSumOverY.GetArray());
1832 // store non-zero elements in sparse matrix fA
1833 // construct sparse matrix fA
1834 Int_t *rowA = new Int_t[nonzeroA];
1835 Int_t *colA = new Int_t[nonzeroA];
1837 Int_t index=0;
1838 for (Int_t iy = 0; iy < ny; iy++) {
1839 for (Int_t ix = 0; ix < nx; ix++) {
1840 Int_t ibinx = fXToHist[ix];
1841 Double_t z;
1843 z = hist_A->GetBinContent(ibinx, iy + 1);
1844 } else {
1845 z = hist_A->GetBinContent(iy + 1, ibinx);
1846 }
1847 if (z != 0.0) {
1848 rowA[index]=iy;
1849 colA[index] = ix;
1850 dataA[index] = z / fSumOverY[ix];
1851 index++;
1852 }
1853 }
1854 }
1855 if(underflowBin && overflowBin) {
1856 Info("TUnfold","%d input bins and %d output bins (includes 2 underflow/overflow bins)",ny,nx);
1857 } else if(underflowBin) {
1858 Info("TUnfold","%d input bins and %d output bins (includes 1 underflow bin)",ny,nx);
1859 } else if(overflowBin) {
1860 Info("TUnfold","%d input bins and %d output bins (includes 1 overflow bin)",ny,nx);
1861 } else {
1862 Info("TUnfold","%d input bins and %d output bins",ny,nx);
1863 }
1865 if(ny<nx) {
1866 Error("TUnfold","too few (ny=%d) input bins for nx=%d output bins",ny,nx);
1867 } else if(ny==nx) {
1868 Warning("TUnfold","too few (ny=%d) input bins for nx=%d output bins",ny,nx);
1869 }
1870 delete[] rowA;
1871 delete[] colA;
1872 delete[] dataA;
1873 // regularisation conditions squared
1874 fL = new TMatrixDSparse(1, GetNx());
1875 if (regmode != kRegModeNone) {
1877 if(nError>0) {
1878 if(nError>1) {
1879 Info("TUnfold",
1880 "%d regularisation conditions have been skipped",nError);
1881 } else {
1882 Info("TUnfold",
1883 "One regularisation condition has been skipped");
1884 }
1885 }
1886 }
1887}
1888
1889////////////////////////////////////////////////////////////////////////
1890/// set bias vector
1891///
1892/// \param[in] bias histogram with new bias vector
1893///
1894/// the initial bias vector is determined from the response matrix
1895/// but may be changed by using this method
1897{
1898 DeleteMatrix(&fX0);
1899 fX0 = new TMatrixD(GetNx(), 1);
1900 for (Int_t i = 0; i < GetNx(); i++) {
1901 (*fX0) (i, 0) = bias->GetBinContent(fXToHist[i]);
1902 }
1903}
1904
1905////////////////////////////////////////////////////////////////////////
1906/// add a row of regularisation conditions to the matrix L
1907///
1908/// \param[in] i0 truth histogram bin number
1909/// \param[in] f0 entry in the matrix L, column i0
1910/// \param[in] i1 truth histogram bin number
1911/// \param[in] f1 entry in the matrix L, column i1
1912/// \param[in] i2 truth histogram bin number
1913/// \param[in] f2 entry in the matrix L, column i2
1914///
1915/// the arguments are used to form one row (k) of the matrix L, where
1916/// <br/> L<sub>k,i0</sub>=f0 and L<sub>k,i1</sub>=f1 and L<sub>k,i2</sub>=f2
1917/// <br>negative indexes i0,i1,i2 are ignored
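///
/// for example (a sketch), a discrete second derivative on truth bins 3,4,5:
/// \code
/// AddRegularisationCondition(3,1.0,4,-2.0,5,1.0); // adds the row x_3 - 2*x_4 + x_5 to L
/// \endcode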
1920{
1921 Int_t indices[3];
1922 Double_t data[3];
1923 Int_t nEle=0;
1924
1925 if(i2>=0) {
1926 data[nEle]=f2;
1927 indices[nEle]=i2;
1928 nEle++;
1929 }
1930 if(i1>=0) {
1931 data[nEle]=f1;
1932 indices[nEle]=i1;
1933 nEle++;
1934 }
1935 if(i0>=0) {
1936 data[nEle]=f0;
1937 indices[nEle]=i0;
1938 nEle++;
1939 }
1940 return AddRegularisationCondition(nEle,indices,data);
1941}
1942
1943////////////////////////////////////////////////////////////////////////
1944/// add a row of regularisation conditions to the matrix L
1945///
1946/// \param[in] nEle number of valid entries in indices and rowData
1947/// \param[in] indices column numbers of L to fill
1948/// \param[in] rowData data to fill into the new row of L
1949///
1950/// returns true if a row was added, false otherwise
1951/// <br/>
1952/// A new row k is added to the matrix L, its dimension is expanded.
1953/// The new elements L<sub>ki</sub> are filled from the array rowData[]
1954/// where the indices i are taken from the array indices[].
1956(Int_t nEle,const Int_t *indices,const Double_t *rowData)
1957{
1958 Bool_t r=kTRUE;
1959 const Int_t *l0_rows=fL->GetRowIndexArray();
1960 const Int_t *l0_cols=fL->GetColIndexArray();
1962
1964 Int_t *l_row=new Int_t[nF];
1965 Int_t *l_col=new Int_t[nF];
1966 Double_t *l_data=new Double_t[nF];
1967 // decode original matrix
1968 nF=0;
1969 for(Int_t row=0;row<fL->GetNrows();row++) {
1970 for(Int_t k=l0_rows[row];k<l0_rows[row+1];k++) {
1971 l_row[nF]=row;
1972 l_col[nF]=l0_cols[k];
1973 l_data[nF]=l0_data[k];
1974 nF++;
1975 }
1976 }
1977
1978 // if the original matrix does not have any data, reset its row count
1979 Int_t rowMax=0;
1980 if(nF>0) {
1981 rowMax=fL->GetNrows();
1982 }
1983
1984 // add the new regularisation conditions
1985 for(Int_t i=0;i<nEle;i++) {
1986 Int_t col=fHistToX[indices[i]];
1987 if(col<0) {
1988 r=kFALSE;
1989 break;
1990 }
1991 l_row[nF]=rowMax;
1992 l_col[nF]=col;
1993 l_data[nF]=rowData[i];
1994 nF++;
1995 }
1996
1997 // replace the old matrix fL
1998 if(r) {
1999 DeleteMatrix(&fL);
2001 }
2002 delete [] l_row;
2003 delete [] l_col;
2004 delete [] l_data;
2005 return r;
2006}
2007
2008////////////////////////////////////////////////////////////////////////
2009/// add a regularisation condition on the magnitude of a truth bin
2010///
2011/// \param[in] bin bin number
2012/// \param[in] scale (default=1) scale factor
2013///
2014/// this adds one row to L, where the element <b>bin</b> takes the
2015/// value <b>scale</b>
2016///
2017/// return value: 0 if ok, 1 if the condition has not been
2018/// added. Conditions which are not added typically correspond to bin
2019/// numbers where the truth can not be unfolded (either response
2020/// matrix is empty or the data do not constrain).
2021///
2022/// The RegularizeXXX() methods can be used to set up a custom matrix
2023/// of regularisation conditions. In this case, start with an empty
2024/// matrix L (argument regmode=kRegModeNone in the constructor)
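///
/// for example (a sketch), after constructing TUnfold with regmode=kRegModeNone:
/// \code
/// unfold.RegularizeSize(3);      // regularise the magnitude of truth bin 3
/// unfold.RegularizeSize(4,2.0);  // truth bin 4, with twice the weight
/// \endcode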
2025
2027{
2028 // add regularisation on the size of bin i
2029 // bin: bin number
2030 // scale: size of the regularisation
2031 // return value: number of conditions which have been skipped
2032 // modifies data member fL
2033
2036
2037 return AddRegularisationCondition(bin,scale) ? 0 : 1;
2038}
2039
2040////////////////////////////////////////////////////////////////////////
2041/// add a regularisation condition on the difference of two truth bins
2042///
2043/// \param[in] left_bin bin number
2044/// \param[in] right_bin bin number
2045/// \param[in] scale (default=1) scale factor
2046///
2047/// this adds one row to L, where the element <b>left_bin</b> takes the
2048/// value <b>-scale</b> and the element <b>right_bin</b> takes the
2049/// value <b>+scale</b>
2050///
2051/// return value: 0 if ok, 1 if the condition has not been
2052/// added. Conditions which are not added typically correspond to bin
2053/// numbers where the truth can not be unfolded (either response
2054/// matrix is empty or the data do not constrain).
2055///
2056/// The RegularizeXXX() methods can be used to set up a custom matrix
2057/// of regularisation conditions. In this case, start with an empty
2058/// matrix L (argument regmode=kRegModeNone in the constructor)
2059
2062{
2063 // add regularisation on the difference of two bins
2064 // left_bin: 1st bin
2065 // right_bin: 2nd bin
2066 // scale: size of the regularisation
2067 // return value: number of conditions which have been skipped
2068 // modifies data member fL
2069
2072
2074}
2075
2076////////////////////////////////////////////////////////////////////////
2077/// add a regularisation condition on the curvature of three truth bins
2078///
2079/// \param[in] left_bin bin number
2080/// \param[in] center_bin bin number
2081/// \param[in] right_bin bin number
2082/// \param[in] scale_left (default=1) scale factor
2083/// \param[in] scale_right (default=1) scale factor
2084///
2085/// this adds one row to L, where the element <b>left_bin</b> takes the
2086/// value <b>-scale_left</b>, the element <b>right_bin</b> takes the
2087/// value <b>-scale_right</b> and the element <b>center_bin</b> takes
2088/// the value <b>scale_left+scale_right</b>
2089///
2090/// return value: 0 if ok, 1 if the condition has not been
2091/// added. Conditions which are not added typically correspond to bin
2092/// numbers where the truth can not be unfolded (either response
2093/// matrix is empty or the data do not constrain).
2094///
2095/// The RegularizeXXX() methods can be used to set up a custom matrix
2096/// of regularisation conditions. In this case, start with an empty
2097/// matrix L (argument regmode=kRegModeNone in the constructor)
2098
2100 int right_bin,
2103{
2104 // add regularisation on the curvature through 3 bins (2nd derivative)
2105 // left_bin: 1st bin
2106 // center_bin: 2nd bin
2107 // right_bin: 3rd bin
2108 // scale_left: scale factor on center-left difference
2109 // scale_right: scale factor on right-center difference
2110 // return value: number of conditions which have been skipped
2111 // modifies data member fL
2112
2115
2120 ? 0 : 1;
2121}
2122
2123////////////////////////////////////////////////////////////////////////
2124/// add regularisation conditions for a group of bins
2125///
2126/// \param[in] start first bin number
2127/// \param[in] step step size
2128/// \param[in] nbin number of bins
2129/// \param[in] regmode regularisation mode (one of: kRegModeSize,
2130/// kRegModeDerivative, kRegModeCurvature)
2131///
2132/// add regularisation conditions for a group of equidistant
2133/// bins. There are <b>nbin</b> bins, starting with bin <b>start</b>
2134/// and with a distance of <b>step</b> between bins.
2135///
2136/// Return value: number of regularisation conditions which could not
2137/// be added.
2138/// <br/>
2139/// Conditions which are not added typically correspond to bin
2140/// numbers where the truth can not be unfolded (either response
2141/// matrix is empty or the data do not constrain).
2142///
2143
2144Int_t TUnfold::RegularizeBins(int start, int step, int nbin,
2146{
2147 // set regularisation on a 1-dimensional curve
2148 // start: first bin
2149 // step: distance between neighbouring bins
2150 // nbin: total number of bins
2151 // regmode: regularisation mode
2152 // return value:
2153 // number of errors (i.e. conditions which have been skipped)
2154 // modifies data member fL
2155
2156 Int_t i0, i1, i2;
2157 i0 = start;
2158 i1 = i0 + step;
2159 i2 = i1 + step;
2160 Int_t nSkip = 0;
2161 Int_t nError= 0;
2162 if (regmode == kRegModeDerivative) {
2163 nSkip = 1;
2164 } else if (regmode == kRegModeCurvature) {
2165 nSkip = 2;
2166 } else if(regmode != kRegModeSize) {
2167 Error("RegularizeBins","regmode = %d is not valid",regmode);
2168 }
2169 for (Int_t i = nSkip; i < nbin; i++) {
2170 if (regmode == kRegModeSize) {
2172 } else if (regmode == kRegModeDerivative) {
2174 } else if (regmode == kRegModeCurvature) {
2176 }
2177 i0 = i1;
2178 i1 = i2;
2179 i2 += step;
2180 }
2181 return nError;
2182}
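// A minimal usage sketch (not part of the original file): curvature
// regularisation along a one-dimensional truth distribution. "unfold" is
// assumed to have been constructed with regmode=kRegModeNone and "nGen" to
// be the number of truth bins; both names are illustrative.
static Int_t ExampleRegularizeBins(TUnfold &unfold,Int_t nGen)
{
   // bins 1..nGen, neighbouring bins are one index apart;
   // the return value counts conditions which could not be added
   return unfold.RegularizeBins(1,1,nGen,TUnfold::kRegModeCurvature);
}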
2183
2184////////////////////////////////////////////////////////////////////////
2185/// add regularisation conditions for 2d unfolding
2186///
2187/// \param[in] start_bin first bin number
2188/// \param[in] step1 step size, 1st dimension
2189/// \param[in] nbin1 number of bins, 1st dimension
2190/// \param[in] step2 step size, 2nd dimension
2191/// \param[in] nbin2 number of bins, 2nd dimension
2192/// \param[in] regmode regularisation mode (one of: kRegModeSize,
2193/// kRegModeDerivative, kRegModeCurvature)
2194///
2195/// add regularisation conditions for a grid of bins. The start bin is
2196/// <b>start_bin</b>. Along the first (second) dimension, there are
2197/// <b>nbin1</b> (<b>nbin2</b>) bins and adjacent bins are spaced by
2198/// <b>step1</b> (<b>step2</b>) units.
2199///
2200/// Return value: number of regularisation conditions which could not
2201/// be added. Conditions which are not added typically correspond to bin
2202/// numbers where the truth can not be unfolded (either response
2203/// matrix is empty or the data do not constrain).
2204///
2206 int step2, int nbin2, ERegMode regmode)
2207{
2208 // set regularisation on a 2-dimensional grid of bins
2209 // start_bin: first bin
2210 // step1: distance between bins in 1st direction
2211 // nbin1: number of bins in 1st direction
2212 // step2: distance between bins in 2nd direction
2213 // nbin2: number of bins in 2nd direction
2214 // return value:
2215 // number of errors (i.e. conditions which have been skipped)
2216 // modifies data member fL
2217
2218 Int_t nError = 0;
2219 for (Int_t i1 = 0; i1 < nbin1; i1++) {
2221 }
2222 for (Int_t i2 = 0; i2 < nbin2; i2++) {
2224 }
2225 return nError;
2226}
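// A minimal usage sketch (not part of the original file): regularisation of a
// truth distribution arranged as an n1 x n2 grid on a single truth axis,
// assuming the mapping bin = 1 + i1 + n1*i2. The mapping is an assumption;
// adapt the start bin and step sizes to the actual bin layout.
static Int_t ExampleRegularizeBins2D(TUnfold &unfold,Int_t n1,Int_t n2)
{
   // first dimension: step 1; second dimension: step n1
   return unfold.RegularizeBins2D(1,1,n1,n1,n2,TUnfold::kRegModeCurvature);
}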
2227
2228////////////////////////////////////////////////////////////////////////
2229/// perform the unfolding for a given input and regularisation
2230///
2231/// \param[in] tau_reg regularisation parameter
2232/// \param[in] input input distribution with uncertainties
2233/// \param[in] scaleBias (default=0.0) scale factor applied to the bias
2234///
2235/// This is a shortcut for { SetInput(input,scaleBias); DoUnfold(tau); }
2238{
2239// Data members required:
2240 // fA, fX0, fL
2241 // Data members modified:
2242 // those documented in SetInput()
2243 // and those documented in DoUnfold(Double_t)
2244 // Return value:
2245 // maximum global correlation coefficient
2246 // NOTE!!! return value >=1.0 means error, and the result is junk
2247 //
2248 // Overflow bins of the input distribution are ignored!
2250 return DoUnfold(tau_reg);
2251}
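// A minimal usage sketch (not part of the original file): one-step unfolding
// at a fixed regularisation strength. "unfold" and "histData" are assumed to
// exist already, and tau=1.E-3 is purely illustrative.
static Double_t ExampleDoUnfoldShortcut(TUnfold &unfold,const TH1 *histData)
{
   Double_t rhoMax=unfold.DoUnfold(1.E-3,histData);
   // a return value >=1.0 signals that the unfolding failed
   return rhoMax;
}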
2252
2253////////////////////////////////////////////////////////////////////////
2254/// Define input data for subsequent calls to DoUnfold(tau)
2255/// \param[in] input input distribution with uncertainties
2256/// \param[in] scaleBias (default=0.0) scale factor applied to the bias
2257/// \param[in] oneOverZeroError (default=0.0) for bins with zero error, this number defines 1/error.
2258/// \param[in] hist_vyy (default=nullptr) if non-zero, this defines the data covariance matrix
2259/// \param[in] hist_vyy_inv (default=nullptr) if non-zero and hist_vyy is
2260/// set, defines the inverse of the data covariance matrix. This
2261/// feature can be useful for repeated unfoldings in cases where the
2262/// inversion of the input covariance matrix is lengthy
2263///
2264/// Return value: nError1+10000*nError2
2265/// <ul>
2266/// <li>nError1: number of bins where the uncertainty is zero.
2267/// these bins either are not used for the unfolding (if
2268/// oneOverZeroError==0) or 1/uncertainty is set to oneOverZeroError.</li>
2269/// <li>nError2: return values>10000 are fatal errors, because the
2270/// unfolding can not be done. The number nError2 corresponds to the
2271/// number of truth bins which are not constrained by data points.
2272/// </li>
2273/// </ul>
2274
2277 const TH2 *hist_vyy_inv)
2278{
2279 // Data members modified:
2280 // fY, fVyy, fBiasScale
2281 // Data members cleared
2282 // fVyyInv, fNdf
2283 // + see ClearResults
2284
2286 fNdf=0;
2287
2289
2290 // delete old results (if any)
2291 ClearResults();
2292
2293 // construct error matrix and inverted error matrix of measured quantities
2294 // from errors of input histogram or use error matrix
2295
2296 Int_t *rowVyyN=new Int_t[GetNy()*GetNy()+1];
2297 Int_t *colVyyN=new Int_t[GetNy()*GetNy()+1];
2298 Double_t *dataVyyN=new Double_t[GetNy()*GetNy()+1];
2299
2300 Int_t *rowVyy1=new Int_t[GetNy()];
2301 Int_t *colVyy1=new Int_t[GetNy()];
2304
2307 Int_t nVyyN=0;
2308 Int_t nVyy1=0;
2309 for (Int_t iy = 0; iy < GetNy(); iy++) {
2310 // diagonals
2311 Double_t dy2;
2312 if(!hist_vyy) {
2313 Double_t dy = input->GetBinError(iy + 1);
2314 dy2=dy*dy;
2315 if (dy2 <= 0.0) {
2316 if(oneOverZeroError>0.0) {
2319 } else {
2320 nVarianceZero++;
2321 }
2322 }
2323 } else {
2324 dy2 = hist_vyy->GetBinContent(iy+1,iy+1);
2325 if (dy2 <= 0.0) {
2326 nVarianceZero++;
2327 }
2328 }
2329 rowVyyN[nVyyN] = iy;
2330 colVyyN[nVyyN] = iy;
2331 rowVyy1[nVyy1] = iy;
2332 colVyy1[nVyy1] = 0;
2333 dataVyyDiag[iy] = dy2;
2334 if(dy2>0.0) {
2335 dataVyyN[nVyyN++] = dy2;
2336 dataVyy1[nVyy1++] = dy2;
2337 }
2338 }
2339 if(hist_vyy) {
2340 // non-diagonal elements
2342 for (Int_t iy = 0; iy < GetNy(); iy++) {
2343 // ignore rows where the diagonal is zero
2344 if(dataVyyDiag[iy]<=0.0) {
2345 for (Int_t jy = 0; jy < GetNy(); jy++) {
2346 if(hist_vyy->GetBinContent(iy+1,jy+1)!=0.0) {
2348 }
2349 }
2350 continue;
2351 }
2352 for (Int_t jy = 0; jy < GetNy(); jy++) {
2353 // skip diagonal elements
2354 if(iy==jy) continue;
2355 // ignore columns where the diagonal is zero
2356 if(dataVyyDiag[jy]<=0.0) continue;
2357
2358 rowVyyN[nVyyN] = iy;
2359 colVyyN[nVyyN] = jy;
2360 dataVyyN[nVyyN]= hist_vyy->GetBinContent(iy+1,jy+1);
2361 if(dataVyyN[nVyyN] == 0.0) continue;
2362 nVyyN ++;
2363 }
2364 }
2365 if(hist_vyy_inv) {
2366 Warning("SetInput",
2367 "inverse of input covariance is taken from user input");
2368 Int_t *rowVyyInv=new Int_t[GetNy()*GetNy()+1];
2369 Int_t *colVyyInv=new Int_t[GetNy()*GetNy()+1];
2371 Int_t nVyyInv=0;
2372 for (Int_t iy = 0; iy < GetNy(); iy++) {
2373 for (Int_t jy = 0; jy < GetNy(); jy++) {
2374 rowVyyInv[nVyyInv] = iy;
2375 colVyyInv[nVyyInv] = jy;
2376 dataVyyInv[nVyyInv]= hist_vyy_inv->GetBinContent(iy+1,jy+1);
2377 if(dataVyyInv[nVyyInv] == 0.0) continue;
2378 nVyyInv ++;
2379 }
2380 }
2383 delete [] rowVyyInv;
2384 delete [] colVyyInv;
2385 delete [] dataVyyInv;
2386 } else {
2387 if(nOffDiagNonzero) {
2388 Error("SetInput",
2389 "input covariance has elements C(X,Y)!=nullptr where V(X)==0");
2390 }
2391 }
2392 }
2396
2397 delete[] rowVyyN;
2398 delete[] colVyyN;
2399 delete[] dataVyyN;
2400
2403
2404 delete[] rowVyy1;
2405 delete[] colVyy1;
2406 delete[] dataVyy1;
2407
2408 //
2409 // get input vector
2410 DeleteMatrix(&fY);
2411 fY = new TMatrixD(GetNy(), 1);
2412 for (Int_t i = 0; i < GetNy(); i++) {
2413 (*fY) (i, 0) = input->GetBinContent(i + 1);
2414 }
2415 // simple check whether unfolding is possible, given the matrices fA and fV
2418 Int_t nError2=0;
2419 Int_t error2bin=-1;
2420 for (Int_t i = 0; i <mAtV->GetNrows();i++) {
2421 if(mAtV->GetRowIndexArray()[i]==
2422 mAtV->GetRowIndexArray()[i+1]) {
2423 nError2 ++;
2424 error2bin=i;
2425 }
2426 }
2427 if(nVarianceForced) {
2428 if(nVarianceForced>1) {
2429 Warning("SetInput","%d/%d input bins have zero error,"
2430 " 1/error set to %lf.",
2432 } else {
2433 Warning("SetInput","One input bin has zero error,"
2434 " 1/error set to %lf.",oneOverZeroError);
2435 }
2436 }
2437 if(nVarianceZero) {
2438 if(nVarianceZero>1) {
2439 Warning("SetInput","%d/%d input bins have zero error,"
2440 " and are ignored.",nVarianceZero,GetNy());
2441 } else {
2442 Warning("SetInput","One input bin has zero error,"
2443 " and is ignored.");
2444 }
2446 }
2447 if(nError2>0) {
2448 // check whether data points with zero error are responsible
2449 if(oneOverZeroError<=0.0) {
2450 //const Int_t *a_rows=fA->GetRowIndexArray();
2451 //const Int_t *a_cols=fA->GetColIndexArray();
2452 for (Int_t col = 0; col <mAtV->GetNrows();col++) {
2453 if(mAtV->GetRowIndexArray()[col]==
2454 mAtV->GetRowIndexArray()[col+1]) {
2455 TString binlist("no data to constrain output bin ");
2457 /* binlist +=" depends on ignored input bins ";
2458 for(Int_t row=0;row<fA->GetNrows();row++) {
2459 if(dataVyyDiag[row]>0.0) continue;
2460 for(Int_t i=a_rows[row];i<a_rows[row+1];i++) {
2461 if(a_cols[i]!=col) continue;
2462 binlist +=" ";
2463 binlist +=row;
2464 }
2465 } */
2466 Warning("SetInput","%s",(char const *)binlist);
2467 }
2468 }
2469 }
2470 if(nError2>1) {
2471 Error("SetInput","%d/%d output bins are not constrained by any data.",
2472 nError2,mAtV->GetNrows());
2473 } else {
2474 Error("SetInput","One output bin [%d] is not constrained by any data.",
2475 error2bin);
2476 }
2477 }
2479
2480 delete[] dataVyyDiag;
2481
2483}
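// A minimal usage sketch (not part of the original file): define the input
// data and decode the status code before unfolding. "histData" is an assumed
// measured histogram with proper bin errors.
static Bool_t ExampleSetInput(TUnfold &unfold,const TH1 *histData)
{
   Int_t status=unfold.SetInput(histData);
   if(status>=10000) {
      // fatal: status/10000 truth bins are not constrained by the data
      return kFALSE;
   }
   if(status>0) {
      // non-fatal: "status" input bins have zero error and are ignored
   }
   return kTRUE;
}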
2484
2485////////////////////////////////////////////////////////////////////////
2486/// perform the unfolding for a given regularisation parameter tau
2487///
2488/// \param[in] tau regularisation parameter
2489///
2490/// this method sets tau and then calls the core unfolding algorithm
2491
2493{
2494 // required data members:
2495 // fA: matrix to relate x and y
2496 // fY: measured data points
2497 // fX0: bias on x
2498 // fBiasScale: scale factor for fX0
2499 // fV: inverse of covariance matrix for y
2500 // fL: regularisation conditions
2501 // modified data members:
2502 // fTauSquared and those documented in DoUnfold(void)
2503 fTauSquared=tau*tau;
2504 return DoUnfold();
2505}
2506
2507////////////////////////////////////////////////////////////////////////
2508// calculate square roots of the eigenvalues of the matrix E
2509// returns: vector of the square roots of the eigenvalues
2511 int nRow=GetE()->GetNrows();
2513 const Int_t *rows=GetE()->GetRowIndexArray();
2514 const Int_t *cols=GetE()->GetColIndexArray();
2515 const Double_t *data=GetE()->GetMatrixArray();
2516 for(Int_t irow=0;irow<nRow;irow++) {
2517 for(Int_t indexE=rows[irow];indexE<rows[irow+1];indexE++) {
2520 }
2521 }
2523 TVectorD r=ev.GetEigenValues();
2524 for(int i=0;i<r.GetNrows();i++) {
2525 double evi=r(i);
2526 if(evi<0.) evi= - TMath::Sqrt(-evi);
2527 else evi = TMath::Sqrt(evi);
2528 r(i)=evi;
2529 }
2530 return r;
2531}
2532
2533////////////////////////////////////////////////////////////////////////
2534/// scan the L curve, determine tau and unfold at the final value of
2535/// tau
2536///
2537/// \param[in] nPoint number of points used for the scan
2538/// \param[in] tauMin smallest tau value to study
2539/// \param[in] tauMax largest tau value to study. If tauMin=tauMax=0,
2540/// a scan interval is determined automatically.
2541/// \param[out] lCurve if nonzero, a new TGraph is returned,
2542/// containing the L-curve
2543/// \param[out] logTauX if nonzero, a new TSpline is returned, to
2544/// parameterize the L-curve's x-coordinates as a function of log10(tau)
2545/// \param[out] logTauY if nonzero, a new TSpline is returned, to
2546/// parameterize the L-curve's y-coordinates as a function of log10(tau)
2547/// \param[out] logTauCurvature if nonzero, a new TSpline is returned
2548/// of the L-curve curvature as a function of log10(tau)
2549///
2550/// return value: the coordinate number in the logTauX,logTauY graphs
2551/// corresponding to the "final" choice of tau
2552///
2553/// Recommendation: always check <b>logTauCurvature</b>, it
2554/// should be a peaked function (similar to a Gaussian), the maximum
2555/// corresponding to the final choice of tau. Also, check the <b>lCurve</b>;
2556/// it should be approximately L-shaped. If in doubt, adjust tauMin
2557/// and tauMax until the results are satisfactory.
2558
2563{
2564 typedef std::map<Double_t,std::pair<Double_t,Double_t> > XYtau_t;
2565 XYtau_t curve;
2566
2567 //==========================================================
2568 // algorithm:
2569 // (1) do the unfolding for nPoint-1 points
2570 // and store the results in the map
2571 // curve
2572 // (1a) store minimum and maximum tau to curve
2573 // (1b) insert additional points, until nPoint-1 values
2574 // have been calculated
2575 //
2576 // (2) determine the best choice of tau
2577 // do the unfolding for this point
2578 // and store the result in
2579 // curve
2580 // (3) return the result in
2581 // lCurve logTauX logTauY
2582
2583 //==========================================================
2584 // (1) do the unfolding for nPoint-1 points
2585 // and store the results in
2586 // curve
2587 // (1a) store minimum and maximum tau to curve
2588
2589 if((tauMin<=0)||(tauMax<=0.0)||(tauMin>=tauMax)) {
2590 // here no range is given, has to be determined automatically
2591 // the maximum tau is determined from the chi**2 values
2592 // observed from unfolding without regularisation
2593
2594 // first unfolding, without regularisation
2595 DoUnfold(0.0);
2596
2597 // if the number of degrees of freedom is too small, create an error
2598 if(GetNdf()<=0) {
2599 Error("ScanLcurve","too few input bins, NDF<=nullptr %d",GetNdf());
2600 }
2601
2602 Double_t x0=GetLcurveX();
2604 Info("ScanLcurve","logtau=-Infinity X=%lf Y=%lf",x0,y0);
2605 if(!TMath::Finite(x0)) {
2606 Fatal("ScanLcurve","problem (too few input bins?) X=%f",x0);
2607 }
2608 if(!TMath::Finite(y0)) {
2609 Fatal("ScanLcurve","problem (missing regularisation?) Y=%f",y0);
2610 }
2611 {
2612 // unfolding guess maximum tau and store it
2614 0.5*(TMath::Log10(fChi2A+3.*TMath::Sqrt(GetNdf()+1.0))
2615 -GetLcurveY());
2617
2619 Fatal("ScanLcurve","problem (missing regularisation?) X=%f Y=%f",
2621 }
2622 curve[logTau]=std::make_pair(GetLcurveX(),GetLcurveY());
2623 Info("ScanLcurve","logtau=%lf X=%lf Y=%lf",
2625 }
2626 if((*curve.begin()).second.first<x0) {
2627 // if the point at tau==0 seems numerically unstable,
2628 // try to find the minimum chi**2 as start value
2629 //
2630 // "unstable" means that there is a finite tau where the
2631 // unfolding chi**2 is smaller than for the case of no
2632 // regularisation. Ideally this should never happen
2633 do {
2634 x0=GetLcurveX();
2635 Double_t logTau=(*curve.begin()).first-0.5;
2638 Fatal("ScanLcurve","problem (missing regularisation?) X=%f Y=%f",
2640 }
2641 curve[logTau]=std::make_pair(GetLcurveX(),GetLcurveY());
2642 Info("ScanLcurve","logtau=%lf X=%lf Y=%lf",
2644 }
2645 while(((int)curve.size()<(nPoint-1)/2)&&
2646 ((*curve.begin()).second.first<x0));
2647 } else {
2648 // the minimum tau is chosen such that it is less than
2649 // 1% different from the case of no regularisation
2650 // log10(1.01) = 0.00432
2651
2652 // here, more than one point are inserted if necessary
2653 while(((int)curve.size()<nPoint-1)&&
2654 (((*curve.begin()).second.first-x0>0.00432)||
2655 ((*curve.begin()).second.second-y0>0.00432)||
2656 (curve.size()<2))) {
2657 Double_t logTau=(*curve.begin()).first-0.5;
2660 Fatal("ScanLcurve","problem (missing regularisation?) X=%f Y=%f",
2662 }
2663 curve[logTau]=std::make_pair(GetLcurveX(),GetLcurveY());
2664 Info("ScanLcurve","logtau=%lf X=%lf Y=%lf",
2666 }
2667 }
2668 } else {
2671 if(nPoint>1) {
2672 // insert maximum tau
2675 Fatal("ScanLcurve","problem (missing regularisation?) X=%f Y=%f",
2677 }
2678 Info("ScanLcurve","logtau=%lf X=%lf Y=%lf",
2680 curve[logTauMax]=std::make_pair(GetLcurveX(),GetLcurveY());
2681 }
2682 // insert minimum tau
2685 Fatal("ScanLcurve","problem (missing regularisation?) X=%f Y=%f",
2687 }
2688 Info("ScanLcurve","logtau=%lf X=%lf Y=%lf",
2690 curve[logTauMin]=std::make_pair(GetLcurveX(),GetLcurveY());
2691 }
2692
2693
2694 //==========================================================
2695 // (1b) insert additional points, until nPoint-1 values
2696 // have been calculated
2697
2698 while(int(curve.size())<nPoint-1) {
2699 // insert additional points, such that the sizes of the delta(XY) vectors
2700 // are getting smaller and smaller
2702 i0=curve.begin();
2703 i1=i0;
2704 Double_t logTau=(*i0).first;
2705 Double_t distMax=0.0;
2706 for(i1++;i1!=curve.end();i1++) {
2707 const std::pair<Double_t,Double_t> &xy0=(*i0).second;
2708 const std::pair<Double_t,Double_t> &xy1=(*i1).second;
2709 Double_t dx=xy1.first-xy0.first;
2710 Double_t dy=xy1.second-xy0.second;
2712 if(d>=distMax) {
2713 distMax=d;
2714 logTau=0.5*((*i0).first+(*i1).first);
2715 }
2716 i0=i1;
2717 }
2720 Fatal("ScanLcurve","problem (missing regularisation?) X=%f Y=%f",
2722 }
2723 Info("ScanLcurve","logtau=%lf X=%lf Y=%lf",logTau,GetLcurveX(),GetLcurveY());
2724 curve[logTau]=std::make_pair(GetLcurveX(),GetLcurveY());
2725 }
2726
2727 //==========================================================
2728 // (2) determine the best choice of tau
2729 // do the unfolding for this point
2730 // and store the result in
2731 // curve
2733 i0=curve.begin();
2734 i1=i0;
2735 i1++;
2736 Double_t logTauFin=(*i0).first;
2737 if( ((int)curve.size())<nPoint) {
2738 // set up splines and determine (x,y) curvature in each point
2739 Double_t *cTi=new Double_t[curve.size()-1];
2740 Double_t *cCi=new Double_t[curve.size()-1];
2741 Int_t n=0;
2742 {
2743 Double_t *lXi=new Double_t[curve.size()];
2744 Double_t *lYi=new Double_t[curve.size()];
2745 Double_t *lTi=new Double_t[curve.size()];
2746 for( XYtau_t::const_iterator i=curve.begin();i!=curve.end();i++) {
2747 lXi[n]=(*i).second.first;
2748 lYi[n]=(*i).second.second;
2749 lTi[n]=(*i).first;
2750 n++;
2751 }
2752 TSpline3 *splineX=new TSpline3("x vs tau",lTi,lXi,n);
2753 TSpline3 *splineY=new TSpline3("y vs tau",lTi,lYi,n);
2754 // calculate (x,y) curvature for all points
2755 // the curvature is stored in the array cCi[] as a function of cTi[]
2756 for(Int_t i=0;i<n-1;i++) {
2758 splineX->GetCoeff(i,ltau,xy,bi,ci,di);
2759 Double_t tauBar=0.5*(lTi[i]+lTi[i+1]);
2760 Double_t dTau=0.5*(lTi[i+1]-lTi[i]);
2761 Double_t dx1=bi+dTau*(2.*ci+3.*di*dTau);
2762 Double_t dx2=2.*ci+6.*di*dTau;
2763 splineY->GetCoeff(i,ltau,xy,bi,ci,di);
2764 Double_t dy1=bi+dTau*(2.*ci+3.*di*dTau);
2765 Double_t dy2=2.*ci+6.*di*dTau;
2766 cTi[i]=tauBar;
2767 cCi[i]=(dy2*dx1-dy1*dx2)/TMath::Power(dx1*dx1+dy1*dy1,1.5);
2768 }
2769 delete splineX;
2770 delete splineY;
2771 delete[] lXi;
2772 delete[] lYi;
2773 delete[] lTi;
2774 }
2775 // create curvature Spline
2776 TSpline3 *splineC=new TSpline3("L curve curvature",cTi,cCi,n-1);
2777 // find the maximum of the curvature
2778 // if the parameter iskip is non-zero, then iskip points are
2779 // ignored when looking for the largest curvature
2780 // (there are problems with the curvature determined from the first
2781 // few points of splineX,splineY in the algorithm above)
2782 Int_t iskip=0;
2783 if(n>4) iskip=1;
2784 if(n>7) iskip=2;
2787 for(Int_t i=iskip;i<n-2-iskip;i++) {
2788 // find maximum on this spline section
2789 // check boundary conditions for x[i+1]
2790 Double_t xMax=cTi[i+1];
2791 Double_t yMax=cCi[i+1];
2792 if(cCi[i]>yMax) {
2793 yMax=cCi[i];
2794 xMax=cTi[i];
2795 }
2796 // find maximum for x[i]<x<x[i+1]
2797 // get spline coefficients and solve equation
2798 // derivative(x)==0
2799 Double_t x,y,b,c,d;
2800 splineC->GetCoeff(i,x,y,b,c,d);
2801 // coefficients of quadratic equation
2802 Double_t m_p_half=-c/(3.*d);
2803 Double_t q=b/(3.*d);
2805 if(discr>=0.0) {
2806 // solution found
2808 Double_t xx;
2809 if(m_p_half>0.0) {
2810 xx = m_p_half + discr;
2811 } else {
2812 xx = m_p_half - discr;
2813 }
2814 Double_t dx=cTi[i+1]-x;
2815 // check first solution
2816 if((xx>0.0)&&(xx<dx)) {
2817 y=splineC->Eval(x+xx);
2818 if(y>yMax) {
2819 yMax=y;
2820 xMax=x+xx;
2821 }
2822 }
2823 // second solution
2824 if(xx !=0.0) {
2825 xx= q/xx;
2826 } else {
2827 xx=0.0;
2828 }
2829 // check second solution
2830 if((xx>0.0)&&(xx<dx)) {
2831 y=splineC->Eval(x+xx);
2832 if(y>yMax) {
2833 yMax=y;
2834 xMax=x+xx;
2835 }
2836 }
2837 }
2838 // check whether this local maximum is the global maximum
2839 if(yMax>cCmax) {
2840 cCmax=yMax;
2841 cTmax=xMax;
2842 }
2843 }
2844 if(logTauCurvature) {
2846 } else {
2847 delete splineC;
2848 }
2849 delete[] cTi;
2850 delete[] cCi;
2854 Fatal("ScanLcurve","problem (missing regularisation?) X=%f Y=%f",
2856 }
2857 Info("ScanLcurve","Result logtau=%lf X=%lf Y=%lf",
2859 curve[logTauFin]=std::make_pair(GetLcurveX(),GetLcurveY());
2860 }
2861
2862
2863 //==========================================================
2864 // (3) return the result in
2865 // lCurve logTauX logTauY
2866
2867 Int_t bestChoice=-1;
2868 if(!curve.empty()) {
2869 Double_t *x=new Double_t[curve.size()];
2870 Double_t *y=new Double_t[curve.size()];
2871 Double_t *logT=new Double_t[curve.size()];
2872 int n=0;
2873 for( XYtau_t::const_iterator i=curve.begin();i!=curve.end();i++) {
2874 if(logTauFin==(*i).first) {
2875 bestChoice=n;
2876 }
2877 x[n]=(*i).second.first;
2878 y[n]=(*i).second.second;
2879 logT[n]=(*i).first;
2880 n++;
2881 }
2882 if(lCurve) {
2883 (*lCurve)=new TGraph(n,x,y);
2884 (*lCurve)->SetTitle("L curve");
2885 }
2886 if(logTauX) (*logTauX)=new TSpline3("log(chi**2)%log(tau)",logT,x,n);
2887 if(logTauY) (*logTauY)=new TSpline3("log(reg.cond)%log(tau)",logT,y,n);
2888 delete[] x;
2889 delete[] y;
2890 delete[] logT;
2891 }
2892
2893 return bestChoice;
2894}
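// A minimal usage sketch (not part of the original file): determine tau from
// the L-curve scan. "unfold" is assumed to have received its data via
// SetInput(); the number of scan points is illustrative.
static void ExampleScanLcurve(TUnfold &unfold)
{
   TGraph *lCurve=nullptr;
   TSpline *logTauX=nullptr,*logTauY=nullptr,*logTauCurvature=nullptr;
   // tauMin=tauMax=0: the scan range is determined automatically
   Int_t iBest=unfold.ScanLcurve(30,0.,0.,&lCurve,
                                 &logTauX,&logTauY,&logTauCurvature);
   // the unfolding is left at the chosen tau; iBest is the corresponding
   // point index in logTauX and logTauY
   Double_t tauBest=unfold.GetTau();
   (void)iBest; (void)tauBest;
   delete lCurve; delete logTauX; delete logTauY; delete logTauCurvature;
}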
2895
2896////////////////////////////////////////////////////////////////////////
2897/// histogram of truth bins, determined from summing over the response matrix
2898///
2899/// \param[out] out histogram to store the truth bins. The bin contents
2900/// are overwritten
2901/// \param[in] binMap (default=nullptr) array for mapping truth bins to histogram bins
2902///
2903/// This vector is also used to initialize the bias
2904/// x<sub>0</sub>. However, the bias vector may be changed using the
2905/// SetBias() method.
2906///
2907/// The use of <b>binMap</b> is explained with the documentation of
2908/// the GetOutput() method
2909///
2910
2912{
2913
2914 ClearHistogram(out);
2915 for (Int_t i = 0; i < GetNx(); i++) {
2916 int dest=binMap ? binMap[fXToHist[i]] : fXToHist[i];
2917 if(dest>=0) {
2918 out->SetBinContent(dest, fSumOverY[i] + out->GetBinContent(dest));
2919 }
2920 }
2921}
2922
2923////////////////////////////////////////////////////////////////////////
2924/// get bias vector including bias scale
2925///
2926/// \param[out] out histogram to store the scaled bias vector. The bin
2927/// contents are overwritten
2928/// \param[in] binMap (default=nullptr) array for mapping truth bins to histogram bins
2929///
2930/// This method returns the bias vector times scaling factor, f*x<sub>0</sub>
2931///
2932/// The use of <b>binMap</b> is explained with the documentation of
2933/// the GetOutput() method
2934///
2935
2936void TUnfold::GetBias(TH1 *out,const Int_t *binMap) const
2937{
2938
2939 ClearHistogram(out);
2940 for (Int_t i = 0; i < GetNx(); i++) {
2941 int dest=binMap ? binMap[fXToHist[i]] : fXToHist[i];
2942 if(dest>=0) {
2943 out->SetBinContent(dest, fBiasScale*(*fX0) (i, 0) +
2944 out->GetBinContent(dest));
2945 }
2946 }
2947}
2948
2949////////////////////////////////////////////////////////////////////////
2950/// get unfolding result on detector level
2951///
2952/// \param[out] out histogram to store the folded output. The bin
2953/// contents and errors are overwritten.
2954/// \param[in] binMap (default=nullptr) array for mapping truth bins to histogram bins
2955///
2956/// This method returns the unfolding output folded by the response
2957/// matrix, i.e. the vector Ax.
2958///
2959/// The use of <b>binMap</b> is explained with the documentation of
2960/// the GetOutput() method
2961///
2962
2964{
2965 ClearHistogram(out);
2966
2968
2969 const Int_t *rows_A=fA->GetRowIndexArray();
2970 const Int_t *cols_A=fA->GetColIndexArray();
2971 const Double_t *data_A=fA->GetMatrixArray();
2972 const Int_t *rows_AVxx=AVxx->GetRowIndexArray();
2973 const Int_t *cols_AVxx=AVxx->GetColIndexArray();
2974 const Double_t *data_AVxx=AVxx->GetMatrixArray();
2975
2976 for (Int_t i = 0; i < GetNy(); i++) {
2977 Int_t destI = binMap ? binMap[i+1] : i+1;
2978 if(destI<0) continue;
2979
2980 out->SetBinContent(destI, (*fAx) (i, 0)+ out->GetBinContent(destI));
2981 Double_t e2=0.0;
2982 Int_t index_a=rows_A[i];
2984 while((index_a<rows_A[i+1])&&(index_av<rows_AVxx[i+1])) {
2987 if(j_a<j_av) {
2988 index_a++;
2989 } else if(j_a>j_av) {
2990 index_av++;
2991 } else {
2993 index_a++;
2994 index_av++;
2995 }
2996 }
2997 out->SetBinError(destI,TMath::Sqrt(e2));
2998 }
3000}
3001
3002////////////////////////////////////////////////////////////////////////
3003/// get matrix of probabilities
3004///
3005/// \param[out] A two-dimensional histogram to store the
3006/// probabilities (normalized response matrix). The bin contents are
3007/// overwritten for those bins where A is nonzero
3008/// \param[in] histmap specify axis along which the truth bins are
3009/// oriented
3010
3012{
3013 // retrieve matrix of probabilities
3014 // histmap: on which axis to arrange the input/output vector
3015 // A: histogram to store the probability matrix
3016 const Int_t *rows_A=fA->GetRowIndexArray();
3017 const Int_t *cols_A=fA->GetColIndexArray();
3018 const Double_t *data_A=fA->GetMatrixArray();
3019 for (Int_t iy = 0; iy <fA->GetNrows(); iy++) {
3020 for(Int_t indexA=rows_A[iy];indexA<rows_A[iy+1];indexA++) {
3021 Int_t ix = cols_A[indexA];
3022 Int_t ih=fXToHist[ix];
3024 A->SetBinContent(ih, iy+1,data_A[indexA]);
3025 } else {
3026 A->SetBinContent(iy+1, ih,data_A[indexA]);
3027 }
3028 }
3029 }
3030}
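// A minimal usage sketch (not part of the original file): retrieve the
// probability matrix into a freshly booked histogram. "nGen" and "nRec" are
// assumed to match the response-matrix binning; the names and the histmap
// choice are illustrative.
static TH2D *ExampleGetProbabilityMatrix(TUnfold &unfold,Int_t nGen,Int_t nRec)
{
   TH2D *prob=new TH2D("probabilityMatrix",";truth bin;reco bin",
                       nGen,0.5,nGen+0.5,nRec,0.5,nRec+0.5);
   unfold.GetProbabilityMatrix(prob,TUnfold::kHistMapOutputHoriz);
   return prob;
}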
3031
3032////////////////////////////////////////////////////////////////////////
3033/// get matrix describing how the result changes with the input data
3034///
3035/// \param[out] dxdy two-dimensional histogram to store the
3036/// matrix connecting the output and input data.
3037/// The bin contents are overwritten for those bins where dxdy is non-zero.
3038
3040{
3041 // retrieve matrix connecting input and output
3042 // dxdy: histogram to store the matrix dx/dy
3046 for (Int_t ix = 0; ix <fDXDY->GetNrows(); ix++) {
3049 Int_t ih=fXToHist[ix];
3050 dxdy->SetBinContent(ih, iy+1,data_dxdy[indexDXDY]);
3051 }
3052 }
3053}
3054
3055////////////////////////////////////////////////////////////////////////
3056/// Input vector of measurements
3057///
3058/// \param[out] out histogram to store the measurements. Bin content
3059/// and bin errors are overwritten.
3060/// \param[in] binMap (default=nullptr) array for mapping truth bins to histogram bins
3061///
3062/// Bins which had an uncertainty of zero in the call to SetInput()
3063/// may acquire bin contents or bin errors different from the
3064/// original settings in SetInput().
3065///
3066/// The use of <b>binMap</b> is explained with the documentation of
3067/// the GetOutput() method
3068///
3069
3070void TUnfold::GetInput(TH1 *out,const Int_t *binMap) const
3071{
3072 ClearHistogram(out);
3073
3077
3078 for (Int_t i = 0; i < GetNy(); i++) {
3079 Int_t destI=binMap ? binMap[i+1] : i+1;
3080 if(destI<0) continue;
3081
3082 out->SetBinContent(destI, (*fY) (i, 0)+out->GetBinContent(destI));
3083
3084 Double_t e=0.0;
3085 for(int index=rows_Vyy[i];index<rows_Vyy[i+1];index++) {
3086 if(cols_Vyy[index]==i) {
3088 }
3089 }
3090 out->SetBinError(destI, e);
3091 }
3092}
3093
3094////////////////////////////////////////////////////////////////////////
3095/// get inverse of the measurement's covariance matrix
3096///
3097/// \param[out] out histogram to store the inverted covariance
3098
3100{
3101 // calculate the inverse of the contribution to the error matrix
3102 // corresponding to the input data
3103 if(!fVyyInv) {
3104 Int_t rank=0;
3106 // and count number of degrees of freedom
3107 fNdf = rank-GetNpar();
3108
3109 if(rank<GetNy()-fIgnoredBins) {
3110 Warning("GetInputInverseEmatrix","input covariance matrix has rank %d expect %d",
3111 rank,GetNy());
3112 }
3113 if(fNdf<0) {
3114 Error("GetInputInverseEmatrix","number of parameters %d > %d (rank of input covariance). Problem can not be solved",GetNpar(),rank);
3115 } else if(fNdf==0) {
3116 Warning("GetInputInverseEmatrix","number of parameters %d = input rank %d. Problem is ill posed",GetNpar(),rank);
3117 }
3118 }
3119 if(out) {
3120 // return matrix as histogram
3124
3125 for(int i=0;i<=out->GetNbinsX()+1;i++) {
3126 for(int j=0;j<=out->GetNbinsY()+1;j++) {
3127 out->SetBinContent(i,j,0.);
3128 }
3129 }
3130
3131 for (Int_t i = 0; i < fVyyInv->GetNrows(); i++) {
3132 for(int index=rows_VyyInv[i];index<rows_VyyInv[i+1];index++) {
3134 out->SetBinContent(i+1,j+1,data_VyyInv[index]);
3135 }
3136 }
3137 }
3138}
3139
3140////////////////////////////////////////////////////////////////////////
3141/// get matrix of regularisation conditions squared
3142///
3143/// \param[out] out histogram to store the squared matrix of
3144/// regularisation conditions. the bin contents are overwritten
3145///
3146/// This returns the square matrix L<sup>T</sup>L as a histogram
3147///
3148/// The histogram should have dimension nx times nx, where nx
3149/// corresponds to the number of histogram bins in the response matrix
3150/// along the truth axis.
3151
3153{
3154 // retrieve matrix of regularisation conditions squared
3155 // out: prebooked matrix
3156
3158 // loop over sparse matrix
3159 const Int_t *rows=lSquared->GetRowIndexArray();
3160 const Int_t *cols=lSquared->GetColIndexArray();
3161 const Double_t *data=lSquared->GetMatrixArray();
3162 for (Int_t i = 0; i < GetNx(); i++) {
3163 for (Int_t cindex = rows[i]; cindex < rows[i+1]; cindex++) {
3164 Int_t j=cols[cindex];
3165 out->SetBinContent(fXToHist[i], fXToHist[j],data[cindex]);
3166 }
3167 }
3169}
3170
3171////////////////////////////////////////////////////////////////////////
3172/// get number of regularisation conditions
3173///
3174/// This returns the number of regularisation conditions, useful for
3175/// booking a histogram for a subsequent call of GetL().
3176
3178 return fL->GetNrows();
3179}
3180
3181////////////////////////////////////////////////////////////////////////
3182/// get matrix of regularisation conditions
3183///
3184/// \param[out] out histogram to store the regularisation conditions.
3185/// the bin contents are overwritten
3186///
3187/// The histogram should have dimension nr (y-axis) times nx (x-axis).
3188/// nr corresponds to the number of regularisation conditions, it can
3189/// be obtained using the method GetNr(). nx corresponds to the number
3190/// of histogram bins in the response matrix along the truth axis.
3191
3192void TUnfold::GetL(TH2 *out) const
3193{
3194 // loop over sparse matrix
3195 const Int_t *rows=fL->GetRowIndexArray();
3196 const Int_t *cols=fL->GetColIndexArray();
3197 const Double_t *data=fL->GetMatrixArray();
3198 for (Int_t row = 0; row < GetNr(); row++) {
3199 for (Int_t cindex = rows[row]; cindex < rows[row+1]; cindex++) {
3200 Int_t col=cols[cindex];
3201 Int_t indexH=fXToHist[col];
3202 out->SetBinContent(indexH,row+1,data[cindex]);
3203 }
3204 }
3205}
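// A minimal usage sketch (not part of the original file): book a histogram
// matching the matrix of regularisation conditions and fill it. "nGen" is
// assumed to be the number of truth bins of the response matrix; names and
// binning are illustrative.
static TH2D *ExampleGetL(TUnfold &unfold,Int_t nGen)
{
   Int_t nR=unfold.GetNr();
   TH2D *histL=new TH2D("histL",";truth bin;regularisation condition",
                        nGen,0.5,nGen+0.5,nR,0.5,nR+0.5);
   unfold.GetL(histL);
   return histL;
}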
3206
3207////////////////////////////////////////////////////////////////////////
3208/// set type of area constraint
3209///
3210/// results of a previous unfolding are reset
3211
3213{
3214 // set type of constraint for the next unfolding
3215 if(fConstraint !=constraint) ClearResults();
3216 fConstraint=constraint;
3217 Info("SetConstraint","fConstraint=%d",fConstraint);
3218}
3219
3220
3221////////////////////////////////////////////////////////////////////////
3222/// return regularisation parameter
3223///
3225{
3226 // return regularisation parameter
3227 return TMath::Sqrt(fTauSquared);
3228}
3229
3230////////////////////////////////////////////////////////////////////////
3231/// get &chi;<sup>2</sup><sub>L</sub> contribution determined in recent unfolding
3233{
3234 // return chi**2 contribution from regularisation conditions
3235 return fLXsquared*fTauSquared;
3236}
3237
3238////////////////////////////////////////////////////////////////////////
3239/// get number of truth parameters determined in recent unfolding
3240///
3241/// empty bins of the response matrix or bins which can not be
3242/// unfolded due to rank deficits are not counted
3244{
3245 return GetNx();
3246}
3247
3248////////////////////////////////////////////////////////////////////////
3249/// get value on x-axis of L-curve determined in recent unfolding
3250///
3251/// x=log<sub>10</sub>(GetChi2A())
3253{
3254 return TMath::Log10(fChi2A);
3255}
3256
3257////////////////////////////////////////////////////////////////////////
3258/// get value on y-axis of L-curve determined in recent unfolding
3259///
3260/// y=log<sub>10</sub>(GetChi2L())
3262{
3263 return TMath::Log10(fLXsquared);
3264}
3265
3266////////////////////////////////////////////////////////////////////////
3267/// get output distribution, possibly cumulated over several bins
3268///
3269/// \param[out] output existing output histogram. content and errors
3270/// will be updated.
3271/// \param[in] binMap (default=nullptr) array for mapping truth bins to histogram bins
3272///
3273/// If nonzero, the array <b>binMap</b> must have dimension n+2, where n
3274/// corresponds to the number of bins on the truth axis of the response
3275/// matrix (the histogram specified with the TUnfold
3276/// constructor). The indexes of <b>binMap</b> correspond to the truth
3277/// bins (including underflow and overflow) of the response matrix.
3278/// The element binMap[i] specifies the histogram number in
3279/// <b>output</b> where the corresponding truth bin will be stored. It is
3280/// possible to specify the same <b>output</b> bin number for multiple
3281/// indexes, in which case these bins are added. Set binMap[i]=-1 to
3282/// ignore an unfolded truth bin. The uncertainties are
3283/// calculated from the corresponding parts of the covariance matrix,
3284/// properly taking care of added truth bins.
3285/// <br/>
3286/// If the pointer <b>binMap</b> is zero, the bins are mapped
3287/// one-to-one. Truth bin zero (underflow) is stored in the
3288/// <b>output</b> underflow, truth bin 1 is stored in bin number 1, etc.
3289
3291{
3292 //
3293 // output: output histogram
3294 // binMap: for each bin of the original output distribution
3295 // specify the destination bin. A value of -1 means that the bin
3296 // is discarded. 0 means underflow bin, 1 first bin, ...
3297 // binMap[0] : destination of underflow bin
3298 // binMap[1] : destination of first bin
3299 // ...
3300
3302 /* Int_t nbin=output->GetNbinsX();
3303 Double_t *c=new Double_t[nbin+2];
3304 Double_t *e2=new Double_t[nbin+2];
3305 for(Int_t i=0;i<nbin+2;i++) {
3306 c[i]=0.0;
3307 e2[i]=0.0;
3308 } */
3309
3310 std::map<Int_t,Double_t> e2;
3311
3315
3317 for(Int_t i=0;i<binMapSize;i++) {
3318 Int_t destBinI=binMap ? binMap[i] : i; // histogram bin
3319 Int_t srcBinI=fHistToX[i]; // matrix row index
3320 if((destBinI>=0)&&(srcBinI>=0)) {
3321 output->SetBinContent
3322 (destBinI, (*fX)(srcBinI,0)+ output->GetBinContent(destBinI));
3323 // here we loop over the columns of the error matrix
3324 // j: counts histogram bins
3325 // index: counts sparse matrix index
3326 // the algorithm makes use of the fact that fHistToX is ordered
3327 Int_t j=0; // histogram bin
3329 //Double_t e2=TMath::Power(output->GetBinError(destBinI),2.);
3330 while((j<binMapSize)&&(index_vxx<rows_Vxx[srcBinI+1])) {
3331 Int_t destBinJ=binMap ? binMap[j] : j; // histogram bin
3332 if(destBinI!=destBinJ) {
3333 // only diagonal elements are calculated
3334 j++;
3335 } else {
3336 Int_t srcBinJ=fHistToX[j]; // matrix column index
3337 if(srcBinJ<0) {
3338 // bin was not unfolded, check next bin
3339 j++;
3340 } else {
3342 // index is too small
3343 index_vxx++;
3344 } else if(cols_Vxx[index_vxx]>srcBinJ) {
3345 // index is too large, skip bin
3346 j++;
3347 } else {
3348 // add this bin
3350 j++;
3351 index_vxx++;
3352 }
3353 }
3354 }
3355 }
3356 //output->SetBinError(destBinI,TMath::Sqrt(e2));
3357 }
3358 }
3359 for(std::map<Int_t,Double_t>::const_iterator i=e2.begin();
3360 i!=e2.end();i++) {
3361 //cout<<(*i).first<<" "<<(*i).second<<"\n";
3362 output->SetBinError((*i).first,TMath::Sqrt((*i).second));
3363 }
3364}
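// A minimal usage sketch (not part of the original file): retrieve the result
// while merging pairs of adjacent truth bins. "nGen" is the assumed number of
// truth bins and "output" an existing histogram with nGen/2 bins; the merging
// scheme is illustrative.
static void ExampleGetOutputBinMap(TUnfold &unfold,TH1 *output,Int_t nGen)
{
   Int_t *binMap=new Int_t[nGen+2];
   binMap[0]=-1;              // discard the truth underflow bin
   binMap[nGen+1]=-1;         // discard the truth overflow bin
   for(Int_t i=1;i<=nGen;i++) {
      binMap[i]=(i+1)/2;      // truth bins (1,2)->1, (3,4)->2, ...
   }
   unfold.GetOutput(output,binMap);
   delete [] binMap;
}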
3365
3366////////////////////////////////////////////////////////////////////////
3367/// add up an error matrix, also respecting the bin mapping
3368///
3369/// \param[inout] ematrix error matrix histogram
3370/// \param[in] emat error matrix stored with internal mapping (member fXToHist)
3371/// \param[in] binMap mapping of histogram bins
3372/// \param[in] doClear if true, ematrix is cleared prior to adding
3373/// elements of emat to it.
3374///
3375/// the array <b>binMap</b> is explained with the method GetOutput(). The
3376/// matrix emat must have dimension NxN where N=fXToHist.size()
3377/// The flag <b>doClear</b> may be used to add covariance matrices from
3378/// several uncertainty sources.
3379///
3381 const Int_t *binMap,Bool_t doClear) const
3382{
3383 Int_t nbin=ematrix->GetNbinsX();
3384 if(doClear) {
3385 for(Int_t i=0;i<nbin+2;i++) {
3386 for(Int_t j=0;j<nbin+2;j++) {
3387 ematrix->SetBinContent(i,j,0.0);
3388 ematrix->SetBinError(i,j,0.0);
3389 }
3390 }
3391 }
3392
3393 if(emat) {
3394 const Int_t *rows_emat=emat->GetRowIndexArray();
3395 const Int_t *cols_emat=emat->GetColIndexArray();
3396 const Double_t *data_emat=emat->GetMatrixArray();
3397
3399 for(Int_t i=0;i<binMapSize;i++) {
3400 Int_t destBinI=binMap ? binMap[i] : i;
3402 if((destBinI>=0)&&(destBinI<nbin+2)&&(srcBinI>=0)) {
3403 // here we loop over the columns of the source matrix
3404 // j: counts histogram bins
3405 // index: counts sparse matrix index
3406 // the algorithm makes use of the fact that fHistToX is ordered
3407 Int_t j=0;
3409 while((j<binMapSize)&&(index_vxx<rows_emat[srcBinI+1])) {
3412 if((destBinJ<0)||(destBinJ>=nbin+2)||(srcBinJ<0)) {
3413 // check next bin
3414 j++;
3415 } else {
3417 // index is too small
3418 index_vxx++;
3419 } else if(cols_emat[index_vxx]>srcBinJ) {
3420 // index is too large, skip bin
3421 j++;
3422 } else {
3423 // add this bin
3424 Double_t e2= ematrix->GetBinContent(destBinI,destBinJ)
3426 ematrix->SetBinContent(destBinI,destBinJ,e2);
3427 j++;
3428 index_vxx++;
3429 }
3430 }
3431 }
3432 }
3433 }
3434 }
3435}
3436
3437////////////////////////////////////////////////////////////////////////
3438/// get output covariance matrix, possibly cumulated over several bins
3439///
3440/// \param[out] ematrix histogram to store the covariance. The bin
3441/// contents are overwritten.
3442/// \param[in] binMap (default=nullptr) array for mapping truth bins to histogram bins
3443///
3444/// The use of <b>binMap</b> is explained with the documentation of
3445/// the GetOutput() method
3446
3451
3452////////////////////////////////////////////////////////////////////////
3453/// get correlation coefficients, possibly cumulated over several bins
3454///
3455/// \param[out] rhoij histogram to store the correlation coefficients. The bin
3456/// contents are overwritten.
3457/// \param[in] binMap (default=nullptr) array for mapping truth bins to histogram bins
3458///
3459/// The use of <b>binMap</b> is explained with the documentation of
3460/// the GetOutput() method
3461
3463{
3465 Int_t nbin=rhoij->GetNbinsX();
3466 Double_t *e=new Double_t[nbin+2];
3467 for(Int_t i=0;i<nbin+2;i++) {
3468 e[i]=TMath::Sqrt(rhoij->GetBinContent(i,i));
3469 }
3470 for(Int_t i=0;i<nbin+2;i++) {
3471 for(Int_t j=0;j<nbin+2;j++) {
3472 if((e[i]>0.0)&&(e[j]>0.0)) {
3473 rhoij->SetBinContent(i,j,rhoij->GetBinContent(i,j)/e[i]/e[j]);
3474 } else {
3475 rhoij->SetBinContent(i,j,0.0);
3476 }
3477 }
3478 }
3479 delete[] e;
3480}
3481
3482////////////////////////////////////////////////////////////////////////
3483/// get global correlation coefficients, possibly cumulated over several bins
3484///
3485/// \param[out] rhoi histogram to store the global correlation
3486/// coefficients. The bin contents are overwritten.
3487/// \param[in] binMap (default=nullptr) array for mapping truth bins to
3488/// histogram bins
3489/// \param[out] invEmat (default=nullptr) histogram to store the inverted
3490/// covariance matrix
3491///
3492/// for a given bin, the global correlation coefficient is defined
3493/// as<br/>
3494/// &rho;<sub>i</sub>=sqrt(1-1/(V<sub>ii</sub>*V<sup>-1</sup><sub>ii</sub>))
3495/// <br/>
3496/// such that the calculation of global correlation coefficients
3497/// possibly involves the inversion of a covariance matrix.
3498///
3499/// return value: maximum global correlation coefficient
3500///
3501/// The use of <b>binMap</b> is explained with the documentation of
3502/// the GetOutput() method
3503///
3504
3506{
3507 ClearHistogram(rhoi,-1.);
3508
3509 if(binMap) {
3510 // if there is a bin map, the matrix needs to be inverted
3511 // otherwise, use the existing inverse, such that
3512 // no matrix inversion is needed
3514 } else {
3515 Double_t rhoMax=0.0;
3516
3520
3524
3525 for(Int_t i=0;i<GetNx();i++) {
3526 Int_t destI=fXToHist[i];
3527 Double_t e_ii=0.0,einv_ii=0.0;
3528 for(int index_vxx=rows_Vxx[i];index_vxx<rows_Vxx[i+1];index_vxx++) {
3529 if(cols_Vxx[index_vxx]==i) {
3531 break;
3532 }
3533 }
3535 index_vxxinv++) {
3536 if(cols_VxxInv[index_vxxinv]==i) {
3538 break;
3539 }
3540 }
3541 Double_t rho=1.0;
3542 if((einv_ii>0.0)&&(e_ii>0.0)) rho=1.-1./(einv_ii*e_ii);
3543 if(rho>=0.0) rho=TMath::Sqrt(rho);
3544 else rho=-TMath::Sqrt(-rho);
3545 if(rho>rhoMax) {
3546 rhoMax = rho;
3547 }
3548 rhoi->SetBinContent(destI,rho);
3549 }
3550 return rhoMax;
3551 }
3552}
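// A minimal usage sketch (not part of the original file): global correlation
// coefficients without bin mapping. "rhoi" is assumed to be booked with the
// same truth binning as the response matrix.
static Double_t ExampleGetRhoI(TUnfold &unfold,TH1 *rhoi)
{
   // with binMap=nullptr the stored inverse is used, no extra inversion
   Double_t rhoMax=unfold.GetRhoI(rhoi);
   // values close to 1 indicate poorly constrained, strongly correlated bins
   return rhoMax;
}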
3553
3555 const Int_t *binMap,TH2 *invEmat) const
3556{
3557 // get global correlation coefficients with arbitrary bin map
3558 // rhoi: global correlation histogram
3559 // emat: error matrix
3560 // binMap: for each bin of the original output distribution
3561 // specify the destination bin. A value of -1 means that the bin
3562 // is discarded. 0 means underflow bin, 1 first bin, ...
3563 // binMap[0] : destination of underflow bin
3564 // binMap[1] : destination of first bin
3565 // ...
3566 // return value: maximum global correlation
3567
3568 Double_t rhoMax=0.;
3569 // original number of bins:
3570 // fHistToX.GetSize()
3571 // loop over binMap and find number of used bins
3572
3574
3575 // histToLocalBin[iBin] points to a local index
3576 // only bins iBin of the histogram rhoi which are referenced
3577 // in the bin map have a local index
3578 std::map<int,int> histToLocalBin;
3579 Int_t nBin=0;
3580 for(Int_t i=0;i<binMapSize;i++) {
3581 Int_t mapped_i=binMap ? binMap[i] : i;
3582 if(mapped_i>=0) {
3585 nBin++;
3586 }
3587 }
3588 }
3589 // number of local indices: nBin
3590 if(nBin>0) {
3591 // construct inverse mapping function
3592 // local index -> histogram bin
3594 for(std::map<int,int>::const_iterator i=histToLocalBin.begin();
3595 i!=histToLocalBin.end();i++) {
3596 localBinToHist[(*i).second]=(*i).first;
3597 }
3598
3599 const Int_t *rows_eOrig=eOrig->GetRowIndexArray();
3600 const Int_t *cols_eOrig=eOrig->GetColIndexArray();
3601 const Double_t *data_eOrig=eOrig->GetMatrixArray();
3602
3603 // remap error matrix
3604 // matrix row i -> origI (fXToHist[i])
3605 // origI -> destI (binMap)
3606 // destI -> ematBinI (histToLocalBin)
3608 // i: row of the matrix eOrig
3609 for(Int_t i=0;i<GetNx();i++) {
3610 // origI: pointer in output histogram with all bins
3611 Int_t origI=fXToHist[i];
3612 // destI: pointer in the histogram rhoi
3614 if(destI<0) continue;
3617 index_eOrig++) {
3618 // j: column of the matrix fVxx
3620 // origJ: pointer in output histogram with all bins
3622 // destJ: pointer in the histogram rhoi
3624 if(destJ<0) continue;
3627 }
3628 }
3629 // invert remapped error matrix
3631 Int_t rank=0;
3633 if(rank!=einvSparse->GetNrows()) {
3634 Warning("GetRhoIFormMatrix","Covariance matrix has rank %d expect %d",
3635 rank,einvSparse->GetNrows());
3636 }
3637 // fill to histogram
3638 const Int_t *rows_eInv=einvSparse->GetRowIndexArray();
3639 const Int_t *cols_eInv=einvSparse->GetColIndexArray();
3640 const Double_t *data_eInv=einvSparse->GetMatrixArray();
3641
3642 for(Int_t i=0;i<einvSparse->GetNrows();i++) {
3643 Double_t e_ii=eRemap(i,i);
3644 Double_t einv_ii=0.0;
3646 index_einv++) {
3648 if(j==i) {
3650 }
3651 if(invEmat) {
3652 invEmat->SetBinContent(localBinToHist[i],localBinToHist[j],
3654 }
3655 }
3656 Double_t rho=1.0;
3657 if((einv_ii>0.0)&&(e_ii>0.0)) rho=1.-1./(einv_ii*e_ii);
3658 if(rho>=0.0) rho=TMath::Sqrt(rho);
3659 else rho=-TMath::Sqrt(-rho);
3660 if(rho>rhoMax) {
3661 rhoMax = rho;
3662 }
3663 //std::cout<<i<<" "<<localBinToHist[i]<<" "<<rho<<"\n";
3664 rhoi->SetBinContent(localBinToHist[i],rho);
3665 }
3666
3668 delete [] localBinToHist;
3669 }
3670 return rhoMax;
3671}
3672
3673
3674////////////////////////////////////////////////////////////////////////
3675/// Initialize bin contents and bin errors for a given histogram
3676///
3677/// \param[out] h histogram
3678/// \param[in] x new histogram content
3679///
3680/// all histogram errors are set to zero, all contents are set to <b>x</b>
3682{
3683 Int_t nxyz[3];
3684 nxyz[0]=h->GetNbinsX()+1;
3685 nxyz[1]=h->GetNbinsY()+1;
3686 nxyz[2]=h->GetNbinsZ()+1;
3687 for(int i=h->GetDimension();i<3;i++) nxyz[i]=0;
3688 Int_t ixyz[3];
3689 for(int i=0;i<3;i++) ixyz[i]=0;
3690 while((ixyz[0]<=nxyz[0])&&
3691 (ixyz[1]<=nxyz[1])&&
3692 (ixyz[2]<=nxyz[2])){
3693 Int_t ibin=h->GetBin(ixyz[0],ixyz[1],ixyz[2]);
3694 h->SetBinContent(ibin,x);
3695 h->SetBinError(ibin,0.0);
3696 for(Int_t i=0;i<3;i++) {
3697 ixyz[i] += 1;
3698 if(ixyz[i]<=nxyz[i]) break;
3699 if(i<2) ixyz[i]=0;
3700 }
3701 }
3702}
3703
3705 // set accuracy for matrix inversion
3706 if((eps>0.0)&&(eps<1.0)) fEpsMatrix=eps;
3707}
3708
3709////////////////////////////////////////////////////////////////////////
3710/// return a string describing the TUnfold version
3711///
3712/// The version is reported in the form Vmajor.minor
3713/// Changes of the minor version number typically correspond to
3714/// bug-fixes. Changes of the major version may result in adding or
3715/// removing data attributes, such that the streamer methods are not
3716/// compatible between different major versions.
3717
3719{
3720 return TUnfold_VERSION;
3721}
3722
3723
3724////////////////////////////////////////////////////////////////////////
3725/// return Stein's unbiased risk estimator
3726/// See e.g. arXiv:1612.09415
3727///
3728/// A minimum in the SURE variable is a good choice of regularisation strength
3729///
3730/// NOTE: the calculation of SURE depends on the calculation of DF.
3731/// See the method GetDF() for caveats with Poisson-distributed data.
3732
3733double TUnfold::GetSURE(void) const {
3734 return GetChi2A()+2.*GetDF();
3735}
3736
3737////////////////////////////////////////////////////////////////////////
3738/// return the effective number of degrees of freedom
3739/// See e.g. arXiv:1612.09415 and the references therein
3740///
3741/// Here, DF is calculated using the dependence of
3742/// the unfolding result x on the data y
3743///
3744/// This calculation is done assuming a CONSTANT data variance.
3745/// I.e. the uncertainties reported to TUnfold in "SetInput()"
3746/// ought to be independent of the measurements.
3747/// This is *NOT* true for standard Poisson-distributed data.
3748/// In practice the impact is expected to be small
3749
3750double TUnfold::GetDF(void) const {
3751
3752 double r=0.;
3753 const Int_t *rows_A=fA->GetRowIndexArray();
3754 const Int_t *cols_A=fA->GetColIndexArray();
3755 const Double_t *data_A=fA->GetMatrixArray();
3756 for (Int_t iy = 0; iy <fA->GetNrows(); iy++) {
3757 for(Int_t indexA=rows_A[iy];indexA<rows_A[iy+1];indexA++) {
3758 Int_t ix = cols_A[indexA];
3759 r += data_A[indexA]*(*fDXDY)(ix,iy);
3760 }
3761 }
3762 return r;
3763}
3764
3765////////////////////////////////////////////////////////////////////////
3766/// minimize Stein's unbiased risk estimator "SURE"
3767/// using successive calls to DoUnfold at various tau.
3768/// Optionally, also the L-curve and its curvature are calculated
3769/// for comparison. See description of GetSURE()
3770/// See e.g. arXiv:1612.09415 for the definition of SURE
3771///
3772/// \param[in] nPoint : number of points
3773/// \param[in] tauMin : lower end of scan-range
3774/// \param[in] tauMax : upper end of scan-range
3775/// \param[out] logTauSURE : scan result, SURE as a function of log(tau)
3776/// \param[out] df_chi2A : parametric plot of DF against chi2A
3777/// \param[out] lCurve : parametric plot (lCurve)
3778///
3779/// return value: index of the "best" point
3780///
3781/// if tauMin is less than zero or if tauMin is not lower than tauMax, then
3782/// the scan range is determined automatically
3783/// if tau=0 is included in the scan, then the first x-coordinate
3784/// of the result plot logTauSURE is set to -Infinity
3785
3789
3790 struct ScanResult {
3791 double DF,SURE,chi2A,x,y;
3792 };
3793 std::map<double,ScanResult > scan;
3794
3795 DoUnfold(0.);
3797
3798 while((int)scan.size()<nPoint) {
3799 // add a point
3800 double tau;
3801 int n=scan.size();
3802 if(n==0) {
3803 if((tauMin>=0.)&&(tauMax>tauMin)) {
3804 tau=tauMin;
3805 } else {
3806 tau=0.0;
3807 }
3808 } else if(n==1) {
3809 if((tauMin>=0.)&&(tauMax>tauMin)) {
3810 tau=tauMax;
3811 } else {
3812 // estimate of maximum tau
3813 tau=1./ev(ev.GetNrows()-1);
3814 if(tau<=tauMin) tau=tauMin+1.;
3815 }
3816 } else {
3817 // interpolate/extrapolate
3818 std::vector<double> t,s;
3819 t.reserve(n);
3820 s.reserve(n);
3821 for(std::map<double,ScanResult>::const_iterator i=scan.begin();
3822 i!=scan.end();i++) {
3823 t.push_back((*i).first);
3824 s.push_back((*i).second.SURE);
3825 }
3826 if((n<nPoint-1)||(n<3)) {
3827 // add another point
3828 int where=0;
3829
3830 if(t.size()>2) {
3831 // check whether all intervals not including tau=0
3832 // have the same (logarithmic) size
3833 double s0=0.,s1=0.,s2=0.;
3834 double rMax=0.;
3835 for(size_t i=0;i<t.size()-1;i++) {
3836 if(t[i]>0.) {
3837 double r=TMath::Log(t[i+1]/t[i]);
3838 s0+=1.;
3839 s1+=r;
3840 s2+=r*r;
3841 if(r>rMax) {
3842 rMax=r;
3843 where=i;
3844 }
3845 }
3846 }
3847 double mean=s1/s0;
3848 double diffSq=s2-mean*mean*s0;
3849 if(diffSq<0.5*(rMax-mean)) {
3850 where=0;
3851 }
3852 }
3853
3854 if((where==0)&&(t[where]<=0.0)) {
3855 // automatic scan
3856 // change minimum by 1/2
3857 tau=t[where+1]/2.;
3858 } else {
3859 // geometric mean
3860 tau=TMath::Sqrt(t[where]*t[where+1]);
3861 }
3862 } else {
3863 // improve minimum
3864 int where=1;
3865 for(size_t i=2;i<t.size()-1;i++) {
3866 if(s[i]<s[where]) where=i;
3867 }
3868 // quadratic interpolation
3869 if(where!=1) {
3870 // log in tau
3871 for(int i=-1;i<2;i++) t[where+i]=TMath::Log10(t[where+i]);
3872 }
3873 Info("ScanSURE","minimum near: [%f,%f,%f] -> [%f,%f,%f}",
3874 t[where-1],t[where],t[where+1],s[where-1],s[where],s[where+1]);
3875 double s21=s[where+1]-s[where];
3876 double s01=s[where-1]-s[where];
3877 double t21=t[where+1]-t[where];
3878 double t01=t[where-1]-t[where];
3879 double t20=t[where+1]-t[where-1];
3880 double ct0t2=(s21*t01-s01*t21)/t20;
3881 double bt0t2=(s01*t21*t21-s21*t01*t01)/t20;
3882 if(ct0t2<0.0) {
3883 tau=t[where]-0.5*bt0t2/ct0t2;
3884 //Info("ScanSURE","try tau=%f",tau);
3885 } else {
3886 // not a minimum - not clear what to do
3887 if(s21<s01) {
3888 tau=t[where]+0.5*t21;
3889 } else {
3890 tau=t[where]+0.5*t01;
3891 }
3892 //Info("ScanSURE","half of interval: tau=%f",tau);
3893 }
3894 if(where!=1) tau=TMath::Power(10.,tau);
3895 }
3896 }
3897 DoUnfold(tau);
3898 ScanResult &r=scan[tau];
3899 r.SURE=GetSURE();
3900 r.DF=GetDF();
3901 r.chi2A=GetChi2A();
3902 r.x=GetLcurveX();
3903 r.y=GetLcurveY();
3904
3905 if((tau<=0.)&&(GetNdf()<=0)) {
3906 Error("ScanSURE","too few input bins, NDF<=nullptr %d",GetNdf());
3907 }
3908 if(tau<=0.) {
3909 Info("ScanSURE","logtau=-Infinity Chi2A=%lf SURE=%lf DF=%lf X=%lf Y=%lf",
3910 r.chi2A,r.SURE,r.DF,r.x,r.y);
3911 if(!TMath::Finite(r.x)) {
3912 Fatal("ScanSURE","problem (too few input bins?) x=%f",r.x);
3913 }
3914 if(!TMath::Finite(r.y)) {
3915 Fatal("ScanSURE","problem (missing regularisation?) y=%f",r.y);
3916 }
3917 } else {
3918 Info("ScanSURE","logtau=%lf Chi2A=%lf SURE=%lf DF=%lf X=%lf Y=%lf",
3919 TMath::Log10(tau),r.chi2A,r.SURE,r.DF,r.x,r.y);
3920 }
3921
3922 }
3923
3924 int iBest=-1;
3925 // store results
3926 std::vector<double> logTau,SURE,DF,chi2A,X,Y;
3927 for(std::map<double,ScanResult>::const_iterator i=scan.begin();
3928 i!=scan.end();i++) {
3929 if((*i).first>=0) {
3930 double log10tau;
3931 if((*i).first>0.0) {
3932 log10tau=TMath::Log10((*i).first);
3933 } else {
3934 continue; // log10tau=-HUGE_VAL;
3935 }
3936 double s=(*i).second.SURE;
3937 if((iBest<0)||(SURE[iBest]>s)) {
3938 iBest=SURE.size();
3939 }
3940 logTau.push_back(log10tau);
3941 SURE.push_back(s);
3942 DF.push_back((*i).second.DF);
3943 chi2A.push_back((*i).second.chi2A);
3944 X.push_back((*i).second.x);
3945 Y.push_back((*i).second.y);
3946 }
3947 }
3948
3949 int n=logTau.size();
3950 if(n) {
3951 // fill graphs and splines
3952 if(logTauSURE) {
3953 *logTauSURE=new TGraph(n,logTau.data(),SURE.data());
3954 }
3955 if(df_chi2A) {
3956 *df_chi2A=new TGraph(n,DF.data(),chi2A.data());
3957 }
3958 if(lCurve) {
3959 *lCurve=new TGraph(n,X.data(),Y.data());
3960 }
3961 }
3962 return iBest;
3963}
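// A minimal usage sketch (not part of the original file): scan SURE and keep
// the diagnostic graphs. "unfold" is assumed to have received its data via
// SetInput(); the number of scan points is illustrative.
static Int_t ExampleScanSURE(TUnfold &unfold)
{
   TGraph *logTauSURE=nullptr,*df_chi2A=nullptr,*lCurve=nullptr;
   // tauMin=tauMax=0: the scan range is determined automatically
   Int_t iBest=unfold.ScanSURE(30,0.,0.,&logTauSURE,&df_chi2A,&lCurve);
   // the object may be left at the last scanned tau; repeat DoUnfold() at the
   // tau belonging to point iBest of logTauSURE if the minimum is wanted
   delete logTauSURE; delete df_chi2A; delete lCurve;
   return iBest;
}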
3964
3965
std::ostream & Info()
Definition hadd.cxx:171
Double_t y[n]
Definition legend1.C:17
Double_t x[n]
Definition legend1.C:17
const Int_t n
Definition legend1.C:16
TF1 * f1
Definition legend1.C:11
#define G(x, y, z)
Short_t Max(Short_t a, Short_t b)
Returns the largest of a and b.
Definition TMathBase.h:250
Int_t Finite(Double_t x)
Check if it is finite with a mask in order to be consistent in presence of fast math.
Definition TMath.h:774
Double_t Log(Double_t x)
Returns the natural logarithm of x.
Definition TMath.h:760
Double_t Sqrt(Double_t x)
Returns the square root of x.
Definition TMath.h:666
LongDouble_t Power(LongDouble_t x, LongDouble_t y)
Returns x raised to the power y.
Definition TMath.h:725
Short_t Min(Short_t a, Short_t b)
Returns the smallest of a and b.
Definition TMathBase.h:198
Double_t Log10(Double_t x)
Returns the common (base-10) logarithm of x.
Definition TMath.h:766
Short_t Abs(Short_t d)
Returns the absolute value of parameter Short_t d.
Definition TMathBase.h:123
TMarker m
Definition textangle.C:8
static uint64_t sum(uint64_t i)
Definition Factory.cxx:2345
static void output()