ROOT 6.14/05 Reference Guide
TPrincipal.cxx
1 // @(#)root/hist:$Id$
2 // Author: Christian Holm Christensen 1/8/2000
3 
4 /*************************************************************************
5  * Copyright (C) 1995-2004, Rene Brun and Fons Rademakers. *
6  * All rights reserved. *
7  * *
8  * For the licensing terms see $ROOTSYS/LICENSE. *
9  * For the list of contributors see $ROOTSYS/README/CREDITS. *
10  *************************************************************************/
11 
12 /** \class TPrincipal
13  \ingroup Hist
14 Principal Components Analysis (PCA)
15 
16 The current implementation is based on the LINTRA package from CERNLIB
17 by R. Brun, H. Hansroul, and J. Kubler.
18 The class has been implemented by Christian Holm Christensen in August 2000.
19 
20 ## Introduction
21 
22 In many applications of various fields of research, the treatment of
23 large amounts of data requires powerful techniques capable of rapid
24 data reduction and analysis. Usually, the quantities most
25 conveniently measured by the experimentalist are not necessarily the
26 most significant for classification and analysis of the data. It is
27 then useful to have a way of selecting an optimal set of variables
28 necessary for the recognition process and reducing the dimensionality
29 of the problem, resulting in an easier classification procedure.
30 
31 This documentation describes the implementation of one such method of
32 feature selection, namely the principal components analysis. This
33 multidimensional technique is well known in the field of pattern
34 recognition, and its use in Particle Physics has been documented
35 elsewhere (cf. H. Wind, <I>Function Parameterization</I>, CERN
36 72-21).
37 
38 ## Overview
39 Suppose we have prototypes which are trajectories of particles
40 passing through a spectrometer. If one measures the passage of the
41 particle at say 8 fixed planes, the trajectory is described by an
42 8-component vector:
43 \f[
44  \mathbf{x} = \left(x_0, x_1, \ldots, x_7\right)
45 \f]
46 in 8-dimensional pattern space.
47 
48 One proceeds by generating a representative sample of tracks and
49 building up the covariance matrix \f$\mathsf{C}\f$. Its eigenvectors and
50 eigenvalues are computed by standard methods, and thus a new basis is
51 obtained for the original 8-dimensional space. The expansion of the
52 prototypes,
53 \f[
54  \mathbf{x}_m = \sum^7_{i=0} a_{m_i} \mathbf{e}_i
55  \quad
56  \mbox{where}
57  \quad
58  a_{m_i} = \mathbf{x}_m^T\bullet\mathbf{e}_i
59 \f]
60 allows the study of the behavior of the coefficients \f$a_{m_i}\f$ for all
61 the tracks of the sample. The eigenvectors which are insignificant for
62 the trajectory description in the expansion will have their
63 corresponding coefficients \f$a_{m_i}\f$ close to zero for all the
64 prototypes.
65 
66 On one hand, a reduction of the dimensionality is then obtained by
67 omitting these least significant vectors in the subsequent analysis.
68 
69 On the other hand, in the analysis of real data, these least
70 significant variables can be used for the pattern
71 recognition problem of extracting the valid combinations of
72 coordinates describing a true trajectory from the set of all possible
73 wrong combinations.
74 
75 The program described here performs this principal components analysis
76 on a sample of data provided by the user. It computes the covariance
77 matrix, its eigenvalues and corresponding eigenvectors, and exhibits
78 the behavior of the principal components \f$a_{m_i}\f$, thus providing
79 the user with all the means of understanding their data.
80 
81 ## Principal Components Method
82 Let's consider a sample of \f$M\f$ prototypes each being characterized by
83 \f$P\f$ variables \f$x_0, x_1, \ldots, x_{P-1}\f$. Each prototype is a point, or a
84 column vector, in a \f$P\f$-dimensional *Pattern space*.
85 \f[
86  \mathbf{x} = \left[\begin{array}{c}
87  x_0\\x_1\\\vdots\\x_{P-1}\end{array}\right]\,,
88 \f]
89 where each \f$x_n\f$ represents the particular value associated with the
90 \f$n\f$-th dimension.
91 
92 Those \f$P\f$ variables are the quantities accessible to the
93 experimentalist, but are not necessarily the most significant for the
94 classification purpose.
95 
96 The *Principal Components Method* consists of applying a
97 *linear* transformation to the original variables. This
98 transformation is described by an orthogonal matrix and is equivalent
99 to a rotation of the original pattern space into a new set of
100 coordinate vectors, which hopefully provide easier feature
101 identification and dimensionality reduction.
102 
103 Let's define the covariance matrix:
104 \f[
105  \mathsf{C} = \left\langle\mathbf{y}\mathbf{y}^T\right\rangle
106  \quad\mbox{where}\quad
107  \mathbf{y} = \mathbf{x} - \left\langle\mathbf{x}\right\rangle\,,
108 \f]
109 and the brackets indicate mean value over the sample of \f$M\f$
110 prototypes.
111 
112 This matrix \f$\mathsf{C}\f$ is real, positive definite, and symmetric, so it
113 has all its eigenvalues greater than zero. It will now be shown that
114 among the family of all the complete orthonormal bases of the pattern
115 space, the base formed by the eigenvectors of the covariance matrix
116 and belonging to the largest eigenvalues, corresponds to the most
117 significant features of the description of the original prototypes.
118 
119 Let the prototypes be expanded on a set of basis vectors
120 \f$\mathbf{e}_n,\ n = 0,\ldots,N,N+1,\ldots,P-1\f$
121 \f[
122  \mathbf{y}_i = \sum^N_{n=0} a_{i_n} \mathbf{e}_n,
123  \quad
124  i = 1, \ldots, M,
125  \quad
126  N < P-1
127 \f]
128 The "best" feature coordinates \f$\mathbf{e}_n\f$, spanning a *feature
129 space*, will be obtained by minimizing the error due to this
130 truncated expansion, i.e.,
131 \f[
132  \min\left(E_N\right) =
133  \min\left[\left\langle\left(\mathbf{y}_i - \sum^N_{n=0} a_{i_n} \mathbf{e}_n\right)^2\right\rangle\right]
134 \f]
135 with the conditions:
136 \f[
137  \mathbf{e}_k\bullet\mathbf{e}_j = \delta_{jk} =
138  \left\{\begin{array}{rcl}
139  1 & \mbox{for} & k = j\\
140  0 & \mbox{for} & k \neq j
141  \end{array}\right.
142 \f]
143 Multiplying the expansion of \f$\mathbf{y}_i\f$ by \f$\mathbf{e}^T_n\f$ and
144 using the orthonormality condition, we get
145 \f[
146  a_{i_n} = \mathbf{y}_i^T\bullet\mathbf{e}_n\,,
147 \f]
148 so the error becomes
149 \f{eqnarray*}{
150  E_N &=&
151  \left\langle\left[\sum_{n=N+1}^{P-1} a_{i_n}\mathbf{e}_n\right]^2\right\rangle\nonumber\\
152  &=&
153  \left\langle\left[\sum_{n=N+1}^{P-1} \mathbf{y}_i^T\bullet\mathbf{e}_n\mathbf{e}_n\right]^2\right\rangle\nonumber\\
154  &=&
155  \left\langle\sum_{n=N+1}^{P-1} \mathbf{e}_n^T\mathbf{y}_i\mathbf{y}_i^T\mathbf{e}_n\right\rangle\nonumber\\
156  &=&
157  \sum_{n=N+1}^{P-1} \mathbf{e}_n^T\mathsf{C}\mathbf{e}_n
158 \f}
159 The minimization of the above sum is obtained when each
160  term \f$\mathbf{e}_n^T\mathsf{C}\mathbf{e}_n\f$ is minimum, since \f$\mathsf{C}\f$ is
161  positive definite. By the method of Lagrange multipliers, and the
162  orthonormality condition, we get
163 \f[
164  E_N = \sum^{P-1}_{n=N+1} \left(\mathbf{e}_n^T\mathsf{C}\mathbf{e}_n -
165  l_n\mathbf{e}_n^T\bullet\mathbf{e}_n + l_n\right)
166 \f]
167 The minimum condition \f$\frac{dE_N}{d\mathbf{e}^T_n} = 0\f$ leads to the
168 equation
169 \f[
170  \mathsf{C}\mathbf{e}_n = l_n\mathbf{e}_n\,,
171 \f]
172 which shows that \f$\mathbf{e}_n\f$ is an eigenvector of the covariance
173 matrix \f$\mathsf{C}\f$ with eigenvalue \f$l_n\f$. The estimated minimum error is
174 then given by
175 \f[
176  E_N \sim \sum^{P-1}_{n=N+1} \mathbf{e}_n^T\bullet l_n\mathbf{e}_n
177  = \sum^{P-1}_{n=N+1} l_n\,,
178 \f]
179 where \f$l_n,\,n=N+1,\ldots,P-1\f$ are the eigenvalues associated with the
180 omitted eigenvectors in the expansion. Thus, by choosing
181 the \f$N\f$ largest eigenvalues, and their associated eigenvectors, the
182 error \f$E_N\f$ is minimized.
183 
184 The transformation matrix to go from the pattern space to the feature
185 space consists of the ordered eigenvectors
186 \f$\mathbf{e}_0,\ldots,\mathbf{e}_{P-1}\f$ as its columns
187 \f[
188  \mathsf{T} = \left[
189  \begin{array}{cccc}
190  \mathbf{e}_0 &
191  \mathbf{e}_1 &
192  \cdots &
193  \mathbf{e}_{P-1}
194  \end{array}\right]
195  = \left[
196  \begin{array}{cccc}
197  \mathbf{e}_{0_0} & \mathbf{e}_{1_0} & \cdots & \mathbf{e}_{{P-1}_0}\\
198  \mathbf{e}_{0_1} & \mathbf{e}_{1_1} & \cdots & \mathbf{e}_{{P-1}_1}\\
199  \vdots & \vdots & \ddots & \vdots \\
200  \mathbf{e}_{0_{P-1}} & \mathbf{e}_{1_{P-1}} & \cdots & \mathbf{e}_{{P-1}_{P-1}}\\
201  \end{array}\right]
202 \f]
203 This is an orthogonal transformation, or rotation, of the pattern
204 space and feature selection results in ignoring certain coordinates
205 in the transformed space.
206 
207 Christian Holm August 2000, CERN
208 */
209 
210 #include "TPrincipal.h"
211 
212 #include "TVectorD.h"
213 #include "TMatrixD.h"
214 #include "TMatrixDSymEigen.h"
215 #include "TMath.h"
216 #include "TList.h"
217 #include "TH2.h"
218 #include "TDatime.h"
219 #include "TBrowser.h"
220 #include "TROOT.h"
221 #include "Riostream.h"
222 
223 
224 ClassImp(TPrincipal);
225 
226 ////////////////////////////////////////////////////////////////////////////////
227 /// Empty constructor. Do not use.
228 
229 TPrincipal::TPrincipal()
230  : fMeanValues(0),
231  fSigmas(0),
232  fCovarianceMatrix(1,1),
233  fEigenVectors(1,1),
234  fEigenValues(0),
235  fOffDiagonal(0),
236  fStoreData(kFALSE)
237 {
238  fTrace = 0;
239  fHistograms = 0;
240  fIsNormalised = kFALSE;
241  fNumberOfDataPoints = 0;
242  fNumberOfVariables = 0;
243 }
244 
245 ////////////////////////////////////////////////////////////////////////////////
246 /// Constructor. Argument is number of variables in the sample of data
247 /// Options are:
248 /// - N Normalize the covariance matrix (default)
249 /// - D Store input data (default)
250 ///
251 /// The created object is named "principal" by default.
252 
253 TPrincipal::TPrincipal(Int_t nVariables, Option_t *opt)
254  : fMeanValues(nVariables),
255  fSigmas(nVariables),
256  fCovarianceMatrix(nVariables,nVariables),
257  fEigenVectors(nVariables,nVariables),
258  fEigenValues(nVariables),
259  fOffDiagonal(nVariables),
260  fStoreData(kFALSE)
261 {
262  if (nVariables <= 1) {
263  Error("TPrincipal", "nVariables must be greater than 1, got %d", nVariables);
264  return;
265  }
266 
267  SetName("principal");
268 
269  fTrace = 0;
270  fHistograms = 0;
271  fIsNormalised = kFALSE;
272  fNumberOfDataPoints = 0;
273  fNumberOfVariables = nVariables;
274  while (strlen(opt) > 0) {
275  switch(*opt++) {
276  case 'N':
277  case 'n':
278  fIsNormalised = kTRUE;
279  break;
280  case 'D':
281  case 'd':
282  fStoreData = kTRUE;
283  break;
284  default:
285  break;
286  }
287  }
288 
289  if (!fMeanValues.IsValid())
290  Error("TPrincipal","Couldn't create vector mean values");
291  if (!fSigmas.IsValid())
292  Error("TPrincipal","Couldn't create vector sigmas");
293  if (!fCovarianceMatrix.IsValid())
294  Error("TPrincipal","Couldn't create covariance matrix");
295  if (!fEigenVectors.IsValid())
296  Error("TPrincipal","Couldn't create eigenvector matrix");
297  if (!fEigenValues.IsValid())
298  Error("TPrincipal","Couldn't create eigenvalue vector");
299  if (!fOffDiagonal.IsValid())
300  Error("TPrincipal","Couldn't create offdiagonal vector");
301  if (fStoreData) {
302  fUserData.ResizeTo(nVariables*1000);
303  fUserData.Zero();
304  if (!fUserData.IsValid())
305  Error("TPrincipal","Couldn't create user data vector");
306  }
307 }
308 
309 ////////////////////////////////////////////////////////////////////////////////
310 /// Copy constructor.
311 
312 TPrincipal::TPrincipal(const TPrincipal &pr)
313  : TNamed(pr),
314  fNumberOfDataPoints(pr.fNumberOfDataPoints),
315  fNumberOfVariables(pr.fNumberOfVariables),
316  fMeanValues(pr.fMeanValues),
317  fSigmas(pr.fSigmas),
318  fCovarianceMatrix(pr.fCovarianceMatrix),
319  fEigenVectors(pr.fEigenVectors),
320  fEigenValues(pr.fEigenValues),
321  fOffDiagonal(pr.fOffDiagonal),
322  fUserData(pr.fUserData),
323  fTrace(pr.fTrace),
324  fHistograms(pr.fHistograms),
325  fIsNormalised(pr.fIsNormalised),
326  fStoreData(pr.fStoreData)
327 {
328 }
329 
330 ////////////////////////////////////////////////////////////////////////////////
331 /// Assignment operator.
332 
333 TPrincipal& TPrincipal::operator=(const TPrincipal& pr)
334 {
335  if(this!=&pr) {
336  TNamed::operator=(pr);
337  fNumberOfDataPoints=pr.fNumberOfDataPoints;
338  fNumberOfVariables=pr.fNumberOfVariables;
339  fMeanValues=pr.fMeanValues;
340  fSigmas=pr.fSigmas;
341  fCovarianceMatrix=pr.fCovarianceMatrix;
342  fEigenVectors=pr.fEigenVectors;
343  fEigenValues=pr.fEigenValues;
344  fOffDiagonal=pr.fOffDiagonal;
345  fUserData=pr.fUserData;
346  fTrace=pr.fTrace;
347  fHistograms=pr.fHistograms;
348  fIsNormalised=pr.fIsNormalised;
349  fStoreData=pr.fStoreData;
350  }
351  return *this;
352 }
353 
354 ////////////////////////////////////////////////////////////////////////////////
355 /// Destructor.
356 
357 TPrincipal::~TPrincipal()
358 {
359  if (fHistograms) {
360  fHistograms->Delete();
361  delete fHistograms;
362  }
363 }
364 
365 ////////////////////////////////////////////////////////////////////////////////
366 /// Add a data point and update the covariance matrix. The input
367 /// array must be <TT>fNumberOfVariables</TT> long.
368 ///
369 ///
370 /// The covariance matrix and mean values of the input data are calculated
371 /// on the fly by the following equations:
372 ///
373 /// \f[
374 /// \left<x_i\right>^{(0)} = x_{i0}
375 /// \f]
376 ///
377 ///
378 /// \f[
379 /// \left<x_i\right>^{(n)} = \left<x_i\right>^{(n-1)}
380 /// + \frac1n \left(x_{in} - \left<x_i\right>^{(n-1)}\right)
381 /// \f]
382 ///
383 /// \f[
384 /// C_{ij}^{(0)} = 0
385 /// \f]
386 ///
387 ///
388 ///
389 /// \f[
390 /// C_{ij}^{(n)} = C_{ij}^{(n-1)}
391 /// + \frac1{n-1}\left[\left(x_{in} - \left<x_i\right>^{(n)}\right)
392 /// \left(x_{jn} - \left<x_j\right>^{(n)}\right)\right]
393 /// - \frac1n C_{ij}^{(n-1)}
394 /// \f]
395 ///
396 /// This incremental method is fast and numerically stable (please
397 /// refer to CERN 72-21 pp. 54-106).
398 ///
399 ///
400 /// The data is stored internally in a <TT>TVectorD</TT>, in the following
401 /// way:
402 ///
403 /// \f[
404 /// \mathbf{x} = \left[\left(x_{0_0},\ldots,x_{{P-1}_0}\right),\ldots,
405 /// \left(x_{0_i},\ldots,x_{{P-1}_i}\right), \ldots\right]
406 /// \f]
407 ///
408 /// With \f$P\f$ as defined in the class description.
409 
410 void TPrincipal::AddRow(const Double_t *p)
411 {
412  if (!p)
413  return;
414 
415  // Increment the data point counter
416  Int_t i,j;
417  if (++fNumberOfDataPoints == 1) {
418  for (i = 0; i < fNumberOfVariables; i++)
419  fMeanValues(i) = p[i];
420  }
421  else {
422 
423  Double_t cor = 1 - 1./Double_t(fNumberOfDataPoints);
424  for (i = 0; i < fNumberOfVariables; i++) {
425 
426  fMeanValues(i) *= cor;
427  fMeanValues(i) += p[i] / Double_t(fNumberOfDataPoints);
428  Double_t t1 = (p[i] - fMeanValues(i)) / (fNumberOfDataPoints - 1);
429 
430  // Setting Matrix (lower triangle) elements
431  for (j = 0; j < i + 1; j++) {
432  fCovarianceMatrix(i,j) *= cor;
433  fCovarianceMatrix(i,j) += t1 * (p[j] - fMeanValues(j));
434  }
435  }
436  }
437 
438  // Store data point in internal vector
439  // If the vector isn't big enough to hold the new data, then
440  // expand the vector by half its size.
441  if (!fStoreData)
442  return;
443  Int_t size = fUserData.GetNrows();
444  if (fNumberOfDataPoints * fNumberOfVariables > size)
445  fUserData.ResizeTo(size + size/2);
446 
447  for (i = 0; i < fNumberOfVariables; i++) {
448  j = (fNumberOfDataPoints-1) * fNumberOfVariables + i;
449  fUserData(j) = p[i];
450  }
451 
452 }
453 
454 ////////////////////////////////////////////////////////////////////////////////
455 /// Browse the TPrincipal object in the TBrowser.
456 
457 void TPrincipal::Browse(TBrowser *b)
458 {
459  if (fHistograms) {
460  TIter next(fHistograms);
461  TH1* h = 0;
462  while ((h = (TH1*)next()))
463  b->Add(h,h->GetName());
464  }
465 
466  if (fStoreData)
467  b->Add(&fUserData,"User Data");
468  b->Add(&fCovarianceMatrix,"Covariance Matrix");
469  b->Add(&fMeanValues,"Mean value vector");
470  b->Add(&fSigmas,"Sigma value vector");
471  b->Add(&fEigenValues,"Eigenvalue vector");
472  b->Add(&fEigenVectors,"Eigenvector Matrix");
473 
474 }
475 
476 ////////////////////////////////////////////////////////////////////////////////
477 /// Clear the data in this object. Note that it is not possible to change
478 /// the dimension of the original data.
479 
480 void TPrincipal::Clear(Option_t *opt)
481 {
482  if (fHistograms) {
483  fHistograms->Delete(opt);
484  }
485 
486  fNumberOfDataPoints = 0;
487  fTrace = 0;
488  fCovarianceMatrix.Zero();
489  fEigenVectors.Zero();
490  fEigenValues.Zero();
491  fMeanValues.Zero();
492  fSigmas.Zero();
493  fOffDiagonal.Zero();
494 
495  if (fStoreData) {
496  fUserData.ResizeTo(fNumberOfVariables * 1000);
497  fUserData.Zero();
498  }
499 }
500 
501 ////////////////////////////////////////////////////////////////////////////////
502 /// Return a row of the user supplied data.
503 /// If row is out of bounds, or the data is not being stored, 0 is returned.
504 /// The returned pointer points into the internal data vector and must
505 /// not be deleted by the caller. Row 0 is the first row.
506 
507 const Double_t *TPrincipal::GetRow(Int_t row)
508 {
509  if (row >= fNumberOfDataPoints)
510  return 0;
511 
512  if (!fStoreData)
513  return 0;
514 
515  Int_t index = row * fNumberOfVariables;
516  return &fUserData(index);
517 }
518 
519 
520 ////////////////////////////////////////////////////////////////////////////////
521 /// Generates the file `<filename>`, with `.C` appended if the
522 /// argument doesn't end in `.cxx` or `.C`.
523 ///
524 /// The file contains the implementation of two functions
525 /// ~~~ {.cpp}
526 /// void X2P(Double_t *x, Double_t *p)
527 /// void P2X(Double_t *p, Double_t *x, Int_t nTest)
528 /// ~~~
529 /// which do the same as `TPrincipal::X2P` and `TPrincipal::P2X`
530 /// respectively. Please refer to these methods.
531 ///
532 /// Further, the static variables:
533 /// ~~~ {.cpp}
534 /// Int_t gNVariables
535 /// Double_t gEigenValues[]
536 /// Double_t gEigenVectors[]
537 /// Double_t gMeanValues[]
538 /// Double_t gSigmaValues[]
539 /// ~~~
540 /// are initialized. The only ROOT header file needed is Rtypes.h
541 ///
542 /// See TPrincipal::MakeRealCode for a list of options
543 
544 void TPrincipal::MakeCode(const char *filename, Option_t *opt)
545 {
546  TString outName(filename);
547  if (!outName.EndsWith(".C") && !outName.EndsWith(".cxx"))
548  outName += ".C";
549 
550  MakeRealCode(outName.Data(),"",opt);
551 }
552 
553 ////////////////////////////////////////////////////////////////////////////////
554 /// Make histograms of the result of the analysis.
555 /// The option string says which histograms to create:
556 /// - X Histogram original data
557 /// - P Histogram principal components corresponding to
558 /// original data
559 /// - D Histogram the difference between the original data
560 /// and the projection of the principal components onto a
561 /// lower dimensional subspace (2D histograms)
562 /// - E Histogram the eigenvalues
563 /// - S Histogram the square of the residues
564 /// (see `TPrincipal::SumOfSquareResiduals`)
565 /// The histograms will be named `<name>_<type><number>`, where `<name>`
566 /// is the first argument, `<type>` is one of X,P,D,E,S, and `<number>`
567 /// is the variable.
568 
569 void TPrincipal::MakeHistograms(const char *name, Option_t *opt)
570 {
571  Bool_t makeX = kFALSE;
572  Bool_t makeD = kFALSE;
573  Bool_t makeP = kFALSE;
574  Bool_t makeE = kFALSE;
575  Bool_t makeS = kFALSE;
576 
577  Int_t len = strlen(opt);
578  Int_t i,j,k;
579  for (i = 0; i < len; i++) {
580  switch (opt[i]) {
581  case 'X':
582  case 'x':
583  if (fStoreData)
584  makeX = kTRUE;
585  break;
586  case 'd':
587  case 'D':
588  if (fStoreData)
589  makeD = kTRUE;
590  break;
591  case 'P':
592  case 'p':
593  if (fStoreData)
594  makeP = kTRUE;
595  break;
596  case 'E':
597  case 'e':
598  makeE = kTRUE;
599  break;
600  case 's':
601  case 'S':
602  if (fStoreData)
603  makeS = kTRUE;
604  break;
605  default:
606  Warning("MakeHistograms","Unknown option: %c",opt[i]);
607  }
608  }
609 
610  // If no option was given, then exit gracefully
611  if (!makeX && !makeD && !makeP && !makeE && !makeS)
612  return;
613 
614  // If the list of histograms doesn't exist, create it.
615  if (!fHistograms)
616  fHistograms = new TList;
617 
618  // Don't create the histograms if they are already in the TList.
619  if (makeX && fHistograms->FindObject(Form("%s_x000",name)))
620  makeX = kFALSE;
621  if (makeD && fHistograms->FindObject(Form("%s_d000",name)))
622  makeD = kFALSE;
623  if (makeP && fHistograms->FindObject(Form("%s_p000",name)))
624  makeP = kFALSE;
625  if (makeE && fHistograms->FindObject(Form("%s_e",name)))
626  makeE = kFALSE;
627  if (makeS && fHistograms->FindObject(Form("%s_s",name)))
628  makeS = kFALSE;
629 
630  TH1F **hX = 0;
631  TH2F **hD = 0;
632  TH1F **hP = 0;
633  TH1F *hE = 0;
634  TH1F *hS = 0;
635 
636  // Initialize the arrays of histograms needed
637  if (makeX)
638  hX = new TH1F * [fNumberOfVariables];
639 
640  if (makeD)
641  hD = new TH2F * [fNumberOfVariables];
642 
643  if (makeP)
644  hP = new TH1F * [fNumberOfVariables];
645 
646  if (makeE){
647  hE = new TH1F(Form("%s_e",name), "Eigenvalues of Covariance matrix",
648  fNumberOfVariables,0,fNumberOfVariables);
649  hE->SetXTitle("Eigenvalue");
650  fHistograms->Add(hE);
651  }
652 
653  if (makeS) {
654  hS = new TH1F(Form("%s_s",name),"E_{N}",
655  fNumberOfVariables-1,1,fNumberOfVariables);
656  hS->SetXTitle("N");
657  hS->SetYTitle("#sum_{i=1}^{M} (x_{i} - x'_{N,i})^{2}");
658  fHistograms->Add(hS);
659  }
660 
661  // Initialize sub elements of the histogram arrays
662  for (i = 0; i < fNumberOfVariables; i++) {
663  if (makeX) {
664  // We allow 4 sigma spread in the original data in our
665  // histogram.
666  Double_t xlowb = fMeanValues(i) - 4 * fSigmas(i);
667  Double_t xhighb = fMeanValues(i) + 4 * fSigmas(i);
668  Int_t xbins = fNumberOfDataPoints/100;
669  hX[i] = new TH1F(Form("%s_x%03d", name, i),
670  Form("Pattern space, variable %d", i),
671  xbins,xlowb,xhighb);
672  hX[i]->SetXTitle(Form("x_{%d}",i));
673  fHistograms->Add(hX[i]);
674  }
675 
676  if(makeD) {
677  // The upper limit below is arbitrary!!!
678  Double_t dlowb = 0;
679  Double_t dhighb = 20;
680  Int_t dbins = fNumberOfDataPoints/100;
681  hD[i] = new TH2F(Form("%s_d%03d", name, i),
682  Form("Distance from pattern to "
683  "feature space, variable %d", i),
684  dbins,dlowb,dhighb,
685  fNumberOfVariables-1,
686  1,
687  fNumberOfVariables);
688  hD[i]->SetXTitle(Form("|x_{%d} - x'_{%d,N}|/#sigma_{%d}",i,i,i));
689  hD[i]->SetYTitle("N");
690  fHistograms->Add(hD[i]);
691  }
692 
693  if(makeP) {
694  // For some reason, the trace of the non-scaled matrix
695  // (see TPrincipal::MakeNormalised) should enter here. Taken
696  // from LINTRA code.
697  Double_t et = TMath::Abs(fEigenValues(i) * fTrace);
698  Double_t plowb = -10 * TMath::Sqrt(et);
699  Double_t phighb = -plowb;
700  Int_t pbins = 100;
701  hP[i] = new TH1F(Form("%s_p%03d", name, i),
702  Form("Feature space, variable %d", i),
703  pbins,plowb,phighb);
704  hP[i]->SetXTitle(Form("p_{%d}",i));
705  fHistograms->Add(hP[i]);
706  }
707 
708  if (makeE)
709  // The eigenvalue histogram is easy
710  hE->Fill(i,fEigenValues(i));
711 
712  }
713  if (!makeX && !makeP && !makeD && !makeS) {
714  if (hX)
715  delete[] hX;
716  if (hD)
717  delete[] hD;
718  if (hP)
719  delete[] hP;
720  return;
721  }
722 
723  Double_t *x = 0;
724  Double_t *p = new Double_t[fNumberOfVariables];
725  Double_t *d = new Double_t[fNumberOfVariables];
726  for (i = 0; i < fNumberOfDataPoints; i++) {
727 
728  // Zero arrays
729  for (j = 0; j < fNumberOfVariables; j++)
730  p[j] = d[j] = 0;
731 
732  // update the original data histogram
733  x = (Double_t*)(GetRow(i));
734  R__ASSERT(x);
735 
736  if (makeP||makeD||makeS)
737  // calculate the corresponding principal component
738  X2P(x,p);
739 
740  if (makeD || makeS) {
741  // Calculate the difference between the original data and the
742  // same data projected onto the principal components, truncated
743  // to a lower dimensional sub-space
744  for (j = fNumberOfVariables; j > 0; j--) {
745  P2X(p,d,j);
746 
747  for (k = 0; k < fNumberOfVariables; k++) {
748  // We use the absolute value of the difference!
749  d[k] = x[k] - d[k];
750 
751  if (makeS)
752  hS->Fill(j,d[k]*d[k]);
753 
754  if (makeD) {
755  d[k] = TMath::Abs(d[k]) / (fIsNormalised ? fSigmas(k) : 1);
756  (hD[k])->Fill(d[k],j);
757  }
758  }
759  }
760  }
761 
762  if (makeX||makeP) {
763  // If we are asked to make any of these histograms, we have to
764  // go here
765  for (j = 0; j < fNumberOfVariables; j++) {
766  if (makeX)
767  (hX[j])->Fill(x[j]);
768 
769  if (makeP)
770  (hP[j])->Fill(p[j]);
771  }
772  }
773  }
774  // Clean up
775  if (hX)
776  delete [] hX;
777  if (hD)
778  delete [] hD;
779  if (hP)
780  delete [] hP;
781  if (d)
782  delete [] d;
783  if (p)
784  delete [] p;
785 
786  // Normalize the residues
787  if (makeS)
788  hS->Scale(Double_t(1.)/fNumberOfDataPoints);
789 }
790 
791 ////////////////////////////////////////////////////////////////////////////////
792 /// Normalize the covariance matrix
793 
794 void TPrincipal::MakeNormalised()
795 {
796  Int_t i,j;
797  for (i = 0; i < fNumberOfVariables; i++) {
798  fSigmas(i) = TMath::Sqrt(fCovarianceMatrix(i,i));
799  if (fIsNormalised)
800  for (j = 0; j <= i; j++)
801  fCovarianceMatrix(i,j) /= (fSigmas(i) * fSigmas(j));
802 
803  fTrace += fCovarianceMatrix(i,i);
804  }
805 
806  // Fill remaining parts of matrix, and scale.
807  for (i = 0; i < fNumberOfVariables; i++)
808  for (j = 0; j <= i; j++) {
809  fCovarianceMatrix(i,j) /= fTrace;
810  fCovarianceMatrix(j,i) = fCovarianceMatrix(i,j);
811  }
812 
813 }
814 
815 ////////////////////////////////////////////////////////////////////////////////
816 /// Generate the file <classname>PCA.cxx which contains the
817 /// implementation of two methods:
818 /// ~~~ {.cpp}
819 /// void <classname>::X2P(Double_t *x, Double_t *p)
820 /// void <classname>::P2X(Double_t *p, Double_t *x, Int_t nTest)
821 /// ~~~
822 /// which do the same as TPrincipal::X2P and TPrincipal::P2X
823 /// respectively. Please refer to these methods.
824 ///
825 /// Further, the public static members:
826 /// ~~~ {.cpp}
827 /// Int_t <classname>::fgNVariables
828 /// Double_t <classname>::fgEigenValues[]
829 /// Double_t <classname>::fgEigenVectors[]
830 /// Double_t <classname>::fgMeanValues[]
831 /// Double_t <classname>::fgSigmaValues[]
832 /// ~~~
833 /// are initialized, and assumed to exist. The class declaration is
834 /// assumed to be in <classname>.h and assumed to be provided by the
835 /// user.
836 ///
837 /// See TPrincipal::MakeRealCode for a list of options
838 ///
839 /// The minimal class definition is:
840 /// ~~~ {.cpp}
841 /// class <classname> {
842 /// public:
843 /// static Int_t fgNVariables;
844 /// static Double_t fgEigenVectors[];
845 /// static Double_t fgEigenValues[];
846 /// static Double_t fgMeanValues[];
847 /// static Double_t fgSigmaValues[];
848 ///
849 /// void X2P(Double_t *x, Double_t *p);
850 /// void P2X(Double_t *p, Double_t *x, Int_t nTest);
851 /// };
852 /// ~~~
853 /// Whether the methods <classname>::X2P and <classname>::P2X should
854 /// be static or not, is up to the user.
855 
856 void TPrincipal::MakeMethods(const char *classname, Option_t *opt)
857 {
858 
859  MakeRealCode(Form("%sPCA.cxx", classname), classname, opt);
860 }
861 
862 
863 ////////////////////////////////////////////////////////////////////////////////
864 /// Perform the principal components analysis.
865 /// This is done in several stages by the TMatrixDSymEigen class:
866 /// - Transform the covariance matrix into a tridiagonal matrix.
867 /// - Find the eigenvalues and vectors of the tridiagonal matrix.
868 
869 void TPrincipal::MakePrincipals()
870 {
871  // Normalize covariance matrix
872  MakeNormalised();
873 
874  TMatrixDSym sym; sym.Use(fCovarianceMatrix.GetNrows(),fCovarianceMatrix.GetMatrixArray());
875  TMatrixDSymEigen eigen(sym);
876  fEigenVectors = eigen.GetEigenVectors();
877  fEigenValues = eigen.GetEigenValues();
878  //make sure that eigenvalues are positive
879  for (Int_t i = 0; i < fNumberOfVariables; i++) {
880  if (fEigenValues[i] < 0) fEigenValues[i] = -fEigenValues[i];
881  }
882 }
883 
884 ////////////////////////////////////////////////////////////////////////////////
885 /// This is the method that actually generates the code for the
886 /// transformations to and from feature space and pattern space.
887 /// It's called by TPrincipal::MakeCode and TPrincipal::MakeMethods.
888 ///
889 /// The options are: NONE so far
890 
891 void TPrincipal::MakeRealCode(const char *filename, const char *classname,
892  Option_t *)
893 {
894  Bool_t isMethod = (classname[0] == '\0' ? kFALSE : kTRUE);
895  const char *prefix = (isMethod ? Form("%s::", classname) : "");
896  const char *cv_qual = (isMethod ? "" : "static ");
897 
898  std::ofstream outFile(filename,std::ios::out|std::ios::trunc);
899  if (!outFile) {
900  Error("MakeRealCode","couldn't open output file '%s'",filename);
901  return;
902  }
903 
904  std::cout << "Writing on file \"" << filename << "\" ... " << std::flush;
905  //
906  // Write header of file
907  //
908  // Emacs mode line ;-)
909  outFile << "// -*- mode: c++ -*-" << std::endl;
910  // Info about creator
911  outFile << "// " << std::endl
912  << "// File " << filename
913  << " generated by TPrincipal::MakeCode" << std::endl;
914  // Time stamp
915  TDatime date;
916  outFile << "// on " << date.AsString() << std::endl;
917  // ROOT version info
918  outFile << "// ROOT version " << gROOT->GetVersion()
919  << std::endl << "//" << std::endl;
920  // General information on the code
921  outFile << "// This file contains the functions " << std::endl
922  << "//" << std::endl
923  << "// void " << prefix
924  << "X2P(Double_t *x, Double_t *p); " << std::endl
925  << "// void " << prefix
926  << "P2X(Double_t *p, Double_t *x, Int_t nTest);"
927  << std::endl << "//" << std::endl
928  << "// The first for transforming original data x in " << std::endl
929  << "// pattern space, to principal components p in " << std::endl
930  << "// feature space. The second function is for the" << std::endl
931  << "// inverse transformation, but using only nTest" << std::endl
932  << "// of the principal components in the expansion" << std::endl
933  << "// " << std::endl
934  << "// See TPrincipal class documentation for more "
935  << "information " << std::endl << "// " << std::endl;
936  // Header files
937  outFile << "#ifndef __CINT__" << std::endl;
938  if (isMethod)
939  // If these are methods, we need the class header
940  outFile << "#include \"" << classname << ".h\"" << std::endl;
941  else
942  // otherwise, we need the typedefs of Int_t and Double_t
943  outFile << "#include <Rtypes.h> // needed for Double_t etc" << std::endl;
944  // Finish the preprocessor block
945  outFile << "#endif" << std::endl << std::endl;
946 
947  //
948  // Now for the data
949  //
950  // We make the Eigenvector matrix, Eigenvalue vector, Sigma vector,
951  // and Mean value vector static, since all are needed in both
952  // functions. Also, the number of variables is stored in a static
953  // variable.
954  outFile << "//" << std::endl
955  << "// Static data variables" << std::endl
956  << "//" << std::endl;
957  outFile << cv_qual << "Int_t " << prefix << "gNVariables = "
958  << fNumberOfVariables << ";" << std::endl;
959 
960  // Assign the values to the Eigenvector matrix. The elements are
961  // stored row-wise, that is element
962  // M[i][j] = e[i * nVariables + j]
963  // where i and j are zero-based.
964  outFile << std::endl << "// Assignment of eigenvector matrix." << std::endl
965  << "// Elements are stored row-wise, that is" << std::endl
966  << "// M[i][j] = e[i * nVariables + j] " << std::endl
967  << "// where i and j are zero-based" << std::endl;
968  outFile << cv_qual << "Double_t " << prefix
969  << "gEigenVectors[] = {" << std::flush;
970  Int_t i,j;
971  for (i = 0; i < fNumberOfVariables; i++) {
972  for (j = 0; j < fNumberOfVariables; j++) {
973  Int_t index = i * fNumberOfVariables + j;
974  outFile << (index != 0 ? "," : "" ) << std::endl
975  << " " << fEigenVectors(i,j) << std::flush;
976  }
977  }
978  outFile << "};" << std::endl << std::endl;
979 
980  // Assignment to eigenvalue vector. Zero-based.
981  outFile << "// Assignment to eigen value vector. Zero-based." << std::endl;
982  outFile << cv_qual << "Double_t " << prefix
983  << "gEigenValues[] = {" << std::flush;
984  for (i = 0; i < fNumberOfVariables; i++)
985  outFile << (i != 0 ? "," : "") << std::endl
986  << " " << fEigenValues(i) << std::flush;
987  outFile << std::endl << "};" << std::endl << std::endl;
988 
989  // Assignment to mean Values vector. Zero-based.
990  outFile << "// Assignment to mean value vector. Zero-based." << std::endl;
991  outFile << cv_qual << "Double_t " << prefix
992  << "gMeanValues[] = {" << std::flush;
993  for (i = 0; i < fNumberOfVariables; i++)
994  outFile << (i != 0 ? "," : "") << std::endl
995  << " " << fMeanValues(i) << std::flush;
996  outFile << std::endl << "};" << std::endl << std::endl;
997 
998  // Assignment to sigma values vector. Zero-based.
999  outFile << "// Assignment to sigma value vector. Zero-based." << std::endl;
1000  outFile << cv_qual << "Double_t " << prefix
1001  << "gSigmaValues[] = {" << std::flush;
1002  for (i = 0; i < fNumberOfVariables; i++)
1003  outFile << (i != 0 ? "," : "") << std::endl
1004  << " " << (fIsNormalised ? fSigmas(i) : 1) << std::flush;
1005  // << " " << fSigmas(i) << std::flush;
1006  outFile << std::endl << "};" << std::endl << std::endl;
1007 
1008  //
1009  // Finally we reach the functions themselves
1010  //
1011  // First: void x2p(Double_t *x, Double_t *p);
1012  //
1013  outFile << "// " << std::endl
1014  << "// The "
1015  << (isMethod ? "method " : "function ")
1016  << " void " << prefix
1017  << "X2P(Double_t *x, Double_t *p)"
1018  << std::endl << "// " << std::endl;
1019  outFile << "void " << prefix
1020  << "X2P(Double_t *x, Double_t *p) {" << std::endl
1021  << " for (Int_t i = 0; i < gNVariables; i++) {" << std::endl
1022  << " p[i] = 0;" << std::endl
1023  << " for (Int_t j = 0; j < gNVariables; j++)" << std::endl
1024  << " p[i] += (x[j] - gMeanValues[j]) " << std::endl
1025  << " * gEigenVectors[j * gNVariables + i] "
1026  << "/ gSigmaValues[j];" << std::endl << std::endl << " }"
1027  << std::endl << "}" << std::endl << std::endl;
1028  //
1029  // Now: void p2x(Double_t *p, Double_t *x, Int_t nTest);
1030  //
1031  outFile << "// " << std::endl << "// The "
1032  << (isMethod ? "method " : "function ")
1033  << " void " << prefix
1034  << "P2X(Double_t *p, Double_t *x, Int_t nTest)"
1035  << std::endl << "// " << std::endl;
1036  outFile << "void " << prefix
1037  << "P2X(Double_t *p, Double_t *x, Int_t nTest) {" << std::endl
1038  << " for (Int_t i = 0; i < gNVariables; i++) {" << std::endl
1039  << " x[i] = gMeanValues[i];" << std::endl
1040  << " for (Int_t j = 0; j < nTest; j++)" << std::endl
1041  << " x[i] += p[j] * gSigmaValues[i] " << std::endl
1042  << " * gEigenVectors[i * gNVariables + j];" << std::endl
1043  << " }" << std::endl << "}" << std::endl << std::endl;
1044 
1045  // EOF
1046  outFile << "// EOF for " << filename << std::endl;
1047 
1048  // Close the file
1049  outFile.close();
1050 
1051  std::cout << "done" << std::endl;
1052 }
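Assembled, the file emitted above has roughly the following shape. This is a sketch for a hypothetical 2-variable analysis with prefix `my`; the numeric values are illustrative, and plain `double`/`int` stand in for ROOT's `Double_t`/`Int_t`:

```cpp
#include <cassert>

// Sketch of the arrays written by MakeRealCode (illustrative values).
static const int myNVariables = 2;
// Eigenvector matrix, stored row-wise: M[i][j] = e[i * nVariables + j]
static double myEigenVectors[] = {1, 0,
                                  0, 1};
static double myEigenValues[] = {2, 1};
static double myMeanValues[]  = {0, 0};
static double mySigmaValues[] = {1, 1};

// The generated projection onto feature space (mirrors the emitted X2P).
void myX2P(double *x, double *p) {
  for (int i = 0; i < myNVariables; i++) {
    p[i] = 0;
    for (int j = 0; j < myNVariables; j++)
      p[i] += (x[j] - myMeanValues[j])
              * myEigenVectors[j * myNVariables + i] / mySigmaValues[j];
  }
}

// The generated back-transformation (mirrors the emitted P2X).
void myP2X(double *p, double *x, int nTest) {
  for (int i = 0; i < myNVariables; i++) {
    x[i] = myMeanValues[i];
    for (int j = 0; j < nTest; j++)
      x[i] += p[j] * mySigmaValues[i]
              * myEigenVectors[i * myNVariables + j];
  }
}
```

With identity eigenvectors, zero means, and unit sigmas as above, `myX2P` is the identity map, which makes the wiring easy to sanity-check.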
1053 
1054 ////////////////////////////////////////////////////////////////////////////////
1055 /// Calculate x as a function of nTest of the most significant
1056 /// principal components p, and return it in x.
1057 /// It is the user's responsibility to make sure that both x and p are
1058 /// of the right size (i.e., memory must be allocated for x).
1059 
1060 void TPrincipal::P2X(const Double_t *p, Double_t *x, Int_t nTest)
1061 {
1062  for (Int_t i = 0; i < fNumberOfVariables; i++){
1063  x[i] = fMeanValues(i);
1064  for (Int_t j = 0; j < nTest; j++)
1065  x[i] += p[j] * (fIsNormalised ? fSigmas(i) : 1)
1066  * fEigenVectors(i,j);
1067  }
1068 
1069 }
1070 
1071 ////////////////////////////////////////////////////////////////////////////////
1072 /// Print the statistics
1073 /// Options are
1074 /// - M Print mean values of original data
1075 /// - S Print sigma values of original data
1076 /// - E Print eigenvalues of covariance matrix
1077 /// - V Print eigenvectors of covariance matrix
1078 /// Default is MSE
1079 
1080 void TPrincipal::Print(Option_t *opt) const
1081 {
1082  Bool_t printV = kFALSE;
1083  Bool_t printM = kFALSE;
1084  Bool_t printS = kFALSE;
1085  Bool_t printE = kFALSE;
1086 
1087  Int_t len = strlen(opt);
1088  for (Int_t i = 0; i < len; i++) {
1089  switch (opt[i]) {
1090  case 'V':
1091  case 'v':
1092  printV = kTRUE;
1093  break;
1094  case 'M':
1095  case 'm':
1096  printM = kTRUE;
1097  break;
1098  case 'S':
1099  case 's':
1100  printS = kTRUE;
1101  break;
1102  case 'E':
1103  case 'e':
1104  printE = kTRUE;
1105  break;
1106  default:
1107  Warning("Print", "Unknown option '%c'",opt[i]);
1108  break;
1109  }
1110  }
1111 
1112  if (printM||printS||printE) {
1113  std::cout << " Variable # " << std::flush;
1114  if (printM)
1115  std::cout << "| Mean Value " << std::flush;
1116  if (printS)
1117  std::cout << "| Sigma " << std::flush;
1118  if (printE)
1119  std::cout << "| Eigenvalue" << std::flush;
1120  std::cout << std::endl;
1121 
1122  std::cout << "-------------" << std::flush;
1123  if (printM)
1124  std::cout << "+------------" << std::flush;
1125  if (printS)
1126  std::cout << "+------------" << std::flush;
1127  if (printE)
1128  std::cout << "+------------" << std::flush;
1129  std::cout << std::endl;
1130 
1131  for (Int_t i = 0; i < fNumberOfVariables; i++) {
1132  std::cout << std::setw(12) << i << " " << std::flush;
1133  if (printM)
1134  std::cout << "| " << std::setw(10) << std::setprecision(4)
1135  << fMeanValues(i) << " " << std::flush;
1136  if (printS)
1137  std::cout << "| " << std::setw(10) << std::setprecision(4)
1138  << fSigmas(i) << " " << std::flush;
1139  if (printE)
1140  std::cout << "| " << std::setw(10) << std::setprecision(4)
1141  << fEigenValues(i) << " " << std::flush;
1142  std::cout << std::endl;
1143  }
1144  std::cout << std::endl;
1145  }
1146 
1147  if(printV) {
1148  for (Int_t i = 0; i < fNumberOfVariables; i++) {
1149  std::cout << "Eigenvector # " << i << std::flush;
1150  TVectorD v(fNumberOfVariables);
1151  v = TMatrixDColumn_const(fEigenVectors,i);
1152  v.Print();
1153  }
1154  }
1155 }
1156 
1157 ////////////////////////////////////////////////////////////////////////////////
1158 /// Calculates the sum of the square residuals, that is
1159 ///
1160 /// \f[
1161 /// E_N = \sum_{i=0}^{P-1} \left(x_i - x^\prime_i\right)^2
1162 /// \f]
1163 ///
1164 /// where \f$x^\prime_i = \sum_{j=0}^{N-1} p_j e_{ij}\f$ is the
1165 /// \f$i^{\mbox{th}}\f$ component of the vector reconstructed from the first
1166 /// \f$N\f$ principal components, corresponding to \f$x_i\f$, the original
1167 /// data; i.e., the squared distance to the space spanned by \f$N\f$ eigenvectors.
1168 
1169 void TPrincipal::SumOfSquareResiduals(const Double_t *x, Double_t *s)
1170 {
1171 
1172  if (!x)
1173  return;
1174 
1175  Double_t p[100];
1176  Double_t xp[100];
1177 
1178  X2P(x,p);
1179  for (Int_t i = fNumberOfVariables-1; i >= 0; i--) {
1180  P2X(p,xp,i);
1181  for (Int_t j = 0; j < fNumberOfVariables; j++) {
1182  s[i] += (x[j] - xp[j])*(x[j] - xp[j]);
1183  }
1184  }
1185 }
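A self-contained numeric check of the formula above, using a toy 2-variable setup (zero means, unit sigmas, identity eigenvectors, so the components p equal the data x; all names here are hypothetical, not part of TPrincipal):

```cpp
#include <cassert>
#include <cmath>

const int kNVar = 2;  // toy dimension

// Reconstruction from the first nTest components (mirrors P2X for the
// toy setup: zero mean, unit sigma, identity eigenvectors).
void toyP2X(const double *p, double *x, int nTest) {
  for (int i = 0; i < kNVar; i++) {
    x[i] = 0;                               // mean is zero here
    for (int j = 0; j < nTest; j++)
      x[i] += p[j] * (i == j ? 1.0 : 0.0);  // identity eigenvectors
  }
}

// E_n = sum_i (x_i - x'_i)^2, where x' uses the first n components.
double toySumOfSquareResiduals(const double *x, int n) {
  double xr[kNVar];
  toyP2X(x, xr, n);  // in this toy setup p = x
  double e = 0;
  for (int i = 0; i < kNVar; i++)
    e += (x[i] - xr[i]) * (x[i] - xr[i]);
  return e;
}
```

For x = (3, 4) this gives E_0 = 25, E_1 = 16, E_2 = 0: the residual shrinks as components are added and vanishes once all of them are used.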
1186 
1187 ////////////////////////////////////////////////////////////////////////////////
1188 /// Test the PCA by calculating the sum of square residuals
1189 /// (see method SumOfSquareResiduals), and display the histogram
1190 
1191 void TPrincipal::Test(Option_t *)
1192 {
1193  MakeHistograms("pca","S");
1194 
1195  if (!fStoreData)
1196  return;
1197 
1198  TH1 *pca_s = 0;
1199  if (fHistograms) pca_s = (TH1*)fHistograms->FindObject("pca_s");
1200  if (!pca_s) {
1201  Warning("Test", "Couldn't get histogram of square residuals");
1202  return;
1203  }
1204 
1205  pca_s->Draw();
1206 }
1207 
1208 ////////////////////////////////////////////////////////////////////////////////
1209 /// Calculate the principal components from the original data vector
1210 /// x, and return it in p.
1211 ///
1212 /// It is the user's responsibility to make sure that both x and p are
1213 /// of the right size (i.e., memory must be allocated for p).
1214 
1215 void TPrincipal::X2P(const Double_t *x, Double_t *p)
1216 {
1217  for (Int_t i = 0; i < fNumberOfVariables; i++){
1218  p[i] = 0;
1219  for (Int_t j = 0; j < fNumberOfVariables; j++)
1220  p[i] += (x[j] - fMeanValues(j)) * fEigenVectors(j,i) /
1221  (fIsNormalised ? fSigmas(j) : 1);
1222  }
1223 
1224 }
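X2P and P2X are inverses when all components are kept. A minimal ROOT-free round-trip sketch (hypothetical names; 2 variables with an orthonormal 45-degree eigenvector basis, plain `double` in place of `Double_t`):

```cpp
#include <cassert>
#include <cmath>

const int nVar = 2;
const double mean[]  = {1.0, 2.0};
const double sigma[] = {1.0, 1.0};
const double r = std::sqrt(0.5);
// Orthonormal eigenvectors: column i of e is the i-th eigenvector.
const double e[2][2] = {{r, -r},
                        {r,  r}};

// Mirrors X2P: p_i = sum_j (x_j - mean_j) * e(j,i) / sigma_j
void x2p(const double *x, double *p) {
  for (int i = 0; i < nVar; i++) {
    p[i] = 0;
    for (int j = 0; j < nVar; j++)
      p[i] += (x[j] - mean[j]) * e[j][i] / sigma[j];
  }
}

// Mirrors P2X: x_i = mean_i + sigma_i * sum_{j<nTest} p_j * e(i,j)
void p2x(const double *p, double *x, int nTest) {
  for (int i = 0; i < nVar; i++) {
    x[i] = mean[i];
    for (int j = 0; j < nTest; j++)
      x[i] += p[j] * sigma[i] * e[i][j];
  }
}
```

Because e is orthonormal, `p2x(x2p(x), nVar)` recovers x up to rounding; truncating `nTest` below `nVar` projects onto the leading eigenvectors instead.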