ROOT Reference Guide
MethodCFMlpANN.cxx
1// @(#)root/tmva $Id$
2// Author: Andreas Hoecker, Joerg Stelzer, Helge Voss, Kai Voss
3
4/**********************************************************************************
5 * Project: TMVA - a Root-integrated toolkit for multivariate Data analysis *
6 * Package: TMVA *
7 * Class : TMVA::MethodCFMlpANN *
8 * Web : http://tmva.sourceforge.net *
9 * *
10 * Description: *
11 * Implementation (see header for description) *
12 * *
13 * Authors (alphabetical): *
14 * Andreas Hoecker <Andreas.Hocker@cern.ch> - CERN, Switzerland *
15 * Xavier Prudent <prudent@lapp.in2p3.fr> - LAPP, France *
16 * Helge Voss <Helge.Voss@cern.ch> - MPI-K Heidelberg, Germany *
17 * Kai Voss <Kai.Voss@cern.ch> - U. of Victoria, Canada *
18 * *
19 * Copyright (c) 2005: *
20 * CERN, Switzerland *
21 * U. of Victoria, Canada *
22 * MPI-K Heidelberg, Germany *
23 * LAPP, Annecy, France *
24 * *
25 * Redistribution and use in source and binary forms, with or without *
26 * modification, are permitted according to the terms listed in LICENSE *
27 * (http://tmva.sourceforge.net/LICENSE) *
28 **********************************************************************************/
29
30/*! \class TMVA::MethodCFMlpANN
31\ingroup TMVA
32
33Interface to the Clermont-Ferrand artificial neural network
34
35
36The CFMlpANN belongs to the class of Multilayer Perceptrons (MLP), which are
37feed-forward networks according to the following propagation schema:
38
39\image html tmva_mlp.png Schema for artificial neural network.
40
41The input layer contains as many neurons as input variables used in the MVA.
42The output layer contains two neurons for the signal and background
43event classes. In between the input and output layers are a variable number
44of <i>k</i> hidden layers with arbitrary numbers of neurons. (While the
45structure of the input and output layers is determined by the problem, the
46hidden layers can be configured by the user through the option string
47of the method booking.)
48
49As indicated in the sketch, all neuron inputs to a layer are linear
50combinations of the neuron outputs of the previous layer. The transfer
51from input to output within a neuron is performed by means of an "activation
52function". In general, the activation function of a neuron can be
53zero (deactivated), one (linear), or non-linear. The above example uses
54a sigmoid activation function. The transfer function of the output layer
55is usually linear. As a consequence, an ANN without a hidden layer should
56give the same discrimination power as a linear discriminant analysis (Fisher).
57With one hidden layer, the ANN computes a linear combination of
58sigmoid functions.
59
60The only learning method available for the CFMlpANN is stochastic training.
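
For example, the method can be booked through the TMVA Factory roughly as
follows (a minimal sketch: the factory and dataloader objects as well as the
chosen option values are only illustrative):

~~~ {.cpp}
factory->BookMethod( dataloader, TMVA::Types::kCFMlpANN, "CFMlpANN",
                     "!H:!V:NCycles=2000:HiddenLayers=N,N-1" );
~~~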
61*/
62
63
64#include "TMVA/MethodCFMlpANN.h"
65
66#include "TMVA/ClassifierFactory.h"
67#include "TMVA/Configurable.h"
68#include "TMVA/DataSet.h"
69#include "TMVA/DataSetInfo.h"
70#include "TMVA/IMethod.h"
71#include "TMVA/MethodBase.h"
72#include "TMVA/MethodCFMlpANN_def.h"
73#include "TMVA/MsgLogger.h"
74#include "TMVA/Tools.h"
75#include "TMVA/Types.h"
76
77#include "TMatrix.h"
78#include "TMath.h"
79
80#include <cstdlib>
81#include <iostream>
82#include <string>
83
84
85
86REGISTER_METHOD(CFMlpANN)
87
88using std::stringstream;
89using std::make_pair;
90using std::atoi;
91
92ClassImp(TMVA::MethodCFMlpANN);
93
94
95
96////////////////////////////////////////////////////////////////////////////////
97/// standard constructor
98///
99/// option string: "n_training_cycles:n_hidden_layers"
100///
101/// default is: n_training_cycles = 5000, n_layers = 4
102///
103/// * note that the number of hidden layers in the NN is
104/// n_hidden_layers = n_layers - 2,
105/// since there is one input and one output layer.
106///
107/// * the number of nodes (neurons) is predefined to be:
108///
109/// n_nodes[i] = nvars + 1 - i (where i=1..n_layers)
110///
111/// with nvars being the number of variables used in the NN.
112///
113/// Hence, the default case is:
114///
115/// n_neurons(layer 1 (input)) : nvars
116/// n_neurons(layer 2 (hidden)): nvars-1
117/// n_neurons(layer 3 (hidden)): nvars-1
118/// n_neurons(layer 4 (out)) : 2
119///
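/// For example, with nvars = 8 input variables the default network has
/// 8 : 7 : 7 : 2 neurons from the input to the output layer.
///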
120/// This artificial neural network usually needs a relatively large
121/// number of cycles to converge (8000 and more). Overtraining can
122/// be efficiently tested by comparing the signal and background
123/// output of the NN for the events that were used for training and
124/// an independent data sample (with equal properties). If the separation
125/// performance is significantly better for the training sample, the
126/// NN has learned statistical fluctuations and is hence overtrained. In
127/// this case, the number of cycles should be reduced, or the size
128/// of the training sample increased.
129
130TMVA::MethodCFMlpANN::MethodCFMlpANN( const TString& jobName,
131 const TString& methodTitle,
132 DataSetInfo& theData,
133 const TString& theOption ) :
134 TMVA::MethodBase( jobName, Types::kCFMlpANN, methodTitle, theData, theOption),
135 fData(0),
136 fClass(0),
137 fNlayers(0),
138 fNcycles(0),
139 fNodes(0),
140 fYNN(0),
141 MethodCFMlpANN_nsel(0)
142{
144}
145
146////////////////////////////////////////////////////////////////////////////////
147/// constructor from weight file
148
149TMVA::MethodCFMlpANN::MethodCFMlpANN( DataSetInfo& theData,
150 const TString& theWeightFile):
151 TMVA::MethodBase( Types::kCFMlpANN, theData, theWeightFile),
152 fData(0),
153 fClass(0),
154 fNlayers(0),
155 fNcycles(0),
156 fNodes(0),
157 fYNN(0),
158 MethodCFMlpANN_nsel(0)
159{
160}
161
162////////////////////////////////////////////////////////////////////////////////
163/// CFMlpANN can handle classification with 2 classes
164
165Bool_t TMVA::MethodCFMlpANN::HasAnalysisType( Types::EAnalysisType type, UInt_t numberClasses, UInt_t /*numberTargets*/ )
166{
167 if (type == Types::kClassification && numberClasses == 2) return kTRUE;
168 return kFALSE;
169}
170
171////////////////////////////////////////////////////////////////////////////////
172/// define the options (their key words) that can be set in the option string
173/// known options: NCycles=xx : the number of training cycles
174/// HiddenLayers="N-1,N-2" : the specification of the hidden layers
175
176void TMVA::MethodCFMlpANN::DeclareOptions()
177{
178 DeclareOptionRef( fNcycles =3000, "NCycles", "Number of training cycles" );
179 DeclareOptionRef( fLayerSpec="N,N-1", "HiddenLayers", "Specification of hidden layer architecture" );
180}
181
182////////////////////////////////////////////////////////////////////////////////
183/// decode the options in the option string
184
185void TMVA::MethodCFMlpANN::ProcessOptions()
186{
187 fNodes = new Int_t[20]; // number of nodes per layer (maximum 20 layers)
188 fNlayers = 2;
189 Int_t currentHiddenLayer = 1;
190 TString layerSpec(fLayerSpec);
191 while(layerSpec.Length()>0) {
192 TString sToAdd = "";
193 if (layerSpec.First(',')<0) {
194 sToAdd = layerSpec;
195 layerSpec = "";
196 }
197 else {
198 sToAdd = layerSpec(0,layerSpec.First(','));
199 layerSpec = layerSpec(layerSpec.First(',')+1,layerSpec.Length());
200 }
201 Int_t nNodes = 0;
202 if (sToAdd.BeginsWith("N") || sToAdd.BeginsWith("n")) { sToAdd.Remove(0,1); nNodes = GetNvar(); }
203 nNodes += atoi(sToAdd);
204 fNodes[currentHiddenLayer++] = nNodes;
205 fNlayers++;
206 }
207 fNodes[0] = GetNvar(); // number of input nodes
208 fNodes[fNlayers-1] = 2; // number of output nodes
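   // example: with GetNvar()==5 and the default fLayerSpec=="N,N-1" the loop
   // above yields fNodes = {5, 5, 4, 2}, i.e. input, two hidden layers, output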
209
210 if (IgnoreEventsWithNegWeightsInTraining()) {
211 Log() << kFATAL << "Mechanism to ignore events with negative weights in training not yet available for method: "
212 << GetMethodTypeName()
213 << " --> please remove \"IgnoreNegWeightsInTraining\" option from booking string."
214 << Endl;
215 }
216
217 Log() << kINFO << "Use configuration (nodes per layer): in=";
218 for (Int_t i=0; i<fNlayers-1; i++) Log() << kINFO << fNodes[i] << ":";
219 Log() << kINFO << fNodes[fNlayers-1] << "=out" << Endl;
220
221 // some info
222 Log() << "Use " << fNcycles << " training cycles" << Endl;
223
224 Int_t nEvtTrain = Data()->GetNTrainingEvents();
225
226 // note that one variable is type
227 if (nEvtTrain>0) {
228
229 // Data LUT
230 fData = new TMatrix( nEvtTrain, GetNvar() );
231 fClass = new std::vector<Int_t>( nEvtTrain );
232
233 // ---- fill LUTs
234
235 UInt_t ivar;
236 for (Int_t ievt=0; ievt<nEvtTrain; ievt++) {
237 const Event * ev = GetEvent(ievt);
238
239 // identify signal and background events
240 (*fClass)[ievt] = DataInfo().IsSignal(ev) ? 1 : 2;
241
242 // use normalized input Data
243 for (ivar=0; ivar<GetNvar(); ivar++) {
244 (*fData)( ievt, ivar ) = ev->GetValue(ivar);
245 }
246 }
247
248 //Log() << kVERBOSE << Data()->GetNEvtSigTrain() << " Signal and "
249 // << Data()->GetNEvtBkgdTrain() << " background" << " events in trainingTree" << Endl;
250 }
251
252}
253
254////////////////////////////////////////////////////////////////////////////////
255/// default initialisation called by all constructors
256
257void TMVA::MethodCFMlpANN::Init( void )
258{
259 // CFMlpANN prefers normalised input variables
260 SetNormalised( kTRUE );
261
262 // initialize dimensions
263 MethodCFMlpANN_nsel = 0;
264}
265
266////////////////////////////////////////////////////////////////////////////////
267/// destructor
268
269TMVA::MethodCFMlpANN::~MethodCFMlpANN( void )
270{
271 delete fData;
272 delete fClass;
273 delete[] fNodes;
274
275 if (fYNN!=0) {
276 for (Int_t i=0; i<fNlayers; i++) delete[] fYNN[i];
277 delete[] fYNN;
278 fYNN=0;
279 }
280}
281
282////////////////////////////////////////////////////////////////////////////////
283/// training of the Clermont-Ferrand NN classifier
284
285void TMVA::MethodCFMlpANN::Train( void )
286{
287 Double_t dumDat(0);
288 Int_t ntrain(Data()->GetNTrainingEvents());
289 Int_t ntest(0);
290 Int_t nvar(GetNvar());
291 Int_t nlayers(fNlayers);
292 Int_t *nodes = new Int_t[nlayers];
293 Int_t ncycles(fNcycles);
294
295 for (Int_t i=0; i<nlayers; i++) nodes[i] = fNodes[i]; // full copy of class member
296
297 if (fYNN != 0) {
298 for (Int_t i=0; i<fNlayers; i++) delete[] fYNN[i];
299 delete[] fYNN;
300 fYNN = 0;
301 }
302 fYNN = new Double_t*[nlayers];
303 for (Int_t layer=0; layer<nlayers; layer++)
304 fYNN[layer] = new Double_t[fNodes[layer]];
305
306 // please check
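   // Train_nn is the training routine taken over from the original Clermont-Ferrand
   // package (an f2c translation of the Fortran code); it is not built on Windows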
307#ifndef R__WIN32
308 Train_nn( &dumDat, &dumDat, &ntrain, &ntest, &nvar, &nlayers, nodes, &ncycles );
309#else
310 Log() << kWARNING << "<Train> sorry CFMlpANN does not run on Windows" << Endl;
311#endif
312
313 delete [] nodes;
314
315 ExitFromTraining();
316}
317
318////////////////////////////////////////////////////////////////////////////////
319/// returns CFMlpANN output (normalised within [0,1])
320
321Double_t TMVA::MethodCFMlpANN::GetMvaValue( Double_t* err, Double_t* errUpper )
322{
323 Bool_t isOK = kTRUE;
324
325 const Event* ev = GetEvent();
326
327 // copy of input variables
328 std::vector<Double_t> inputVec( GetNvar() );
329 for (UInt_t ivar=0; ivar<GetNvar(); ivar++) inputVec[ivar] = ev->GetValue(ivar);
330
331 Double_t myMVA = EvalANN( inputVec, isOK );
332 if (!isOK) Log() << kFATAL << "EvalANN returns (!isOK) for event " << Endl;
333
334 // cannot determine error
335 NoErrorCalc(err, errUpper);
336
337 return myMVA;
338}
339
340////////////////////////////////////////////////////////////////////////////////
341/// evaluates NN value as function of input variables
342
343Double_t TMVA::MethodCFMlpANN::EvalANN( std::vector<Double_t>& inVar, Bool_t& isOK )
344{
345 // hard copy of input variables (necessary because they are updated later)
346 Double_t* xeev = new Double_t[GetNvar()];
347 for (UInt_t ivar=0; ivar<GetNvar(); ivar++) xeev[ivar] = inVar[ivar];
348
349 // ---- now apply the weights: get NN output
350 isOK = kTRUE;
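   // clamp each input to its training range and map it linearly onto [-1,+1]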
351 for (UInt_t jvar=0; jvar<GetNvar(); jvar++) {
352
353 if (fVarn_1.xmax[jvar] < xeev[jvar]) xeev[jvar] = fVarn_1.xmax[jvar];
354 if (fVarn_1.xmin[jvar] > xeev[jvar]) xeev[jvar] = fVarn_1.xmin[jvar];
355 if (fVarn_1.xmax[jvar] == fVarn_1.xmin[jvar]) {
356 isOK = kFALSE;
357 xeev[jvar] = 0;
358 }
359 else {
360 xeev[jvar] = xeev[jvar] - ((fVarn_1.xmax[jvar] + fVarn_1.xmin[jvar])/2);
361 xeev[jvar] = xeev[jvar] / ((fVarn_1.xmax[jvar] - fVarn_1.xmin[jvar])/2);
362 }
363 }
364
365 NN_ava( xeev );
366
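   // map the network response from [-1,+1] onto the conventional MVA range [0,1]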
367 Double_t retval = 0.5*(1.0 + fYNN[fParam_1.layerm-1][0]);
368
369 delete [] xeev;
370
371 return retval;
372}
373
374////////////////////////////////////////////////////////////////////////////////
375/// auxiliary functions
376
377void TMVA::MethodCFMlpANN::NN_ava( Double_t* xeev )
378{
379 for (Int_t ivar=0; ivar<fNeur_1.neuron[0]; ivar++) fYNN[0][ivar] = xeev[ivar];
380
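   // forward propagation: each neuron gets the bias term plus the weighted sum
   // of the previous layer's outputs, passed through the activation function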
381 for (Int_t layer=1; layer<fParam_1.layerm; layer++) {
382 for (Int_t j=1; j<=fNeur_1.neuron[layer]; j++) {
383
384 Double_t x = Ww_ref(fNeur_1.ww, layer+1,j); // init with the bias layer
385
386 for (Int_t k=1; k<=fNeur_1.neuron[layer-1]; k++) { // neurons of originating layer
387 x += fYNN[layer-1][k-1]*W_ref(fNeur_1.w, layer+1, j, k);
388 }
389 fYNN[layer][j-1] = NN_fonc( layer, x );
390 }
391 }
392}
393
394////////////////////////////////////////////////////////////////////////////////
395/// activation function
396
397Double_t TMVA::MethodCFMlpANN::NN_fonc( Int_t i, Double_t u ) const
398{
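   // sigmoid-like response, equivalent to tanh(u/(2*temp)); the result is
   // clipped to +-1 when the scaled argument becomes numerically large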
399 Double_t f(0);
400
401 if (u/fDel_1.temp[i] > 170) f = +1;
402 else if (u/fDel_1.temp[i] < -170) f = -1;
403 else {
404 Double_t yy = TMath::Exp(-u/fDel_1.temp[i]);
405 f = (1 - yy)/(1 + yy);
406 }
407
408 return f;
409}
410
411////////////////////////////////////////////////////////////////////////////////
412/// read back the weights from the training file (stream)
413
414void TMVA::MethodCFMlpANN::ReadWeightsFromStream( std::istream & istr )
415{
416 TString var;
417
418 // read number of variables and classes
419 UInt_t nva(0), lclass(0);
420 istr >> nva >> lclass;
421
422 if (GetNvar() != nva) // wrong file
423 Log() << kFATAL << "<ReadWeightsFromFile> mismatch in number of variables" << Endl;
424
425 // number of output classes must be 2
426 if (lclass != 2) // wrong file
427 Log() << kFATAL << "<ReadWeightsFromFile> mismatch in number of classes" << Endl;
428
429 // check that we are not at the end of the file
430 if (istr.eof( ))
431 Log() << kFATAL << "<ReadWeightsFromStream> reached EOF prematurely " << Endl;
432
433 // read extrema of input variables
434 for (UInt_t ivar=0; ivar<GetNvar(); ivar++)
435 istr >> fVarn_1.xmax[ivar] >> fVarn_1.xmin[ivar];
436
437 // read number of layers (sum of: input + output + hidden)
438 istr >> fParam_1.layerm;
439
440 if (fYNN != 0) {
441 for (Int_t i=0; i<fNlayers; i++) delete[] fYNN[i];
442 delete[] fYNN;
443 fYNN = 0;
444 }
445 fYNN = new Double_t*[fParam_1.layerm];
446 for (Int_t layer=0; layer<fParam_1.layerm; layer++) {
447 // read number of neurons for each layer
448 // coverity[tainted_data_argument]
449 istr >> fNeur_1.neuron[layer];
450 fYNN[layer] = new Double_t[fNeur_1.neuron[layer]];
451 }
452
453 // to read dummy lines
454 const Int_t nchar( 100 );
455 char* dumchar = new char[nchar];
456
457 // read weights
458 for (Int_t layer=1; layer<=fParam_1.layerm-1; layer++) {
459
460 Int_t nq = fNeur_1.neuron[layer]/10;
461 Int_t nr = fNeur_1.neuron[layer] - nq*10;
462
463 Int_t kk(0);
464 if (nr==0) kk = nq;
465 else kk = nq+1;
466
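      // the weights of a layer are stored in chunks of at most 10 neurons per
      // text line; kk is the number of such chunks to read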
467 for (Int_t k=1; k<=kk; k++) {
468 Int_t jmin = 10*k - 9;
469 Int_t jmax = 10*k;
470 if (fNeur_1.neuron[layer]<jmax) jmax = fNeur_1.neuron[layer];
471 for (Int_t j=jmin; j<=jmax; j++) {
472 istr >> Ww_ref(fNeur_1.ww, layer+1, j);
473 }
474 for (Int_t i=1; i<=fNeur_1.neuron[layer-1]; i++) {
475 for (Int_t j=jmin; j<=jmax; j++) {
476 istr >> W_ref(fNeur_1.w, layer+1, j, i);
477 }
478 }
479 // skip two empty lines
480 istr.getline( dumchar, nchar );
481 }
482 }
483
484 for (Int_t layer=0; layer<fParam_1.layerm; layer++) {
485
486 // skip 2 empty lines
487 istr.getline( dumchar, nchar );
488 istr.getline( dumchar, nchar );
489
490 istr >> fDel_1.temp[layer];
491 }
492
493 // sanity check
494 if ((Int_t)GetNvar() != fNeur_1.neuron[0]) {
495 Log() << kFATAL << "<ReadWeightsFromFile> mismatch in zeroth layer:"
496 << GetNvar() << " " << fNeur_1.neuron[0] << Endl;
497 }
498
499 fNlayers = fParam_1.layerm;
500 delete[] dumchar;
501}
502
503////////////////////////////////////////////////////////////////////////////////
504/// data interface function
505
506Int_t TMVA::MethodCFMlpANN::DataInterface( Double_t* /*tout2*/, Double_t* /*tin2*/,
507 Int_t* /* icode*/, Int_t* /*flag*/,
508 Int_t* /*nalire*/, Int_t* nvar,
509 Double_t* xpg, Int_t* iclass, Int_t* ikend )
510{
511 // icode and ikend are dummies needed to match f2c mlpl3 functions
512 *ikend = 0;
513
514
515 // sanity checks
516 if (0 == xpg) {
517 Log() << kFATAL << "ERROR in MethodCFMlpANN_DataInterface zero pointer xpg" << Endl;
518 }
519 if (*nvar != (Int_t)this->GetNvar()) {
520 Log() << kFATAL << "ERROR in MethodCFMlpANN_DataInterface mismatch in num of variables: "
521 << *nvar << " " << this->GetNvar() << Endl;
522 }
523
524 // fill variables
525 *iclass = (int)this->GetClass( MethodCFMlpANN_nsel );
526 for (UInt_t ivar=0; ivar<this->GetNvar(); ivar++)
527 xpg[ivar] = (double)this->GetData( MethodCFMlpANN_nsel, ivar );
528
529 ++MethodCFMlpANN_nsel;
530
531 return 0;
532}
533
534////////////////////////////////////////////////////////////////////////////////
535/// write weights to xml file
536
537void TMVA::MethodCFMlpANN::AddWeightsXMLTo( void* parent ) const
538{
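   // layout of the "Weights" node: NVars/NClasses/NLayers attributes, a "VarMinMax"
   // child with the input-variable ranges, an "NNeurons" child with the neurons per
   // layer, one "LayerN" child per layer holding each neuron's bias and weights,
   // and a "LayerTemp" child with the layer temperatures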
539 void *wght = gTools().AddChild(parent, "Weights");
540 gTools().AddAttr(wght,"NVars",fParam_1.nvar);
541 gTools().AddAttr(wght,"NClasses",fParam_1.lclass);
542 gTools().AddAttr(wght,"NLayers",fParam_1.layerm);
543 void* minmaxnode = gTools().AddChild(wght, "VarMinMax");
544 stringstream s;
545 s.precision( 16 );
546 for (Int_t ivar=0; ivar<fParam_1.nvar; ivar++)
547 s << std::scientific << fVarn_1.xmin[ivar] << " " << fVarn_1.xmax[ivar] << " ";
548 gTools().AddRawLine( minmaxnode, s.str().c_str() );
549 void* neurons = gTools().AddChild(wght, "NNeurons");
550 stringstream n;
551 n.precision( 16 );
552 for (Int_t layer=0; layer<fParam_1.layerm; layer++)
553 n << std::scientific << fNeur_1.neuron[layer] << " ";
554 gTools().AddRawLine( neurons, n.str().c_str() );
555 for (Int_t layer=1; layer<fParam_1.layerm; layer++) {
556 void* layernode = gTools().AddChild(wght, "Layer"+gTools().StringFromInt(layer));
557 gTools().AddAttr(layernode,"NNeurons",fNeur_1.neuron[layer]);
558 void* neuronnode=NULL;
559 for (Int_t neuron=0; neuron<fNeur_1.neuron[layer]; neuron++) {
560 neuronnode = gTools().AddChild(layernode,"Neuron"+gTools().StringFromInt(neuron));
561 stringstream weights;
562 weights.precision( 16 );
563 weights << std::scientific << Ww_ref(fNeur_1.ww, layer+1, neuron+1);
564 for (Int_t i=0; i<fNeur_1.neuron[layer-1]; i++) {
565 weights << " " << std::scientific << W_ref(fNeur_1.w, layer+1, neuron+1, i+1);
566 }
567 gTools().AddRawLine( neuronnode, weights.str().c_str() );
568 }
569 }
570 void* tempnode = gTools().AddChild(wght, "LayerTemp");
571 stringstream temp;
572 temp.precision( 16 );
573 for (Int_t layer=0; layer<fParam_1.layerm; layer++) {
574 temp << std::scientific << fDel_1.temp[layer] << " ";
575 }
576 gTools().AddRawLine(tempnode, temp.str().c_str() );
577}
578////////////////////////////////////////////////////////////////////////////////
579/// read weights from xml file
580
581void TMVA::MethodCFMlpANN::ReadWeightsFromXML( void* wghtnode )
582{
583 gTools().ReadAttr( wghtnode, "NLayers",fParam_1.layerm );
584 void* minmaxnode = gTools().GetChild(wghtnode);
585 const char* minmaxcontent = gTools().GetContent(minmaxnode);
586 stringstream content(minmaxcontent);
587 for (UInt_t ivar=0; ivar<GetNvar(); ivar++)
588 content >> fVarn_1.xmin[ivar] >> fVarn_1.xmax[ivar];
589 if (fYNN != 0) {
590 for (Int_t i=0; i<fNlayers; i++) delete[] fYNN[i];
591 delete[] fYNN;
592 fYNN = 0;
593 }
594 fYNN = new Double_t*[fParam_1.layerm];
595 void *layernode=gTools().GetNextChild(minmaxnode);
596 const char* neuronscontent = gTools().GetContent(layernode);
597 stringstream ncontent(neuronscontent);
598 for (Int_t layer=0; layer<fParam_1.layerm; layer++) {
599 // read number of neurons for each layer;
600 // coverity[tainted_data_argument]
601 ncontent >> fNeur_1.neuron[layer];
602 fYNN[layer] = new Double_t[fNeur_1.neuron[layer]];
603 }
604 for (Int_t layer=1; layer<fParam_1.layerm; layer++) {
605 layernode=gTools().GetNextChild(layernode);
606 void* neuronnode=NULL;
607 neuronnode = gTools().GetChild(layernode);
608 for (Int_t neuron=0; neuron<fNeur_1.neuron[layer]; neuron++) {
609 const char* neuronweights = gTools().GetContent(neuronnode);
610 stringstream weights(neuronweights);
611 weights >> Ww_ref(fNeur_1.ww, layer+1, neuron+1);
612 for (Int_t i=0; i<fNeur_1.neuron[layer-1]; i++) {
613 weights >> W_ref(fNeur_1.w, layer+1, neuron+1, i+1);
614 }
615 neuronnode=gTools().GetNextChild(neuronnode);
616 }
617 }
618 void* tempnode=gTools().GetNextChild(layernode);
619 const char* temp = gTools().GetContent(tempnode);
620 stringstream t(temp);
621 for (Int_t layer=0; layer<fParam_1.layerm; layer++) {
622 t >> fDel_1.temp[layer];
623 }
624 fNlayers = fParam_1.layerm;
625}
626
627////////////////////////////////////////////////////////////////////////////////
628/// write the weights of the neural net
629
630void TMVA::MethodCFMlpANN::PrintWeights( std::ostream & o ) const
631{
632 // write number of variables and classes
633 o << "Number of vars " << fParam_1.nvar << std::endl;
634 o << "Output nodes " << fParam_1.lclass << std::endl;
635
636 // write extrema of input variables
637 for (Int_t ivar=0; ivar<fParam_1.nvar; ivar++)
638 o << "Var " << ivar << " [" << fVarn_1.xmin[ivar] << " - " << fVarn_1.xmax[ivar] << "]" << std::endl;
639
640 // write number of layers (sum of: input + output + hidden)
641 o << "Number of layers " << fParam_1.layerm << std::endl;
642
643 o << "Nodes per layer ";
644 for (Int_t layer=0; layer<fParam_1.layerm; layer++)
645 // write number of neurons for each layer
646 o << fNeur_1.neuron[layer] << " ";
647 o << std::endl;
648
649 // write weights
650 for (Int_t layer=1; layer<=fParam_1.layerm-1; layer++) {
651
652 Int_t nq = fNeur_1.neuron[layer]/10;
653 Int_t nr = fNeur_1.neuron[layer] - nq*10;
654
655 Int_t kk(0);
656 if (nr==0) kk = nq;
657 else kk = nq+1;
658
659 for (Int_t k=1; k<=kk; k++) {
660 Int_t jmin = 10*k - 9;
661 Int_t jmax = 10*k;
662 Int_t i, j;
663 if (fNeur_1.neuron[layer]<jmax) jmax = fNeur_1.neuron[layer];
664 for (j=jmin; j<=jmax; j++) {
665
666 //o << fNeur_1.ww[j*max_nLayers_ + layer - 6] << " ";
667 o << Ww_ref(fNeur_1.ww, layer+1, j) << " ";
668
669 }
670 o << std::endl;
671 //for (i=1; i<=fNeur_1.neuron[layer-1]; i++) {
672 for (i=1; i<=fNeur_1.neuron[layer-1]; i++) {
673 for (j=jmin; j<=jmax; j++) {
674 // o << fNeur_1.w[(i*max_nNodes_ + j)*max_nLayers_ + layer - 186] << " ";
675 o << W_ref(fNeur_1.w, layer+1, j, i) << " ";
676 }
677 o << std::endl;
678 }
679
680 // skip two empty lines
681 o << std::endl;
682 }
683 }
684 for (Int_t layer=0; layer<fParam_1.layerm; layer++) {
685 o << "Del.temp in layer " << layer << " : " << fDel_1.temp[layer] << std::endl;
686 }
687}
688
689////////////////////////////////////////////////////////////////////////////////
690
691void TMVA::MethodCFMlpANN::MakeClassSpecific( std::ostream& fout, const TString& className ) const
692{
693 // write specific classifier response
694 fout << " // not implemented for class: \"" << className << "\"" << std::endl;
695 fout << "};" << std::endl;
696}
697
698////////////////////////////////////////////////////////////////////////////////
699/// write specific classifier response for header
700
701void TMVA::MethodCFMlpANN::MakeClassSpecificHeader( std::ostream& , const TString& ) const
702{
703}
704
705////////////////////////////////////////////////////////////////////////////////
706/// get help message text
707///
708/// typical length of text line:
709/// "|--------------------------------------------------------------|"
710
711void TMVA::MethodCFMlpANN::GetHelpMessage() const
712{
713 Log() << Endl;
714 Log() << gTools().Color("bold") << "--- Short description:" << gTools().Color("reset") << Endl;
715 Log() << Endl;
716 Log() << "<None>" << Endl;
717 Log() << Endl;
718 Log() << gTools().Color("bold") << "--- Performance optimisation:" << gTools().Color("reset") << Endl;
719 Log() << Endl;
720 Log() << "<None>" << Endl;
721 Log() << Endl;
722 Log() << gTools().Color("bold") << "--- Performance tuning via configuration options:" << gTools().Color("reset") << Endl;
723 Log() << Endl;
724 Log() << "<None>" << Endl;
725}