Now we try to make a neural network that realizes a logical AND. We need to include the NNFpp header and the standard C headers (<stdio.h> provides printf and fopen, which we use below):
#include "NNFpp.h"
#include <stdio.h>
#include <conio.h>
Then we write the body of the main function, but first we prepare a ".nnf.ts" file with the training set for the net. The file looks like this:
NNF - Neural Net Framework
__neural net training set file__
SET
in: 2
out: 1
examples: 4
0 0
0
1 0
0
0 1
0
1 1
1
Remember that you must respect the file's structure.
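If you prefer to generate the training-set file from code rather than writing it by hand, the layout can simply be reproduced with fprintf. A minimal sketch follows; the contents mirror exactly the structure shown above, and since the tutorial does not say how tolerant the parser is, the sketch reproduces that layout verbatim:
// Sketch: writing and.nnf.ts programmatically. The lines mirror the
// structure shown above: two header lines, SET, the in/out/examples
// counts, then one input line followed by one output line per example.
#include <stdio.h>

int main()
{
    FILE *ts = fopen("and.nnf.ts", "w");
    if (ts == NULL) return 1;
    fprintf(ts, "NNF - Neural Net Framework\n");
    fprintf(ts, "__neural net training set file__\n");
    fprintf(ts, "SET\n");
    fprintf(ts, "in: 2\n");
    fprintf(ts, "out: 1\n");
    fprintf(ts, "examples: 4\n");
    fprintf(ts, "0 0\n0\n");
    fprintf(ts, "1 0\n0\n");
    fprintf(ts, "0 1\n0\n");
    fprintf(ts, "1 1\n1\n");
    fclose(ts);
    return 0;
}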
Now the main function. We need one float array and one float pointer: the pointer out will receive the neural network outputs and the array in holds the input data:
float *out, in[2]={0,0};
The next step is to create the neural network:
int Neurons[3] = {2,4,1};
NNFpp *net;
net = new NNFpp(3,Neurons,1,NNF_SIGMA_LOGISTIC,0.6);
The structure of the net is three layers of two, four and one neurons, the activation function is the logistic sigmoid (NNF_SIGMA_LOGISTIC) and the learning rate is 0.6 (a sketch with a different topology is given at the end of this example). Now we train the net from the file:
FILE *f;
f = fopen("and.nnf.ts","r");
for (int i = 0; i < 5000; i++)   // 5000 training epochs
{
    net->TrainingSetFromFile(f);  // one pass over the training set
}
At this point the net is able to compute the logical AND, and we can test it with this code:
in[0] = 0;
in[1] = 0;
out = net->Execute(in);
printf("\n0 0 -> 0: %f",out[0]);
in[0] = 0;
in[1] = 1;
out = net->Execute(in);
printf("\n0 1 -> 0: %f",out[0]);
in[0] = 1;
in[1] = 0;
out = net->Execute(in);
printf("\n1 0 -> 0: %f",out[0]);
in[0] = 1;
in[1] = 1;
out = net->Execute(in);
printf("\n1 1 -> 1: %f\n",out[0]);
Finally we close the training-set file, free the memory and terminate the program:
fclose(f);
delete net;
The complete source of this program is in the examples/Tutorial/And directory.
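Before moving on, note that the same constructor call can describe other topologies: the first argument is the number of layers and the second is the array of per-layer neuron counts. A minimal sketch for a hypothetical four-layer 2-8-8-1 net, keeping the remaining constructor arguments exactly as in the example above (their precise meaning is taken from that example, not specified here):
// Sketch: a four-layer 2-8-8-1 net. Only the layer count and the neuron
// array change; the other arguments are copied from the AND example.
int DeepNeurons[4] = {2, 8, 8, 1};
NNFpp *deepNet = new NNFpp(4, DeepNeurons, 1, NNF_SIGMA_LOGISTIC, 0.6);
// ... train and use it exactly as above ...
delete deepNet;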
Now we try to make a neural network that realizes a logical XOR. Once xor.nnf.ts has been written (a sketch of such a file is given below), the structure of the main.cpp file is similar to the previous one; the only difference is in the training loop:
int Epoch = 0;
float Acc = 0;
do
{
    net->Errors->CumulativeError = 0;               // reset the error before each epoch
    net->TrainingSetFromFile(f);                    // one pass over the training set
    Epoch++;
    Acc = 100*net->Errors->CumulativeAccuracy/4;    // 4 examples in the training set
    printf("Epoch: %d Error: %f Accuracy: %f%%\n", Epoch, net->Errors->CumulativeError, Acc);
} while ((net->Errors->CumulativeError > 0.01) && (Acc <= 90));
In this case we use the error-tracking object net->Errors to monitor the error during learning and stop as soon as the cumulative error drops below 0.01 or the accuracy exceeds 90%.
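For reference, an xor.nnf.ts file following the same structure as and.nnf.ts would look like this (header and counts as in the format shown earlier, outputs following the XOR truth table):
NNF - Neural Net Framework
__neural net training set file__
SET
in: 2
out: 1
examples: 4
0 0
0
1 0
1
0 1
1
1 1
0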
The complete source of this program is in the examples/Tutorial/Xor directory.
Iris is a dataset often used to test neural networks (the complete dataset, with names and background, is available at http://www.ics.uci.edu/~mlearn/databases/). This is a very simple example that shows how to use the functions TrainingSetFromFile(FILE *TrainingSet, int seed) and TestOnFile(FILE *test, FILE *output).
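The tutorial only names these two member functions, so the following is a rough sketch of how such a program might look; the topology (4 inputs for the 4 features, 3 outputs for the 3 classes), the hidden-layer size, the file names, the epoch count and the interpretation of the seed argument are all assumptions, not part of the framework documentation:
// Sketch only: file names, topology, epoch count and the use of the seed
// argument are assumptions; the two member functions and their signatures
// are the ones named in the text above.
#include <stdio.h>
#include "NNFpp.h"

int main()
{
    int Neurons[3] = {4, 6, 3};               // 4 features in, 3 classes out (assumed encoding)
    NNFpp *net = new NNFpp(3, Neurons, 1, NNF_SIGMA_LOGISTIC, 0.6);

    FILE *train = fopen("iris.nnf.ts", "r");  // hypothetical file names
    FILE *test = fopen("iris.test", "r");
    FILE *results = fopen("iris.out", "w");
    if (train == NULL || test == NULL || results == NULL) return 1;

    for (int i = 0; i < 1000; i++)
        net->TrainingSetFromFile(train, i);   // seed assumed to control example shuffling

    net->TestOnFile(test, results);           // write the test results to the output file

    fclose(train);
    fclose(test);
    fclose(results);
    delete net;
    return 0;
}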
The complete source of this program is in the examples/Tutorial/Iris directory.