Towards the Evolution of Training Data Sets for Artificial Neural Networks


While most efforts in artificial neural network (ANN) research have gone into the investigation of network types, network topologies, various neuron types, and training algorithms, work on training data sets (TDSs) for ANNs has been comparatively limited. There are some approximations for the size of ANN TDSs, but little is known about the quality of TDSs, i.e., how to select data sets from which the ANN can extract the most information. In fact, for most real-world applications not even human experts familiar with the problem can give accurate guidelines for the construction of the TDS. In order to automate this process, we investigate the use of a genetic algorithm (GA) for the selection of appropriate input patterns for the TDS. The parallel netGEN system, which uses a GA to generate problem-adapted generalized multi-layer perceptrons trained by error back-propagation, has been extended to evolve (sub)optimal TDSs. Empirical results on a simple example problem are presented.
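To make the idea concrete, the following is a minimal, hypothetical sketch of GA-based training-set selection. It is not the netGEN system: the binary-mask encoding, truncation selection, one-point crossover, the validation-accuracy fitness, and the toy two-moons problem with a scikit-learn MLP are all illustrative assumptions, not details taken from the paper.

# Hypothetical sketch: evolve a subset of training patterns with a simple GA.
# Each individual is a binary mask over the candidate pattern pool; its fitness
# is the validation accuracy of a small MLP trained only on the selected patterns.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Candidate pattern pool and a held-out validation set (assumed setup).
X, y = make_moons(n_samples=400, noise=0.25, random_state=0)
X_pool, X_val, y_pool, y_val = train_test_split(X, y, test_size=0.5, random_state=0)

POP_SIZE, GENERATIONS, MUT_RATE = 20, 15, 0.02

def fitness(mask):
    """Train an MLP on the selected subset and score it on the validation set."""
    sel = mask.astype(bool)
    if sel.sum() < 10 or len(np.unique(y_pool[sel])) < 2:
        return 0.0                           # reject degenerate subsets
    net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500, random_state=0)
    net.fit(X_pool[sel], y_pool[sel])
    return net.score(X_val, y_val)

def crossover(a, b):
    """One-point crossover of two binary selection masks."""
    cut = rng.integers(1, len(a))
    return np.concatenate([a[:cut], b[cut:]])

def mutate(mask):
    """Flip each selection bit with a small probability."""
    flips = rng.random(len(mask)) < MUT_RATE
    return np.where(flips, 1 - mask, mask)

population = [rng.integers(0, 2, len(X_pool)) for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    scores = np.array([fitness(ind) for ind in population])
    order = np.argsort(scores)[::-1]
    parents = [population[i] for i in order[:POP_SIZE // 2]]   # truncation selection
    children = [mutate(crossover(parents[rng.integers(len(parents))],
                                 parents[rng.integers(len(parents))]))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children
    print(f"gen {gen:2d}  best accuracy {scores.max():.3f}  "
          f"patterns used {population[0].sum()}")

In this toy setting the fitness evaluation retrains a network from scratch for every individual; any real system (such as netGEN running in parallel) would distribute these independent evaluations across processors, since they dominate the runtime.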
Helmut A. Mayer <helmut@cosy.sbg.ac.at>