JASIC Volume. 2, Issue 1 (2021)


Adeleke Raheem Ajiboye, Fatima Usman-Hamza, Muyideen AbdulRaheem, Chinecherem Umezuruike


Keywords: Linear classification; Elman backpropagation; Cascade feed-forward; Machine learning; Neural networks; Mean Absolute Error


Performance Evaluation of Classifiers Created using Elman Back-Propagation and Cascade Feed-forward Neural Networks



The huge volumes of data captured in our day-to-day activities are mostly imbalanced. Such data therefore call for fast, accurate and robust techniques through which they can be analysed in order to fast-track early decision making. Cascade Feed-forward Neural Networks and Elman Backpropagation are well-known techniques in the neural network domain, and their efficacy is tested on separable data in this study. The objective of this study is to evaluate the performance of these techniques in solving a linear classification problem. Linear classification of data involves splitting separable data into two distinct clusters. To achieve the goal of this study, linear classifiers were created using the two aforementioned techniques. Both network structures were exposed to the same dataset, and similar parameter configurations were set for each technique. The model produced by each technique was then simulated using a set of untrained data. To determine the accuracy of each model, its Mean Absolute Error (MAE) was computed, and the performance of each model was judged on this value. The error computation for the simulated output reveals that the cascade neural network gives an error of 0.0928, while the model created using the Elman Backpropagation network gives a relatively lower error of 0.0661. It can be inferred from this study that both techniques are capable of fitting accurate classifiers to a dataset and, specifically, that both are very suitable for binary classification of separable data.
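The evaluation procedure the abstract describes — train a small feed-forward network on two separable clusters, simulate it on held-out (untrained) data, and score it by Mean Absolute Error — can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the dataset, network size, learning rate, and epoch count are all assumptions, and a plain one-hidden-layer network stands in for the cascade and Elman architectures.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two linearly separable clusters: class 0 near (-2, -2), class 1 near (2, 2).
# (Synthetic data for illustration; the paper's dataset is not specified here.)
n = 200
X = np.vstack([rng.normal(-2.0, 0.5, size=(n, 2)),
               rng.normal(2.0, 0.5, size=(n, 2))])
y = np.concatenate([np.zeros(n), np.ones(n)]).reshape(-1, 1)

# Shuffle, then hold out 25% as the "untrained" simulation set.
idx = rng.permutation(2 * n)
split = int(0.75 * 2 * n)
Xtr, ytr = X[idx[:split]], y[idx[:split]]
Xte, yte = X[idx[split:]], y[idx[split:]]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 5 units (an assumed size), sigmoid output.
W1 = rng.normal(scale=0.5, size=(2, 5)); b1 = np.zeros(5)
W2 = rng.normal(scale=0.5, size=(5, 1)); b2 = np.zeros(1)

lr = 0.5
for epoch in range(200):
    # Forward pass.
    h = sigmoid(Xtr @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backpropagate the squared-error loss.
    d_out = (out - ytr) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out / len(Xtr); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * Xtr.T @ d_h / len(Xtr); b1 -= lr * d_h.mean(axis=0)

# Simulate the trained model on the unseen set and compute MAE.
pred = sigmoid(sigmoid(Xte @ W1 + b1) @ W2 + b2)
mae = np.abs(pred - yte).mean()
print(f"MAE on held-out data: {mae:.4f}")
```

On well-separated clusters like these, the held-out MAE comes out small, which is the sense in which the paper compares its two models: the architecture with the lower MAE on untrained data is judged the better fit.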