Publication Details

Computer Science

Batch Gradient Learning Algorithm with Smoothing L1 Regularization for Feedforward Neural Networks

Computers, Volume 12, Issue 1, Article 4, 2023

Regularization techniques are critical in the development of machine learning models. Complex models, such as neural networks, are particularly prone to overfitting and, as a result, to generalizing poorly to unseen data. L1 regularization is among the most direct ways to enforce sparsity; unlike an exact L0 sparsity penalty, it does not result in an NP-hard problem, but, regrettably, the 1-norm is non-differentiable at the origin. The L1 regularization term can nevertheless be optimized efficiently, and with good convergence speed, through proximal methods. In this paper, we propose a batch gradient learning algorithm with smoothing L1 regularization (BGSL1) for learning and pruning a feedforward neural network with hidden nodes. To this end, we propose a smoothing (differentiable) function that removes the non-differentiability of the L1 regularizer at the origin, speeds up convergence, improves the ability to prune the network structure, and yields a stronger mapping. Under these conditions, strong and weak convergence theorems are provided. We used N-dimensional parity problems and function approximation problems in our experiments. Preliminary findings indicate that BGSL1 converges faster and generalizes better than BGL1/2, BGL1, BGL2, and BGSL1/2. We also demonstrate that the error function decreases monotonically and that the norm of the gradient of the error function approaches zero, validating the theoretical findings and the advantage of the suggested technique.
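As a rough illustration of the idea described in the abstract (not the paper's exact formulation), the sketch below replaces the non-differentiable 1-norm penalty with a smooth surrogate sqrt(w^2 + eps^2) and applies whole-batch gradient steps to a single-hidden-layer network. The smoothing function, network shapes, penalty placement, and hyperparameters here are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only: the paper's exact smoothing function, architecture,
# and hyperparameters may differ from the choices below.

def smoothed_l1(w, eps=1e-3):
    # Differentiable surrogate for |w|; close to |w| away from the origin.
    return np.sqrt(w ** 2 + eps ** 2)

def smoothed_l1_grad(w, eps=1e-3):
    # Gradient of the surrogate, w / sqrt(w^2 + eps^2), well defined at w = 0.
    return w / np.sqrt(w ** 2 + eps ** 2)

def batch_gradient_step(W, V, X, Y, lam=1e-4, lr=0.1, eps=1e-3):
    """One batch gradient step for y_hat = V @ tanh(W @ X), with the smoothed
    L1 penalty applied to the input-to-hidden weights W (hypothetical setup)."""
    n = X.shape[1]
    H = np.tanh(W @ X)            # hidden activations, shape (hidden, n)
    E = V @ H - Y                 # output residuals, shape (out, n)
    grad_V = E @ H.T / n          # squared-error gradient for output weights
    grad_W = ((V.T @ E) * (1.0 - H ** 2)) @ X.T / n + lam * smoothed_l1_grad(W, eps)
    return W - lr * grad_W, V - lr * grad_V

# Toy usage: repeated whole-batch updates on random data.
rng = np.random.default_rng(0)
X, Y = rng.standard_normal((3, 64)), rng.standard_normal((1, 64))
W, V = 0.1 * rng.standard_normal((5, 3)), 0.1 * rng.standard_normal((1, 5))
for _ in range(200):
    W, V = batch_gradient_step(W, V, X, Y)
```

Hidden nodes whose incoming weights are driven close to zero by the penalty can then be pruned, which is the structure-selection effect the abstract refers to.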

Statistics
Citations: 1
Authors: 1
Affiliations: 2
Identifiers