Abstract:
The black-box nature of deep machine learning hinders the extraction of knowledge
in science. To address this issue, a neural network (k-net) based on the Kolmogorov–Arnold Representation Theorem is proposed as an alternative to the traditional Multilayer Perceptron. At its core, the algorithmic nature of neural networks
lies in the fundamental symmetry between forward-mode and reverse-mode accumulation
techniques, both of which rely on the chain rule of partial derivatives. These methods
are essential for computing gradients of functions, an operation central to the
training process of neural networks. Automatic differentiation addresses the need for
accurate and efficient calculation of derivative values in scientific computing; procedural programs are thereby transformed into programs that compute the required derivatives at the same numerical arguments. This work formalizes the algebraic structure of neural network
computations by framing the training process within the domain of hyperdual numbers.
Specifically, it defines a Kolmogorov–Arnold-inspired neural network (k-net) using dual
numbers by extending the univariate functions and their compositions that appear in the
representation theorem. This approach focuses on the computation of the Jacobian and on the ability to implement such procedures algorithmically, without sacrificing accuracy or mathematical rigor, while exploiting the inherent symmetry of the dual number formalism.
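For reference, the two standard facts this abstract builds on can be stated compactly; the notation below ($\Phi_q$, $\phi_{q,p}$, $\varepsilon$) is chosen for illustration and need not match the paper's own. The Kolmogorov–Arnold representation writes any continuous $f$ on $[0,1]^n$ as a superposition of univariate functions,
\[
  f(x_1,\dots,x_n) \;=\; \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \phi_{q,p}(x_p) \right),
\]
while forward-mode automatic differentiation over the dual numbers rests on the rule
\[
  f(a + b\varepsilon) \;=\; f(a) + b\, f'(a)\,\varepsilon, \qquad \varepsilon^2 = 0,
\]
from which derivative, and hence Jacobian, entries are read off exactly rather than by numerical approximation.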