Adaptive Attention Reasoning Transformer Using Neuro-Fuzzy Modulation

Adrian E. Adame, Mauricio A. Sánchez, Juan R. Castro

Abstract


The transformer architecture has proven highly
effective at modeling sequence relations. In tasks
such as natural language processing (NLP), it can
make decisions based on dynamic relationships
among linguistic elements. While this architecture
excels at capturing complex dependencies through
self-attention, its application to tabular data often
yields powerful but opaque models. Neuro-fuzzy
systems, in contrast, provide rule-based
interpretability, yet they typically lack the
performance of robust deep learning models on
complex datasets. Drawing on the strengths of each
paradigm, we propose a hybrid approach that
combines rule-based knowledge with the
self-attention mechanism.
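The abstract does not specify how the fuzzy rules interact with the attention mechanism, so the sketch below is only one plausible reading: a single-head self-attention layer whose logits are gated by learnable Gaussian fuzzy memberships over the token features. The class name FuzzyModulatedAttention, the Gaussian membership form, and the log-additive gating are all illustrative assumptions, not the authors' implementation.

# Hypothetical sketch of fuzzy-modulated self-attention (not the paper's code).
import math
import torch
import torch.nn as nn

class FuzzyModulatedAttention(nn.Module):
    """Single-head self-attention whose logits are gated by Gaussian
    fuzzy memberships over token features (an illustrative assumption)."""

    def __init__(self, d_model: int, n_rules: int = 4):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        # Each fuzzy rule applies a Gaussian membership, with learnable
        # center and width, to a 1-D projection of the token features.
        self.rule_input = nn.Linear(d_model, n_rules)
        self.centers = nn.Parameter(torch.zeros(n_rules))
        self.log_widths = nn.Parameter(torch.zeros(n_rules))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        logits = q @ k.transpose(-2, -1) / math.sqrt(x.size(-1))

        # Gaussian membership per token and rule, averaged into a
        # per-token firing strength in (0, 1].
        z = self.rule_input(x)  # (batch, seq_len, n_rules)
        mu = torch.exp(-((z - self.centers) ** 2)
                       / (2 * torch.exp(self.log_widths) ** 2 + 1e-8))
        firing = mu.mean(dim=-1)  # (batch, seq_len)

        # Gate attention logits by the key tokens' firing strengths
        # (log-additive, so softmax rescales rows accordingly).
        logits = logits + torch.log(firing + 1e-8).unsqueeze(1)
        attn = torch.softmax(logits, dim=-1)
        return attn @ v

# Quick smoke test on random tabular-style feature tokens.
layer = FuzzyModulatedAttention(d_model=16)
out = layer(torch.randn(2, 8, 16))
print(out.shape)  # torch.Size([2, 8, 16])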

Keywords


Fuzzy systems, transformer, attention, interpretability, tabular data
