Abstract
- The success of deep neural networks has been compromised by their lack of interpretability. On the other hand, most interpretable models either do not offer the same accuracy as deep neural networks or depend on them. Inspired by Classification-by-Components networks, in this paper we present a novel approach to designing a two-layer perceptron network that offers a level of interpretability. We thus retain the predictive power of a multi-layer perceptron, while a class of the adapted parameters remains meaningful to humans. We visualize the weights between the input layer and the hidden layer, and show that matching the right objective function with the activation function of the output layer is the key to interpreting the weights and their influence on component-wise classification.