Xingyuan Pan and Vivek Srikumar
Proceedings of the International Conference on Machine Learning (ICML), 2016.
Abstract
Rectified Linear Units (ReLUs) have been shown to ameliorate the vanishing gradient problem, allow for efficient backpropagation, and empirically promote sparsity in the learned parameters. They have led to state-of-the-art results in a variety of applications. However, unlike threshold and sigmoid networks, ReLU networks are less explored from the perspective of their expressiveness. This paper studies the expressiveness of ReLU networks. We characterize the decision boundary of two-layer ReLU networks by constructing functionally equivalent threshold networks. We show that while the decision boundary of a two-layer ReLU network can be captured by a threshold network, the latter may require an exponentially larger number of hidden units. We also formulate sufficient conditions for a corresponding logarithmic reduction in the number of hidden units to represent a sign network as a ReLU network. Finally, we experimentally compare threshold networks and their much smaller ReLU counterparts with respect to their ability to learn from synthetically generated data.
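To make the comparison in the abstract concrete, the sketch below contrasts a two-layer ReLU network with a two-layer threshold (sign) network used as binary classifiers. This is an illustrative example only, not code from the paper; the network size, weights, and input are arbitrary assumptions.

```python
import numpy as np

def relu(z):
    # Rectified linear unit: max(0, z), applied elementwise.
    return np.maximum(0.0, z)

def sign_unit(z):
    # Threshold (sign) unit: +1 if z >= 0, else -1.
    return np.where(z >= 0.0, 1.0, -1.0)

def two_layer_classifier(x, W, b, v, c, activation):
    # One hidden layer followed by a single linear output unit;
    # the decision boundary is where the output crosses zero.
    h = activation(W @ x + b)
    return np.sign(v @ h + c)

# Arbitrary weights for a 2-input, 3-hidden-unit network (illustration only).
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 2))
b = rng.normal(size=3)
v = rng.normal(size=3)
c = 0.1

x = np.array([0.5, -1.2])
print("ReLU network prediction:     ", two_layer_classifier(x, W, b, v, c, relu))
print("Threshold network prediction:", two_layer_classifier(x, W, b, v, c, sign_unit))
```

The paper's results concern how many hidden sign units a network of this second kind needs in order to realize the same decision boundary as a given two-layer ReLU network (and vice versa); the sketch only fixes the notation for the two unit types.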
Bib Entry
@inproceedings{pan2016expressiveness,
  author    = {Pan, Xingyuan and Srikumar, Vivek},
  title     = {Expressiveness of Rectifier Networks},
  booktitle = {Proceedings of the International Conference on Machine Learning (ICML)},
  year      = {2016}
}