Unified Backpropagation for Multi-Objective Deep Learning

A common practice in most deep convolutional neural architectures is to employ fully-connected layers followed by a Softmax activation, minimizing the cross-entropy loss for classification. Recent studies show that substituting the Softmax objective with, or adding to it, the cost functions of support vector machines or linear discriminant analysis markedly improves the classification performance of hybrid neural networks. We propose a novel paradigm that links the optimization of several hybrid objectives through unified backpropagation. This greatly alleviates the burden of extensive boosting over independent objective functions or the complex formulation of multi-objective gradients. The hybrid loss functions are linked by basic probability assignment from evidence theory. We conduct experiments across a variety of scenarios and standard datasets to evaluate the advantage of our proposed unification approach in delivering consistent improvements to the classification performance of deep convolutional neural networks.
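The core idea of the abstract can be sketched in a few lines: compute the gradient of each hybrid objective separately, then combine them into a single backward signal weighted by masses from a basic probability assignment. The sketch below is a minimal illustration of that weighting scheme, not the paper's actual formulation; the fixed `masses` tuple, the choice of cross-entropy and hinge losses, and all function names are assumptions for illustration.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the class dimension
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy_grad(z, y):
    # gradient of mean cross-entropy loss w.r.t. logits z (y is one-hot)
    return (softmax(z) - y) / z.shape[0]

def hinge_grad(z, y):
    # gradient of a mean one-vs-all hinge loss w.r.t. logits z,
    # standing in for an SVM-style objective (y is one-hot)
    signs = 2.0 * y - 1.0           # map one-hot {0,1} to {-1,+1}
    active = (1.0 - signs * z) > 0  # margins still violated
    return -(signs * active) / z.shape[0]

def unified_grad(z, y, masses=(0.6, 0.4)):
    # one backward signal: a mass-weighted sum of per-objective gradients,
    # where the masses play the role of a basic probability assignment
    # (hard-coded here; the paper derives them from evidence theory)
    m_ce, m_svm = masses
    return m_ce * cross_entropy_grad(z, y) + m_svm * hinge_grad(z, y)
```

With this shape, the rest of the network needs only one backward pass: the fused gradient at the output layer propagates as usual, so no per-objective boosting loop or bespoke multi-objective gradient derivation is required.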
by Arash Shahriari
https://arxiv.org/pdf/1710.07438v1.pdf