8 research outputs found
From the Embodied to the Axiomatic World of Mathematics: Students’ Perceptions on the Concept ‘Limit’
Less Is More: Adaptive Trainable Gradient Dropout for Deep Neural Networks
The undeniable computational power of artificial neural networks has granted the scientific community the ability to exploit the available data in ways previously inconceivable. However, deep neural networks require an overwhelming quantity of data in order to learn the underlying connections within them and thus complete the specific task they have been assigned. Feeding a deep neural network with vast amounts of data usually ensures efficiency but may harm the network’s ability to generalize. To tackle this, numerous regularization techniques have been proposed, with dropout being one of the most dominant. This paper proposes a selective gradient dropout method which, instead of dropping random weights, learns to freeze the training of specific connections, thereby increasing the network’s sparsity in an adaptive manner and driving it to utilize more salient weights. The experimental results show that the produced sparse network outperforms the baseline on numerous image classification datasets, and the results were obtained after significantly fewer training epochs.
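The abstract describes the mechanism only at a high level: instead of dropping random weights, the method learns which connections' gradient updates to suppress, sparsifying the network adaptively. The PyTorch snippet below is a minimal sketch of where such gradient masking hooks into training; the magnitude-quantile saliency criterion and the fixed drop fraction are illustrative assumptions standing in for the paper's trainable formulation, not the authors' actual method.

import torch
import torch.nn as nn

class GradientDropout:
    """Freezes updates to low-saliency connections by zeroing their
    gradients during backprop. Simplified sketch: the saliency score
    (gradient magnitude) and the fixed drop fraction are assumptions,
    not the paper's trainable mechanism."""

    def __init__(self, module: nn.Linear, drop_fraction: float = 0.5):
        self.drop_fraction = drop_fraction
        # Mask the weight gradient every time it is computed.
        module.weight.register_hook(self._mask_gradient)

    def _mask_gradient(self, grad: torch.Tensor) -> torch.Tensor:
        # Connections whose gradient magnitude falls in the lowest
        # `drop_fraction` quantile have their update frozen this step.
        threshold = torch.quantile(grad.abs(), self.drop_fraction)
        return grad * (grad.abs() >= threshold).float()

# Usage: attach to a layer, then train as usual.
layer = nn.Linear(8, 4)
GradientDropout(layer, drop_fraction=0.5)
x, y = torch.randn(16, 8), torch.randn(16, 4)
loss = nn.functional.mse_loss(layer(x), y)
loss.backward()  # about half of layer.weight.grad is now exactly zero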
Right or left thoracotomy for esophageal atresia and right aortic arch? Systematic review and surgicoanatomic justification
Introduction: The optimal thoracotomy approach for the management of esophageal atresia and tracheoesophageal fistula (EA/TEF) with a right aortic arch (RAA) remains controversial.
Methods: Systematic review of complication and death rates between right- and left-sided repairs, including all studies on EA/TEF and RAA apart from those focusing on long-gap EA and thoracoscopic repairs. Review of right- and left-sided surgical anatomy in relation to the reported complications.
Results: Although no significant differences were elicited between right- and left-sided repairs in complication (9/29 vs. 1/6, p = 0.64) or death rates (2/29 vs. 0/6, p = 0.57), unique anatomic complications associated with mortality, such as injury to the RAA covering the esophagus and intractable bleeding, were revealed in the right thoracotomy group. Left-sided repairs following a failed repair through the right showed a higher complication rate (3/3) than straightforward right- (9/29) or left-sided repairs (1/6) (p = 0.024). Right thoracotomies converted to left thoracotomies led to staged repairs more frequently (4/9) than straightforward right (5/38) or left thoracotomies (0/6) (p = 0.03).
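The p-values above come from comparisons of small-sample proportions. The abstract does not state which test was used; Fisher's exact test is a standard choice for small 2x2 tables like these and is assumed in this minimal check of the complication comparison.

from scipy.stats import fisher_exact

# Complication counts from the Results above: 9/29 right-sided vs.
# 1/6 left-sided repairs. The choice of test is an assumption.
table = [[9, 29 - 9],   # right thoracotomy: complications, none
         [1, 6 - 1]]    # left thoracotomy: complications, none
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"p = {p_value:.2f}")  # ~0.65 here, close to the reported p = 0.64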
Conclusions: There is not enough evidence to support that right thoracotomy, characterized by unique surgicoanatomic difficulties, is equivalent to left thoracotomy for EA/TEF with RAA. Both approaches might be required, and surgeons should therefore be familiar with the surgical anatomy of the mediastinum approached from both the right and the left.