Knowledge Transfer for Melanoma Screening with Deep Learning
Knowledge transfer impacts the performance of deep learning -- the state of
the art for image classification tasks, including automated melanoma screening.
Deep learning's greed for large amounts of training data poses a challenge for
medical tasks, which we can alleviate by recycling knowledge from models
trained on different tasks, in a scheme called transfer learning. Although much
of the best work on automated melanoma screening employs some form of transfer
learning, a systematic evaluation was missing. Here we investigate the presence
of transfer, the task from which the transfer is sourced, and the application of
fine-tuning (i.e., retraining of the deep learning model after transfer). We
also test the impact of picking deeper (and more expensive) models. Our results
favor deeper models, pre-trained over ImageNet, with fine-tuning, reaching
AUCs of 80.7% and 84.5% on the two skin-lesion datasets evaluated.
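The transfer-plus-fine-tuning recipe the abstract evaluates can be sketched in miniature. This is illustrative only: the "pre-trained" feature extractor below is a fixed random projection standing in for an ImageNet-trained network, the data are synthetic stand-ins for skin-lesion images, and all names are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def pretrained_features(images):
    # Stand-in for a frozen ImageNet-pretrained backbone: a fixed random
    # projection followed by a ReLU plays the role of transferred features.
    w = np.random.default_rng(42).normal(size=(images.shape[1], 16))
    return np.maximum(images @ w, 0.0)

def train_head(feats, labels, lr=0.1, steps=200):
    # A new logistic-regression head is trained on top of the frozen
    # features -- the "transfer without fine-tuning" regime.
    w = np.zeros(feats.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(feats @ w)))
        w -= lr * feats.T @ (p - labels) / len(labels)
    return w

# Toy "lesion" data: 200 flattened images with 64 raw pixel values each.
X = rng.normal(size=(200, 64))
y = (X[:, 0] + 0.3 * rng.normal(size=200) > 0).astype(float)

F = pretrained_features(X)
F = (F - F.mean(axis=0)) / F.std(axis=0)  # standardize transferred features
w = train_head(F, y)
acc = np.mean(((F @ w) > 0).astype(float) == y)
print(f"training accuracy with transferred features: {acc:.2f}")
```

Fine-tuning, as studied in the abstract, would additionally unfreeze and retrain the backbone weights rather than keeping the feature extractor fixed.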
Adaptive performance optimization for large-scale traffic control systems
In this paper, we study the problem of optimizing (fine-tuning) the design parameters of large-scale traffic control systems composed of distinct, mutually interacting modules. For lack of an automated, well-established systematic approach, this problem usually demands a considerable amount of human effort and time for the successful deployment and operation of traffic control systems. We investigate an adaptive fine-tuning algorithm for determining the design parameters of two distinct, mutually interacting modules of the traffic-responsive urban control (TUC) strategy, i.e., split and cycle, for the large-scale urban road network of the city of Chania, Greece. Simulation results demonstrate that the network performance attained by the proposed adaptive optimization methodology, measured as the daily mean speed, is significantly better than that of the original TUC system with the aforementioned design parameters manually fine-tuned to virtual perfection by the system operators.
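The flavor of such simulation-based parameter tuning can be sketched with a simultaneous-perturbation (SPSA-style) loop; this is a generic stand-in, not the paper's actual algorithm, and the quadratic "simulation" below is a toy surrogate for the TUC system, with split and cycle both on an invented normalized 0-1 scale.

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_speed(params):
    # Hypothetical surrogate for a traffic simulation returning daily mean
    # speed (km/h) as a function of the (split, cycle) design parameters.
    split, cycle = params
    return 40.0 - 50.0 * (split - 0.6) ** 2 - 50.0 * (cycle - 0.9) ** 2

def adaptive_tune(params, steps=200, a=0.005, c=0.1):
    params = np.asarray(params, dtype=float)
    for _ in range(steps):
        delta = rng.choice([-1.0, 1.0], size=params.shape)
        # Two simulation runs per iteration estimate the gradient of the
        # performance index with respect to all parameters at once.
        g = (mean_speed(params + c * delta)
             - mean_speed(params - c * delta)) / (2.0 * c) * delta
        params += a * g  # ascent step: we maximize mean speed
    return params

tuned = adaptive_tune([0.3, 0.5])
print("tuned (split, cycle):", tuned, "mean speed:", mean_speed(tuned))
```

The appeal over manual fine-tuning is that each iteration needs only two simulation (or field) evaluations regardless of how many design parameters are tuned.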
Essential guidelines for computational method benchmarking
In computational biology and other sciences, researchers are frequently faced
with a choice between several computational methods for performing data
analyses. Benchmarking studies aim to rigorously compare the performance of
different methods using well-characterized benchmark datasets, to determine the
strengths of each method or to provide recommendations regarding suitable
choices of methods for an analysis. However, benchmarking studies must be
carefully designed and implemented to provide accurate, unbiased, and
informative results. Here, we summarize key practical guidelines and
recommendations for performing high-quality benchmarking analyses, based on our
experiences in computational biology.
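The core design the guidelines call for, every method evaluated with the same metric on the same well-characterized datasets with known ground truth, can be sketched as a tiny harness. Method and dataset names here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def method_mean(x):      # candidate method A: estimate the center by the mean
    return float(np.mean(x))

def method_median(x):    # candidate method B: estimate the center by the median
    return float(np.median(x))

def make_dataset(contaminated):
    # Benchmark dataset with known ground truth (true center = 0).
    x = rng.normal(0.0, 1.0, size=500)
    if contaminated:
        x[:25] += 20.0   # contaminate 5% of the points with outliers
    return x

def benchmark(methods, datasets):
    # The same error metric (absolute error against the known truth) is
    # applied to every method on every dataset, keeping the comparison fair.
    return {name: {ds: abs(fn(x)) for ds, x in datasets.items()}
            for name, fn in methods.items()}

datasets = {"clean": make_dataset(False), "contaminated": make_dataset(True)}
results = benchmark({"mean": method_mean, "median": method_median}, datasets)
for name, scores in results.items():
    print(name, {k: round(v, 3) for k, v in scores.items()})
```

Even this toy setup surfaces a method-specific strength of the kind benchmarking studies aim to reveal: the median's robustness to contamination.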
Automated Website Fingerprinting through Deep Learning
Several studies have shown that the network traffic that is generated by a
visit to a website over Tor reveals information specific to the website through
the timing and sizes of network packets. By capturing traffic traces between
users and their Tor entry guard, a network eavesdropper can leverage this
meta-data to reveal which website Tor users are visiting. The success of such
attacks heavily depends on the particular set of traffic features that are used
to construct the fingerprint. Typically, these features are manually engineered
and, as such, any change introduced to the Tor network can render these
carefully constructed features ineffective. In this paper, we show that an
adversary can automate the feature engineering process, and thus automatically
deanonymize Tor traffic by applying our novel method based on deep learning. We
collect a dataset comprised of more than three million network traces, which is
the largest dataset of web traffic ever used for website fingerprinting, and
find that the performance achieved by our deep learning approaches is
comparable to that of known methods representing multiple years of research
effort. The obtained success rate exceeds 96% for a closed world
of 100 websites and 94% for our biggest closed world of 900 classes. In our
open world evaluation, the most performant deep learning model is 2% more
accurate than the state-of-the-art attack. Furthermore, we show that the
implicit features automatically learned by our approach are far more resilient
to dynamic changes of web content over time. We conclude that the ability to
automatically construct the most relevant traffic features and perform accurate
traffic recognition makes our deep learning based approach an efficient,
flexible, and robust technique for website fingerprinting. (To appear in the
25th Symposium on Network and Distributed System Security, NDSS 2018.)
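The closed-world evaluation setup the abstract reports can be illustrated in miniature: each traffic trace is a sequence of packet directions (+1 outgoing, -1 incoming), and the attacker decides which monitored site produced it. The synthetic site "patterns" and the nearest-centroid classifier below are toy stand-ins for real Tor traces and the trained deep network.

```python
import numpy as np

rng = np.random.default_rng(0)
TRACE_LEN, N_SITES, N_PER_SITE = 100, 3, 30

def make_trace(site):
    # Hypothetical site-specific direction pattern, with 10% of packet
    # directions flipped to model network noise.
    k = np.arange(TRACE_LEN)
    base = np.sign(np.sin((site + 1) * 0.3 * k) + 1e-9)
    flips = rng.random(TRACE_LEN) < 0.1
    return np.where(flips, -base, base)

X = np.array([make_trace(s) for s in range(N_SITES) for _ in range(N_PER_SITE)])
y = np.repeat(np.arange(N_SITES), N_PER_SITE)

# Closed-world evaluation: every trace belongs to one of the monitored
# sites, and each is assigned to the nearest class centroid.
centroids = np.array([X[y == s].mean(axis=0) for s in range(N_SITES)])
dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
pred = np.argmin(dists, axis=1)
acc = float(np.mean(pred == y))
print(f"closed-world accuracy on toy traces: {acc:.2f}")
```

The deep-learning attack in the abstract replaces both the hand-picked representation and the simple classifier with features learned end-to-end from the raw traces, which is what makes it resilient to changes in web content over time.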
Analysis of potential helicopter vibration reduction concepts
Results of analytical investigations to develop, understand, and evaluate potential helicopter vibration reduction concepts are presented in the following areas: identification of the fundamental sources of vibratory loads, blade design for low vibration, application of design optimization techniques, active higher harmonic control, blade-appended aeromechanical devices, and the prediction of vibratory airloads. Primary sources of vibration are identified for a selected four-bladed articulated rotor operating in high-speed level flight. The application of analytical design procedures and optimization techniques is shown to have the potential for establishing reduced-vibration blade designs through variations in blade mass and stiffness distributions and chordwise center-of-gravity location.