198 research outputs found
Open Source Dataset and Machine Learning Techniques for Automatic Recognition of Historical Graffiti
Machine learning techniques are presented for the automatic recognition of
historical letters (XI-XVIII centuries) carved on the stone walls of St. Sophia
Cathedral in Kyiv (Ukraine). A new image dataset of these carved Glagolitic and
Cyrillic letters (CGCL) was assembled and pre-processed for recognition and
prediction by machine learning methods. The dataset consists of more than 4000
images for 34 types of letters. Exploratory data analysis of the CGCL and
notMNIST datasets showed that the carved letters can hardly be differentiated by
dimensionality reduction methods, for example by t-distributed stochastic
neighbor embedding (tSNE), because stone carving represents letters less
faithfully than handwriting. Multinomial logistic regression (MLR) and 2D
convolutional neural network (CNN) models were applied. The MLR model achieved
area under the receiver operating characteristic curve (ROC AUC) values of at
least 0.92 and 0.60 for notMNIST and CGCL, respectively. The CNN model gave AUC
values close to 0.99 for both notMNIST and CGCL (despite the much smaller size
and lower quality of CGCL in comparison to notMNIST) when heavy, lossy data
augmentation was applied. The CGCL dataset has been published as an open-source
resource for the data science community.
Comment: 11 pages, 9 figures, accepted for the 25th International Conference on
Neural Information Processing (ICONIP 2018), 14-16 December 2018, Siem Reap,
Cambodia
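The ROC AUC comparison reported in the abstract can be sketched with a small, self-contained computation. The data below is synthetic (not the CGCL images), and the `roc_auc` helper is an illustrative implementation via the rank-statistic (Mann-Whitney) formulation, not the authors' code.

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive outranks a randomly chosen negative."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 1000)
# A deliberately weak classifier: scores only loosely track the labels,
# mimicking the hard-to-separate carved-letter setting.
scores = labels + rng.normal(0.0, 2.0, 1000)
print(roc_auc(scores, labels))
```

In a multi-class setting such as the 34 letter types, this one-vs-rest AUC would be computed per class and averaged.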
Learning to Generate Novel Domains for Domain Generalization
This paper focuses on domain generalization (DG), the task of learning from
multiple source domains a model that generalizes well to unseen domains. A main
challenge for DG is that the available source domains often exhibit limited
diversity, hampering the model's ability to learn to generalize. We therefore
employ a data generator to synthesize data from pseudo-novel domains to augment
the source domains. This explicitly increases the diversity of available
training domains and leads to a more generalizable model. To train the
generator, we model the distribution divergence between source and synthesized
pseudo-novel domains using optimal transport, and maximize the divergence. To
ensure that semantics are preserved in the synthesized data, we further impose
cycle-consistency and classification losses on the generator. Our method,
L2A-OT (Learning to Augment by Optimal Transport), outperforms current
state-of-the-art DG methods on four benchmark datasets.
Comment: To appear in ECCV'2
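A minimal sketch of the kind of optimal-transport divergence L2A-OT maximizes between source and pseudo-novel feature distributions. The `sinkhorn_cost` function, feature shapes, and random data are illustrative assumptions; the paper's generator, networks, and exact losses are not reproduced here.

```python
import numpy as np

def sinkhorn_cost(x, y, eps=1.0, n_iter=200):
    """Entropic-regularized optimal transport cost between two
    uniformly weighted point clouds."""
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    K = np.exp(-C / eps)
    a = np.full(len(x), 1.0 / len(x))
    b = np.full(len(y), 1.0 / len(y))
    u = np.ones_like(a)
    for _ in range(n_iter):  # Sinkhorn fixed-point iterations
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]  # transport plan
    return float((P * C).sum())

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, (64, 4))    # stand-in "source domain" features
novel = rng.normal(2.0, 1.0, (64, 4))  # shifted "pseudo-novel" features
# A generator trained to maximize this cost is pushed to synthesize
# a domain that lies far from the source distribution.
print(sinkhorn_cost(src, src), sinkhorn_cost(src, novel))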
Deep Shape Matching
We cast shape matching as metric learning with convolutional networks. We
break the end-to-end process of image representation into two parts. First,
well-established, efficient methods are chosen to turn the images into edge
maps. Second, the network is trained with edge maps of landmark images, which
are automatically obtained by a structure-from-motion pipeline. The learned
representation is evaluated on a range of different tasks, providing
improvements on challenging cases of domain generalization, generic
sketch-based image retrieval, and its fine-grained counterpart. In contrast to
other methods that learn a different model per task, object category, or
domain, we use the same network throughout all our experiments, achieving
state-of-the-art results on multiple benchmarks.
Comment: ECCV 201
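The first stage described above (image to edge map) can be sketched with a plain Sobel gradient magnitude. This is a generic stand-in for whichever well-established edge detector the pipeline actually uses, and the step-edge image is a toy example.

```python
import numpy as np

def sobel_edges(img):
    """Gradient-magnitude edge map of a 2D grayscale image."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")  # replicate borders to keep the shape
    gx = np.zeros(img.shape, dtype=float)
    gy = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = pad[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

# A vertical step edge: the response concentrates on the boundary columns.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = sobel_edges(img)
print(edges)
```

Training the matching network on such edge maps, rather than raw pixels, is what lets one model transfer across photos, sketches, and other domains.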
Muon and Cosmogenic Neutron Detection in Borexino
Borexino, a liquid scintillator detector at LNGS, is designed for the
detection of neutrinos and antineutrinos from the Sun, supernovae, nuclear
reactors, and the Earth. The feeble nature of these signals requires a strong
suppression of backgrounds below a few MeV. Very low intrinsic radiogenic
contamination of all detector components needs to be accompanied by the
efficient identification of muons and of muon-induced backgrounds. Muons
produce unstable nuclei by spallation along their trajectory through the
detector, and the decays of these nuclei can mimic the expected signals; for
isotopes with half-lives longer than a few seconds, the dead time induced by a
muon-related veto becomes unacceptably long unless its application can be
restricted to a sub-volume along the muon track. Consequently, not only the identification of
muons with very high efficiency but also a precise reconstruction of their
tracks is of primary importance for the physics program of the experiment. The
Borexino inner detector is surrounded by an outer water-Cherenkov detector that
plays a fundamental role in accomplishing this task. The detector design
principles and their implementation are described. The strategies adopted to
identify muons are reviewed and their efficiency is evaluated. The overall muon
veto efficiency is found to be 99.992% or better. The ad-hoc track
reconstruction algorithms that were developed are presented. Their performance
is tested against muon events of known direction, such as those from the CNGS
neutrino beam, test tracks from a dedicated External Muon Tracker, and cosmic
muons whose angular distribution reflects the local overburden profile. The
achieved angular resolution is 3-5 deg and the lateral resolution is 35-50 cm,
depending on the impact parameter of the crossing muon. The methods implemented
to efficiently tag cosmogenic neutrons are also presented.
Comment: 42 pages. 32 figures on 37 files. Uses JINST.cls. 1 auxiliary file
(defines.tex) with TEX macros. Submitted to Journal of Instrumentation
Supernova Neutrino Spectrum with Matter and Spin Flavor Precession Effects
We consider Majorana neutrino conversions inside supernovae by taking into
account both flavor mixing and the neutrino magnetic moment. We study the
adiabaticity of various possible transitions between the neutrino states for
both normal and inverted hierarchy within the various solar neutrino problem
solutions. From the final mass spectrum within different scenarios, we infer
the consequences of the various conversion effects on the neutronization peak,
the nature of the final spectra, and the possible Earth matter effect on the
final fluxes. This enables us to check the sensitivity of the SN neutrino flux
to the magnetic moment interaction and to narrow down possible scenarios, which
depend on the mass spectrum (normal or inverted), the solution of the solar
neutrino problem, and the value of MuxB.
Comment: 24 pages, 7 figures
Unified Image and Video Saliency Modeling
Visual saliency modeling for images and videos is treated as two independent
tasks in recent computer vision literature. While image saliency modeling is a
well-studied problem and progress on benchmarks like SALICON and MIT300 is
slowing, video saliency models have shown rapid gains on the recent DHF1K
benchmark. Here, we take a step back and ask: Can image and video saliency
modeling be approached via a unified model, with mutual benefit? We identify
different sources of domain shift between image and video saliency data and
between different video saliency datasets as a key challenge for effective
joint modelling. To address this, we propose four novel domain adaptation
techniques - Domain-Adaptive Priors, Domain-Adaptive Fusion, Domain-Adaptive
Smoothing, and Bypass-RNN - in addition to an improved formulation of learned
Gaussian priors. We integrate these techniques into a simple and lightweight
encoder-RNN-decoder-style network, UNISAL, and train it jointly with image and
video saliency data. We evaluate our method on the video saliency datasets
DHF1K, Hollywood-2 and UCF-Sports, and the image saliency datasets SALICON and
MIT300. With one set of parameters, UNISAL achieves state-of-the-art
performance on all video saliency datasets and is on par with the
state-of-the-art for image saliency datasets, despite faster runtime and a 5 to
20-fold smaller model size compared to all competing deep methods. We provide
retrospective analyses and ablation studies which confirm the importance of the
domain shift modeling. The code is available at
https://github.com/rdroste/unisal
Comment: Presented at the European Conference on Computer Vision (ECCV) 2020.
R. Droste and J. Jiao contributed equally to this work. v3: Updated Fig. 5a)
and added new MIT300 benchmark results to supp. material
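The learned Gaussian priors mentioned above can be illustrated with a small sketch: a center-bias prior map whose mean and spread would, in a model like UNISAL, be trainable parameters. The function name and defaults here are illustrative assumptions, not UNISAL's actual implementation.

```python
import numpy as np

def gaussian_prior(h, w, mu=(0.5, 0.5), sigma=(0.25, 0.25)):
    """2D Gaussian prior map over normalized [0, 1] image coordinates.
    In a saliency model, mu and sigma would be learned per domain."""
    ys = np.linspace(0, 1, h)[:, None]
    xs = np.linspace(0, 1, w)[None, :]
    return np.exp(-((ys - mu[0]) ** 2) / (2 * sigma[0] ** 2)
                  - ((xs - mu[1]) ** 2) / (2 * sigma[1] ** 2))

# Center-biased prior: the peak sits at the middle of the map, encoding
# the tendency of human gaze to favor the image center.
prior = gaussian_prior(9, 9)
print(prior)
```

Multiplying (or adding, in log space) such a map onto a predicted saliency map is one simple way a domain-adaptive prior can be folded into the decoder.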
Deep Eyedentification: Biometric Identification using Micro-Movements of the Eye
We study involuntary micro-movements of the eye for biometric identification.
While prior studies extract lower-frequency macro-movements from the output of
video-based eye-tracking systems and engineer explicit features of these
macro-movements, we develop a deep convolutional architecture that processes
the raw eye-tracking signal. Compared to prior work, the network attains an
error rate that is lower by one order of magnitude and is faster by two orders
of magnitude: it identifies users accurately within seconds.
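Processing a raw eye-tracking signal with a 1D convolution, the basic operation of a deep architecture like the one described above, can be sketched as follows. The kernel and synthetic trace are illustrative, not the paper's trained network.

```python
import numpy as np

def conv1d(signal, kernel):
    """Valid-mode 1D cross-correlation over a raw signal."""
    n = len(signal) - len(kernel) + 1
    return np.array([signal[i:i + len(kernel)] @ kernel for i in range(n)])

# Synthetic gaze trace: slow drift plus a brief micro-movement burst.
t = np.linspace(0, 1, 200)
signal = 0.1 * t
signal[100:105] += 1.0
# A simple difference kernel responds strongly at the burst boundaries;
# a trained network stacks many learned kernels of this kind.
kernel = np.array([-1.0, 0.0, 1.0])
response = conv1d(signal, kernel)
print(np.abs(response).argmax())
```

A real identification network would stack such convolutions with nonlinearities and pooling before a per-user classification head.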
Classification of Foetal Distress and Hypoxia Using Machine Learning Approaches
© 2018, Springer International Publishing AG, part of Springer Nature. Foetal distress and hypoxia (oxygen deprivation) is considered a serious condition and is one of the main reasons for caesarean section in obstetrics and gynaecology departments. It is the third most common cause of death in newborn babies. Many foetuses that experience some form of hypoxia can develop serious risks, including damage to the cells of the central nervous system that may lead to life-long disability (cerebral palsy) or even death. Continuous labour monitoring is essential to observe foetal well-being. Foetal surveillance by monitoring the foetal heart rate with cardiotocography is widely used. Even when such monitoring indicates normal results, these results are not fully reassuring, and a small proportion of these foetuses are actually hypoxic. In this paper, machine-learning algorithms are utilised to classify foetuses experiencing oxygen deprivation, using pH value (a measure of the hydrogen ion concentration of the blood, used to specify its acidity or alkalinity) and the base deficit of the extracellular fluid (a measure of the total concentration of blood buffer base that indicates metabolic acidosis or compensated respiratory alkalosis) as indicators of respiratory and metabolic acidosis, respectively, using open-source intrapartum clinical data obtained from Physionet. Six well-known machine learning classifier models are utilised in our experiments for the evaluation; each model was presented with a set of selected features derived from the clinical data. Classifier evaluation is performed using receiver operating characteristic curve analysis, area under the curve plots, and the confusion matrix. Our simulation results indicate that machine-learning algorithms provide viable methods that could deliver improvements over conventional analysis.
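The evaluation protocol described above (a confusion matrix plus rates derived from it) can be sketched as follows. The labels and predictions are synthetic placeholders, not the Physionet clinical data, and the helper is illustrative rather than the authors' code.

```python
import numpy as np

def confusion_matrix(y_true, y_pred):
    """2x2 confusion matrix: rows = true class, columns = predicted class."""
    m = np.zeros((2, 2), dtype=int)
    for t, p in zip(y_true, y_pred):
        m[t, p] += 1
    return m

# Class 1 stands in for "hypoxic", class 0 for "normal".
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_pred = np.array([0, 1, 1, 1, 0, 0, 1, 0])
cm = confusion_matrix(y_true, y_pred)
tn, fp, fn, tp = cm.ravel()
sensitivity = tp / (tp + fn)  # fraction of hypoxic cases caught
specificity = tn / (tn + fp)  # fraction of normal cases correctly cleared
print(cm, sensitivity, specificity)
```

In a clinical screening setting, sensitivity (catching the truly hypoxic foetuses) is typically weighted more heavily than raw accuracy, which is why the abstract emphasizes ROC analysis alongside the confusion matrix.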