In situ growth regime characterization of cubic GaN using reflection high energy electron diffraction
Cubic GaN layers were grown by plasma-assisted molecular beam epitaxy on 3C-SiC (001) substrates. In situ reflection high energy electron diffraction was used to quantitatively determine the Ga coverage of the GaN surface during growth. Using the intensity of the electron beam as a probe, optimum growth conditions for c-GaN were found when a 1 ML Ga coverage forms at the surface. One-micrometer-thick c-GaN layers had a minimum surface roughness of 2.5 nm when a Ga coverage of 1 ML was established during growth. These samples also revealed a minimum full width at half maximum of the (002) rocking curve.
Design of Sequences with Good Folding Properties in Coarse-Grained Protein Models
Background: Designing amino acid sequences that are stable in a given target structure amounts to maximizing a conditional probability. A straightforward way to accomplish this is a nested Monte Carlo, in which the conformation space is explored over and over again for different fixed sequences; this is computationally very demanding. Several approximate attempts to remedy this situation, based on energy minimization for fixed structure or on high-temperature expansions, have been proposed. These methods are fast but often not accurate, since folding occurs at low temperature.
Results: We develop a multisequence Monte Carlo procedure in which both sequence and conformation space are probed simultaneously, with efficient prescriptions for pruning sequence space. The method is explored on hydrophobic/polar models. We first discuss short lattice chains, in order to compare with exact data and with other methods. The method is then successfully applied to lattice chains with up to 50 monomers, and to off-lattice 20-mers.
Conclusions: The multisequence Monte Carlo method offers a new approach to sequence design in coarse-grained models. It is much more efficient than previous Monte Carlo methods and is, as it stands, applicable to a fairly wide range of two-letter models.
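As a concrete illustration of the multisequence idea, here is a minimal Python sketch for a short 2D hydrophobic/polar (HP) lattice chain: a single Metropolis walk moves jointly through sequence flips and conformation jumps. The pre-enumerated walk set, the energy (-1 per non-bonded H-H contact), and the move mix are illustrative simplifications; the paper's prescriptions for pruning sequence space are not implemented here.

import math
import random

def saws(n, path=((0, 0), (1, 0))):
    """Enumerate 2D self-avoiding walks with n sites (first step fixed)."""
    if len(path) == n:
        yield path
        return
    x, y = path[-1]
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        step = (x + dx, y + dy)
        if step not in path:
            yield from saws(n, path + (step,))

def energy(seq, conf):
    """-1 for every non-bonded nearest-neighbor H-H contact."""
    occ = {c: i for i, c in enumerate(conf)}
    e = 0
    for i, (x, y) in enumerate(conf):
        for dx, dy in ((1, 0), (0, 1)):
            j = occ.get((x + dx, y + dy))
            if j is not None and abs(i - j) > 1 and seq[i] == seq[j] == 'H':
                e -= 1
    return e

def multisequence_mc(n=8, beta=2.0, steps=100000, seed=1):
    """Joint Metropolis walk over (sequence, conformation) pairs."""
    rng = random.Random(seed)
    confs = list(saws(n))
    seq = tuple(rng.choice('HP') for _ in range(n))
    conf = rng.choice(confs)
    visits = {}
    for _ in range(steps):
        if rng.random() < 0.5:          # sequence move: flip one residue
            i = rng.randrange(n)
            cand = seq[:i] + ('P' if seq[i] == 'H' else 'H',) + seq[i + 1:]
            new = (cand, conf)
        else:                           # conformation move: jump to a new walk
            new = (seq, rng.choice(confs))
        d_e = energy(*new) - energy(seq, conf)
        if rng.random() < math.exp(-beta * max(0.0, d_e)):
            seq, conf = new
        visits[(seq, conf)] = visits.get((seq, conf), 0) + 1
    return visits

For each sequence, the fraction of its visits spent in the target structure estimates the conditional probability that the design procedure maximizes.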
Development of a low cost robot system for autonomous measuring of spatial field distributions
A new kind of modular multi-purpose robot system has been developed to measure spatial field distributions over very large as well as small and crowded areas. The probe is automatically placed at a number of pre-defined positions, where measurements are carried out. The advantages of this system are its very low influence on the measured field and its wide range of possible applications. In addition, the initial costs are quite low. In this paper, the theory underlying the measurement principle is explained, the accuracy is analyzed, and sample measurements are presented.
Nonlinear bias correction for satellite data assimilation using Taylor series polynomials
Output from a high-resolution ensemble data assimilation system is used to assess the ability of an innovative nonlinear bias correction (BC) method that uses a Taylor series polynomial expansion of the observation-minus-background departures to remove linear and nonlinear conditional biases from all-sky satellite infrared brightness temperatures. Univariate and multivariate experiments were performed in which the satellite zenith angle and variables sensitive to clouds and water vapor were used as the BC predictors. The results showed that even though the bias of the entire observation departure distribution is equal to zero regardless of the order of the Taylor series expansion, there are often large conditional biases that vary as a nonlinear function of the BC predictor. The linear first-order term had the largest impact on the entire distribution, as measured by reductions in variance; however, large conditional biases often remained in the distribution when plotted as a function of the predictor. These conditional biases were typically reduced to near zero when the nonlinear second- and third-order terms were used. The univariate results showed that variables sensitive to the cloud top height are effective BC predictors, especially when higher-order Taylor series terms are used. Comparison of the statistics for clear-sky and cloudy-sky observations revealed that nonlinear departures are more important for cloudy-sky observations, as signified by the much larger impact of the second- and third-order terms on the conditional biases. Together, these results indicate that the nonlinear BC method is able to effectively remove the bias from all-sky infrared observation departures.
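To make the mechanism concrete, the following hedged Python sketch applies a polynomial bias correction of increasing order to synthetic departures carrying a nonlinear conditional bias in a single predictor. The predictor range, the shape of the bias, and the noise level are invented for illustration and stand in for real all-sky brightness temperature statistics.

import numpy as np

# Fit O-minus-B departures as a polynomial in one predictor (a stand-in
# for satellite zenith angle) and subtract the fit. The mean corrected
# departure is zero at every order, but the conditional bias within
# predictor bins only vanishes once nonlinear terms are included.
rng = np.random.default_rng(1)
predictor = rng.uniform(0.0, 60.0, 5000)
true_bias = 0.02 * (predictor - 30.0) ** 2 / 30.0   # nonlinear in predictor
departures = true_bias + rng.normal(0.0, 0.5, predictor.size)

edges = np.linspace(0.0, 60.0, 13)
bins = np.digitize(predictor, edges[1:-1])          # 12 conditional bins
for order in (1, 2, 3):
    fit = np.polyval(np.polyfit(predictor, departures, order), predictor)
    corrected = departures - fit
    cond = np.array([corrected[bins == b].mean() for b in range(12)])
    print(f"order {order}: mean {corrected.mean():+.4f}, "
          f"variance {corrected.var():.3f}, "
          f"max conditional bias {np.abs(cond).max():.3f}")

Running this reproduces the qualitative finding above: the first-order fit already zeroes the overall mean and cuts the variance, but a quadratic conditional bias survives until the second- and third-order terms are added.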
A Decade of Shared Tasks in Digital Text Forensics at PAN
Digital text forensics aims at examining the originality and credibility of information in electronic documents and, in this regard, at extracting and analyzing information about the authors of these documents. The research field has developed substantially during the last decade. PAN is a series of shared tasks that started in 2009 and has significantly contributed to attracting the attention of the research community to well-defined digital text forensics tasks. Several benchmark datasets have been developed to assess the state-of-the-art performance in a wide range of tasks. In this paper, we present the evolution of both the examined tasks and the developed datasets during the last decade. We also briefly introduce the upcoming PAN 2019 shared tasks.
We are indebted to many colleagues and friends who contributed greatly to PAN's tasks: Maik Anderka, Shlomo Argamon, Alberto Barrón-Cedeño, Fabio Celli, Fabio Crestani, Walter Daelemans, Andreas Eiselt, Tim Gollub, Parth Gupta, Matthias Hagen, Teresa Holfeld, Patrick Juola, Giacomo Inches, Mike Kestemont, Moshe Koppel, Manuel Montes-y-Gómez, Aurelio Lopez-Lopez, Francisco Rangel, Miguel Angel Sánchez-Pérez, Günther Specht, Michael Tschuggnall, and Ben Verhoeven. Our special thanks go to PAN's sponsors throughout the years and not least to the hundreds of participants.
Potthast, M.; Rosso, P.; Stamatatos, E.; Stein, B. (2019). A Decade of Shared Tasks in Digital Text Forensics at PAN. Lecture Notes in Computer Science 11438:291-300. https://doi.org/10.1007/978-3-030-15719-7_39
Fault tree analysis for system modeling in case of intentional EMI
The complexity of modern systems on the one hand and the rising threat of intentional electromagnetic interference (IEMI) on the other increase the need for systematic risk analysis. Most of the problems cannot be treated deterministically, since slight changes in the configuration (source, position, polarization, ...) can dramatically change the outcome of an event. For this purpose, methods known from probabilistic risk analysis can be applied. One of the most common approaches is fault tree analysis (FTA), which is used to determine the system failure probability and to identify the main contributors to failure. In this paper, fault tree analysis is introduced and a possible application of the method is shown using a small computer network as an example. The constraints of this method are explained and conclusions for further research are drawn.
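As a toy illustration of how a fault tree turns basic-event probabilities into a top-event probability, consider the Python sketch below. The gate structure and the probabilities are invented, assume independent basic events, and are not taken from the paper.

def p_and(*ps):
    """AND gate: the output fails only if every input fails."""
    out = 1.0
    for p in ps:
        out *= p
    return out

def p_or(*ps):
    """OR gate: the output fails if any input fails."""
    out = 1.0
    for p in ps:
        out *= 1.0 - p
    return 1.0 - out

# Hypothetical basic events: probability that one IEMI exposure upsets
# each component of a small network.
p_router, p_server, p_client_a, p_client_b = 0.10, 0.05, 0.20, 0.20

# Top event "network service lost": the router fails, or the server
# fails, or both redundant clients fail.
p_top = p_or(p_router, p_server, p_and(p_client_a, p_client_b))
print(f"P(top event) = {p_top:.4f}")   # 1 - 0.9 * 0.95 * 0.96 = 0.1792

Beyond the top-event probability, comparing the terms of such an expression immediately shows which basic events contribute most to system failure, which is exactly the second use of FTA mentioned above.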
Simplified modeling of EM field coupling to complex cable bundles
In this contribution, the "Equivalent Cable Bundle Method" is used to simplify large cable bundles, and it is extended to differential signal lines. The main focus is on the reduction of twisted-pair cables. Furthermore, the process presented here makes it possible to take into account cables whose wires are situated very close to each other. The procedure is based on a new approach for calculating the geometry of the simplified cable and exploits the fact that the line parameters do not uniquely correspond to a single geometry; for this reason, an optimization algorithm is applied.
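The non-uniqueness the procedure relies on can be made tangible with a small Python sketch for the simplest special case: a single wire above a perfectly conducting ground plane has per-unit-length inductance L' = mu0/(2*pi) * acosh(h/r), which depends only on the ratio h/r, so many (h, r) pairs reproduce the same line parameter and a solver is free to pick a convenient simplified geometry. The target value and radius below are illustrative assumptions.

import math

MU0 = 4.0e-7 * math.pi   # permeability of free space, H/m

def l_per_meter(h, r):
    """Per-unit-length inductance of a wire (radius r) at height h."""
    return MU0 / (2.0 * math.pi) * math.acosh(h / r)

def solve_height(l_target, r):
    """Bisection for the height reproducing l_target (L' is monotone in h)."""
    lo, hi = 1.001 * r, 1.0e6 * r
    for _ in range(200):
        mid = math.sqrt(lo * hi)   # geometric mean suits the log-like growth
        if l_per_meter(mid, r) < l_target:
            lo = mid
        else:
            hi = mid
    return lo

r = 1.0e-3                          # assumed equivalent radius: 1 mm
h = solve_height(0.4e-6, r)         # assumed target: 0.4 uH/m
print(f"h = {h * 1e3:.2f} mm reproduces "
      f"L' = {l_per_meter(h, r) * 1e6:.3f} uH/m")

For a full bundle the same idea applies to the reduced per-unit-length matrices rather than a single scalar, which is why a general optimization algorithm is needed.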
Recommended from our members
Kernel reconstruction for delayed neural field equations
Understanding the neural field activity for realistic living systems is a challenging task in contemporary neuroscience. Neural fields have been studied and developed theoretically and numerically with considerable success over the past four decades. However, to make effective use of such models, we need to identify their constituents in practical systems. This includes the determination of model parameters and, in particular, the reconstruction of the underlying effective connectivity in biological tissues. In this work, we provide an integral equation approach to the reconstruction of the neural connectivity in the case where the neural activity is governed by a delay neural field equation. As preparation, we study the solution of the direct problem based on the Banach fixed point theorem. Then we reformulate the inverse problem into a family of integral equations of the first kind. This equation will be vector-valued when several neural activity trajectories are taken as input for the inverse problem. We employ spectral regularization techniques for its stable solution. A sensitivity analysis of the regularized kernel reconstruction with respect to the input signal u is carried out, investigating the Fréchet differentiability of the kernel with respect to the signal. Finally, we use numerical examples to show the feasibility of the approach for kernel reconstruction, including numerical sensitivity tests, which show that the integral equation approach is a very stable and promising approach for practical computational neuroscience.
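The regularized solve at the heart of this approach can be sketched in a few lines of Python: discretize a first-kind integral equation A w = g and invert it with Tikhonov-filtered SVD, one member of the family of spectral regularization techniques mentioned above. The Mexican-hat kernel, the Gaussian forward operator, the noise level, and the regularization parameter are all illustrative; in the paper, the operator is assembled from delayed neural activity trajectories rather than chosen by hand.

import numpy as np

n = 200
x = np.linspace(-5.0, 5.0, n)
dx = x[1] - x[0]

# Illustrative "true" connectivity kernel (Mexican hat).
w_true = np.exp(-x**2) - 0.5 * np.exp(-x**2 / 4.0)

# Smooth forward operator: first-kind problems like this are ill-posed.
A = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2) * dx
g = A @ w_true + 1.0e-4 * np.random.default_rng(0).normal(size=n)

# Tikhonov regularization through the SVD: damp small singular values
# with the filter factors s / (s^2 + lambda^2).
U, s, Vt = np.linalg.svd(A)
lam = 1.0e-3
w_rec = Vt.T @ ((s / (s**2 + lam**2)) * (U.T @ g))

err = np.linalg.norm(w_rec - w_true) / np.linalg.norm(w_true)
print(f"relative reconstruction error: {err:.3f}")
# With lam = 0 the tiny singular values amplify the noise and the
# unregularized reconstruction blows up.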
Predicting the Next Best View for 3D Mesh Refinement
3D reconstruction is a core task in many applications such as robot navigation or site inspection. Finding the best poses from which to capture part of the scene is one of the most challenging problems, and it goes under the name of Next Best View. Recently, many volumetric methods have been proposed; they choose the Next Best View by reasoning over a 3D voxelized space and by finding which pose minimizes the uncertainty encoded in the voxels. Such methods are effective, but they do not scale well, since the underlying representation requires a huge amount of memory. In this paper we propose a novel mesh-based approach that focuses on the worst reconstructed region of the environment mesh. We define a photo-consistent index to evaluate the 3D mesh accuracy, and an energy function over the worst regions of the mesh that takes into account the mutual parallax with respect to the previous cameras, the angle of incidence of the viewing ray on the surface, and the visibility of the region. We test our approach on a well-known dataset and achieve state-of-the-art results.
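A hedged Python sketch of such a scoring function is given below: a candidate pose is scored against one poorly reconstructed mesh patch by combining the three ingredients named in the abstract. The function name, the linear combination, and the weights are invented for illustration and do not reproduce the paper's energy.

import numpy as np

def view_energy(cam, prev_cams, patch_center, patch_normal,
                visible_frac, weights=(1.0, 1.0, 1.0)):
    """Score a candidate camera pose for one worst-region mesh patch."""
    ray = patch_center - cam
    ray /= np.linalg.norm(ray)
    # Mutual parallax: smallest angle between this viewing ray and the
    # rays of the previous cameras (some baseline helps triangulation).
    parallax = min(
        np.arccos(np.clip(ray @ ((patch_center - p) /
                                 np.linalg.norm(patch_center - p)),
                          -1.0, 1.0))
        for p in prev_cams)
    # Angle of incidence: favor rays that hit the surface head-on.
    incidence = abs(ray @ patch_normal)
    w1, w2, w3 = weights
    return w1 * parallax + w2 * incidence + w3 * visible_frac

prev = [np.array([2.0, 0.0, 1.0]), np.array([0.0, 2.0, 1.0])]
candidate = np.array([-2.0, -2.0, 1.5])
center, normal = np.zeros(3), np.array([0.0, 0.0, 1.0])
print(f"energy: {view_energy(candidate, prev, center, normal, 0.8):.3f}")

Evaluating this score over a set of candidate poses and keeping the best one is the selection step that a Next Best View planner would iterate after each new capture.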