    In situ growth regime characterization of cubic GaN using reflection high energy electron diffraction

    Cubic GaN layers were grown by plasma-assisted molecular beam epitaxy on 3C-SiC (001) substrates. In situ reflection high energy electron diffraction (RHEED) was used to quantitatively determine the Ga coverage of the GaN surface during growth. Using the intensity of the electron beam as a probe, optimum growth conditions for c-GaN were found when a Ga coverage of 1 ML was formed at the surface. One micrometer thick c-GaN layers had a minimum surface roughness of 2.5 nm when a Ga coverage of 1 ML was established during growth. These samples also revealed a minimum full width at half maximum of the (002) rocking curve.
    Comment: 3 pages with 4 figures
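
    As a rough illustration of how a RHEED intensity transient could be turned into a coverage estimate, the sketch below maps the specular intensity linearly between its bare-surface and fully covered values and flags the time at which roughly one monolayer has accumulated. The linear intensity-to-coverage map, the synthetic transient, and all numbers are illustrative assumptions, not the calibration used in the paper.

        # Hypothetical sketch: converting a RHEED specular-intensity transient
        # into a Ga coverage estimate. All parameters are assumptions.
        import numpy as np

        def coverage_from_intensity(intensity, i_bare, i_1ml):
            """Assume intensity falls linearly from the bare-surface value
            (0 ML) to the value at full coverage (1 ML)."""
            return (i_bare - intensity) / (i_bare - i_1ml)

        # Synthetic transient after opening the Ga shutter (arbitrary units).
        t = np.linspace(0.0, 20.0, 400)                 # time in seconds
        intensity = 0.35 + 0.65 * np.exp(-t / 5.0)      # decays toward steady state

        theta = coverage_from_intensity(intensity, i_bare=1.0, i_1ml=0.4)
        crossed = theta >= 1.0
        if crossed.any():
            print(f"~1 ML Ga coverage reached after {t[np.argmax(crossed)]:.1f} s")
        else:
            print("1 ML coverage not reached in this time window")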

    Design of Sequences with Good Folding Properties in Coarse-Grained Protein Models

    Background: Designing amino acid sequences that are stable in a given target structure amounts to maximizing a conditional probability. A straightforward approach to accomplish this is a nested Monte Carlo where the conformation space is explored over and over again for different fixed sequences, which is computationally prohibitive. Several approximate attempts to remedy this situation, based on energy minimization for a fixed structure or on high-T expansions, have been proposed. These methods are fast but often inaccurate, since folding occurs at low T. Results: We develop a multisequence Monte Carlo procedure, where both sequence and conformation space are probed simultaneously, with efficient prescriptions for pruning sequence space. The method is explored on hydrophobic/polar models. We first discuss short lattice chains, in order to compare with exact data and with other methods. The method is then successfully applied to lattice chains with up to 50 monomers, and to off-lattice 20-mers. Conclusions: The multisequence Monte Carlo method offers a new approach to sequence design in coarse-grained models. It is much more efficient than previous Monte Carlo methods and is, as it stands, applicable to a fairly wide range of two-letter models.
    Comment: 23 pages, 7 figures
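
    To make the idea concrete, the sketch below runs a joint Metropolis walk over a small fixed set of candidate sequences and over conformations of a 2D HP lattice chain: with probability p_seq it proposes jumping to another sequence at fixed conformation, otherwise it proposes a pivot move at fixed sequence. The move set, the uniform sequence weights, and the crude "revisit" score are simplified assumptions; the paper's pruning prescriptions are not reproduced here.

        # Minimal multisequence Monte Carlo sketch for the 2D HP lattice model.
        import math
        import random

        ROT90 = {'U': 'R', 'R': 'D', 'D': 'L', 'L': 'U'}   # 90-degree clockwise turn

        def positions(walk):
            """Turn a direction string ('U','D','L','R') into lattice points."""
            step = {'U': (0, 1), 'D': (0, -1), 'L': (-1, 0), 'R': (1, 0)}
            pos = [(0, 0)]
            for c in walk:
                x, y = pos[-1]
                dx, dy = step[c]
                pos.append((x + dx, y + dy))
            return pos

        def self_avoiding(walk):
            pos = positions(walk)
            return len(set(pos)) == len(pos)

        def fold_energy(seq, walk):
            """HP energy: -1 for every non-bonded H-H lattice contact."""
            pos = positions(walk)
            occupied = {p: i for i, p in enumerate(pos)}
            contacts = 0
            for i, p in enumerate(pos):
                if seq[i] != 'H':
                    continue
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    j = occupied.get((p[0] + dx, p[1] + dy))
                    if j is not None and seq[j] == 'H' and abs(i - j) > 1:
                        contacts += 1
            return -(contacts // 2)            # each contact was counted twice

        def pivot(walk):
            """Rotate the tail of the walk about a random monomer."""
            k = random.randrange(len(walk))
            tail = walk[k:]
            for _ in range(random.choice((1, 2, 3))):
                tail = ''.join(ROT90[c] for c in tail)
            return walk[:k] + tail

        def accept(e_old, e_new, beta):
            return e_new <= e_old or random.random() < math.exp(-beta * (e_new - e_old))

        def multisequence_mc(seqs, n_steps=20000, beta=2.0, p_seq=0.3):
            """Jointly sample sequence and conformation; record each sequence's
            lowest energy and how often that level is revisited."""
            walk = 'R' * (len(seqs[0]) - 1)    # extended starting conformation
            s = 0                              # index of the current sequence
            e = fold_energy(seqs[s], walk)
            best = [0] * len(seqs)
            hits = [0] * len(seqs)
            for _ in range(n_steps):
                if random.random() < p_seq:    # sequence move, conformation fixed
                    t = random.randrange(len(seqs))
                    e_new = fold_energy(seqs[t], walk)
                    if accept(e, e_new, beta):
                        s, e = t, e_new
                else:                          # conformation move, sequence fixed
                    trial = pivot(walk)
                    if self_avoiding(trial):
                        e_new = fold_energy(seqs[s], trial)
                        if accept(e, e_new, beta):
                            walk, e = trial, e_new
                if e < best[s]:
                    best[s], hits[s] = e, 0
                if e == best[s]:
                    hits[s] += 1
            return best, hits

        seqs = ["HPHPPHHPHH", "HHHHHPPPPP", "HPHPHPHPHP"]
        best, hits = multisequence_mc(seqs)
        for seq, e, h in zip(seqs, best, hits):
            print(seq, "lowest energy:", e, "visits at that level:", h)

    Sequences that spend many steps at a low energy level are, in this toy score, the better "designed" ones; the paper instead prunes sequence space with explicit prescriptions.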

    Development of a low cost robot system for autonomous measuring of spatial field distributions

    A new kind of modular multi-purpose robot system has been developed to measure spatial field distributions in very large as well as in small and crowded areas. The probe is automatically placed at a number of pre-defined positions where measurements are carried out. The advantages of this system are its very low influence on the measured field and its wide range of possible applications. In addition, the initial costs are quite low. In this paper, the theory underlying the measurement principle is explained, the accuracy is analyzed, and sample measurements are presented.
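
    The measurement loop described above can be caricatured in a few lines: move the probe through a pre-defined grid of positions and record one field sample per position. The function names move_to and read_field are hypothetical stand-ins for the robot and instrument drivers, which the paper does not specify.

        # Hypothetical sketch of an autonomous field-scan loop.
        import itertools

        def scan_grid(xs, ys, zs, move_to, read_field):
            """Visit every (x, y, z) position and return a list of samples."""
            samples = []
            for x, y, z in itertools.product(xs, ys, zs):
                move_to(x, y, z)                  # position the probe
                samples.append(((x, y, z), read_field()))
            return samples

        # Example with dummy drivers: a 3 x 3 scan plane at fixed height.
        readings = scan_grid(
            xs=[0.0, 0.5, 1.0], ys=[0.0, 0.5, 1.0], zs=[0.3],
            move_to=lambda x, y, z: None,     # would command the robot
            read_field=lambda: 42.0,          # would query the field probe
        )
        print(len(readings), "samples collected")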

    A Decade of Shared Tasks in Digital Text Forensics at PAN

    Digital text forensics aims at examining the originality and credibility of information in electronic documents and, in this regard, at extracting and analyzing information about the authors of these documents. The research field has developed substantially during the last decade. PAN is a series of shared tasks that started in 2009 and has significantly contributed to attracting the attention of the research community to well-defined digital text forensics tasks. Several benchmark datasets have been developed to assess the state-of-the-art performance in a wide range of tasks. In this paper, we present the evolution of both the examined tasks and the developed datasets during the last decade. We also briefly introduce the upcoming PAN 2019 shared tasks.
    We are indebted to many colleagues and friends who contributed greatly to PAN's tasks: Maik Anderka, Shlomo Argamon, Alberto Barrón-Cedeño, Fabio Celli, Fabio Crestani, Walter Daelemans, Andreas Eiselt, Tim Gollub, Parth Gupta, Matthias Hagen, Teresa Holfeld, Patrick Juola, Giacomo Inches, Mike Kestemont, Moshe Koppel, Manuel Montes-y-Gómez, Aurelio Lopez-Lopez, Francisco Rangel, Miguel Angel Sánchez-Pérez, Günther Specht, Michael Tschuggnall, and Ben Verhoeven. Our special thanks go to PAN's sponsors throughout the years and not least to the hundreds of participants.
    Potthast, M.; Rosso, P.; Stamatatos, E.; Stein, B. (2019). A Decade of Shared Tasks in Digital Text Forensics at PAN. Lecture Notes in Computer Science 11438:291-300. https://doi.org/10.1007/978-3-030-15719-7_39

    Fault tree analysis for system modeling in case of intentional EMI

    The complexity of modern systems on the one hand and the rising threat of intentional electromagnetic interference (IEMI) on the other increase the necessity for systematic risk analysis. Most of these problems cannot be treated deterministically, since slight changes in the configuration (source, position, polarization, ...) can dramatically change the outcome of an event. For that purpose, methods known from probabilistic risk analysis can be applied. One of the most common approaches is fault tree analysis (FTA), which is used to determine the system failure probability as well as the main contributors to that failure. In this paper, fault tree analysis is introduced and a possible application of the method is shown using a small computer network as an example. The constraints of the method are explained and conclusions for further research are drawn.
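
    The core FTA computation is easy to sketch: combine the failure probabilities of independent basic events through AND/OR gates up to the top event. The toy tree below (a network that fails if the router fails, or if both redundant servers fail) is an invented example, not the network studied in the paper.

        # Minimal fault tree evaluation with AND/OR gates.
        from dataclasses import dataclass
        from typing import List, Union

        @dataclass
        class Basic:
            p: float                       # failure probability of a basic event

        @dataclass
        class Gate:
            kind: str                      # "AND" or "OR"
            children: List[Union["Gate", Basic]]

        def probability(node):
            """Exact top-event probability, assuming independent basic events."""
            if isinstance(node, Basic):
                return node.p
            ps = [probability(c) for c in node.children]
            if node.kind == "AND":         # all children must fail
                out = 1.0
                for p in ps:
                    out *= p
                return out
            out = 1.0                      # OR: 1 - prod(1 - p_i)
            for p in ps:
                out *= 1.0 - p
            return 1.0 - out

        tree = Gate("OR", [Basic(0.01),                              # router fails
                           Gate("AND", [Basic(0.05), Basic(0.05)])]) # both servers fail
        print(f"Top event probability: {probability(tree):.6f}")     # ~0.0125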

    Simplified modeling of EM field coupling to complex cable bundles

    In this contribution, the "Equivalent Cable Bundle Method" is used to simplify large cable bundles, and it is extended to differential signal lines. The main focus is on the reduction of twisted-pair cables. Furthermore, the process presented here makes it possible to take into account cables whose wires are situated quite close to each other. The procedure is based on a new approach to calculating the geometry of the simplified cable and exploits the fact that the line parameters do not uniquely correspond to a certain geometry. For this reason, an optimization algorithm is applied.
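
    The final point, that an optimizer can pick one geometry among the many matching a given set of line parameters, can be illustrated as follows: search for a wire height h and radius r (single thin wire over a ground plane) whose analytic per-unit-length inductance L' = (mu0 / 2 pi) ln(2h / r) matches a target value taken from the reduced bundle. The target value, bounds, and single-wire model are invented for illustration; the paper's actual approach is more elaborate.

        # Hedged sketch: fit a simplified single-wire geometry to a target
        # per-unit-length inductance. All numbers are assumptions.
        import numpy as np
        from scipy.optimize import minimize

        MU0 = 4e-7 * np.pi                      # vacuum permeability (H/m)

        def inductance_per_m(h, r):
            """Per-unit-length inductance of a thin wire at height h over ground."""
            return MU0 / (2 * np.pi) * np.log(2 * h / r)

        L_TARGET = 5e-7                          # H/m, assumed reduction result

        def mismatch(x):
            h, r = x
            return (inductance_per_m(h, r) - L_TARGET) ** 2

        res = minimize(mismatch, x0=[10e-3, 1e-3],          # guess: h=10 mm, r=1 mm
                       bounds=[(2e-3, 50e-3), (0.1e-3, 2e-3)])
        h_opt, r_opt = res.x
        print(f"h = {h_opt*1e3:.2f} mm, r = {r_opt*1e3:.3f} mm, "
              f"L' = {inductance_per_m(h_opt, r_opt)*1e9:.1f} nH/m")

    Because many (h, r) pairs yield the same L', the bounds and starting point effectively choose which of the equivalent geometries the optimizer returns.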

    Predicting the Next Best View for 3D Mesh Refinement

    3D reconstruction is a core task in many applications such as robot navigation or site inspection. Finding the best poses from which to capture parts of the scene is one of the most challenging problems, and it goes under the name of Next Best View. Recently, many volumetric methods have been proposed; they choose the Next Best View by reasoning over a 3D voxelized space and by finding the pose that minimizes the uncertainty encoded in the voxels. Such methods are effective, but they do not scale well, since the underlying representation requires a huge amount of memory. In this paper we propose a novel mesh-based approach which focuses on the worst reconstructed region of the environment mesh. We define a photo-consistent index to evaluate the accuracy of the 3D mesh, and an energy function over the worst regions of the mesh which takes into account the mutual parallax with respect to the previous cameras, the angle of incidence of the viewing ray on the surface, and the visibility of the region. We test our approach on a well-known dataset and achieve state-of-the-art results.
    Comment: 13 pages, 5 figures, to be published in IAS-1
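
    A toy version of such a view-scoring energy can be sketched by combining the three cues the abstract names, parallax with respect to previous cameras, incidence angle of the viewing ray, and visibility, into one weighted score per candidate pose. The cue formulas and weights below are assumptions for illustration; the paper defines its own energy function.

        # Illustrative sketch: score candidate views for one poorly
        # reconstructed mesh region.
        import numpy as np

        def unit(v):
            return v / np.linalg.norm(v)

        def view_score(cand_cam, prev_cams, region_center, region_normal,
                       visible_fraction, w=(1.0, 1.0, 1.0)):
            ray = unit(region_center - cand_cam)
            # Parallax: largest angle between the candidate viewing ray and
            # any previous one, favoring well-triangulated baselines.
            parallax = max(
                np.arccos(np.clip(np.dot(ray, unit(region_center - c)), -1.0, 1.0))
                for c in prev_cams)
            # Incidence: reward rays hitting the surface head-on.
            incidence = max(0.0, float(np.dot(-ray, unit(region_normal))))
            return w[0] * parallax + w[1] * incidence + w[2] * visible_fraction

        # Toy usage: pick the better of two candidate poses for one region.
        region_c = np.array([0.0, 0.0, 0.0])
        region_n = np.array([0.0, 0.0, 1.0])
        prev = [np.array([1.0, 0.0, 2.0])]
        cands = [np.array([-1.0, 0.0, 2.0]), np.array([1.2, 0.1, 2.0])]
        best = max(cands, key=lambda c: view_score(c, prev, region_c, region_n, 0.8))
        print("chosen candidate:", best)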