    Huddl: the Hydrographic Universal Data Description Language

    Since many of the attempts to introduce a universal hydrographic data format have failed or have been only partially successful, a different approach is proposed. Our solution is the Hydrographic Universal Data Description Language (HUDDL), a descriptive XML-based language that permits the creation of a standardized description of (past, present, and future) data formats, and allows for applications like HUDDLER, a compiler that automatically creates drivers for data access and manipulation. HUDDL also represents a powerful solution for archiving data along with their structural description, as well as for cataloguing existing format specifications and their version control. HUDDL is intended to be an open, community-led initiative to simplify the issues involved in hydrographic data access.

    HUDDL for description and archive of hydrographic binary data

    Many of the attempts to introduce a universal hydrographic binary data format have failed or have been only partially successful. In essence, this is because such formats either have to simplify the data to such an extent that they only support the lowest common subset of all the formats covered, or they attempt to be a superset of all formats and quickly become cumbersome. Neither choice works well in practice. This paper presents a different approach: a standardized description of (past, present, and future) data formats using the Hydrographic Universal Data Description Language (HUDDL), a descriptive language implemented using the Extensible Markup Language (XML). That is, XML is used to provide a structural and physical description of a data format, rather than the content of a particular file. Done correctly, this opens the possibility of automatically generating both multi-language data parsers and documentation for format specifications based on their HUDDL descriptions, as well as providing easy version control of them. This solution also provides a powerful approach for archiving a structural description of data along with the data, so that binary data will be easy to access in the future. Intending to provide a relatively low-effort solution to index the wide range of existing formats, we suggest the creation of a catalogue of format descriptions, each of them capturing the logical and physical specifications for a given data format (with its subsequent upgrades). A C/C++ parser code generator is used as an example prototype of one of the possible advantages of the adoption of such a hydrographic data format catalogue.
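    The actual HUDDL schema and the HUDDLER code generator are not reproduced here; as a minimal sketch of the general idea (deriving a binary parser automatically from an XML structural description), the following Python snippet uses an invented, greatly simplified description format. The element names, type names, and record layout are assumptions for illustration only, not the HUDDL specification.

```python
import struct
import xml.etree.ElementTree as ET

# Invented, simplified stand-in for a HUDDL-style structural description:
# each <field> gives a name and a primitive type with a known binary layout.
DESCRIPTION = """
<record name="ping" endian="little">
  <field name="timestamp" type="double"/>
  <field name="beam_count" type="uint16"/>
  <field name="depth" type="float"/>
</record>
"""

# Mapping from description types to struct format codes (an assumption).
TYPE_CODES = {"double": "d", "uint16": "H", "float": "f", "int32": "i"}

def compile_parser(xml_text):
    """Build a parse function from the XML description, in the spirit of HUDDLER."""
    root = ET.fromstring(xml_text)
    prefix = "<" if root.get("endian") == "little" else ">"
    fields = root.findall("field")
    names = [f.get("name") for f in fields]
    record = struct.Struct(prefix + "".join(TYPE_CODES[f.get("type")] for f in fields))
    def parse(buf):
        return dict(zip(names, record.unpack(buf)))
    return parse, record.size

parse, size = compile_parser(DESCRIPTION)
blob = struct.pack("<dHf", 1234.5, 256, 42.25)
print(parse(blob))  # {'timestamp': 1234.5, 'beam_count': 256, 'depth': 42.25}
```

    Because the description, not the parser, is what gets archived and version-controlled, the same XML file can drive generators for other target languages (the paper's prototype targets C/C++).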

    Development of a fusion adaptive algorithm for marine debris detection within the post-Sandy restoration framework

    Recognition of marine debris represents a difficult task due to the extreme variability of the marine environment, the possible targets, and the variable skill levels of human operators. The range of potential targets is much wider than in similar fields of research such as mine hunting, localization of unexploded ordnance, or pipeline detection. In order to address this additional complexity, an adaptive algorithm is being developed that appropriately responds to changes in the environment and context. The preliminary step is to properly geometrically and radiometrically correct the collected data. Then, the core engine manages the fusion of a set of statistically- and physically-based algorithms, working at different levels (swath, beam, snippet, and pixel) and using both predictive modeling (that is, a high-frequency acoustic backscatter model) and phenomenological (e.g., digital image processing techniques) approaches. The expected outcome is the reduction of inter-algorithmic cross-correlation and, thus, of the probability of false alarm. At this early stage, we provide a proof of concept showing outcomes from algorithms that dynamically adapt themselves to the depth and average backscatter level met in the surveyed environment, targeting marine debris (modeled as objects of about 1-m size). The project relies on a modular software library, called Matador (Marine Target Detection and Object Recognition).
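    The Matador library itself is not shown in the abstract; purely as an illustrative sketch of the two ideas it describes (detectors whose thresholds adapt to the statistics of the surveyed scene, and conservative fusion of multiple detectors to lower the false-alarm probability), one might write something like the following. The detector names, the k = 2 threshold, and the toy data are all invented.

```python
import statistics

def adaptive_flags(samples, k=2.0):
    """Flag samples more than k standard deviations above the scene mean,
    so the threshold adapts to the background level of each survey area."""
    mu = statistics.fmean(samples)
    sigma = statistics.pstdev(samples)
    return [x > mu + k * sigma for x in samples]

def fuse(flags_a, flags_b):
    """Conservative fusion: a detection must be confirmed by both detectors,
    which reduces the probability of false alarm at the cost of sensitivity."""
    return [a and b for a, b in zip(flags_a, flags_b)]

backscatter = [10, 11, 9, 10, 30, 10, 11]        # one strong scatterer (index 4)
height_anomaly = [0.1, 0.0, 0.2, 0.1, 1.5, 0.1, 0.0]
detections = fuse(adaptive_flags(backscatter), adaptive_flags(height_anomaly))
print(detections)  # only index 4 is flagged by both detectors
```

    The actual system fuses many more cues at several levels (swath, beam, snippet, pixel) and includes physically-based backscatter modeling; this sketch only conveys the adapt-then-fuse structure.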

    Coupled Lugiato-Lefever equation for nonlinear frequency comb generation at an avoided crossing of a microresonator

    Guided-mode coupling in a microresonator generally manifests itself through avoided crossings of the corresponding resonances. This coupling can strongly modify the resonator's local effective dispersion by creating two branches that have dispersions of opposite sign in spectral regions that would otherwise be characterized by either positive (normal) or negative (anomalous) dispersion. In this paper, we study, both analytically and computationally, the general properties of nonlinear frequency comb generation at an avoided crossing using the coupled Lugiato-Lefever equation. In particular, we find that bright solitons and broadband frequency combs can be excited when both branches are pumped for a suitable choice of the pump powers and the detuning parameters. A deterministic path for soliton generation is found.
    Comment: 9 pages, 5 figures
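    For orientation, a commonly used normalized form of two linearly coupled Lugiato-Lefever equations is sketched below. The symbols and normalization are assumptions based on the standard single-mode LLE, not taken from the paper, whose exact model (e.g., additional cross-phase-modulation terms or branch-dependent coefficients) may differ:

```latex
\frac{\partial \psi_1}{\partial t}
  = -(1 + i\alpha_1)\,\psi_1
    - i\,\frac{\beta_1}{2}\,\frac{\partial^2 \psi_1}{\partial \theta^2}
    + i\,|\psi_1|^2 \psi_1
    + i\kappa\,\psi_2 + F_1,
\qquad
\frac{\partial \psi_2}{\partial t}
  = -(1 + i\alpha_2)\,\psi_2
    - i\,\frac{\beta_2}{2}\,\frac{\partial^2 \psi_2}{\partial \theta^2}
    + i\,|\psi_2|^2 \psi_2
    + i\kappa\,\psi_1 + F_2,
```

    where $\psi_{1,2}$ are the intracavity fields of the two coupled mode families, $\alpha_{1,2}$ the pump detunings, $\beta_{1,2}$ the second-order dispersion coefficients (of opposite sign near the avoided crossing), $\kappa$ the linear coupling rate, $\theta$ the azimuthal coordinate, and $F_{1,2}$ the pump terms for the two branches.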

    A Bayesian marine debris detector using existing hydrographic data products


    Efficient mining of discriminative molecular fragments

    Frequent pattern discovery in structured data is receiving increasing attention in many application areas of science. However, the computational complexity and the large amount of data to be explored often make sequential algorithms unsuitable. In this context, high-performance distributed computing becomes a very interesting and promising approach. In this paper we present a parallel formulation of the frequent subgraph mining problem to discover interesting patterns in molecular compounds. The application is characterized by a highly irregular tree-structured computation. No estimation is available for task workloads, which show a power-law distribution over a wide range. The proposed approach allows dynamic resource aggregation and provides fault and latency tolerance. These features make the distributed application suitable for multi-domain heterogeneous environments, such as computational Grids. The distributed application has been evaluated on the well-known National Cancer Institute's HIV-screening dataset.
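    The paper's parallel, tree-structured mining algorithm is far more involved than can be shown here; as a toy sketch of the underlying selection criterion (a fragment is "discriminative" when it is frequent in one class of molecules but rare in the other), consider the following. Fragments are pre-enumerated as canonical strings, and the thresholds and data are invented, sidestepping the subgraph-isomorphism search that makes the real problem hard.

```python
from collections import Counter

# Toy molecules represented as sets of canonical fragment labels (invented).
actives = [{"C-O", "C=C", "C-N"}, {"C-O", "C=C"}, {"C-O", "C-Cl"}]
inactives = [{"C=C"}, {"C-N", "C=C"}, {"C-Cl"}]

def discriminative(pos, neg, min_pos=2, max_neg=1):
    """Keep fragments supported by at least min_pos active molecules
    and at most max_neg inactive molecules."""
    pos_counts = Counter(f for mol in pos for f in mol)
    neg_counts = Counter(f for mol in neg for f in mol)
    return {f for f, c in pos_counts.items()
            if c >= min_pos and neg_counts[f] <= max_neg}

print(sorted(discriminative(actives, inactives)))  # ['C-O']
```

    In the distributed formulation, the expensive part is enumerating candidate fragments and counting their support: each branch of the search tree becomes a task, and because task workloads follow a power-law distribution, dynamic load balancing across heterogeneous Grid resources is essential.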