On the Uncertainty of Archive Hydrographic Datasets
As the international hydrographic community continues to address the question of the irreducible uncertainty in modern surveys, we must ask how we do the same with archived Vertical Beam Echosounder (VBES) and leadline datasets. The ONR-funded Strataform project surveyed an area of the New Jersey shelf around 39°12′N 72°50′W using an EM1000 Multibeam Echosounder (MBES). This area is also covered by NOAA surveys from 1936–38 (assumed to be leadline) and 1975–76 (VBES). By comparison of the archival soundings to the MBES data, estimates of measurement error for the archival surveys are constructed as a function of depth. The analysis shows that archival leadline smoothsheets are heavily biased in deeper water because of "hydrographic rounding" and may be unrecoverable, but that the VBES data appear approximately unbiased and may be used to construct products compatible with modern surveys. Estimates of uncertainty for a surface model generated from the archive data are then constructed, taking into account measurement, interpolation, and hydrographic uncertainty (addressing the problems of unobserved areas and surface reconstruction stability). Finally, the paper addresses the generality of the method, and its implications for the community's duty to convey our uncertainty to the end user.
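As a rough illustration of how depth-dependent error estimates of this kind can be built, the sketch below bins archival-minus-reference sounding differences by reference depth and reports per-bin bias (mean difference) and spread. The bin width, function names, and sample values are illustrative assumptions, not the procedure used in the paper.

```python
import statistics

def depth_binned_error(archival, reference, bin_size=10.0):
    """Group paired (archival, reference) soundings by reference depth and
    report per-bin bias (mean difference) and spread (sample stdev)."""
    bins = {}
    for a, r in zip(archival, reference):
        key = int(r // bin_size) * bin_size  # lower edge of the depth bin
        bins.setdefault(key, []).append(a - r)
    stats = {}
    for key, diffs in sorted(bins.items()):
        bias = statistics.fmean(diffs)
        spread = statistics.stdev(diffs) if len(diffs) > 1 else 0.0
        stats[key] = (bias, spread)
    return stats

# Synthetic example: archival depths sit slightly deep of the reference,
# with the offset growing with depth (the signature of rounding bias).
stats = depth_binned_error([10.2, 10.1, 22.5, 22.7, 35.0, 35.4],
                           [10.0, 10.0, 22.0, 22.0, 34.0, 34.0])
```

A depth-dependent bias such as the one reported for the leadline smoothsheets would show up here as per-bin means that grow with the bin's depth.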
Parallel and Distributed Performance of a Depth Estimation Algorithm
Expansion of dataset sizes and the increasing complexity of processing algorithms have led to consideration of parallel and distributed implementations. The rationale for distributing the computational load may be to thin-provision computational resources, to accelerate the data processing rate, or to efficiently reuse already available but otherwise idle computational resources. Whatever the rationale, an efficient solution of this type brings with it questions of data distribution, job partitioning, reliability, and robustness. This paper addresses the first two of these questions in the context of a local cluster-computing environment. Using the CHRT depth estimator, it considers active and passive data distribution and their effect on data throughput, focusing mainly on the compromises required to maintain minimal communications requirements between nodes. As the metric, we use the overall computation time for a given dataset (i.e., the time lag that a user would experience), and show that although there are significant speedups to be had from relatively simple modifications to the algorithm, there are limits to the parallelism that can be achieved efficiently, and that a balance between inter-node parallelism (i.e., multiple nodes running in parallel) and intra-node parallelism (i.e., multiple threads within one node) is required for the most efficient use of available resources.
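The balance between inter-node and intra-node parallelism can be illustrated with a toy cost model: serial setup, perfectly divisible work split across all workers, and a communication overhead that grows with the number of nodes. All constants and cost terms below are invented for illustration; they are not measurements from the paper.

```python
def total_time(work, nodes, threads, serial=5.0, comm_per_node=2.0):
    """Toy cost model: serial setup + parallel work split across
    nodes*threads workers + per-node communication overhead."""
    return serial + work / (nodes * threads) + comm_per_node * nodes

def best_configuration(work, max_nodes, max_threads):
    """Exhaustively search node/thread combinations for the lowest
    modelled wall-clock time."""
    best = None
    for n in range(1, max_nodes + 1):
        for t in range(1, max_threads + 1):
            cost = total_time(work, n, t)
            if best is None or cost < best[0]:
                best = (cost, n, t)
    return best

cost, nodes, threads = best_configuration(1000.0, 16, 8)
```

Because threads add no communication cost in this model while extra nodes do, the optimum saturates the threads per node but stops adding nodes once the marginal communication overhead outweighs the marginal speedup, mirroring the balance the abstract describes.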
Multi-algorithm Swath Consistency Detection for Multibeam Echosounder Data
It is unrealistic to expect that any single algorithm for pre-filtering Multibeam Echosounder data will be able to detect all of the "noise" in such data all of the time. This paper therefore presents a scheme for fusing the results of many pre-filtering sub-algorithms in order to form one, significantly more robust, meta-algorithm. This principle is illustrated on the problem of consistency detection in regions of sloping bathymetry. We show that the meta-algorithm is more robust, adapts dynamically to sub-algorithm performance, and is consistent with operator assessment of the data. The meta-algorithm is called the Multi-Algorithm Swath Consistency Detector.
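One simple way such fusion could work is a performance-weighted vote, with each sub-algorithm's weight nudged toward its observed agreement with operator assessment. The voting rule, update rule, and parameters below are assumptions for illustration, not the published meta-algorithm.

```python
def fuse_flags(votes, weights, threshold=0.5):
    """Weighted vote across sub-algorithm flags (True = inconsistent).
    Returns the fused decision."""
    score = sum(w for v, w in zip(votes, weights) if v)
    return score / sum(weights) >= threshold

def update_weights(weights, votes, truth, rate=0.5):
    """Move each sub-algorithm's weight toward 1 when it agreed with the
    operator label, and toward 0 when it did not."""
    return [w * (1 - rate) + rate * (1.0 if v == truth else 0.0)
            for w, v in zip(weights, votes)]

# Two of three detectors flag a sounding as inconsistent:
decision = fuse_flags([True, True, False], [1.0, 1.0, 1.0])
# The operator confirms; the dissenting detector loses weight:
new_w = update_weights([1.0, 1.0, 1.0], [True, False, True], True)
```

Repeating the update over many operator-reviewed swaths lets the fused decision adapt dynamically to sub-algorithm performance, which is the behaviour the abstract claims for the meta-algorithm.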
Water-resource and land-use issues
Water resource management, Water use, Case studies, Catchment areas, Land use, Hydrology, Models, Evaporation, Soil moisture, Decision support tools, Runoff, Flow, Forestry, Deforestation, Erosion, Rain
Forests and Hydrological Services: Reconciling public and science perceptions
This paper compares and contrasts the science and public perceptions of the role of forests in relation to water quantity (annual and seasonal runoff and recharge) and erosion. It is suggested that the disparity between the two perceptions needs to be addressed before we are in a position to devise and develop financing mechanisms for the conservation and protection of indigenous forests. Examples are given of three "interactive" forest hydrology research programmes: in the UK, South Africa and Panama. Through the involvement of stakeholder groups, often with representatives comprising both the science and public perceptions, interactive research programmes were designed not only to derive new research findings but also to achieve better "ownership" and acceptance of research findings by the stakeholders. Following this approach, a new programme of research is outlined, aimed at improving our knowledge of forest impacts on seasonal flows and which represents DFID's contribution to the UN Year of Mountains, 2002. It is concluded that to move towards a reconciliation of the different perceptions and to connect policy with science will require further research to understand how the "belief" systems underlying the science and public perceptions have evolved, and better dissemination of research findings.
Huddl: the Hydrographic Universal Data Description Language
Since many of the attempts to introduce a universal hydrographic data format have failed or have been only partially successful, a different approach is proposed. Our solution is the Hydrographic Universal Data Description Language (HUDDL), a descriptive XML-based language that permits the creation of a standardized description of (past, present, and future) data formats, and allows for applications like HUDDLER, a compiler that automatically creates drivers for data access and manipulation. HUDDL also represents a powerful solution for archiving data along with their structural description, as well as for cataloguing existing format specifications and their version control. HUDDL is intended to be an open, community-led initiative to simplify the issues involved in hydrographic data access.
HUDDL for description and archive of hydrographic binary data
Many of the attempts to introduce a universal hydrographic binary data format have failed or have been only partially successful. In essence, this is because such formats either have to simplify the data to such an extent that they only support the lowest common subset of all the formats covered, or they attempt to be a superset of all formats and quickly become cumbersome. Neither choice works well in practice. This paper presents a different approach: a standardized description of (past, present, and future) data formats using the Hydrographic Universal Data Description Language (HUDDL), a descriptive language implemented using the Extensible Markup Language (XML). That is, XML is used to provide a structural and physical description of a data format, rather than the content of a particular file. Done correctly, this opens the possibility of automatically generating both multi-language data parsers and documentation for format specification based on their HUDDL descriptions, as well as providing easy version control of them. This solution also provides a powerful approach for archiving a structural description of data along with the data, so that binary data will be easy to access in the future. Intending to provide a relatively low-effort solution to index the wide range of existing formats, we suggest the creation of a catalogue of format descriptions, each of them capturing the logical and physical specifications for a given data format (with its subsequent upgrades). A C/C++ parser code generator is used as an example prototype of one of the possible advantages of the adoption of such a hydrographic data format catalogue.
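To illustrate the general idea of driving a binary parser from a structural format description, the sketch below uses a deliberately simplified, hypothetical XML description (the real HUDDL schema is far richer) to build a Python reader for a tiny two-field record. The element names, type codes, and record layout are all invented for this example.

```python
import struct
import xml.etree.ElementTree as ET

# Hypothetical, minimal format description in the spirit of HUDDL:
# named fields mapped to primitive binary types, with declared endianness.
DESCRIPTION = """
<format name="demo" endian="little">
  <field name="depth" type="float32"/>
  <field name="beam"  type="uint16"/>
</format>
"""

TYPE_CODES = {"float32": "f", "uint16": "H", "int32": "i"}

def build_parser(xml_text):
    """Compile a record parser from the structural description."""
    root = ET.fromstring(xml_text)
    prefix = "<" if root.get("endian") == "little" else ">"
    fields = root.findall("field")
    names = [f.get("name") for f in fields]
    fmt = prefix + "".join(TYPE_CODES[f.get("type")] for f in fields)
    def parse(data):
        return dict(zip(names, struct.unpack(fmt, data)))
    return parse

parse = build_parser(DESCRIPTION)
record = struct.pack("<fH", 42.5, 7)  # a sample on-disk record
```

Because the parser is generated from the description rather than hand-coded, archiving the description alongside the binary data is enough to keep the data readable, which is the archival argument the abstract makes.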
Design and Implementation of an Extensible Variable Resolution Bathymetric Estimator
For grid-based bathymetric estimation techniques, determining the right resolution at which to work is essential. Appropriate grid resolution can be related, roughly, to data density and thence to sonar characteristics, survey methodology, and depth. It is therefore variable in almost all survey scenarios, and methods of addressing this problem can have enormous impact on the correctness and efficiency of computational schemes of this kind. This paper describes the design and implementation of a bathymetric depth estimation algorithm that attempts to address this problem by combining the computational efficiency of locally regular grids with piecewise-variable estimation resolution to provide a single logical data structure and associated algorithms that can adjust to local data conditions, change resolution where required to best support the data, and operate over essentially arbitrarily large areas as a single unit. The algorithm, which is in part a development of CUBE, is modular and extensible, and is structured as a client-server application to support different implementation modalities. The algorithm is called "CUBE with Hierarchical Resolution Techniques", or CHRT.
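One simple way to see how resolution can be tied to data density, as described above, is to pick the finest resolution from a ladder of candidates at which each grid cell would still receive some minimum number of soundings on average. The threshold, resolution ladder, and function below are illustrative assumptions, not CHRT's actual rules.

```python
def tile_resolution(n_soundings, tile_area_m2, min_per_cell=5,
                    allowed=(0.5, 1.0, 2.0, 4.0, 8.0, 16.0)):
    """Choose the finest grid resolution (in metres, from a dyadic ladder)
    at which each cell would, on average, still contain at least
    `min_per_cell` soundings; fall back to the coarsest otherwise."""
    density = n_soundings / tile_area_m2  # soundings per square metre
    for res in allowed:
        if density * res * res >= min_per_cell:
            return res
    return allowed[-1]

# A densely surveyed 100 m x 100 m tile supports a fine grid:
dense = tile_resolution(40000, 100.0 * 100.0)
# A sparsely surveyed tile is forced to the coarsest resolution:
sparse = tile_resolution(100, 100.0 * 100.0)
```

Evaluating this rule per tile gives the piecewise-variable resolution the abstract describes: dense shallow-water coverage gets fine grids, while sparse deep-water coverage is estimated coarsely.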
Traffic Analysis for the Calibration of Risk Assessment Methods
In order to provide some measure of the uncertainty inherent in the sorts of charting data that are provided to the end-user, we have previously proposed risk models that measure the magnitude of the uncertainty for a ship operating in a particular area. Calibration of these models is essential, but their complexity means that we require detailed information on the sorts of ships, traffic patterns, and traffic density within the model area to make a reliable assessment. In theory, the Automatic Identification System (AIS) should provide this information for a suitably instrumented area. We consider the problem of converting, filtering, and analysing the raw AIS traffic to provide statistical characterizations of the traffic in a particular area, and illustrate the method with data from 2008-10-01 through 2008-11-30 around Norfolk, VA. We show that it is possible to automatically construct aggregate statistical characteristics of the port, resulting in distributions of transit location, termination, and duration by vessel category, as well as type of traffic, physical dimensions, and intensity of activity. We also observe that although 60 days give us sufficient data for our immediate purposes, a large proportion of it (up to 52% by message volume) must be considered dubious due to difficulties in the configuration, maintenance, and operation of AIS transceivers.
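As an illustration of one step in such an analysis, the sketch below segments a vessel's time-sorted position reports into transits using a maximum-gap rule and then computes per-transit durations. The gap threshold and function names are assumed for illustration; the paper does not specify this particular segmentation rule.

```python
def split_transits(timestamps, max_gap=1800):
    """Split one vessel's position-report timestamps (seconds) into
    transits, starting a new transit whenever the gap between successive
    reports exceeds max_gap seconds."""
    transits, current, last = [], [], None
    for t in sorted(timestamps):
        if last is not None and t - last > max_gap:
            transits.append(current)
            current = []
        current.append(t)
        last = t
    if current:
        transits.append(current)
    return transits

def transit_durations(timestamps, max_gap=1800):
    """Duration of each transit, first report to last."""
    return [seg[-1] - seg[0] for seg in split_transits(timestamps, max_gap)]

# Five reports with one long silence become two transits:
durations = transit_durations([0, 600, 1200, 5000, 5600])
```

Aggregating such per-vessel durations by vessel category (taken from the AIS static data) yields the kind of duration distributions the abstract reports for the port.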