
    Distributed Information Object Resolution

    The established host-centric networking paradigm is challenged by handicaps related to disconnected operation, mobility, and broken locator/identifier semantics. This paper examines a topic of great interest in that context: distributed information object resolution. After recapping the notion of an information object, we review object resolution in today's Internet, which is based on Uniform Resource Identifiers (URIs). We revisit the implications of DNS involvement in URI resolution and discuss how two different types of content distribution networks work with respect to name resolution. Then we evaluate proposals championing the replacement of DNS with alternatives based on distributed hash tables. We present the pros and cons and highlight the importance of latency in resolution. The paper positions these issues in the context of a Network of Information (NetInf) and concludes with open research topics in the area.
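    The DHT-based alternatives to DNS discussed above typically hash an object name onto an identifier ring and route the query to the node responsible for that point of the ring. A minimal consistent-hashing sketch of that resolution step follows; the node names, the `ni://` object name, and the 32-bit ring size are illustrative assumptions, not details from the paper.

```python
# Minimal consistent-hashing sketch of DHT-based name resolution.
# Each node owns the arc of the ring ending at its hashed ID; an object
# name is resolved by the first node whose ID follows the name's hash.
import hashlib
from bisect import bisect_right

RING_BITS = 32  # illustrative ring size

def ring_hash(key: str) -> int:
    """Map an arbitrary string onto the identifier ring."""
    return int.from_bytes(hashlib.sha256(key.encode()).digest()[:4], "big")

class DHTResolver:
    def __init__(self, nodes):
        # sorted (node_id, node_name) pairs define the ring
        self.ring = sorted((ring_hash(n), n) for n in nodes)

    def responsible_node(self, object_name: str) -> str:
        ids = [nid for nid, _ in self.ring]
        # first node clockwise from the object's hash (wrapping around)
        i = bisect_right(ids, ring_hash(object_name)) % len(self.ring)
        return self.ring[i][1]

resolver = DHTResolver(["node-a", "node-b", "node-c"])
owner = resolver.responsible_node("ni://example.org/object-42")
```

    A latency consequence, noted in the paper's discussion, is that each lookup may traverse several such nodes before reaching the owner, whereas DNS answers are often served from a nearby cache.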

    compressive synthetic aperture sonar imaging with distributed optimization

    Synthetic aperture sonar (SAS) provides high-resolution acoustic imaging by coherently processing the backscattered acoustic signal recorded over consecutive pings. Traditionally, object detection and classification tasks rely on high-resolution seafloor mapping achieved with widebeam, broadband SAS systems. However, aspect- or frequency-specific information is crucial for improving the performance of automatic target recognition algorithms. For example, low frequencies can be partly transmitted through objects or penetrate the seafloor, providing information about internal structure and buried objects, while multiple views provide information about the object's shape and dimensions. Sub-band and limited-view processing, though, degrades the SAS resolution. In this paper, SAS imaging is formulated as an l1-norm regularized least-squares optimization problem which improves the resolution by promoting a parsimonious representation of the data. The optimization problem is solved in a distributed and computationally efficient way with an algorithm based on the alternating direction method of multipliers. The resulting SAS image is the consensus outcome of collaborative filtering of the data from each ping. The potential of the proposed method for high-resolution, narrowband, and limited-aspect SAS imaging is demonstrated with simulated and experimental data.
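    The core optimization the abstract describes can be illustrated with the textbook single-node form of ADMM for the l1-regularized least-squares (LASSO) problem, min over x of 0.5*||Ax - b||^2 + lam*||x||_1. This is a generic sketch, not the paper's distributed per-ping variant; the problem sizes and parameter values are illustrative.

```python
# Textbook ADMM iteration for the LASSO problem:
#   x-update: solve (A^T A + rho I) x = A^T b + rho (z - u)
#   z-update: soft-threshold x + u at lam / rho  (promotes sparsity)
#   u-update: u += x - z                         (dual ascent)
import numpy as np

def soft_threshold(v, k):
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    # factor once: the same matrix is solved against every iteration
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        z = soft_threshold(x + u, lam / rho)
        u = u + x - z
    return z  # z carries the sparse estimate

# small synthetic demo: recover a 2-sparse vector from noisy measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20); x_true[2], x_true[7] = 1.5, -2.0
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = admm_lasso(A, b, lam=0.5)
```

    In the paper's distributed setting, each ping solves a local subproblem of this form and the z-update becomes the consensus step that merges the per-ping estimates.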

    Management of object-oriented action-based distributed programs

    PhD Thesis. This thesis addresses the problem of managing the runtime behaviour of distributed programs. The thesis of this work is that management is fundamentally an information-processing activity and that the object model, as applied to action-based distributed systems and database systems, is an appropriate representation of the management information. In this approach, the basic concepts of classes, objects, relationships, and atomic transition systems are used to form object models of distributed programs. Distributed programs are collections of objects whose methods are structured using atomic actions, i.e., atomic transactions. Object models are formed of two submodels, each representing a fundamental aspect of a distributed program. The structural submodel represents a static perspective of the distributed program, and the control submodel represents a dynamic perspective of it. Structural models represent the program's objects, classes and their relationships. Control models represent the program's object states, events, guards and actions: a transition system. Resolution of queries on the distributed program's object model enables the management system to control certain activities of distributed programs. At a different level of abstraction, the distributed program can be seen as a reactive system where two subprograms interact: an application program and a management program; they interact only through sensors and actuators. Sensors are methods used to probe an object's state, and actuators are methods used to change an object's state. The management program is capable of prodding the application program into action by activating sensors and actuators available at the interface of the application program. Actions are determined by management policies that are encoded in the management program.
    This way of structuring the management system encourages a clear modularization of application and management distributed programs, allowing better separation of concerns. Management concerns can be dealt with by the management program, while functional concerns can be assigned to the application program. The object-oriented, action-based computational model adopted by the management system provides a natural framework for the implementation of fault-tolerant distributed programs. Object orientation provides modularity and extensibility through object encapsulation. Atomic actions guarantee the consistency of the objects of the distributed program despite concurrency and failures. Replication of the distributed program provides increased fault tolerance by guaranteeing the consistent progress of the computation, even if some of the replicated objects fail. A prototype management system based on the management theory proposed above has been implemented atop Arjuna, an object-oriented programming system which provides a set of tools for constructing fault-tolerant distributed programs. The management system is composed of two subsystems: Stabilis, a management system for structural information, and Vigil, a management system for control information. Example applications have been implemented to illustrate the use of the management system and to gather experimental evidence in support of the thesis. Funding: CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico, Brazil); BROADCAST (Basic Research On Advanced Distributed Computing: from Algorithms to SysTems).
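    The sensor/actuator split described above can be sketched in a few lines: the management program observes application objects only through sensors (state probes) and influences them only through actuators (state changes), with the policy encoded on the management side. All class and method names here are hypothetical illustrations, not the thesis's actual API.

```python
# Hypothetical sketch of the sensor/actuator management interface.
class ManagedObject:
    """Application object exposing a management interface."""
    def __init__(self):
        self._state = "idle"

    # --- sensor: probes state without changing it ---
    def sense_state(self) -> str:
        return self._state

    # --- actuators: change state ---
    def actuate_start(self):
        self._state = "running"

    def actuate_stop(self):
        self._state = "idle"

class ManagementProgram:
    """Encodes one management policy: start any object found idle."""
    def __init__(self, objects):
        self.objects = objects

    def apply_policy(self):
        for obj in self.objects:
            if obj.sense_state() == "idle":   # sensor
                obj.actuate_start()           # actuator

objs = [ManagedObject(), ManagedObject()]
ManagementProgram(objs).apply_policy()
```

    Because the two programs touch each other only through this narrow interface, the management policy can be changed without modifying the application objects, which is the separation of concerns the thesis argues for.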

    C.R.I.S.T.A.L. Concurrent Repository & Information System for Tracking Assembly and production Lifecycles: A data capture and production management tool for the assembly and construction of the CMS ECAL detector

    The CMS experiment will comprise several very large high-resolution detectors for physics. Each detector may be constructed of well over a million parts and will be produced and assembled during the next decade by specialised centres distributed world-wide. Each constituent part of each detector must be accurately measured and tested locally prior to its ultimate assembly and integration in the experimental area at CERN. The CRISTAL project (Concurrent Repository and Information System for Tracking Assembly and production Lifecycles) [1] aims to monitor and control the quality of the production and assembly process to aid in optimising the performance of the physics detectors and to reject unacceptable constituent parts as early as possible in the construction lifecycle. During assembly CRISTAL will capture all the information required for subsequent detector calibration. Distributed instances of object databases linked via CORBA [2] and with WWW/Java-based query processing are the main technology aspects of CRISTAL.

    Providing the Third Dimension: High-resolution Multibeam Sonar as a Tool for Archaeological Investigations - An Example from the D-day Beaches of Normandy

    In general, marine archaeological investigations begin in the archives, using historic maps, coast surveys, and other materials, to define submerged areas suspected to contain potentially significant historical sites. Following this research phase, a typical archaeological survey uses sidescan sonar and marine magnetometers as initial search tools. Targets are then examined through direct observation by divers, video, or photographs. Magnetometers can demonstrate the presence, absence, and relative susceptibility of ferrous objects but provide little indication of the nature of the target. Sidescan sonar can present a clear image of the overall nature of a target and its surrounding environment, but the sidescan image is often distorted and contains little information about the true 3-D shape of the object. Optical techniques allow precise identification of objects but suffer from very limited range, even in the best of situations. Modern high-resolution multibeam sonar offers an opportunity to cover a relatively large area from a safe distance above the target, while resolving the true three-dimensional (3-D) shape of the object with centimeter-level resolution. A clear demonstration of the applicability of high-resolution multibeam sonar to wreck and artifact investigations occurred this summer when the Naval Historical Center (NHC), the Center for Coastal and Ocean Mapping (CCOM) at the University of New Hampshire, and Reson Inc. collaborated to explore the state of preservation, and impact on the surrounding environment, of a series of wrecks located off the coast of Normandy, France, adjacent to the American landing sectors. The survey augmented previously collected magnetometer and high-resolution sidescan sonar data using a Reson 8125 high-resolution focused multibeam sonar with 240, 0.5° (at nadir) beams distributed over a 120° swath.
    The team investigated 21 areas in water depths ranging from about three to 30 meters (m); some areas contained individual targets such as landing craft, barges, a destroyer, a troop carrier, etc., while others contained multiple smaller targets such as tanks and trucks. Of particular interest were the well-preserved caissons and blockships of the artificial Mulberry Harbor deployed off Omaha Beach. The near-field beam-forming capability of the Reson 8125 combined with 3-D visualization techniques provided an unprecedented level of detail, including the ability to recognize individual components of the wrecks (ramps, gun turrets, hatches, etc.), the state of preservation of the wrecks, and the impact of the wrecks on the surrounding seafloor.
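    The centimeter-level resolution claimed above follows directly from the stated geometry: a 0.5° beam at nadir subtends a small across-track footprint at the survey depths of roughly 3 to 30 m. A back-of-the-envelope calculation (an illustration from the quoted numbers, not a figure from the paper):

```python
# Across-track footprint of a narrow nadir beam: the seafloor patch
# spanned by a beam of width beam_deg at a given depth.
import math

def nadir_footprint(depth_m: float, beam_deg: float = 0.5) -> float:
    """Across-track footprint in meters of a beam_deg-wide beam at nadir."""
    half_angle = math.radians(beam_deg) / 2.0
    return 2.0 * depth_m * math.tan(half_angle)

shallow = nadir_footprint(3.0)    # roughly 2.6 cm at 3 m depth
deep = nadir_footprint(30.0)      # roughly 26 cm at 30 m depth
```

    So even at the deepest sites surveyed, the nadir footprint remains in the decimeter range, consistent with resolving individual wreck components such as hatches and gun turrets.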

    Named Data Object Organization in Distributed Name Resolution System for Information Centric Network Environment

    Information-Centric Networking (ICN) is an emerging network communication model that focuses on what is being exchanged rather than who is exchanging information within a network. The named hosts make use of Named Data Objects (NDOs) for data registration and name resolution. The Name Resolution System (NRS) is an element of the ICN that translates object identifiers into network addresses. Distributing the NRS is an important and challenging issue as NDO registration, retrieval, and storage grow. This study proposes a new NRS mechanism called the Distributed Name Resolution Mechanism (DNRM) to address the most significant issue of segregating the network, together with a Balanced Binary Tree (BBT) structure to manage storage for the ever-increasing number of NDOs. The study formulates the proposed DNRM through the nearest-neighbor algorithm by adding phases, means and methods, and probable outcomes. The stored NDOs are balanced with a balance factor to increase the scalability of the NRS. The results show that the overall NDO search time is halved at each iteration, and the proposed mechanisms are faster and more stable than existing solutions in terms of better grouping of NDOs. Both mechanisms are simulated in the OMNeT++ simulation environment, a discrete-event simulator. The experimental results show that both mechanisms improve network performance by minimizing end-to-end delay and improving network throughput.
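    The claim that the search time is halved at each iteration is the usual O(log n) behavior of a balanced search structure: each comparison discards half of the remaining candidates. A minimal registry sketch follows, using a sorted list with binary search as a stand-in for the paper's balanced binary tree; the identifier scheme and locators are made up for illustration.

```python
# NDO registry sketch: identifiers kept in sorted order so that
# resolution takes O(log n) comparisons, each halving the search range.
from bisect import bisect_left

class NDORegistry:
    def __init__(self):
        self._ids = []      # sorted NDO identifiers
        self._addr = {}     # identifier -> network locator

    def register(self, ndo_id: str, locator: str):
        i = bisect_left(self._ids, ndo_id)
        if i == len(self._ids) or self._ids[i] != ndo_id:
            self._ids.insert(i, ndo_id)
        self._addr[ndo_id] = locator

    def resolve(self, ndo_id: str):
        i = bisect_left(self._ids, ndo_id)   # binary search: halves range
        if i < len(self._ids) and self._ids[i] == ndo_id:
            return self._addr[ndo_id]
        return None

reg = NDORegistry()
reg.register("ndo:video/clip-001", "10.0.0.5")
reg.register("ndo:doc/report-17", "10.0.0.9")
```

    A self-balancing tree (as in the paper's BBT with its balance factor) additionally keeps insertion at O(log n), which a plain sorted list does not.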

    Integrating Multiple Uncertain Views of a Static Scene Acquired by an Agile Camera System

    This paper addresses the problem of merging multiple views of a static scene into a common coordinate frame, explicitly considering uncertainty. It assumes that a static world is observed by an agile vision system whose movements are known with limited precision and whose observations are inaccurate and incomplete. It concentrates on acquiring uncertain three-dimensional information from multiple views, rather than on modeling or representing the information at higher levels of abstraction. Two particular problems receive attention: identifying the transformation between two viewing positions, and understanding how errors and uncertainties propagate as a result of applying the transformation. The first is solved by identifying the forward kinematics of the agile camera system. The second is solved by first treating a measurement of camera position and orientation as a uniformly distributed random vector whose component variances are related to the resolution of the encoding potentiometers, then treating an object position measurement as a normally distributed random vector whose component variances are experimentally derived, and finally determining the uncertainty of the merged points as functions of these variances.
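    The two-stage error model described above, a uniformly distributed pose error set by encoder resolution and a normally distributed object-position error, can be illustrated with a Monte Carlo propagation through a planar rotation. This is a sketch of the general idea under assumed numeric values, not the paper's analytic derivation.

```python
# Monte Carlo propagation of two error sources through a planar
# camera-to-world rotation: a uniform pan-angle error (encoder
# resolution) and Gaussian noise on the measured object position.
# All numeric values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
N = 100_000
pan_res = np.radians(0.5)           # assumed encoder resolution
theta = np.radians(30.0) + rng.uniform(-pan_res / 2, pan_res / 2, N)
p_cam = np.array([2.0, 0.5]) + rng.normal(0.0, 0.01, (N, 2))  # sensing noise

# rotate each noisy camera-frame point into the world frame
c, s = np.cos(theta), np.sin(theta)
world = np.stack([c * p_cam[:, 0] - s * p_cam[:, 1],
                  s * p_cam[:, 0] + c * p_cam[:, 1]], axis=1)

merged_mean = world.mean(axis=0)    # merged point estimate
merged_cov = np.cov(world.T)        # its propagated 2x2 uncertainty
```

    The covariance of the merged point combines both sources, which is the quantity the paper derives analytically as a function of the encoder and measurement variances.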