Summaries of the Sixth Annual JPL Airborne Earth Science Workshop
The Sixth Annual JPL Airborne Earth Science Workshop, held in Pasadena, California, on March 4-8, 1996, was divided into two smaller workshops: (1) the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) workshop, and (2) the Airborne Synthetic Aperture Radar (AIRSAR) workshop. This current paper, Volume 2 of the Summaries of the Sixth Annual JPL Airborne Earth Science Workshop, presents the summaries for the AIRSAR workshop.
Advanced Launch System Multi-Path Redundant Avionics Architecture Analysis and Characterization
The objective of the Multi-Path Redundant Avionics Suite (MPRAS) program is the development of a set of avionic architectural modules which will be applicable to the family of launch vehicles required to support the Advanced Launch System (ALS). To enable ALS cost/performance requirements to be met, the MPRAS must support autonomy, maintenance, and testability capabilities which exceed those present in conventional launch vehicles. The multi-path redundant, or fault-tolerant, characteristics of the MPRAS are necessary to offset a reduction in avionics reliability due to the increased complexity needed to support these new cost-reduction and performance capabilities, and to meet avionics reliability requirements which will provide cost-effective reductions in overall ALS recurring costs. A complex, real-time distributed computing system is needed to meet the ALS avionics system requirements. General Dynamics, Boeing Aerospace, and C.S. Draper Laboratory have proposed system architectures as candidates for the ALS MPRAS. The purpose of this document is to report the results of independent performance and reliability characterization and assessment analyses of each proposed candidate architecture, together with qualitative assessments of testability, maintainability, and fault-tolerance mechanisms. These independent analyses were conducted as part of the MPRAS Part 2 program and were carried out under NASA Langley Research Contract NAS1-17964, Task Assignment 28.
Probabilistic Image Models and their Massively Parallel Architectures: A Seamless Simulation- and VLSI-Design-Framework Approach
Algorithmic robustness in real-world scenarios and real-time processing capability are the two essential, and at the same time contradictory, requirements modern image-processing systems have to fulfill to go significantly beyond state-of-the-art systems. Without suitable image processing and analysis systems at hand that comply with these contradictory requirements, solutions and devices for the application scenarios of the next generation will not become reality. This would eventually lead to a serious restraint on innovation for various branches of industry. This thesis presents a coherent approach to the above problem. It first describes a massively parallel architecture template, and second a seamless simulation- and semiconductor-technology-independent design framework, for a class of probabilistic image models formulated on a regular Markovian processing grid. The architecture template is composed of different building blocks, which are rigorously derived from Markov Random Field theory with respect to the constraints of massively parallel processing and technology independence. This systematic derivation procedure yields many benefits: it decouples the architecture characteristics from the constraints of any one specific semiconductor technology; it guarantees that the derived massively parallel architecture is in conformity with theory; and it guarantees that the derived architecture is suitable for VLSI implementations. The simulation framework addresses the unique hardware-relevant simulation needs of MRF-based processing architectures. Furthermore, the framework ensures a qualified representation for simulation of the image models and their massively parallel architectures by means of their specific simulation modules.
This allows for systematic studies of the combination of numerical, architectural, timing, and massively parallel processing constraints, disclosing novel insights into MRF models and their hardware architectures. The design framework rests upon a graph-theoretical approach, which offers unique capabilities to fulfill the VLSI demands of massively parallel MRF architectures: the semiconductor technology independence guarantees a technology-uncommitted architecture through several design steps without restricting the design space too early; design entry by means of behavioral descriptions allows for a functional representation without determining the architecture at the outset; and the topology synthesis simplifies and separates the data- and control-path synthesis. Detailed results discussed in the particular chapters, together with several additional results collected in the appendix, further substantiate the claims made in this thesis.
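As a rough illustration of the kind of regular-grid MRF processing such architectures parallelise, the following sketch shows an Iterated Conditional Modes (ICM) sweep for a binary MRF with a checkerboard schedule; cells of one colour have disjoint neighbourhoods and could be updated simultaneously in hardware. The energy terms and parameter values are illustrative assumptions, not the architecture template derived in the thesis.

```python
# Hypothetical sketch: one ICM sweep for a binary {-1,+1} MRF on a regular
# grid, using a checkerboard (two half-sweep) schedule. In a massively
# parallel implementation, all cells of one parity update concurrently.

def icm_checkerboard_step(labels, observed, beta=1.0, lam=2.0):
    """Local energy per cell (assumed Ising-style model):
    E(s) = -lam * s * observed - beta * s * sum(neighbour labels)."""
    h, w = len(labels), len(labels[0])
    for parity in (0, 1):
        for y in range(h):
            for x in range(w):
                if (x + y) % 2 != parity:
                    continue
                nb = 0  # sum over the 4-neighbourhood
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        nb += labels[ny][nx]
                # Greedily pick the label with lower local energy.
                def energy(s):
                    return -lam * s * observed[y][x] - beta * s * nb
                labels[y][x] = min((-1, 1), key=energy)
    return labels
```

Applied to a noisy binary image, one such sweep flips isolated pixels that disagree with both their observation weight and their neighbourhood, which is the basic smoothing behaviour an MRF prior encodes.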
Advanced methods and deep learning for video and satellite data compression
The abstract is provided in the attachment.
Information Extraction and Modeling from Remote Sensing Images: Application to the Enhancement of Digital Elevation Models
To deal with high complexity data such as remote sensing images presenting metric resolution over large areas, an innovative, fast and robust image processing system is presented.
The modeling of increasing level of information is used to extract, represent and link image features to semantic content.
The potential of the proposed techniques is demonstrated with an application that enhances and regularizes digital elevation models based on information collected from remote sensing images.
Spatiotemporal Hydrological Modelling with GIS for the Upper Mahaweli Catchment, Sri Lanka
Sustainability of water resources is imperative for the continued prosperity of Sri Lanka, where the economy is dependent upon agriculture. The Mahaweli river is the longest in Sri Lanka, with the upper catchment covering an area of 3,124 sq. km. The Mahaweli Development Programme, a major undertaking in the upper catchment, has been implemented with the aims of providing Mahaweli water to the dry zone of the country through a massive diversion scheme and of generating hydropower. Under this programme, seven large reservoirs have been constructed across the river, and large-scale land use changes in the catchment have occurred during the last two decades. Critics now say that the hydrological regime has been adversely affected by indiscriminate land use changes and that, as a result, river flows have diminished during the last two decades, jeopardising the expectations of this massive development programme. Reforestation programmes have been recommended because of the benefits of forests in resource conservation and the additional water derived from fog interception. Selecting the best sites for these forest plantations for maximum benefit, especially in terms of water yield from fog interception, is of the utmost importance. This created the need for a comprehensive model to represent the hydrology and to simulate the hydrological dynamics of the catchment.
In conceptual terms, GIS is well suited for modelling with the large and complex databases associated with hydrological parameters. However, hydrological modelling efforts in GIS are constrained by limitations in the representation of time in its spatial data structures. The SPANS GIS software used in this study provided the capability of linking spatially distributed numerical parameters with corresponding tabulated data through mathematical and statistical expressions, while implicitly representing temporality through iterative procedures.
The spatial distribution of land use was identified through the supervised classification of IRS-1A LISS II imagery. Daily rainfall data for a 30-year period, and the corresponding gauging locations derived from GPS, were managed and retrieved through a Lotus 1-2-3 database. The fog interception component was estimated based on elevation and the monsoon season. Hydrological processes such as interception and evapotranspiration were derived from individual sub-models and finally combined within the overall hydrological model structure. The model was run with daily time steps on the numerical values of each quad cell of the thematic coverage. The information on flow derived from the model was depicted as a series of thematic maps, in addition to the time series of numerical values at subcatchment and catchment outlets. The results confirmed that the model is capable of successfully simulating the response of the upper Mahaweli catchment area (UMCA). The time dimension was accommodated through a series of non-interactive REXX programmes in developing the customised version of the model. It is concluded that the software architecture of SPANS GIS is capable of accommodating spatiotemporal modelling implicitly in its spatial data structures, although changes in the model structure may necessitate considerable reprogramming.
The sensitivity of the model to different spatial interpolation techniques was evaluated. Further, the sensitivity of the model to the defined hydrological parameters, spatial resolution, and land use was also assessed. The model is sensitive to land use changes in the catchment and shows a 15-35% annual increase in runoff when forests are converted to grassland. Further studies are required to develop a more detailed set of hydrological parameters for the model.
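The daily time-step iteration over quad cells described above can be sketched as a per-cell water-balance update. All parameter values below (interception fractions, fog gain, evapotranspiration rates, storage capacity) are made-up assumptions for illustration, not the thesis's calibrated sub-models.

```python
# Illustrative sketch of one daily water-balance step for a single quad cell:
# rainfall minus canopy interception, plus an elevation-dependent fog
# interception term for forest, minus evapotranspiration; saturation excess
# over the storage capacity leaves the cell as runoff. All numbers assumed.

def daily_cell_balance(storage, rain_mm, elevation_m, land_use,
                       capacity_mm=150.0):
    # Fog interception: assumed to contribute only above ~1500 m under forest.
    fog = 2.0 if (elevation_m > 1500 and land_use == "forest") else 0.0
    # Canopy interception loss as an assumed fraction of rainfall.
    intercept_frac = {"forest": 0.25, "grassland": 0.10}.get(land_use, 0.05)
    effective = rain_mm * (1.0 - intercept_frac) + fog
    # Evapotranspiration: assumed constant daily rate per land-use class.
    et = {"forest": 4.0, "grassland": 2.5}.get(land_use, 2.0)
    storage = max(0.0, storage + effective - et)
    runoff = max(0.0, storage - capacity_mm)  # saturation excess
    storage = min(storage, capacity_mm)
    return storage, runoff
```

Even in this toy form, converting a forested cell to grassland reduces interception and evapotranspiration losses and so produces more runoff, which is the direction of the sensitivity result reported above.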
Towards Thompson Sampling for Complex Bayesian Reasoning
Papers III, IV, and VI are not available as part of the dissertation due to copyright.
Thompson Sampling (TS) is a state-of-the-art algorithm for bandit problems set in a Bayesian framework. Both the theoretical foundation and the empirical efficiency of TS are well explored for plain bandit problems. However, the Bayesian underpinning of TS means that TS could potentially be applied to other, more complex problems as well, beyond the bandit problem, if suitable Bayesian structures can be found.
The objective of this thesis is the development and analysis of TS-based schemes for more complex optimization problems, founded on Bayesian reasoning. We address several complex optimization problems where the previous state of the art relies on a relatively myopic perspective: stochastic searching on the line, the Goore game, the knapsack problem, travel time estimation, and equipartitioning. Instead of employing Bayesian reasoning to obtain a solution, existing methods rely on carefully engineered rules. In all brevity, we recast each of these optimization problems in a Bayesian framework, introducing dedicated TS-based solution schemes. For all of the addressed problems, the results show that, besides being more effective, the TS-based approaches we introduce are also capable of solving more adverse versions of the problems, such as dealing with stochastic liars.
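For readers unfamiliar with plain TS, the Bernoulli-bandit baseline that the thesis generalises can be sketched in a few lines: maintain a Beta posterior per arm, sample one plausible mean reward from each posterior, and play the arm with the largest sample. This is the textbook algorithm, not one of the thesis's extended schemes.

```python
import random

# Minimal Bernoulli Thompson Sampling sketch with Beta(1, 1) priors.
def thompson_sampling(reward_probs, rounds, rng):
    k = len(reward_probs)
    successes = [1] * k  # Beta posterior parameters (alpha)
    failures = [1] * k   # Beta posterior parameters (beta)
    pulls = [0] * k
    for _ in range(rounds):
        # Draw one plausible mean reward per arm from its posterior...
        samples = [rng.betavariate(successes[i], failures[i])
                   for i in range(k)]
        # ...and exploit/explore by playing the arm with the largest draw.
        arm = max(range(k), key=samples.__getitem__)
        pulls[arm] += 1
        if rng.random() < reward_probs[arm]:
            successes[arm] += 1
        else:
            failures[arm] += 1
    return pulls
```

Because sampling from the posterior naturally balances exploration and exploitation, the pull counts concentrate on the best arm as its posterior sharpens; the thesis's contribution is finding analogous Bayesian structures for problems where no such per-arm posterior is given a priori.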
Airborne Advanced Reconfigurable Computer System (ARCS)
A digital computer subsystem fault-tolerant concept was defined, and the potential benefits and costs of such a subsystem were assessed when used as the central element of a new transport's flight control system. The derived advanced reconfigurable computer system (ARCS) is a triple-redundant computer subsystem that automatically reconfigures, under multiple fault conditions, from triplex to duplex to simplex operation, with redundancy recovery if the fault condition is transient. The study included criteria development covering factors at the aircraft's operational level that would influence the design of a fault-tolerant system for commercial airline use. A new reliability analysis tool was developed for evaluating the availability and survivability of redundant, fault-tolerant systems, and a stringent digital system software design methodology was used to achieve design/implementation visibility.
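The triplex-to-duplex-to-simplex reconfiguration idea can be sketched as majority voting that excludes a disagreeing channel and re-admits it after a transient fault clears. The class, thresholds, and method names are illustrative assumptions, not the ARCS design itself.

```python
# Hedged sketch (not the ARCS implementation): majority voting over redundant
# channels, with reconfiguration triplex -> duplex -> simplex as channels are
# excluded, and redundancy recovery for transient faults.

class RedundantChannelSet:
    def __init__(self, n=3):
        self.active = list(range(n))  # channel ids still trusted

    def vote(self, outputs):
        """outputs: {channel_id: value}. Return the majority value of the
        active channels and exclude any channel that disagrees with a clear
        majority. (A real duplex stage would need channel self-tests to
        resolve a 1-vs-1 disagreement; that is omitted here.)"""
        values = [outputs[c] for c in self.active]
        majority = max(set(values), key=values.count)
        if len(values) >= 2 and values.count(majority) > len(values) // 2:
            for c in [c for c in self.active if outputs[c] != majority]:
                self.active.remove(c)  # reconfigure to fewer channels
        return majority

    def readmit(self, channel):
        """Redundancy recovery: re-admit a channel whose fault was transient."""
        if channel not in self.active:
            self.active.append(channel)
            self.active.sort()
```

A faulty channel that disagrees with the other two is voted out on the spot, dropping the set to duplex; calling readmit models recovery once the transient condition has passed.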
Behaviour on Linked Data - Specification, Monitoring, and Execution
People, organisations, and machines around the globe make use of web technologies to communicate. For instance, 4.16 bn people with access to the internet can reach 4.6 bn pages on the web using the transfer protocol HTTP, organisations such as Amazon have built ecosystems around HTTP-based access to their businesses under the headline of RESTful APIs, and the Linking Open Data movement has made billions of facts available on the web in the data model RDF via HTTP. Moreover, under the headline Web of Things, people use RDF and HTTP to access sensors and actuators on the Internet of Things.
The necessary communication requires interoperable systems at a truly global scale, for which web technologies provide the necessary standards regarding the transfer and the representation of data: the HTTP protocol specifies how to transfer messages and defines the semantics of sending and receiving different types of messages, and the RDF family of languages specifies how to represent the data in the messages and provides means to elaborate the semantics of that data. The combination of HTTP and RDF (together with the shared assumption of both that URIs serve as identifiers) is called Linked Data.
While the representation of static data in the context of Linked Data has been formally grounded in mathematical logic, a formal treatment of dynamics and behaviour on Linked Data is largely missing. We regard behaviour in this context as the way in which a system (e.g. a user agent or server) works, and this behaviour manifests itself in dynamic data. Using a formal treatment of behaviour on Linked Data, we could specify applications that use or provide Linked Data in a way that allows for formal analysis (e.g. expressivity, validation, verification). Using an experimental treatment of behaviour, or of the behaviour's manifestation in dynamic data, we could better design the handling of Linked Data in applications.
Hence, in this thesis, we investigate the notion of behaviour in the context of Linked Data. Specifically, we investigate the research question of how to capture the dynamics of Linked Data to inform the design of applications. The first contribution is a corpus that we built and analysed by monitoring dynamic Linked Data on the web to study its update behaviour. We provide an extensive analysis to set up a long-term study of the dynamics of Linked Data on the web, analysing data from the long-term study for dynamics both on the level of accessing changing documents and on the level of changes within the documents. The second contribution is a model of computation for Linked Data that allows for expressing executable specifications of application behaviour. We provide a mapping from the conceptual foundations of the standards around Linked Data to Abstract State Machines, a Turing-complete model of computation rooted in mathematical logic. The third contribution is a workflow ontology and corresponding operational semantics to specify applications that execute and monitor behaviour in the context of Linked Data. Our approach allows for monitoring and executing behaviour specified in workflow models and respects the assumptions of the standards and practices around Linked Data. We evaluate our findings using the experimental corpus of dynamic Linked Data on the web and a synthetic benchmark from the Internet of Things, specifically the domain of building automation.
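The flavour of the second contribution can be conveyed by a toy sketch: behaviour as condition-action rules over a set of RDF-like triples, where, in the spirit of an Abstract State Machine step, all rules read the same pre-state and their updates are applied simultaneously. The rule, URIs, and state representation below are made up for illustration and are not the thesis's formal mapping.

```python
# Illustrative ASM-style step over an RDF-like state: triples are (s, p, o)
# tuples, rules map the pre-state to (additions, removals), and all updates
# from one step are applied at once to produce the post-state.

def asm_step(triples, rules):
    add, remove = set(), set()
    for rule in rules:
        a, r = rule(frozenset(triples))  # every rule sees the same pre-state
        add |= a
        remove |= r
    return (triples - remove) | add

# Example rule (hypothetical URIs, building-automation flavour): when a
# sensor reports a high temperature, command a window actuator to open.
def cooling_rule(state):
    if ("urn:sensor1", "urn:reads", "high") in state:
        return {("urn:window1", "urn:commanded", "open")}, set()
    return set(), set()
```

Reading the sensor triple corresponds to an HTTP GET on a Linked Data resource, and adding the command triple to a write-through state corresponds to a PUT/POST; the simultaneous-update discipline is what distinguishes an ASM step from ad hoc sequential mutation.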