699 research outputs found

    Video and Image Super-Resolution via Deep Learning with Attention Mechanism

    Get PDF
    Image demosaicing, image super-resolution, and video super-resolution are three important tasks in the color imaging pipeline. Demosaicing deals with recovering missing color information and generating full-resolution color images from a so-called Color Filter Array (CFA) such as the Bayer pattern. Image super-resolution aims at increasing spatial resolution and enhancing important structures (e.g., edges and textures) in super-resolved images. Both spatial and temporal dependencies are important to video super-resolution, which has received increasing attention in recent years. Traditional solutions to these three low-level vision tasks lack generalization capability, especially on real-world data. Recently, deep learning methods have achieved great success in vision problems including image demosaicing and image/video super-resolution. Conceptually similar to adaptation in model-based approaches, attention has seen increasing use in deep learning. As a tool to reallocate limited computational resources based on the importance of informative components, attention mechanisms, including channel attention, spatial attention, and non-local attention, have found successful applications in both high-level and low-level vision tasks. However, to the best of our knowledge, 1) most approaches have studied super-resolution and demosaicing independently; little is known about the potential benefit of formulating a joint demosaicing and super-resolution (JDSR) problem; 2) attention mechanisms have not been studied for the spectral channels of color images in the open literature; 3) current approaches to video super-resolution implement deformable-convolution-based frame alignment and naive spatial attention mechanisms. How to exploit attention mechanisms in the spectral and temporal domains sets the stage for the research in this dissertation.
In this dissertation, we conduct a systematic study of these issues and make the following contributions: 1) we propose a spatial color attention network (SCAN) designed to jointly exploit the spatial and spectral dependencies within color images for the single image super-resolution (SISR) problem. We present a spatial color attention module that calibrates important color information for individual color components from the output feature maps of residual groups. Experimental results show that SCAN achieves superior performance in terms of both subjective and objective quality on the NTIRE2019 dataset; 2) we propose two competing end-to-end joint optimization solutions to the JDSR problem: the Densely-Connected Squeeze-and-Excitation Residual Network (DSERN) and the Residual-Dense Squeeze-and-Excitation Network (RDSEN). Experimental results show that the enhanced design, RDSEN, significantly improves both subjective and objective performance over DSERN; 3) we propose a novel deep learning based framework, the Deformable Kernel Spatial Attention Network (DKSAN), to super-resolve videos with a scale factor as large as 16 (the extreme SR situation). Thanks to the newly designed Deformable Kernel Convolution Alignment (DKC Align) and Deformable Kernel Spatial Attention (DKSA) modules, DKSAN achieves better subjective and objective results than the existing state-of-the-art enhanced deformable convolutional network (EDVR).
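The channel attention that the Squeeze-and-Excitation blocks in DSERN/RDSEN build on can be sketched in a few lines. The following NumPy sketch is illustrative only: the tensor sizes, random weights, and reduction ratio are hypothetical, not the networks proposed in the dissertation.

```python
import numpy as np

def se_channel_attention(feature_map, w1, w2):
    """Squeeze-and-Excitation channel attention on a (C, H, W) feature map:
    squeeze = global average pooling per channel,
    excitation = two-layer gating producing per-channel weights in (0, 1)."""
    squeezed = feature_map.mean(axis=(1, 2))        # (C,)
    hidden = np.maximum(0.0, w1 @ squeezed)         # ReLU bottleneck, (C//r,)
    weights = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))  # sigmoid gate, (C,)
    return feature_map * weights[:, None, None]     # recalibrated channels

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 4                 # hypothetical sizes and reduction ratio
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C))   # squeeze projection: C -> C//r
w2 = rng.standard_normal((C, C // r))   # excitation projection: C//r -> C
y = se_channel_attention(x, w1, w2)
print(y.shape)  # (8, 4, 4)
```

Each channel is scaled by a learned weight between 0 and 1, which is how "limited computational resources are reallocated" toward informative channels.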

    Real options for adaptive decisions in primary industries

    Get PDF
    Abstract The long-term sustainability of Australian crop and livestock farms is threatened by climate change and climate variability. In response, farmers may decide to (1) adjust practices and technologies, (2) change production systems, or (3) transform their industries, for example by relocating to new geographical areas. Adjustments to existing practices are easy to make relative to changes to production systems or transformations of an industry. Switching between production regimes requires new investments and infrastructure and can leave assets stranded. These changes can be partially or wholly irreversible, and hysteresis effects can make switching difficult and mistakes costly to reverse. ‘Real options’ is a framework for structuring thinking and analysis of these difficult choices. Previous work has demonstrated how real options can be applied to adaptation, extending traditional economic analyses of agricultural investment decisions based on net present values to better represent the uncertainty and risks of climate change. This project uses transects across space as analogues for future climate scenarios. We simulate yields from climate data and draw on data from actual farms to estimate a real options model referred to as ‘Real Options for Adaptive Decisions’ (ROADs). We present results for the transformation of wheat-dominant cropping systems in South Australia, New South Wales, and Western Australia. We find that farmers’ decisions, as much as a changing climate, determine how agriculture will be transformed. Please cite this report as: Hertzler, G, Sanderson, T, Capon, T, Hayman, P, Kingwell, R, McClintock, A, Crean, J, Randall, A 2013 Will primary producers continue to adjust practices and technologies, change production systems or transform their industry – an application of real options, National Climate Change Adaptation Research Facility, Gold Coast, pp. 93.
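The net-present-value baseline that real options analysis extends can be illustrated with a minimal sketch. The cash flows, discount rate, and upfront switching investment below are hypothetical figures for illustration, not outputs of the ROADs model.

```python
def npv(cash_flows, rate):
    """Net present value of yearly cash flows (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical example: switching production systems costs 500 upfront,
# then returns 120 per year for 6 years, discounted at 7%.
flows = [-500] + [120] * 6
print(round(npv(flows, 0.07), 2))  # 71.98
```

A real options analysis goes beyond this static figure by valuing the flexibility to delay, stage, or abandon the (partially irreversible) switch as climate uncertainty resolves.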

    Improving the conceptualization of the streamflow Travel Time Distribution and its estimation from isotopic tracers

    Get PDF
    Water flowing through hydrological landscapes (such as catchments) takes varying amounts of time to reach the next stream. In other words, the water in a stream consists of a distribution of water of different ages. Travel Time Distributions (TTDs) characterize the transit times of water until it discharges from a catchment and describe how catchments store and release water that fell as precipitation days, months, and years earlier. Knowledge of TTDs is crucial for better water resource management because it provides insight into flow paths and flow velocities in hydrological systems. TTDs are also essential for constraining hydrological models that simulate water quality. Streamflow TTDs are estimated using hydrological tracers such as the stable isotopes of oxygen (O) and hydrogen (H). Detailed properties of TTDs, such as their shape, statistics, and temporal variability, are not fully deciphered, and the factors influencing these properties are not clearly identified. Moreover, there is no agreed framework for estimating nonstationary TTDs or for applying them to model isotope tracer transport. These limitations stem partly from the widespread use of simple stationary TTD models derived from the low-resolution tracer data of past decades. Most travel time studies rely on fortnightly or monthly stable isotope data, usually collected over periods of less than two years.
Accordingly, the temporal variability of the TTD was not accounted for in most of the analytical TTD models against which tracer time series were compared. The overarching goal of this work is to understand what controls the shape and temporal variability of TTDs and how they can be inferred from tracer data. To this end, theoretical investigations, experimental work (in the field and the laboratory), and modelling are combined in a way that goes well beyond previous studies. In catchments in Luxembourg, isotope tracers (²H, ¹⁸O, ³H) were measured in precipitation and streamflow at high resolution (several measurements per day) over two years. Building on these theoretical and experimental foundations, and in particular on the isotope dataset collected in Luxembourg, improved TTD parameterizations and new analytical models are proposed. A method for assessing water age information via a dual-isotope approach (using ²H and ³H) is proposed to clarify emerging misconceptions about the limitations of the stable isotopes of O and H compared to ³H. This dissertation shows that water TTDs can take more varied shapes and exhibit more complex variability than assumed in studies of the past decade. In Mediterranean climates, for example, complex water age patterns can emerge during transitions between summer and winter. Superimposed runoff generation processes with different flow paths or velocities can generate multimodal TTDs containing several age peaks.
Furthermore, it is shown that only tracers used in combination, such as ²H and ³H, can help to decipher multiple TTD peaks and the long tails associated with old water. Accurate streamflow TTDs will soon become an essential concept for water management decision-makers seeking to curb declining water quality. The more efficient and accurate estimation of nonstationary streamflow TTDs from isotope tracers, together with the improved parameterizations proposed in this work, paves the way for a holistic understanding of water flow paths and water quality in catchments.
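The basic link between a travel time distribution and the stream tracer signal can be sketched as a convolution of the precipitation tracer input with the TTD. The exponential TTD, its 20-day mean travel time, and the sinusoidal seasonal input below are illustrative assumptions, not the parameterizations proposed in the thesis.

```python
import numpy as np

def stream_tracer(input_tracer, ttd):
    """Stream tracer signal as the convolution of the precipitation
    tracer input with a (stationary) travel time distribution."""
    return np.convolve(input_tracer, ttd)[: len(input_tracer)]

# Illustrative exponential TTD with a 20-day mean travel time,
# discretised daily, truncated at 200 days, normalised to sum to 1.
days = np.arange(200)
ttd = np.exp(-days / 20.0)
ttd /= ttd.sum()

# Sinusoidal seasonal isotope input over three years
# (e.g. delta-2H in precipitation, in permil).
t = np.arange(365 * 3)
precip_tracer = -60.0 + 20.0 * np.sin(2 * np.pi * t / 365)

stream = stream_tracer(precip_tracer, ttd)
# Mixing water of many ages damps the seasonal amplitude of the input:
print(stream[365:].std() < precip_tracer[365:].std())  # True
```

Tracer-based TTD estimation essentially inverts this relation: given measured input and stream signals, infer the shape of `ttd`, which is why low-resolution or short records constrain only simple stationary forms.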

    UQ and AI: data fusion, inverse identification, and multiscale uncertainty propagation in aerospace components

    Get PDF
    A key requirement for engineering designs is that they offer good performance across a range of uncertain conditions while exhibiting an admissibly low probability of failure. To design components that meet this requirement, it is necessary to take account of the effect of the uncertainties associated with a candidate design. Uncertainty Quantification (UQ) methods are statistical methods that may be used to quantify the effect of the uncertainties inherent in a system on its performance. This thesis expands the envelope of UQ methods for the design of aerospace components, supporting the integration of UQ methods into product development by addressing four industrial challenges. Firstly, a method for propagating uncertainty through computational models in a hierarchy of scales is described, based on probabilistic equivalence and Non-Intrusive Polynomial Chaos (NIPC). This problem is relevant to the design of aerospace components because the computational models used to evaluate candidate designs are typically multiscale. This method was then extended into a formulation for inverse identification, where the probability distributions of the material properties of a coupon are deduced from measurements of its response. We demonstrate how probabilistic equivalence and the Maximum Entropy Principle (MEP) may be used to leverage simulation data alongside scarce experimental data, with the intention of making this stage of product design less expensive and time-consuming. The third contribution of this thesis is to develop two novel meta-modelling strategies to promote wider exploration of the design space during the conceptual design phase.
Design Space Exploration (DSE) in this phase is crucial because decisions made at the early, conceptual stages of an aircraft design can restrict the range of alternative designs available at later stages of the design process, despite only limited quantitative knowledge of the interaction between requirements being available at this stage. A histogram interpolation algorithm is presented that allows the designer to interactively explore the design space with a model-free formulation, while a meta-model based on Knowledge Based Neural Networks (KBaNNs) is proposed in which the outputs of a high-level, inexpensive computer code inform the outputs of a neural network, thereby addressing the criticism that neural networks are purely data-driven and operate as black boxes. The final challenge addressed by this thesis is how to iteratively improve a meta-model by expanding the dataset used to train it. Given the reliance of UQ methods on meta-models, this is an important challenge. This thesis proposes an adaptive learning algorithm for Support Vector Machine (SVM) meta-models, which are used to approximate an unknown function. In particular, we apply the adaptive learning algorithm to test cases in reliability analysis.
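As a simple stand-in for the polynomial-chaos machinery described above, plain Monte Carlo sampling illustrates what "propagating uncertainty through a model" means. The load/stiffness response and its input distributions are hypothetical, chosen only to make the sketch self-contained.

```python
import numpy as np

def propagate_uncertainty(model, sample_inputs, n=200_000, seed=0):
    """Monte Carlo uncertainty propagation (a brute-force stand-in for
    NIPC): sample the uncertain inputs, push each sample through the
    model, and summarise the output distribution."""
    rng = np.random.default_rng(seed)
    y = model(*sample_inputs(rng, n))
    return y.mean(), y.std()

# Hypothetical component response: deflection = load / stiffness,
# with independent normally distributed load and stiffness.
def deflection(load, stiffness):
    return load / stiffness

def sample_inputs(rng, n):
    return rng.normal(100.0, 10.0, n), rng.normal(50.0, 2.0, n)

mean, std = propagate_uncertainty(deflection, sample_inputs)
print(f"deflection ~ {mean:.2f} +/- {std:.2f}")
```

NIPC replaces the brute-force sampling with a polynomial surrogate of the model, so far fewer (expensive) model evaluations are needed for the same output statistics.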

    EverFarm® - Climate adapted perennial-based farming systems for dryland agriculture in southern Australia

    Get PDF
    Abstract Australian dryland agriculture will be affected by climate change in a number of ways. First, higher temperatures and changes to rainfall are likely to create greater variability of crop yields and livestock productivity. Second, government policies introduced to mitigate greenhouse gas emissions are likely to influence production costs and commodity prices. Third, global trade patterns are likely to alter as populations increase, and as climate change continues to affect producers and consumers worldwide. This will create both challenges and opportunities for Australian agriculture. Farmers will have to respond to the additional challenge of climate change even when it is compounded by existing long-term stresses associated with declining terms of trade, climate variability and existing environmental issues. Investing in new land-use options to combat climate change, with their associated risks, is made more difficult by being set against a backdrop of declining profitability. The opportunity to create transformational change in farming enterprises was tested by combining the multiple components of potential future perennial-based dryland farming systems and assessing their expected contribution to climate change adaptation. This project has found that adopting perennial pastures for livestock grazing and tree crops for biomass production, when planted on appropriate soils, can improve profitability when compared to the existing land uses facing a changing climate. In some farming systems increased cropping is likely to result in improved future farm profits. This work demonstrated that mallees as a biomass tree crop can be cohesively integrated into existing farming systems with minimal interruption to normal operations of livestock and cropping enterprises. A woody biomass crop can be profitable and diversify revenue risk by enabling farmers to supply biomass and sequestered carbon to relevant markets.
This work demonstrates suitable designs of a mallee belt planting layout that minimizes costs and maximizes benefits when planted in appropriate agro-climatic zones and where there are adequate soil conditions. Knowledge developed from this work will help build farmers’ capacity for climate change adaptation and assist in achieving positive social, environmental and economic outcomes. Please cite this report as: Farquharson, R, Abadi, A, Finlayson, J, Ramilan, T, Liu, DL, Muhaddin, A, Clark, S, Robertson, S, Mendham, D, Thomas, Q, McGrath, J 2013 EverFarm® – Climate adapted perennial-based farming systems for dryland agriculture in southern Australia, National Climate Change Adaptation Research Facility, Gold Coast, pp. 159.

    Dependability-driven Strategies to Improve the Design and Verification of Safety-Critical HDL-based Embedded Systems

    Full text link
    Embedded systems are steadily extending their application areas, dealing with increasing requirements in performance, power consumption, and area (PPA). Whenever embedded systems are used in safety-critical applications, they must also meet rigorous dependability requirements to guarantee their correct operation during an extended period of time. Meeting these requirements is especially challenging for systems based on Field Programmable Gate Arrays (FPGAs), since these devices are very susceptible to Single Event Upsets. This leads to increased dependability threats, especially in harsh environments. Dependability should therefore be considered one of the primary criteria for decision making throughout the whole design flow, complemented by several dependability-driven processes. First, dependability assessment quantifies the robustness of hardware designs against faults and identifies their weak points. Second, dependability-driven verification ensures the correctness and efficiency of fault mitigation mechanisms. Third, dependability benchmarking allows designers to select (from a dependability perspective) the most suitable IP cores, implementation technologies, and electronic design automation (EDA) tools. Finally, dependability-aware design space exploration (DSE) allows the selected IP cores and EDA tools to be optimally configured to improve as much as possible the dependability and PPA features of the resulting implementations. The aforementioned processes rely on fault injection testing to quantify the robustness of the designed systems.
Although a wide variety of fault injection solutions exists nowadays, several important problems still need to be addressed to better cover the needs of a dependability-driven design flow. In particular, simulation-based fault injection (SBFI) should be adapted to implementation-level HDL models to take into account the architecture of diverse logic primitives, while keeping the injection procedures generic and low-intrusive. Likewise, the granularity of FPGA-based fault injection (FFI) should be refined to enable accurate identification of weak points in FPGA-based designs. Another important challenge that dependability-driven processes face in practice is the reduction of SBFI and FFI experimental effort. The high complexity of modern designs raises the experimental effort beyond the available time budgets, even in simple dependability assessment scenarios, and it becomes prohibitive in the presence of alternative design configurations. Finally, dependability-driven processes lack instrumental support covering the semicustom design flow in all its variety of description languages, implementation technologies, and EDA tools. Existing fault injection tools only partially cover the individual stages of the design flow, usually being specific to a particular design representation level and implementation technology. This work addresses the aforementioned challenges by efficiently integrating dependability-driven processes into the design flow. First, it proposes new SBFI and FFI approaches that enable accurate and detailed dependability assessment at different levels of the design flow. Second, it improves the performance of dependability-driven processes by defining new techniques for accelerating SBFI and FFI experiments. Third, it defines two DSE strategies that enable the optimal dependability-aware tuning of IP cores and EDA tools, while reducing as much as possible the robustness evaluation effort.
Fourth, it proposes a new toolkit (DAVOS) that automates and seamlessly integrates the aforementioned dependability-driven processes into the semicustom design flow. Finally, it illustrates the usefulness and efficiency of these proposals through a case study consisting of three soft-core embedded processors implemented on a Xilinx 7-series SoC FPGA. Tuzov, I. (2020). Dependability-driven Strategies to Improve the Design and Verification of Safety-Critical HDL-based Embedded Systems [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/159883
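The single-event-upset model underlying fault injection can be sketched as exhaustive single-bit flips on a register value. This toy campaign, with a hypothetical observable function rather than the DAVOS toolkit, shows how injection experiments locate the weak points of a design.

```python
def inject_bit_flip(word, bit, width=32):
    """Flip one bit of a register value, emulating a single event upset."""
    return (word ^ (1 << bit)) & ((1 << width) - 1)

def fault_injection_campaign(observable, word, width=32):
    """Exhaustive single-bit campaign: evaluate the observable output once
    per flipped bit and report which bits corrupt it (the weak points)."""
    golden = observable(word)
    return [b for b in range(width)
            if observable(inject_bit_flip(word, b, width)) != golden]

# Hypothetical observable: only the low byte of the register is used,
# so only faults in bits 0..7 are critical.
low_byte = lambda w: w & 0xFF
critical = fault_injection_campaign(low_byte, 0x1234ABCD)
print(critical)  # [0, 1, 2, 3, 4, 5, 6, 7]
```

Real SBFI/FFI campaigns do the same thing at scale, against simulated HDL models or live FPGA configuration memory, which is why the experimental effort grows so quickly with design complexity.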

    Evaluating the risks of pasture and land degradation in native pastures in Queensland

    Get PDF
    The objective of the project was to develop an approach to quantify the risks of land and pasture degradation. This objective was achieved by developing an operational model of the condition of native pastures in Queensland. The results of the project showed that: 1) historical and current pasture data can be used with models to simulate grazing lands in near real-time; 2) spatial models of production can be developed and validated with existing spatial data and monitoring systems; 3) data from graziers indicate that safe utilisation rates are 15-25% of average pasture growth; 4) relative risks of land and pasture degradation can be quantified from simulations using actual stock numbers compared to safe stocking rates; and 5) case studies using the pasture growth model and models of grazing feedback on pasture and land degradation to evaluate the economic consequences of stocking rate strategies have been used in other projects (e.g. DroughtPlan: McKeon et al. 1996, Stafford Smith et al. 1996).
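The reported safe utilisation range of 15-25% of average pasture growth translates directly into a stocking-rate check. The function names and the example figures below are hypothetical illustrations, not the project's operational model.

```python
def utilisation_rate(intake_per_head_kg, head_per_ha, growth_kg_per_ha):
    """Fraction of average annual pasture growth eaten by stock."""
    return intake_per_head_kg * head_per_ha / growth_kg_per_ha

def degradation_risk(rate, safe_range=(0.15, 0.25)):
    """Classify a utilisation rate against the reported 15-25% safe range."""
    low, high = safe_range
    if rate > high:
        return "at risk"
    return "safe" if rate >= low else "conservative"

# Hypothetical paddock: 400 kg intake/head/yr, 1 head/ha, and
# 2000 kg/ha average annual pasture growth -> 20% utilisation.
rate = utilisation_rate(400.0, 1.0, 2000.0)
print(rate, degradation_risk(rate))  # 0.2 safe
```

Comparing actual against safe stocking rates in this way is, in spirit, how the project quantifies relative degradation risk from simulations.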

    Response and Damage Assessment of Reinforced Concrete Frames subject to Earthquakes

    Get PDF