
    The rationales of resilience in English and Dutch flood risk policies

    We compared the governance of flood risk in England and the Netherlands, focusing on the general policies, the instruments used and the underlying principles. Both physical and political environments are important in explaining how the two countries evolved towards very different rationales of resilience. Systematically answering questions such as ‘who decides’, ‘who should act’ and ‘who is responsible and liable for flood damage’ reveals a quite fundamental difference in what resilience means and how it affects the governance regime. In the Netherlands there is a nationwide collective regime with a technocracy based on the merit of water expertise, legitimated by a social contract in which the government is responsible and the general public accepts and supports this. In England there is also a technocracy, but it is part of a general-political and economic-rational decision-making process, with responsibilities spread over the state, insurance companies, individuals and communities. The rationales are connected to specific conceptions of the public interest, leading to specific governance principles. In both countries, flood risk strategies are discussed in the light of climate change effects, but resilience strategies show persistence, combined with gradual adaptation of practices at lower scales, rather than great transformations.

    Governance strategies for improving flood resilience in the face of climate change

    Flooding is the most common of all natural disasters and accounts for large numbers of casualties and substantial economic damage worldwide. To be ‘flood resilient’, countries need sufficient capacity to resist, the capacity to absorb and recover, and the capacity to transform and adapt. Based on international comparative research, we conclude that six key governance strategies will enhance ‘flood resilience’ and secure the necessary capacities. These strategies pertain to: (i) the diversification of flood risk management approaches; (ii) the alignment of flood risk management approaches to overcome fragmentation; (iii) the involvement, cooperation and alignment of both public and private actors in flood risk management; (iv) the presence of adequate formal rules that balance legal certainty and flexibility; (v) the assurance of sufficient financial and other types of resources; and (vi) the adoption of normative principles that adequately deal with distributional effects. These governance strategies appear to be relevant across different physical and institutional contexts. The findings may also hold valuable lessons for the governance of climate adaptation more generally.

    Performance of neural networks for localizing moving objects with an artificial lateral line

    Fish are able to sense water flow velocities relative to their body with their mechanoreceptive lateral line organ. This organ consists of an array of flow detectors distributed along the fish body. Using the excitation of these individual detectors, fish can determine the location of nearby moving objects. Inspired by this sensory modality, it is shown here how neural networks can be used to extract an object's location from simulated excitation patterns, as can be measured along arrays of stationary artificial flow velocity sensors. The applicability, performance and robustness with respect to input noise of different neural network architectures are compared. When trained and tested under high signal-to-noise conditions (46 dB), the Extreme Learning Machine architecture performs best, with a mean Euclidean error of 0.4% of the maximum depth of the field D, which is taken as half the length of the sensor array. Under lower signal-to-noise conditions, Echo State Networks, which have recurrent connections, enhance performance, while the Multilayer Perceptron is shown to be the most noise-robust architecture. Neural network performance decreases when the source moves close to the sensor array or towards the sides of the array. For all considered architectures, increasing the number of detectors per array increases localization performance and robustness.
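
    As a rough illustration of the localization task described above, the Python sketch below trains a scikit-learn MLPRegressor (standing in for the Multilayer Perceptron architecture) to map simulated excitation patterns from a linear sensor array to a source position. The eight-sensor array, the simplified distance-based excitation function and the noise level are illustrative assumptions, not the paper's actual flow model or parameters.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Assumed setup: 8 sensors evenly spaced along a line of length L;
# the field depth D is taken as half the array length, as in the abstract.
n_sensors = 8
L = 1.0
D = L / 2
sensor_x = np.linspace(0.0, L, n_sensors)

def excitation(src_x, src_y):
    # Crude stand-in for a moving-object flow signature: each detector's
    # response decays with distance to the source (not the true dipole field).
    dx = sensor_x - src_x
    r2 = dx**2 + src_y**2
    return src_y / r2**1.5

# Simulate excitation patterns for random source positions in front of the array.
n_samples = 5000
positions = np.column_stack([rng.uniform(0.0, L, n_samples),
                             rng.uniform(0.05 * D, D, n_samples)])
X = np.array([excitation(x, y) for x, y in positions])
X += rng.normal(0.0, 1e-3 * np.abs(X).max(), X.shape)  # add input noise

X_train, X_test, y_train, y_test = train_test_split(X, positions, random_state=0)

# Train a Multilayer Perceptron to regress the (x, y) source location.
mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
mlp.fit(X_train, y_train)

errors = np.linalg.norm(mlp.predict(X_test) - y_test, axis=1)
print(f"mean Euclidean error: {errors.mean() / D:.1%} of the field depth D")
```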

    Solid organ transplantation programs facing lack of empiric evidence in the COVID‐19 pandemic: A By‐proxy Society Recommendation Consensus approach

    The ongoing severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic has a drastic impact on national health care systems. Given the overwhelming demand on facility capacity, the impact on all health care sectors has to be addressed. Solid organ transplantation represents a field with high demands on staff, intensive care units and follow-up facilities. The great therapeutic value of organ transplantation has to be weighed against mandatory constraints on health care capacities. In addition, the management of immunosuppressed recipients has to be reassessed during the ongoing coronavirus disease 2019 (COVID-19) pandemic. In addressing these crucial questions, transplant physicians face a total lack of scientific evidence. Therefore, the aim of this study was to offer consensus-based guidance derived from information provided by 22 transplant societies. Key recommendations were extracted, and the degree of consensus among the different organizations was calculated. A high degree of consensus was found for temporarily suspending nonurgent transplant procedures and living donation programs. Systematic polymerase chain reaction-based testing of donors and recipients was broadly recommended. Additionally, more specific aspects (eg, screening of surgical explant teams and restricted use of marginal donor organs) were included in our analysis. This study offers a novel approach to informed guidance for health care management when no scientific evidence is available a priori.
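
    Purely to illustrate the ‘degree of consensus’ calculation mentioned above, the short Python sketch below computes the share of societies endorsing each recommendation; the recommendation labels and votes are invented for the example and do not reproduce the study's data.

```python
# Hypothetical endorsement lists: 1 = society endorses the recommendation,
# 0 = it does not (invented example data, not the study's 22 societies).
recommendations = {
    "temporarily suspend nonurgent transplant procedures": [1, 1, 1, 0, 1],
    "temporarily suspend living donation programs":        [1, 1, 0, 1, 1],
    "PCR-based testing of donors and recipients":          [1, 1, 1, 1, 0],
}

for name, votes in recommendations.items():
    consensus = sum(votes) / len(votes)
    print(f"{name}: {consensus:.0%} consensus among {len(votes)} societies")
```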

    Deep Learning for Identification of Acute Illness and Facial Cues of Illness

    Background: The inclusion of facial and bodily cues (clinical gestalt) in machine learning (ML) models improves the assessment of patients' health status, as shown in genetic syndromes and acute coronary syndrome. It is unknown whether the inclusion of clinical gestalt improves ML-based classification of acutely ill patients. As in previous research on ML analysis of medical images, simulated or augmented data may be used to assess the usability of clinical gestalt. Objective: To assess whether a deep learning algorithm trained on a dataset of simulated and augmented facial photographs reflecting acutely ill patients can distinguish between healthy individuals and acutely ill individuals infused with lipopolysaccharide (LPS). Methods: Photographs from twenty-six volunteers whose facial features were manipulated to resemble a state of acute illness were used to extract features of illness and generate a synthetic dataset of acutely ill photographs, using a neural transfer convolutional neural network (NT-CNN) for data augmentation. Four distinct CNNs were then trained on different parts of the facial photographs and concatenated into one final, stacked CNN which classified individuals as healthy or acutely ill. Finally, the stacked CNN was validated on an external dataset of volunteers injected with LPS. Results: In the external validation set, the four individual feature models distinguished acutely ill patients with sensitivities ranging from 10.5% (95% CI, 1.3–33.1%) for the skin model to 89.4% (66.9–98.7%) for the nose model. Specificity ranged from 42.1% (20.3–66.5%) for the nose model to 94.7% (73.9–99.9%) for the skin model. The stacked model combining all four facial features achieved an area under the receiver operating characteristic curve (AUROC) of 0.67 (0.62–0.71) and distinguished acutely ill patients with a sensitivity of 100% (82.35–100.00%) and a specificity of 42.11% (20.25–66.50%). Conclusion: A deep learning algorithm trained on a synthetic, augmented dataset of facial photographs distinguished between healthy and simulated acutely ill individuals, demonstrating that synthetically generated data can be used to develop algorithms for health conditions in which large datasets are difficult to obtain. These results support the potential of facial feature analysis algorithms to support the diagnosis of acute illness.
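
    To illustrate the stacked architecture described above, the Keras sketch below builds four small CNN branches for separate facial regions and concatenates their feature vectors into a single healthy-versus-acutely-ill classifier. The region names other than nose and skin, the crop size, the layer sizes and the training configuration are assumptions for illustration, not the study's actual models.

```python
import tensorflow as tf
from tensorflow.keras import Input, Model, layers

def feature_branch(name, shape=(64, 64, 3)):
    # Small CNN for one facial region crop (architecture is illustrative).
    inp = Input(shape=shape, name=f"{name}_crop")
    x = layers.Conv2D(16, 3, activation="relu")(inp)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(32, 3, activation="relu")(x)
    x = layers.GlobalAveragePooling2D()(x)
    return inp, layers.Dense(32, activation="relu", name=f"{name}_features")(x)

# Nose and skin models are named in the abstract; eyes and mouth are assumed here.
regions = ["eyes", "nose", "mouth", "skin"]
inputs, features = zip(*(feature_branch(r) for r in regions))

# Stacked model: concatenate the four region feature vectors and classify
# each individual as healthy (0) or acutely ill (1).
merged = layers.concatenate(list(features))
output = layers.Dense(1, activation="sigmoid", name="acutely_ill")(merged)
stacked_cnn = Model(inputs=list(inputs), outputs=output)

stacked_cnn.compile(optimizer="adam",
                    loss="binary_crossentropy",
                    metrics=[tf.keras.metrics.AUC()])
stacked_cnn.summary()
```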

    Recurrent governance challenges in the implementation and alignment of flood risk management strategies: a review

    In Europe, increasing flood risks challenge societies to diversify their Flood Risk Management Strategies (FRMSs). Such diversification implies that actors focus not only on flood defence but also, simultaneously, on flood risk prevention, mitigation, preparation and recovery. There is much literature on the implementation of specific strategies and measures, as well as on flood risk governance more generally. What is lacking, though, is a clear overview of the complex set of governance challenges that may result from the diversification and alignment of FRM strategies. This paper aims to address this knowledge gap. It elaborates on potential processes and mechanisms for coordinating the activities and capacities of actors involved at different levels and in different sectors of flood risk governance, concerning both the implementation of individual strategies and the coordination of the overall set of strategies. It identifies eight overall coordination mechanisms that have proven useful in this respect.

    Warm-Start AlphaZero Self-Play Search Enhancements

    Recently, AlphaZero has achieved landmark results in deep reinforcement learning by providing a single self-play architecture that learned three different games at superhuman level. AlphaZero is a large and complicated system with many parameters, and success requires much compute power and fine-tuning. Reproducing results in other games is a challenge, and many researchers are looking for ways to improve results while reducing computational demands. AlphaZero's design is purely based on self-play and makes no use of labeled expert data or domain-specific enhancements; it is designed to learn from scratch. We propose a novel approach to deal with this cold-start problem by employing simple search enhancements at the beginning phase of self-play training, namely Rollout, Rapid Action Value Estimate (RAVE), dynamically weighted combinations of these with the neural network, and Rolling Horizon Evolutionary Algorithms (RHEA). Our experiments indicate that most of these enhancements improve the performance of their baseline player in three different (small) board games, with RAVE-based variants in particular playing strongly.
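
    As a minimal sketch of the dynamically weighted warm start described above, the Python snippet below blends a classical search estimate (such as a Rollout or RAVE value) with the neural-network value during the first self-play iterations. The linear schedule and the 50-iteration cut-off are illustrative assumptions, not the paper's actual settings.

```python
def warm_start_value(classical_estimate, network_value, iteration,
                     warm_start_iters=50):
    """Dynamically weighted combination of a classical search estimate
    (e.g. a Rollout or RAVE value) with the neural-network value during
    the early phase of self-play training. The schedule and cut-off are
    illustrative assumptions, not the paper's parameters."""
    if iteration >= warm_start_iters:
        return network_value                      # rely solely on the trained network
    weight = 1.0 - iteration / warm_start_iters   # decaying weight on the classical estimate
    return weight * classical_estimate + (1.0 - weight) * network_value

# Early in training (iteration 5), the classical estimate dominates the blend.
print(warm_start_value(classical_estimate=0.8, network_value=0.1, iteration=5))
```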