25 research outputs found

    Unraveling the characteristic spatial scale of habitat selection for forest grouse species in the boreal landscape

    The characteristic spatial scale at which species respond most strongly to forest structure is often unclear; it is species-specific and depends on the degree of landscape heterogeneity. Research often analyzes a single pre-defined spatial scale when constructing species distribution models that relate forest variables to occupancy patterns. This is a limitation, as forest characteristics shape species' habitat use at multiple spatial scales. To explore the drivers of this relationship, we conducted an in-depth investigation into how scaling forest variables to biologically relevant spatial scales affects the occupancy of grouse species in boreal forest. We used 4,790 observations (broods and adults) of four forest grouse species (capercaillie, black grouse, hazel grouse, and willow grouse), collected over 15 years across 39,303 stands and obtained from comprehensive Finnish wildlife triangle census data, together with forest variables derived from Airborne Laser Scanning and satellite data originally sampled at 16 m resolution. We fitted Generalized Additive Mixed Models linking grouse presence/absence in the Finnish boreal forest with forest stand structure and composition. We estimated the effects of predictor variables aggregated at three spatial scales reflecting the species' use of the landscape: the local level at stand scale, the home range level at 1 km radius, and the regional level at 5 km radius. Multi-grain models, which consider forest-species relationships at multiple scales simultaneously, were used to evaluate whether there is a specific scale at which forest characteristics best predict local grouse occupancy. We found that the spatial scale affected the predictive capacity of the occupancy models and that the characteristic scale of habitat selection (i.e., the stand scale) was shared among species, although different grouse species exhibited varying optimal spatial scales for occupancy prediction.
Forest structure was more important than compositional diversity in predicting grouse occupancy, irrespective of scale. A limited number of forest predictors, related to the availability of multi-layered vegetation and of suitable thickets, explained the occupancy patterns of all grouse species at different scales. In conclusion, modelling grouse occupancy with forest predictors at different spatial scales can inform forest managers about the scale at which these species perceive the landscape. This evidence calls for an integrated multiscale approach to habitat modelling for forest species.
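As a rough illustration of what aggregating a forest variable at nested spatial scales looks like, the sketch below computes moving-window means of a toy canopy raster at three cell radii standing in for the stand, home-range, and regional levels. The radii, grid size, and variable are invented for the example and do not reflect the study's actual data or smoothing method.

```python
import numpy as np

def window_mean(grid, radius):
    """Mean of `grid` within a square window of the given cell radius,
    computed per cell (a crude stand-in for a circular moving window)."""
    n = 2 * radius + 1
    padded = np.pad(grid, radius, mode="edge")
    out = np.empty(grid.shape, dtype=float)
    rows, cols = grid.shape
    for i in range(rows):
        for j in range(cols):
            out[i, j] = padded[i:i + n, j:j + n].mean()
    return out

rng = np.random.default_rng(0)
canopy = rng.random((40, 40))          # toy raster of a forest variable

# Aggregate the same variable at three nested scales; the cell radii are
# illustrative only (0 ~ stand, 4 ~ home range, 12 ~ regional).
scales = {"stand": 0, "home_range": 4, "regional": 12}
predictors = {name: window_mean(canopy, r) for name, r in scales.items()}
```

Each aggregated raster can then enter an occupancy model as a separate predictor, which is the essence of the multi-grain setup described above.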

    Unified treatment algorithm for the management of crotaline snakebite in the United States: results of an evidence-informed consensus workshop

    Abstract Background: Envenomation by crotaline snakes (rattlesnake, cottonmouth, copperhead) is a complex, potentially lethal condition affecting thousands of people in the United States each year. Treatment of crotaline envenomation is not standardized, and significant variation in practice exists. Methods: A geographically diverse panel of experts was convened to derive an evidence-informed unified treatment algorithm. Research staff analyzed the extant medical literature and performed targeted analyses of existing databases to inform specific clinical decisions. A trained external facilitator used modified Delphi and structured consensus methodology to achieve consensus on the final treatment algorithm. Results: A unified treatment algorithm was produced and endorsed by all nine expert panel members. This algorithm provides guidance on clinical and laboratory observations, indications for and dosing of antivenom, adjunctive therapies, post-stabilization care, and management of complications from envenomation and therapy. Conclusions: Clinical manifestations and ideal treatment of crotaline snakebite differ greatly and can result in severe complications. Using a modified Delphi method, we provide evidence-informed treatment guidelines in an attempt to reduce variation in care and possibly improve clinical outcomes.

    Revisiting QRS detection methodologies for portable, wearable, battery-operated, and wireless ECG systems

    Cardiovascular diseases are the number one cause of death worldwide. Currently, portable battery-operated systems such as mobile phones with wireless ECG sensors have the potential to be used in continuous cardiac function assessment that can be easily integrated into daily life. These portable point-of-care diagnostic systems can therefore help unveil and treat cardiovascular diseases. The basis for ECG analysis is a robust detection of the prominent QRS complex, as well as of other ECG signal characteristics. However, it is not clear from the literature which ECG analysis algorithms are suited to implementation on a mobile device. We investigate current QRS detection algorithms against three assessment criteria: 1) robustness to noise, 2) parameter choice, and 3) numerical efficiency, in order to target a universal, fast, and robust detector. Furthermore, existing QRS detection algorithms may provide an acceptable solution only on small segments of ECG signals, within a certain amplitude range, or amid particular types of arrhythmia and/or noise. These issues are discussed in the context of a comparison with the most conventional algorithms, followed by recommendations for developing reliable QRS detection schemes suitable for implementation on battery-operated mobile devices.
    Mohamed Elgendi, Björn Eskofier, Socrates Dokos, Derek Abbott
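To make the classic detector stages concrete, here is a minimal Pan-Tompkins-style sketch (derivative, squaring, moving-window integration, thresholding) applied to a synthetic spike train. The window lengths, threshold factor, and refractory period are illustrative choices for the toy signal, not values recommended by the survey.

```python
import numpy as np

def detect_qrs(ecg, fs):
    """Minimal Pan-Tompkins-style QRS detector sketch: derivative ->
    squaring -> moving-window integration -> threshold with refractory."""
    diff = np.diff(ecg)                      # emphasize steep QRS slopes
    squared = diff ** 2                      # rectify and amplify
    win = max(1, int(0.15 * fs))             # ~150 ms integration window
    mwi = np.convolve(squared, np.ones(win) / win, mode="same")
    thresh = 0.5 * mwi.max()                 # crude fixed threshold
    refractory = int(0.2 * fs)               # ignore peaks within 200 ms
    peaks, last = [], -refractory
    for i in range(1, len(mwi) - 1):
        if mwi[i] > thresh and mwi[i] >= mwi[i - 1] and mwi[i] > mwi[i + 1]:
            if i - last >= refractory:
                peaks.append(i)
                last = i
    return peaks

# Synthetic "ECG": one spike per second on a noisy baseline at fs = 250 Hz.
fs = 250
ecg = 0.05 * np.random.default_rng(1).standard_normal(10 * fs)
ecg[fs::fs] += 1.0                           # 9 interior "QRS" spikes
beats = detect_qrs(ecg, fs)
```

A real implementation would use bandpass filtering and adaptive thresholds; the fixed threshold here is exactly the kind of parameter choice the survey's second criterion scrutinizes.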

    A scale space approach for estimating the characteristic feature sizes in hierarchical signals

    No full text
    Abstract The temporal and spatial data analysed in, for example, ecology or climatology are often hierarchically structured, carrying information at different scales. An important goal of data analysis is then to decompose the observed signal into distinctive hierarchical levels and to determine the size of the features that each level represents. Using differences of smooths, scale space multiresolution analysis decomposes a signal into additive components associated with the different levels of scale present in the data. The smoothing levels used to compute the differences are determined by the local minima of the norm of the so‐called scale‐derivative of the signal. While this procedure accomplishes the first goal, the hierarchical decomposition of the signal, it does not achieve the second goal, the determination of the actual size of the features corresponding to each hierarchical level. Here, we show that the maximum of the scale‐derivative norm of an extracted hierarchical component can be used to estimate its characteristic feature size. The feasibility of the method is demonstrated on an artificial image and on a time series of a drought index based on climate reconstructions from long tree-ring chronologies.
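The idea of differences of smooths can be sketched in a few lines: smooth the signal at increasing levels, take differences between successive smooths as the hierarchical components, and keep the coarsest smooth as the residual level. The smoothing levels below are arbitrary illustrative choices, and a plain Gaussian filter stands in for whatever smoother the method actually employs.

```python
import numpy as np

def gaussian_smooth(x, sigma):
    """Smooth a 1-D signal by convolution with a truncated Gaussian kernel."""
    if sigma == 0:
        return x.astype(float)
    half = int(4 * sigma)
    t = np.arange(-half, half + 1)
    k = np.exp(-t**2 / (2 * sigma**2))
    k /= k.sum()
    xp = np.pad(x, half, mode="reflect")   # reflect so edges stay defined
    return np.convolve(xp, k, mode="same")[half:-half]

# Signal with two hierarchical levels: a slow wave plus a fast ripple.
n = 512
t = np.linspace(0, 1, n)
signal = np.sin(2 * np.pi * 2 * t) + 0.3 * np.sin(2 * np.pi * 40 * t)

# Differences of smooths: components between successive smoothing levels.
sigmas = [0, 4, 64]                        # illustrative smoothing levels
smooths = [gaussian_smooth(signal, s) for s in sigmas]
components = [smooths[i] - smooths[i + 1] for i in range(len(sigmas) - 1)]
components.append(smooths[-1])             # coarsest residual component

recon = sum(components)                    # telescoping sum = original signal
```

By construction the components add back to the original signal; the paper's contribution concerns how to read off each component's characteristic feature size from the scale-derivative norm, which this sketch does not attempt.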

    Effect of centralization on geographic accessibility of maternity hospitals in Finland

    No full text
    Abstract Background: In the past two decades, the number of maternity hospitals in Finland has been reduced from 42 to 22. Notwithstanding the benefits of centralization for larger units in terms of increased safety, the closures will inevitably impair the geographical accessibility of services. Methods: This study aimed to employ a set of location-allocation methods to assess the potential impact on accessibility, should the number of maternity hospitals be reduced from 22 to 16. Accurate population grid data combined with road network and hospital facilities data are analyzed with three different location-allocation methods: straight, sequential, and capacitated p-median. Results: Depending on the method used to assess the impact of further reduction in the number of maternity hospitals, 0.6 to 2.7% of mothers would have more than a two-hour travel time to the nearest maternity hospital, while the corresponding figure is 0.5% in the current situation. The analyses highlight the areas where the number of births is low but a maternity hospital is still important in terms of accessibility, and the areas where even one unit would be enough to handle a considerable volume of births. Conclusions: Even if the reduction in the number of hospitals might not drastically harm accessibility at the level of the entire population, considerable changes in accessibility can occur for clients living close to a maternity hospital facing closure. As different location-allocation analyses can result in different configurations of hospitals, decision-makers should be aware of their differences to ensure adequate accessibility for clients, especially in remote, sparsely populated areas.
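To illustrate what a p-median location-allocation computes, here is a tiny brute-force sketch: choose p sites that minimize the demand-weighted travel time from each client area to its nearest open site. The areas, hospitals, travel times, and birth counts are all invented for the example; the study's actual methods (straight, sequential, capacitated variants) add further constraints.

```python
from itertools import combinations

def p_median(demand, candidates, p, dist):
    """Exhaustive p-median: choose `p` candidate sites minimizing the
    demand-weighted travel time to each client's nearest open site."""
    best_sites, best_cost = None, float("inf")
    for sites in combinations(candidates, p):
        cost = sum(w * min(dist[c][s] for s in sites)
                   for c, w in demand.items())
        if cost < best_cost:
            best_sites, best_cost = sites, cost
    return best_sites, best_cost

# Toy setting: three population centres, three candidate hospitals,
# travel times in minutes (all names and numbers are made up).
demand = {"A": 500, "B": 120, "C": 60}            # births per area
dist = {
    "A": {"H1": 10, "H2": 60, "H3": 90},
    "B": {"H1": 70, "H2": 15, "H3": 50},
    "C": {"H1": 120, "H2": 80, "H3": 20},
}
sites, cost = p_median(demand, ["H1", "H2", "H3"], p=2, dist=dist)
```

Exhaustive search is only viable for tiny instances like this one; real analyses use heuristics or integer programming, but the objective is the same.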

    Scaling up an edge server deployment

    No full text
    Abstract In this article, we study the scaling up of edge computing deployments. In edge computing, deployments are scaled up by adding computational capacity atop the initial deployment as deployment budgets allow. However, without careful consideration, adding new servers may not improve proximity to the mobile users, which is crucial for the users' Quality of Experience and the network operators' Quality of Service. We propose a novel method for scaling up an edge computing deployment by selecting the optimal number of new edge servers and their placement, and by re-allocating access points optimally between the old and new edge servers. The algorithm is evaluated in two scenarios, using data on a real-world large-scale wireless network deployment. The evaluation shows that the proposed method is stable on a real city-scale deployment, resulting in optimized Quality of Service for the network operator.
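A minimal sketch of the core decision when scaling up, under the simplifying assumption that one server is added at a time: score each candidate location by how much it reduces the total access-point-to-nearest-server distance. This greedy one-step rule only illustrates the problem; it is not the paper's actual method.

```python
def best_new_server(aps, servers, candidates, dist):
    """Pick the candidate location for one additional edge server that
    most reduces the total access-point-to-nearest-server distance."""
    def total(srvs):
        return sum(min(dist[a][s] for s in srvs) for a in aps)
    base = total(servers)
    best, best_gain = None, 0
    for c in candidates:
        gain = base - total(servers + [c])
        if gain > best_gain:
            best, best_gain = c, gain
    return best, best_gain

# Toy data: three access points, one existing server S1, two candidate
# sites S2 and S3 (all distances are made up).
aps = ["AP1", "AP2", "AP3"]
dist = {
    "AP1": {"S1": 1, "S2": 9, "S3": 5},
    "AP2": {"S1": 8, "S2": 2, "S3": 6},
    "AP3": {"S1": 7, "S2": 3, "S3": 4},
}
choice, gain = best_new_server(aps, ["S1"], ["S2", "S3"], dist)
```

Note that re-allocation happens implicitly here: every access point is reassigned to whichever server (old or new) is now nearest.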

    Edge computing server placement with capacitated location allocation

    No full text
    Abstract The deployment of edge computing infrastructure requires careful placement of the edge servers, with the aim of improving application latencies and reducing data transfer load in opportunistic Internet of Things systems. In edge server placement, it is important to consider computing capacity, the available deployment budget, hardware requirements for the edge servers, and the underlying backbone network topology. In this paper, we thoroughly survey the existing literature on edge server placement, identify gaps, and present an extensive set of parameters to be considered. We then develop a novel algorithm, called PACK, which treats server placement as a capacitated location-allocation problem. PACK minimizes the distances between servers and their associated access points while taking into account capacity constraints for load balancing and enabling workload sharing between servers. Moreover, PACK considers practical issues such as prioritized locations and reliability. We evaluate the algorithm in two distinct scenarios: one with high-capacity servers for edge computing in general, and one with low-capacity servers for fog computing. Evaluations are performed on a data set collected in a real-world network, consisting of both dense and sparse deployments of access points across a city area. The resulting algorithm and related tools are publicly available as open source software.
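As a rough sketch of what a capacity constraint does to location-allocation, the snippet below assigns access points to their nearest server that still has spare capacity, processing the points closest to any server first. This greedy heuristic merely illustrates the constraint; it is not the PACK algorithm itself.

```python
def capacitated_assign(aps, servers, capacity, dist):
    """Greedy capacitated allocation sketch: process access points in
    order of distance to their closest server, assigning each to the
    nearest server with spare capacity."""
    load = {s: 0 for s in servers}
    order = sorted(aps, key=lambda a: min(dist[a][s] for s in servers))
    assignment = {}
    for a in order:
        for s in sorted(servers, key=lambda s: dist[a][s]):
            if load[s] < capacity[s]:
                assignment[a] = s
                load[s] += 1
                break
    return assignment

# Toy data: four access points, two servers, capacity two each
# (all names, distances, and capacities are made up).
aps = ["a1", "a2", "a3", "a4"]
servers = ["S1", "S2"]
capacity = {"S1": 2, "S2": 2}
dist = {
    "a1": {"S1": 1, "S2": 9},
    "a2": {"S1": 2, "S2": 8},
    "a3": {"S1": 3, "S2": 4},
    "a4": {"S1": 1, "S2": 7},
}
assignment = capacitated_assign(aps, servers, capacity, dist)
```

Without the capacity check every point here would go to S1; with it, the overflow (a2 and a3) spills to S2, which is the load-balancing effect described above.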

    EDISON: an edge-native method and architecture for distributed interpolation

    No full text
    Abstract Spatio-temporal interpolation provides estimates of observations at unobserved locations and time slots. In smart cities, interpolation helps to provide a fine-grained contextual and situational understanding of the urban environment, in terms of both short-term (e.g., weather, air quality, traffic) and long-term (e.g., crime, demographics) spatio-temporal phenomena. Various initiatives improve spatio-temporal interpolation results by including additional data sources such as vehicle-fitted sensors, mobile phones, or the micro weather stations of, for example, smart homes. However, the underlying computing paradigm in such initiatives is predominantly centralized, with all data collected and analyzed in the cloud. This solution does not scale: as the spatial and temporal density of sensor data grows, the required transmission bandwidth and computational capacity become unfeasible. To address this scaling problem, we propose EDISON: algorithms for distributed learning and inference, and an edge-native architecture for distributing spatio-temporal interpolation models, their computations, and the observed data vertically and horizontally between the device, edge, and cloud layers. We demonstrate EDISON's functionality in a controlled, simulated spatio-temporal setup with one million artificial data points. While the main motivation of EDISON is the distribution of heavy computations, the results show that it also improves on alternative approaches, reaching at best a 10% smaller RMSE than global interpolation and a 6% smaller RMSE than a baseline distributed approach.
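As a toy illustration of local versus global interpolation, the sketch below uses inverse-distance weighting (IDW) on a synthetic field and restricts the "edge" estimate to observations from the query's own quadrant. IDW, the quadrant partitioning, and all numbers are stand-ins chosen for brevity; EDISON's actual models and vertical/horizontal distribution are considerably more involved.

```python
import numpy as np

def idw(query, points, values, power=2.0):
    """Inverse-distance-weighted estimate at one query location."""
    d = np.linalg.norm(points - query, axis=1)
    if np.any(d == 0):
        return values[np.argmin(d)]        # exact hit: return observation
    w = 1.0 / d**power
    return float(np.sum(w * values) / np.sum(w))

rng = np.random.default_rng(2)
pts = rng.random((200, 2))                                # sensor locations
vals = np.sin(3 * pts[:, 0]) + np.cos(3 * pts[:, 1])      # smooth toy field

# "Edge" partitioning: each quadrant interpolates only from its own
# observations, a crude stand-in for distributing data and computation.
def quadrant(p):
    return (p[0] >= 0.5) * 2 + (p[1] >= 0.5)

query = np.array([0.25, 0.75])
local_mask = np.array([quadrant(p) == quadrant(query) for p in pts])
local_est = idw(query, pts[local_mask], vals[local_mask])
global_est = idw(query, pts, vals)
```

The local estimate touches only a quarter of the data, which is the bandwidth and compute saving that motivates pushing interpolation to the edge.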