
    Comparing Commonly Used Aquatic Habitat Modeling Methods for Native Fishes

    Water resources are managed for a variety of human needs, including agriculture, industrial and municipal consumption, hydropower generation, and recreation. There has been a recent push to incorporate habitat needs of aquatic wildlife into water management models alongside these other uses, particularly as competition for limited water resources in a changing climate has reduced instream flow and contributed to declining native fish populations. Habitat models are used to estimate species distributions and differentiate between suitable and unsuitable habitat based on variables important to a given species, but are not usually incorporated into water management models. Because there are many ways of modeling habitat and no standard way to compare model accuracy, for this research I used three methods of comparing the accuracy of three commonly used habitat modeling approaches to identify the best methods for estimating Bonneville Cutthroat Trout and Bluehead Sucker habitat in the Bear River Watershed (UT, ID, WY). I also explored how well variables used in making each model’s predictions compared with real-world conditions based on field observations. I determined that total upstream catchment area was the most important large-scale variable for predicting both Bonneville Cutthroat Trout and Bluehead Sucker habitat suitability, and that nearby land use was also important for Bonneville Cutthroat Trout. I showed that none of the models’ variables reflected real-world conditions observed in summer 2022, which suggests that data commonly used to build habitat models like these can be outdated, incorrect, or over-simplified. Finally, I determined that simple habitat models which incorporated aspects of water quality or species biology, rather than simply available water quantity, best predicted both Bonneville Cutthroat Trout and Bluehead Sucker presence, though the performance metrics chosen to evaluate model accuracy influenced the results. Simpler methods that incorporate species-specific biological criteria are best to include in water management models, so that fish conservation can be easily and accurately included as a demand for water resources alongside other uses.
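
    The abstract does not name the accuracy-comparison methods or performance metrics used; the sketch below only illustrates one common way presence/absence habitat models are compared, using threshold-based metrics (sensitivity, specificity, and the True Skill Statistic). All observations, predictions, and model names are hypothetical.

```python
# Minimal sketch: comparing presence/absence habitat models with several metrics.
# All predictions and observations are invented; the thesis's actual models and
# comparison methods may differ.
import numpy as np

def confusion_counts(observed, predicted):
    """Return true/false positives and negatives for binary presence/absence data."""
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    tp = np.sum((observed == 1) & (predicted == 1))
    tn = np.sum((observed == 0) & (predicted == 0))
    fp = np.sum((observed == 0) & (predicted == 1))
    fn = np.sum((observed == 1) & (predicted == 0))
    return tp, tn, fp, fn

def evaluate(observed, predicted):
    tp, tn, fp, fn = confusion_counts(observed, predicted)
    sensitivity = tp / (tp + fn)          # occupied sites correctly predicted suitable
    specificity = tn / (tn + fp)          # unoccupied sites correctly predicted unsuitable
    tss = sensitivity + specificity - 1   # True Skill Statistic
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "TSS": tss, "accuracy": accuracy}

# Hypothetical field observations (1 = fish present) and three models' predictions.
observed = np.array([1, 1, 0, 0, 1, 0, 1, 0, 0, 1])
models = {
    "water-quantity model": np.array([1, 1, 1, 1, 1, 0, 1, 1, 0, 1]),
    "water-quality model":  np.array([1, 1, 0, 0, 1, 0, 1, 0, 1, 1]),
    "biology-based model":  np.array([1, 1, 0, 0, 1, 0, 1, 0, 0, 0]),
}
for name, pred in models.items():
    print(name, evaluate(observed, pred))
```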

    What makes it so hard to look and to listen? Exploring the use of the Cognitive and Affective Supervisory Approach with children’s social work managers

    This paper reports on the findings of an ESRC-funded Knowledge Exchange project designed to explore the contribution of an innovative approach to supervision to social work practitioners’ assessment and decision-making practices. The Cognitive and Affective Supervisory Approach (CASA) is informed by cognitive interviewing techniques originally designed to elicit best evidence from witnesses and victims of crime. Adapted here for use in childcare social work supervision contexts, this model is designed to enhance the quantity and quality of information available for decision-making. Facilitating the reporting of both ‘event information’ and ‘emotion information’, it allows a more detailed picture to emerge of events, as recalled by the individual involved, and the meaning they give to them. Practice supervisors from Children’s Services in two local authorities undertook to introduce the CASA into supervision sessions and were supported in this through the provision of regular reflective group discussions. The project findings highlight the challenges for practitioners of ‘detailed looking’ and for supervisors of ‘active listening’. The paper concludes by acknowledging that the CASA’s successful contribution to decision-making is contingent on both the motivation and confidence of supervisors to develop their skills and an organisational commitment to, and resourcing of, reflective supervisory practices and spaces.

    CSNL: A cost-sensitive non-linear decision tree algorithm

    This article presents a new decision tree learning algorithm called CSNL that induces Cost-Sensitive Non-Linear decision trees. The algorithm is based on the hypothesis that non-linear decision nodes provide a better basis than axis-parallel decision nodes, and it utilizes discriminant analysis to construct non-linear decision trees that take account of the costs of misclassification. The performance of the algorithm is evaluated by applying it to seventeen datasets, and the results are compared with those obtained by two well-known cost-sensitive algorithms, ICET and MetaCost, which generate multiple trees to obtain some of the best results to date. The results show that CSNL performs at least as well as, if not better than, these algorithms on more than twelve of the datasets and is considerably faster. The use of bagging with CSNL further enhances its performance, showing the significant benefits of using non-linear decision nodes.
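
    The abstract does not give implementation details, so the following is only a rough sketch of the general idea behind a cost-sensitive, non-linear discriminant split: a quadratic discriminant score is computed at the node and the split threshold is chosen to minimise training misclassification cost. The cost-handling scheme and function names are illustrative assumptions, not the published CSNL procedure.

```python
# Illustrative sketch of a cost-sensitive, non-linear (quadratic discriminant) split.
# This is NOT the published CSNL algorithm: here costs are handled by choosing the
# discriminant-score threshold that minimises training misclassification cost.
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

def cost_sensitive_split(X, y, cost_fn=5.0, cost_fp=1.0):
    """Fit a quadratic discriminant and pick the score threshold with minimum cost.

    cost_fn: cost of classifying a class-1 example as class 0 (false negative)
    cost_fp: cost of classifying a class-0 example as class 1 (false positive)
    """
    qda = QuadraticDiscriminantAnalysis().fit(X, y)
    scores = qda.decision_function(X)          # non-linear (quadratic) decision surface
    best_t, best_cost = None, np.inf
    for t in np.unique(scores):
        pred = (scores >= t).astype(int)
        fn = np.sum((y == 1) & (pred == 0))
        fp = np.sum((y == 0) & (pred == 1))
        cost = cost_fn * fn + cost_fp * fp
        if cost < best_cost:
            best_t, best_cost = t, cost
    return qda, best_t, best_cost

# Tiny synthetic node: class 1 is expensive to miss, so the threshold shifts toward it.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (60, 2)), rng.normal(1.5, 1.0, (40, 2))])
y = np.array([0] * 60 + [1] * 40)
qda, threshold, cost = cost_sensitive_split(X, y)
print("chosen threshold:", threshold, "training cost:", cost)
```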

    A Cross-Lingual Similarity Measure for Detecting Biomedical Term Translations

    Bilingual dictionaries for technical terms such as biomedical terms are an important resource for machine translation systems as well as for humans who would like to understand a concept described in a foreign language. Often a biomedical term is first proposed in English and later it is manually translated to other languages. Despite the fact that there are large monolingual lexicons of biomedical terms, only a fraction of those term lexicons are translated to other languages. Manually compiling large-scale bilingual dictionaries for technical domains is a challenging task because it is difficult to find a sufficiently large number of bilingual experts. We propose a cross-lingual similarity measure for detecting the most similar translation candidates for a biomedical term specified in one language (source) from another language (target). Specifically, a biomedical term in a language is represented using two types of features: (a) intrinsic features that consist of character n-grams extracted from the term under consideration, and (b) extrinsic features that consist of unigrams and bigrams extracted from the contextual windows surrounding the term under consideration. We propose a cross-lingual similarity measure using each of those feature types. First, to reduce the dimensionality of the feature space in each language, we propose prototype vector projection (PVP), a non-negative lower-dimensional vector projection method. Second, we propose a method to learn a mapping between the feature spaces in the source and target languages using partial least squares regression (PLSR). The proposed method requires only a small number of training instances to learn a cross-lingual similarity measure. The proposed PVP method outperforms popular dimensionality reduction methods such as singular value decomposition (SVD) and non-negative matrix factorization (NMF) in a nearest-neighbor prediction task. Moreover, our experimental results covering several language pairs such as English–French, English–Spanish, English–Greek, and English–Japanese show that the proposed method outperforms several other feature projection methods in biomedical term translation prediction tasks.
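
    The measure itself is not reproduced in the abstract; the sketch below shows, under simplified assumptions, how a mapping between source- and target-language character n-gram spaces can be learned with partial least squares regression (scikit-learn's PLSRegression) and used to rank candidate translations by cosine similarity. The seed dictionary and candidate terms are invented, and the paper's PVP projection and contextual (extrinsic) features are omitted.

```python
# Rough sketch: learn a mapping between source- and target-language term
# representations with partial least squares regression (PLSR), then rank
# translation candidates by cosine similarity in the target feature space.
# Toy data only; PVP and the extrinsic context features are not shown.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical seed dictionary of English-French term pairs (training instances).
en_terms = ["hepatitis", "dermatitis", "gastritis", "cardiology", "neurology"]
fr_terms = ["hépatite", "dermatite", "gastrite", "cardiologie", "neurologie"]

# Intrinsic features: character n-grams of each term, per language.
vec_en = CountVectorizer(analyzer="char_wb", ngram_range=(2, 3))
vec_fr = CountVectorizer(analyzer="char_wb", ngram_range=(2, 3))
X_en = vec_en.fit_transform(en_terms).toarray()
X_fr = vec_fr.fit_transform(fr_terms).toarray()

# Learn the cross-lingual mapping from English features to French features.
pls = PLSRegression(n_components=2)
pls.fit(X_en, X_fr)

# Rank candidate French terms for a new English term.
query = vec_en.transform(["nephritis"]).toarray()
candidates = ["néphrite", "cardiologie", "hépatite"]
C = vec_fr.transform(candidates).toarray()
mapped = pls.predict(query)                      # projected into the French feature space
ranking = cosine_similarity(mapped, C)[0]
print(sorted(zip(candidates, ranking), key=lambda p: -p[1]))
```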

    Drivers of abrupt Holocene shifts in West Antarctic ice stream direction from combined ice sheet modelling and geologic signatures

    Determining the millennial-scale behaviour of marine-based sectors of the West Antarctic Ice Sheet (WAIS) is critical to improve predictions of the future contribution of Antarctica to sea level rise. Here high-resolution ice sheet modelling was combined with new terrestrial geological constraints (in situ 14C and 10Be analyses) to reconstruct the evolution of two major ice streams entering the Weddell Sea over 20 000 years. The results demonstrate how marked differences in ice flux at the marine margin of the expanded Antarctic ice sheet led to a major reorganization of ice streams in the Weddell Sea during the last deglaciation, resulting in the eastward migration of the Institute Ice Stream and triggering a significant regional change in ice sheet mass balance during the early to mid-Holocene. The findings highlight how spatial variability in ice flow can cause marked changes in the pattern, flux and flow direction of ice streams on millennial timescales in this marine ice sheet setting. Given that this sector of the WAIS is assumed to be sensitive to ocean-forced instability and may be influenced by predicted twenty-first century ocean warming, our ability to model and predict abrupt and extensive ice stream diversions is key to a realistic assessment of future ice sheet sensitivity.

    Inducing safer oblique trees without costs

    Decision tree induction has been widely studied and applied. In safety applications, such as determining whether a chemical process is safe or whether a person has a medical condition, the cost of misclassification in one of the classes is significantly higher than in the other class. Several authors have tackled this problem by developing cost-sensitive decision tree learning algorithms or have suggested ways of changing the distribution of training examples to bias the decision tree learning process so as to take account of costs. A prerequisite for applying such algorithms is the availability of costs of misclassification. Although this may be possible for some applications, obtaining reasonable estimates of costs of misclassification is not easy in the area of safety. This paper presents a new algorithm for applications where the cost of misclassification cannot be quantified, although the cost of misclassification in one class is known to be significantly higher than in another class. The algorithm utilizes linear discriminant analysis to identify oblique relationships between continuous attributes and then carries out an appropriate modification to ensure that the resulting tree errs on the side of safety. The algorithm is evaluated with respect to one of the best-known cost-sensitive algorithms (ICET), a well-known oblique decision tree algorithm (OC1), and an algorithm that utilizes robust linear programming.
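
    The "appropriate modification" is not described in the abstract; the following is one plausible illustration of a safety-biased oblique split, where a linear-discriminant threshold is shifted so that every high-risk training example is routed to the branch treated as unsafe. The function names and data are hypothetical, not the paper's algorithm.

```python
# Sketch of a safety-biased oblique split: fit a linear discriminant, then move the
# split threshold so that, on the training data, no example of the high-risk class
# ends up on the "safe" side of the node.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def safe_oblique_split(X, y, unsafe_label=1):
    """Return an LDA projection and a threshold that errs on the side of safety."""
    lda = LinearDiscriminantAnalysis().fit(X, y)
    scores = lda.decision_function(X)            # oblique combination of attributes
    unsafe_scores = scores[y == unsafe_label]
    # Place the threshold just below the lowest-scoring unsafe example, so every
    # unsafe training case is routed to the "treat as unsafe" branch.
    threshold = unsafe_scores.min() - 1e-9
    return lda, threshold

def route(lda, threshold, X_new):
    """1 = follow the 'potentially unsafe' branch, 0 = 'safe' branch."""
    return (lda.decision_function(X_new) >= threshold).astype(int)

# Hypothetical process-monitoring data: two continuous attributes, 1 = unsafe state.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (40, 2)), rng.normal(2.0, 1.0, (10, 2))])
y = np.array([0] * 40 + [1] * 10)
lda, t = safe_oblique_split(X, y)
print("unsafe-branch rate on training data:", route(lda, t, X).mean())
```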

    Redating the earliest evidence of the mid-Holocene relative sea-level highstand in Australia and implications for global sea-level rise.

    Reconstructing past sea levels can help constrain uncertainties surrounding the rate of change, magnitude, and impacts of the projected increase through the 21st century. Of significance is the mid-Holocene relative sea-level highstand in tectonically stable locations remote (far-field) from major ice sheets. The east coast of Australia provides an excellent arena in which to investigate changes in relative sea level during the Holocene. Considerable debate surrounds both the peak level and timing of the east coast highstand. The southeast Australian site of Bulli Beach provides the earliest evidence for the establishment of a highstand in the Southern Hemisphere, although questions have been raised about the pretreatment and type of material that was radiocarbon dated for the development of the regional sea-level curve. Here we undertake a detailed morpho- and chronostratigraphic study at Bulli Beach to better constrain the timing of the Holocene highstand in eastern Australia. In contrast to wood and charcoal samples that may provide anomalously old ages, probably due to inbuilt age, we find that short-lived terrestrial plant macrofossils provide a robust chronological framework. Bayesian modelling of the ages provides improved dating of the earliest evidence for a highstand at 6,880±50 cal BP, approximately a millennium later than previously reported. Our results from Bulli now closely align with other sea-level reconstructions along the east coast of Australia, and provide evidence for a synchronous relative sea-level highstand that extends from the Gulf of Carpentaria to Tasmania. Our refined age appears to be coincident with major ice mass loss from Northern Hemisphere and Antarctic ice sheets, supporting previous studies that suggest these may have played a role in the relative sea-level highstand. Further work is now needed to investigate the environmental impacts of regional sea levels, and to refine the timing of the subsequent sea-level fall in the Holocene and its influence on coastal evolution.

    Linguistic and statistically derived features for cause of death prediction from verbal autopsy text

    Automatic Text Classification (ATC) is an emerging technology with economic importance given the unprecedented growth of text data. This paper reports on work in progress to develop methods for predicting Cause of Death from Verbal Autopsy (VA) documents, which the World Health Organisation recommends for use in low-income countries. VA documents contain both coded data and open narrative. The task is formulated as a Text Classification problem, and various combinations of linguistic and statistical approaches are explored to determine how these may improve on the standard bag-of-words approach, using a dataset of over 6400 VA documents that were manually annotated with cause of death. We demonstrate that a significant improvement in prediction accuracy can be obtained through a novel combination of statistical and linguistic features derived from the VA text. The paper explores the methods by which ATC may lead to improved accuracy in Cause of Death prediction.
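
    The specific linguistic and statistical features are not listed in the abstract; the sketch below only illustrates the general pattern of combining feature types on top of a bag-of-words baseline, using scikit-learn's FeatureUnion with word and character TF-IDF features. The narratives, labels, and feature choices are invented stand-ins for the paper's data and features.

```python
# Minimal sketch of combining feature types for text classification, in the spirit of
# augmenting a bag-of-words baseline. Word TF-IDF plus character n-grams are stand-ins
# for the paper's features; the narratives and cause labels are invented examples.
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

narratives = [
    "patient had fever and severe cough for two weeks",
    "sudden chest pain and shortness of breath before death",
    "prolonged diarrhoea and vomiting, unable to keep fluids down",
    "high fever with convulsions in a young child",
]
causes = ["respiratory infection", "cardiac event",
          "gastrointestinal illness", "respiratory infection"]

features = FeatureUnion([
    ("bow", TfidfVectorizer(ngram_range=(1, 2))),                        # bag-of-words baseline
    ("char", TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))),   # sub-word cues
])
model = Pipeline([("features", features),
                  ("clf", LogisticRegression(max_iter=1000))])
model.fit(narratives, causes)
print(model.predict(["child with fever and cough"]))
```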

    Incremental dimension reduction of tensors with random index

    We present an incremental, scalable and efficient dimension reduction technique for tensors that is based on sparse random linear coding. Data is stored in a compactified representation with fixed size, which makes memory requirements low and predictable. Component encoding and decoding are performed on-line without computationally expensive re-analysis of the data set. The range of tensor indices can be extended dynamically without modifying the component representation. This idea originates from a mathematical model of semantic memory and a method known as random indexing in natural language processing. We generalize the random-indexing algorithm to tensors and present signal-to-noise-ratio simulations for representations of vectors and matrices. We also present a mathematical analysis of the approximate orthogonality of high-dimensional ternary vectors, which is a property that underpins this and other similar random-coding approaches to dimension reduction. To further demonstrate the properties of random indexing, we present results of a synonym identification task. The method presented here has some similarities with random projection and Tucker decomposition, but it performs well only at high dimensionality (n > 10^3). Random indexing is useful for a range of complex practical problems, e.g., in natural language processing, data mining, pattern recognition, event detection, graph searching and search engines. Prototype software is provided; it supports encoding and decoding of tensors of order >= 1 in a unified framework, i.e., vectors, matrices and higher-order tensors.
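
    As a concrete illustration of the order-1 (vector) case only: each vocabulary item is assigned a sparse ternary index vector of fixed dimension, and context vectors are accumulated incrementally from co-occurrences. The dimension, sparsity, and window size below are arbitrary choices, and the paper's tensor generalization and decoding procedure are not shown.

```python
# Minimal sketch of random indexing for vectors (order-1 case): each vocabulary item
# gets a sparse ternary index vector of fixed size, and a word's context vector is
# the sum of the index vectors of words it co-occurs with. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(42)
DIM, NONZERO = 1000, 10        # fixed representation size, number of +/-1 entries

def index_vector():
    """Sparse ternary vector: NONZERO randomly placed +1/-1 entries, rest zero."""
    v = np.zeros(DIM)
    pos = rng.choice(DIM, size=NONZERO, replace=False)
    v[pos] = rng.choice([-1.0, 1.0], size=NONZERO)
    return v

corpus = "random indexing builds fixed size context vectors incrementally".split()
index = {w: index_vector() for w in set(corpus)}      # static index vectors
context = {w: np.zeros(DIM) for w in set(corpus)}     # accumulated on-line

window = 2
for i, w in enumerate(corpus):                        # incremental encoding pass
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if j != i:
            context[w] += index[corpus[j]]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

print(cosine(context["random"], context["indexing"]))
```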