
    Robustness of reserve selection procedures under temporal species turnover

    Complementarity-based algorithms for the selection of reserve networks emphasize the need to represent biodiversity features efficiently, but this may not be sufficient to maintain those features in the long term. Here, we use data from the Common Birds Census in Britain as an exemplar data set to determine guidelines for the selection of reserve networks that are more robust to temporal turnover in features. The extinction patterns found over the 1981-1991 interval suggest two such guidelines: represent species in the best sites where they occur (higher local abundance), and give priority to the rarer species. We tested five reserve selection strategies, one that finds the minimum representation set and others that incorporate the first or both of the proposed guidelines. Strategies were tested in terms of their efficiency (inversely related to the total area selected) and effectiveness (inversely related to the percentage of species lost) using data on eight pairs of ten-year intervals. The minimum set strategy was always the most efficient, but suffered higher species loss than the others, suggesting a trade-off between efficiency and effectiveness. A desirable compromise can be achieved by embedding concerns about the long-term maintenance of the biodiversity features of interest in the complementarity-based algorithms.
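
    As a rough illustration of the kind of procedures being compared (a sketch, not the authors' code), the Python snippet below contrasts a plain complementarity-based greedy minimum-set selection with a variant that applies the two guidelines: rarer species are handled first and each species is represented in its best (highest-abundance) site. The site names and abundances are invented for the example.

def greedy_minimum_set(sites):
    """sites: dict of site -> {species: abundance}; greedy complementarity-based set cover."""
    unrepresented = set()
    for species_abundance in sites.values():
        unrepresented |= set(species_abundance)
    chosen = []
    while unrepresented:
        # add the site that covers the most still-unrepresented species
        best = max(sites, key=lambda s: len(unrepresented & set(sites[s])))
        chosen.append(best)
        unrepresented -= set(sites[best])
    return chosen

def robust_selection(sites):
    """Variant applying the two guidelines: rarest species first, each in its best site."""
    occurrences = {}
    for site, species_abundance in sites.items():
        for species, abundance in species_abundance.items():
            occurrences.setdefault(species, []).append((abundance, site))
    chosen = []
    # species occupying fewer sites (the rarer ones) are dealt with first
    for species in sorted(occurrences, key=lambda sp: len(occurrences[sp])):
        if not any(species in sites[s] for s in chosen):
            # represent the species in the site where it is most abundant
            chosen.append(max(occurrences[species])[1])
    return chosen

# hypothetical census data
sites = {
    "site_A": {"skylark": 12, "wren": 3},
    "site_B": {"skylark": 2, "robin": 8},
    "site_C": {"wren": 9, "robin": 1},
}
print(greedy_minimum_set(sites))   # minimum-set strategy
print(robust_selection(sites))     # guideline-based strategy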

    An alternative solution to the model structure selection problem

    An alternative solution to the model structure selection problem is introduced: a forward search is first conducted through the many possible candidate model terms, and an exhaustive all-subset model selection is then performed on the resulting reduced model. An example is included to demonstrate that this approach leads to dynamically valid nonlinear models.
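
    A minimal sketch of this two-stage idea (not the paper's implementation): a greedy forward search shortlists candidate terms, then an exhaustive all-subset search is run over the shortlist only. The least-squares fit, the complexity penalty and the synthetic regressors are placeholder assumptions.

from itertools import combinations
import numpy as np

def residual_ss(Xs, y):
    """Residual sum of squares of a least-squares fit on the chosen columns."""
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    return float(np.sum((y - Xs @ beta) ** 2))

def forward_search(X, y, max_terms):
    """Stage 1: greedily add the candidate column that most reduces residual error."""
    selected = []
    for _ in range(max_terms):
        remaining = [j for j in range(X.shape[1]) if j not in selected]
        errors = {j: residual_ss(X[:, selected + [j]], y) for j in remaining}
        selected.append(min(errors, key=errors.get))
    return selected

def exhaustive_subset(X, y, candidates, penalty=2.0):
    """Stage 2: all-subset search over the reduced candidate set with a simple complexity penalty."""
    best, best_score = None, np.inf
    for k in range(1, len(candidates) + 1):
        for subset in combinations(candidates, k):
            score = residual_ss(X[:, list(subset)], y) + penalty * k
            if score < best_score:
                best, best_score = subset, score
    return best

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                          # hypothetical candidate terms
y = 2.0 * X[:, 1] - 1.5 * X[:, 4] + 0.1 * rng.normal(size=200)
shortlist = forward_search(X, y, max_terms=5)
print(exhaustive_subset(X, y, shortlist))               # expected to recover columns 1 and 4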

    A methodology for the selection of new technologies in the aviation industry

    The purpose of this report is to present a technology selection methodology that quantifies both tangible and intangible benefits of candidate technology alternatives within a fuzzy environment. Specifically, it describes an application of the theory of fuzzy sets to hierarchical structural analysis and economic evaluation for use in the aviation industry. The report proposes a complete methodology to select new technologies accurately, and a computer-based prototype model has been developed to handle the more complex fuzzy calculations. Decision-makers are only required to express their opinions on the comparative importance of various factors in linguistic terms rather than exact numerical values. These linguistic variable scales, such as ‘very high’, ‘high’, ‘medium’, ‘low’ and ‘very low’, are then converted into fuzzy numbers, since it is more meaningful to quantify a subjective measurement as a range rather than as an exact value. By aggregating the hierarchy, the preferential weight of each alternative technology is found; this is called the fuzzy appropriate index. The fuzzy appropriate indices of the different technologies are then ranked to give a preferential ordering of the technologies. From the economic evaluation perspective, a fuzzy cash flow analysis is employed. This deals quantitatively with imprecision and uncertainty, as cash flows are modelled as triangular fuzzy numbers representing ‘the most likely value’, ‘the most pessimistic value’ and ‘the most optimistic value’. By using this methodology, the ambiguities involved in the assessment data can be effectively represented and processed to assure a more convincing and effective decision-making process when selecting new technologies in which to invest. The prototype model was validated with a case study within the aviation industry that ensured it was properly configured to meet the
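
    A simplified sketch of the core mechanism, not the report's prototype: linguistic ratings are mapped to triangular fuzzy numbers, weighted and aggregated per technology into a fuzzy appropriate index, then defuzzified for ranking. The scale values, factor weights and example technologies are assumptions made for illustration.

# triangular fuzzy numbers (low, mode, high) for the assumed linguistic scale
SCALE = {
    "very low":  (0.0, 0.0, 0.25),
    "low":       (0.0, 0.25, 0.5),
    "medium":    (0.25, 0.5, 0.75),
    "high":      (0.5, 0.75, 1.0),
    "very high": (0.75, 1.0, 1.0),
}

def fuzzy_add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def fuzzy_scale(a, w):
    return tuple(w * x for x in a)

def appropriate_index(ratings, weights):
    """Weighted aggregation of linguistic ratings into a single triangular fuzzy number."""
    total = (0.0, 0.0, 0.0)
    for factor, term in ratings.items():
        total = fuzzy_add(total, fuzzy_scale(SCALE[term], weights[factor]))
    return total

def defuzzify(tfn):
    """Centroid of a triangular fuzzy number, used here only for ranking."""
    return sum(tfn) / 3.0

weights = {"cost": 0.4, "safety": 0.35, "maturity": 0.25}       # hypothetical factor weights
technologies = {
    "tech_A": {"cost": "medium", "safety": "very high", "maturity": "high"},
    "tech_B": {"cost": "high", "safety": "medium", "maturity": "low"},
}
ranked = sorted(technologies,
                key=lambda t: defuzzify(appropriate_index(technologies[t], weights)),
                reverse=True)
print(ranked)   # preferential ranking of the candidate technologies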

    Model structure selection using an integrated forward orthogonal search algorithm assisted by squared correlation and mutual information

    Model structure selection plays a key role in non-linear system identification. The first step in non-linear system identification is to determine which model terms should be included in the model. Once the significant model terms have been determined, a model selection criterion can be applied to select a suitable model subset. The well-known Orthogonal Least Squares (OLS) type algorithms are among the most efficient and commonly used techniques for model structure selection. However, it has been observed that OLS type algorithms may occasionally select incorrect model terms or yield a redundant model subset in the presence of particular noise structures or input signals. A very efficient Integrated Forward Orthogonal Search (IFOS) algorithm, which is assisted by the squared correlation and mutual information, and which incorporates a Generalised Cross-Validation (GCV) criterion and hypothesis tests, is introduced to overcome these limitations in model structure selection.
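
    To make the squared-correlation criterion concrete, the sketch below implements a basic forward orthogonal search step in the spirit of OLS-type structure selection; it is not the IFOS algorithm itself (which additionally uses mutual information, GCV and hypothesis tests). The candidate term matrix and regression example are assumptions.

import numpy as np

def forward_orthogonal_search(P, y, n_terms):
    """P: (N, M) matrix of candidate model terms; select n_terms by squared correlation (ERR)."""
    residual_terms = P.astype(float).copy()
    selected, err_values = [], []
    for _ in range(n_terms):
        # squared correlation of each (orthogonalised) candidate with the output
        num = (residual_terms.T @ y) ** 2
        den = np.sum(residual_terms ** 2, axis=0) * float(y @ y) + 1e-12
        err = num / den
        err[selected] = -np.inf            # never reselect a chosen term
        best = int(np.argmax(err))
        selected.append(best)
        err_values.append(float(err[best]))
        # orthogonalise the remaining candidates against the chosen column
        w = residual_terms[:, best]
        proj = (residual_terms.T @ w) / float(w @ w)
        residual_terms = residual_terms - np.outer(w, proj)
        residual_terms[:, best] = w        # keep the chosen column intact (it is masked anyway)
    return selected, err_values

rng = np.random.default_rng(1)
P = rng.normal(size=(300, 8))                                   # hypothetical candidate terms
y = 1.2 * P[:, 2] - 0.8 * P[:, 5] + 0.05 * rng.normal(size=300)
print(forward_orthogonal_search(P, y, n_terms=2))               # expected to pick columns 2 and 5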

    Conducting ethnographic research on language-like visual communication


    Real-time prediction with U.K. monetary aggregates in the presence of model uncertainty

    A popular account for the demise of the U.K.’s monetary targeting regime in the 1980s blames the fluctuating predictive relationships between broad money and inflation and real output growth. Yet ex post policy analysis based on heavily revised data suggests no fluctuations in the predictive content of money. In this paper, we investigate the predictive relationships for inflation and output growth using both real-time and heavily revised data. We consider a large set of recursively estimated vector autoregressive (VAR) and vector error correction (VECM) models, which differ in terms of lag length and the number of cointegrating relationships. We use Bayesian model averaging (BMA) to demonstrate that real-time monetary policymakers faced considerable model uncertainty. The in-sample predictive content of money fluctuated during the 1980s as a result of data revisions in the presence of model uncertainty. This feature is only apparent with real-time data, as heavily revised data obscure these fluctuations. Out-of-sample predictive evaluations rarely suggest that money matters for either inflation or real output. We conclude that both data revisions and model uncertainty contributed to the demise of the U.K.’s monetary targeting regime.
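
    A stylised sketch of the model-averaging step only, not the paper's setup: simple AR specifications stand in for the VAR/VECM model set, each candidate model is weighted by an approximate posterior probability derived from BIC, and the one-step-ahead forecasts are averaged. The lag grid, the BIC approximation and the synthetic series are all assumptions.

import numpy as np

def fit_ar(y, p):
    """Least-squares AR(p) fit: returns coefficients (lags then intercept) and residual variance."""
    Y = y[p:]
    X = np.column_stack([y[p - k - 1:len(y) - k - 1] for k in range(p)] + [np.ones(len(Y))])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    return beta, float(resid @ resid) / len(Y)

def bma_forecast(y, lags=(1, 2, 3, 4)):
    """One-step-ahead forecast averaged over AR(p) models with BIC-based weights."""
    n = len(y)
    forecasts, bics = [], []
    for p in lags:
        beta, sigma2 = fit_ar(y, p)
        x_next = np.append(y[-1:-p - 1:-1], 1.0)        # most recent p values plus intercept
        forecasts.append(float(x_next @ beta))
        bics.append(n * np.log(sigma2) + (p + 1) * np.log(n))
    bics = np.array(bics)
    weights = np.exp(-0.5 * (bics - bics.min()))        # approximate posterior model probabilities
    weights /= weights.sum()
    return float(np.dot(weights, forecasts)), weights

rng = np.random.default_rng(2)
series = np.cumsum(rng.normal(size=200)) * 0.01 + 2.0   # synthetic stand-in for a money growth series
print(bma_forecast(series))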

    Identifying communities of practice: analysing ontologies as networks to support community recognition

    Communities of practice are seen as increasingly important for creating, sharing and applying organisational knowledge. Yet their informal nature makes them difficult to identify and manage. In this paper we set out ONTOCOPI, a system that applies ontology-based network analysis techniques to the problem of identifying such communities.
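
    A toy sketch of the ontology-as-network idea (not the ONTOCOPI system itself): ontology instances and their typed relations are treated as a weighted graph, and a simple spreading-activation pass from a seed person surfaces the most closely connected instances as a candidate community of practice. The relation weights and the example ontology are assumptions.

from collections import defaultdict

# hypothetical ontology instances linked by typed relations
RELATIONS = [
    ("alice", "authorOf", "paper1"), ("bob", "authorOf", "paper1"),
    ("bob", "authorOf", "paper2"), ("carol", "authorOf", "paper2"),
    ("alice", "memberOf", "projectX"), ("carol", "memberOf", "projectX"),
    ("dave", "memberOf", "projectY"),
]
WEIGHTS = {"authorOf": 1.0, "memberOf": 0.6}    # assumed importance of each relation type

def build_graph(relations):
    graph = defaultdict(list)
    for subj, rel, obj in relations:
        w = WEIGHTS.get(rel, 0.5)
        graph[subj].append((obj, w))
        graph[obj].append((subj, w))            # treat relations as undirected links
    return graph

def spread_activation(graph, seed, decay=0.5, steps=3):
    """Propagate activation from the seed; higher scores suggest closer community members."""
    activation = defaultdict(float)
    activation[seed] = 1.0
    frontier = {seed: 1.0}
    for _ in range(steps):
        next_frontier = defaultdict(float)
        for node, act in frontier.items():
            for neighbour, w in graph[node]:
                next_frontier[neighbour] += act * w * decay
        for node, act in next_frontier.items():
            activation[node] += act
        frontier = next_frontier
    return sorted(activation.items(), key=lambda kv: -kv[1])

graph = build_graph(RELATIONS)
print(spread_activation(graph, "alice"))        # ranked candidate community around "alice"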