
    MAINT.Data: modelling and analysing interval data in R

    We present the CRAN R package MAINT.Data for the modelling and analysis of multivariate interval data, i.e., data where units are described by variables whose values are intervals of ℝ, representing intrinsic variability. Parametric inference methodologies based on probabilistic models for interval variables have been developed, in which each interval is represented by its midpoint and log-range, for which multivariate Normal and Skew-Normal distributions are assumed. The intrinsic nature of the interval variables leads to special structures of the variance-covariance matrix, represented by four different possible configurations. MAINT.Data implements the proposed methodologies in the S4 object system, introducing a specific data class for representing interval data. It includes functions and methods for modelling and analysing interval data, in particular maximum likelihood estimation, statistical tests for the different configurations, (M)ANOVA and Discriminant Analysis. For the Gaussian model, Model-based Clustering, robust estimation, outlier detection and Robust Discriminant Analysis are also available.
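    MAINT.Data itself is an R package; as a language-neutral sketch of the representation it describes, the Python snippet below maps hypothetical interval data to midpoint/log-range coordinates and fits an unrestricted multivariate Normal, standing in for one of the four covariance configurations:

```python
import numpy as np

# Each unit is an interval [lower, upper]; values are hypothetical.
intervals = np.array([[2.0, 5.0],
                      [1.5, 4.0],
                      [3.0, 8.0],
                      [2.5, 6.5]])

# Represent each interval by its midpoint and log-range, as described above.
midpoints = intervals.mean(axis=1)
log_ranges = np.log(intervals[:, 1] - intervals[:, 0])
features = np.column_stack([midpoints, log_ranges])

# Fit an unrestricted multivariate Normal: sample mean vector and
# variance-covariance matrix of the (midpoint, log-range) pairs.
mu = features.mean(axis=0)
sigma = np.cov(features, rowvar=False)
print(mu, sigma)
```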

    A New Fuzzy Modeling Framework for Integrated Risk Prognosis and Therapy of Bladder Cancer Patients

    This paper presents a new fuzzy modelling approach for analysing censored survival data and finding risk groups of patients diagnosed with bladder cancer. The proposed framework integrates interval type-2 fuzzy logic with Cox modelling. The output of this synergistic framework is a risk score/prognostic index indicative of the patient's level of mortality risk. A threshold value is selected, and patients with risk scores greater than this threshold are classed as high risk, while those below it are classed as low risk. Unlike black-box modelling approaches, the paper shows that interpretability and transparency are maintained by the proposed fuzzy modelling framework.
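    As a minimal sketch of the final thresholding step, assuming the prognostic indices have already been produced by a fitted model, the snippet below splits hypothetical risk scores at an illustrative median threshold (the paper's own threshold-selection procedure is not reproduced here):

```python
import numpy as np

# Hypothetical prognostic indices (risk scores); in the paper these come
# from the interval type-2 fuzzy / Cox framework.
risk_scores = np.array([0.3, 1.8, 0.9, 2.4, 0.1])

# Median split is an illustrative assumption, not the paper's procedure.
threshold = np.median(risk_scores)
high_risk = risk_scores > threshold
print(high_risk)  # True for patients classed as high risk
```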

    An analysis of observed daily maximum wind gusts in the UK

    Most attention in UK wind climatology has focused on mean wind speeds, even though knowledge of gust speeds is essential to a variety of users. This paper goes some way towards redressing this imbalance by analysing observed daily maximum gust speeds from a 43-station network over the period 1980–2005. Complementing these data are dynamically downscaled reanalysis data, generated using the PRECIS Regional Climate Modelling system, for the period 1959–2001. Inter-annual variations in both the observed and downscaled reanalysis gust speeds are presented, with a statistically significant (at the 95% confidence level) 5% increase across the network in daily maximum gust speeds between 1959 and the early 1990s, followed by an apparent decrease. The benefit of incorporating dynamically downscaled reanalysis data is that the decrease in gust speeds since 1993 may be placed in the context of a very slight increase over the longer 1959–2001 period. Furthermore, the severity of individual windstorm events is considered, with high-profile recent events placed in the context of the long-term record. A daily cycle in the timing of the daily maximum gust speeds is identified from the station observations, with an afternoon peak between 12:00 and 15:00 that exhibits spatial and intra-annual variations.
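    The daily-cycle analysis can be illustrated with a short sketch: given hypothetical hourly gust observations, it finds the hour at which each day's maximum gust occurs; an afternoon (12:00–15:00) mode would reproduce the cycle reported above. The data and variable names are invented for illustration.

```python
import numpy as np
import pandas as pd

# Hypothetical hourly gust observations for one station (30 days).
idx = pd.date_range("2000-06-01", periods=24 * 30, freq="h")
gen = np.random.default_rng(1)
gusts = pd.Series(gen.gamma(2.0, 5.0, size=len(idx)), index=idx)

# Hour at which each day's maximum gust occurs.
peak_hours = gusts.groupby(gusts.index.date).idxmax().map(lambda t: t.hour)

# Frequency of peak timing across days; an afternoon mode would match
# the daily cycle described in the paper.
print(peak_hours.value_counts().sort_index())
```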

    Analysing imperfect temporal information in GIS using the Triangular Model

    Rough sets and fuzzy sets are two frequently used approaches for modelling and reasoning about imperfect time intervals. In this paper, we focus on imperfect time intervals that can be modelled by rough sets and use an innovative graphic model, the Triangular Model (TM), to represent them. This work shows that the TM is potentially advantageous for visualizing and querying imperfect time intervals, and that its analytical power can be better exploited when it is implemented in a computer application with a graphical user interface and interactive functions. Moreover, a probabilistic framework is proposed to handle uncertainty in temporal queries. We use a case study to illustrate how the unique insights gained with the TM can assist a geographical information system in exploratory spatio-temporal analysis.
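    The TM is commonly defined by mapping an interval [start, end] to a single point whose x-coordinate is the interval's midpoint and whose y-coordinate is half its duration. A minimal sketch, with hypothetical lower and upper approximations of a rough time interval:

```python
# The Triangular Model maps a time interval [start, end] to a 2-D point:
# x is the midpoint, y is half the duration, so intervals become points
# and interval relations become geometric relations in the plane.
def tm_point(start: float, end: float) -> tuple[float, float]:
    return ((start + end) / 2.0, (end - start) / 2.0)

# A rough time interval with lower and upper approximations
# (hypothetical values) maps to two TM points bounding a zone.
lower_approx = tm_point(3.0, 5.0)   # interval certainly covered
upper_approx = tm_point(2.0, 6.0)   # interval possibly covered
print(lower_approx, upper_approx)
```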

    The continuous period search method and its application to the young solar analogue HD 116956

    We formulate an improved time series analysis method for the photometry of active stars. This new Continuous Period Search (CPS) method is applied to 12 years of V band photometry of the young solar analogue HD 116956 (NQ UMa). The method is developed from the previous Three Stage Period Analysis (TSPA) method. Our improvements are the use of a sliding window to choose the modelled datasets, a criterion for selecting the best model for each dataset, and the computation of the time scale of change of the light curve. We test the performance of the CPS with simulated and real data. The CPS has a much improved time resolution, which allows us to better investigate fast evolution of stellar light curves. It can also distinguish between cases where the data are best described by periodic models and by aperiodic (e.g. constant brightness) models. The performance of the CPS has certain limitations, however. It does not determine the correct model complexity in all cases, especially when the underlying light curve is constant and the number of observations is small. Its sensitivity in detecting two closely spaced light curve minima is also limited, and its period estimation shows a certain amount of intrinsic instability. Using the CPS, we find persistent active longitudes on the star HD 116956 and a "flip-flop" event that occurred during 1999. Assuming that the surface differential rotation of the star causes the observed period variations in the light curve, we determine the differential rotation coefficient to be |k| > 0.11. The mean time scale of change of the light curve over the whole 12-year observing period was T_C = 44.1 d, which is of the same order as the predicted convective turnover time of the star. We also investigate the presence of activity cycles on the star, but find no conclusive evidence for them.
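    A toy sketch of one model-selection step in the spirit of the CPS: within a window of observations, a constant-brightness model is compared with a one-harmonic periodic model using the Bayesian information criterion. The criterion and the model family here are simplified assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def bic(residuals, n_params):
    # Gaussian-error BIC up to an additive constant.
    n = len(residuals)
    return n * np.log(np.mean(residuals ** 2)) + n_params * np.log(n)

def best_model(t, y, period):
    res_const = y - y.mean()                  # constant-brightness model
    phase = 2.0 * np.pi * t / period
    X = np.column_stack([np.ones_like(t), np.cos(phase), np.sin(phase)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    res_per = y - X @ coef                    # one-harmonic periodic model
    return "periodic" if bic(res_per, 3) < bic(res_const, 1) else "constant"

# Simulated window: a weak 7.8-day signal plus noise (invented values).
gen = np.random.default_rng(0)
t = np.linspace(0.0, 25.0, 60)
y = 10.0 + 0.05 * np.sin(2.0 * np.pi * t / 7.8) + gen.normal(0.0, 0.01, t.size)
print(best_model(t, y, period=7.8))           # expected: "periodic"
```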

    On the distribution function of the information speed in computer network

    We review a study of Internet traffic properties and analyse under what conditions the reported results can be reproduced. Relations between the results of passive measurements and those of modelling are also discussed, and an example of a first-order phase transition in Internet traffic is presented.

    Impact of estimation techniques on regression analysis: an application to survey data on child nutritional status in five African countries

    This paper illustrates the impact of ignoring the survey design and hierarchical structure of survey data when fitting regression models. Data on child nutritional status from Ghana, Malawi, Tanzania, Zambia, and Zimbabwe are analysed using four techniques: ordinary least squares; weighted regression using standard statistical software; regression using specialist software that accounts for the survey design; and multilevel modelling. The impact of ignoring the survey design on logistic and linear regression models is examined. The results show bias in estimates averaging between 5 and 17 per cent in linear models and between 5 and 22 per cent in logistic regression models. Standard errors are also under-estimated, by up to 49 per cent in some countries. Socio-economic and service-utilisation variables are poorly estimated when the survey design is ignored.
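    The contrast among estimation techniques can be sketched with statsmodels on synthetic, survey-like data: plain OLS, weighted regression, and cluster-robust (design-based) standard errors. The data, weights, and cluster structure are invented for illustration; the specialist survey software used in the paper is not reproduced here.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic survey-like data: clusters stand in for sampling units,
# weights for unequal selection probabilities (all values invented).
gen = np.random.default_rng(0)
n = 500
clusters = np.repeat(np.arange(50), 10)
x = gen.normal(size=n)
y = 1.0 + 0.5 * x + gen.normal(size=n) + gen.normal(size=50)[clusters]
w = gen.uniform(0.5, 2.0, size=n)
X = sm.add_constant(x)

ols = sm.OLS(y, X).fit()                      # ignores the design entirely
wls = sm.WLS(y, X, weights=w).fit()           # accounts for weights only
clustered = sm.OLS(y, X).fit(                 # design-based standard errors
    cov_type="cluster", cov_kwds={"groups": clusters})

# Comparing slope standard errors shows the under-estimation reported above.
print(ols.bse[1], wls.bse[1], clustered.bse[1])
```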

    A core ontology for business process analysis

    Business Process Management (BPM) aims at supporting the whole life-cycle necessary to deploy and maintain business processes in organisations. An important step of the BPM life-cycle is the analysis of the processes deployed in companies. However, the degree of automation currently achieved cannot support the level of adaptation required by businesses. Initial steps have been taken towards including some form of automated reasoning within Business Process Analysis (BPA), but this is typically limited to the use of taxonomies. We present a core ontology aimed at advancing the state of the art in BPA. The ontology builds upon a Time Ontology and is structured around the process, resource, and object perspectives typically adopted when analysing business processes. The ontology has been extended and validated by means of an Events Ontology and an Events Analysis Ontology, aimed at capturing the audit trails generated by Process-Aware Information Systems and deriving additional knowledge.
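    A hypothetical fragment of such a process/resource/object structure, expressed as RDF triples with rdflib; the namespace and class names are illustrative stand-ins, not the ontology's actual vocabulary:

```python
from rdflib import Graph, Namespace, RDF, RDFS

# Illustrative namespace; not the ontology's real URI or vocabulary.
BPA = Namespace("http://example.org/bpa#")
g = Graph()
g.bind("bpa", BPA)

# Core classes for the process, resource, and object perspectives,
# plus events captured from audit trails.
for cls in ("Process", "Resource", "BusinessObject", "Event"):
    g.add((BPA[cls], RDF.type, RDFS.Class))

# Events recorded in audit trails relate back to processes and resources.
g.add((BPA.Event, BPA.occursIn, BPA.Process))
g.add((BPA.Process, BPA.uses, BPA.Resource))

print(g.serialize(format="turtle"))
```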