4,057 research outputs found

    Spatial birth-and-death processes in random environment

    Full text link
    We consider birth-and-death processes of objects (animals) defined in ${\bf Z}^d$ having unit death rates and random birth rates. For animals with uniformly bounded diameter we establish conditions on the rate distribution under which the following holds for almost all realizations of the birth rates: (i) the process is ergodic with at worst power-law time mixing; (ii) the unique invariant measure has exponential decay of (spatial) correlations; (iii) there exists a perfect-simulation algorithm for the invariant measure. The results are obtained by first dominating the process by a backwards oriented percolation model, and then using a multiscale analysis due to Klein to establish conditions for the absence of percolation. Comment: 48 pages
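    As a concrete illustration of the dynamics (though not of the paper's perfect-simulation algorithm), the following Gillespie-style sketch runs the process on a finite box of Z^2 with unit death rates and quenched i.i.d. random birth rates. The single-site objects, the 20 x 20 truncation, and the uniform rate distribution are illustrative assumptions, not choices made in the paper.

        import random

        # Quenched random environment: one i.i.d. birth rate per site of a finite box.
        random.seed(0)
        L = 20
        sites = [(x, y) for x in range(L) for y in range(L)]
        birth_rate = {s: random.uniform(0.1, 1.0) for s in sites}  # assumed distribution
        occupied = set()

        t, t_max = 0.0, 50.0
        while t < t_max:
            # Total jump rate: unit death rate per occupied site plus the birth
            # rates of all currently empty sites.
            empty = [s for s in sites if s not in occupied]
            total = len(occupied) + sum(birth_rate[s] for s in empty)
            t += random.expovariate(total)
            u = random.uniform(0, total)
            if u < len(occupied):
                # Death: every occupied site dies at rate 1, so pick uniformly.
                occupied.remove(random.choice(sorted(occupied)))
            else:
                # Birth: pick an empty site with probability proportional to its rate.
                u -= len(occupied)
                for s in empty:
                    u -= birth_rate[s]
                    if u <= 0:
                        occupied.add(s)
                        break

        print("occupied sites at time", t_max, ":", len(occupied))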

    Comparison of Statistical Methods for Modeling Count Data with an Application to Length of Hospital Stay

    Get PDF
    Hospital length of stay (LOS) is a key indicator of hospital care management efficiency, cost of care, and hospital planning. Therefore, understanding hospital LOS variability is always an important healthcare focus. Hospital LOS data are count data, with discrete and nonnegative values, typically right-skewed, and often exhibiting excessive zeros. Numerous studies have been conducted to model hospital LOS to identify significant predictors contributing to its variability. Many researchers have used linear regression with or without logarithmic transformation of the outcome variable LOS, or logistic regression on a dichotomized LOS. These regression methods usually violate the models' assumptions and are subject to criticism for their inadequacy in modeling count data. Problems that may occur include biased parameter estimates, loss of precision in inferences, prediction of meaningless negative values, and loss of important information about the underlying counts. Common statistical methods for the analysis of count data are Poisson, negative binomial (NB), zero-inflated Poisson (ZIP), and zero-inflated negative binomial (ZINB) regressions. Many studies have compared the performance of regression models for count data; however, their results on empirical and/or simulated count data disagree considerably. In this study, we compared the performance of Poisson, NB, ZIP, and ZINB regression models using simulated data under different scenarios with varying sample sizes, proportions of zeros, and levels of overdispersion. To illustrate the aforementioned regression methods, an analysis of hospital LOS was conducted using empirical data from the MIMIC-III database.
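    As an illustration of the four candidate models, the sketch below fits Poisson, NB, ZIP, and ZINB regressions to simulated zero-inflated, overdispersed counts with statsmodels and compares them by AIC; the data-generating values are invented for the example and are not the study's.

        import numpy as np
        import statsmodels.api as sm
        from statsmodels.discrete.count_model import (ZeroInflatedPoisson,
                                                      ZeroInflatedNegativeBinomialP)

        # Toy stand-in for LOS counts: one covariate, overdispersion, excess zeros.
        rng = np.random.default_rng(42)
        n = 1000
        x = rng.normal(size=n)
        mu = np.exp(0.5 + 0.3 * x)                   # mean linked to the covariate
        y = rng.negative_binomial(2, 2 / (2 + mu))   # overdispersed NB counts, mean mu
        y[rng.random(n) < 0.2] = 0                   # extra structural zeros
        X = sm.add_constant(x)

        models = {
            "Poisson": sm.Poisson(y, X),
            "NB":      sm.NegativeBinomial(y, X),
            "ZIP":     ZeroInflatedPoisson(y, X),
            "ZINB":    ZeroInflatedNegativeBinomialP(y, X),
        }
        for name, model in models.items():
            result = model.fit(disp=0)               # lower AIC = better trade-off
            print(f"{name:7s} AIC = {result.aic:8.1f}")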

    Machine Learning for Wireless Network Throughput Prediction

    Get PDF
    This paper analyzes a dataset containing radio frequency (RF) measurements and Key Performance Indicators (KPIs) captured at 1876.6 MHz with a bandwidth of 10 MHz from an operational 4G LTE network in Nigeria. The dataset includes metrics such as RSRP (Reference Signal Received Power), which measures the power level of reference signals; RSRQ (Reference Signal Received Quality), an indicator of signal quality that provides insight into the number of users sharing the same resources; RSSI (Received Signal Strength Indicator), which gauges the total received power within a bandwidth; SINR (Signal to Interference plus Noise Ratio), a measure of signal quality considering both interference and noise; and other KPIs, all derived from three evolved Node B base stations (eNodeBs). After meticulous data cleaning, a subset of measurements from one serving eNB, spanning a 20-minute duration, was selected for deeper analysis. The PDCP DL Throughput, as a vital KPI, plays a paramount role in evaluating network quality and resource allocation strategies. The primary aim was to predict throughput by leveraging the high granularity of the data. For this purpose, I compared the predictive capabilities of two machine learning models: Linear Regression and Random Forest. Metrics such as Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) were used to evaluate the models, as they offer comprehensive insight into the models' accuracy. The comparative analysis highlighted the superior performance of the Random Forest model in predicting the PDCP DL Throughput. The insights derived from this research can potentially guide network engineers and data scientists in optimizing network performance, ensuring a seamless user experience. Furthermore, as the telecommunication industry advances towards the integration of 5G and beyond, the methodologies explored in this paper will be invaluable in addressing the increasingly complex challenges of future wireless networks.
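    A minimal sketch of this model comparison, using scikit-learn, might look as follows; the KPI values and the assumed throughput relation are synthetic placeholders rather than the Nigerian LTE measurements.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.linear_model import LinearRegression
        from sklearn.metrics import mean_absolute_error, mean_squared_error
        from sklearn.model_selection import train_test_split

        # Synthetic stand-ins for the measured KPIs (the value ranges and the
        # throughput relation below are invented for illustration only).
        rng = np.random.default_rng(7)
        n = 1200
        rsrp = rng.uniform(-110, -70, n)               # dBm
        rsrq = rng.uniform(-20, -3, n)                 # dB
        rssi = rsrp + rng.normal(10, 2, n)             # dBm
        sinr = rng.uniform(0, 30, n)                   # dB
        # hypothetical nonlinear dependence of PDCP DL throughput (Mbps) on the KPIs
        thr = 2.0 * sinr + 0.05 * (rsrp + 110) ** 1.5 + rng.normal(0, 5, n)

        X = np.column_stack([rsrp, rsrq, rssi, sinr])
        X_tr, X_te, y_tr, y_te = train_test_split(X, thr, test_size=0.25, random_state=0)

        for name, model in [("Linear Regression", LinearRegression()),
                            ("Random Forest", RandomForestRegressor(random_state=0))]:
            pred = model.fit(X_tr, y_tr).predict(X_te)
            mae = mean_absolute_error(y_te, pred)
            rmse = mean_squared_error(y_te, pred) ** 0.5
            print(f"{name:17s} MAE = {mae:5.2f}  RMSE = {rmse:5.2f}")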

    A restriction on centralizers in finite groups

    Full text link
    For a given m >= 1, we consider the finite non-abelian groups G for which |C_G(g) : ⟨g⟩| <= m for every g in G\Z(G). We show that the order of G can be bounded in terms of m and the largest prime divisor of the order of G. Our approach relies on dealing first with the case where G is a non-abelian finite p-group. In that situation, if we take m = p^k to be a power of p, we show that |G| <= p^{2k+2}, with the only exception of Q_8. This bound is best possible, and it implies that the order of G can be bounded by a function of m alone in the case of nilpotent groups.
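    To make the extremal case concrete, the sketch below enumerates Q_8 by brute force and confirms that the centralizer of every non-central element g is exactly {±1, ±g} = ⟨g⟩, so the condition above holds with m = 1 while |G| = 8 > p^{2·0+2} = 4. The quaternion-tuple encoding is an implementation choice, not taken from the paper.

        # Brute-force check on Q_8 = {±1, ±i, ±j, ±k}, encoded as 4-tuples
        # (a, b, c, d) standing for the quaternion a + bi + cj + dk.
        def qmul(p, q):
            a1, b1, c1, d1 = p
            a2, b2, c2, d2 = q
            return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
                    a1*b2 + b1*a2 + c1*d2 - d1*c2,
                    a1*c2 - b1*d2 + c1*a2 + d1*b2,
                    a1*d2 + b1*c2 - c1*b2 + d1*a2)

        units = [(1, 0, 0, 0), (-1, 0, 0, 0), (0, 1, 0, 0), (0, -1, 0, 0),
                 (0, 0, 1, 0), (0, 0, -1, 0), (0, 0, 0, 1), (0, 0, 0, -1)]

        center = [z for z in units if all(qmul(z, g) == qmul(g, z) for g in units)]
        print("|Z(Q_8)| =", len(center))          # prints 2: the center is {1, -1}
        for g in units:
            if g not in center:
                cent = [h for h in units if qmul(g, h) == qmul(h, g)]
                # each centralizer is {±1, ±g} = <g>, so |C_G(g) : <g>| = 1
                print(g, "|C_G(g)| =", len(cent))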

    A comparison of statistical methods for modeling count data with an application to hospital length of stay

    Get PDF
    Background Hospital length of stay (LOS) is a key indicator of hospital care management efficiency, cost of care, and hospital planning. Hospital LOS is often used as a measure of a post-medical procedure outcome, as a guide to the benefit of a treatment of interest, or as an important risk factor for adverse events. Therefore, understanding hospital LOS variability is always an important healthcare focus. Hospital LOS data can be treated as count data, with discrete and non-negative values, typically right-skewed, and often exhibiting excessive zeros. In this study, we compared the performance of the Poisson, negative binomial (NB), zero-inflated Poisson (ZIP), and zero-inflated negative binomial (ZINB) regression models using simulated and empirical data. Methods Data were generated under different simulation scenarios with varying sample sizes, proportions of zeros, and levels of overdispersion. Analysis of hospital LOS was conducted using empirical data from the Medical Information Mart for Intensive Care database. Results Results showed that the Poisson and ZIP models performed poorly on overdispersed data. ZIP outperformed the other regression models when the overdispersion was due to zero-inflation only. The NB and ZINB regression models faced substantial convergence issues when incorrectly used to model equidispersed data. The NB model provided the best fit for overdispersed data and outperformed the ZINB model in many simulation scenarios combining zero-inflation and overdispersion, regardless of the sample size. In the empirical data analysis, we demonstrated that fitting incorrect models to overdispersed data led to incorrect regression coefficient estimates and overstated the significance of some of the predictors. Conclusions Based on this study, we recommend that researchers consider ZIP models for count data with zero-inflation only and NB models for overdispersed data or data combining zero-inflation and overdispersion. If the researcher believes there are two different data-generating mechanisms producing zeros, then the ZINB regression model may provide greater flexibility when modeling the zero-inflation and overdispersion.
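    One cell of such a simulation design could be sketched as follows: generate NB counts at a chosen dispersion, inject a chosen proportion of structural zeros, and compare NB and ZINB fits by AIC across sample sizes. All parameter values here are illustrative, not those used in the study.

        import numpy as np
        import statsmodels.api as sm
        from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

        rng = np.random.default_rng(1)

        def simulate(n, pi, k):
            """NB counts with dispersion k and a proportion pi of structural zeros."""
            x = rng.normal(size=n)
            mu = np.exp(1.0 + 0.5 * x)                  # mean linked to one covariate
            y = rng.negative_binomial(k, k / (k + mu))  # overdispersed counts, mean mu
            y[rng.random(n) < pi] = 0                   # inject structural zeros
            return y, sm.add_constant(x)

        for n in (200, 1000):
            for pi in (0.0, 0.3):
                y, X = simulate(n, pi, k=2)
                nb = sm.NegativeBinomial(y, X).fit(disp=0)
                # with pi = 0 the ZINB fit may warn or converge poorly, echoing
                # the convergence issues reported in the abstract
                zinb = ZeroInflatedNegativeBinomialP(y, X).fit(disp=0)
                print(f"n={n:4d} pi={pi:.1f}  AIC(NB)={nb.aic:8.1f}  AIC(ZINB)={zinb.aic:8.1f}")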

    Regulation, competition and integration in electronic payments markets: the Spanish and European cases

    Get PDF
    The instruments used by regulators to promote competition and integration in the face of problems such as those related to payment methods can generate disincentives, especially in the absence of adequate information to guarantee the rationality of agents. This is particularly relevant because these markets exhibit particularities and asymmetries that differentiate them from other, more traditional markets (distinct sides, network economies, cross-transfers, hidden costs, large externalities, and so on), characteristics which make them more dependent on the quality and quantity of information. Therefore, when this information is inadequate, more failures tend to occur than can be attributed solely to market regulation. In this paper we argue that interventions in "two-sided" payment card markets (2SMs) to reduce costs to merchants may in the end harm the interests of consumers and discourage both the penetration of cards as a payment method and their increased use in retail operations. In our study we analyze and simulate the effects of the legislative package on electronic payments proposed by the European Commission in July 2013, which seeks to force a top-down convergence similar to that designed for domestic interest rates with the euro, an approach that proved a failure during the recent debt crisis.