
    Number of sources estimation using a hybrid algorithm for smart antenna

    Estimating the number of sources is a key technology in smart antennas. This paper proposes a new system that couples a hybrid algorithm of the artificial bee colony (ABC) and a complex generalized Hebbian (CGHA) neural network with the Bayesian information criterion (BIC) technique, aiming to enhance the accuracy of source-number estimation. An advantage of the new system is that the covariance matrix need not be computed, since its principal eigenvalues are obtained by the CGHA neural network directly from the received signals. Moreover, the proposed system optimizes the training conditions of the CGHA neural network, overcoming the random selection of initial weights and learning rate and thereby avoiding network oscillation and trapping in local solutions. Simulation results show that the proposed system reduces the time required to train the CGHA neural network, converges quickly, operates effectively, and yields the correct number of sources.
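    The covariance-free eigenvalue step can be illustrated with the real-valued generalized Hebbian (Sanger) rule, used here as a stand-in for the complex CGHA update. This is a minimal sketch under assumed synthetic snapshots and a hand-picked learning rate; in the paper, the ABC algorithm, rather than chance, supplies the initial weights and learning rate.

        import numpy as np

        def gha_eigenvalues(snapshots, n_components, lr=1e-3, epochs=20):
            """Estimate leading eigenvalues of the (never formed) covariance
            matrix of `snapshots` (shape T x M) via Sanger's Hebbian rule."""
            _, M = snapshots.shape
            rng = np.random.default_rng(0)
            W = 0.1 * rng.standard_normal((n_components, M))  # random init; ABC would tune this
            for _ in range(epochs):
                for x in snapshots:
                    y = W @ x
                    # Sanger update: dW = lr * (y x^T - LT[y y^T] W)
                    W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
            # rows of W converge to orthonormal eigenvectors, so the
            # variances of the projections estimate the principal eigenvalues
            return (snapshots @ W.T).var(axis=0)

        # illustrative run: 3 sources received on an 8-element array
        rng = np.random.default_rng(1)
        X = rng.standard_normal((1000, 3)) @ rng.standard_normal((3, 8))
        X += 0.1 * rng.standard_normal((1000, 8))
        print(gha_eigenvalues(X, n_components=5))  # ~3 large, ~2 small eigenvalues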

    Bayesian Cluster Enumeration Criterion for Unsupervised Learning

    We derive a new Bayesian Information Criterion (BIC) by formulating the problem of estimating the number of clusters in an observed data set as maximization of the posterior probability of the candidate models. Given that some mild assumptions are satisfied, we provide a general BIC expression for a broad class of data distributions. This serves as a starting point when deriving the BIC for specific distributions. Along this line, we provide a closed-form BIC expression for multivariate Gaussian distributed variables. We show that incorporating the data structure of the clustering problem into the derivation of the BIC results in an expression whose penalty term is different from that of the original BIC. We propose a two-step cluster enumeration algorithm. First, a model-based unsupervised learning algorithm partitions the data according to a given set of candidate models. Subsequently, the number of clusters is determined as the one associated with the model for which the proposed BIC is maximal. The performance of the proposed two-step algorithm is tested using synthetic and real data sets.
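    The two-step procedure can be sketched with scikit-learn, substituting the library's standard BIC for the proposed criterion (whose penalty term, as noted above, differs); the candidate range and synthetic data are assumptions for illustration.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def enumerate_clusters(X, k_min=1, k_max=10):
            """Step 1: fit one candidate model per k; step 2: return the k
            whose BIC is best. sklearn's bic() is a penalized cost, so lower
            is better, matching maximization of the posterior-based criterion."""
            scores = {}
            for k in range(k_min, k_max + 1):
                gmm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(X)
                scores[k] = gmm.bic(X)
            return min(scores, key=scores.get)

        # illustrative data: three well-separated Gaussian clusters
        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(loc=c, size=(100, 2)) for c in (0.0, 4.0, 8.0)])
        print(enumerate_clusters(X))  # expected: 3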

    Technical-Economic Analysis of Grapple Saw: A Stochastic Approach

    The processing of Eucalyptus logs is a stage that follows the full-tree system in mechanized forest harvesting, and it is commonly performed by a grapple saw. This activity carries associated uncertainties, especially regarding technical and silvicultural factors that can affect productivity and production costs. To address this problem, Monte Carlo simulation can be applied: a technique that measures the probabilities of values for factors under uncertainty, to which probability distributions are assigned. The objective of this study was to apply the Monte Carlo method to determine the probabilistic technical-economic coefficients of log processing with two different grapple saw models. Field data were obtained from a planted Eucalyptus forest in the State of São Paulo, Brazil. For the technical analysis, a time study was conducted by continuous reading of the operational cycle elements, from which productivity was derived. The cost per scheduled hour was estimated with the methods recommended by the Food and Agriculture Organization of the United Nations. Uncertainty was incorporated through Monte Carlo simulation, in which 100,000 random values were generated. The results showed that the crane's empty movement is the operational element with the greatest impact on the total log-processing time; the variables that most influence productivity are specific to each grapple saw model; and a difference of USD 0.04 per m³ in production cost was observed between the processors with gripping areas of 0.58 m² and 0.85 m². The Monte Carlo method proved to be an applicable tool for mechanized wood harvesting, as it yields probability ranges for the operational elements and the production cost.
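    A minimal sketch of the simulation step in NumPy, drawing the same number of random values as the study (100,000); every distribution and parameter below is a hypothetical placeholder, not a fitted value from the field data.

        import numpy as np

        N = 100_000                     # random values, as in the study
        rng = np.random.default_rng(42)

        # operational cycle elements (seconds), hypothetical distributions
        empty_move = rng.lognormal(2.8, 0.35, N)   # crane empty movement
        grip_saw   = rng.lognormal(2.3, 0.25, N)   # gripping and sawing
        stacking   = rng.lognormal(2.0, 0.30, N)

        cycle_s   = empty_move + grip_saw + stacking
        vol_m3    = rng.normal(0.9, 0.1, N)            # volume per cycle, hypothetical
        cost_hour = rng.triangular(95, 110, 130, N)    # scheduled-hour cost (USD), hypothetical

        productivity = vol_m3 * 3600 / cycle_s         # m^3 per hour
        cost_m3 = cost_hour / productivity             # USD per m^3

        for name, v in (("productivity (m^3/h)", productivity), ("cost (USD/m^3)", cost_m3)):
            lo, hi = np.quantile(v, [0.05, 0.95])
            print(f"{name}: mean {v.mean():.2f}, 90% interval [{lo:.2f}, {hi:.2f}]")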

    Dynamic Prediction of Acute Graft-versus-Host-Disease with Longitudinal Biomarkers

    This dissertation builds three prediction tools to dynamically predict the onset of acute graft-versus-host disease (aGVHD) with longitudinal biomarkers. Acute graft-versus-host disease is a complication for patients who have received an allogeneic bone marrow transplant, and it is fatal for some patients. Clinicians could use these prediction tools to identify which patients are at risk and assign appropriate interventions. Our first project introduces how to apply joint modeling with latent classes (JMLC) and landmark analysis to aGVHD data. In JMLC, we group all aGVHD-free patients into one latent class and define that class as the "cure" class. In landmark analysis, we incorporate patients' biomarker information up to the landmark time to gain efficiency. Computer simulations show that both methods adjust for measurement error, and that JMLC outperforms landmark analysis when the functional form of the biomarker profile is correctly specified. In our second project, we describe how to carry out dynamic prediction with the pattern mixture model, in which each patient is classified by his/her time to aGVHD, and patients in the same group share the same mean biomarker profile. The pattern mixture model is easy to fit and straightforward to interpret. Simulations indicate that it limits the loss of prediction accuracy. In our third project, we incorporate censored cases to generalize the pattern mixture model of the second project. Simulation results demonstrate that this generalized pattern mixture model accurately estimates the marginal pattern probabilities and thus yields better early predictions than predictions that ignore censored observations. In our fourth project, we explain the use of the parametric bootstrap for selecting the number of latent classes in JMLC. Compared with standard information-based criteria for model selection in JMLC, our parametric bootstrap likelihood ratio test (LRT) controls the Type I error well while maintaining sufficient power. We also propose two sequential early-stopping rules to relieve the computational burden of the bootstrap.
    Ph.D. dissertation, Biostatistics, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/144104/1/yumeng_1.pd
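    The parametric-bootstrap LRT of the fourth project can be sketched with a plain Gaussian mixture standing in for the full joint latent-class model: fit k and k+1 classes, simulate data sets from the k-class fit to form the null distribution of the test statistic, and stop at the first non-significant k. The model, data, and bootstrap size here are assumptions.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def lrt_stat(X, k):
            """2 * (loglik of k+1 classes - loglik of k classes); returns the
            statistic and the null (k-class) fit for simulating bootstrap data."""
            ll = lambda m: m.score(X) * len(X)   # score() is mean loglik per sample
            g0 = GaussianMixture(k, n_init=3, random_state=0).fit(X)
            g1 = GaussianMixture(k + 1, n_init=3, random_state=0).fit(X)
            return 2 * (ll(g1) - ll(g0)), g0

        def bootstrap_pvalue(X, k, B=99):
            """Parametric bootstrap null distribution of the LRT statistic;
            its asymptotic form is nonstandard for mixtures, hence the bootstrap."""
            t_obs, g0 = lrt_stat(X, k)
            exceed = sum(lrt_stat(g0.sample(len(X))[0], k)[0] >= t_obs for _ in range(B))
            return (exceed + 1) / (B + 1)

        # illustrative sequential test: two true classes
        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(c, 1.0, (150, 2)) for c in (0.0, 5.0)])
        for k in (1, 2, 3):
            p = bootstrap_pvalue(X, k)
            print(f"H0: {k} classes vs {k + 1}: p = {p:.3f}")
            if p > 0.05:        # first non-significant k: stop early
                break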