
    Laparoscopic single-port sleeve gastrectomy for morbid obesity: preliminary series

    Laparoscopic sleeve gastrectomy has recently been proposed as a sole bariatric procedure because of the considerable weight loss it produces in morbidly obese patients. Traditionally, laparoscopic sleeve gastrectomy requires 5-6 skin incisions to allow for placement of multiple trocars. With the introduction of single-incision laparoscopic surgery, multiple abdominal procedures have been performed through a sole umbilical incision, with good cosmetic outcomes. The purpose of our study was to evaluate the feasibility and safety of laparoscopic single-incision sleeve gastrectomy for morbid obesity.

    Mixing hetero- and homogeneous models in weighted ensembles

    The effectiveness of ensembling for improving classification performance is well documented. Broadly speaking, ensemble design can be expressed as a spectrum where at one end a set of heterogeneous classifiers model the same data, and at the other homogeneous models derived from the same classification algorithm are diversified through data manipulation. The cross-validation accuracy weighted probabilistic ensemble is a heterogeneous weighted ensemble scheme that needs reliable estimates of error from its base classifiers. It estimates error through a cross-validation process, and raises the estimates to a power to accentuate differences. We study the effects of maintaining all models trained during cross-validation on the final ensemble's predictive performance, and on the base models' and resulting ensembles' variance and robustness across datasets and resamples. We find that augmenting the ensemble through the retention of all models trained provides a consistent and significant improvement, despite reductions in the reliability of the base models' performance estimates.
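    The weighting scheme described above lends itself to a short illustration. The sketch below is a minimal, generic version of a cross-validation accuracy weighted probabilistic ensemble, assuming scikit-learn-style estimators; the base classifiers, the exponent `alpha` and the cross-validation settings are illustrative choices, not the paper's exact configuration.

```python
# Minimal sketch of a cross-validation accuracy weighted probabilistic ensemble.
# The base classifiers, the exponent `alpha` and the cross-validation settings
# are illustrative assumptions, not the paper's exact configuration.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

def fit_weighted_ensemble(models, X, y, alpha=4, cv=10):
    """Estimate each base model's accuracy by cross-validation, raise it to a
    power to accentuate differences, and use the result as its voting weight."""
    weights = []
    for model in models:
        acc = cross_val_score(model, X, y, cv=cv).mean()
        weights.append(acc ** alpha)
        model.fit(X, y)              # refit on the full training data
    return models, np.array(weights)

def predict_weighted_ensemble(models, weights, X):
    """Combine the base models' class-probability estimates by a weighted average."""
    probas = np.array([m.predict_proba(X) for m in models])  # (n_models, n_samples, n_classes)
    combined = np.tensordot(weights, probas, axes=1) / weights.sum()
    return models[0].classes_[combined.argmax(axis=1)]

# Illustrative heterogeneous ensemble:
# models, w = fit_weighted_ensemble(
#     [LogisticRegression(max_iter=1000), DecisionTreeClassifier(), KNeighborsClassifier()],
#     X_train, y_train)
# y_pred = predict_weighted_ensemble(models, w, X_test)
```

    Raising the accuracy estimate to a power exaggerates the differences between strong and weak base models, so the combined probability estimates lean more heavily on the classifiers that cross-validation judged most reliable.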

    An application of evidential networks to threat assessment

    Decision makers operating in modern defence theatres need to comprehend and reason with huge quantities of potentially uncertain and imprecise data in a timely fashion. In this paper, an automatic information fusion system is developed that aims to support a commander's decision making by providing a threat assessment, that is, an estimate of the extent to which an enemy platform poses a threat based on evidence about its intent and capability. Threat is modelled by a network of entities and relationships between them, while the uncertainties in the relationships are represented by belief functions as defined in the theory of evidence. To support the implementation of the threat assessment functionality, an efficient valuation-based reasoning scheme, referred to as an evidential network, is developed. To reduce computational overheads, the scheme performs local computations in the network by applying an inward propagation algorithm to the underlying binary join tree. This allows the dynamic nature of the external evidence, which drives the evidential network, to be taken into account by recomputing only the affected paths in the binary join tree.
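    The core operation behind belief-function reasoning of this kind is the combination of mass functions. As a minimal, generic illustration (not the paper's evidential-network implementation), the sketch below applies Dempster's rule of combination to two mass functions over a small, made-up frame of discernment; the frame and the numeric masses are purely illustrative.

```python
# Minimal sketch of Dempster's rule of combination for two mass functions over a
# small frame of discernment. The frame and the numeric masses are illustrative;
# they are not taken from the paper.
from itertools import product

def combine(m1, m2):
    """Dempster's rule: m(C) is proportional to the sum of m1(A)*m2(B) over all A, B with A∩B = C."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        c = a & b                                  # focal elements are frozensets
        if c:
            combined[c] = combined.get(c, 0.0) + wa * wb
        else:
            conflict += wa * wb                    # mass assigned to the empty set
    return {c: w / (1.0 - conflict) for c, w in combined.items()}

# Illustrative frame {hostile, neutral}: one source supports "hostile", the other is mixed.
hostile, neutral = frozenset({"hostile"}), frozenset({"neutral"})
theta = hostile | neutral                          # full frame (total ignorance)
m_intent     = {hostile: 0.6, theta: 0.4}          # evidence about intent
m_capability = {hostile: 0.5, neutral: 0.2, theta: 0.3}
print(combine(m_intent, m_capability))
```

    In the paper's scheme, combinations of this kind are carried out locally at the nodes of the binary join tree via inward propagation, rather than over one global frame.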

    Prevalence of Defaecatory Disorders in Morbidly Obese Patients Before and After Bariatric Surgery

    BACKGROUND: The prevalence of obesity is increasing worldwide and has lately reached epidemic proportions in western countries. Several epidemiological studies have consistently shown that both overweight and obesity are important risk factors for the development of various functional defaecatory disorders (DDs), including faecal incontinence and constipation. However, data on their prevalence, as well as on the effectiveness of bariatric surgery in correcting them, are scant. The primary objective of this study was to estimate the effect of morbid obesity on DDs in a cohort of patients listed for bariatric surgery. We also evaluated preliminary results of the effects of sleeve gastrectomy on these disorders. PATIENTS AND METHODS: A questionnaire-based study was proposed to morbidly obese patients undergoing bariatric surgery. Data included demographics, past medical, surgical and obstetric histories, as well as obesity-related co-morbidities. The Wexner Constipation Score (WCS) and the Faecal Incontinence Severity Index (FISI) questionnaires were used to evaluate constipation and incontinence. For the purpose of this study, we considered a WCS ≥5 and a FISI score ≥10 to be clinically relevant. The same questionnaires were completed at 3- and 6-month follow-up after surgery. RESULTS: A total of 139 patients agreed to take part in the study; 68 underwent sleeve gastrectomy and fully satisfied our inclusion criteria with a minimum follow-up of 6 months. Overall, mean body mass index (BMI) at listing was 47 ± 7 kg/m² (range 35-67 kg/m²). Mean WCS was 4.1 ± 4 (range 0-17), while the mean FISI score (expressed as mean ± standard deviation) was 9.5 ± 9 (range 0-38). Overall, 58.9% of the patients reported DDs according to the above-mentioned scores. Twenty-eight patients (20%) had a WCS ≥5, thirty-five patients (25%) had a FISI ≥10, and 19 patients (13.7%) reported combined abnormal scores. Overall, DDs were more evident with increasing obesity grade. Mean BMI decreased significantly from 47 ± 7 to 36 ± 6 and to 29 ± 4 kg/m² at 3 and 6 months after surgery, respectively (p < 0.0001). In parallel with the decrease in BMI, the mean WCS decreased from 3.7 ± 3 to 3.1 ± 4 and to 1.6 ± 3 at 3 and 6 months, respectively (p = 0.02). Similarly, the FISI score decreased from 10 ± 8 to 3 ± 4 and to 1 ± 2 at 3 and 6 months, respectively (p = 0.0001). CONCLUSIONS: Defaecatory disorders are common in morbidly obese patients, and the risk of DDs increases with BMI. Bariatric surgery reduces DDs, mainly faecal incontinence, and these findings correlate with BMI reduction.

    Optimal control of a linear system subject to partially specified input noise

    One of the most basic problems in control theory is that of controlling a discrete-time linear system subject to uncertain noise with the objective of minimising the expectation of a quadratic cost. If one assumes the noise to be white, then solving this problem is relatively straightforward. However, white noise is arguably unrealistic: noise is not necessarily independent, and one does not always precisely know its expectation. We first recall the optimal control policy without assuming independence, and show that in this case computing the optimal control inputs becomes infeasible. As a next step, we assume only knowledge of lower and upper bounds on the conditional expectation of the noise, and prove that this approach leads to tight lower and upper bounds on the optimal control inputs. The analytical expressions that determine these bounds are strikingly similar to the usual expressions for the case of white noise.
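    For context, the "relatively straightforward" white-noise baseline mentioned above can be sketched as a finite-horizon LQR computation. The snippet below shows only that baseline, using the standard certainty-equivalence result and a backward Riccati recursion; the system matrices, costs and horizon are illustrative, and the paper's bounds for partially specified noise are not reproduced here.

```python
# Minimal sketch of the finite-horizon LQR baseline: with zero-mean white noise,
# certainty equivalence gives u_k = -K_k x_k via a backward Riccati recursion.
# Matrices, horizon and costs are illustrative only.
import numpy as np

def finite_horizon_lqr(A, B, Q, R, Qf, N):
    """Backward Riccati recursion returning the feedback gains K_0, ..., K_{N-1}."""
    P = Qf
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
        gains.append(K)
    return gains[::-1]               # reorder so gains[k] applies at time k

# Illustrative double-integrator example.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.eye(2); R = np.array([[0.1]]); Qf = 10 * np.eye(2)
gains = finite_horizon_lqr(A, B, Q, R, Qf, N=50)
x0 = np.array([1.0, 0.0])
u0 = -gains[0] @ x0                  # optimal first input under the white-noise assumption
print(u0)
```

    With only lower and upper bounds on the conditional expectation of the noise, the abstract's result is that the optimal inputs can themselves only be bounded, via expressions reported to be strikingly similar to the white-noise ones.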

    The great time series classification bake off: a review and experimental evaluation of recent algorithmic advances

    In the last five years, there have been a large number of new time series classification algorithms proposed in the literature. These algorithms have been evaluated on subsets of the 47 datasets in the University of California, Riverside time series classification archive. The archive has recently been expanded to 85 datasets, over half of which have been donated by researchers at the University of East Anglia. Aspects of previous evaluations have made comparisons between algorithms difficult. For example, several different programming languages have been used, experiments involved a single train/test split, and some used normalised data whilst others did not. The relaunch of the archive provides a timely opportunity to thoroughly evaluate algorithms on a larger number of datasets. We have implemented 18 recently proposed algorithms in a common Java framework and compared them against two standard benchmark classifiers (and each other) by performing 100 resampling experiments on each of the 85 datasets. We use these results to test several hypotheses relating to whether the algorithms are significantly more accurate than the benchmarks and each other. Our results indicate that only 9 of these algorithms are significantly more accurate than both benchmarks and that one classifier, the Collective of Transformation Ensembles, is significantly more accurate than all of the others. All of our experiments and results are reproducible: we release all of our code, results and experimental details, and we hope these experiments form the basis for more rigorous testing of new algorithms in the future.
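    To make the evaluation protocol concrete, here is a minimal sketch of a resampling experiment on a single dataset: repeated stratified train/test resamples, with accuracy recorded for a candidate classifier and a simple benchmark. The dataset and the classifiers are illustrative stand-ins written in Python; the paper's own experiments use 18 algorithms in a common Java framework over 85 datasets with 100 resamples each.

```python
# Minimal sketch of the resampling protocol: repeated stratified train/test
# resamples of one dataset, recording accuracy for a candidate and a benchmark
# classifier. The dataset and classifiers are illustrative stand-ins.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

X, y = load_digits(return_X_y=True)
candidate_acc, benchmark_acc = [], []
for seed in range(30):                      # the paper uses 100 resamples per dataset
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=seed)
    for model, scores in [(RandomForestClassifier(random_state=seed), candidate_acc),
                          (KNeighborsClassifier(n_neighbors=1), benchmark_acc)]:
        model.fit(X_tr, y_tr)
        scores.append(accuracy_score(y_te, model.predict(X_te)))

print(f"candidate mean accuracy: {np.mean(candidate_acc):.3f}")
print(f"benchmark mean accuracy: {np.mean(benchmark_acc):.3f}")
```

    Comparing mean accuracies across many resamples, rather than a single train/test split, is precisely what makes the hypothesis tests described above possible.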

    The Prevalence of Errors in Machine Learning Experiments

    Context: Conducting experiments to benchmark, evaluate and compare learning algorithms is central to machine learning research. Consequently, it is important that we conduct reliable, trustworthy experiments. Objective: We investigate the incidence of errors in a sample of machine learning experiments in the domain of software defect prediction. Our focus is simple arithmetical and statistical errors. Method: We analyse 49 papers describing 2456 individual experimental results from a previously undertaken systematic review comparing supervised and unsupervised defect prediction classifiers. We extract the confusion matrices and test for relevant constraints, e.g., the marginal probabilities must sum to one. We also check for multiple statistical significance testing errors. Results: We find that a total of 22 out of 49 papers contain demonstrable errors. Of these, 7 were statistical and 16 related to confusion matrix inconsistency (one paper contained both classes of error). Conclusions: Whilst some errors may be of a relatively trivial nature, e.g., transcription errors, their presence does not engender confidence. We strongly urge researchers to follow open science principles so that errors can more easily be detected and corrected, and thus, as a community, reduce this worryingly high error rate in our computational experiments.
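    One kind of check described above can be sketched directly: given reported confusion-matrix cells and reported summary metrics, the internal consistency of the numbers can be verified automatically. The function below is a minimal, hypothetical example of such a check for binary classification; the tolerance and the specific constraints tested are assumptions, not the paper's exact procedure.

```python
# Minimal sketch of a confusion-matrix consistency check: verify that the
# reported cells and the reported recall/precision are mutually consistent.
# The tolerance and the specific checks are illustrative assumptions.
def check_confusion_matrix(tp, fp, fn, tn, reported_recall=None,
                           reported_precision=None, tol=0.005):
    issues = []
    if min(tp, fp, fn, tn) < 0:
        issues.append("negative cell count")
    if reported_recall is not None and tp + fn > 0:
        if abs(tp / (tp + fn) - reported_recall) > tol:
            issues.append("recall inconsistent with TP and FN")
    if reported_precision is not None and tp + fp > 0:
        if abs(tp / (tp + fp) - reported_precision) > tol:
            issues.append("precision inconsistent with TP and FP")
    return issues

# A reported recall of 0.80 is inconsistent with TP=40, FN=20 (true recall ~0.67).
print(check_confusion_matrix(tp=40, fp=10, fn=20, tn=930, reported_recall=0.80))
```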

    Set-membership PHD filter

    The paper proposes a novel Probability Hypothesis Density (PHD) filter for linear systems in which the initial state, process and measurement noises are only known to be bounded (they can vary on compact sets, e.g., polytopes). This means that no probabilistic assumption is imposed on the distributions of the initial state and noises beyond the knowledge of their supports. These are the same assumptions used in set-membership estimation. By exploiting a formulation of set-membership estimation in terms of sets of probability measures, we derive the equations of the set-membership PHD filter, which consist in propagating over time compact sets that are guaranteed to include the targets' states. Numerical simulations show the effectiveness of the proposed approach and compare it with a sequential Monte Carlo PHD filter, which instead assumes that the initial state and noises have uniform distributions.
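    The set-membership ingredient can be illustrated in isolation. The sketch below propagates an axis-aligned box that is guaranteed to contain the state of a linear system with bounded process noise; it uses boxes rather than general polytopes and omits the measurement update and the PHD machinery, so it is only a minimal illustration of the prediction step under bounded-noise assumptions, with made-up matrices and bounds.

```python
# Minimal sketch of set-membership prediction for a linear system with bounded
# noise: propagate an axis-aligned box guaranteed to contain the state, using
# interval arithmetic. The paper works with general compact sets (e.g. polytopes)
# and embeds this idea in a PHD filter; matrices and bounds here are illustrative.
import numpy as np

def box_predict(A, lo, hi, w_lo, w_hi):
    """Tight box enclosure of {A x + w : lo <= x <= hi, w_lo <= w <= w_hi}."""
    A_pos, A_neg = np.maximum(A, 0.0), np.minimum(A, 0.0)
    new_lo = A_pos @ lo + A_neg @ hi + w_lo
    new_hi = A_pos @ hi + A_neg @ lo + w_hi
    return new_lo, new_hi

A = np.array([[1.0, 0.1], [0.0, 1.0]])
lo, hi = np.array([-0.5, -0.1]), np.array([0.5, 0.1])            # initial state bounds
w_lo, w_hi = np.array([-0.01, -0.01]), np.array([0.01, 0.01])    # bounded process noise
for _ in range(3):
    lo, hi = box_predict(A, lo, hi, w_lo, w_hi)
print(lo, hi)                          # bounds guaranteed to contain the true state
```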

    Inference from multinomial data based on a MLE-dominance criterion

    We consider the problem of inference from multinomial data with chances θ, subject to the a priori information that the true parameter vector θ belongs to a known convex polytope Θ. The proposed estimator has the parametrized structure of the conditional-mean estimator with a prior Dirichlet distribution, whose parameters (s, t) are suitably designed via a dominance criterion so as to guarantee, for any θ ∈ Θ, an improvement of the Mean Squared Error over the Maximum Likelihood Estimator (MLE). The solution of this MLE-dominance problem allows us to give a different interpretation of: (1) the several Bayesian estimators proposed in the literature for the problem of inference from multinomial data; (2) the Imprecise Dirichlet Model (IDM) developed by Walley [13].
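    The two estimators being compared have simple closed forms: the MLE n_i/N and the Dirichlet conditional-mean estimator (n_i + s t_i)/(N + s). The snippet below is a minimal Monte Carlo comparison of their mean squared errors for one illustrative choice of θ, s and t; the paper's actual contribution, the design of (s, t) so that dominance holds for every θ in the polytope Θ, is not reproduced here.

```python
# Minimal sketch comparing the MLE n_i / N with the Dirichlet conditional-mean
# estimator (n_i + s*t_i) / (N + s) by Monte Carlo estimation of the MSE.
# The values of theta, s, t and the sample size are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
theta = np.array([0.2, 0.3, 0.5])         # true chances (illustrative)
s, t = 2.0, np.array([1/3, 1/3, 1/3])     # prior strength and prior means (illustrative)
N, reps = 30, 20000

mse_mle = mse_bayes = 0.0
for _ in range(reps):
    n = rng.multinomial(N, theta)
    mse_mle   += np.sum((n / N - theta) ** 2)
    mse_bayes += np.sum(((n + s * t) / (N + s) - theta) ** 2)

print("MSE of MLE:                  ", mse_mle / reps)
print("MSE of Dirichlet-mean (s, t):", mse_bayes / reps)
```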