    Efficiently Approximating the Worst-Case Deadline Failure Probability under EDF


    Analysis of High Dimensional Data from Intensive Care Medicine

    As high-dimensional data are the rule rather than the exception in critical care today, it is of utmost importance to improve the acquisition, storage, modelling, and analysis of medical data, which appears feasible only with the help of bedside computers. The use of clinical information systems offers new perspectives on data recording and also poses a new challenge for statistical methodology. A graphical approach for analysing patterns in statistical time series from online monitoring systems in intensive care is proposed here as an example of a simple univariate method which admits a multivariate extension and can be combined with procedures for dimension reduction.
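
    A minimal Python sketch of the kind of simple univariate pattern analysis described above: a sliding window over an online-monitoring series is labelled as steady, upward, or downward from the slope of a least-squares fit. The window length, threshold, and heart-rate example are illustrative assumptions, not the authors' method.

        # Hypothetical univariate pattern detection for an online-monitoring
        # series (e.g., heart rate sampled once per minute).
        import numpy as np

        def classify_window(values, slope_threshold=0.5):
            """Label a window as a steady state or an upward/downward trend
            from the slope of a least-squares line fit."""
            slope = np.polyfit(np.arange(len(values)), values, 1)[0]
            if slope > slope_threshold:
                return "upward trend"
            if slope < -slope_threshold:
                return "downward trend"
            return "steady state"

        def monitor(series, window=15):
            """Slide a non-overlapping window over the series and report the
            pattern detected at each position."""
            return [classify_window(series[i:i + window])
                    for i in range(0, len(series) - window + 1, window)]

        # Toy example: a stable phase followed by an upward drift.
        rng = np.random.default_rng(0)
        hr = np.concatenate([70 + rng.normal(0, 2, 30), 70 + np.linspace(0, 20, 30)])
        print(monitor(hr))  # two 'steady state' windows, then two 'upward trend'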

    Interactive decision support in hepatic surgery

    BACKGROUND: Hepatic surgery is characterized by complicated operations with a significant peri- and postoperative risk for the patient. We developed a web-based, high-granularity research database for comprehensive documentation of all relevant variables to evaluate new surgical techniques. METHODS: To integrate this research system into the clinical setting, we designed an interactive decision support component. The objective is to provide relevant information for the surgeon and the patient to assess preoperatively the risk of a specific surgical procedure. Based on five established predictors of patient outcome, the risk assessment tool searches for similar cases in the database and aggregates the information to estimate the risk for an individual patient. RESULTS: The physician can verify the analysis and manually exclude non-matching cases based on their expertise. The analysis is visualized by means of a Kaplan-Meier plot. To evaluate the decision support component, we analyzed data on 165 patients diagnosed with hepatocellular carcinoma (period 1996–2000). The similarity search yields a two-peak distribution, indicating that there are groups of similar patients as well as singular cases quite different from the average. The results of the risk estimation are consistent with the observed survival data, but must be interpreted with caution because of the limited number of matching reference cases. CONCLUSION: Critical issues for the decision support system are clinical integration, a transparent and reliable knowledge base, and user feedback.
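
    The Python sketch below illustrates the two steps the abstract describes: retrieving similar cases by a distance over the predictor variables, then estimating survival with a Kaplan-Meier curve. The z-score-normalised Euclidean distance and all names are assumptions; the paper's five established predictors are not spelled out here.

        # Hypothetical case-based risk estimation: nearest cases by predictor
        # distance, then a product-limit (Kaplan-Meier) survival estimate.
        import numpy as np

        def similar_cases(db, query, k=20):
            """Indices of the k database rows closest to the query case,
            using a z-score-normalised Euclidean distance."""
            mu, sigma = db.mean(axis=0), db.std(axis=0) + 1e-9
            dist = np.linalg.norm((db - mu) / sigma - (query - mu) / sigma, axis=1)
            return np.argsort(dist)[:k]

        def kaplan_meier(time, event):
            """Survival curve as (time, S(t)) pairs; event=1 death, 0 censored."""
            order = np.argsort(time)
            at_risk, surv, curve = len(time), 1.0, []
            for t, e in zip(time[order], event[order]):
                if e:                       # an observed death reduces S(t)
                    surv *= (at_risk - 1) / at_risk
                curve.append((t, surv))
                at_risk -= 1                # censored cases only leave the risk set
            return curve

    The manual review step described in the results would amount to dropping physician-excluded indices from the output of similar_cases before computing the survival estimate.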

    Case-oriented computer-based-training in radiology: concept, implementation and evaluation

    BACKGROUND: Providing high-quality clinical cases is important for teaching radiology. We developed, implemented and evaluated a program for a university hospital to support this task. METHODS: The system was built with Intranet technology and connected to the Picture Archiving and Communication System (PACS). It contains cases for every user group from students to attending physicians and is structured according to the ACR code (American College of Radiology) [2]. Each department member was given an individual account with which to gather teaching cases and place completed cases in the common database. RESULTS: Over 18 months, 583 cases containing 4,136 images covering all radiological techniques were compiled, and 350 cases were placed in the common case repository. Workflow integration as well as individual interest influenced the personal effort to participate, but a growing number of cases and minor modifications of the program improved user acceptance continuously. An evaluation of 101 students showed a high level of acceptance and a particular interest in elaborate documentation. CONCLUSION: Electronic access to reference cases for all department members, anytime and anywhere, is feasible. Critical success factors are workflow integration, reliability, efficient retrieval strategies, and incentives for case authoring.
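
    As a rough illustration of the repository's structure, the Python sketch below models a teaching case as a record owned by an individual account, indexed by ACR code, and promotable to the common database. All field names are hypothetical.

        # Hypothetical record structure for a teaching case.
        from dataclasses import dataclass, field

        @dataclass
        class TeachingCase:
            author: str                 # individual account that gathered the case
            acr_code: str               # ACR classification code used for retrieval
            modality: str               # e.g., CT, MRI, ultrasound
            image_ids: list = field(default_factory=list)  # references into the PACS
            shared: bool = False        # True once placed in the common database

            def publish(self):
                """Move a completed case into the common case repository."""
                self.shared = True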

    Using data mining techniques to explore physicians' therapeutic decisions when clinical guidelines do not provide recommendations: methods and example for type 2 diabetes

    BACKGROUND: Clinical guidelines carry medical evidence to the point of practice. As evidence is not always available, many guidelines do not provide recommendations for all clinical situations encountered in practice. We propose an approach for identifying knowledge gaps in guidelines and for exploring physicians' therapeutic decisions with data mining techniques to fill these knowledge gaps. We demonstrate our method with an example in the domain of type 2 diabetes. METHODS: We analyzed the French national guidelines for the management of type 2 diabetes to identify clinical conditions that are not covered or for which the guidelines do not provide recommendations. We extracted patient records corresponding to each clinical condition from a database of type 2 diabetic patients treated at Avicenne University Hospital of Bobigny, France. We explored physicians' prescriptions for each of these profiles using the C5.0 decision-tree learning algorithm. We developed decision trees for different levels of detail of the therapeutic decision, namely the type of treatment, the pharmaco-therapeutic class, the international non-proprietary name, and the dose of each medication. We compared the rules generated with those added to a newer version of the guidelines to examine their similarity. RESULTS: We extracted 27 rules from the analysis of a database of 463 patient records. Eleven rules concerned the choice of the type of treatment and thirteen the choice of the pharmaco-therapeutic class of each drug. For the choice of the international non-proprietary name and the dose we could extract only a few rules, because the number of patient records was too low for these factors. The extracted rules showed similarities with those added to the newer version of the guidelines. CONCLUSION: Our method proved useful for completing guideline recommendations with rules learnt automatically from physicians' prescriptions. It could be used during the development of guidelines as a complementary source of practice-based knowledge. It can also be used as an evaluation tool for comparing a physician's therapeutic decisions with those recommended by a given set of clinical guidelines. The example we described showed that physician practice was in some ways ahead of the guidelines.
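
    A brief Python sketch of the rule-extraction step: a decision tree is trained on patient features to predict a therapeutic choice, and its branches are printed as rules. The paper used the C5.0 algorithm; scikit-learn's CART tree is substituted here, and the features, target, and synthetic data are purely illustrative.

        # Hypothetical rule learning for the "type of treatment" decision level.
        import numpy as np
        from sklearn.tree import DecisionTreeClassifier, export_text

        rng = np.random.default_rng(1)
        n = 463  # same order of magnitude as the analyzed database
        X = np.column_stack([
            rng.normal(7.5, 1.5, n),   # HbA1c (%)        - assumed feature
            rng.normal(29.0, 5.0, n),  # BMI (kg/m^2)     - assumed feature
            rng.integers(0, 2, n),     # renal impairment - assumed feature
        ])
        # Synthetic target: 0 = monotherapy, 1 = combination therapy.
        y = ((X[:, 0] > 8.0) | ((X[:, 1] > 32.0) & (X[:, 2] == 0))).astype(int)

        tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=20).fit(X, y)
        print(export_text(tree, feature_names=["HbA1c", "BMI", "renal"]))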

    Systematic Planning of Genome-Scale Experiments in Poorly Studied Species

    Genome-scale datasets have been used extensively in model organisms to screen for specific candidates or to predict functions for uncharacterized genes. However, despite the availability of extensive knowledge in model organisms, the planning of genome-scale experiments in poorly studied species is still based on the intuition of experts or on heuristic trials. We propose that computational and systematic approaches can be applied to drive the experiment planning process in poorly studied species, based on available data and knowledge in closely related model organisms. In this paper, we suggest a computational strategy for recommending genome-scale experiments based on their capability to interrogate diverse biological processes to enable protein function assignment. To this end, we use the data-rich functional genomics compendium of the model organism to quantify the accuracy of each dataset in predicting each specific biological process and the overlap in such coverage between different datasets. Our approach uses an optimized combination of these quantifications to recommend an ordered list of experiments for accurately annotating most proteins in the poorly studied related organisms to most biological processes, as well as a set of experiments that target each specific biological process. The effectiveness of this experiment-planning system is demonstrated for two related yeast species: the model organism Saccharomyces cerevisiae and the comparatively poorly studied Saccharomyces bayanus. Our system recommended a set of S. bayanus experiments based on an S. cerevisiae microarray data compendium. In silico evaluations estimate that less than 10% of the experiments could achieve functional coverage similar to that of the whole microarray compendium. This estimate was confirmed by performing the recommended experiments in S. bayanus, thereby significantly reducing the labor devoted to characterizing the poorly studied genome. This experiment-planning framework could readily be adapted to the design of other types of large-scale experiments as well as to other groups of organisms.
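
    The recommendation step can be sketched in Python as a greedy maximum-coverage selection: each round picks the dataset with the largest marginal gain in per-process prediction accuracy. This is a simplification of the paper's optimized combination of accuracy and overlap quantifications; the dataset and process names are placeholders.

        # Hypothetical greedy experiment recommendation. accuracy[d][p] is the
        # estimated accuracy of dataset d for biological process p, as learned
        # from the model-organism compendium (placeholder numbers below).
        def recommend(accuracy, budget):
            """Ordered list of datasets, each chosen for the largest gain in
            summed per-process coverage (best accuracy achieved so far)."""
            chosen = []
            covered = {p: 0.0 for p in next(iter(accuracy.values()))}
            for _ in range(budget):
                def gain(d):
                    return sum(max(accuracy[d][p] - covered[p], 0.0) for p in covered)
                best = max((d for d in accuracy if d not in chosen), key=gain)
                chosen.append(best)
                for p in covered:
                    covered[p] = max(covered[p], accuracy[best][p])
            return chosen

        acc = {"stress_response_array": {"ribosome biogenesis": 0.9, "meiosis": 0.2},
               "cell_cycle_array":      {"ribosome biogenesis": 0.4, "meiosis": 0.8}}
        print(recommend(acc, budget=2))  # ['cell_cycle_array', 'stress_response_array']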

    Multimodal microscopy for automated histologic analysis of prostate cancer

    BACKGROUND: Prostate cancer is the single most prevalent cancer in US men; the gold standard for its diagnosis is histologic assessment of biopsies. Manual assessment of stained tissue of all biopsies limits speed and accuracy in clinical practice and in prostate cancer research. We sought to develop a fully automated multimodal microscopy method to distinguish cancerous from non-cancerous tissue samples. METHODS: We recorded chemical data from an unstained tissue microarray (TMA) using Fourier transform infrared (FT-IR) spectroscopic imaging. Using pattern recognition, we identified epithelial cells without user input. We fused the cell-type information with the corresponding stained images commonly used in clinical practice. Extracted morphological features, optimized by a two-stage feature selection method using a minimum-redundancy-maximal-relevance (mRMR) criterion and sequential floating forward selection (SFFS), were applied to classify tissue samples as cancer or non-cancer. RESULTS: We achieved high accuracy (area under the ROC curve (AUC) >0.97) in cross-validations on each of two data sets that were stained under different conditions. When the classifier was trained on one data set and tested on the other, an AUC value of ~0.95 was observed. In the absence of IR data, the performance of the same classification system dropped both within each data set and between data sets. CONCLUSIONS: We achieved a very effective fusion of the information from two different images that provide very different types of data with different characteristics. The method is entirely transparent to the user and does not involve any adjustment or decision-making based on spectral data. By combining the IR and optical data, we achieved highly accurate classification.
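
    The Python sketch below illustrates the first (mRMR) stage of the two-stage feature selection: features are greedily chosen to maximize relevance to the class label while penalizing redundancy with features already selected. Mutual information and absolute correlation stand in for the exact criteria, and the SFFS stage is omitted; this is a simplification, not the paper's implementation.

        # Hypothetical mRMR-style greedy feature selection.
        import numpy as np
        from sklearn.feature_selection import mutual_info_classif

        def mrmr(X, y, n_select):
            """Greedily select n_select feature indices, trading off mutual
            information with the label against mean absolute correlation
            with the features chosen so far."""
            relevance = mutual_info_classif(X, y, random_state=0)
            selected = [int(np.argmax(relevance))]
            while len(selected) < n_select:
                def score(j):
                    redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                                          for s in selected])
                    return relevance[j] - redundancy
                remaining = [j for j in range(X.shape[1]) if j not in selected]
                selected.append(max(remaining, key=score))
            return selected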

    Observation of high-energy neutrinos from the Galactic plane

    The origin of high-energy cosmic rays, atomic nuclei that continuously impact Earth's atmosphere, has been a mystery for over a century. Due to deflection in interstellar magnetic fields, cosmic rays from the Milky Way arrive at Earth from random directions. However, near their sources and during propagation, cosmic rays interact with matter and produce high-energy neutrinos. We search for neutrino emission using machine learning techniques applied to ten years of data from the IceCube Neutrino Observatory. We identify neutrino emission from the Galactic plane at the 4.5σ level of significance, by comparing diffuse emission models to a background-only hypothesis. The signal is consistent with modeled diffuse emission from the Galactic plane, but could also arise from a population of unresolved point sources.

    Comment: Submitted on May 12th, 2022; Accepted on May 4th, 202
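
    As a worked illustration of how comparing a signal model to a background-only hypothesis yields a significance like the quoted 4.5σ, the Python snippet below converts a likelihood-ratio test statistic into a one-sided Gaussian significance via Wilks' theorem. The test-statistic value is a placeholder chosen to reproduce roughly 4.5σ; this is not the IceCube analysis chain.

        # Hypothetical significance calculation for a one-parameter,
        # boundary-constrained likelihood-ratio test (Wilks/Chernoff regime).
        from scipy.stats import chi2, norm

        ts = 20.25                         # placeholder: 2 * (lnL_signal - lnL_bkg)
        p_value = 0.5 * chi2.sf(ts, df=1)  # one-sided: half-chi-square with 1 dof
        sigma = norm.isf(p_value)          # convert the p-value to Gaussian sigmas
        print(f"p = {p_value:.1e}  ->  {sigma:.1f} sigma")  # ~4.5 sigma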