
    MicroRNA-related sequence variations in human cancers

    MicroRNAs are emerging as one of the most promising fields in basic and translational research, helping to explain the pathogenesis of numerous human diseases and providing excellent tools for their management. This review considers the effects of microRNA sequence variations and their implication in the pathogenesis of, and predisposition to, human cancers. Although the role of microRNAs remains to be fully elucidated, functional and population studies indicate that microRNA variants are important factors underlying the process of carcinogenesis. Further understanding of the cellular and molecular basis of microRNA action will lead to the identification of new target genes and microRNA-regulated pathways. As a consequence, novel models of cancer pathogenesis can be proposed and serve as a basis for the development of new prognostic and diagnostic tools for human cancers.

    Confidential Boosting with Random Linear Classifiers for Outsourced User-generated Data

    User-generated data is crucial to predictive modeling in many applications. With a web/mobile/wearable interface, a data owner can continuously record data generated by distributed users and build various predictive models from the data to improve their operations, services, and revenue. Due to the large size and evolving nature of user data, data owners may rely on public cloud service providers (Cloud) for storage and computation scalability. However, exposing sensitive user-generated data and advanced analytic models to Cloud raises privacy concerns. We present a confidential learning framework, SecureBoost, for data owners who want to learn predictive models from aggregated user-generated data but offload the storage and computational burden to Cloud, without having to worry about protecting the sensitive data. SecureBoost allows users to submit encrypted or randomly masked data directly to the designated Cloud. Our framework utilizes random linear classifiers (RLCs) as the base classifiers in the boosting framework to dramatically simplify the design of the proposed confidential boosting protocols while preserving model quality. A Cryptographic Service Provider (CSP) assists the Cloud's processing, reducing the complexity of the protocol constructions. We present two constructions of SecureBoost, HE+GC and SecSh+GC, which use combinations of homomorphic encryption, garbled circuits, and random masking to achieve both security and efficiency. For a boosted model, the Cloud learns only the RLCs and the CSP learns only the weights of the RLCs. Finally, the data owner collects the two parts to obtain the complete model. We conduct extensive experiments to understand the quality of RLC-based boosting and the cost distribution of the constructions. Our results show that SecureBoost can efficiently learn high-quality boosting models from protected user-generated data.
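    To make the base-learner idea concrete, below is a minimal plaintext sketch of AdaBoost-style boosting with random linear classifiers; the cryptographic layer (homomorphic encryption, garbled circuits, the CSP protocol) is omitted entirely, and all function names are illustrative rather than taken from the paper.

```python
import numpy as np

def rlc_boost(X, y, n_rounds=50, seed=0):
    """Boosting with random linear classifiers (RLCs) as weak learners.

    Plaintext sketch only: each base learner is a random hyperplane
    h(x) = sign(w.x + b), so boosting merely has to learn the weights
    alpha, which is what keeps the confidential protocols simple.
    Labels y are assumed to be in {-1, +1}.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    D = np.full(n, 1.0 / n)              # example weights
    learners, alphas = [], []
    for _ in range(n_rounds):
        w, b = rng.standard_normal(d), rng.standard_normal()
        h = np.sign(X @ w + b)
        err = D[h != y].sum()
        if err > 0.5:                    # flip so the RLC beats chance
            w, b, h, err = -w, -b, -h, 1.0 - err
        err = np.clip(err, 1e-10, 1.0 - 1e-10)
        alpha = 0.5 * np.log((1.0 - err) / err)
        D *= np.exp(-alpha * y * h)      # AdaBoost reweighting
        D /= D.sum()
        learners.append((w, b))
        alphas.append(alpha)
    return learners, np.array(alphas)

def rlc_predict(X, learners, alphas):
    votes = np.array([np.sign(X @ w + b) for w, b in learners])
    return np.sign(alphas @ votes)
```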

    Implicitly Constrained Semi-Supervised Least Squares Classification

    We introduce a novel semi-supervised version of the least squares classifier. This implicitly constrained least squares (ICLS) classifier minimizes the squared loss on the labeled data over the set of parameters implied by all possible labelings of the unlabeled data. Unlike other discriminative semi-supervised methods, our approach does not introduce explicit additional assumptions into the objective function, but leverages implicit assumptions already present in the choice of the supervised least squares classifier. We show that this approach can be formulated as a quadratic programming problem and that its solution can be found using a simple gradient descent procedure. We prove that, in a certain sense, our method never leads to performance worse than the supervised classifier. Experimental results on benchmark datasets corroborate this theoretical result in the multidimensional case, also in terms of the error rate. Comment: 12 pages, 2 figures, 1 table. The Fourteenth International Symposium on Intelligent Data Analysis (2015), Saint-Etienne, France.
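    A rough sketch of the implicit constraint, assuming a {0, 1} label encoding: soft labels q for the unlabeled points are adjusted so that the least squares solution they imply fits the labeled data best. The projected-gradient loop below is an illustrative simplification, not the paper's exact quadratic programming formulation.

```python
import numpy as np

def icls_fit(X_l, y_l, X_u, n_iter=500, lr=0.01):
    """Implicitly constrained least squares (ICLS), sketched.

    For soft labels q in [0, 1]^u, beta(q) is the ordinary least squares
    fit on the labeled and unlabeled data combined; we pick the q whose
    implied beta(q) has the lowest squared loss on the labeled data alone.
    """
    X = np.vstack([X_l, X_u])
    P = np.linalg.pinv(X)                     # beta(q) = P @ [y_l; q]
    q = np.full(X_u.shape[0], 0.5)            # uninformative start
    for _ in range(n_iter):
        beta = P @ np.concatenate([y_l, q])
        resid = X_l @ beta - y_l              # labeled-data residuals
        # chain rule: d(loss)/dq = (d beta/d q)^T X_l^T resid
        grad = P[:, len(y_l):].T @ (X_l.T @ resid)
        q = np.clip(q - lr * grad, 0.0, 1.0)  # project back onto [0, 1]
    return P @ np.concatenate([y_l, q])       # final parameter vector
```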

    Effectiveness of preoperative planning in the restoration of balance and view in ankylosing spondylitis

    Object. The object of this study was to assess the effectiveness of preoperative planning in the restoration of balance and view angle in patients treated with lumbar osteotomy for ankylosing spondylitis (AS). Methods. The authors prospectively analyzed 8 patients with a thoracolumbar kyphotic deformity due to AS that was treated using a closing wedge osteotomy (CWO) of the lumbar spine to correct sagittal imbalance and restore horizontal view. Preoperative planning to predict postoperative balance, defined by the sagittal vertical axis (SVA) and the sacral endplate angle (SEA), and the view angle, defined by the chin-brow to vertical angle (CBVA), was performed using the ASKyphoplan computational program. Results. All patients were treated with a CWO at level L-4 and improved in balance and view angle. The mean correction angle was 35° (range 24-47°). The postoperative SEA improved from 21° to 36°, for a mean correction of 15°. In addition, the SVA and CBVA improved significantly. Note, however, that the postoperative results did not exactly reflect the predicted values of the analyzed parameters. Conclusions. Preoperative planning for the restoration of balance and view angle in AS improves understanding of the biomechanical and clinical effects of a correction osteotomy of the lumbar spine. The adaptation of basic clinical and biomechanical principles to restore balance is advised, in such a way that the individual SEA is corrected by 15° (maximum 40°) in relation to the horizon and C-7 is balanced exactly above the posterosuperior corner of the sacrum.
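    The planning rule stated in the conclusions lends itself to a one-line calculation. The sketch below simply applies that rule (correct the individual SEA by 15°, to a maximum of 40° relative to the horizon); it illustrates only the SEA target, not the full ASKyphoplan prediction of SVA and CBVA.

```python
def planned_sea_correction(sea_pre_deg, gain_deg=15.0, sea_cap_deg=40.0):
    """Planned SEA correction per the rule quoted in the conclusions:
    raise the individual sacral endplate angle (SEA) by 15 degrees,
    capped at 40 degrees relative to the horizon."""
    sea_post = min(sea_pre_deg + gain_deg, sea_cap_deg)
    return sea_post - sea_pre_deg

# Example with the series' mean preoperative SEA of 21 degrees:
print(planned_sea_correction(21.0))  # -> 15.0 degrees
```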

    Optimal treatment allocations in space and time for on-line control of an emerging infectious disease

    A key component in controlling the spread of an epidemic is deciding where, when and to whom to apply an intervention. We develop a framework for using data to inform these decisions in real time. We formalize a treatment allocation strategy as a sequence of functions, one per treatment period, that map up-to-date information on the spread of an infectious disease to a subset of locations where treatment should be allocated. An optimal allocation strategy optimizes some cumulative outcome, e.g. the number of uninfected locations, the geographic footprint of the disease or the cost of the epidemic. Estimation of an optimal allocation strategy for an emerging infectious disease is challenging because spatial proximity induces interference between locations, the number of possible allocations is exponential in the number of locations, and because disease dynamics and intervention effectiveness are unknown at outbreak. We derive a Bayesian on-line estimator of the optimal allocation strategy that combines simulation-optimization with Thompson sampling. The estimator proposed performs favourably in simulation experiments. This work is motivated by and illustrated using data on the spread of white nose syndrome, which is a highly fatal infectious disease devastating bat populations in North America.
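    The on-line estimator can be pictured as a per-period loop, sketched below under heavy simplification: posterior_draw, simulate, and the candidate allocation set are all placeholders, since the paper's spatio-temporal disease model and its simulation-optimization routine are not specified in the abstract.

```python
import numpy as np

def thompson_allocate(posterior_draw, simulate, candidate_allocs, n_sims=100):
    """One treatment period of a Thompson-sampling allocation loop.

    posterior_draw() samples disease-dynamics parameters from the current
    posterior; simulate(theta, alloc) returns a simulated cumulative
    outcome (e.g. the number of uninfected locations) under an allocation.
    """
    theta = posterior_draw()          # Thompson step: act as if the sampled
                                      # dynamics were the truth
    best_alloc, best_value = None, -np.inf
    for alloc in candidate_allocs:    # simulation-optimization over a
        value = np.mean([simulate(theta, alloc) for _ in range(n_sims)])
        if value > best_value:        # tractable set of candidate subsets
            best_alloc, best_value = alloc, value
    return best_alloc
```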

    Transformation Consistency Regularization – A Semi-supervised Paradigm for Image-to-Image Translation

    Scarcity of labeled data has motivated the development of semi-supervised learning methods, which learn from large amounts of unlabeled data alongside a few labeled samples. Consistency regularization, which encourages a model's predictions to agree under different input perturbations, has in particular been shown to provide state-of-the-art results in a semi-supervised framework. However, most of these methods have been limited to classification and segmentation applications. We propose Transformation Consistency Regularization, which addresses the more challenging setting of image-to-image translation, a setting that remains unexplored by semi-supervised algorithms. The method introduces a diverse set of geometric transformations and enforces the model's predictions for unlabeled data to be invariant to those transformations. We evaluate the efficacy of our algorithm on three different applications: image colorization, denoising, and super-resolution. Our method is significantly data efficient, requiring only around 10-20% of the labeled samples to achieve image reconstructions similar to its fully supervised counterpart. Furthermore, we show the effectiveness of our method in video processing applications, where knowledge from a few frames can be leveraged to enhance the quality of the rest of the movie.
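    The consistency term can be sketched in a few lines. In the sketch below, each transform t is a callable geometric transformation applicable to image batches, and detaching the clean prediction as the target is an assumption of this sketch, not necessarily the paper's exact loss.

```python
import torch

def tcr_loss(model, x_unlabeled, transforms):
    """Transformation consistency term for one unlabeled batch.

    For each geometric transform t, penalize the gap between
    model(t(x)) and t(model(x)): the translation model's output should
    transform the same way as its input does.
    """
    with torch.no_grad():
        y = model(x_unlabeled)        # clean prediction used as the target
    loss = 0.0
    for t in transforms:              # e.g. rotations, flips, rescalings
        loss = loss + torch.mean((model(t(x_unlabeled)) - t(y)) ** 2)
    return loss / len(transforms)
```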