5,568 research outputs found

    Clinical applications of personalized medicine: a new paradigm and challenge

    Personalized medicine is an emerging and rapidly developing approach to clinical practice that uses new technologies to support decisions regarding the prediction, prevention, diagnosis and treatment of disease. The continuing evolution of technology and the developments in molecular diagnostics and genomic analysis have deepened the understanding and interpretation of the human genome and exome, allowing a "personalized" approach to clinical care, so that the concepts of "Systems Medicine" and "Systems Biology" are increasingly relevant. The purpose of this study is to evaluate personalized medicine in terms of its indications and benefits, current clinical applications and future perspectives, as well as its issues and health care implications. A careful review of the scientific literature in this field highlighted the applicability and usefulness of this new medical approach, as well as the fact that personalized medicine strategies are being adopted in a growing number of fields of application.

    Ensuring sample quality for biomarker discovery studies - Use of ICT tools to trace biosample life-cycle

    The growing demand for personalized medicine has marked the transition from an empirical medicine to a molecular one, aimed at predicting safer and more effective medical treatment for every patient while minimizing adverse effects. This transition has emphasized the importance of biomarker discovery studies and has given sample availability a crucial role in biomedical research. Accordingly, interest in biobank science has grown concomitantly. In biobanks, biological material and its accompanying data are collected, handled and stored in accordance with standard operating procedures (SOPs) and existing legislation. Sample quality is ensured by adherence to SOPs, and the whole sample life-cycle can be recorded by innovative tracking systems that employ information technology (IT) tools to monitor storage conditions and characterize vast amounts of data. All of the above will ensure proper sample exchangeability among research facilities and will represent the starting point of all future personalized-medicine-based clinical trials.
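
    The abstract describes the life-cycle tracking systems only at a high level; as a loose illustration, the Python sketch below models a biosample record with an append-only audit trail of handling events and storage-condition readings. All class and field names (Biosample, LifecycleEvent, sop_id, storage_temp_c) are hypothetical, since no schema is specified in the source.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    from typing import List, Optional

    @dataclass
    class LifecycleEvent:
        timestamp: datetime
        action: str                      # e.g. "collected", "aliquoted", "thawed"
        operator: str
        storage_temp_c: Optional[float] = None   # reading from a storage sensor

    @dataclass
    class Biosample:
        sample_id: str
        sop_id: str                      # SOP the sample is handled under
        events: List[LifecycleEvent] = field(default_factory=list)

        def record(self, action: str, operator: str,
                   storage_temp_c: Optional[float] = None) -> None:
            # Append-only audit trail: events are added, never rewritten,
            # so the whole life-cycle stays traceable.
            self.events.append(LifecycleEvent(
                datetime.now(timezone.utc), action, operator, storage_temp_c))

    # Example: trace a sample from collection into a -80 C freezer.
    s = Biosample(sample_id="BS-0001", sop_id="SOP-12")
    s.record("collected", operator="tech-07")
    s.record("stored", operator="tech-07", storage_temp_c=-80.0)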

    Privacy and Accountability in Black-Box Medicine

    Black-box medicine—the use of big data and sophisticated machine learning techniques for health-care applications—could be the future of personalized medicine. Black-box medicine promises to make it easier to diagnose rare diseases and conditions, identify the most promising treatments, and allocate scarce resources among different patients. But to succeed, it must overcome two separate, but related, problems: patient privacy and algorithmic accountability. Privacy is a problem because researchers need access to huge amounts of patient health information to generate useful medical predictions. And accountability is a problem because black-box algorithms must be verified by outsiders to ensure they are accurate and unbiased, but this means giving outsiders access to this health information. This article examines the tension between the twin goals of privacy and accountability and develops a framework for balancing that tension. It proposes three pillars for an effective system of privacy-preserving accountability: substantive limitations on the collection, use, and disclosure of patient information; independent gatekeepers regulating information sharing between those developing and verifying black-box algorithms; and information-security requirements to prevent unintentional disclosures of patient information. The article examines and draws on a similar debate in the field of clinical trials, where disclosing information from past trials can lead to new treatments but also threatens patient privacy.

    Translational Research in the Era of Precision Medicine: Where We Are and Where We Will Go

    The advent of Precision Medicine has globally revolutionized the approach to translational research, suggesting a patient-centric vision in which therapeutic choices are driven by the identification of specific predictive biomarkers of response, so as to avoid ineffective therapies and reduce adverse effects. The spread of "multi-omics" analysis and the use of sensors, together with the ability to acquire clinical, behavioral, and environmental information on a large scale, will allow the digitization of each person's state of health or disease and the creation of a global health management system capable of generating real-time knowledge and new opportunities for prevention and therapy for the individual (high-definition medicine). Translational applications based on real-world data represent a promising alternative to traditional evidence-based medicine (EBM) approaches, which rely on randomized clinical trials to test a selected hypothesis. Multi-modality data integration is necessary, for example, in precision oncology, where an Avatar interface allows several simulations to be run in order to define the best therapeutic scheme for each cancer patient.

    Exact and efficient top-K inference for multi-target prediction by querying separable linear relational models

    Many complex multi-target prediction problems that concern large target spaces are characterised by a need for efficient prediction strategies that avoid computing predictions for all targets explicitly. Examples of such problems emerge in several subfields of machine learning, such as collaborative filtering, multi-label classification, dyadic prediction and biological network inference. In this article we analyse efficient and exact algorithms for computing the top-K predictions in the above problem settings, using a general class of models that we refer to as separable linear relational models. We show how to use these inference algorithms, which are modifications of well-known information retrieval methods, in a variety of machine learning settings. Furthermore, we study the possibility of scoring items incompletely while still retaining an exact top-K retrieval. Experimental results in several application domains reveal that the so-called threshold algorithm is very scalable, often performing many orders of magnitude more efficiently than the naive approach.
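
    To make the retrieval strategy concrete, here is a minimal Python sketch of a Fagin-style threshold algorithm for a separable linear score s(q, t) = q · w_t, under the simplifying assumption of non-negative query weights and feature values (otherwise the threshold bound below is not a valid upper bound). The names and the NumPy-based setup are illustrative, not the authors' implementation.

    import heapq
    import numpy as np

    def threshold_topk(q, T, k):
        # Fagin-style threshold algorithm for the separable linear score
        # s(t) = q . T[t]. Assumes q >= 0 and T >= 0 so that the threshold
        # tau upper-bounds the score of every still-unseen target.
        n, d = T.shape
        # One list per feature: target indices sorted by descending value.
        lists = [np.argsort(-T[:, j]) for j in range(d)]
        heap, seen = [], set()          # min-heap holding the current top-k
        for depth in range(n):
            for j in range(d):          # sorted access, round-robin over lists
                t = int(lists[j][depth])
                if t not in seen:
                    seen.add(t)
                    s = float(q @ T[t])           # random access: full score
                    if len(heap) < k:
                        heapq.heappush(heap, (s, t))
                    elif s > heap[0][0]:
                        heapq.heapreplace(heap, (s, t))
            # Best score any still-unseen target could achieve at this depth.
            tau = sum(q[j] * T[lists[j][depth], j] for j in range(d))
            if len(heap) == k and heap[0][0] >= tau:
                break                   # exact top-k certified early
        return sorted(heap, reverse=True)

    # Tiny usage example with random non-negative data.
    rng = np.random.default_rng(0)
    T = rng.random((1000, 8))           # 1000 targets, 8 features
    q = rng.random(8)                   # query weights
    print(threshold_topk(q, T, k=5))    # [(score, target_index), ...]

    Because the loop stops as soon as no unseen target can beat the current k-th best score, the exact top-K is certified without scoring every target, which is where orders-of-magnitude speed-ups over the naive full scan can come from.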