
    Clinical application of high throughput molecular screening techniques for pharmacogenomics.

    Genetic analysis is one of the fastest-growing areas of clinical diagnostics. Fortunately, as our knowledge of clinically relevant genetic variants rapidly expands, so does our ability to detect these variants in patient samples. Increasing demand for genetic information may necessitate the use of high-throughput diagnostic methods as part of clinically validated testing. Here we provide a general overview of our current and near-future abilities to perform large-scale genetic testing in the clinical laboratory. First, we review in detail the molecular methods used for high-throughput mutation detection, including techniques able to monitor thousands of genetic variants for a single patient or to genotype a single genetic variant for thousands of patients simultaneously. These methods are analyzed in the context of pharmacogenomic testing in the clinical laboratory, with a focus on tests that are currently validated as well as those that hold strong promise for widespread clinical application in the near future. We further discuss the unique economic and clinical challenges posed by pharmacogenomic markers. Our ability to detect genetic variants frequently outstrips our ability to interpret them accurately in a clinical context, carrying implications both for test development and for introduction into patient-management algorithms. These complexities must be taken into account prior to the introduction of any pharmacogenomic biomarker into routine clinical testing.

    Analysis and Computational Dissection of Molecular Signature Multiplicity

    Molecular signatures are computational or mathematical models created to diagnose disease and other phenotypes and to predict clinical outcomes and response to treatment. It is widely recognized that molecular signatures constitute one of the most important translational and basic-science developments enabled by recent high-throughput molecular assays. A perplexing phenomenon that characterizes high-throughput data analysis is the ubiquitous multiplicity of molecular signatures. Multiplicity is a special form of data-analysis instability in which different analysis methods used on the same data, or different samples from the same population, lead to different but apparently maximally predictive signatures. This phenomenon has far-reaching implications for biological discovery and for the development of next-generation patient diagnostics and personalized treatments. Currently the causes and interpretation of signature multiplicity are unknown, and several, often contradictory, conjectures have been made to explain it. We present a formal characterization of signature multiplicity and a new efficient algorithm that offers theoretical guarantees for extracting the set of maximally predictive and non-redundant signatures independent of distribution. The new algorithm identifies exactly the set of optimal signatures in controlled experiments and yields signatures with significantly better predictivity and reproducibility than previous algorithms in human microarray gene-expression datasets. Our results shed light on the causes of signature multiplicity, provide computational tools for studying it empirically, and introduce a framework for in silico bioequivalence of this important new class of diagnostic and personalized-medicine modalities.
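
    To make signature multiplicity concrete, the short Python sketch below (our illustration, not the paper's algorithm) builds a synthetic dataset in which one informative feature has a near-duplicate, enumerates every two-feature "signature", and keeps those whose cross-validated AUC is statistically tied with the best. Because the duplicate can stand in for the original feature, several distinct signatures emerge as equally and maximally predictive.

        # Illustrative sketch of signature multiplicity on synthetic data.
        # All names and the 0.01 "tie" tolerance are our assumptions.
        import itertools
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n = 300
        informative = rng.normal(size=(n, 2))               # two truly informative features
        y = (informative.sum(axis=1) + rng.normal(scale=0.5, size=n) > 0).astype(int)
        duplicate = informative[:, [0]] + rng.normal(scale=0.05, size=(n, 1))  # near-copy of feature 0
        noise = rng.normal(size=(n, 3))                     # irrelevant features
        X = np.hstack([informative, duplicate, noise])      # columns 0-5

        scores = {}
        for sig in itertools.combinations(range(X.shape[1]), 2):
            auc = cross_val_score(LogisticRegression(), X[:, list(sig)], y,
                                  cv=5, scoring="roc_auc").mean()
            scores[sig] = auc

        best = max(scores.values())
        tied = [s for s, a in scores.items() if a >= best - 0.01]
        # Typically both (0, 1) and (1, 2) survive: column 2 duplicates column 0,
        # so two different signatures are (near-)maximally predictive.
        print("maximally predictive signatures:", tied)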

    Personalized medicine—a modern approach for the diagnosis and management of hypertension

    The main goal of treating hypertension is to reduce blood pressure to physiological levels and thereby prevent the risk of cardiovascular disease and hypertension-associated target-organ damage. Despite reductions in major risk factors and the availability of a plethora of effective antihypertensive drugs, control of blood pressure to target values remains poor, owing to multiple factors including apparent drug resistance and lack of adherence. One explanation for this problem is the current reductionist, ‘trial-and-error’ approach to the management of hypertension, which may oversimplify the complex nature of the disease and pay too little attention to the heterogeneity of its pathophysiology and clinical presentation. Taking into account specific risk factors, genetic phenotype, pharmacokinetic characteristics, and other features unique to each patient would allow a personalized approach to managing the disease. Personalized medicine therefore represents the tailoring of medical approach and treatment to the individual characteristics of each patient and is expected to become the paradigm of future healthcare. The advancement of systems-biology research and the rapid development of high-throughput technologies, as well as the characterization of the different omics, have contributed to a shift in modern biological and medical research from traditional hypothesis-driven designs toward data-driven studies, and have facilitated the evolution of personalized or precision medicine for chronic diseases such as hypertension.

    EPMA position paper in cancer: current overview and future perspectives

    At present, a radical shift in cancer treatment is occurring in terms of predictive, preventive, and personalized medicine (PPPM). Individual patients will participate in more aspects of their healthcare. During the development of PPPM, many rapid, specific, and sensitive new methods for earlier detection of cancer will result in more efficient management of the patient and hence a better quality of life. Coordination of the various activities among different healthcare professionals in primary, secondary, and tertiary care requires well-defined competencies, implementation of training and educational programs, sharing of data, and harmonized guidelines. In this position paper, the current knowledge needed to understand cancer predisposition and risk factors, the cellular biology of cancer, predictive markers and treatment outcome, improvements in screening and diagnostic technologies, and the provision of better drug-development solutions are discussed in the context of a better implementation of personalized medicine. Recognition of the major risk factors for cancer initiation is the key to preventive strategies (EPMA J. 4(1):6, 2013). Of interest, cancer-predisposing syndromes, in particular the monogenic subtypes that lead to cancer progression, are well defined, and the focus should be on implementation strategies that identify individuals at risk and allow preventive measures and early screening/diagnosis. Implementation of such measures is hindered by improper use of the data, with breaches of data protection among the risks that must be tightly controlled. Population screening requires in-depth cost-benefit analysis to justify healthcare costs, and the parameters screened should provide information that allows an actionable and deliverable solution for better healthcare provision.

    Review of precision cancer medicine: Evolution of the treatment paradigm.

    In recent years, biotechnological breakthroughs have led to the identification of complex and unique biologic features associated with carcinogenesis. Tumor and cell-free DNA profiling, immune markers, and proteomic and RNA analyses are used to identify these characteristics for the optimization of anticancer therapy in individual patients. Consequently, clinical trials have evolved, shifting from tumor-type-centered to gene-directed, histology-agnostic studies with innovative adaptive designs tailored to biomarker profiling, with the goal of improving treatment outcomes. A plethora of precision-medicine trials have been conducted. The majority of these trials demonstrated that matched therapy is associated with superior outcomes compared with non-matched therapy, both across tumor types and in specific cancers. To improve the implementation of precision medicine, this approach should be used early in the course of the disease, and patients should have complete tumor profiling and access to effective matched therapy. To overcome the complexity of tumor biology, clinical trials combining gene-targeted therapy with immune-targeted approaches (e.g., checkpoint blockade, personalized vaccines, and/or chimeric antigen receptor T cells), hormonal therapy, chemotherapy, and/or novel agents should be considered. These studies should target dynamic changes in tumor biologic abnormalities, eliminate minimal residual disease, and eradicate significant subclones that confer resistance to treatment. Mining and expansion of real-world data, facilitated by advanced computer data-processing capabilities, may contribute to the validation of information used to predict new applications for medicines. In this review, we summarize the clinical trials and discuss challenges and opportunities to accelerate the implementation of precision oncology.

    Personalized medicine: the impact on chemistry

    An effective strategy for personalized medicine requires a major conceptual change in the development and application of therapeutics. In this article, we argue that further advances in this field should be made with reference to another conceptual shift, that of network pharmacology. We examine the intersection of personalized medicine and network pharmacology to identify strategies for the development of personalized therapies that are fully informed by network pharmacology concepts. This provides a framework for discussing the impact personalized medicine will have on chemistry in terms of drug discovery, formulation, and delivery; the adaptations and changes in ideology required; and the contribution chemistry is already making. New ways of conceptualizing chemistry’s relationship with medicine will lead to new approaches to drug discovery and hold promise of delivering safer and more effective therapies.

    A lab-on-a-disc platform enables serial monitoring of individual CTCs associated with tumor progression during EGFR-targeted therapy for patients with NSCLC

    Rationale: Unlike traditional biopsy, liquid biopsy, a largely non-invasive diagnostic and monitoring tool, can be performed more frequently to better track tumors and mutations over time and to validate the efficacy of a cancer treatment. Circulating tumor cells (CTCs) are considered promising liquid biopsy biomarkers; however, their use in clinical settings is limited by the high costs and low throughput of standard platforms for CTC enumeration and analysis. In this study, we used a label-free, high-throughput method for CTC isolation directly from the whole blood of patients using a standalone platform suited to clinical settings. Methods: A CTC-based liquid biopsy approach was used to examine therapeutic efficacy and emergent drug resistance via longitudinal monitoring of CTC counts, DNA mutations, and single-cell-level gene expression in a prospective cohort of 40 patients with epidermal growth factor receptor (EGFR)-mutant non-small cell lung cancer. Results: The change ratio of CTC counts was associated with tumor response detected by CT scan, whereas baseline CTC counts showed no association with progression-free or overall survival. We achieved a 100% concordance rate between tumor tissue and CTCs for the detection of EGFR mutations, including the emergence of T790M. More importantly, our data revealed the importance of analyzing the epithelial/mesenchymal signature of individual pretreatment CTCs to predict drug responsiveness. Conclusion: The fluid-assisted separation technology disc platform enables serial monitoring of CTC counts and DNA mutations, as well as unbiased molecular characterization of individual CTCs, associated with tumor progression during targeted therapy.
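
    As a minimal numeric illustration of the longitudinal readout described above, the Python snippet below computes a CTC-count change ratio for one monitored patient; the formula and the example counts are our assumptions, not the platform's published definition or cutoffs.

        def ctc_change_ratio(baseline_count: float, follow_up_count: float) -> float:
            """Follow-up CTC count relative to the pretreatment baseline."""
            if baseline_count <= 0:
                raise ValueError("baseline CTC count must be positive")
            return follow_up_count / baseline_count

        # Hypothetical patient: 18 CTCs at baseline, 6 after two cycles of therapy.
        print(f"change ratio = {ctc_change_ratio(18, 6):.2f}")  # 0.33: falling counts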

    Removing batch effects for prediction problems with frozen surrogate variable analysis

    Batch effects are responsible for the failure of promising genomic prognostic signatures, major ambiguities in published genomic results, and retractions of widely publicized findings. Batch-effect corrections have been developed to remove these artifacts, but they are designed to be used in population studies. Genomic technologies, however, are beginning to be used in clinical applications where samples are analyzed one at a time for diagnostic, prognostic, and predictive applications. There are currently no batch-correction methods that have been developed specifically for prediction. In this paper, we propose a new method called frozen surrogate variable analysis (fSVA) that borrows strength from a training set for individual-sample batch correction. We show that fSVA improves prediction accuracy in simulations and in public genomic studies. fSVA is available as part of the sva Bioconductor package.
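
    The reference implementation is the fsva() function in the sva Bioconductor package (R). As a language-neutral sketch of the "frozen" idea only, the Python code below approximates surrogate variables by principal components of the training residuals and reuses that frozen basis to clean one new sample at a time; the estimation details are our simplification, not the package's procedure.

        # Conceptual sketch of "frozen" batch correction, assuming surrogate
        # variables can be approximated by PCs of training residuals.
        import numpy as np

        def freeze_sva(train: np.ndarray, n_sv: int = 2):
            """Learn a frozen correction from a samples-by-genes training matrix."""
            mu = train.mean(axis=0)
            _, _, vt = np.linalg.svd(train - mu, full_matrices=False)
            return mu, vt[:n_sv]                    # gene means + frozen SV directions

        def apply_frozen(sample: np.ndarray, mu: np.ndarray, basis: np.ndarray):
            """Remove the frozen surrogate-variable component from one new sample."""
            centered = sample - mu
            loadings = basis @ centered             # project onto frozen directions
            return centered - basis.T @ loadings + mu

        # Usage: clean a single incoming sample without re-fitting anything.
        rng = np.random.default_rng(1)
        train = rng.normal(size=(50, 200))          # 50 training samples, 200 genes
        mu, basis = freeze_sva(train, n_sv=3)
        cleaned = apply_frozen(rng.normal(size=200), mu, basis)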