
    Overcoming phenotypic switching: targeting protein-protein interactions in cancer

    Alternative protein-protein interactions (PPIs) arising from mutations or post-translational modifications (PTMs), termed phenotypic switching (PS), are critical for the transmission of alternative pathogenic signals and are particularly significant in cancer. In recent years, PPIs have emerged as promising targets for rational drug design, primarily because their high specificity facilitates targeting of disease-related signaling pathways. However, obstacles exist at the molecular level, arising from the properties of the interaction interfaces and the propensity of small-molecule drugs to interact with more than one cleft surface. The difficulty of identifying small molecules that act as activators or inhibitors to counteract the biological effects of mutations raises issues that have not been encountered before. For example, small molecules can bind tightly yet fail to act as drugs, or they may bind to multiple sites (interaction promiscuity). Another obstacle is the absence of significant clefts on protein surfaces; even when a pocket is present, it may be too small, or its geometry may prevent binding. PS, which arises from oncogenic (alternative) signaling, causes drug resistance and forms the basis for the systemic robustness of tumors. In this review, the properties of PPI interfaces relevant to the design and development of targeting drugs are examined. In addition, the interactions of three tyrosine kinase inhibitors (TKIs) employed as drugs are discussed. Finally, potential novel targets of one of these drugs are identified in silico.

    A systems approach identifies co-signaling molecules of early growth response 1 transcription factor in immobilization stress

    Adaptation to stress is critical for survival. The adrenal medulla, the major source of epinephrine, plays an important role in the development of the hyperadrenergic state and the increased risk for stress-associated disorders, such as hypertension and myocardial infarction. The transcription factor Egr1 plays a central role in acute and repeated stress; however, the complexity of the response suggests that other transcription factor pathways might play equally important roles during acute and repeated stress. We therefore sought to discover such factors by applying a systems approach. Using microarrays and network analysis, we show here for the first time that the transcription factor signal transducer and activator of transcription 3 (Stat3) gene is activated in acute stress, whereas the prolactin releasing hormone (Prlh) and chromogranin B (Chgb) genes are induced in repeated immobilization stress, and that, along with Egr1, these may be critical mediators of the stress response. Our results suggest possible involvement of Stat3 and Prlh/Chgb up-regulation in the transition from short-term to repeated stress activation.

    Leptin as a critical regulator of hepatocellular carcinoma development through modulation of human telomerase reverse transcriptase

    Background: Numerous epidemiological studies have documented that obesity is associated with hepatocellular carcinoma (HCC). The aim of this study was to investigate the biological actions regulated by leptin, the obesity biomarker molecule, and its receptors in HCC, and the correlation between leptin and human telomerase reverse transcriptase (hTERT), a known mediator of cellular immortalization. Methods: We investigated the relationship between leptin, leptin receptor and hTERT mRNA expression in HCC and healthy liver tissue samples. In HepG2 cells, a chromatin immunoprecipitation assay was used to study the signal transducer and activator of transcription-3 (STAT3) and myc/mad/max transcription factors downstream of leptin that could be responsible for hTERT regulation. Flow cytometry was used to evaluate cell cycle modifications and MMP1, 9 and 13 expression after treatment of HepG2 cells with leptin. Leptin expression was blocked using siRNA against leptin and transfection with liposomes. Results: We showed, for the first time, that leptin expression is highly correlated with hTERT expression levels in HCC liver tissues. We also demonstrated in HepG2 cells that leptin-induced up-regulation of hTERT and telomerase activity (TA) was mediated through binding of STAT3 and Myc/Max/Mad network proteins to the hTERT promoter. We also found that leptin could affect hepatocellular carcinoma progression and invasion through its interaction with cytokines and matrix metalloproteinases (MMPs) in the tumorigenic microenvironment. Furthermore, we showed that histone modification contributes to leptin gene regulation in HCC. Conclusions: We propose that leptin is a key regulator of the malignant properties of hepatocellular carcinoma cells through modulation of hTERT, a critical player in oncogenesis.

    FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare

    Despite major advances in artificial intelligence (AI) for medicine and healthcare, the deployment and adoption of AI technologies remain limited in real-world clinical practice. In recent years, concerns have been raised about the technical, clinical, ethical and legal risks associated with medical AI. To increase real-world adoption, it is essential that medical AI tools are trusted and accepted by patients, clinicians, health organisations and authorities. This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare. The FUTURE-AI consortium was founded in 2021 and currently comprises 118 interdisciplinary experts from 51 countries representing all continents, including AI scientists, clinicians, ethicists, and social scientists. Over a two-year period, the consortium defined guiding principles and best practices for trustworthy AI through an iterative process comprising an in-depth literature review, a modified Delphi survey, and online consensus meetings. The FUTURE-AI framework was established based on six guiding principles for trustworthy AI in healthcare, i.e., Fairness, Universality, Traceability, Usability, Robustness and Explainability. Through consensus, a set of 28 best practices were defined, addressing technical, clinical, legal and socio-ethical dimensions. The recommendations cover the entire lifecycle of medical AI, from design, development and validation to regulation, deployment, and monitoring. FUTURE-AI is a risk-informed, assumption-free guideline which provides a structured approach for constructing medical AI tools that will be trusted, deployed and adopted in real-world practice. Researchers are encouraged to take the recommendations into account at proof-of-concept stages to facilitate the future translation of medical AI towards clinical practice.
