4,850 research outputs found

    Time for change: a new training programme for morpho-molecular pathologists?

    The evolution of cellular pathology as a specialty has always been driven by technological developments and by the clinical relevance of incorporating novel investigations into diagnostic practice. In recent years, the molecular characterisation of cancer has become crucially relevant to patient treatment, both for predictive testing and for the subclassification of certain tumours. Much of this has become possible through the availability of next-generation sequencing technologies, and whole-genome sequencing of tumours is now being rolled out into clinical practice in England via the 100,000 Genomes Project. The effective integration of cellular pathology reporting and genomic characterisation is crucial to ensure that morphological and genomic data are interpreted in the relevant context; despite this, in many UK centres molecular testing is entirely detached from cellular pathology departments. The CM-Path initiative recognises that there is a genomics knowledge and skills gap within cellular pathology that needs to be bridged through an upskilling of the current workforce and a redesign of pathology training. Bridging this gap will allow the development of an integrated 'morpho-molecular pathology' specialty, which can keep cellular pathology at the centre of cancer patient management and allow the pathology community to remain a major influence in cancer discovery while playing a driving role in the delivery of precision medicine. Here, several alternative models of pathology training designed to address this challenge are presented and appraised.

    Addendum to Informatics for Health 2017: Advancing both science and practice

    This article presents the presentation and poster abstracts that were mistakenly omitted from the original publication.

    The EDRN knowledge environment: an open source, scalable informatics platform for biological sciences research

    We describe here the knowledge environment of the Early Detection Research Network (EDRN) for Cancer. It is an open source platform built by NASA’s Jet Propulsion Laboratory with contributions from the California Institute of Technology and the Geisel School of Medicine at Dartmouth. It uses tools such as Apache OODT, Plone, and Solr, and borrows heavily from the ontological infrastructure of JPL’s Planetary Data System. It has accumulated data on hundreds of thousands of biospecimens and serves over 1,300 registered users across the National Cancer Institute (NCI). Its scalable computing infrastructure allows us to reach out to other agencies, provide homogeneous access, and deliver seamless analytics support and bioinformatics tools through community engagement.
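    As a minimal illustration of the kind of access a Solr-backed platform like this exposes, the sketch below queries a Solr core for biospecimen records over Solr's standard select API. The host, core name, and field names are assumptions for illustration only, not the EDRN's actual deployment or schema.

        # Minimal sketch: query a Solr core for biospecimen records.
        # The endpoint, core name ("biospecimens"), and field names are
        # hypothetical; the EDRN's real schema may differ.
        import requests

        SOLR_URL = "https://edrn.example.org/solr/biospecimens/select"  # hypothetical

        params = {
            "q": "organ:lung AND specimen_type:serum",  # hypothetical fields
            "fl": "id,site,collection_date",
            "rows": 20,
            "wt": "json",
        }
        resp = requests.get(SOLR_URL, params=params, timeout=30)
        resp.raise_for_status()
        for doc in resp.json()["response"]["docs"]:
            print(doc["id"], doc.get("site"), doc.get("collection_date"))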

    Cellular interactions in the tumor microenvironment: the role of secretome

    Over the past years, it has become evident that cancer initiation and progression depend on several components of the tumor microenvironment, including inflammatory and immune cells, fibroblasts, endothelial cells, adipocytes, and the extracellular matrix. These components of the tumor microenvironment and the neoplastic cells interact with each other, providing pro- and antitumor signals. Tumor-stroma communication occurs directly between cells or via a variety of secreted molecules, such as growth factors, cytokines, chemokines, and microRNAs. This secretome, which derives not only from tumor cells but also from cancer-associated stromal cells, is an important source of key regulators of the tumorigenic process. Their screening and characterization could provide useful biomarkers to improve cancer diagnosis, prognosis, and the monitoring of treatment responses.

    Big Data Analytics for Complex Systems

    The evolution of technology across all fields has led modern systems to generate vast amounts of data. Using data to extract information, make predictions, and make decisions is the current trend in artificial intelligence. Advances in big data analytics tools have made accessing and storing data easier and faster than ever, and machine learning algorithms help to identify patterns in, and extract information from, data. Current tools and machines in health care, computing, and manufacturing can generate massive amounts of raw data about their products or samples. The author of this work proposes a modern integrative system that combines big data analytics, machine learning, supercomputer resources, and measurements from industrial health machines to build a smart system that mimics the human intelligence skills of observation, detection, prediction, and decision-making. Applications of the proposed smart systems are included as case studies to highlight the contributions of each system.

    The first contribution is the ability to apply big data and deep learning technologies on production lines to diagnose incidents and take proper action. In the current era of digital industrial transformation, Industry 4.0 has been receiving research attention because it can be used to automate production-line decisions. Reconfigurable manufacturing systems (RMS) have been widely used to reduce the setup cost of restructuring production lines. However, current RMS modules are not linked to the cloud for online decision-making; these modules must connect to an online server (supercomputer) with big data analytics and machine learning capabilities. 'Online' here means that data are centralised in the cloud (supercomputer) and accessible in real time. In this study, deep neural networks are used to detect the decisive features of a product and to build a prediction model with which the iFactory makes the necessary decision about defective products. The Spark ecosystem is used to manage the access, processing, and storage of the streaming big data. This contribution is implemented as a closed cycle which, to the best of our knowledge, is the first in the literature to apply big data analysis with deep learning to a real-time manufacturing application. The code achieves a high accuracy of 97% for classifying normal versus defective items.

    The second contribution, in bioinformatics, is the ability to build supervised machine learning approaches based on patients' gene expression to predict the proper treatment for breast cancer. To personalise treatment, the machine learns the genes that are active in the patient cohort with a five-year survival period. The initial condition is that each group must undergo only one specific treatment. After learning about each group (or class), the machine can personalise the treatment of a new patient by diagnosing that patient's gene expression. The proposed model will help in the diagnosis and treatment of the patient. Future work in this area involves building a protein-protein interaction network from the genes selected for each treatment, first to analyse the motifs of the genes and then to target them with the proper drug molecules. In the learning phase, several feature-selection techniques and standard supervised classifiers are used to build the prediction model, as sketched below.
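    A minimal sketch of such a learning phase, pairing a standard feature-selection step with a standard classifier in scikit-learn. The data shapes, the four treatment classes, and the choice of k=47 selected genes (echoing the gene count mentioned below) are placeholders, not the study's actual cohort or pipeline.

        # Sketch: select discriminative genes, then train a classifier that maps
        # a patient's gene-expression profile to a treatment class.
        # X and y are random placeholders for the real cohort data.
        import numpy as np
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import Pipeline

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 5000))   # 200 patients x 5000 gene-expression values
        y = rng.integers(0, 4, size=200)   # 4 hypothetical treatment classes

        model = Pipeline([
            ("select", SelectKBest(f_classif, k=47)),  # keep the k most discriminative genes
            ("clf", LogisticRegression(max_iter=1000)),
        ])
        scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
        print(f"mean CV accuracy: {scores.mean():.3f}")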
    Most of the nodes show high performance, with accuracy, sensitivity, specificity, and F-measure around 100%.

    The third contribution is the ability to build semi-supervised learning for breast cancer survival treatment, advancing the second contribution. By understanding the relations between the classes, the machine learning phase can be designed around the similarities between classes. Here, the researcher used the Euclidean distance matrix among the survival treatment classes to build a hierarchical learning model. The distance information, learned through an unsupervised approach, helps the prediction model select classes that are far from each other, maximising the between-class distance and yielding wider class groups. The performance of this approach shows a slight improvement over the second model, and it reduces the number of discriminative genes from 47 to 37. The model in the second contribution studies each class individually, while this model focuses on the relationships between the classes and uses that information in the learning phase. Hierarchical clustering is performed to draw the borders between groups of classes before the classification models are built, and several distance measures are tested to identify the best linkages between classes (see the sketch below). Most of the nodes show high performance, with accuracy, sensitivity, specificity, and F-measure ranging from 90% to 100%.

    All the case-study models showed high performance in the prediction phase. These models can be replicated for different problems in different domains. The comprehensive models are reconfigurable and modular: a new learning phase can be plugged in at either end of an existing one, so the output of one system can serve as the input of another, and new features can be added to the input of the learning phase.
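    A minimal sketch of the hierarchical step described above: compute one centroid per treatment class, measure the pairwise Euclidean distances between centroids, and cluster the classes hierarchically so the most separable super-groups are split first. The data shapes, the six classes, and the "average" linkage are assumptions for illustration.

        # Sketch: Euclidean distances between treatment-class centroids, then
        # hierarchical clustering of the classes. X and y are placeholders.
        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage
        from scipy.spatial.distance import pdist

        rng = np.random.default_rng(1)
        X = rng.normal(size=(200, 37))   # 200 patients x 37 selected genes
        y = rng.integers(0, 6, size=200) # 6 hypothetical treatment classes

        # One centroid per class in gene-expression space.
        centroids = np.vstack([X[y == c].mean(axis=0) for c in np.unique(y)])

        # Pairwise Euclidean distances between class centroids, then hierarchical
        # linkage; "average" is one of several linkages worth comparing.
        Z = linkage(pdist(centroids, metric="euclidean"), method="average")

        # Cut the tree into two super-groups: the top split of the learning hierarchy.
        groups = fcluster(Z, t=2, criterion="maxclust")
        print("class -> super-group:", dict(enumerate(groups)))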

    Informatic system for a global tissue–fluid biorepository with a graph theory–oriented graphical user interface

    The Richard Floor Biorepository supports collaborative studies of extracellular vesicles (EVs) found in human fluids and tissue specimens. The current emphasis is on biomarkers for central nervous system neoplasms, but its structure may serve as a template for collaborative EV translational studies in other fields. The informatic system provides specimen inventory tracking, with bar codes assigned to specimens, containers, and projects; is hosted on globalized cloud computing resources; and embeds a suite of shared documents, calendars, and video-conferencing features. Clinical data are recorded in relation to molecular EV attributes and may be tagged with terms drawn from a network of externally maintained ontologies, allowing the system to expand as the field matures. We fashioned the graphical user interface (GUI) around a web-based data visualization package. The system is now in an early stage of deployment, focused mainly on specimen tracking and on clinical, laboratory, and imaging data capture in support of studies to optimize the detection and analysis of brain tumour–specific mutations. It currently includes 4,392 specimens drawn from 611 subjects, the majority with brain tumours. As EV science evolves, we plan biorepository changes to accommodate multi-institutional collaborations, proteomic interfaces, additional biofluids, revised operating procedures and kits for specimen handling, novel procedures for detecting tumour-specific EVs and extracting RNA, and changes in the taxonomy of EVs. We have used an ontology-driven data model and a web-based architecture with a graph theory–driven GUI to accommodate and stimulate the semantic web of EV science.
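    A minimal sketch of the graph-shaped data model such a GUI might sit on, with subjects, specimens, and projects as nodes and typed edges between them. All node names, attributes, and relations here are illustrative assumptions; the biorepository's actual ontology-driven schema is richer.

        # Sketch: a graph view of biorepository records. Names and attributes
        # are hypothetical, not the system's real schema.
        import networkx as nx

        G = nx.Graph()
        G.add_node("subject:611", kind="subject", diagnosis="glioblastoma")
        G.add_node("specimen:4392", kind="specimen", barcode="BC-004392", fluid="CSF")
        G.add_node("project:ev-mutations", kind="project")

        G.add_edge("subject:611", "specimen:4392", relation="donated")
        G.add_edge("specimen:4392", "project:ev-mutations", relation="assigned_to")

        # Everything linked to one subject: the neighborhood a graph GUI would render.
        print(list(G.neighbors("subject:611")))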