
    Raman Spectroscopy Techniques for the Detection and Management of Breast Cancer

    Breast cancer has recently become the most common cancer worldwide, and with increased incidence comes increased pressure on health services to diagnose and treat many more patients. Mortality and survival rates for this particular disease are better than for other cancer types, partly due to the facilitation of early diagnosis provided by screening programmes, including the National Health Service breast screening programme in the UK. Despite the benefits of the programme, some patients undergo negative experiences in the form of false negative mammograms, overdiagnosis and subsequent overtreatment; a small number of cancers are even induced by the use of ionising radiation. In addition, false positive mammograms cause a large number of unnecessary biopsies, incurring significant costs, both financial and in clinicians' time, and discouraging patients from attending further screening. Improvement in areas of the treatment pathway is also needed. Surgery is usually the first line of treatment for early breast cancer, with breast conserving surgery being the preferred option compared to mastectomy. This type of operation achieves the same outcome as mastectomy - removal of the tumour - while allowing the patient to retain the majority of their normal breast tissue for improved aesthetic and psychological results. Yet re-excision operations are often required when clear margins are not achieved, i.e. not all of the tumour is removed. This again has implications for cost and time, and increases the risk to the patient through additional surgery. Currently lacking in both the screening and surgical contexts is the ability to discern specific chemicals present in the breast tissue being assessed or removed. Specifically relevant to mammography is the presence of calcifications, the chemistry of which holds information indicative of pathology that cannot be accessed through x-rays.
In addition, the chemical composition of breast tumour tissue has been shown to differ from that of normal tissue in a variety of ways, one particular difference being a significant increase in water content. Raman spectroscopy is a rapid, non-ionising, non-destructive technique based on light scattering. It has been shown to discern chemical types of calcification, to detect subtleties within their spectra that indicate the malignancy status of the surrounding tissue, and to differentiate cancerous from normal breast tissue based on relative water content. This thesis presents work exploring deep Raman techniques to probe breast calcifications at depth within tissue, and a high wavenumber Raman probe to discriminate tumour from normal tissue predominantly via changes in tissue water content. The ability of transmission Raman spectroscopy to detect different masses and distributions of calcified powder inclusions within tissue phantoms was tested, and a signal profile of a similar inclusion through a tissue phantom of clinically relevant thickness was elucidated. The technique was then applied to measurements of bulk breast tissue samples from patients who gave informed consent, in an attempt to measure calcifications. Ex vivo specimens were also measured with a high wavenumber Raman probe, which found significant differences between tumour and normal tissue, largely due to water content, resulting in a classification model that achieved 77.1% sensitivity and 90.8% specificity. While calcifications were harder to detect in the ex vivo specimens, promising results were still achieved, potentially indicating a much more widespread influence of calcification in breast tissue, and obtaining useful signal from bulk human tissue is encouraging in itself.
Consequently, this work demonstrates the potential value of both deep Raman techniques and high wavenumber Raman for future breast screening and tumour margin assessment methods.
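The reported 77.1% sensitivity and 90.8% specificity are the standard confusion-matrix rates: the fraction of tumour samples correctly flagged, and the fraction of normal samples correctly passed. A minimal sketch of their computation, using illustrative labels rather than the thesis's data:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP).
    Labels: 1 = tumour, 0 = normal tissue."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)
```

Note that the two rates trade off against each other as the classifier's decision threshold moves, which is why both are reported together.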

    Analysis and monitoring of single HaCaT cells using volumetric Raman mapping and machine learning

    No explorer reached a pole without a map, no chef served a meal without tasting, and no surgeon implants untested devices. Higher-accuracy maps, more sensitive taste buds, and more rigorous tests increase confidence in positive outcomes. Biomedical manufacturing necessitates rigour, whether developing drugs or creating bioengineered tissues [1]–[4]. By designing a dynamic environment that supports mammalian cells during experiments within a Raman spectroscope, this project provides a platform that more closely replicates in vivo conditions. The platform also adds the opportunity to automate adaptation of the cell culture environment, alongside spectral monitoring of cells with machine learning and three-dimensional Raman mapping, called volumetric Raman mapping (VRM). Previous research highlighted key areas for refinement, such as a structured approach for shading Raman maps [5], [6] and the collection of VRM [7]. Refining VRM shading and collection was therefore the initial focus: k-means-directed shading for vibrational spectroscopy maps was developed in Chapter 3, and depth distortion and VRM calibration were explored in Chapter 4. "Cage" scaffolds, designed using the findings from Chapter 4, were then utilised to influence cell behaviour by varying the number of cage beams to change the scaffold porosity. Altering the porosity enabled spectroscopic investigation of previously observed changes in cell biology in response to porous scaffolds [8]. VRM visualised changes in single human keratinocyte (HaCaT) cell morphology, providing a complementary technique to machine learning classification. This increased technical rigour justified progression, in Chapter 6, to developing an in-situ flow chamber for Raman spectroscopy, using a psoriasis model (dithranol-treated HaCaT cells) on unfixed cells. K-means-directed shading and principal component analysis (PCA) revealed HaCaT cell adaptations aligning with previous publications [5] and earlier thesis sections.
The k-means-directed Raman maps and PCA score plots verified the drug-supplying capacity of the flow chamber, justifying future investigation into VRM and machine learning for monitoring single cells within the flow chamber.
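K-means-directed shading groups the pixel spectra of a Raman map into k clusters and colours each pixel by its cluster label. A simplified sketch of the idea, using plain Lloyd's algorithm with a deterministic initialisation; the thesis's actual shading procedure may differ:

```python
import numpy as np

def kmeans_shade(spectra, k, iters=20):
    """Cluster pixel spectra (n_pixels x n_wavenumbers) into k groups;
    the returned integer labels shade the Raman map, one colour each."""
    X = np.asarray(spectra, dtype=float)
    # Deterministic initialisation: centres spread evenly across the rows.
    idx = np.linspace(0, len(X) - 1, k).round().astype(int)
    centres = X[idx].copy()
    for _ in range(iters):
        # Assign each spectrum to its nearest centre (Euclidean distance).
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each centre to the mean of its assigned spectra.
        for j in range(k):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return labels
```

Reshaping the returned labels back to the map's pixel grid gives the shaded image; choosing k and the colour assignment per cluster is where a structured shading approach matters.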

    Minimal information for studies of extracellular vesicles (MISEV2023): From basic to advanced approaches

    Extracellular vesicles (EVs), through their complex cargo, can reflect the state of their cell of origin and change the functions and phenotypes of other cells. These features indicate strong biomarker and therapeutic potential and have generated broad interest, as evidenced by the steady year-on-year increase in the numbers of scientific publications about EVs. Important advances have been made in EV metrology and in understanding and applying EV biology. However, hurdles remain to realising the potential of EVs in domains ranging from basic biology to clinical applications due to challenges in EV nomenclature, separation from non-vesicular extracellular particles, characterisation and functional studies. To address the challenges and opportunities in this rapidly evolving field, the International Society for Extracellular Vesicles (ISEV) updates its 'Minimal Information for Studies of Extracellular Vesicles', which was first published in 2014 and then in 2018 as MISEV2014 and MISEV2018, respectively. The goal of the current document, MISEV2023, is to provide researchers with an updated snapshot of available approaches and their advantages and limitations for production, separation and characterisation of EVs from multiple sources, including cell culture, body fluids and solid tissues. In addition to presenting the latest state of the art in basic principles of EV research, this document also covers advanced techniques and approaches that are currently expanding the boundaries of the field. MISEV2023 also includes new sections on EV release and uptake and a brief discussion of in vivo approaches to study EVs. Compiling feedback from ISEV expert task forces and more than 1000 researchers, this document conveys the current state of EV research to facilitate robust scientific discoveries and move the field forward even more rapidly.

    Enhancing the forensic comparison process of common trace materials through the development of practical and systematic methods

    Ongoing advances in forensic trace evidence have driven the development of new, objective methods for comparing various materials. While many standard guides have been published for use in trace laboratories, several areas still require a more comprehensive understanding of error rates, and there is an urgent need to harmonize methods of examination and interpretation. Two critical areas are the forensic examination of physical fits and the comparison of spectral data, both of which depend heavily on the examiner's judgment. The long-term goal of this study is to advance and modernize the comparative process of physical fit examinations and spectral interpretation. This goal is pursued through several avenues: 1) improvement of quantitative-based methods for various trace materials, 2) scrutiny of the methods through interlaboratory exercises, and 3) addressing fundamental aspects of the discipline using large experimental datasets, computational algorithms, and statistical analysis. A substantial new body of knowledge has been established by analyzing population sets of nearly 4,000 items representative of casework evidence. First, this research identifies material-specific relevant features for duct tapes and automotive polymers. Then, it develops reporting templates to facilitate thorough, systematic documentation of an analyst's decision-making process and to minimize risks of bias. It also establishes criteria for using a quantitative edge similarity score (ESS) for tapes and automotive polymers that yields relatively high accuracy (85% to 100%) and, notably, no false positives. Finally, the practicality and performance of the ESS method for duct tape physical fits are evaluated by forensic practitioners through two interlaboratory exercises. Across these studies, accuracy using the ESS method ranges between 95% and 99%, and again no false positives are reported.
The practitioners' feedback demonstrates the method's potential to assist in training and to improve peer verifications. This research also develops and trains computational algorithms to support analysts making decisions on sample comparisons. The automated algorithms show the potential to provide objective, probabilistic support for determining a physical fit, with accuracy comparable to the analysts'. Furthermore, additional models are developed to extract edge-feature information from the systematic comparison templates of tapes and textiles, providing insight into the relative importance of each comparison feature. A decision tree model is developed to assist physical fit examinations of duct tapes and textiles and demonstrates performance comparable to trained analysts. The computational tools also evaluate the suitability of partial sample comparisons, simulating situations where portions of the item are lost or damaged. Finally, an objective approach to interpreting complex spectral data is presented. A comparison metric consisting of spectral angle contrast ratios (SCAR) is used as a model to assess more than 94 different-source and 20 same-source electrical tape backings. The SCAR metric results in a discrimination power of 96% and demonstrates the capacity to capture the variability between different-source samples and within same-source samples. Application of a random-forest model allows automatic detection of the primary differences between samples. The developed threshold could assist analysts in making decisions on the spectral comparison of chemically similar samples. This research provides the forensic science community with novel approaches to comparing materials commonly seen in forensic laboratories.
The outcomes of this study are anticipated to offer forensic practitioners new, accessible tools for incorporation into current workflows, facilitating systematic and objective analysis and interpretation of forensic materials and supporting analysts' opinions.
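The SCAR metric builds on the spectral angle between two spectra treated as vectors; the study's exact contrast-ratio construction is not reproduced here. A sketch of the underlying spectral-angle comparison:

```python
import math

def spectral_angle(a, b):
    """Angle (radians) between spectra a and b viewed as vectors:
    0 for proportional spectra, pi/2 for orthogonal ones."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    # Clamp to [-1, 1] to guard against floating-point drift.
    return math.acos(max(-1.0, min(1.0, dot / (norm_a * norm_b))))
```

Smaller angles indicate spectra more likely to share a source; the angle is insensitive to overall intensity scaling, which is why a threshold on it (or a ratio of such angles) can support same-source versus different-source decisions.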

    Out-of-Distribution Generalization of Deep Learning to Illuminate Dark Protein Functional Space

    Dark protein illumination is a fundamental challenge in drug discovery: the majority of human proteins are understudied, i.e. their sequences are known but no small-molecule binder is. This is a major roadblock to shifting the drug discovery paradigm from a single-targeted approach, which seeks to identify one target and design a drug to regulate it, to a multi-targeted approach grounded in systems pharmacology. Diseases such as Alzheimer's and opioid use disorder, which plague millions of patients, call for effective multi-targeted approaches involving dark proteins. Predicting the properties of dark proteins from limited protein data requires deep learning systems with out-of-distribution (OOD) generalization capacity. OOD generalization is a problem hindering the application and adoption of deep learning to real-world problems: the classic deep learning setting assumes training and testing data are independent and identically distributed (iid). A model well trained under the iid setting, with a reported 98% accuracy, can deteriorate to worse than random guessing when deployed on OOD data significantly different from its training data. Numerous techniques have emerged in the research field, but each addresses a specific OOD scenario rather than the general one. Dark protein illumination has unique complexity compared to common deep learning tasks, with three OOD axes: protein-OOD, compound-OOD, and interaction-OOD. Previous research has focused only on compound-OOD, developing new compound design algorithms, but still for roughly 500 common proteins rather than the 20,000 proteins of the whole human genome, and only for the single-targeted rather than the multi-targeted paradigm. Focusing on this instrumental problem in drug discovery, the dark protein function illumination problem is introduced from the OOD perspective.
A series of dark-protein OOD algorithms is developed to predict dark protein-ligand interactions, adapting multiple instrumental deep learning techniques to the biological context. By proposing the dark protein illumination problem, highlighting its neglected axes, and demonstrating what is possible, this work offers new hope for treating numerous diseases.
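The protein-OOD axis can be made concrete with an evaluation split in which test proteins never appear in training (compound-OOD and interaction-OOD splits are analogous). A minimal sketch with hypothetical records, not the thesis's actual data pipeline:

```python
def protein_ood_split(records, holdout_proteins):
    """Split (protein, compound, label) records so that held-out
    proteins appear only in the test set -- a protein-OOD split.
    Under iid splitting, by contrast, the same protein could appear
    in both sets and inflate measured accuracy."""
    holdout = set(holdout_proteins)
    train = [r for r in records if r[0] not in holdout]
    test = [r for r in records if r[0] in holdout]
    return train, test
```

Models evaluated on such a split must generalize to unseen proteins, which is the setting dark protein illumination demands.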

    Feasibility of functional MRI on point-of-care MR platforms

    Magnetic resonance imaging (MRI) has proven to be a clinically valuable tool that can produce anatomical and functional images with improved soft tissue contrast compared to other imaging modalities. There has recently been a surge in low- and mid-field scanners due to hardware developments and innovative acquisition techniques. These compact scanners are accessible, have reduced siting requirements, and can be made operational at reduced cost. This thesis aims to implement blood-oxygen-level-dependent (BOLD) resting-state functional MRI (fMRI) on such a mid-field point-of-care scanner. The availability of this technique can be beneficial for obtaining neurological information in cases of traumatic brain injury, stroke, epilepsy, and dementia. The technique had previously not been implemented at low and mid field, since both signal-to-noise ratio and BOLD contrast scale with field strength. Studies were conducted to gauge the performance of an independent component analysis (ICA) based platform (GraphICA) in analyzing resting-state functional data, previously collected with a 3T scanner, to which noise was artificially added. This platform was used in later chapters to preprocess and perform functional connectivity studies with data from a mid-field scanner. A single-echo gradient-echo echo-planar imaging (GE-EPI) sequence is typically used for BOLD-based fMRI. Task-based fMRI experiments were performed with this sequence to gauge the feasibility of the technique on a mid-field scanner. Once feasibility was established, the sequence was further optimized for mid-field scanners by considering all the imaging parameters. Resting-state experiments were then conducted with an optimized single-echo GE-EPI sequence with reduced dead time on a mid-field scanner. Temporal and image signal-to-noise ratios were calculated for different cortical regions.
Functional connectivity studies and identification of resting-state networks were also performed with GraphICA, demonstrating the feasibility of resting-state fMRI at mid-field. The reliability and repeatability of the identified networks were assessed by comparison with networks identified from 3T data. Resting-state experiments were then conducted with a multi-echo GE-EPI sequence to make effective use of the dead time arising from the long T2* at mid-field. Temporal signal-to-noise ratios were again calculated for different cortical regions, and functional connectivity studies and identification of resting-state networks with GraphICA further demonstrated the feasibility of resting-state fMRI at mid-field.
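Temporal SNR, as computed per cortical region above, is conventionally a voxel's mean signal over time divided by its temporal standard deviation. A minimal sketch of that definition (the thesis's exact preprocessing, e.g. detrending before the computation, is not reproduced):

```python
import numpy as np

def temporal_snr(timeseries):
    """tSNR of a voxel time series: temporal mean / temporal std.
    Accepts any array whose last axis is time, so a whole 4D fMRI
    volume (x, y, z, t) yields a 3D tSNR map in one call."""
    ts = np.asarray(timeseries, dtype=float)
    return ts.mean(axis=-1) / ts.std(axis=-1)
```

Averaging the resulting map within each cortical region of interest gives the per-region values reported in this kind of study.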

    Computational methods in drug repurposing and natural product based drug discovery

    For a few decades now, computational methods have been widely used in the drug discovery and drug repurposing process, especially where saving time and money is important. The development of bioinformatics, chemoinformatics, molecular modelling techniques and machine and deep learning tools, as well as the availability of various biological and chemical databases, has had a significant impact on improving the process of obtaining successful drug candidates. This dissertation describes the role of natural products in drug discovery and presents several computational methods used in drug discovery and drug repurposing. The application of these methods is illustrated by the search for potential drug treatment options for COVID-19. The disease is caused by the novel coronavirus SARS-CoV-2, first identified in December 2019, which had caused the deaths of more than 5.6 million people worldwide as of January 2022. Findings from two research projects, which aimed to identify potential inhibitors of the main protease of SARS-CoV-2, are presented in this work, together with a summary of COVID-19 treatment possibilities. In the first project, a ligand-based virtual screening of around 360,000 compounds from natural product databases, as well as databases of approved and withdrawn drugs, was conducted, followed by molecular docking and molecular dynamics simulations. Computational predictions of toxicity and cytochrome activity profiles for selected candidates were also provided. Twelve candidate SARS-CoV-2 main protease inhibitors were identified, among them novel drug candidates as well as existing drugs. The second project focused on finding potential inhibitors from plants (Reynoutria japonica and Reynoutria sachalinensis) and was based on molecular docking studies, followed by in vitro studies of the activity of selected compounds, extracts, and fractions from those plants against the enzyme.
Several natural compounds were identified as promising candidates for SARS-CoV-2 main protease inhibitors. Additionally, the butanol fraction of Reynoutria rhizome extracts showed inhibitory activity against the enzyme. The suggested drugs, natural compounds and plant extracts should be further investigated to confirm their potential as COVID-19 therapeutic options. The presented workflow could be used to investigate compounds for other biological targets and other diseases in future research projects.
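Ligand-based virtual screening of this kind typically ranks library compounds by fingerprint similarity to known actives; the dissertation's exact similarity measure is not reproduced here, and Tanimoto similarity on binary fingerprints is used as the common choice. A minimal sketch with fingerprints as sets of on-bit indices and hypothetical compound names:

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient |A & B| / |A | B| for fingerprint bit sets."""
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 0.0

def screen(query_fp, library, threshold=0.5):
    """Return (name, similarity) pairs scoring at or above the
    threshold against the query fingerprint, most similar first."""
    hits = [(name, tanimoto(query_fp, fp)) for name, fp in library]
    return sorted([h for h in hits if h[1] >= threshold],
                  key=lambda h: h[1], reverse=True)
```

In practice the fingerprints would come from a cheminformatics toolkit, and the top-ranked hits would proceed to docking and molecular dynamics, as in the projects described above.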

    Computational Approaches to Drug Profiling and Drug-Protein Interactions

    Despite substantial increases in R&D spending within the pharmaceutical industry, de novo drug design has become a time-consuming endeavour, and high attrition rates have led to a long period of stagnation in drug approvals. Given the extreme costs associated with introducing a drug to the market, locating and understanding the reasons for clinical failure is key to future productivity. As part of this PhD, three main contributions were made in this respect. First, the web platform LigNFam enables users to interactively explore similarity relationships between ‘drug-like’ molecules and the proteins they bind. Secondly, two deep-learning-based binding site comparison tools were developed, competitive with the state of the art on benchmark datasets; these models can predict off-target interactions and potential candidates for target-based drug repurposing. Finally, the open-source ScaffoldGraph software was presented for the analysis of hierarchical scaffold relationships; it has already been used in multiple projects, including integration into a virtual screening pipeline to increase the tractability of ultra-large screening experiments. Together with existing tools, these contributions will aid the understanding of drug-protein relationships, particularly in the fields of off-target prediction and drug repurposing, helping to design better drugs faster.

    Systemic Circular Economy Solutions for Fiber Reinforced Composites

    This open access book provides an overview of the work undertaken within the FiberEUse project, which developed solutions enhancing the profitability of composite recycling and reuse in value-added products, with a cross-sectorial approach. Glass and carbon fiber reinforced polymers, or composites, are increasingly used as structural materials in many manufacturing sectors, such as transport, construction and energy, due to their lighter weight and better corrosion resistance compared to metals. However, composite recycling is still a challenge, since no significant added value has yet been demonstrated in the recycling and reprocessing of composites. FiberEUse developed innovative solutions and business models towards sustainable Circular Economy solutions for post-use composite products. Three strategies are presented: mechanical recycling of short fibers, thermal recycling of long fibers, and modular car part design for sustainable disassembly and remanufacturing. The validation of the FiberEUse approach within eight industrial demonstrators shows the potential for new Circular Economy value chains for composite materials.

    Current Challenges in the Application of Algorithms in Multi-institutional Clinical Settings

    The coronavirus disease (COVID-19) pandemic has highlighted the importance of artificial intelligence in multi-institutional clinical settings. Particularly in situations where the healthcare system is overloaded and large volumes of data are generated, artificial intelligence has great potential to provide automated solutions and to unlock the untapped potential of acquired data, including in the areas of care, logistics, and diagnosis. For example, automated decision support applications could tremendously help physicians in their daily clinical routine. Especially in radiology and oncology, the exponential growth of imaging data, driven by a rising number of patients, leads to a permanent overload of the healthcare system, making the use of artificial intelligence inevitable. However, the efficient and advantageous application of artificial intelligence in multi-institutional clinical settings faces several challenges, such as accountability and regulatory hurdles, implementation challenges, and fairness considerations. This work focuses on the implementation challenges, which include the following questions: how to ensure well-curated and standardized data, how algorithms from other domains perform on multi-institutional medical datasets, and how to train more robust and generalizable models. Questions of how to interpret results, and whether correlations exist between model performance and the characteristics of the underlying data, are also part of the work. Therefore, besides presenting a technical solution for manual data annotation and tagging of medical images, a real-world federated learning implementation for image segmentation is introduced. Experiments on a multi-institutional prostate magnetic resonance imaging dataset show that models trained by federated learning can achieve performance similar to training on pooled data.
Furthermore, natural language processing algorithms for semantic textual similarity, text classification, and text summarization are applied to multi-institutional, structured and free-text oncology reports. The results show that performance gains are achieved by customizing state-of-the-art algorithms to the peculiarities of the medical datasets, such as the occurrence of medications, numbers, or dates. In addition, performance is observed to depend on characteristics of the data such as lexical complexity. The generated results, human baselines, and retrospective human evaluations demonstrate that artificial intelligence algorithms have great potential for use in clinical settings. However, due to the difficulty of processing domain-specific data, a performance gap remains between the algorithms and the medical experts. In the future, it is therefore essential to improve the interoperability and standardization of data, and to continue developing algorithms that perform well on medical, and possibly domain-shifted, data from multiple clinical centers.
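The federated learning result above rests on the standard pattern of aggregating locally trained model weights without pooling patient data; in FedAvg-style aggregation each site contributes in proportion to its local dataset size. A minimal sketch of that aggregation step (the work's actual implementation details may differ):

```python
def federated_average(site_weights, site_sizes):
    """Aggregate per-site parameter vectors into a global model.
    site_weights: one flat parameter list per institution;
    site_sizes: number of local training samples per institution.
    Raw data never leaves a site -- only the weights are shared."""
    total = sum(site_sizes)
    n = len(site_weights[0])
    global_w = [0.0] * n
    for weights, size in zip(site_weights, site_sizes):
        for i in range(n):
            global_w[i] += weights[i] * (size / total)
    return global_w
```

In a full training loop this averaging is repeated over many communication rounds, with each site resuming local training from the freshly averaged global weights.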