2,032 research outputs found

    Acute Myeloid Leukemia

    Acute myeloid leukemia (AML) is the most common type of leukemia. The Cancer Genome Atlas Research Network has demonstrated the increasing genomic complexity of AML. In addition, the network has facilitated our understanding of the molecular events leading to this deadly form of malignancy, for which the prognosis has not improved over past decades. AML is a highly heterogeneous disease, and cytogenetic and molecular analysis of the various chromosome aberrations, including deletions, duplications, aneuploidy, balanced reciprocal translocations and fusions of transcription factor genes and tyrosine kinases, has led to better understanding and identification of subgroups of AML with different prognoses. Furthermore, molecular classification based on mRNA expression profiling has facilitated the identification of novel subclasses and defined high- and poor-risk AML based on specific molecular signatures. However, despite increased understanding of AML genetics, the outcome for AML patients, whose number is likely to rise as the population ages, has not changed significantly. Until it does, further investigation of the genomic complexity of the disease and advances in drug development are needed. In this review, leading AML clinicians and research investigators provide an up-to-date understanding of the molecular biology of the disease, addressing advances in diagnosis, classification, prognostication and therapeutic strategies that may have significant promise and impact on overall patient survival.
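
As a rough illustration of how mRNA-expression-based subgrouping of the kind mentioned above can work in practice (a generic sketch, not the specific method of the review; the gene panel and expression values below are made up):

```python
# Minimal, hypothetical sketch: grouping samples into molecular subclasses
# from an mRNA expression matrix. All values are toy data for illustration.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Rows = samples, columns = expression levels of signature genes (toy values).
expression = np.array([
    [8.2, 1.1, 0.3],   # sample 1
    [7.9, 0.9, 0.5],   # sample 2
    [1.2, 6.8, 5.9],   # sample 3
    [0.8, 7.1, 6.3],   # sample 4
])

# Unsupervised clustering into two candidate subclasses based on the signature.
labels = AgglomerativeClustering(n_clusters=2).fit_predict(expression)
print(labels)  # e.g. [0 0 1 1] -> two expression-defined subgroups
```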

    Compilation and Synthesis for Fault-Tolerant Digital Microfluidic Biochips


    Tools and Technologies for Enabling Characterisation in Synthetic Biology

    Synthetic Biology represents a movement to utilise biological organisms for novel applications through the use of rigorous engineering principles. These principles rely on a solid and well-versed understanding of the underlying biological components and functions (relevant to the application). In order to achieve this understanding, reliable behavioural and contextual information is required (more commonly known as characterisation data). Focussing on lowering the barrier of entry for current research facilities to regularly and easily perform characterisation assays will directly improve the communal knowledge base for Synthetic Biology and enable the further application of rational engineering principles. Whilst characterisation remains a fundamental principle for Synthetic Biology research, the high time costs, subjective measurement protocols, and ambiguous data analysis specifications deter regular performance of characterisation assays. Vitally, this prevents the valid application of many of the key Synthetic Biology processes that have been derived to improve research yield (with regards to solving application problems) and directly prevents the intended goal of addressing the ad hoc nature of modern research from being realised. Designing new technologies and tools to facilitate rapid ‘hands off’ characterisation assays for research facilities will improve the uptake of characterisation within the research pipeline. To achieve this, two core problem areas have been identified that limit current characterisation attempts in conventional research. Therefore, it was the primary aim of this investigation to overcome these two core problems and promote regular characterisation.

    The first issue identified as preventing the regular use of characterisation assays was the user-intensive methodologies and technologies available to researchers. There is currently no standardised characterisation equipment for assaying samples, and the methodologies are heavily dependent on the researcher and their application for successful and complete characterisation. This study proposed a novel high-throughput solution to the characterisation problem that was capable of low-cost, concurrent, and rapid characterisation of simple biological DNA elements. By combining in vitro transcription-translation with microfluidics, a potent solution to the characterisation problem was proposed. By utilising a completely in vitro approach along with the excellent control abilities of microfluidic technologies, a prototype platform for high-throughput characterisation was developed.

    The second issue identified was the lack of flexible, versatile software designed specifically for the data handling needs that are quickly arising within the characterisation speciality. The lack of general solutions in this area is problematic because of the increasing amount of data that is both required and generated for the characterisation output to be considered rigorous and of value. To alleviate this issue, a novel framework for laboratory data handling was developed that employs a plugin strategy for data submission and analysis. Employing a plugin strategy improves the shelf life of data handling software by allowing it to grow with the needs of the speciality. Another advantage of this strategy is the increased ability for well-documented processing and analysis standards to arise that are available to all researchers. Finally, the software provided a powerful and flexible data storage schema that allowed all currently conceivable characterisation data types to be stored in a well-documented manner.

    The two solutions identified within this study increase the number of enabling tools and technologies available to researchers within Synthetic Biology, which in turn will increase the uptake of regular characterisation. Consequently, this will potentially improve the lateral transfer of knowledge between research projects and reduce the need to perform ad hoc experiments to investigate facets of the fundamental biological components being utilised.
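
A minimal sketch of the plugin strategy described above for laboratory data handling; the registry pattern, data-type names and toy analysis are hypothetical illustrations, not taken from the software developed in this work:

```python
# Hypothetical plugin registry: analysis routines register themselves against a
# named data type, so new characterisation assays can be supported without
# changing the core software.
from typing import Callable, Dict, List

ANALYSIS_PLUGINS: Dict[str, Callable[[List[float]], float]] = {}

def register_plugin(data_type: str):
    """Decorator that adds an analysis function to the registry."""
    def wrap(func: Callable[[List[float]], float]) -> Callable[[List[float]], float]:
        ANALYSIS_PLUGINS[data_type] = func
        return func
    return wrap

@register_plugin("fluorescence_timecourse")
def mean_fluorescence(readings: List[float]) -> float:
    # Toy analysis: average fluorescence as a proxy for expression strength.
    return sum(readings) / len(readings)

def analyse(data_type: str, readings: List[float]) -> float:
    # The core software dispatches to whichever plugin handles this data type.
    return ANALYSIS_PLUGINS[data_type](readings)

print(analyse("fluorescence_timecourse", [120.0, 135.5, 150.2]))
```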

    Design and Optimization Methods for Pin-Limited and Cyberphysical Digital Microfluidic Biochips

    Microfluidic biochips have now come of age, with applications to biomolecular recognition for high-throughput DNA sequencing, immunoassays, and point-of-care clinical diagnostics. In particular, digital microfluidic biochips, which use electrowetting-on-dielectric to manipulate discrete droplets (or "packets of biochemical payload") of picoliter volumes under clock control, are especially promising. The potential applications of biochips include real-time analysis of biochemical reagents, clinical diagnostics, flash chemistry, and on-chip DNA sequencing. The ease of reconfigurability and software-based control in digital microfluidics has motivated research on various aspects of automated chip design and optimization.

    This thesis research is focused on facilitating advances in on-chip bioassays, enhancing the automated use of digital microfluidic biochips, and developing an "intelligent" microfluidic system that is capable of on-line re-synthesis while a bioassay is being executed. This thesis includes the concept of a "cyberphysical microfluidic biochip" based on the digital microfluidics hardware platform and on-chip sensing techniques. In such a biochip, the control software, on-chip sensing, and the microfluidic operations are tightly coupled. The status of the droplets is dynamically monitored by on-chip sensors. If an error is detected, the control software performs a dynamic re-synthesis procedure and error recovery.

    In order to minimize the size and cost of the system, a hardware-assisted error-recovery method, which relies on an error dictionary for rapid error recovery, is also presented. The error-recovery procedure is controlled by a finite-state machine (FSM) implemented on a field-programmable gate array (FPGA) instead of software running on a separate computer. Each state of the FSM represents a possible error that may occur on the biochip; for each of these errors, the corresponding sequence of error-recovery signals is stored inside the memory of the FPGA before the bioassay is conducted. When an error occurs, the FSM transitions from one state to another, and the corresponding control signals are updated. Therefore, by using an inexpensive FPGA, a portable cyberphysical system can be implemented.

    In addition to errors in fluid-handling operations, bioassay outcomes can also be erroneous due to the uncertainty in the completion time for fluidic operations. Because of the inherent randomness of biochemical reactions, the time required to complete each step of the bioassay is a random variable. To address this issue, a new "operation-interdependence-aware" synthesis algorithm is proposed in this thesis. The start and stop times of each operation are dynamically determined based on feedback from the on-chip sensors. Unlike previous synthesis algorithms that execute bioassays based on pre-determined start and end times of each operation, the proposed method facilitates "self-adaptive" bioassays on cyberphysical microfluidic biochips.

    Another design problem addressed in this thesis is the development of a layout-design algorithm that can minimize the interference between devices on a biochip. A probabilistic model for the polymerase chain reaction (PCR) has been developed; based on the model, the control software can make on-line decisions regarding the number of thermal cycles that must be performed during PCR. Therefore, PCR can be controlled more precisely using cyberphysical integration.

    To reduce the fabrication cost of biochips, yet maintain application flexibility, the concept of a "general-purpose pin-limited biochip" is proposed. Using a graph model for pin assignment, we develop the theoretical basis and a heuristic algorithm to generate optimized pin-assignment configurations. The associated scheduling algorithm for on-chip biochemistry synthesis has also been developed. Based on the theoretical framework, a complete design flow for pin-limited cyberphysical microfluidic biochips is presented.

    In summary, this thesis research has led to an algorithmic infrastructure and optimization tools for cyberphysical system design and technology demonstrations. The results of this thesis research are expected to enable the hardware/software co-design of a new class of digital microfluidic biochips with tight coupling between microfluidics, sensors, and control software.
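
A minimal Python sketch of the error-dictionary idea described above, as a software analogue of the FPGA finite-state machine; all error names and recovery signal sequences are hypothetical, not taken from the thesis:

```python
# Hypothetical dictionary-based error recovery: each anticipated error maps to a
# pre-computed sequence of actuation signals, so recovery is a lookup rather
# than a full software re-synthesis.
from typing import List, Optional

ERROR_DICTIONARY = {
    "droplet_volume_low": ["re-dispense", "merge", "re-route"],
    "droplet_stuck":      ["re-apply-voltage", "split", "re-route"],
}

class RecoveryFSM:
    def __init__(self) -> None:
        self.state = "normal"

    def on_sensor_reading(self, error: Optional[str]) -> List[str]:
        """Transition on a detected error and emit the stored recovery signals."""
        if error is None:
            self.state = "normal"
            return []
        self.state = error                           # one FSM state per anticipated error
        return ERROR_DICTIONARY.get(error, ["abort-assay"])

fsm = RecoveryFSM()
print(fsm.on_sensor_reading("droplet_volume_low"))   # ['re-dispense', 'merge', 're-route']
```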

    Fundamentals of SARS-CoV-2 Biosensors

    COVID-19 diagnostic strategies based on advanced techniques are currently essential topics of interest, with crucial roles in scientific research. This book integrates fundamental concepts and critical analyses that explore the progress of modern methods for the detection of SARS-CoV-2.

    Standard methods for molecular research in Apis mellifera

    From studies of behaviour, chemical communication, genomics and developmental biology, among many others, honey bees have long been a key organism for fundamental breakthroughs in biology. With a genome sequence in hand, and much improved genetic tools, honey bees are now an even more appealing target for answering the major questions of evolutionary biology, population structure, and social organization. At the same time, agricultural incentives to understand how honey bees fall prey to disease, or evade and survive their many pests and pathogens, have pushed for a genetic understanding of individual and social immunity in this species. Below we describe and reference tools for using modern molecular-biology techniques to understand bee behaviour, health, and other aspects of their biology. We focus on DNA and RNA techniques, largely because techniques for assessing bee proteins are covered in detail in Hartfelder et al. (2013). We cover practical needs for bee sampling, transport, and storage, and then discuss a range of current techniques for genetic analysis. We then provide a roadmap for genomic resources and methods for studying bees, followed by specific statistical protocols for population genetics, quantitative genetics, and phylogenetics. Finally, we end with three important tools for predicting gene regulation and function in honey bees: Fluorescence in situ hybridization (FISH), RNA interference (RNAi), and the estimation of chromosomal methylation and its role in epigenetic gene regulation.
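
As a small illustration of the kind of population-genetics calculation such protocols cover (a generic Hardy-Weinberg worked example with made-up genotype counts, not a protocol from the paper):

```python
# Hypothetical worked example: allele frequencies and heterozygosity at one
# biallelic locus. Genotype counts below are invented for illustration.
n_AA, n_Aa, n_aa = 42, 36, 12          # toy genotype counts from one apiary
n = n_AA + n_Aa + n_aa

p = (2 * n_AA + n_Aa) / (2 * n)        # frequency of allele A
q = 1 - p                              # frequency of allele a
h_obs = n_Aa / n                       # observed heterozygosity
h_exp = 2 * p * q                      # expected heterozygosity under Hardy-Weinberg

print(f"p={p:.3f}, q={q:.3f}, Hobs={h_obs:.3f}, Hexp={h_exp:.3f}")
```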

    Platelet Diagnostics: A novel liquid biomarker

    The aim of this thesis is to find a novel liquid biomarker for the detection of cancer and to optimize treatment. The first chapter gives an introduction to the oncology biomarker field and focuses on platelets and their role in cancer. In part 1, we evaluate extracellular vesicles (EVs). EVs are small vesicles released by all types of cells, including tumor cells, into the circulation. They carry protein kinases and can be isolated from plasma. We demonstrate that AKT and ERK kinase protein levels in EVs reflect the cellular expression levels and that treatment with kinase inhibitors alters their concentration, depending on the clinical response to the drug. Therefore, EVs may provide a promising biomarker source for monitoring treatment responses.

    Part 2 starts with reviews describing the function and role of platelets in greater depth. Chapter 3 focusses on thrombocytogenesis and several biological processes in which platelets play a role; furthermore, the RNA-processing machineries harboured by platelets are discussed. Both chapters 3 and 4 evaluate the changes platelets undergo after being exposed to a tumor and its environment. The exchange of biomolecules with tumor cells results in educated platelets, so-called tumor-educated platelets (TEPs). TEPs play a role in several hallmarks of cancer and have the ability to respond to systemic alterations, making them an interesting biomarker. In chapter 5 the diagnostic potential of platelets is first discussed. We determine their potential by sequencing the RNA of 283 platelet samples, of which 228 are from patients with cancer and 55 from healthy controls. We reach an accuracy of 96%. Furthermore, we are able to pinpoint the location of the primary tumor with an accuracy of 71%.

    In part 3, our thromboSeq platform is taken to the next level. Several potential confounding factors, such as age and comorbidity, are taken into account. We show that particle-swarm optimization (PSO)-enhanced algorithms enable efficient selection of RNA biomarker panels. In a validation cohort we apply these algorithms to non-small-cell lung cancer and reach an accuracy of 88% in late-stage (n=518) and 81% in early-stage disease. Finally, in chapter 7 we describe our wet- and dry-lab protocols in detail. This includes platelet RNA isolation, mRNA amplification, and preparation for next-generation sequencing. The dry-lab protocol describes the automated pre-processing of FASTQ files to quantified gene counts, quality controls, data normalization and correction, and swarm-intelligence-enhanced support vector machine (SVM) algorithm development.

    Part 4 focuses on central nervous system (CNS) malignancies, especially glioblastoma. Chapter 8 gives an overview of the different liquid biomarkers for diffuse glioma, the most common primary CNS malignancy. In chapter 9 we assess the specificity of the platelet education caused by glioblastoma by comparing the RNA profile of TEPs from glioblastoma patients with those of patients with a neuroinflammatory disease or brain metastases. This results in a detection accuracy of 80%. Secondly, analysis of patients with glioblastoma versus healthy controls in an independent validation series provides a detection accuracy of 95%. Furthermore, we describe the potential value of platelets as a monitoring biomarker for patients with glioma, distinguishing pseudoprogression from real tumor progression.

    In part 5, thromboSeq is applied to breast cancer diagnostics, both as a screening tool in the general population and in a high-risk population, BRCA-mutated women. In chapter 11 we first apply our technique to an inflammatory condition, multiple sclerosis (MS). Platelet RNA is used as input for the development of a diagnostic MS classifier capable of detecting MS with 80% accuracy in the independent validation series. In the final part we conclude this thesis with a general discussion of the main findings and suggestions for future research.
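
A minimal sketch of the SVM classification step described above, using toy RNA count data; the PSO-based panel selection and the normalization steps of the thromboSeq pipeline are not reproduced here, and all data below are simulated for illustration:

```python
# Hypothetical example: train a support vector machine on platelet RNA counts
# to separate cancer from control samples. Toy data only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.poisson(lam=20, size=(100, 50)).astype(float)  # 100 samples x 50 genes (toy counts)
y = rng.integers(0, 2, size=100)                       # 0 = control, 1 = cancer (toy labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="linear").fit(X_train, y_train)
print(f"toy accuracy: {clf.score(X_test, y_test):.2f}")
```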

    Development and evaluation of point-of-care diagnostic technologies for providers and consumers

    Point-of-care (POC) diagnostic technologies aim to expand access to traditional laboratory-based testing to near-patient settings. These settings can range from emergency or intensive care units (ICUs) in the United States to remote health posts in sub-Saharan Africa. Differences in budget and infrastructure play a role in characterizing the wide array of possible “near-patient” settings and must be taken into consideration in the engineering design process. In this dissertation we use translational engineering to develop practical and accessible microfluidic POC immunoassays for diverse settings, including both provider- and consumer-facing applications.

    First, we examined Lyme disease in the U.S., where existing diagnostic technologies face the challenge of rapid and accurate serodiagnosis in the face of largely non-specific clinical symptoms. We developed a multiplexed rapid test that could replicate enzyme-linked immunosorbent assay (ELISA) performance for Lyme disease diagnosis. After screening candidate biomarkers, we evaluated the performance of the multiplexed microfluidic test against ELISA using clinical serum samples and illustrated the potential to streamline current clinical algorithms requiring two immunoassays (ELISA and Western blot) into one standalone test suitable for physicians' offices or urgent care clinics in the U.S. We also showed exploratory work towards a similar multiplexed test design for another bacterial spirochete infection, leptospirosis.

    Next, we built on previous work towards a POC HIV-syphilis antenatal screening tool to develop a smartphone-integrated microfluidic assay for healthcare workers to use in low-resource settings. The low-cost, re-usable device (“smartphone dongle”) costs $34 to produce and provides results in 15 minutes. In this work, we focus on the assay development efforts undertaken towards a fully integrated POC product suitable for deployment in the field, with practical considerations for the use of fingerstick blood, stability, scale-up and transport. We also reduced the number of manual steps required for end-user operation through the use of lyophilized secondary antibodies, reagents preloaded on the cassette, and an automatic result readout. While laboratory demonstration with clinical samples is important for initial characterization of POC devices, field evaluation reveals diagnostic performance under real-world conditions. We tested the device in the hands of minimally trained healthcare workers in Rwanda and saw performance comparable to other immunoassays run under field conditions. We also performed a follow-up pilot field study in Rwanda to evaluate the feasibility of the smartphone dongle platform for self-testing by patients/consumers in a low-resource setting, one of the most challenging use-cases for POC devices.

    Finally, we sought to integrate intellectual frameworks from behavioral research and user-experience (UX) design in creating a new framework for the evaluation of consumer-facing microfluidic devices, specifically towards HIV home-testing in the U.S. While overall rates of HIV are decreasing in the U.S., the population of gay, bisexual and other men who have sex with men (MSM) is disproportionately affected. Self-testing products for sexually transmitted infection (STI) testing could address unmet needs for these target populations, both by increasing access to and frequency of testing and by integrating use with sexual partners for early diagnosis or even prevention. We worked with a cohort of MSM at high risk for HIV/STI transmission in New York City and performed, for the first time, a structured assessment of completely naïve users interacting with a smartphone-interfaced microfluidic diagnostic device (“SMARTtest”). We integrated a UX design value model of device usability, credibility, accessibility and acceptability into our evaluation framework; these factors influence users' information, knowledge, motivation and behavioral skills towards engaging with a prevention method (the “IMB” model). Thus far, such frameworks have rarely been applied to consumer health monitoring devices, including microfluidic POC devices. As the microfluidic field moves towards more field demonstrations of devices, more untrained and minimally trained users will have access to such tools. It is important to understand how they use devices, what the device failure points are, and what the most relevant design features are to spur user adoption and meaningful usage.

    Underlying our work in creating accessible and practical POC immunoassay tools for infectious disease detection is the illustration of the translational development roadmap from proof-of-concept assay development to field studies and user-based evaluations for intended end-use settings that range from U.S.-based primary care clinics to rural health centers in low-resource settings, as well as self-testing environments in both. Incorporating an understanding of the target use-case setting is critical in translating technologies for clinical use, whether in the infrastructure and services that are available, or in end-user needs and constraints such as clinical workflow patterns, level of technical expertise, and perceptions of usefulness and value. We show how user- and use-case-focused application of downstream translational engineering and testing informs upstream design choices and accelerates development of POC devices for real-world use. The sum of this work aims to illustrate tenets of translational engineering design and testing to advance insight into building POC products that are poised for greater adoption by target end users, whether they are health providers or consumers.
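
As a simple illustration of how field performance of a rapid test against a reference assay (e.g. a laboratory ELISA) can be summarized, a minimal sketch with made-up test results:

```python
# Hypothetical evaluation: sensitivity, specificity, and overall agreement of a
# POC device versus a reference assay. The result lists below are invented.
reference = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]   # 1 = positive by reference assay
rapid     = [1, 1, 0, 0, 0, 0, 1, 1, 1, 0]   # 1 = positive by POC device

tp = sum(1 for r, t in zip(reference, rapid) if r == 1 and t == 1)  # true positives
fn = sum(1 for r, t in zip(reference, rapid) if r == 1 and t == 0)  # false negatives
tn = sum(1 for r, t in zip(reference, rapid) if r == 0 and t == 0)  # true negatives
fp = sum(1 for r, t in zip(reference, rapid) if r == 0 and t == 1)  # false positives

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
agreement = (tp + tn) / len(reference)
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, agreement={agreement:.2f}")
```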