    Managing Overheads in Asynchronous Many-Task Runtime Systems

    Asynchronous Many-Task (AMT) runtime systems are based on the idea of dividing an algorithm into small units of work, known as tasks. The runtime system is then responsible for scheduling and executing these tasks efficiently, taking into account the resources available to it and the data dependencies between the tasks. One of the primary challenges faced by AMTs is managing such fine-grained parallelism and the overheads associated with creating, scheduling, and executing tasks. This work develops methodologies for assessing and managing the overheads of fine-grained task execution in HPX, our exemplar Asynchronous Many-Task runtime system. Known optimization techniques, namely active message coalescing, task inlining, and parallel loop iteration chunking, are applied to HPX. Active message coalescing, where messages bound for the same destination are aggregated into a single message, is presented as a solution for minimizing the overheads of fine-grained communication, and methodologies and metrics for analyzing these communication overheads are developed. The metrics identified and implemented in this research aid in evaluating network efficiency by giving an intrinsic view of the underlying network overhead that would be difficult to obtain with conventional methods. Task inlining, a method that allows the runtime system to manage the overheads introduced by a large number of tasks by merging tasks into one thread of execution, is presented as a technique for minimizing fine-grained task overheads. A runtime policy that dynamically decides whether to inline a task is developed and evaluated on different processor architectures, and a methodology is developed to derive a largely machine-independent constant that controls task granularity. Finally, the machine-independent constant derived in the context of task inlining is applied to the chunking of parallel loop iterations, confirming its applicability to reducing overheads when choosing the optimal chunk size for combined loop iterations.
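
    To make the task-inlining idea concrete, here is a minimal sketch of a dynamic inlining policy. It is a hypothetical illustration in Python rather than HPX's actual C++ implementation: the threshold constant, the cost estimate, and the `spawn` helper are all assumptions standing in for the machine-independent granularity constant described above.

```python
from concurrent.futures import Future, ThreadPoolExecutor

# Hypothetical granularity constant: tasks whose estimated work falls below
# this threshold cannot amortize task-creation and scheduling overhead, so
# they are executed inline instead. Assumed value; machine-dependent.
INLINE_THRESHOLD_NS = 50_000

def spawn(pool: ThreadPoolExecutor, task, estimated_cost_ns: int) -> Future:
    """Run `task` inline if it is too small, else schedule it as a real task."""
    if estimated_cost_ns < INLINE_THRESHOLD_NS:
        f = Future()
        f.set_result(task())   # inline: merge into the current thread of execution
        return f
    return pool.submit(task)   # large enough: pay the scheduling overhead

pool = ThreadPoolExecutor()
fut = spawn(pool, lambda: sum(range(1_000)), estimated_cost_ns=10_000)
print(fut.result())  # 499500, executed inline on the calling thread
```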

    TEAK: A Novel Computational and GUI Software Pipeline for Reconstructing Biological Networks, Detecting Activated Biological Subnetworks, and Querying Biological Networks

    As high-throughput gene expression data becomes ever cheaper, researchers face a deluge of data from which biological insights must be extracted and mined, since the rate of data accumulation far exceeds the rate of data analysis. Computational frameworks are needed to bridge this gap and assist researchers in their tasks. The Topology Enrichment Analysis frameworK (TEAK) is an open source GUI and software pipeline that aims to help fill this gap and consists of three major modules. The first module, the Gene Set Cultural Algorithm, infers biological networks de novo from gene sets using the KEGG pathways as prior knowledge. The second and third modules query the KEGG pathways using molecular profiling data and query graphs, respectively. In particular, the second module, also called TEAK, is a network partitioning module that partitions the KEGG pathways into both linear and nonlinear subpathways. In conjunction with molecular profiling data, the subpathways are ranked and displayed to the user within the TEAK GUI. Using a public yeast microarray data set, previously unreported fitness defects for dpl1 delta and lag1 delta mutants under conditions of nitrogen limitation were found using TEAK. Finally, the third module, the Query Structure Enrichment Analysis framework, is a network query module that allows researchers to test their biological hypotheses, expressed as Directed Acyclic Graphs, against the KEGG pathways.
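
    As an illustration of the kind of partitioning the TEAK module performs, the following sketch enumerates the linear subpathways (simple root-to-leaf chains) of a small directed acyclic pathway graph. This is a schematic example, not TEAK's actual code; the toy graph and node names are placeholders.

```python
# Enumerate linear subpathways (root-to-leaf chains) of a DAG given as an
# adjacency mapping. Nonlinear subpathway extraction would additionally keep
# branching structure; only the linear case is sketched here.
def linear_subpathways(dag):
    # Roots are nodes that never appear as a successor of another node.
    roots = set(dag) - {v for succs in dag.values() for v in succs}
    paths = []

    def dfs(node, path):
        succs = dag.get(node, [])
        if not succs:            # leaf reached: one complete linear chain
            paths.append(path)
            return
        for nxt in succs:
            dfs(nxt, path + [nxt])

    for r in roots:
        dfs(r, [r])
    return paths

toy = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}  # placeholder genes
print(linear_subpathways(toy))  # [['A', 'B', 'D'], ['A', 'C', 'D']]
```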

    Algorithms and Tools for Computational Analysis of Human Transcriptome Using RNA-Seq

    Alternative splicing plays a key role in regulating gene expression, and more than 90% of human genes are alternatively spliced through different types of alternative splicing. Dysregulated alternative splicing events have been linked to a number of human diseases. Recently, high-throughput RNA-Seq technologies have provided unprecedented opportunities to characterize and understand transcriptomes, and they are particularly useful for detecting splicing variants between healthy and diseased human transcriptomes. We have developed two novel algorithms and tools, along with a computational workflow, to interrogate human transcriptomes between healthy and diseased conditions. The first, RAEM, is a read-count-based Expectation-Maximization (EM) algorithm and tool that estimates relative transcript isoform proportions by maximizing the likelihood within each gene. The RAEM algorithm is encoded in our published software suite, SAMMate, and we have employed it to predict isoform-level microRNA-155 targets. The second, dSpliceType, is a read-coverage-based algorithm and tool for detecting differential splicing events. It exploits the sequential dependency of normalized base-wise read coverage signals, applying change-point analysis followed by a parametric statistical hypothesis test based on the Schwarz Information Criterion (SIC) to detect significant differential splicing events of the five well-known splicing types. Results from both simulation and real-world studies demonstrate that dSpliceType is an efficient computational tool for detecting various types of differential splicing events across a wide range of expressed genes. Finally, we developed a novel computational workflow to jointly study human diseases in terms of both differential expression and differential splicing; the workflow has been used to detect differential splicing variants in non-differentially expressed genes of human idiopathic pulmonary fibrosis (IPF) lung disease.
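
    The following is a minimal EM sketch in the spirit of a read-count-based isoform estimator such as RAEM, though not its published implementation: each read is fractionally assigned to the isoforms it is compatible with (E-step), and the proportions are re-estimated from the expected counts (M-step). The compatibility matrix and iteration limits are illustrative assumptions.

```python
import numpy as np

def em_isoform_proportions(C, n_iter=100, tol=1e-8):
    """Estimate isoform proportions theta from a read-isoform compatibility
    matrix C (C[r, i] = 1 if read r is compatible with isoform i)."""
    n_reads, n_iso = C.shape
    theta = np.full(n_iso, 1.0 / n_iso)      # start from uniform proportions
    for _ in range(n_iter):
        # E-step: fractionally assign each read to its compatible isoforms,
        # proportional to the current proportion estimates.
        w = C * theta
        w /= w.sum(axis=1, keepdims=True)
        # M-step: proportions that maximize the expected complete-data likelihood.
        new_theta = w.sum(axis=0) / n_reads
        if np.abs(new_theta - theta).max() < tol:
            theta = new_theta
            break
        theta = new_theta
    return theta

# Toy gene with two isoforms: two unambiguous reads, two ambiguous reads.
C = np.array([[1, 0], [1, 1], [1, 1], [0, 1]], dtype=float)
print(em_isoform_proportions(C))  # roughly equal proportions for this toy case
```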

    Energy-Aware Real-Time Scheduling on Heterogeneous and Homogeneous Platforms in the Era of Parallel Computing

    Multi-core processors increasingly appear as an enabling platform for embedded systems, e.g., mobile phones, tablets, and computerized numerical controls. The parallel task model, in which a task can execute on multiple cores simultaneously, can efficiently exploit a multi-core platform's computational ability. Many computation-intensive systems (e.g., self-driving cars) that demand stringent timing requirements evolve in the form of parallel tasks, and many real-time embedded applications must deliver predictable timing behavior while satisfying other system constraints, such as bounds on energy consumption. Motivated by these observations, this thesis studies how to integrate the dynamic voltage and frequency scaling (DVFS) policy with a real-time application's internal parallelism to reduce the worst-case energy consumption (WCEC), an essential requirement for energy-constrained systems. First, we propose an energy-sub-optimal scheduler, assuming a per-core speed tuning feature for each processor. We then extend our solution to clustered multi-core platforms, where at any given time all processors in the same cluster run at the same speed. We also present an analysis that exploits a task's probabilistic information to improve the average-case energy consumption (ACEC), a common non-functional requirement of embedded systems. Due to the strict requirement of temporal correctness, most real-time system analyses consider the worst-case scenario, leading to resource over-provisioning and cost. The mixed-criticality (MC) framework was proposed to minimize energy consumption and resource over-provisioning. MC scheduling has received considerable attention from the real-time systems research community, as it is crucial to designing safety-critical real-time systems. This thesis further addresses energy-aware scheduling of real-time tasks on an MC platform, where tasks with varying criticality levels (i.e., importance) are integrated into a common platform. We propose GEDF-VD, an algorithm for scheduling MC tasks with internal parallelism on a multiprocessor platform; we prove its correctness, provide a detailed quantitative evaluation, and report extensive experimental results. Finally, we present an analysis that exploits a task's probabilistic information at its respective criticality level; our proposed approach reduces the average-case energy consumption while satisfying the worst-case timing requirement.
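
    To illustrate why DVFS saves energy for real-time tasks, here is a hedged sketch of the standard convex power model commonly used in such analyses, not the thesis's exact scheduler or model: dynamic power grows roughly as f^alpha with alpha near 3, so the energy for a fixed cycle budget scales as f^(alpha - 1), and the energy-minimal feasible choice is the slowest discrete speed that still meets the deadline. The exponent and speed levels below are assumed values.

```python
ALPHA = 3.0  # assumed power exponent; platform-dependent in practice

def energy(cycles: float, f: float) -> float:
    """Energy to execute `cycles` at frequency f under P(f) = f**ALPHA."""
    time = cycles / f
    power = f ** ALPHA
    return power * time          # equals cycles * f**(ALPHA - 1): lower f, less energy

def min_energy_speed(cycles: float, deadline: float, f_levels):
    """Pick the slowest discrete speed that finishes `cycles` by `deadline`."""
    feasible = [f for f in f_levels if cycles / f <= deadline]
    return min(feasible) if feasible else None   # None: no speed meets the deadline

# 8e8 cycles due in 1 s: 0.6 GHz misses the deadline, so 1.0 GHz is chosen
# over 1.4 GHz, saving energy while preserving temporal correctness.
print(min_energy_speed(cycles=8e8, deadline=1.0, f_levels=[0.6e9, 1.0e9, 1.4e9]))
```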

    Mapping healthcare IT

    Thesis (S.M.)--Harvard-MIT Division of Health Sciences and Technology, 2010. Cataloged from PDF version of thesis. Includes bibliographical references (p. 56-58). In this thesis I have developed a map of Healthcare Information Technology applications used in the United States for care delivery, healthcare enterprise management, clinical support, research, and patient engagement. No previous attempt has been made to develop such a taxonomy for use by healthcare policy makers and on-the-spot decision makers. Using my own fifteen years of experience in HIT, along with an extensive set of literature reviews, interviews, and on-site research, I assembled lists of applications and organized them into categories based on primary workflows. Seven categories of HIT systems emerged: Practice Tools, Advisory Tools, Financial Tools, Remote Healthcare Tools, Clinical Research Tools, Health 2.0 Tools, and Enterprise Clinical Analytics, each of which has different operational characteristics and user communities. The results of this pilot study demonstrate that such a map is possible. The draft map presented here will allow researchers and investors to focus on developing the next generation of HIT tools, including software platforms that orchestrate a variety of healthcare transactions, and will support policy makers as they consider the impact of Federal funding for HIT deployment and adoption. Further studies will refine the map, adding a level of detail below the seven categories established here and thus supporting tactical decision making at the hospital and medical practice level. by William Charles Richards Crawford. S.M.

    Microcomputer

    Issued as Reports [nos. 1-2] and Final report, Project no. E-21-602 (includes subproject no. A-4431/Schlag)

    Modeling the Evolution of Artifact Capabilities in Multi-Agent Based Simulations

    Cognitive scientists agree that the exploitation of objects as tools or artifacts has played a significant role in the evolution of human societies. In the realm of autonomous agents and multi-agent systems, a recent artifact theory proposes the artifact concept as an abstraction for representing functional system components that proactive agents may exploit toward realizing their goals. As a complement, the cognition of rational agents has been extended to accommodate the notion of artifact capabilities, denoting the reasoning and planning capacities of agents with respect to artifacts. Multi-Agent Based Simulation (MABS), a well-established discipline for modeling complex social systems, has been identified as an area that should benefit from these theories, since in MABS the evolution of artifact exploitation can play an important role in the overall performance of the system. The primary contribution of this dissertation is a computational model for integrating artifacts into MABS. The model emphasizes an evolutionary approach that facilitates understanding the effects of artifacts and their exploitation in artificial social systems over time. The artifact theories are extended to support agents designed to evolve artifact exploitation through a variety of learning and adaptation strategies, with an accent on strategies that benefit from the social dimensions of MABS. Realized with evolutionary computation methods, specifically genetic algorithms, cultural algorithms, and multi-population cultural algorithms, artifact capability evolution is supported at the individual, population, and multi-population levels. A generic MABS and case studies are provided to demonstrate the use of the model in new and existing MABS systems. The accommodation of artifact capability evolution in artificial social systems is applicable in many domains, particularly when the modeled system is one where artifact exploitation is relevant to the evolution of the society and its overall behavior. With artifacts acknowledged as major contributors to societal evolution, the impact of our model is significant, providing advanced tools that enable social scientists to analyze their findings. The model can inform archaeologists, economists, evolution theorists, sociologists, and anthropologists, among others.
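
    As a schematic illustration of evolving artifact-exploitation strategies with a genetic algorithm (one of the evolutionary computation methods mentioned above), the toy sketch below evolves bit-string genomes in which bit i means "agent exploits artifact i". The fitness function and the "useful" artifact set are invented for the example and are not part of the dissertation's model.

```python
import random

random.seed(0)
N_ARTIFACTS, POP, GENS = 8, 30, 40
USEFUL = {0, 2, 5}  # hypothetical subset of artifacts that actually help

def fitness(genome):
    # Reward exploiting useful artifacts and ignoring useless ones.
    return sum(1 for i, b in enumerate(genome) if (i in USEFUL) == bool(b))

def evolve():
    pop = [[random.randint(0, 1) for _ in range(N_ARTIFACTS)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: POP // 2]                 # keep the fitter half
        children = []
        while len(elite) + len(children) < POP:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, N_ARTIFACTS)
            child = a[:cut] + b[cut:]           # one-point crossover
            if random.random() < 0.1:           # occasional bit-flip mutation
                j = random.randrange(N_ARTIFACTS)
                child[j] ^= 1
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

# The population converges toward exploiting exactly the useful artifacts.
print(evolve())
```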