    List of requirements on formalisms and selection of appropriate tools

    This deliverable reports on the activities for setting up the modelling environments for the evaluation activities of WP5. To this end, it reports on the identified modelling peculiarities of the electric power infrastructure, the information infrastructure, and their interdependencies, recalls the tools that have been considered, and concentrates on the tools that are, and will be, used in the project: DrawNET, DEEM and EPSys, which have been developed before and during the project by the partners, and Möbius and PRISM, developed respectively at the University of Illinois at Urbana-Champaign and at the University of Birmingham (and more recently at the University of Oxford).

    Introducing performance awareness in an integrated specification environment

    With increasing software complexity and modularization in large software systems and software product lines, it is increasingly difficult to ensure that all requirements are met by the built system. Performance requirements are an important concern for software systems, and research has developed approaches capable of predicting software performance from annotated software architecture descriptions, such as the Palladio tool suite. However, there is a tooling gap when moving between the specification, implementation and verification phases, as the tools are commonly not linked, leading to inconsistencies and ambiguities in the produced artifacts. This thesis introduces performance awareness into the Integrated Specification Environment for the Specification of Technical Software Systems (IETS3), a specification environment aiming to close the tooling gap between the different lifecycle phases. Performance awareness is introduced by integrating existing approaches for software performance prediction from the Palladio tool suite and extending them to cope with variability-aware system models for software product lines. The thesis includes an experimental evaluation showing that the developed approach is able to provide performance predictions to users of the specification environment within 2000 ms for systems of up to 20 components and within 8000 ms for systems of up to 30 components.
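
    As a rough, self-contained illustration of the kind of prediction such tooling performs (not the IETS3/Palladio implementation; the component names, resource demands and the single-queue assumption below are made up for the example), the following Python sketch estimates the mean response time of a usage scenario over a small component assembly:

        # Minimal sketch of architecture-based performance prediction (hypothetical
        # component names and timings; not the Palladio/IETS3 implementation).
        from dataclasses import dataclass

        @dataclass
        class Component:
            name: str
            cpu_demand_ms: float  # CPU time demanded per call

        def predict_response_time(call_sequence, arrival_rate_per_s):
            """Estimate mean response time of a usage scenario with a simple
            open single-server queue (M/M/1-style) for the CPU resource."""
            total_demand_ms = sum(c.cpu_demand_ms for c in call_sequence)
            utilization = arrival_rate_per_s * total_demand_ms / 1000.0
            if utilization >= 1.0:
                raise ValueError("CPU saturated; the model predicts unbounded response times")
            # Mean response time of an M/M/1 queue: service demand / (1 - utilization)
            return total_demand_ms / (1.0 - utilization)

        if __name__ == "__main__":
            frontend = Component("Frontend", cpu_demand_ms=4.0)
            catalog = Component("CatalogService", cpu_demand_ms=12.0)
            billing = Component("BillingService", cpu_demand_ms=9.0)
            scenario = [frontend, catalog, billing]
            t = predict_response_time(scenario, arrival_rate_per_s=20)
            print(f"Predicted response time: {t:.1f} ms")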

    Sequence analysis methods for the design of cancer vaccines that target tumor-specific mutant antigens (neoantigens)

    The human adaptive immune system is programmed to distinguish between self and non-self proteins; if trained to recognize markers unique to a cancer, it may be possible to stimulate the selective destruction of cancer cells. Therapeutic cancer vaccines aim to boost the immune system by selectively increasing the population of T cells specifically targeted to the tumor-unique antigens, thereby initiating cancer cell death. In the past, this approach has primarily focused on targeted selection of ‘shared’ tumor antigens, found across many patients. The advent of massively parallel sequencing and specialized analytical approaches has enabled more efficient characterization of tumor-specific mutant antigens, or neoantigens. Specifically, methods to predict which tumor-specific mutant peptides (neoantigens) can elicit anti-tumor T cell recognition improve predictions of immune checkpoint therapy response and identify one or more neoantigens as targets for personalized vaccines. Selecting the best/most immunogenic neoantigens from a large number of mutations is an important challenge, in particular in cancers with a high mutational load, such as melanomas and smoker-associated lung cancers. To address this challenging task, Chapter 1 of this thesis describes a genome-guided in silico approach to identifying tumor neoantigens that integrates tumor mutation and expression data (DNA- and RNA-Seq). The cancer vaccine design process, from read alignment to variant calling and neoantigen prediction, typically assumes that the genotype of the Human Reference Genome sequence surrounding each somatic variant is representative of the patient’s genome sequence, and does not account for the effect of nearby variants (somatic or germline) on the neoantigenic peptide sequence. Because the accuracy of neoantigen identification has important implications for many clinical trials and studies of basic cancer immunology, Chapter 2 describes and supports the need for patient-specific inclusion of proximal variants to address this previously oversimplified assumption in the identification of neoantigens. The method of neoantigen identification described in Chapter 1 was subsequently extended (Chapter 3) and improved by the addition of a modular workflow that aids in each component of the neoantigen prediction process: neoantigen identification, prioritization, data visualization, and DNA vaccine design. These chapters describe massively parallel sequence analysis methods that will help in the identification and subsequent refinement of patient-specific antigens for use in personalized immunotherapy.
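
    To make the proximal-variant problem from Chapter 2 concrete, the following Python sketch (hypothetical protein sequence and variant positions, not the pipeline described in the thesis) builds the peptide window around a somatic missense variant once against the reference background and once with a nearby germline variant applied; the two candidate neoantigen sequences differ, which is exactly the discrepancy that patient-specific inclusion of proximal variants avoids:

        # Illustrative only: hypothetical protein sequence and variant positions.

        def apply_variants(protein_seq, variants):
            """Apply single-residue substitutions {0-based position: new amino acid}."""
            seq = list(protein_seq)
            for pos, aa in variants.items():
                seq[pos] = aa
            return "".join(seq)

        def peptide_window(protein_seq, center, flank=10):
            """Extract the peptide surrounding a mutated residue (e.g. for MHC binding prediction)."""
            start = max(0, center - flank)
            return protein_seq[start:center + flank + 1]

        reference = "MSTAPLKQRVEWLDNGFKYTAHCILPQESRGNDVMK"
        somatic = {18: "W"}            # hypothetical somatic missense variant Y19W
        germline_proximal = {20: "R"}  # hypothetical nearby germline variant A21R

        naive = peptide_window(apply_variants(reference, somatic), 18)
        patient_specific = peptide_window(
            apply_variants(reference, {**somatic, **germline_proximal}), 18
        )

        print("Neoantigen window (reference background): ", naive)
        print("Neoantigen window (with proximal variant):", patient_specific)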

    NASA Tech Briefs, July 2011

    Topics covered include: 1) Collaborative Clustering for Sensor Networks; 2) Teleoperated Marsupial Mobile Sensor Platform Pair for Telepresence Insertion Into Challenging Structures; 3) Automated Verification of Spatial Resolution in Remotely Sensed Imagery; 4) Electrical Connector Mechanical Seating Sensor; 5) In Situ Aerosol Detector; 6) Multi-Parameter Aerosol Scattering Sensor; 7) MOSFET Switching Circuit Protects Shape Memory Alloy Actuators; 8) Optimized FPGA Implementation of Multi-Rate FIR Filters Through Thread Decomposition; 9) Circuit for Communication Over Power Lines; 10) High-Efficiency Ka-Band Waveguide Two-Way Asymmetric Power Combiner; 11) 10-100 Gbps Offload NIC for WAN, NLR, and Grid Computing; 12) Pulsed Laser System to Simulate Effects of Cosmic Rays in Semiconductor Devices; 13) Flight Planning in the Cloud; 14) MPS Editor; 15) Object-Oriented Multi Disciplinary Design, Analysis, and Optimization Tool; 16) Cryogenic-Compatible Winchester Connector Mount and Retaining System for Composite Tubes; 17) Development of Position-Sensitive Magnetic Calorimeters for X-Ray Astronomy; 18) Planar Rotary Piezoelectric Motor Using Ultrasonic Horns; 19) Self-Rupturing Hermetic Valve; 20) Explosive Bolt Dual-Initiated from One Side; 21) Dampers for Stationary Labyrinth Seals; 22) Two-Arm Flexible Thermal Strap; 23) Carbon Dioxide Removal via Passive Thermal Approaches; 24) Polymer Electrolyte-Based Ambient Temperature Oxygen Microsensors for Environmental Monitoring; 25) Pressure Shell Approach to Integrated Environmental Protection; 26) Image Quality Indicator for Infrared Inspections; 27) Micro-Slit Collimators for X-Ray/Gamma-Ray Imaging; 28) Scatterometer-Calibrated Stability Verification Method; 29) Test Port for Fiber-Optic-Coupled Laser Altimeter; 30) Phase Retrieval System for Assessing Diamond Turning and Optical Surface Defects; 31) Laser Oscillator Incorporating a Wedged Polarization Rotator and a Porro Prism as Cavity Mirror; 32) Generic, Extensible, Configurable Push-Pull Framework for Large-Scale Science Missions; 33) Dynamic Loads Generation for Multi-Point Vibration Excitation Problems; 34) Optimal Control via Self-Generated Stochasticity; 35) Space-Time Localization of Plasma Turbulence Using Multiple Spacecraft Radio Links; 36) Surface Contact Model for Comets and Asteroids; 37) Dust Mitigation Vehicle; 38) Optical Coating Performance for Heat Reflectors of the JWST-ISIM Electronic Component; 39) SpaceCube Demonstration Platform; 40) Aperture Mask for Unambiguous Parity Determination in Long Wavelength Imagers; 41) Spaceflight Ka-Band High-Rate Radiation-Hard Modulator; 42) Enabling Disabled Persons to Gain Access to Digital Media; 43) Cytometer on a Chip; 44) Principles, Techniques, and Applications of Tissue Microfluidics; and 45) Two-Stage Winch for Kites and Tethered Balloons or Blimps

    On the design of R-based scalable frameworks for data science applications

    This thesis comprises three papers on the design of R-based scalable frameworks for data science applications. We discuss the design of conceptual and computational frameworks for the R language for statistical computing and graphics and build software artifacts for two typical data science use cases: optimization problem solving and large-scale text analysis. Each part follows a design science approach. We use a verification method for the software frameworks introduced, i.e., prototypical instantiations of the designed artifacts are evaluated on the basis of real-world applications in mixed integer optimization (consensus journal ranking) and text mining (culturomics). The first paper introduces the extensible, object-oriented R Optimization Infrastructure (ROI). Methods from the field of optimization play an important role in many techniques routinely used in statistics, machine learning and data science. Often, implementations of these methods rely on highly specialized optimization algorithms, designed to be applicable only within a specific application. However, in many instances recent advances, in particular in the field of convex optimization, make it possible to conveniently and straightforwardly use modern solvers instead, with the advantage of enabling broader usage scenarios and thus promoting reusability. With ROI one can formulate and solve optimization problems in a consistent way. It is capable of modeling linear, quadratic, conic, and general nonlinear optimization problems. Furthermore, the paper discusses how extension packages can add additional optimization solvers, read/write functions and additional resources such as model collections. Selected examples from the field of statistics conclude the paper. With the second paper we aim to answer two questions. Firstly, it addresses the issue of how to construct suitable aggregates of individual journal rankings, using an optimization-based consensus ranking approach. Secondly, the presented application serves as an evaluation of the ROI prototype. Regarding the first research question, we apply the proposed method to a subset of marketing-related journals from a list of collected journal rankings. Next, the paper studies the stability of the derived consensus solution and the degeneration effects that occur when excluding journals and/or rankings. Finally, we investigate the similarities/dissimilarities of the consensus with a naive meta-ranking and with individual rankings. The results show that, even though journals are not uniformly ranked, one may derive a consensus ranking with considerably high agreement with the individual rankings. In the third paper we examine how we can extend the text mining package tm to handle large (text) corpora. This enables statisticians to answer many interesting research questions via statistical analysis or modeling of data sets that cannot easily be analyzed otherwise, e.g., due to software- or hardware-induced data size limitations. Adequate programming models like MapReduce facilitate the parallelization of text mining tasks and allow for processing large data sets by using a distributed file system, possibly spanning several machines, e.g., in a cluster of workstations. The paper presents a plug-in package to tm called tm.plugin.dc implementing a distributed corpus class which can take advantage of the Hadoop MapReduce library for large-scale text mining tasks. We evaluate the presented prototype on the basis of an application in culturomics and show that it can handle data sets of significant size efficiently.
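
    The optimization-based consensus ranking of the second paper can be illustrated independently of R and ROI: the Python sketch below (hypothetical journal identifiers and rankings) searches for the ordering that minimizes the total number of pairwise disagreements with the individual rankings, a brute-force Kemeny-style aggregation; ROI would instead express the same problem as a (mixed) integer program and delegate it to a solver:

        # Brute-force consensus (Kemeny-style) ranking over toy data; ROI would
        # solve an equivalent integer program with a dedicated solver instead.
        from itertools import permutations

        journals = ["JM", "JMR", "IJRM", "MktSci"]   # hypothetical journal identifiers
        rankings = [                                  # individual rankings, best first
            ["JM", "JMR", "MktSci", "IJRM"],
            ["JMR", "JM", "IJRM", "MktSci"],
            ["JM", "MktSci", "JMR", "IJRM"],
        ]

        def disagreements(candidate, ranking):
            """Count journal pairs that candidate and ranking order differently."""
            pos_c = {j: i for i, j in enumerate(candidate)}
            pos_r = {j: i for i, j in enumerate(ranking)}
            return sum(
                1
                for a in journals for b in journals
                if a < b and (pos_c[a] - pos_c[b]) * (pos_r[a] - pos_r[b]) < 0
            )

        consensus = min(
            permutations(journals),
            key=lambda cand: sum(disagreements(cand, r) for r in rankings),
        )
        print("Consensus ranking:", " > ".join(consensus))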

    XPL: a language for modular homogeneous language embedding

    Languages that are used for Software Language Engineering (SLE) offer a range of features that support the construction and deployment of new languages. SLE languages offer features for constructing and processing syntax and defining the semantics of language features. New languages may be embedded within an existing language (internal) or may be stand-alone (external). Modularity is a desirable SLE property for which there is no generally agreed approach. This article analyses the current tools for SLE and identifies the key features that are common. It then proposes a language called XPL that supports these features. XPL is higher-order and allows languages to be constructed and manipulated as first-class elements and therefore can be used to represent a range of approaches to modular language definition. This is validated by using XPL to define the notion of a language module that supports modular language construction and language transformation
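
    The general idea of treating languages as first-class, composable values can be gestured at with a much more modest Python sketch (an illustration of modular language definition in general, not of XPL's actual syntax or semantics): a "language module" is an ordinary value mapping construct names to handlers, and composing modules is a checked merge of those maps:

        # Toy "language modules" as first-class values; not XPL, just the general idea.

        def core_module():
            """Base language: literals and addition, over nested-tuple syntax trees."""
            return {
                "lit": lambda env, value: value,
                "add": lambda env, left, right: evaluate(left, env) + evaluate(right, env),
            }

        def let_module():
            """Extension module: variables and local bindings, composable with the core."""
            return {
                "var": lambda env, name: env[name],
                "let": lambda env, name, bound, body: evaluate(
                    body, {**env, name: evaluate(bound, env)}
                ),
            }

        def compose(*modules):
            """Composition of language modules is a merge of their constructs,
            rejected if two modules define the same construct."""
            combined = {}
            for module in modules:
                overlap = combined.keys() & module.keys()
                if overlap:
                    raise ValueError(f"conflicting constructs: {overlap}")
                combined.update(module)
            return combined

        LANGUAGE = compose(core_module(), let_module())

        def evaluate(expr, env):
            head, *args = expr
            return LANGUAGE[head](env, *args)

        # let x = 2 + 3 in x + x  ==> 10
        program = ("let", "x", ("add", ("lit", 2), ("lit", 3)),
                   ("add", ("var", "x"), ("var", "x")))
        print(evaluate(program, {}))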

    Recent advances in inferring viral diversity from high-throughput sequencing data

    Rapidly evolving RNA viruses prevail within a host as a collection of closely related variants, referred to as a viral quasispecies. Advances in high-throughput sequencing (HTS) technologies have facilitated the assessment of the genetic diversity of such virus populations at an unprecedented level of detail. However, analysis of HTS data from virus populations is challenging due to short, error-prone reads. In order to account for uncertainties originating from these limitations, several computational and statistical methods have been developed for studying the genetic heterogeneity of virus populations. Here, we review methods for the analysis of HTS reads, including approaches to local diversity estimation and global haplotype reconstruction. Challenges posed by aligning reads, as well as the impact of reference biases on diversity estimates, are also discussed. In addition, we address some of the experimental approaches designed to improve the biological signal-to-noise ratio. In the future, computational methods for the analysis of heterogeneous virus populations are likely to continue being complemented by technological developments.
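
    As a concrete example of local diversity estimation, the following Python sketch (toy reads and alignments, no error model) computes the per-position Shannon entropy of the base composition in a pileup of aligned reads; real methods additionally model sequencing errors so that low-frequency variation is not confused with noise:

        # Toy local diversity estimate: per-position Shannon entropy from aligned reads.
        # Real methods also model sequencing errors; this sketch ignores them.
        import math
        from collections import Counter

        def positional_entropy(reads, start_positions, genome_length):
            """Return Shannon entropy (bits) of the base composition at each position."""
            columns = [Counter() for _ in range(genome_length)]
            for read, start in zip(reads, start_positions):
                for offset, base in enumerate(read):
                    columns[start + offset][base] += 1
            entropies = []
            for counts in columns:
                depth = sum(counts.values())
                if depth == 0:
                    entropies.append(0.0)
                    continue
                entropies.append(-sum(
                    (n / depth) * math.log2(n / depth) for n in counts.values()
                ))
            return entropies

        reads = ["ACGTAC", "ACGAAC", "CGTACG", "CGAACG"]   # hypothetical aligned reads
        starts = [0, 0, 1, 1]                               # 0-based alignment start positions
        for pos, h in enumerate(positional_entropy(reads, starts, genome_length=7)):
            print(f"position {pos}: entropy {h:.2f} bits")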

    Reliably composable language extensions

    University of Minnesota Ph.D. dissertation. May 2017. Major: Computer Science. Advisor: Eric Van Wyk. 1 computer file (PDF); x, 300 pages.
    Many programming tasks are dramatically simpler when an appropriate domain-specific language can be used to accomplish them. These languages offer a variety of potential advantages, including programming at a higher level of abstraction, custom analyses specific to the problem domain, and the ability to generate very efficient code. But they also suffer many disadvantages as a result of their implementation techniques. Fully separate languages (such as YACC or SQL) are quite flexible, but they are distinct monolithic entities, and thus we are unable to draw on the features of several in combination to accomplish a single task. That is, we cannot compose their domain-specific features. "Embedded" DSLs (such as parsing combinators) accomplish something like a different language, but are actually implemented simply as libraries within a flexible host language. This approach allows different libraries to be imported and used together, enabling composition, but it is limited in analysis and translation capabilities by the host language they are embedded within. A promising combination of these two approaches is to allow a host language to be directly extended with new features (syntactic and semantic). However, while there are plausible ways to attempt to compose language extensions, they can easily fail, making this approach unreliable. Previous methods of assuring reliable composition impose onerous restrictions, such as giving up entirely the ability to introduce new analyses. This thesis introduces reliably composable language extensions as a technique for the implementation of DSLs. This technique preserves most of the advantages of both separate and "embedded" DSLs. Unlike many prior approaches to language extension, this technique ensures that the composition of multiple language extensions will succeed, and it preserves strong properties about the behavior of the resulting composed compiler. We define an analysis on language extensions that guarantees the composition of several extensions will be well-defined, and we further define a set of testable properties that ensure the resulting compiler will behave as expected, along with a principle that assigns "blame" for bugs that may ultimately appear as a result of composition. Finally, to concretely compare our approach to our original goals for reliably composable language extension, we use these techniques to develop an extensible C compiler front-end, together with several example composable language extensions.

    Projectional Editing of Software Product Lines–The PEoPL approach
