    Deep Learning in Target Space

    Deep learning uses neural networks, which are parameterised by their weights. Neural networks are usually trained by tuning the weights directly to minimise a given loss function. In this paper we propose to re-parameterise the weights into targets for the firing strengths of the individual nodes in the network. Given a set of targets, it is possible to calculate the weights which make the firing strengths best meet those targets. We argue that training with targets addresses the problem of exploding gradients, through a process we call cascade untangling, and makes the loss-function surface smoother to traverse, leading to easier, faster training and potentially better generalisation of the neural network. It also allows for easier learning of deeper and recurrent network structures. The necessary conversion of targets to weights comes at an extra computational expense, which is in many cases manageable. Learning in target space can be combined with existing neural-network optimisers for extra gain. Experimental results demonstrate the speed of training in target space and examples of improved generalisation for fully-connected and convolutional networks, as well as the ability to recall and process long time sequences and perform natural-language processing with recurrent networks.
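
    As a minimal sketch of the conversion the abstract describes (computing, for a given set of targets, the weights that best realise them), the NumPy snippet below solves the per-layer least-squares problem. The helper name `weights_from_targets` and the ridge term are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def weights_from_targets(inputs, targets, ridge=1e-3):
    """Solve for the weight matrix W minimising ||inputs @ W - targets||^2,
    i.e. the weights that make the layer's pre-activations best meet the
    given targets. A small ridge term keeps the solve well conditioned."""
    d_in = inputs.shape[1]
    gram = inputs.T @ inputs + ridge * np.eye(d_in)
    return np.linalg.solve(gram, inputs.T @ targets)

# Toy usage: the trainable parameters are the *targets* T1, not the weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(128, 10))    # training inputs
T1 = rng.normal(size=(128, 32))   # targets for layer-1 pre-activations
W1 = weights_from_targets(X, T1)  # weights induced by the targets
H1 = np.tanh(X @ W1)              # realised firing strengths
```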

    Metabolic network percolation quantifies biosynthetic capabilities across the human oral microbiome

    The biosynthetic capabilities of microbes underlie their growth and interactions, playing a prominent role in microbial community structure. For large, diverse microbial communities, prediction of these capabilities is limited by uncertainty about metabolic functions and environmental conditions. To address this challenge, we propose a probabilistic method, inspired by percolation theory, to computationally quantify how robustly a genome-derived metabolic network produces a given set of metabolites under an ensemble of variable environments. We used this method to compile an atlas of predicted biosynthetic capabilities for 97 metabolites across 456 human oral microbes. This atlas captures taxonomically-related trends in biomass composition, and makes it possible to estimate inter-microbial metabolic distances that correlate with microbial co-occurrences. We also found a distinct cluster of fastidious/uncultivated taxa, including several Saccharibacteria (TM7) species, characterized by their abundant metabolic deficiencies. By embracing uncertainty, our approach can be broadly applied to understanding metabolic interactions in complex microbial ecosystems.
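
    A rough sketch of the percolation-style calculation, under the assumption that a metabolic network is a list of (substrates, products) reaction pairs and that an environment is a random subset of candidate nutrients; the function names and toy network below are illustrative, not the paper's model.

```python
import random

def expand(reactions, seeds):
    """Network expansion: repeatedly fire any reaction whose substrates
    are all producible, until no new metabolites appear."""
    producible = set(seeds)
    changed = True
    while changed:
        changed = False
        for substrates, products in reactions:
            if substrates <= producible and not products <= producible:
                producible |= products
                changed = True
    return producible

def producibility(reactions, candidates, target, p=0.5, trials=1000):
    """Estimate the probability that `target` is producible when each
    candidate nutrient is independently present with probability p,
    in the spirit of a percolation-style ensemble of environments."""
    hits = 0
    for _ in range(trials):
        seeds = {m for m in candidates if random.random() < p}
        hits += target in expand(reactions, seeds)
    return hits / trials

# Toy network: A + B -> C, C -> D
rxns = [({'A', 'B'}, {'C'}), ({'C'}, {'D'})]
print(producibility(rxns, ['A', 'B'], 'D'))  # ~0.25 for p = 0.5
```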

    Well-Formed and Scalable Invasive Software Composition

    Software components provide essential means to structure and organize software effectively. However, required component abstractions are frequently not available in a programming language or system, or cannot be adequately combined with each other. Invasive software composition (ISC) is a general approach to software composition that unifies component-like abstractions such as templates, aspects and macros. ISC is based on fragment composition, and composes programs and other software artifacts at the level of syntax trees. To this end, a unifying fragment component model is related to the context-free grammar of a language to identify extension and variation points in syntax trees as well as valid component types. Fragment components can then be composed by transformations at the respective extension and variation points, so that the composition results are always valid with respect to the underlying context-free grammar. However, given a language's context-free grammar, the composition result may still be incorrect. Context-sensitive constraints such as type constraints may be violated, so that the program cannot be compiled or interpreted correctly. While a compiler can detect such errors after composition, it is difficult to relate them back to the original transformation step in the composition system, especially in the case of complex compositions with several hundred such steps. To tackle this problem, this thesis proposes well-formed ISC—an extension to ISC that uses reference attribute grammars (RAGs) to specify fragment component models and fragment contracts to guard compositions with context-sensitive constraints. Additionally, well-formed ISC provides composition strategies as a means to configure composition algorithms and handle interferences between composition steps. Developing ISC systems for complex languages such as programming languages is a complex undertaking. Composition-system developers need to supply or develop adequate language and parser specifications that can be processed by an ISC composition engine. Moreover, the specifications may need to be extended with rules for the intended composition abstractions. Current approaches to ISC require complete grammars to be able to compose fragments in the respective languages. Hence, the specifications need to be developed exhaustively before any component model can be supplied. To tackle this problem, this thesis introduces scalable ISC—a variant of ISC that uses island component models as a means to define component models for partially specified languages while still supporting the whole language. Additionally, a scalable workflow for agile composition-system development is proposed, which supports developing ISC systems in small increments using modular extensions. All theoretical concepts introduced in this thesis are implemented in the Skeletons and Application Templates framework SkAT, which supports "classic", well-formed and scalable ISC by leveraging RAGs as its main specification and implementation language. Moreover, several composition systems based on SkAT are discussed, e.g., a well-formed composition system for Java and a C-preprocessor-like macro language. In turn, those composition systems are used as composers in several example applications, such as a library of parallel algorithmic skeletons.
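
    To make the fragment-composition idea concrete, here is a toy sketch of composing syntax trees at named variation points, with an optional predicate standing in for a fragment contract. The `Node` structure and `bind` helper are invented for illustration and are unrelated to SkAT's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str                       # e.g. 'class', 'method', 'slot'
    name: str = ''
    children: list = field(default_factory=list)

def bind(fragment, slot_name, component, contract=None):
    """Replace the variation point named `slot_name` with `component`.
    An optional `contract` predicate guards the composition with a
    context-sensitive check, in the spirit of a fragment contract."""
    if contract and not contract(fragment, component):
        raise ValueError('fragment contract violated')
    def rewrite(node):
        if node.kind == 'slot' and node.name == slot_name:
            return component
        node.children = [rewrite(c) for c in node.children]
        return node
    return rewrite(fragment)

# Toy composition: plug a method fragment into a class template.
template = Node('class', 'Stack', [Node('slot', 'METHODS')])
method = Node('method', 'push')
composed = bind(template, 'METHODS', method,
                contract=lambda f, c: c.kind == 'method')
```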

    High frequency response of adenine-derived carbon in aqueous electrochemical capacitor

    Electrochemical capacitors are attractive power sources, especially when they are able to operate at high frequency (high current regime). In order to meet this requirement, their constituents should be made of high-conductivity materials with a suitable porosity. In this study, an electrode material obtained from a carbonized adenine precursor is presented, offering enhanced power and simultaneously high capacitance (120 F g⁻¹ at 1 Hz or 10 A g⁻¹). The micro/mesoporous character of the carbon, with an optimal pore size ratio and high surface area, was proven by physicochemical characterization. The beneficial pore structure and morphology, resembling highly conductive carbon black, together with a significant nitrogen content (5.5%), allow a high frequency response of the aqueous capacitor to be obtained. The carbon/carbon symmetric capacitor (in 1 mol L⁻¹ Li₂SO₄) has been tested to a voltage of 1.5 V. Cyclic voltammetry indicates a good electrochemical response even at a high scan rate (50 mV s⁻¹). The cyclability of the capacitor is comparable to that of one operating with commercial carbon (YP50F). The adenine-based capacitor is especially favourable for stationary applications requiring high power.
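
    As a worked example of the headline figure, gravimetric capacitance is commonly estimated from a galvanostatic discharge as C = I·Δt/(m·ΔV); the snippet below applies this textbook relation with purely illustrative numbers, not the measurements from this study.

```python
def specific_capacitance(current_a, dt_s, dv_v, mass_g):
    """Textbook estimate of gravimetric capacitance from a galvanostatic
    discharge: C = I * dt / (m * dV), in farads per gram."""
    return current_a * dt_s / (mass_g * dv_v)

# Illustrative numbers only (not measurements from this study):
# 10 mg of active material discharged at 0.1 A over 1.5 V in 18 s.
print(specific_capacitance(0.1, 18.0, 1.5, 0.010))  # -> 120.0 F/g
```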

    Software Design Change Artifacts Generation through Software Architectural Change Detection and Categorisation

    Unlike other engineering projects, which are mostly implemented by non-expert workers after being designed by engineers, software is designed, implemented, tested, and inspected entirely by experts. Researchers and practitioners have linked software bugs, security holes, problematic integration of changes, complex-to-understand codebases, unwarranted mental pressure, and other problems in software development and maintenance to inconsistent and complex design, and to a lack of ways to easily understand what is going on and what to plan in a software system. The unavailability of the information and insights that development teams need to make good decisions makes these challenges worse. Therefore, extracting software design documents and other insightful information is essential to reduce the above-mentioned anomalies. Moreover, extracting architectural design artifacts is required to create developer profiles for many crucial scenarios. To that end, architectural change detection, categorization, and change description generation are crucial, because these are the primary artifacts from which other software artifacts are traced. However, it is not feasible for humans to analyze all the changes in a single release to detect changes and their impact, because doing so is time-consuming, laborious, costly, and inconsistent. In this thesis, we conduct six studies addressing these challenges to automate architectural change information extraction and document generation that could potentially assist development and maintenance teams. In particular, (1) we detect architectural changes using lightweight techniques leveraging textual and codebase properties, (2) categorize them considering intelligent perspectives, and (3) generate design change documents by exploiting precise contexts of components' relations and change purposes, which were previously unexplored. Our experiments using 4000+ architectural change samples and 200+ design change documents suggest that our proposed approaches are promising in accuracy and scalable enough to deploy frequently. Our proposed change detection approach can detect up to 100% of the architectural change instances (and is very scalable). Our proposed change classifier's F1 score is 70%, which is promising given the challenges. Finally, our proposed system can produce descriptive design change artifacts with 75% significance. Since most of our studies are foundational, our approaches and prepared datasets can be used as baselines for advancing research in design change information extraction and documentation.
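
    As a hedged illustration of what a lightweight, codebase-property-based detector might look like, the sketch below diffs module dependency graphs between two releases; the representation and function names are assumptions for illustration, not the thesis's actual technique.

```python
def dependency_graph(modules):
    """modules: mapping of module -> iterable of modules it depends on."""
    return {(src, dst) for src, deps in modules.items() for dst in deps}

def architectural_diff(old, new):
    """Report dependency edges added or removed between releases, a
    lightweight, codebase-level signal of architectural change."""
    before, after = dependency_graph(old), dependency_graph(new)
    return {'added': after - before, 'removed': before - after}

release_1 = {'ui': ['core'], 'core': ['db']}
release_2 = {'ui': ['core', 'db'], 'core': ['db']}
print(architectural_diff(release_1, release_2))
# {'added': {('ui', 'db')}, 'removed': set()}
```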

    A Unified Framework for Parallel Anisotropic Mesh Adaptation

    Finite-element methods are a critical component of the design and analysis procedures of many (bio-)engineering applications. Mesh adaptation is one of the most crucial of these components, since it discretizes the physics of the application at a relatively low cost to the solver. Highly scalable parallel mesh adaptation methods for High-Performance Computing (HPC) are essential to meet the ever-growing demand for higher fidelity simulations. Moreover, the continuous growth of the complexity of HPC systems requires a systematic approach to exploit their full potential. Anisotropic mesh adaptation captures features of the solution at multiple scales while minimizing the required number of elements. However, it also introduces new challenges on top of mesh generation. In addition, the increased complexity of the targeted cases requires departing from traditional surface-constrained approaches in favour of utilizing CAD (Computer-Aided Design) kernels. Alongside these functionality requirements is the need to take advantage of ubiquitous multi-core machines. More importantly, the parallel implementation needs to handle the ever-increasing complexity of the mesh adaptation code. In this work, we develop a parallel mesh adaptation method that utilizes a metric-based approach for generating anisotropic meshes. Moreover, we enhance our method by interfacing with a CAD kernel, thus enabling its use on complex geometries. We evaluate our method both with fixed-resolution benchmarks and within a simulation pipeline, where the resolution of the discretization increases incrementally. With the Telescopic Approach for scalable mesh generation as a guide, we propose a parallel method for mesh adaptation at the node (multi-core) level that is expected to scale up efficiently to the upcoming exascale machines. To facilitate an effective implementation, we introduce an abstract layer between the application and the runtime system that enables the use of task-based parallelism for concurrent mesh operations. Our evaluation indicates results comparable to state-of-the-art methods for fixed-resolution meshes, both in terms of performance and quality. The integration with an adaptive pipeline offers promising results for the capability of the proposed method to function as part of an adaptive simulation. Moreover, our abstract tasking layer allows the separation of different aspects of the implementation without any impact on the functionality of the method.
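
    A small sketch of the metric-based idea, assuming the standard formulation in which edge lengths are measured in a prescribed metric tensor field and the mesh is adapted until edges are near unit length; the code is illustrative, not the method developed in this work.

```python
import numpy as np

def metric_length(p, q, M):
    """Length of edge (p, q) measured in metric M:
    sqrt((q - p)^T M (q - p)). In metric-based adaptation the mesh is
    refined or coarsened until every edge has length ~1 in M."""
    e = np.asarray(q, float) - np.asarray(p, float)
    return float(np.sqrt(e @ M @ e))

# Anisotropic metric demanding spacing 0.1 along x and 1.0 along y.
M = np.diag([1 / 0.1**2, 1 / 1.0**2])
print(metric_length((0, 0), (0.1, 0.0), M))  # ~1.0 -> edge is well sized
print(metric_length((0, 0), (0.0, 0.1), M))  # ~0.1 -> edge could be coarsened
```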

    Empirical studies of structural phenomena using a curated corpus of Java code

    Contrary to 50 years' worth of advice in the instructional literature on software design, long cyclic dependencies are found to be widespread in a sizeable, curated corpus of real Java software. Among their causes may be overuse of static members, underuse of dependency injection, and poor tool support for avoiding them.
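
    For illustration, cyclic dependencies of the kind studied here can be located as strongly connected components of a class- or package-level dependency graph; the sketch below runs Tarjan's algorithm on a toy graph and is not the corpus tooling used in the study.

```python
def strongly_connected_components(graph):
    """Tarjan's algorithm; cyclic dependencies show up as strongly
    connected components of size > 1 in the dependency graph."""
    index, low, on_stack, stack, sccs = {}, {}, set(), [], []
    counter = [0]
    def visit(v):
        index[v] = low[v] = counter[0]; counter[0] += 1
        stack.append(v); on_stack.add(v)
        for w in graph.get(v, ()):
            if w not in index:
                visit(w); low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of an SCC
            scc = []
            while True:
                w = stack.pop(); on_stack.discard(w); scc.append(w)
                if w == v:
                    break
            sccs.append(scc)
    for v in graph:
        if v not in index:
            visit(v)
    return [s for s in sccs if len(s) > 1]

# Toy Java-style dependency graph with a three-class cycle.
deps = {'A': ['B'], 'B': ['C'], 'C': ['A'], 'D': ['A']}
print(strongly_connected_components(deps))  # [['C', 'B', 'A']]
```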

    Meta-parametric design: Developing a computational approach for early stage collaborative practice

    Computational design is the study of how programmable computers can be integrated into the process of design. It is not simply the use of pre-compiled computer-aided design software that aims to replicate the drawing board, but rather the development of computer algorithms as an integral part of the design process. Programmable machines have begun to challenge traditional modes of thinking in architecture and engineering, placing further emphasis on process ahead of the final result. Just as Darwin and Wallace had to think beyond form and inquire into the development of biological organisms to understand evolution, so computational methods enable us to rethink how we approach the design process itself. The subject is broad and multidisciplinary, with influences from design, computer science, mathematics, biology and engineering. This thesis begins similarly wide in its scope, addressing both the technological aspects of computational design and its application on several case study projects in professional practice. By learning through participant observation in combination with secondary research, it is found that design teams can be most effective at the early stage of projects by engaging with the additional complexity this entails. At this concept stage, computational tools such as parametric models are found to have insufficient flexibility for wide design exploration. In response, an approach called Meta-Parametric Design is proposed, inspired by developments in genetic programming (GP). By moving to a higher level of abstraction as computational designers, a Meta-Parametric approach is able to adapt to changing constraints and requirements whilst maintaining an explicit record of process for collaborative working.
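
    A speculative toy sketch of the GP-inspired idea: if the parametric model itself (its topology of operations, not just its numeric sliders) is treated as a genome, search can mutate structure as well as parameters. Everything below, including the operation names, is invented for illustration and is not the system developed in the thesis.

```python
import random

# A toy "meta-parametric" genome: the model is a pipeline of named
# operations (the topology), each with a numeric parameter. Ordinary
# parametric search tweaks only the numbers; a meta-parametric step
# may also rewrite the topology, as in genetic programming.
OPS = ['loft', 'twist', 'scale', 'array']

def mutate(genome, p_topology=0.3):
    genome = list(genome)
    if random.random() < p_topology:          # structural mutation
        genome.insert(random.randrange(len(genome) + 1),
                      (random.choice(OPS), random.uniform(0, 1)))
    else:                                     # parametric mutation
        i = random.randrange(len(genome))
        op, val = genome[i]
        genome[i] = (op, min(1.0, max(0.0, val + random.gauss(0, 0.1))))
    return genome

model = [('loft', 0.5), ('twist', 0.2)]
print(mutate(model))
```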

    Hierarchical architecture design and simulation environment

    The Hierarchical Architectural design and Simulation Environment (HASE) is intended as a flexible tool for computer architects who wish to experiment with alternative architectural configurations and design parameters. HASE is both a design environment and a simulator. Architecture components are described by a hierarchical library of objects defined in terms of an object-oriented simulation language. HASE instantiates these objects to simulate and animate the execution of a computer architecture. An event trace generated by the simulator therefore describes the interaction between architecture components, for example, fetch stages, address and data buses, sequencers, instruction buffers and register files. The objects can model physical components at different abstraction levels, e.g. PMS (processor memory switch), ISP (instruction set processor) and RTL (register transfer level). HASE applies the concepts of inheritance, encapsulation and polymorphism associated with object orientation to simplify the design and implementation of an architecture simulation that models component operations at different abstraction levels. For example, HASE can probe the performance of a processor's floating point unit, executing a multiplication operation, at a lower level of abstraction, i.e. the RTL, whilst simulating the remaining architecture components at a PMS level of abstraction. By adopting this approach, HASE returns a more meaningful and relevant event trace from an architecture simulation. Furthermore, an animator visualises the simulation's event trace to clarify the collaborations and interactions between architecture components. The prototype version of HASE is based on GSS (Graphical Support System) and DEMOS (Discrete Event Modelling On Simula).
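
    As a minimal illustration of the discrete-event style HASE builds on (DEMOS on Simula), the Python sketch below drives components from a timestamped event queue and records the kind of event trace described above; the component names and handler scheme are illustrative assumptions.

```python
import heapq

def simulate(events, handlers, until=100):
    """Minimal discrete-event kernel: components post timestamped events;
    the trace of (time, source, target, kind) tuples records their
    interactions, in the spirit of HASE's event-trace animation."""
    queue = list(events)
    heapq.heapify(queue)
    trace = []
    while queue and queue[0][0] <= until:
        time, src, dst, kind = heapq.heappop(queue)
        trace.append((time, src, dst, kind))
        for new_event in handlers.get(dst, lambda t, k: [])(time, kind):
            heapq.heappush(queue, new_event)
    return trace

# Toy architecture: a fetch stage requests a word over a bus.
handlers = {
    'bus':    lambda t, k: [(t + 2, 'bus', 'memory', k)],
    'memory': lambda t, k: [(t + 10, 'memory', 'fetch', 'reply')],
    'fetch':  lambda t, k: [],
}
for entry in simulate([(0, 'fetch', 'bus', 'request')], handlers):
    print(entry)
```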