635 research outputs found

    The development of corpus-based computer assisted composition program and its application for instrumental music composition

    In the last 20 years, the environment for developing music software built on corpora of audio data has expanded significantly: synthesis techniques producing electronic sounds and supportive tools for creative activities have been the driving forces of this growth. Some software produces a sequence of sounds by synthesizing chunks of source audio retrieved from an audio database according to a rule. Since sources are matched on descriptive features extracted by FFT analysis, the quality of the result is strongly influenced by the outcomes of audio analysis, segmentation, and decomposition. Moreover, the synthesis process often requires a considerable amount of sample data, which can become an obstacle to building easy, inexpensive, and user-friendly applications on various kinds of devices. It is therefore crucial to consider how to treat the data and construct an efficient database for synthesis. We aim to apply corpus-based synthesis techniques to develop a Computer Assisted Composition program and to investigate its actual application to ensemble pieces. The goal of this research is to apply the program to instrumental music composition, refine its functions, and search for new avenues toward innovative compositional methods.
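The matching step described in this abstract can be sketched as a nearest-neighbour lookup over feature vectors: each corpus unit carries features extracted from analysis, and the unit closest to each target vector is selected for concatenation. The corpus entries, feature values, and two-dimensional feature space below are hypothetical illustrations, not taken from the program itself:

```python
import math

def nearest_unit(target, corpus):
    """Return the corpus unit whose feature vector is closest (Euclidean) to target."""
    return min(corpus, key=lambda unit: math.dist(unit["features"], target))

# Hypothetical corpus: each unit is an audio segment summarized by a feature vector.
corpus = [
    {"name": "u0", "features": [0.1, 0.9]},
    {"name": "u1", "features": [0.8, 0.2]},
    {"name": "u2", "features": [0.5, 0.5]},
]

# A target phrase is a sequence of desired feature vectors; the synthesized
# output is the concatenation of the best-matching units.
target_phrase = [[0.12, 0.85], [0.55, 0.45]]
sequence = [nearest_unit(t, corpus)["name"] for t in target_phrase]
print(sequence)  # ['u0', 'u2']
```

Real systems use higher-dimensional descriptors and add continuity costs between consecutive units, but the selection principle is the same.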

    Stimulus Optimization in Hardware Verification Using Machine-Learning

    Simulation-based functional verification is a commonly used technique for hardware verification, with the goals of exercising critical scenarios in the design, detecting and fixing bugs, and achieving close to 100% of the coverage targets required for tape-out. As chip complexity continues to grow, functional verification is becoming a bottleneck for the overall chip design cycle. The primary goal is to shorten the time taken for functional coverage convergence in the volume verification phase, which in turn accelerates bug detection in the design. In this thesis, I investigated the application of machine learning toward this objective. I assessed machine learning-guided stimulus generation with two approaches: coarse-grained test-level optimization and fine-grained transaction-level optimization. The effectiveness of machine learning was first confirmed on test-level optimization, which rests on achieving full coverage for a certain group of functional coverage metrics in reduced time with a minimal number of simulated tests. Test-level optimization, however, proved limited to some common functional coverage metrics. This motivated exploring and implementing transaction-level optimization in two novel ways: transaction pruning and directed sequence generation for accelerated functional coverage closure. These techniques were applied to FSM (finite state machine) and non-FSM coverage metrics, and the gains were compared using different ML classifiers. Experimental results showed that the fine-grained implementation can potentially reduce the overall CPU time for verification coverage closure; thus, I propose that complementary application of both levels of stimulus optimization is the recommended path for efficiency improvements in functional coverage convergence.
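As an illustration of coarse-grained test-level pruning, a classifier trained on past simulations can predict whether a candidate test is likely to hit new coverage bins, so that only promising tests are simulated. The nearest-centroid model, the two-knob stimulus vectors, and the labels below are a simplified, hypothetical stand-in for the ML classifiers compared in the thesis:

```python
def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Past simulations: stimulus "knob" settings labelled by whether the test
# hit any new functional-coverage bins (hypothetical training data).
history = [([0.9, 0.1], True), ([0.8, 0.2], True),
           ([0.1, 0.9], False), ([0.2, 0.8], False)]

hit_c = centroid([v for v, label in history if label])
miss_c = centroid([v for v, label in history if not label])

def keep(test):
    """Nearest-centroid prediction: simulate only tests predicted to hit new bins."""
    return sq_dist(test, hit_c) < sq_dist(test, miss_c)

candidates = [[0.85, 0.15], [0.15, 0.85], [0.7, 0.3]]
pruned = [t for t in candidates if keep(t)]
print(pruned)  # [[0.85, 0.15], [0.7, 0.3]]
```

The payoff is that each pruned test saves a full simulation run, which is where the coverage-closure time actually goes.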

    High-resolution genetic map and QTL analysis of growth-related traits of Hevea brasiliensis cultivated under suboptimal temperature and humidity conditions

    Rubber tree (Hevea brasiliensis) cultivation is the main source of natural rubber worldwide and has been extended to areas with suboptimal climates and lengthy drought periods; this transition affects growth and latex production. High-density genetic maps with reliable markers support precise mapping of quantitative trait loci (QTL), which can help reveal the complex genome of the species, provide tools to enhance molecular breeding, and shorten the breeding cycle. In this study, QTL mapping of stem diameter, tree height, and number of whorls was performed for a full-sibling population derived from a GT1 and RRIM701 cross. A total of 225 simple sequence repeat (SSR) and 186 single-nucleotide polymorphism (SNP) markers were used to construct a base map with 18 linkage groups and to anchor 671 SNPs from genotyping by sequencing (GBS), producing a very dense linkage map with small intervals between loci. The final map was composed of 1,079 markers, spanned 3,779.7 cM with an average marker density of 3.5 cM, and showed collinearity with markers from previous studies. Significant variation in phenotypic characteristics was found over a 59-month evaluation period, with a total of 38 QTLs identified through a composite interval mapping method. Linkage group 4 showed the greatest number of QTLs (7), with explained phenotypic variance ranging from 7.67% to 14.07%. Additionally, we estimated segregation patterns, dominance, and additive effects for each QTL. A total of 53 significant effects for stem diameter were observed, and these effects were mostly related to additivity in the GT1 clone. Associating accurate genome assemblies and genetic maps represents a promising strategy for identifying the genetic basis of phenotypic traits in rubber trees. Further research can thus benefit from the QTLs identified herein, providing a better understanding of the key determinant genes associated with growth of Hevea brasiliensis under limiting water conditions.

    Keeping checkpoint/restart viable for exascale systems

    Next-generation exascale systems, those capable of performing a quintillion operations per second, are expected to be delivered in the next 8-10 years. These systems, which will be 1,000 times faster than current systems, will be of unprecedented scale. As these systems continue to grow in size, faults will become increasingly common, even over the course of small calculations. Therefore, issues such as fault tolerance and reliability will limit application scalability. Current techniques to ensure progress across faults, like checkpoint/restart, the dominant fault tolerance mechanism for the last 25 years, are increasingly problematic at the scales of future systems due to their excessive overheads. In this work, we evaluate a number of techniques to decrease the overhead of checkpoint/restart and keep this method viable for future exascale systems. More specifically, this work evaluates state-machine replication to dramatically increase the checkpoint interval (the time between successive checkpoints) and hash-based, probabilistic incremental checkpointing using graphics processing units to decrease the checkpoint commit time (the time to save one checkpoint). Using a combination of empirical analysis, modeling, and simulation, we study the costs and benefits of these approaches over a wide range of parameters. These results, which cover a number of high-performance computing capability workloads, different failure distributions, hardware mean times to failure, and I/O bandwidths, show the potential benefits of these techniques for meeting the reliability demands of future exascale platforms.
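The interplay between checkpoint interval and commit time mentioned above is classically captured by Young's first-order approximation, which picks the interval minimizing expected lost work: tau ≈ sqrt(2 * delta * M) for commit time delta and system MTBF M. This is the textbook model, not necessarily the exact model used in this work, and the numbers below are hypothetical:

```python
import math

def optimal_interval(commit_time_s, mtbf_s):
    """Young's first-order approximation of the optimal checkpoint interval.

    Lengthening the MTBF (e.g. via replication) or shrinking the commit
    time (e.g. via GPU-accelerated incremental checkpointing) both grow
    or shrink the optimal interval as a square root.
    """
    return math.sqrt(2 * commit_time_s * mtbf_s)

# Hypothetical numbers: a 5-minute checkpoint commit and a 1-day system MTBF.
tau = optimal_interval(300, 86_400)
print(round(tau))  # 7200 -> checkpoint roughly every two hours
```

The square-root dependence is why the two techniques evaluated in the thesis are complementary: replication attacks the MTBF term while incremental checkpointing attacks the commit-time term.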

    Multilevel Runtime Verification for Safety and Security Critical Cyber Physical Systems from a Model Based Engineering Perspective

    Advanced embedded system technology is one of the key driving forces behind the rapid growth of Cyber-Physical System (CPS) applications. A CPS consists of multiple coordinating and cooperating components, which are often software-intensive and interact with each other to achieve unprecedented tasks. Such highly integrated CPSs have complex interaction failures, attack surfaces, and attack vectors that we have to protect and secure against. This dissertation advances the state of the art by developing a multilevel runtime monitoring approach for safety and security critical CPSs in which there are monitors at each level of processing and integration. Given that computation and data processing vulnerabilities may exist at multiple levels in an embedded CPS, it follows that solutions present at the levels where the faults or vulnerabilities originate are beneficial for timely detection of anomalies. Further, the increasing functional and architectural complexity of critical CPSs has significant safety and security operational implications. These challenges are leading to a need for new methods in which there is a continuum between design-time assurance and runtime or operational assurance. Toward this end, this dissertation explores Model Based Engineering methods by which design assurance can be carried forward to the runtime domain, creating a shared responsibility for reducing the overall risk associated with the system in operation. Therefore, a synergistic combination of Verification & Validation at design time and runtime monitoring at multiple levels is beneficial in assuring the safety and security of critical CPSs. Furthermore, we realize our multilevel runtime monitor framework on hardware using a stream-based runtime verification language.
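A stream-based runtime monitor of the kind mentioned above consumes an event stream and emits a verdict per event. As a minimal sketch (not the dissertation's framework), the toy monitor below checks a bounded-response safety property, "every request is granted within a fixed number of subsequent events"; the event names and bound are invented:

```python
class BoundedResponseMonitor:
    """Toy stream-based monitor for the safety property:
    every 'request' must be followed by a 'grant' within `bound` events."""

    def __init__(self, bound):
        self.bound = bound
        self.pending = None  # events remaining in which a grant must appear

    def step(self, event):
        """Consume one event from the stream; return a verdict for it."""
        if self.pending is not None:
            self.pending -= 1
            if event == "grant":
                self.pending = None          # obligation discharged
            elif self.pending == 0:
                return "violation"           # deadline missed
        if event == "request" and self.pending is None:
            self.pending = self.bound        # start a new obligation
        return "ok"

m = BoundedResponseMonitor(bound=2)
trace = ["request", "idle", "grant", "request", "idle", "idle"]
verdicts = [m.step(e) for e in trace]
print(verdicts)  # final event flags the missed deadline
```

In a multilevel deployment, one such monitor would run per processing level, each compiled (e.g. to hardware) from a declarative property specification.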

    SPICE²: A Spatial, Parallel Architecture for Accelerating the Spice Circuit Simulator

    Spatial processing of sparse, irregular floating-point computation using a single FPGA enables up to an order of magnitude speedup (mean 2.8X speedup) over a conventional microprocessor for the SPICE circuit simulator. We deliver this speedup using a hybrid parallel architecture that spatially implements the heterogeneous forms of parallelism available in SPICE. We decompose SPICE into its three constituent phases: Model-Evaluation, Sparse Matrix-Solve, and Iteration Control and parallelize each phase independently. We exploit data-parallel device evaluations in the Model-Evaluation phase, sparse dataflow parallelism in the Sparse Matrix-Solve phase and compose the complete design in streaming fashion. We name our parallel architecture SPICE²: Spatial Processors Interconnected for Concurrent Execution for accelerating the SPICE circuit simulator. We program the parallel architecture with a high-level, domain-specific framework that identifies, exposes and exploits parallelism available in the SPICE circuit simulator. This design is optimized with an auto-tuner that can scale the design to use larger FPGA capacities without expert intervention and can even target other parallel architectures with the assistance of automated code-generation. This FPGA architecture is able to outperform conventional processors due to a combination of factors including high utilization of statically-scheduled resources, low-overhead dataflow scheduling of fine-grained tasks, and overlapped processing of the control algorithms. We demonstrate that we can independently accelerate Model-Evaluation by a mean factor of 6.5X(1.4--23X) across a range of non-linear device models and Matrix-Solve by 2.4X(0.6--13X) across various benchmark matrices while delivering a mean combined speedup of 2.8X(0.2--11X) for the two together when comparing a Xilinx Virtex-6 LX760 (40nm) with an Intel Core i7 965 (45nm). 
With our high-level framework, we can also accelerate single-precision Model-Evaluation on NVIDIA GPUs, ATI GPUs, the IBM Cell, and Sun Niagara 2 architectures. We expect approaches based on exploiting spatial parallelism to become important as frequency scaling slows down and modern processing architectures turn to parallelism (e.g., multi-core, GPUs) due to constraints on power consumption. This thesis shows how to express, exploit, and optimize spatial parallelism for an important class of problems that are challenging to parallelize.
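The data-parallel structure of the Model-Evaluation phase can be illustrated with a single standard device model: every device instance applies the same nonlinear kernel to its own terminal voltages, so the loop maps directly onto SIMD lanes, GPU threads, or spatial FPGA pipelines. The Shockley diode equation below is a textbook stand-in for SPICE's (much larger) device models, with made-up parameter values:

```python
import math

def diode_current(v, i_s=1e-14, v_t=0.02585):
    """Shockley diode equation: one per-device model-evaluation kernel.

    i_s (saturation current) and v_t (thermal voltage) are illustrative
    defaults, not values from the thesis.
    """
    return i_s * (math.exp(v / v_t) - 1.0)

# Embarrassingly data-parallel: each device is independent, so this map
# is exactly the work distributed across parallel hardware resources.
voltages = [0.6, 0.65, 0.7]          # one terminal voltage per device
currents = [diode_current(v) for v in voltages]
print(len(currents))  # one current per device
```

The Sparse Matrix-Solve phase, by contrast, exposes only irregular dataflow parallelism between dependent factorization operations, which is why the thesis handles the two phases with different parallel strategies.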

    Index to 1985 NASA Tech Briefs, volume 10, numbers 1-4

    Short announcements of new technology derived from the R&D activities of NASA are presented. These briefs emphasize information considered likely to be transferable across industrial, regional, or disciplinary lines and are issued to encourage commercial application. This index for 1985 Tech Briefs contains abstracts and four indexes: subject, personal author, originating center, and Tech Brief number. The following areas are covered: electronic components and circuits, electronic systems, physical sciences, materials, life sciences, mechanics, machinery, fabrication technology, and mathematics and information sciences.

    Cell Mol Life Sci

    Hospital-associated infections are a major concern for global public health. Infections with antibiotic-resistant pathogens can cause empiric treatment failure, and no alternative treatments exist for infections with multidrug-resistant bacteria that can overcome antibiotics of "last resort". Despite extensive sanitization protocols, the hospital environment is a potent reservoir and vector of antibiotic-resistant organisms. Pathogens can persist on hospital surfaces and plumbing for months to years, acquire new antibiotic resistance genes by horizontal gene transfer, and initiate outbreaks of hospital-associated infections by spreading to patients via healthcare workers and visitors. Advancements in next-generation sequencing of bacterial genomes and metagenomes have expanded our ability to (1) identify species and track distinct strains, (2) comprehensively profile antibiotic resistance genes, and (3) resolve the mobile elements that facilitate intra- and intercellular gene transfer. This information can, in turn, be used to characterize the population dynamics of hospital-associated microbiota, track outbreaks to their environmental reservoirs, and inform future interventions. This review provides a detailed overview of the approaches and bioinformatic tools available to study isolates and metagenomes of hospital-associated bacteria, and their multi-layered networks of transmission.
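One simple alignment-free technique used in the strain-tracking described above is k-mer Jaccard similarity: two genomes from the same outbreak strain share far more k-mers than unrelated genomes. The toy sequences below are invented, and real tools operate on whole genomes with much larger k:

```python
def kmers(seq, k=4):
    """Set of all overlapping length-k substrings of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def jaccard(a, b, k=4):
    """k-mer Jaccard similarity: |intersection| / |union| of the k-mer sets.

    A common alignment-free proxy for relatedness between genomes.
    """
    ka, kb = kmers(a, k), kmers(b, k)
    return len(ka & kb) / len(ka | kb)

# Invented sequences: a patient isolate, a candidate environmental
# reservoir isolate, and an unrelated genome fragment.
outbreak = "ACGTACGGTTCA"
env_isolate = "ACGTACGGTTGA"
unrelated = "TTTTGGGGCCCC"

print(jaccard(outbreak, env_isolate) > jaccard(outbreak, unrelated))  # True
```

Production pipelines refine this idea with sketching (e.g. MinHash) so that genome-scale comparisons stay fast, then confirm candidate transmission links with SNP-level analysis.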

    Advancement in robot programming with specific reference to graphical methods

    This research study is concerned with the derivation of advanced robot programming methods. The methods include the use of proprietary simulation modelling and design software tools for the off-line programming of industrial robots. The study has involved the generation of integration software to facilitate the co-operative operation of these software tools. The three major research themes of "ease of usage", calibration, and the integration of product design data have been followed to advance robot programming. "Ease of usage" is concerned with enhancements to the man-machine interface for robot simulation systems in terms of computer assisted solid modelling and computer assisted task generation. Robot simulation models represent an idealised situation, and any off-line robot programs generated from them may contain discrepancies which could seriously affect the programs' performance. Calibration techniques have therefore been investigated as a method of overcoming discrepancies between the simulation model and the real world. At the present time, most computer aided design systems operate as isolated islands of computer technology, whereas their product databases should be used to support decision making processes and ultimately facilitate the generation of machine programs. Thus the integration of product design data has been studied as an important step towards truly computer integrated manufacturing. The functionality of the three areas of study has been generalised and forms the basis for recommended enhancements to future robot programming systems.