44 research outputs found

    An Internet of Things Improving Deep Neural Network Based Particle Swarm Optimization Computation Prediction Approach for Healthcare System

    Internet of Things (IoT) systems tend to generate large volumes of sensed data that must be processed and responded to while conserving energy. For IoT devices, the most important challenge when sending data to the cloud is the level of energy consumption. This paper introduces an energy-efficient data collection and abstraction method for IoT-based medical data exchange. Initially, the required data is collected from the person by the IoT devices. Adaptive Optimized Sensor-Lempel-Ziv-Welch (AOSLZW) compression is then applied to the sensed data prior to transmission, reducing the amount of data sent from the IoT devices to the cloud server. Finally, a deep neural network (DNN) tuned by Particle Swarm Optimization (PSO), known as the DNN-PSO algorithm, is used as a predictive model that makes decisions from the sensed data. The performance of the presented AOSLZW-DNN-PSO method is studied under distinct simulation scenarios, and the simulation results indicate that the method is effective in several respects.
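    The abstract does not specify AOSLZW's internals, but its Lempel-Ziv-Welch core can be sketched: a dictionary coder learns recurring byte strings in the sensed data so that fewer symbols are transmitted to the cloud. A minimal plain-LZW sketch in Python; the vital-signs payload is a made-up example, and the adaptive sensor-side optimizations are not modeled:

```python
# Minimal LZW compressor sketch: dictionary-based coding applied to sensed
# data before it leaves the device, so fewer symbols reach the cloud.
# Illustrative only; this is not the paper's AOSLZW algorithm.

def lzw_compress(data: bytes) -> list[int]:
    """Return a list of dictionary codes for the input byte string."""
    dictionary = {bytes([i]): i for i in range(256)}  # all single bytes
    next_code = 256
    current = b""
    codes = []
    for byte in data:
        candidate = current + bytes([byte])
        if candidate in dictionary:
            current = candidate                 # extend the current match
        else:
            codes.append(dictionary[current])   # emit the longest match
            dictionary[candidate] = next_code   # learn the new string
            next_code += 1
            current = bytes([byte])
    if current:
        codes.append(dictionary[current])
    return codes

# Hypothetical usage on a repetitive sensor payload:
reading = b"heart_rate=72;heart_rate=72;heart_rate=73"
codes = lzw_compress(reading)
print(len(reading), "bytes reduced to", len(codes), "codes")
```

    A matching decompressor rebuilds the same dictionary incrementally on the receiving side, so no code table needs to be transmitted alongside the data.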

    Huffman-based Code Compression Techniques for Embedded Systems


    From LCF to Isabelle/HOL

    Interactive theorem provers have developed dramatically over the past four decades, from primitive beginnings to today's powerful systems. Here, we focus on Isabelle/HOL and its distinctive strengths. They include automatic proof search, borrowing techniques from the world of first order theorem proving, but also the automatic search for counterexamples. They include a highly readable structured language of proofs and a unique interactive development environment for editing live proof documents. Everything rests on the foundation conceived by Robin Milner for Edinburgh LCF: a proof kernel, using abstract types to ensure soundness and eliminate the need to store proofs. Compared with the research prototypes of the 1970s, Isabelle is a practical and versatile tool. It is used by system designers, mathematicians and many others
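    The proof-kernel idea is small enough to sketch. In an LCF-style prover, the theorem type is abstract and its only constructors are the trusted inference rules, so every theorem value is sound by construction and no proof object needs to be stored. A loose Python illustration (real kernels rely on ML-family abstract types, which Python can only approximate; the two rules shown are toy stand-ins):

```python
# LCF-style kernel sketch: Theorem values can only be produced by the
# trusted inference rules below, so nothing about a proof needs storing --
# only the conclusion. Python cannot truly hide _key; this illustrates the
# discipline that ML's abstract types enforce for real.

class Theorem:
    """Abstract theorem type; do not construct directly."""
    _key = object()  # capability token meant to be private to the kernel

    def __init__(self, prop: str, key):
        if key is not Theorem._key:
            raise TypeError("theorems may only be built by inference rules")
        self.prop = prop

def axiom(p: str) -> Theorem:
    """Toy stand-in for real axiom schemas: assert |- p directly."""
    return Theorem(p, Theorem._key)

def modus_ponens(ab: Theorem, a: Theorem) -> Theorem:
    """From |- a -> b and |- a, conclude |- b."""
    left, sep, right = ab.prop.partition(" -> ")
    if sep == "" or left != a.prop:
        raise ValueError("modus ponens does not apply")
    return Theorem(right, Theorem._key)

# Every Theorem in the program was produced by the rules above:
b = modus_ponens(axiom("p -> q"), axiom("p"))
print(b.prop)  # q
```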

    Exclusive-or preprocessing and dictionary coding of continuous-tone images.

    The field of lossless image compression studies the various ways to represent image data in the most compact and efficient manner possible that also allows the image to be reproduced without any loss. One of the most efficient strategies used in lossless compression is to introduce entropy reduction through decorrelation. This study focuses on using the exclusive-or logic operator in a decorrelation filter as the preprocessing phase of lossless image compression of continuous-tone images. The exclusive-or logic operator is simply and reversibly applied to continuous-tone images for the purpose of extracting differences between neighboring pixels. Implementation of the exclusive-or operator also does not introduce data expansion. Traditional as well as innovative prediction methods are included for the creation of inputs for the exclusive-or logic based decorrelation filter. The results of the filter are then encoded by a variation of the Lempel-Ziv-Welch dictionary coder. Dictionary coding is selected for the coding phase of the algorithm because it does not require the storage of code tables or probabilities and because it is lower in complexity than other popular options such as Huffman or Arithmetic coding. The first modification of the Lempel-Ziv-Welch dictionary coder is that image data can be read in a sequence that is linear, 2-dimensional, or an adaptive combination of both. The second modification of the dictionary coder is that the coder can instead include multiple, dynamically chosen dictionaries. Experiments indicate that the exclusive-or operator based decorrelation filter when combined with a modified Lempel-Ziv-Welch dictionary coder provides compression comparable to algorithms that represent the current standard in lossless compression. The proposed algorithm provides compression performance that is below the Context-Based, Adaptive, Lossless Image Compression (CALIC) algorithm by 23%, below the Low Complexity Lossless Compression for Images (LOCO-I) algorithm by 19%, and below the Portable Network Graphics implementation of the Deflate algorithm by 7%, but above the Zip implementation of the Deflate algorithm by 24%. The proposed algorithm uses the exclusive-or operator in the modeling phase and uses modified Lempel-Ziv-Welch dictionary coding in the coding phase to form a low complexity, reversible, and dynamic method of lossless image compression
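    The decorrelation filter described above is simple to illustrate: each pixel is XORed with a prediction, here its left neighbor, the simplest of the prediction methods considered. The output is the same size as the input (no data expansion), and the filter is exactly reversible because XOR is its own inverse. A minimal sketch for one 8-bit grayscale row:

```python
# XOR decorrelation filter sketch: replace each pixel with pixel XOR
# prediction (the left neighbor). Smooth regions collapse to near-zero
# values that a dictionary coder compresses well. Same-size output and
# exactly reversible, since XOR is self-inverse.

def xor_filter(row: list[int]) -> list[int]:
    out = [row[0]]                       # first pixel has no left neighbor
    for i in range(1, len(row)):
        out.append(row[i] ^ row[i - 1])  # XOR difference, stays in 0..255
    return out

def xor_unfilter(filtered: list[int]) -> list[int]:
    row = [filtered[0]]
    for i in range(1, len(filtered)):
        row.append(filtered[i] ^ row[i - 1])  # invert against decoded pixel
    return row

row = [120, 121, 121, 124, 124, 124]
assert xor_unfilter(xor_filter(row)) == row
print(xor_filter(row))  # [120, 1, 0, 5, 0, 0] -- mostly small values
```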

    Improving Structural Features Prediction in Protein Structure Modeling

    Proteins play a vital role in the biological activities of all living species. In nature, a protein folds into a specific and energetically favorable three-dimensional structure which is critical to its biological function. Hence, there has been a great effort by researchers in both experimentally determining and computationally predicting the structures of proteins. The current experimental methods of protein structure determination are complicated, time-consuming, and expensive. On the other hand, the sequencing of proteins is fast, simple, and relatively less expensive. Thus, the gap between the number of known sequences and the number of determined structures is growing, and is expected to keep expanding. In contrast, computational approaches that can generate three-dimensional protein models with high resolution are attractive, due to their broad economic and scientific impacts. Accurately predicting protein structural features, such as secondary structures, disulfide bonds, and solvent accessibility, is a critical intermediate stepping stone toward ultimately obtaining correct three-dimensional models. In this dissertation, we report a set of approaches for improving the accuracy of structural features prediction in protein structure modeling. First, we derive a statistical model to generate context-based scores characterizing the favorability of segments of residues in adopting certain structural features. Then, together with other information such as evolutionary and sequence information, we incorporate the context-based scores into machine learning approaches to predict secondary structures, disulfide bonds, and solvent accessibility. Furthermore, we take advantage of emerging high-performance computing architectures such as GPUs to accelerate the calculation of pairwise and high-order interactions in the context-based scores. Finally, we make these prediction methods available to the public via web services and software packages.
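    As one concrete illustration of the kind of machine-learning setup described (not the dissertation's actual models or features), secondary structure prediction is commonly posed as per-residue classification over a sliding window of sequence features, and a context-based score would enter as additional feature columns. A minimal sketch using scikit-learn; the sequence, labels, and the `context_score` helper are hypothetical:

```python
# Sliding-window per-residue classification sketch for secondary structure
# (H = helix, E = strand, C = coil). Illustrative only; the dissertation's
# actual features, scores, and models are far richer than this.
from sklearn.ensemble import RandomForestClassifier

AMINO = "ACDEFGHIKLMNPQRSTVWY"
SS = {"H": 0, "E": 1, "C": 2}

def window_features(seq: str, i: int, w: int = 7) -> list[float]:
    """One-hot residues in a window centered at i, padded at the ends."""
    feats = []
    for j in range(i - w // 2, i + w // 2 + 1):
        one_hot = [0.0] * len(AMINO)
        if 0 <= j < len(seq):
            one_hot[AMINO.index(seq[j])] = 1.0
        feats.extend(one_hot)
    # A context-based score would be appended here as extra columns, e.g.
    # feats.append(context_score(seq, i))   # hypothetical helper
    return feats

# Toy training pair (sequence, per-residue secondary structure labels):
seq, labels = "MKVLAAGIVALLA", "CHHHHHEEEECCC"
X = [window_features(seq, i) for i in range(len(seq))]
y = [SS[c] for c in labels]
model = RandomForestClassifier(n_estimators=50).fit(X, y)
print(model.predict([window_features(seq, 5)]))  # class index for residue 5
```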

    Progress Report: 1991-1994


    Alternative Method for Parallel M-way Tree Search on Distributed Memory Architectures

    Computer Science

    Quantum Transpiler Optimization: On the Development, Implementation, and Use of a Quantum Research Testbed

    Quantum computing research is at the cusp of a paradigm shift. As the complexity of quantum systems increases, so does the complexity of research procedures for creating and testing layers of the quantum software stack. However, the tools used to perform these tasks have not experienced the increase in capability required to effectively handle the development burdens involved. This case is made particularly clear in the context of IBM QX Transpiler optimization algorithms and functions. IBM QX systems use the Qiskit library to create, transform, and execute quantum circuits. As coherence times and hardware qubit counts increase and qubit topologies become more complex, so does orchestration of qubit mapping and qubit state movement across these topologies. The transpiler framework used to create and test improved algorithms has not kept pace. A testbed is proposed to provide abstractions to create and test transpiler routines. The development process is analyzed and implemented, from design principles through requirements analysis and verification testing. Additionally, limitations of existing transpiler algorithms are identified and initial results are provided that suggest more effective algorithms for qubit mapping and state movement
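    The qubit-mapping burden described above is visible directly in Qiskit's public API: when a two-qubit gate does not match the hardware coupling map, the transpiler must route qubit state across the topology, typically by inserting SWAPs. A small sketch on a linear three-qubit topology (the exact inserted gates and counts vary with Qiskit version and optimization level):

```python
# Qubit mapping demonstration: a CX between qubits 0 and 2 is illegal on a
# linear 0-1-2 topology, so the transpiler must route it (e.g. via SWAPs).
from qiskit import QuantumCircuit, transpile
from qiskit.transpiler import CouplingMap

circuit = QuantumCircuit(3)
circuit.h(0)
circuit.cx(0, 2)          # not directly allowed on a 0-1-2 line

# Symmetric linear coupling: hardware only couples neighboring qubits.
linear = CouplingMap([[0, 1], [1, 0], [1, 2], [2, 1]])
for level in (0, 1, 2, 3):
    mapped = transpile(circuit, coupling_map=linear, optimization_level=level)
    print(level, mapped.count_ops())    # routing overhead varies by level
```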

    Investigation into Formalization of Domain-Oriented Parallel Software Development

    This research investigates the conceptual design of a semi-automated platform for parallel software development. The proposed semi-automated environment applies transformational techniques and domain-specific knowledge to a parallel software development process. Domain-specific and software design knowledge interact within the transformational development process in the creation of a software application. The underlying parallel specification language requires a set of parallel composition operators in order to capture an application’s concurrent properties. A set of operators is proposed that consists of parallel composition, parallel enumeration, nondeterministic choice, and sequential composition; specific communication and synchronization variable types are also proposed. A semi-automated environment based on this set of composition operators is considered and presented.
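    The abstract does not show the specification language itself, but the proposed operator set can be made concrete with a small combinator encoding in which each operator builds a process tree. A Python sketch in which threads stand in for parallel composition and enumeration, and random selection stands in for nondeterministic choice; all names are hypothetical:

```python
# Combinator sketch of the four operators: Seq, Par, ParEnum, and Choice
# build a process tree whose run() gives one possible execution.
import random
import threading

class Act:
    """A primitive action (leaf of the process tree)."""
    def __init__(self, name): self.name = name
    def run(self): print(self.name)

class Seq:
    """Sequential composition: run p, then q."""
    def __init__(self, p, q): self.p, self.q = p, q
    def run(self):
        self.p.run()
        self.q.run()

class Par:
    """Parallel composition: run p and q concurrently, join both."""
    def __init__(self, p, q): self.p, self.q = p, q
    def run(self):
        t = threading.Thread(target=self.p.run)
        t.start()
        self.q.run()
        t.join()

class ParEnum:
    """Parallel enumeration: run an indexed family of processes concurrently."""
    def __init__(self, procs): self.procs = procs
    def run(self):
        threads = [threading.Thread(target=p.run) for p in self.procs]
        for t in threads: t.start()
        for t in threads: t.join()

class Choice:
    """Nondeterministic choice: run exactly one of p or q."""
    def __init__(self, p, q): self.p, self.q = p, q
    def run(self): random.choice((self.p, self.q)).run()

# (a ; (b || c)) followed by a nondeterministic final step:
Seq(Act("a"), Seq(Par(Act("b"), Act("c")), Choice(Act("d"), Act("e")))).run()
```

    Communication and synchronization variable types would sit alongside these operators as typed channels shared between the branches of a `Par`; they are omitted here since the abstract does not describe them.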