
    Gesture Based Home Automation for the Physically Disabled

    Paralysis and motor impairments can greatly reduce a patient's autonomy and quality of life while presenting a major recurring cost in home healthcare. Augmented with a non-invasive wearable sensor system and home-automation equipment, a patient can regain a level of autonomy at a fraction of the cost of home nursing. A system that utilizes sensor fusion, low-power digital components, and smartphone cellular capabilities can be adapted to patients with a wide variety of needs. This thesis develops such a system as a Bluetooth-enabled glove device that communicates with a remote web server to control smart devices within the home. The power consumption of the system is treated as a major design consideration, so that the system can operate with little maintenance and thereby afford greater patient autonomy. The system is evaluated in terms of power consumption and accuracy to demonstrate its viability as a home accessibility tool.
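
    As a sketch of the pipeline this implies, the snippet below classifies a toy gesture from accelerometer samples and posts the mapped command to a home-automation server. The gesture names, thresholds, and server URL are hypothetical, standing in for the thesis's sensor-fusion classifier and web API.

        # Minimal gesture-to-command sketch; all names and thresholds are
        # hypothetical, not the thesis's actual classifier or API.
        import json
        import urllib.request

        # Hypothetical mapping from recognized gestures to smart-device commands.
        GESTURE_MAP = {
            "wrist_flick": {"device": "lamp_livingroom", "action": "toggle"},
            "fist_hold":   {"device": "thermostat", "action": "raise_setpoint"},
        }

        def classify(accel_samples, threshold=1.5):
            """Toy classifier: a brief spike in acceleration magnitude reads as
            a 'wrist_flick'; a sustained high reading as a 'fist_hold'."""
            mags = [(x**2 + y**2 + z**2) ** 0.5 for x, y, z in accel_samples]
            above = [m > threshold for m in mags]
            if sum(above) >= 0.8 * len(above):
                return "fist_hold"
            if any(above):
                return "wrist_flick"
            return None

        def send_command(gesture, url="http://example.local/api/command"):
            """POST the mapped command to the (hypothetical) home-automation server."""
            cmd = GESTURE_MAP.get(gesture)
            if cmd is None:
                return
            req = urllib.request.Request(url, data=json.dumps(cmd).encode(),
                                         headers={"Content-Type": "application/json"})
            urllib.request.urlopen(req)  # fire-and-forget for this sketch

        # Example: a burst of accelerometer (x, y, z) readings in g.
        samples = [(0.1, 0.0, 1.0), (1.8, 0.4, 1.1), (0.2, 0.1, 1.0)]
        print(classify(samples))  # -> "wrist_flick"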

    Manufacture of Individualized Dosing: Development and Control of a Dropwise Additive Manufacturing Process for Melt-Based Pharmaceutical Products

    The improvements in healthcare systems and the advent of the precision medicine initiative have created the need for more innovative manufacturing methods that can deliver individualized dosing and personalized treatments. In recent years, the US Food and Drug Administration (FDA) introduced the Quality by Design (QbD) and Process Analytical Technology (PAT) guidelines to encourage innovation and efficiency in pharmaceutical development, manufacturing, and quality assurance. As a result of emerging technologies and encouragement from the regulatory authorities, the pharmaceutical industry has begun to develop more efficient production systems with more intensive use of on-line measurement and sensing, real-time quality control, and process control tools, which offer the potential for reduced variability, increased flexibility and efficiency, and improved product quality.
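
    The real-time quality control loop this describes can be pictured with a minimal sketch: measure each deposited drop, compare it to the target dose, and trim the pump setting before the next drop. Everything here (the measure_drop_mass sensor callback, the proportional gain, the toy plant) is a hypothetical illustration, not the dissertation's actual process model or controller.

        # Proportional feedback on per-drop mass for an individualized dose.
        # The sensor callback and gain are illustrative assumptions.
        import random

        def control_drop_mass(pump_rate, target_mg, measure_drop_mass,
                              k_p=0.2, n_drops=100):
            dispensed = 0.0
            for _ in range(n_drops):
                mass = measure_drop_mass(pump_rate)  # on-line sensor reading (assumed)
                dispensed += mass
                error = target_mg - mass
                pump_rate += k_p * error             # correct the next drop
            return dispensed, pump_rate

        # Toy plant: drop mass tracks pump rate, plus a bias the loop removes.
        fake_sensor = lambda rate: rate + 0.05 + random.gauss(0, 0.01)
        total, final_rate = control_drop_mass(pump_rate=0.9, target_mg=1.0,
                                              measure_drop_mass=fake_sensor)
        print(f"dispensed {total:.2f} mg, settled pump rate {final_rate:.3f}")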

    Heterogeneous hierarchical workflow composition

    Workflow systems promise scientists an automated end-to-end path from hypothesis to discovery. However, expecting any single workflow system to deliver such a wide range of capabilities is impractical. A more practical solution is to compose the end-to-end workflow from more than one system. With this goal in mind, the integration of task-based and in situ workflows is explored, where the result is a hierarchical heterogeneous workflow composed of subworkflows, with different levels of the hierarchy using different programming, execution, and data models. Materials science use cases demonstrate the advantages of such heterogeneous hierarchical workflow composition. This work is a collaboration between Argonne National Laboratory and the Barcelona Supercomputing Center within the Joint Laboratory for Extreme-Scale Computing. This research is supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, under contract number DE-AC02-06CH11357, program manager Laura Biven; by the Spanish Government (SEV-2015-0493); by the Spanish Ministry of Science and Innovation (contract TIN2015-65316-P); and by the Generalitat de Catalunya (contract 2014-SGR-1051). Peer reviewed. Postprint (author's final draft).
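
    The structure of such a composition can be illustrated with a small sketch, assuming only Python's standard library in place of real workflow systems: an outer task-based level fans out over parameter sets, while each task internally couples a "simulation" producer to an analysis consumer in memory, in-situ style. The function names and toy physics are illustrative only.

        # Hierarchical composition in miniature: task-parallel outer level,
        # in-situ-style inner coupling with no intermediate files.
        from concurrent.futures import ProcessPoolExecutor

        def simulate(params, steps=5):
            """Inner producer: yields in-memory 'simulation' frames."""
            state = params
            for t in range(steps):
                state = state * 0.9 + 1.0   # stand-in for a physics update
                yield t, state

        def in_situ_subworkflow(params):
            """Inner subworkflow: analysis consumes frames as they are produced."""
            return max(state for _, state in simulate(params))

        def outer_workflow(param_sweep):
            """Outer task-based level: one task per subworkflow, run in parallel."""
            with ProcessPoolExecutor() as pool:
                return list(pool.map(in_situ_subworkflow, param_sweep))

        if __name__ == "__main__":
            print(outer_workflow([1.0, 2.0, 4.0]))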

    Bidirectional optimization of the melting spinning process

    This is the author's accepted manuscript (under the provisional title "Bi-directional optimization of the melting spinning process with an immune-enhanced neural network"). The final published article is available from the link below. Copyright 2014 IEEE. A bidirectional optimizing approach for the melting spinning process based on an immune-enhanced neural network is proposed. The proposed bidirectional model can not only reveal the internal nonlinear relationship between the process configuration and the quality indices of the fibers as the final product, but also provide a tool for engineers to develop new fiber products with expected quality specifications. A neural network is taken as the basis for the bidirectional model, and an immune component is introduced to enlarge the search scope of the solution field, so that the neural network is more likely to find an appropriate and reasonable solution and the prediction error can therefore be eliminated. The proposed intelligent model can also help determine what process configuration should be used in order to produce satisfactory fiber products. To make the proposed model practical for manufacturing, a software platform is developed. Simulation results show that the proposed model can eliminate the approximation error introduced by the neural-network-based optimizing model, owing to the extension of the search scope by the artificial immune mechanism. Meanwhile, the proposed model with the corresponding software can conduct optimization in two directions, namely process optimization and category development, and the corresponding results outperform those of an ordinary neural-network-based intelligent model. It is also shown that the proposed model has the potential to act as a valuable tool from which the engineers and decision makers of the spinning process could benefit. Funded by the National Natural Science Foundation of China, the Ministry of Education of China, the Shanghai Committee of Science and Technology, and the Fundamental Research Funds for the Central Universities.
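
    The two directions can be sketched as follows, assuming NumPy: a small neural network serves as the forward map from process configuration to a quality index, and an immune-style (clonal selection) search over configurations inverts it for a target quality. The tiny network, random weights, and mutation schedule are illustrative assumptions, not the paper's architecture.

        # Forward: config -> quality via a toy MLP. Inverse: clonal selection
        # over configs to hit a target quality. Illustrative only.
        import numpy as np

        rng = np.random.default_rng(0)

        # Toy "trained" forward model: 2-dim config -> 1-dim quality index.
        W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
        W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

        def forward(cfg):
            h = np.tanh(cfg @ W1 + b1)
            return (h @ W2 + b2).ravel()

        def clonal_search(target, pop=30, gens=50, clones=5):
            """Immune-inspired inverse search: clone good configs, mutate clones
            more strongly for worse-ranked parents, keep the best (elitism)."""
            population = rng.normal(size=(pop, 2))
            for _ in range(gens):
                fitness = np.abs(forward(population) - target)   # lower is better
                survivors = population[np.argsort(fitness)[: pop // 2]]
                offspring = []
                for rank, parent in enumerate(survivors):
                    scale = 0.05 * (1 + rank)        # better parents mutate less
                    offspring.append(parent + rng.normal(scale=scale,
                                                         size=(clones, 2)))
                population = np.vstack([survivors] + offspring)[:pop]
            return population[np.argmin(np.abs(forward(population) - target))]

        cfg = clonal_search(target=0.7)
        print(cfg, forward(cfg[None, :]))   # config whose predicted quality ~ 0.7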

    Efficient Experimental and Data-Centered Workflow for Microstructure-Based Fatigue Data – Towards a Data Basis for Predictive AI Models

    Background: The early fatigue mechanisms of various materials are yet to be unveiled for the (very) high-cycle fatigue (VHCF) regime. This can be ascribed to a lack of available data capturing initial fatigue damage evolution, which continues to hamper data scientists and computational modeling experts attempting to derive microstructural dependencies from small-sample-size data and incomplete feature representations. Objective: The aim of this work is to address this lack and to drive the digital transformation of materials such that future virtual component design can be rendered more reliable and more efficient. Achieving this relies on fatigue models that comprehensively capture all relevant dependencies. Methods: To this end, this work proposes a combined experimental and data post-processing workflow to establish multimodal fatigue crack initiation and propagation data sets efficiently. It revolves around fatigue testing of mesoscale specimens to increase damage detection sensitivity, data fusion through multimodal registration to address data heterogeneity, and image-based, data-driven damage localization. Results: A workflow with a high degree of automation is established that links large, distortion-corrected microstructure data with damage localization and evolution kinetics. The workflow enables cycling up to the VHCF regime in comparatively short time spans while maintaining unprecedented time resolution of damage evolution. The resulting data sets capture the interaction of damage with microstructural features and hold the potential to unravel a mechanistic understanding. Conclusions: The proposed workflow lays the foundation for future data mining and data-driven modeling of microstructural fatigue by providing statistically meaningful data sets extendable to a wide range of materials.
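
    One step of the post-processing side, image-based damage localization against a registered reference, can be sketched as follows, assuming NumPy. The coarse FFT cross-correlation registration and fixed intensity threshold are simple stand-ins for the paper's far more involved multimodal registration and detection.

        # Align a micrograph to a reference by integer-pixel cross-correlation,
        # then flag pixels whose intensity change exceeds a threshold.
        import numpy as np

        def register_translation(ref, img):
            """Coarse rigid registration: best integer shift via FFT correlation."""
            corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img))).real
            dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
            if dy > corr.shape[0] // 2:     # unwrap shifts past half the image
                dy -= corr.shape[0]
            if dx > corr.shape[1] // 2:
                dx -= corr.shape[1]
            return dy, dx

        def damage_map(ref, img, threshold=0.2):
            """Align img to ref, then mark pixels with large intensity change."""
            dy, dx = register_translation(ref, img)
            aligned = np.roll(img, (dy, dx), axis=(0, 1))
            return np.abs(aligned - ref) > threshold

        # Synthetic check: shifted copy of the reference plus a small "crack".
        ref = np.random.default_rng(1).random((64, 64))
        img = np.roll(ref, (3, -2), axis=(0, 1))
        img[40:43, 10:13] += 0.5
        print(damage_map(ref, img).sum())   # should flag the 9 perturbed pixels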

    Constraint-Aware, Scalable, and Efficient Algorithms for Multi-Chip Power Module Layout Optimization

    Moving towards an electrified world requires ultra-high-density power converters. Electric vehicles, electrified aerospace, and data centers are just a few of the wide application areas of power electronic systems where high-density power converters are essential. As a critical part of these power converters, the layout optimization of power semiconductor modules has been identified as a crucial step in achieving the maximum performance and density for wide-bandgap technologies (i.e., GaN and SiC). New packaging technologies are also being introduced to produce reliable and efficient multichip power module (MCPM) designs that push the current limits. The complexity of emerging MCPM layouts is surpassing the capability of a manual, iterative design process to produce an optimum design under agile development requirements. An electronic design automation tool called PowerSynth has been introduced, with ongoing research toward enhanced capabilities, to speed up the optimized MCPM layout design process. This dissertation presents the PowerSynth progression timeline, with the methodology updates and corresponding critical results compared against v1.1. The first released version (v1.1) of PowerSynth demonstrated the benefits of layout abstraction and reduced-order modeling techniques for rapid optimization of an MCPM module compared to the traditional, manual, iterative design approach. However, that version was limited by several key factors: the layout representation technique, layout generation algorithms, iterative design-rule checking (DRC), optimization algorithm candidates, etc. To address these limitations and enhance PowerSynth's capabilities, constraint-aware, scalable, and efficient algorithms have been developed and implemented. The PowerSynth layout engine has evolved from v1.3 to v2.0 over the last five years to incorporate the algorithm updates and generate all 2D/2.5D/3D Manhattan layout solutions. These fundamental changes in the layout generation methodology have also called for updates in the performance modeling techniques and have enabled exploring different optimization algorithms. The latest PowerSynth 2 architecture has been implemented to enable electro-thermo-mechanical and reliability optimization of 2D/2.5D/3D MCPM layouts and to set up a path toward cabinet-level optimization. The PowerSynth v2.0 computer-aided design (CAD) flow has been hardware-validated through the manufacturing and testing of an optimized novel 3D MCPM layout. The flow has shown significant speedup compared to the manual design flow, with comparable optimization results.
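
    "Constraint-aware" can be illustrated in miniature with a sketch that treats the minimum-spacing design rule as a hard feasibility check inside a random layout search. PowerSynth's actual layout representations and generation algorithms are far more sophisticated; the die sizes, spacing rule, and search loop below are illustrative assumptions only.

        # Place rectangular dies on a substrate, reject layouts violating a
        # minimum-spacing rule, keep the most compact feasible placement.
        import itertools
        import random

        DIES = [(6, 8), (6, 8), (4, 4)]   # die widths/heights in mm (illustrative)
        MIN_SPACING = 2.0                 # minimum edge-to-edge gap in mm
        SUBSTRATE = (40, 40)

        def violates(placements):
            """True if any pair of dies is closer than MIN_SPACING (or overlaps)."""
            for (x1, y1, w1, h1), (x2, y2, w2, h2) in itertools.combinations(placements, 2):
                gap_x = max(x1 - (x2 + w2), x2 - (x1 + w1), 0)
                gap_y = max(y1 - (y2 + h2), y2 - (y1 + h1), 0)
                if max(gap_x, gap_y) < MIN_SPACING:
                    return True
            return False

        def bounding_area(placements):
            xmax = max(x + w for x, y, w, h in placements)
            ymax = max(y + h for x, y, w, h in placements)
            return xmax * ymax

        def random_search(trials=20000, seed=0):
            rng, best, best_area = random.Random(seed), None, float("inf")
            for _ in range(trials):
                layout = [(rng.uniform(0, SUBSTRATE[0] - w),
                           rng.uniform(0, SUBSTRATE[1] - h), w, h)
                          for w, h in DIES]
                if violates(layout):
                    continue                     # design-rule check as hard constraint
                area = bounding_area(layout)
                if area < best_area:
                    best, best_area = layout, area
            return best, best_area

        layout, area = random_search()
        print(f"best feasible footprint: {area:.1f} mm^2")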

    Modeling, Simulation and Emulation of Intelligent Domotic Environments

    Intelligent Domotic Environments are a promising approach, based on semantic models and commercial off-the-shelf domotic technologies, to realizing new intelligent buildings, but their complexity requires innovative design methodologies and tools for ensuring correctness. Suitable simulation and emulation approaches and tools must be adopted to allow designers to experiment with their ideas and to incrementally verify designed policies in a scenario where the environment is partly emulated and partly composed of real devices. This paper describes a framework that exploits UML 2.0 state diagrams for the automatic generation of device simulators from ontology-based descriptions of domotic environments. The DogSim simulator may simulate a complete building automation system in software, or it may be integrated into the Dog Gateway, allowing partial simulation of virtual devices alongside real devices. Experiments on a real home show that the approach is feasible and can easily address both simulation and emulation requirements.
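
    The core idea of generating device simulators from declarative state-machine descriptions can be sketched as follows; a plain dictionary stands in for the ontology-backed UML 2.0 state diagrams the framework actually consumes, and the lamp model is invented for illustration.

        # A simulated device driven entirely by a declarative state diagram.
        class SimulatedDevice:
            def __init__(self, name, machine, initial):
                self.name, self.machine, self.state = name, machine, initial

            def command(self, cmd):
                """Fire a transition if the state diagram defines one, else ignore."""
                nxt = self.machine.get(self.state, {}).get(cmd)
                if nxt is not None:
                    print(f"{self.name}: {self.state} --{cmd}--> {nxt}")
                    self.state = nxt
                return self.state

        # Declarative "state diagram" for a dimmable lamp (illustrative).
        LAMP = {
            "off":    {"on": "full"},
            "full":   {"off": "off", "dim": "dimmed"},
            "dimmed": {"off": "off", "on": "full"},
        }

        lamp = SimulatedDevice("lamp.kitchen", LAMP, initial="off")
        lamp.command("on")    # off --on--> full
        lamp.command("dim")   # full --dim--> dimmed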

    Methods Included: Standardizing Computational Reuse and Portability with the Common Workflow Language

    A widely used standard for portable, multilingual data analysis pipelines would bring considerable benefits to scholarly publication reuse, research/industry collaboration, regulatory cost control, and the environment. Published research that used multiple computer languages for its analysis pipelines would include a complete, reusable description of that analysis, runnable on a diverse set of computing environments. Researchers would be able to collaborate on and reuse these pipelines more easily, adding or exchanging components regardless of the programming language used; collaborations with and within industry would be easier; and approval of new medical interventions that rely on such pipelines would be faster. Time would be saved and environmental impact reduced, as these descriptions contain enough information for advanced optimization without user intervention. Workflows are widely used in data analysis pipelines, enabling innovation and decision-making for modern society. In many domains the analysis components are numerous and written in multiple different computer languages by third parties. However, without a standard for reusable and portable multilingual workflows, reusing published multilingual workflows, collaborating on open problems, and optimizing their execution are severely hampered. Prior to the start of the CWL project, there was no standard for describing multilingual analysis pipelines in a portable and reusable manner. Even today, although hundreds of single-vendor and other single-source systems run workflows, none is a general, community-driven, consensus-built standard.
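
    For the concrete flavor of the standard, here is the CWL user guide's canonical "echo" tool description, wrapped in a short Python driver that runs it with the cwltool reference runner (assumed installed, e.g. via pip install cwltool). Any conformant engine could execute the same file unchanged, which is the portability point.

        # Write a minimal CWL CommandLineTool and run it with the reference runner.
        import pathlib
        import subprocess

        CWL_TOOL = """\
        cwlVersion: v1.2
        class: CommandLineTool
        baseCommand: echo
        inputs:
          message:
            type: string
            inputBinding:
              position: 1
        outputs: []
        """

        pathlib.Path("echo.cwl").write_text(CWL_TOOL)
        # Requires cwltool on PATH; any CWL-conformant engine could run the same file.
        subprocess.run(["cwltool", "echo.cwl", "--message", "Hello, CWL!"], check=True)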