
    Meso-scale FDM material layout design strategies under manufacturability constraints and fracture conditions

    In the manufacturability-driven design (MDD) perspective, manufacturability of the product or system is the most important of the design requirements. In addition to being able to ensure that complex designs (e.g., topology optimization) are manufacturable with a given process or process family, MDD also helps mechanical designers to take advantage of unique process-material effects generated during manufacturing. One of the most recognizable examples of this comes from the scanning-type family of additive manufacturing (AM) processes; the most notable and familiar member of this family is the fused deposition modeling (FDM) or fused filament fabrication (FFF) process. This process works by selectively depositing uniform, approximately isotropic beads or elements of molten thermoplastic material (typically structural engineering plastics) in a series of pre-specified traces to build each layer of the part. There are many interesting 2-D and 3-D mechanical design problems that can be explored by designing the layout of these elements. The resulting structured, hierarchical material (which is both manufacturable and customized layer-by-layer within the limits of the process and material) can be defined as a manufacturing process-driven structured material (MPDSM). This dissertation explores several practical methods for designing these element layouts for 2-D and 3-D meso-scale mechanical problems, focusing ultimately on design-for-fracture. Three different fracture conditions are explored: (1) cases where a crack must be prevented or stopped, (2) cases where the crack must be encouraged or accelerated, and (3) cases where cracks must grow in a simple pre-determined pattern. 
To support the design of MPDSMs under fracture conditions, several new design tools were developed and refined: a mapping method for the FDM manufacturability constraints; three major literature reviews; the collection, organization, and analysis of several large (qualitative and quantitative) multi-scale datasets on the fracture behavior of FDM-processed materials; new experimental equipment; and a fast and simple g-code generator based on commercially-available software. The resulting design method and rules were experimentally validated using a series of case studies (involving both design and physical testing of the designs) at the end of the dissertation. Finally, a simple design guide was developed from the results of this project for practicing engineers who are experts in neither advanced solid mechanics nor process-tailored materials.

    Sensitivity analysis for ReaxFF reparameterization using the Hilbert-Schmidt independence criterion

    We apply a global sensitivity method, the Hilbert-Schmidt independence criterion (HSIC), to the reparameterization of a Zn/S/H ReaxFF force field to identify the most appropriate parameters for reparameterization. Parameter selection remains a challenge in this context, as high-dimensional optimizations are prone to overfitting and take a long time, while selecting too few parameters leads to poor-quality force fields. We show that the HSIC correctly and quickly identifies the most sensitive parameters, and that optimizations done using a small number of sensitive parameters outperform those done using a higher-dimensional reasonable-user parameter selection. Optimizations using only sensitive parameters: 1) converge faster, 2) have loss values comparable to those found with the naive selection, 3) have similar accuracy in validation tests, and 4) do not suffer from problems of overfitting. We demonstrate that an HSIC global sensitivity analysis is a cheap optimization pre-processing step with both qualitative and quantitative benefits, which can substantially simplify and speed up ReaxFF reparameterizations.
Comment: author accepted manuscript
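The HSIC-based parameter screening described above can be sketched in a few lines; the RBF kernel, the median bandwidth heuristic, and the toy loss surface below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def rbf_gram(x, sigma=None):
    """Doubly-centered RBF Gram matrix for a 1-D sample."""
    d2 = (x[:, None] - x[None, :]) ** 2
    if sigma is None:
        # median heuristic for the kernel bandwidth (a common default)
        sigma = np.sqrt(0.5 * np.median(d2[d2 > 0]))
    K = np.exp(-d2 / (2 * sigma**2))
    n = len(x)
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return H @ K @ H

def hsic(x, y):
    """Biased empirical HSIC estimate between two 1-D samples."""
    n = len(x)
    return np.trace(rbf_gram(x) @ rbf_gram(y)) / (n - 1) ** 2

# Rank hypothetical force-field parameters by sensitivity to a toy loss:
rng = np.random.default_rng(0)
n = 200
params = rng.uniform(-1, 1, size=(n, 3))             # 3 candidate parameters
loss = params[:, 0] ** 2 + 0.1 * rng.normal(size=n)  # only parameter 0 matters
scores = [hsic(params[:, j], loss) for j in range(3)]
```

Parameters with the largest HSIC scores would then be kept for the low-dimensional reparameterization; note that HSIC picks up the nonmonotonic quadratic dependence that a linear correlation coefficient would miss.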

    A Design Science Research Approach to Smart and Collaborative Urban Supply Networks

    Urban supply networks are facing increasing demands and challenges and thus constitute a relevant field for research and practical development. Supply chain management holds enormous potential and relevance for society and everyday life, as the flows of goods and information are important economic functions. Being a heterogeneous field, the literature base of supply chain management research is difficult to manage and navigate. Disruptive digital technologies and the implementation of cross-network information analysis and sharing drive the need for new organisational and technological approaches. Practical issues are manifold and include megatrends such as digital transformation, urbanisation, and environmental awareness. A promising approach to solving these problems is the realisation of smart and collaborative supply networks. The growth of artificial intelligence applications in recent years has led to a wide range of applications in a variety of domains. However, the potential of artificial intelligence utilisation in supply chain management has not yet been fully exploited. Similarly, value creation increasingly takes place in networked value creation cycles that have become continuously more collaborative, complex, and dynamic as interactions in business processes involving information technologies have become more intense. Following a design science research approach, this cumulative thesis comprises the development and discussion of four artefacts for the analysis and advancement of smart and collaborative urban supply networks. This thesis aims to highlight the potential of artificial intelligence-based supply networks, to advance data-driven inter-organisational collaboration, and to improve last-mile supply network sustainability.
Based on thorough machine learning and systematic literature reviews, reference and system dynamics modelling, simulation, and qualitative empirical research, the artefacts provide a valuable contribution to research and practice.

    Corporate Social Responsibility: the institutionalization of ESG

    Understanding the impact of Corporate Social Responsibility (CSR) on firm performance as it relates to industries reliant on technological innovation is a complex and perpetually evolving challenge. To thoroughly investigate this topic, this dissertation adopts an economics-based structure to address three primary hypotheses. This structure allows each hypothesis to stand essentially as its own empirical paper, unified by an overall analysis of the nature of the impact that ESG has on firm performance. The first hypothesis explores how the evolution of CSR into its modern quantified iteration, ESG, has led to the institutionalization and standardization of the CSR concept. The second hypothesis fills gaps in the existing literature testing the relationship between firm performance and ESG by finding that the relationship is significantly positive in long-term, strategic metrics (ROA and ROIC) and that there is no correlation in short-term metrics (ROE and ROS). Finally, the third hypothesis states that if a firm has a long-term strategic ESG plan, as proxied by the publication of CSR reports, then it is more resilient to damage from controversies. This is supported by the finding that pro-ESG firms consistently fared better than their counterparts in both financial and ESG performance, even in the event of a controversy. However, firms with consistent reporting are also held to a higher standard than their non-reporting peers, suggesting a higher-risk, higher-reward dynamic. These findings support the theory of good management, in that long-term strategic planning is both immediately economically beneficial and serves as a means of risk management and social impact mitigation. Overall, this work contributes to the literature by filling gaps in understanding the nature of the impact that ESG has on firm performance, particularly from a management perspective.

    Construction of radon chamber to expose active and passive detectors

    In this research and development, we present the design and manufacture of a radon chamber (the PUCP radon chamber), a necessary tool for the calibration of passive detectors, verification of the operation of active radon monitors, and calibration of the diffusion chambers used in radon measurements in air and soil. The first chapter introduces radon gas and the national radon concentration reference levels given by various organizations. Parameters that influence the calibration factor of the LR 115 type 2 film detector are studied, such as the energy window, critical angle, and effective volumes. These are strongly related to the etching processes and track counting, all treated from a semi-empirical approach in the second chapter. The third chapter presents a review of radon chambers reported in the literature, organized by their size, mode of operation, and the radon source they use. The design and construction of the radon chamber are then presented; the use of uranium ore (autunite) as the chamber source is also discussed. In the fourth chapter, the radon chamber is characterized through its leakage constant (lambda), the homogeneity of the radon concentration, its operation regimes and modes, and the saturation concentrations that can be reached. The procedures and methodology used in this work are contained in the fifth chapter, where some uses and applications of the PUCP radon chamber are also presented: the calibration of cylindrical metallic diffusion chambers based on CR-39 chip detectors, taking the track-overlap effect into account; the determination of transmission factors of gaps and pinholes for the same diffusion chambers; and the permeability of a glass fiber filter to 222Rn, obtained after reaching equilibrium through the Ramachandran model and taking into account a partition function as the ratio of track densities. The results of this research have been published in indexed journals.
Finally, the conclusions and recommendations that reflect the fulfillment of the aims of this thesis are presented.
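The leakage and saturation characterization mentioned above follows the standard radon-chamber activity balance; the sketch below uses generic symbols (source rate Q, chamber volume V), not the thesis's own notation.

```latex
% Radon concentration balance in a chamber with source and leakage:
\frac{dC}{dt} = \frac{Q}{V} - \left(\lambda_{\mathrm{Rn}} + \lambda_{\mathrm{leak}}\right) C
% which yields saturation growth with an effective decay constant:
C(t) = C_{\mathrm{sat}}\left(1 - e^{-\lambda_{\mathrm{eff}} t}\right),
\qquad
C_{\mathrm{sat}} = \frac{Q/V}{\lambda_{\mathrm{eff}}},
\qquad
\lambda_{\mathrm{eff}} = \lambda_{\mathrm{Rn}} + \lambda_{\mathrm{leak}}
```

Fitting the measured growth curve to this form gives both the saturation concentration and, via the excess of \(\lambda_{\mathrm{eff}}\) over the radon decay constant, the leakage lambda.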

    Statistical phase estimation and error mitigation on a superconducting quantum processor

    Quantum phase estimation (QPE) is a key quantum algorithm, which has been widely studied as a method to perform chemistry and solid-state calculations on future fault-tolerant quantum computers. Recently, several authors have proposed statistical alternatives to QPE that have benefits on early fault-tolerant devices, including shorter circuits and better suitability for error mitigation techniques. However, practical implementations of the algorithm on real quantum processors are lacking. In this paper we practically implement statistical phase estimation on Rigetti's superconducting processors. We specifically use the method of Lin and Tong [PRX Quantum 3, 010318 (2022)] with the improved Fourier approximation of Wan et al. [PRL 129, 030503 (2022)], applying a variational compilation technique to reduce circuit depth. We then incorporate error mitigation strategies including zero-noise extrapolation and readout error mitigation with bit-flip averaging. We propose a simple method to estimate energies from the statistical phase estimation data, which is found to improve the accuracy of final energy estimates by one to two orders of magnitude with respect to prior theoretical bounds, reducing the cost of performing accurate phase estimation calculations. We apply these methods to chemistry problems for active spaces of up to 4 electrons in 4 orbitals, including the application of a quantum embedding method, and use them to correctly estimate energies within chemical precision. Our work demonstrates that statistical phase estimation has a natural resilience to noise, particularly after mitigating coherent errors, and can achieve far higher accuracy than suggested by previous analysis, demonstrating its potential as a valuable quantum algorithm for early fault-tolerant devices.
Comment: 24 pages, 13 figures
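Zero-noise extrapolation, one of the mitigation strategies mentioned above, can be illustrated with a toy model: measure an expectation value at several artificially amplified noise levels, then extrapolate the fit back to zero noise. The exponential decay model and the noise-scale values below are assumptions for the sketch, not Rigetti data.

```python
import numpy as np

def expectation_at_noise(scale, exact=-1.0, rate=0.08):
    # toy depolarizing-style decay of a measured expectation value
    # with increasing noise amplification factor `scale`
    return exact * np.exp(-rate * scale)

scales = np.array([1.0, 2.0, 3.0])   # noise amplification factors
measured = expectation_at_noise(scales)

# fit measured values against the noise scale and evaluate the fit at scale = 0
coeffs = np.polyfit(scales, measured, deg=2)
zne_estimate = np.polyval(coeffs, 0.0)
```

Even this simple polynomial extrapolation recovers the noiseless value far more accurately than the raw (scale = 1) measurement, which is the essential mechanism behind ZNE.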

    Establishing a Data Science for Good Ecosystem: The Case of ATLytiCS

    Data science for social good (DSSG) initiatives have been championed as worthy mechanisms for transformative change and social impact. However, researchers have not fully explored the systems by which actors coordinate, access data, determine goals, and communicate opportunities for change. We contribute to the information systems ecosystems and nonprofit volunteering literatures by exploring the ways in which data science volunteers leverage their talents to address social impact goals. We use Atlanta Analytics for Community Service (ATLytiCS), an organization that aids nonprofits and government agencies, as a case study. ATLytiCS represents a rare example of a nonprofit organization (NPO) managed and run by highly skilled volunteer data scientists within a regionally networked system of actors and institutions. Based on findings from this case, we build a DSSG ecosystem framework to describe and distinguish DSSG ecosystems from related data and entrepreneurial ecosystems.

    Kerker-Type Positional Disorder Immune Metasurfaces

    Metasurfaces that can work without a rigorously periodic arrangement of meta-atoms are highly desirable for practical micro- and nano-optical devices. In this work, we propose two kinds of Kerker-type metasurfaces possessing positional disorder immunity. The metasurfaces are composed of two different core-shell cylinders satisfying the first and second Kerker conditions, respectively. Even with large positional disorder perturbation of the meta-atoms, the metasurfaces still maintain the same excellent performance as their periodic counterparts, such as total transmission and magnetic mirror responses. This disorder immunity arises because the unidirectional forward and backward scattering of a single core-shell cylinder leads to very weak lateral coupling between neighboring cylinders, which barely affects the multiple scattering in the forward or backward direction. In contrast, the dominant response of a disordered non-Kerker-type metasurface decreases significantly. Our findings provide a new idea for designing robust metasurfaces and extend the scope of metasurface applications in sensing and communication under complex practical circumstances.
Comment: 18 pages, 9 figures
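For reference, the two Kerker conditions invoked above can be written at dipole order in terms of the electric and magnetic Mie coefficients; this is the standard textbook form for a single scatterer, not the paper's exact core-shell cylinder coefficients.

```latex
% First Kerker condition: in-phase electric and magnetic dipoles
% interfere destructively in the backward direction:
a_1 = b_1 \quad \Longrightarrow \quad \text{backscattering suppressed}
% Second Kerker condition: out-of-phase dipoles minimize
% the forward-scattered field (exact zero is forbidden by the
% optical theorem for a lossless scatterer):
a_1 = -b_1 \quad \Longrightarrow \quad \text{forward scattering minimized}
```

The first condition underlies the total-transmission metasurface, and the second underlies the magnetic-mirror response.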

    Inferring networks from time series: a neural approach

    Network structures underlie the dynamics of many complex phenomena, from gene regulation and foodwebs to power grids and social media. Yet, as they often cannot be observed directly, their connectivities must be inferred from observations of their emergent dynamics. In this work we present a powerful and fast computational method to infer large network adjacency matrices from time series data using a neural network. Using a neural network provides uncertainty quantification on the prediction in a manner that reflects both the non-convexity of the inference problem and the noise on the data. This is useful since network inference problems are typically underdetermined, and uncertainty quantification is a feature that has hitherto been lacking from network inference methods. We demonstrate our method's capabilities by inferring line failure locations in the British power grid from observations of its response to a power cut. Since the problem is underdetermined, many classical statistical tools (e.g. regression) are not straightforwardly applicable. Our method, in contrast, provides probability densities on each edge, allowing the use of hypothesis testing to make meaningful probabilistic statements about the location of the power cut. We also demonstrate our method's ability to learn an entire cost matrix for a non-linear model from a dataset of economic activity in Greater London. Our method outperforms OLS regression on noisy data in terms of both speed and prediction accuracy, and scales as N^2 where OLS scales as N^3. Since our technique is not specifically engineered for network inference, it represents a general parameter estimation scheme that is applicable to any parameter dimension.
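As a minimal illustration of the inverse problem being solved, an adjacency matrix can be recovered from time series generated by assumed linear dynamics. This least-squares toy is not the paper's neural-network method (which additionally yields per-edge probability densities); it only shows the forward model and its inversion in the well-determined regime.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 5, 1000

# sparse ground-truth adjacency matrix with small random weights
A_true = (rng.random((N, N)) < 0.3) * rng.uniform(-0.4, 0.4, (N, N))
# rescale so the dynamics are stable (spectral radius <= 0.8)
rho = np.max(np.abs(np.linalg.eigvals(A_true)))
A_true *= 0.8 / max(rho, 0.8)

# simulate noise-driven linear dynamics x[t+1] = A_true @ x[t] + noise
x = np.zeros((T, N))
x[0] = rng.normal(size=N)
for t in range(T - 1):
    x[t + 1] = A_true @ x[t] + 0.1 * rng.normal(size=N)

# recover the adjacency matrix by least squares: x[1:] ~ x[:-1] @ A.T
A_hat_T, *_ = np.linalg.lstsq(x[:-1], x[1:], rcond=None)
A_hat = A_hat_T.T
max_err = np.max(np.abs(A_hat - A_true))
```

When observations are scarce (T small relative to N^2) this system becomes underdetermined and the point estimate is no longer trustworthy, which is precisely where the per-edge uncertainty quantification described in the abstract becomes valuable.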

    A direct-laser-written heart-on-a-chip platform for generation and stimulation of engineered heart tissues

    In this dissertation, we first develop a versatile microfluidic heart-on-a-chip model to generate 3D-engineered human cardiac microtissues in highly-controlled microenvironments. The platform, which is enabled by direct laser writing (DLW), has tailor-made attachment sites for cardiac microtissues and comes with integrated strain actuators and force sensors. Application of external pressure waves to the platform results in controllable time-dependent forces on the microtissues. Conversely, oscillatory forces generated by the microtissues are transduced into measurable electrical outputs. After characterization of the responsivity of the transducers, we demonstrate the capabilities of this platform by studying the response of cardiac microtissues to prescribed mechanical loading and pacing. Next, we tune the geometry and mechanical properties of the platform to enable parametric studies on engineered heart tissues. We explore two geometries: a rectangular seeding well with two attachment sites, and a stadium-like seeding well with six attachment sites. The attachment sites are placed symmetrically in the longitudinal direction. The former geometry promotes uniaxial contraction of the tissues; the latter additionally induces diagonal fiber alignment. We systematically increase the length for both configurations and observe a positive correlation between fiber alignment at the center of the microtissues and tissue length. However, progressive thinning and “necking” is also observed, leading to the failure of longer tissues over time. We use the DLW technique to improve the platform, softening the mechanical environment and optimizing the attachment sites for generation of stable microtissues at each length and geometry. Furthermore, electrical pacing is incorporated into the platform to evaluate the functional dynamics of stable microtissues over the entire range of physiological heart rates. 
Here, we typically observe a decrease in active force and contraction duration as a function of frequency. Lastly, we use a more traditional μTUG platform to demonstrate the effects of subthreshold electrical pacing on the rhythm of the spontaneously contracting cardiac microtissues. Here, we observe periodic M:N patterns, in which there are M cycles of stimulation for every N tissue contractions. Using the electric field amplitude, pacing frequency, and homeostatic beating frequencies of the tissues, we provide an empirical map for predicting the emergence of these rhythms.