17,072 research outputs found

    Meso-scale FDM material layout design strategies under manufacturability constraints and fracture conditions

    Get PDF
    In the manufacturability-driven design (MDD) perspective, manufacturability of the product or system is the most important of the design requirements. In addition to being able to ensure that complex designs (e.g., topology optimization) are manufacturable with a given process or process family, MDD also helps mechanical designers to take advantage of unique process-material effects generated during manufacturing. One of the most recognizable examples of this comes from the scanning-type family of additive manufacturing (AM) processes; the most notable and familiar member of this family is the fused deposition modeling (FDM) or fused filament fabrication (FFF) process. This process works by selectively depositing uniform, approximately isotropic beads or elements of molten thermoplastic material (typically structural engineering plastics) in a series of pre-specified traces to build each layer of the part. There are many interesting 2-D and 3-D mechanical design problems that can be explored by designing the layout of these elements. The resulting structured, hierarchical material (which is both manufacturable and customized layer-by-layer within the limits of the process and material) can be defined as a manufacturing process-driven structured material (MPDSM). This dissertation explores several practical methods for designing these element layouts for 2-D and 3-D meso-scale mechanical problems, focusing ultimately on design-for-fracture. Three different fracture conditions are explored: (1) cases where a crack must be prevented or stopped, (2) cases where the crack must be encouraged or accelerated, and (3) cases where cracks must grow in a simple pre-determined pattern. Several new design tools were developed and refined to support the design of MPDSMs under fracture conditions, including a mapping method for the FDM manufacturability constraints, three major literature reviews, the collection, organization, and analysis of several large (qualitative and quantitative) multi-scale datasets on the fracture behavior of FDM-processed materials, new experimental equipment, and a fast and simple g-code generator based on commercially-available software. The refined design method and rules were experimentally validated using a series of case studies (involving both design and physical testing of the designs) at the end of the dissertation. Finally, a simple design guide was developed from the results of this project for practicing engineers who are experts in neither advanced solid mechanics nor process-tailored materials.
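    As a rough illustration of what such a g-code generator does, the sketch below emits a raster-filled FDM layer. The rectangular fill region, bead width, layer height, feed rate, and filament diameter are illustrative assumptions, not parameters taken from the dissertation or its software.

```python
# Minimal sketch of a raster-style g-code generator for one FDM layer.
# All geometry and process parameters below are placeholder assumptions.
import math

def raster_layer_gcode(width=20.0, height=10.0, bead_width=0.4,
                       layer_height=0.2, z=0.2, feed=1800,
                       filament_diameter=1.75):
    """Emit g-code moves that lay down parallel beads across a rectangle."""
    filament_area = math.pi * (filament_diameter / 2) ** 2
    lines = [f"G1 Z{z:.3f} F{feed}"]
    e = 0.0          # cumulative extrusion length (mm of filament)
    y = 0.0
    direction = 1
    while y <= height + 1e-9:
        x_start, x_end = (0.0, width) if direction > 0 else (width, 0.0)
        lines.append(f"G0 X{x_start:.3f} Y{y:.3f}")           # travel move
        # bead volume / filament cross-section = filament length to extrude
        e += bead_width * layer_height * width / filament_area
        lines.append(f"G1 X{x_end:.3f} Y{y:.3f} E{e:.4f}")    # extruding move
        y += bead_width
        direction *= -1                                        # serpentine fill
    return "\n".join(lines)

if __name__ == "__main__":
    print(raster_layer_gcode())
```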

    Extending the reach of uncertainty quantification in nuclear theory

    Get PDF
    The theory of the strong interaction—quantum chromodynamics (QCD)—is unsuited to practical calculations of nuclear observables, and approximate models for nuclear interaction potentials are required. In contrast to phenomenological models, chiral effective field theories (χEFTs) of QCD grant a handle on the theoretical uncertainty arising from the truncation of the chiral expansion. Uncertainties in χEFT are preferably quantified using Bayesian inference, but quantifying reliable posterior predictive distributions for nuclear observables presents several challenges. First, χEFT is parametrized by unknown low-energy constants (LECs) whose values must be inferred from low-energy data of nuclear structure and reaction observables. There are 31 LECs at fourth order in Weinberg power counting, leading to a high-dimensional inference problem which I approach by developing an advanced sampling protocol using Hamiltonian Monte Carlo (HMC). This allows me to quantify LEC posteriors up to and including fourth chiral order. Second, the χEFT truncation error is correlated across independent variables such as scattering energies and angles; I model these correlations using a Gaussian process. Third, the computational cost of computing few- and many-nucleon observables typically precludes their direct use in Bayesian parameter estimation, as each observable must be computed in excess of 100,000 times during HMC sampling. The one exception is nucleon-nucleon scattering observables, but even these incur a substantial computational cost in the present applications. I sidestep such issues using eigenvector-continuation emulators, which accurately mimic exact calculations while dramatically reducing the computational cost. Equipped with Bayesian posteriors for the LECs, and a model for the truncation error, I explore the predictive ability of χEFT, presenting the results as the probability distributions they always were.
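    As an illustration of the correlated-truncation-error idea, the sketch below draws truncation-error curves from a Gaussian process over scattering angle. The squared-exponential kernel, length scale, expansion ratio, and error magnitude are placeholder assumptions, not values or modeling choices taken from the thesis.

```python
# Minimal sketch: truncation errors correlated across scattering angle,
# modeled with a Gaussian process. All numbers below are illustrative.
import numpy as np

def rbf_kernel(x, length_scale=20.0):
    """Squared-exponential covariance over a 1-D independent variable."""
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

rng = np.random.default_rng(0)
angles = np.linspace(0.0, 180.0, 61)   # scattering angles in degrees
cbar = 0.05                            # assumed truncation-error scale
Q, order = 0.3, 4                      # assumed expansion ratio and chiral order

# Covariance of the error, scaled by an assumed leading omitted term of the
# expansion, cbar * Q**(order + 1), with a small jitter for numerical stability.
cov = (cbar * Q ** (order + 1)) ** 2 * rbf_kernel(angles)
cov += 1e-12 * np.eye(len(angles))

# Draw a few correlated truncation-error curves across the angular range.
draws = rng.multivariate_normal(np.zeros(len(angles)), cov, size=5)
print(draws.shape)  # (5, 61): smooth correlated errors rather than white noise
```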

    Scaling up integrated photonic reservoirs towards low-power high-bandwidth computing

    No full text

    Towards Autonomous Selective Harvesting: A Review of Robot Perception, Robot Design, Motion Planning and Control

    Full text link
    This paper provides an overview of the current state-of-the-art in selective harvesting robots (SHRs) and their potential for addressing the challenges of global food production. SHRs have the potential to increase productivity, reduce labour costs, and minimise food waste by selectively harvesting only ripe fruits and vegetables. The paper discusses the main components of SHRs, including perception, grasping, cutting, motion planning, and control. It also highlights the challenges in developing SHR technologies, particularly in the areas of robot design, motion planning, and control. The paper further discusses the potential benefits of integrating AI, soft robotics, and data-driven methods to enhance the performance and robustness of SHR systems. Finally, the paper identifies several open research questions in the field and highlights the need for further research and development efforts to advance SHR technologies to meet the challenges of global food production. Overall, this paper provides a starting point for researchers and practitioners interested in developing SHRs and highlights the need for more research in this field.
    Comment: Preprint, to appear in the Journal of Field Robotics.

    The Metaverse: Survey, Trends, Novel Pipeline Ecosystem & Future Directions

    Full text link
    The Metaverse offers a second world beyond reality, where boundaries are non-existent and possibilities are endless, through engagement and immersive experiences using virtual reality (VR) technology. Many disciplines can benefit from the advancement of the Metaverse when it is accurately developed, including the fields of technology, gaming, education, art, and culture. Nevertheless, developing the Metaverse environment to its full potential is an ambiguous task that needs proper guidance and directions. Existing surveys on the Metaverse focus only on a specific aspect and discipline of the Metaverse and lack a holistic view of the entire process. To this end, a more holistic, multi-disciplinary, in-depth, and academic- and industry-oriented review is required to provide a thorough study of the Metaverse development pipeline. To address these issues, we present in this survey a novel multi-layered pipeline ecosystem composed of (1) the Metaverse computing, networking, communications and hardware infrastructure, (2) environment digitization, and (3) user interactions. For every layer, we discuss the components that detail the steps of its development. Also, for each of these components, we examine the impact of a set of enabling technologies and empowering domains (e.g., Artificial Intelligence, Security & Privacy, Blockchain, Business, Ethics, and Social) on its advancement. In addition, we explain the importance of these technologies to support decentralization, interoperability, user experiences, interactions, and monetization. Our presented study highlights the existing challenges for each component, followed by research directions and potential solutions. To the best of our knowledge, this survey is the most comprehensive to date, allowing users, scholars, and entrepreneurs to gain an in-depth understanding of the Metaverse ecosystem and find their opportunities and potential for contribution.

    Offline and Online Models for Learning Pairwise Relations in Data

    Get PDF
    Pairwise relations between data points are essential for numerous machine learning algorithms. Many representation learning methods consider pairwise relations to identify the latent features and patterns in the data. This thesis investigates learning of pairwise relations from two different perspectives: offline learning and online learning. The first part of the thesis focuses on offline learning, starting with an investigation of the performance modeling of a synchronization method in concurrent programming using a Markov chain whose state transition matrix models pairwise relations between the involved cores in a computer process. The thesis then focuses on a particular pairwise distance measure, the minimax distance, and explores memory-efficient approaches to computing this distance by proposing a hierarchical representation of the data with a linear memory requirement with respect to the number of data points, from which the exact pairwise minimax distances can be derived in a memory-efficient manner. Then, a memory-efficient sampling method is proposed that follows the aforementioned hierarchical representation of the data and samples the data points in a way that the minimax distances between all data points are maximally preserved. Finally, the thesis proposes a practical non-parametric clustering of vehicle motion trajectories to annotate traffic scenarios based on transitive relations between trajectories in an embedded space. The second part of the thesis takes an online learning perspective, and starts by presenting an online learning method for identifying bottlenecks in a road network by extracting the minimax path, where bottlenecks are considered as road segments with the highest cost, e.g., in the sense of travel time. Inspired by real-world road networks, the thesis assumes a stochastic traffic environment in which the road-specific probability distribution of travel time is unknown. Therefore, the parameters of the probability distribution need to be learned through observations, by modeling the bottleneck identification task as a combinatorial semi-bandit problem. The proposed approach takes prior knowledge into account and follows a Bayesian approach to update the parameters. Moreover, the thesis develops a combinatorial variant of Thompson Sampling and derives an upper bound for the corresponding Bayesian regret. Furthermore, the thesis proposes an approximate algorithm to address the respective computational intractability issue. Finally, the thesis considers contextual information of road network segments by extending the proposed model to a contextual combinatorial semi-bandit framework and investigates and develops various algorithms for this contextual combinatorial setting.
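    For context on the minimax distance: on a complete graph, the minimax (bottleneck) distance between two points equals the largest edge weight on the path connecting them in a minimum spanning tree. The sketch below illustrates that fact with a naive quadratic traversal; it is not the memory-efficient hierarchical scheme proposed in the thesis, and the toy data are placeholders.

```python
# Minimal sketch: exact pairwise minimax distances read off a minimum
# spanning tree (naive O(n^2) traversal, for illustration only).
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def minimax_distances(points):
    """Minimax (bottleneck) distance between every pair of points."""
    d = squareform(pdist(points))                    # dense Euclidean distances
    mst = minimum_spanning_tree(d).toarray()
    mst = np.maximum(mst, mst.T)                     # symmetric tree adjacency
    n = len(points)
    mm = np.zeros((n, n))
    for s in range(n):                               # depth-first walk over the tree
        visited = {s}
        stack = [(s, 0.0)]
        while stack:
            u, best = stack.pop()
            for v in np.nonzero(mst[u])[0]:
                if v not in visited:
                    visited.add(v)
                    mm[s, v] = max(best, mst[u, v])  # max edge on the tree path s -> v
                    stack.append((v, mm[s, v]))
    return mm

points = np.random.default_rng(1).normal(size=(8, 2))
print(minimax_distances(points).round(2))
```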

    Machine learning techniques for predicting the stock market using daily market variables

    Get PDF
    Dissertation presented as the partial requirement for obtaining a Master's degree in Information Management, specialization in Knowledge Management and Business Intelligence.
    Predicting the stock market has never been seen as an easy task. The complexity of financial systems makes it extremely difficult for anything or anyone to predict what the future of prices holds, be it a day, a week, a month, or even a year. Many variables influence the market's volatility, and some of these may even be the gut feeling of an investor on a specific day. Several machine learning techniques have already been applied to forecast multiple stock market indexes, some presenting good accuracy when predicting whether prices will go up or down, and low error when dealing with regression data. This work aims to apply some state-of-the-art algorithms and compare their performance with Long Short-Term Memory (LSTM), as well as with each other. The variables used in this empirical work were the prices of the Dow Jones Industrial Average (DJIA) registered for every business day, from January 1st of 2006 to January 1st of 2018, for 29 companies. Some changes and adjustments were made to the original variables to present different data types to the algorithms. To ensure good quality and certainty when evaluating the flexibility and stability of each model, the error measure used was the Root Mean Squared Error, and the Mann-Whitney U test was applied to assess the statistical significance of the results obtained.
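    A minimal sketch of the evaluation protocol described above: per-window RMSE values from two forecasters are compared with a Mann-Whitney U test. The synthetic data and the two placeholder predictors stand in for the dissertation's actual models and DJIA series.

```python
# Minimal sketch: compare per-window RMSE of two models with a Mann-Whitney U test.
# Synthetic data and placeholder predictors; not the dissertation's models.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(42)
y_true = rng.normal(size=(30, 50))                           # 30 evaluation windows
pred_a = y_true + rng.normal(scale=0.8, size=y_true.shape)   # stand-in for e.g. an LSTM
pred_b = y_true + rng.normal(scale=1.0, size=y_true.shape)   # stand-in for a baseline

def rmse(y, y_hat, axis=1):
    return np.sqrt(np.mean((y - y_hat) ** 2, axis=axis))

rmse_a, rmse_b = rmse(y_true, pred_a), rmse(y_true, pred_b)
stat, p_value = mannwhitneyu(rmse_a, rmse_b, alternative="two-sided")
print(f"median RMSE A={np.median(rmse_a):.3f}  B={np.median(rmse_b):.3f}  p={p_value:.4f}")
```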

    Learning disentangled speech representations

    Get PDF
    A variety of informational factors are contained within the speech signal, and a single short recording of speech reveals much more than the spoken words. The best method to extract and represent informational factors from the speech signal ultimately depends on which informational factors are desired and how they will be used. In addition, methods will sometimes capture more than one informational factor at the same time, such as speaker identity, spoken content, and speaker prosody. The goal of this dissertation is to explore different ways to deconstruct the speech signal into abstract representations that can be learned and later reused in various speech technology tasks. This task of deconstructing, also known as disentanglement, is a form of distributed representation learning. As a general approach to disentanglement, there are some guiding principles that elaborate what a learned representation should contain as well as how it should function. In particular, learned representations should contain all of the requisite information in a more compact manner, be interpretable, remove nuisance factors of irrelevant information, be useful in downstream tasks, and be independent of the task at hand. The learned representations should also be able to answer counter-factual questions. In some cases, learned speech representations can be re-assembled in different ways according to the requirements of downstream applications. For example, in a voice conversion task, the speech content is retained while the speaker identity is changed, and in a content-privacy task, some targeted content may be concealed without affecting how surrounding words sound. While there is no single best method to disentangle all types of factors, some end-to-end approaches demonstrate a promising degree of generalization to diverse speech tasks. This thesis explores a variety of use-cases for disentangled representations including phone recognition, speaker diarization, linguistic code-switching, voice conversion, and content-based privacy masking. Speech representations can also be utilised for automatically assessing the quality and authenticity of speech, such as automatic MOS ratings or detecting deep fakes. The meaning of the term "disentanglement" is not well defined in previous work, and it has acquired several meanings depending on the domain (e.g. image vs. speech). Sometimes the term "disentanglement" is used interchangeably with the term "factorization". This thesis proposes that disentanglement of speech is distinct, and offers a viewpoint of disentanglement that can be considered both theoretically and practically.
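    As a toy illustration of re-assembling disentangled representations for voice conversion, the sketch below combines a frame-level content representation from one utterance with an utterance-level speaker embedding from another and decodes the pair. The module choices and dimensions are illustrative assumptions, not the models studied in the thesis.

```python
# Minimal sketch: content encoder + speaker encoder + decoder, re-assembled
# for voice conversion. Architecture and sizes are placeholder assumptions.
import torch
import torch.nn as nn

class ContentEncoder(nn.Module):
    def __init__(self, n_mels=80, dim=128):
        super().__init__()
        self.rnn = nn.GRU(n_mels, dim, batch_first=True)
    def forward(self, mel):                  # (batch, frames, n_mels)
        content, _ = self.rnn(mel)
        return content                       # frame-level content representation

class SpeakerEncoder(nn.Module):
    def __init__(self, n_mels=80, dim=64):
        super().__init__()
        self.proj = nn.Linear(n_mels, dim)
    def forward(self, mel):
        return self.proj(mel).mean(dim=1)    # utterance-level speaker embedding

class Decoder(nn.Module):
    def __init__(self, n_mels=80, content_dim=128, speaker_dim=64):
        super().__init__()
        self.out = nn.Linear(content_dim + speaker_dim, n_mels)
    def forward(self, content, speaker):
        speaker = speaker.unsqueeze(1).expand(-1, content.size(1), -1)
        return self.out(torch.cat([content, speaker], dim=-1))

# Voice conversion: content from utterance A, speaker identity from utterance B.
mel_a, mel_b = torch.randn(1, 200, 80), torch.randn(1, 150, 80)
content_enc, speaker_enc, dec = ContentEncoder(), SpeakerEncoder(), Decoder()
converted = dec(content_enc(mel_a), speaker_enc(mel_b))
print(converted.shape)                       # (1, 200, 80): A's content, B's voice
```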

    A proteomic survival predictor for COVID-19 patients in intensive care

    Get PDF
    © 2022 Demichev et al. This is an open access article distributed under the terms of the Creative Commons Attribution License: https://creativecommons.org/licenses/by/4.0/
    Global healthcare systems are challenged by the COVID-19 pandemic. There is a need to optimize allocation of treatment and resources in intensive care, as clinically established risk assessments such as SOFA and APACHE II scores show only limited performance for predicting the survival of severely ill COVID-19 patients. Additional tools are also needed to monitor treatment, including experimental therapies in clinical trials. Comprehensively capturing human physiology, we speculated that proteomics in combination with new data-driven analysis strategies could produce a new generation of prognostic discriminators. We studied two independent cohorts of patients with severe COVID-19 who required intensive care and invasive mechanical ventilation. SOFA score, Charlson comorbidity index, and APACHE II score showed limited performance in predicting the COVID-19 outcome. Instead, the quantification of 321 plasma protein groups at 349 timepoints in 50 critically ill patients receiving invasive mechanical ventilation revealed 14 proteins whose trajectories differed between survivors and non-survivors. A predictor trained on proteomic measurements obtained at the first time point at maximum treatment level (i.e. WHO grade 7), which was weeks before the outcome, achieved accurate classification of survivors (AUROC 0.81). We tested the established predictor on an independent validation cohort (AUROC 1.0). The majority of proteins with high relevance in the prediction model belong to the coagulation system and complement cascade. Our study demonstrates that plasma proteomics can give rise to prognostic predictors substantially outperforming current prognostic markers in intensive care.
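    A minimal sketch of the prediction setup, assuming a regularized logistic model as a stand-in for the published predictor: fit on per-patient protein quantities from a single timepoint and report AUROC on a held-out cohort. The data here are synthetic, so the reported score is uninformative by construction.

```python
# Minimal sketch: classifier on protein-group quantities with AUROC evaluation.
# Synthetic data and a stand-in model; not the published predictor.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_train, n_valid, n_proteins = 50, 24, 321
X_train = rng.normal(size=(n_train, n_proteins))   # protein group quantities
y_train = rng.integers(0, 2, size=n_train)         # 1 = survivor, 0 = non-survivor
X_valid = rng.normal(size=(n_valid, n_proteins))
y_valid = rng.integers(0, 2, size=n_valid)

model = make_pipeline(StandardScaler(),
                      LogisticRegression(penalty="l2", C=0.1, max_iter=1000))
model.fit(X_train, y_train)
auroc = roc_auc_score(y_valid, model.predict_proba(X_valid)[:, 1])
print(f"validation AUROC: {auroc:.2f}")            # ~0.5 on random data, by construction
```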