
    Meso-scale FDM material layout design strategies under manufacturability constraints and fracture conditions

    In the manufacturability-driven design (MDD) perspective, manufacturability of the product or system is the most important of the design requirements. In addition to ensuring that complex designs (e.g., topology optimization) are manufacturable with a given process or process family, MDD also helps mechanical designers take advantage of unique process-material effects generated during manufacturing. One of the most recognizable examples comes from the scanning-type family of additive manufacturing (AM) processes, whose most notable and familiar member is the fused deposition modeling (FDM) or fused filament fabrication (FFF) process. This process works by selectively depositing uniform, approximately isotropic beads or elements of molten thermoplastic material (typically structural engineering plastics) in a series of pre-specified traces to build each layer of the part. Many interesting 2-D and 3-D mechanical design problems can be explored by designing the layout of these elements. The resulting structured, hierarchical material (which is both manufacturable and customized layer-by-layer within the limits of the process and material) can be defined as a manufacturing process-driven structured material (MPDSM). This dissertation explores several practical methods for designing these element layouts for 2-D and 3-D meso-scale mechanical problems, focusing ultimately on design-for-fracture. Three fracture conditions are explored: (1) cases where a crack must be prevented or stopped, (2) cases where a crack must be encouraged or accelerated, and (3) cases where cracks must grow in a simple pre-determined pattern. Several new design tools were developed and refined to support the design of MPDSMs under fracture conditions: a mapping method for the FDM manufacturability constraints, three major literature reviews, the collection, organization, and analysis of several large (qualitative and quantitative) multi-scale datasets on the fracture behavior of FDM-processed materials, new experimental equipment, and a fast and simple g-code generator based on commercially-available software. The resulting design method and rules were experimentally validated through a series of case studies (involving both design and physical testing of the designs) at the end of the dissertation. Finally, a simple design guide was developed from the results of this project for practicing engineers who are experts in neither advanced solid mechanics nor process-tailored materials.
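    The g-code generation step mentioned in the abstract can be made concrete with a small sketch. The following is a minimal raster-style generator for a single rectangular FDM layer; it is not the dissertation's tool, and the fill pattern and all process parameters (bead width, layer height, filament diameter, feed rate) are illustrative assumptions.

```python
import math

def raster_layer(x0, y0, width, height, bead_w=0.4, layer_h=0.2,
                 filament_d=1.75, feed=1800, z=0.2):
    """Emit g-code for a back-and-forth raster fill of one layer.

    Illustrative sketch only: the extrusion length E is chosen so the
    deposited bead volume matches the consumed filament volume.
    """
    area_ratio = (bead_w * layer_h) / (math.pi * (filament_d / 2) ** 2)
    lines, e = [f"G1 Z{z:.3f} F{feed}"], 0.0
    for i in range(int(height / bead_w) + 1):
        y = y0 + i * bead_w
        xs = (x0, x0 + width) if i % 2 == 0 else (x0 + width, x0)
        lines.append(f"G0 X{xs[0]:.3f} Y{y:.3f}")           # travel move
        e += width * area_ratio                              # filament length used
        lines.append(f"G1 X{xs[1]:.3f} Y{y:.3f} E{e:.4f}")   # print move
    return "\n".join(lines)

print(raster_layer(0, 0, 20, 10))
```

    Designing element layouts for an MPDSM amounts to choosing these trace directions and positions layer by layer, subject to the machine's manufacturability constraints.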

    Towards Autonomous Selective Harvesting: A Review of Robot Perception, Robot Design, Motion Planning and Control

    This paper provides an overview of the current state of the art in selective harvesting robots (SHRs) and their potential for addressing the challenges of global food production. SHRs have the potential to increase productivity, reduce labour costs, and minimise food waste by selectively harvesting only ripe fruits and vegetables. The paper discusses the main components of SHRs, including perception, grasping, cutting, motion planning, and control, and highlights the challenges in developing SHR technologies, particularly in the areas of robot design, motion planning, and control. It also discusses the potential benefits of integrating AI, soft robotics, and data-driven methods to enhance the performance and robustness of SHR systems. Finally, the paper identifies several open research questions in the field and highlights the need for further research and development efforts to advance SHR technologies to meet the challenges of global food production. Overall, this paper provides a starting point for researchers and practitioners interested in developing SHRs and highlights the need for more research in this field.

    The Metaverse: Survey, Trends, Novel Pipeline Ecosystem & Future Directions

    The Metaverse offers a second world beyond reality, where boundaries are non-existent and possibilities are endless, through engagement and immersive experiences using virtual reality (VR) technology. Many disciplines can benefit from the advancement of the Metaverse when accurately developed, including the fields of technology, gaming, education, art, and culture. Nevertheless, developing the Metaverse environment to its full potential is an ambiguous task that needs proper guidance and direction. Existing surveys on the Metaverse focus only on a specific aspect or discipline of the Metaverse and lack a holistic view of the entire process. To this end, a more holistic, multi-disciplinary, in-depth, and academia- and industry-oriented review is required to provide a thorough study of the Metaverse development pipeline. To address these issues, we present in this survey a novel multi-layered pipeline ecosystem composed of (1) the Metaverse computing, networking, communications, and hardware infrastructure, (2) environment digitization, and (3) user interactions. For every layer, we discuss the components that detail the steps of its development, and for each of these components, we examine the impact of a set of enabling technologies and empowering domains (e.g., Artificial Intelligence, Security & Privacy, Blockchain, Business, Ethics, and Social) on its advancement. In addition, we explain the importance of these technologies in supporting decentralization, interoperability, user experiences, interactions, and monetization. Our study highlights the existing challenges for each component, followed by research directions and potential solutions. To the best of our knowledge, this survey is the most comprehensive to date and allows users, scholars, and entrepreneurs to get an in-depth understanding of the Metaverse ecosystem and to identify their opportunities and potential for contribution.

    Self-Supervised Learning to Prove Equivalence Between Straight-Line Programs via Rewrite Rules

    We target the problem of automatically synthesizing proofs of semantic equivalence between two programs made of sequences of statements. We represent programs using abstract syntax trees (ASTs), where a given set of semantics-preserving rewrite rules can be applied to a specific AST pattern to generate a transformed and semantically equivalent program. In our system, two programs are equivalent if there exists a sequence of applications of these rewrite rules that rewrites one program into the other. We propose a neural network architecture based on a transformer model to generate proofs of equivalence between program pairs. The system outputs a sequence of rewrites, and the validity of the sequence is checked simply by verifying that it can be applied. If no valid sequence is produced by the neural network, the system reports the programs as non-equivalent, ensuring by design that no programs may be incorrectly reported as equivalent. Our system is fully implemented for a grammar that can represent straight-line programs with function calls and multiple types. To train the system efficiently to generate such sequences, we develop an original incremental training technique, named self-supervised sample selection. We extensively study the effectiveness of this novel training approach on proofs of increasing complexity and length. Our system, S4Eq, achieves 97% proof success on a curated dataset of 10,000 pairs of equivalent programs.
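    The check-by-application idea is simple enough to sketch. Below is a toy proof checker in the same spirit: ASTs are nested tuples, a named rule either applies or fails, and a rule sequence certifies equivalence only if every step applies and the final term equals the target, so no false positives are possible by construction. The tuple encoding and the two addition rules are illustrative assumptions (S4Eq applies rules at arbitrary AST patterns, not only at the root as here).

```python
def apply_rule(ast, rule):
    """Apply a named semantics-preserving rule at the root, or return None."""
    op = ast[0] if isinstance(ast, tuple) else None
    if rule == "comm-add" and op == "+":           # a + b -> b + a
        return ("+", ast[2], ast[1])
    if (rule == "assoc-add" and op == "+"
            and isinstance(ast[1], tuple) and ast[1][0] == "+"):
        return ("+", ast[1][1], ("+", ast[1][2], ast[2]))  # (a+b)+c -> a+(b+c)
    return None                                     # rule not applicable

def check_proof(lhs, rhs, rules):
    """A sequence of rules proves lhs == rhs iff every step applies
    and the final term equals rhs."""
    term = lhs
    for r in rules:
        term = apply_rule(term, r)
        if term is None:
            return False
    return term == rhs

lhs = ("+", ("+", "x", "y"), "z")
rhs = ("+", "x", ("+", "y", "z"))
print(check_proof(lhs, rhs, ["assoc-add"]))  # True
```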

    When to be critical? Performance and evolvability in different regimes of neural Ising agents

    It has long been hypothesized that operating close to the critical state is beneficial for natural and artificial systems, as well as for their evolution. We put this hypothesis to the test in a system of evolving foraging agents controlled by neural networks whose dynamical regime can adapt throughout evolution. Surprisingly, we find that all populations that discover solutions evolve to be subcritical. Through a resilience analysis, we find that there are still benefits to starting evolution in the critical regime: initially critical agents maintain their fitness level under environmental changes (for example, in the lifespan) and degrade gracefully when their genome is perturbed, whereas initially subcritical agents, even when evolved to the same fitness, are often unable to withstand changes in the lifespan and degrade catastrophically under genetic perturbations. Furthermore, we find that the optimal distance to criticality depends on task complexity. To test this, we introduce a hard task and a simple task: on the hard task, agents evolve closer to criticality, whereas more subcritical solutions are found for the simple task. We verify that our results are independent of the selected evolutionary mechanisms by testing them on two fundamentally different approaches: a genetic algorithm and an evolutionary strategy. In summary, our study suggests that although optimal behaviour on the simple task is obtained in a subcritical regime, initializing near criticality is important for efficiently finding optimal solutions for new tasks of unknown complexity.
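    As a toy illustration of a tunable dynamical regime (not the paper's agent model), the sketch below runs Glauber dynamics on a fully connected ferromagnetic Ising network and sweeps the inverse temperature beta across the mean-field critical point at beta = 1; the network size and beta values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
J = np.ones((N, N)) / N               # mean-field ferromagnetic couplings
np.fill_diagonal(J, 0.0)

def glauber_step(s, beta):
    """One asynchronous Glauber update of the spin vector s."""
    i = rng.integers(N)
    h = J[i] @ s                                   # local field on unit i
    p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * h))
    s[i] = 1.0 if rng.random() < p_up else -1.0
    return s

# Below beta = 1 the network stays disordered (subcritical); above it,
# the magnetization locks in (supercritical).
for beta in (0.5, 1.0, 2.0):
    s = rng.choice([-1.0, 1.0], N)
    mags = []
    for _ in range(20000):
        s = glauber_step(s, beta)
        mags.append(abs(s.mean()))
    print(f"beta={beta:.1f}  mean |m| = {np.mean(mags[5000:]):.3f}")
```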

    Acute appendicitis in childhood and adolescence: new diagnostic methods for the pretherapeutic differentiation of histopathological entities to support conservative treatment strategies

    The studies summarized here were motivated by current evidence suggesting that clinically uncomplicated, histopathologically phlegmonous appendicitis and clinically complicated, histopathologically gangrenous appendicitis are independent entities, which may be assigned to different treatment options (conservative vs. surgical). Against this background, one aim of the work was to investigate how the forms of acute appendicitis in childhood and adolescence can be distinguished before therapy begins. Differences between patients with uncomplicated, phlegmonous appendicitis and those with complicated (gangrenous and perforating) appendicitis can be demonstrated both in laboratory diagnostics (P1 and P2) and in ultrasound (P3). On their own, however, these differences lack the discriminatory power to support a sufficiently confident treatment decision. Applying artificial intelligence methods to examiner-independent diagnostic parameters (P4) further increased the predictive accuracy for acute appendicitis. A differential gene expression analysis (P5) yielded interesting results regarding the distinct pathomechanisms of the two inflammatory entities. In a proof-of-concept study, the previously described artificial intelligence methods were applied to the gene expression data (P6), demonstrating in a model that the entities can in principle be differentiated using the new method. A medium-term goal is to define a biomarker signature whose diagnostic value derives from a computer algorithm, enabling rapid treatment decisions. Ideally, this biomarker signature should be safe, objective, and easy to measure, and should offer greater diagnostic certainty than current diagnostics based on patient history, examination, laboratory analysis, and ultrasound. The long-term goal of follow-up studies is to identify a biomarker signature with the best possible predictive power. For routine clinical diagnostics, PCR-based point-of-care devices are conceivable, using a limited number of primers for a biomarker signature with high predictive power; the resulting biomarker would derive its diagnostic value from an easy-to-use computer algorithm. The combination of gene expression analysis with artificial intelligence methods can thus form the basis of a new diagnostic instrument for reliably distinguishing different appendicitis entities.
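    As a minimal illustration of the differential gene expression step (P5), the sketch below ranks genes by a two-sample t-test on synthetic data standing in for the phlegmonous and gangrenous cohorts; the group sizes and effect sizes are invented, and the publications' actual analysis pipeline is not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_genes, n_a, n_b = 200, 15, 15
expr_a = rng.normal(0.0, 1.0, (n_genes, n_a))   # phlegmonous samples
expr_b = rng.normal(0.0, 1.0, (n_genes, n_b))   # gangrenous samples
expr_b[:5] += 2.0                               # 5 truly shifted genes

# Per-gene two-sample t-test; small p-values flag candidate markers.
t, p = stats.ttest_ind(expr_a, expr_b, axis=1)
ranked = np.argsort(p)
print("top candidate genes:", ranked[:5])
print("their p-values:", p[ranked[:5]].round(5))
```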

    Identification of biomarkers co-associated with M1 macrophages, ferroptosis and cuproptosis in alcoholic hepatitis by bioinformatics and experimental verification

    Background: Alcoholic hepatitis (AH) is a major health problem worldwide. There is increasing evidence that immune cells, iron metabolism, and copper metabolism play important roles in the development of AH. We aimed to explore biomarkers that are co-associated with M1 macrophages, ferroptosis, and cuproptosis in AH patients. Methods: The GSE28619 and GSE103580 datasets were integrated; the CIBERSORT algorithm was used to analyze the infiltration of 22 types of immune cells, and the GSVA algorithm was used to calculate ferroptosis and cuproptosis scores. Using the "WGCNA" R package, we established a gene co-expression network and analyzed the correlation between M1 macrophage, ferroptosis, and cuproptosis scores and module characteristic genes. Candidate genes were then screened by WGCNA and differential expression analysis, and LASSO-SVM analysis was used to identify biomarkers co-associated with M1 macrophages, ferroptosis, and cuproptosis. Finally, we validated these potential biomarkers using GEO datasets (GSE155907, GSE142530, and GSE97234) and a mouse model of AH. Results: The infiltration level of M1 macrophages was significantly increased in AH patients, as were ferroptosis and cuproptosis scores; moreover, M1 macrophages, ferroptosis, and cuproptosis were positively correlated with one another. Combining the bioinformatics analysis with a mouse model of AH, we found that ALDOA, COL3A1, LUM, THBS2, and TIMP1 may be potential biomarkers co-associated with M1 macrophages, ferroptosis, and cuproptosis in AH patients. Conclusion: We identified 5 potential biomarkers that are promising new targets for the diagnosis and treatment of AH patients.
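    The LASSO-SVM screening step can be sketched in a few lines: LASSO keeps a sparse subset of genes, and a linear SVM is then cross-validated on that subset. The synthetic matrix below merely stands in for the integrated GSE28619/GSE103580 data; its dimensions and labels are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 500))                   # 60 samples x 500 genes
y = (X[:, :5].sum(axis=1) + rng.normal(0, 1, 60) > 0).astype(int)  # AH label

lasso = LassoCV(cv=5).fit(X, y)                  # sparse gene selection
keep = np.flatnonzero(lasso.coef_)               # genes with nonzero weight
print(f"LASSO kept {keep.size} genes")

svm = SVC(kernel="linear")                       # classify on kept genes only
acc = cross_val_score(svm, X[:, keep], y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```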

    Deep Transfer Learning Applications in Intrusion Detection Systems: A Comprehensive Review

    Globally, the external Internet is increasingly being connected to contemporary industrial control systems, so there is an immediate need to protect these networks from a variety of threats. The key infrastructure of industrial activity can be protected from harm by using an intrusion detection system (IDS), a preventive mechanism that recognizes new kinds of dangerous threats and hostile activities. This study examines the most recent artificial intelligence (AI) techniques used to build IDSs for many kinds of industrial control networks, with a particular emphasis on IDSs based on deep transfer learning (DTL). DTL can be seen as a type of information fusion that merges and/or adapts knowledge from multiple domains to enhance performance on the target task, particularly when labeled data in the target domain is scarce. Publications issued after 2015 were considered and divided into three categories: DTL-only and IDS-only papers, which inform the introduction and background, and DTL-based IDS papers, which form the core of this review. By reading this review, researchers will gain a better grasp of the current state of DTL approaches used in IDSs across many different types of networks. Other useful information is also covered, such as the datasets used, the type of DTL employed, the pre-trained network, the IDS techniques, the evaluation metrics (including accuracy/F-score and false alarm rate (FAR)), and the improvement gained. The algorithms and methods used in several studies, which clearly illustrate the principles of each DTL-based IDS subcategory, are presented to the reader.
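    The core DTL recipe surveyed here can be sketched as follows: pre-train a small network on a plentiful source IDS dataset, freeze the transferred feature layers, and fine-tune only a new classification head on scarce target-domain labels. The architecture and the synthetic traffic features below are illustrative assumptions, not any specific surveyed system.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
feat = nn.Sequential(                  # transferable feature extractor
    nn.Linear(20, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
)

def train(model, X, y, epochs=50):
    """Full-batch training of whichever parameters are still trainable."""
    opt = torch.optim.Adam(
        [p for p in model.parameters() if p.requires_grad], lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(X), y).backward()
        opt.step()

# Source phase: plentiful labeled traffic from another network.
Xs, ys = torch.randn(1000, 20), torch.randint(0, 2, (1000,))
train(nn.Sequential(feat, nn.Linear(32, 2)), Xs, ys)

# Target phase: freeze the transferred features, retrain a fresh
# benign-vs-attack head on scarce target labels.
for p in feat.parameters():
    p.requires_grad = False
Xt, yt = torch.randn(50, 20), torch.randint(0, 2, (50,))
target_model = nn.Sequential(feat, nn.Linear(32, 2))
train(target_model, Xt, yt)
acc = (target_model(Xt).argmax(1) == yt).float().mean().item()
print(f"target-domain training accuracy: {acc:.2f}")
```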

    Cardiovascular diseases prediction by machine learning incorporation with deep learning

    It is not yet known what causes cardiovascular disease (CVD), but we do know that it is associated with a high risk of death as well as severe morbidity and disability. There is an urgent need for AI-based technologies that can promptly and reliably predict future outcomes for individuals who have cardiovascular disease. The Internet of Things (IoT) is serving as a driving force behind the development of CVD prediction, and machine learning (ML) is used to analyse and make predictions from the data that IoT devices collect. Traditional machine learning algorithms are unable to account for differences in the data and achieve only low accuracy in their model predictions. This research presents a collection of machine learning models to address this problem; these models take into account the data observation mechanisms and training procedures of a number of different algorithms. To verify the efficacy of our strategy, we evaluated it together with other classification models on the Heart Dataset. The proposed method achieves nearly 96 percent accuracy, higher than other existing methods, and a complete analysis over several metrics is provided. Research in the field of deep learning will benefit from additional data from a large number of medical institutions, which may be used for the development of artificial neural network structures.
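    One common way to combine classical ML with a neural model is soft voting over heterogeneous classifiers. The sketch below pairs logistic regression and a random forest with a small MLP on synthetic tabular data shaped like the 13-feature UCI Heart dataset; the ensemble and data are illustrative assumptions, not the paper's exact pipeline, and will not reproduce its reported ~96 percent accuracy.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for heart-disease tabular data (13 features).
X, y = make_classification(n_samples=500, n_features=13, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(32, 16),
                              max_iter=2000, random_state=0)),
    ],
    voting="soft",                # average the predicted probabilities
)
acc = cross_val_score(ensemble, X, y, cv=5).mean()
print(f"5-fold CV accuracy: {acc:.3f}")
```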