9,738 research outputs found

    Meso-scale FDM material layout design strategies under manufacturability constraints and fracture conditions

    In the manufacturability-driven design (MDD) perspective, manufacturability of the product or system is the most important of the design requirements. In addition to ensuring that complex designs (e.g., topology optimization) are manufacturable with a given process or process family, MDD also helps mechanical designers take advantage of unique process-material effects generated during manufacturing. One of the most recognizable examples of this comes from the scanning-type family of additive manufacturing (AM) processes; the most notable and familiar member of this family is the fused deposition modeling (FDM) or fused filament fabrication (FFF) process. This process works by selectively depositing uniform, approximately isotropic beads or elements of molten thermoplastic material (typically structural engineering plastics) in a series of pre-specified traces to build each layer of the part. There are many interesting 2-D and 3-D mechanical design problems that can be explored by designing the layout of these elements. The resulting structured, hierarchical material (which is both manufacturable and customized layer-by-layer within the limits of the process and material) can be defined as a manufacturing process-driven structured material (MPDSM). This dissertation explores several practical methods for designing these element layouts for 2-D and 3-D meso-scale mechanical problems, focusing ultimately on design-for-fracture. Three different fracture conditions are explored: (1) cases where a crack must be prevented or stopped, (2) cases where the crack must be encouraged or accelerated, and (3) cases where cracks must grow in a simple pre-determined pattern. Several new design tools were developed and refined to support the design of MPDSMs under fracture conditions, including a mapping method for the FDM manufacturability constraints, three major literature reviews, the collection, organization, and analysis of several large (qualitative and quantitative) multi-scale datasets on the fracture behavior of FDM-processed materials, new experimental equipment, and a fast and simple g-code generator based on commercially available software. The refined design method and rules were experimentally validated using a series of case studies (involving both design and physical testing of the designs) at the end of the dissertation. Finally, a simple design guide for practicing engineers who are experts in neither advanced solid mechanics nor process-tailored materials was developed from the results of this project.
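
    The abstract mentions a fast and simple g-code generator for laying out FDM beads. As an illustration only (not the author's tool), the sketch below emits G-code for a single rectilinear raster layer; the bead width, layer height, filament diameter, and feed rate are arbitrary assumptions.

        # Illustrative sketch only: a minimal raster-path G-code emitter for one FDM layer.
        # All process parameters below are assumed values, not taken from the dissertation.
        import math

        def raster_layer_gcode(width_mm, depth_mm, z_mm, bead_w=0.4, layer_h=0.2,
                               filament_d=1.75, feed=1800):
            """Emit G-code for a simple back-and-forth raster fill of a rectangle."""
            # Extruded filament length per mm of trace = bead cross-section / filament cross-section.
            area_ratio = (bead_w * layer_h) / (math.pi * (filament_d / 2.0) ** 2)
            lines = [f"G1 Z{z_mm:.3f} F{feed}"]
            y, direction, e = 0.0, 1, 0.0
            while y <= depth_mm + 1e-9:
                x0, x1 = (0.0, width_mm) if direction > 0 else (width_mm, 0.0)
                lines.append(f"G0 X{x0:.3f} Y{y:.3f}")                   # travel to the start of the trace
                e += width_mm * area_ratio                               # filament consumed by one bead
                lines.append(f"G1 X{x1:.3f} Y{y:.3f} E{e:.5f} F{feed}")  # deposit one bead
                y += bead_w                                              # step over by one bead width
                direction *= -1
            return "\n".join(lines)

        print(raster_layer_gcode(20.0, 10.0, 0.2))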

    Modelling uncertainties for measurements of the H → γγ Channel with the ATLAS Detector at the LHC

    The Higgs boson to diphoton (H → γγ) branching ratio is only 0.227 %, but this final state has yielded some of the most precise measurements of the particle. As measurements of the Higgs boson become increasingly precise, greater importance is placed on the factors that make up the uncertainty. Reducing the effects of these uncertainties requires an understanding of their causes. The research presented in this thesis aims to illuminate how uncertainties on simulation modelling are determined and proposes novel techniques for deriving them. The upgrade of the FastCaloSim tool, used for simulating events in the ATLAS calorimeter at a rate far exceeding that of the nominal detector simulation, Geant4, is described. The integration of a method that allows the toolbox to emulate the accordion geometry of the liquid argon calorimeters is detailed. This tool allows for the production of larger samples while using significantly fewer computing resources. A measurement of the total Higgs boson production cross-section multiplied by the diphoton branching ratio (σ × Bγγ) is presented, where this value was determined to be (σ × Bγγ)obs = 127 ± 7 (stat.) ± 7 (syst.) fb, in agreement with the Standard Model prediction. The signal and background shape modelling is described, and the contribution of the background modelling uncertainty to the total uncertainty ranges from 2.4 % to 18 %, depending on the Higgs boson production mechanism. A method for estimating the number of events in a Monte Carlo background sample required to model the shape is detailed. It was found that the nominal γγ background sample needed to be enlarged by a factor of 3.60 to adequately model the background at a confidence level of 68 %, or by a factor of 7.20 at a confidence level of 95 %. Based on this estimate, 0.5 billion additional simulated events were produced, substantially reducing the background modelling uncertainty. A technique is detailed for emulating the effects of Monte Carlo event generator differences using multivariate reweighting. The technique is used to estimate the event generator uncertainty on the signal modelling of tHqb events, improving the reliability of estimating the tHqb production cross-section. This multivariate reweighting technique is then used to estimate the generator modelling uncertainties on background V γγ samples for the first time. The estimated uncertainties were found to be covered by the currently assumed background modelling uncertainty.
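
    To illustrate the multivariate reweighting idea mentioned above, the sketch below builds a coarse two-observable binned ratio between toy "nominal" and "alternative" generator samples and turns it into per-event weights. The thesis itself likely uses a more sophisticated (e.g. BDT-based) reweighter; the sample shapes, observables, and binning here are assumptions.

        # Simplified sketch of multivariate reweighting between two event generators.
        import numpy as np

        rng = np.random.default_rng(0)
        # Toy "nominal" and "alternative" generator samples in two observables (assumed shapes).
        nominal = rng.normal([0.0, 1.0], [1.0, 0.5], size=(100_000, 2))
        alternative = rng.normal([0.2, 1.1], [1.1, 0.5], size=(100_000, 2))

        edges = [np.linspace(-4, 4, 21), np.linspace(-1, 3, 21)]
        h_nom, _ = np.histogramdd(nominal, bins=edges)
        h_alt, _ = np.histogramdd(alternative, bins=edges)

        # Per-bin ratio alternative/nominal becomes an event weight applied to the nominal sample.
        ratio = np.divide(h_alt, h_nom, out=np.ones_like(h_alt), where=h_nom > 0)
        ix = np.clip(np.searchsorted(edges[0], nominal[:, 0]) - 1, 0, 19)
        iy = np.clip(np.searchsorted(edges[1], nominal[:, 1]) - 1, 0, 19)
        weights = ratio[ix, iy]

        # The shift of any downstream observable under these weights estimates the
        # generator modelling uncertainty.
        print(weights.mean(), weights.std())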

    Augmented classification for electrical coil winding defects

    A green revolution has accelerated over recent decades, with a view to replacing existing transportation power solutions through the adoption of greener electrical alternatives. In parallel, the digitisation of manufacturing has enabled progress in the tracking and traceability of processes and improvements in fault detection and classification. This paper explores electrical machine manufacture and the challenges faced in identifying failure modes during this life cycle through the demonstration of state-of-the-art machine vision methods for the classification of electrical coil winding defects. We demonstrate how recent generative adversarial networks can be used to augment training of these models to further improve their accuracy for this challenging task. Our approach utilises pre-processing and dimensionality reduction to boost the performance of a standard convolutional neural network (CNN), leading to a significant increase in accuracy.
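
    As a rough illustration of the GAN-based augmentation described above (not the paper's actual pipeline), the sketch below concatenates GAN-generated defect images with real ones before training a small CNN; the image size, class count, sample counts, and random tensors standing in for the data are all placeholders.

        # Illustrative sketch only: augmenting a defect-classification training set with
        # GAN-generated images before training a small CNN.
        import torch
        import torch.nn as nn
        from torch.utils.data import DataLoader, TensorDataset

        n_real, n_synthetic, n_classes = 800, 800, 4    # assumed sample counts and classes
        real_x = torch.rand(n_real, 1, 64, 64)          # stand-in for preprocessed coil images
        real_y = torch.randint(0, n_classes, (n_real,))
        fake_x = torch.rand(n_synthetic, 1, 64, 64)     # stand-in for GAN-generated defect images
        fake_y = torch.randint(0, n_classes, (n_synthetic,))

        # GAN samples are simply concatenated with the real data to enlarge the training set.
        train = TensorDataset(torch.cat([real_x, fake_x]), torch.cat([real_y, fake_y]))
        loader = DataLoader(train, batch_size=32, shuffle=True)

        cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(32 * 16 * 16, n_classes),
        )
        opt = torch.optim.Adam(cnn.parameters(), lr=1e-3)
        loss_fn = nn.CrossEntropyLoss()

        for epoch in range(2):                          # short run, for illustration only
            for x, y in loader:
                opt.zero_grad()
                loss = loss_fn(cnn(x), y)
                loss.backward()
                opt.step()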

    Interval Type-2 Beta Fuzzy Near Sets Approach to Content-Based Image Retrieval

    In computer-based search systems, similarity plays a key role in replicating the human search process. Indeed, the human search process underlies many natural abilities such as image recovery, language comprehension, decision making, and pattern recognition. Image search consists of establishing a correspondence between the available image and the one sought by the user by measuring the similarity between the images. Image search by content is generally based on the similarity of the visual characteristics of the images. The distance function used to evaluate the similarity between images depends not only on the criteria of the search but also on the representation of the characteristics of the image. This is the main idea of a content-based image retrieval (CBIR) system. In this article, we first construct type-2 beta fuzzy memberships of descriptor vectors to help manage the inaccuracy and uncertainty of the features extracted from the images. By analogy with type-2 fuzzy logic and motivated by near set theory, we advance a new fuzzy similarity measure (FSM), the interval type-2 fuzzy nearness measure (IT2FNM), and rank the retrieved images according to it. We then propose three new IT2 FSMs and provide mathematical justification that they satisfy the proximity properties (i.e., reflexivity, transitivity, symmetry, and overlapping). Experiments on three image databases yield consistent and significant results.
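
    The sketch below is a loose illustration of the kind of computation involved: an interval type-2 membership (lower and upper bounds) over descriptor vectors and a naive interval-overlap "nearness" score. The actual beta membership functions and the IT2FNM defined in the article differ; every formula here is an assumption used only to show the general shape of the idea.

        # Illustrative sketch only: interval type-2 membership and a toy nearness score.
        import numpy as np

        def it2_membership(descriptor, centre, width, blur=0.2):
            """Primary Gaussian membership with a footprint of uncertainty of +/- blur (assumed form)."""
            mu = np.exp(-0.5 * ((descriptor - centre) / width) ** 2)
            lower = np.clip(mu - blur, 0.0, 1.0)
            upper = np.clip(mu + blur, 0.0, 1.0)
            return lower, upper

        def nearness(desc_a, desc_b, centre, width):
            """Average overlap of the two membership intervals (a stand-in for the IT2FNM)."""
            lo_a, up_a = it2_membership(desc_a, centre, width)
            lo_b, up_b = it2_membership(desc_b, centre, width)
            overlap = np.maximum(0.0, np.minimum(up_a, up_b) - np.maximum(lo_a, lo_b))
            return overlap.mean()

        rng = np.random.default_rng(1)
        query, candidate = rng.random(128), rng.random(128)   # toy 128-d descriptors
        print(nearness(query, candidate, centre=0.5, width=0.25))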

    Learning disentangled speech representations

    A variety of informational factors are contained within the speech signal, and a single short recording of speech reveals much more than the spoken words. The best method to extract and represent informational factors from the speech signal ultimately depends on which informational factors are desired and how they will be used. In addition, methods sometimes capture more than one informational factor at the same time, such as speaker identity, spoken content, and speaker prosody. The goal of this dissertation is to explore different ways to deconstruct the speech signal into abstract representations that can be learned and later reused in various speech technology tasks. This task of deconstructing, also known as disentanglement, is a form of distributed representation learning. As a general approach to disentanglement, there are some guiding principles that elaborate what a learned representation should contain as well as how it should function. In particular, learned representations should contain all of the requisite information in a more compact manner, be interpretable, remove nuisance factors of irrelevant information, be useful in downstream tasks, and be independent of the task at hand. The learned representations should also be able to answer counter-factual questions. In some cases, learned speech representations can be re-assembled in different ways according to the requirements of downstream applications. For example, in a voice conversion task, the speech content is retained while the speaker identity is changed. And in a content-privacy task, some targeted content may be concealed without affecting how surrounding words sound. While there is no single best method to disentangle all types of factors, some end-to-end approaches demonstrate a promising degree of generalization to diverse speech tasks. This thesis explores a variety of use-cases for disentangled representations including phone recognition, speaker diarization, linguistic code-switching, voice conversion, and content-based privacy masking. Speech representations can also be utilised for automatically assessing the quality and authenticity of speech, such as automatic MOS rating or deepfake detection. The meaning of the term "disentanglement" is not well defined in previous work, and it has acquired several meanings depending on the domain (e.g. image vs. speech). Sometimes the term "disentanglement" is used interchangeably with the term "factorization". This thesis proposes that disentanglement of speech is distinct, and offers a viewpoint of disentanglement that can be considered both theoretically and practically.
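
    As a toy illustration of the re-assembly idea described above, the sketch below wires separate "content" and "speaker" encoders into one decoder, so the two codes can be recombined for a voice-conversion-style task. The layer types and sizes are arbitrary assumptions and do not reflect the architectures used in the thesis.

        # Illustrative sketch only: a toy two-encoder/one-decoder layout for disentanglement.
        import torch
        import torch.nn as nn

        class DisentangledAE(nn.Module):
            def __init__(self, n_mels=80, content_dim=32, speaker_dim=16):
                super().__init__()
                self.content_enc = nn.GRU(n_mels, content_dim, batch_first=True)
                self.speaker_enc = nn.GRU(n_mels, speaker_dim, batch_first=True)
                self.decoder = nn.GRU(content_dim + speaker_dim, n_mels, batch_first=True)

            def forward(self, mel_src, mel_spk):
                content, _ = self.content_enc(mel_src)          # frame-level content code
                _, spk_state = self.speaker_enc(mel_spk)        # utterance-level speaker code
                spk = spk_state[-1].unsqueeze(1).expand(-1, content.size(1), -1)
                recon, _ = self.decoder(torch.cat([content, spk], dim=-1))
                return recon

        model = DisentangledAE()
        mel_a, mel_b = torch.randn(2, 100, 80), torch.randn(2, 100, 80)
        converted = model(mel_a, mel_b)    # content of A rendered with the speaker code of B
        print(converted.shape)             # torch.Size([2, 100, 80])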

    Psychographic And Behavioral Segmentation Of Food Delivery Application Customers To Increase Intention To Use

    Dissertation presented as the partial requirement for obtaining a Master's degree in Information Management, specialization in Knowledge Management and Business Intelligence. This study presents a framework for segmenting Food Delivery Application (FDA) customers based on psychographic and behavioral variables as an alternative to existing segmentation. Customer segments are proposed by applying clustering methods to primary data from an electronic survey. Psychographic and behavioral constructs are formulated as hypotheses based on existing literature and then evaluated as segmentation variables with regard to their discriminatory power for customer segmentation. The relevant variables detected are used in the application of clustering techniques to find adequate boundaries within customer groupings for segmentation purposes. Characterization of the customer segments is performed and enriched with the implications of the findings for FDA marketing strategies. This paper contributes to theory by providing new findings on segmentation that are relevant for an online context. In addition, it contributes to practice by detailing the implications of customer segments for an online sales strategy, allowing marketing managers and FDA businesses to capitalize on this knowledge in their conversion-funnel designs.
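
    A minimal sketch of the clustering-based segmentation workflow described above: survey responses are standardised, clustered with k-means, and profiled per segment. The column names, toy data, and choice of k are assumptions, not taken from the dissertation.

        # Illustrative sketch only: k-means segmentation of toy survey data.
        import numpy as np
        import pandas as pd
        from sklearn.preprocessing import StandardScaler
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(42)
        survey = pd.DataFrame({
            "price_sensitivity": rng.integers(1, 8, 300),   # 7-point Likert items (toy data)
            "convenience_motive": rng.integers(1, 8, 300),
            "orders_per_month": rng.poisson(4, 300),
            "avg_order_value": rng.normal(15, 5, 300),
        })

        X = StandardScaler().fit_transform(survey)           # put variables on one scale
        segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

        # Segment profiles: the mean of each variable per cluster supports characterisation.
        print(survey.assign(segment=segments).groupby("segment").mean().round(2))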

    A framework for the Comparative analysis of text summarization techniques

    Dissertation presented as the partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics, specialization in Data Science. With the boom of information technology and the IoT (Internet of Things), the volume of data is increasing at an alarming rate. This information can be harnessed and, if channelled in the right direction, yields meaningful insight. The problem is that this data is not always numerical; in many cases it is entirely textual, and meaning has to be derived from it. Going through these texts manually would take hours or even days to extract concise and meaningful information. This is where the need for an automatic summarizer arises: it eases manual intervention and reduces time and cost while retaining the key information held by the texts. In recent years, new methods and approaches have been developed to do this. They are applied in many domains; for example, search engines provide snippets as document previews, while news websites produce shortened descriptions of news subjects, usually as headlines, to make browsing easier. Broadly speaking, there are two main approaches to text summarization: extractive and abstractive. Extractive summarization selects important sections of the text to form its condensed version. Abstractive summarization, by contrast, interprets and examines the text as a whole and, after discerning its meaning, generates new sentences that describe the important points in a concise way.
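
    To make the extractive approach concrete, the sketch below scores sentences by the sum of their TF-IDF weights and keeps the top-ranked ones. It stands in for the class of extractive techniques compared in the dissertation, not for any specific method it evaluates.

        # Illustrative sketch only: a naive TF-IDF extractive summarizer.
        import numpy as np
        from sklearn.feature_extraction.text import TfidfVectorizer

        def extractive_summary(text, n_sentences=2):
            sentences = [s.strip() for s in text.split(".") if s.strip()]
            tfidf = TfidfVectorizer(stop_words="english").fit_transform(sentences)
            scores = np.asarray(tfidf.sum(axis=1)).ravel()        # one score per sentence
            top = sorted(np.argsort(scores)[-n_sentences:])       # keep the original order
            return ". ".join(sentences[i] for i in top) + "."

        document = ("Text summarization condenses a document while keeping its key points. "
                    "Extractive methods select existing sentences. "
                    "Abstractive methods generate new sentences after interpreting the text. "
                    "Both families are compared in this framework.")
        print(extractive_summary(document))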

    Annals [...].

    Pedometrics: innovation in the tropics; Legacy data: how to make it useful?; Advances in soil sensing; Pedometric guidelines for systematic soil surveys. Online event. Coordinated by: Waldir de Carvalho Junior, Helena Saraiva Koenow Pinheiro, Ricardo Simão Diniz Dalmolin

    Educating Sub-Saharan Africa: Assessing Mobile Application Use in a Higher Learning Engineering Programme

    In the institution where I teach, insufficient laboratory equipment for engineering education pushed students to learn via mobile phones or devices. Using mobile technologies to learn and practice is not the issue; the more important question lies in finding out where and how students use mobile tools for learning. Through the lens of Kearney et al.’s (2012) pedagogical model, using authenticity, personalisation, and collaboration as constructs, this case study adopts a mixed-method approach to investigate the mobile learning activities of students and find out their experiences of what works and what does not. Four questions stem from the over-arching research question, ‘How do students studying at a University in Nigeria perceive mobile learning in electrical and electronic engineering education?’ The first three questions are answered using qualitative interview data analysed through thematic analysis. The fourth question investigates students’ collaborations on two mobile social networks using social network and message analysis. The study found how students’ mobile learning relates to the real-world practice of engineering, explained ways of adapting to and overcoming the mobile tools’ limitations, and described the nature of the collaborations that students naturally adopted when learning in mobile social networks. It found that mobile engineering learning can plausibly be located in an offline mobile zone. It also demonstrates that investigating the effectiveness of mobile learning in the mobile social environment is possible by examining users’ interactions. The study shows how mobile learning personalisation that leads to impactful engineering learning can be achieved. It shows how to manage most interface and technical challenges associated with mobile engineering learning and provides a new guide for educators on where and how mobile learning can be harnessed. Finally, it revealed how engineering education can be successfully implemented through mobile tools.
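
    For the social network analysis mentioned above, a small sketch of the kind of interaction analysis involved is given below: students' message replies form a directed, weighted graph whose centrality measures highlight who anchors the collaboration. The edge list is invented, and the study's own data and metrics may differ.

        # Illustrative sketch only: toy social-network analysis of student message interactions.
        import networkx as nx

        # Each edge means "student A replied to student B", weighted by message count (toy data).
        replies = [("amina", "tunde", 5), ("tunde", "amina", 3), ("chidi", "amina", 4),
                   ("amina", "chidi", 2), ("tunde", "chidi", 1), ("bola", "tunde", 6)]

        g = nx.DiGraph()
        g.add_weighted_edges_from(replies)

        # Degree and betweenness centrality indicate who receives and brokers collaboration.
        print("in-degree centrality:", nx.in_degree_centrality(g))
        print("betweenness:", nx.betweenness_centrality(g, weight="weight"))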