
    Meso-scale FDM material layout design strategies under manufacturability constraints and fracture conditions

    From the manufacturability-driven design (MDD) perspective, manufacturability of the product or system is the most important of the design requirements. In addition to ensuring that complex designs (e.g., topology optimization) are manufacturable with a given process or process family, MDD also helps mechanical designers take advantage of unique process-material effects generated during manufacturing. One of the most recognizable examples of this comes from the scanning-type family of additive manufacturing (AM) processes; the most notable and familiar member of this family is the fused deposition modeling (FDM) or fused filament fabrication (FFF) process. This process works by selectively depositing uniform, approximately isotropic beads or elements of molten thermoplastic material (typically structural engineering plastics) in a series of pre-specified traces to build each layer of the part. There are many interesting 2-D and 3-D mechanical design problems that can be explored by designing the layout of these elements. The resulting structured, hierarchical material (which is both manufacturable and customized layer-by-layer within the limits of the process and material) can be defined as a manufacturing process-driven structured material (MPDSM). This dissertation explores several practical methods for designing these element layouts for 2-D and 3-D meso-scale mechanical problems, focusing ultimately on design-for-fracture. Three different fracture conditions are explored: (1) cases where a crack must be prevented or stopped, (2) cases where a crack must be encouraged or accelerated, and (3) cases where cracks must grow in a simple pre-determined pattern. Several new design tools were developed and refined to support the design of MPDSMs under fracture conditions, including a mapping method for the FDM manufacturability constraints, three major literature reviews, the collection, organization, and analysis of several large (qualitative and quantitative) multi-scale datasets on the fracture behavior of FDM-processed materials, some new experimental equipment, and the refinement of a fast and simple g-code generator based on commercially available software. The refined design method and rules were experimentally validated using a series of case studies (involving both design and physical testing of the designs) at the end of the dissertation. Finally, a simple design guide was developed from the results of this project for practicing engineers who are not experts in advanced solid mechanics or process-tailored materials.
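As a point of reference for the element-layout idea, the sketch below is a minimal Python routine that writes G-code for a single rectilinear raster layer. The bead width, layer height, filament diameter, feed rate, and function name are illustrative assumptions; this is not the dissertation's generator.

```python
import math

def raster_layer_gcode(width, height, bead_w=0.48, layer_h=0.2,
                       filament_d=1.75, feed=1800, z=0.2):
    """Emit G-code for one rectilinear raster layer (illustrative sketch).

    The extrusion length E is estimated from the deposited bead volume
    divided by the filament cross-sectional area.
    """
    fil_area = math.pi * (filament_d / 2) ** 2
    lines = [f"G1 Z{z:.3f} F{feed}"]
    e = 0.0
    y, direction = 0.0, 1
    while y <= height:
        x0, x1 = (0.0, width) if direction > 0 else (width, 0.0)
        lines.append(f"G0 X{x0:.3f} Y{y:.3f}")            # travel to start of trace
        e += abs(x1 - x0) * bead_w * layer_h / fil_area    # bead volume -> filament length
        lines.append(f"G1 X{x1:.3f} Y{y:.3f} E{e:.5f} F{feed}")  # deposit one element
        y += bead_w
        direction *= -1
    return "\n".join(lines)

print(raster_layer_gcode(20.0, 10.0))
```

Designing an MPDSM amounts to replacing the fixed raster above with a problem-specific layout of such traces, subject to the same manufacturability constraints (bead width, minimum trace length, and so on).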

    Computed tomography-based quantification of aortic valve calcification and its association with complications after transcatheter aortic valve implantation (TAVI)

    Background: Severe aortic valve calcification (AVC) has generally been recognized as a key factor in the occurrence of adverse events after transcatheter aortic valve implantation (TAVI). To date, however, a consensus on a standardized calcium detection threshold for aortic valve calcium quantification in contrast-enhanced computed tomography angiography (CTA) is still lacking. The present thesis aimed to compare two approaches for quantifying AVC in CTA scans based on their predictive power for adverse events and survival after a TAVI procedure.
    Methods: The dataset included 198 characteristics for each of the 965 prospectively included patients who had undergone TAVI between November 2012 and December 2019 at the German Heart Center Berlin (DHZB). AVC quantification in CTA scans was performed at a fixed Hounsfield unit (HU) threshold of 850 HU (HU 850 approach) and at a patient-specific threshold, where the HU threshold was set by multiplying the mean luminal attenuation of the ascending aorta by 2 (+100 % HUAorta approach). The primary endpoint was a composite of post-TAVI outcomes (paravalvular leak ≥ mild, implant-related conduction disturbances, 30-day mortality, post-procedural stroke, annulus rupture, and device migration). The Akaike information criterion was used to select variables for the multivariable regression model, and multivariable analysis was carried out to determine the predictive power of the two approaches.
    Results: Multivariable analyses showed that the fixed 850 HU threshold (calcium volume cut-off 146 mm³) was unable to predict the composite clinical endpoint post-TAVI (OR = 1.13, 95 % CI 0.87 to 1.48, p = 0.35). In contrast, the +100 % HUAorta approach (calcium volume cut-off 1421 mm³) enabled independent prediction of the composite clinical endpoint post-TAVI (OR = 2.0, 95 % CI 1.52 to 2.64, p = 9.2×10⁻⁷). No significant difference in the Kaplan-Meier survival analysis was observed for either approach.
    Conclusions: The patient-specific calcium detection threshold (+100 % HUAorta) is more predictive of the post-TAVI adverse events included in the combined clinical endpoint than the fixed HU 850 approach. For the +100 % HUAorta approach, a calcium volume cut-off of 1421 mm³ of the aortic valve had the highest predictive value.
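The following is a minimal NumPy sketch of the two thresholding strategies compared above. The CTA array, the valve and aorta region masks, and the voxel size are illustrative placeholders, not details from the thesis.

```python
import numpy as np

def calcium_volume_mm3(cta_hu, valve_mask, threshold_hu, voxel_vol_mm3):
    """Sum the volume of valve voxels whose attenuation exceeds the threshold."""
    calcified = (cta_hu > threshold_hu) & valve_mask
    return calcified.sum() * voxel_vol_mm3

# Illustrative data: a CTA volume in Hounsfield units plus two ROI masks.
cta_hu = np.random.normal(300, 150, size=(64, 64, 64))
valve_mask = np.zeros(cta_hu.shape, dtype=bool); valve_mask[20:40, 20:40, 20:40] = True
aorta_mask = np.zeros(cta_hu.shape, dtype=bool); aorta_mask[45:60, 20:40, 20:40] = True
voxel_vol_mm3 = 0.4 * 0.4 * 0.5

# Fixed-threshold approach (HU 850).
vol_fixed = calcium_volume_mm3(cta_hu, valve_mask, 850, voxel_vol_mm3)

# Patient-specific approach (+100 % HUAorta): twice the mean luminal attenuation.
hu_aorta_threshold = 2 * cta_hu[aorta_mask].mean()
vol_patient = calcium_volume_mm3(cta_hu, valve_mask, hu_aorta_threshold, voxel_vol_mm3)

print(vol_fixed, vol_patient)
```

In this scheme, the computed calcium volumes would then be compared against the respective cut-offs (146 mm³ for HU 850, 1421 mm³ for +100 % HUAorta) to stratify patient risk.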

    Trainable Variational Quantum-Multiblock ADMM Algorithm for Generation Scheduling

    The advent of quantum computing can potentially revolutionize how complex problems are solved. This paper proposes a two-loop quantum-classical solution algorithm for generation scheduling that combines quantum computing, machine learning, and distributed optimization. The aim is to facilitate employing noisy near-term quantum machines with a limited number of qubits to solve practical power system optimization problems such as generation scheduling. The outer loop is a 3-block quantum alternating direction method of multipliers (QADMM) algorithm that decomposes the generation scheduling problem into three subproblems, including one quadratically unconstrained binary optimization (QUBO) and two non-QUBOs. The inner loop is a trainable quantum approximate optimization algorithm (T-QAOA) for solving the QUBO on a quantum computer. The proposed T-QAOA treats the interactions between the quantum and classical machines as sequential information and uses a recurrent neural network with a suitable sampling technique to estimate the variational parameters of the quantum circuit. T-QAOA determines the QUBO solution in a few quantum-learner iterations instead of the hundreds of iterations needed by a standard quantum-classical solver. The outer 3-block ADMM coordinates the QUBO and non-QUBO solutions to obtain the solution to the original problem. The conditions under which the proposed QADMM is guaranteed to converge are discussed. Two mathematical and three generation scheduling cases are studied. Analyses performed on quantum simulators and classical computers show the effectiveness of the proposed algorithm. The advantages of T-QAOA are discussed and numerically compared with QAOA, which uses a stochastic gradient descent-based optimizer.
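As a structural illustration only, here is a schematic Python skeleton of a three-block consensus-ADMM outer loop in which one block is a QUBO solved by a brute-force placeholder standing in for T-QAOA. The consensus splitting, the penalty parameter, and the quadratic form of the non-QUBO blocks are assumptions made for the sketch and are not the paper's formulation.

```python
import numpy as np
from itertools import product

def solve_qubo_brute(Q, rho, v):
    """Placeholder for T-QAOA: minimize x^T Q x + (rho/2)||x - v||^2 over x in {0,1}^n."""
    best_x, best_val = None, np.inf
    for bits in product([0, 1], repeat=Q.shape[0]):
        x = np.array(bits, dtype=float)
        val = x @ Q @ x + 0.5 * rho * np.sum((x - v) ** 2)
        if val < best_val:
            best_x, best_val = x, val
    return best_x

def solve_quadratic(P, q, rho, v):
    """Continuous block: minimize 0.5 x^T P x + q^T x + (rho/2)||x - v||^2 in closed form."""
    n = P.shape[0]
    return np.linalg.solve(P + rho * np.eye(n), rho * v - q)

def consensus_admm_3block(Q, P2, q2, P3, q3, rho=1.0, iters=50):
    n = Q.shape[0]
    z = np.zeros(n)
    u = [np.zeros(n) for _ in range(3)]
    for _ in range(iters):
        x1 = solve_qubo_brute(Q, rho, z - u[0])        # QUBO block (quantum machine in the paper)
        x2 = solve_quadratic(P2, q2, rho, z - u[1])    # non-QUBO block
        x3 = solve_quadratic(P3, q3, rho, z - u[2])    # non-QUBO block
        z = np.mean([x1 + u[0], x2 + u[1], x3 + u[2]], axis=0)  # consensus update
        for xi, ui in zip((x1, x2, x3), u):
            ui += xi - z                                # dual updates
    return z

if __name__ == "__main__":
    Q = np.array([[1.0, -2.0, 0.0], [-2.0, 1.0, 1.0], [0.0, 1.0, -1.0]])
    P2, q2 = np.eye(3), np.ones(3)
    P3, q3 = 2 * np.eye(3), -np.ones(3)
    print(consensus_admm_3block(Q, P2, q2, P3, q3))
```

The point of the sketch is the coordination pattern: the binary subproblem is isolated so that a quantum or learned solver can be swapped in, while the outer loop drives the three blocks toward agreement.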

    A Decision Support System for Economic Viability and Environmental Impact Assessment of Vertical Farms

    Vertical farming (VF) is the practice of growing crops or animals using the vertical dimension via multi-tier racks or vertically inclined surfaces. In this thesis, I focus on the emerging industry of plant-specific VF. Vertical plant farming (VPF) is a promising and relatively novel practice that can be conducted in buildings with environmental control and artificial lighting. However, the nascent sector has experienced challenges in economic viability, standardisation, and environmental sustainability. Practitioners and academics call for a comprehensive financial analysis of VPF, but efforts are stifled by a lack of valid and available data. A review of economic estimation and horticultural software identifies a need for a decision support system (DSS) that facilitates risk-empowered business planning for vertical farmers. This thesis proposes an open-source DSS framework to evaluate business sustainability through financial risk and environmental impact assessments. Data from the literature, alongside lessons learned from industry practitioners, would be centralised in the proposed DSS using imprecise data techniques. These techniques have been applied in engineering but are seldom used in financial forecasting, and they could benefit complex sectors that have only scarce data with which to predict business viability. To begin the execution of the DSS framework, VPF practitioners were interviewed using a mixed-methods approach. Learnings from over 19 shuttered and operational VPF projects provide insights into the barriers inhibiting scalability and into the associated risks, which were organised into a risk taxonomy. Labour was the most commonly reported top challenge, so research was conducted to explore lean principles for improving productivity. A probabilistic model representing a spectrum of variables and their associated uncertainty was built according to the DSS framework to evaluate the financial risk of VF projects; this enabled flexible computation without precise production or financial data and improved the accuracy of economic estimation. The model assessed two VPF cases (one in the UK and another in Japan), demonstrating the first risk and uncertainty quantification of VPF business models in the literature. The results highlighted measures to improve economic viability and assessed the viability of the UK and Japan cases. An environmental impact assessment model was also developed, allowing VPF operators to evaluate their carbon footprint relative to traditional agriculture using life-cycle assessment, and I explore strategies for net-zero-carbon production through sensitivity analysis. Renewable energies, especially solar, geothermal, and tidal power, show promise for reducing the carbon emissions of indoor VPF. Results show that renewably powered VPF can reduce carbon emissions compared with field-based agriculture when land-use change is considered. The drivers for DSS adoption were researched, showing a pathway of compliance and design thinking to overcome the ‘problem of implementation’ and enable commercialisation. Further work is suggested to standardise VF equipment, collect benchmarking data, and characterise risks; this will reduce risk and uncertainty and accelerate the sector’s emergence.
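A minimal Monte Carlo sketch of the kind of risk-quantified financial estimate such a DSS targets is shown below. The distributions, cost categories, and parameter values are invented placeholders for illustration, not figures from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # Monte Carlo samples

# Uncertain inputs expressed as distributions rather than point estimates.
annual_yield_kg = rng.triangular(40_000, 55_000, 70_000, n)   # crop output per year
price_per_kg    = rng.normal(6.0, 0.8, n)                      # wholesale price
labour_cost     = rng.normal(250_000, 30_000, n)               # largest reported challenge
energy_cost     = rng.normal(180_000, 25_000, n)
other_opex      = rng.normal(120_000, 15_000, n)
capex           = 1_500_000
discount_rate   = 0.08
years           = 10

# Net present value for each sampled scenario (same cash flow each year).
annual_cash_flow = annual_yield_kg * price_per_kg - (labour_cost + energy_cost + other_opex)
discount_factors = np.array([(1 + discount_rate) ** -t for t in range(1, years + 1)])
npv = (annual_cash_flow[:, None] * discount_factors).sum(axis=1) - capex

print(f"P(NPV < 0) = {(npv < 0).mean():.2%}")
print(f"median NPV = {np.median(npv):,.0f}")
```

Reporting the probability of a negative NPV alongside the median, rather than a single point estimate, is what distinguishes risk-empowered planning from conventional spreadsheet forecasting.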

    Learning disentangled speech representations

    A variety of informational factors are contained within the speech signal, and a single short recording of speech reveals much more than the spoken words. The best method to extract and represent informational factors from the speech signal ultimately depends on which factors are desired and how they will be used. In addition, some methods capture more than one informational factor at the same time, such as speaker identity, spoken content, and speaker prosody. The goal of this dissertation is to explore different ways to deconstruct the speech signal into abstract representations that can be learned and later reused in various speech technology tasks. This task of deconstruction, also known as disentanglement, is a form of distributed representation learning. As a general approach to disentanglement, there are guiding principles that elaborate what a learned representation should contain and how it should function. In particular, learned representations should contain all of the requisite information in a more compact manner, be interpretable, remove nuisance factors of irrelevant information, be useful in downstream tasks, and be independent of the task at hand. The learned representations should also be able to answer counter-factual questions. In some cases, learned speech representations can be re-assembled in different ways according to the requirements of downstream applications. For example, in a voice conversion task, the speech content is retained while the speaker identity is changed; in a content-privacy task, some targeted content may be concealed without affecting how surrounding words sound. While there is no single best method to disentangle all types of factors, some end-to-end approaches demonstrate a promising degree of generalization to diverse speech tasks. This thesis explores a variety of use cases for disentangled representations, including phone recognition, speaker diarization, linguistic code-switching, voice conversion, and content-based privacy masking. Speech representations can also be utilised for automatically assessing the quality and authenticity of speech, such as automatic MOS ratings or deepfake detection. The meaning of the term "disentanglement" is not well defined in previous work and has acquired several meanings depending on the domain (e.g. image vs. speech); it is sometimes used interchangeably with "factorization". This thesis proposes that disentanglement of speech is distinct, and offers a viewpoint of disentanglement that can be considered both theoretically and practically.
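A minimal PyTorch sketch of the encoder/decoder factorisation that voice conversion relies on is given below: a content encoder, a speaker encoder, and a decoder that recombines the two. The module sizes, names, and the simple GRU/linear architecture are illustrative assumptions, not the thesis's models.

```python
import torch
import torch.nn as nn

class ContentEncoder(nn.Module):
    def __init__(self, n_mels=80, dim=128):
        super().__init__()
        self.rnn = nn.GRU(n_mels, dim, batch_first=True)
    def forward(self, mel):                 # (batch, time, n_mels)
        out, _ = self.rnn(mel)
        return out                          # frame-level content representation

class SpeakerEncoder(nn.Module):
    def __init__(self, n_mels=80, dim=64):
        super().__init__()
        self.rnn = nn.GRU(n_mels, dim, batch_first=True)
    def forward(self, mel):
        _, h = self.rnn(mel)
        return h[-1]                        # utterance-level speaker embedding

class Decoder(nn.Module):
    def __init__(self, content_dim=128, speaker_dim=64, n_mels=80):
        super().__init__()
        self.proj = nn.Linear(content_dim + speaker_dim, n_mels)
    def forward(self, content, speaker):
        speaker = speaker.unsqueeze(1).expand(-1, content.size(1), -1)
        return self.proj(torch.cat([content, speaker], dim=-1))

# Voice conversion by recombination: content from utterance A, identity from speaker B.
mel_a, mel_b = torch.randn(1, 200, 80), torch.randn(1, 150, 80)
content_enc, speaker_enc, dec = ContentEncoder(), SpeakerEncoder(), Decoder()
converted = dec(content_enc(mel_a), speaker_enc(mel_b))   # (1, 200, 80) mel frames
```

The disentanglement question is whether the content branch really carries no speaker information and vice versa; the training objectives and evaluations that test this are where the approaches discussed in the thesis differ.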

    Cost-effective non-destructive testing of biomedical components fabricated using additive manufacturing

    Biocompatible titanium alloys can be used to fabricate patient-specific medical components using additive manufacturing (AM). These novel components have the potential to improve clinical outcomes in various medical scenarios. However, AM introduces stability and repeatability concerns, which are potential roadblocks for its widespread use in the medical sector. Micro-CT imaging for non-destructive testing (NDT) is an effective solution for post-manufacturing quality control of these components. Unfortunately, current micro-CT NDT scanners require expensive infrastructure and hardware, which translates into prohibitively expensive routine NDT. Furthermore, the limited dynamic range of these scanners can cause severe image artifacts that may compromise the diagnostic value of the non-destructive test. Finally, the cone-beam geometry of these scanners makes them susceptible to the adverse effects of scattered radiation, which is another source of artifacts in micro-CT imaging. In this work, we describe the design, fabrication, and implementation of a dedicated, cost-effective micro-CT scanner for NDT of AM-fabricated biomedical components. Our scanner reduces the limitations of costly image-based NDT by optimizing the scanner's geometry and the image acquisition hardware (i.e., the X-ray source and detector). Additionally, we describe two novel techniques to reduce image artifacts caused by photon starvation and scattered radiation in cone-beam micro-CT imaging. Our cost-effective scanner was designed to match the imaging requirements of medium-size titanium-alloy medical components. We optimized the image acquisition hardware by using an 80 kVp low-cost portable X-ray unit and developing a low-cost lens-coupled X-ray detector. Image artifacts caused by photon starvation were reduced by implementing dual-exposure high-dynamic-range radiography. For scatter mitigation, we describe the design, manufacturing, and testing of a large-area, highly focused, two-dimensional anti-scatter grid. Our results demonstrate that cost-effective NDT using low-cost equipment is feasible for medium-sized, titanium-alloy, AM-fabricated medical components. Our proposed high-dynamic-range strategy improved the penetration capability of an 80 kVp micro-CT imaging system by 37%, for a total X-ray path length of 19.8 mm. Finally, our novel anti-scatter grid provided a 65% improvement in CT number accuracy and a 48% improvement in low-contrast visualization. Our proposed cost-effective scanner and artifact reduction strategies have the potential to improve patient care by accelerating the widespread use of patient-specific, biocompatible, AM-manufactured medical components.
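A minimal NumPy sketch of the dual-exposure high-dynamic-range idea is shown below: a long exposure recovers photon-starved thick regions but saturates in thin regions, so saturated pixels are replaced by a rescaled short exposure. The array names, the saturation level, and the simple replacement rule are illustrative assumptions, not the fusion algorithm described in the work.

```python
import numpy as np

def hdr_fuse(proj_short, proj_long, exposure_ratio, saturation=0.95):
    """Fuse two radiographs acquired at different exposures (illustrative sketch).

    proj_* are detector images normalised to [0, 1]; exposure_ratio is the
    long/short exposure time ratio. Saturated pixels in the long exposure are
    replaced by the short exposure rescaled onto the long-exposure scale.
    """
    scaled_short = np.clip(proj_short * exposure_ratio, 0, None)
    fused = proj_long.copy()
    saturated = proj_long >= saturation
    fused[saturated] = scaled_short[saturated]
    return fused

# Illustrative use with synthetic projections.
rng = np.random.default_rng(1)
short = rng.uniform(0.0, 0.2, (512, 512))
long_ = np.clip(short * 8 + rng.normal(0, 0.01, (512, 512)), 0, 1.0)
fused = hdr_fuse(short, long_, exposure_ratio=8)
```

The fused projection extends the usable dynamic range of the detector, which is what allows a low-cost 80 kVp source to penetrate thicker titanium sections without clipping the bright, unattenuated background.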