
    A Political Theory of Engineered Systems and A Study of Engineering and Justice Workshops

    Since there are good reasons to think that some engineered systems are socially undesirable—for example, internal combustion engines that cause climate change, algorithms that are racist, and nuclear weapons that can destroy all life—there is a well-established literature that attempts to identify best practices for designing and regulating engineered systems in order to prevent harm and promote justice. Most of this literature, especially the design theory and engineering justice literature meant to guide engineers, focuses on environmental, physical, social, and mental harms such as ecosystem and bodily poisoning, racial and gender discrimination, and urban alienation. However, the literature on how engineered systems can produce political harms—harms to how we shape the way we live in community together—is not well established. The first part of this thesis contributes to identifying how particular types of engineered systems can harm a democratic politics. Building on democratic theory, the philosophy of collective harms, and design theory, it argues that engineered systems that extend in space and time beyond a certain threshold subvert the knowledge and empowerment necessary for a democratic politics. For example, the systems of global shipping and the internet that fundamentally shape our lives are so large that people can attain neither the knowledge necessary to regulate them well nor the empowerment necessary to shape them. The second part of this thesis is an empirical study of a workshop designed to encourage engineering undergraduates to understand how engineered systems can subvert a democratic politics, with the ultimate goal of supporting students in incorporating that understanding into their work. Thirty-two Dartmouth undergraduate engineering students participated in the study, half assigned to a workshop group and half to a control group. The workshop participants took a pretest; then participated in a 3-hour, semi-structured workshop with 4 participants per session (plus a discussion leader and note-taker) over lunch or dinner; and then took a posttest. The control group participants took the same pre- and post-tests but had no suggested activity in the intervening 3 hours. We find that the students who participated in workshops had a statistically significant test-score improvement compared to the control group (Brunner-Munzel test, p < .001). Using thematic analysis methods, we show that the data are consistent with the hypothesis that the workshops produced a score improvement because of their structure (small size, long duration, discussion-based, over homemade food) and content (theoretically rich, challenging). Thematic analysis also reveals workshop failures and areas for improvement (too much content for the duration, insufficient organization). The thesis concludes with a discussion of limitations and suggestions for future theoretical, empirical, and pedagogical research.
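    The group comparison above relies on the Brunner-Munzel test, a rank-based alternative to the Mann-Whitney U test that does not assume equal variances between groups. A minimal sketch of such a comparison in Python, using SciPy's brunnermunzel function with hypothetical placeholder scores rather than the study's data:

        # Hypothetical pre/post test-score gains per student; not the study's data.
        from scipy.stats import brunnermunzel

        workshop_gains = [4, 6, 5, 7, 3, 8, 5, 6, 4, 7, 6, 5, 4, 6, 7, 5]
        control_gains = [1, 0, 2, -1, 1, 0, 2, 1, 0, 1, -1, 2, 1, 0, 1, 2]

        # Rank-based two-sample test; robust to unequal variances.
        stat, p_value = brunnermunzel(workshop_gains, control_gains)
        print(f"Brunner-Munzel statistic = {stat:.3f}, p = {p_value:.4f}")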

    Graduate Catalog of Studies, 2023-2024


    Beam scanning by liquid-crystal biasing in a modified SIW structure

    A fixed-frequency beam-scanning 1D antenna based on Liquid Crystals (LCs) is designed for use in 2D scanning arrays with lateral alignment. The 2D array environment requires full decoupling of adjacent 1D antennas, which often conflicts with the LC requirement of DC biasing; the proposed design accommodates both. The LC medium is placed inside a Substrate Integrated Waveguide (SIW), modified to work as a Groove Gap Waveguide with radiating slots etched on the upper broad wall, so that it radiates as a Leaky-Wave Antenna (LWA). This allows effective application of the DC bias voltage needed for tuning the LCs. At the same time, the RF field remains laterally confined, making it possible to place several antennas in parallel and achieve 2D beam scanning. The design is validated by simulation employing the actual properties of a commercial LC medium.
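    As a rough illustration of the scanning mechanism described above (not the paper's design equations), the sketch below assumes an idealized dielectric-filled guide with TE10-like dispersion, where a leaky-wave antenna's main beam leaves broadside at sin(theta) = beta/k0; biasing the LC changes its permittivity, which shifts the phase constant beta and therefore the beam angle at a fixed frequency. The frequency, guide width, and permittivity values are hypothetical placeholders.

        import numpy as np

        c = 3e8     # speed of light, m/s
        f = 28e9    # fixed operating frequency (hypothetical), Hz
        a = 3.5e-3  # guide broad-wall width (hypothetical), m
        k0 = 2 * np.pi * f / c

        # Sweep the LC permittivity, as the DC bias voltage would.
        for eps_r in (2.5, 2.7, 2.9, 3.1):
            beta = np.sqrt(eps_r * k0**2 - (np.pi / a)**2)  # TE10 phase constant
            if beta < k0:  # fast wave: the mode radiates
                theta = np.degrees(np.arcsin(beta / k0))    # angle from broadside
                print(f"eps_r = {eps_r:.1f} -> beam at {theta:.1f} deg")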

    Ultra High Strength Steels for Roll Formed Automotive Body in White

    One of the more recent steel developments is the quenching and partitioning (Q&P) process, first proposed by Speer et al. in 2003 for developing 3rd generation advanced high-strength steel (AHSS). The Q&P process established a new route for producing martensitic steels with enhanced austenite levels, realised through controlled thermal treatments. The main objective of the so-called 3rd generation steels was to achieve properties comparable to the 2nd generation but without high alloying additions. Generally, Q&P steels have remained within lab-scale environments, with only a small number produced industrially. Q&P steels are produced by either a one-step or a two-step process, and the re-heating required by the two-step route adds complexity when heat treating the material industrially. The Q&P steels developed and tested throughout this thesis have been designed to achieve the desired microstructural evolution whilst fitting in with Tata’s continuous annealing processing line (CAPL) capabilities. The CALPHAD approach, combining thermodynamics, kinetics, and phase transformation theory via the software packages ThermoCalc and JMatPro, was successfully deployed to find novel Q&P steels. The research undertaken throughout this thesis has led to two novel Q&P steels, which can be produced on CAPL without any infrastructure changes to the line. The two novel Q&P steels show an apparent reduction in hardness mismatch, illustrated visually and numerically by nano-indentation experiments. The properties realised after Q&P heat treatments on the C-Mn-Si alloy with 0.2 wt.% C and on the C-Mn-Si alloy with a small Cr addition are superior to the commercially available QP980/1180 steels by BaoSteel. Both novel alloys had elongation and hole-expansion ratios comparable to QP1180 but are substantially stronger, with a > 320 MPa increase in tensile strength. The heat treatment is also less complex: because one-step quenching and partitioning is employed on the novel alloys, there is no requirement to re-heat the steel after quenching.
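    The fraction of martensite formed at a given quench temperature, a central quantity when designing a Q&P schedule, is commonly estimated with the Koistinen-Marburger relation. A minimal sketch with illustrative numbers; the Ms temperature and rate constant below are generic textbook values, not this thesis's alloy data:

        import math

        MS = 400.0     # martensite-start temperature, deg C (hypothetical)
        ALPHA = 0.011  # Koistinen-Marburger rate constant, 1/deg C (typical)

        def martensite_fraction(quench_temp_c):
            """Fraction of austenite transformed to martensite on quenching."""
            if quench_temp_c >= MS:
                return 0.0
            return 1.0 - math.exp(-ALPHA * (MS - quench_temp_c))

        for tq in (350, 300, 250, 200):
            fm = martensite_fraction(tq)
            print(f"quench to {tq} C -> {fm:.1%} martensite, "
                  f"{1 - fm:.1%} untransformed austenite")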

    Using machine learning to predict pathogenicity of genomic variants throughout the human genome

    More than 6,000 diseases are estimated to be caused by genomic variants. This can happen in many possible ways: a variant may stop the translation of a protein, interfere with gene regulation, or alter splicing of the transcribed mRNA into an unwanted isoform. All of these processes must be investigated in order to evaluate which variant may be causal for the deleterious phenotype. A great help in this regard are variant effect scores. Implemented as machine learning classifiers, they integrate annotations from different resources to rank genomic variants in terms of pathogenicity. Developing a variant effect score requires multiple steps: annotation of the training data, feature selection, model training, benchmarking, and finally deployment for the model's application. Here, I present a generalized workflow for this process. It makes it simple to configure how information is converted into model features, enabling rapid exploration of different annotations. The workflow further implements hyperparameter optimization, model validation, and ultimately deployment of a selected model via genome-wide scoring of genomic variants. The workflow is applied to train Combined Annotation Dependent Depletion (CADD), a variant effect model that scores SNVs and InDels genome-wide. I show that the workflow can be quickly adapted to novel annotations by porting CADD to the genome reference GRCh38. Further, I demonstrate the integration of deep neural network scores as features into a new CADD model, improving the annotation of RNA splicing events. Finally, I apply the workflow to train multiple variant effect models from training data based on variants selected by allele frequency. In conclusion, the developed workflow presents a flexible and scalable method to train variant effect scores. All software and developed scores are freely available from cadd.gs.washington.edu and cadd.bihealth.org.
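    A minimal sketch of the annotate/train/score loop such a workflow automates, here using scikit-learn. The file names and feature columns are hypothetical placeholders; CADD's real feature set and training data are far larger:

        import pandas as pd
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        # 1) Annotated training variants: one row per variant, columns are
        #    annotations, label = 1 for proxy-deleterious, 0 for proxy-neutral.
        train = pd.read_csv("annotated_training_variants.csv")
        features = ["conservation", "splice_score", "regulatory_score"]  # hypothetical
        X, y = train[features], train["label"]

        # 2) Model validation via cross-validation, then fit on the full set.
        model = LogisticRegression(max_iter=1000)
        print("CV AUC:", cross_val_score(model, X, y, scoring="roc_auc", cv=5).mean())
        model.fit(X, y)

        # 3) Deployment: score a genome-wide set of annotated variants.
        variants = pd.read_csv("genome_wide_variants.csv")
        variants["raw_score"] = model.predict_proba(variants[features])[:, 1]
        variants.to_csv("scored_variants.csv", index=False)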

    Study of soft materials, flexible electronics, and machine learning for fully portable and wireless brain-machine interfaces

    Over 300,000 individuals in the United States are afflicted with some form of limited motor function from brainstem- or spinal cord-related injury resulting in quadriplegia or some form of locked-in syndrome. Conventional brain-machine interfaces used to enable communication or movement require heavy, rigid components, uncomfortable headgear, excessive numbers of electrodes, and bulky electronics with long wires that introduce data artifacts and generally inadequate performance. Wireless, wearable electroencephalograms with dry non-invasive electrodes can record brain activity from a mobile subject while permitting unrestricted movement. Additionally, multilayer microfabricated flexible circuits, combined with a soft-materials platform, allow for imperceptible wearable data-acquisition electronics for long-term recording. This dissertation introduces new electronics and training paradigms for brain-machine interfaces to provide remedies, in the form of communication and movement, for these individuals. Here, training is optimized by generating a virtual environment in which a subject achieves immersion through a VR headset in order to train on and familiarize with the system. Advances in hardware and the implementation of convolutional neural networks allow for rapid classification and low-latency target control. Integration of materials, mechanics, circuit, and electrode design results in an optimized brain-machine interface enabling rehabilitation and overall improved quality of life.
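    A minimal sketch of a convolutional classifier for multi-channel EEG windows, of the kind used for low-latency target control. The architecture and dimensions below are hypothetical illustrations, not the dissertation's model:

        import torch
        import torch.nn as nn

        class EEGConvNet(nn.Module):
            def __init__(self, n_channels=8, n_classes=4):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv1d(n_channels, 16, kernel_size=7, padding=3),  # temporal filters
                    nn.BatchNorm1d(16),
                    nn.ReLU(),
                    nn.MaxPool1d(4),
                    nn.Conv1d(16, 32, kernel_size=5, padding=2),
                    nn.BatchNorm1d(32),
                    nn.ReLU(),
                    nn.AdaptiveAvgPool1d(1),  # collapse the time axis
                )
                self.classifier = nn.Linear(32, n_classes)

            def forward(self, x):  # x: (batch, channels, samples)
                return self.classifier(self.features(x).squeeze(-1))

        # One 1-second window at 250 Hz from 8 dry electrodes (random placeholder).
        logits = EEGConvNet()(torch.randn(1, 8, 250))
        print(logits.shape)  # torch.Size([1, 4])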

    Toward circularity : life cycle-based approach in waste management

    Our current “throwaway” lifestyle places great strain on the environment; resources that enter the economy remain for only a short period and are quickly disposed of. This dissertation aims to evaluate the economic and environmental impacts of shifting toward more circular economy (CE) practices, which advocate retaining value within the economy for as long as possible. The research was carried out by conceptualizing CE and solving real cases focusing on the product end-of-life (EoL) stage. Life cycle assessment (LCA) was the main tool used to assess the environmental impacts of different circular scenarios; it was paired with life cycle costing (LCC) to evaluate economic performance. Three cases in Finland were assessed: shifting toward source-separated biowaste collection, establishing an agricultural plastic waste recycling system, and waste-to-energy optimization. It was found that CE covers multiple aspects of the value chain; thus, its adoption can occur at any stage of the value chain, enabling various stakeholders to become more circular through different actions. The cases suggested that being more circular at the EoL stage may improve value retention through secondary material production, waste-treatment by-products, and energy recovery. Shifting toward circularity was shown to be economically and environmentally viable. The dissertation illustrated the importance of stakeholder collaboration, because a circular approach can affect all actors within the supply chain, including manufacturing, the energy sector, and society. The study showed that it is important to quantify the environmental impacts of products or services, and to date, LCA remains the most suitable tool for quantifying results and evaluating options. In addition, combining LCA with LCC provides more comprehensive results that anticipate any trade-off between environmental and economic aspects. CE must start somewhere, so let it start with organizations evaluating their environmental performance to identify better alternatives, define targets, and foster circularity in the long run.
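    A minimal sketch of pairing LCA results with LCC to compare end-of-life scenarios per functional unit (here, one tonne of waste treated). All numbers are hypothetical placeholders, not the dissertation's results:

        # scenario: (kg CO2-eq per tonne treated, EUR per tonne treated)
        scenarios = {
            "landfill": (620.0, 55.0),
            "waste-to-energy": (310.0, 70.0),
            "source-separated recycling": (180.0, 90.0),
        }

        # Rank by climate impact, but report cost alongside: a single indicator
        # hides the environment-vs-cost trade-off the combined LCA+LCC exposes.
        for name, (gwp, cost) in sorted(scenarios.items(), key=lambda kv: kv[1][0]):
            print(f"{name:28s} {gwp:7.1f} kg CO2-eq/t {cost:6.1f} EUR/t")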

    Green Carbon Footprint for Model Inference Serving via Exploiting Mixed-Quality Models and GPU Partitioning

    This paper presents a solution to the challenge of mitigating carbon emissions from large-scale high-performance computing (HPC) systems and datacenters that host machine learning (ML) inference services. ML inference is critical to modern technology products, but it is also a significant contributor to datacenter compute cycles and carbon emissions. We introduce Clover, a carbon-friendly ML inference service runtime system that balances performance, accuracy, and carbon emissions through mixed-quality models and GPU resource partitioning. Our experimental results demonstrate that Clover is effective in substantially reducing carbon emissions while maintaining high accuracy and meeting service-level agreement (SLA) targets. It is therefore a promising solution toward achieving carbon neutrality in HPC systems and datacenters.
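    The abstract does not detail Clover's policy, but the general idea of trading accuracy against carbon can be sketched as follows: serve each request with the cheapest model variant whose latency fits the SLA, degrading accuracy only when the grid's carbon intensity is high. Everything below, including the variants and the threshold, is a hypothetical illustration:

        from dataclasses import dataclass

        @dataclass
        class ModelVariant:
            name: str
            accuracy: float    # validation accuracy
            latency_ms: float  # p99 latency on one GPU partition
            energy_j: float    # energy per inference, joules

        VARIANTS = [
            ModelVariant("fp32-large", 0.82, 40.0, 9.0),
            ModelVariant("fp16-medium", 0.80, 22.0, 4.5),
            ModelVariant("int8-small", 0.77, 9.0, 1.8),
        ]

        def pick_variant(carbon_gco2_per_kwh, sla_ms):
            # Only variants that meet the latency SLA are candidates.
            feasible = [v for v in VARIANTS if v.latency_ms <= sla_ms]
            if carbon_gco2_per_kwh > 400:  # dirty grid: minimize energy
                return min(feasible, key=lambda v: v.energy_j)
            return max(feasible, key=lambda v: v.accuracy)  # clean grid: max accuracy

        print(pick_variant(carbon_gco2_per_kwh=520, sla_ms=30).name)  # int8-small
        print(pick_variant(carbon_gco2_per_kwh=120, sla_ms=30).name)  # fp16-medium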

    Application of additive technology and reverse engineering in the realization of damaged obsolete parts

    Reverse engineering (RE) aims to design a new replacement part based on an existing part. The goal is a high-quality reproduction of the physical part with the best possible mechanical characteristics, finding optimal solutions for the shape and dimensions of the part. The procedure is implemented through a series of steps: creating a digital 3D model, improving the model parameters, and realizing the product using additive technologies. In this paper, the fundamental methodologies of RE were reviewed and applied to the example of a damaged protective cover, of unknown geometry and material, essential to the function of a discontinued device with no technical documentation or spare parts. Optical scanning, 3D CAD, FEA, and additive manufacturing were used to realize the reproduced part. It was shown that by utilizing RE the life cycle of the device can be significantly extended at minimal cost.
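    A minimal sketch of the digitization step of such an RE pipeline: load an optically scanned mesh, check that it is watertight (printable), and report its dimensions before CAD remodeling. The file names are placeholders and the trimesh package is assumed:

        import trimesh

        mesh = trimesh.load("scanned_cover.stl")  # hypothetical scan export
        print("watertight:", mesh.is_watertight)  # holes must be closed before printing
        print("extents (mm):", mesh.bounding_box.extents)  # overall part dimensions
        print("volume (mm^3):", mesh.volume if mesh.is_watertight else "n/a until repaired")

        # Simple cleanup pass before exporting a model for additive manufacturing.
        trimesh.repair.fix_normals(mesh)
        mesh.export("cover_for_printing.stl")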