
    Beam scanning by liquid-crystal biasing in a modified SIW structure

    A fixed-frequency beam-scanning 1D antenna based on Liquid Crystals (LCs) is designed for application in 2D scanning with lateral alignment. The 2D array environment imposes full decoupling of adjacent 1D antennas, which often conflicts with the LC requirement of DC biasing; the proposed design accommodates both. The LC medium is placed inside a Substrate Integrated Waveguide (SIW), modified to work as a Groove Gap Waveguide and radiating as a Leaky-Wave Antenna (LWA) through slots etched in the upper broad wall. This allows effective application of the DC bias voltage needed for tuning the LCs. At the same time, the RF field remains laterally confined, making it possible to lay several antennas in parallel and achieve 2D beam scanning. The design is validated by simulation employing the actual properties of a commercial LC medium.
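    To illustrate the scanning mechanism described above, the sketch below computes the beam-pointing angle of a dielectric-filled waveguide LWA from sin(theta) = beta/k0, with the propagation constant of the TE10 mode shifting as the DC bias tunes the LC permittivity. All numbers (frequency, guide width, permittivity range) are hypothetical placeholders, not the values of the actual design.

```python
import numpy as np

C0 = 299_792_458.0  # speed of light (m/s)

def scan_angle_deg(f_hz, eps_r, a_m):
    """Beam angle from broadside for the TE10 leaky mode of a
    dielectric-filled waveguide: sin(theta) = beta/k0, with
    beta = sqrt(eps_r*k0**2 - (pi/a)**2)."""
    k0 = 2 * np.pi * f_hz / C0
    beta_sq = eps_r * k0**2 - (np.pi / a_m) ** 2
    if not 0.0 < beta_sq < k0**2:
        raise ValueError("mode is below cutoff or not a fast (radiating) wave")
    return np.degrees(np.arcsin(np.sqrt(beta_sq) / k0))

# Hypothetical 28 GHz design with a 3.2 mm guide width; the LC
# permittivity is swept between its unbiased and fully biased states.
for eps in (2.9, 3.1, 3.3):
    print(f"eps_r = {eps:.1f} -> theta = {scan_angle_deg(28e9, eps, 3.2e-3):.1f} deg")
```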

    Extending the reach of uncertainty quantification in nuclear theory

    The theory of the strong interaction, quantum chromodynamics (QCD), is unsuited to practical calculations of nuclear observables, so approximate models for nuclear interaction potentials are required. In contrast to phenomenological models, chiral effective field theories (χEFTs) of QCD grant a handle on the theoretical uncertainty arising from the truncation of the chiral expansion. Uncertainties in χEFT are preferably quantified using Bayesian inference, but quantifying reliable posterior predictive distributions for nuclear observables presents several challenges. First, χEFT is parametrized by unknown low-energy constants (LECs) whose values must be inferred from low-energy data on nuclear structure and reaction observables. There are 31 LECs at fourth order in Weinberg power counting, leading to a high-dimensional inference problem which I approach by developing an advanced sampling protocol using Hamiltonian Monte Carlo (HMC). This allows me to quantify LEC posteriors up to and including the fourth chiral order. Second, the χEFT truncation error is correlated across independent variables such as scattering energies and angles; I model these correlations using a Gaussian process. Third, the computational cost of computing few- and many-nucleon observables typically precludes their direct use in Bayesian parameter estimation, as each observable must be computed in excess of 100,000 times during HMC sampling. The one exception is nucleon-nucleon scattering observables, but even these incur a substantial computational cost in the present applications. I sidestep such issues using eigenvector-continuation emulators, which accurately mimic exact calculations while dramatically reducing the computational cost. Equipped with Bayesian posteriors for the LECs and a model for the truncation error, I explore the predictive ability of χEFT, presenting the results as the probability distributions they always were.
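    As a concrete illustration of the second point, the following minimal sketch draws correlated truncation-error curves across scattering energies from a Gaussian process, in the spirit of the correlated-error model described above. The kernel choice, hyperparameters, expansion parameter Q, and reference values are illustrative assumptions, not the quantities inferred in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def sq_exp_kernel(x, ell=40.0, cbar_sq=1.0):
    """Squared-exponential covariance over scattering energies (MeV)."""
    d = x[:, None] - x[None, :]
    return cbar_sq * np.exp(-0.5 * (d / ell) ** 2)

def truncation_error_draws(energies, y_ref, Q, order, n_draws=1000):
    """Sample truncation-error curves delta_y(E).  Summing the omitted
    orders geometrically gives a pointwise scale
    y_ref * Q**(order+1) / sqrt(1 - Q**2); correlation in E comes
    from the GP kernel."""
    scale = y_ref * Q ** (order + 1) / np.sqrt(1.0 - Q**2)
    cov = np.outer(scale, scale) * sq_exp_kernel(energies)
    return rng.multivariate_normal(np.zeros(len(energies)), cov, size=n_draws)

# Illustrative setup: a flat 10 mb reference cross section on a
# 40-300 MeV grid, expansion parameter Q = 0.3, truncation at order 4.
E = np.linspace(40.0, 300.0, 27)
draws = truncation_error_draws(E, y_ref=np.full_like(E, 10.0), Q=0.3, order=4)
print("68% band half-width (first grid points):",
      np.round(np.quantile(np.abs(draws), 0.68, axis=0)[:3], 4))
```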

    PIKS: A Technique to Identify Actionable Trends for Policy-Makers Through Open Healthcare Data

    With calls for increasing transparency, governments are releasing greater amounts of data in multiple domains, including finance, education, and healthcare. The efficient exploratory analysis of healthcare data constitutes a significant challenge. Key concerns in public health include the quick identification and analysis of trends and the detection of outliers; this allows policies to be rapidly adapted to changing circumstances. We present an efficient outlier detection technique, termed PIKS (pruned iterative k-means searchlight), which combines an iterative k-means algorithm with a pruned searchlight-based scan. We apply this technique to identify outliers in two publicly available healthcare datasets, from the New York Statewide Planning and Research Cooperative System and California's Office of Statewide Health Planning and Development. We compare our technique with three existing outlier detection techniques: auto-encoders, isolation forests, and feature bagging. We identified outliers in conditions including suicide rates, immunity disorders, social admissions, cardiomyopathies, and pregnancy in the third trimester. We demonstrate that the PIKS technique produces results consistent with other techniques such as the auto-encoder. However, the auto-encoder needs to be trained, which requires several parameters to be tuned. In comparison, the PIKS technique has far fewer parameters to tune, which makes it advantageous for fast, "out-of-the-box" data exploration. The PIKS technique is scalable and can readily ingest new datasets, so it can provide valuable, up-to-date insights to citizens, patients, and policy-makers. We have made our code open source, and with the availability of open data, other researchers can easily reproduce and extend our work. This will help promote a deeper understanding of healthcare policies and public health issues.
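    The exact pruning and searchlight rules are specified in the paper; the sketch below shows only the iterative k-means idea at the heart of PIKS, under the assumption that outliers are points far from their cluster centroid, removed before re-clustering so they stop distorting the centroids. The cluster count and distance threshold are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def iterative_kmeans_outliers(X, k=5, n_iter=3, dist_quantile=0.99):
    """Repeatedly cluster, flag the points farthest from their
    centroid, and re-cluster the remainder."""
    idx = np.arange(len(X))
    flagged = []
    for _ in range(n_iter):
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X[idx])
        d = np.linalg.norm(X[idx] - km.cluster_centers_[km.labels_], axis=1)
        cut = np.quantile(d, dist_quantile)
        out = idx[d > cut]
        if out.size == 0:
            break
        flagged.extend(out.tolist())
        idx = idx[d <= cut]
    return np.array(flagged)

# Toy demo with one planted outlier row.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
X[0] = 12.0
print(iterative_kmeans_outliers(X))  # index 0 should appear among the flags
```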

    Study of the behavior of a thermoplastic injection mold and prediction of fatigue failure with numerical simulation

    Doctoral thesis in Mechanical Engineering (original abstract in Portuguese). The objective of this work is to create a methodology for analyzing the fatigue resistance of thermoplastic injection molds: a methodology capable of satisfying a market that demands shorter delivery times and lower costs for injection molds without compromising their reliability. To develop this methodology, digital models were used. With these models, several iterations can be executed without the costs of a physical model. Besides their lower cost, digital models also make it possible to understand the behavior of each mold during the design phase. With the increasing complexity of injected components, the study of fatigue resistance is becoming ever more important. This work presents the precautions to take when preparing the digital models in order to obtain reliable results. In developing this methodology, two numerical simulation packages were used to generate the digital models: one dedicated to the rheological study of thermoplastic parts and the other to the structural behavior of injection molds. Running numerical simulations requires a good characterization of the materials used. In the case of thermoplastics, manufacturers maintain large databases with the information needed for numerical simulations. For structural simulations, however, manufacturers tend to provide only monotonic curve data, which give no information about fatigue behavior. Therefore, this work studied empirical models suited to the steels used in injection molds, from which the S-N and e-N curves can be generated. To evaluate which empirical model best fits this application, experimental tests were performed on specimens made of EN 1.2311 steel, and from these tests the most conservative empirical model was chosen. Based on the chosen model, an application was developed that generates the S-N and e-N curves from the information provided by the steel mill. Besides material characterization, it is also important that the loading conditions of the structural numerical model be as close as possible to what will occur in the physical mold. Since these loads can be predicted from the rheological numerical model, a bridge between the two numerical models is essential. Therefore, an application was built that converts the data generated by the commercial software Moldflow into files readable by commercial structural simulation software. Using this application for data conversion, simulations were performed and compared with the corresponding physical models. It was found that the mold behavior can be replicated in digital models, although the digital models of the injection molds studied tended to give conservative results compared with the physical models. Finally, an application was developed that uses data calculated by commercial structural simulation software to determine the fatigue resistance of molds, taking into account the validated model for generating the material fatigue curves. The fatigue calculation models in the application are based on the Palmgren-Miner rule for determining the number of cycles until crack nucleation. The alternating stresses were calculated with two methods: the octahedral shear stress criterion and the Sines method. To test the application, five molds that had presented fatigue failures were chosen, and the methodology proposed in this work was applied to determine their fatigue resistance. This showed that the methodology is able to predict the zones where the failures occurred, as well as other zones with a high probability of crack nucleation. In summary, this work produced a methodology and supporting tools for the fatigue analysis of molds, so that mold designers can obtain a sound, scientifically based estimate of the fatigue resistance of injection molds while they are still being designed.
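    Since the application's life estimates rest on the Palmgren-Miner rule, a minimal sketch of that damage accumulation is given below, paired with a Basquin-type S-N curve. The curve parameters and the load spectrum are placeholders, not the EN 1.2311 data fitted in the thesis.

```python
import numpy as np

def cycles_to_failure(stress_amp_mpa, sigma_f=900.0, b=-0.09):
    """Basquin-type S-N curve sigma_a = sigma_f * (2N)**b, solved for N.
    sigma_f (MPa) and b are placeholder values."""
    return 0.5 * (stress_amp_mpa / sigma_f) ** (1.0 / b)

def miner_damage(stress_amps, counts):
    """Palmgren-Miner rule: D = sum(n_i / N_i); crack nucleation is
    predicted when D reaches 1."""
    return sum(n / cycles_to_failure(s) for s, n in zip(stress_amps, counts))

# Hypothetical load spectrum (stress amplitude in MPa, applied cycles).
D = miner_damage([420.0, 310.0, 250.0], [2_000, 15_000, 80_000])
print(f"accumulated damage D = {D:.2f} -> "
      f"{'nucleation predicted' if D >= 1 else 'safe'}")
```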

    DataComp: In search of the next generation of multimodal datasets

    Multimodal datasets are a critical component in recent breakthroughs such as Stable Diffusion and GPT-4, yet their design does not receive the same research attention as model architectures or training algorithms. To address this shortcoming in the ML ecosystem, we introduce DataComp, a testbed for dataset experiments centered around a new candidate pool of 12.8 billion image-text pairs from Common Crawl. Participants in our benchmark design new filtering techniques or curate new data sources and then evaluate their new dataset by running our standardized CLIP training code and testing the resulting model on 38 downstream test sets. Our benchmark consists of multiple compute scales spanning four orders of magnitude, which enables the study of scaling trends and makes the benchmark accessible to researchers with varying resources. Our baseline experiments show that the DataComp workflow leads to better training sets. In particular, our best baseline, DataComp-1B, enables training a CLIP ViT-L/14 from scratch to 79.2% zero-shot accuracy on ImageNet, outperforming OpenAI's CLIP ViT-L/14 by 3.7 percentage points while using the same training procedure and compute. We release DataComp and all accompanying code at www.datacomp.ai.
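    CLIP-score filtering is one of the baseline filtering families evaluated in the benchmark: rank candidate pairs by the cosine similarity of their image and text embeddings and keep the best-scoring fraction. The sketch below assumes precomputed embeddings and an illustrative keep fraction; it is not the paper's tuned configuration.

```python
import numpy as np

def clip_score_filter(img_emb, txt_emb, keep_frac=0.3):
    """Keep the top keep_frac of image-text pairs by cosine similarity
    of their (precomputed) CLIP embeddings."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    scores = np.sum(img * txt, axis=1)  # cosine similarity per pair
    cutoff = np.quantile(scores, 1.0 - keep_frac)
    return np.flatnonzero(scores >= cutoff), scores

# Toy demo with random vectors standing in for real CLIP features.
rng = np.random.default_rng(0)
kept, scores = clip_score_filter(rng.normal(size=(10_000, 512)),
                                 rng.normal(size=(10_000, 512)))
print(f"kept {kept.size} of 10000 pairs")
```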

    Prediction & Active Control of Multi-Rotor Noise

    Significant developments have been made in the design and implementation of Advanced Air Mobility Vehicles (AAMVs). However, wider application in urban areas requires addressing several challenges, such as safety and quietness. These vehicles differ from conventional helicopters in that they operate at relatively lower Reynolds numbers. More importantly, they operate with multiple rotors, which can pose issues both aerodynamically and acoustically. The aim of this research is first to investigate the various noise sources in multi-rotor systems. High-fidelity simulations of two in-line counter-rotating propellers in hover and in forward flight are performed. Near-field flow and acoustic properties were resolved using a hybrid LES-unsteady RANS approach, and far-field sound predictions were performed using the Ffowcs Williams-Hawkings (FWH) formulation. The two-propeller results in hover are compared with those of a single propeller. This enables us to identify the aerodynamic changes resulting from the proximity of the two propellers and to understand the mechanisms behind the changes in the radiated sound. It was discovered that there is a dip in thrust due to the relative proximity of the rotors. Owing to this, there is also some acoustic banding above the rotors, mainly because they operate at the same rotational rate. We then considered the forward-flight case and compared it with the corresponding hovering case, which enabled us to identify the aerodynamic changes resulting from the incoming stream. By examining the near acoustic field, the far-field spectra, and the Spectral Proper Orthogonal Decomposition, and by conducting periodic averaging, we identified the sources of the changes to the observed tonal and broadband noise (BBN). The convection of the oncoming flow was seen to partially explain the observed enhancement in the tonal noise and BBN compared to the hovering case. High-fidelity methods are critical for predicting the full spectrum of rotor acoustics, but they can be prohibitively expensive. We therefore investigate the feasibility of reduction methods, namely Proper Orthogonal Decomposition (POD) and Dynamic Mode Decomposition (DMD), for reducing data obtained via the hybrid LES-unsteady RANS (HLES) approach for subsequent use in computing additional quantities. Specifically, we investigate how accurately reduced models of the high-fidelity computations can predict the far-field noise. It was found that POD could accurately reconstruct the parameters of interest with 15-40% of the total mode energies, whereas DMD could reconstruct primitive parameters such as velocity and pressure only loosely. A rank-truncation convergence criterion > 99.8% was needed for better performance of the DMD algorithm. In the far-field spectra, DMD could predict only the tonal content in the low-to-mid frequencies, while POD could reproduce all frequencies of interest. Lastly, we develop an active rotor noise control technology to reduce the in-plane thickness noise associated with multi-rotor AAMVs. An actuation signal is determined via the FWH formula. Two in-line rotors are considered, and we show that the FWH-determined actuation signal can produce perfect cancellation at a point target. However, the practical need is to achieve noise reduction over an azimuthal zone, not just a single point. To achieve this zonal noise reduction, an optimization technique is developed to determine the required actuation signal produced by the on-blade distribution of embedded actuators on the two rotors. For the specific geometry considered here, this produced about 9 dB reduction in the in-plane thickness noise during forward flight of the two rotors. We further developed a technology that replaces the single point actuator on each blade with a distributed micro-actuator system, achieving the same noise reduction goal with significantly reduced loading amplitudes per actuator. Overall, this research deepens the knowledge base of multi-rotor interaction. We utilize several techniques for extracting flow and acoustic features that help in understanding the dynamics of such systems, and we provide a practical approach to active rotor noise control without a performance penalty to the rotor system.
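    The POD reduction used above can be summarized in a few lines: a snapshot matrix is decomposed with a thin SVD and truncated at a cumulative-energy target, analogous to the > 99.8% rank-truncation criterion mentioned for DMD. The sketch below uses a synthetic traveling wave rather than HLES data.

```python
import numpy as np

def pod_reconstruct(snapshots, energy=0.998):
    """Snapshot POD via thin SVD: keep the leading modes whose
    cumulative squared-singular-value energy reaches the target,
    then reconstruct the field from that reduced basis."""
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
    r = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), energy)) + 1
    recon = mean + U[:, :r] @ (np.diag(s[:r]) @ Vt[:r])
    return recon, r

# Toy snapshot matrix: 2000 spatial points x 200 time steps of a
# traveling wave plus weak noise (rank-2 signal).
x = np.linspace(0, 2 * np.pi, 2000)[:, None]
t = np.linspace(0, 10, 200)[None, :]
Q = np.sin(3 * x - 2 * t) + 0.01 * np.random.default_rng(0).normal(size=(2000, 200))
recon, r = pod_reconstruct(Q)
print(f"kept {r} modes; relative error "
      f"{np.linalg.norm(Q - recon) / np.linalg.norm(Q):.2e}")
```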

    Land Use and Land Cover Mapping in a Changing World

    It is increasingly recognized that land use and land cover changes driven by anthropogenic pressures are impacting terrestrial and aquatic ecosystems and their services, human society, and human livelihoods and well-being. This Special Issue contains 12 original papers covering a range of issues related to land use and land use change in different parts of the world, with the purpose of providing a forum to exchange ideas and report progress in related areas. Research topics include land use targets, dynamic modelling and mapping using satellite images, pressures from energy production, deforestation, impacts on ecosystem services, aboveground biomass evaluation, and investigations on libraries of legends and classification systems.

    Electrical and Optical Modeling of Thin-Film Photovoltaic Modules

    Numerous scientific studies have established that the Earth is already subject to climate change, and humanity must act in unison to avert the worst catastrophe scenarios. A promising approach, if not the most promising of all, to meeting this greatest challenge in human history is to satisfy humanity's hunger for energy by generating renewable and inexhaustible energy. Photovoltaic (PV) technology is a strong contender to become the most powerful renewable energy source and, owing to its direct conversion of sunlight and its scalability in the form of large-area solar modules, already plays a major role in renewable energy generation. Within the PV sector, solar modules based on silicon wafers are currently the dominant technology. Emerging PV technologies such as thin-film technology, however, offer advantageous properties such as a very small carbon dioxide (CO2) footprint, a short energy payback time, and the potential for low-cost monolithic mass production, although the latter is not yet fully mature. To develop thin-film technology toward broad market readiness, numerical simulation is a key pillar of scientific understanding and technological optimization. While the traditional simulation literature often addresses material-specific challenges, this work focuses on industry-oriented challenges at the module level, without modifying the underlying material parameters. To build a comprehensive digital model of a solar module, several simulation approaches from different physical domains are combined. Electrical effects, including the spatial voltage variation within the module, are captured with a finite element method (FEM) that solves the spatially discretized Poisson equation. Optical effects are treated with a generalized transfer matrix method (TMM). All simulation methods in this work were programmed from scratch to allow all simulation levels to be coupled with the highest possible degree of customization. The simulations and the correctness of the parameters are verified against external quantum efficiency (EQE) measurements, experimental reflection data, and measured current-voltage (I-V) characteristics. The core of this work is a holistic simulation methodology at the module level. It bridges the gap from material-level simulation, through the calculation of laboratory efficiencies, to the determination of module performance in the field under the influence of numerous environmental factors. By linking cell simulation and system design in this way, the outdoor behavior of solar modules can be predicted from laboratory properties alone. Even back-calculating material parameters from experimental measurements is possible with the reverse engineering fitting (REF) procedure developed in this work. The numerical framework developed here serves several applications. First, combining electrical and optical simulations enables holistic top-down loss analyses, providing a scientific classification and a quantitative comparison of all power-loss mechanisms at a glance and directing future research and development toward the technological weak points of solar modules. Furthermore, combining electrics and optics makes it possible to detect losses that arise from the nonlinear interplay of these two domains and can be traced back to the spatial voltage distribution in the solar module. This work also applies the developed numerical models to optimization problems carried out on digital models of real solar modules. Questions that arise frequently in solar module development include the layer thickness of the front transparent conductive oxide (TCO) and the width of monolithically interconnected cells. Determining the optimum of these multi-dimensional trade-offs between optical transparency, electrical conductivity, and the geometrically inactive area between the individual cells is a core feature of this work's methodology. The FEM approach makes it possible to account for all mutual interactions across the different physical domains and to find a holistically optimized module design. Topologically more complex problems, such as finding a suitable design for the metallization grid, can also be solved on the basis of the simulation using topology optimization (TO). In this work, the TO procedure was applied for the first time to monolithically interconnected cells. Moreover, it was shown that both simple optimizations of the TCO layer thickness and topology optimizations depend strongly on the prevailing illumination conditions. Optimizing for the annual energy yield rather than the laboratory efficiency is therefore far more meaningful for industry-oriented applications, since mean annual irradiation conditions deviate considerably from laboratory conditions. Using this yield optimization, a performance gain of more than 1% in yield was computed for the copper indium gallium diselenide, CuIn1-xGaxSe2 (CIGS), technology at some geographic locations, together with material savings of up to 50% for the metallization and TCO layers. With the numerical simulations of this work, any conceivable technological improvement at the module level can be introduced into the model. In this way, the current technological limit for CIGS thin-film solar modules was calculated. Under the boundary conditions of currently available materials, technology and manufacturing tolerances, and the best CIGS material published in the literature to date, a theoretical maximum efficiency of 24% results at the module level; the best module published so far under these restrictions has an efficiency of 19.2% [1]. If the recombination rate of the CIGS absorber improves to a level comparable with that of gallium arsenide (GaAs), the efficiency limit rises to about 28%. For an ideal CIGS absorber without intrinsic recombination losses, this work calculates a maximum efficiency ceiling of 29%.
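    The optical layer of the module model is a transfer-matrix method, and its core fits in a few lines: multiply the 2x2 characteristic matrix of each thin film and read the stack reflectance off the result. The sketch below is a minimal normal-incidence TMM with a hypothetical TCO/buffer/absorber stack; the refractive indices and thicknesses are illustrative, not the dissertation's material data.

```python
import numpy as np

def stack_reflectance(n_layers, d_layers, n_inc, n_sub, wl):
    """Normal-incidence transfer-matrix method: multiply each layer's
    2x2 characteristic matrix, then form the reflectance of the full
    stack.  Complex refractive indices; d and wl in the same unit."""
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        delta = 2 * np.pi * n * d / wl  # phase thickness of the layer
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B = M[0, 0] + M[0, 1] * n_sub
    C = M[1, 0] + M[1, 1] * n_sub
    r = (n_inc * B - C) / (n_inc * B + C)
    return abs(r) ** 2

# Illustrative stack: air / TCO (n=1.9, 120 nm) / buffer (n=2.4, 50 nm)
# / absorber half-space (n = 2.9 + 0.1j), evaluated at 550 nm.
R = stack_reflectance([1.9, 2.4], [120e-9, 50e-9], 1.0, 2.9 + 0.1j, 550e-9)
print(f"reflectance at 550 nm: {R:.3f}")
```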

    1-D broadside-radiating leaky-wave antenna based on a numerically synthesized impedance surface

    A newly developed deterministic numerical technique for the automated design of metasurface antennas is applied here for the first time to the design of a 1-D printed Leaky-Wave Antenna (LWA) for broadside radiation. The surface impedance synthesis process does not require any a priori knowledge of the impedance pattern; it starts from a mask constraint on the desired far field and practical bounds on the unit-cell impedance values. The designed reactance surface for broadside radiation exhibits a non-conventional pattern, which highlights the merit of an automated design process for a problem well known to be challenging for analytical methods. The antenna is physically implemented with an array of metal strips with varying gap widths, and simulation results show very good agreement with the predicted performance.
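    For contrast with the numerical synthesis above, the classical analytical route models the reactance as a sinusoidal modulation X(x) = X0*(1 + M*cos(2*pi*x/p)) and points the n = -1 space harmonic via beta - 2*pi/p; broadside is the delicate limit where this difference vanishes. The sketch below evaluates that textbook dispersion relation with illustrative numbers (frequency, reactance, period) that are not those of the designed antenna.

```python
import numpy as np

C0 = 299_792_458.0   # speed of light (m/s)
ETA0 = 376.73        # free-space wave impedance (ohm)

def beam_angle_deg(f_hz, X0, period_m):
    """Pointing angle of the n = -1 space harmonic of a sinusoidally
    modulated reactance surface: the unmodulated surface wave has
    beta = k0*sqrt(1 + (X0/eta0)**2) and the modulation shifts it
    by -2*pi/p."""
    k0 = 2 * np.pi * f_hz / C0
    beta = k0 * np.sqrt(1.0 + (X0 / ETA0) ** 2)
    return np.degrees(np.arcsin((beta - 2 * np.pi / period_m) / k0))

# Illustrative 17 GHz design with average reactance X0 = 0.8*eta0.
f, X0 = 17e9, 0.8 * ETA0
k0 = 2 * np.pi * f / C0
p_bs = 2 * np.pi / (k0 * np.sqrt(1 + 0.8**2))  # period giving broadside
print(f"broadside period: {p_bs*1e3:.2f} mm")
print(f"5% shorter period -> {beam_angle_deg(f, X0, 0.95 * p_bs):+.1f} deg")
```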