6,590 research outputs found

    The Development and Performance of the First BICEP Array Receiver at 30 and 40 GHz for Measuring the Polarized Synchrotron Foreground

    Get PDF
    The existence of the CMB marks a major success of the Lambda cold dark matter standard model, which describes the universe's evolution with six free parameters. Inflationary theory was added to the picture in the 1980s to explain the initial conditions of the universe. Scalar perturbations from inflation seeded the formation of large-scale structure and produced the curl-free E-mode polarization pattern in the CMB. Tensor fluctuations, on the other hand, sourced primordial gravitational waves (PGW), which could leave a unique imprint in the CMB polarization: the gradient-free B-mode pattern. The amplitude of the B modes is directly related to the tensor-to-scalar ratio r of the primordial fluctuations, which indicates the energy scale of inflation. A detection of the primordial B modes would be strong supporting evidence for inflation and would let us study physics at energy scales far beyond what can ever be accessed in laboratory experiments on Earth. Currently, the main challenge for B-mode experiments is to separate the primordial B modes from those sourced by matter between us and the last scattering surface: the galactic foregrounds and the gravitational lensing effect. The two most important foregrounds are thermal dust and synchrotron, which have very different spectral properties from the CMB. Thus the key to foreground cleaning is high-sensitivity data at multiple frequency bands and accurate modeling of the foregrounds in data analyses and simulations. In this dissertation, I present my work on interstellar medium (ISM) and dust property studies, which enriched our understanding of the foregrounds. The BICEP/Keck (BK) program comprises a series of polarization-sensitive microwave telescopes targeting degree-scale B modes from the early universe. The latest publication from the collaboration, with data taken through 2018, reported a tensor-to-scalar ratio r_0.05 < 0.036 at 95% C.L., providing the tightest constraint on the primordial tensor mode. BICEP Array is the latest generation of this series of experiments. Its final configuration has four BICEP3-class receivers spanning six frequency bands, aiming to achieve σ(r) ≲ 0.003. The first BICEP Array receiver operates at 30 and 40 GHz and constrains the synchrotron foreground. In this dissertation, I cover the development of this new receiver, focusing on the design and performance of the detectors, and report on the characterization and diagnostic tests of the receiver during its first few observing seasons.
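
    For readers unfamiliar with the notation, the subscript in r_0.05 marks the pivot scale at which the tensor-to-scalar ratio is evaluated; in the standard convention,

        r_{0.05} \equiv \left. \frac{A_t(k)}{A_s(k)} \right|_{k = 0.05\,\mathrm{Mpc}^{-1}},

    where A_t and A_s are the amplitudes of the primordial tensor and scalar power spectra, so the quoted limit bounds the tensor amplitude to less than 3.6% of the scalar amplitude at that scale (95% C.L.).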

    Measurement of telescope transmission using a Collimated Beam Projector

    Full text link
    With the increasingly large number of type Ia supernovae being detected by current-generation survey telescopes, and even more expected with the upcoming Rubin Observatory Legacy Survey of Space and Time, the precision of cosmological measurements will become limited by systematic uncertainties in flux calibration rather than statistical noise. One major source of systematic error in determining SNe Ia color evolution (needed for distance estimation) is uncertainty in telescope transmission, both within and between surveys. We introduce here the Collimated Beam Projector (CBP), which measures telescope transmission using collimated light. The collimated beam more closely mimics a stellar wavefront compared with flat-field-based instruments, allowing for more precise handling of systematic errors such as those from ghosting and filter angle-of-incidence dependence. As a proof of concept, we present CBP measurements of the StarDICE prototype telescope, achieving a standard (1 sigma) uncertainty of 3% on average over the full wavelength range measured with a single beam illumination.
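
    For illustration only (this is not the StarDICE/CBP pipeline, and all array names are hypothetical), a minimal sketch of how a per-wavelength transmission could be estimated as the ratio of charge collected by the telescope camera to charge recorded by a calibrated monitor photodiode, with first-order error propagation:

        import numpy as np

        # Hypothetical per-wavelength charges (arbitrary units) and their uncertainties.
        wavelength_nm = np.arange(400.0, 1001.0, 10.0)
        rng = np.random.default_rng(0)
        q_telescope = rng.normal(1.0e5, 3.0e2, wavelength_nm.size)
        q_monitor = np.full(wavelength_nm.size, 2.0e5)
        sigma_telescope = np.full(wavelength_nm.size, 3.0e2)
        sigma_monitor = np.full(wavelength_nm.size, 4.0e2)

        # Relative throughput as a charge ratio; propagate the ratio's uncertainty.
        throughput = q_telescope / q_monitor
        rel_err = np.sqrt((sigma_telescope / q_telescope) ** 2
                          + (sigma_monitor / q_monitor) ** 2)
        sigma_throughput = throughput * rel_err

        for wl, t, s in zip(wavelength_nm[:3], throughput[:3], sigma_throughput[:3]):
            print(f"{wl:.0f} nm: T = {t:.4f} +/- {s:.4f}")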

    Predictive Maintenance of Critical Equipment for Floating Liquefied Natural Gas Liquefaction Process

    Get PDF
    Meeting global energy demand is a massive challenge, especially given the push towards sustainable and cleaner energy. Natural gas is viewed as a bridge fuel to renewable energy, and LNG, as a processed form of natural gas, is the fastest-growing and cleanest form of fossil fuel. The recent, unprecedented increase in LNG demand has pushed its exploration and processing offshore as Floating LNG (FLNG). Offshore topside gas processing and liquefaction have been identified as among the great challenges of FLNG. Maintaining topside liquefaction assets such as gas turbines is critical to the profitability, reliability, and availability of the process facilities. Given the shortcomings of the widely used reactive and preventive time-based maintenance approaches in meeting the reliability and availability requirements of oil and gas operators, this thesis presents a framework driven by AI-based learning approaches for predictive maintenance. The framework aims to leverage the value of condition-based maintenance (CBM) to minimise failures and downtime of critical FLNG equipment (aeroderivative gas turbines). In this study, gas turbine thermodynamics is introduced, along with factors affecting gas turbine modelling. Important considerations in modelling gas turbine systems, such as modelling objectives, methods, and approaches, were investigated; these provide the basis and mathematical background for developing a simulated gas turbine model. The behaviour of a simple-cycle heavy-duty gas turbine was simulated using thermodynamic laws and operational data based on the Rowen model. A Simulink model was created from experimental data based on Rowen's model, aimed at exploring the transient behaviour of an industrial gas turbine. The results show that the Simulink model can capture the nonlinear dynamics of the gas turbine system, although its use in further condition-monitoring studies is constrained by the lack of some suitable, relevant, correlated features required by the model. AI-based models were found to perform well in predicting gas turbine failures. These capabilities were investigated in this thesis and validated using experimental data obtained from a gas turbine engine facility. Because the dynamic behaviour of gas turbines changes when they are exposed to different varieties of fuel, diagnostic AI models were developed to diagnose gas turbine engine failures associated with exposure to various types of fuel. Principal Component Analysis (PCA) was harnessed to reduce the dimensionality of the dataset and extract good features for developing the diagnostic models. Signal-processing techniques (time-domain, frequency-domain, and time-frequency-domain) were also used as feature-extraction tools; they added significantly more correlated features to the dataset and influenced the prediction results. Signal processing played a vital role in extracting good features for the diagnostic models when compared with PCA. The overall results obtained from both the PCA-based and signal-processing-based models demonstrate the capability of neural-network-based models in predicting gas turbine failures. Furthermore, a deep-learning-based LSTM model was developed, which extracts features from the time-series dataset directly and hence does not require any feature-extraction tool. The LSTM model achieved the highest performance and prediction accuracy, compared with both the PCA-based and signal-processing-based models. In summary, this thesis concludes that, although the gas turbine Simulink model could not be fully integrated into gas turbine condition-monitoring studies, data-driven models have shown strong potential and excellent performance for gas turbine CBM diagnostics. The models developed in this thesis can be used for design and manufacturing purposes for gas turbines applied to FLNG, especially for condition monitoring and fault detection. The results provide valuable understanding and helpful guidance for researchers and practitioners implementing robust predictive-maintenance models that will enhance the reliability and availability of critical FLNG equipment. Petroleum Technology Development Fund (PTDF), Nigeria
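
    As an illustration of the PCA-plus-neural-network diagnostic pipeline described above (a minimal sketch on synthetic data, not the thesis code; the class labels and channel counts are hypothetical):

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        # Synthetic stand-in for gas-turbine sensor data: 1000 samples, 20 channels,
        # each labelled with one of three hypothetical fault classes.
        rng = np.random.default_rng(42)
        X = rng.normal(size=(1000, 20))
        y = rng.integers(0, 3, size=1000)
        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

        # Standardise, reduce dimensionality with PCA, then classify with a small neural network.
        model = make_pipeline(
            StandardScaler(),
            PCA(n_components=5),
            MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
        )
        model.fit(X_train, y_train)
        print("held-out accuracy:", model.score(X_test, y_test))

    On real condition-monitoring data the held-out accuracy would reflect genuine fault structure; here the labels are random, so the score only demonstrates that the pipeline runs end to end.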

    Cost-effective non-destructive testing of biomedical components fabricated using additive manufacturing

    Get PDF
    Biocompatible titanium alloys can be used to fabricate patient-specific medical components using additive manufacturing (AM). These novel components have the potential to improve clinical outcomes in various medical scenarios. However, AM introduces stability and repeatability concerns, which are potential roadblocks to its widespread use in the medical sector. Micro-CT imaging for non-destructive testing (NDT) is an effective solution for post-manufacturing quality control of these components. Unfortunately, current micro-CT NDT scanners require expensive infrastructure and hardware, which translates into prohibitively expensive routine NDT. Furthermore, the limited dynamic range of these scanners can cause severe image artifacts that may compromise the diagnostic value of the non-destructive test. Finally, the cone-beam geometry of these scanners makes them susceptible to the adverse effects of scattered radiation, which is another source of artifacts in micro-CT imaging. In this work, we describe the design, fabrication, and implementation of a dedicated, cost-effective micro-CT scanner for NDT of AM-fabricated biomedical components. Our scanner reduces the limitations of costly image-based NDT by optimizing the scanner's geometry and the image acquisition hardware (i.e., X-ray source and detector). Additionally, we describe two novel techniques to reduce image artifacts caused by photon starvation and scattered radiation in cone-beam micro-CT imaging. Our cost-effective scanner was designed to match the image requirements of medium-sized titanium-alloy medical components. We optimized the image acquisition hardware by using an 80 kVp low-cost portable X-ray unit and developing a low-cost lens-coupled X-ray detector. Image artifacts caused by photon starvation were reduced by implementing dual-exposure high-dynamic-range radiography. For scatter mitigation, we describe the design, manufacturing, and testing of a large-area, highly focused, two-dimensional anti-scatter grid. Our results demonstrate that cost-effective NDT using low-cost equipment is feasible for medium-sized, titanium-alloy, AM-fabricated medical components. Our proposed high-dynamic-range strategy improved the penetration capability of an 80 kVp micro-CT imaging system by 37% for a total X-ray path length of 19.8 mm. Finally, our novel anti-scatter grid provided a 65% improvement in CT number accuracy and a 48% improvement in low-contrast visualization. Our proposed cost-effective scanner and artifact reduction strategies have the potential to improve patient care by accelerating the widespread use of patient-specific, biocompatible, AM-fabricated medical components.
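
    One common way to realise dual-exposure high-dynamic-range radiography (a generic sketch, not necessarily the authors' implementation; the frame values and exposure ratio are hypothetical) is to use the long exposure where the detector is not saturated and fall back to the rescaled short exposure elsewhere:

        import numpy as np

        # Hypothetical detector frames of the same view (arbitrary digital units).
        rng = np.random.default_rng(1)
        short_exposure = rng.uniform(100.0, 4000.0, size=(256, 256))
        long_exposure = np.clip(short_exposure * 4.0, None, 16383.0)  # clips in thin regions
        saturation = 16000.0
        exposure_ratio = 4.0  # long / short exposure time

        # Use the long exposure where valid (better SNR); elsewhere substitute the
        # short exposure rescaled to the long-exposure time.
        hdr = np.where(long_exposure < saturation,
                       long_exposure,
                       short_exposure * exposure_ratio)
        print("dynamic range (max/min):", hdr.max() / hdr.min())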

    Growth trends and site productivity in boreal forests under management and environmental change: insights from long-term surveys and experiments in Sweden

    Get PDF
    Under a changing climate, up-to-date information on tree and stand growth is indispensable for assessing the carbon sink strength of boreal forests. Important questions regarding tree growth are to what extent management and environmental change have influenced it, and how it might respond in the future. In this thesis, results from five studies (Papers I-V) are presented, covering growth trends, site productivity, heterogeneity in managed forests, and the potential for carbon storage in forests and harvested wood products under differing management strategies. The studies were based on observations from national forest inventories and long-term experiments in Sweden. The annual height growth of Scots pine (Pinus sylvestris) and Norway spruce (Picea abies) has increased, especially after the millennium shift, while basal area growth has remained stable over the last 40 years (Papers I-II). A positive response of height growth to increasing temperature was observed. The results generally imply changing growing conditions and stand composition. In Paper III, the yield capacity of conifers was analysed and compared with existing functions. The results show a bias in current site productivity estimates and that the new functions give better predictions of yield capacity in Sweden. In Paper IV, the variability in stand composition was modelled as indices of heterogeneity to calibrate the relationship between basal area and leaf area index in managed stands of Norway spruce and Scots pine. The results show that the effects of stand structural heterogeneity are of such a magnitude that they cannot be neglected in the implementation of hybrid growth models, especially those based on light interception and light-use efficiency. In the long term, the net climate benefit of Swedish forests may be maximized through active forest management with high harvest levels and efficient product utilization, rather than by increasing carbon storage in standing forests through land set-asides for nature conservation (Paper V). In conclusion, this thesis offers support for the development of evidence-based policy recommendations for site-adapted and sustainable management of Swedish forests in a changing climate.
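
    As a toy illustration of the kind of calibration described for Paper IV (synthetic data and a purely hypothetical model form, not the thesis's actual model), a heterogeneity index can enter as a covariate when relating basal area to leaf area index:

        import numpy as np

        # Synthetic stand data: basal area (m^2/ha), a structural heterogeneity index,
        # and leaf area index (LAI) generated from an assumed linear relationship.
        rng = np.random.default_rng(7)
        basal_area = rng.uniform(10.0, 40.0, 200)
        heterogeneity = rng.uniform(0.0, 1.0, 200)
        lai = 0.15 * basal_area - 0.8 * heterogeneity + rng.normal(0.0, 0.3, 200)

        # Calibrate LAI = b0 + b1*BA + b2*H by ordinary least squares.
        X = np.column_stack([np.ones_like(basal_area), basal_area, heterogeneity])
        coef, *_ = np.linalg.lstsq(X, lai, rcond=None)
        print("intercept, basal-area slope, heterogeneity effect:", np.round(coef, 3))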

    Machine learning for managing structured and semi-structured data

    Get PDF
    As the digitalization of the private, commercial, and public sectors advances rapidly, an increasing amount of data is becoming available. In order to gain insights or knowledge from these enormous amounts of raw data, deep analysis is essential. The immense volume requires highly automated processes with minimal manual interaction. In recent years, machine learning methods have taken on a central role in this task. In addition to the individual data points, their interrelationships often play a decisive role, e.g., whether two patients are related to each other or whether they are treated by the same physician. Hence, relational learning is an important branch of research, which studies how to harness this explicitly available structural information between different data points. Recently, graph neural networks have gained importance. These can be considered an extension of convolutional neural networks from regular grids to general (irregular) graphs. Knowledge graphs play an essential role in representing facts about entities in a machine-readable way. While great efforts are made to store as many facts as possible in these graphs, they often remain incomplete, i.e., true facts are missing. Manual verification and expansion of the graphs is becoming increasingly difficult due to the large volume of data and must therefore be assisted or substituted by automated procedures which predict missing facts. The field of knowledge graph completion can be roughly divided into two categories: Link Prediction and Entity Alignment. In Link Prediction, machine learning models are trained to predict unknown facts between entities based on the known facts. Entity Alignment aims at identifying shared entities between graphs in order to link several such knowledge graphs based on some provided seed alignment pairs. In this thesis, we present important advances in the field of knowledge graph completion. For Entity Alignment, we show how to reduce the number of required seed alignments while maintaining performance through novel active learning techniques. We also discuss the power of textual features and show that graph-neural-network-based methods have difficulties with noisy alignment data. For Link Prediction, we demonstrate how to improve the prediction for unknown entities at training time by exploiting additional metadata on individual statements, often available in modern graphs. Supported by results from a large-scale experimental study, we present an analysis of the effect of individual components of machine learning models, e.g., the interaction function or the loss criterion, on the task of link prediction. We also introduce a software library that simplifies the implementation and study of such components and makes them accessible to a wide research community, ranging from relational learning researchers to applied fields such as the life sciences. Finally, we propose a novel metric for evaluating ranking results, as used for both completion tasks. It allows for easier interpretation and comparison, especially in cases with different numbers of ranking candidates, as encountered in the de facto standard evaluation protocols for both tasks.
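
    A brief sketch of the standard ranking metrics used in link-prediction evaluation (mean reciprocal rank and Hits@k); the thesis's own adjusted metric is not reproduced here, and the rank values below are made up:

        import numpy as np

        # Ranks (1 = best) assigned by a hypothetical model to the true entity of each
        # test triple, among all candidate entities.
        ranks = np.array([1, 3, 2, 120, 7, 1, 15, 4])

        mrr = np.mean(1.0 / ranks)                      # mean reciprocal rank
        hits_at = {k: np.mean(ranks <= k) for k in (1, 3, 10)}

        print(f"MRR = {mrr:.3f}")
        for k, v in hits_at.items():
            print(f"Hits@{k} = {v:.3f}")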

    Mechanical Properties Of Fibrous Network Materials

    Get PDF
    We discuss the mechanical behavior of specific fibrous network materials, including the evolution of tension in fibrin clots, the compression of pulmonary emboli, and the fracture of Whatman filter paper. The first material, fibrin clots, consists of random networks of fibrin fibers. When clots form by polymerization, they develop tensile pre-stresses. We construct a mathematical model for the evolution of tension in isotropic fibrin gels. As the fiber diameter grows over time, properties which depend on it, such as the stored energy per unit length of a single fiber, the force-stretch relation of a fiber, and therefore the tension in the network as a whole, also evolve over time. The second fibrous network is pulmonary emboli, which consist of random networks of fibrin fibers with fluid-filled pores and red blood cells (RBCs). Stress-strain responses of human pulmonary emboli under cyclic compression were measured, revealing that emboli exhibit hysteretic stress-strain curves characteristic of foams. We describe the hysteretic response of emboli using a model of phase transitions, in which the compressed embolus is segregated into coexisting rarefied and densified phases whose fractions change as compression progresses. Our model takes into account RBC rupture in compressed emboli and stresses due to fluid flow through their small pores. The mechanical response of emboli is shown to vary depending on their RBC content. The third fibrous network is Whatman filter paper. The effect of humidity on properties such as the out-of-plane fracture toughness of Whatman filter paper is studied over a broad range of relative humidities. Crack growth is modeled using traction-separation laws, whose parameters are fitted to experiments. Additionally, a novel model is developed to capture the high peak and sudden drop in the experimental force measurement caused by the existence of an initiation region, an imperfect zone ahead of a nascent crack. The relative effect of each independent parameter is explored to better understand the humidity dependence of the traction-separation parameters. The materials studied have biological, clinical, and industrial applications, and the methods described here are also applicable to other fibrous network materials.
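
    For context, one common (bilinear) form of a traction-separation law (not necessarily the specific law fitted in the thesis) relates the cohesive traction t to the crack-opening separation δ as

        t(\delta) =
        \begin{cases}
          t_{\max}\,\dfrac{\delta}{\delta_0},                        & 0 \le \delta \le \delta_0, \\
          t_{\max}\,\dfrac{\delta_f - \delta}{\delta_f - \delta_0},  & \delta_0 < \delta \le \delta_f, \\
          0,                                                         & \delta > \delta_f,
        \end{cases}
        \qquad G_c = \tfrac{1}{2}\, t_{\max}\, \delta_f,

    where t_max is the peak traction, δ_0 the separation at the peak, δ_f the separation at complete failure, and G_c the fracture energy (the area under the curve).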

    LIPIcs, Volume 244, ESA 2022, Complete Volume

    Get PDF
    LIPIcs, Volume 244, ESA 2022, Complete Volume

    Industrial machine structural components’ optimization and redesign

    Get PDF
    Doctoral thesis in Líderes para as Indústrias Tecnológicas (Leaders for Technological Industries). Laser cutting is a highly flexible process with numerous advantages over competing technologies. These have ensured the growth of its market, totalling 4300 million United States dollars in 2020. The process is used in many industries, and current trends focus on reduced lead times, higher quality standards, and competitive costs, while ensuring accuracy. Composite materials (namely fibre-reinforced polymers) present attractive mechanical properties that make them advantageous for several applications, including the subject of this thesis: industrial machine components. The use of these materials leads to machines with higher efficiency, dimensional accuracy, surface quality, and energy efficiency, and lower environmental impact. The main goal of this work is to increase the productivity of a laser cutting machine through the redesign of a critical component (the gantry), which is also key to the overall machine accuracy. Beyond that, this work is intended to lay out a methodology capable of assisting in the redesign of other critical machine components.
As the problem deals with two opposing objectives (reducing weight and increasing stiffness) and a large number of variables, the implementation of an optimization routine is a central aspect of the present work. It is of major importance that the proposed optimization method lead to reliable results; this is demonstrated in this work through finite element analysis and experimental validation by means of a scale prototype. The optimization algorithm selected is a metaheuristic inspired by the behaviour of swarms of animals. Particle swarm algorithms have been shown to provide good, fast results in similar optimization problems. The optimization focused on the thickness of each laminate and on the fibre orientations within it. The optimization routine resulted in the definition of a near-optimal solution for the laminates analysed and allowed a weight reduction of 43% relative to the current solution, as well as an increase of 25% in the maximum allowed acceleration, which reflects on the productivity of the machine, while ensuring the same accuracy. The comparison between numerical and experimental testing of the prototypes shows good agreement, with occasional divergences, but nonetheless validates the finite element model upon which the optimization process is based. Portuguese Foundation for Science and Technology - SFRH/BD/51106/2010
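
    A generic particle swarm optimization sketch (not the thesis implementation; the four laminate thicknesses, bounds, and the toy weight-versus-flexibility objective below are hypothetical) to illustrate the kind of routine described above:

        import numpy as np

        rng = np.random.default_rng(3)
        lower, upper = 1.0, 10.0                 # thickness bounds (mm)
        n_particles, n_dims, n_iters = 30, 4, 200

        def objective(t):
            weight = t.sum()                     # proxy for mass
            compliance = (1.0 / t**3).sum()      # proxy for flexibility (thicker = stiffer)
            return weight + 50.0 * compliance    # weighted trade-off

        pos = rng.uniform(lower, upper, (n_particles, n_dims))
        vel = np.zeros_like(pos)
        pbest = pos.copy()
        pbest_val = np.array([objective(p) for p in pos])
        gbest = pbest[pbest_val.argmin()].copy()

        w, c1, c2 = 0.7, 1.5, 1.5                # inertia and acceleration coefficients
        for _ in range(n_iters):
            r1, r2 = rng.random((2, n_particles, n_dims))
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, lower, upper)
            vals = np.array([objective(p) for p in pos])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
            gbest = pbest[pbest_val.argmin()].copy()

        print("best thicknesses (mm):", np.round(gbest, 2),
              "objective:", round(objective(gbest), 3))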