
    Are JWST/NIRCam color gradients in the lensed z=2.3 dusty star-forming galaxy El Anzuelo due to central dust attenuation or inside-out galaxy growth?

    Gradients in the mass-to-light ratio of distant galaxies impede our ability to characterize their size and compactness. The long-wavelength filters of JWST's NIRCam offer a significant step forward. For galaxies at Cosmic Noon ($z\sim2$), this regime corresponds to the rest-frame near-infrared, which is less biased towards young stars and captures emission from the bulk of a galaxy's stellar population. We present an initial analysis of an extraordinary lensed dusty star-forming galaxy (DSFG) at $z=2.3$ behind the El Gordo cluster ($z=0.87$), named El Anzuelo ("The Fishhook") after its partial Einstein-ring morphology. The FUV-NIR SED suggests an intrinsic star formation rate of $81^{+7}_{-2}~M_\odot~{\rm yr}^{-1}$ and dust attenuation $A_V\approx 1.6$, in line with other DSFGs on the star-forming main sequence. We develop a parametric lens model to reconstruct the source-plane structure of dust imaged by the Atacama Large Millimeter/submillimeter Array, far-UV to optical light from Hubble, and near-IR imaging with 8 filters of JWST/NIRCam, as part of the Prime Extragalactic Areas for Reionization and Lensing Science (PEARLS) program. The source-plane half-light radius is remarkably consistent from $\sim 1$-$4.5~\mu$m, despite a clear color gradient where the inferred galaxy center is redder than the outskirts. We interpret this to be the result of both a radially-decreasing gradient in attenuation and substantial spatial offsets between UV- and IR-emitting components. A spatial decomposition of the SED reveals modestly suppressed star formation in the inner kiloparsec, which suggests that we are witnessing the early stages of inside-out quenching.

    Comment: 29 pages, 11 figures, 5 tables. Accepted for publication in Ap
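
    For context, the "intrinsic" star formation rate quoted above implies a correction for the lensing magnification. A minimal statement of that step, with the magnification factor $\mu$ left symbolic since its value is not quoted in this abstract:

        $\mathrm{SFR}_{\rm intrinsic} = \mathrm{SFR}_{\rm observed} / \mu$

    so the observed (magnified) star formation rate is divided by the lens-model magnification before comparison with the main sequence.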

    Beam scanning by liquid-crystal biasing in a modified SIW structure

    A fixed-frequency beam-scanning 1D antenna based on Liquid Crystals (LCs) is designed for application in 2D scanning with lateral alignment. The 2D array environment imposes full decoupling of adjacent 1D antennas, which often conflicts with the LC requirement of DC biasing: the proposed design accommodates both. The LC medium is placed inside a Substrate Integrated Waveguide (SIW), modified to work as a Groove Gap Waveguide with radiating slots etched on the upper broad wall, so that it radiates as a Leaky-Wave Antenna (LWA). This allows effective application of the DC bias voltage needed for tuning the LCs. At the same time, the RF field remains laterally confined, making it possible to place several antennas in parallel and achieve 2D beam scanning. The design is validated by simulation employing the actual properties of a commercial LC medium.
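
    For context, a minimal sketch of the fixed-frequency scanning mechanism, assuming the standard leaky-wave relation between the main-beam direction $\theta_m$ and the phase constant $\beta$ of the guided mode (the specific dispersion of this design is not given in the abstract):

        $\sin\theta_m \approx \beta(\varepsilon_r) / k_0$

    where $k_0$ is the free-space wavenumber. Biasing the LCs changes their effective permittivity $\varepsilon_r$, hence $\beta$, steering the beam without changing the operating frequency.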

    Using machine learning to predict pathogenicity of genomic variants throughout the human genome

    More than 6,000 diseases are estimated to be caused by genomic variants. This can happen in many ways: a variant may stop the translation of a protein, interfere with gene regulation, or alter splicing of the transcribed mRNA into an unwanted isoform. All of these processes must be investigated in order to evaluate which variant may be causal for the deleterious phenotype. Variant effect scores are a great help in this regard. Implemented as machine learning classifiers, they integrate annotations from different resources to rank genomic variants in terms of pathogenicity. Developing a variant effect score requires multiple steps: annotation of the training data, feature selection, model training, benchmarking, and finally deployment for the model's application. Here, I present a generalized workflow of this process. It makes it simple to configure how information is converted into model features, enabling the rapid exploration of different annotations. The workflow further implements hyperparameter optimization, model validation, and ultimately deployment of a selected model via genome-wide scoring of genomic variants. The workflow is applied to train Combined Annotation Dependent Depletion (CADD), a variant effect model that scores SNVs and InDels genome-wide. I show that the workflow can be quickly adapted to novel annotations by porting CADD to the genome reference GRCh38. Further, I demonstrate the integration of deep neural network scores as features into a new CADD model, improving the annotation of RNA splicing events. Finally, I apply the workflow to train multiple variant effect models from training data based on variants selected by allele frequency. In conclusion, the developed workflow presents a flexible and scalable method to train variant effect scores. All software and developed scores are freely available from cadd.gs.washington.edu and cadd.bihealth.org.
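
    A minimal, hypothetical sketch of the central model-training step such a workflow automates: a classifier over variant annotation features. The synthetic data, feature names, and choice of gradient boosting are illustrative assumptions, not the actual CADD pipeline.

        import numpy as np
        import pandas as pd
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.model_selection import train_test_split

        # Hypothetical stand-in for an annotated training set: one row per
        # variant, annotation-derived features, and a binary pathogenicity label.
        rng = np.random.default_rng(0)
        n = 1000
        variants = pd.DataFrame({
            "conservation_score": rng.normal(size=n),
            "splice_distance": rng.exponential(100, size=n),
            "gc_content": rng.uniform(0.3, 0.7, size=n),
        })
        labels = (variants["conservation_score"] + rng.normal(size=n) > 1).astype(int)

        X_train, X_test, y_train, y_test = train_test_split(
            variants, labels, test_size=0.2, random_state=0
        )
        model = GradientBoostingClassifier().fit(X_train, y_train)
        print("held-out accuracy:", model.score(X_test, y_test))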

    Assessing Atmospheric Pollution and Its Impacts on the Human Health

    This reprint contains articles published in the Special Issue entitled "Assessing Atmospheric Pollution and Its Impacts on the Human Health" in the journal Atmosphere. The research focuses, on the one hand, on the evaluation of atmospheric pollution by statistical methods and, on the other, on the relationship between the level of pollution and the extent of its effects on the population's health, especially pulmonary diseases.

    Magnetic Material Modelling of Electrical Machines

    Electromechanical energy conversion, which takes place in electric motors, generators, and actuators, is an important aspect of current technological development. The efficiency and effectiveness of the conversion process depend on both the design of the devices and the materials used in them. In this context, this book addresses important aspects of electrical machines, namely their materials, design, and optimization. The design process of electrical machines must be supported by extensive numerical field computations, so the reprint also focuses on the accuracy of these computations, as well as on the quality of the adopted material models. Another aspect of interest is the modeling of properties such as hysteresis, alternating and rotating losses, and demagnetization. In addition, the characterization of materials and their dependence on quantities such as mechanical stress and temperature are also considered. The reprint further addresses an aspect that needs to be considered for the development of an optimal global system in some applications, namely the drives associated with electrical machines.
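
    As one concrete example of the loss modelling mentioned above, a classical loss-separation form is often used; this particular (Bertotti-style) decomposition is an illustrative assumption, not necessarily one of the specific models treated in the reprint:

        $P_{\mathrm{Fe}} \approx k_h f B_m^{\alpha} + k_c f^2 B_m^2 + k_e f^{1.5} B_m^{1.5}$

    with hysteresis, classical eddy-current, and excess loss terms, where $f$ is the frequency, $B_m$ the peak flux density, and the coefficients $k_h$, $k_c$, $k_e$, $\alpha$ are material-dependent fit parameters.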

    Convex Optimization for Machine Learning

    This book offers an introduction to convex optimization, a powerful class of tractable optimization problems that can be solved efficiently on a computer. The goal of the book is to help the reader develop a sense of what convex optimization is and how it can be used in a widening array of practical contexts, with a particular emphasis on machine learning. The first part of the book covers the core concepts of convex sets, convex functions, and related basic definitions that underpin convex optimization and its corresponding models. The second part deals with one very useful theory, called duality, which enables us to: (1) gain algorithmic insights; and (2) obtain approximate solutions to non-convex optimization problems, which are often difficult to solve. The last part focuses on modern applications in machine learning and deep learning. A defining feature of this book is that it succinctly relates the “story” of how convex optimization plays a role, via historical examples and trending machine learning applications. Another key feature is that it includes programming implementations of a variety of machine learning algorithms inspired by optimization fundamentals, together with a brief tutorial of the programming tools used. The implementation is based on Python, CVXPY, and TensorFlow. This book does not follow a traditional textbook-style organization, but is streamlined via a series of lecture notes that are intimately related, centered around coherent themes and concepts. It serves as a textbook mainly for a senior-level undergraduate course, yet is also suitable for a first-year graduate course. Readers will benefit from having a good background in linear algebra, some exposure to probability, and basic familiarity with Python.
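
    Since the book's implementations rest on CVXPY, a minimal sketch of the kind of problem it expresses is given below: a Lasso-style regression on synthetic data. The problem choice and data are illustrative, not taken from the book.

        import cvxpy as cp
        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.standard_normal((50, 20))
        b = rng.standard_normal(50)

        x = cp.Variable(20)
        lam = 0.1
        # Convex objective: least squares plus an l1 penalty (Lasso).
        problem = cp.Problem(cp.Minimize(cp.sum_squares(A @ x - b) + lam * cp.norm1(x)))
        problem.solve()
        print("optimal value:", problem.value)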

    Inferring network structures using hierarchical exponential random graph models

    Networks are broadly used to represent the interaction relationships between entities in a wide range of scientific fields. Ensembles of networks provide multiple network observations over the same set of entities. These observations may capture different features of the relationships: some ensembles exhibit group structures; some are collected over time; others show more individual differences. Statistical models for ensembles of networks should describe not only the dependency structure within each network, but also the variation of structural patterns across networks. Exponential random graph models (ERGMs) provide a highly flexible way to study the complex dependency structures within networks. We aim to develop novel methodologies that utilise ERGMs to infer the underlying structures of ensembles of networks from three aspects: (1) identifying and characterising groups of networks that are similar with respect to the effects of local connectivity patterns and covariates of interest on the global structure of networks; (2) modelling the evolution of networks over time by representing the associated parameters with a piecewise linear function; (3) analysing the individual characteristics of each network and the population structure of the whole ensemble in terms of block structure, homophily, transitivity, and other local structural properties. For identifying the group structure of ensembles and the block structure of networks, we employ a Bayesian nonparametric prior on an infinite sample space, instead of requiring a fixed number of groups in advance as in existing models. In this way, the number of mixture components can grow with the data size, which enables our models to fit the data better. Moreover, for ensembles of networks with a time order, we utilise a fused lasso penalty to encourage similarity in the parameter estimates of consecutive networks, as they tend to share similar connectivity patterns.

    Inference for ERGMs under a Bayesian nonparametric framework is very challenging because the model involves an infinite number of intractable ERGM likelihood functions. In addition, the dependency among edges within the same block and the unknown number of blocks significantly increase the difficulty of recovering the block structure. Moreover, the correlation between dynamic networks requires us to work on all possible edges of the ensemble simultaneously, posing a substantial computational challenge. To address these issues, we develop five algorithms for model estimation: (1) a novel Metropolis-Hastings algorithm to sample from the intractable posterior distribution of ERGMs with multiple networks using an intermediate importance sampling technique; (2) a new Metropolis-within-slice sampling algorithm to perform full Bayesian inference on infinite mixtures of ERGMs; (3) a pseudo-likelihood-based Metropolis-within-slice sampling algorithm to learn the group structure of ensembles quickly and adaptively; (4) an alternating direction method of multipliers (ADMM) algorithm for the fast estimation of dynamic ensembles using a matrix decomposition technique; (5) a Metropolis-within-Gibbs sampling algorithm for the population analysis of structural patterns with an approximated stick-breaking prior.
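
    To make the ERGM machinery concrete, here is a minimal sketch of maximum pseudo-likelihood estimation (MPLE) for a single network with edge and triangle statistics. This is the simple tractable baseline that the Bayesian algorithms above improve upon; the choice of statistics and the example graph are illustrative.

        import networkx as nx
        from sklearn.linear_model import LogisticRegression

        def change_stats(G, i, j):
            # Change in (edge count, triangle count) from toggling edge (i, j) on:
            # one new edge, plus one new triangle per common neighbour.
            common = len(set(G[i]) & set(G[j]))
            return [1.0, float(common)]

        def mple(G):
            X, y = [], []
            nodes = list(G.nodes)
            for a in range(len(nodes)):
                for b in range(a + 1, len(nodes)):
                    i, j = nodes[a], nodes[b]
                    present = G.has_edge(i, j)
                    if present:
                        G.remove_edge(i, j)
                    X.append(change_stats(G, i, j))
                    y.append(int(present))
                    if present:
                        G.add_edge(i, j)
            # Pseudo-likelihood: logistic regression of edge indicators on the
            # change statistics, with no intercept (the edge term plays that role).
            clf = LogisticRegression(fit_intercept=False, C=1e6).fit(X, y)
            return clf.coef_.ravel()

        print(mple(nx.karate_club_graph()))  # approximate (edge, triangle) parameters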

    Hydrologic responses to climate and land use/cover changes in world heritage site of Ngorongoro conservation area and surrounding catchments, northern Tanzania

    A Dissertation submitted in Partial Fulfilment of the Requirements for the Degree of Doctor of Philosophy in Hydrology and Water Resources Engineering of the Nelson Mandela African Institution of Science and Technology.

    In Tanzania, various studies have analyzed the impact of climate and land use/cover changes on water resources. However, information on the interactions between climate and land use/cover change, the temporal and spatial variability of hydrological components, and water quality at the local scale is insufficient. The objective of this study was to evaluate the hydrological response to climate and land use/cover changes in the Ngorongoro Conservation Area (NCA) and its surroundings. The study performed climate change analysis using outputs from a multi-model ensemble of Regional Climate Models (RCMs) and statistically downscaled Global Climate Models (GCMs). The CA-Markov model was applied to project land use/cover for 2025 and 2035. The study further used the Soil Water Assessment Tool (SWAT) modelling approach to analyse the hydrological responses, and HYDRUS 1D to determine the change in groundwater quality due to climate and land use/cover changes. The analysis of climate change between the historical period (1982-2011) and the future period (2021-2050) indicated an increase in mean annual rainfall and temperature, and in seasonal rainfall, except for the June to September (JJAS) season, which showed a decreasing trend. Spatially, rainfall and temperatures would increase over the entire area. The projected land use/cover change for 2025 to 2035, compared to the 2016 baseline, showed a reduction in bushland, forest, water, and woodland, but an intensification of cultivated land, grassland, bare land, and built-up area. Surface runoff, evapotranspiration, lateral flow, and water yield would significantly increase in the future, while groundwater would decrease under combined climate and land use/cover change. It is predicted that two anions (Cl− and PO4^3−) and two cations (Na+ and K+) would exceed the permissible limits for drinking water set by the World Health Organization (WHO) and the Tanzania Bureau of Standards (TBS) from 2036 to 2050. Changes in groundwater quality due to major cations and anions are significantly correlated with evapotranspiration and temperature, with Pearson correlation coefficients (r) between 0.35 and 0.85, and with changes in all land use/cover types, with r between 0.56 and 0.96. These results provide further insight for future water resources management planning and adaptation strategies.
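
    A minimal sketch of the Pearson correlation analysis reported above, with hypothetical values standing in for annual evapotranspiration and a groundwater ion concentration:

        from scipy.stats import pearsonr

        # Hypothetical annual series (units: mm/yr and mg/L respectively).
        evapotranspiration = [610, 625, 640, 655, 670, 690]
        sodium_concentration = [45, 48, 52, 55, 60, 63]

        r, p = pearsonr(evapotranspiration, sodium_concentration)
        print(f"Pearson r = {r:.2f} (p = {p:.4f})")  # an r in the reported
                                                     # 0.35-0.85 range would flag
                                                     # a significant association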

    Data pre-processing to identify environmental risk factors associated with diabetes

    Genetics, diet, obesity, and lack of exercise play a major role in the development of type II diabetes. Additionally, environmental conditions are also linked to type II diabetes. The aim of this research is to identify the environmental conditions associated with diabetes. To achieve this, the research study utilises hospital-admitted patient data in NSW integrated with weather, pollution, and demographic data. The environmental variables (air pollution and weather) change over time and space, necessitating spatiotemporal data analysis to identify associations. Moreover, the environmental variables are measured using sensors, and they often contain large gaps of missing values due to sensor failures. Therefore, enhanced methodologies in data cleaning and imputation are needed to facilitate research using this data. Hence, the objectives of this study are twofold: first, to develop a data cleaning and imputation framework with improved methodologies to clean and pre-process the environmental data, and second, to identify environmental conditions associated with diabetes. This study develops a novel data-cleaning framework that streamlines the practice of data analysis and visualisation, specifically for studying environmental factors such as climate change monitoring and the effects of weather and pollution. The framework is designed to efficiently handle data collected by remote sensors, enabling more accurate and comprehensive analyses of environmental phenomena than would otherwise be possible. The study initially focuses on the Sydney region, identifies missing data patterns, and utilises established imputation methods. It assesses the performance of existing techniques and finds that Kalman smoothing on structural time series models outperforms the other methods. However, when dealing with larger gaps of missing data, none of the existing methods yield satisfactory results. To address this, the study proposes enhanced methodologies for filling substantial gaps in environmental datasets. The first proposed algorithm employs regularized regression models to fill large gaps in air quality data using a univariate approach. It is then extended to incorporate seasonal patterns and to expand its applicability to weather data with similar patterns. The algorithm is further enhanced by incorporating other correlated variables to accurately fill substantial gaps in environmental variables. The algorithm presented in this thesis consistently outperforms other methods in imputing large gaps, and it is applicable for filling large gaps in air pollution and weather data, facilitating downstream analysis.
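
    A minimal sketch of the best-performing baseline named above, Kalman smoothing on a structural time series model, using statsmodels; the series is synthetic and the local linear trend specification is an assumption:

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.statespace.structural import UnobservedComponents

        rng = np.random.default_rng(0)
        y = pd.Series(np.sin(np.linspace(0, 20, 300)) + 0.1 * rng.standard_normal(300))
        y.iloc[120:140] = np.nan  # simulate a sensor outage (a large gap)

        # The Kalman filter/smoother in statsmodels handles NaNs natively.
        model = UnobservedComponents(y, level="local linear trend")
        result = model.fit(disp=False)

        # Fill the gap with the smoothed level state.
        imputed = y.copy()
        imputed[y.isna()] = result.smoothed_state[0][y.isna().to_numpy()]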

    So you think you can track?

    This work introduces a multi-camera tracking dataset consisting of 234 hours of video data recorded concurrently from 234 overlapping HD cameras covering a 4.2 mile stretch of 8-10 lane interstate highway near Nashville, TN. The video is recorded during a period of high traffic density, with 500+ objects typically visible within the scene and typical object longevities of 3-15 minutes. GPS trajectories from 270 vehicle passes through the scene are manually corrected in the video data to provide a set of ground-truth trajectories for recall-oriented tracking metrics, and object detections are provided for each camera in the scene (159 million total before cross-camera fusion). Initial benchmarking of tracking-by-detection algorithms is performed against the GPS trajectories, and a best HOTA of only 9.5% is obtained (best recall 75.9% at IOU 0.1, 47.9 average IDs per ground-truth object), indicating that the benchmarked trackers do not perform sufficiently well at the long temporal and spatial durations required for traffic scene understanding.
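
    For reference, a minimal sketch of the IoU (intersection over union) computation that underlies the IOU 0.1 matching threshold quoted above; boxes are axis-aligned (x1, y1, x2, y2), and the example values are hypothetical.

        # Compute IoU of two axis-aligned boxes given as (x1, y1, x2, y2).
        def iou(a, b):
            ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
            ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
            inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
            area_a = (a[2] - a[0]) * (a[3] - a[1])
            area_b = (b[2] - b[0]) * (b[3] - b[1])
            return inter / (area_a + area_b - inter)

        print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25/175 ≈ 0.143, so these
                                                    # boxes match at IOU 0.1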