1,169 research outputs found

    Undergraduate Catalog of Studies, 2023-2024

    Get PDF

    Undergraduate Catalog of Studies, 2022-2023

    Get PDF

    Local Site Seismic Response in an Inter-Andean Valley: Geotechnical Characterization and Seismic Amplification Zonation of the Southern Quito Area

    Get PDF
    Over the last century, earthquakes have claimed the lives of thousands of people and caused considerable damage to existing buildings in several places in South America. According to Chunga et al. (2018), records in Ecuador since 1906 show significant events with magnitudes between Mw 7.1 and Mw 8.8. However, the population has not been aware of the potential effects of an earthquake of these magnitudes, resulting in considerable seismic vulnerability due to unstudied, low-cost informal construction. For this reason, the need arises to analyze the local seismic response of the southern Quito area, evaluating seismic amplification in light of the lithostratigraphic and geomorphological characteristics of the inter-Andean valley. To achieve this, 20 boreholes of 30 m depth distributed across this area were complemented with a campaign of 1332 field tests and 2774 laboratory tests. The information obtained from the campaign was used to define 9 zones, each consisting of one borehole or a group of boreholes grouped by geographic location and by physical and mechanical characteristics, generating a soil column for each zone. Three types of analysis were carried out to define the soil dynamic parameters: with theoretical values, with parameters derived from dry samples, and with parameters derived from remolded samples, performing a total of 46 resonant column tests. The results showed that, for the 9 zones defined in southern Quito, the amplification factors ranged between 3.07 and 7.74, which supports the assessment of this area's vulnerability through zonation and risk mapping. Nevertheless, the need for further investigation of the subsoil is emphasized, along with analysis of amplification factors based on earthquakes recorded in this sector.
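    For orientation only, a minimal sketch of the single-layer linear-elastic site response quantities that underlie amplification studies like this one. All layer values here are hypothetical illustrations, not the study's data or method:

```python
import numpy as np

# Hypothetical layer properties; not values from the study.
vs_soil = 250.0      # shear-wave velocity of soil layer (m/s)
vs_rock = 1500.0     # shear-wave velocity of bedrock (m/s)
rho_soil = 1800.0    # soil density (kg/m^3)
rho_rock = 2400.0    # rock density (kg/m^3)
h = 30.0             # layer thickness (m), matching the 30 m borehole depth

# Fundamental site frequency of a uniform layer over bedrock
# (quarter-wavelength rule).
f0 = vs_soil / (4.0 * h)

# Upper bound on linear-elastic amplification at resonance is set by
# the impedance contrast between bedrock and soil (no damping).
impedance_ratio = (rho_rock * vs_rock) / (rho_soil * vs_soil)

print(f"fundamental site frequency f0 = {f0:.2f} Hz")
print(f"impedance contrast (max elastic amplification) = {impedance_ratio:.2f}")
```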

    Thermo-Electro-Optical Properties of Disordered Nanowire Networks

    Get PDF
    Metallic nanowire networks are promising candidates for next-generation transparent conductors, owing to their exceptional electrical and thermal conductivity, high optical transparency, and mechanical flexibility. A nanowire network is a disordered arrangement of nanowires that exhibits no discernible long-range order or periodicity. Previous studies have placed significant emphasis on the individual analysis of electrical resistance, optical transmission, and thermal conduction in diverse network materials. Nonetheless, insufficient focus has been devoted to comprehending the relationship between the multiple extrinsic and intrinsic variables that characterize a disordered nanowire network (or an ensemble of them) and the trade-offs that arise when investigating the system response trio of namely electrical/ optical/thermal natures. This thesis presents a comprehensive computational study that exclusively employs theoretical and numerical models to examine the thermoelectric and optical characteristics of two types of disordered metallic nanowire networks: (i) junction-based random nanowire networks and (ii) seamless random nanowire networks. The raw materials that compose their nanowires are metals namely, silver, gold, copper, and aluminium and we used a variety of computational tools to obtain prominent physical quantities that infer the network’s performance such as sheet (electrical) resistance, optical transmission, and temperature variation. A range of adjustable parameters, including those pertaining to geometrical structure in device design, have been systematically tuned in order to conduct a figure of merit analysis with respect to thermal and electrical conduction, and optical transmission of the network materials. Moreover, we obtained local current and temperature mappings that detail the conduction mechanisms used by the networks to propagate signals through their disordered skeleton. We verified that, under certain conditions, junction-based and seamless nanowire networks fall into the same temperature distribution mechanisms that can be generally described with Weibull probability density functions. This study offers valuable insights into the electrical/optical/thermal performance of disordered nanowire networks prone to transparent conductor applications
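    As a small illustration of the Weibull description mentioned in the abstract (a sketch with synthetic data, not the thesis's electrothermal solver output), a two-parameter Weibull probability density function can be fitted to a sample of local temperature rises with scipy:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Stand-in data: synthetic local temperature rises (in K) across network
# segments; in the thesis these would come from the electrothermal model.
delta_T = rng.weibull(1.8, size=5000) * 12.0

# Fit a two-parameter Weibull PDF (location fixed at zero).
shape, loc, scale = stats.weibull_min.fit(delta_T, floc=0.0)
print(f"Weibull shape k = {shape:.2f}, scale lambda = {scale:.2f}")
```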

    Broadband Thin Film Absorber Based on Plasmonic Nanoparticles

    Get PDF
    Harvesting solar energy presents a formidable challenge, primarily rooted in the need to capture light efficiently across a broad spectral range. Addressing this challenge, we describe the concept of designing a broadband perfect absorber in the form of a thin-film system with plasmonic nanoparticles as its foundation. We study a thin-film absorber built from the scattering responses of the Au144 gold nanocluster. This thin-film absorber absorbs light well across the entire visible region. As a further aspect, we employ bulk copper nanoparticles as the basis for the nanoparticle layer within the absorber. We inspect, on computational grounds, the effect of the nanoparticle filling factor and the thin-film thicknesses on the absorber performance. Remarkably, our findings reveal that the thin-film absorber with copper nanoparticles can absorb 90% of light energy across a broad spectrum ranging from ultraviolet to near-infrared wavelengths. To validate the accuracy of our simulations, we translated these optimized absorber layouts into fabricated devices together with experimental partners from the University of Kiel. The experimental results align remarkably closely with our simulations, confirming the capability of our designed broadband perfect absorber.
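    For context, absorption in such thin-film systems is commonly evaluated as A = 1 - R - T. Below is a minimal transfer-matrix sketch for a single lossy film at normal incidence; the refractive index, thickness, and surrounding media are hypothetical placeholders, not the fabricated stack:

```python
import numpy as np

def tmm_absorption(n_film, d, wavelength, n_in=1.0, n_out=1.5):
    """Normal-incidence absorption of a single lossy film on a substrate,
    via the standard 2x2 transfer-matrix method. n_film may be complex."""
    k0 = 2 * np.pi / wavelength
    delta = k0 * n_film * d  # complex phase thickness of the film
    # Characteristic matrix of the film layer.
    M = np.array([[np.cos(delta), 1j * np.sin(delta) / n_film],
                  [1j * n_film * np.sin(delta), np.cos(delta)]])
    B = M[0, 0] + M[0, 1] * n_out
    C = M[1, 0] + M[1, 1] * n_out
    r = (n_in * B - C) / (n_in * B + C)   # amplitude reflection
    t = 2 * n_in / (n_in * B + C)         # amplitude transmission
    R = abs(r) ** 2
    T = (n_out / n_in) * abs(t) ** 2      # lossless entry/exit media
    return 1.0 - R - T                    # absorbed fraction

# Example: a 40 nm lossy composite film with hypothetical n = 3 + 2j.
A = tmm_absorption(n_film=3 + 2j, d=40e-9, wavelength=550e-9)
print(f"Absorption at 550 nm: {A:.2f}")
```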

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    Get PDF
    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    Spherical and Hyperbolic Toric Topology-Based Codes On Graph Embedding for Ising MRF Models: Classical and Quantum Topology Machine Learning

    Full text link
    The paper introduces the application of information geometry to describe the ground states of Ising models by utilizing parity-check matrices of cyclic and quasi-cyclic codes on toric and spherical topologies. The approach establishes a connection between machine learning and error-correcting coding, with implications for the development of new embedding methods based on trapping sets. Statistical physics and number geometry are applied to optimize error-correcting codes, leading to these embedding and sparse factorization methods. The paper establishes a direct connection between DNN architecture and error-correcting coding by demonstrating how state-of-the-art architectures (ChordMixer, Mega, Mega-chunk, CDIL, ...) from the long-range arena can be equivalent to block and convolutional LDPC codes (Cage-graph, Repeat Accumulate). QC codes correspond to certain types of chemical elements, with the carbon element being represented by the mixed-automorphism Shu-Lin-Fossorier QC-LDPC code. The connections between Belief Propagation and the Permanent, Bethe-Permanent, Nishimori Temperature, and Bethe-Hessian Matrix are elaborated upon in detail. The Quantum Approximate Optimization Algorithm (QAOA) used in the Sherrington-Kirkpatrick Ising model can be seen as analogous to the back-propagation loss-function landscape in training DNNs. This similarity creates a comparable problem with trapping-set pseudo-codewords, resembling the belief propagation method. Additionally, the layer depth in QAOA correlates with the number of decoding belief propagation iterations in the Wiberg decoding tree. Overall, this work has the potential to advance multiple fields, from Information Theory, DNN architecture design (sparse and structured prior graph topology), and efficient hardware design for Quantum and Classical DPU/TPU (graph, quantization, and shift-register architectures) to Materials Science and beyond.
    Comment: 71 pages, 42 figures, 1 table, 1 appendix. arXiv admin note: text overlap with arXiv:2109.08184 by other authors
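    As a minimal illustration of the quasi-cyclic structure the paper builds on (a toy construction with made-up parameters, not the specific Shu-Lin-Fossorier code), a QC-LDPC parity-check matrix can be expanded from a base matrix of circulant shift exponents:

```python
import numpy as np

def circulant(size, shift):
    """size x size identity matrix with columns cyclically shifted."""
    return np.roll(np.eye(size, dtype=int), shift, axis=1)

def qc_ldpc_parity_check(base_matrix, lift):
    """Expand a base matrix of shift exponents into a binary QC-LDPC
    parity-check matrix; entries of -1 denote all-zero blocks."""
    rows = []
    for row in base_matrix:
        blocks = [circulant(lift, s) if s >= 0 else
                  np.zeros((lift, lift), dtype=int) for s in row]
        rows.append(np.hstack(blocks))
    return np.vstack(rows)

# Hypothetical 2x4 base matrix with lift factor 5 (toy parameters).
B = [[0, 1, 2, -1],
     [3, -1, 0, 4]]
H = qc_ldpc_parity_check(B, lift=5)
print(H.shape)  # (10, 20)
```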

    A scalable formulation of joint modelling for longitudinal and time to event data and its application on large electronic health record data of diabetes complications

    Get PDF
    INTRODUCTION: Clinical decision-making in the management of diabetes and other chronic diseases depends upon individualised risk predictions of progression of the disease or complications of disease. With sequential measurements of biomarkers, it should be possible to make dynamic predictions that are updated as new data arrive. Since the 1990s, methods have been developed to jointly model longitudinal measurements of biomarkers and time-to-event data, aiming to facilitate predictions in various fields. These methods offer a comprehensive approach to analysing both the longitudinal changes in biomarkers and the occurrence of events, allowing for a more integrated understanding of the underlying processes and improved predictive capabilities. The aim of this thesis is to investigate whether established methods for joint modelling are able to scale to large electronic health record datasets with multiple biomarkers measured asynchronously, and to evaluate the performance of a novel approach that overcomes the limitations of existing methods.
    METHODS: The epidemiological study design utilised in this research is a retrospective observational study. The data used for these analyses were obtained from a registry encompassing all individuals with type 1 diabetes (T1D) in Scotland, which is delivered by the Scottish Care Information - Diabetes Collaboration platform. The two outcomes studied were time to cardiovascular disease (CVD) and time to end-stage renal disease (ESRD) from T1D diagnosis. The longitudinal biomarkers examined in the study were glycosylated haemoglobin (HbA1c) and estimated glomerular filtration rate (eGFR). These biomarkers and endpoints were selected based on their prevalence in the T1D population and the established association between these biomarkers and the outcomes. As a state-of-the-art method for joint modelling, Brilleman's stan_jm() function was evaluated. This is an implementation of a shared-parameter joint model for longitudinal and time-to-event data in Stan, contributed to the rstanarm package. It was compared with a novel approach based on sequential Bayesian updating of a continuous-time state-space model for the biomarkers, with predictions generated by a Kalman filter algorithm (using the ctsem package) fed into a Poisson time-splitting regression model for the events. In contrast to the standard joint modelling approach, which can only fit a linear mixed model to the biomarkers, the ctsem package is able to fit a broader family of models that include terms for autoregressive drift and diffusion. As a baseline for comparison, a last-observation-carried-forward model was evaluated to predict time-to-event.
    RESULTS: The analyses were conducted using renal replacement therapy outcome data on 29764 individuals and cardiovascular disease outcome data on 29479 individuals in Scotland (as per the 2019 national registry extract). The CVD dataset was reduced to 24779 individuals with both HbA1c and eGFR data measured on the same date, a limitation of the modelling function itself. The datasets include 799 events of renal replacement therapy (RRT) or death due to renal failure (6.71 years average follow-up) and 2274 CVD events (7.54 years average follow-up), respectively. The standard approach to joint modelling, using quadrature to integrate over the trajectories of the latent biomarker states as implemented in rstanarm, was found to be too slow to use even with moderate-sized datasets: 17.5 hours for a subset of 2633 subjects, 35.9 hours for 5265 subjects, and more than 68 hours for 10532 subjects. The sequential Bayesian updating approach was much faster: it was able to analyse a dataset of 29121 individuals over 225598.3 person-years in 19 hours. Comparison of the fit of different longitudinal biomarker submodels showed that models that also included drift and diffusion terms fitted much better (AIC 51139 deviance units lower) than models that included only a linear mixed model slope term. Despite this, adding terms for diffusion and drift improved predictive performance only slightly for CVD (C-statistic 0.680 to 0.696 for 2112 individuals) and only moderately for end-stage renal disease (C-statistic 0.88 to 0.91 for 2000 individuals). The predictive performance of joint modelling in these datasets was only slightly better than using last-observation-carried-forward in the Poisson regression model (C-statistic 0.819 over 8625 person-years).
    CONCLUSIONS: I have demonstrated that, unlike the standard approach to joint modelling implemented in rstanarm, the time-splitting joint modelling approach based on sequential Bayesian updating can scale to a large dataset and allows biomarker trajectories to be modelled with a wider family of models that fit better than simple linear mixed models. However, in this application, where the only biomarkers were HbA1c and eGFR and the outcomes were time-to-CVD and end-stage renal disease, the increment in the predictive performance of joint modelling compared with last-observation-carried-forward was slight. For other outcomes, where the ability to predict time-to-event depends upon modelling latent biomarker trajectories rather than just using the last observation carried forward, the advantages of joint modelling may be greater.
    This thesis proceeds as follows. The first two chapters serve as an introduction to the joint modelling of longitudinal and time-to-event data and its relation to other methods for clinical risk prediction. Briefly, this part explores the rationale for utilising such an approach to better manage chronic diseases such as T1D. The methodological chapters of this thesis describe the mathematical formulation of a multivariate shared-parameter joint model and introduce its application and performance on a subset of individuals with T1D and data pertaining to CVD and ESRD outcomes. Additionally, the mathematical formulation of an alternative time-splitting approach is demonstrated and compared to a conventional method for estimating longitudinal trajectories of clinical biomarkers used in risk prediction. Also, the key features of the pipeline required to implement this approach are outlined. The final chapters of the thesis present an applied example that demonstrates the estimation and evaluation of the alternative modelling approach and explores the types of inferences that can be obtained for a subset of individuals with T1D who might progress to ESRD. Finally, this thesis highlights the strengths and weaknesses of applying and scaling up more complex modelling approaches to facilitate dynamic risk prediction for precision medicine.
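    As a rough illustration of the time-splitting idea described above, here is a minimal sketch (not the thesis's pipeline; the data, variable names, and coefficients are synthetic placeholders) of a Poisson regression on person-time intervals with last-observation-carried-forward biomarker values and a log person-time offset:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Toy person-period data: one row per person-interval, with the last
# observed biomarker value carried forward (LOCF) into each interval.
rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "hba1c_locf": rng.normal(8.0, 1.5, n),     # % units, hypothetical
    "egfr_locf": rng.normal(90.0, 20.0, n),    # mL/min/1.73m2, hypothetical
    "person_years": rng.uniform(0.5, 2.0, n),  # length of each interval
})
# Simulate event counts from an assumed true rate model.
rate = np.exp(-6.0 + 0.3 * (df.hba1c_locf - 8) - 0.02 * (df.egfr_locf - 90))
df["event"] = rng.poisson(rate * df.person_years)

# Poisson regression with a log person-time offset: the time-splitting model.
X = sm.add_constant(df[["hba1c_locf", "egfr_locf"]])
model = sm.GLM(df.event, X, family=sm.families.Poisson(),
               offset=np.log(df.person_years)).fit()
print(model.summary())
```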