Fragility curves for the seismic vulnerability of concrete gravity dams
Concrete gravity dams are essential structures for regulating drinking-water supplies, managing watersheds, and generating electricity. However, most gravity dams in Quebec were designed and built over the last century with analysis methods and seismic loads that are considered inadequate today. Over the past few decades, knowledge in seismology, structural dynamics, and earthquake engineering has advanced considerably, making it necessary to reassess existing dams to ensure public safety. Currently, deterministic methods based on safety factors are used to assess dam safety. However, given the random nature of seismic loading and the uncertainties in a dam's materials and properties, probabilistic methods are better suited and are becoming increasingly popular. One such method is the use of fragility curves: functions that express the probability of damage or failure of a structure over a whole range of loading. This research project therefore presents the development of fragility curves to assess the seismic vulnerability of concrete gravity dams. The methodology is applied to a specific dam, the Outardes-3 gravity dam, the largest concrete gravity dam in Quebec. The finite element method is used to model the dam and to account for the interactions between the dam, the reservoir, and the foundation. In addition, results from in situ dynamic tests are used to calibrate the numerical model.
The fragility curves are then generated through nonlinear time-history dynamic analyses to evaluate two damage limit states: sliding at the base of the dam and sliding along the lift joints within the dam. The uncertainties associated with the modeling parameters and the seismic loading are included in the fragility analysis, and these sources of uncertainty are propagated using a sampling method. A sensitivity study is also carried out to identify the modeling parameters that significantly influence the seismic response of the system. The uncertainty associated with the spatial variation of the dam's properties is also taken into account and modeled using random fields; this source of uncertainty can be significant for large structures such as dams. The results show that the spatial variation of the dam's properties has a minimal impact on the fragility of the Outardes-3 dam and could be neglected. Nevertheless, the methodology developed to build fragility curves for gravity dams is efficient, practical, and yields excellent results. Moreover, a major advantage of fragility curves is that they provide quantitative information on a dam's vulnerability, unlike current deterministic methods.
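The lognormal form commonly used for such fragility curves can be sketched in a few lines of Python; the median capacity and dispersion values below are hypothetical placeholders, not parameters from the Outardes-3 study:

```python
import math

def fragility(im, theta, beta):
    """Probability of reaching a damage state given an intensity
    measure `im`, using the common lognormal fragility form:
    P(D >= d | IM = im) = Phi(ln(im / theta) / beta)."""
    return 0.5 * (1.0 + math.erf(math.log(im / theta) / (beta * math.sqrt(2.0))))

# Hypothetical parameters for a base-sliding limit state: median
# capacity theta = 0.4 g, lognormal dispersion beta = 0.5.
for pga in (0.1, 0.4, 0.8):
    print(f"PGA = {pga:.1f} g -> P(sliding) = {fragility(pga, 0.4, 0.5):.3f}")
```

By construction the curve passes through 0.5 at the median capacity and increases monotonically with the intensity measure.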
Neural Networks for Estimating Storm Surge Loads on Storage Tanks
Failures of aboveground storage tanks (ASTs) during past storm surge events have highlighted the need to evaluate the reliability of these structures. To assess the reliability of ASTs, an adequate estimate of the loads acting on them is first required. Although finite element (FE) models are typically used to estimate storm surge loads on ASTs, the computational cost of such numerical models can prohibit their use for reliability analysis. This paper explores the use of computationally efficient surrogate models to estimate storm surge loads acting on ASTs. First, an FE model is presented to compute hydrodynamic pressure distributions on ASTs subjected to storm surge and wave loads. A statistical sampling method is then employed to generate samples of ASTs with different geometries and load conditions, and FE analyses are performed to obtain training, validation, and testing data. Using these data, an Artificial Neural Network (ANN) is developed, and the results indicate that the trained ANN yields accurate estimates of hydrodynamic pressure distributions around ASTs. More importantly, the ANN model requires less than 0.5 seconds to estimate the hydrodynamic pressure distribution, compared to more than 30 CPU hours for the FE model, thereby greatly facilitating future sensitivity, fragility, and reliability studies across a broad range of AST and hazard conditions. To further highlight its predictive capability, the ANN is also compared to other surrogate models. Finally, a method to propagate the error associated with the ANN in fragility or reliability analyses of ASTs is presented.
The authors acknowledge the financial support of the National Science Foundation under award #1635784. The first author was also supported in part by the Natural Sciences and Engineering Research Council of Canada. The authors thank Prof. Clint Dawson for providing the ADCIRC+SWAN results.
The computational resources were provided by the Big-Data Private-Cloud Research Cyberinfrastructure MRI award funded by NSF under grant CNS-1338099 and by Rice University. Any opinions, findings, and conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the sponsors.
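The surrogate-modeling workflow described above, running an expensive solver offline to generate training data and then fitting a cheap approximation, can be illustrated with a toy example. Here a least-squares quadratic stands in for the ANN, and the "expensive" model is a made-up closed form rather than the paper's FE solver:

```python
import random

# Stand-in for an expensive FE solver: a made-up closed form for peak
# hydrodynamic pressure (kPa) as a function of surge depth h (m).
def expensive_model(h):
    return 9.81 * h + 2.0 * h ** 2

def fit_quadratic(xs, ys):
    """Least-squares fit of y = c0 + c1*x + c2*x^2 via the 3x3 normal
    equations, solved with Gaussian elimination (partial pivoting)."""
    n = 3
    A = [[0.0] * n for _ in range(n)]
    b = [0.0] * n
    for x, y in zip(xs, ys):
        phi = [1.0, x, x * x]          # basis functions evaluated at x
        for i in range(n):
            b[i] += phi[i] * y
            for j in range(n):
                A[i][j] += phi[i] * phi[j]
    for col in range(n):               # forward elimination
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * n                 # back substitution
    for i in range(n - 1, -1, -1):
        s = b[i] - sum(A[i][j] * coeffs[j] for j in range(i + 1, n))
        coeffs[i] = s / A[i][i]
    return coeffs

# "Offline" sampling of the design space, then fit the surrogate.
random.seed(0)
train_x = [random.uniform(0.5, 4.0) for _ in range(50)]
train_y = [expensive_model(x) for x in train_x]
c0, c1, c2 = fit_quadratic(train_x, train_y)
pred = c0 + c1 * 2.0 + c2 * 4.0
print(f"surrogate(2.0) = {pred:.2f}, model(2.0) = {expensive_model(2.0):.2f}")
```

Once fitted, the surrogate is evaluated in microseconds, which is what makes the sampling-heavy fragility and reliability analyses tractable.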
Estimation of normalized point-source sensitivity of segment surface specifications for extremely large telescopes
We present a method that estimates the normalized point-source sensitivity (PSSN) of a segmented telescope when information from only a single segment surface is known. The estimation principle is based on a statistical approach with the assumption that all segment surfaces have the same power spectral density (PSD) as the given segment surface. As presented in this paper, the PSSN based on this statistical approach represents a worst-case scenario among statistical random realizations of telescopes in which all segment surfaces have the same PSD. Therefore, this method, which we call the vendor table, is expected to be useful for individual segment specification, such as the segment polishing specification. A specification based on the vendor table can be related directly to a science metric such as PSSN, and it gives the mirror vendors significant flexibility by specifying a single overall PSSN value for them to meet. We build a vendor table for the Thirty Meter Telescope (TMT) and test it using multiple mirror samples from various mirror vendors to prove its practical utility. Accordingly, TMT plans to adopt this vendor table for its M1 segment final mirror polishing requirement.
ANSI/NISO Z39.99-2017 ResourceSync Framework Specification
This ResourceSync specification describes a synchronization framework for the web consisting of various capabilities that allow third-party systems to remain synchronized with a server's evolving resources. The capabilities may be combined in a modular manner to meet local or community requirements. This specification also describes how a server should advertise the synchronization capabilities it supports and how third-party systems may discover this information. The specification repurposes the document formats defined by the Sitemap protocol and introduces extensions to them.
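For illustration, one of the Sitemap-format capability documents the framework defines, a Resource List, can be generated with Python's standard library; this is a minimal sketch, and the example.com URLs and timestamps are placeholders:

```python
import xml.etree.ElementTree as ET

SM = "http://www.sitemaps.org/schemas/sitemap/0.9"   # Sitemap namespace
RS = "http://www.openarchives.org/rs/terms/"          # ResourceSync namespace
ET.register_namespace("", SM)
ET.register_namespace("rs", RS)

# A Resource List reuses the Sitemap <urlset> and declares its
# capability with an <rs:md> element.
urlset = ET.Element(f"{{{SM}}}urlset")
ET.SubElement(urlset, f"{{{RS}}}md", capability="resourcelist")
for loc, lastmod in [
    ("http://example.com/res1", "2017-01-02T00:00:00Z"),
    ("http://example.com/res2", "2017-01-03T00:00:00Z"),
]:
    url = ET.SubElement(urlset, f"{{{SM}}}url")
    ET.SubElement(url, f"{{{SM}}}loc").text = loc
    ET.SubElement(url, f"{{{SM}}}lastmod").text = lastmod

doc = ET.tostring(urlset, encoding="unicode")
print(doc)
```

A third-party system that already consumes Sitemaps can parse such a document with the same tooling, which is the point of repurposing the format.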
The birth of a human-specific neural gene by incomplete duplication and gene fusion
Background: Gene innovation by duplication is a fundamental evolutionary process but is difficult to study in humans due to the large size, high sequence identity, and mosaic nature of segmental duplication blocks. The human-specific gene hydrocephalus-inducing 2, HYDIN2, was generated by a 364 kbp duplication of 79 internal exons of the large ciliary gene HYDIN from chromosome 16q22.2 to chromosome 1q21.1. Because the HYDIN2 locus lacks the ancestral promoter and seven terminal exons of the progenitor gene, we sought to characterize transcription at this locus by coupling reverse transcription polymerase chain reaction and long-read sequencing. Results: 5' RACE indicates a transcription start site for HYDIN2 outside of the duplication, and we observe fusion transcripts spanning both the 5' and 3' breakpoints. We observe extensive splicing diversity leading to the formation of altered open reading frames (ORFs) that appear to be under relaxed selection. We show that HYDIN2 adopted a new promoter that drives an altered pattern of expression, with the highest levels in neural tissues. We estimate that the HYDIN duplication occurred ~3.2 million years ago and find that it is nearly fixed (99.9%) for diploid copy number in contemporary humans. Examination of 73 chromosome 1q21 rearrangement patients reveals that HYDIN2 is deleted or duplicated in most cases. Conclusions: Together, these data support a model of rapid gene innovation by fusion of incomplete segmental duplications, altered tissue expression, and potential subfunctionalization or neofunctionalization of HYDIN2 early in the evolution of the Homo lineage.
Relative Burden of Large CNVs on a Range of Neurodevelopmental Phenotypes
While numerous studies have implicated copy number variants (CNVs) in a range of neurological phenotypes, the impact relative to disease severity has been difficult to ascertain due to small sample sizes, lack of phenotypic detail, and heterogeneity in the platforms used for discovery. Using a customized microarray enriched for genomic hotspots, we assayed for large CNVs among 1,227 individuals with various neurological deficits, including dyslexia (376), sporadic autism (350), and intellectual disability (ID) (501), as well as 337 controls. We show that the frequency of large CNVs (>1 Mbp) is significantly greater for ID-associated phenotypes than for autism (p = 9.58×10^−11, odds ratio = 4.59), dyslexia (p = 3.81×10^−18, odds ratio = 14.45), or controls (p = 2.75×10^−17, odds ratio = 13.71). There is a striking difference in the frequency of rare CNVs (>50 kbp) in autism (10%, p = 2.4×10^−6, odds ratio = 6) or ID (16%, p = 3.55×10^−12, odds ratio = 10) compared to dyslexia (2%), with essentially no difference in large CNV burden between dyslexia patients and controls. Rare CNVs were more likely to arise de novo in ID (64%) than in autism (40%) or dyslexia (0%). We observed a significantly increased large CNV burden in individuals with ID and multiple congenital anomalies (MCA) compared to ID alone (p = 0.001, odds ratio = 2.54). Our data suggest that large CNV burden positively correlates with the severity of childhood disability: ID with MCA is most severely affected, and dyslexics are indistinguishable from controls. When autism without ID was considered separately, the increase in CNV burden relative to controls was modest (p = 0.07, odds ratio = 2.33).
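The odds ratios reported in burden studies like this come from 2x2 contingency tables of carriers versus non-carriers in cases and controls. A minimal sketch of the computation, with a 95% Wald confidence interval; the counts below are hypothetical placeholders, not the study's data:

```python
import math

# Hypothetical 2x2 table: say 80 of 501 ID cases carry a large CNV,
# versus 5 of 337 controls.
a, b = 80, 501 - 80   # cases: carriers, non-carriers
c, d = 5, 337 - 5     # controls: carriers, non-carriers

odds_ratio = (a * d) / (b * c)
# Wald interval on the log scale: ln(OR) +/- 1.96 * SE,
# SE = sqrt(1/a + 1/b + 1/c + 1/d).
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_lo = math.exp(math.log(odds_ratio) - 1.96 * se)
ci_hi = math.exp(math.log(odds_ratio) + 1.96 * se)
print(f"odds ratio = {odds_ratio:.2f}, 95% CI = ({ci_lo:.2f}, {ci_hi:.2f})")
```

An interval excluding 1 indicates a significant enrichment of carriers among cases; the p-values quoted in the abstract come from exact or chi-squared tests on the same tables.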
Contemporary management of cancer of the oral cavity
Oral cancer is a common entity, comprising a third of all head and neck malignant tumors. The options for curative treatment of oral cavity cancer have not changed significantly in the last three decades; however, the work-up, the approach to surveillance, and the options for reconstruction have evolved significantly. Because of the profound functional and cosmetic importance of the oral cavity, management of oral cavity cancers requires a thorough understanding of disease progression, approaches to management, and options for reconstruction. The purpose of this review is to discuss the most current management options for oral cavity cancers.
The James Webb Space Telescope Mission
Twenty-six years ago a small committee report, building on earlier studies, expounded a compelling and poetic vision for the future of astronomy, calling for an infrared-optimized space telescope with an aperture of at least 4 meters. With the support of their governments in the US, Europe, and Canada, 20,000 people realized that vision as the James Webb Space Telescope. A generation of astronomers will celebrate their accomplishments for the life of the mission, potentially as long as 20 years, and beyond. This report and the scientific discoveries that follow are extended thank-you notes to the 20,000 team members. The telescope is working perfectly, with much better image quality than expected. In this and accompanying papers, we give a brief history, describe the observatory, outline its objectives and current observing program, and discuss the inventions and people who made it possible. We cite detailed reports on the design and the measured performance on orbit.
Accepted by PASP for the special issue on The James Webb Space Telescope Overview; 29 pages, 4 figures.
Large expert-curated database for benchmarking document similarity detection in biomedical literature search
Document recommendation systems for locating relevant literature have mostly relied on methods developed a decade ago. This is largely due to the lack of a large offline gold-standard benchmark of relevant documents covering a variety of research fields, against which newly developed literature search techniques can be compared, improved, and translated into practice. To overcome this bottleneck, we have established the RElevant LIterature SearcH consortium, consisting of more than 1,500 scientists from 84 countries, who have collectively annotated the relevance of over 180,000 PubMed-listed articles with regard to their respective seed (input) article(s). The majority of annotations were contributed by highly experienced, original authors of the seed articles. The collected data cover 76% of all unique PubMed Medical Subject Headings descriptors. No systematic biases were observed across different experience levels, research fields, or time spent on annotations. More importantly, annotations of the same document pairs contributed by different scientists were highly concordant. We further show that the three representative baseline methods used to generate recommended articles for evaluation (Okapi Best Matching 25, Term Frequency-Inverse Document Frequency, and PubMed Related Articles) had similar overall performance. Additionally, we found that these methods each tend to produce distinct collections of recommended articles, suggesting that a hybrid method may be required to capture all relevant articles. The established database server, located at https://relishdb.ict.griffith.edu.au, is freely available for downloading the annotation data and for blind testing of new methods. We expect that this benchmark will be useful for stimulating the development of powerful new techniques for title- and title/abstract-based searches for relevant articles in biomedical research.
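One of the baselines evaluated above, Term Frequency-Inverse Document Frequency, ranks candidate articles by the cosine similarity of their TF-IDF vectors to a seed article. A self-contained sketch on a toy corpus (the snippets below are invented stand-ins for article titles):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """TF-IDF vectors (as term->weight dicts) for a small corpus,
    with idf(t) = log(N / df(t)) + 1 so shared terms still count."""
    tokenized = [d.lower().split() for d in docs]
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))
    n = len(docs)
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}
    vecs = []
    for toks in tokenized:
        tf = Counter(toks)
        vecs.append({t: tf[t] / len(toks) * idf[t] for t in tf})
    return vecs

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv)

docs = [
    "copy number variants in autism and intellectual disability",  # seed
    "large copy number variants and neurodevelopmental phenotypes",
    "storm surge loads on aboveground storage tanks",
]
vecs = tfidf_vectors(docs)
print(f"sim(seed, doc1) = {cosine(vecs[0], vecs[1]):.3f}")
print(f"sim(seed, doc2) = {cosine(vecs[0], vecs[2]):.3f}")
```

A benchmark like the one described here supplies the human relevance judgments against which such rankings, and stronger methods like Okapi BM25, can be scored.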