101 research outputs found

    An Equivalent Point-Source Stochastic Model of the NGA-East Ground-Motion Models and a Seismological Method for Estimating the Long-Period Transition Period TL

    This dissertation deals with the stochastic simulation of the Next Generation Attenuation-East (NGA-East) ground-motion models and proposes a new method of calculating the long-period transition period parameter, TL, used in seismic building codes. The work is carried out in two related studies. In the first study, a set of correlated and consistent seismological parameters is estimated for the Central and Eastern United States (CEUS) by inverting the median 5%-damped pseudo-spectral acceleration (PSA) predicted by the NGA-East ground-motion models (GMMs). Together, these seismological parameters form a point-source stochastic GMM. Magnitude-specific inversions are performed for moment magnitudes Mw 4.0-8.0, rupture distances Rrup = 1-1000 km, periods T = 0.01-10 s, and National Earthquake Hazards Reduction Program (NEHRP) site class A conditions. In the second study, the long-period transition period parameter TL is investigated, and an alternative seismological approach is used to calculate it. TL is used in determining the design spectral acceleration of long-period structures. Its estimation has remained unchanged since its original introduction in FEMA 450-1 (2003); the calculation is loosely based on a correlation between moment magnitude Mw and TL that does not account for differences in seismological parameters across regions of the country. This study calculates TL from the definition of the corner period and includes two seismological parameters in its estimation: the stress parameter Δσ and the crustal shear-wave velocity in the source region β. The results yield a generally more conservative (longer) estimate of TL than the one currently used in engineering design standards.
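    The corner-period idea behind the proposed TL estimate can be sketched in a few lines. As a hedged illustration (not the dissertation's exact formulation), the Brune point-source model ties the corner frequency fc to the seismic moment M0, the stress parameter Δσ, and the shear-wave velocity β; TL is then taken as the corner period 1/fc:

```python
def corner_period_TL(Mw, delta_sigma_bars, beta_km_s):
    """Corner period T = 1/fc from the Brune point-source model.

    Illustrative sketch only: the dissertation's exact formulation
    and unit conventions may differ.
    """
    # Seismic moment M0 in dyne-cm (Hanks & Kanamori moment-magnitude relation).
    M0 = 10 ** (1.5 * Mw + 16.05)
    # Brune corner frequency in Hz (beta in km/s, stress parameter in bars).
    fc = 4.9e6 * beta_km_s * (delta_sigma_bars / M0) ** (1.0 / 3.0)
    # TL taken as the corner period.
    return 1.0 / fc
```

    Because M0 grows with Mw while fc shrinks as M0^(-1/3), this definition makes TL lengthen with magnitude and, unlike a single nationwide Mw-TL correlation, respond directly to regional values of Δσ and β.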

    Minimization of Cost and CO2 Emissions for Rectangular Spread Footings Subjected to Biaxial Loading

    A Big Bang-Big Crunch (BB-BC) optimization algorithm was applied to the analysis and design of reinforced concrete spread footings subjected to concentric, uniaxial, and biaxial loading. For spread footings subjected to eccentric loading conditions, it is convenient to assume that the entire base of the footing remains in contact with the soil, resulting in a compressive bearing pressure distribution. However, this assumption does not accurately describe the nature of the bearing pressure distribution. Analysis procedures were developed for spread footings subjected to eccentric loading conditions that allow uniaxial and biaxial uplift. From these formulations, an analysis chart of the bearing pressure surface equations for one, two, and three detached footing corners was developed to determine the percentages of detachment along the edges of a spread footing subjected to biaxial uplift. In addition to assuming that the entire footing base remains in compression, it is common to make several other simplifying assumptions when designing spread footings subjected to uniaxial and biaxial loading. A BB-BC optimization algorithm is applied to compare spread footing designs based on theoretical analysis procedures with designs based on simplifying assumptions. Since cost has always been an integral part of engineering design and CO2 emissions are a growing concern, a multi-objective optimization was used to develop relationships between cost and the CO2 emissions associated with the design of reinforced spread footings subjected to concentric, uniaxial, and biaxial loading.
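    As a hedged sketch of the optimizer's mechanics (the footing design variables, cost models, and constraints of the study are not reproduced here), a minimal Big Bang-Big Crunch loop alternates a random "bang" of candidates around a fitness-weighted centre of mass with a "crunch" that recomputes that centre, shrinking the search radius each iteration:

```python
import random

def big_bang_big_crunch(fitness, bounds, n_candidates=40, n_iters=60, seed=1):
    """Minimal BB-BC minimizer; bounds is a list of (low, high) per variable."""
    rng = random.Random(seed)
    dim = len(bounds)
    # Initial Big Bang: uniform random candidates over the design space.
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_candidates)]
    best, best_f = None, float("inf")
    for k in range(1, n_iters + 1):
        fits = [fitness(x) for x in pop]
        # Track the best design found so far.
        for x, f in zip(pop, fits):
            if f < best_f:
                best, best_f = list(x), f
        # Big Crunch: fitness-weighted centre of mass (better designs weigh more).
        weights = [1.0 / (f + 1e-12) for f in fits]
        total = sum(weights)
        centre = [sum(w * x[d] for w, x in zip(weights, pop)) / total
                  for d in range(dim)]
        # Big Bang: scatter new candidates around the centre; the spread
        # shrinks as 1/k so the search gradually focuses.
        pop = []
        for _ in range(n_candidates):
            cand = []
            for d, (lo, hi) in enumerate(bounds):
                x = centre[d] + rng.gauss(0.0, 1.0) * (hi - lo) / (2.0 * k)
                cand.append(min(max(x, lo), hi))
            pop.append(cand)
    return best, best_f
```

    For the footing problem, fitness would be the (constraint-penalized) cost or CO2 objective and bounds the footing dimensions and reinforcement quantities; the multi-objective version of the study would trade the two objectives off rather than minimize one.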

    Statistical Analysis of the Seismic Vulnerability of Mid-South Building Structures

    A study of buildings in Shelby County, Tennessee and Tipton County, Tennessee was conducted using a sidewalk survey procedure developed by the Federal Emergency Management Agency (FEMA), known as a Rapid Visual Survey (RVS). Its purpose is to identify buildings that are potentially at risk in a seismic event. A database of these buildings was generated from the data gathered in the RVS procedure. A loss estimation program developed by FEMA, known as HAZUS-MH MR3, was used to perform a more detailed analysis of the structures using user-defined ground motion maps. A ranking of the structures was developed based on the RVS procedure and the HAZUS output. HAZUS-MH MR3, developed by FEMA, estimates structural and non-structural losses for a variety of hazards. In this study, three earthquake scenarios were analyzed: a magnitude 6.5 earthquake based on site-specific ground motion maps, a magnitude 7.7 earthquake based on site-specific ground motion maps, and a magnitude 7.7 earthquake based on ground motion maps provided by the United States Geological Survey (USGS). All of these ground motion maps simulate a desired earthquake scenario for performing a loss estimate of the buildings; however, the site-specific, user-supplied maps have many more unique ground motion parameters than the USGS maps. HAZUS provides loss estimates by computing damage state probabilities for each building. One objective of this research is to develop a prioritization of the structures based on building performance from the HAZUS loss estimate and the RVS procedure, which could aid emergency planners in selecting suitable locations to be used as mass population shelters in the case of a seismic event. The second objective is to assess, through statistical analysis and hypothesis testing, how well the RVS procedure performs in identifying structures that may be seismically at risk as compared to the HAZUS output.
The results of this objective can be used to determine whether the RVS procedure is suitable for the seismic evaluation of structures or whether a more detailed, site-specific analysis should be performed using hazard software such as HAZUS. The third objective is to investigate, using statistical analysis, how building type and construction time period affect the results of HAZUS and the RVS. The results of the third objective can help determine which construction materials perform better in a seismic event, which has structural design applications for regions of high seismicity. The last objective is to examine how HAZUS loss estimates based on site-specific ground motion maps compare with those based on the maps provided by the USGS.
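    The abstract does not name the specific statistical tests used to compare the RVS and HAZUS prioritizations. As a hypothetical illustration only, one natural choice for comparing two priority rankings of the same buildings is Spearman's rank correlation, which can be computed directly (no tie handling in this sketch):

```python
def spearman_rho(a, b):
    """Spearman rank correlation between two equal-length score lists.

    Illustrative sketch assuming no tied scores; real survey data
    would need tie-aware ranking.
    """
    def rank(values):
        # Rank 1 = smallest value.
        order = sorted(range(len(values)), key=lambda i: values[i])
        ranks = [0.0] * len(values)
        for pos, i in enumerate(order):
            ranks[i] = pos + 1.0
        return ranks

    ra, rb = rank(a), rank(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1.0 - 6.0 * d2 / (n * (n ** 2 - 1))
```

    A rho near 1 would indicate that the cheap RVS screening orders buildings much as the detailed HAZUS analysis does; a hypothesis test on rho would then ask whether that agreement could have arisen by chance.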

    The representation of the verb's argument structure as disclosed by fMRI

    <p>Abstract</p> <p>Background</p> <p>In the composition of an event, the verb's argument structure defines the number of participants and their relationships. Previous studies indicated distinct brain responses depending on how many obligatory arguments a verb takes. The present functional magnetic resonance imaging (fMRI) study served to verify the neural structures involved in the processing of German verbs with a one-argument (e.g. "snore") or three-argument (e.g. "gives") structure. Within a silent reading design, verbs were presented either in isolation or with a minimal syntactic context ("snore" vs. "Peter snores").</p> <p>Results</p> <p>Reading of isolated one-argument verbs ("snore") produced stronger BOLD responses than three-argument verbs ("gives") in the inferior temporal fusiform gyrus (BA 37) of the left hemisphere, validating previous magnetoencephalographic findings. When presented in context, one-argument verbs ("Peter snores") induced more pronounced activity in the inferior frontal gyrus (IFG) of the left hemisphere than three-argument verbs ("Peter gives").</p> <p>Conclusion</p> <p>In line with previous studies, our results corroborate the left temporal lobe as the site of representation and the IFG as the site of processing of verbs' argument structure.</p>

    From Lateral Flow Devices to a Novel Nano-Color Microfluidic Assay

    Improving the performance of traditional diagnostic lateral flow assays, combined with new manufacturing technologies, is a primary goal in the research and development plans of diagnostic companies. Considering the components of lateral flow diagnostic test kits, innovation can include modification of labels, materials, and device design. In recent years, Resonance-Enhanced Absorption (REA) of metal nanoparticles has shown excellent applicability in bio-sensing for the detection of a variety of bio-molecular binding interactions. In a novel approach, we have now integrated REA assays in a diagnostic microfluidic setup, resolving the bottleneck of long incubation times inherent in previously existing REA assays, and simultaneously integrated automated fabrication techniques for diagnostics manufacture. Owing to its compatibility with roller-coating-based technology and its chemical resistance, we used PET-co-polyester as a substrate and a CO2 laser ablation system as a fast, highly precise, and contactless alternative to classical micro-milling. It was possible to detect biological binding within three minutes, visible to the eye as a colored text readout within the REA-fluidic device. A two-minute in-situ silver enhancement could further enhance the resonant color, if required.

    Editorial: Nanotechnological Advances in Biosensors

    A biosensor is a physicochemical or hybrid physical-chemical-biological device that detects a biological molecule, organism, or process. Because of the nature of their targets, biosensors need to be faster, smaller, more sensitive, and more specific than nearly all of their physicochemical counterparts or the traditional methods that they are designed to replace. Speed is of the essence in medical diagnosis as it permits rapid, accurate treatment and does not allow patients to be lost to follow-up. Small size and greater sensitivity mean less-invasive sampling and detection of molecules such as neurotransmitters or hormones at biologically relevant levels. Greater specificity allows assays to be performed in complex fluids such as blood or urine without false negative or false positive results. [...]

    Representation of the verb's argument-structure in the human brain

    <p>Abstract</p> <p>Background</p> <p>A verb's argument structure defines the number and relationships of participants needed for a complete event. One-argument (intransitive) verbs require only a subject to make a complete sentence, while two- and three-argument verbs (transitives and ditransitives) normally take direct and indirect objects. Cortical responses to verbs embedded into sentences (correct or with syntactic violations) indicate the processing of the verb's argument structure in the human brain. The two experiments of the present study examined whether and how this processing is reflected in distinct spatio-temporal cortical response patterns to isolated verbs and/or verbs presented in minimal context.</p> <p>Results</p> <p>The magnetoencephalogram was recorded while 22 native German-speaking adults saw 130 German verbs, presented one at a time for 150 ms each in experiment 1. Verb-evoked electromagnetic responses at 250-300 ms after stimulus onset, analyzed in source space, were higher in the left middle temporal gyrus for verbs that take only one argument, relative to two- and three-argument verbs. In experiment 2, the same verbs (presented in a different order) were preceded by a proper name specifying the subject of the verb. This produced additional activation between 350 and 450 ms in or near the left inferior frontal gyrus, with activity being larger and peaking earlier for one-argument verbs, which required no further arguments to form a complete sentence.</p> <p>Conclusion</p> <p>Localization of sources of activity suggests that the activation in temporal and frontal regions varies with the degree to which representations of an event, as part of the verb's semantics, are completed during parsing.</p>

    Word Processing differences between dyslexic and control children

    BACKGROUND: The aim of this study was to investigate brain responses triggered by different word classes in dyslexic and control children. The majority of dyslexic children have difficulty phonologically assembling a word from sublexical parts following grapheme-to-phoneme correspondences. Therefore, we hypothesised that dyslexic children should differ from controls mainly when processing low-frequency words that are unfamiliar to the reader. METHODS: We presented different word classes (high- and low-frequency words, pseudowords) in a rapid serial visual presentation (RSVP) design and performed wavelet analysis on the evoked activity. RESULTS: Dyslexic children had lower evoked power amplitudes and a higher spectral frequency for low-frequency words compared to control children. No group differences were found for high-frequency words and pseudowords. Control children had higher evoked power amplitudes and a lower spectral frequency for low-frequency words compared to high-frequency words and pseudowords. This pattern was not present in the dyslexic group. CONCLUSION: Dyslexic children differed from control children only in their brain responses to low-frequency words, while showing no modulation of brain activity across the three word types. This might support the hypothesis that dyslexic children are selectively impaired in reading words that require sublexical processing. However, the lack of differences between word types raises the question of whether dyslexic children were able to adequately process words presented in rapid serial fashion. Therefore, the present results should be interpreted as evidence for a specific sublexical processing deficit only with caution.