
    Deep-Water Near-Bottom Turbulence in Lake Michigan: An Underwater Investigation

    Motivated by a need to characterize near-bottom deep-water turbulence in order to understand the filtration capabilities of invasive quagga mussels, an instrument tripod was deployed in Lake Michigan for six months in 60 m of water to measure current velocities, with particular attention to near-bottom (0.10 to 0.95 m above bottom) velocities. The deployment period (September 2012-April 2013) was characterized by very little stratification, with a median temperature that was roughly uniform throughout the water column. A mean horizontal velocity of 3.6 cm/s, with a standard deviation of 2 cm/s, was measured at 1 m above the lake bed. Despite the 60 m depth of the measurement site, surface waves with periods between 6.5 and 12.5 seconds were found to influence near-bottom velocities for a significant fraction of the time. Velocity fluctuations were used to quantify turbulence through turbulent kinetic energy (TKE) calculations, while simple spectral analysis was used to verify TKE levels and identify possible wave contamination. At distances greater than 500 z⁺ from the bed, TKE levels follow canonical scaling, with values of approximately 5. Very near the bottom, however, TKE levels are greatly elevated relative to expected values, which we speculate may be due to mussel-induced currents. These conclusions, coupled with further modeling, will allow the development of mussel-influence models important to understanding the impact of these invasive species.
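The TKE calculation from velocity fluctuations described in the abstract can be sketched as follows. All numbers (burst length, velocity statistics) are invented for illustration and are not the study's data; only the Reynolds-decomposition formula itself is standard.

```python
import numpy as np

# Hypothetical ten-minute burst of velocity samples (m/s); values are
# invented to roughly match the magnitudes quoted in the abstract.
rng = np.random.default_rng(0)
u = 0.036 + 0.005 * rng.standard_normal(2400)  # east component
v = 0.000 + 0.005 * rng.standard_normal(2400)  # north component
w = 0.000 + 0.002 * rng.standard_normal(2400)  # vertical component

def turbulent_kinetic_energy(u, v, w):
    """TKE per unit mass: k = 0.5 * (u'^2 + v'^2 + w'^2), where primes
    denote fluctuations about the burst mean (Reynolds decomposition)."""
    up = u - u.mean()
    vp = v - v.mean()
    wp = w - w.mean()
    return 0.5 * (np.mean(up**2) + np.mean(vp**2) + np.mean(wp**2))

k = turbulent_kinetic_energy(u, v, w)
print(f"TKE = {k:.2e} m^2/s^2")
```

In practice the fluctuation series would also be screened for surface-wave contamination (e.g. via spectral analysis) before computing k, as the abstract notes.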

    Comparison of Fuzzy Clustering Methods and Their Applications to Geophysics Data

    Fuzzy clustering algorithms are helpful when a dataset contains subgroupings of points with indistinct boundaries and overlap between clusters. Traditional methods have been extensively studied and used on real-world data, but require users to have some a priori knowledge of the outcome in order to determine how many clusters to look for. Additionally, iterative algorithms choose the optimal number of clusters based on one of several performance measures. In this study, the authors compare the performance of three algorithms (fuzzy c-means, Gustafson-Kessel, and an iterative version of Gustafson-Kessel) when clustering a traditional data set as well as real-world geophysics data collected from an archaeological site in Wyoming. Areas of interest in the data were identified using both a crisp cutoff value and a fuzzy α-cut to determine which provided better elimination of noise and non-relevant points. Results indicate that the α-cut method eliminates more noise than the crisp cutoff value, and that the iterative version of the fuzzy clustering algorithm is able to select an optimum number of subclusters within a point set (in both the traditional and the real-world data), leading to proper identification of regions of interest for further expert analysis.
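A minimal sketch of fuzzy c-means with an α-cut, the baseline approach compared in this study. The synthetic two-blob data, fuzzifier m = 2, and α = 0.9 are illustrative assumptions, not the study's settings.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means: returns cluster centers and the fuzzy
    membership matrix U (n_points x c), with each row summing to 1."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                 # avoid division by zero
        U = 1.0 / (d ** (2.0 / (m - 1.0)))    # u_ik proportional to d_ik^(-2/(m-1))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# Two overlapping 2-D blobs plus scattered noise points.
rng = np.random.default_rng(1)
A = rng.normal([0, 0], 0.3, size=(50, 2))
B = rng.normal([2, 2], 0.3, size=(50, 2))
noise = rng.uniform(-1, 3, size=(10, 2))
X = np.vstack([A, B, noise])

centers, U = fuzzy_c_means(X, c=2)

# alpha-cut: keep only points whose maximum membership exceeds alpha;
# ambiguous or noisy points are discarded rather than forced into a cluster.
alpha = 0.9
kept = U.max(axis=1) >= alpha
print(f"{kept.sum()} of {len(X)} points survive the alpha-cut")
```

The Gustafson-Kessel variant differs mainly in using a cluster-specific Mahalanobis distance in place of the Euclidean distance above, allowing elongated clusters.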

    Velocity Profiling, Turbulence, and Chlorophyll Concentrations in the Bottom Boundary Layer of Lake Michigan near Muskegon, Michigan

    The characterization of water flow and turbulence near lake beds is important for modelling environmental and ecological effects throughout a lake. In Lake Michigan, where invasive filter-feeding quagga mussels dominate the lake bed, turbulence plays an important role in determining how much chlorophyll is mixed down to the mussels. A large tripod equipped with an acoustic Doppler velocimeter (ADV), a fluorometer to measure chlorophyll concentration, and a temperature sensor was deployed in 44 m of water in Lake Michigan near Muskegon, MI. Measurements were recorded from late May until early August: velocities were sampled every hour in ten-minute bursts at 4 Hz, while temperature and concentration were sampled continuously, approximately every minute. Several important turbulence parameters were calculated from the collected data. Chlorophyll data from the site showed that the water column displayed a concentration boundary layer (CBL), in which chlorophyll concentration increases with distance from the lake floor. The median speed (U = 2.85 cm/s) and turbulent kinetic energy (TKE = 2.1 × 10⁻⁵ m²/s²) were also calculated. These results have previously had very little documentation in such deep waters. The observation of a CBL shows that invasive quagga mussels are able to drastically alter chlorophyll concentrations near the lake floor, an important result for future modeling efforts. The quantification of turbulence parameters will be useful in further studies seeking causal links between turbulence levels and chlorophyll concentrations.
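The burst-sampling scheme described above (hourly ten-minute bursts at 4 Hz) can be processed as sketched below; the velocity values are invented to roughly match the quoted median speed, not the study's measurements.

```python
import numpy as np

# Hypothetical hourly ten-minute bursts sampled at 4 Hz (2400 samples each),
# mimicking the ADV sampling scheme described above; all values are invented.
rng = np.random.default_rng(2)
n_bursts, n_samples = 24, 2400
u = 0.0285 + 0.006 * rng.standard_normal((n_bursts, n_samples))  # east (m/s)
v = 0.004 * rng.standard_normal((n_bursts, n_samples))           # north (m/s)

# Burst-mean horizontal speed, then the median over all bursts.
speed = np.hypot(u.mean(axis=1), v.mean(axis=1))
U_median = np.median(speed)

# Per-burst TKE from fluctuations about each burst mean (horizontal terms
# only, since this toy example generates no vertical component).
up = u - u.mean(axis=1, keepdims=True)
vp = v - v.mean(axis=1, keepdims=True)
tke = 0.5 * (np.mean(up**2, axis=1) + np.mean(vp**2, axis=1))

print(f"median burst speed = {U_median * 100:.2f} cm/s")
```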

    Small-Scale Wind Power Dispatchable Energy Source Modeling

    Wind energy is an important intermittent renewable resource for micro-grid applications. This work models and analyzes the components of a wind turbine generator (WTG) system that stores energy and supplies loads from that stored energy. Focus is placed on storing energy in a lead-acid battery and using the battery with an inverter as a dispatchable energy source, so that the storage device and inverter together behave like a steam power plant generator. The small-scale system consists of a wind turbine, wind generator, loads, DC-DC converter, AC-DC inverter, controller, and battery. The desired power delivered to each load is used to determine the characteristics of the wind turbine system; wind speed, power, and battery charging/discharging characteristics are presented. A real small-scale system was built to present the system and its components in detail, and the model's components were tested together with other distributed system models in order to evaluate and predict overall system performance. The result is an operational wind power system for a small-scale micro-grid application. The experimental test-bed supplies an artificial neural network (ANN) model with real training data. A feed-forward back-propagation network models battery discharge, with time as input and voltage, ampere-hours, and power (watts) as outputs. The network consists of two layers: a hidden layer with a log-sigmoid activation (two neurons) and an output layer with a linear ("pure-line") activation (three neurons), chosen to exploit the neural network's ability to interpolate between points and along curves. The back-propagation model was built with a suitable number of layers and neurons, and was checked and verified by comparing actual and ANN-predicted values, showing low error and an excellent regression factor, implying good accuracy.
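The network architecture described (log-sigmoid hidden layer of two neurons, linear output layer of three neurons, time in, voltage/ampere-hours/power out) can be sketched as a forward pass. The weights below are random placeholders purely to show the shapes involved; in the paper they would be fitted by back-propagation on the test-bed data.

```python
import numpy as np

def logsig(x):
    """Log-sigmoid activation, as used in the hidden layer."""
    return 1.0 / (1.0 + np.exp(-x))

def forward(t, W1, b1, W2, b2):
    """Two-layer feed-forward net: log-sigmoid hidden layer (2 neurons)
    followed by a linear ("pure-line") output layer (3 neurons), mapping
    discharge time -> (voltage, ampere-hours, power)."""
    h = logsig(W1 @ t + b1)   # hidden activations, shape (2, n)
    return W2 @ h + b2        # linear outputs, shape (3, n)

# Hypothetical (untrained) weights, just to demonstrate the architecture.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((2, 1)), rng.standard_normal((2, 1))
W2, b2 = rng.standard_normal((3, 2)), rng.standard_normal((3, 1))

t = np.linspace(0.0, 10.0, 5).reshape(1, -1)  # discharge time, hours
y = forward(t, W1, b1, W2, b2)
print(y.shape)  # (3, 5): voltage, ampere-hours, power at each time step
```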

    A Unified Near Infrared Spectral Classification Scheme for T Dwarfs

    A revised near-infrared classification scheme for T dwarfs is presented, based on and superseding prior schemes developed by Burgasser et al. and Geballe et al., and defined following the precepts of the MK Process. Drawing from two large spectroscopic libraries of T dwarfs identified largely in the Sloan Digital Sky Survey and the Two Micron All Sky Survey, nine primary spectral standards and five alternate standards spanning spectral types T0 to T8 are identified that satisfy criteria of spectral character, brightness, absence of a resolved companion, and accessibility from both northern and southern hemispheres. The classification of T dwarfs is formally made by direct comparison of near-infrared spectral data of equivalent resolution to the spectra of these standards. Alternately, we have redefined five key spectral indices measuring the strengths of the major H₂O and CH₄ bands in the 1-2.5 micron region that may be used as a proxy for direct spectral comparison. Two methods of determining T spectral type using these indices are outlined and yield equivalent results. These classifications are also equivalent to those from prior schemes, implying that no revision of existing spectral type trends is required. The one-dimensional scheme presented here provides a first step toward the observational characterization of the lowest-luminosity brown dwarfs currently known. Future extensions to incorporate spectral variations arising from differences in photospheric dust content, gravity and metallicity are briefly discussed. A compendium of all currently known T dwarfs with updated classifications is presented. Comment: 52 pages, 11 figures; accepted for publication to Ap
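A spectral index of the kind used for T dwarf typing is a ratio of flux in an absorption band to flux in a nearby pseudo-continuum band. The sketch below illustrates the idea on a toy spectrum; the band limits and the synthetic spectrum are invented for illustration and are not the published index definitions.

```python
import numpy as np

def band_index(wavelength, flux, num_band, den_band):
    """Ratio of mean flux in an absorption band (num_band) to mean flux
    in a nearby pseudo-continuum band (den_band); band edges in microns.
    Band limits here are illustrative, not the published definitions."""
    num = flux[(wavelength >= num_band[0]) & (wavelength <= num_band[1])].mean()
    den = flux[(wavelength >= den_band[0]) & (wavelength <= den_band[1])].mean()
    return num / den

# Toy spectrum: flat continuum with a dip imitating a CH4 absorption band.
wl = np.linspace(1.0, 2.5, 1500)
fx = np.ones_like(wl)
fx[(wl > 1.60) & (wl < 1.70)] = 0.4  # absorption trough

idx = band_index(wl, fx, num_band=(1.60, 1.70), den_band=(1.50, 1.56))
print(f"index = {idx:.2f}")  # deeper band -> smaller index -> later type
```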

    The prevalence of psychosis in epilepsy: a systematic review and meta-analysis.

    BACKGROUND: Epilepsy has long been considered a risk factor for psychosis. However, there is a lack of consistency across studies in the estimated effect size of this risk, reflecting methodological differences between studies and changing diagnostic classifications within neurology and psychiatry. The aim of this study was to assess the prevalence of psychosis in epilepsy and to estimate the risk of psychosis among individuals with epilepsy compared with controls. METHODS: A systematic review and meta-analysis was conducted of all published literature on prevalence rates of psychosis in epilepsy, using the electronic databases PubMed, Ovid MEDLINE, PsycINFO and Embase from their inception until September 2010, with the following search terms: prevalence, incidence, rate, rates, psychosis, schizophrenia, schizophreniform illness, epilepsy, seizures, temporal lobe epilepsy. RESULTS: The literature search and search of reference lists yielded 215 papers. Of these, 58 (27%) had data relevant to the review; 157 were excluded following more detailed assessment. 10% of the included studies were population-based. The pooled odds ratio for risk of psychosis among people with epilepsy compared with controls was 7.8. The pooled estimate of the prevalence of psychosis in epilepsy was 5.6% (95% CI: 4.8-6.4). There was a high level of heterogeneity. The prevalence of psychosis in temporal lobe epilepsy was 7% (95% CI: 4.9-9.1); the prevalence of interictal psychosis was 5.2% (95% CI: 3.3-7.2); and the prevalence of postictal psychosis was 2% (95% CI: 1.2-2.8). CONCLUSIONS: Our systematic review found that up to 6% of individuals with epilepsy have a co-morbid psychotic illness and that patients have an almost eightfold increased risk of psychosis. The prevalence of psychosis is higher in temporal lobe epilepsy (7%). We suggest that further investigation of this association could give clues to the aetiology of psychosis.
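The pooling step of a prevalence meta-analysis can be sketched as below. This is a simplified fixed-effect inverse-variance pooling; the per-study counts are invented, and the paper's actual analysis would additionally address the high heterogeneity it reports (e.g. with a random-effects model).

```python
import math

# Hypothetical per-study data: (cases of psychosis, study sample size).
studies = [(14, 250), (33, 600), (9, 180), (27, 480), (11, 210)]

def pooled_prevalence(studies):
    """Fixed-effect inverse-variance pooling of proportions: each study's
    prevalence is weighted by the reciprocal of its binomial variance."""
    weights, weighted = 0.0, 0.0
    for cases, n in studies:
        p = cases / n
        var = p * (1 - p) / n          # binomial variance of a proportion
        w = 1.0 / var
        weights += w
        weighted += w * p
    p_hat = weighted / weights
    se = math.sqrt(1.0 / weights)      # standard error of the pooled estimate
    return p_hat, (p_hat - 1.96 * se, p_hat + 1.96 * se)

p, (lo, hi) = pooled_prevalence(studies)
print(f"pooled prevalence = {p:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```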

    A Mechanism for Genome Size Reduction Following Genomic Rearrangements

    The factors behind genome size evolution have been of great interest, given that eukaryotic genomes vary in size by more than three orders of magnitude. Using a model of two wild peanut relatives, Arachis duranensis and Arachis ipaensis, in which one genome experienced large rearrangements, we find that the main determinant of genome size reduction is a set of inversions that occurred in A. duranensis, followed by net sequence removal in the inverted regions. We observe a general pattern in which sequence is lost more rapidly at newly distal (telomeric) regions than it is gained at newly proximal (pericentromeric) regions, resulting in net sequence loss in the inverted regions. The major driver of this process is recombination, which is determined by chromosomal location: any type of genomic rearrangement that exposes proximal regions to higher recombination rates can cause genome size reduction by this mechanism. In comparisons between A. duranensis and A. ipaensis, we find that the inversions all occurred in A. duranensis, and that sequence loss in those regions was primarily due to removal of transposable elements. Illegitimate recombination, rather than unequal intrastrand recombination, is likely the major mechanism responsible for the sequence removal. We also measure the relative rate of genome size reduction in these two Arachis diploids, and we test our model in other plant species, finding that it applies in all cases examined, suggesting that the model is widely applicable.
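The net-loss arithmetic behind the mechanism described above can be illustrated as follows; the rates and divergence time are invented numbers, not measurements from the paper.

```python
# Toy illustration of the net-loss arithmetic: after an inversion, a formerly
# pericentromeric segment becomes distal (high recombination, fast sequence
# loss) while a formerly distal segment becomes proximal (low recombination,
# slow sequence gain). All numbers below are invented for illustration.
loss_rate_distal = 12.0    # kb lost per Myr in the newly distal region
gain_rate_proximal = 3.0   # kb gained per Myr in the newly proximal region
myr = 2.0                  # divergence time, millions of years

# Net change across the inverted region: gain minus loss, over time.
net_change = (gain_rate_proximal - loss_rate_distal) * myr
print(f"net change in inverted region: {net_change:+.0f} kb")  # negative -> shrinkage
```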