
    A Practical Compensation Method for Differential Column Shortenings in High-rise Reinforced Concrete Buildings

    High-rise reinforced concrete buildings offer technical, economic and environmental advantages for high-density development, and they have become a distinctive feature of densely populated urban areas around the world. As a result, the structural design of high-rise reinforced concrete buildings has come to the fore, and serviceability requirements in particular have gained interest. Differential shortening of vertical members is one such serviceability requirement; however, only a limited number of studies exist. In this study, a practical compensation method is proposed for the differential shortening of columns and shear walls in high-rise reinforced concrete buildings. In the proposed method, vertical members are grouped and the total error is minimized by penalizing the larger shortening differences within the groups, which simplifies the construction process. To validate the proposed method, a 32-storey high-rise building built in Izmir, Turkey was investigated, considering both the construction sequence and time-dependent effects such as shrinkage and creep. The vertical shortenings of the columns and shear walls in the tower part of the building were calculated. The uniform-grouped compensation method and the proposed penalized-errors compensation method, using the L1-norm and the L2-norm, were applied to the differential shortenings of the columns and shear walls for different numbers of member groups. The magnitude of the errors for each compensation method is presented and evaluated. The results of the numerical study reveal that the proposed penalized-errors compensation method efficiently determines the compensation values while minimizing the maximum errors.
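    The role of the error norm in a grouped compensation scheme can be illustrated with a toy calculation. The sketch below uses a hypothetical group of members with made-up shortening values, not data from the Izmir building or the authors' actual method: for a single compensation value per group, each norm has a well-known minimizer.

```python
import numpy as np

# Predicted vertical shortenings (mm) for one hypothetical group of members.
shortenings = np.array([18.0, 21.5, 19.2, 24.0, 20.3])

# A single compensation value c per group minimizes a different error measure
# depending on the chosen norm:
c_l1 = np.median(shortenings)                        # minimizes sum |s_i - c| (L1)
c_l2 = np.mean(shortenings)                          # minimizes sum (s_i - c)^2 (L2)
c_inf = (shortenings.min() + shortenings.max()) / 2  # minimizes max |s_i - c| (L-inf)

for name, c in [("L1", c_l1), ("L2", c_l2), ("L-inf", c_inf)]:
    errors = np.abs(shortenings - c)
    print(f"{name}: c = {c:.2f} mm, max error = {errors.max():.2f} mm")
```

    Penalizing the larger differences, as the proposed method does, pushes the group value toward the L-infinity behaviour, which keeps the worst-case error within a group small.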

    Evaluating Steady-State Visually Evoked Potentials-Based Brain-Computer Interface System Using Wavelet Features and Various Machine Learning Methods

    Steady-state visual evoked potentials (SSVEPs) have been shown to be suitable for, and are in use in, many areas such as clinical neuroscience, cognitive science, and engineering. SSVEPs have recently become popular due to their advantages, including a high bit rate, a simple system structure, and a short training time. To design an SSVEP-based BCI system, signal processing methods appropriate to the signal structure should be applied. One of the most appropriate signal processing methods for these non-stationary signals is the wavelet transform. In this study, we investigated both the effect of the choice of mother wavelet function and the most successful combination of classifier algorithm, wavelet features, and frequency pairs assigned to BCI commands. SSVEP signals recorded at seven different stimulus frequencies (6, 6.5, 7, 7.5, 8.2, 9.3, and 10 Hz) were used. A total of 115 features were extracted from the time, frequency, and time-frequency domains, and these features were classified by seven different classification processes. Classification performance is reported as accuracy values obtained with the 5-fold cross-validation method. According to the results, (I) the most successful wavelet function was the Haar wavelet, (II) the most successful classifier was Ensemble Learning, (III) a feature vector consisting of energy, entropy, and variance features yielded higher accuracy than any one of these features alone, and (IV) the highest performances were obtained for the frequency pairs 6–10, 6.5–10, 7–10, and 7.5–10 Hz.
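    The energy, entropy, and variance wavelet features named above can be sketched roughly as follows. This is an illustrative sketch with a hand-rolled Haar transform and a synthetic signal, not the study's recordings, feature definitions, or code:

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar wavelet transform: approximation and detail."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def wavelet_features(signal, levels=3):
    """Energy, Shannon entropy, and variance of detail coefficients per level."""
    feats = []
    a = signal
    for _ in range(levels):
        a, d = haar_dwt(a)
        energy = np.sum(d ** 2)
        p = d ** 2 / (energy + 1e-12)           # normalized coefficient energies
        entropy = -np.sum(p * np.log2(p + 1e-12))
        feats += [energy, entropy, np.var(d)]
    return np.array(feats)

# Illustrative 8 Hz "SSVEP-like" sine sampled at 256 Hz (synthetic, not real EEG).
t = np.arange(0, 2, 1 / 256)
sig = np.sin(2 * np.pi * 8 * t) + 0.2 * np.random.default_rng(0).standard_normal(t.size)
print(wavelet_features(sig))   # 3 features x 3 levels = 9 values
```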

    New-Age Pyroelectric Radiographic X-Ray Generators

    The history of medical imaging began with the discovery of X-rays. Since their discovery, X-rays have been widely used in areas ranging from projectional radiography to computed tomography. Year by year the technology has improved in many respects, especially in X-ray generation, with tube designs changing to produce the rays as efficiently as possible. Nowadays, different mechanisms for generating X-rays are being studied; one of them is the pyroelectric phenomenon. Pyroelectricity is a material's generation of electricity from temperature changes. The output spectrum of a pyroelectric X-ray generator is quite similar to that of traditional X-ray tubes, which offers the chance to replace high-voltage conventional X-ray tubes with low-voltage pyroelectric X-ray generators. The results of the experiments conducted and of ongoing studies show that using pyroelectricity for X-ray generation has great advantages. Thanks to the compactness of the pyroelectric X-ray generator, more portable X-ray devices may become available in the near future. In addition, these new designs are safer and easier to operate, since they use only 12 volts instead of kilovolts. Finally, healthcare technologies generally require large budgets, and this low-cost alternative might make radiological imaging available to low-income countries. In this paper, the fundamentals of X-ray generation from pyroelectric materials are reviewed, a device on the market, COOL-X, is investigated, and the conventional and pyroelectric methods are compared.

    Statistically significant features improve binary and multiple Motor Imagery task predictions from EEGs

    In recent studies in the field of Brain-Computer Interfaces (BCIs), researchers have focused on Motor Imagery tasks. Motor Imagery-based electroencephalogram (EEG) signals allow paralyzed patients to interact and communicate with the outside world by moving and controlling external devices such as wheelchairs and cursors. However, because of the non-linear and non-stationary structure of EEG signals, current approaches to Motor Imagery BCI system design require effective feature extraction methods and classification algorithms to acquire discriminative features. This study investigates the effect of statistical significance-based feature selection on binary and multi-class Motor Imagery EEG signal classification. In the feature extraction process, 24 different time-domain features; 15 different frequency-domain features, namely the energy, variance, and entropy of the Fourier transform within five EEG frequency subbands; 15 different time-frequency domain features, namely the energy, variance, and entropy of the wavelet transform in the same five subbands; and 4 different Poincaré plot-based non-linear parameters are extracted from each EEG channel. A total of 1,364 Motor Imagery EEG features are thus obtained from the 22-channel EEG signals for each input EEG recording. In the statistical significance-based feature selection process, the best combination of these features is determined using the independent t-test for the binary classification and the one-way analysis of variance (ANOVA) test for the multi-class classification. Both the whole extracted feature set and the feature set containing only the statistically significant features are classified in this study. We implemented 6 and 7 different classifiers for the multi-class and binary (two-class) classification tasks, respectively. The classification process is evaluated using the five-fold cross-validation method, and each classification algorithm is tested 10 times; these repeated tests allow the repeatability of the results to be checked. Maximum accuracies of 61.86% and 47.36% for the two-class and four-class scenarios, respectively, are obtained with the Ensemble Subspace Discriminant classifier using only the statistically significant features. The results reveal that the introduced statistical significance-based feature selection approach improves classifier performance, achieving higher accuracy with fewer relevant components in Motor Imagery task classification. In conclusion, the main contributions of the presented study are two-fold: the evaluation of non-linear parameters as an alternative to the commonly used features, and the prediction of multiple Motor Imagery tasks using statistically significant features.
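    The significance-based selection step for the binary case can be sketched as follows. The synthetic data, feature counts, and the fixed |t| threshold are illustrative assumptions, not the study's actual pipeline (which reports independent t-tests for two classes and ANOVA for the multi-class case):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for Motor Imagery features (toy setup, not the paper's data):
# 60 trials x 5 features, two classes; only feature 0 is made discriminative.
X = rng.standard_normal((60, 5))
y = np.repeat([0, 1], 30)
X[y == 1, 0] += 2.0

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = a.var(ddof=1), b.var(ddof=1)
    return (a.mean() - b.mean()) / np.sqrt(va / a.size + vb / b.size)

# Keep features whose |t| exceeds ~2.0 (approx. two-sided alpha = 0.05 at ~58 df).
t_stats = np.array([welch_t(X[y == 0, j], X[y == 1, j]) for j in range(X.shape[1])])
selected = np.flatnonzero(np.abs(t_stats) > 2.0)
print("t statistics:", np.round(t_stats, 2))
print("selected feature indices:", selected)
```

    Classifiers are then trained once on all features and once on the selected subset only, which is the comparison the study reports.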

    The HELLP syndrome: Clinical issues and management. A Review

    Background: The HELLP syndrome is a serious complication in pregnancy characterized by haemolysis, elevated liver enzymes and a low platelet count, occurring in 0.5 to 0.9% of all pregnancies and in 10–20% of cases with severe preeclampsia. The present review highlights occurrence, diagnosis, complications, surveillance, corticosteroid treatment, mode of delivery and risk of recurrence. Methods: Clinical reports and reviews published between 2000 and 2008 were screened using the PubMed and Cochrane databases. Results and conclusion: About 70% of cases develop before delivery, the majority between the 27th and 37th gestational weeks; the remainder occur within 48 hours after delivery. The HELLP syndrome may be complete or incomplete. In the Tennessee Classification System, the diagnostic criteria for HELLP are haemolysis with increased LDH (> 600 U/L), AST (≥ 70 U/L), and platelets < 100·10^9/L. The Mississippi Triple-class HELLP System further classifies the disorder by the nadir platelet count. The syndrome is a progressive condition, and serious complications are frequent. Conservative treatment (≥ 48 hours) is controversial but may be considered in selected cases < 34 weeks' gestation. Delivery is indicated if the HELLP syndrome occurs after the 34th gestational week or if the foetal and/or maternal condition deteriorates. Vaginal delivery is preferable. If the cervix is unfavourable, it is reasonable to induce cervical ripening and then labour. At gestational ages between 24 and 34 weeks most authors prefer a single course of corticosteroid therapy for foetal lung maturation, either 2 doses of 12 mg betamethasone 24 hours apart or 6 mg of dexamethasone 12 hours apart before delivery. Standard corticosteroid treatment is, however, of uncertain clinical value in the maternal HELLP syndrome. High-dose treatment and repeated doses should be avoided for fear of long-term adverse effects on the foetal brain. Before 34 weeks' gestation, delivery should be performed if the maternal condition worsens or signs of intrauterine foetal distress occur. Blood pressure should be kept below 155/105 mmHg. Close surveillance of the mother should be continued for at least 48 hours after delivery.

    A software for simulating steady-state properties of passive dendrites based on the cable theory

    No full text
    In this study, a computer software package, CableTeo, is introduced for simulating the steady-state electrical properties of passive dendrites based on the cable theory. The cable theory for dendritic neurons addresses the current-voltage relations in a continuous passive dendritic tree. The cable theory of passive cables and dendrites, which is a useful approximation and an important reference for excitable cases, is briefly summarized. The proposed software can be used to construct user-defined dendritic tree models. The user can define the model in detail, display the constructed dendritic tree, and examine its basic electrical properties using the transfer impedance approach. The software is aimed at those who want to run simple simulations of the cable theory without the need for programming skills or expensive software. It can also be used for educational purposes. (C) 2007 Elsevier Ireland Ltd. All rights reserved.
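    The kind of steady-state quantity such a tool computes follows from the classical cable equation. Below is a minimal sketch of the textbook sealed-end solution, V(x) = V0·cosh((L − x)/λ)/cosh(L/λ); the parameter values are illustrative assumptions, not taken from CableTeo:

```python
import numpy as np

# Steady-state voltage along a finite passive cable with a sealed far end.
# Illustrative membrane/axial parameters (not CableTeo defaults):
d = 2e-4        # cable diameter (cm), i.e. 2 micrometres
Rm = 20000.0    # specific membrane resistance (ohm * cm^2)
Ri = 150.0      # axial resistivity (ohm * cm)
lam = np.sqrt(Rm * d / (4 * Ri))   # length constant lambda (cm)

L = 0.05        # cable length (cm)
V0 = 10.0       # steady voltage at the current-injection end (mV)
x = np.linspace(0, L, 6)
V = V0 * np.cosh((L - x) / lam) / np.cosh(L / lam)   # sealed-end solution

print(f"lambda = {lam:.4f} cm")
for xi, vi in zip(x, V):
    print(f"x = {xi:.3f} cm -> V = {vi:.2f} mV")
```

    Because the end is sealed (no axial current leaves), the attenuation along a short cable is much milder than the semi-infinite exponential decay exp(-x/λ).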

    Investigating effects of wavelet entropy detailed measures in heart rate variability analysis

    No full text
    In this study, wavelet entropy, calculated from the wavelet transform coefficients of heart rate variability data, is used to distinguish a control group from patients with congestive heart failure. Wavelet entropies are obtained for 29 patients with congestive heart failure and 54 subjects in the control group. In addition, standard heart rate variability (HRV) indices are calculated for the whole dataset. The performance of these indices in classifying the two groups is then evaluated using a k-nearest neighbor classifier and a genetic algorithm, yielding the subset of HRV indices that maximizes classifier performance. Using this optimal subset of HRV measures gives a discrimination accuracy of 97.59%.
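    Wavelet entropy of this kind is commonly defined as the Shannon entropy of the relative energies across wavelet decomposition levels. The Haar implementation and toy series below are illustrative assumptions, not the study's data or exact definition:

```python
import numpy as np

def haar_level_energies(x, levels=4):
    """Detail-band energies from a multi-level Haar wavelet decomposition."""
    a = np.asarray(x, dtype=float)
    energies = []
    for _ in range(levels):
        d = (a[0::2] - a[1::2]) / np.sqrt(2)   # detail coefficients
        a = (a[0::2] + a[1::2]) / np.sqrt(2)   # approximation carried forward
        energies.append(np.sum(d ** 2))
    return np.array(energies)

def wavelet_entropy(x, levels=4):
    """Shannon entropy of the relative wavelet energies across levels."""
    e = haar_level_energies(x, levels)
    p = e / e.sum()
    return -np.sum(p * np.log(p + 1e-12))

# Toy RR-interval-like series (synthetic): a regular rhythm concentrates energy
# in few bands (lower entropy); added beat-to-beat variability spreads it.
n = 256
rng = np.random.default_rng(1)
regular = 0.8 + 0.05 * np.sin(2 * np.pi * np.arange(n) / 16)
variable = regular + 0.05 * rng.standard_normal(n)
print(wavelet_entropy(regular), wavelet_entropy(variable))
```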

    Combining classical HRV indices with wavelet entropy measures improves the performance in diagnosing congestive heart failure

    No full text
    In this study, the best combinations of short-term heart rate variability (HRV) measures are sought to distinguish 29 patients with congestive heart failure (CHF) from 54 healthy subjects in the control group. In the analysis, wavelet entropy measures are used in addition to the standard HRV measures. A genetic algorithm is used to select the best combinations from among all possible combinations of these measures, and a k-nearest neighbor classifier is used to evaluate the performance of each feature combination in classifying the two groups. The results imply that two combinations of HRV measures, both of which include wavelet entropy measures, have the highest discrimination power in terms of sensitivity and specificity. (c) 2007 Elsevier Ltd. All rights reserved.
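    The genetic-algorithm search over feature combinations can be sketched as below. The synthetic data, population size, GA operators, and the leave-one-out 1-NN fitness are illustrative assumptions, not the study's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for HRV measures (toy data, not the study's recordings):
# 80 subjects x 8 features; only features 0 and 3 carry class information.
X = rng.standard_normal((80, 8))
y = np.repeat([0, 1], 40)
X[y == 1, 0] += 1.5
X[y == 1, 3] -= 1.5

def loo_1nn_accuracy(Xs, y):
    """Leave-one-out accuracy of a 1-nearest-neighbour classifier."""
    D = np.linalg.norm(Xs[:, None, :] - Xs[None, :, :], axis=2)
    np.fill_diagonal(D, np.inf)               # exclude each point from its own search
    return np.mean(y[D.argmin(axis=1)] == y)

def fitness(mask):
    return loo_1nn_accuracy(X[:, mask.astype(bool)], y) if mask.any() else 0.0

# A minimal genetic algorithm over binary feature-inclusion masks.
pop = rng.integers(0, 2, size=(20, X.shape[1]))
for gen in range(30):
    scores = np.array([fitness(m) for m in pop])
    pop = pop[np.argsort(scores)[::-1]]
    parents = pop[:10]                                  # truncation selection
    cuts = rng.integers(1, X.shape[1], size=10)
    children = np.array([np.concatenate([parents[i][:c], parents[(i + 1) % 10][c:]])
                         for i, c in enumerate(cuts)])  # one-point crossover
    flip = rng.random(children.shape) < 0.1             # bit-flip mutation
    children = np.where(flip, 1 - children, children)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("best feature mask:", best, "LOO accuracy:", fitness(best))
```

    The GA only has to evaluate a few hundred masks rather than all 2^8 subsets, which is the point of using it when the feature count makes exhaustive search infeasible.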

    Investigating the effects of wavelet entropy in heart rate variability analysis for diagnosing of congestive heart failure

    No full text
    In this study, wavelet entropy, calculated from the wavelet transform coefficients of heart rate variability data, is used to distinguish the control group from patients with congestive heart failure. Wavelet entropies are obtained for 29 patients with congestive heart failure and 54 subjects in the control group. Standard heart rate variability measurements are also calculated for the whole dataset. Finally, linear discriminant analysis is used to evaluate the performance of these measurements in classifying the two groups.