
    Dynamic impact induced by tornadoes through simulations based on two-way wind-structure interactions

    Tornadoes have become a significant cause of property damage, injuries, and loss of life. Investigations of tornadoes indicate that most fatalities are caused by building failure. For example, in the Joplin, MO tornado of 22 May 2011, 161 people were killed, and 84% of the fatalities were related to building failure. It is therefore imperative to develop science-based tornado-resistant building codes in order to provide a better level of occupant protection and to minimize tornado-induced damage. This requires an in-depth understanding of the wind characteristics of tornadoes and their wind effects on civil structures, from which design tornadic wind loading can be properly determined. To achieve this, this study combines Computational Fluid Dynamics (CFD) and Computational Structural Dynamics (CSD) simulations for the first time to systematically investigate tornado dynamics and the resulting dynamic impact on civil structures. First, wind effects on large-scale space structures induced by straight-line winds are investigated to fully understand the current building-code provisions for wind loads. Then, a real-world tornado is numerically simulated and verified against full-scale radar-measured data. Based on the verified CFD model, the non-stationary wind characteristics of tornadoes and the induced wind effects on large-scale space structures are investigated under different tornado flow structures. Next, CFD and CSD are combined to investigate tornado-induced dynamic responses of large-scale space structures. Finally, these tornado-induced dynamic responses are compared with those induced by equivalent straight-line winds, in order to properly modify the equation for calculating the design wind pressure specified in ASCE 7-16. --Abstract, page iv
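    The design wind pressure equation referenced above starts from the ASCE 7-16 velocity pressure expression, q_z = 0.00256 K_z K_zt K_d K_e V^2 (in psf, with V in mph). The Python sketch below simply evaluates that expression with assumed exposure factors and an illustrative tornadic amplification; the factor values and the amplification are placeholders for illustration, not results or recommendations from the study.

        # Hedged sketch: ASCE 7-16 velocity pressure with assumed factor values.
        def velocity_pressure_psf(V_mph, Kz=0.85, Kzt=1.0, Kd=0.85, Ke=1.0):
            """q_z = 0.00256*Kz*Kzt*Kd*Ke*V^2 (lb/ft^2), V in mph."""
            return 0.00256 * Kz * Kzt * Kd * Ke * V_mph ** 2

        q_straight = velocity_pressure_psf(120.0)   # straight-line design wind speed (assumed)
        q_tornadic = 1.5 * q_straight               # placeholder amplification, not a code provision
        print(f"q_z straight-line: {q_straight:.1f} psf, amplified (illustrative): {q_tornadic:.1f} psf")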

    Wind loads for petrochemical structures

    Techniques currently available to practicing engineers for estimating wind loads on petrochemical structures have little theoretical or experimental basis. This dissertation research is an effort to expand the understanding of wind effects on petrochemical and other similar structures. Petrochemical structures introduce geometric scales into wind tunnel model simulations below those common for enclosed structures. Wind tunnel experiments were performed to help determine whether this introduces problems in achieving dynamic similarity between models and prototypes. The experiments did not reveal any clear indication that petrochemical structures cannot be modeled in wind tunnels at scales similar to those used for enclosed buildings. Aerodynamic coefficients were measured for models of open frame structures, partially clad structures, and vertical vessels in the LSU Wind Tunnel Laboratory. When possible, the values were compared with the literature or with current analysis techniques. For open frames, diagonal braces and solid flooring had significant effects on wind loads that are not reflected in current analysis methods. Shielding of equipment located within open frames was found to be underestimated by current analysis methods. Wind loads for partially clad structures exceeded those of enclosed structures with similar overall geometry for some cladding configurations. Wind loads for vertical vessels in paired arrangements were found to deviate significantly from wind load estimates for single vessels - a fact that is not represented adequately in current analysis techniques. Where appropriate, recommendations were made to address the shortcomings in wind load analysis for these structures. An analytical model was developed to describe the variation of the wind force coefficient for higher-solidity open frame structures with respect to solidity ratio and plan aspect ratio. The model reproduced trends in experimental data from previous researchers and provided insight into the development of upper-bound wind loads for open frame structures. Experimental data were used to estimate the bias and variance of analytical estimates of wind force coefficients for petrochemical structures. Applying recommendations from this research reduced the variance in these estimates. The structural reliability of a petrochemical structure designed for wind loads according to current industry guidelines is only slightly lower than that of an enclosed structure.
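    To illustrate the quantities discussed above, the sketch below evaluates the generic "other structures" wind force form F = q_z G C_f A_f for an open frame. The C_f-versus-solidity relation and all numbers are placeholder assumptions for illustration, not the analytical model or the measured coefficients from this research.

        def frame_wind_force_lb(q_psf, solidity, gross_area_ft2, G=0.85):
            """F = q*G*Cf*Af for an open frame; the Cf(solidity) trend below is hypothetical."""
            Cf = 1.8 + 1.4 * solidity          # assumed trend, not the dissertation's model
            Af = solidity * gross_area_ft2     # projected solid area of the windward frame
            return q_psf * G * Cf * Af

        print(f"F = {frame_wind_force_lb(30.0, 0.3, 1200.0):.0f} lb")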

    Visual Clutter Study for Pedestrian Using Large Scale Naturalistic Driving Data

    Some pedestrian crashes are due to the driver's late or difficult perception of a pedestrian's appearance. Recognizing pedestrians while driving is a complex cognitive activity. Visual clutter analysis can be used to study the factors that affect human visual search efficiency and to help design advanced driver assistance systems for better decision making and user experience. In this thesis, we propose a pedestrian perception evaluation model that can quantitatively analyze pedestrian perception difficulty using naturalistic driving data. An efficient detection framework was developed to locate pedestrians within large-scale naturalistic driving data. Visual clutter analysis was used to study the factors that may affect the driver's ability to perceive a pedestrian's appearance. The candidate factors were explored in a designed exploratory study using naturalistic driving data, and a bottom-up, image-based pedestrian clutter metric was proposed to quantify pedestrian perception difficulty in these data. Based on the proposed bottom-up clutter metric and a top-down, pedestrian-appearance-based estimator, a Bayesian probabilistic pedestrian perception evaluation model was then constructed to simulate the pedestrian perception process.
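    A minimal sketch of how such a combination might look: a bottom-up clutter proxy (local gradient energy) is fused with a top-down appearance score through a logistic link. The proxy, the weights, and the fusion form are assumptions for illustration only, not the clutter metric or the evaluation model built in the thesis.

        import numpy as np

        def clutter_proxy(gray_roi):
            """Bottom-up proxy: mean gradient magnitude around the pedestrian region."""
            gy, gx = np.gradient(gray_roi.astype(float))
            return float(np.mean(np.hypot(gx, gy)))

        def perception_difficulty(clutter, appearance_score, w_clutter=0.05, w_app=3.0):
            """Illustrative P(pedestrian missed | clutter, appearance) via a logistic link."""
            z = w_clutter * clutter - w_app * appearance_score
            return 1.0 / (1.0 + np.exp(-z))

        roi = np.random.rand(64, 32) * 255      # stand-in for a pedestrian image patch
        print(perception_difficulty(clutter_proxy(roi), appearance_score=0.7))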

    N-colour separation methods for accurate reproduction of spot colours

    In packaging, spot colours are used to print key information such as brand logos and other elements for which colour accuracy is critical. The present study investigates methods to aid the accurate reproduction of these spot colours with the n-colour printing process. Typical n-colour printing systems consist of supplementary inks in addition to the usual CMYK inks. Adding these inks to the traditional CMYK set increases the attainable colour gamut, but the added complexity creates several challenges in generating suitable colour separations for rendering colour images. In this project, the n-colour separation is achieved by the use of additional sectors for intermediate inks. Each sector contains four inks, with the achromatic ink (black) common to all sectors. This allows the principles of the CMYK printing process to be extended to these additional sectors. The methods developed in this study can be generalised to any number of inks. The project explores various aspects of the n-colour printing process, including forward characterisation methods, gamut prediction of the n-colour process, and the inverse characterisation to calculate the n-colour separation for target spot colours. The scope of the study covers different printing technologies, including lithographic offset, flexographic, thermal sublimation and inkjet printing. A new method is proposed to characterise the printing devices. This method, the spot colour overprint (SCOP) model, was evaluated for the n-colour printing process with different printing technologies. In addition, a set of real-world spot colours was converted to n-colour separations and printed with the 7-colour printing process to evaluate against the original spot colours. The results show that the proposed methods can be effectively used to replace spot coloured inks with the n-colour printing process. This can save significant material, time, and cost in the packaging industry.
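    To make the sector idea concrete, the sketch below splits the hue circle among chromatic inks and renders a target colour with the two inks bounding its sector plus the common black. The sector layout and the conversion arithmetic are illustrative assumptions only; they are not the SCOP model or the separation method developed in the project.

        import colorsys

        # Assumed 60-degree sectors for a 7-colour ink set; the ordering is illustrative.
        SECTORS = [("yellow", "orange"), ("orange", "magenta"), ("magenta", "violet"),
                   ("violet", "cyan"), ("cyan", "green"), ("green", "yellow")]

        def separation(r, g, b):
            h, s, v = colorsys.rgb_to_hsv(r, g, b)     # h, s, v all in [0, 1]
            ink_a, ink_b = SECTORS[int(h * 360.0 // 60) % 6]
            frac = (h * 360.0 % 60.0) / 60.0           # position within the sector
            return {ink_a: s * (1.0 - frac), ink_b: s * frac, "black": 1.0 - v}

        print(separation(0.9, 0.4, 0.1))               # an orange-ish RGB colour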

    Using communicative patterns to predict Twitter users' social capital, likability, and popularity gains with natural language processing

    Social media constructs a computer-mediated public space where individuals' visibility and influence can be quantitatively measured by the number of likes, retweets, and followers they receive. These metrics serve as a reward system that not only reflects users' popularity and social capital but also influences the climate of public opinion and deliberative democracy by encouraging and discouraging certain types of communication. Through analyzing Twitter data collected from U.S. congressional politicians and ordinary U.S. Twitter users in seven/eight waves, this study explores how communicative patterns (dual-process styles and sentiment) predict users' social capital, likability, and popularity gains on Twitter, as well as how political identity and intergroup communication moderate the relationships between these variables. The study found that: (a) rational expressions increase social capital and popularity gains, while emotional expressions increase likability gains; (b) positive expressions generate a curvilinear effect on social capital, likability, and popularity gains in the politician dataset; (c) compared with Democratic users, Republican users receive relatively more social capital, likability, and popularity gains from emotional and negative expressions than from rational and positive expressions; (d) rational expressions lead to relatively more likability and popularity gains than emotional expressions in a group-salient context; and (e) positive expressions in ingroup/outgroup conversations generate opposite effects in the politician and ordinary-user datasets. In addition, this study develops and advances computational methods for detecting communicative patterns, political identities, and intergroup communication. By implementing Distributed Dictionary Representations, this study creates metrics to measure dual-process thinking styles and sentiment in text; by developing a two-step model with deep learning using an attention mechanism, this study creates an interpretable method to detect political partisanship and intergroup communication. Includes bibliographical references.
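    The Distributed Dictionary Representations (DDR) step mentioned above can be sketched as follows: a seed dictionary is represented by the mean of its word vectors, and a tweet is scored by cosine similarity to that mean. The toy embedding table and seed words below are placeholders; the study's actual dictionaries, embeddings, and thresholds are not reproduced here.

        import numpy as np

        # Toy embedding table (random vectors); a real setup would load pretrained embeddings.
        VOCAB = ["analyze", "evidence", "reason", "love", "angry", "great", "tax"]
        EMB = {w: np.random.RandomState(i).rand(50) for i, w in enumerate(VOCAB)}

        def mean_vector(words):
            vecs = [EMB[w] for w in words if w in EMB]
            return np.mean(vecs, axis=0)

        def ddr_score(text_words, seed_words):
            """Cosine similarity between a text's mean vector and a dictionary's mean vector."""
            t, s = mean_vector(text_words), mean_vector(seed_words)
            return float(t @ s / (np.linalg.norm(t) * np.linalg.norm(s)))

        rational_seeds = ["analyze", "evidence", "reason"]      # assumed seed list
        print(ddr_score("we should analyze the tax evidence".split(), rational_seeds))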

    Molecular Dynamics Simulation

    Condensed matter systems, ranging from simple fluids and solids to complex multicomponent materials and even biological matter, are governed by well-understood laws of physics, within the formal theoretical framework of quantum theory and statistical mechanics. On the relevant scales of length and time, the appropriate ‘first-principles’ description needs only the Schrödinger equation together with Gibbs averaging over the relevant statistical ensemble. However, this program cannot be carried out straightforwardly: dealing with electron correlations is still a challenge for the methods of quantum chemistry. Similarly, standard statistical mechanics makes precise, explicit statements only about the properties of systems for which the many-body problem can be effectively reduced to one of independent particles or quasi-particles. [...]
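    A minimal classical molecular dynamics sketch in Python (reduced Lennard-Jones units, velocity Verlet integration) illustrates the kind of particle-based simulation this chapter introduces; the particle count, time step, and initial lattice are arbitrary choices, not tied to any system discussed in the text.

        import numpy as np

        def lj_forces(pos):
            """Pairwise Lennard-Jones forces, U(r) = 4(r^-12 - r^-6), reduced units."""
            f = np.zeros_like(pos)
            n = len(pos)
            for i in range(n):
                for j in range(i + 1, n):
                    r = pos[i] - pos[j]
                    r2 = r @ r
                    inv6 = 1.0 / r2 ** 3
                    fmag = 24.0 * (2.0 * inv6 ** 2 - inv6) / r2
                    f[i] += fmag * r
                    f[j] -= fmag * r
            return f

        # Eight atoms on a small cubic lattice, zero initial velocities.
        pos = np.array([[i, j, k] for i in range(2) for j in range(2) for k in range(2)], float) * 1.5
        vel = np.zeros_like(pos)
        dt = 0.002
        f = lj_forces(pos)
        for step in range(200):                 # velocity Verlet time stepping
            pos += vel * dt + 0.5 * f * dt ** 2
            f_new = lj_forces(pos)
            vel += 0.5 * (f + f_new) * dt
            f = f_new
        print("mean speed after 200 steps:", np.linalg.norm(vel, axis=1).mean())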

    Doctor of Philosophy

    The continuous growth of wireless communication use has largely exhausted the limited spectrum available. Methods to improve spectral efficiency are in high demand and will continue to be for the foreseeable future. Several technologies have the potential to make large improvements to spectral efficiency and the total capacity of networks, including massive multiple-input multiple-output (MIMO), cognitive radio, and spatial-multiplexing MIMO. Of these, spatial-multiplexing MIMO has the largest near-term potential, as it has already been adopted in the WiFi, WiMAX, and LTE standards. Although transmitting independent MIMO streams is cheap and easy, with a mere linear increase in cost with the number of streams, receiving MIMO is difficult, since the optimal methods have exponentially increasing cost and power consumption. Suboptimal MIMO detectors such as K-Best have a drastically reduced complexity compared to optimal methods but still have an undesirable exponentially increasing cost with data rate. The Markov Chain Monte Carlo (MCMC) detector has been proposed as a near-optimal method with polynomial cost, but it has a history of unusual performance issues which have hindered its adoption. In this dissertation, we introduce a revised derivation of the bitwise MCMC MIMO detector. The new approach resolves the previously reported high-SNR stalling problem of MCMC without the need for hybridization with another detector method or adding heuristic temperature-scaling terms. Another common problem with MCMC algorithms is an unknown convergence time, which makes predictable fixed-length implementations problematic. When an insufficient number of iterations is used on a slowly converging example, the output LLRs can be unstable and overconfident; therefore, we develop a method to identify rare, slowly converging runs and mitigate their degrading effects on the soft-output information. This improves forward-error-correcting code performance and removes a symptomatic error floor in bit-error rates. Next, pseudo-convergence is identified with a novel way to visualize the internal behavior of the Gibbs sampler, and an effective and efficient pseudo-convergence detection and escape strategy is suggested. The new excited MCMC (X-MCMC) detector is then shown to have near maximum-a-posteriori (MAP) performance even with challenging, realistic, highly correlated channels at the maximum MIMO sizes and modulation rates supported by the 802.11ac WiFi specification, 8x8 256-QAM. Further, the X-MCMC detector is demonstrated on an 8-antenna MIMO testbed with the 802.11ac WiFi protocol, confirming its high performance. Finally, a VLSI implementation of the X-MCMC detector is presented which retains the near-optimal performance of the floating-point algorithm while having one of the lowest complexities found in the near-optimal MIMO detector literature.
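    A stripped-down bitwise Gibbs-sampling detector for a real-valued BPSK MIMO model is sketched below to show the kind of sampler the dissertation builds on. It omits the X-MCMC excitation, the LLR stabilization, and the pseudo-convergence handling described above, and all parameters (antenna counts, SNR, iteration count) are arbitrary choices for illustration.

        import numpy as np

        rng = np.random.default_rng(0)
        Nt, Nr, snr_db = 4, 4, 10.0
        H = rng.standard_normal((Nr, Nt))               # flat-fading channel matrix
        bits = rng.integers(0, 2, Nt)
        sigma2 = Nt / 10 ** (snr_db / 10.0)
        y = H @ (2.0 * bits - 1.0) + np.sqrt(sigma2) * rng.standard_normal(Nr)   # BPSK in {-1, +1}

        def gibbs_detect(y, H, sigma2, iters=50):
            b = rng.integers(0, 2, H.shape[1])          # random initial bit vector
            for _ in range(iters):
                for k in range(len(b)):
                    loglik = []
                    for val in (0, 1):                  # conditional of bit k given the rest
                        b[k] = val
                        r = y - H @ (2.0 * b - 1.0)
                        loglik.append(-(r @ r) / sigma2)
                    p1 = 1.0 / (1.0 + np.exp(loglik[0] - loglik[1]))
                    b[k] = int(rng.random() < p1)
                # (a soft-output detector would accumulate bit statistics into LLRs here)
            return b

        print("tx bits:", bits, " detected:", gibbs_detect(y, H, sigma2))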

    A new in-camera color imaging model for computer vision

    Ph.D. (Doctor of Philosophy)

    Radiomics risk modelling using machine learning algorithms for personalised radiation oncology

    One major objective in radiation oncology is the personalisation of cancer treatment. The implementation of this concept requires the identification of biomarkers that precisely predict therapy outcome. Besides molecular characterisation of tumours, a new approach known as radiomics aims to characterise tumours using imaging data. In the context of the presented thesis, radiomics was established at OncoRay to improve the performance of imaging-based risk models. Two software-based frameworks were developed for image feature computation and risk model construction. A novel data-driven approach for the correction of intensity non-uniformity in magnetic resonance imaging data was developed to improve image quality prior to feature computation. Further, different feature selection methods and machine learning algorithms for time-to-event survival data were evaluated to identify suitable algorithms for radiomics risk modelling. An improved model performance could be demonstrated using computed tomography data acquired during the course of treatment. Subsequently, tumour sub-volumes were analysed, and it was shown that the tumour rim contains the most relevant prognostic information compared to the corresponding core. The incorporation of such spatial diversity information is a promising way to improve the performance of risk models.
    Contents:
    1. Introduction
    2. Theoretical background
       2.1. Basic physical principles of image modalities
            2.1.1. Computed tomography
            2.1.2. Magnetic resonance imaging
       2.2. Basic principles of survival analyses
            2.2.1. Semi-parametric survival models
            2.2.2. Full-parametric survival models
       2.3. Radiomics risk modelling
            2.3.1. Feature computation framework
            2.3.2. Risk modelling framework
       2.4. Performance assessments
       2.5. Feature selection methods and machine learning algorithms
            2.5.1. Feature selection methods
            2.5.2. Machine learning algorithms
    3. A physical correction model for automatic correction of intensity non-uniformity in magnetic resonance imaging
       3.1. Intensity non-uniformity correction methods
       3.2. Physical correction model
            3.2.1. Correction strategy and model definition
            3.2.2. Model parameter constraints
       3.3. Experiments
            3.3.1. Phantom and simulated brain data set
            3.3.2. Clinical brain data set
            3.3.3. Abdominal data set
       3.4. Summary and discussion
    4. Comparison of feature selection methods and machine learning algorithms for radiomics time-to-event survival models
       4.1. Motivation
       4.2. Patient cohort and experimental design
            4.2.1. Characteristics of patient cohort
            4.2.2. Experimental design
       4.3. Results of feature selection methods and machine learning algorithms evaluation
       4.4. Summary and discussion
    5. Characterisation of tumour phenotype using computed tomography imaging during treatment
       5.1. Motivation
       5.2. Patient cohort and experimental design
            5.2.1. Characteristics of patient cohort
            5.2.2. Experimental design
       5.3. Results of computed tomography imaging during treatment
       5.4. Summary and discussion
    6. Tumour phenotype characterisation using tumour sub-volumes
       6.1. Motivation
       6.2. Patient cohort and experimental design
            6.2.1. Characteristics of patient cohorts
            6.2.2. Experimental design
       6.3. Results of tumour sub-volumes evaluation
       6.4. Summary and discussion
    7. Summary and further perspectives
    8. Zusammenfassung (Summary)
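    A minimal sketch of the kind of radiomics survival modelling described above: univariate concordance screening of image features followed by a Cox model, here on synthetic data with the lifelines package. This is an illustration of the general workflow only, not the feature-computation or risk-modelling frameworks developed in the thesis.

        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter
        from lifelines.utils import concordance_index

        rng = np.random.default_rng(1)
        n = 120
        df = pd.DataFrame({f"feat_{i}": rng.standard_normal(n) for i in range(10)})
        df["time"] = rng.exponential(24.0, n) * np.exp(-0.5 * df["feat_0"])   # feat_0 made prognostic
        df["event"] = rng.integers(0, 2, n)

        # Univariate screening: keep features whose concordance index deviates most from 0.5.
        scores = {c: abs(concordance_index(df["time"], -df[c], df["event"]) - 0.5)
                  for c in df.columns if c.startswith("feat_")}
        selected = sorted(scores, key=scores.get, reverse=True)[:3]

        cph = CoxPHFitter()
        cph.fit(df[selected + ["time", "event"]], duration_col="time", event_col="event")
        cph.print_summary()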

    A study of the mechanics of microcantilever sensors

    Microcantilever sensors are being studied as a new platform for chemical vapor detection. Many groups have demonstrated that they have the potential to detect a wide range of chemicals with high sensitivity. Since these sensors do not offer any intrinsic chemical selectivity, immobilized chemical interfaces coupled with pattern recognition algorithms are often employed. Selectivity based on these chemical coatings often fails due to the lack of orthogonality in the chemical interactions. However, the use of adsorption-induced signals based on physical properties can offer additional complementary information. To successfully employ these versatile sensors, a comprehensive investigation of the mechanics of microcantilevers is necessary to understand their responses. Such an investigation is presented in this work. Both the dynamic and the static theory of microcantilevers are addressed, as well as the nonlinear dynamics resulting from large-amplitude oscillations. Experimental data are presented and compared to modeled data for verification. Finally, an application of microcantilever sensors in photothermal deflection spectroscopy (PDS) is given, and the detection of explosive compounds with PDS is demonstrated.
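    As a worked example of the linear (small-amplitude) dynamic theory mentioned above, the sketch below computes the first flexural resonance of a rectangular Euler-Bernoulli cantilever. The silicon properties and the geometry are typical assumed values, not parameters of the sensors studied in this work.

        import math

        E, rho = 169e9, 2330.0            # Young's modulus (Pa) and density (kg/m^3) of silicon
        L, w, t = 200e-6, 40e-6, 1e-6     # assumed length, width, thickness (m)

        I = w * t ** 3 / 12.0             # second moment of area of the rectangular cross-section
        A = w * t                         # cross-sectional area
        lam1 = 1.8751                     # first root of the clamped-free beam characteristic equation

        f1 = (lam1 ** 2 / (2.0 * math.pi * L ** 2)) * math.sqrt(E * I / (rho * A))
        print(f"First resonance: {f1 / 1e3:.1f} kHz")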