892 research outputs found

    BER Performance Improvement in UWA Communication via Spatial Diversity

    While wireless communication has become an integral part of modern life, advancements in underwater acoustic (UWA) communication still lag far behind. Underwater communication is essential because of its ability to collect information from remote undersea locations. Radio signals are not used for transmission, since the salinity of water degrades their strength so severely that they propagate over only extremely short distances; acoustic waves are used instead. The underwater acoustic channel has many characteristics that make receivers difficult to realize, including frequency-dependent propagation loss, severe Doppler spread, multipath, and the low speed of sound. The motion of the transmitter and receiver, together with time variability and multipath, makes the underwater channel very difficult to estimate. Various channel estimation techniques exist for finding the channel impulse response, but in this thesis we consider a flat, slow-fading channel modeled by the Nakagami-m distribution. Noise in the underwater communication channel is frequency dependent in nature: for a particular operating frequency range, one among the various noise sources will be dominant. This noise does not necessarily follow Gaussian statistics; rather, it follows Generalized Gaussian statistics with a decaying power spectral density. The flexible parametric form of this distribution makes it useful for fitting any underwater noise source. In this thesis we follow a two-step approach. In the first step, we consider transmission of information in the presence of noise only and design a suboptimal maximum likelihood (ML) detector. We compare the performance of this proposed detector with that of the conventional Gaussian detector, in which the decision is based on a single threshold value calculated by various techniques. It is observed that the ML detector outperforms the Gaussian detectors, and that performance can be improved further by exploiting the multipath components. In the second step, we consider the channel along with the noise and design an ML detector in which the receiver is supplied with two copies of the same transmitted signal, leading to a two-dimensional analysis. We again compare performance with the conventional maximal ratio combiner (MRC) and observe that the ML detector performs better. We further incorporate selection combining with these detectors and compare performance. Simulation results show that the proposed detector consistently outperforms the existing detectors in terms of error performance.
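    The key idea of the second step above can be sketched in a few lines: for Generalized Gaussian noise the ML combining metric is not the squared-error metric that MRC implicitly assumes. The toy simulation below, which is only an illustration and not the thesis code, uses Laplacian noise (the Generalized Gaussian family with shape parameter 1, where `numpy` has a direct sampler) over two static branches with made-up gains, and compares the L1-metric ML combiner against conventional MRC.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-branch diversity model (all parameters assumed):
# antipodal symbols, static branch gains, Laplacian (GG, beta = 1) noise.
n = 200_000
h = np.array([1.0, 0.6])                 # assumed branch gains
s = rng.choice([-1.0, 1.0], n)           # antipodal symbols
r = h[:, None] * s + rng.laplace(0.0, 0.7, (2, n))

# ML detector for Laplacian noise: minimise sum_k |r_k - h_k * s|
m_plus = np.abs(r - h[:, None]).sum(axis=0)
m_minus = np.abs(r + h[:, None]).sum(axis=0)
s_ml = np.where(m_plus < m_minus, 1.0, -1.0)

# Conventional MRC: sign of the gain-weighted sum (optimal only for Gaussian noise)
s_mrc = np.sign((h[:, None] * r).sum(axis=0))

ber_ml = np.mean(s_ml != s)
ber_mrc = np.mean(s_mrc != s)
print(f"BER  ML={ber_ml:.4f}  MRC={ber_mrc:.4f}")
```

    Because the L1 metric is the true likelihood metric for this noise, the ML combiner's error rate is never worse than MRC's in this setting; the gap widens as the noise becomes more heavy-tailed.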

    Automatic Selection of MapReduce Machine Learning Algorithms: A Model Building Approach

    As the amount of information available for data mining grows larger, the amount of time needed to train models on those huge volumes of data also grows longer. Techniques such as sub-sampling and parallel algorithms have been employed to deal with this growth. Some studies have shown that sub-sampling can have adverse effects on the quality of models produced, and the degree to which it affects different types of learning algorithms varies. Parallel algorithms perform well when enough computing resources (e.g. cores, memory) are available; however, for a cluster of limited size, the growth in data will still cause an unacceptable growth in model training time. In addition to the problem of mitigating data size, picking which algorithms are well suited to a particular dataset can be a challenge. While some studies have looked at selection criteria for picking a learning algorithm based on the properties of the dataset, the additional complexity of parallel learners and possible run-time limitations have not been considered. This study explores run time and model quality results of various techniques for dealing with large datasets, including using different numbers of compute cores, sub-sampling the datasets, and exploiting the iterative anytime nature of the training algorithms. The algorithms were studied using MapReduce implementations of four supervised learning algorithms: logistic regression, tree induction, bagged trees, and boosted stumps, for binary classification using probabilistic models. Evaluation of these techniques was done using a modified form of learning curves which has a temporal component. Finally, the data collected was used to train a set of models to predict which type of parallel learner best suits a particular dataset, given run-time limitations and the number of compute cores to be used. The predictions of those models were then compared to the actual results of running the algorithms on the datasets they were attempting to predict.
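    The selection problem described above can be sketched with a toy example. The study trains predictive models for this step; the snippet below substitutes a simple table lookup over hypothetical learning-curve records (learner, cores, training time, quality) just to make the decision rule concrete. All names and numbers are invented, not results from the study.

```python
# Hypothetical learning-curve records: (learner, cores, train_time_s, auc).
# None of these figures come from the study; they only illustrate the shape
# of the decision: best quality among learners that fit the time budget.
records = [
    ("logistic_regression", 8,  120, 0.81),
    ("tree_induction",      8,  300, 0.84),
    ("bagged_trees",        8, 1500, 0.88),
    ("boosted_stumps",      8,  900, 0.86),
]

def best_learner(records, cores, time_budget_s):
    """Return the highest-quality learner trainable within the budget."""
    feasible = [r for r in records if r[1] == cores and r[2] <= time_budget_s]
    if not feasible:
        return None
    return max(feasible, key=lambda r: r[3])[0]

print(best_learner(records, cores=8, time_budget_s=1000))  # boosted_stumps
print(best_learner(records, cores=8, time_budget_s=100))   # None
```

    With a generous budget the best model (bagged trees) becomes feasible; under tight budgets the choice shifts to cheaper learners, which is exactly the trade-off the predictive models in the study are meant to capture.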

    Development of a Methodology for Condition-Based Maintenance in a Large-Scale Application Field

    This paper describes a methodology, developed by the authors, for condition monitoring and diagnostics of several critical machine components in large-scale applications. For industry, the main target of condition monitoring is to prevent machines from stopping suddenly, and thus avoid economic losses due to lost production. Once this target is reached at a local level, usually through an R&D project, the extension to a large-scale market gives rise to new goals, such as low computational cost of analysis, results easily interpretable by local technicians, collection of data from worldwide machine installations, and the development of historical datasets to improve the methodology. This paper details an approach to condition monitoring, developed together with a multinational corporation, that covers all the critical points mentioned above.

    The Impacts of Operational Risks in the Supply Chain of Construction Projects in Malaysia

    Construction glitches have become serious issues for Malaysian construction projects. The construction industry is one of the industries driven by supply chains and affected by interconnected risks: a disruption anywhere in the chain can halt the whole project, or even other projects. Although a large body of literature deals with various kinds of risk in the construction supply chain (SC), some of these risks have never been discussed before. This is an empirical investigation; the data was collected through a questionnaire distributed to the construction industry using systematic probability sampling. The final, purified data was analyzed with Structural Equation Modelling in SmartPLS. Three types of risks were identified from the literature, namely supply-side risks (SR), process-side risks (PR), and demand-side risks (DR). It was found that supply-side risks and demand-side risks have significant negative effects on supply chain performance (SCP), while process-side risks also have negative effects on supply chain performance, but not significant ones. This study will help managers understand how supply chain risks (SCR) affect the construction industry and which types of risk they should be most aware of. This study covers only operational risks; future research can address other risks. Furthermore, various mitigation approaches can be proposed, but they also need to be verified for Malaysia.

    Structural Equation Modelling applied to proposed Statistics Attitudes-Outcomes Model: A case of a University in South Africa

    The purpose of the study is to investigate the structural relationships among constructs of the statistics attitudes-outcomes model (SA-OM) using exploratory structural equation modelling (ESEM) methodology. The sample consists of 583 first-year undergraduate students enrolled in statistics courses at a university in South Africa. ESEM reveals that all but two of the nine constructs have good to excellent reliability. To enhance the model, eight variables were deleted. All other indicators load significantly onto a construct. Congruency of the SA-OM and the expectancy value model (EVM) is noted. The SRMR for all modified models is less than 0.10, suggesting that all these models have acceptable fit. Moreover, all the modified models have RMSEA values within the range of adequate fit. On the contrary, all the models have unacceptable fit according to the PCF, CFI, AGFI and PGFI statistics, i.e. according to all parsimony fit indices except the RMSEA. The results also reveal that all incremental fit indices but the BBNFI approve the modified models as acceptable, since most of these indices are almost equal to the cut-off point of 0.9. However, the BBNNFI disapproves of the ML3 and ML5 models. A host of inconsistencies among fit indices is noted.
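    The SRMR criterion used above is simple to compute directly. The sketch below shows the standard formula, the root mean square of the standardised residuals between an observed and a model-implied correlation matrix, applied to a pair of small made-up matrices; the matrices are purely illustrative and are not from the study's data.

```python
import numpy as np

def srmr(observed, implied):
    """Standardised root mean square residual: RMS of the residuals over
    the lower triangle (including the diagonal) of the residual matrix."""
    resid = observed - implied
    idx = np.tril_indices_from(resid)
    return np.sqrt(np.mean(resid[idx] ** 2))

# Made-up 3x3 correlation matrices for illustration only.
obs = np.array([[1.00, 0.45, 0.30],
                [0.45, 1.00, 0.50],
                [0.30, 0.50, 1.00]])
imp = np.array([[1.00, 0.40, 0.32],
                [0.40, 1.00, 0.48],
                [0.32, 0.48, 1.00]])

print(f"SRMR = {srmr(obs, imp):.4f}")  # well under the 0.10 cut-off
```

    An SRMR below roughly 0.08-0.10 is conventionally read as acceptable fit, which is the cut-off the abstract applies.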

    Breaking Down the Barriers To Operator Workload Estimation: Advancing Algorithmic Handling of Temporal Non-Stationarity and Cross-Participant Differences for EEG Analysis Using Deep Learning

    This research focuses on two barriers to using EEG data for workload assessment: day-to-day variability, and cross-participant applicability. Several signal processing techniques and deep learning approaches are evaluated in multi-task environments. These methods account for temporal, spatial, and frequential data dependencies. Variance of frequency-domain power distributions for cross-day workload classification is statistically significant. Skewness and kurtosis are not significant in an environment absent workload transitions, but are salient with transitions present. LSTMs improve day-to-day feature stationarity, decreasing error by 59% compared to previous best results. A multi-path convolutional recurrent model using bi-directional, residual recurrent layers significantly increases predictive accuracy and decreases cross-participant variance. Deep learning regression approaches are applied to a multi-task environment with workload transitions. Accounting for temporal dependence significantly reduces error and increases correlation compared to baselines. Visualization techniques for LSTM feature saliency are developed to understand EEG analysis model biases.
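    The frequency-domain features discussed above (band power, plus skewness and kurtosis of the power distribution) are straightforward to extract from a single channel. The sketch below is not the study's pipeline: the signal is synthetic, and the sampling rate and band edges are assumed values chosen for illustration.

```python
import numpy as np

# Synthetic one-channel "EEG": a 10 Hz (alpha-band) tone plus noise.
fs = 256                                  # assumed sampling rate (Hz)
t = np.arange(fs * 4) / fs                # 4 s of signal
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

# Simple periodogram estimate of the power spectral density.
freqs = np.fft.rfftfreq(x.size, 1 / fs)
psd = np.abs(np.fft.rfft(x)) ** 2 / x.size

def band_power(psd, freqs, lo, hi):
    """Total power in the [lo, hi) Hz band."""
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum()

alpha = band_power(psd, freqs, 8, 13)     # alpha band
beta = band_power(psd, freqs, 13, 30)     # beta band

def skewness(p):
    z = (p - p.mean()) / p.std()
    return np.mean(z ** 3)

def kurtosis(p):
    z = (p - p.mean()) / p.std()
    return np.mean(z ** 4) - 3.0          # excess kurtosis

print(alpha > beta)                       # the 10 Hz tone sits in alpha
print(skewness(psd), kurtosis(psd))
```

    Tracking how such statistics drift from day to day is one way to see the feature non-stationarity that the recurrent models in this research are designed to absorb.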

    Low Probability of Intercept Waveforms via Intersymbol Dither Performance under Multipath Conditions

    This thesis examines the effects of multipath interference on Low Probability of Intercept (LPI) waveforms generated using intersymbol dither. LPI waveforms are designed to be difficult for non-cooperative receivers to detect and manipulate, and have many uses in secure communications applications. In prior research, such a waveform was designed using a dither algorithm to vary the time between the transmission of data symbols in a communication system. This work showed that such a method can be used to frustrate attempts to use non-cooperative receiver algorithms to recover the data. This thesis expands on the prior work by examining the effects of multipath interference on cooperative and non-cooperative receiver performance, to assess the above method's effectiveness under a more realistic model of the physical transmission channel. Both two-ray and four-ray multipath interference channel models were randomly generated using typical multipath power profiles found in the existing literature. Different combinations of maximum allowable symbol delay, pulse shapes, and multipath channels were used to examine the bit error rate (BER) performance of 1) a Minimum Mean Squared Error (MMSE) cooperative equalizer structure with prior knowledge of the dither pattern and 2) a Constant Modulus Algorithm (CMA) non-cooperative equalizer. Cooperative MMSE equalization resulted in an approximately 6-8 dB improvement in Eb/No over non-cooperative equalization, and for a full-range symbol-timing dither, non-cooperative equalization yields a theoretical BER limit of Pb = 10^-1. Of the 50 randomly generated multipath channels, six of the four-ray channels and 15 of the two-ray channels exhibited extremely poor equalization results, indicating a level of algorithm sensitivity to multipath conditions.
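    The non-cooperative side of the comparison above can be illustrated with a minimal CMA equalizer. The sketch below, which is only an illustration and not the thesis implementation, adapts a blind equalizer over an assumed two-ray channel with BPSK symbols; channel taps, step size, and filter length are all invented values.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed two-ray channel and BPSK source (unit constant modulus).
n = 20_000
s = rng.choice([-1.0, 1.0], n)
h = np.array([1.0, 0.4])                  # illustrative two-ray taps
x = np.convolve(s, h)[:n] + 0.02 * rng.standard_normal(n)

taps = 11
w = np.zeros(taps)
w[taps // 2] = 1.0                        # centre-spike initialisation
mu = 1e-3                                 # step size (illustrative)
r2 = 1.0                                  # constant-modulus target |s|^2

# CMA stochastic-gradient update: minimise E[(y^2 - r2)^2] blindly.
for k in range(taps, n):
    u = x[k - taps:k][::-1]               # regressor, most recent first
    y = w @ u
    e = y * (r2 - y * y)                  # CMA error term
    w += mu * e * u

# After adaptation the output should hover near +/-1; measure residual
# dispersion over the last 2000 samples.
y_tail = np.array([w @ x[k - taps:k][::-1] for k in range(n - 2000, n)])
dispersion = np.mean((y_tail ** 2 - r2) ** 2)
print(f"residual CMA dispersion = {dispersion:.4f}")
```

    A cooperative MMSE equalizer would instead solve for its taps directly from training data (or, here, from knowledge of the dither pattern), which is why it retains a 6-8 dB advantage in the thesis results; CMA must infer everything from the constant-modulus property alone.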