    User-Friendly Investing Apps Granting Novice Access to Stock Exchanges— Overvaluing Stocks

    The internet and smartphones have decreased information asymmetry in the stock market, and more and more 'regular' people can now trade securities (stocks) at their fingertips. With their user-friendly trading platforms, apps like Robinhood let individuals buy and sell stocks with few barriers. Increased market participation has been good for markets, as prices continue to rise and more capital is available. However, most retail investors lack the proper knowledge and are far more susceptible to herding behavior fueled by internet speculators. Stocks experience artificial price inflation as retail investors' demand increases. Traditional investors draw on a wide variety of information that most individuals lack before making trades. Nevertheless, vast groups of uninformed individuals significantly impact stock prices: Kodak, for example, saw an 879.8% stock increase in July 2020. Companies are becoming overvalued because speculation is the driving factor. The threat we potentially face with increased market participation is that many of our economic sectors could become overvalued if novice investors fund them. Presentation Time: Wednesday, 1-2 p.m. Zoom link: https://usu-edu.zoom.us/j/82747382202?pwd=MmJHRFF0SG5kR21RQ0RsR2lDN1RBdz0

    Reducing the Source in the Holy Quran Morphological and Grammatical Study ``Rwydaan a Model''

    For grammarians, the working infinitive (the verbal noun acting with the force of its verb) is not diminished, because if it were diminished it would approach the noun, the diminutive being one of the noun's characteristics. Yet in reading the Qur'an and Arabic speech, one finds a working diminutive verbal noun: ``Rwydaan.'' The working verbal noun and the diminutive are among the important topics that grammarians and morphologists have treated at length with study and explanation. Finding ``Rwydaan'' in the books of Qur'anic parsing and grammar, however, was something worthy of study, and it prompted me to search further, as no one else had done. Keywords: working infinitive, verb work, diminution, Rwydaan

    Optimization of coded signals based on wavelet neural network

    The pulse compression technique is used in many modern radar signal processing systems to achieve the range accuracy and resolution of a narrow pulse while retaining the detection capability of a long pulse; it is important for improving range resolution of targets. Matched filtering of binary phase-coded radar signals creates undesirable sidelobes, which may mask important information. The application of neural networks to pulse compression has been explored in the past; nonetheless, there is still a need for improvement. A novel approach for pulse compression using a feed-forward Wavelet Neural Network (WNN) is proposed, with one input layer, one output layer, and one hidden layer consisting of three neurons, each using the Morlet function as its activation function. The WNN is a class of network that combines the classic sigmoid neural network with wavelet analysis. A simulation was performed to evaluate the effectiveness of the proposed method; the results demonstrated the strong approximation ability of the WNN and its capabilities in prediction and system modeling. The evaluation used 13-bit, 35-bit, and 69-bit Barker codes as signal codes fed to the WNN. Compared with existing methods, including traditional algorithms such as the autocorrelation function (ACF) algorithm, the WNN yields a better peak-to-sidelobe ratio (PSR), lower mean square error (MSE), less noise, and better range-resolution and Doppler-shift performance
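    The network structure described above can be sketched as follows. This is a minimal illustrative forward pass only, not the authors' implementation: the weight initialization, scaling, and training procedure are assumptions, and only the architecture (one hidden layer of three Morlet-activated neurons, linear output) follows the abstract.

    ```python
    import numpy as np

    def morlet(x):
        """Morlet wavelet, used here as the hidden-layer activation."""
        return np.cos(1.75 * x) * np.exp(-x**2 / 2.0)

    class WaveletNet:
        """Feed-forward WNN: input layer, one hidden layer of 3 Morlet
        neurons, and a linear output layer (untrained sketch)."""
        def __init__(self, n_in, n_hidden=3, seed=0):
            rng = np.random.default_rng(seed)
            self.W1 = rng.standard_normal((n_hidden, n_in)) * 0.1
            self.b1 = np.zeros(n_hidden)
            self.W2 = rng.standard_normal((1, n_hidden)) * 0.1
            self.b2 = np.zeros(1)

        def forward(self, x):
            h = morlet(self.W1 @ x + self.b1)  # hidden layer, Morlet activation
            return self.W2 @ h + self.b2       # linear output layer

    # 13-bit Barker code, one of the signal codes used in the paper's simulations
    barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)
    net = WaveletNet(n_in=13)
    y = net.forward(barker13)
    ```

    In practice the weights would be trained (e.g. by gradient descent on the MSE) so that the network output approximates the ideal compressed pulse with suppressed sidelobes.
    
    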

    An Automated Method For Model-Plant Mismatch Detection And Correction In Process Plants Employing Model Predictive Control (MPC)

    A model predictive controller (MPC) uses the process model to predict future outputs of the system; hence, its performance is directly related to the quality of the model. The difference between the model and the actual plant is termed model-plant mismatch (MPM). Since MPM has a significant effect on MPC performance, the model has to be corrected and updated whenever high MPM is detected. Re-identifying a process model with a large number of inputs and outputs is costly due to potential production losses and high manpower effort; therefore, the location of the mismatch must be detected so that only the affected channel is re-identified. Detection methods based on partial correlation analysis, among others, have been developed, but these are qualitative methods that do not clearly indicate the extent of the mismatch or whether corrective action is necessary. The methodology proposed in this project uses a quantitative variable (e/u), the model error divided by the manipulated variable, to identify changes in the plant gain and hence the mismatch. Taguchi experiments were carried out to identify the gains contributing most to the overall process, and these major contributors were then used to find the mismatch threshold limits by trial and error. When the mismatch indicated by the variable (e/u) exceeds the threshold limit, the model gain of the controller is auto-corrected to match the new plant gain. The proposed method was assessed in simulations using MATLAB and Simulink on the Wood and Berry distillation column case study and was successfully validated. Testing various mismatch scenarios for both of the two major contributors to the process, the algorithm was able to bring the output back to the desired set-point in a very short time
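    The detect-and-correct step can be sketched in a few lines. This is a hedged toy illustration, not the thesis's MATLAB/Simulink implementation: the threshold value and the steady-state data are invented, and only the 12.8 model gain echoes the Wood-Berry column's well-known G11 gain.

    ```python
    def detect_and_correct(model_gain, e, u, threshold):
        """Quantitative mismatch check: the ratio e/u (model error over the
        manipulated-variable move) approximates the gain error at steady
        state. When |e/u| exceeds the threshold, the controller's model
        gain is corrected toward the apparent plant gain."""
        ratio = e / u                        # the (e/u) indicator
        if abs(ratio) > threshold:
            corrected = model_gain + ratio   # plant gain ~ model gain + e/u
            return corrected, True
        return model_gain, False

    # toy steady-state data: true plant gain 15.0, model gain 12.8
    plant_gain, model_gain = 15.0, 12.8
    u = 0.5                                  # manipulated-variable move
    e = (plant_gain - model_gain) * u        # resulting model error
    new_gain, updated = detect_and_correct(model_gain, e, u, threshold=1.0)
    ```

    With these numbers e/u = 2.2, which exceeds the threshold, so the model gain is snapped to the apparent plant gain; a small e/u below the threshold would leave the model untouched, avoiding needless re-identification.
    
    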

    Evaluation of Pre-Trained CNN Models for Cardiovascular Disease Classification: A Benchmark Study

    From the 18th century up to the present day, cardiovascular diseases, which are among the most significant health risks globally, have been diagnosed by auscultation of heart sounds with a stethoscope, a method that is elusive and requires a highly experienced physician to master. Artificial intelligence, and subsequently machine learning, is being applied to equip modern medicine with powerful tools to improve medical diagnoses. Image- and audio-pre-trained convolutional neural network (CNN) models have been used to classify normal and abnormal heartbeats from phonocardiogram signals. In this paper, we present an up-to-date benchmark of the most commonly used pre-trained CNN models on a merged set of three publicly available datasets, chosen to provide a sufficiently large sample range. We objectively benchmark more than two dozen image-pre-trained CNN models, in addition to two of the most popular audio-based pre-trained CNN models, VGGish and YAMNet, which were developed specifically for audio classification. The experimental results show that the audio-based models are among the best-performing models; in particular, the VGGish model had the highest average validation accuracy and average true positive rate, at 87% and 85%, respectively
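    The two reported figures are standard binary-classification metrics, computed from confusion-matrix counts as below. The counts here are purely illustrative (not the paper's data); they are chosen only so the resulting values match the reported 87% accuracy and 85% true positive rate.

    ```python
    def accuracy_and_tpr(tp, tn, fp, fn):
        """Validation accuracy and true positive rate (sensitivity)
        from binary confusion-matrix counts."""
        acc = (tp + tn) / (tp + tn + fp + fn)
        tpr = tp / (tp + fn)
        return acc, tpr

    # hypothetical counts for 1000 heart-sound clips (500 abnormal, 500 normal)
    acc, tpr = accuracy_and_tpr(tp=425, tn=445, fp=55, fn=75)
    # acc = 0.87, tpr = 0.85
    ```

    Reporting TPR alongside accuracy matters here because missing an abnormal heartbeat (a false negative) is clinically costlier than a false alarm.
    
    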

    Distributed detection, localization, and estimation in time-critical wireless sensor networks

    In this thesis the problem of distributed detection, localization, and estimation (DDLE) of a stationary target in a fusion center (FC) based wireless sensor network (WSN) is considered. The communication process is subject to time-critical operation and restricted power and bandwidth (BW) resources, operating over a shared communication channel suffering from Rayleigh fading and phase noise. A novel algorithm is proposed to solve the DDLE problem, consisting of two dependent stages: distributed detection and distributed estimation. The WSN performs distributed detection first, and based on the global detection decision the distributed estimation stage is performed. The communication between the SNs and the FC occurs over a shared channel via a slotted Aloha MAC protocol to conserve BW. In distributed detection, hard decision fusion is adopted, using the counting rule (CR), along with sensor censoring in order to save power and BW. The effect of Rayleigh fading on distributed detection is also considered and accounted for by using distributed diversity combining techniques, where the diversity combining is performed among the sensor nodes (SNs) in lieu of processing at the FC. Two distributed techniques are proposed: distributed maximum ratio combining (dMRC) and distributed equal gain combining (dEGC). Both techniques show superior detection performance when compared to conventional diversity combining procedures that take place at the FC. In distributed estimation, the segmented distributed localization and estimation (SDLE) framework is proposed. The SDLE enables power- and BW-efficient processing. The SDLE hinges on the idea of introducing intermediate parameters that are estimated locally by the SNs and transmitted to the FC instead of the actual measurements. This concept decouples the main problem into a simpler set of local estimation problems solved at the SNs and a global estimation problem solved at the FC.
Two algorithms are proposed for solving the local problem: a nonlinear least squares (NLS) algorithm using the variable projection (VP) method and a simpler grid search (GS) method. Also, four algorithms are proposed to solve the global problem: NLS, GS, the hyperspherical intersection (HSI) method, and the robust hyperspherical intersection (RHSI) method. Thus, the SDLE can be solved through local and global algorithm combinations. Five combinations are tried: NLS2 (NLS-NLS), NLS-HSI, NLS-RHSI, GS2, and GS-NLS. It turns out that the last algorithm combination delivers the best localization and estimation performance. In fact, the target can be localized with less than one meter error. The SNs send their local estimates to the FC over a shared channel using the slotted Aloha MAC protocol, which suits WSNs since it requires only one channel. However, Aloha is known for its relatively high medium access or contention delay when the medium access probability is poorly chosen. This fact significantly hinders the time-critical operation of the system. Hence, multi-packet reception (MPR) is used with the slotted Aloha protocol, in which several channels are used for contention. The contention delay is analyzed for slotted Aloha with and without MPR. More specifically, the mean and variance have been analytically computed and the contention delay distribution is approximated. Having theoretical expressions for the contention delay statistics enables optimizing both the medium access probability and the number of MPR channels in order to strike a trade-off between delay performance and complexity
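    The contention-delay statistics being analyzed can be illustrated with a Monte Carlo sketch of the simplest model: if each slot grants access independently with probability p, the delay is geometrically distributed with mean 1/p and variance (1-p)/p². This is a generic slotted-Aloha approximation for illustration, not the thesis's exact analysis (which also covers the MPR case).

    ```python
    import random

    def contention_delay(p, rng):
        """Slots until a node's first successful access, assuming each
        slot succeeds independently with probability p (geometric model)."""
        slots = 1
        while rng.random() >= p:
            slots += 1
        return slots

    rng = random.Random(1)
    p = 0.25
    samples = [contention_delay(p, rng) for _ in range(100_000)]
    mean = sum(samples) / len(samples)   # theory: 1/p = 4 slots
    var = sum((s - mean) ** 2 for s in samples) / len(samples)  # theory: (1-p)/p**2 = 12
    ```

    The heavy tail visible in the variance (12 slot² for a mean of only 4 slots) is exactly why a poorly chosen access probability hinders time-critical operation, and why adding MPR channels to cut contention is attractive.
    
    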

    Non-Orthogonal Multiple Access for Hybrid VLC-RF Networks with Imperfect Channel State Information

    The present contribution proposes a general framework for the energy efficiency analysis of a hybrid visible light communication (VLC) and radio frequency (RF) wireless system, in which both the VLC and RF subsystems utilize non-orthogonal multiple access (NOMA) technology. The proposed framework is based on realistic communication scenarios, as it takes into account the mobility of users and assumes imperfect channel state information (CSI). In this context, tractable closed-form expressions are derived for the average sum rate of NOMA-VLC and its orthogonal frequency division multiple access (OFDMA)-VLC counterpart. It is shown that incurred CSI errors have a considerable impact on the average energy efficiency of both NOMA-VLC and OFDMA-VLC systems and hence should not be neglected in practical designs and deployments. Interestingly, we further demonstrate that the average energy efficiency of the hybrid NOMA-VLC-RF system outperforms that of the NOMA-VLC system under imperfect CSI. Respective computer simulations corroborate the derived analytic results, and interesting theoretical and practical insights are provided, which will be useful in the effective design and deployment of conventional VLC and hybrid VLC-RF systems
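    The NOMA-versus-OFDMA comparison underlying the sum-rate analysis can be sketched for the textbook two-user downlink case. This is a generic illustration with perfect CSI and invented channel gains, not the paper's closed-form expressions (which additionally model user mobility and CSI error).

    ```python
    import math

    def noma_sum_rate(p_total, a_near, g_near, g_far, n0=1.0):
        """Two-user downlink NOMA sum rate. The far (weak) user decodes
        treating the near user's signal as noise; the near (strong) user
        removes the far user's signal via successive interference
        cancellation (SIC) before decoding its own."""
        a_far = 1.0 - a_near                       # power-allocation split
        r_far = math.log2(1 + a_far * p_total * g_far /
                          (a_near * p_total * g_far + n0))
        r_near = math.log2(1 + a_near * p_total * g_near / n0)  # after SIC
        return r_near + r_far

    def ofdma_sum_rate(p_total, g_near, g_far, n0=1.0):
        """OFDMA baseline: each user gets half the band and half the power;
        since noise power scales with bandwidth, the per-user SNR is
        (P/2)g / (n0/2) = P*g/n0, weighted by the half bandwidth."""
        return 0.5 * (math.log2(1 + p_total * g_near / n0) +
                      math.log2(1 + p_total * g_far / n0))

    # hypothetical channel gains: near user much stronger than far user
    r_noma = noma_sum_rate(p_total=10.0, a_near=0.2, g_near=10.0, g_far=0.5)
    r_ofdma = ofdma_sum_rate(p_total=10.0, g_near=10.0, g_far=0.5)
    ```

    With disparate channel gains, NOMA's superposition plus SIC yields a higher sum rate than the orthogonal split, which is the basic motivation for pairing NOMA with VLC in the paper.
    
    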

    Quality Assurance in Testing of Highway Materials in Pakistan

    In Pakistan, three modes of transportation are in common use. The road sector is believed to be the major mode of transport, carrying about 92 percent of passengers and 96 percent of cargo traffic. Various factors may cause pavement deterioration, such as inadequate drainage, frost action, unsatisfactory compaction, and overloading. One important factor seriously affecting pavement performance is the quality of the material. Although some research has been carried out on the use of required-quality materials in highway construction, limited research has addressed quality assurance in the testing of highway materials in commercial laboratories. As the approval of materials depends mainly on testing reports provided by laboratories, a poor quality assurance system in testing laboratories may adversely affect material approvals. Rapid population growth in Pakistan demands expansion of highway construction; therefore, assessing and improving quality assurance will help ensure proper use of required-quality materials. In this context, an assessment of several commercial laboratories involved in highway material testing was carried out in comparison with a public-sector laboratory. The results indicate significant variations among test results obtained at different commercial laboratories. Based on this assessment, the study highlights areas for improvement to raise the standard of quality assurance in highway material testing in Pakistan