124 research outputs found

    The structure of employee compensation in Saudi Arabia : the case of chemical and petrochemical industries

    Get PDF
    This study examines wage differentials and wage discrimination among employees in the chemical and petrochemical industries in Saudi Arabia. The context of segmentation is discussed through a detailed examination of the distinct features of the Saudi labour market, with special emphasis on the Saudisation labour policy, which reflects government intervention in the labour market. Under the Saudisation policy, the government compelled private firms to recruit Saudi nationals and to secure them permanent jobs. The present study discusses how this policy has distorted the structure and function of the Saudi labour market from both the demand-side and supply-side perspectives. Because of the lack of official data on the Saudi labour market and the restrictions imposed by the Statistics Law in Saudi Arabia on access to cross-sectional data, a purpose-designed cross-sectional survey was conducted among a sample of six hundred Saudi and non-Saudi workers in these industries. Simple statistical analyses of the survey returns reveal substantial differences in pay and working conditions between Saudi and non-Saudi workers across a number of personal characteristics, such as level of education, occupation, years of working experience and marital status. Regression analyses further confirm significant differences in the effects of supply-side factors on the monthly earnings of Saudi and non-Saudi workers. Using the Oaxaca-Blinder technique to measure and decompose differences in average monthly earnings between Saudis and non-Saudis in the chemical and petrochemical industries, the study reveals that the aggregate earnings differential between the two groups of workers is 62.6% in favour of Saudi workers; the explained portion of the differential is estimated at 3%, while the unexplained portion is calculated at 97%, indicating a significant level of discrimination in these industries. This study provides an original and systematic attempt at examining wage differentials and wage discrimination, with emphasis on the sources of segmentation between indigenous and migrant workers in the Saudi Arabian labour market. It contributes to bridging the gap in studies on wage differentials and labour market segmentation in Saudi Arabia, in the hope that the economic reforms under way in the country will take such issues into account in reforming labour market policy.
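
    The decomposition step described above splits the mean earnings gap into an "explained" part (differences in characteristics) and an "unexplained" part (differences in returns, commonly read as discrimination). Below is a minimal sketch of a two-fold Oaxaca-Blinder decomposition on synthetic data; all variable names and numbers are hypothetical and are not the study's data.

```python
import numpy as np

def ols(X, y):
    """Ordinary least squares coefficients."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Hypothetical synthetic data: log monthly earnings vs. worker characteristics
rng = np.random.default_rng(0)
n = 300

def make_group(edu_mean, exp_mean, beta):
    edu = rng.normal(edu_mean, 2, n)           # years of education
    exp_ = rng.normal(exp_mean, 4, n)          # years of experience
    X = np.column_stack([np.ones(n), edu, exp_])
    y = X @ beta + rng.normal(0, 0.2, n)       # log earnings
    return X, y

X_a, y_a = make_group(14, 8, np.array([7.5, 0.06, 0.02]))  # group A (e.g. Saudi)
X_b, y_b = make_group(13, 9, np.array([6.9, 0.05, 0.02]))  # group B (e.g. non-Saudi)

beta_a, beta_b = ols(X_a, y_a), ols(X_b, y_b)
xbar_a, xbar_b = X_a.mean(axis=0), X_b.mean(axis=0)

gap = y_a.mean() - y_b.mean()
explained = (xbar_a - xbar_b) @ beta_b         # differences in characteristics
unexplained = xbar_a @ (beta_a - beta_b)       # differences in returns
print(f"gap={gap:.3f} explained={explained:.3f} unexplained={unexplained:.3f}")
```

    Because OLS with an intercept fits each group's mean exactly, the explained and unexplained components sum to the raw gap by construction.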

    Efficient Deep Learning-based Estimation of the Vital Signs on Smartphones

    Full text link
    Due to the widespread use of smartphones in everyday life and the improved computational capabilities of these devices, many complex tasks can now be deployed on them. Given the need for continuous monitoring of vital signs, especially for the elderly or those with certain types of diseases, the development of algorithms that can estimate vital signs using smartphones has attracted researchers worldwide. Such algorithms estimate vital signs (heart rate and oxygen saturation level) by processing an input PPG signal. These methods often apply multiple pre-processing steps to the input signal before the prediction step, which increases their computational complexity, meaning that only a limited number of mobile devices can run them. Furthermore, the pre-processing pipeline requires the design of several hand-crafted stages to obtain an optimal result. This research proposes a novel end-to-end deep learning solution to mobile-based vital sign estimation that requires no pre-processing. Owing to its fully convolutional architecture, the parameter count of the proposed model is, on average, a quarter of that of ordinary architectures that use fully connected layers as prediction heads. As a result, the proposed model has a lower risk of over-fitting and lower computational complexity. A public dataset for vital sign estimation, comprising 62 videos collected from 35 men and 27 women, is also provided. The experimental results demonstrate state-of-the-art estimation accuracy. Comment: 6 pages, 9 figures, 4 tables
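
    To illustrate the parameter saving claimed above, here is a hedged PyTorch sketch contrasting a flattened fully connected prediction head with a fully convolutional one (1x1 convolution plus global average pooling) on a 1-D PPG window; the layer sizes are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn

# Hypothetical 1-D backbone for a raw PPG window (batch, 1 channel, 256 samples)
backbone = nn.Sequential(
    nn.Conv1d(1, 32, kernel_size=7, padding=3), nn.ReLU(),
    nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
)

# Fully connected head: flattening ties the parameter count to the input length
fc_head = nn.Sequential(
    nn.Flatten(), nn.Linear(64 * 256, 128), nn.ReLU(), nn.Linear(128, 2))

# Fully convolutional head: 1x1 conv + global average pooling, length-independent
fcn_head = nn.Sequential(
    nn.Conv1d(64, 2, kernel_size=1), nn.AdaptiveAvgPool1d(1), nn.Flatten())

def n_params(m):
    return sum(p.numel() for p in m.parameters())

x = torch.randn(8, 1, 256)              # batch of raw PPG windows
print(fc_head(backbone(x)).shape)       # torch.Size([8, 2]) -> e.g. HR, SpO2
print(fcn_head(backbone(x)).shape)      # torch.Size([8, 2])
print(n_params(fc_head), "vs", n_params(fcn_head))  # ~2.1M vs. 130 parameters
```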

    Predictive study on time series modeling and comparison with application

    Get PDF
    Efficient time series modeling and forecasting are essential in many areas of practice, and active research on this topic has been ongoing for several years. Given the importance of different prediction methods, this research aims to provide a brief description of some common time series prediction models and their salient features. Box-Jenkins and exponential smoothing models were therefore compared, along with their strengths and weaknesses in forecasting. Our discussion of the various time series models is supported by experimental prediction results obtained on actual monthly sales of fuel products for the period 2014-2017. In fitting models to the data set, special care was taken to select the most appropriate one. To evaluate and compare prediction accuracy, we used several criteria: mean square error (MSE), mean absolute deviation (MAD), mean absolute percentage error (MAPE), and root mean square error (RMSE). Our discussion of time series modeling and forecasting draws on published research from reputable journals and standard textbooks. It was concluded that an ARMA model with three terms was the best of the Box-Jenkins models built for gas oil sales in Iraq, and that simple exponential smoothing was the best exponential smoothing model for forecasting sales of improved gasoline and gas oil in Iraq in the coming years.
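
    The four accuracy criteria listed above have standard definitions and are easy to state precisely; the following self-contained sketch computes all of them, with sales figures made up purely for illustration.

```python
import numpy as np

def forecast_errors(actual, predicted):
    """Compute the four accuracy criteria used in the model comparison."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    e = actual - predicted
    return {
        "MSE":  np.mean(e ** 2),                      # mean square error
        "MAD":  np.mean(np.abs(e)),                   # mean absolute deviation
        "MAPE": np.mean(np.abs(e / actual)) * 100,    # mean absolute % error
        "RMSE": np.sqrt(np.mean(e ** 2)),             # root mean square error
    }

# Hypothetical monthly sales and model forecasts
print(forecast_errors([120, 135, 128, 140], [118, 138, 125, 139]))
```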

    Enhanced separation of azeotropic mixtures by ultrasound-assisted distillation process

    Get PDF
    The main objective of this study is to develop an ultrasound-assisted distillation process that can break minimum-boiling azeotropes under various operating conditions, enhancing the effectiveness of distillation in meeting high-purity separation requirements. As a case study, the ethanol/ethyl acetate (ETOH/ETAC) separation process was considered. The effect of both the intensity and the frequency of the ultrasonic waves on the vapor–liquid equilibrium (VLE) of this system was studied experimentally. Sonication was found to affect the VLE significantly, altering the relative volatility and completely eliminating the azeotropic point, with a preference towards a combination of low-frequency and high-intensity operation. A mathematical model describing the system was developed based on conservation principles, the VLE of the system and sonication effects. The model, which considered a single-stage VLE system enhanced with ultrasonic waves, was coded in Aspen Custom Modeler. The effects of ultrasonic waves on the relative volatility and the azeotropic point were examined, and the experimental data were successfully used to validate the model with reasonable accuracy. The mathematical model was exported to Aspen Plus to represent the sonication equilibrium stages, which were connected in series to configure an ultrasound-assisted distillation (UAD) process for separation of the ETOH/ETAC mixture. The simulation results revealed that ETAC can be recovered from the azeotropic mixture with a purity of 99 mol% using 27 sonication stages. To validate the suitability of the UAD process for separating other minimum-boiling azeotropes, further mixtures were tested, such as ethanol/water, methanol/methyl acetate and n-butanol/water. The developed model was found to have some limitations with respect to the separation of maximum-boiling azeotropes.
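
    A quick way to see what "altering the relative volatility eliminates the azeotrope" means: in a simple relative-volatility picture, the equilibrium vapor fraction is y = αx / (1 + (α − 1)x), and an azeotrope is a point where the y − x curve crosses zero. The sketch below uses entirely hypothetical α(x) profiles, not the paper's Aspen model.

```python
import numpy as np

def vapor_fraction(x, alpha):
    """Equilibrium vapor mole fraction for relative volatility alpha."""
    return alpha * x / (1 + (alpha - 1) * x)

def has_azeotrope(x, y):
    """An interior azeotrope exists where the y - x curve changes sign."""
    return bool(np.any(np.diff(np.sign(y - x)) != 0))

x = np.linspace(0.01, 0.99, 50)

# Hypothetical composition-dependent volatility that drops below 1 -> azeotrope
alpha_base = 2.0 - 1.6 * x
# Hypothetical sonication-shifted volatility staying above 1 -> azeotrope gone
alpha_sono = alpha_base + 0.8

print(has_azeotrope(x, vapor_fraction(x, alpha_base)))  # True: azeotropic point
print(has_azeotrope(x, vapor_fraction(x, alpha_sono)))  # False: eliminated
```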

    Wireless body area network revisited

    Get PDF
    The rapid growth of wireless body area network (WBAN) technology has enabled the fast and secure acquisition and exchange of vast amounts of data in diverse fields. WBANs aim to simplify and improve the speed, accuracy, and reliability of communications from sensor nodes (motes) placed on and/or close to the human body, reducing healthcare costs remarkably. However, the security of sensitive data transferred using WBANs, and protection from adversarial attacks, is a major issue. Depending on the type of application, small, highly sensitive sensors with several nodes, produced by invasive/non-invasive micro- and nano-technology, can be installed on the human body to capture useful information. Lately, the use of micro-electro-mechanical systems (MEMS) and integrated circuits in wireless communications (WCs) has become widespread because of their low-power operation, intelligence, accuracy, and miniaturization. The IEEE 802.15.6 and 802.15.4j standards have already been established specifically to regulate medical networks and WBANs. In this view, the present communication provides an all-inclusive overview of the past development, recent progress, challenges and future trends of security technology related to WBANs.

    Collaborative Research Team

    Get PDF
    Poster abstract: In the university environment, research has moved to the forefront of academic activity. Most large universities have various schools, such as engineering, sciences, and business, that conduct research continuously throughout the academic year. Given the size of these research departments, they are by nature independent. However, many research projects overlap in certain aspects and can even collaborate to pursue further research. Our team is assessing the feasibility of adapting software platforms as tools to facilitate research collaboration. Implementation of such software would enhance communication, data sharing, and productivity across research departments.

    CESAR: Automatic Induction of Compositional Instructions for Multi-turn Dialogs

    Full text link
    Instruction-based multitasking has played a critical role in the success of large language models (LLMs) in multi-turn dialog applications. While publicly available LLMs have shown promising performance, when exposed to complex instructions with multiple constraints they lag behind state-of-the-art models like ChatGPT. In this work, we hypothesize that the availability of large-scale complex demonstrations is crucial to bridging this gap. Focusing on dialog applications, we propose a novel framework, CESAR, that unifies a large number of dialog tasks in the same format and allows programmatic induction of complex instructions without any manual effort. We apply CESAR to InstructDial, a benchmark for instruction-based dialog tasks. We further enhance InstructDial with new datasets and tasks and utilize CESAR to induce complex tasks with compositional instructions. This results in a new benchmark called InstructDial++, which includes 63 datasets with 86 basic tasks and 68 composite tasks. Through rigorous experiments, we demonstrate the scalability of CESAR in providing rich instructions. Models trained on InstructDial++ can follow compositional prompts, such as prompts that ask for multiple stylistic constraints. Comment: EMNLP 202
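
    The core idea, programmatically composing atomic task specifications into one compositional instruction, can be sketched in a few lines. This is a hedged illustration of the concept only; the Task class, compose function, and constraint checkers below are hypothetical and are not the CESAR codebase.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    """A hypothetical atomic dialog task: an instruction plus an output checker."""
    name: str
    instruction: str
    check: Callable[[str], bool]

def compose(*tasks: Task) -> Task:
    """Induce a composite instruction and checker from atomic tasks, no manual effort."""
    instruction = ("Given the dialog context, produce a response that satisfies "
                   "ALL of the following constraints:\n" +
                   "\n".join(f"{i + 1}. {t.instruction}" for i, t in enumerate(tasks)))
    def check(response: str) -> bool:
        return all(t.check(response) for t in tasks)
    return Task("+".join(t.name for t in tasks), instruction, check)

# Two hypothetical atomic stylistic constraints
polite = Task("polite", "Respond politely.",
              lambda r: "please" in r.lower() or "thank" in r.lower())
short = Task("short", "Keep the response under 15 words.",
             lambda r: len(r.split()) < 15)

composite = compose(polite, short)
print(composite.instruction)
print(composite.check("Thank you, here is the answer."))  # True
```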

    Performance evaluation measurement of image steganography techniques with analysis of LSB based on variation image formats

    Get PDF
    Steganography has recently become an outstanding research area for protecting data from unauthorized access. It is defined as the art and science of hiding information in plain sight in various media, such as text, images, audio, video and network channels, so as not to arouse any suspicion; steganalysis, by contrast, is the science of attacking a steganographic system to reveal the secret message. This research surveys the evaluation factors for image steganographic algorithms. The effectiveness of a steganographic technique is rated on three main parameters: payload capacity, image quality and security. The study focuses on image steganography, the most popular branch of the field. The least significant bit (LSB) method is the most common and efficient approach for embedding the secret message, and this paper examines LSB embedding in detail across various image formats. All metrics are illustrated with arithmetic equations, and some important trends are discussed at the end of the paper.
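
    As a concrete reference point for the LSB approach and one of the image-quality measures (PSNR) discussed above, here is a hedged numpy sketch on a random grayscale "cover" image; the array sizes and payload are invented for illustration.

```python
import numpy as np

def embed_lsb(cover, message_bits):
    """Embed message bits into the least significant bit of each pixel value."""
    flat = cover.flatten().astype(np.uint8)   # flatten() copies, cover untouched
    if len(message_bits) > flat.size:
        raise ValueError("payload exceeds cover capacity")
    flat[:len(message_bits)] = (flat[:len(message_bits)] & 0xFE) | message_bits
    return flat.reshape(cover.shape)

def extract_lsb(stego, n_bits):
    """Recover the first n_bits payload bits from the stego image."""
    return stego.flatten()[:n_bits] & 1

def psnr(cover, stego):
    """Peak signal-to-noise ratio, a common image quality measure."""
    mse = np.mean((cover.astype(float) - stego.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255 ** 2 / mse)

rng = np.random.default_rng(1)
cover = rng.integers(0, 256, (64, 64), dtype=np.uint8)  # hypothetical cover
bits = rng.integers(0, 2, 500, dtype=np.uint8)          # 500-bit secret payload
stego = embed_lsb(cover, bits)
assert np.array_equal(extract_lsb(stego, 500), bits)    # payload recovered intact
print(f"PSNR: {psnr(cover, stego):.1f} dB")             # high PSNR = low distortion
```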

    Swordfish: A Framework for Evaluating Deep Neural Network-based Basecalling using Computation-In-Memory with Non-Ideal Memristors

    Full text link
    Basecalling, an essential step in many genome analysis studies, relies on large Deep Neural Networks (DNNs) to achieve high accuracy. Unfortunately, these DNNs are computationally slow and inefficient, leading to considerable delays and resource constraints in the sequence analysis process. A Computation-In-Memory (CIM) architecture using memristors can significantly accelerate the performance of DNNs. However, inherent device non-idealities and architectural limitations of such designs can greatly degrade the basecalling accuracy, which is critical for accurate genome analysis. To facilitate the adoption of memristor-based CIM designs for basecalling, it is important to (1) conduct a comprehensive analysis of potential CIM architectures and (2) develop effective strategies for mitigating the possible adverse effects of inherent device non-idealities and architectural limitations. This paper proposes Swordfish, a novel hardware/software co-design framework that can effectively address the two aforementioned issues. Swordfish incorporates seven circuit and device restrictions or non-idealities from characterized real memristor-based chips. Swordfish leverages various hardware/software co-design solutions to mitigate the basecalling accuracy loss due to such non-idealities. To demonstrate the effectiveness of Swordfish, we take Bonito, the state-of-the-art (i.e., accurate and fast) open-source basecaller, as a case study. Our experimental results using Swordfish show that a CIM architecture can realistically accelerate Bonito for a wide range of real datasets by an average of 25.7x, with an accuracy loss of 6.01%. Comment: To appear in the 56th IEEE/ACM International Symposium on Microarchitecture (MICRO), 202
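
    The evaluation loop described above, perturbing a trained network with characterized device non-idealities and then re-measuring task accuracy, can be mimicked at toy scale. The PyTorch sketch below injects two illustrative non-idealities (multiplicative programming noise and stuck-at-zero cells) into a model's weights; the noise model and rates are hypothetical and are not the seven characterized restrictions from the paper.

```python
import torch
import torch.nn as nn

def inject_nonidealities(model, rel_noise=0.05, stuck_frac=0.01, seed=0):
    """Perturb weights to mimic two memristor non-idealities:
    multiplicative programming variation and stuck-at-zero cells.
    A toy sketch of the evaluation idea, not the Swordfish framework itself."""
    g = torch.Generator().manual_seed(seed)
    with torch.no_grad():
        for p in model.parameters():
            noise = torch.randn(p.shape, generator=g) * rel_noise
            p.mul_(1 + noise)                            # device-to-device variation
            stuck = torch.rand(p.shape, generator=g) < stuck_frac
            p.masked_fill_(stuck, 0.0)                   # stuck-at-zero cells
    return model

# Hypothetical tiny classifier standing in for a basecalling DNN: one would
# measure accuracy on held-out data before and after injection.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 4))
noisy = inject_nonidealities(model)
```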