4 research outputs found

    Literature Review: Implementation of Job Satisfaction and Job Rotation Systems for Company Employees (Penerapan Sistem Job Satisfaction dan Job Rotation pada Karyawan Perusahaan)

    Job rotation policy is a strategy companies use to develop employee skills and improve employee performance; it can also provide refreshment and a sense of a new atmosphere for employees, which in turn can increase their job satisfaction. The method in this research is a literature study, reviewing the results of previous research published in national and international articles. The findings were analysed and used as material for discussion to answer questions about the relationship between job rotation policies and job satisfaction, and the impact of that relationship on employee performance. The ideas in each study provide information about the theory and methodology of the work being analysed. The literature review shows that job rotation policies implemented by companies have a positive impact on employee job satisfaction and are therefore good for improving employee performance. Keywords: Job Rotation, Job Satisfaction, Company Employee

    An out-of-the-box full-network embedding for convolutional neural networks

    © 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
    Features extracted through transfer learning can be used to exploit deep learning representations in contexts where there are very few training samples, where computational resources are limited, or where the tuning of hyper-parameters needed for training deep neural networks is unfeasible. In this paper we propose a novel feature extraction embedding called the full-network embedding. This embedding is based on two main points. First, the use of all layers in the network, integrating activations from different levels of information and from different types of layers (i.e., convolutional and fully connected). Second, the contextualisation and leverage of information based on a novel three-valued discretisation method. The former provides extra information useful to extend the characterisation of data, while the latter reduces noise and regularises the embedding space; significantly, it also reduces the computational cost of processing the resultant representations. The proposed method is shown to outperform single-layer embeddings on several image classification tasks, while also being more robust to the choice of the pre-trained model used as the transfer source.
    This work is partially supported by the Joint Study Agreement no. W156463 under the IBM/BSC Deep Learning Center agreement, by the Spanish Government through Programa Severo Ochoa (SEV-2015-0493), by the Spanish Ministry of Science and Technology through the TIN2015-65316-P project, by the Generalitat de Catalunya (contract 2014-SGR-1051), and by the Core Research for Evolutional Science and Technology (CREST) program of the Japan Science and Technology Agency (JST).
    Peer reviewed. Postprint (author's final draft).
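    The pipeline this abstract describes (concatenate activations from all layers, contextualise each feature, then discretise into three values) can be sketched roughly as follows. The threshold values and the per-feature standardisation used here are illustrative assumptions, not the paper's tuned procedure:

```python
import numpy as np

def full_network_embedding(activations, low=-0.25, high=0.15):
    """Sketch of a full-network embedding: concatenate per-layer
    activations, standardise each feature across the dataset, then
    discretise into {-1, 0, 1}. Thresholds are illustrative guesses."""
    # activations: list of (n_samples, n_features_i) arrays, one per layer
    x = np.concatenate(activations, axis=1)       # full-network feature vector
    mu, sigma = x.mean(axis=0), x.std(axis=0) + 1e-8
    z = (x - mu) / sigma                          # contextualise each feature
    emb = np.zeros_like(z, dtype=np.int8)
    emb[z > high] = 1                             # strongly active feature
    emb[z < low] = -1                             # strongly inactive feature
    return emb                                    # sparse ternary embedding
```

    The ternary output is what makes the representation cheap to process downstream: most entries are zero, and each entry needs at most two bits.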

    Resiliency in Deep Convolutional Neural Networks

    The enormous success and popularity of deep convolutional neural networks for object detection has prompted their deployment in various real-world applications. However, their performance in the presence of hardware faults or damage that could occur in the field has not been studied. This thesis explores the resiliency of six popular network architectures for image classification (AlexNet, VGG16, ResNet, GoogleNet, SqueezeNet and YOLO9000) when subjected to various degrees of failure. We introduce failures in a deep network by dropping a percentage of weights at each layer, then assess the effects of these failures on classification performance. We estimate the fitness of the weights and drop them in order, from least fit to most fit. Finally, we determine the ability of the network to self-heal and recover its performance by retraining its healthy portions after partial damage. We try different methods of re-training the healthy portion by varying the optimizer, and we measure the time and resources required for re-training. We also reduce the number of parameters in GoogleNet and VGG16 to the size of SqueezeNet and re-train with varying percentages of the dataset. This can be used as a network pruning method.
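    The failure-injection step (dropping a percentage of weights per layer, least fit first) can be sketched as below. Approximating "fitness" by weight magnitude is an assumption for illustration; the thesis may use a different fitness measure:

```python
import numpy as np

def drop_weights(weights, fraction, least_fit_first=True):
    """Zero out a fraction of the weights in each layer, ordered by
    magnitude (|w| stands in for 'fitness' here, an assumption)."""
    damaged = []
    for w in weights:
        flat = w.flatten()                      # copy; original is untouched
        k = int(fraction * flat.size)           # number of weights to drop
        order = np.argsort(np.abs(flat))        # least-magnitude first
        if not least_fit_first:
            order = order[::-1]                 # most-magnitude first instead
        flat[order[:k]] = 0.0                   # simulate the hardware fault
        damaged.append(flat.reshape(w.shape))
    return damaged
```

    Classification accuracy would then be re-measured on the damaged copies, and the healthy (non-zeroed) weights re-trained to study self-healing.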

    Deep neural networks under stress

    In recent years, deep architectures have been used for transfer learning with state-of-the-art performance on many datasets. The properties of their features remain, however, largely unstudied from the transfer perspective. In this work, we present an extensive analysis of the resiliency of feature vectors extracted from deep models, with special focus on the trade-off between performance and compression rate. By introducing perturbations to image descriptors extracted from a deep convolutional neural network, we change their precision and number of dimensions, measuring how this affects the final score. We show that deep features are more robust to these disturbances than classical approaches, achieving a compression rate of 98.4% while losing only 0.88% of their original score on Pascal VOC 2007.
    Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) 8,248/91. 23rd IEEE International Conference on Image Processing (ICIP).
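    The perturbations described (reducing both the number of dimensions and the numeric precision of a deep feature vector) can be sketched as below. Selecting dimensions by variance and using uniform quantisation are illustrative choices, not necessarily the paper's protocol:

```python
import numpy as np

def compress_features(x, keep_dims, n_bits):
    """Perturb deep feature vectors: keep only `keep_dims` dimensions
    (chosen by variance, an assumption) and quantise values to n_bits."""
    var = x.var(axis=0)
    idx = np.argsort(var)[::-1][:keep_dims]      # highest-variance dims first
    y = x[:, idx]
    lo, hi = y.min(), y.max()
    levels = 2 ** n_bits - 1
    q = np.round((y - lo) / (hi - lo + 1e-12) * levels)  # quantise
    y_hat = q / levels * (hi - lo) + lo                  # dequantise
    # compression rate vs. the original 32-bit float representation
    rate = 1.0 - (keep_dims * n_bits) / (x.shape[1] * 32)
    return y_hat, rate
```

    Sweeping `keep_dims` and `n_bits` and re-scoring a classifier on `y_hat` would trace the performance-versus-compression trade-off the abstract reports.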