
    Sustainable Growth and Ethics: a Study of Business Ethics in Vietnam Between Business Students and Working Adults

Sustainable growth is not only the ultimate goal of business corporations but also the primary target of local governments as well as regional and global economies. One of the cornerstones of sustainable growth is ethics. An ethical organizational culture provides support for achieving sustainable growth. Ethical leaders and employees have great potential to positively influence decisions and behaviors that lead to sustainability. Ethical behavior, therefore, is expected of everyone in the modern workplace. As a result, companies devote substantial resources to training programs to ensure their employees live up to high ethical standards. This study analyzes Vietnamese business students' level of ethical maturity based on gender, education, work experience, and ethics training. The results, drawn from 260 business students and compared with 704 working adults in Vietnam, demonstrate that students have a significantly higher level of ethical maturity. Furthermore, gender and work experience are significant factors in ethical maturity. While more educated respondents and those who had completed an ethics course did show a higher level of ethical maturity, the differences were not statistically significant. Analysis of the results, along with suggestions and implications, is provided.
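
The group comparisons described in this abstract (students vs. working adults, with and without ethics training) are, at their core, two-sample significance tests. A minimal sketch of that kind of analysis, using hypothetical placeholder scores rather than the study's actual instrument or data:

```python
# Sketch of a two-sample comparison like the one described above.
# The scores below are hypothetical placeholders, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
students = rng.normal(loc=3.8, scale=0.5, size=260)  # hypothetical ethical-maturity scores
adults = rng.normal(loc=3.6, scale=0.5, size=704)

t_stat, p_value = stats.ttest_ind(students, adults, equal_var=False)  # Welch's t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant group difference
```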

A Bayesian scene-prior-based deep network model for face verification

Face recognition/verification has received great attention in both theory and application over the past two decades, and deep learning has recently been considered a very powerful tool for improving its performance. With large labeled training datasets, the features obtained from deep networks achieve higher accuracy than those from shallow networks. However, many reported face recognition/verification approaches rely heavily on the size and representativeness of the training set, and most tend to suffer a serious performance drop, or even fail to work, when few training samples per person are available: a small number of training samples can cause the deep features to vary greatly. We aim to solve this critical problem in this paper. Inspired by recent research on scene domain transfer, for a given face image, a new series of possible scenarios for that face can be deduced from the scene semantics extracted from other individuals in a face dataset. We believe that the "scene", or background, of an image matters: having samples of a given person across more distinct scenes may reveal the intrinsic features shared among that person's faces. To validate this belief, we propose a Bayesian scene-prior-based deep learning model that extracts important features from background scenes. By learning a scene model on a labeled face dataset via the Bayesian approach, the proposed method transforms a face image into new face images by referring to the given face with the learnt scene dictionary. Because the newly derived faces have scenes similar to the input face, face-verification performance can be improved without background variance, while the number of training samples is significantly reduced. Experiments on the Labeled Faces in the Wild (LFW) dataset, view #2 subset, show that this model can increase verification accuracy to 99.2% by means of scene transfer learning (compared with 99.12% in the literature under an unsupervised protocol). Our model likewise achieves 94.3% accuracy on the YouTube Faces database (93.2% in the literature under an unsupervised protocol).
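
Once embeddings are extracted (whether from scene-transferred faces or not), the verification decision itself typically reduces to thresholding a similarity score between two feature vectors. A minimal sketch of that final stage, assuming feature vectors from some pretrained network; the Bayesian scene dictionary itself is not reconstructed here:

```python
# Minimal face-verification decision from two precomputed embeddings.
# The embeddings are assumed to come from a deep feature extractor; the
# threshold would be tuned on a validation split (e.g. of LFW view #2).
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(feat_a: np.ndarray, feat_b: np.ndarray, threshold: float = 0.5) -> bool:
    """Declare a match when the similarity clears the tuned threshold."""
    return cosine_similarity(feat_a, feat_b) >= threshold
```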

    Trapped interacting two-component bosons

In this paper we solve one-dimensional trapped SU(2) bosons with a repulsive δ-function interaction by means of the Bethe-ansatz method. The features of the ground state and the low-lying excited states are studied by numerical and analytic methods. We show that the ground state is an isospin "ferromagnetic" state, which differs from the spin-1/2 fermion system. There exist three quasi-particles in the excitation spectrum, and both the holon-antiholon and holon-isospinon excitations are gapless for large systems. The thermodynamic equilibrium of the system at finite temperature is studied by the thermodynamic Bethe ansatz, and thermodynamic quantities such as the specific heat are obtained in the strong-coupling limit. Comment: 15 pages, 9 figures
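
For orientation, the untrapped (periodic) analogue of such a δ-interacting Bose gas is governed by Bethe-ansatz equations of the Lieb-Liniger form; the SU(2) case nests a second set of equations for the isospin rapidities, and the trapped system differs in detail. A schematic statement, with coupling c > 0, ring length L, and quasi-momenta k_j:

```latex
% Schematic Bethe-ansatz equations for N delta-interacting bosons on a
% ring of length L (Lieb-Liniger form). The SU(2) case of the paper adds
% a nested level of isospin rapidities; the trapped case differs in detail.
e^{i k_j L} = \prod_{l \neq j}^{N} \frac{k_j - k_l + i c}{k_j - k_l - i c},
\qquad j = 1, \dots, N .
```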

    Thermodynamics of Kondo Model with Electronic Interactions

On the basis of the Bethe ansatz solution of the one-dimensional Kondo model with electronic interactions, the thermodynamic equilibrium of the system at finite temperature is studied following the strategy of Yang and Yang. The string hypothesis for the spin rapidity is discussed extensively. Thermodynamic quantities, such as the specific heat and the magnetic susceptibility, are obtained. Comment: 8 pages, 0 figures, RevTeX
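
The Yang-Yang strategy invoked here builds the free energy from particle and hole densities and an entropy functional, and minimizing F = E - TS yields the thermodynamic Bethe-ansatz equations. Schematically, for a single rapidity branch with particle density ρ(k) and hole density ρ_h(k):

```latex
% Schematic Yang-Yang construction: the entropy per unit length is a
% functional of the particle density \rho and hole density \rho_h;
% minimizing the free energy F = E - TS over these densities gives the
% thermodynamic Bethe-ansatz (TBA) equations used in the paper.
\frac{S}{L} = \int \! dk \, \Big[ (\rho + \rho_h)\ln(\rho + \rho_h)
  - \rho \ln \rho - \rho_h \ln \rho_h \Big] .
```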

    Finite-Temperature Scaling of Magnetic Susceptibility and Geometric Phase in the XY Spin Chain

We study the magnetic susceptibility of the 1D quantum XY model and show that, as the temperature approaches zero, the magnetic susceptibility exhibits finite-temperature scaling behavior. Owing to the quantum-classical mapping, this scaling behavior of the susceptibility in the 1D quantum XY model can be easily tested experimentally. Furthermore, the universality of the critical properties of the magnetic susceptibility in the quantum XY model is verified. Our study also reveals the close relation between the magnetic susceptibility and the geometric phase in spin systems where the quantum phase transition is driven by an external magnetic field. Comment: 6 pages, 4 figures, accepted for publication in J. Phys. A: Math. Theor.
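
As a reference point, the 1D quantum XY model is commonly written with anisotropy γ and transverse field λ, and finite-temperature scaling near the quantum critical point takes the standard quantum-critical form. A schematic statement, with a model-dependent exponent x and universal scaling function f (not the paper's specific results):

```latex
% Transverse-field XY chain (anisotropy \gamma, field \lambda) and the
% generic finite-temperature scaling ansatz near the quantum critical
% point \lambda_c; x is a model-dependent exponent, f a universal
% scaling function, and \nu, z the correlation-length and dynamical
% critical exponents.
H = -\sum_{j} \left( \frac{1+\gamma}{2}\,\sigma^x_j \sigma^x_{j+1}
  + \frac{1-\gamma}{2}\,\sigma^y_j \sigma^y_{j+1}
  + \lambda\,\sigma^z_j \right),
\qquad
\chi(\lambda, T) \sim T^{-x}\, f\!\left( \frac{\lambda - \lambda_c}{T^{1/(\nu z)}} \right) .
```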

    Improved accuracy of co-morbidity coding over time after the introduction of ICD-10 administrative data

BACKGROUND: Co-morbidity information derived from administrative data needs to be validated to allow its regular use. We assessed the evolution in the accuracy of coding for Charlson and Elixhauser co-morbidities at three time points over a 5-year period, following the introduction of International Classification of Diseases, 10th Revision (ICD-10), coding of hospital discharges. METHODS: Cross-sectional time-trend evaluation study of coding accuracy using hospital chart data of 3,499 randomly selected patients discharged in 1999, 2001 and 2003 from two teaching and one non-teaching hospital in Switzerland. We measured sensitivity, positive predictive values and Kappa values for agreement between administrative data coded with ICD-10 and chart data, used as the 'reference standard', for recording 36 co-morbidities. RESULTS: For the 17 Charlson co-morbidities, the sensitivity, median (min-max), was 36.5% (17.4-64.1) in 1999, 42.5% (22.2-64.6) in 2001 and 42.8% (8.4-75.6) in 2003. For the 29 Elixhauser co-morbidities, the sensitivity was 34.2% (1.9-64.1) in 1999, 38.6% (10.5-66.5) in 2001 and 41.6% (5.1-76.5) in 2003. Between 1999 and 2003, sensitivity estimates increased for 30 co-morbidities and decreased for six; the increase was statistically significant for six conditions and the decrease for one. Kappa values increased for 29 co-morbidities and decreased for seven. CONCLUSIONS: The accuracy of administrative data in recording clinical conditions improved slightly between 1999 and 2003. These findings are relevant to all jurisdictions introducing new coding systems, because they demonstrate an improvement in administrative data accuracy that may reflect a coding 'learning curve' with the new system.
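
The accuracy measures used in this study are straightforward to compute from paired binary indicators (administrative code present vs. condition present in the chart). A minimal sketch for a single co-morbidity, assuming two parallel 0/1 lists with one entry per patient:

```python
# Sensitivity, positive predictive value (PPV), and Cohen's kappa for one
# co-morbidity, comparing administrative coding against the chart-review
# 'reference standard'. Inputs are parallel 0/1 lists, one entry per patient.
def accuracy_measures(admin, chart):
    tp = sum(a and c for a, c in zip(admin, chart))          # coded and present
    fp = sum(a and not c for a, c in zip(admin, chart))      # coded, not present
    fn = sum(not a and c for a, c in zip(admin, chart))      # missed by coding
    tn = sum(not a and not c for a, c in zip(admin, chart))  # correctly absent
    n = tp + fp + fn + tn
    sensitivity = tp / (tp + fn) if tp + fn else float("nan")
    ppv = tp / (tp + fp) if tp + fp else float("nan")
    p_obs = (tp + tn) / n                                    # observed agreement
    p_exp = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2  # chance agreement
    kappa = (p_obs - p_exp) / (1 - p_exp) if p_exp < 1 else float("nan")
    return sensitivity, ppv, kappa
```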

    Optical metrics and birefringence of anisotropic media

The material tensor of linear response in electrodynamics is constructed out of products of two symmetric second-rank tensor fields, which, in the approximation of geometrical optics and for uniaxial symmetry, reduce to "optical" metrics describing the phenomenon of birefringence. This representation is interpreted in the context of an underlying internal geometrical structure, according to which the symmetric tensor fields are vectorial elements of an associated two-dimensional space. Comment: 24 pages, accepted for publication in GRG
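
The birefringence statement can be made concrete in the geometrical-optics limit: the quartic Fresnel (dispersion) equation of a uniaxial medium factorizes into two quadratic forms, one per optical metric, each propagating one polarization. A schematic statement, with wave covector k_μ and optical metrics g_o (ordinary) and g_e (extraordinary):

```latex
% In the geometrical-optics approximation the Fresnel equation of a
% uniaxial medium factorizes into two light cones, one per optical
% metric; each factor governs one polarization (birefringence).
\left( g_o^{\mu\nu} k_\mu k_\nu \right)
\left( g_e^{\alpha\beta} k_\alpha k_\beta \right) = 0 .
```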

    FedDCT: Federated Learning of Large Convolutional Neural Networks on Resource Constrained Devices using Divide and Co-Training

We introduce FedDCT, a novel distributed learning paradigm that enables the use of large, high-performance CNNs on resource-limited edge devices. As opposed to traditional FL approaches, which require each client to train the full-size neural network independently during each training round, FedDCT allows a cluster of several clients to collaboratively train a large deep learning model by dividing it into an ensemble of several small sub-models and training them on multiple devices in parallel while maintaining privacy. In this co-training process, clients from the same cluster can also learn from each other, further improving their ensemble performance. In the aggregation stage, the server takes a weighted average of all the ensemble models trained by all the clusters. FedDCT reduces the memory requirements and allows low-end devices to participate in FL. We conduct extensive experiments on standardized datasets, including CIFAR-10, CIFAR-100, and two real-world medical datasets, HAM10000 and VAIPE. Experimental results show that FedDCT outperforms a set of current SOTA FL methods with interesting convergence behaviors. Furthermore, compared to other existing approaches, FedDCT achieves higher accuracy and substantially reduces the number of communication rounds (with 4-8 times lower memory requirements) needed to reach the desired accuracy on the testing dataset, without incurring any extra training cost on the server side. Comment: Under review by the IEEE Transactions on Network and Service Management
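
The aggregation stage described above, a weighted average of the models trained by the clusters, can be sketched as a weighted average of parameter dictionaries. A minimal illustration in plain NumPy; the weighting rule (e.g. proportional to cluster data size) and the sub-model splitting are assumptions for illustration, not FedDCT's exact procedure:

```python
# Sketch of the server-side aggregation step: a weighted average of the
# parameter dictionaries returned by each cluster. The weights (e.g.
# cluster data sizes) are illustrative assumptions, not FedDCT's exact rule.
import numpy as np

def aggregate(cluster_params, weights):
    """cluster_params: list of {name: ndarray}; weights: list of floats."""
    total = sum(weights)
    agg = {}
    for name in cluster_params[0]:
        agg[name] = sum(w * p[name] for w, p in zip(weights, cluster_params)) / total
    return agg

# Usage: global_params = aggregate([params_cluster1, params_cluster2], [0.6, 0.4])
```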

Accurate and linear-time pose estimation from points and lines

The Perspective-n-Point (PnP) problem seeks to estimate the pose of a calibrated camera from n 3D-to-2D point correspondences. There are situations, though, where PnP solutions are prone to fail because feature point correspondences cannot be reliably estimated (e.g. scenes with repetitive patterns or with low texture). In such scenarios, one can still exploit alternative geometric entities, such as lines, yielding the so-called Perspective-n-Line (PnL) algorithms. Unfortunately, existing PnL solutions are not as accurate and efficient as their point-based counterparts. In this paper we propose a novel approach that introduces 3D-to-2D line correspondences into a PnP formulation, allowing points and lines to be processed simultaneously. For this purpose we introduce an algebraic line error that can be formulated as linear constraints on the line endpoints, even when these are not directly observable. These constraints can then be naturally integrated within the linear formulations of two state-of-the-art point-based algorithms, OPnP and EPnP, allowing them to handle points, lines, or a combination of both. Exhaustive experiments show that the proposed formulation brings a remarkable boost in performance compared to point-only or line-only solutions, with negligible computational overhead compared to the original OPnP and EPnP.
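
The key observation, that a point-on-line error is linear in the pose, can be sketched directly: if a 3D point X lies on a 3D line, its projection must lie on the observed 2D line l, i.e. l^T K [R|t] X_h = 0, which is linear in the 12 entries of [R|t]. A hedged, simplified sketch of building one such constraint row (not the paper's exact OPnP/EPnP parameterization):

```python
# One algebraic line constraint: a 3D point X on the line must project
# onto the observed 2D line l, i.e. l^T K [R|t] X_h = 0. This is linear
# in the 12 entries of [R|t], so each endpoint contributes one row of a
# linear system; this is a simplified sketch of the idea, not the exact
# formulation used by OPnP/EPnP.
import numpy as np

def line_constraint_row(l, K, X):
    """l: (3,) homogeneous 2D line; K: (3,3) intrinsics; X: (3,) 3D point.
    Returns a (12,) row r such that r @ vec([R|t]) = 0."""
    lK = l @ K                # fold the intrinsics into the line: (3,)
    X_h = np.append(X, 1.0)   # homogeneous 3D point: (4,)
    return np.kron(lK, X_h)   # coefficients on the row-major vec of [R|t]
```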