
    User's Privacy in Recommendation Systems Applying Online Social Network Data, A Survey and Taxonomy

    Recommender systems have become an integral part of many social networks and extract knowledge from a user's personal and sensitive data, both explicitly, with the user's knowledge, and implicitly. This trend has created major privacy concerns, as users are mostly unaware of what data is being used, how much of it, and how securely. In this context, a number of works have addressed privacy concerns around the use of online social network data by recommender systems. This paper surveys the main privacy concerns, measurements, and privacy-preserving techniques used in large-scale online social networks and recommender systems. It builds on prior work on security, privacy preservation, statistical modeling, and datasets to provide an overview of the technical difficulties and problems associated with privacy preservation in online social networks.
    Comment: 26 pages, IET book chapter on big data recommender systems
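    As one concrete illustration of the kind of privacy-preserving technique such surveys cover (this example is not taken from the chapter itself), the sketch below perturbs a user's rating vector with Laplace noise before it is shared with a recommender, a common local-differential-privacy-style safeguard. The function name and parameters are assumptions made for illustration.

```python
# Minimal sketch, assuming a recommender that consumes numeric ratings:
# each rating is perturbed with Laplace noise calibrated to the rating range
# before leaving the user's device. Illustrative only; not from the survey.
import numpy as np

def privatize_ratings(ratings, epsilon=1.0, max_rating=5.0):
    """Add Laplace noise (scale = sensitivity / epsilon) to each rating and
    clip the result back into the valid range."""
    scale = max_rating / epsilon  # per-entry sensitivity is the rating range
    noisy = np.asarray(ratings, dtype=float) + np.random.laplace(0.0, scale, size=len(ratings))
    return np.clip(noisy, 0.0, max_rating)

if __name__ == "__main__":
    print(privatize_ratings([4.0, 2.5, 5.0], epsilon=2.0))
```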

    Strategic management of intellectual capital of the enterprise in the framework of informatization of the economy

    The research identifies the scientific and theoretical approaches related to determining the directions and tools needed to improve strategic management. It also outlines the main directions for the formation and development of intellectual capital within the enterprise, covering both the current state of this capital and the requirements placed on it by the future knowledge economy: recognizing human capital among the assets of the enterprise, promoting the creative activity of employees by using factors that activate human and social capital, and establishing a system for the accounting and evaluation of intangible assets. A set of specific principles for managing the intellectual capital of the enterprise is highlighted: establishing a partnership between all participants in the production process, namely its owners, managers, and employees; defining criteria for assessing the contribution of each employee to the final result of the enterprise's activity; arranging an integrated network of mass worker participation to identify potential reserves and to improve production efficiency and product quality; developing measures for implementing these principles; and orienting management primarily toward future competition. Finally, the scientific and methodological foundations for developing measures to improve the management of the enterprise's intellectual capital are formulated; they cover the sequence, the procedure of determination, the justification, and the evaluation of the appropriateness of particular measures, as well as the involvement of a wide range of workers in their development through survey and questioning methods.

    Institutional Stimuli of Economic Sustainability and Development: Financial Concept and Anticorruption Effects

    The paper elucidates the content and role of institutional stimuli in providing economic sustainability and development. The understanding of the economic content of the stimulating and coordinating function of institutions is systematized, and the essence and forms of manifestation of the dilemma "sustainability vs development" are analyzed. The complex of institutional stimuli of sustainability and development is identified as a complicated integrity of formal and informal alternatives and mechanisms of economic development, and the content of the anticorruption effects of using institutional stimuli is elucidated. The main institutional preconditions of a theoretical model of stimulating economic development, combining quality institutions, mechanisms, and effective stimuli, are revealed, and its basic principles are substantiated. The method of institutional analysis, which underlies the systematization of the economic content of the stimulating and coordinating function of institutions, is central to the study. The systems approach is used to analyze the essence and forms of manifestation of the dilemma "sustainability vs development". Based on methods of structuring and synthesis, the complex of institutional stimuli of sustainability and development is separated out as a complicated integrity of formal and informal alternatives, while structural-logic analysis made it possible to identify the mechanisms and instruments of economic development, and methods of comparison and generalization are used to elucidate the content of the anticorruption effects of institutional stimuli. The study establishes the importance, when choosing between market and dirigiste mechanisms of sustainability and development, of distinguishing legal institutions of state regulation that ensure the observance of property rights and responsibility from institutions that structure the market behavior of partners under conditions of effective competition. It is shown that the effectiveness of institutional stimuli is connected with strengthening property rights and decreasing the level of corruption, and the mechanism by which institutional stimuli help overcome the conflict between the private and the social in the macrofinancial sphere is explained. Anticorruption effects of institutional stimulation require the rule of law, effective enforcement, support of democratic values, and the formation of a competitive market environment; at the same time, anticorruption stimuli prove ineffective without raising the general culture of the population and forming an anticorruption worldview. For countries with emerging markets, a civic society more independent from the state and a higher degree of personal responsibility of individuals are found to be necessary for institutional stimuli that support stability and development and have a motivating effect. Institutional arrangements for moving the economies of emerging-market countries onto a development path are determined.

    Extent and Distribution of Urban Tax Delinquency


    Modeling, Simulation and Emulation of Intelligent Domotic Environments

    Intelligent Domotic Environments are a promising approach, based on semantic models and commercial off-the-shelf domotic technologies, to realizing new intelligent buildings, but their complexity requires innovative design methodologies and tools for ensuring correctness. Suitable simulation and emulation approaches and tools must be adopted to allow designers to experiment with their ideas and to incrementally verify designed policies in a scenario where the environment is partly emulated and partly composed of real devices. This paper describes a framework that exploits UML 2.0 state diagrams for the automatic generation of device simulators from ontology-based descriptions of domotic environments. The resulting DogSim simulator may simulate a complete building automation system in software, or may be integrated into the Dog Gateway, allowing partial simulation of virtual devices alongside real devices. Experiments on a real home show that the approach is feasible and can easily address both simulation and emulation requirements.
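    DogSim generates its simulators from ontology-based descriptions via UML 2.0 state diagrams; the Python sketch below only illustrates the underlying idea, a generic device simulator driven by a declarative state-machine description. The class name and the example lamp model are hypothetical and do not reflect DogSim's actual API.

```python
# Minimal sketch (not DogSim's actual API): a device simulator driven by a
# declarative state-machine description, mirroring the idea of generating
# simulators from ontology-based device models.

class SimulatedDevice:
    def __init__(self, name, description):
        self.name = name
        self.state = description["initial"]
        # transitions: {state: {command: next_state}}
        self.transitions = description["transitions"]

    def send_command(self, command):
        """Apply a command; ignore it if not allowed in the current state."""
        next_state = self.transitions.get(self.state, {}).get(command)
        if next_state is not None:
            print(f"{self.name}: {self.state} --{command}--> {next_state}")
            self.state = next_state
        return self.state

# Hypothetical description of a simple lamp, standing in for an
# ontology-derived device model.
LAMP = {
    "initial": "off",
    "transitions": {"off": {"on": "on"}, "on": {"off": "off"}},
}

if __name__ == "__main__":
    lamp = SimulatedDevice("kitchen_lamp", LAMP)
    lamp.send_command("on")
    lamp.send_command("off")
```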

    HP-CERTI: Towards a high performance, high availability open source RTI for composable simulations (04F-SIW-014)

    Composing simulations of complex systems from already existing simulation components remains a challenging issue. Motivations for composable simulation include the generation of a given federation driven by operational requirements provided "on the fly". The High Level Architecture, initially developed for designing fully distributed simulations, can be considered an interoperability standard for composing simulations from existing components. Requirements for constructing such complex simulations are quite different from those discussed for distributed simulations: although interoperability and reusability remain essential, both high performance and high availability also have to be considered to fulfill the requirements of the end user. ONERA is currently designing a High Performance / High Availability HLA Run-time Infrastructure based on its open source implementation of the HLA 1.3 specifications. HP-CERTI is a software package including two main components: the first, SHM-CERTI, provides an optimized version of CERTI based on a shared-memory communication scheme; the second, Kerrighed-CERTI, allows the deployment of CERTI under the control of the Kerrighed Single System Image operating system for clusters, currently designed by IRISA. This paper describes the design of both the high-performance and the high-availability Run-time Infrastructures, focusing on the architecture of SHM-CERTI. This work is carried out in the context of the COCA (High Performance Distributed Simulation and Models Reuse) Project, sponsored by the DGA/STTC (Délégation Générale pour l'Armement/Service des Stratégies Techniques et des Technologies Communes) of the French Ministry of Defense.
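    SHM-CERTI itself is an RTI component; the Python sketch below only illustrates the general idea behind a shared-memory communication scheme between co-located processes, here as a single-slot mailbox guarded by a status byte. The memory layout, class name, and lack of any locking or availability machinery are simplifications for illustration, not CERTI's actual protocol.

```python
# Minimal sketch of a shared-memory message slot between two processes
# (one producer, one consumer). Layout assumption: 1 status byte
# (0 = empty, 1 = full), 4-byte little-endian length, then the payload.
import struct
from multiprocessing import shared_memory

SLOT_SIZE = 4096

class ShmMailbox:
    def __init__(self, name, create=False):
        if create:
            self.shm = shared_memory.SharedMemory(name=name, create=True, size=SLOT_SIZE)
            self.shm.buf[0] = 0  # mark the slot empty
        else:
            self.shm = shared_memory.SharedMemory(name=name)

    def send(self, payload: bytes) -> bool:
        """Write one message if the slot is free; return False if it is full."""
        if self.shm.buf[0] == 1:
            return False
        self.shm.buf[1:5] = struct.pack("<I", len(payload))
        self.shm.buf[5:5 + len(payload)] = payload
        self.shm.buf[0] = 1
        return True

    def receive(self):
        """Read one message if present, marking the slot empty again."""
        if self.shm.buf[0] == 0:
            return None
        (length,) = struct.unpack("<I", bytes(self.shm.buf[1:5]))
        payload = bytes(self.shm.buf[5:5 + length])
        self.shm.buf[0] = 0
        return payload
```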

    Should We Learn Probabilistic Models for Model Checking? A New Approach and An Empirical Study

    Many automated system analysis techniques (e.g., model checking, model-based testing) rely on first obtaining a model of the system under analysis. System modeling is often done manually, which is widely considered a hindrance to adopting model-based system analysis and development techniques. To overcome this problem, researchers have proposed to automatically "learn" models from sample system executions and have shown that the learned models can sometimes be useful. There are, however, many questions to be answered. For instance, how much should we generalize from the observed samples, and how fast does learning converge? Would the analysis result based on the learned model be more accurate than the estimate we could have obtained by sampling many system executions within the same amount of time? In this work, we investigate existing algorithms for learning probabilistic models for model checking, propose an evolution-based approach for better controlling the degree of generalization, and conduct an empirical study to answer these questions. One of our findings is that the effectiveness of learning may sometimes be limited.
    Comment: 15 pages, plus 2 reference pages, accepted by FASE 2017 at ETAPS
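    The evolution-based approach proposed in the paper is not reproduced here; the sketch below only shows the baseline idea it builds on, estimating a discrete-time Markov chain from sample executions by transition-frequency counting, which a probabilistic model checker could then analyze. Function and state names are hypothetical.

```python
# Minimal sketch of the baseline idea behind "learning models for model
# checking": estimate a discrete-time Markov chain (DTMC) from observed
# execution traces by transition-frequency counting. This illustrates the
# general technique, not the evolution-based algorithm proposed in the paper.
from collections import Counter, defaultdict

def learn_dtmc(traces):
    """Return {state: {next_state: probability}} estimated from traces,
    where each trace is a sequence of observed states."""
    counts = defaultdict(Counter)
    for trace in traces:
        for current, nxt in zip(trace, trace[1:]):
            counts[current][nxt] += 1
    return {
        state: {nxt: c / sum(nexts.values()) for nxt, c in nexts.items()}
        for state, nexts in counts.items()
    }

if __name__ == "__main__":
    # Hypothetical traces of a system alternating between 'idle' and 'busy'.
    traces = [["idle", "busy", "idle", "idle"], ["idle", "busy", "busy", "idle"]]
    print(learn_dtmc(traces))
```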