
    A method for measuring detailed demand for workers’ competences

    There is an increasing need to analyse the demand for skills in labour markets. While most studies aggregate skills into groups or use available proxies for them, the authors analyse companies’ demand for individual competences. Such an analysis better reflects reality, because companies usually require particular competences from prospective workers rather than generally defined groups of skills. However, no method exists for analysing on a large scale which competences employers require. At a detailed level there are hundreds of competences, so this demand cannot be measured in a sample survey. The authors propose a method for continuous and efficient analysis of the demand for new workers’ competences. The method is based on gathering internet job offers and analysing them with data-mining and text-analysis tools. They applied it to analyse transversal competences in the Polish labour market from November 2012 to December 2015, using the detailed European Commission classification of transversal competences. They found that within the general groups of competences, companies required only certain ones, especially ‘language and communication competences’, and neglected others. The companies’ requirements were countercyclical: they increased during recession and decreased during economic expansion. However, the structure of the demanded competences did not change over the analysed period, suggesting that it is relatively stable, at least over the business cycle. The method can be applied continuously, allowing various institutions to analyse and publish up-to-date information on the current demand for competences and tendencies in that demand.
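The core of the described method, detecting which individual competences job offers mention, can be sketched in miniature. The competence names and keyword patterns below are hypothetical placeholders; the actual study uses the European Commission classification of transversal competences and richer text-mining tools.

```python
import re
from collections import Counter

# Hypothetical mapping from a competence to keyword patterns that might
# signal it in job-offer text (assumed for illustration only).
COMPETENCE_PATTERNS = {
    "language and communication": r"communicat\w+|presentation|negotiat\w+",
    "teamwork": r"team\s?work|collaborat\w+",
    "problem solving": r"problem[- ]solving|analytical",
}

def count_competences(job_offers):
    """Count in how many offers each competence is mentioned at least once."""
    counts = Counter()
    for text in job_offers:
        lowered = text.lower()
        for name, pattern in COMPETENCE_PATTERNS.items():
            if re.search(pattern, lowered):
                counts[name] += 1
    return counts

offers = [
    "We need strong communication skills and teamwork.",
    "Analytical thinking and problem-solving required.",
    "Fluent negotiation and presentation abilities.",
]
demand = count_competences(offers)
print(demand["language and communication"])  # -> 2
```

Counting offers (rather than raw keyword hits) mirrors the idea of measuring demand as the share of vacancies requiring a given competence; a continuously running pipeline would simply re-run such counts over newly gathered offers.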

    Federated Similarity-Based Learning with Incomplete Data

    In the analysis of social, medical, and business issues, the problem of incomplete data often arises. In addition, in situations where privacy policies make it difficult to share data between organisations conducting related activities, it is necessary to exchange knowledge instead of data, that is, to use federated learning. In this scenario, several clients hold private data, and their models are improved through the aggregation of model components. Here, we propose a methodology for training local models that deal well with missing data, with an algorithm using similarity measures that account for the uncertainty present in many types of data, such as medical data. This paper therefore describes a federated learning model capable of processing imprecise and missing data. Federated learning is a technique that overcomes limitations resulting from data governance and privacy by training algorithms without exchanging the data itself. The performance of the proposed method is demonstrated using medical data on breast cancer cases. Results for different data-loss scenarios and corresponding measures of classification quality are presented and discussed.
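The federated setup described above can be illustrated with a deliberately simple stand-in: each client fits a similarity-based model (per-class centroids) on local data containing missing values, and the server aggregates model components rather than raw data. This is an assumed toy formulation, not the authors’ exact algorithm; the partial-distance similarity and synthetic data are placeholders.

```python
import numpy as np

def local_centroids(X, y, n_classes):
    """Client step: per-class feature means, ignoring missing (NaN) entries."""
    return np.array([np.nanmean(X[y == c], axis=0) for c in range(n_classes)])

def aggregate(client_models):
    """Server step: average the clients' centroid models (knowledge, not data)."""
    return np.mean(client_models, axis=0)

def predict(model, x):
    """Similarity-based classification: compare x to each centroid using only
    the features actually present in x (a simple partial-distance measure)."""
    mask = ~np.isnan(x)
    d = np.linalg.norm(model[:, mask] - x[mask], axis=1)
    return int(np.argmin(d))

rng = np.random.default_rng(0)
clients = []
for _ in range(3):
    X = rng.normal(size=(40, 4))
    y = rng.integers(0, 2, size=40)
    X[y == 1] += 2.0                       # separate class 1 from class 0
    X[rng.random(X.shape) < 0.1] = np.nan  # ~10% missing values per client
    clients.append(local_centroids(X, y, 2))

global_model = aggregate(clients)
print(predict(global_model, np.array([2.0, np.nan, 2.0, 2.0])))  # -> 1
```

The key property the sketch preserves is that only model components (the centroids) leave each client, and that both training and prediction tolerate incomplete records instead of discarding them.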

    Eye-tracking Data, Complex Networks and Rough Sets: an Attempt Toward Combining Them

    Eye-tracking sequences can be considered in terms of complex networks. On the basis of a complex-network representation of eye-tracking data, we define a measure, derived from rough set theory, for assessing the cohesion of saccade connections between object components identified in the visual stimuli used in eye-tracking experiments. The theoretical foundations given in the paper are supplemented with a numerical example explaining the proposed approach.
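One plausible reading of the approach can be sketched as follows: a scanpath over areas of interest (AOIs) induces a network whose edges are saccades, and a rough-set-style cohesion of a component set S compares saccades staying inside S (a lower-approximation analogue) with all saccades touching S (an upper-approximation analogue). This formulation, the AOI labels, and the ratio itself are assumptions for illustration, not the paper’s actual measure.

```python
def saccade_edges(scanpath):
    """Consecutive fixation pairs become undirected saccade edges
    (self-transitions within one AOI are not saccades)."""
    return [tuple(sorted(p)) for p in zip(scanpath, scanpath[1:]) if p[0] != p[1]]

def cohesion(scanpath, S):
    """Share of saccades touching S that stay entirely inside S."""
    edges = saccade_edges(scanpath)
    inside = [e for e in edges if e[0] in S and e[1] in S]   # lower-approx. analogue
    touching = [e for e in edges if e[0] in S or e[1] in S]  # upper-approx. analogue
    return len(inside) / len(touching) if touching else 0.0

# Hypothetical fixation sequence over AOIs A..D of a visual stimulus.
path = ["A", "B", "A", "C", "D", "C", "A"]
print(round(cohesion(path, {"A", "B", "C"}), 2))  # -> 0.67
```

A cohesion near 1 would indicate that gaze transitions among the chosen components rarely leave them, i.e. the components form a tightly connected region of the stimulus.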