International migration and labour turnover: workers’ agency in the construction sector of Russia and Italy
This article focuses on migrant workers' agency by exploring the relationship between working and employment conditions, on the one hand, and labour mobility, on the other. The study is based on qualitative research involving workers from Moldova and Ukraine employed in the Russian and Italian construction sectors. Fieldwork was carried out in Russia, Italy and Moldova to investigate informal networks, recruitment mechanisms and employment conditions and to establish their impact on migration processes. Overcoming methodological nationalism, this study recognises transnational spaces as the new terrain where antagonistic industrial relations are rearticulated. Labour turnover is posited as a key explanatory factor, understood not simply as the outcome of capital's recruitment strategies but also as an expression of workers' agency.
Labour mobility in construction: migrant workers’ strategies between integration and turnover
The construction industry has historically been characterised by high levels of labour mobility, favouring the recruitment of migrant labour. In the EU, migrant workers make up around 25% of overall employment in the sector, and similar if not higher figures are reported for Russia. The geo-political changes of the 1990s have had a substantial impact on migration flows, expanding the pool of labour recruitment within and from the post-socialist East but also changing the nature of migration. The rise of temporary employment has raised concerns about the weakness and isolation of migrant workers and the concomitant risk of abuse. Migrant workers, though, cannot be reduced to helpless victims of state policies and employers' recruitment strategies. The findings of the research presented here reveal how they meet the challenges of the international labour market, the harshness of debilitating working conditions and the difficult implications for their family life choices.
STAND: Sanitization Tool for ANomaly Detection
The efficacy of Anomaly Detection (AD) sensors depends heavily on the quality of the data used to train them. Artificial or contrived training data may not provide a realistic view of the deployment environment. Most realistic data sets are dirty; that is, they contain a number of attacks or anomalous events. The size of these high-quality training data sets makes manual removal or labeling of attack data infeasible. As a result, sensors trained on this data can miss attacks and their variations. We propose extending the training phase of AD sensors (in a manner agnostic to the underlying AD algorithm) to include a sanitization phase. This phase generates multiple models conditioned on small slices of the training data. We use these "micro-models" to produce provisional labels for each training input, and we combine the micro-models in a voting scheme to determine which parts of the training data may represent attacks. Our results suggest that this phase automatically and significantly improves the quality of unlabeled training data by making it as "attack-free" and "regular" as possible in the absence of absolute ground truth. We also show how a collaborative approach that combines models from different networks or domains can further refine the sanitization process to thwart targeted training or mimicry attacks against a single site.
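As a concrete illustration of the micro-model voting described above, the following is a minimal sketch rather than the STAND implementation: it assumes a generic detector interface with fit/is_anomalous methods (an assumption made here for illustration), trains one micro-model per slice of the unlabeled training stream, and keeps only the items that most other micro-models consider normal.

    # Minimal sketch of micro-model voting for training-data sanitization.
    # The AnomalyDetector protocol, the slice size and the 0.5 voting
    # threshold are illustrative assumptions, not the published system.
    from typing import Callable, List, Protocol, Sequence

    class AnomalyDetector(Protocol):
        def fit(self, data: Sequence[bytes]) -> None: ...
        def is_anomalous(self, item: bytes) -> bool: ...

    def sanitize(training_data: Sequence[bytes],
                 make_detector: Callable[[], AnomalyDetector],
                 slice_size: int = 1000,
                 vote_threshold: float = 0.5) -> List[bytes]:
        # Train one "micro-model" per small slice of the training stream.
        slices = [training_data[i:i + slice_size]
                  for i in range(0, len(training_data), slice_size)]
        micro_models = []
        for sl in slices:
            model = make_detector()   # AD-agnostic: any fit/is_anomalous detector
            model.fit(sl)
            micro_models.append(model)

        # Each micro-model gives every training item a provisional label;
        # keep the item only if a majority of the other models deem it normal.
        sanitized = []
        for idx, item in enumerate(training_data):
            owner = idx // slice_size   # a model does not vote on its own slice
            votes = [m.is_anomalous(item)
                     for j, m in enumerate(micro_models) if j != owner]
            if not votes or sum(votes) / len(votes) < vote_threshold:
                sanitized.append(item)
        return sanitized

A weighted voting scheme, or the collaborative exchange of micro-models across sites mentioned in the abstract, would slot into the second loop without changing the overall structure.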
Quotient probabilistic normed spaces and completeness results
Quotient spaces of probabilistic normed (PN) spaces have never been considered. This note is a first attempt to fill this gap: the quotient space of a PN space with respect to one of its subspaces is introduced and its properties are studied. Finally, we investigate the completeness relationships among the PN spaces considered.
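For orientation, the classical construction that this note generalises is the quotient norm on a normed space modulo a closed subspace, recalled below in LaTeX; the probabilistic analogue replaces the norm with a probabilistic norm, and its exact definition should be taken from the note itself.

    % Classical quotient norm generalised by the PN-space construction:
    % for a normed space (V, \|\cdot\|) and a closed subspace W,
    \[
      \| x + W \|_{V/W} \;=\; \inf_{w \in W} \| x - w \|, \qquad x \in V .
    \]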
Data Sanitization: Improving the Forensic Utility of Anomaly Detection Systems
Anomaly Detection (AD) sensors have become an invaluable tool for forensic analysis and intrusion detection. Unfortunately, the detection accuracy of all learning-based ADs depends heavily on the quality of the training data, which is often poor, severely degrading their reliability as a protection and forensic analysis tool. In this paper, we propose extending the training phase of an AD to include a sanitization phase that aims to improve the quality of unlabeled training data by making them as "attack-free" and "regular" as possible in the absence of absolute ground truth. Our proposed scheme is agnostic to the underlying AD, boosting its performance based solely on training-data sanitization. Our approach is to generate multiple AD models for content-based AD sensors trained on small slices of the training data. These AD "micro-models" are used to test the training data, producing alerts for each training input. We employ voting techniques to determine which of these training items are likely attacks. Our preliminary results show that sanitization increases 0-day attack detection while maintaining a low false positive rate, increasing confidence in the AD alerts. We perform an initial characterization of the performance of our system when we deploy sanitized versus unsanitized AD systems in combination with expensive host-based attack-detection systems. Finally, we provide some preliminary evidence that our system incurs only an initial modest cost, which can be amortized over time during online operation.
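The combination with an expensive host-based detector mentioned above can be pictured as a two-stage pipeline; the sketch below is purely illustrative, with hypothetical names and interfaces rather than the paper's actual deployment.

    # Hypothetical two-stage deployment: a cheap content-based AD screens
    # traffic and only flagged inputs reach an expensive host-based
    # detector. Names and interfaces are illustrative assumptions.
    from typing import Iterable, List, Protocol

    class Sensor(Protocol):
        def is_suspicious(self, item: bytes) -> bool: ...

    def two_stage_filter(traffic: Iterable[bytes],
                         content_ad: Sensor,
                         host_detector: Sensor) -> List[bytes]:
        confirmed = []
        for item in traffic:
            if content_ad.is_suspicious(item):         # cheap first pass
                if host_detector.is_suspicious(item):  # costly confirmation
                    confirmed.append(item)
        return confirmed

Swapping a sanitized first-stage model for an unsanitized one changes only which inputs are flagged, not the pipeline structure, which is the comparison the abstract describes.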
From STEM to SEAD: Speculative Execution for Automated Defense
Most computer defense systems crash the process that they protect as part of their response to an attack. In contrast, self-healing software recovers from an attack by automatically repairing the underlying vulnerability. Although recent research explores the feasibility of the basic concept, self-healing faces four major obstacles before it can protect legacy applications and COTS software. Besides the practical issues involved in applying the system to such software (e.g., not modifying source code), self-healing has encountered a number of problems: knowing when to engage, knowing how to repair, and handling communication with external entities. Our previous work on a self-healing system, STEM, left these challenges as future work. STEM provides self-healing by speculatively executing "slices" of a process. This paper improves STEM's capabilities along three lines: (1) applicability of the system to COTS software (STEM does not require source code, and it imposes a roughly 73% performance penalty on Apache's normal operation), (2) semantic correctness of the repair (we introduce virtual proxies and repair policy to assist the healing process), and (3) construction of a behavior profile based on aspects of data and control flow.
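The speculative execution idea behind STEM can be illustrated conceptually: snapshot state before running a "slice", roll back on failure, and let a repair policy supply an error-virtualization value instead of crashing. The hypothetical Python sketch below shows only that idea and is not a description of STEM itself, which instruments binaries without source code.

    # Conceptual sketch only: run a "slice" speculatively, roll back its
    # state changes on failure and return a policy-chosen value instead
    # of crashing. This illustrates the idea, not the STEM system.
    import copy
    from typing import Any, Callable, Dict

    def speculate(state: Dict[str, Any],
                  slice_fn: Callable[[Dict[str, Any]], Any],
                  repair_value: Any = None) -> Any:
        snapshot = copy.deepcopy(state)    # snapshot memory before the slice
        try:
            return slice_fn(state)         # execute the slice speculatively
        except Exception:
            state.clear()
            state.update(snapshot)         # undo the slice's side effects
            return repair_value            # simulated error return ("error virtualization")

A repair policy would choose repair_value per call site; virtual proxies for communication with external entities, as mentioned in the abstract, are outside this sketch.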
Data Sanitization: Improving the Forensic Utility of Anomaly Detection Systems
Anomaly Detection (AD) sensors have become an invaluable tool for forensic analysis and intrusion detection. Unfortunately, the detection performance of all learning-based ADs depends heavily on the quality of the training data. In this paper, we extend the training phase of an AD to include a sanitization phase. This phase significantly improves the quality of unlabeled training data by making them as "attack-free" as possible in the absence of absolute ground truth. Our approach is agnostic to the underlying AD, boosting its performance based solely on training-data sanitization. We generate multiple AD models for content-based AD sensors trained on small slices of the training data. These AD "micro-models" are used to test the training data, producing alerts for each training input. We employ voting techniques to determine which of these training items are likely attacks. Our preliminary results show that sanitization increases 0-day attack detection while in most cases reducing the false positive rate. We analyze the performance gains when we deploy sanitized versus unsanitized AD systems in combination with expensive host-based attack-detection systems. Finally, we show that our system incurs only an initial modest cost, which can be amortized over time during online operation.
Casting Out Demons: Sanitizing Training Data for Anomaly Sensors
The efficacy of anomaly detection (AD) sensors depends heavily on the quality of the data used to train them. Artificial or contrived training data may not provide a realistic view of the deployment environment. Most realistic data sets are dirty; that is, they contain a number of attacks or anomalous events. The size of these high-quality training data sets makes manual removal or labeling of attack data infeasible. As a result, sensors trained on this data can miss attacks and their variations. We propose extending the training phase of AD sensors (in a manner agnostic to the underlying AD algorithm) to include a sanitization phase. This phase generates multiple models conditioned on small slices of the training data. We use these "micro-models" to produce provisional labels for each training input, and we combine the micro-models in a voting scheme to determine which parts of the training data may represent attacks. Our results suggest that this phase automatically and significantly improves the quality of unlabeled training data by making it as "attack-free" and "regular" as possible in the absence of absolute ground truth. We also show how a collaborative approach that combines models from different networks or domains can further refine the sanitization process to thwart targeted training or mimicry attacks against a single site.