128 research outputs found

    Innovation, Workers Skills and Industrial Relations: Empirical Evidence from Firm-level Italian Data.

    The shift in labour demand towards relatively more skilled workers has been a prominent issue in economics for many years. A consolidated explanation for this upskilling phenomenon is that technological and organisational changes have driven labour demand, with detrimental consequences for less skilled workers (skill-biased technological-organisational change). To upgrade its workforce skills, a firm has at least two main channels at its disposal: external labour market strategies, mainly based on hiring and firing mechanisms, and internal labour market strategies, which improve the skill base of existing employees through training activities. The main objective of the present work is to examine the relations between innovative strategies and both workforce composition and training activities, within an integrated framework that also considers the role of specific aspects of the industrial relations system. The firm-level analysis is based on original datasets covering manufacturing firms in two Italian local production systems located in the Emilia-Romagna region. The results suggest that firms use both channels to improve their skill base, which is indeed related to innovation activities, although there is only weak evidence of the use of external labour markets to upgrade workforce skills: the upskilling phenomenon appears to be associated with specific innovative activities in the technological sphere, while specific organisational aspects emerge as detrimental for blue-collar workers. On the side of internal labour market strategies, the evidence supports the hypothesis that innovation intensity induces firms to implement internal procedures to upskill the workforce, confirming the importance of internal labour market strategies.
Moreover, we find that firm-level industrial relations play an important role in determining training activities for blue-collar workers.
Keywords: technological change; organisational change; industrial relations; skills

    End-to-End Entanglement Generation Strategies: Capacity Bounds and Impact on Quantum Key Distribution

    A first quantum revolution has already brought quantum technologies into our everyday life for decades: electronics and optics are based on quantum mechanical principles. Today, a second quantum revolution is underway, leveraging the quantum principles of superposition, entanglement and measurement, which have not yet been fully exploited. International innovation activities and standardization bodies have identified four main application areas for quantum technologies and services: quantum secure communications, quantum computing, quantum simulation, and quantum sensing and metrology. This paper focuses on quantum secure communications, addressing the evolution of Quantum Key Distribution (QKD) networks (under early exploitation today) towards quantum-ready networks and the Quantum Internet, based also on entanglement distribution. While the management and control of quantum nodes is a key challenge still under definition, a main obstacle today in exploiting long-range QKD and quantum-ready networks is the inherent loss of the optical transmission channels. Currently, the most promising way to overcome this limitation, while avoiding costly trusted nodes, is to distribute entangled states by means of Quantum Repeaters. In this respect, the paper provides an overview of current methods and systems for end-to-end entanglement generation, with some simulations and a discussion of capacity upper bounds and their impact on the secret key rate in QKD systems.
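    As an illustrative aside not drawn from the abstract itself: a well-known capacity upper bound in this setting is the repeaterless PLOB bound, which limits the secret-key rate over a pure-loss optical channel of transmissivity η to -log2(1-η) bits per channel use. The following minimal sketch, assuming a standard 0.2 dB/km fiber attenuation figure, shows how quickly this bound decays with distance and hence why entanglement distribution via repeaters is attractive:

```python
import math

def channel_transmissivity(length_km, attenuation_db_per_km=0.2):
    """Transmissivity (eta) of an optical fiber of the given length."""
    return 10 ** (-attenuation_db_per_km * length_km / 10)

def plob_bound(eta):
    """Repeaterless secret-key capacity upper bound, -log2(1 - eta),
    in bits per channel use (PLOB bound for a pure-loss channel)."""
    if eta >= 1:
        return float("inf")
    return -math.log2(1 - eta)

# Print the bound for a few representative fiber lengths.
for length in (10, 50, 100, 200):
    eta = channel_transmissivity(length)
    print(f"{length:>3} km: eta = {eta:.3e}, key rate <= {plob_bound(eta):.3e} bit/use")
```

At long distances the bound scales roughly as η/ln 2, which motivates the repeater-based architectures the paper surveys.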

    Some Controversial Opinions on Software-Defined Data Plane Services

    Several recent proposals, namely Software Defined Networks (SDN), Network Functions Virtualization (NFV) and Network Service Chaining (NSC), aim to transform the network into a programmable platform, focusing respectively on the control plane (SDN) and on the data plane (NFV/NSC). This paper follows the same line as the NFV/NSC proposals but with a longer-term horizon, presenting considerations on some controversial aspects that arise when considering the programmability of the data plane. In particular, it discusses the relevance of data plane versus control plane services, the importance of the hardware platform, and the need to standardize northbound and southbound interfaces in future software-defined data plane services.

    Fabrication and Characterization of MWCNT-TiO2 Hybrid Nanocomposites Using Microwave Heating

    Research combining two nanomaterials has been widely conducted with the aim of enhancing the potential advantages of those materials and extending their functional range so that they can be developed into products. Here, titanium dioxide (TiO2), a readily available and non-toxic material, was combined with carbon nanotubes (CNTs). A multiwalled carbon nanotube–titanium dioxide (MWCNT-TiO2) hybrid nanocomposite was synthesized from a mixture of a functionalized MWCNT solution and TTIP (titanium(IV) isopropoxide) as the TiO2 precursor, using microwave heating as a simple and efficient method. The synthesis was carried out with varying MWCNT-to-TTIP mass ratios (1:2, 1:4, 1:8 and 1:16) and specific microwave-heating durations. The hybrid nanocomposites were characterized by X-Ray Diffraction (XRD), Scanning Electron Microscopy (SEM) and Transmission Electron Microscopy (TEM). The results show that TiO2 begins to adhere completely to the MWCNTs at an MWCNT:TTIP ratio of 1:8 with 6 minutes of microwave heating; SEM imaging shows the outer walls of the MWCNTs covered by TiO2, with measured diameters of about 87 nm at the 1:2 ratio, 94 nm at 1:4, 110 nm at 1:8 and 140 nm at 1:16. The observed morphology was confirmed by TEM and SAED (Selected Area Electron Diffraction) characterization at MWCNT:TTIP ratios of 1:4, 1:8 and 1:16, with TEM analysis of functionalized MWCNTs also performed as the SAED diffraction standard.
At the 1:4 ratio, the TEM results show considerable agglomeration in the analyzed area, with only a small fraction of the MWCNT outer walls covered by TiO2 (average TiO2 diameter 8.8 nm); at the 1:8 ratio there is agglomeration together with TiO2 particles attached to the outer walls of the MWCNTs (TiO2 diameter about 9 nm); at the 1:16 ratio the MWCNT outer walls are completely covered by TiO2 (diameter about 10 nm). The XRD measurements show that at MWCNT:TTIP ratios of 1:2 and 1:4 the TiO2 formed is in the brookite phase, and that increasing the TTIP mass to ratios of 1:8 and 1:16 transforms the TiO2 into the anatase phase. This is confirmed by the SAED measurements, which show that the TiO2 is still in the brookite phase at the 1:4 and 1:8 ratios and has transformed into the anatase phase at the 1:16 ratio.

    RAYGO: Reserve As You GO

    The capability to predict the precise resource requirements of a microservice-based application is an important problem for cloud services. The allocation of abundant resources guarantees an excellent quality of experience (QoE) for the hosted services, but it can translate into unnecessary costs for the cloud customer due to reserved but unused resources. On the other hand, under-provisioning may result in poor performance during an unexpected peak of demand. This paper proposes RAYGO, a novel approach for dynamic resource provisioning to microservices in Kubernetes that (i) relieves customers from the definition of appropriate execution boundaries, (ii) ensures the right amount of resources at any time, according to past and predicted usage, and (iii) operates at the application level, acknowledging the dependencies between multiple correlated microservices.
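    To make the trade-off concrete, a hedged sketch of the general idea behind usage-driven right-sizing follows. This is not RAYGO's actual algorithm (which the paper defines); it simply illustrates the common pattern of reserving a high quantile of observed consumption plus a safety margin, so the reservation tracks real demand instead of a static, hand-picked limit:

```python
def recommend_cpu_millicores(usage_samples, quantile=0.95, safety_margin=1.15):
    """Hypothetical right-sizing sketch (not the RAYGO algorithm):
    recommend a CPU reservation, in millicores, as a high quantile of
    past usage samples scaled by a safety margin."""
    if not usage_samples:
        raise ValueError("need at least one usage sample")
    ordered = sorted(usage_samples)
    # Index of the requested quantile in the sorted samples.
    idx = min(int(quantile * len(ordered)), len(ordered) - 1)
    return int(ordered[idx] * safety_margin)

# Ten hypothetical per-minute CPU usage samples (millicores), with one spike.
samples = [120, 135, 150, 140, 400, 160, 155, 148, 152, 145]
print(recommend_cpu_millicores(samples))
```

A static reservation sized for the spike would waste resources most of the time, while one sized for the median would throttle the spike; periodically re-running a recommender like this is the middle ground that dynamic provisioning automates.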

    Quantum Internet Protocol Stack: a Comprehensive Survey

    The classical Internet has evolved exceptionally during the last five decades, from a network comprising a few static nodes in the early days to a leviathan interconnecting billions of devices. This has been made possible by the separation-of-concerns principle, whereby the network functionalities are organized as a stack of layers, each providing some communication functionality through specific network protocols. In this survey, we aim to highlight the impossibility of adapting the classical Internet protocol stack to the Quantum Internet, owing to the peculiarities of quantum mechanics. Indeed, the design of the Quantum Internet requires a major paradigm shift of the whole protocol stack in order to harness quantum entanglement and quantum information. In this context, we first overview the relevant literature on the Quantum Internet protocol stack. Then, stemming from this, we shed light on the open problems and the efforts required towards the design of an effective and complete Quantum Internet protocol stack. To the best of the authors' knowledge, a survey of this type is the first of its kind. What emerges from this analysis is that the Quantum Internet, though still in its infancy, is a disruptive technology whose design requires an interdisciplinary effort at the border between quantum physics, computer and telecommunications engineering.

    Computing Without Borders: The Way Towards Liquid Computing

    Despite the de-facto technological uniformity fostered by the cloud and edge computing paradigms, resource fragmentation across isolated clusters hinders dynamism in application placement, leading to suboptimal performance and operational complexity. Building upon and extending these paradigms, we propose a novel approach, called liquid computing, that envisions a transparent continuum of resources and services on top of the underlying fragmented infrastructure. Fully decentralized, multi-ownership-oriented and intent-driven, it enables an overarching abstraction for improved application execution, while at the same time opening up new scenarios, including resource sharing and brokering. Following the above vision, we present liqo, an open-source project that materializes this approach through the creation of dynamic and seamless Kubernetes multi-cluster topologies. Extensive experimental evaluations have shown its effectiveness in different contexts, both in terms of Kubernetes overhead and in comparison with other open-source alternatives.