420 research outputs found

    Property rights, transaction costs, and the limits of the market

    To clarify the determinants and interaction of property rights and transaction costs, I study the design of property rights over either a good whose consensual transfer entails a transaction inefficiency, or an upstream firm’s input whose random cost is nonverifiable and ex ante non-contractible. More dispersed traders’ valuations and larger odds that the upstream party can appropriate the quasi-rent induced by contract incompleteness produce more severe transaction inefficiencies and larger incomplete-contracting costs, respectively. Larger transaction costs, in turn, induce weaker property rights because of the trade-off between inefficient exclusion from trade/innovation and expropriation. These implications survive when some transactors have more political influence on institutional design, or when I consider the disincentive effect of weak property rights. Furthermore, they are consistent with the interplay among proxies for the availability of technological progress, the severity of transaction costs, and the strength of property rights for 139 countries observed between 2006 and 2018.

    Quality of service, security and trustworthiness for network slices

    (English) Telecommunication systems are becoming much more intelligent and dynamic due to the expansion of multiple network types (i.e., wired, wireless, Internet of Things (IoT), and cloud-based networks). Because of this network variety, the old model of designing a specific network for a single purpose, and hence the coexistence of multiple distinct control systems, is evolving towards a new model in which a more unified control system offers a wide range of services for multiple purposes with different requirements and characteristics. To achieve this, networks have become more digital and virtual thanks to Software-Defined Networking (SDN) and Network Function Virtualization (NFV). Network Slicing takes the strengths of these two technologies and allows network control systems to improve their performance, as services may be deployed, and their interconnection configured, across multiple transport domains using NFV/SDN tools such as NFV Orchestrators (NFV-O) and SDN Controllers. The main objective of this thesis is to contribute to the state of the art of Network Slicing, with a special focus on security aspects of the architectures and processes to deploy, monitor, and enforce secured and trusted resources to compose network slices. This document is structured in eight chapters: Chapter 1 provides the motivation and objectives of this thesis, describing where it contributes and what it set out to study, evaluate, and research. Chapter 2 presents the background necessary to understand the following chapters.
This chapter presents a state of the art with three clear sections: 1) the key technologies necessary to create network slices; 2) an overview of the relationship between Service Level Agreements (SLAs) and network slices, with a specific view on Security Service Level Agreements (SSLAs); and 3) the literature on distributed architectures and systems and the use of abstraction models to generate trust and security and avoid management centralization. Chapter 3 introduces the research on Network Slicing: first, the creation of network slices using resources placed in multiple computing and transport domains; then, how the use of multiple virtualization technologies enables more efficient network slice deployments and where each technology fits best to achieve the performance improvements. Chapter 4 presents the research on the management of network slices and the definition of SLAs and SSLAs to specify the service and security requirements needed to achieve the expected QoS and the right security level. Chapter 5 studies the possibility of shifting, to a certain extent, the trend of centralising control and management architectures towards a distributed design. Chapter 6 focuses on the generation of trust among service resource providers: it first describes how the concept of trust is mapped into an analytical system, and then how trust management between providers and clients is done in a transparent and fair way. Chapter 7 is devoted to the dissemination of results and presents the set of scientific publications produced in the form of journals, international conferences, and collaborations.
Chapter 8 concludes the work and outcomes previously presented and outlines possible future research. (Catalan abstract, translated:) Telecommunication systems are becoming much more intelligent and dynamic due to the expansion of multiple network types (i.e., wired and wireless networks, the Internet of Things (IoT), and cloud-based networks). Given this variety of scenarios, the old model of designing a network for a single purpose, and hence the coexistence of several distinct control systems, is evolving towards a new model that seeks a more unified control system capable of offering a wide range of services with different purposes, requirements, and characteristics. To reach this new situation, networks have had to change and become more digitalized and virtualized through Software-Defined Networking (SDN) and Network Function Virtualization (NFV). Network Slicing uses the strengths of these two technologies and allows network control systems to improve their performance, since services can be deployed, and their interconnection across multiple transport domains configured, using NFV/SDN tools such as NFV orchestrators (NFV-O) and SDN controllers. The main objective of this thesis is to contribute, in several respects, to the current literature on network slices, focusing on security aspects of the architectures and processes needed to deploy, monitor, and enforce secure and reliable resources to build network slices. The document is divided into eight chapters: Chapter 1 introduces the main topic, the motivation for studying it, and the objectives set at the start of the doctoral studies.
Chapter 2 gathers elements and examples from the current literature to present the basic concepts related to NFV, SDN, and Network Slicing. Chapter 3 introduces the reader to the tasks and results obtained regarding the use of network slices in scenarios with multiple transport domains, and subsequently the creation and management of hybrid network slices that use different virtualization technologies. Chapter 4 focuses on the use of monitoring tools to evaluate and ensure that the expected levels of quality of service, and above all quality of security, of the deployed network slices are met; to this end, it studies the use of Service Level Agreements and Security Service Level Agreements. Chapter 5 studies the possibility of changing the architectural model so as not to keep centralising the management of all domains in a single element; it presents the work done on using Blockchain as a tool to move multi-domain resource management towards a cooperative and transparent model across domains. Chapter 6 follows the path started in the previous chapter and presents a scenario in which, besides multiple domains, there are also multiple providers offering the same service (multi-stakeholder). In this case, the objective of the Blockchain becomes the generation, management, and distribution of reputation parameters that define a trust level associated with each provider, so that when a client wants to request a service, they can see which providers are more reliable and in which aspects they have a better reputation. Chapter 7 presents the dissemination activities carried out throughout the thesis. Chapter 8 closes the thesis with the final conclusions.
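The reputation mechanism described for Chapter 6 can be illustrated with a small sketch. This is a hypothetical toy, not the thesis design: the function name, the recency weighting, and the decay factor are all invented for illustration; the idea shown is only that client ratings map to a per-provider trust score in which recent behaviour weighs more.

```python
# Hypothetical sketch: map a provider's client ratings (each in [0, 1]) to a
# trust score, weighting recent ratings more heavily so reputation reacts to
# behaviour changes. The exponential decay is an illustrative choice.

def trust_score(ratings, decay=0.8):
    """ratings: scores ordered oldest to newest; returns a weighted average."""
    if not ratings:
        return 0.0
    # Oldest rating gets weight decay^(n-1), newest gets weight 1.
    weights = [decay ** (len(ratings) - 1 - i) for i in range(len(ratings))]
    return sum(w * r for w, r in zip(weights, ratings)) / sum(weights)
```

With this weighting, a provider whose recent service improved scores higher than one whose service degraded, even if both have the same set of ratings overall.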

    Jornadas Nacionales de Investigación en Ciberseguridad: actas de las VIII Jornadas Nacionales de Investigación en ciberseguridad: Vigo, 21 a 23 de junio de 2023

    Jornadas Nacionales de Investigación en Ciberseguridad (8ª. 2023. Vigo). atlanTTic; AMTEGA: Axencia para a modernización tecnolóxica de Galicia; INCIBE: Instituto Nacional de Ciberseguridad.

    Distributed Spatial Data Sharing: a new era in sharing spatial data

    Advances in information and communications technology, including the widespread adoption of GPS-based sensors, improvements in computational data processing, and satellite imagery, have resulted in new data sources, stakeholders, and methods of producing, using, and sharing spatial data. Daily, vast amounts of data are produced by individuals interacting with digital content and through automated and semi-automated sensors deployed across the environment. A growing portion of this information contains geographic information directly or indirectly embedded within it. The widespread use of automated smart sensors and the increased variety of georeferenced media have created new individual data collectors, raising a new set of social concerns around individual geoprivacy and data ownership. These changes require new approaches to managing, sharing, and processing geographic data. With the appearance of distributed data-sharing technologies, some of these challenges may be addressed by moving from centralized control and ownership of data to a more distributed system, in which individuals are responsible for gathering data, controlling access to it, and storing it. Stepping into this new era of distributed spatial data sharing requires preparation, including developing tools and algorithms to work efficiently with spatial data in the new environment. Peer-to-peer (P2P) networks have become very popular for storing and sharing information in a decentralized way; however, they lack methods to process spatio-temporal queries. In the first chapter of this research, we propose a new spatio-temporal multi-level tree structure, the Distributed Spatio-Temporal Tree (DSTree), which aims to address this problem. The DSTree is capable of performing a range of spatio-temporal queries.
We also propose a framework that uses blockchain to share a DSTree on the distributed network, so that each user can replicate, query, or update it. Next, we propose a dynamic k-anonymity algorithm to address geoprivacy concerns on distributed platforms. Individual dynamic control of geoprivacy is one of the primary purposes of the framework introduced in this research. Sharing data within and between organizations can be enhanced by the greater trust and transparency offered by distributed or decentralized technologies. Rather than depending on a central authority to manage geographic data, a decentralized framework provides fine-grained and transparent sharing: users can control the precision of the spatial data they share with others, are not limited to third-party algorithms deciding their privacy level, and are not restricted to binary location sharing. As mentioned earlier, individuals and communities can benefit from distributed spatial data sharing. In the last chapter of this work, we develop an image-sharing platform, the harvester safety application, for the Kakisa Indigenous community in northern Canada. In this project, we investigate the potential of a Distributed Spatial Data Sharing (DSDS) infrastructure for small-scale data-sharing needs in Indigenous communities. We explore potential use cases and challenges and propose a DSDS architecture that allows users in small communities to share and query their data using DSDS. Given the current availability of distributed tools, the sustainable development of such applications needs accessible technology: easy-to-use tools for applying distributed technologies to community-scale spatial data sharing. In conclusion, distributed technology is in its early stages and requires easy-to-use tools, methods, and algorithms to handle, share, and query geographic information.
Once developed, it will be possible to compare DSDS against other data systems and thereby evaluate the practical benefit of such systems. A distributed data-sharing platform needs a standard framework to share data between different entities; just as in the first decades of the web, these tools need regulations and standards. Such standards can benefit individuals and small communities in the current chaotic spatial data-sharing environment controlled by central bodies.
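The k-anonymity idea mentioned above can be sketched in a few lines. This is not the thesis's algorithm; it is a minimal illustration, with invented function names, of one common approach: coarsening coordinates until every shared grid cell contains at least k users, so no individual location is distinguishable within its cell.

```python
# Minimal sketch of location k-anonymity by spatial generalization (an
# illustration of the general technique, not the dissertation's algorithm):
# round coordinates to coarser grid cells until every cell holds >= k points.

from collections import Counter

def generalize(point, precision):
    """Round a (lat, lon) pair to `precision` decimal places (coarser cell)."""
    lat, lon = point
    return (round(lat, precision), round(lon, precision))

def k_anonymize(points, k, max_precision=5):
    """Return (precision, generalized points) for the finest precision at
    which every occupied cell contains at least k of the given points."""
    for precision in range(max_precision, -1, -1):
        cells = Counter(generalize(p, precision) for p in points)
        if all(count >= k for count in cells.values()):
            return precision, [generalize(p, precision) for p in points]
    # Fall back to the coarsest cells tried.
    return 0, [generalize(p, 0) for p in points]
```

A "dynamic" variant, as described in the abstract, would let each user choose k and re-run this generalization per request instead of fixing one global precision.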

    Nuevos modelos de plataformas descentralizadas basados en tecnología blockchain.

    203 p. (Spanish abstract, translated:) This doctoral thesis aims to evaluate the influence of blockchain technology on the main economic sectors and on platform business models. The most demanded business values have been identified, and corporate investment in blockchain has been analysed. In addition, a taxonomy of decentralized platform models has been developed, a representative sample of these platforms has been identified and evaluated, and, finally, a classification into three archetypes of decentralized platforms has been proposed: hosted, federated, and shared. The results of the research reveal that blockchain technology is being adopted globally across all sectors, generating a significant impact on industry. Multiple benefits of this technology have been identified in different areas, and detailed information has been provided on blockchain investment across companies and sectors. The thesis is a valuable contribution to the literature, since it provides real data on blockchain investment and characterizes the emerging models of decentralized platforms. These results can help entrepreneurs better understand the possibilities of blockchain technology and design innovative and sustainable business models.

    A Segmentation Study of Digital Pirates and Understanding the Effectiveness of Targeted Anti-Piracy Communication

    The objective of this study is to improve the effectiveness of anti-piracy educational strategies by identifying unique digital-pirate segments and delivering personalized campaign messages to the target audiences. In the first study, we introduce a segmentation of digital pirates based on the different types of risk involved in pirating activities. We identify four segments (anti-pirates, hard-core pirates, performance-sensitive pirates, and finance-sensitive pirates), each demonstrating distinctive characteristics. Further profiling of the segments revealed different risk perceptions by gender and piracy experience. In the second study, we conduct an experiment to test the effects of targeted campaign messages on the newly identified segments. Our results show that targeted piracy campaign messages are significantly more persuasive and damage the attitude towards piracy. However, we found that they have only a marginal effect on changing the intention to pirate. Findings from this study offer useful implications for the design and implementation of anti-piracy educational campaigns. This article was published Open Access through the CCU Libraries Open Access Publishing Fund. The article was first published in the Journal of Theoretical and Applied Electronic Commerce Research: https://doi.org/10.3390/jtaer1803007

    A novel flow-based statistical pattern recognition architecture to detect and classify pivot attacks

    A thesis submitted in partial fulfilment of the requirements of the University of Wolverhampton for the degree of Doctor of Philosophy. Pivoting is a well-known technique used by threat actors to cover their tracks and overcome connectivity restrictions imposed by network defences or topology. Detecting ongoing pivot attacks before the opponent achieves their goals is therefore essential for a solid defence strategy. However, recognising and classifying this technique in large corporate networks is a complex task. The literature presents limited studies of pivot attacks, and mitigation strategies have severe constraints to date; for example, related work still focuses on protocol-specific restriction techniques scoped to internal network assets only. This approach is insufficient, since opponents commonly create pivot tunnels across the internet. This thesis introduces and evaluates APIVADS, a novel flow-based detection scheme to identify compromised assets supporting pivot attacks. APIVADS outperforms previous approaches in features and capabilities. To the best of our knowledge, it is the first protocol- and cryptographic-primitive-agnostic, privacy-preserving approach capable of detecting pivot attacks over the internet. For example, its efficient data-reduction technique can achieve near-real-time detection accuracy of 99.37% by distinguishing ongoing pivot attacks from regular enterprise traffic such as TLS, HTTPS, DNS, and P2P over the internet. Additionally, this thesis proposes APCA, an automatic pivot-attack classifier algorithm based on perceived indicators of attack (IoA) generated by APIVADS, to determine the level of connectivity achieved by the adversary. APCA can distinguish between different types of pivoting and contributes to threat-intelligence capabilities regarding the adversary's modus operandi.
The architecture composed of APIVADS and APCA takes a hybrid approach, combining decentralised host-based pivot detection with a centralised component that aggregates results to achieve scalability. Empirical results from our experiments show that even when the adversary uses evasive pivoting techniques, the proposed architecture is efficient and feasible for detection and classification, achieving a high accuracy of 98.54% with low false positives.
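The core intuition behind flow-based pivot detection can be shown with a toy example. This is not APIVADS; it is an invented sketch of the general flow-correlation idea: a host relaying traffic pairs an inbound flow with an outbound flow to a different peer that starts almost simultaneously and carries a similar byte volume.

```python
# Toy illustration of flow correlation for pivot detection (not APIVADS):
# flag inbound/outbound flow pairs on one host that look like relayed traffic.

def find_pivot_pairs(flows, max_skew=1.0, volume_ratio=0.9):
    """flows: dicts with keys direction ('in'/'out'), peer, start (seconds),
    and bytes. Returns (inbound, outbound) pairs consistent with relaying."""
    inbound = [f for f in flows if f["direction"] == "in"]
    outbound = [f for f in flows if f["direction"] == "out"]
    pairs = []
    for fin in inbound:
        for fout in outbound:
            if fin["peer"] == fout["peer"]:
                continue  # same peer: ordinary request/response, not a relay
            close_in_time = abs(fin["start"] - fout["start"]) <= max_skew
            lo, hi = sorted((fin["bytes"], fout["bytes"]))
            similar_volume = hi > 0 and lo / hi >= volume_ratio
            if close_in_time and similar_volume:
                pairs.append((fin, fout))
    return pairs
```

A real detector would aggregate such evidence over many flows and time windows, as the thesis's hybrid architecture does across hosts, rather than flag single pairs.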

    Measuring knowledge sharing processes through social network analysis within construction organisations

    The construction industry is knowledge-intensive and information-dependent. Organisations risk losing valuable knowledge when employees leave. Therefore, construction organisations need to nurture opportunities to disseminate knowledge by strengthening knowledge-sharing networks. This study aimed to evaluate formal and informal knowledge-sharing methods in social networks within Australian construction organisations and to identify how knowledge sharing could be improved. Data were collected from two estimating teams in two case studies. The data, collected through semi-structured interviews, were analysed using UCINET, a Social Network Analysis (SNA) tool, and SNA measures. The findings revealed that one case study contained influencers, while the other demonstrated an optimal knowledge-sharing structure in both formal and informal knowledge-sharing methods. Social networks can vary based on the organisation as well as individuals’ behaviour. Identifying networks with specific issues and taking steps to strengthen them will enable optimal knowledge-sharing processes. This research offers good knowledge-sharing practices for construction organisations to optimise their knowledge-sharing processes.
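One of the SNA measures behind findings like "one case study contained influencers" is in-degree centrality: the fraction of colleagues who turn to a given person for knowledge. A minimal sketch, with hypothetical names and edges rather than the case-study data:

```python
# Minimal SNA sketch: in-degree centrality over a directed knowledge-sharing
# network. An edge (asker, source) means `asker` seeks knowledge from `source`;
# a score near 1.0 marks an "influencer" most of the team depends on.

def in_degree_centrality(edges, nodes):
    """Return each node's in-degree divided by (n - 1), as in standard SNA."""
    degree = {n: 0 for n in nodes}
    for _asker, source in edges:
        degree[source] += 1
    return {n: degree[n] / (len(nodes) - 1) for n in nodes}
```

In a tool like UCINET the same measure, computed over interview-derived ties, highlights people whose departure would fragment the knowledge-sharing network.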

    Cloud-edge hybrid applications

    Many modern applications are designed to provide interactions among users, including multi-user games, social networks, and collaborative tools. Users expect application response times in the order of milliseconds, to foster interaction and interactivity. The design of these applications typically adopts a client-server model, where all interactions are mediated by a centralized component. This approach introduces availability and fault-tolerance issues, which can be mitigated by replicating the server component, and even by relying on geo-replicated solutions in cloud computing infrastructures. Even then, the client-server communication model leads to unnecessary latency penalties for geographically close clients and high operational costs for the application provider. This dissertation proposes a cloud-edge hybrid model with secure and efficient propagation and consistency mechanisms. This model combines client-side replication and client-to-client propagation to provide low latency and minimize the dependency on the server infrastructure, fostering availability and fault tolerance. To realize this model, this work makes the following key contributions. First, the cloud-edge hybrid model is materialized by a system design where clients maintain replicas of the data and synchronize in a peer-to-peer fashion, and servers are used to assist clients' operation. We study how to bring most of the application logic to the client side, using the centralized service primarily for durability, access control, discovery, and overcoming internetwork limitations. Second, we define protocols for weakly consistent data replication, including a novel CRDT model (∆-CRDTs). We provide a study of partial replication, exploring the challenges and fundamental limitations in providing causal consistency, and the difficulty in supporting client-side replicas due to their ephemeral nature.
Third, we study how client misbehaviour can impact the guarantees of causal consistency. We propose new secure weak-consistency models for insecure settings, and algorithms to enforce such consistency models. The experimental evaluation of our contributions has shown their specific benefits and limitations compared with the state of the art. In general, the cloud-edge hybrid model leads to faster application response times, lower client-to-client latency, higher system scalability (as fewer clients need to connect to servers at the same time), the possibility to work offline or disconnected from the server, and reduced server bandwidth usage. In summary, we propose a hybrid of cloud and edge which provides lower user-to-user latency, availability under server disconnections, and improved server scalability, while being efficient, reliable, and secure. (Portuguese abstract, translated:) Many modern applications are created to provide interactions among users, including multi-user games, social networks, and collaborative tools. Users expect application response times in the order of milliseconds, promoting interaction and interactivity. The architecture of these applications normally adopts a client-server model, where all interactions are mediated by a centralized component. This approach presents availability and fault-tolerance problems, which can be mitigated by replicating the server component, even using geo-replicated solutions in cloud computing infrastructures. Even in this case, the client-server communication model leads to unnecessary latency penalties for geographically close clients and high operational costs for the application provider. This dissertation proposes a cloud-edge hybrid model with secure and efficient propagation and consistency mechanisms.
This model combines client-side replication and client-to-client propagation to provide low latency and minimize dependency on the server infrastructure, promoting availability and fault tolerance. To realize this model, this work makes the following main contributions. First, the cloud-edge hybrid model is materialized by a system architecture in which clients maintain replicas of the data and synchronize peer-to-peer, and where servers are used to assist client operation. We study how to bring most of the application logic to the client side, using the centralized service mainly for durability, access control, discovery, and overcoming inter-network limitations. Second, we define protocols for weakly consistent data replication, including a new CRDT model (∆-CRDTs). We provide a study of partial replication, exploring the fundamental challenges and limitations of providing causal consistency and the difficulty of supporting client-side replicas due to their ephemeral nature. Third, we study how client misbehaviour can affect the guarantees of causal consistency. We propose new secure weak-consistency models for insecure settings and algorithms to enforce such consistency models. The experimental evaluation of our contributions showed their benefits and limitations compared with the state of the art. In general, the cloud-edge hybrid model leads to faster application response times, lower client-to-client latency, and the possibility of working offline or disconnected from the server. Additionally, we obtain greater system scalability, since fewer clients need to be connected to the servers at the same time, and thanks to the reduced use of server bandwidth.
In summary, we propose a hybrid model between the edge and the cloud that provides lower latency between users, availability during server disconnections, and better server scalability, while being efficient, reliable, and secure.
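The delta-state idea behind ∆-CRDTs can be illustrated with the simplest CRDT, a grow-only counter. This sketch shows only the general delta-state technique, not the dissertation's specific model: each local update returns a small delta that peers merge by pointwise maximum, so deltas can replace full-state shipping and merges stay idempotent and commutative.

```python
# Sketch of a delta-state grow-only counter (illustrating the general
# delta-CRDT technique): updates yield small deltas to propagate, and
# merge is a join (pointwise max), so re-delivered deltas are harmless.

class DeltaGCounter:
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.state = {}  # replica_id -> that replica's increment count

    def increment(self):
        """Apply locally and return the delta to send to peers."""
        self.state[self.replica_id] = self.state.get(self.replica_id, 0) + 1
        return {self.replica_id: self.state[self.replica_id]}

    def merge(self, delta):
        """Join with a received delta (or a full state): pointwise maximum."""
        for rid, count in delta.items():
            self.state[rid] = max(self.state.get(rid, 0), count)

    def value(self):
        return sum(self.state.values())
```

Because merge is idempotent, the client-to-client propagation described above can re-deliver or reorder deltas without corrupting replicas, which is what makes ephemeral client-side replicas workable.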