
    Trust-empowered, IoT-driven legitimate data offloading

    In an IoT environment deployed on top of fog and/or cloud nodes, offloading data between nodes is a common practice that aims at lessening the burden on these nodes and, hence, at meeting real-time processing requirements. Existing initiatives emphasize “when to offload” and “where to offload” using criteria like resource constraints, load balancing, and data safety during transfer. However, there is limited emphasis on the trustworthiness of the nodes that will accept the offloaded data, which puts these data at risk of misuse. To address this gap, this paper advocates trust as a decision criterion for identifying the appropriate nodes for hosting the offloaded data. A trust model is designed and then developed considering factors like legitimacy, quality-of-service, and quality-of-experience. A system demonstrating the technical feasibility of the trust model is presented in the paper as well.
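
    The abstract does not spell out how the three factors are combined; the sketch below is one plausible reading in which a weighted sum yields a trust score and only nodes above a threshold are considered for hosting offloaded data. The weights, the 0-to-1 factor scales, and the threshold are illustrative assumptions, not the paper's actual model.

        from dataclasses import dataclass

        @dataclass
        class NodeObservation:
            legitimacy: float   # 0..1, e.g. fraction of verified credentials/attestations
            qos: float          # 0..1, normalized quality-of-service (latency, availability)
            qoe: float          # 0..1, normalized quality-of-experience from past users

        # Illustrative weights; the paper's actual model may differ.
        WEIGHTS = {"legitimacy": 0.5, "qos": 0.3, "qoe": 0.2}

        def trust_score(obs: NodeObservation) -> float:
            """Combine the three factors into a single trust value in [0, 1]."""
            return (WEIGHTS["legitimacy"] * obs.legitimacy
                    + WEIGHTS["qos"] * obs.qos
                    + WEIGHTS["qoe"] * obs.qoe)

        def select_offload_targets(nodes: dict[str, NodeObservation],
                                   threshold: float = 0.7) -> list[str]:
            """Keep only nodes trusted enough to host offloaded data, best first."""
            trusted = {name: trust_score(obs) for name, obs in nodes.items()}
            return sorted((n for n, s in trusted.items() if s >= threshold),
                          key=trusted.get, reverse=True)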

    Privacy and confidentiality issues in cloud computing architectures

    Cloud computing is a computing paradigm in which organizations can store their data remotely in the cloud (Internet) and access applications, services and infrastructure on demand from a shared pool of computing resources. Cloud technologies have proven a major commercial success over recent years, since the appearance of products and cloud offerings like Amazon EC2 and Microsoft Azure. According to Gartner, Cloud Computing will play a large part in the ICT (Information and Communication Technologies) domain over the next 10 years or more, since it provides cost savings to enterprises thanks to virtualization technologies, opening gates for new business opportunities as well. However, Cloud Computing has to face several challenges and issues. Storing and processing data outside the boundaries of your company raises security and privacy concerns by itself. Information is the commodity of the 21st century, and certain information can mean power and market advantage. As pointed out by Andreas Weiss, Director of EuroCloud, in an interview we held with him, data is one of the most important and valuable resources any company has. Therefore, security mechanisms to protect this data are necessary so that companies can make the right choices and decisions without worrying about data safety. In the Cloud Computing paradigm we have to trust a Cloud Service Provider (CSP), creating an extra dependency on a third party with which some customers, depending on the value of their data, will inevitably feel uncomfortable. Outsourcing business data to a place the organization does not own can deter it from using the benefits of Cloud Computing in an optimal way.
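
    One standard safeguard consistent with the abstract's argument, though not a mechanism this particular work prescribes, is client-side encryption: data is encrypted before it leaves the organization, so the CSP only ever stores ciphertext. A minimal sketch using the Python cryptography package:

        from cryptography.fernet import Fernet

        # The key stays with the data owner; the CSP never sees it.
        key = Fernet.generate_key()
        cipher = Fernet(key)

        record = b"quarterly sales figures - commercially sensitive"
        ciphertext = cipher.encrypt(record)   # safe to hand to the cloud provider
        # upload_to_cloud(ciphertext)         # hypothetical storage call

        # Later, only the key holder can recover the plaintext.
        assert cipher.decrypt(ciphertext) == record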

    Artificial Intelligence in Computer Networks: Role of AI in Network Security

    Artificial Intelligence (AI) in computer networks has been emerging over the last decade, and revolutionary inventions have brought automation and digitalization to the Internet. Computer networks are laid out in layered topologies; with the help of AI, a virtual software layer has been added that runs predictive algorithms such as Artificial Neural Networks (ANNs) by means of Machine Learning (ML) and Deep Learning (DL). This thesis analyzes the relationship between AI algorithms and the duplication of human cognitive behavior in emerging technologies. The advantages of AI in computer networks include automation, digitalization, the Internet of Things (IoT), centralization of data, etc.; at the same time, the biggest disadvantage is the ethical violation of privacy and the security of data. The thesis further discusses how AI makes use of many security protocols, including Next-Generation Firewalls, to prevent security violations. Software Network Analysis (SNA) and Software Defined Networks (SDN) are critical components of applying AI to computer networks. The thesis analyzes two main aspects: the role of Artificial Intelligence in computer networks, and how Artificial Intelligence helps secure computer networks against modern threats. Security has become one of today's main concerns: a production network receives on the order of thousands of attacks of different scales every day, and if proper network security measures are not configured and taken, a lot can be compromised. Network virtualization and Cloud Computing have seen exponential growth in the past few years, driven by the trend toward less human interaction and the elimination of repetitive tasks. Data is now more important than it has been in decades, because everything is moving toward digitalization, and proper Information Security policies are derived and implemented all over the world to ensure the protection of data. Europe has its own General Data Protection Regulation (GDPR), which requires every company that deals with data to implement measures ensuring that the data is protected; this in turn involves implementing the right network security measures so that only the right people have access to sensitive information. This thesis covers the overall impact of Artificial Intelligence on computer networks and network security.
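
    The thesis names ML and DL as the enabling techniques without fixing a pipeline; as one hedged illustration of learning-based network defense, the sketch below trains an unsupervised anomaly detector on toy per-flow features. The feature set and contamination rate are assumptions for illustration only.

        import numpy as np
        from sklearn.ensemble import IsolationForest

        # Toy per-flow features: [bytes sent, packets, mean inter-arrival time (ms)]
        rng = np.random.default_rng(0)
        normal_flows = rng.normal(loc=[5_000, 40, 20], scale=[1_000, 8, 5], size=(500, 3))
        scan_flows = rng.normal(loc=[200, 300, 1], scale=[50, 30, 0.5], size=(5, 3))

        # Fit on traffic assumed benign; flag flows that look structurally different.
        detector = IsolationForest(contamination=0.01, random_state=0)
        detector.fit(normal_flows)

        # predict() returns -1 for anomalous flows (candidate attacks), 1 for normal.
        print(detector.predict(scan_flows))        # expected: mostly -1
        print(detector.predict(normal_flows[:5]))  # expected: mostly 1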

    An OSINT Approach to Automated Asset Discovery and Monitoring

    The main objective of this thesis is to improve the efficiency of security operations centers through the articulation of different publicly open sources of security-related feeds. This is challenging because of the different abstraction models of the feeds, which need to be made compatible, the range of control values that each data source can have and that will impact the security events, and the scalability of the computational and networking resources required to collect security events. Following the industry standards proposed by the literature (OSCP guide, PTES and OWASP), the detection of hosts and sub-domains using an articulation of several sources is regarded as the first interaction in an engagement. This first interaction often misses sources that could allow the disclosure of more assets. This has become important since networks have scaled up to the cloud, where the IP address range is not owned by the company and important applications are often shared within the same IP, as with Virtual Hosts serving several applications from the same server. We focus on the first step of any engagement: the enumeration of the target network. Attackers often use several techniques to enumerate the target and discover vulnerable services. This enumeration could be improved by adding several other sources and techniques that are often left aside by the literature. Moreover, by creating an automated process it is possible for security operations centers to discover these assets and map the applications in use, keeping track of vulnerabilities with OSINT techniques and publicly available solutions before attackers try to exploit the services. This gives a vision of the Internet-facing services as attackers often see them, without querying the services directly, therefore evading detection. This research fits within the complete engagement process and should be integrated into already-built solutions; the results should therefore be able to connect to additional applications in order to reach further into the engagement process. By addressing these challenges we expect to greatly aid sysadmins and security teams, helping them secure their assets and ensure the security cleanliness of the enterprise, resulting in better policy compliance without ever connecting to the client hosts.
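
    One passive source in the spirit of this approach, though not necessarily among the exact sources the thesis articulates, is certificate-transparency logs, which expose subdomains without a single packet being sent to the target. A minimal sketch against crt.sh's public JSON endpoint, assuming that endpoint's current response format:

        import requests

        def ct_subdomains(domain: str) -> set[str]:
            """Passively collect subdomains of `domain` from certificate-transparency logs."""
            resp = requests.get(
                "https://crt.sh/",
                params={"q": f"%.{domain}", "output": "json"},
                timeout=30,
            )
            resp.raise_for_status()
            names = set()
            for entry in resp.json():
                # name_value may hold several names separated by newlines.
                for name in entry.get("name_value", "").splitlines():
                    name = name.strip().lstrip("*.").lower()
                    if name.endswith(domain):
                        names.add(name)
            return names

        # Example (only against a domain you are authorized to assess):
        # print(sorted(ct_subdomains("example.com")))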

    Enhanced connectivity in wireless mobile programmable networks

    International Mention in the doctoral degree. The architecture of current operator infrastructures is being challenged by the non-stop growing demand for data-hungry services appearing every day. While currently deployed operator networks have been able to cope with traffic demands so far, the architectures for the 5th generation of mobile networks (5G) are expected to support unprecedented traffic loads while decreasing the costs associated with network deployment and operations. Indeed, the forthcoming set of 5G standards will bring programmability and flexibility to levels never seen before. This has required introducing changes in the architecture of mobile networks, enabling features such as the split of control and data planes, as required to support rapid programming of heterogeneous data planes. Network softwarisation is hence seen as a key enabler to cope with such network evolution, as it permits controlling all networking functions through (re)programming, thus providing higher flexibility to meet heterogeneous requirements while keeping deployment and operational costs low. A great diversity in terms of traffic patterns, multi-tenancy, and heterogeneous, stringent traffic requirements is therefore expected in 5G networks. Software Defined Networking (SDN) and Network Function Virtualisation (NFV) have emerged as a basic tool-set for operators to manage their infrastructure with increased flexibility and reduced costs. As a result, new 5G services can now be envisioned, quickly programmed, and provisioned in response to user and market necessities, imposing a paradigm shift in service design. However, such flexibility requires the 5G transport network to undergo a profound transformation, evolving from a static connectivity substrate into a service-oriented infrastructure capable of accommodating the various 5G services, including Ultra-Reliable and Low Latency Communications (URLLC). Moreover, to achieve the desired flexibility and cost reduction, one promising approach is to leverage virtualisation technologies to dynamically host contents, services, and applications closer to the users, so as to offload the core network and reduce communication delay. This thesis tackles the above challenges, which are detailed in the following. A common characteristic of 5G services is the ubiquity and the almost permanent connection required from the mobile network. This imposes a challenge on the signalling procedures provided to keep track of users and to guarantee session continuity. Mobility management mechanisms will hence play a central role in 5G networks because of the always-on connectivity demand. Distributed Mobility Management (DMM) helps in this direction by flattening the network, hence improving its scalability, and by enabling local access to the Internet and other communication services, like mobile-edge clouds. Simultaneously, SDN opens up the possibility of running a multitude of intelligent and advanced applications for network optimisation purposes in a centralised network controller. The combination of DMM architectural principles with SDN management appears as a powerful tool for operators to cope with the management and data burden expected in 5G networks. To meet future mobile user demand at a reduced cost, operators are also looking at solutions such as C-RAN and different functional splits to decrease the cost of deploying and maintaining cell sites.
    The increasing stress on mobile radio access performance in a context of declining revenues for operators is hence requiring the evolution of the backhaul and fronthaul transport networks, which currently work decoupled. The heterogeneity of the nodes and transmission technologies inter-connecting the fronthaul and backhaul segments makes the network complex, costly, and inefficient to manage flexibly and dynamically. Indeed, the use of heterogeneous technologies forces operators to manage two physically separated networks, one for backhaul and one for fronthaul. In order to meet 5G requirements in a cost-effective manner, a unified 5G transport network that integrates the data, control, and management planes is hence required. Such an integrated fronthaul/backhaul transport network, denoted as crosshaul, will carry both fronthaul and backhaul traffic over heterogeneous data-plane technologies, which are software-controlled so as to adapt to the fluctuating capacity demand of the 5G air interfaces. Moreover, 5G transport networks will need to accommodate a wide spectrum of services on top of the same physical infrastructure. To that end, network slicing is seen as a suitable candidate for providing the necessary Quality of Service (QoS). Traffic differentiation is usually enforced at the border of the network in order to ensure proper forwarding of the traffic according to its class through the backbone. With network slicing, the traffic may now traverse many slice edges where the traffic policy needs to be enforced, discriminated, and ensured according to the service and tenant needs. However, the very property that makes this efficient and flexible management and operation possible, namely the logical centralisation, poses important challenges due to the lack of proper monitoring tools suited for SDN-based architectures. In order to take timely and correct decisions while operating a network, centralised intelligence applications need to be fed with a continuous stream of up-to-date network statistics. However, this is not feasible with current SDN solutions due to scalability and accuracy issues. Therefore, an adaptive telemetry system is required to support the diversity of 5G services and their stringent traffic requirements. The path towards 5G wireless networks also presents a clear trend of carrying out computations close to end users. Indeed, pushing contents, applications, and network functions closer to end users is necessary to cope with the huge data volume and low latency required in future 5G networks. Edge and fog frameworks have emerged recently to address this challenge. Whilst the edge framework is more infrastructure-focused and mobile-operator-oriented, the fog is more pervasive and includes any node (stationary or mobile), including terminal devices. By further utilising pervasive computational resources in proximity to users, edge and fog can be merged to construct a computing platform, which can also be used as a common stage for multiple radio access technologies (RATs) to share their information, hence opening a new dimension of multi-RAT integration.
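    As a hedged illustration of the adaptive telemetry idea argued for above, and not the thesis's actual design, the sketch below widens or narrows the polling interval for a switch counter according to how quickly the counter is changing:

        import time

        def adaptive_poll(read_counter, min_s=1.0, max_s=60.0, target_delta=1_000):
            """Poll a monotonically increasing counter, adapting the interval:
            fast-changing counters are sampled often, quiet ones rarely."""
            interval = min_s
            last = read_counter()
            while True:
                time.sleep(interval)
                now = read_counter()
                delta = now - last
                last = now
                if delta > target_delta:          # busy: tighten sampling
                    interval = max(min_s, interval / 2)
                elif delta < target_delta // 10:  # quiet: back off
                    interval = min(max_s, interval * 2)
                yield interval, delta

        # Usage sketch with a hypothetical per-port byte counter reader:
        # poller = adaptive_poll(lambda: read_port_bytes("s1-eth1"))
        # for interval, delta in poller:
        #     feed_controller_stats(interval, delta)   # hypothetical sink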

    Privacy by evidence: a software development methodology to provide privacy assurance.

    In an increasingly connected world, a diversity of software and sensors collect data from the environment and its inhabitants. Because of the richness of the information collected, privacy becomes an important requirement. Applications are being developed, and, although there are principles and rules regarding the privacy of individuals, there is still a lack of methodologies to guide the integration of privacy guidelines into the development process. Existing methodologies like Privacy by Design (PbD) are still vague and leave many open questions on how to apply them in practice. In this work we propose the concept of Privacy by Evidence (PbE), a software development methodology to provide privacy assurance. Given the difficulty of providing total privacy in many applications, we propose to document the mitigations in the form of evidences of privacy, aiming to increase confidence in the project. To validate its effectiveness, PbE has been used during the development of four applications that serve as case studies. The first case study is a smart metering application; the second, a people counting and monitoring application; the third, an energy-efficiency monitoring system; and the fourth, a two-factor authentication system. For these applications, the teams were able to provide seven, five, five, and four evidences of privacy, respectively, and we conclude that PbE can be effective in helping to understand and address privacy protection needs when developing software.
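
    The abstract fixes no data model for an "evidence of privacy"; the sketch below imagines the kind of minimal registry a team could keep while applying PbE. All field names are assumptions for illustration.

        from dataclasses import dataclass, field

        @dataclass
        class PrivacyEvidence:
            threat: str        # e.g. "load-curve analysis reveals household habits"
            mitigation: str    # e.g. "aggregate readings over 15-minute windows"
            artifact: str      # test report, code review, measurement, ...
            verified: bool = False

        @dataclass
        class PbERegistry:
            project: str
            evidences: list[PrivacyEvidence] = field(default_factory=list)

            def add(self, ev: PrivacyEvidence) -> None:
                self.evidences.append(ev)

            def confidence_summary(self) -> str:
                done = sum(e.verified for e in self.evidences)
                return f"{self.project}: {done}/{len(self.evidences)} evidences verified"

        # Usage sketch for a smart-metering case study like the abstract's first one:
        reg = PbERegistry("smart-metering")
        reg.add(PrivacyEvidence(
            threat="fine-grained meter data leaks appliance usage",
            mitigation="client-side aggregation before upload",
            artifact="unit tests + sampled-data audit",
            verified=True,
        ))
        print(reg.confidence_summary())  # smart-metering: 1/1 evidences verified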

    Antecipação na tomada de decisão com múltiplos critérios sob incerteza (Anticipation in decision making with multiple criteria under uncertainty)

    Advisor: Fernando José Von Zuben. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação.
    The presence of uncertainty in future outcomes can lead to indecision in choice processes, especially when eliciting the relative importances of multiple decision criteria and of long-term vs. near-term performance. Some decisions, however, must be taken under incomplete information, which may result in precipitated actions with unforeseen consequences. When a solution must be selected under multiple conflicting views for operating in time-varying and noisy environments, implementing flexible provisional alternatives can be critical to circumvent the lack of complete information by keeping future options open. Anticipatory engineering can then be regarded as the strategy of designing flexible solutions that enable decision makers to respond robustly to unpredictable scenarios. This strategy can thus mitigate the risks of strong unintended commitments to uncertain alternatives, while increasing adaptability to future changes. In this thesis, the roles of anticipation and of flexibility in automating sequential multiple-criteria decision-making processes under uncertainty are investigated. The dilemma of assigning relative importances to decision criteria and to immediate rewards under incomplete information is handled by autonomously anticipating flexible decisions predicted to maximally preserve the diversity of future choices. An online anticipatory learning methodology is then proposed for improving the range and quality of future trade-off solution sets. This goal is achieved by predicting maximal expected hypervolume sets, for which the anticipation capabilities of multi-objective metaheuristics are augmented with Bayesian tracking in both the objective and search spaces. The methodology has been applied to obtain investment decisions that are shown to significantly improve the future hypervolume of trade-off financial portfolios on out-of-sample stock data, when compared to a myopic strategy. Moreover, implementing flexible portfolio rebalancing decisions was confirmed as a significantly better strategy than randomly choosing an investment decision from the evolved stochastic efficient frontier in all tested artificial and real-world markets. Finally, the results suggest that anticipating flexible choices led to portfolio compositions that are significantly correlated with the observed improvements in out-of-sample future expected hypervolume.
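
    The hypervolume indicator that the methodology maximises has a simple closed form in two objectives. The sketch below computes it for a toy portfolio front, treating risk and negated expected return as two minimised objectives; that framing and the reference point are our assumptions, not the thesis's setup.

        def hypervolume_2d(points, ref):
            """Hypervolume dominated by `points` (minimisation in both
            objectives) with respect to reference point `ref`."""
            # Keep only non-dominated points strictly inside the reference box.
            pts = [p for p in points if p[0] < ref[0] and p[1] < ref[1]]
            pts = [p for p in pts
                   if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in pts)]
            pts.sort()  # ascending in f1 => descending in f2 along the front
            hv, prev_f1 = 0.0, ref[0]
            for f1, f2 in reversed(pts):
                hv += (prev_f1 - f1) * (ref[1] - f2)  # slab this point adds
                prev_f1 = f1
            return hv

        # Toy trade-off front: (risk, -expected return), both minimised.
        front = [(0.10, -0.08), (0.15, -0.11), (0.25, -0.13)]
        print(hypervolume_2d(front, ref=(0.40, 0.0)))  # 0.0345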

    Security-aware Cooperation in Dynamic Spectrum Access

    We have witnessed massive growth in wireless data, which almost doubles every year and is expected to skyrocket further due to the proliferation of devices and emerging data-hungry applications. To accommodate this explosive growth in mobile traffic, a large amount of wireless spectrum is needed. With limited spectrum resources, the current static spectrum allocation policy cannot serve future wireless systems well; moreover, it exacerbates spectrum scarcity by causing severe spectrum underutilization. As a promising solution, dynamic spectrum access (DSA) is envisaged to increase spectrum efficiency by dynamically sharing all the spectrum. DSA can be enabled by cognitive radio technologies, which allow unlicensed users (secondary users, i.e., SUs) to dynamically access the unused spectrum (i.e., spectrum holes) owned by licensed users (primary users, i.e., PUs). In order to identify spectrum holes, unlicensed users need to conduct spectrum sensing. However, spectrum sensing might be inaccurate due to multipath fading and shadowing. To address this problem, user cooperation can be leveraged in two main forms: cooperative spectrum sensing and cooperative cognitive radio networking (CCRN). In the former, SUs cooperate with each other in spectrum sensing to better detect spectrum holes. In the latter, SUs cooperate with the PUs to gain access opportunities by improving the transmission performance of the PUs. However, cooperation can also raise security issues: malicious users might participate in cooperation to corrupt or disrupt the communication of legitimate users, selfish users might refuse to contribute to cooperation out of self-interest, etc. These security issues are of great importance and need to be considered for cooperation in DSA. In this thesis, we study security-aware cooperation in DSA. First, we investigate cooperative spectrum sensing in the multi-channel scenario, such that a user can be scheduled for both spectrum sensing and spectrum sharing. The cooperative framework achieves a higher average throughput per user, which gives selfish users an incentive to participate in cooperative spectrum sensing. Second, secure communication in CCRN is studied, where the SUs cooperate with the PU to enhance the latter's communication security and in return gain transmission opportunities; partner selection, spectrum access time allocation, and power allocation are investigated. Third, we study risk-aware cooperation-based DSA for the multiple-channel scenario, where multiple SUs cooperate with multiple PUs for spectrum access opportunities, taking the trustworthiness of the SUs into account. Lastly, we propose an incentive mechanism to stimulate SUs to cooperate with PUs when they have no traffic of their own: cooperating SUs are motivated to enhance the security of the PUs by accumulating credits, which they can later spend on spectrum trading when they do have traffic.
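
    The abstract leaves the credit bookkeeping unspecified; the sketch below is one hedged reading in which SUs earn credits for slots spent securing a PU's transmission and later spend them on access time. The earning and pricing rates are illustrative assumptions.

        class CreditLedger:
            """Track credits SUs earn by cooperating with PUs and spend on access."""

            def __init__(self, credits_per_relay_slot=2, credits_per_access_slot=5):
                self.balance = {}                      # SU id -> credits
                self.earn_rate = credits_per_relay_slot
                self.price = credits_per_access_slot

            def record_cooperation(self, su: str, relay_slots: int) -> None:
                """SU relayed/secured the PU's traffic for `relay_slots` slots."""
                self.balance[su] = self.balance.get(su, 0) + relay_slots * self.earn_rate

            def buy_access(self, su: str, access_slots: int) -> bool:
                """Spend credits for spectrum access; refuse if the balance is short."""
                cost = access_slots * self.price
                if self.balance.get(su, 0) < cost:
                    return False
                self.balance[su] -= cost
                return True

        ledger = CreditLedger()
        ledger.record_cooperation("su-7", relay_slots=10)   # earns 20 credits
        print(ledger.buy_access("su-7", access_slots=3))    # True, costs 15
        print(ledger.balance["su-7"])                       # 5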