
    Access and Usage Control in Grid

    Grid is a computational environment where heterogeneous resources are virtualized and outsourced to multiple users across the Internet. The increasing popularity of resource virtualization is explained by the suitability of such technology for the automated execution of heavy parts of business and research processes. An efficient and flexible framework for access and usage control over Grid resources remains a prominent challenge. The primary objective of this thesis is to design a novel access and usage control model providing fine-grained and continuous control over computational Grid resources. The approach takes into account the peculiarities of Grid: service-oriented architecture, long-lived interactions, heterogeneity and distribution of resources, openness and high dynamics. We tackle the access and usage control problem in Grid with the Usage CONtrol (UCON) model, which provides continuity of control and supports mutability of the authorization information used to make access decisions. Authorization information is formed by attributes of the resource requestor, the resource provider and the environment in which the system operates. Our access and usage control model is considered on three levels of abstraction: policy, enforcement and implementation. The policy level introduces security policies designed to specify the desired granularity of control: coarse-grained policies that manage access to and usage of Grid services, and fine-grained policies that monitor the usage of underlying resources allocated to a particular Grid service instance. We introduce the U-XACML policy language and exploit the POLPA policy language to specify and formalize security policies. Next, the policy level presents attribute management models. Trust negotiations are applied to collect the set of attributes needed to produce access decisions. For mutable attributes, a risk-aware access and usage control model is given to approximate continuous control and the timely acquisition of fresh attribute values. The enforcement level presents the architecture of the stateful reference monitor designed to enforce security policies at coarse- and fine-grained levels of control. The implementation level presents a proof-of-concept realization of our access and usage control model in Globus Toolkit, the most widely used middleware for setting up computational Grids.
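    The continuity of control and attribute mutability central to UCON can be illustrated with a minimal sketch: a policy is evaluated once before access starts and then re-evaluated whenever a mutable attribute changes, so that an ongoing session can be revoked mid-usage. This is an illustrative toy, not U-XACML or POLPA syntax; all class, attribute and quota names below are hypothetical.

```python
# Illustrative sketch of UCON-style continuous usage control.
# Not the thesis's U-XACML/POLPA policies; names are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

Attributes = Dict[str, object]
Condition = Callable[[Attributes], bool]

@dataclass
class UsagePolicy:
    # Predicates over subject/resource/environment attributes.
    pre_conditions: List[Condition] = field(default_factory=list)
    ongoing_conditions: List[Condition] = field(default_factory=list)

    def try_access(self, attrs: Attributes) -> bool:
        """Pre-decision: all pre-conditions must hold before access starts."""
        return all(cond(attrs) for cond in self.pre_conditions)

    def recheck(self, attrs: Attributes) -> bool:
        """Continuous control: re-evaluated whenever a mutable attribute
        changes; returning False means the session must be revoked."""
        return all(cond(attrs) for cond in self.ongoing_conditions)

policy = UsagePolicy(
    pre_conditions=[lambda a: a["role"] == "researcher"],
    ongoing_conditions=[lambda a: a["cpu_quota_used"] < a["cpu_quota"]],
)

attrs = {"role": "researcher", "cpu_quota": 100, "cpu_quota_used": 10}
granted = policy.try_access(attrs)   # access granted at request time
attrs["cpu_quota_used"] = 120        # mutable attribute changes mid-session
still_ok = policy.recheck(attrs)     # now False: the Grid job is revoked
```

The key difference from classical access control is the second evaluation: the decision is not made once and forgotten, but tracks attribute updates for the lifetime of the long-lived Grid interaction.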

    From security to assurance in the cloud: a survey

    The cloud computing paradigm has become a mainstream solution for the deployment of business processes and applications. In the public cloud vision, infrastructure, platform, and software services are provisioned to tenants (i.e., customers and service providers) on a pay-as-you-go basis. Cloud tenants can use cloud resources at lower cost, and with higher performance and flexibility, than traditional on-premises resources, without having to care about infrastructure management. Still, cloud tenants remain concerned about the cloud's level of service and the nonfunctional properties their applications can count on. In the last few years, the research community has been focusing on the nonfunctional aspects of the cloud paradigm, among which cloud security stands out. Several approaches to security have been described and summarized in general surveys on cloud security techniques. The survey in this article focuses on the interface between cloud security and cloud security assurance. First, we provide an overview of the state of the art on cloud security. Then, we introduce the notion of cloud security assurance and analyze its growing impact on cloud security approaches. Finally, we present some recommendations for the development of next-generation cloud security and assurance solutions.

    Policy-based asset sharing in collaborative environments

    Resource sharing is an important but complex problem. The problem is exacerbated in a dynamic coalition context, due to multi-partner constraints (imposed by security, privacy and general operational issues) placed on the resources. Take, for example, scenarios such as emergency response operations, corporate collaborative environments, or even short-lived opportunistic networks, where multi-party teams are formed, utilizing and sharing their own resources to support collective endeavors that a single party would find difficult, if not impossible, to achieve alone. Policy-Based Management Systems (PBMS) have been proposed as a suitable paradigm to reduce this complexity and provide a means for effective resource sharing. The overarching problem this thesis deals with is the development of PBMS techniques and technologies that allow users operating in collaborative environments to share their assets dynamically and transparently through high-level policies. To do so, it focuses on three sub-problems, each related to a different aspect of a PBMS, and makes three key contributions. The first is a novel model that proposes an alternative approach to asset sharing, better suited than traditional approaches to collaborative and dynamic environments. Existing asset-sharing approaches incur extra overhead when adapting to situational changes, because the decision-making centre (and therefore the policy-making centre) is far from where the changes take place; the event-driven approach proposed in this thesis avoids this. The second contribution is an efficient, high-level policy conflict analysis mechanism that provides a more transparent (in terms of user interaction) way of maintaining a conflict-free PBMS. Its discrete, sequential execution breaks the analysis down into steps, making conflict analysis more efficient than existing approaches while letting human policy authors track the whole process through a near-natural-language representation. The third contribution is an interest-based policy negotiation mechanism for enhancing asset sharing while promoting collaboration in coalition environments. The enabling technology for the last two contributions is a controlled natural language representation, which is used to define a policy language. To evaluate the proposed ideas, we run simulation experiments for the first and third contributions, and both simulate and formally analyze the second.
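    The kind of stepwise conflict analysis described above can be gestured at with a toy model: two policies conflict when they target the same subject and asset but prescribe opposite effects, and the analysis proceeds one pair at a time. The `Policy` shape and the permit/deny vocabulary below are deliberate simplifications, not the thesis's controlled-natural-language policy language.

```python
# Toy modality-conflict detection between high-level sharing policies.
# A simplified stand-in for the thesis's stepwise analysis mechanism.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass(frozen=True)
class Policy:
    subject: str   # who the policy applies to
    asset: str     # which asset it governs
    effect: str    # "permit" or "deny"

def conflicts(p: Policy, q: Policy) -> bool:
    """Two policies conflict when they target the same subject/asset
    pair but prescribe opposite effects."""
    return p.subject == q.subject and p.asset == q.asset and p.effect != q.effect

def find_conflicts(policies: List[Policy]) -> List[Tuple[Policy, Policy]]:
    # Discrete, pairwise pass: each step compares one pair, mirroring a
    # stepwise (rather than monolithic) analysis process.
    return [(p, q)
            for i, p in enumerate(policies)
            for q in policies[i + 1:]
            if conflicts(p, q)]

rules = [
    Policy("partnerA", "uav-feed", "permit"),
    Policy("partnerA", "uav-feed", "deny"),
    Policy("partnerB", "map-layer", "permit"),
]
found = find_conflicts(rules)   # one conflicting pair, on "uav-feed"
```

A real mechanism must also handle overlapping (not just identical) subject and asset scopes, which is where most of the analysis effort goes.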

    A Comprehensive Security Framework for Securing Sensors in Smart Devices and Applications

    This doctoral dissertation introduces novel security frameworks to detect sensor-based threats on smart devices and applications in smart settings such as smart homes and smart offices. First, we present a formal taxonomy and in-depth impact analysis of existing sensor-based threats to smart devices and applications based on attack characteristics, targeted components, and capabilities. Then, we design a novel context-aware intrusion detection system, 6thSense, to detect sensor-based threats in standalone smart devices (e.g., smartphones and smartwatches). 6thSense considers user activity-sensor co-dependence in standalone smart devices to learn the ongoing user activity contexts and builds a context-aware model to distinguish malicious sensor activities from benign user behavior. Further, we develop a platform-independent context-aware security framework, Aegis, to detect the behavior of malicious sensors and devices in a connected smart environment (e.g., smart homes and offices). Aegis observes the changing patterns of the states of smart sensors and devices for user activities in a smart environment and builds a contextual model to detect malicious activities, considering sensor-device-user interactions and multi-platform correlation. Then, to limit unauthorized and malicious sensor and device access, we present kratos, a multi-user, multi-device-aware access control system for smart environments and devices. kratos introduces a formal policy language to understand diverse user demands in smart environments and implements a novel policy negotiation algorithm to automatically detect and resolve conflicting user demands and limit unauthorized access. For each contribution, this dissertation presents novel security mechanisms and techniques that can be implemented independently or collectively to secure sensors in real-life smart devices, systems, and applications. Moreover, each contribution is supported by several user and usability studies we performed to understand users' needs in terms of sensor security and access control in smart devices and to improve the user experience in these real-time systems.
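    The contextual idea behind a system like Aegis can be sketched very simply: learn which sensor-state combinations co-occur during benign user activity, then flag combinations never observed in training. The real frameworks build far richer learned models over sensor-device-user interactions; the frequency table, sensor names and states below are purely illustrative.

```python
# Toy context model: benign sensor-state co-occurrences are counted,
# and unseen combinations are flagged. Illustrative only; the actual
# 6thSense/Aegis detectors use trained ML models.
from collections import Counter

def train(benign_snapshots):
    """Each snapshot is a tuple of (sensor, state) pairs observed together."""
    return Counter(frozenset(s) for s in benign_snapshots)

def is_malicious(model, snapshot, min_support=1):
    # A combination seen fewer than min_support times in benign
    # training data is treated as out of context.
    return model[frozenset(snapshot)] < min_support

benign = [
    (("motion", "on"), ("light", "on")),    # user walks in, light follows
    (("motion", "off"), ("light", "off")),
]
model = train(benign)

# Light turning on with no motion was never seen during benign activity.
suspicious = is_malicious(model, (("motion", "off"), ("light", "on")))
```

The point of the sketch is the shape of the decision: maliciousness is judged relative to learned activity context, not per-sensor thresholds.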

    End-to-End Network Slices: From Network Function Profile Extraction to Granular SLAs

    Advisor: Christian Rodolfo Esteve Rothenberg. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação. In the last ten years, network softwarisation processes have been continuously diversified and gradually incorporated into production, mainly through the paradigms of Software-Defined Networking (e.g., programmable network flow rules) and Network Functions Virtualization (e.g., orchestration of virtualized network functions). Based on this process, the concept of a network slice emerges as a way of defining programmable end-to-end network paths, possibly over shared network infrastructures, with strict performance requirements associated with a particular business case. This thesis investigates the hypothesis that the disaggregation of network function performance metrics impacts and composes a network slice footprint, incurring diverse slicing feature options which, when realized, should have their Service Level Agreement (SLA) life-cycle management transparently implemented in correspondence with their end-to-end communication business case. The validation of this assertion takes place in three aspects: the degrees of freedom by which the performance of virtualized network functions can be expressed; methods of rationalizing the footprint of network slices; and transparent ways to track and manage network assets among multiple administrative domains. To achieve these goals, this thesis makes several contributions, among them: the construction of a platform for automating performance-testing methodologies for virtualized network functions; the elaboration of a methodology for analyzing the footprint features of network slices, based on a machine-learning classifier and a multi-criteria analysis algorithm; and the construction of a prototype using blockchain to carry out smart contracts involving service-level agreements between administrative domains. Through experiments and analysis we suggest that: performance metrics of virtualized network functions depend on resource allocation, internal configurations and test traffic stimulus; network slices can have their resource allocations consistently classified by different criteria; and agreements between administrative domains can be realized transparently, and at various granularities, through blockchain smart contracts. At the end of this thesis, a broad discussion answers the research questions associated with the investigated hypothesis, so that the hypothesis is evaluated against a wide view of the thesis's contributions and future work. Doctorate in Electrical Engineering (Computer Engineering); FUNCAM.
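    The multi-criteria side of the slice-allocation analysis can be gestured at with a weighted-sum score over normalized per-slice metrics. The thesis pairs a machine-learning classifier with a multi-criteria analysis algorithm; the specific metrics, weights and candidate slices below are invented for illustration and are not taken from the thesis.

```python
# Illustrative weighted-sum scoring of candidate network-slice resource
# allocations. Metrics are assumed pre-normalized to [0, 1], higher = better
# (so "cost" here would be an inverted, normalized cost).
def score(allocation, weights):
    """allocation: metric -> normalized value; weights: metric -> importance."""
    return sum(weights[m] * v for m, v in allocation.items())

weights = {"throughput": 0.5, "latency": 0.3, "cost": 0.2}
candidates = {
    "slice-A": {"throughput": 0.9, "latency": 0.6, "cost": 0.4},
    "slice-B": {"throughput": 0.6, "latency": 0.9, "cost": 0.8},
}

# Rank candidate allocations by their aggregate score.
best = max(candidates, key=lambda c: score(candidates[c], weights))
```

Changing the weights reorders the ranking, which is exactly why the choice of criteria (and how per-function performance metrics feed them) matters for slice footprint decisions.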

    Building Efficient Smart Cities

    Current technological developments offer promising solutions to the challenges faced by cities, such as crowding, pollution, housing, the search for greater comfort, better healthcare, optimized mobility and other urban services that must be adapted to the fast-paced life of the citizens. Cities that deploy technology to optimize their processes and infrastructure fit under the concept of a smart city. An increasing number of cities strive towards becoming smart, and some are already recognized as such, including Singapore, London and Barcelona. Our society has an ever-greater reliance on technology for its sustenance. This will continue into the future, as technology is rapidly penetrating all facets of human life, from daily activities to the workplace and industries. A myriad of data is generated from all these digitized processes, which can be used to further enhance all smart services, increasing their adaptability, precision and efficiency. However, dealing with large amounts of data coming from different types of sources is a complex process; this impedes many cities from taking full advantage of data, or, even worse, a lack of control over the data sources may lead to serious security issues, leaving cities vulnerable to cybercrime. Given that smart city infrastructure is largely digitized, a cyberattack would have severe consequences for the city's operation, leading to economic loss, citizen distrust and the shutdown of essential city services and networks. This is a threat to the efficiency smart cities strive for.

    Methodologies for innovation and best practices in Industry 4.0 for SMEs

    Today, cyber-physical systems are transforming the way in which industries operate; we call this Industry 4.0, or the fourth industrial revolution. Industry 4.0 involves the use of technologies such as Cloud Computing, Edge Computing, the Internet of Things, Robotics and, most of all, Big Data. Big Data is the very basis of the Industry 4.0 paradigm, because it can provide crucial information on all the processes that take place within manufacturing (which helps optimize processes and prevent downtime), on employees (performance, individual needs, safety in the workplace) and on clients and customers (their needs and wants, trends, opinions), which helps businesses become competitive and expand into international markets. Current processing capabilities, enabled by technologies such as the Internet of Things, Cloud Computing and Edge Computing, mean that data can be processed much faster and with greater security. Artificial Intelligence techniques, such as Machine Learning, can help machines take certain decisions autonomously, or help humans make decisions much faster. Furthermore, data can be used to feed predictive models, which can help businesses and manufacturers anticipate future changes and needs, and address problems before they cause tangible harm.
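    The predictive-model idea above can be sketched with a deliberately simple example: flag a machine for inspection when a rolling average of a sensor reading drifts past a threshold, so the problem is addressed before it causes tangible harm. Real Industry 4.0 deployments use trained ML models over many signals; the readings, window and threshold below are invented.

```python
# Toy predictive-maintenance rule: alert when the rolling mean of a
# normalized sensor reading exceeds a threshold. Illustrative only.
from collections import deque

def rolling_alert(readings, window=3, threshold=0.8):
    buf = deque(maxlen=window)
    alerts = []
    for i, r in enumerate(readings):
        buf.append(r)
        # Only judge once a full window is available.
        if len(buf) == window and sum(buf) / window > threshold:
            alerts.append(i)   # index where the drift crosses the threshold
    return alerts

vibration = [0.2, 0.3, 0.3, 0.7, 0.9, 1.0]   # hypothetical normalized levels
alerts = rolling_alert(vibration)             # drift detected near the end
```

The averaging window is the whole point: a single noisy spike does not trigger an alert, but a sustained upward drift does.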

    Efficient Digital Management in Smart Cities

    The concept of smart cities puts the citizen at the center of all processes. It is the citizen who decides what kind of city they live in. Their opinions and attitudes towards technologies, and the solutions they would like to see in their cities, must be listened to. With Deep Intelligence, cities will be able to create more effective citizen-centered services, as the tool can collect data from multiple sources, such as databases and social networks, from which valuable information on citizens' opinions and attitudes regarding technology, smart city services and urban problems may be extracted.

    AIoT for Achieving Sustainable Development Goals

    Artificial Intelligence of Things (AIoT) is a relatively new concept that involves the merging of Artificial Intelligence (AI) with the Internet of Things (IoT). It has emerged from the realization that Internet of Things networks could be further enhanced if they were also provided with Artificial Intelligence, improving data extraction and network operation. Prior to AIoT, the Internet of Things would consist of networks of sensors embedded in a physical environment that collected data and sent them to a remote server. Once the data reached the server, analysis was carried out, normally involving the application of a series of Artificial Intelligence techniques by experts. However, as Internet of Things networks expand in smart cities, this workflow makes optimal operation unfeasible, because the volume of data captured by IoT increases continually. Sending such amounts of data to a remote server becomes costly, time-consuming and resource-inefficient. Moreover, dependence on a central server means that a server failure (likely if it is overloaded with data) would halt the operation of the smart service for which the IoT network had been deployed. Thus, decentralizing the operation becomes a crucial element of AIoT. This is done through the Edge Computing paradigm, which takes the processing of data to the edge of the network. Artificial Intelligence is placed at the edge of the network so that the data may be processed, filtered and analyzed there. It is even possible to equip the edge of the network with the ability to make decisions, through the implementation of AI techniques such as Machine Learning. The speed of decision making at the edge of the network means that many social, environmental, industrial and administrative processes may be optimized, as crucial decisions may be taken faster. Deep Intelligence is a tool that employs disruptive Artificial Intelligence techniques for data analysis (i.e., classification, clustering, forecasting, optimization and visualization). Its strength lies in its ability to extract data from virtually any source type. This is a very important feature given the heterogeneity of the data being produced in the world today. Another very important characteristic is its intuitiveness and ability to operate almost autonomously. The user is guided through the process, which means that anyone can use it without any knowledge of the technical, technological and mathematical aspects of the processes performed by the platform. This means that the Deepint.net platform integrates functionalities that would normally take years to implement in any sector individually, and that would normally require a group of experts in data analysis and related technologies [1-322]. The Deep Intelligence platform can be used to easily operate Edge Computing architectures and IoT networks. The joint characteristics of a well-designed Edge Computing platform (that is, one which brings computing resources to the edge of the network) and of the advanced Deepint.net platform deployed in a cloud environment mean that high speed, real-time response, effective troubleshooting and management, as well as precise forecasting, can be achieved. Moreover, the low cost of the solution, combined with the availability of low-cost sensors, devices and Edge Computing hardware, means that deployment becomes a possibility for developing countries, where such solutions are needed most.
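    The edge-side processing described above can be sketched as a filter-and-aggregate step: the edge node keeps anomalous samples verbatim and reduces the rest to one compact summary, so far less data travels to the cloud. The threshold, field names and sample values below are illustrative assumptions, not part of any particular platform.

```python
# Sketch of edge-side pre-processing in an AIoT pipeline: raw sensor
# samples are filtered and aggregated at the edge before upload.
def edge_summarize(samples, anomaly_threshold=30.0):
    """Keep anomalies verbatim; reduce normal samples to one aggregate record."""
    anomalies = [s for s in samples if s > anomaly_threshold]
    normal = [s for s in samples if s <= anomaly_threshold]
    summary = {
        "count": len(normal),
        "mean": sum(normal) / len(normal) if normal else None,
    }
    # Only this compact payload is sent to the remote server.
    return {"summary": summary, "anomalies": anomalies}

raw = [21.0, 22.5, 21.8, 35.2, 22.1]   # e.g. temperature readings at the edge
payload = edge_summarize(raw)           # 5 samples -> 1 summary + 1 anomaly
```

This is the essence of the cost argument: bandwidth and central-server load scale with the number of anomalies, not with the raw sampling rate.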