
    Which Processes Do Users Not Want Online? - Extending Process Virtualization Theory

    Following the advent of the Internet, more and more processes are provided virtually, i.e., without physical interaction between the people and objects involved. For instance, e-commerce has virtualized shopping processes, since products are bought without physical inspection or interaction with sales staff. This study is founded on the key idea of process virtualization theory (PVT) that, from the users' perspective, not all processes are equally amenable to virtualization. We investigate the characteristics of processes that cause users' resistance toward the virtualized process. Surveying 501 individuals regarding 10 processes, this study constitutes the first quantitative test of the predictive capabilities of PVT across varying processes. Moreover, it introduces and successfully tests the extended PVT (EPVT), which integrates PVT with multiple related constructs from the extant literature in a unified model with multi-order causal relations. It thereby clearly enhances our understanding of human behavior with regard to the frequent phenomenon of process virtualization.

    Big Data Optimization : Algorithmic Framework for Data Analysis Guided by Semantics

    Thesis defense date: 9 November 2018. Over the past decade, the rapid rise of data creation in all domains of knowledge, such as traffic, medicine, social networks, and industry, has highlighted the need to enhance the process of analyzing large data volumes, in order to manage them more easily and to discover new relationships hidden in them. Optimization problems, which are commonly found in current industry, are not unrelated to this trend, so Multi-Objective Optimization Algorithms (MOAs) should bear this new scenario in mind. This means that MOAs have to deal with problems that have either various data sources (typically streaming) or huge amounts of data. These features are found in particular in Dynamic Multi-Objective Problems (DMOPs), which are related to Big Data optimization problems, mostly with regard to velocity and variability. When dealing with DMOPs, whenever changes in the environment affect the solutions of the problem (i.e., the Pareto set, the Pareto front, or both), and therefore the fitness landscape, the optimization algorithm must react and adapt the search to the new features of the problem. Big Data analytics are long and complex processes; therefore, with the aim of simplifying them, they are carried out through a series of steps. A typical analysis is composed of data collection, data manipulation, data analysis and, finally, result visualization. In the process of creating a Big Data workflow, the analyst should bear in mind the semantics of the problem domain knowledge and its data. Ontologies are the standard way of describing knowledge about a domain. As the global target of this PhD thesis, we are interested in investigating the use of semantics in the process of Big Data analysis, focused not only on machine learning analysis but also on optimization.
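    The reactive behaviour described above can be illustrated with a minimal sketch of a dynamic multi-objective loop, assuming a generic evolutionary scheme rather than any specific algorithm from the thesis; the drifting test problem, the sentinel-based change detection, and the partial-restart reaction shown here are hypothetical placeholders.

    import random

    def evaluate(x, t):
        """Hypothetical bi-objective problem whose optima drift with the environment state t."""
        return ((x - t) ** 2, (x - t - 2.0) ** 2)

    def dominates(a, b):
        """Pareto dominance: a is at least as good in every objective and better in one."""
        return all(ai <= bi for ai, bi in zip(a, b)) and any(ai < bi for ai, bi in zip(a, b))

    def non_dominated(evaluated):
        """Keep only solutions not dominated by any other (the current front approximation)."""
        return [(x, f) for x, f in evaluated
                if not any(dominates(g, f) for _, g in evaluated if g is not f)]

    def change_detected(sentinels, t):
        """Re-evaluate stored sentinel solutions; if any objective vector differs,
        the fitness landscape has shifted and the search must adapt."""
        return any(evaluate(x, t) != f for x, f in sentinels)

    def dynamic_moo(generations=200, pop_size=30, seed=1):
        random.seed(seed)
        pop = [random.uniform(-5.0, 5.0) for _ in range(pop_size)]
        t = 0.0
        sentinels = [(x, evaluate(x, t)) for x in pop[:5]]
        archive = []
        for gen in range(generations):
            if gen and gen % 50 == 0:
                t += 1.0                     # simulated environmental change (e.g. a streaming update)
            if change_detected(sentinels, t):
                # React: partially re-randomize the population and rebuild the outdated archive
                pop = pop[: pop_size // 2] + [random.uniform(-5.0, 5.0)
                                              for _ in range(pop_size - pop_size // 2)]
                archive = []
                sentinels = [(x, evaluate(x, t)) for x in pop[:5]]
            # Variation (Gaussian mutation), then environmental selection by dominance
            offspring = [x + random.gauss(0.0, 0.3) for x in pop]
            evaluated = [(x, evaluate(x, t)) for x in pop + offspring]
            front = non_dominated(evaluated)
            archive = non_dominated(front + archive)
            survivors = [x for x, _ in front]
            pop = (survivors + [x for x, _ in evaluated if x not in survivors])[:pop_size]
        return archive

    if __name__ == "__main__":
        front = dynamic_moo()
        print(f"{len(front)} non-dominated solutions in the final archive")

    The key point of the sketch is the reaction step: once a change in the fitness landscape is detected, previously archived solutions can no longer be trusted and the search must be partly re-seeded before it continues.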

    IT Laws in the Era of Cloud-Computing

    This book documents the findings and recommendations of research into the question of how IT laws should develop, on the understanding that today's information and communication technology is shaped by cloud computing, which lies at the foundations of contemporary and future IT as its most widespread enabler. In particular, this study develops along both a comparative and an interdisciplinary axis: comparatively, by examining EU and US law, and on an interdisciplinary level, by dealing with law and IT. Focusing on the study of data protection and privacy in cloud environments, the book examines three main challenges on the road towards more efficient cloud computing regulation:
    - understanding the reasons behind the development of diverging legal structures and schools of thought on IT law
    - ensuring privacy and security in digital clouds
    - converging regulatory approaches to digital clouds in the hope of more harmonised IT laws in the future

    Preserving Virtual Worlds Final Report

    The Preserving Virtual Worlds project is a collaborative research venture of the Rochester Institute of Technology, Stanford University, the University of Maryland, the University of Illinois at Urbana-Champaign, and Linden Lab, conducted as part of Preserving Creative America, an initiative of the National Digital Information Infrastructure and Preservation Program at the Library of Congress. The primary goals of our project have been to investigate issues surrounding the preservation of video games and interactive fiction through a series of case studies of games and literature from various periods in computing history, and to develop basic standards for metadata and content representation of these digital artifacts for long-term archival storage.

    Functionality-based application confinement: A parameterised and hierarchical approach to policy abstraction for rule-based application-oriented access controls

    Access controls are traditionally designed to protect resources from users, and consequently make access decisions based on the identity of the user, treating all processes as if they are acting on behalf of the user that runs them. However, this user-oriented approach is insufficient at protecting against contemporary threats, where security compromises are often due to applications running malicious code, either because of software vulnerabilities or malware. Application-oriented access controls can mitigate this threat by managing the authority of individual applications. Rule-based application-oriented access controls can restrict applications to only the specific finely grained resources required for them to carry out their tasks, and thus can significantly limit the damage that can be caused by malicious code. Unfortunately, existing application-oriented access controls have policy complexity and usability problems that have limited their use. This thesis proposes a new access control model, known as functionality-based application confinement (FBAC). The FBAC model has a number of unique features designed to overcome problems with previous approaches. Policy abstractions, known as functionalities, are used to assign authority to applications based on the features they provide. Functionalities authorise elaborate sets of finely grained privileges based on high-level security goals, and adapt to the needs of specific applications through parameterisation. FBAC is hierarchical, which enables it to provide layers of abstraction and encapsulation in policy. It also simultaneously enforces the security goals of both users and administrators by providing discretionary and mandatory controls. An LSM-based (Linux Security Module) prototype implementation, known as FBAC-LSM, was developed as a proof of concept and was used to evaluate the new model and associated techniques. The policy requirements of over one hundred applications were analysed, and policy abstractions and application policies were developed. Analysis showed that the FBAC model is capable of representing the privilege needs of applications. The model is also well suited to automation techniques that can in many cases create complete application policies a priori, that is, without first running the applications. This is an improvement over previous approaches, which typically rely on learning modes to generate policies. A usability study was conducted, which showed that compared to two widely deployed alternatives (SELinux and AppArmor), FBAC-LSM had significantly higher perceived usability and resulted in significantly more protective policies. Qualitative analysis was performed and gave further insight into the issues surrounding the usability of application-oriented access controls, and confirmed the success of the FBAC model.
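    As a rough illustration of parameterised, hierarchical policy abstractions (a toy sketch, not the actual FBAC-LSM policy language; the functionality names, rule syntax, parameters, and paths below are hypothetical), consider how high-level functionalities could expand into finely grained rules for a specific application:

    from dataclasses import dataclass, field

    @dataclass
    class Functionality:
        """A reusable policy abstraction: a named set of privileges, possibly built
        from lower-level functionalities, with parameters filled in per application."""
        name: str
        parameters: tuple = ()
        rules: list = field(default_factory=list)      # e.g. ("read", "{config_dir}/*")
        children: list = field(default_factory=list)   # lower-level functionalities

        def expand(self, args):
            """Resolve parameters and recursively expand child functionalities
            into the flat set of finely grained rules that would be enforced."""
            expanded = [(op, target.format(**args)) for op, target in self.rules]
            for child in self.children:
                expanded.extend(child.expand(args))
            return expanded

    # Low-level building blocks (hypothetical rule syntax)
    read_config = Functionality("read_config", ("config_dir",),
                                rules=[("read", "{config_dir}/*")])
    save_downloads = Functionality("save_downloads", ("download_dir",),
                                   rules=[("write", "{download_dir}/*")])
    http_client = Functionality("http_client",
                                rules=[("connect", "tcp/80"), ("connect", "tcp/443")])

    # A high-level functionality composed hierarchically from the blocks above
    web_browser = Functionality("web_browser", ("config_dir", "download_dir"),
                                children=[read_config, save_downloads, http_client])

    # Confine a specific application by parameterising the high-level functionality
    policy = web_browser.expand({"config_dir": "/home/alice/.browser",
                                 "download_dir": "/home/alice/Downloads"})
    for op, target in policy:
        print(f"allow {op:8} {target}")

    The hierarchy is what provides the abstraction and encapsulation mentioned above: a user reasons about "web_browser" and its two parameters, while the enforcement layer sees only the expanded, finely grained rules.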

    Energy-Efficient Software

    The energy consumption of ICT is growing at an unprecedented pace. The main drivers for this growth are the widespread diffusion of mobile devices and the proliferation of datacenters, the most power-hungry IT facilities. In addition, it is predicted that the demand for ICT technologies and services will increase in the coming years. Finding solutions to decrease ICT's energy footprint is and will be a top priority for researchers and professionals in the field. As a matter of fact, hardware technology has substantially improved throughout the years: modern ICT devices are definitely more energy efficient than their predecessors in terms of performance per watt. However, as recent studies show, these improvements are not effectively reducing the growth rate of ICT energy consumption. This suggests that these devices are not used in an energy-efficient way. Hence, we have to look at software. Modern software applications are not designed and implemented with energy efficiency in mind. As hardware became more and more powerful (and cheaper), software developers were no longer concerned with optimizing resource usage. Rather, they focused on providing additional features, adding layers of abstraction and complexity to their products. This ultimately resulted in bloated, slow software applications that waste hardware resources and, consequently, energy. In this dissertation, the relationship between software behavior and hardware energy consumption is explored in detail. For this purpose, the abstraction levels of software are traversed upwards, from source code to architectural components. Empirical research methods and evidence-based software engineering approaches serve as a basis. First of all, this dissertation shows the impact that software has on energy consumption. Secondly, it gives examples of best practices and tactics that can be adopted to improve software energy efficiency, or to design energy-efficient software from scratch. Finally, this knowledge is synthesized in a conceptual framework that gives the reader an overview of possible strategies for software energy efficiency, along with examples and suggestions for future research.
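    The link between software behaviour and hardware energy consumption can be made concrete with a minimal measurement sketch, assuming a Linux machine that exposes Intel RAPL counters through the powercap sysfs interface (the paths, their readability, and the package-wide scope of the counter all depend on hardware, permissions, and what else is running); the workload function is a placeholder, not a method from the dissertation.

    import time

    # Package-level energy counter exposed by the Linux powercap/RAPL interface.
    # Note: the counter covers the whole CPU package, not just this process.
    RAPL_ENERGY = "/sys/class/powercap/intel-rapl:0/energy_uj"
    RAPL_MAX = "/sys/class/powercap/intel-rapl:0/max_energy_range_uj"

    def read_int(path):
        with open(path) as f:
            return int(f.read().strip())

    def measure(workload, *args, **kwargs):
        """Estimate the energy (in joules) consumed while running `workload`.
        The counter wraps at max_energy_range_uj, so correct for overflow."""
        max_range = read_int(RAPL_MAX)
        start_e, start_t = read_int(RAPL_ENERGY), time.monotonic()
        result = workload(*args, **kwargs)
        end_e, end_t = read_int(RAPL_ENERGY), time.monotonic()
        delta = end_e - start_e if end_e >= start_e else end_e + max_range - start_e
        joules = delta / 1_000_000
        seconds = end_t - start_t
        print(f"{workload.__name__}: {joules:.3f} J over {seconds:.3f} s "
              f"(~{joules / seconds:.1f} W)")
        return result

    def naive_concat(n=200_000):
        """Placeholder workload: an intentionally wasteful string concatenation."""
        s = ""
        for i in range(n):
            s += str(i)
        return len(s)

    if __name__ == "__main__":
        measure(naive_concat)

    Comparing such readings for a wasteful implementation and an optimized one is the kind of evidence-based reasoning the dissertation builds on, though real studies control for background load and repeat measurements.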

    High-Fidelity Provenance: Exploring the Intersection of Provenance and Security

    In the past 25 years, the World Wide Web has disrupted the way news is disseminated and consumed. However, the euphoria for the democratization of news publishing was soon followed by scepticism, as a new phenomenon emerged: fake news. With no gatekeepers to vouch for it, the veracity of the information served over the World Wide Web became a major public concern. The Reuters Digital News Report 2020 reports that in at least half of the EU member countries, 50% or more of the population is concerned about online fake news. To help address the problem of trust in information communicated over the World Wide Web, it has been proposed to also make available the provenance metadata of the information. Similar to artwork provenance, this would include a detailed track of how the information was created, updated and propagated to produce the result we read, as well as which agents, human or software, were involved in the process. However, keeping track of provenance information is a non-trivial task. Current approaches are often of limited scope and may require modifying existing applications to generate provenance information along with their regular output. This thesis explores how provenance can be automatically tracked in an application-agnostic manner, without having to modify the individual applications. We frame provenance capture as a data flow analysis problem and explore the use of dynamic taint analysis in this context. Our work shows that this approach improves on the quality of provenance captured compared to traditional approaches, yielding what we term high-fidelity provenance. We explore the performance cost of this approach and use deterministic record and replay to bring it down to a more practical level. Furthermore, we create and present the tooling necessary for expanding the use of deterministic record and replay for provenance analysis. The thesis concludes with an application of high-fidelity provenance as a tool for state-of-the-art offensive security analysis, based on the intuition that software too can be misguided by "fake news". This demonstrates that the potential uses of high-fidelity provenance for security extend beyond traditional forensic analysis.
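    The framing of provenance capture as a data flow problem solved via taint propagation can be illustrated with the toy sketch below; it is only an illustration of the general taint-tracking idea, not the system-level instrumentation built in the thesis, and the class, source labels, and operations are hypothetical.

    class Tainted:
        """A toy tainted value: carries the set of sources (provenance labels)
        that influenced it, and propagates those labels through operations."""
        def __init__(self, value, sources=frozenset()):
            self.value = value
            self.sources = frozenset(sources)

        def _combine(self, other):
            other_sources = other.sources if isinstance(other, Tainted) else frozenset()
            return self.sources | other_sources

        def __add__(self, other):
            other_value = other.value if isinstance(other, Tainted) else other
            # The result derives from both operands, so their label sets are merged.
            return Tainted(self.value + other_value, self._combine(other))

        def __repr__(self):
            return f"Tainted({self.value!r}, sources={sorted(self.sources)})"

    def read_source(name, content):
        """Model an input (file, socket, user input) as a labelled taint source."""
        return Tainted(content, {name})

    # Data flows from two inputs into one derived output; the merged label set
    # records which inputs influenced the result, i.e. its provenance.
    headline = read_source("wire://agency/feed", "Quake hits region. ")
    edit     = read_source("user://editor/alice", "Casualties unknown.")
    article  = headline + edit
    print(article)   # the sources list names both the feed and the editor

    Dynamic taint analysis applies this same propagation rule at the instruction or system-call level, which is what allows provenance to be captured without modifying the applications themselves.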