
    Privacy protocols

    Security protocols enable secure communication over insecure channels. Privacy protocols enable private interactions over secure channels. Security protocols set up secure channels using cryptographic primitives. Privacy protocols set up private channels using secure channels. But just like some security protocols can be broken without breaking the underlying cryptography, some privacy protocols can be broken without breaking the underlying security. Such privacy attacks have been leveraged in e-commerce for targeted advertising from the outset; but their depth and scope became apparent only with the overwhelming advent of influence campaigns in politics. The blurred boundaries between privacy protocols and privacy attacks present a new challenge for protocol analysis. Covert channels turn out to be concealed not only below overt channels, but also above them: subversions and the level-below attacks are supplemented by sublimations and the level-above attacks. Comment: 38 pages, 6 figures.
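
    The level-below idea can be made concrete with a toy example. The sketch below is illustrative and not taken from the paper: it hides a bit string in the timing gaps between innocuous messages on an overt channel. The delay constants, the threshold, and the `send` callback are all assumptions for the demo.

```python
import time

# Toy covert timing channel (illustrative; not from the paper).
# The overt messages are innocuous; the secret rides on the gaps
# between them. SHORT/LONG are assumed demo parameters.
SHORT, LONG = 0.05, 0.20  # seconds encoding bit 0 / bit 1

def covert_send(bits, send):
    """Send cover traffic; leak `bits` through inter-message timing."""
    for bit in bits:
        send("heartbeat")                   # overt, harmless payload
        time.sleep(LONG if bit else SHORT)  # covert payload: the delay

def covert_receive(timestamps, threshold=0.125):
    """Recover the bits from observed arrival times."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return [1 if g > threshold else 0 for g in gaps]

# Demo with synthetic timestamps instead of a real clock:
print(covert_receive([0.0, 0.25, 0.35, 0.60]))  # -> [1, 0, 1]
```

    A level-above attack, by contrast, would hide meaning above the secure channel rather than beneath it, which is why timing-style defenses alone do not close the gap the abstract describes.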

    Quality measures for ETL processes: from goals to implementation

    Extraction-transformation-loading (ETL) processes play an increasingly important role in supporting modern business operations. These business processes are centred around artifacts with high variability and diverse lifecycles, which correspond to key business entities. The apparent complexity of these activities has been examined through the prism of business process management, mainly focusing on functional requirements and performance optimization. However, the quality dimension has not yet been thoroughly investigated, and a more human-centric approach is needed to bring these processes closer to business users' requirements. In this paper, we take a first step in this direction by defining a sound model for ETL process quality characteristics and quantitative measures for each characteristic, based on existing literature. Our model shows dependencies among quality characteristics and can provide the basis for subsequent analysis using goal modeling techniques. We showcase the use of goal modeling for ETL process design through a use case, where we employ a goal model that includes quantitative components (i.e., indicators) for the evaluation and analysis of alternative design decisions.
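
    To illustrate how quantitative indicators can drive the comparison of design alternatives in a goal model, here is a minimal sketch. The characteristics, weights, and indicator values are invented for the example and are not the paper's model.

```python
# Toy goal model (invented, not the paper's): quality characteristics
# carry weights; each design alternative is scored via indicators.
GOALS = {                 # weight per quality characteristic (assumed)
    "freshness": 0.40,    # how up to date the loaded data is
    "reliability": 0.35,  # fraction of loads completing without error
    "maintainability": 0.25,
}

ALTERNATIVES = {          # indicator values in [0, 1] (made up)
    "incremental_load": {"freshness": 0.9, "reliability": 0.7, "maintainability": 0.6},
    "full_reload":      {"freshness": 0.5, "reliability": 0.9, "maintainability": 0.8},
}

def score(indicators):
    """Weighted sum of indicators: one simple way to rank designs."""
    return sum(GOALS[g] * indicators[g] for g in GOALS)

best = max(ALTERNATIVES, key=lambda a: score(ALTERNATIVES[a]))
print({a: round(score(v), 3) for a, v in ALTERNATIVES.items()}, "->", best)
```

    A weighted sum is only one aggregation choice; the dependencies among characteristics that the paper models would generally require a richer evaluation than this independent-weights sketch.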

    Enhancing Face Recognition with Deep Learning Architectures: A Comprehensive Review

    Information discernment via facial identification, together with the emergence of innovative frameworks, has made remarkable strides in recent years. This progress has been particularly pronounced in the verification of individual credentials, a practice prominently harnessed by law enforcement agencies to advance forensic science. A multitude of scholarly endeavors have been dedicated to applying deep learning techniques within machine learning models, aiming to facilitate the extraction of distinctive features and their subsequent classification, thereby elevating the precision of unique individual recognition. This inquiry focuses on deep learning methodologies tailored to facial recognition and its matching processes, centering on the augmentation of accuracy through training models with expansive datasets. Within this paper, a comprehensive survey is conducted of the diverse strategies utilized in facial recognition, which in turn delves into the intricacies and challenges that underlie facial recognition within imagery analysis.
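
    A pattern common to the deep-learning pipelines such surveys cover is to map each face to a fixed-length embedding and compare embeddings by cosine similarity. The sketch below illustrates only the matching step; `embed` is a stand-in stub for a real trained network (e.g., a FaceNet-style CNN), and the 0.5 threshold is an assumption.

```python
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    """Stand-in for a deep network mapping a face crop to a unit vector.
    Deterministic per image (within a run) so identical images match."""
    rng = np.random.default_rng(abs(hash(image.tobytes())) % 2**32)
    v = rng.standard_normal(128)
    return v / np.linalg.norm(v)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(img1, img2, threshold=0.5):  # threshold is an assumption
    return cosine_similarity(embed(img1), embed(img2)) >= threshold

img_a = np.zeros((112, 112, 3), dtype=np.uint8)     # dummy face crops
img_b = np.full((112, 112, 3), 255, dtype=np.uint8)
print(same_person(img_a, img_a))  # -> True (identical inputs)
print(same_person(img_a, img_b))  # -> (almost surely) False
```

    The accuracy gains from larger training datasets that the paper discusses show up in this picture as better-separated embeddings, so that a single threshold cleanly splits matching from non-matching pairs.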

    Access and information flow control to secure mobile web service compositions in resource constrained environments

    The growing use of mobile web services such as electronic health record systems and applications like Twitter and Facebook has increased interest in robust mechanisms for securing such information sharing services. Common security mechanisms such as access control and information flow control are either restrictive or weak when used independently: they prevent applications from sharing data usefully, and/or allow private information to leak. Typically, when services are composed there is a resource that some or all of the services involved in the composition need to share. However, security problems arise during service composition because the resulting service is made up of services from different security domains. A key issue that arises, and that we address in this thesis, is that of enforcing secure information flow control during service composition to prevent illegal access and propagation of information between the participating services. This thesis describes a model that combines access control and information flow control in one framework. We specifically consider a case study of an e-health service application, and examine how constraints like location and context dependencies impact authentication and authorization. Furthermore, we consider how data sharing applications such as the e-health service handle unauthorized users and insecure propagation of information in resource constrained environments. Our framework addresses illegitimate information access and propagation by making use of program dependence graphs (PDGs), which use path conditions as necessary conditions for secure information flow. The advantage of this approach is that information is only propagated if the criteria for data sharing are verified. Our solution offers good performance and fast authentication while taking bandwidth limitations into account. A security analysis shows the theoretical improvements our scheme offers. The results obtained confirm that the framework accommodates the CIA triad (the confidentiality, integrity, and availability model that guides information security policies) and can motivate further research in this field.
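
    A minimal sketch of the PDG idea described above, with path conditions gating propagation. The nodes, conditions, and context fields are illustrative, not the thesis's implementation.

```python
# Toy program dependence graph (illustrative, not the thesis code):
# information may flow along an edge only if its path condition holds.
PDG = {
    # source -> [(target, path_condition)]
    "patient_record": [
        ("billing",     lambda ctx: ctx["role"] == "clerk"),
        ("doctor_view", lambda ctx: ctx["role"] == "doctor"
                                    and ctx["location"] == "hospital"),
    ],
}

def may_flow(src, dst, ctx):
    """Allow src -> dst only if some edge's path condition is satisfied."""
    return any(t == dst and cond(ctx) for t, cond in PDG.get(src, []))

ctx = {"role": "doctor", "location": "hospital"}
print(may_flow("patient_record", "doctor_view", ctx))  # True
print(may_flow("patient_record", "billing", ctx))      # False: condition fails
```

    Because path conditions are only necessary conditions for flow, a failed condition proves a flow impossible, while a satisfied one still has to be checked against the access control policy in the combined framework.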

    A Value Function Approach to Information Operations MOEs: A Preliminary Study

    A value focused thinking approach is applied to information operations. A preliminary value hierarchy for information operations is constructed by extracting the values of senior military leadership from existing doctrine. To identify these key values, the applicable existing doctrine was reviewed and summarized, and hierarchical representations of the values within each reviewed doctrine were developed. A value hierarchy requires that supporting objectives be mutually exclusive and collectively exhaustive. Within this analysis, these requirements are enforced in part by developed definitions, which serve as tests to maintain mutual exclusivity. An exhaustive set of supporting values is also guaranteed by identifying a spanning set of values that directly support the overall objective of information operations. This preliminary value hierarchy serves as the basis for continuing research. The implications of this research include the construction of a prescriptive model in which the effectiveness of current and future systems can be assessed on a common scale. Further, the effectiveness of developing technologies can be assessed and their value determined with respect to the values of senior military leadership. With this, the impact of holes in our suite of information warfare systems can also be assessed in terms of their effect on fulfilling the values of military leadership.
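
    The common-scale assessment such a hierarchy enables can be sketched as an additive value function. The objectives, weights, and scores below are invented for illustration and are not drawn from the reviewed doctrine.

```python
# Toy two-level value hierarchy scored by an additive value function,
# putting different systems on one common scale. Supporting measures
# must be mutually exclusive and collectively exhaustive; all names,
# weights, and scores here are invented.
HIERARCHY = {   # top-level objective -> (weight, supporting measures)
    "degrade_adversary_info": (0.6, ["deny", "disrupt"]),
    "protect_own_info":       (0.4, ["detect", "restore"]),
}
LEAF_WEIGHTS = {"deny": 0.5, "disrupt": 0.5, "detect": 0.7, "restore": 0.3}

def overall_value(scores):
    """scores: value in [0, 1] per measure of effectiveness (MOE)."""
    return sum(w * sum(LEAF_WEIGHTS[m] * scores[m] for m in measures)
               for w, measures in HIERARCHY.values())

system_a = {"deny": 0.8, "disrupt": 0.4, "detect": 0.9, "restore": 0.5}
print(round(overall_value(system_a), 3))  # 0.672 on this toy scale
```

    Scoring two candidate systems with the same function is what makes the "common scale" claim operational: the numbers are comparable because both roll up through the same weights.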

    Towards Efficiency and Quality Assurance in Threat Analysis of Software Systems

    Context: Security threats have been a growing concern in many organizations. Organizations developing software products strive to plan for security as early as possible to mitigate such potential threats. In the design phase of the software development life-cycle, teams of experts routinely analyze the system architecture and design to find potential security threats. Objective: The goal of this research is to improve on the performance of existing threat analysis techniques and support practitioners with automation and tool support. To understand the inner workings of existing threat analysis methodologies, we also conduct a systematic literature review examining 26 methodologies in detail. Our industrial partners confirm that existing techniques are labor intensive and do not provide quality guarantees about their outcomes. Method: We conducted empirical studies to build an in-depth understanding of existing techniques (a systematic literature review (SLR) and controlled experiments). Further, we rely on empirical case studies for ongoing validation of an attempted technique performance improvement. Findings: We have found that a novel risk-first approach can help reduce the labor while producing the same level of outcome quality in a shorter period of time. Further, we suggest that the key to a successful application of this approach is twofold. First, widening the analysis scope to end-to-end scenarios guides the analyst to focus on important assets. Second, appropriate model abstractions are required to manage the cognitive load of the human analysts. We have also found that reasoning about security in a formal setting requires extending the existing notations with security semantics. Minimal model extensions for doing so include security contracts for system nodes handling sensitive information. In such a setting, the analysis can be automated and can, to some extent, provide completeness guarantees. Future work: In the future, we plan to further study the analysis completeness guarantees. In particular, we plan to improve on the analysis automation and investigate complementary techniques for analysis completeness (namely, informal pattern-based techniques). We also plan to work on the disconnect between planned and implemented security.
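
    The "security contracts for system nodes" finding suggests a simple automatable check. The sketch below is an assumption-laden illustration rather than the thesis tooling: it flags flows of sensitive data into nodes whose contracts do not cover the data label.

```python
# Toy automated check (an assumption, not the thesis tooling): nodes
# carry security contracts; flag sensitive data flowing into a node
# whose contract does not cover that label.
NODES = {   # node -> labels its contract promises to protect
    "web_frontend": set(),
    "auth_service": {"credentials"},
    "log_sink":     set(),
}
FLOWS = [   # (source, target, data_label)
    ("web_frontend", "auth_service", "credentials"),
    ("auth_service", "log_sink",     "credentials"),  # leak: no contract
]

def find_violations(nodes, flows, sensitive=frozenset({"credentials"})):
    return [(s, t, d) for s, t, d in flows
            if d in sensitive and d not in nodes[t]]

for v in find_violations(NODES, FLOWS):
    print("potential threat:", v)  # flags the auth_service -> log_sink flow
```

    Exhaustively walking every flow against every contract is also what makes the limited completeness guarantee plausible: the check cannot overlook a modeled flow, though it can only be as complete as the model itself.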

    ID5.2 Roadmap for KRSM RTD

    Roadmap for KRSM RTD activities. The work on this publication has been sponsored by the TENCompetence Integrated Project, which is funded by the European Commission's 6th Framework Programme, priority IST/Technology Enhanced Learning. Contract 027087 [http://www.tencompetence.org].

    Data Mining Algorithms for Internet Data: from Transport to Application Layer

    Nowadays we live in a data-driven world. Advances in data generation, collection, and storage technology have enabled organizations to gather data sets of massive size. Data mining is a discipline that blends traditional data analysis methods with sophisticated algorithms to handle the challenges posed by these new types of data sets. The Internet is a complex and dynamic system in which new protocols and applications arise at a constant pace. All these characteristics make the Internet a valuable and challenging data source and application domain for research, both at the Transport layer, analyzing network traffic flows, and up at the Application layer, focusing on the ever-growing next-generation web services: blogs, micro-blogs, online social networks, photo sharing services, and many other applications (e.g., Twitter, Facebook, Flickr, etc.). In this thesis we focus on the study, design, and development of novel algorithms and frameworks to support large-scale data mining activities over huge and heterogeneous data volumes, with a particular focus on Internet data as the data source, targeting network traffic classification, online social network analysis, recommendation systems, cloud services, and Big Data.
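
    As a taste of the transport-layer task, here is a toy flow classifier: a nearest-neighbour rule over simple flow statistics. The features, training flows, and labels are invented; real traffic classifiers use richer features and learned models.

```python
import math

# Toy flow classifier (invented data): label a flow from simple
# statistics (packet count, mean packet size, duration).
TRAIN = [   # ((packets, mean_packet_size_bytes, duration_s), label)
    ((12,    90,  0.4), "dns"),
    ((350, 1200, 30.0), "video"),
    ((40,   600,  5.0), "web"),
]

def classify(flow):
    """1-nearest-neighbour over raw flow features (a real classifier
    would normalize features and train on many labeled flows)."""
    return min(TRAIN, key=lambda ex: math.dist(ex[0], flow))[1]

print(classify((20, 550, 4.2)))  # -> "web" on this toy data
```

    The same supervised pattern, with features swapped for user-interaction or content statistics, carries over to the Application-layer tasks the thesis targets, such as social network analysis and recommendation.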