24,821 research outputs found

    A Forensically Sound Adversary Model for Mobile Devices

    In this paper, we propose an adversary model to facilitate forensic investigations of mobile devices (e.g., Android, iOS, and Windows smartphones) that can be readily adapted to the latest mobile device technologies. This is essential given the rapidly changing nature of mobile device technologies. An integral principle of, and significant constraint upon, forensic practice is forensic soundness. Our adversary model specifically considers and integrates the constraints of forensic soundness on the adversary, in our case a forensic practitioner. One construction of the adversary model is an evidence collection and analysis methodology for Android devices. Using this methodology with six popular cloud apps, we successfully extracted a variety of information of forensic interest from both the external and internal storage of the mobile device.

    Global Semantic Integrity Constraint Checking for a System of Databases

    In today’s emerging information systems, it is natural for data to be distributed across multiple sites. We define a System of Databases (SyDb) as a collection of autonomous and heterogeneous databases. R-SyDb (System of Relational Databases) is a restricted form of SyDb, referring to a collection of independent relational databases; similarly, X-SyDb (System of XML Databases) refers to a collection of XML databases. Global integrity constraints ensure the integrity and consistency of data spanning multiple databases. In this dissertation, we present (i) Constraint Checker, a general framework for a mobile-agent-based approach to checking global constraints on an R-SyDb, and (ii) XConstraint Checker, a general framework for checking global XML constraints on an X-SyDb. Furthermore, we formalize several efficient algorithms for varying semantic integrity constraints involving both arithmetic and aggregate predicates. The algorithms take as input an update statement and the list of all global semantic integrity constraints with arithmetic or aggregate predicates, and output sub-constraints to be executed on remote sites. The algorithms are efficient because (i) the constraint check is carried out at compile time, i.e., before the update statement executes, saving time and resources by avoiding rollbacks, and (ii) the implementation exploits parallelism. We have also implemented prototypes of the systems and algorithms for both R-SyDb and X-SyDb, and we present performance evaluations of the system.
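    The compile-time decomposition described in this abstract can be illustrated with a minimal sketch (all names, values, and the specific derivation are illustrative assumptions, not the dissertation's actual algorithm): given a global aggregate constraint over values held at several sites and a pending update, the checker derives a sub-constraint that the updated site can evaluate locally before the update commits, avoiding a rollback.

```python
# Minimal sketch of compile-time sub-constraint generation for a
# hypothetical global aggregate constraint:
#   SUM(stock over all sites) <= CAP
# All names and numbers are illustrative, not from the dissertation.

CAP = 1000

def make_sub_constraint(update_delta, site_totals, target_site):
    """Derive the check the target site runs locally before executing
    the update, so a violating update is refused without a rollback."""
    # Totals at the other sites are collected once, at compile time.
    others = sum(t for s, t in site_totals.items() if s != target_site)
    # After the update, the target site may hold at most CAP - others.
    bound = CAP - others
    return lambda current_local: current_local + update_delta <= bound

site_totals = {"site_a": 300, "site_b": 450, "site_c": 200}
check = make_sub_constraint(update_delta=60, site_totals=site_totals,
                            target_site="site_a")
print(check(300))  # 360 <= 350 -> False: update would violate the cap
print(check(280))  # 340 <= 350 -> True: update is safe locally
```

    In the same spirit, a real checker would ship one such sub-constraint to each affected remote site and evaluate them in parallel.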

    Handling Data Consistency through Spatial Data Integrity Rules in Constraint Decision Tables


    Context constraint integration and validation in dynamic web service compositions

    System architectures that cross organisational boundaries are usually implemented based on Web service technologies due to their inherent interoperability benefits. With increasing flexibility requirements, such as on-demand service provision, a dynamic approach to service architecture focussing on composition at runtime is needed. The possibility of technical faults, but also of violations of functional and semantic constraints, requires a comprehensive notion of context that captures composition-relevant aspects. Context-aware techniques are consequently required to support constraint validation for dynamic service composition. We present techniques to respond to problems occurring during the execution of dynamically composed Web services implemented in WS-BPEL. A notion of context, covering physical and contractual faults and violations, is used to safeguard composed service executions dynamically. Our aim is to present an architectural framework from an application-oriented perspective, addressing practical considerations of a technical framework.

    Interoperability, Trust Based Information Sharing Protocol and Security: Digital Government Key Issues

    Improved interoperability between public and private organizations is of key significance to making digital government successful. Digital government interoperability, information sharing protocols, and security are considered the key issues for achieving an advanced stage of digital government. Seamless interoperability is essential to share information between diverse and widely dispersed organisations across several network environments using computer-based tools. Digital government must ensure the security of its information systems, including computers and networks, in order to provide better service to citizens. Governments around the world are increasingly turning to information sharing and integration to solve problems in programs and policy areas. Problems of global concern such as disease detection and management, terrorism, immigration and border control, and illegal drug trafficking demand information sharing, harmonization, and cooperation among government agencies within a country and across national borders. A number of daunting challenges stand in the way of an efficient information sharing protocol. A secure and trusted information-sharing protocol is required to enable users to interact and share information easily and reliably across many diverse networks and databases globally.

    Towards a Framework for Developing Mobile Agents for Managing Distributed Information Resources

    Distributed information management tools allow users to author, disseminate, discover and manage information within large-scale networked environments, such as the Internet. Agent technology provides the flexibility and scalability necessary to develop such distributed information management applications. We present a layered organisation that is shared by the specific applications that we build. Within this organisation we describe an architecture where mobile agents can move across distributed environments, integrate with local resources and other mobile agents, and communicate their results back to the user

    State-of-the-art on evolution and reactivity

    This report starts, in Chapter 1, by outlining aspects of querying and updating resources on the Web and on the Semantic Web, including the development of query and update languages to be carried out within the Rewerse project. From this outline, it becomes clear that several existing research areas and topics are of interest for this work in Rewerse. In the remainder of this report we present state-of-the-art surveys in a selection of such areas and topics. More precisely: in Chapter 2 we give an overview of logics for reasoning about state change and updates; Chapter 3 is devoted to briefly describing existing update languages for the Web, and also for updating logic programs; in Chapter 4 event-condition-action rules, both in the context of active database systems and in the context of semistructured data, are surveyed; in Chapter 5 we give an overview of some relevant rule-based agent frameworks.
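    The event-condition-action (ECA) rules surveyed in Chapter 4 follow a simple pattern that can be sketched as follows (the rule registry, event names, and the example rule are illustrative assumptions, not taken from the report): on an event, evaluate the rule's condition, and execute the action only if the condition holds.

```python
# Minimal sketch of ECA (event-condition-action) rule evaluation,
# as used in active database systems. All names are illustrative.

rules = []   # registry of (event_type, condition, action) triples
log = []     # records the actions that fired

def add_rule(event_type, condition, action):
    """Register a rule: ON event_type IF condition DO action."""
    rules.append((event_type, condition, action))

# ON insert into orders  IF amount > 100  DO flag the order for review
add_rule("insert_order",
         condition=lambda e: e["amount"] > 100,
         action=lambda e: log.append(f"review order {e['id']}"))

def signal(event_type, event):
    """Dispatch an event: fire every matching rule whose condition holds."""
    for etype, condition, action in rules:
        if etype == event_type and condition(event):
            action(event)

signal("insert_order", {"id": 1, "amount": 250})  # condition holds: fires
signal("insert_order", {"id": 2, "amount": 40})   # condition fails: no-op
print(log)  # ['review order 1']
```

    In an active database the events would be table updates and the actions further updates or notifications; the semistructured-data variants surveyed in the report apply the same triple to XML documents.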

    Where Do Facts Matter? The Digital Paradox in Magazines' Fact-Checking Practices

    Print magazines are unique among nonfiction media in their dedication of staff and resources to in-depth, word-by-word verification of stories. Over time, this practice has established magazines’ reputation for reliability, helped them retain loyal readers amid a glut of information sources, and protected them from litigation. But during the past decade, websites, mobile platforms, and social media have expanded the types of stories and other content that magazines provide readers. Doing so has shortened the time between the creation and dissemination of content, challenging and in some cases squeezing out fact-checkers’ participation. This study examines the procedures applied to stories in magazines and their non-print platforms, seeking to discern what decisions were made in response to the speed of digital publication, what effects these decisions have had, what lessons have been learned and what changes have been made over time. The results suggest that fact-checking practices for print content remain solidly in place at most magazines, if executed with diminished resources; however, magazine media are also exploring new processes to ensure accuracy and protect their reputations in an accelerated media environment

    Design and implementation of a multi-modal biometric system for company access control

    This paper is about the design, implementation, and deployment of a multi-modal biometric system to grant access to a company structure and to internal zones in the company itself. Face and iris have been chosen as biometric traits. Face is feasible for non-intrusive checking with minimal cooperation from the subject, while iris supports a very accurate recognition procedure at a higher grade of invasiveness. The recognition of the face trait is based on Local Binary Patterns histograms, and Daugman's method is implemented for the analysis of the iris data. The recognition process may require either the acquisition of the user's face only or the serial acquisition of both the user's face and iris, depending on the confidence level of the decision with respect to the set of security levels and requirements, stated formally in the Service Level Agreement at a negotiation phase. The quality of the decision depends on the setting of different thresholds in the decision modules for the two biometric traits. Whenever the quality of the decision is not good enough, the system activates rules that ask for new acquisitions (and decisions), possibly with different threshold values, resulting in a system whose behaviour is not fixed and predefined but complies with the actual acquisition context. Rules are formalized as deduction rules and grouped together to represent "response behaviors" according to the previous analysis. Therefore, there are different possible working flows, since the actual response of the recognition process depends on the output of the decision-making modules that compose the system. Finally, the deployment phase is described, together with the results of testing based on the AT&T Face Database and the UBIRIS database.
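    The serial face-then-iris flow described in this abstract can be sketched as follows (the thresholds, scores, and function names are illustrative placeholders, not the paper's actual values or rule formalism): the face score is checked first, and the iris is acquired only when the face decision is not confident enough.

```python
# Minimal sketch of a serial face-then-iris decision flow.
# Thresholds and match scores are illustrative placeholders.

FACE_ACCEPT = 0.80   # accept on face alone at or above this score
FACE_REJECT = 0.40   # reject on face alone below this score
IRIS_ACCEPT = 0.90   # iris threshold applied to the borderline cases

def decide(face_score, acquire_iris):
    """Return 'accept' or 'reject'; fall back to a serial iris
    acquisition when the face decision is not confident enough."""
    if face_score >= FACE_ACCEPT:
        return "accept"
    if face_score < FACE_REJECT:
        return "reject"
    # Borderline face match: acquire and match the iris as well.
    iris_score = acquire_iris()
    return "accept" if iris_score >= IRIS_ACCEPT else "reject"

print(decide(0.85, acquire_iris=lambda: 0.50))  # accept (face alone)
print(decide(0.60, acquire_iris=lambda: 0.95))  # accept (iris resolves it)
print(decide(0.60, acquire_iris=lambda: 0.70))  # reject
```

    In the system described above, the thresholds would not be constants but would be set per security level from the Service Level Agreement, and the fallback would be driven by the formalized response-behavior rules rather than a fixed if-chain.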