
    Bringing self assessment home: repository profiling and key lines of enquiry within DRAMBORA

    Digital repositories are a manifestation of complex organizational, financial, legal, technological, procedural, and political interrelationships. Accompanying each of these are innate uncertainties, exacerbated by the relative immaturity of understanding prevalent within the digital preservation domain. Recent efforts have sought to identify core characteristics that must be demonstrable by successful digital repositories, expressed in the form of check-list documents intended to support the processes of repository accreditation and certification. In isolation, though, the available guidelines lack practical applicability; confusion over evidential requirements and difficulties associated with the diversity that exists among repositories (in terms of mandate, available resources, supported content and legal context) are particularly problematic. A gap exists between the available criteria and the ways and extent to which conformity can be demonstrated. The Digital Repository Audit Method Based on Risk Assessment (DRAMBORA) is a methodology for undertaking repository self assessment, developed jointly by the Digital Curation Centre (DCC) and DigitalPreservationEurope (DPE). DRAMBORA requires repositories to expose their organization, policies and infrastructures to rigorous scrutiny through a series of highly structured exercises, enabling them to build a comprehensive registry of their most pertinent risks, arranged into a structure that facilitates effective management. It draws on experiences accumulated throughout 18 evaluative pilot assessments undertaken in an internationally diverse selection of repositories, digital libraries and data centres (including institutions and services such as the UK National Digital Archive of Datasets, the National Archives of Scotland, Gallica at the National Library of France and the CERN Document Server). Other organizations, such as the British Library, have been using sections of DRAMBORA within their own risk assessment procedures. Despite the attractive benefits of a bottom-up approach, there are implicit challenges posed by neglecting a more objective perspective. Following a sustained period of pilot audits undertaken by DPE, DCC and the DELOS Digital Preservation Cluster to evaluate DRAMBORA, it was acknowledged that, had project members not been present to facilitate each assessment and contribute their objective, external perspectives, the results might have been less useful. Consequently, DRAMBORA has developed in a number of ways: to enable knowledge transfer from the responses of comparable repositories, and to incorporate more opportunities for structured question sets, or key lines of enquiry, that provoke more comprehensive awareness of the applicability of particular threats and opportunities.
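
    The risk registry at the heart of the methodology lends itself to a simple structured representation. Below is a minimal Python sketch of what one entry in such a registry might look like; the field names, the 1-6 scoring scales and the severity formula are illustrative assumptions, not DRAMBORA's actual schema.

        from dataclasses import dataclass, field

        @dataclass
        class RiskEntry:
            """One entry in a repository risk registry (illustrative fields only)."""
            identifier: str           # e.g. "R01"
            description: str          # what could go wrong
            owner: str                # role responsible for managing the risk
            probability: int          # assumed ordinal scale, 1 (rare) to 6 (frequent)
            impact: int               # assumed ordinal scale, 1 (minor) to 6 (catastrophic)
            mitigations: list = field(default_factory=list)

            @property
            def severity(self) -> int:
                # A common convention: probability x impact, used to rank
                # risks so the most pressing ones get management attention.
                return self.probability * self.impact

        registry = [
            RiskEntry("R01", "Loss of funding for storage infrastructure",
                      "Repository manager", probability=2, impact=6),
            RiskEntry("R02", "Obsolescence of a supported ingest format",
                      "Preservation officer", probability=4, impact=3),
        ]
        # Surface the most severe risks first, as a management-oriented view.
        for risk in sorted(registry, key=lambda r: r.severity, reverse=True):
            print(risk.identifier, risk.severity, risk.description)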

    A comprehensive meta-analysis of cryptographic security mechanisms for cloud computing

    The concept of cloud computing offers measurable computational or information resources as a service over the Internet. The major motivation behind the cloud setup is economic benefit, because it promises reduced operational and infrastructural expenditure. Before this becomes a reality, however, some impediments and hurdles must be tackled, the most profound of which are security, privacy and reliability issues. As user data is revealed to the cloud, it leaves the protection sphere of the data owner, which raises partly new security and privacy concerns. This work focuses on these issues across the various cloud services and deployment models by spotlighting their major challenges. While classical cryptography is an ancient discipline, modern cryptography, developed mostly in the last few decades, is the body of techniques that must be applied to ensure strong security and privacy mechanisms in today's real-world scenarios. The technological solutions and the short- and long-term research goals of cloud security are described and addressed using classical cryptographic mechanisms as well as modern ones. This work explores new directions in cloud computing security, while highlighting the correct selection of these fundamental technologies from a cryptographic point of view.
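
    A classical mechanism in this space is encrypting data on the client side so that it never leaves the owner's protection sphere in plaintext. The following is a minimal sketch using the Fernet recipe from the Python cryptography package (authenticated symmetric encryption); the upload call is a hypothetical placeholder for any object-store API.

        from cryptography.fernet import Fernet

        # The key stays with the data owner; the cloud provider never sees it.
        key = Fernet.generate_key()
        cipher = Fernet(key)

        plaintext = b"contents of a confidential report"
        token = cipher.encrypt(plaintext)   # AES-CBC with an HMAC integrity tag

        # upload_to_cloud(...) is a hypothetical stand-in for a provider's API:
        # upload_to_cloud(bucket="reports", name="q3.enc", data=token)

        # Only the key holder can recover, and integrity-check, the data.
        assert cipher.decrypt(token) == plaintext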

    Discovery and validation for composite services on the semantic web

    Current technology for locating and validating composite services is not sufficient, for the following reasons:
    • Current frameworks do not have the capacity to create complete service descriptions, since they do not model all the functional aspects together (i.e. the purpose of a service, state transitions, data transformations). Those that deal with behavioural descriptions are unable to model the ordering constraints between concurrent interactions completely, since they do not consider the time taken by interactions. Furthermore, there is no mechanism to assess the correctness of a functional description.
    • Existing semantic-based matching techniques cannot locate services that conform to global constraints. Semantic-based techniques use ontological relationships to perform mappings between the terms in service descriptions and user requests. Therefore, unlike techniques that perform either direct string matching or schema matching, semantic-based approaches can match descriptions created with different terminologies and achieve a higher recall. Global constraints are restrictions on the values of two or more attributes of multiple constituent services.
    • Current techniques that generate and validate global communication models of composite services yield inaccurate results (i.e. detect phantom deadlocks or ignore actual deadlocks), since they either (i) do not support all types of interactions (i.e. only send and receive, not service and invoke) or (ii) do not consider the time taken by interactions.
    This thesis presents novel ideas to deal with the stated limitations. First, we propose two formalisms (WS-ALUE and WS-π-calculus) for creating functional and behavioural descriptions respectively. WS-ALUE extends the Description Logic language ALUE with some new predicates and models all the functional aspects together. WS-π-calculus extends π-calculus with Interval Time Logic (ITL) axioms. ITL axioms accurately model temporal relationships between concurrent interactions. A technique comparing a WS-π-calculus description of a service against its WS-ALUE description is introduced to detect any errors that are not equally reflected in both descriptions. We propose novel semantic-based matching techniques to locate composite services that conform to global constraints. These constraints are of two types: strictly dependent or independent. A constraint is of the former type if, once a value is assigned to one restricted attribute, the values of all the remaining restricted attributes are uniquely determined. Any global constraint that is not strictly dependent is independent. A complete and correct technique that locates services conforming to strictly dependent constraints in polynomial time is defined using a three-dimensional data cube. The proposed approach for independent constraints is a heuristic: it is correct but not complete, and incorporates user-defined objective functions, greedy algorithms and domain rules to locate conforming services. Finally, we propose a new approach to generate global communication models (of composite services) that are free of deadlocks and synchronisation conflicts. This approach is an extension of a transitive temporal reasoning mechanism.
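
    To make the strictly dependent case concrete, here is a minimal Python sketch (not the thesis's data-cube implementation) of how fixing one attribute's value uniquely determines the remaining restricted attributes, so conformance reduces to lookups rather than combinatorial search. The travel-booking constraint and all names are illustrative assumptions.

        # Strictly dependent global constraint (illustrative): in a travel
        # composition, choosing the flight's arrival city uniquely determines
        # which city the hotel and the car-rental service must serve.
        dependency = {
            # arrival city -> values forced on the other restricted attributes
            "Paris": {"hotel.city": "Paris", "car.pickup_city": "Paris"},
            "Tokyo": {"hotel.city": "Tokyo", "car.pickup_city": "Tokyo"},
        }

        candidates = [
            {"flight.arrival": "Paris", "hotel.city": "Paris", "car.pickup_city": "Paris"},
            {"flight.arrival": "Paris", "hotel.city": "Lyon",  "car.pickup_city": "Paris"},
        ]

        def conforms(binding: dict) -> bool:
            """Check a candidate binding against the strictly dependent constraint.

            One lookup per restricted attribute: linear in the number of
            attributes, which is what makes polynomial-time matching possible
            for this class of constraints."""
            forced = dependency.get(binding["flight.arrival"], {})
            return all(binding.get(attr) == value for attr, value in forced.items())

        print([conforms(b) for b in candidates])   # [True, False]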

    Semantics-aware planning methodology for automatic web service composition

    Service-Oriented Computing (SOC) has been a major research topic in the past years. It is based on the idea of composing distributed applications, even in heterogeneous environments, by discovering and invoking network-available Web Services to accomplish complex tasks when no existing service can satisfy the user request on its own. Service-Oriented Architecture (SOA) is a key design principle that facilitates building these autonomous, platform-independent Web Services. However, in distributed environments, using services without considering their underlying semantics, whether functional semantics or quality guarantees, can negatively affect a composition process by raising intermittent failures or leading to slow performance. More recently, Artificial Intelligence (AI) planning technologies have been exploited to facilitate automated composition. But most AI planning based algorithms do not scale well as the number of Web Services increases, and there is no guarantee that a solution for a composition problem will be found even if one exists. AI Planning Graph tries to address various limitations of traditional AI planning by providing a unique search space in a directed layered graph. However, the existing AI Planning Graph algorithm focuses only on finding complete solutions, without taking into account services that do not contribute to the goals. This can make building the graph fail when many services are available but most of them are irrelevant to the goals. This dissertation puts forward the concept of a more intelligent planning mechanism that combines semantics-aware service selection with a goal-directed planning algorithm. Based on this concept, a new planning system, Semantics Enhanced web service Mining (SEwsMining), has been developed. Semantics-aware service selection is achieved by calculating on-demand multi-attribute semantic similarity based on semantic annotations (QWSMO-Lite). The planning algorithm is a substantial revision of the AI GraphPlan algorithm. To reduce the size of the planning graph, a bi-directional planning strategy has been developed.
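
    As a rough illustration of the goal-directed idea (a sketch, not the SEwsMining algorithm), the Python fragment below expands a planning graph forward from the available inputs but only admits services that survive a backward relevance pass from the goals; the service catalogue and all names are hypothetical.

        # Each service maps a set of required inputs to a set of produced outputs.
        SERVICES = {
            "geocode": ({"address"}, {"coordinates"}),
            "weather": ({"coordinates"}, {"forecast"}),
            "stock":   ({"ticker"}, {"price"}),    # irrelevant to the goal below
        }

        def relevant_services(goals: set) -> set:
            """Backward pass: keep only services that can contribute to the goals."""
            keep, needed = set(), set(goals)
            changed = True
            while changed:
                changed = False
                for name, (inputs, outputs) in SERVICES.items():
                    if name not in keep and outputs & needed:
                        keep.add(name)
                        needed |= inputs
                        changed = True
            return keep

        def plan(available: set, goals: set):
            """Forward layer expansion restricted to goal-relevant services."""
            keep = relevant_services(goals)
            layers, facts = [], set(available)
            while not goals <= facts:
                layer = [n for n in keep for (i, o) in [SERVICES[n]]
                         if i <= facts and not o <= facts]
                if not layer:
                    return None                     # goal unreachable
                for n in layer:
                    facts |= SERVICES[n][1]
                layers.append(layer)
            return layers

        print(plan({"address"}, {"forecast"}))      # [['geocode'], ['weather']]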

    Proceedings of the 2004 ONR Decision-Support Workshop Series: Interoperability

    In August of 1998 the Collaborative Agent Design Research Center (CADRC) of the California Polytechnic State University in San Luis Obispo (Cal Poly) approached Dr. Phillip Abraham of the Office of Naval Research (ONR) with the proposal for an annual workshop focusing on emerging concepts in decision-support systems for military applications. The proposal was considered timely by the ONR Logistics Program Office for at least two reasons. First, rapid advances in information systems technology over the past decade had produced distributed collaborative computer-assistance capabilities with profound potential for providing meaningful support to military decision makers. Indeed, some systems based on these new capabilities, such as the Integrated Marine Multi-Agent Command and Control System (IMMACCS) and the Integrated Computerized Deployment System (ICODES), had already reached the field-testing and final product stages, respectively. Second, over the past two decades the US Navy and Marine Corps had been increasingly challenged by missions demanding the rapid deployment of forces into hostile or devastated territories with minimal or non-existent indigenous support capabilities. Under these conditions Marine Corps forces had to rely mostly, if not entirely, on sea-based support and sustainment operations. Particularly today, operational strategies such as Operational Maneuver From The Sea (OMFTS) and Sea To Objective Maneuver (STOM) are very much in need of intelligent, near real-time and adaptive decision-support tools to assist military commanders and their staff under conditions of rapid change and overwhelming data loads. In the light of these developments the Logistics Program Office of ONR considered it timely to provide an annual forum for the interchange of ideas, needs and concepts that would address the decision-support requirements and opportunities in combined Navy and Marine Corps sea-based warfare and humanitarian relief operations. The first ONR Workshop was held April 20-22, 1999 at the Embassy Suites Hotel in San Luis Obispo, California. It focused on advances in technology, with particular emphasis on an emerging family of powerful computer-based tools, and concluded that the most able members of this family of tools appear to be computer-based agents that are capable of communicating within a virtual environment of the real world. From 2001 onward the venue of the Workshop moved from the West Coast to Washington, and in 2003 the sponsorship was taken over by ONR's Littoral Combat/Power Projection (FNC) Program Office (Program Manager: Mr. Barry Blumenthal). Themes and keynote speakers of past Workshops have included:
    1999: 'Collaborative Decision Making Tools'. Vadm Jerry Tuttle (USN Ret.); LtGen Paul Van Riper (USMC Ret.); Radm Leland Kollmorgen (USN Ret.); and Dr. Gary Klein (Klein Associates)
    2000: 'The Human-Computer Partnership in Decision-Support'. Dr. Ronald DeMarco (Associate Technical Director, ONR); Radm Charles Munns; Col Robert Schmidle; and Col Ray Cole (USMC Ret.)
    2001: 'Continuing the Revolution in Military Affairs'. Mr. Andrew Marshall (Director, Office of Net Assessment, OSD); and Radm Jay M. Cohen (Chief of Naval Research, ONR)
    2002: 'Transformation ...'. Vadm Jerry Tuttle (USN Ret.); and Steve Cooper (CIO, Office of Homeland Security)
    2003: 'Developing the New Infostructure'. Richard P. Lee (Assistant Deputy Under Secretary, OSD); and Michael O'Neil (Boeing)
    2004: 'Interoperability'. MajGen Bradley M. Lott (USMC), Deputy Commanding General, Marine Corps Combat Development Command; Donald Diggs, Director, C2 Policy, OASD (NII)

    Agent-based workflow model for enterprise collaboration

    A workflow management system supports the automation of business processes, where a collection of tasks is organized among participants according to a defined set of rules to accomplish some business goal. The service-orientated computing paradigm is transforming traditional workflow management from a closed, centralized control system into a dynamic exchange of information and business processes across organizations. Moreover, agent-based workflow, from another point of view, provides a flexible mechanism for dynamic workflow coordination at run time. In this context, the combination of Web services and software agents provides great flexibility for discovering and establishing relationships among business partners. This thesis proposes an agent-based workflow model in support of inter-enterprise workflow management. In the proposed model, agent-based technology enables workflow coordination at both the inter- and intra-enterprise levels, while semantic Web and Web services based technologies provide the infrastructure for messaging, service description, service discovery, workflow ontology, and workflow enactment. Coordination agents and resource agents are used with a Contract Net protocol based bidding mechanism to construct a dynamic workflow process among business partners. The agent system architecture, workflow models and related components are described. A prototype system is implemented for the purpose of designing and developing role-feasible agents for simulating the formation process of a virtual enterprise.
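
    A minimal Python sketch of one Contract Net bidding round as described above: a coordination agent announces a task, resource agents reply with bids, and the task is awarded to the cheapest bidder. The agent classes and the random cost metric are illustrative assumptions, not the thesis's implementation.

        import random

        class ResourceAgent:
            """Bids on announced tasks; random cost stands in for real capability data."""
            def __init__(self, name):
                self.name = name
            def bid(self, task):
                return {"agent": self.name, "task": task,
                        "cost": random.uniform(1.0, 10.0)}

        class CoordinationAgent:
            """Runs one Contract Net round: announce, collect bids, award."""
            def __init__(self, resources):
                self.resources = resources
            def delegate(self, task):
                bids = [agent.bid(task) for agent in self.resources]   # call for proposals
                return min(bids, key=lambda b: b["cost"])              # award lowest cost

        pool = [ResourceAgent(f"supplier-{i}") for i in range(3)]
        coordinator = CoordinationAgent(pool)
        award = coordinator.delegate("machine-part-order")
        print(f"{award['task']} awarded to {award['agent']} at cost {award['cost']:.2f}")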

    Gesture based interface for image annotation

    Dissertation presented to obtain the degree of Master in Informatics Engineering at the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia. Given the complexity of visual information, multimedia content search presents more problems than textual search. This complexity is related to the difficulty of automatically tagging images and video with a set of keywords that describe the content. Generally, this annotation is performed manually (e.g., Google Image) and the search is based on pre-defined keywords. However, this task takes time and can be dull. The objective of this dissertation project is to define and implement a game for annotating personal digital photos with a semi-automatic system. The game engine tags images automatically, and the player's role is to contribute correct annotations. The application is composed of the following main modules: a module for automatic image annotation, a module that manages the game's graphical interface (showing images and tags), a module for the game engine, and a module for human interaction. The interaction is performed with a pre-defined set of gestures, using a web camera. These gestures are detected using computer vision techniques and interpreted as user actions. The dissertation also presents a detailed analysis of this application, its computational modules and design, as well as a series of usability tests.
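
    As a small illustration of the interaction module, a Python sketch of gesture-to-action dispatch: a gesture label produced by some webcam-based recognizer (not shown) is mapped to a game action such as accepting a proposed tag. The gesture names and actions are hypothetical.

        def accept_tag(state):
            state["confirmed"].append(state["proposed"])

        def reject_tag(state):
            state["rejected"].append(state["proposed"])

        def next_image(state):
            state["image_index"] += 1

        # Hypothetical labels emitted by the computer-vision gesture recognizer.
        GESTURE_ACTIONS = {
            "thumbs_up":   accept_tag,
            "thumbs_down": reject_tag,
            "swipe_right": next_image,
        }

        def handle_gesture(label, state):
            """Dispatch a recognized gesture to its game action; ignore unknown labels."""
            action = GESTURE_ACTIONS.get(label)
            if action:
                action(state)

        state = {"proposed": "beach", "confirmed": [], "rejected": [], "image_index": 0}
        handle_gesture("thumbs_up", state)
        print(state["confirmed"])   # ['beach']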

    Semantic discovery and reuse of business process patterns

    Patterns currently play an important role in modern information systems (IS) development, but their use has mainly been restricted to the design and implementation phases of the development lifecycle. Given the increasing significance of business modelling in IS development, patterns have the potential to provide a viable solution for promoting reusability of recurrent generalized models in the very early stages of development. As a statement of research in progress, this paper focuses on business process patterns and proposes an initial methodological framework for the discovery and reuse of business process patterns within the IS development lifecycle. The framework borrows ideas from the domain engineering literature and proposes the use of semantics to drive both the discovery of patterns and their reuse.

    Balancing privacy needs with location sharing in mobile computing

    Mobile phones are increasingly becoming tools for social interaction. As more phones come equipped with location tracking capabilities, capable of collecting and distributing the personal information (including location) of their users, user control of location information, and of privacy for that matter, has become an important research issue. This research first explores various techniques for user control of location in location-based systems, and proposes the re-conceptualisation of deception (defined here as the deliberate withholding of location information) from information systems security to the field of location privacy. Previous work in this area considers techniques such as anonymisation, encryption, cloaking and blurring, among others. Since mobile devices have become social tools, this thesis takes a different approach by first empirically investigating the likelihood that the proposed technique (deception) would be used to protect location privacy. We present empirical results (based on an online study) showing that people are willing to deliberately withhold their location information to protect their location privacy. However, our study shows that people feel uneasy engaging in this type of deception if they believe it will be detected by their intended recipients. The results also suggest that the technique is popular in situations where it is very difficult to detect that location information has been deliberately withheld during a location disclosure. These findings are then distilled into initial design guidelines for using deception to control location privacy. Based on these initial guidelines, we propose and build a deception-based privacy control (DPC) model. Two different evaluation approaches are employed to investigate the suitability of the model: a field-based study of the techniques employed in the model, and a laboratory-based usability study, with HCI (Human-Computer Interaction) professionals, of the Mobile Client application upon which the DPC model is based. Finally, we present guidelines for the design of deception in location disclosure, and lessons learned from the two evaluation approaches. We also propose, as a future direction of this thesis, a unified privacy preference framework implemented on the application layer of the mobile platform.
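
    To make the deception technique concrete, here is a minimal Python sketch of a disclosure decision, not the DPC model itself: per-recipient preferences determine whether the true location is shared or deliberately withheld behind a reply that mimics a technical failure. The preference table and messages are illustrative assumptions.

        from enum import Enum

        class Policy(Enum):
            DISCLOSE = "disclose"    # share the true location
            WITHHOLD = "withhold"    # deliberately withhold (deception)

        # Per-recipient preferences set by the user.
        PREFERENCES = {
            "partner":   Policy.DISCLOSE,
            "colleague": Policy.WITHHOLD,
        }

        def respond_to_request(recipient: str, true_location: str) -> str:
            """Answer a location request according to the user's deception preference.

            Withholding mimics a technical failure so the deliberate refusal is
            hard for the recipient to detect, reflecting the study's finding that
            detectability is what makes people uneasy about this deception."""
            policy = PREFERENCES.get(recipient, Policy.WITHHOLD)   # default to privacy
            if policy is Policy.DISCLOSE:
                return true_location
            return "location unavailable"   # indistinguishable from a GPS/network failure

        print(respond_to_request("partner", "55.95N, 3.19W"))    # 55.95N, 3.19W
        print(respond_to_request("colleague", "55.95N, 3.19W"))  # location unavailable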

    Semantic-Based, Scalable, Decentralized and Dynamic Resource Discovery for Internet-Based Distributed System

    Resource Discovery (RD) is a key issue in Internet-based distributed systems such as grids. RD is about locating an appropriate resource/service type that matches the user's application requirements. This is very important, as resource reservation and task scheduling are based on it. Unfortunately, RD in grids is very challenging: resources and users are distributed, resources are heterogeneous in their platforms, the status of resources is dynamic (resources can join or leave the system without any prior notice), and, most recently, a new type of grid called the intergrid (a grid of grids) has been introduced, with multiple middlewares in use. Such a situation requires an RD system with rich interoperability, scalability, decentralization and dynamism features. However, existing grid RD systems have difficulty attaining these features. Moreover, the field lacks review and evaluation studies that might highlight the gaps in achieving the required features. Therefore, this work addresses the problem of intergrid RD from two perspectives. First, it reviews and classifies current grid RD systems in a way that is useful for discussing and comparing them. Second, it proposes a novel RD framework that has the aforementioned required RD features. In the former, we mainly focus on studies that aim to achieve interoperability in the first place, known as RD systems that use semantic information (semantic technology). In particular, we classify such systems based on their qualitative use of the semantic information. We evaluate the classified studies based on their degree of accomplishment of interoperability and the other RD requirements, and draw the future research directions of this field. In the latter, we name the new framework semantic-based scalable decentralized dynamic RD. The framework contains two main components: a service description model, and a service registration and discovery model. The former consists of a set of ontologies and services. Ontologies are used as the data model for service description, whereas the services accomplish the description process. Service registration is also based on ontology: the nodes of the service (service providers) are classified into classes according to the ontology concepts, so that each class represents a concept in the ontology. Each class has a head, elected from among its own class nodes/members. The head plays the role of a registry for its class and communicates with the other class heads in a peer-to-peer manner during the discovery process. We further introduce two intelligent agents to automate the discovery process, a Request Agent (RA) and a Description Agent (DA); each node is supposed to have both agents. The DA describes the service capabilities based on the ontology, and the RA carries the service requests, likewise based on the ontology. We design a service search algorithm for the RA that starts the service lookup in the class where the request originates and then proceeds to the other classes. We finally evaluate the performance of our framework with extensive simulation experiments, the results of which confirm the effectiveness of the proposed system in satisfying the required RD features (interoperability, scalability, decentralization and dynamism).
    In short, our main contributions are: a new key taxonomy for semantic-based grid RD studies; an interoperable semantic description RD component model for intergrid service metadata representation; a semantic distributed registry architecture for indexing service metadata; and an agent-based service search and selection algorithm.
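
    A minimal Python sketch of the registration and lookup flow described above: providers register with the head of the ontology class matching their service concept, and a Request Agent queries its own class head first, then fans out to the other heads peer-to-peer. The concept names, the keyword matching and the data shapes are illustrative assumptions, not the thesis's algorithm.

        class ClassHead:
            """Elected head of one ontology class; acts as the class registry."""
            def __init__(self, concept):
                self.concept = concept
                self.registry = []              # service descriptions for this concept

            def register(self, provider, description):
                self.registry.append((provider, description))

            def lookup(self, keyword):
                return [p for p, d in self.registry if keyword in d]

        HEADS = {c: ClassHead(c) for c in ("storage", "compute", "visualisation")}

        # Description Agents register provider capabilities under the right concept.
        HEADS["storage"].register("node-17", "replicated block storage service")
        HEADS["compute"].register("node-04", "batch compute service, 64 cores")

        def discover(origin_class, keyword):
            """Request Agent: search the origin class first, then the other heads."""
            matches = HEADS[origin_class].lookup(keyword)
            if matches:
                return matches
            for concept, head in HEADS.items():    # peer-to-peer fan-out to other heads
                if concept != origin_class:
                    matches.extend(head.lookup(keyword))
            return matches

        print(discover("compute", "storage"))      # ['node-17'] via fan-out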