
    The design and implementation of an infrastructure for multimedia digital libraries

    We develop an infrastructure for managing, indexing and serving multimedia content in digital libraries. This infrastructure follows the model of the Web and is therefore distributed in nature. We discuss the design of the Librarian, the component that manages metadata about the content. The management of metadata is separated from the media servers that manage the content itself, and the extraction of metadata is largely independent of the Librarian. We introduce our extensible data model and the daemon paradigm, the core pieces of this architecture. We evaluate our initial implementation using a relational database. We conclude with a discussion of the lessons we learned in building this system, and proposals for improving the flexibility, reliability, and performance of the system.
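    The separation described above, between a Librarian that only stores metadata and independent daemons that extract it, can be sketched roughly as follows. All names here (`Librarian`, `thumbnail_daemon`) are illustrative, not taken from the paper.

```python
# Hypothetical sketch of the daemon paradigm: metadata extraction runs
# independently of the Librarian, which only stores the resulting records.

class Librarian:
    """Manages metadata records, separately from the media servers."""
    def __init__(self):
        self.records = {}  # object id -> list of (attribute, value) pairs

    def insert(self, obj_id, attribute, value):
        self.records.setdefault(obj_id, []).append((attribute, value))

def thumbnail_daemon(librarian, obj_id, image_bytes):
    """A daemon computes one kind of derived metadata and hands it to the
    Librarian; it knows nothing about other daemons or the data model."""
    librarian.insert(obj_id, "byte_length", len(image_bytes))

lib = Librarian()
thumbnail_daemon(lib, "img-1", b"\x89PNG...")
print(lib.records["img-1"])
```

    New extractors can then be added as further daemons without changing the Librarian, which is the extensibility the abstract claims.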

    The design and implementation of a P2P-based composite event notification system

    The development of large, open, and heterogeneous distributed systems is becoming increasingly dependent on event services to bind together the components of an application in such a way that they are able to react to changes in other components. One way to distribute event notifications around a distributed environment is to use content-based publish/subscribe communication. Such a system mediates between publishers of information and subscribers who sign up to receive information, routing messages across the network from their source to the point of subscription using the message content and the client subscriptions. Although content-based publish/subscribe has been used successfully to develop simple event notification systems, in which events are routed from external publishers to external clients, more complex systems are possible that create new events, known as composites, based on the detection of patterns of events. Composite event notification, however, poses a number of challenges, including network management and network routing. In this paper, we discuss the design and implementation of a composite event notification system over a Chord-based peer-to-peer network using JXTA, and how we have addressed these challenges.
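    The two ideas combined in this abstract, content-based matching and composite (pattern-based) event detection, can be illustrated with a minimal sketch. The event attributes and the simple "A followed by B" pattern are made up for demonstration; the actual system routes over a Chord overlay, which is omitted here.

```python
# Content-based matching: forward an event to a subscriber only if every
# predicate in the subscription holds for the event's attributes.
def matches(event, subscription):
    """subscription: attribute -> predicate over that attribute's value."""
    return all(attr in event and pred(event[attr])
               for attr, pred in subscription.items())

# Composite detection: emit a new event when a pattern (here, first_type
# later followed by then_type) is observed in the primitive event stream.
def detect_sequence(events, first_type, then_type):
    seen_first = False
    for ev in events:
        if ev["type"] == first_type:
            seen_first = True
        elif seen_first and ev["type"] == then_type:
            yield {"type": "composite", "pattern": (first_type, then_type)}
            seen_first = False

sub = {"type": lambda t: t == "composite"}
stream = [{"type": "login"}, {"type": "purchase"}, {"type": "login"}]
composites = list(detect_sequence(stream, "login", "purchase"))
print([e for e in composites if matches(e, sub)])
```

    In the real system the detector itself sits inside the network, so composites are routed to subscribers exactly like primitive events.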

    Privacy Preserving Enforcement of Sensitive Policies in Outsourced and Distributed Environments

    The enforcement of sensitive policies in untrusted environments is still an open challenge for policy-based systems. On the one hand, taking any appropriate security decision requires access to these policies. On the other hand, if such access is allowed in an untrusted environment, confidential information might be leaked by the policies. The key challenge is how to enforce sensitive policies and protect content in untrusted environments. Among untrusted environments, we mainly distinguish between outsourced and distributed environments, whose most prominent paradigms are cloud computing and opportunistic networks, respectively. In this dissertation, we present the design, technical details and implementation of our proposed policy-based access control mechanisms for untrusted environments. First, we provide full confidentiality of access policies in outsourced environments, where service providers do not learn private information about policies. We support expressive policies and take contextual information into account, and the system entities do not share any encryption keys. For complex user management, we offer full-fledged Role-Based Access Control (RBAC) policies. In opportunistic networks, we protect content by specifying expressive policies. In our proposed approach, brokers match subscriptions against policies associated with content without compromising the privacy of subscribers. As a result, unauthorised brokers neither gain access to content nor learn policies, and authorised nodes gain access only if they satisfy the policies specified by publishers. Our proposed system provides scalable key management, in which loosely-coupled publishers and subscribers communicate without any prior contact. Finally, we have developed a prototype of the system that runs on real smartphones and analysed its performance. (Ph.D. dissertation: http://eprints-phd.biblio.unitn.it/1124)
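    The access decision at the core of this work, stripped of the privacy machinery, is a policy-satisfaction check. In the dissertation's setting both the policy and the subscriber's attributes would be protected so that the broker learns neither; the plain (and deliberately simplistic, conjunctive-only) version below only illustrates the decision itself. The attribute names are invented.

```python
# Simplified access decision: does the requester's attribute set satisfy the
# publisher's policy? Real enforcement happens over protected values, so the
# broker never sees either side in the clear; this sketch omits that layer.

def satisfies(policy, attributes):
    """policy: list of (attribute, required value) pairs, all of which must
    hold -- a simple conjunctive policy."""
    return all(attributes.get(attr) == value for attr, value in policy)

policy = [("role", "doctor"), ("department", "cardiology")]
print(satisfies(policy, {"role": "doctor", "department": "cardiology"}))  # True
print(satisfies(policy, {"role": "nurse"}))                               # False
```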

    TuLip : reshaping trust management

    In today's highly distributed and heterogeneous world of the Internet, sharing resources has become an everyday activity of every Internet user. We buy and sell goods over the Internet, share our holiday pictures using facebook™, "tube" our home videos on YouTube™, and exchange our interests and thoughts on blogs. We podcast, we are LinkedIn™ to extend our professional network, we share files over P2P networks, and we seek advice on numerous on-line discussion groups. Although in most cases we want to reach the largest possible group of users, we often realise that some data should remain private or, at least, restricted to a carefully chosen audience. Access control is no longer the domain of computer security experts, but something we experience every day. In a typical access control scenario, the resource provider has full control over the protected resource. The resource provider decides who can access which resource and what action can be performed on it. The set of entities that can access a protected resource can be statically defined and is known a priori to the resource provider. Although still valid in many cases, such a scenario is too restrictive today. The resource owner is not only required, but often wants, to reach the widest possible group of users, many of whom remain anonymous to the resource provider. A more flexible approach to access control is needed. Trust Management is a recent approach to access control in which the access control decision is based on security credentials. In a credential, the credential issuer states attributes (roles, properties) of the credential subject. For the credentials to have the same meaning across all users, they are written in a trust management language. A special algorithm, called a compliance checker, is then used to evaluate whether a given set of credentials is compliant with the requested action on the requested protected resource. Finally, an important characteristic of trust management is that every entity may issue credentials. In the original approach to trust management, the credentials are stored at a well-known location, so that the compliance checker knows where to search for them. Another approach is to let the users store the credentials. Storing the credentials in a distributed way eliminates the single point of failure introduced by a centralised credential repository, but now the compliance checker must know where to find the credentials. Another difficulty of the distributed approach is that the design of a correct credential discovery algorithm comes at the cost of limiting the expressive power of the trust management language. In this thesis we show that it is possible to build a generic, open-ended trust management system enjoying both a powerful syntax and support for distributed credential storage. More specifically, we show how to build a trust management system that has: • a formal yet expressive trust management language for specifying credentials, • a compliance checker for determining if a given authorisation request can be granted given a set of credentials, • support for distributed credential storage. We call our trust management system TuLiP (Trust management based on Logic Programming). In the thesis we also indicate how to deploy TuLiP in a distributed content management system (we use pictures as the content in our implementation). Using the same approach, TuLiP can improve existing P2P content sharing services by providing a personalised, scalable, and password-free access control method to users. By decentralising the architecture, systems like facebook™ or YouTube™ could also benefit from TuLiP. By providing an easy-to-use and scalable access control method, TuLiP can encourage sharing of private and copyrighted content under a uniform and familiar user interface. Internet stores, often deployed as centralised systems, can also benefit from credential-based trust management. Here, TuLiP can facilitate business models in which recommended clients and the clients of friendly businesses participate in customised customer rewarding programmes (such as receiving attractive discounts). By naturally supporting co-operation of autonomous entities using distributed credentials, we believe that TuLiP could make validation of business relationships easier, which, in turn, could stimulate the creation of new business models.
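    The credential-and-compliance-checker model described above can be made concrete with a toy example. Credentials are treated as Horn-style rules ("issuer says subject has attribute if these facts hold"), and the checker derives all consequences before deciding a request. TuLiP itself is based on logic programming with distributed credential discovery; the tiny forward-chaining checker below, with its invented "store"/"hospital" entities, is only illustrative.

```python
# Facts are (issuer, attribute, subject) triples. A rule's head is derivable
# when every fact in its body holds; credentials may come from different
# issuers, reflecting that any entity can issue credentials.
facts = {("hospital", "doctor", "alice")}
rules = [
    # The store grants a discount to anyone the hospital says is a doctor.
    (("store", "discount", "alice"), [("hospital", "doctor", "alice")]),
]

def compliant(request):
    """Forward-chain to a fixpoint, then check whether the request follows."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return request in derived

print(compliant(("store", "discount", "alice")))  # True
print(compliant(("store", "discount", "bob")))    # False
```

    The distributed-storage problem the thesis addresses is then: when the body facts live with other users, how does the checker find them without crippling the language's expressiveness?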

    Software Defined Application Delivery Networking

    In this thesis we present the architecture, design, and prototype implementation details of AppFabric. AppFabric is a next-generation application delivery platform for easily creating, managing and controlling massively distributed and very dynamic application deployments that may span multiple datacenters. Over the last few years, the need for more flexibility, finer control, and automatic management of large (and messy) datacenters has stimulated technologies for virtualizing the infrastructure components and placing them under software-based management and control, generically called Software-Defined Infrastructure (SDI). However, current applications are not designed to leverage the dynamism and flexibility offered by SDI; they mostly depend on a mix of techniques including manual configuration, specialized appliances (middleboxes), and (mostly) proprietary middleware solutions, together with a team of extremely conscientious and talented system engineers, to get deployed and running. AppFabric 1) automates the whole control and management stack of application deployment and delivery, 2) allows application architects to define logical workflows consisting of application servers, message-level middleboxes, packet-level middleboxes and network services (both local and wide-area) composed over application-level routing policies, and 3) provides the abstraction of an application cloud that allows the application to dynamically (and automatically) expand and shrink its distributed footprint across multiple geographically distributed datacenters operated by different cloud providers. The architecture consists of a hierarchical control plane system called Lighthouse and a fully distributed data plane design (with no special hardware components such as service orchestrators, load balancers, message brokers, etc.) called OpenADN. The current implementation (under active development) consists of ~10,000 lines of Python and C code. AppFabric will allow applications to fully leverage the opportunities provided by modern virtualized Software-Defined Infrastructures. It will serve as the platform for deploying massively distributed and extremely dynamic next-generation application use-cases, including: Internet-of-Things/Cyber-Physical Systems: through support for managing the distributed gather-aggregate topologies common to most Internet-of-Things (IoT) and Cyber-Physical Systems (CPS) use-cases. By their very nature, IoT and CPS use-cases are massively distributed, with different levels of computation and storage requirements, and variable latency requirements, at their different distributed sites. Some services in an IoT/CPS application workflow, such as device controllers, may need to gather, process and forward data under near-real-time constraints and hence need to be as close to the device as possible; other services may need more computation to process aggregated data and drive long-term business intelligence functions. AppFabric has been designed to support such very dynamic, highly diversified and massively distributed application use-cases. Network Function Virtualization: through support for heterogeneous workflows, application-aware networking, and network-aware application deployments, AppFabric will enable new partnerships between Application Service Providers (ASPs) and Network Service Providers (NSPs). An application workflow in AppFabric may comprise application services, packet- and message-level middleboxes, and network transport services chained together over an application-level routing substrate. The application-level routing substrate allows policy-based service chaining, where the application may specify policies for routing its traffic over different services based on application-level content or context. Virtual worlds/multiplayer games: through support for creating, managing and controlling the dynamic and distributed application clouds needed by these applications. AppFabric allows the application to easily specify policies to dynamically grow and shrink its footprint over different geographical sites, on demand. Mobile apps: through support for the extremely diversified and very dynamic application contexts typical of such applications. AppFabric also automatically manages massively distributed service deployment and controls application traffic based on application-level policies, allowing mobile applications to provide the best Quality-of-Experience to their users. This thesis is the first to provide a complete solution for such a complex and relevant architectural problem, one that is expected to touch each of our lives by enabling exciting new application use-cases that are not possible today. AppFabric is also a non-proprietary platform that is expected to spawn many innovations, both in the design of the platform itself and in the features it provides to applications. AppFabric still needs many iterations, both in terms of design and implementation maturity. This thesis is not the end of the journey for AppFabric but rather just the beginning.
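    Policy-based service chaining over an application-level routing substrate, as described above, can be sketched as follows. The policy shape and the service names (`firewall`, `transcoder`, `gdpr_filter`) are hypothetical, not AppFabric's actual API.

```python
# Sketch of application-level policy routing: the workflow picks an ordered
# chain of services for each message based on its content/context, then the
# message traverses that chain. All names here are illustrative.

def choose_chain(message):
    """Return the ordered service chain this message should traverse."""
    if message.get("content_type") == "video":
        return ["firewall", "transcoder", "app_server"]
    if message.get("region") == "eu":
        return ["firewall", "gdpr_filter", "app_server"]
    return ["firewall", "app_server"]

def deliver(message, services):
    """Apply each service in the chosen chain, in order."""
    for name in choose_chain(message):
        message = services[name](message)
    return message

services = {
    "firewall":    lambda m: m,                        # pass-through here
    "transcoder":  lambda m: {**m, "encoded": True},
    "gdpr_filter": lambda m: {**m, "anonymised": True},
    "app_server":  lambda m: {**m, "handled": True},
}
print(deliver({"content_type": "video"}, services))
```

    In the real data plane the services are distributed middleboxes and servers rather than local functions, but the content/context-driven chain selection is the same idea.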

    Towards practicalization of blockchain-based decentralized applications

    Blockchain can be defined as an immutable ledger for recording transactions, maintained in a distributed network of mutually untrusting peers. Blockchain technology has been widely applied to fields beyond its initial usage in cryptocurrency. However, blockchain itself is insufficient to meet all the desired security or efficiency requirements of diversified application scenarios. This dissertation focuses on two core functionalities that blockchain provides, i.e., robust storage and reliable computation. Three concrete application scenarios, Internet of Things (IoT), cybersecurity management (CSM), and peer-to-peer (P2P) content delivery networks (CDN), are used to elaborate the general design principles for these two main functionalities. Among them, the IoT and CSM applications involve the design of blockchain-based robust storage and management, while the P2P CDN requires reliable computation. Such general design principles, derived from disparate application scenarios, have the potential to make many other blockchain-enabled decentralized applications practical. In the IoT application, blockchain-based decentralized data management is capable of handling faulty nodes, as in the cybersecurity application. But an important issue lies in the interaction between the external network and the blockchain network: external clients must rely on a relay node to communicate with the full nodes in the blockchain. Compromise of such relay nodes may result in a security breach and even the blocking of IoT sensors from the network. Therefore, a censorship-resistant blockchain-based decentralized IoT management system is proposed. Experimental results from a proof-of-concept implementation deployed in a real distributed environment show its feasibility and effectiveness in achieving censorship resistance.
    The CSM application incorporates blockchain to provide robust storage of historical cybersecurity data so that, with a certain level of cyber intelligence, a defender can determine whether a network has been compromised and to what extent. The CSM functions can be categorized into three classes: Network-centric (N-CSM), Tools-centric (T-CSM) and Application-centric (A-CSM). The cyber intelligence identifies new attackers, victims, or defense capabilities. Moreover, a decentralized storage network (DSN) is integrated to reduce on-chain storage costs without undermining robustness. Experiments with the prototype implementation and real-world cyber datasets show that the blockchain-based CSM solution is effective and efficient. The P2P CDN application explores and utilizes the reliable computation that blockchain empowers. In particular, P2P CDNs promise benefits including cost savings and scalable peak-demand handling compared with centralized CDNs. However, reliable P2P delivery requires proper enforcement of delivery fairness. Unfortunately, most existing studies on delivery fairness are based on non-cooperative game-theoretic assumptions that are arguably unrealistic in the ad-hoc P2P setting. To address this issue, an expressive security requirement for desired fair P2P content delivery is defined, and two efficient blockchain-based approaches, for P2P downloading and P2P streaming, are proposed. The proposed system guarantees fairness for each party even when all others collude and arbitrarily misbehave, and achieves asymptotically optimal on-chain costs and optimal delivery communication.
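    One building block behind fair P2P content delivery is verifiable chunked transfer: the receiver checks each chunk against a pre-committed digest before acknowledging it, so a misbehaving sender cannot get credit for bad data. The sketch below shows only that verification step; the on-chain dispute and payment logic of the actual protocols is omitted, and the chunk contents are invented.

```python
import hashlib

def commit(chunks):
    """Digests published (e.g. on-chain) before delivery starts."""
    return [hashlib.sha256(c).hexdigest() for c in chunks]

def receive(chunk, expected_digest):
    """Accept the chunk only if it matches the prior commitment."""
    return hashlib.sha256(chunk).hexdigest() == expected_digest

chunks = [b"part-1", b"part-2"]
digests = commit(chunks)
print(receive(b"part-1", digests[0]))    # True: honest delivery
print(receive(b"tampered", digests[1]))  # False: chunk is rejected
```

    Because the commitment is fixed before delivery, a rejected chunk is publicly attributable to the sender, which is what makes an on-chain fairness argument possible.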

    Analysis of the Relationship Between End User Computing Satisfaction Factors and Information System User Satisfaction at PKU Muhammadiyah Hospital, Yogyakarta, in 2021

    Introduction: According to a 2011 Minister of Health regulation, every hospital is required to implement a hospital information system (SIRS), so that hospital data can be collected, processed, and presented in an integrated way. In its implementation, the hospital management information system (SIMRS) is expected to improve the effectiveness and efficiency of the service system. One indicator of the success of information system development is user satisfaction, and a number of methods exist to measure it, one of which is End User Computing Satisfaction (EUCS). The general aim of the study is to analyse the EUCS factors against system user satisfaction; the specific aim is to determine whether the EUCS variables, namely content, format, accuracy, timeliness, and ease of use, are related to the satisfaction of hospital management system users at PKU Muhammadiyah Yogyakarta in 2021. Methods: This is quantitative research with a cross-sectional design. The sample size was determined using the Lemeshow formula, yielding 245 respondents, selected with an accidental sampling technique; the research instrument was a questionnaire adopted from previous studies. Data analysis used univariate and bivariate analysis; the statistical test used was the chi-square test, with the data categorised around the median value. Results: There is a significant relationship between each of the five EUCS variables, namely content, format, accuracy, timeliness and ease of use, and SIMRS user satisfaction. About 212 (86.5%) respondents stated they were satisfied and about 33 (13.5%) stated they were not satisfied with the SIMRS application. Conclusion: All five EUCS variables are related to satisfaction, and overall the users state that they are satisfied with the My Hospital application.
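    The bivariate analysis above rests on the chi-square test of independence between one EUCS variable and satisfaction. The sketch below computes the statistic by hand for a 2x2 table; the counts are made up for demonstration and are not the study's data (only the 245-respondent total matches).

```python
# Chi-square test of independence for a 2x2 contingency table, computed
# from first principles: expected count = row total * column total / n.

def chi_square_2x2(table):
    """table: [[a, b], [c, d]] observed counts. Returns the statistic."""
    row = [sum(r) for r in table]
    col = [table[0][j] + table[1][j] for j in range(2)]
    n = sum(row)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / n
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# rows: ease of use rated good / poor; columns: satisfied / not satisfied
observed = [[150, 10], [62, 23]]
print(round(chi_square_2x2(observed), 2))
```

    With 1 degree of freedom, a statistic above the critical value 3.84 rejects independence at the 0.05 level, which is the "significant relationship" criterion the study applies.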

    A Prototype Collaborative Directory of Metadata Standards

    The implementation and development of a prototype metadata directory to support the discovery and identification of metadata standards used in scientific research is reported on. The project was undertaken as an initiative of the Research Data Alliance's Metadata Standards Directory Working Group in 2014, and makes information about the metadata standards used by scientists in a range of disciplines more accessible, in order to support the exchange, management, curation and preservation of scientific data. Other important benefits include supporting the reuse of standards to eliminate duplication of effort, and enabling reproducible scientific research. The report examines the design of the directory, including the use of a distributed version control system as a mechanism for content management. A discussion of sustainability and risks, and a comparison to other collaboratively managed information systems, is provided. Future features and extensions applicable to the prototype are also addressed. Master of Science in Library Science.