
    A Comparative Evaluation of .NET Remoting and Java RMI

    Distributed application technologies such as Microsoft .NET Remoting and Java Remote Method Invocation (RMI) have evolved over many years to keep up with the constantly increasing requirements of the enterprise. In the broadest sense, a distributed application is one in which the application processing is divided among two or more machines. Distributed middleware technologies have made significant progress over the last decade. Although Remoting and RMI are two of the most popular contemporary middleware technologies, little literature exists that compares them. In this paper, we study the issues involved in designing a distributed system using Java RMI and Microsoft .NET Remoting. To perform the comparison, we designed a distributed distance-learning application in both technologies, and we report both similarities and differences between the two. Remoting and RMI have similar serialization processes and allow object serialization to be customized as needed. Both can interoperate with interface-definition-language-based technologies such as the Common Object Request Broker Architecture (CORBA), and both provide distributed garbage collection. Our research shows that programs written with Remoting execute faster than programs written with RMI. Both have strong support for security, although implemented in different ways; RMI additionally offers security mechanisms via security policy files. RMI requires a naming service to locate the server address and connection port. This is an advantage because clients do not need to know the server location or port number; the RMI registry resolves them automatically. Remoting, by contrast, does not use a naming service: the connection port must be pre-specified and all services must be well known. RMI applications can run on any operating system, whereas Remoting targets Windows as its primary platform. We found it easier to design the distance-learning application in Remoting than in RMI. Remoting also provides greater configuration flexibility through support for external configuration files. In conclusion, we recommend that, before choosing between the two technologies, careful consideration be given to the type of application, the target platform, and the resources available to develop it.
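
    The naming-service difference described above is perhaps easiest to see in code. The sketch below is a minimal, illustrative Java RMI client (the CourseService interface and registry host are hypothetical, not taken from the paper): the client asks the RMI registry to resolve a logical service name, so it never needs to know the server's address or port, whereas a .NET Remoting client would connect to a pre-specified, well-known URL and port.

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.registry.LocateRegistry;
    import java.rmi.registry.Registry;

    // Hypothetical remote interface for a distance-learning service.
    interface CourseService extends Remote {
        String getLectureTitle(int lectureId) throws RemoteException;
    }

    public class CourseClient {
        public static void main(String[] args) throws Exception {
            // The client only needs the registry host; the registry maps the
            // logical name "CourseService" to the server's actual address and port.
            Registry registry = LocateRegistry.getRegistry("registry.example.edu");
            CourseService service = (CourseService) registry.lookup("CourseService");
            System.out.println(service.getLectureTitle(1));
        }
    }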

    Indoor Positioning Services & Location Based Recommendations

    With beacon technology, real-time turn-by-turn directions and real-time recommendations in the print collection can be provided to a user’s mobile device. Building on the infrastructure and research trajectory developed for an augmented reality experiment ( http://journal.code4lib.org/articles/10881 ), researchers undertook an experimental project to incorporate Estimote beacons ( http://estimote.com/ ) into an Undergraduate Library collection so that students new to the environment can see the location of their mobile device within the library building, supporting wayfinding to items and discovery of like items through location-based recommendations. Presenters will demonstrate the distributed computing processes and workflows necessary to integrate beacons into collections-based wayfinding and walk through key components of the recommendation algorithm used for “topic spaces” in collections. The experimental location-based recommendation service is grounded in the advantages of collocation that support information discovery and is supplemented with existing ILS data, e.g. the total circulation of a particular item. Presenters will demonstrate techniques and approaches used to improve beacon precision, enabling finer location granularity in library environments, along with security considerations for location-based services. University of Illinois at Urbana-Champaign: University Library Research and Publication Committee.
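
    The recommendation algorithm itself is only outlined above. As a rough sketch of how collocation and ILS circulation data could be combined into a single ranking, the illustrative Java snippet below scores items detected near the user by shelf proximity and total circulation; all names and the weighting factor are assumptions for illustration, not details of the presenters' implementation.

    import java.util.Comparator;
    import java.util.List;

    // Hypothetical item record: distance of the shelf location from the user's
    // current "topic space" and the item's total circulation count from the ILS.
    record Item(String title, double shelfDistanceMetres, int totalCirculation) {}

    public class TopicSpaceRecommender {

        // Illustrative score: closer and more-circulated items rank higher.
        // The 0.1 damping factor is an arbitrary assumption, not a published value.
        static double score(Item item) {
            return item.totalCirculation() / (1.0 + 0.1 * item.shelfDistanceMetres());
        }

        // Return the top 'limit' items among those detected near the user.
        public static List<Item> recommend(List<Item> nearbyItems, int limit) {
            return nearbyItems.stream()
                    .sorted(Comparator.comparingDouble(TopicSpaceRecommender::score).reversed())
                    .limit(limit)
                    .toList();
        }
    }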

    A personal distributed environment for future mobile systems

    A Personal Distributed Environment (PDE) embraces a user-centric view of communications that take place against a backdrop of multiple user devices, each with its own distinct capabilities, in physically separate locations. This paper provides an overview of a Personal Distributed Environment and some of the research issues related to the implementation of the PDE concept that are being considered in the current Mobile VCE work programme.

    Cross-middleware Interoperability in Distributed Concurrent Engineering

    Secure, distributed collaboration between different organizations is a key challenge in Grid computing today. The GDCD project has produced a Grid-based demonstrator Virtual Collaborative Facility (VCF) for the European Space Agency. The purpose of this work is to show the potential of Grid technology to support fully distributed concurrent design, while addressing practical considerations including network security, interoperability, and integration of legacy applications. The VCF allows domain engineers to use the concurrent design methodology in a distributed fashion to perform studies for future space missions. To demonstrate the interoperability and integration capabilities of Grid computing in concurrent design, we developed prototype VCF components based on ESA’s current Excel-based Concurrent Design Facility (a non-distributed environment), using a STEP-compliant database that stores design parameters. The database was exposed as a secure GRIA 5.1 Grid service, whilst a .NET/WSE3.0-based library was developed to enable secure communication between the Excel client and the STEP database.

    Web Single Sign-On Authentication using SAML

    Companies have increasingly turned to application service providers (ASPs) or Software as a Service (SaaS) vendors to offer specialized web-based services that will cut costs and provide specific and focused applications to users. The complexity of designing, installing, configuring, deploying, and supporting the system with internal resources can be eliminated with this type of methodology, providing great benefit to organizations. However, these models can present an authentication problem for corporations with a large number of external service providers. This paper describes the implementation of Security Assertion Markup Language (SAML) and its capabilities to provide secure single sign-on (SSO) solutions for externally hosted applications
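
    To make the SSO flow concrete, the sketch below shows one well-known piece of it: building a SAML HTTP-Redirect binding URL, in which the AuthnRequest XML is DEFLATE-compressed, Base64-encoded and URL-encoded into a SAMLRequest parameter. The endpoint URLs and the skeletal request are placeholders; a production service provider would generate, sign and validate messages with a SAML library rather than by hand.

    import java.net.URLEncoder;
    import java.nio.charset.StandardCharsets;
    import java.util.Arrays;
    import java.util.Base64;
    import java.util.zip.Deflater;

    public class SamlRedirectSketch {
        public static void main(String[] args) {
            // Placeholder AuthnRequest; a real request carries an ID, issue instant,
            // issuer and destination, and is produced by a SAML library.
            String authnRequest =
                  "<samlp:AuthnRequest xmlns:samlp=\"urn:oasis:names:tc:SAML:2.0:protocol\" "
                + "AssertionConsumerServiceURL=\"https://sp.example.com/acs\"/>";

            // HTTP-Redirect binding: raw DEFLATE (no zlib header), Base64, URL-encode.
            byte[] xml = authnRequest.getBytes(StandardCharsets.UTF_8);
            Deflater deflater = new Deflater(Deflater.DEFAULT_COMPRESSION, true);
            deflater.setInput(xml);
            deflater.finish();
            byte[] buffer = new byte[8192];           // ample for this small request
            int length = deflater.deflate(buffer);
            deflater.end();
            String samlRequest = Base64.getEncoder()
                    .encodeToString(Arrays.copyOf(buffer, length));

            // The user agent is redirected to the IdP; after authentication the IdP
            // returns a signed SAMLResponse to the AssertionConsumerServiceURL.
            String redirectUrl = "https://idp.example.com/sso?SAMLRequest="
                    + URLEncoder.encode(samlRequest, StandardCharsets.UTF_8);
            System.out.println(redirectUrl);
        }
    }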

    PrivacyScore: Improving Privacy and Security via Crowd-Sourced Benchmarks of Websites

    Website owners make conscious and unconscious decisions that affect their users, potentially exposing them to privacy and security risks in the process. In this paper we introduce PrivacyScore, an automated website scanning portal that allows anyone to benchmark the security and privacy features of multiple websites. In contrast to existing projects, the checks implemented in PrivacyScore cover a wider range of potential privacy and security issues. Furthermore, users can control the ranking and analysis methodology. Therefore, PrivacyScore can also be used by data protection authorities to perform regularly scheduled compliance checks. In the long term we hope that the transparency resulting from the published benchmarks creates an incentive for website owners to improve their sites. The public availability of a first version of PrivacyScore was announced at the ENISA Annual Privacy Forum in June 2017. Comment: 14 pages, 4 figures. A German version of this paper discussing the legal aspects of this system is available at arXiv:1705.0888
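
    PrivacyScore's own check suite is not reproduced here; the snippet below merely illustrates the general shape of one automated website check of the kind described, testing whether a site sends an HTTP Strict-Transport-Security header. The target URL is a placeholder.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class HstsCheck {
        public static void main(String[] args) throws Exception {
            // Placeholder target; pass a real URL as the first argument to scan it.
            String site = args.length > 0 ? args[0] : "https://example.org";

            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(URI.create(site))
                    .method("HEAD", HttpRequest.BodyPublishers.noBody())
                    .build();
            HttpResponse<Void> response =
                    client.send(request, HttpResponse.BodyHandlers.discarding());

            // One example check: does the site enable HTTP Strict-Transport-Security?
            boolean hsts = response.headers()
                    .firstValue("Strict-Transport-Security")
                    .isPresent();
            System.out.println(site + " sets HSTS: " + hsts);
        }
    }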

    A Taxonomy for Management and Optimization of Multiple Resources in Edge Computing

    Edge computing is promoted to meet the increasing performance needs of data-driven services using computational and storage resources close to the end devices, at the edge of the current network. To achieve higher performance in this new paradigm, one has to consider how to combine efficient resource usage at all three layers of the architecture: end devices, edge devices, and the cloud. While cloud capacity is elastically extendable, end devices and edge devices are to various degrees resource-constrained. Hence, efficient resource management is essential to make edge computing a reality. In this work, we first present terminology and architectures to characterize current works within the field of edge computing. Then, we review a wide range of recent articles and categorize relevant aspects in terms of four perspectives: resource type, resource management objective, resource location, and resource use. This taxonomy and the ensuing analysis are used to identify gaps in the existing research. Among several research gaps, we found that research is less prevalent on data, storage, and energy as resources, and less extensive towards the estimation, discovery, and sharing objectives. As for resource types, the most well-studied resources are computation and communication resources. Our analysis shows that resource management at the edge requires a deeper understanding of how methods applied at different levels and geared towards different resource types interact. Specifically, the impact of mobility and of collaboration schemes requiring incentives is expected to differ in edge architectures compared to classic cloud solutions. Finally, we find that fewer works are dedicated to the study of non-functional properties or to quantifying the footprint of resource management techniques, including edge-specific means of migrating data and services. Comment: Accepted in the Special Issue Mobile Edge Computing of the Wireless Communications and Mobile Computing journal
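
    As a compact, purely illustrative restatement of the four survey perspectives, the sketch below encodes them as Java types; the enum values are examples drawn from the abstract where given, and the remaining names are assumptions rather than the survey's own terminology.

    // Example values come from the abstract where stated; the ResourceUse values
    // and the ALLOCATION objective are placeholders added for illustration.
    enum ResourceType { COMPUTATION, COMMUNICATION, DATA, STORAGE, ENERGY }
    enum ManagementObjective { ESTIMATION, DISCOVERY, SHARING, ALLOCATION }
    enum ResourceLocation { END_DEVICE, EDGE_DEVICE, CLOUD }
    enum ResourceUse { FUNCTIONAL, NON_FUNCTIONAL }

    // A surveyed work can then be classified along all four dimensions at once.
    record ClassifiedWork(String citation,
                          ResourceType type,
                          ManagementObjective objective,
                          ResourceLocation location,
                          ResourceUse use) {}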

    Digital curation and the cloud

    Digital curation involves a wide range of activities, many of which could benefit from cloud deployment to a greater or lesser extent. These range from infrequent, resource-intensive tasks, which benefit from the ability to rapidly provision resources, to day-to-day collaborative activities, which can be facilitated by networked cloud services. The associated benefits are offset by risks such as loss of data or service level, legal and governance incompatibilities, and transfer bottlenecks. There is considerable variability across both risks and benefits according to the service and deployment models being adopted and the context in which activities are performed. Some risks, such as legal liabilities, are mitigated by the use of alternative models, e.g. private clouds, but this is typically at the expense of benefits such as resource elasticity and economies of scale. The Infrastructure as a Service model may provide a basis on which more specialised software services can be offered. There is considerable work to be done in helping institutions understand the cloud and its associated costs, risks and benefits, and how these compare to their current working methods, so that the most beneficial uses of cloud technologies can be identified. Specific proposals, echoing recent work coordinated by EPSRC and JISC, are the development of advisory, costing and brokering services to facilitate appropriate cloud deployments, the exploration of opportunities for certifying or accrediting cloud preservation providers, and the targeted publicity of outputs from pilot studies to the full range of stakeholders within the curation lifecycle, including data creators and owners, repositories, institutional IT support professionals and senior managers.