57 research outputs found

    Model-driven dual caching for nomadic service-oriented architecture clients

    Mobile devices have evolved over the years from resource-constrained devices that supported only the most basic tasks into powerful handheld computers. However, the most significant step in their evolution was the introduction of wireless connectivity, which enabled them to host applications that require Internet connectivity, such as email, web browsers and, perhaps most importantly, smart/rich clients. Being able to host smart clients allows the users of mobile devices to seamlessly access the Information Technology (IT) resources of their organizations. One increasingly popular way of enabling access to IT resources is through Web Services (WS). This trend has been aided by the ready availability of WS packages/tools, most notably the efforts of the Apache group and Integrated Development Environment (IDE) vendors. But the widespread use of WS raises the question of whether, and how, users of mobile devices such as laptops or PDAs can participate in WS. Unlike their “wired” counterparts (desktop computers and servers), they rely on a wireless network that is characterized by low bandwidth and unreliable connectivity. The aim of this thesis is to enable mobile devices to host Web Services consumers. It introduces a Model-Driven Dual Caching (MDDC) approach to overcome problems arising from temporary loss of connectivity and fluctuations in bandwidth.
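    The core idea of serving a cached response when the wireless link drops can be sketched as follows. This is a minimal illustration, not the actual MDDC design: the class and parameter names (`DualCacheClient`, `max_age_s`, the `transport` callable) are assumptions introduced for the example.

    ```python
    import time

    class DualCacheClient:
        """Sketch of a caching proxy for a mobile web-service consumer.

        Successful calls populate a local cache; when connectivity is lost,
        the most recent cached response (within a staleness bound) is served
        instead of failing. Illustrative only.
        """

        def __init__(self, transport, max_age_s=300.0):
            self.transport = transport      # callable: (operation, *args) -> response
            self.max_age_s = max_age_s      # staleness bound for offline answers
            self.cache = {}                 # (operation, args) -> (timestamp, response)

        def invoke(self, operation, *args):
            key = (operation, args)
            try:
                response = self.transport(operation, *args)
            except ConnectionError:
                if key in self.cache:
                    ts, response = self.cache[key]
                    if time.monotonic() - ts <= self.max_age_s:
                        return response     # serve a stale-but-bounded answer offline
                raise                       # no usable cached copy: surface the failure
            self.cache[key] = (time.monotonic(), response)
            return response
    ```

    The staleness bound is the key design knob: it trades answer freshness against availability during disconnections.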

    Research in Mobile Database Query Optimization and Processing


    Cooperative Caching in Vehicular Networks - Distributed Cache Invalidation Using Information Freshness

    Recent advances in vehicular communications have led to significant opportunities to deploy a variety of applications and services that improve road safety and traffic efficiency for road users. In the context of traffic management services in distributed vehicular networks, this thesis evaluates how storage at vehicles can be managed efficiently as a cache, keeping cellular transmission costs moderate while still achieving correct routing decisions. Road status information was disseminated to oncoming traffic in the form of cellular notifications using a reporting mechanism. The high transmission costs caused by redundant notifications, published by all vehicles under a basic reporting mechanism (the Default approach), were overcome by implementing caching at every vehicle. A cooperative reporting mechanism utilizing this cache (the Cooperative approach) was proposed to notify road status while avoiding redundant notifications. To account, in the decision-making process, for significantly relevant vehicles that did not actually publish, corresponding virtual cache entries were implemented. To incorporate the real-world scenario of a varying vehicular rate on any given road, virtual cache entries based on the varying vehicular rate were modeled as an Adaptive Cache Management mechanism. Combinations of the proposed mechanisms were evaluated for cellular transmission costs and for the accuracy achieved in making correct routing decisions. Simulation case studies comprising varying vehicular densities and different false detection rates were conducted to demonstrate the performance of these mechanisms. Additionally, the proposed mechanisms were evaluated under different decision-making algorithms, both for information freshness under changing road conditions and for robustness despite false detections.
    The simulation results demonstrated that the combination of proposed mechanisms achieved information accuracy sufficient to make correct routing decisions despite false readings, while keeping network costs significantly low. Furthermore, with a QoI-based decision algorithm in high-density vehicular networks, the mechanisms showed fast adaptability to frequently changing road conditions as well as quick recovery from false notifications by invalidating them with correct ones.
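    The freshness-based suppression of redundant notifications can be sketched as below. The names (`CooperativeReporter`, `freshness_s`, road "segments") are assumptions for illustration and do not reproduce the thesis's actual protocol or its virtual-cache-entry modeling.

    ```python
    import time

    class CooperativeReporter:
        """Illustrative sketch of cooperative, cache-based reporting.

        A vehicle publishes a road-status report only when its local cache
        holds no sufficiently fresh matching entry for that segment; newer
        reports overwrite (invalidate) older or false ones.
        """

        def __init__(self, freshness_s=60.0, clock=time.monotonic):
            self.freshness_s = freshness_s
            self.clock = clock
            self.cache = {}  # segment -> (timestamp, status)

        def on_notification(self, segment, status):
            # Cache what other vehicles publish over the cellular channel.
            self.cache[segment] = (self.clock(), status)

        def observe(self, segment, status):
            """Return True iff this vehicle should publish its observation."""
            entry = self.cache.get(segment)
            if entry is not None:
                ts, cached_status = entry
                fresh = self.clock() - ts <= self.freshness_s
                if fresh and cached_status == status:
                    return False            # redundant: already reported recently
            self.cache[segment] = (self.clock(), status)
            return True
    ```

    A changed status (e.g. a false detection being corrected) is always published, which is what lets fresh notifications invalidate stale ones.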

    Adaptive Caching of Distributed Components

    Locality of reference is an important property of distributed applications. Caching is typically employed during the development of such applications to exploit this property by locally storing queried remote data: subsequent accesses to these data can be accelerated by serving them immediately from the local store. Current middleware architectures, however, hardly support this non-functional aspect, leaving the application programmer with little assistance. The thesis at hand therefore tries to factor caching out into a separate, configurable middleware service. Integration into the software development lifecycle provides for early capturing, modeling, and later reuse of caching-related metadata. At runtime, the implemented system can additionally adapt to changed usage behavior with respect to the cacheability of data, thus healing misconfigurations and optimizing itself towards an appropriate configuration. Speculative prefetching of data likely to be queried in the immediate future complements the presented approach.
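    One simple way such runtime adaptation of cacheability could work is a read/write-ratio heuristic: mostly-read components are promoted to cacheable, frequently-written ones are demoted because invalidation traffic would dominate. This is a sketch under that assumption; the class and threshold names are invented and the thesis's actual adaptation model may differ.

    ```python
    class AdaptiveCachePolicy:
        """Sketch of per-object cacheability adapted from observed accesses."""

        def __init__(self, read_ratio_threshold=0.9, min_samples=10):
            self.threshold = read_ratio_threshold
            self.min_samples = min_samples
            self.stats = {}  # object id -> (reads, writes)

        def record(self, obj_id, is_write):
            reads, writes = self.stats.get(obj_id, (0, 0))
            if is_write:
                writes += 1
            else:
                reads += 1
            self.stats[obj_id] = (reads, writes)

        def is_cacheable(self, obj_id):
            reads, writes = self.stats.get(obj_id, (0, 0))
            total = reads + writes
            if total < self.min_samples:
                return False                # not enough evidence yet
            return reads / total >= self.threshold
    ```

    Because the decision is re-evaluated on every access, a misconfigured (or drifting) object eventually flips to the appropriate state, which is the self-healing behavior the abstract describes.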

    Migrating Integration from SOAP to REST: Can the Advantages of Migration Justify the Project?

    This thesis investigates the functional and conceptual differences between SOAP-based and RESTful web services and their implications in the context of a real-world migration project. The primary research questions addressed are:
    ‱ What are the key functional and conceptual differences between SOAP-based and RESTful web services?
    ‱ How can SOAP-based and RESTful service clients be implemented within a general client?
    ‱ Can developing a client to work with both REST and SOAP be justified by differences in performance and maintainability?
    The thesis begins with a literature review of the core principles and features of SOAP and REST, highlighting their strengths, weaknesses, and suitability for different use cases. A detailed comparison table summarizes the key differences between the two kinds of web services. The thesis then presents a case study of a migration project from Lemonsoft's web team, which involved adapting an existing integration to support both SOAP-based and RESTful services. The project utilized design patterns and a general client implementation to achieve a unified solution compatible with both protocols. In terms of performance, the evaluation showed that the general client led to faster execution times and reduced memory usage, enhancing overall system efficiency. Maintainability was also improved by simplifying the codebase, using design patterns and object factories, adopting an interface-driven design, and promoting collaborative code reviews. These enhancements have not only resulted in a better user experience but also minimized future resource demands and maintenance costs. In conclusion, this thesis provides insights into the functional and conceptual differences between SOAP-based and RESTful web services, the challenges and best practices for implementing a general client, and the justification for such a solution based on performance and maintainability improvements.
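    The interface-driven, factory-based shape of such a general client can be sketched as below. All names (`ServiceClient`, `client_factory`, the stubbed `call` bodies) are hypothetical; this is not Lemonsoft's implementation, only an illustration of the pattern the abstract names.

    ```python
    from abc import ABC, abstractmethod

    class ServiceClient(ABC):
        """Common interface; callers depend only on this abstraction."""

        @abstractmethod
        def call(self, operation, payload):
            ...

    class SoapClient(ServiceClient):
        def call(self, operation, payload):
            # A real client would build a SOAP envelope and POST it here.
            return {"protocol": "soap", "operation": operation, "payload": payload}

    class RestClient(ServiceClient):
        def call(self, operation, payload):
            # A real client would map the operation to an HTTP verb and URL.
            return {"protocol": "rest", "operation": operation, "payload": payload}

    def client_factory(protocol):
        """Object factory: the protocol choice is confined to this one place."""
        clients = {"soap": SoapClient, "rest": RestClient}
        return clients[protocol]()
    ```

    Confining the SOAP/REST decision to the factory is what lets the rest of the integration stay unchanged during a migration between the two protocols.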

    Ad hoc networks: addressing system and methods for data accessibility

    During the last decade, a new type of wireless network has stirred up great interest within the scientific community: ad hoc networks. They exist in two variants, mobile ad hoc networks (MANETs) and wireless sensor networks (WSNs). Mobile ad hoc networks consist of mobile nodes that communicate with each other without relying on a centralized infrastructure. The nodes move freely and are subject to frequent disconnections due to link instability. This reduces data accessibility and changes the way data are shared across the network. Similar to a MANET, a WSN consists of a set of embedded processing units, called sensors, that communicate with each other via wireless links. Their main function is the collection of parameters relating to the surrounding environment, such as temperature, pressure, motion, or video. WSNs differ from MANETs in the large-scale deployment of their nodes, and are expected to have many applications in various fields, such as industrial processes, military surveillance, and habitat observation and monitoring. When a large number of sensors, which are resource-impoverished nodes, are deployed together with powerful actuation devices, the WSN becomes a Wireless Sensor and Actor Network (WSAN). In such a situation, the collaborative operation of sensors enables the distributed sensing of a physical phenomenon, while actors collect and process the sensor data to perform appropriate actions. Numerous works on WSNs assume the existence of addresses and a routing infrastructure to validate their proposals.
    However, assigning addresses and delivering detected events remain highly challenging, specifically because of the sheer number of sensors and their limited resources. In this thesis, we address the problem of data accessibility in MANETs, and that of addressing and routing in large-scale WSNs. The former involves techniques such as data caching and replication to prevent the deterioration of data accessibility. The addressing system for WSNs includes a distributed address allocation scheme and a routing infrastructure for both actors and sensors. Moreover, with the emergence of multimedia sensors, traffic may mix time-sensitive packets with reliability-demanding packets; we therefore also address the problem of providing quality of service (QoS) in the routing infrastructure for WSNs.

    A shared-disk parallel cluster file system

    Dissertation presented to obtain the degree of Doctor in Informatics from the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia.
    Today, clusters are the de facto cost-effective platform both for high performance computing (HPC) and for IT environments. HPC and IT are quite different environments, and their differences include, among others, their choices of file systems and storage: HPC favours parallel file systems geared towards maximum I/O bandwidth, which are not fully POSIX-compliant and were devised to run on top of (fault-prone) partitioned storage; conversely, IT data centres favour both external disk arrays (to provide highly available storage) and POSIX-compliant file systems, either general purpose or shared-disk cluster file systems (CFSs). These specialised file systems perform very well in their target environments provided that applications do not require some lateral features, e.g., file locking on parallel file systems, or high-performance writes over cluster-wide shared files on CFSs. In brief, none of the above approaches solves the problem of providing high levels of reliability and performance to both worlds. Our pCFS proposal makes a contribution to change this situation: the rationale is to take advantage of the best of both, the reliability of cluster file systems and the high performance of parallel file systems. We don’t claim to provide the absolute best of each, but we aim at full POSIX compliance, a rich feature set, and levels of reliability and performance good enough for broad usage, e.g., traditional as well as HPC applications, support of clustered DBMS engines that may run over regular files, and video streaming. pCFS’ main ideas include:
    · Cooperative caching, a technique that has been used in file systems for distributed disks but, as far as we know, was never used either in SAN-based cluster file systems or in parallel file systems. As a result, pCFS may use all infrastructures (LAN and SAN) to move data.
    · Fine-grain locking, whereby processes running across distinct nodes may define non-overlapping byte-range regions in a file (instead of locking the whole file) and access them in parallel, reading and writing over those regions at the infrastructure’s full speed (provided that no major metadata changes are required).
    A prototype was built on top of GFS (a Red Hat shared-disk CFS): GFS’ kernel code was slightly modified, and two kernel modules and a user-level daemon were added. In the prototype, fine-grain locking is fully implemented and a cluster-wide coherent cache is maintained through data (page-fragment) movement over the LAN. Our benchmarks for non-overlapping writers over a single file shared among processes running on different nodes show that pCFS’ bandwidth is 2 times greater than NFS’ while being comparable to that of the Parallel Virtual File System (PVFS), both of which require about 10 times more CPU. pCFS’ bandwidth also surpasses GFS’ (600 times for small record sizes, e.g., 4 KB, decreasing to 2 times for large record sizes, e.g., 4 MB), at about the same CPU usage.
    Lusitania, Companhia de Seguros S.A.; Programa IBM Shared University Research (SUR).
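    The byte-range conflict check behind fine-grain locking can be sketched in user space as follows. This is purely illustrative (pCFS implements it in kernel modules); the class and method names are assumptions, and a real lock manager would also distinguish shared from exclusive locks.

    ```python
    class ByteRangeLockManager:
        """Sketch of fine-grain (byte-range) file locking: processes lock
        non-overlapping regions of one file and access them in parallel.
        """

        def __init__(self):
            self.locks = {}  # path -> list of (start, end, owner); end exclusive

        def acquire(self, path, start, end, owner):
            """Grant iff [start, end) overlaps no region held by another owner."""
            for s, e, o in self.locks.get(path, []):
                if start < e and s < end and o != owner:
                    return False             # conflicting region held elsewhere
            self.locks.setdefault(path, []).append((start, end, owner))
            return True

        def release(self, path, start, end, owner):
            self.locks[path].remove((start, end, owner))
    ```

    Two half-open intervals [a, b) and [c, d) overlap exactly when a < d and c < b, which is the test used above; whole-file locking is just the degenerate case where every region spans the entire file.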

    The Design of Secure Mobile Databases: An Evaluation of Alternative Secure Access Models

    This research considers how mobile databases can be designed to be both secure and usable. A mobile database is one that is accessed and manipulated via mobile information devices over a wireless medium. A prototype mobile database was designed and then tested against secure access control models to determine whether, and how well, these models performed in securing a mobile database. The methodology in this research consisted of five steps. Initially, a preliminary analysis was done to delineate the environment in which the prototypical mobile database would be used. Requirements definitions were established to gain a detailed understanding of the users and function of the database system. Conceptual database design was then employed to produce a database design model. In the physical database design step, the database was denormalized to reflect some unique computing requirements of the mobile environment. Finally, this mobile database design was tested against three secure access control models and observations were made.
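    The abstract does not name the three access control models evaluated, so as a purely illustrative sketch, here is the shape of one widely used model, role-based access control (RBAC), applied to table-level operations; every name below is an assumption introduced for the example.

    ```python
    class RoleBasedAccess:
        """Sketch of RBAC checks for a mobile database (illustrative only)."""

        def __init__(self):
            self.role_perms = {}   # role -> set of (table, action)
            self.user_roles = {}   # user -> set of roles

        def grant(self, role, table, action):
            self.role_perms.setdefault(role, set()).add((table, action))

        def assign(self, user, role):
            self.user_roles.setdefault(user, set()).add(role)

        def check(self, user, table, action):
            # Permitted iff any of the user's roles carries the permission.
            return any((table, action) in self.role_perms.get(r, set())
                       for r in self.user_roles.get(user, ()))
    ```

    Centralizing the check this way matters on mobile devices, where a lost or stolen device must not expose more of the database than the holder's roles allow.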
    • 
