264 research outputs found

    Contextual mobile adaptation

    Ubiquitous computing (ubicomp) involves systems that attempt to fit in with users’ context and interaction. Researchers agree that system adaptation is a key issue in ubicomp because it can be hard to predict changes in contexts, needs and uses. Even with the best planning, it is impossible to foresee all uses of software at the design stage. In order for software to continue to be helpful and appropriate, it should, ideally, be as dynamic as the environment in which it operates. Changes in user requirements, contexts of use and system resources mean software should also adapt to better support these changes. One area in which adaptation is clearly lacking is ubicomp systems, especially those designed for mobile devices. By improving techniques and infrastructure to support adaptation, it is possible for ubicomp systems to not only sense and adapt to the environments they are running in, but also retrieve and install new functionality so as to better support the dynamic context and needs of users in such environments. Dynamic adaptation of software refers to the act of changing the structure of some part of a software system as it executes, without stopping or restarting it. One of the core goals of this thesis is to discover if such adaptation is feasible, useful and appropriate in the mobile environment, and how designers can create more adaptive and flexible ubicomp systems and associated user experiences. Through a detailed study of existing literature and experience of several early systems, this thesis presents design issues and requirements for adaptive ubicomp systems. This thesis presents the Domino framework, and demonstrates that a mobile collaborative software adaptation framework is achievable. This system can recommend future adaptations based on a history of use.
The framework demonstrates that wireless network connections between mobile devices can be used to transport usage logs and software components, with such connections made either in chance encounters or in designed multi-user interactions. Another aim of the thesis is to discover if users can comprehend and smoothly interact with systems that are adapting. To evaluate Domino, a multiplayer game called Castles has been developed, in which game buildings are in fact software modules that are recommended and transferred between players. This evaluation showed that people are comfortable receiving semi-automated software recommendations; these complement traditional recommendation methods such as word of mouth and online forums, with the system’s support freeing users to discuss more in-depth aspects of the system, such as tactics and strategies for use, rather than forcing them to discover, acquire and integrate software by themselves.
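The history-of-use recommendation idea can be sketched as follows. This is a hypothetical illustration, not Domino's actual algorithm: the module names, the session-log format and the co-occurrence scoring are all assumptions.

```python
from collections import Counter

def recommend(usage_logs, installed, top_n=3):
    """Suggest modules that other peers used alongside ours.

    usage_logs: one set of module names per observed session (gathered,
    e.g., from peers met over ad hoc wireless links).
    installed: the modules this user already runs.
    """
    scores = Counter()
    for session in usage_logs:
        overlap = len(session & installed)
        if overlap == 0:
            continue  # session shares no context with this user
        for module in session - installed:
            scores[module] += overlap  # weight by shared modules
    return [module for module, _ in scores.most_common(top_n)]

# Sessions collected from three peers; this user runs "map" and "chat".
logs = [{"map", "chat", "radar"}, {"map", "radar", "trade"}, {"chat", "trade"}]
```

Here `recommend(logs, {"map", "chat"})` ranks "radar" above "trade", because "radar" co-occurred with more of the user's own modules.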

    ATOM: a distributed system for video retrieval via ATM networks

    The convergence of high speed networks, powerful personal computer processors and improved storage technology has led to the development of video-on-demand services to the desktop that provide interactive controls and deliver Client-selected video information on a Client-specified schedule. This dissertation presents the design of a video-on-demand system for Asynchronous Transfer Mode (ATM) networks, incorporating an optimised topology for the nodes in the system and an architecture for Quality of Service (QoS). The system is called ATOM which stands for Asynchronous Transfer Mode Objects. Real-time video playback over a network consumes large bandwidth and requires strict bounds on delay and error in order to satisfy the visual and auditory needs of the user. Streamed video is a fundamentally different type of traffic to conventional IP (Internet Protocol) data since files are viewed in real-time, not downloaded and then viewed. This streaming data must arrive at the Client decoder when needed or it loses its interactive value. Characteristics of multimedia data are investigated including the use of compression to reduce the excessive bit rates and storage requirements of digital video. The suitability of MPEG-1 for video-on-demand is presented. Having considered the bandwidth, delay and error requirements of real-time video, the next step in designing the system is to evaluate current models of video-on-demand. The distributed nature of four such models is considered, focusing on how Clients discover Servers and locate videos. This evaluation eliminates a centralized approach in which Servers have no logical or physical connection to any other Servers in the network and also introduces the concept of a selection strategy to find alternative Servers when Servers are fully loaded. During this investigation, it becomes clear that another entity (called a Broker) could provide a central repository for Server information. 
Clients have logical access to all videos on every Server simply by connecting to a Broker. The ATOM Model for distributed video-on-demand is then presented by way of a diagram of the topology showing the interconnection of Servers, Brokers and Clients; a description of each node in the system; a list of the connectivity rules; a description of the protocol; a description of the Server selection strategy; and the protocol if a Broker fails. A sample network is provided with an example of video selection, and design issues are raised and solved, including how nodes discover each other, a justification for using a mesh topology for the Broker connections, how Connection Admission Control (CAC) is achieved, how customer billing is achieved and how information security is maintained. A calculation of the number of Servers and Brokers required to service a particular number of Clients is presented. The advantages of ATOM are described. The underlying distributed connectivity is abstracted away from the Client. Redundant Server/Broker connections are eliminated and the total number of connections in the system is minimized by the rule stating that Clients and Servers may only connect to one Broker at a time. This reduces the total number of Switched Virtual Circuits (SVCs), which are a performance hindrance in ATM. ATOM can be easily scaled by adding more Servers, which increases the total system capacity in terms of storage and bandwidth. In order to transport video satisfactorily, a guaranteed end-to-end Quality of Service architecture must be in place. The design methodology for such an architecture is investigated, starting with a review of current QoS architectures in the literature which highlights important definitions including a flow, a service contract and flow management. A flow is a single media source which traverses resource modules between Server and Client.
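The Broker's central repository of Server information, together with a selection strategy that falls back to alternative Servers when one is fully loaded, might look roughly like this in outline. The registry fields and the least-loaded policy are assumptions for illustration, not the dissertation's exact protocol.

```python
def select_server(registry, video):
    """Pick the least-loaded Server that holds `video`.

    registry: the Broker's repository, mapping each server name to its
    video catalogue, current stream count and stream capacity. Returns
    None when every Server holding the video is fully loaded, which is
    the cue to try an alternative Broker or reject the request.
    """
    candidates = [
        (info["load"] / info["capacity"], name)
        for name, info in registry.items()
        if video in info["videos"] and info["load"] < info["capacity"]
    ]
    return min(candidates)[1] if candidates else None

registry = {
    "s1": {"videos": {"a", "b"}, "load": 9, "capacity": 10},
    "s2": {"videos": {"a"}, "load": 2, "capacity": 10},
    "s3": {"videos": {"b"}, "load": 0, "capacity": 10},
}
```

With this registry, a request for video "a" is steered to the lightly loaded "s2" even though "s1" also holds it.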
The concept of a flow is important because it enables the identification of the areas requiring consideration when designing a QoS architecture. It is shown that ATOM adheres to the principles motivating the design of a QoS architecture, namely the Integration, Separation and Transparency principles. The issue of mapping human requirements to network QoS parameters is investigated and the action of a QoS framework is introduced, including several possible causes of QoS degradation. The design of the ATOM Quality of Service Architecture (AQOSA) is then presented. AQOSA consists of 11 modules which interact to provide end-to-end QoS guarantees for each stream. Several important results arise from the design. It is shown that intelligent choice of stored videos in respect of peak bandwidth can improve overall system capacity. The concept of disk striping over a disk array is introduced and a Data Placement Strategy is designed which eliminates disk hot spots (i.e. overuse of some disks whilst others lie idle). A novel parameter (the B-P Ratio) is presented which can be used by the Server to predict future bursts from each video stream. The use of Traffic Shaping to decrease the load on the network from each stream is presented. Having investigated four algorithms for rewind and fast-forward in the literature, a rewind and fast-forward algorithm is presented. The method produces a significant decrease in bandwidth, and the resultant stream is nearly constant, reducing the chance that the stream will add to network congestion. The C++ classes of the Server, Broker and Client are described, emphasizing the interaction between classes. The use of ATOM in the Virtual Private Network and the multimedia teaching laboratory is considered. Conclusions and recommendations for future work are presented.
It is concluded that digital video applications require high-bandwidth, low-error, low-delay networks; a video-on-demand system to support large Client volumes must be distributed, not centralized; control and operation (transport) must be separated; the number of ATM Switched Virtual Circuits (SVCs) must be minimized; the increased connections caused by the Broker mesh are justified by the distributed information gain; and a Quality of Service solution must address end-to-end issues. It is recommended that a web front-end for Brokers be developed; the system be tested in a wide-area ATM network; the Broker protocol be tested by forcing failure of a Broker; and that a proprietary file format for disk striping be implemented.
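The hot-spot-eliminating flavour of disk striping described in the abstract can be illustrated with a simple round-robin placement. This is a sketch under assumed parameters, not the dissertation's actual Data Placement Strategy.

```python
from collections import Counter

def stripe_blocks(num_blocks, num_disks, start=0):
    """Place one video's blocks round-robin over a disk array.

    Block i lands on disk (start + i) mod num_disks, so a sequential
    playback read visits every disk in turn and no single disk is
    overused while others lie idle.
    """
    return [(block, (start + block) % num_disks) for block in range(num_blocks)]

placement = stripe_blocks(num_blocks=6, num_disks=3)
per_disk = Counter(disk for _, disk in placement)  # blocks held by each disk
```

Staggering the `start` offset per video further spreads the load when many videos begin playback from block 0 at once.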

    Increasing the robustness of networked systems

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Includes bibliographical references (p. 133-143). What popular news do you recall about networked systems? You've probably heard about the several-hour failure at Amazon's computing utility that knocked down many startups, or the attacks that forced the Estonian government web-sites to be inaccessible for several days, or you may have observed inexplicably slow responses or errors from your favorite web site. Needless to say, keeping networked systems robust to attacks and failures is an increasingly significant problem. Why is it hard to keep networked systems robust? We believe that uncontrollable inputs and complex dependencies are the two main reasons. The owner of a web-site has little control over when users arrive; the operator of an ISP has little say in when a fiber gets cut; and the administrator of a campus network is unlikely to know exactly which switches or file-servers may be causing a user's sluggish performance. Despite unpredictable or malicious inputs and complex dependencies, we would like a network to manage itself, i.e., diagnose its own faults and continue to maintain good performance. This dissertation presents a generic approach to harden networked systems by distinguishing between two scenarios. For systems that need to respond rapidly to unpredictable inputs, we design online solutions that re-optimize resource allocation as inputs change. For systems that need to diagnose the root cause of a problem in the presence of complex subsystem dependencies, we devise techniques to infer these dependencies from packet traces and build functional representations that facilitate reasoning about the most likely causes for faults. We present a few solutions, as examples of this approach, that tackle an important class of network failures.
Specifically, we address (1) re-routing traffic around congestion when traffic spikes or links fail in internet service provider networks, (2) protecting websites from denial of service attacks that mimic legitimate users and (3) diagnosing causes of performance problems in enterprises and campus-wide networks. Through a combination of implementations, simulations and deployments, we show that our solutions advance the state of the art. by Srikanth Kandula. Ph.D.
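One simple way to infer dependencies from packet traces, in the spirit of the approach described, is temporal co-occurrence: if a host's flows to one service are consistently preceded by flows to another, record a candidate dependency. The trace format, window size and threshold-free scoring here are illustrative assumptions, not the dissertation's technique in full.

```python
from collections import defaultdict

def infer_dependencies(trace, window=0.1):
    """Infer candidate service dependencies from a time-sorted trace.

    trace: (timestamp, source_host, destination_service) tuples.
    If a host contacts service Y shortly before service X, record
    X -> Y as a candidate dependency (e.g. web fetches preceded by DNS).
    """
    deps = defaultdict(set)
    for i, (t, src, dst) in enumerate(trace):
        for t_prev, src_prev, dst_prev in reversed(trace[:i]):
            if t - t_prev > window:
                break  # trace is time-sorted; older events are out of window
            if src_prev == src and dst_prev != dst:
                deps[dst].add(dst_prev)
    return dict(deps)

trace = [
    (0.00, "client", "dns"),
    (0.02, "client", "web"),
    (1.00, "client", "dns"),
    (1.03, "client", "web"),
]
```

On this toy trace the web fetches are always preceded by a DNS lookup within the window, so "web" acquires "dns" as a candidate dependency; a real system would also filter out coincidental co-occurrences.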

    An optimized hardware architecture and communication protocol for scheduled communication

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1997. Includes bibliographical references (p. 173-177). by David Shoemaker. Ph.D.

    Guidelines and infrastructure for the design and implementation of highly adaptive, context-aware, mobile, peer-to-peer systems

    Through a thorough review of existing literature, and extensive study of two large ubicomp systems, problems are identified with current mobile design practices, infrastructures and a lack of required software. From these problems, a set of guidelines for the design of mobile, peer-to-peer, context-aware systems is derived. Four key items of software infrastructure that are desirable but currently unavailable for mobile systems are identified. Each of these items of software is subsequently implemented, and the thesis describes each one, and at least one system in which each was used and trialled. These four items of mobile software infrastructure are: An 802.11 wireless driver that is capable of automatically switching between ad hoc and infrastructure networks when appropriate, combined with a peer discovery mechanism that can be used to identify peers and the services running and available on them. A hybrid positioning system that combines GPS, 802.11 and GSM positioning techniques to deliver location information that is almost constantly available, and can collect further 802.11 and GSM node samples during normal use of the system. A distributed recommendation system that, in addition to providing standard recommendations, can determine the relevance of data stored on the mobile device. This information is used by the same system to prioritise data when exchanging information with peers and to determine data that may be culled when the system is low on storage space without greatly affecting overall system performance. An infrastructure for creating highly adaptive, context-aware mobile applications. The Domino infrastructure allows software functionality to be recommended, exchanged between peers, installed, and executed, at runtime.
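The fallback behaviour of such a hybrid positioning system can be sketched as below. The source ordering and return shape are assumptions; a real implementation would fuse estimates by accuracy rather than simply falling back.

```python
def hybrid_position(gps_fix, wifi_fix, gsm_fix):
    """Return (source, position) from the best available fix.

    GPS is preferred when it has a fix; indoors, where GPS usually
    fails, the 802.11 estimate is used, and GSM cell positioning is
    the coarse last resort, so a position is almost always available.
    """
    for source, fix in (("gps", gps_fix), ("802.11", wifi_fix), ("gsm", gsm_fix)):
        if fix is not None:
            return source, fix
    return None, None

# Indoors: no GPS fix, but an 802.11 estimate is available.
indoor = hybrid_position(None, (55.873, -4.292), (55.87, -4.29))
```

The same loop structure also makes it easy to log which source served each fix, which is useful when collecting further 802.11 and GSM samples during normal use.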

    Multi-layer traffic control for wireless networks

    Wireless LANs, as they have been defined by the IEEE 802.11 standard, are shared media enabling connectivity in the so-called “hot-spots” (airports, hotel lounges, etc.), university campuses, enterprise intranets, as well as “in-home” for home internet access. With reference to the above scenarios, WLANs are commonly denoted as “infra-structured” in the sense that WLAN coverage is based on “Access Points” which provide the mobile stations with access to the wired network. In addition to this approach, there exists also an “ad-hoc” mode to organize WLANs where mobile stations talk to each other without the need of Access Points. Wireless LANs are typically connected to the wired backbones (Internet or corporate intranets) using a wired infrastructure. Wireless Infrastructure Mesh Networks (WIMN) may represent a viable and cost-effective alternative to this traditional wired approach. This is witnessed by the emergence and growth of many companies specialized in the provisioning of wireless infrastructure solutions, as well as the launch of standardization activities (such as 802.11s). The ease of deploying and using a wireless network, and the low deployment costs, have been critical factors in the extraordinary success of such technology.
As a logical consequence, wireless technology has allowed end users to be connected everywhere and at any time, and it has changed several things in people’s lifestyle, such as the way people work, or how they live their leisure time (videoconferencing, instant photo or music sharing, network gaming, etc.). On the other side, the effort to develop networks capable of supporting ubiquitous data services with very high data rates in strategic locations is linked with many technical challenges, including seamless vertical handovers across WLAN and 3G radio technologies, security, 3G-based authentication, unified accounting and billing, consistent QoS and service provisioning, etc. My PhD research activity has been focused on multi-layer traffic control for Wireless LANs. In particular, specific new traffic control solutions have been designed at different layers of the protocol stack (from the link layer to the application layer) in order to guarantee i) advanced features (secure authentication, service differentiation, seamless handover) and ii) a satisfactory level of perceived QoS. Most of the proposed solutions have also been implemented in real testbeds. This dissertation presents the results of my research activity and is organized as follows: each Chapter presents, at a specific layer of the protocol stack, a traffic control mechanism in order to address the issues introduced above. Chapters 1 and 2 refer to the Transport Layer, and they investigate the problem of maintaining fairness for TCP connections. TCP unfairness may result in significant degradation of performance, leading to users perceiving unsatisfactory Quality of Service. These Chapters describe the research activity in which I spent the most significant effort. Chapter 1 proposes a simulative study of the TCP fairness issues and two different solutions based on a Rate Control mechanism.
Chapter 2 illustrates an analytical model of TCP fairness and derives a framework allowing wireless network providers to customize fairness policies. Chapter 3 focuses on the Application Layer and presents new traffic control solutions able to guarantee secure authentication in wireless inter-provider roaming scenarios. These solutions are an integral part of the UniWireless framework, a nationwide distributed Open Access testbed that has been jointly realized by different research units within the TWELVE national project. Chapter 4 describes another Application Layer solution, based on the Session Initiation Protocol, to manage user mobility and provide seamless mobile multimedia services in a heterogeneous scenario where different radio access technologies are used (802.11/WiFi, Bluetooth, 2.5G/3G networks). Finally, Chapter 5 refers to the Data Link Layer and presents a preliminary study of a general approach for routing and load balancing in Wireless Infrastructure Mesh Networks. The key idea is to dynamically select routes among a set of slowly changing alternative network paths, where paths are created through the reuse of classical 802.1Q multiple spanning tree mechanisms.
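A rate-control mechanism of the kind investigated in Chapter 1 can be sketched with a per-flow token bucket that caps each TCP flow's share of the wireless link. The rate and burst figures are illustrative assumptions, not the thesis's actual parameters.

```python
class TokenBucket:
    """Per-flow byte budget: tokens accrue at `rate` bytes/s up to `burst`.

    Holding each flow to its budget stops one TCP connection from
    monopolising the access point and starving the others.
    """
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, 0.0

    def allow(self, now, size):
        """True if a `size`-byte packet may be sent at time `now`."""
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False  # over budget: hold the packet back

bucket = TokenBucket(rate=1000, burst=1500)  # 1000 B/s, one 1500-B burst
decisions = [bucket.allow(t, 1500) for t in (0.0, 0.5, 1.5)]
```

At t=0.5 the bucket has refilled only 500 bytes, so the second full-size packet is held back; by t=1.5 the budget has recovered and transmission resumes.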