
    Mobile web resource tracking during a disaster or crisis situation

    This paper proposes a prototype mobile web resource tracking system that uses mobile devices during an emergency. The system provides real-time data to a decision maker so that he or she can effectively and efficiently monitor resources and assess the situation. Mobile devices (i.e., smartphones) are easy to use at any location and at any time. Internet technology is selected to enable cross-platform solutions for different mobile devices. Resources in the scope of this project are human resources (e.g., a doctor, a police officer) and a chemical inventory in a room. With a GPS-enabled or wireless-enabled device, the system provides the current location of each human resource. Transferring data between the system databases and mobile devices is one of the important areas this project addresses. Since a user's location data is sensitive, it should be protected via an encrypted network. In addition, because of the urgency of any crisis situation, it is critical that data from the system can be retrieved in a reasonable time frame. The investigation explores database and data transfer solutions that meet these availability requirements on mobile devices during emergency conditions. This document includes a description of the system design, a review of current technologies, the proposed methodology, and the implementation. The review of the literature (section 3) provides background on the current technologies that were studied. Four factors drive the system design: usability, security, availability, and performance. The discussion of which technology was selected for each feature can be found in the implementation (section 4). Proposed future research areas can be found in the conclusion and recommendations for future work section
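The freshness requirement on location data can be sketched as a small in-memory tracker: the class name, record layout, and 60-second staleness window below are illustrative assumptions, not the paper's design.

```python
import time

class ResourceTracker:
    """In-memory registry of the latest GPS fix per tracked resource (sketch)."""

    def __init__(self, max_age_s=60):
        self.max_age_s = max_age_s   # fixes older than this are considered stale
        self._fixes = {}             # resource_id -> (lat, lon, timestamp)

    def update(self, resource_id, lat, lon, ts=None):
        """Record the most recent position report for a resource."""
        self._fixes[resource_id] = (lat, lon, ts if ts is not None else time.time())

    def current_locations(self, now=None):
        """Return only fixes fresh enough for a decision maker to act on."""
        now = now if now is not None else time.time()
        return {rid: (lat, lon)
                for rid, (lat, lon, ts) in self._fixes.items()
                if now - ts <= self.max_age_s}
```

In a real deployment the `update` calls would arrive over the encrypted network the paper calls for; this sketch only models the availability/freshness aspect.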

    Channel Scanning and Access Point Selection Mechanisms for 802.11 Handoff: A Survey

    While cellular technology has evolved continuously in recent years and cellular client handoffs go unnoticed, 802.11 networks still impose enormous latency once a client device decides to roam between Access Points (APs). This latency is caused by many factors related to scanning channels and searching for APs with better signal strength. Once data from all nearby APs has been collected, the client picks the most suitable AP and tries to connect to it. The AP verifies that it has enough capacity to serve the client, and that the client has the required parameters and supported rates to match the AP. The AP then processes this request, generates a new Association ID, and sends it back to the client, thereby granting access to connect. Throughout this re-association process, the client can neither send nor receive data frames and experiences a gap between leaving the old AP and associating with the new one. Originally, 802.11 authentication frames were designed for the Wired Equivalent Privacy protocol, but it was later found to be insecure and was deprecated. Keeping these security concerns about shared-key authentication in mind, the IEEE introduced a few additional drafts involving key exchanges between devices. IEEE 802.11r, introduced in 2008, permits wireless clients to perform faster handoffs along with additional data security standards. The key exchange method was redefined, and the new security negotiation protocol serves wireless devices with a better approach. This enables a client to set up Quality of Service state and security on an alternative AP before making a transition, which results in minimal connectivity loss. Although this was an excellent step towards minimizing service disruption and channel scanning, remaining connected to consecutive suitable APs within minimum time continued to be a challenge.
Different manufacturers use their own custom-built methods of handling a client handoff, and hence latency costs differ based on the type of handoff scheme deployed on the device. This thesis focuses on the most effective research of recent years targeting the delays involved in channel scanning and AP selection. A wide range of enhancements, whether on the client device or the AP, is discussed and compared. Some modifications are associated with shortening the channel scan period or using beacons and probe requests/responses efficiently. Others concentrate on modifying the device hardware configuration and switching between network interfaces. Central controllers are a solution to handoff delays: they can track the status of each device in the network and guide devices so as to provide the appropriate Quality of Service to end-users
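The scan-then-select step described above can be sketched in a few lines; the RSSI threshold, the shape of the scan results, and the capacity flag are assumptions for illustration, not criteria taken from the survey.

```python
def select_ap(scan_results, min_rssi=-75):
    """Pick the candidate AP with the strongest signal that can still accept clients.

    scan_results: list of dicts with 'bssid', 'rssi' (dBm), and 'has_capacity'.
    Returns the chosen BSSID, or None if no AP is acceptable.
    """
    candidates = [ap for ap in scan_results
                  if ap["rssi"] >= min_rssi and ap["has_capacity"]]
    if not candidates:
        return None
    # Strongest signal wins among the APs that passed the filters.
    return max(candidates, key=lambda ap: ap["rssi"])["bssid"]
```

Real selection mechanisms surveyed in the thesis weigh additional factors (supported rates, load, roaming history); this sketch captures only the signal-strength baseline.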

    Models of higher-order, type-safe, distributed computation over autonomous persistent object stores

    A remote procedure call (RPC) mechanism permits the calling of procedures in another address space. RPC is a simple but highly effective mechanism for interprocess communication and today enjoys great popularity as a tool for building distributed applications. This popularity is partly a result of its overall simplicity, but also partly a consequence of more than 20 years of research in transparent distribution that have failed to deliver systems that meet the expectations of real-world application programmers. During the same 20 years, persistent systems have proved their suitability for building complex database applications by seamlessly integrating features traditionally found in database management systems into the programming language itself. Some research effort has been invested in distributed persistent systems, but the outcomes commonly suffer from the same problems found with transparent distribution. In this thesis I claim that a higher-order persistent RPC is useful for building distributed persistent applications. The proposed mechanism is: realistic, in the sense that it uses current technology and tolerates partial failures; understandable by application programmers; and general enough to support the development of many classes of distributed persistent applications. To demonstrate the validity of these claims, I propose and have implemented three models for distributed higher-order computation over autonomous persistent stores. Each model successively exposed new problems, which were then overcome by the next model. Together, the three models provide a general yet simple higher-order persistent RPC that is able to operate in realistic environments with partial failures. The real strength of this thesis is the demonstration of realism and simplicity. A higher-order persistent RPC was not only implemented but also used by programmers without experience of programming distributed applications.
Furthermore, a distributed persistent application has been built using these models, which would not have been feasible with a traditional (non-persistent) programming language
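The basic marshal/dispatch/unmarshal cycle of an RPC, with a partial-failure path, can be sketched as follows. The registry, the JSON wire format, and the `RemoteFailure` exception are illustrative stand-ins, not the thesis's persistent-store mechanism.

```python
import json

class RemoteFailure(Exception):
    """Raised when the 'remote' side cannot complete the call (partial failure)."""

registry = {}  # procedure name -> callable; stands in for the server's export table

def export(name):
    """Register a procedure under a name, as a server would export it."""
    def deco(fn):
        registry[name] = fn
        return fn
    return deco

def rpc_call(name, *args):
    """Marshal the call, 'send' it, and unmarshal the reply.

    JSON marshalling stands in for a real wire format; a missing registry
    entry models an unreachable or failed server.
    """
    request = json.dumps({"proc": name, "args": args})
    decoded = json.loads(request)
    proc = registry.get(decoded["proc"])
    if proc is None:
        raise RemoteFailure(f"no server exports {name!r}")
    return proc(*decoded["args"])

@export("add")
def _add(a, b):
    return a + b
```

The higher-order and persistence aspects (passing procedures themselves, surviving store restarts) are exactly what this flat sketch lacks and what the thesis's three models add.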

    The Design of Secure Mobile Databases: An Evaluation of Alternative Secure Access Models

    This research considers how mobile databases can be designed to be both secure and usable. A mobile database is one that is accessed and manipulated via mobile information devices over a wireless medium. A prototype mobile database was designed and then tested against secure access control models to determine whether and how these models performed in securing a mobile database. The methodology in this research consisted of five steps. Initially, a preliminary analysis was done to delineate the environment in which the prototypical mobile database would be used. Requirements definitions were established to gain a detailed understanding of the users and the function of the database system. Conceptual database design was then employed to produce a database design model. In the physical database design step, the database was denormalized to reflect some unique computing requirements of the mobile environment. Finally, this mobile database design was tested against three secure access control models, and observations were made
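A role-based check is one common shape such a secure access model can take; the roles and permission sets below are invented for illustration and are not the three models the study evaluated.

```python
# Role-based access rules: role -> set of operations permitted on the mobile database.
# These roles and operations are hypothetical examples.
PERMISSIONS = {
    "admin":  {"read", "write", "delete"},
    "medic":  {"read", "write"},
    "viewer": {"read"},
}

def is_allowed(role, operation):
    """Return True if the role may perform the operation; unknown roles are denied."""
    return operation in PERMISSIONS.get(role, set())
```

Defaulting to deny for unknown roles mirrors the fail-closed posture a secure mobile database would need over an untrusted wireless medium.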

    Storage Solutions for Big Data Systems: A Qualitative Study and Comparison

    Big data systems development is full of challenges in view of the variety of application areas and domains that this technology promises to serve. Typically, the fundamental design decisions in big data systems design include choosing appropriate storage and computing infrastructures. In this age of heterogeneous systems that integrate different technologies into an optimized solution for a specific real-world problem, big data systems are no exception. As far as the storage aspect of any big data system is concerned, the primary facet is the storage infrastructure, and NoSQL appears to be the technology that fulfills its requirements. However, every big data application has its own data characteristics, and so the corresponding data fits a different data model. This paper presents a feature and use case analysis and comparison of the four main data models, namely document-oriented, key-value, graph, and wide-column. Moreover, a feature analysis of 80 NoSQL solutions is provided, elaborating on the criteria and points that a developer must consider while making a choice. Typically, big data storage needs to communicate with the execution engine and other processing and visualization technologies to create a comprehensive solution. This brings the second facet of big data storage, big data file formats, into the picture. The second half of the paper compares the advantages, shortcomings, and possible use cases of the available big data file formats for Hadoop, which is the foundation of most big data computing technologies. Decentralized storage and blockchain are seen as the next generation of big data storage, and their challenges and future prospects are also discussed
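The choice among the four data models can be caricatured as a rule of thumb; the three yes/no criteria below are a drastic simplification for illustration, not the paper's 80-solution feature analysis.

```python
def suggest_data_model(relationships_heavy, access_by_key_only, schema_flexible):
    """Rough rule-of-thumb mapping from data characteristics to a NoSQL data model.

    The ordering of the checks encodes assumed priorities, not the paper's criteria.
    """
    if relationships_heavy:
        return "graph"        # traversal-heavy data favours graph stores
    if access_by_key_only:
        return "key-value"    # pure lookup workloads need nothing richer
    if schema_flexible:
        return "document"     # nested, evolving records fit documents
    return "wide-column"      # wide, sparse, column-sliced data remains
```

A real selection would also weigh consistency guarantees, query languages, and operational maturity, which is exactly what the paper's comparison tables cover.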

    Dependable IPTV Hosting

    This research focuses on the challenges of hosting 3rd-party RESTful applications that have to meet specific dependability standards. As a proof of concept, I have implemented an architecture and framework for the use case of internet protocol television (IPTV). Delivering TV services via internet protocols over high-speed connections is commonly referred to as IPTV. Similar to the app stores of smartphones, IPTV platforms enable the emergence of IPTV services in which 3rd-party developers provide services to consumers that add value to the IPTV experience. A key issue in the IPTV ecosystem is that telecommunications IPTV providers currently have no system that allows 3rd-party developers to create applications that meet their standards. The main challenges are that the 3rd-party applications must be dependable, scalable, and adhere to service level agreements. This research provides an architecture and framework to overcome these challenges
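A minimal form of the service-level check such a hosting framework needs might compare measured availability against an SLA target; the three-nines target and the measurement window below are assumed examples, not values from this research.

```python
def meets_sla(total_seconds, downtime_seconds, sla_availability=0.999):
    """Check whether an application's measured availability satisfies its SLA.

    availability = uptime / total observed time; the 99.9% default is illustrative.
    """
    availability = (total_seconds - downtime_seconds) / total_seconds
    return availability >= sla_availability
```

A 99.9% target allows roughly 2,592 seconds (about 43 minutes) of downtime in a 30-day month, which is the kind of budget a hosting platform would enforce per 3rd-party application.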

    Resource Management in Multi-Access Edge Computing (MEC)

    This PhD thesis investigates effective ways of managing the resources of a Multi-Access Edge Computing (MEC) platform in 5th Generation Mobile Communication (5G) networks. The main characteristics of MEC include its distributed nature, proximity to users, and high availability. Based on these key features, solutions have been proposed for effective resource management. This research addresses two aspects of resource management in MEC: the computational resources and the caching resources, which correspond to the services provided by the MEC. MEC is a new 5G-enabling technology proposed to reduce latency by bringing cloud computing capability closer to end-user Internet of Things (IoT) and mobile devices. MEC would support latency-critical user applications such as driverless cars and e-health. These applications will depend on resources and services provided by the MEC. However, MEC has limited computational and storage resources compared to the cloud. Therefore, it is important to ensure reliable MEC network communication during resource provisioning by eliminating the chances of deadlock. Deadlock may occur when a huge number of devices contend for a limited amount of resources if adequate measures are not put in place. It is crucial to avoid deadlock while scheduling and provisioning resources on MEC to achieve a highly reliable and readily available system that supports latency-critical applications. In this research, a deadlock avoidance resource provisioning algorithm has been proposed for industrial IoT devices using MEC platforms to ensure higher reliability of network interactions. The proposed scheme incorporates Banker's resource-request algorithm and uses Software Defined Networking (SDN) to reduce communication overhead.
Simulation and experimental results have shown that system deadlock can be prevented by applying the proposed algorithm, which ultimately leads to more reliable network interaction between mobile stations and MEC platforms. Additionally, this research explores the use of MEC as a caching platform, as caching is proclaimed a key technology for reducing service processing delays in 5G networks. Caching on MEC decreases service latency and improves data content access by allowing direct content delivery through the edge without fetching data from a remote server. Caching on MEC is also deemed an effective approach that guarantees more reachability due to proximity to end-users. In this regard, a novel hybrid content caching algorithm has been proposed for MEC platforms to increase their caching efficiency. The proposed algorithm unifies a modified Belady's algorithm with a distributed cooperative caching algorithm to improve data access while reducing latency. A polynomial fit algorithm with Lagrange interpolation is employed to predict future request references for Belady's algorithm. Experimental results show that the proposed algorithm obtains 4% more cache hits than the case study algorithms due to its selective caching approach. Results also show that the use of a cooperative algorithm can improve the total cache hits by up to 80%. Furthermore, this thesis explores another predictive caching scheme to further improve caching efficiency, motivated as an improvement to the former approach. A Predictive Collaborative Replacement (PCR) caching framework has been proposed as a result, consisting of three schemes, each of which addresses a particular problem. The proactive predictive scheme addresses the problem of continuous change in cache popularity trends. The collaborative scheme addresses the problem of cache redundancy in the collaborative space.
Finally, the replacement scheme evicts cold cache blocks to increase the hit ratio. Simulation experiments have shown that the replacement scheme achieves 3% more cache hits than existing replacement algorithms such as Least Recently Used, Multi-Queue, and Frequency-Based Replacement. The PCR algorithm has been tested on a real dataset (the MovieLens 20M dataset) and compared with an existing contemporary predictive algorithm. Results show that PCR performs better, with a 25% increase in hit ratio at a 10% CPU utilization overhead
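The safety check at the heart of Banker's algorithm can be sketched as follows; the resource vectors are illustrative, and the thesis's SDN integration is not modelled here.

```python
def is_safe(available, allocation, need):
    """Banker's safety check: can every device eventually finish?

    available:  free units per resource type.
    allocation: per-device list of units currently held.
    need:       per-device list of units still required.
    Returns True if a safe completion order exists (no deadlock risk).
    """
    work = list(available)
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Device i can finish with the current free pool;
                # it then releases everything it holds.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)
```

A resource-request algorithm grants a request only if the state after the hypothetical grant still passes this check; otherwise the device waits, which is how deadlock is avoided rather than detected after the fact.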
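Belady's offline-optimal replacement, which the hybrid caching algorithm builds on, can be sketched as a hit counter over a known request sequence; the prediction step the thesis adds (polynomial fit with Lagrange interpolation) is not modelled here.

```python
def belady_hits(requests, capacity):
    """Count cache hits under Belady's offline-optimal replacement.

    On a miss with a full cache, evict the resident block whose next
    request lies farthest in the future (or that is never requested again).
    """
    cache = set()
    hits = 0
    for i, block in enumerate(requests):
        if block in cache:
            hits += 1
            continue
        if len(cache) >= capacity:
            future = requests[i + 1:]
            # Blocks never reused get infinite distance and evict first.
            victim = max(cache,
                         key=lambda b: future.index(b) if b in future else float("inf"))
            cache.discard(victim)
        cache.add(block)
    return hits
```

Belady's rule needs the future request sequence, which is why it is optimal only offline; predictive schemes like the one in this thesis approximate that future from past access patterns.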

    Connectivity and Data Transmission over Wireless Mobile Systems

    We live in a world where wireless connectivity is pervasive and becoming ubiquitous. Numerous devices with varying capabilities and multiple interfaces surround us. Most home users use Wi-Fi routers, whereas a large portion of human-inhabited land is covered by cellular networks. As the number of these devices, and the services they provide, increases, our needs for bandwidth and interoperability also grow. Although deploying additional infrastructure and future protocols may alleviate these problems, efficient use of the available resources is important. We are interested in the problem of identifying the properties of a system able to operate over multiple interfaces, take advantage of user locations, identify the users that should be involved in routing, and set up a mechanism for information dissemination. The challenges we need to overcome arise from the complexity and heterogeneity of these networks, as well as the fact that they have no single owner or manager. In this thesis I focus on two cases: utilizing "in-situ" WiFi Access Points to enhance the connections of mobile users, and establishing "Virtual Access Points" in locations where no fixed roadside equipment is available. Both environments have attracted interest in numerous related works. In the first case the main effort is to take advantage of the available bandwidth, while in the second it is to provide delay-tolerant connectivity, possibly in the face of disasters. Our main contribution is to utilize a database that stores user locations in the system, and to provide ways to use that information to improve system effectiveness. This feature allows our system to remain effective in specific scenarios and tests where other approaches fail
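The location database at the core of the system can be illustrated with a simple proximity query; the haversine distance and the in-memory table layout are assumptions for the sketch, not the thesis's implementation.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in kilometres."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def users_near(locations, lat, lon, radius_km):
    """Query the location table for users within radius_km of a point.

    locations: dict of user_id -> (lat, lon); a stand-in for the system database.
    """
    return [uid for uid, (ulat, ulon) in locations.items()
            if haversine_km(lat, lon, ulat, ulon) <= radius_km]
```

Queries like this are what would let the system pick nearby users to act as relays or "Virtual Access Points" when no fixed roadside equipment is available.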