    Multiple multimodal mobile devices: Lessons learned from engineering lifelog solutions

    For lifelogging, or the recording of one’s life history through digital means, to be successful, a range of separate multimodal mobile devices must be employed. These include smartphones such as the Nokia N95, the Microsoft SenseCam (a wearable passive photo-capture device), and wearable biometric devices. Each collects a facet of the bigger picture, for example through personal digital photos, mobile messages, and document access history, but unfortunately they operate independently, unaware of each other. This creates significant challenges for the practical application of these devices, for the use and integration of their data, and for their operation by a user. In this chapter we discuss the software engineering challenges, and their implications, for those integrating data from multiple ubiquitous mobile devices, drawing on our experience working with such technology over the past several years to develop integrated personal lifelogs. The chapter serves as an engineering guide for those considering working in the domain of lifelogging and, more generally, for those working with multiple multimodal devices and the integration of their data.
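    Since each device records its facet of the day independently, a first integration step is typically to normalise the per-device records and merge them into a single timeline by timestamp. The sketch below illustrates that step in Python; the event fields and device labels are illustrative assumptions, not the chapter's actual schema.

```python
# Minimal sketch: merge timestamped records from independent devices
# (phone, SenseCam, biometric sensor) into one lifelog timeline.
# Field names and device labels are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime
import heapq

@dataclass(order=True)
class LifelogEvent:
    timestamp: datetime
    source: str = field(compare=False)    # e.g. "N95", "SenseCam", "biometric"
    payload: dict = field(compare=False)  # device-specific reading or photo metadata

def merge_streams(*streams):
    """Merge per-device event streams (each already time-sorted) into one timeline."""
    # heapq.merge relies on LifelogEvent ordering, which compares timestamps only
    yield from heapq.merge(*streams)
```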

    Learning to Fly by Crashing

    How do you learn to navigate an Unmanned Aerial Vehicle (UAV) and avoid obstacles? One approach is to use a small dataset collected by human experts; however, high-capacity learning algorithms tend to overfit when trained with little data. An alternative is to use simulation, but the gap between simulation and the real world remains large, especially for perception problems. The reason most research avoids using large-scale real data is the fear of crashes! In this paper, we propose to bite the bullet and collect a dataset of crashes! We build a drone whose sole purpose is to crash into objects: it samples naive trajectories and crashes into random objects. We crash our drone 11,500 times to create one of the biggest UAV crash datasets. This dataset captures the different ways in which a UAV can crash. We use all this negative flying data, in conjunction with positive data sampled from the same trajectories, to learn a simple yet powerful policy for UAV navigation. We show that this simple self-supervised model is quite effective at navigating the UAV even in extremely cluttered environments with dynamic obstacles, including humans. For supplementary video see: https://youtu.be/u151hJaGKU
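    The core of the approach is self-supervised labelling: frames sampled far from the moment of impact are treated as positive ("safe") examples and frames near the impact as negative ones, and a binary classifier over image crops then drives steering. The sketch below illustrates that setup in PyTorch; the network architecture and the three-crop action scheme are simplified placeholders, not the paper's exact model.

```python
# Sketch of the self-supervised navigation idea: frames far from a crash are
# labeled "safe" (1), frames near the crash "unsafe" (0); a binary classifier
# scores left/center/right crops of the current frame to pick a heading.
# The network and action scheme here are placeholders, not the paper's model.
import torch
import torch.nn as nn

class SafetyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # logit for "flying ahead is safe"

    def forward(self, x):
        return torch.sigmoid(self.head(self.features(x).flatten(1)))

def choose_action(model, left, center, right):
    """Score three crops of the current frame and steer toward the safest."""
    with torch.no_grad():
        scores = [model(c.unsqueeze(0)).item() for c in (left, center, right)]
    return ("left", "forward", "right")[scores.index(max(scores))]
```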

    Automatically Discovering, Reporting and Reproducing Android Application Crashes

    Mobile developers face unique challenges when detecting and reporting crashes in apps due to their prevailing GUI event-driven nature and additional sources of inputs (e.g., sensor readings). To support developers in these tasks, we introduce a novel, automated approach called CRASHSCOPE. This tool explores a given Android app using systematic input generation, according to several strategies informed by static and dynamic analyses, with the intrinsic goal of triggering crashes. When a crash is detected, CRASHSCOPE generates an augmented crash report containing screenshots, detailed crash reproduction steps, the captured exception stack trace, and a fully replayable script that automatically reproduces the crash on target devices. We evaluated CRASHSCOPE's effectiveness in discovering crashes as compared to five state-of-the-art Android input generation tools on 61 applications. The results demonstrate that CRASHSCOPE performs about as well as current tools for detecting crashes and provides more detailed fault information. Additionally, in a study analyzing eight real-world Android app crashes, we found that CRASHSCOPE's reports are easily readable and allow for reliable reproduction of crashes by presenting more explicit information than human-written reports.
    Comment: 12 pages, in Proceedings of the 9th IEEE International Conference on Software Testing, Verification and Validation (ICST'16), Chicago, IL, April 10-15, 2016, pp. 33-4
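    A minimal illustration of the detect-and-record loop that tools of this kind automate is sketched below, using only stock adb commands: send inputs, watch logcat for a fatal exception, and keep the input log as a replayable script. This is a toy analogue, not CRASHSCOPE's implementation; its input strategies are informed by static and dynamic analyses rather than random tapping.

```python
# Toy detect-and-record loop over adb: drive random taps, watch logcat for a
# fatal exception, and keep the input log as a replayable script.
# Not CRASHSCOPE's implementation; screen size is a placeholder.
import random
import subprocess

def adb(*args):
    return subprocess.run(["adb", *args], capture_output=True, text=True).stdout

def explore(steps=100, width=1080, height=1920):
    adb("logcat", "-c")                      # clear the log buffer
    script = []
    for _ in range(steps):
        x, y = random.randrange(width), random.randrange(height)
        adb("shell", "input", "tap", str(x), str(y))
        script.append(f"adb shell input tap {x} {y}")
        log = adb("logcat", "-d", "-s", "AndroidRuntime:E")
        if "FATAL EXCEPTION" in log:
            return script, log               # replay steps + stack trace
    return script, None
```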

    Investigation into Mobile Learning Framework in Cloud Computing Platform

    Cloud computing infrastructure is increasingly used for distributed applications, and mobile learning applications deployed in the cloud are a new research direction. Such applications require specific development approaches for effective and reliable communication. This paper proposes an interdisciplinary approach to the design and development of mobile applications in the cloud, comprising a front service toolkit and a backend service toolkit. The front service toolkit packages data and sends it to a backend deployed in a cloud computing platform. The backend service toolkit manages rules and workflow, and then transmits the required results back to the front service toolkit. To demonstrate the feasibility of the approach, the paper presents a case study and reports its performance.
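    A minimal sketch of the front/backend split described above: the front service toolkit packages a request as JSON and posts it to a backend in the cloud, which applies a rule and returns the result. The endpoint, field names, and rule below are invented for illustration; the paper's toolkits are considerably richer.

```python
# Front toolkit packages data and posts it to a cloud backend; the backend
# applies a rule and returns the result. Endpoint, fields, and the rule are
# invented placeholders, not the paper's actual interfaces.
import json
from urllib import request

BACKEND_URL = "http://backend.example.com/learning"  # placeholder address

def front_service_submit(student_id: str, answer: str) -> dict:
    """Package data on the device and send it to the cloud backend."""
    payload = json.dumps({"student": student_id, "answer": answer}).encode()
    req = request.Request(BACKEND_URL, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)  # the backend's rule/workflow result

def backend_rule(data: dict) -> dict:
    """Server-side stand-in for the rule/workflow engine."""
    return {"correct": data.get("answer") == "42", "next_unit": "unit-2"}
```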

    Spartan Daily May 10, 2011

    Volume 136, Issue 52

    Designing Comprehensive Independent Learning Interactive Multimedia and its Resources Demands

    Generally, a multimedia project refers to a combination of two or three types of media, such as text, images, animations, applications, and video, and is easily decompiled. This research produces interactive multimedia with five types of media using ActionScript. Using an external XML file as the connector between the interface and the contents, the interactive multimedia becomes reusable and developable. The project is Flash-based, with the extensions *.swf and *.exe. To protect it from decompiling, the project uses layered encryption combined with As3Crypto encryption. Based on test results, the Sothink SWF Decompiler software cannot decompile and display all the properties of the project. In this research, the analysis highlights multiplatform support, standalone operation, media types, content types, the exercise system, responsiveness, reusability, developability, energy efficiency, layered encryption, and independent learning. In user testing, participants declared that they could use all contents and navigation easily and that the project ran normally on the devices used.
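    The project itself is written in ActionScript, but the XML-as-connector pattern is language-independent: the interface reads an external XML file that maps screens to media files, so contents can be swapped without recompiling the interface. The Python sketch below shows the pattern with an invented file layout, not the project's actual schema.

```python
# XML-as-connector pattern: an external XML file maps screens to media files,
# so contents can change without touching the interface code.
# The file layout is an invented example, not the project's actual schema.
import xml.etree.ElementTree as ET

SAMPLE = """<course>
  <screen id="intro">
    <media type="video" src="intro.mp4"/>
    <media type="text"  src="intro.html"/>
  </screen>
</course>"""

def load_screens(xml_text: str) -> dict:
    """Build a screen-id -> media-list mapping from the connector file."""
    root = ET.fromstring(xml_text)
    return {s.get("id"): [(m.get("type"), m.get("src")) for m in s.findall("media")]
            for s in root.findall("screen")}

print(load_screens(SAMPLE))  # {'intro': [('video', 'intro.mp4'), ('text', 'intro.html')]}
```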

    Network layer access control for context-aware IPv6 applications

    As part of the Lancaster GUIDE II project, we have developed a novel wireless access point protocol designed to support the development of next-generation mobile context-aware applications in our local environs. Once deployed, this architecture will allow ordinary citizens secure, accountable and convenient access to a set of tailored applications, including location, multimedia and context-based services, and the public Internet. Our architecture utilises packet marking and network-level packet filtering techniques within a modified Mobile IPv6 protocol stack to perform access control over a range of wireless network technologies. In this paper, we describe the rationale for, and components of, our architecture and contrast our approach with other state-of-the-art systems. The paper also contains details of our current implementation work, including preliminary performance measurements.
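    GUIDE II's access control lives inside a modified Mobile IPv6 stack, but the mark-then-filter pattern it relies on can be approximated with stock Linux tools: mark packets from an authorised source in the mangle table, then have the filter table forward only marked traffic. The sketch below expresses that analogue with ip6tables; the prefix and mark value are placeholders, and this is not the GUIDE II implementation.

```python
# Rough analogue of network-layer mark-then-filter access control using stock
# ip6tables: mark packets from an authorised (example) prefix on ingress, then
# forward only packets carrying the mark. Prefix and mark are placeholders.
import subprocess

RULES = [
    # Mark traffic from an authorised prefix in the mangle table.
    ["ip6tables", "-t", "mangle", "-A", "PREROUTING",
     "-s", "2001:db8:1::/64", "-j", "MARK", "--set-mark", "0x1"],
    # Forward only packets carrying the mark; drop everything else.
    ["ip6tables", "-A", "FORWARD", "-m", "mark", "--mark", "0x1", "-j", "ACCEPT"],
    ["ip6tables", "-A", "FORWARD", "-j", "DROP"],
]

for rule in RULES:
    subprocess.run(rule, check=True)  # requires root privileges
```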