
    Is my configuration any good: checking usability in an interactive sensor-based activity monitor

    We investigate formal analysis of two aspects of usability in a deployed interactive, configurable and context-aware system: an event-driven, sensor-based homecare activity monitor system. The system was not designed from formal requirements or specification: we model the system as it is in the context of an agile development process. Our aim was to determine if formal modelling and analysis can contribute to improving usability, and if so, which style of modelling is most suitable. The purpose of the analysis is to inform configurers about how to interact with the system, so the system is more usable for participants, and to guide future developments. We consider redundancies in configuration rules defined by carers and participants and the interaction modality of the output messages. Two approaches to modelling are considered: a deep embedding, in which devices, sensors and rules are represented explicitly by data structures in the modelling language and non-determinism is employed to model all possible device and sensor states, and a shallow embedding, in which the rules and device and sensor states are represented directly in propositional logic. The former requires a conventional machine and a model checker for analysis, whereas the latter is implemented using a SAT solver directly on the activity monitor hardware. We draw conclusions about the role of formal models and reasoning in deployed systems and the need for clear semantics and ontologies for interaction modalities.
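
    A minimal sketch of the shallow-embedding idea (not the paper's actual encoding, and using brute-force enumeration of sensor valuations in place of the on-device SAT solver): configuration rules become Boolean conditions over sensor states, and a rule is redundant if dropping it never changes which actions fire under any valuation. Sensor names and rules below are hypothetical.

```python
from itertools import product

# Hypothetical "shallow embedding" sketch: each rule maps a Boolean
# condition over sensor states to an action. This is NOT the paper's
# encoding, only an illustration of the redundancy check it describes.
SENSORS = ["door_open", "motion_kitchen", "night"]

# rule = (condition over a sensor valuation, action name)
RULES = [
    (lambda s: s["door_open"] and s["night"], "alert_carer"),
    (lambda s: s["door_open"], "log_event"),
    (lambda s: s["door_open"] and s["motion_kitchen"] and s["night"], "alert_carer"),  # subsumed by rule 0
]

def actions(rules, valuation):
    """Set of actions fired by `rules` for one sensor valuation."""
    return {act for cond, act in rules if cond(valuation)}

def redundant(index, rules, sensors):
    """A rule is redundant if dropping it never changes the fired actions."""
    rest = rules[:index] + rules[index + 1:]
    for bits in product([False, True], repeat=len(sensors)):
        valuation = dict(zip(sensors, bits))
        if actions(rules, valuation) != actions(rest, valuation):
            return False
    return True

for i in range(len(RULES)):
    print(f"rule {i} redundant: {redundant(i, RULES, SENSORS)}")
```

    The deep embedding described in the abstract would instead represent devices, sensors and rules as explicit data structures and leave the state exploration to a model checker on a conventional machine.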

    Estimating snow cover from publicly available images

    In this paper we study the problem of estimating snow cover in mountainous regions, that is, the spatial extent of the earth's surface covered by snow. We argue that publicly available visual content, in the form of user generated photographs and image feeds from outdoor webcams, can both be leveraged as additional measurement sources, complementing existing ground, satellite and airborne sensor data. To this end, we describe two content acquisition and processing pipelines that are tailored to such sources, addressing the specific challenges posed by each of them, e.g., identifying the mountain peaks, filtering out images taken in bad weather conditions, handling varying illumination conditions. The final outcome is summarized in a snow cover index, which indicates, for a specific mountain and day of the year, the fraction of visible area covered by snow, possibly at different elevations. We created a manually labelled dataset to assess the accuracy of the image snow covered area estimation, achieving 90.0% precision at 91.1% recall. In addition, we show that seasonal trends related to air temperature are captured by the snow cover index. Comment: submitted to IEEE Transactions on Multimedia.
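
    As a rough illustration of the kind of per-image computation a snow cover index implies (not the authors' pipeline), one can threshold bright, low-saturation pixels inside a mountain mask and report the covered fraction; mask creation, weather filtering and peak identification are assumed to have happened upstream, and the thresholds below are placeholders.

```python
import numpy as np
from PIL import Image  # assumes Pillow is installed

def snow_cover_fraction(image_path, mountain_mask, v_thresh=0.75, s_thresh=0.25):
    """Fraction of masked mountain pixels that look like snow.

    Illustrative heuristic only: snow pixels are taken to be bright
    (high value) and nearly colourless (low saturation) in HSV space.
    `mountain_mask` is a Boolean array, same height/width as the image,
    marking the visible mountain area.
    """
    hsv = np.asarray(Image.open(image_path).convert("HSV"), dtype=np.float32) / 255.0
    s, v = hsv[..., 1], hsv[..., 2]
    snow = (v > v_thresh) & (s < s_thresh) & mountain_mask
    return snow.sum() / max(mountain_mask.sum(), 1)
```

    The published index would then aggregate such per-image fractions per mountain and day of the year, possibly split by elevation band.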

    Smart Data Recognition System For Seven Segment LED Display

    The automatic data capturing system provides an effective alternative to manual data collection in the laboratory, especially for experiments that need to run over long periods. It avoids common human mistakes such as misreading or mistyping data. Thus, a new smart data recognition system for a seven-segment LED display is developed to make the whole data collection process more systematic and accurate. An image is captured and saved automatically in an image file, and then it is processed through MATLAB software to identify the digits displayed on the LED display. Once the image is preprocessed, analyzed, and recognized, the final output values obtained are transferred to an existing Excel file for further processing according to the user’s requirements. From the results obtained, binary thresholding proved to be the best preprocessing method, and the brightness of the image should be set to ‘0’ for better recognition output.
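
    The paper's pipeline is implemented in MATLAB; the Python sketch below only illustrates the final classification step, assuming binary thresholding and segmentation have already determined which of the seven segments of a digit are lit.

```python
# Sketch of the classification step only: once the lit/unlit state of each
# segment is known, a fixed seven-segment truth table maps it to a digit.
# Segment order is the conventional a, b, c, d, e, f, g.
SEGMENT_TABLE = {
    (1, 1, 1, 1, 1, 1, 0): "0",
    (0, 1, 1, 0, 0, 0, 0): "1",
    (1, 1, 0, 1, 1, 0, 1): "2",
    (1, 1, 1, 1, 0, 0, 1): "3",
    (0, 1, 1, 0, 0, 1, 1): "4",
    (1, 0, 1, 1, 0, 1, 1): "5",
    (1, 0, 1, 1, 1, 1, 1): "6",
    (1, 1, 1, 0, 0, 0, 0): "7",
    (1, 1, 1, 1, 1, 1, 1): "8",
    (1, 1, 1, 1, 0, 1, 1): "9",
}

def decode_digit(segments):
    """segments: sequence of 7 ints (1 = lit) in order a, b, c, d, e, f, g."""
    return SEGMENT_TABLE.get(tuple(segments), "?")

print(decode_digit((1, 1, 0, 1, 1, 0, 1)))  # -> "2"
```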

    Synesthesia: Detecting Screen Content via Remote Acoustic Side Channels

    We show that subtle acoustic noises emanating from within computer screens can be used to detect the content displayed on the screens. This sound can be picked up by ordinary microphones built into webcams or screens, and is inadvertently transmitted to other parties, e.g., during a videoconference call or in archived recordings. It can also be recorded by a smartphone or "smart speaker" placed on a desk next to the screen, or from as far as 10 meters away using a parabolic microphone. Empirically demonstrating various attack scenarios, we show how this channel can be used for real-time detection of on-screen text, or users' input into on-screen virtual keyboards. We also demonstrate how an attacker can analyze the audio received during a video call (e.g., on Google Hangouts) to infer whether the other side is browsing the web instead of watching the video call, and which website is displayed on their screen.
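
    A minimal sketch of the kind of first processing step such an analysis implies (not the authors' code; the file name and frequency band are assumptions): compute a spectrogram of the recording and inspect a high-frequency band where content-dependent screen tones could appear.

```python
import numpy as np
from scipy.io import wavfile        # assumes SciPy is available
from scipy.signal import spectrogram

# Hypothetical first step: a spectrogram of the microphone recording,
# restricted to a near-ultrasonic band of interest. Band edges and the
# input file are placeholders, not values from the paper.
rate, audio = wavfile.read("recording.wav")
if audio.ndim > 1:                  # mix stereo down to mono
    audio = audio.mean(axis=1)

freqs, times, power = spectrogram(audio, fs=rate, nperseg=4096)
band = (freqs > 15_000) & (freqs < 22_000)
band_power = power[band].mean(axis=0)

# Frames with unusually high band power would be candidates for further
# classification (e.g., matching against previously collected traces).
print(times[band_power > band_power.mean() + 2 * band_power.std()])
```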

    GaVe: A webcam-based gaze vending interface using one-point calibration

    Gaze input, i.e., information input via the users' eye movements, represents a promising method for contact-free interaction in human-machine systems. In this paper, we present the GazeVending interface (GaVe), which lets users control actions on a display with their eyes. The interface works with a regular webcam, available on most of today's laptops, and only requires a short one-point calibration before use. GaVe is designed in a hierarchical structure, presenting broad item clusters to users first and subsequently guiding them through another selection round, which allows the presentation of a large number of items. Cluster/item selection in GaVe is based on dwell time, i.e., the duration that users look at a given cluster/item. A user study (N=22) was conducted to test optimal dwell time thresholds and comfortable human-to-display distances. Users' perception of the system, as well as error rates and task completion times, were recorded. We found that all participants quickly understood how to interact with the interface and showed good performance, selecting a target item from a group of 12 items in 6.76 seconds on average. We provide design guidelines for GaVe and discuss the potential of the system.
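
    The dwell-time selection logic itself can be sketched independently of the gaze estimator; the class below is a hypothetical illustration, with the dwell threshold left as a parameter rather than the value established in the user study.

```python
import time

class DwellSelector:
    """Selects the item a user has fixated continuously for `dwell_s` seconds.

    Illustrative sketch only: gaze coordinates are assumed to come from an
    external webcam-based estimator after the one-point calibration, already
    mapped to the item (or cluster) under the gaze point.
    """
    def __init__(self, dwell_s=1.0):
        self.dwell_s = dwell_s
        self.current = None      # item currently being fixated
        self.since = None        # when the current fixation started

    def update(self, item, now=None):
        """Feed the item under the gaze point; returns it once dwell expires."""
        now = time.monotonic() if now is None else now
        if item != self.current:
            self.current, self.since = item, now
            return None
        if item is not None and now - self.since >= self.dwell_s:
            self.since = now     # reset so the selection does not repeat
            return item
        return None
```

    In the hierarchical layout the same logic would run twice: once over the broad clusters, then again over the items within the chosen cluster.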

    Facial and Bodily Expressions for Control and Adaptation of Games (ECAG 2008)
