    Facilitating personal content management in smart phones

    Smart phones, which combine, e.g., communication and mobile multimedia features, store an increasing amount of media content, and so they face content management challenges similar to those desktop computers are experiencing. Content management refers to actions performed on content (e.g., capturing an image or editing text), although the same management action may vary depending on content type (e.g., editing audio involves different operations than editing an image). A key enabler for content management is metadata, which describes content with textual attribute–value pairs and aids the user in, e.g., automatic grouping, sorting, searching, and organizing. Research on mobile personal content management is in its infancy, and the dissertation therefore focuses on common enablers required for further management of multimedia in smart phones. As a result, we claim that information about the context of use could enrich metadata and improve the ease of use of the system, e.g., by supporting later information retrieval and content visualization. Another prerequisite for personal content management is locating the content, either by browsing or by searching. Finally, once content has been located, it must be visualized before the actual management can begin, and defining how to display the content is essential because the user can view it only briefly while moving.
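    A rough sketch of the attribute–value metadata described above (hypothetical field names and helper, not from the dissertation), showing how capture-time context could enrich an item's metadata to support later grouping and retrieval:

        from dataclasses import dataclass, field
        from datetime import datetime

        @dataclass
        class ContentItem:
            """A media item carrying textual attribute-value metadata."""
            path: str
            metadata: dict = field(default_factory=dict)

        def enrich_with_context(item, location, activity):
            # Context of use (when/where/what) stored as plain attribute-value pairs.
            item.metadata["captured_at"] = datetime.now().isoformat()
            item.metadata["location"] = location   # e.g., from GPS or cell ID
            item.metadata["activity"] = activity   # e.g., "meeting", "holiday"

        photo = ContentItem("E:/Images/img001.jpg")
        enrich_with_context(photo, location="Oulu", activity="holiday")
        # Grouping, sorting, and searching can now operate on the metadata
        # alone, without opening the media content itself.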

    Mobile Interface for Content-Based Image Management

    People make more and more use of digital image acquisition devices to capture snapshots of their everyday life. The growing number of personal pictures raises the problem of their classification. Some of the authors proposed an automatic technique for personal photo album management that deals with multiple aspects (i.e., people, time, and background) in a homogeneous way. In this paper we discuss a solution that allows mobile users to remotely access such a technique by means of their mobile phones, almost from everywhere, in a pervasive fashion. This allows users to classify the pictures they store on their devices. The whole solution is presented, with particular regard to the user interface implemented on the mobile phone, along with some experimental results.
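    The client-server details are not given in the abstract; as a minimal sketch of the general pattern (hypothetical endpoint and response shape), a mobile client could upload a photo to the remote classification service over HTTP:

        import requests

        SERVER_URL = "http://example.org/classify"  # hypothetical endpoint

        def classify_remote(photo_path):
            """Upload a photo; receive its predicted album aspects."""
            with open(photo_path, "rb") as f:
                response = requests.post(SERVER_URL, files={"image": f}, timeout=30)
            response.raise_for_status()
            # Assumed response shape: {"people": [...], "time": "...", "background": "..."}
            return response.json()

        print(classify_remote("photo_0001.jpg"))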

    Multiple multimodal mobile devices: Lessons learned from engineering lifelog solutions

    For lifelogging, or the recording of one's life history through digital means, to be successful, a range of separate multimodal mobile devices must be employed. These include smartphones such as the N95, the Microsoft SenseCam (a wearable passive photo capture device), and wearable biometric devices. Each collects a facet of the bigger picture, through, for example, personal digital photos, mobile messages, and document access history, but unfortunately they operate independently and unaware of each other. This creates significant challenges for the practical application of these devices, for the use and integration of their data, and for their operation by a user. In this chapter we discuss the software engineering challenges, and their implications for individuals working on the integration of data from multiple ubiquitous mobile devices, drawing on our experience with such technology over the past several years in developing integrated personal lifelogs. The chapter serves as an engineering guide for those considering work in the domain of lifelogging and, more generally, for those working with multiple multimodal devices and the integration of their data.
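    One recurring engineering task in such integration is aligning data from devices that log independently. A minimal sketch (hypothetical record formats, not the chapter's actual pipeline) of merging timestamped events from two sources into a single timeline:

        import heapq
        from datetime import datetime

        def parse_sensecam(line):
            # Assumed format: "2008-05-01T09:30:00,IMG_0001.JPG"
            ts, image = line.strip().split(",")
            return (datetime.fromisoformat(ts), "sensecam", image)

        def parse_phone(line):
            # Assumed format: "2008-05-01T09:31:12|sms|Hello"
            ts, kind, payload = line.strip().split("|", 2)
            return (datetime.fromisoformat(ts), "phone:" + kind, payload)

        sensecam = [parse_sensecam("2008-05-01T09:30:00,IMG_0001.JPG")]
        phone = [parse_phone("2008-05-01T09:31:12|sms|Hello")]

        # heapq.merge assumes each per-device stream is already time-sorted.
        for ts, source, payload in heapq.merge(sensecam, phone, key=lambda e: e[0]):
            print(ts, source, payload)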

    Mobile access to personal digital photograph archives

    Handheld computing devices are becoming highly connected devices with high-capacity storage. This has made them able to support the storage of, and access to, personal photo archives. However, the only means for mobile device users to browse such archives is typically a simple one-by-one scroll through image thumbnails in the order in which they were taken, or manual organisation into folders. In this paper we describe a system for context-based browsing of personal digital photo archives. Photos are labeled with the GPS location and time at which they are taken, and this is used to derive other context-based metadata such as weather and daylight conditions. We present our prototype system for mobile digital photo retrieval, and an experimental evaluation illustrating the utility of location information for effective personal photo retrieval.
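    A toy sketch of deriving one such context attribute, daylight status, from capture time (illustrative fixed sunrise/sunset values; the actual system would look these up per GPS location and date):

        from datetime import datetime

        def daylight_status(capture_time, sunrise, sunset):
            """Tag a photo as taken in daylight or darkness."""
            return "daylight" if sunrise <= capture_time <= sunset else "darkness"

        # Illustrative values; in practice sunrise/sunset come from an
        # almanac lookup for the photo's GPS coordinates and date.
        shot = datetime(2006, 6, 21, 22, 15)
        rise = datetime(2006, 6, 21, 5, 2)
        sett = datetime(2006, 6, 21, 21, 46)
        print(daylight_status(shot, rise, sett))  # -> "darkness"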

    Combination of content analysis and context features for digital photograph retrieval

    In recent years digital cameras have seen an enormous rise in popularity, leading to a huge increase in the quantity of digital photos being taken. This brings with it the challenge of organising these large collections. The MediAssist project uses date/time and GPS location for the organisation of personal collections. However, this context information is not always sufficient to support retrieval when faced with a large, shared archive made up of photos from a number of users. In this paper we present work which retrieves photos of known objects (buildings, monuments) using both location information and content-based retrieval tools from the AceToolbox. We show that for this retrieval scenario, where a user is searching for photos of a known building or monument in a large shared collection, content-based techniques can offer a significant improvement over ranking based on context (specifically location) alone.
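    The abstract does not specify how the two evidence sources are combined; a common sketch is a weighted linear fusion of normalised content and context scores:

        def fuse_scores(content_score, location_score, alpha=0.7):
            """Linear fusion; alpha weights content-based similarity against
            location proximity. Both scores assumed normalised to [0, 1]."""
            return alpha * content_score + (1 - alpha) * location_score

        # Rank candidate photos for a known-building query:
        # (content similarity, location proximity) per photo.
        candidates = {"p1": (0.9, 0.4), "p2": (0.3, 0.95)}
        ranked = sorted(candidates, key=lambda p: fuse_scores(*candidates[p]),
                        reverse=True)
        print(ranked)  # -> ['p1', 'p2'] when alpha favours visual content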

    Pain Level Detection From Facial Image Captured by Smartphone

    Accurate, regular symptom reporting by cancer patients is of great concern to medical service providers for clinical decision making, such as the adjustment of medication. Since patients face limitations in providing self-reported symptoms, we have investigated how a mobile phone application can play a vital role in helping them. We use facial images captured by a smart phone to detect pain level accurately. In this pain detection process, existing algorithms and infrastructure are reused to keep the solution low-cost and user-friendly for cancer patients. To our knowledge, this pain management solution is the first mobile-based study of its kind. The proposed algorithm classifies faces, each represented as a weighted combination of Eigenfaces, using angular distance and support vector machines (SVMs). In this study, longitudinal data was collected for six months in Bangladesh, and cross-sectional pain images were collected from three countries: Bangladesh, Nepal, and the United States. We found that a personalized model performs better for automatic pain assessment. We also found that the training set should contain varying levels of pain in each group: low, medium, and high.
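    A compact sketch of the Eigenfaces-plus-SVM pipeline the abstract describes, using scikit-learn with dummy data (illustrative parameters; not the authors' implementation):

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC

        # X: flattened grayscale face images (n_samples, n_pixels);
        # y: pain levels, e.g., 0=low, 1=medium, 2=high. Random stand-ins here.
        rng = np.random.default_rng(0)
        X = rng.random((60, 64 * 64))
        y = rng.integers(0, 3, size=60)

        # PCA projects each face onto the Eigenfaces; the SVM then separates
        # pain levels in that reduced space. A cosine (angular-distance)
        # kernel could be substituted to mirror the paper's matching step.
        model = make_pipeline(PCA(n_components=20, whiten=True), SVC(kernel="rbf"))
        model.fit(X, y)
        print(model.predict(X[:3]))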