12,867 research outputs found

    MOSDEN: An Internet of Things Middleware for Resource Constrained Mobile Devices

    The Internet of Things (IoT) is part of the Future Internet and will comprise many billions of Internet Connected Objects (ICO) or `things' that can sense, communicate, compute and potentially actuate, as well as have intelligence, multi-modal interfaces, and physical/virtual identities and attributes. Collecting data from these objects is an important task, as it allows software systems to understand the environment better. Many different hardware devices may be involved in the process of collecting and uploading sensor data to the cloud, where complex processing can occur. Further, we cannot expect all these objects to be connected to computers, for technical and economic reasons. Therefore, we should be able to use resource-constrained devices to collect data from these ICOs. At the same time, due to energy constraints, it is critical to process the collected sensor data before sending it to the cloud, in order to ensure the sustainability of the infrastructure. This requires moving sensor data processing tasks onto resource-constrained computational devices (e.g. mobile phones). In this paper, we propose the Mobile Sensor Data Processing Engine (MOSDEN), a plug-in-based IoT middleware for mobile devices that allows sensor data to be collected and processed without programming effort. Our architecture also supports the sensing-as-a-service model. We present evaluation results that demonstrate its suitability for real-world deployments. Our proposed middleware is built on the Android platform.
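
    The paper above does not reproduce MOSDEN's code, so the following is only a minimal Java sketch of what a plug-in-based sensing middleware of this kind could look like; the names SensorPlugin, SensorReading and MosdenEngineSketch are illustrative assumptions, not MOSDEN's actual API.

```java
// Hypothetical sketch of a plug-in contract for a MOSDEN-style middleware.
// Names (SensorPlugin, SensorReading, MosdenEngineSketch) are illustrative only.
import java.util.ArrayList;
import java.util.List;

interface SensorPlugin {
    String name();            // human-readable plugin name
    SensorReading read();     // one sensed sample
}

class SensorReading {
    final String source;
    final double value;
    final long timestampMillis;

    SensorReading(String source, double value, long timestampMillis) {
        this.source = source;
        this.value = value;
        this.timestampMillis = timestampMillis;
    }
}

class MosdenEngineSketch {
    private final List<SensorPlugin> plugins = new ArrayList<>();

    void register(SensorPlugin plugin) { plugins.add(plugin); }

    // Collects one sample from every registered plugin; a real middleware would
    // filter/aggregate locally here before anything is uploaded to the cloud.
    List<SensorReading> collectOnce() {
        List<SensorReading> batch = new ArrayList<>();
        for (SensorPlugin p : plugins) {
            batch.add(p.read());
        }
        return batch;
    }

    public static void main(String[] args) {
        MosdenEngineSketch engine = new MosdenEngineSketch();
        engine.register(new SensorPlugin() {
            @Override public String name() { return "dummy-light"; }
            @Override public SensorReading read() {
                return new SensorReading("dummy-light", 42.0, System.currentTimeMillis());
            }
        });
        for (SensorReading r : engine.collectOnce()) {
            System.out.println(r.source + " = " + r.value);
        }
    }
}
```

    Keeping the plug-in contract this small is what would allow new sensors to be added without touching the collection or upload logic, which is the "without programming effort" property the abstract claims.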

    Fog Computing in Medical Internet-of-Things: Architecture, Implementation, and Applications

    In an era when the market segment of the Internet of Things (IoT) tops the charts in various business reports, the field of medicine is widely expected to benefit greatly from the explosion of wearables and internet-connected sensors that surround us, acquiring and communicating unprecedented data on symptoms, medication, food intake, and daily-life activities that impact one's health and wellness. However, IoT-driven healthcare has to overcome many barriers: 1) there is an increasing demand for data storage on cloud servers, where the analysis of medical big data becomes increasingly complex; 2) the data, when communicated, are vulnerable to security and privacy issues; 3) communicating the continuously collected data is not only costly but also energy hungry; 4) operating and maintaining the sensors directly from the cloud servers are non-trivial tasks. This book chapter defines Fog Computing in the context of the medical IoT. Conceptually, Fog Computing is a service-oriented intermediate layer in the IoT that provides the interfaces between sensors and cloud servers, facilitating connectivity, data transfer, and a queryable local database. The centerpiece of Fog Computing is a low-power, intelligent, wireless, embedded computing node that carries out signal conditioning and data analytics on raw data collected from wearables or other medical sensors and offers an efficient means to serve telehealth interventions. We implemented and tested a fog computing system using the Intel Edison and Raspberry Pi that allows acquisition, computing, storage and communication of various medical data, such as pathological speech data of individuals with speech disorders, phonocardiogram (PCG) signals for heart rate estimation, and electrocardiogram (ECG)-based Q, R, S detection. Comment: 29 pages, 30 figures, 5 tables. Keywords: Big Data, Body Area Network, Body Sensor Network, Edge Computing, Fog Computing, Medical Cyberphysical Systems, Medical Internet-of-Things, Telecare, Tele-treatment, Wearable Devices. Chapter in Handbook of Large-Scale Distributed Computing in Smart Healthcare (2017), Springer.
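
    As a rough illustration of the kind of on-node analytics the chapter attributes to a fog node (e.g. ECG R-peak detection feeding a heart-rate estimate), here is a simplified threshold-based sketch in Java; it is an assumption-laden stand-in, not the chapter's actual signal-processing pipeline.

```java
// Simplified, illustrative R-peak detector and heart-rate estimate for an ECG
// trace. This is NOT the chapter's actual pipeline, just a sketch of analytics
// a fog node could run locally before sending summaries upstream.
public class RPeakSketch {

    /** Returns an estimated heart rate in beats per minute. */
    static double heartRate(double[] ecg, double sampleRateHz) {
        double max = Double.NEGATIVE_INFINITY;
        for (double v : ecg) max = Math.max(max, v);
        double threshold = 0.6 * max;                   // crude amplitude threshold
        int refractory = (int) (0.25 * sampleRateHz);   // ignore peaks < 250 ms apart

        int lastPeak = -refractory, beats = 0, firstPeak = -1, finalPeak = -1;
        for (int i = 1; i < ecg.length - 1; i++) {
            boolean localMax = ecg[i] > ecg[i - 1] && ecg[i] >= ecg[i + 1];
            if (localMax && ecg[i] > threshold && i - lastPeak >= refractory) {
                if (firstPeak < 0) firstPeak = i;
                finalPeak = i;
                beats++;
                lastPeak = i;
            }
        }
        if (beats < 2) return 0.0;                      // not enough peaks to estimate
        double spanSeconds = (finalPeak - firstPeak) / sampleRateHz;
        return 60.0 * (beats - 1) / spanSeconds;        // intervals per minute
    }

    public static void main(String[] args) {
        // Synthetic trace: one "R peak" per second at 250 Hz sampling -> ~60 bpm.
        double fs = 250.0;
        double[] ecg = new double[(int) (10 * fs)];
        for (int i = 0; i < ecg.length; i++) {
            ecg[i] = (i % 250 == 125) ? 1.0 : 0.05;
        }
        System.out.printf("Estimated HR: %.1f bpm%n", heartRate(ecg, fs));
    }
}
```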

    Understanding face and eye visibility in front-facing cameras of smartphones used in the wild

    Commodity mobile devices are now equipped with high-resolution front-facing cameras, allowing applications in biometrics (e.g., FaceID in the iPhone X), facial expression analysis, or gaze interaction. However, it is unknown how often users hold devices in a way that allows capturing their face or eyes, and how this impacts detection accuracy. We collected 25,726 in-the-wild photos taken with the front-facing cameras of smartphones, along with associated application usage logs. We found that the full face is visible about 29% of the time, and that in most cases the face is only partially visible. Furthermore, we identified an influence of users' current activity; for example, when watching videos, the eyes but not the entire face are visible 75% of the time in our dataset. We also found that a state-of-the-art face detection algorithm performs poorly on photos taken with front-facing cameras. We discuss how these findings affect mobile applications that leverage face and eye detection, and derive practical implications for addressing the limitations of the current state of the art.
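
    For readers who want to reproduce a per-photo face-visibility check on Android, a minimal sketch using the legacy android.media.FaceDetector API is shown below; the study's own detector, dataset and analysis pipeline are not described here, so treat this only as an illustration of the general shape of such a check.

```java
// Illustrative Android sketch: is any face detectable in a front-camera photo?
// The paper evaluated a state-of-the-art detector on its own dataset; this only
// shows the general shape of a per-photo check using a stock Android API.
import android.graphics.Bitmap;
import android.media.FaceDetector;

public class FaceVisibilitySketch {

    /** Returns true if at least one face is found in the photo. */
    public static boolean faceVisible(Bitmap photo) {
        // FaceDetector requires an RGB_565 bitmap with an even width.
        Bitmap rgb565 = photo.copy(Bitmap.Config.RGB_565, false);
        FaceDetector detector =
                new FaceDetector(rgb565.getWidth(), rgb565.getHeight(), 1);
        FaceDetector.Face[] faces = new FaceDetector.Face[1];
        return detector.findFaces(rgb565, faces) > 0;
    }
}
```

    Running such a check over a photo log and counting the fraction of positives is, in spirit, how a "face visible X% of the time" figure like the one above could be computed.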

    Efficient Opportunistic Sensing using Mobile Collaborative Platform MOSDEN

    Mobile devices are rapidly becoming the primary computing devices in people's lives. Application delivery platforms like Google Play and the Apple App Store have transformed mobile phones into intelligent computing devices by means of applications that can be downloaded and installed instantly. Many of these applications take advantage of the plethora of sensors installed on the mobile device to deliver an enhanced user experience. The sensors on a smartphone provide the opportunity to develop innovative mobile opportunistic sensing applications in many sectors, including healthcare, environmental monitoring and transportation. In this paper, we present a collaborative mobile sensing framework, the Mobile Sensor Data EngiNe (MOSDEN), that can operate on smartphones, capturing and sharing sensed data between multiple distributed applications and users. MOSDEN follows a component-based design philosophy, promoting reuse for easy and quick deployment of opportunistic sensing applications. MOSDEN separates application-specific processing from sensing, storing and sharing. MOSDEN is scalable and requires minimal development effort from the application developer. We have implemented our framework on Android-based mobile platforms and evaluated its performance to validate the feasibility and efficiency of MOSDEN for operating collaboratively in mobile opportunistic sensing applications. Experimental outcomes and lessons learnt conclude the paper.
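
    A minimal sketch of the "sense once, share with many applications" idea described above is given below; the class and method names (SharedSensorStream, Consumer) are hypothetical and do not correspond to MOSDEN's real interfaces.

```java
// Hypothetical sketch of fanning one sensed stream out to multiple consuming
// applications, so the sensor is sampled once but shared by many. Names are
// illustrative, not MOSDEN's actual API.
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class SharedSensorStream {

    interface Consumer {
        void onReading(String sensor, double value, long timestampMillis);
    }

    private final List<Consumer> consumers = new CopyOnWriteArrayList<>();

    void subscribe(Consumer c) { consumers.add(c); }

    // Called once per sample by the sensing layer; every registered application
    // receives the same reading, keeping application-specific processing
    // separate from sensing and sharing.
    void publish(String sensor, double value, long timestampMillis) {
        for (Consumer c : consumers) {
            c.onReading(sensor, value, timestampMillis);
        }
    }

    public static void main(String[] args) {
        SharedSensorStream stream = new SharedSensorStream();
        stream.subscribe((s, v, t) -> System.out.println("app A got " + s + " = " + v));
        stream.subscribe((s, v, t) -> System.out.println("app B got " + s + " = " + v));
        stream.publish("accelerometer-x", 0.12, System.currentTimeMillis());
    }
}
```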

    Unconventional TV Detection using Mobile Devices

    Recent studies show that the TV viewing experience is changing, giving rise to trends like "multi-screen viewing" and "connected viewers". These trends describe TV viewers who use mobile devices (e.g. tablets and smartphones) while watching TV. In this paper, we exploit the context information available from ubiquitous mobile devices to detect the presence of TVs and track the media being viewed. Our approach leverages the array of sensors available in modern mobile devices, e.g. cameras and microphones, to detect the location of TV sets, their state (ON or OFF), and the channels they are currently tuned to. We demonstrate the feasibility of the proposed sensing technique using our implementation on Android phones in different realistic scenarios. Our results show that in a controlled environment a detection accuracy of 0.978 F-measure can be achieved. Comment: 4 pages, 14 figures.
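
    The F-measure quoted above is the harmonic mean of precision and recall; the short example below shows the arithmetic with made-up precision and recall values (they are not taken from the paper).

```java
// F-measure (F1) = harmonic mean of precision and recall.
// The precision/recall values below are illustrative only, not from the paper.
public class FMeasureExample {
    static double f1(double precision, double recall) {
        return 2.0 * precision * recall / (precision + recall);
    }

    public static void main(String[] args) {
        double p = 0.97, r = 0.986;                         // illustrative values
        System.out.printf("F-measure = %.3f%n", f1(p, r));  // prints ~0.978
    }
}
```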

    Music Learning Tools for Android Devices

    In this paper, a musical learning application for mobile devices is presented. The main objective is to design and develop an application capable of offering users interested in music learning and training exercises to practice and improve a selection of music skills. The selected music skills are rhythm, melodic dictation and singing. The application includes an audio signal analysis system implemented using the Goertzel algorithm, which is employed in the singing exercises to check whether the user sings the right musical note. The application also includes a graphical interface to represent musical symbols. A set of tests was conducted to check the usefulness of the application as a musical learning tool. A group of users with different levels of musical knowledge tested the system and reported finding it effective, easy to use and accessible.
    Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech.
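
    The Goertzel algorithm mentioned above estimates signal power at a single target frequency, which is what makes it suitable for checking whether a sung note matches an expected pitch. The sketch below is a generic textbook implementation in Java, not the application's actual code.

```java
// Minimal Goertzel power estimate at one target frequency; a per-note check of
// this kind could compare the power at the expected pitch against other notes.
public class GoertzelSketch {

    /** Returns the (relative) signal power at targetHz within the given samples. */
    static double goertzelPower(double[] samples, double sampleRateHz, double targetHz) {
        double omega = 2.0 * Math.PI * targetHz / sampleRateHz;
        double coeff = 2.0 * Math.cos(omega);
        double sPrev = 0.0, sPrev2 = 0.0;
        for (double x : samples) {
            double s = x + coeff * sPrev - sPrev2;
            sPrev2 = sPrev;
            sPrev = s;
        }
        return sPrev * sPrev + sPrev2 * sPrev2 - coeff * sPrev * sPrev2;
    }

    public static void main(String[] args) {
        // Synthetic A4 (440 Hz) tone sampled at 8 kHz, checked against two notes.
        double fs = 8000.0;
        double[] tone = new double[800];
        for (int n = 0; n < tone.length; n++) {
            tone[n] = Math.sin(2.0 * Math.PI * 440.0 * n / fs);
        }
        System.out.println("power @440 Hz:    " + goertzelPower(tone, fs, 440.0));
        System.out.println("power @523.25 Hz: " + goertzelPower(tone, fs, 523.25));
    }
}
```

    Because only one frequency bin is evaluated per call, this is far cheaper than a full FFT, which is why the Goertzel algorithm is a common choice on mobile devices.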