
    State of the art of audio- and video-based solutions for AAL

    Working Group 3. Audio- and Video-based AAL Applications

    It is a matter of fact that Europe is facing ever more pressing challenges in health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, highlighting the need to take action. Active and Assisted Living (AAL) technologies offer a viable approach to these challenges, thanks to their high potential for enabling remote care and support. Broadly speaking, AAL refers to the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply people in need with smart assistance, responding to their needs for autonomy, independence, comfort, security and safety.

    The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent: able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them into their daily environments and lives.

    In this respect, video- and audio-based AAL applications have several advantages in terms of unobtrusiveness and information richness. Cameras and microphones are far less obtrusive than wearable sensors, which may hinder the wearer's activities. In addition, a single camera placed in a room can record most of the activities performed there, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, movements and overall condition of assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interacting with AAL systems, as they have a large sensing range, do not require physical presence at a particular location, and are physically intangible. Moreover, relevant information about individuals' activities and health status can be derived from processing audio signals (e.g., speech recordings).

    Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate settings in which they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature.
    A multidisciplinary debate among experts and stakeholders is paving the way towards AAL that ensures ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach.

    This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with an outline of a new generation of ethically aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects, and highlights the open challenges.

    The report ends with an overview of the challenges, hindrances and opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, it illustrates the current procedural and technological approaches to acceptability, usability and trust in AAL technology, by surveying strategies and approaches to co-design, privacy preservation in video and audio data, transparency and explainability in data processing, and data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potential offered by the silver economy is overviewed.
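As an illustration of the kind of camera-based vital-sign monitoring mentioned above, the sketch below shows the signal-processing core of one common approach, remote photoplethysmography (rPPG), in which the subtle colour change of the skin with each heartbeat is recovered from the green channel of a face region. It is not taken from the report or from any project it names; the function name, the band limits and the assumption that a face region has already been located are all illustrative.

```python
# Illustrative sketch (not from the report): estimating heart rate from video
# via remote photoplethysmography (rPPG). Face-ROI extraction is assumed to be
# done elsewhere; only the signal-processing core is shown here.
import numpy as np
from scipy.signal import butter, filtfilt


def heart_rate_from_green_trace(green_means: np.ndarray, fps: float) -> float:
    """Estimate heart rate (bpm) from the mean green-channel value of a face
    region sampled once per video frame.

    green_means: 1-D array, one mean green value per frame.
    fps:         video frame rate in Hz.
    """
    # Detrend and normalise the raw trace.
    trace = green_means - np.mean(green_means)
    trace /= (np.std(trace) + 1e-8)

    # Band-pass 0.7-4.0 Hz (about 42-240 bpm), the plausible pulse band.
    low, high = 0.7, 4.0
    b, a = butter(3, [low / (fps / 2), high / (fps / 2)], btype="band")
    filtered = filtfilt(b, a, trace)

    # The dominant spectral peak inside the pulse band gives the pulse frequency.
    spectrum = np.abs(np.fft.rfft(filtered))
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
    band = (freqs >= low) & (freqs <= high)
    pulse_hz = freqs[band][np.argmax(spectrum[band])]
    return pulse_hz * 60.0  # beats per minute
```

For example, a 10-second window at 30 fps yields roughly 300 samples; a spectral peak at 1.2 Hz then corresponds to an estimated 72 beats per minute.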

    Academic use of smart phones for social development of visually impaired students of University of Karachi: A study of Android smartphone applications by VI students.

    Smartphones have become very common in Pakistan over the last five years. A smartphone is a mobile phone that performs many of the functions of a computer, typically having a touch-screen interface, internet access, and an operating system capable of running downloaded apps. Woyke (2014) states that the first true smartphone actually made its debut as early as 1992. The smartphone has now permeated human life to a very large extent and has become part of daily, professional, social and academic life. Students depend on smartphones for their academic and social activities: a smartphone can serve as a dictionary, a tool for browsing and searching for information, or a means of interacting with others to meet social needs. Visually impaired (VI) refers to the condition in which a person is partially or completely blind. VI students have the same information and social needs as sighted people; they are part of society and play an equal role in its social development. Smartphones are helpful in students' routine academic tasks and social activities, yet satisfying their immediate information and social needs using an Android smartphone is a difficult task. The focus of this study is the academic use of smartphones for the social development of visually impaired students of the Faculty of Social Science and the Faculty of Education of the University of Karachi. There are 24 departments in these faculties, and more than 20 VI students are studying in them. This study aims to discuss the Android smartphone applications used for academic purposes and for the students' social development. VI students of the Faculty of Social Science and the Faculty of Education, University of Karachi, identified some specific apps during data collection; these applications will be helpful in increasing the accessibility of smartphones among VI students. The study found that the smartphone is very useful for academic and social activities.

    Applying touch gestures to improve application access speed on mobile devices.

    The touch gesture shortcut is one of the most significant contributions to Human-Computer Interaction (HCI). It is used in many fields: e.g., performing web-browsing tasks (moving to the next page, adding bookmarks, etc.) on a smartphone, manipulating a virtual object on a tabletop device, and communicating between two touch-screen devices. Compared with the traditional Graphical User Interface (GUI), the touch gesture shortcut is more efficient, more natural, more intuitive and easier to use. With the rapid development of smartphone technology, an increasing number of data items show up on users' mobile devices, such as contacts, installed apps and photos. As a result, it has become troublesome to find a target item on a mobile device with a traditional GUI. For example, finding a target app may require sliding and browsing through several screens. This thesis addresses this challenge by proposing two alternative methods of using a touch gesture shortcut to find a target item (an app, as an example) on a mobile device.

    Current touch gesture shortcut methods either employ a universal built-in system-defined shortcut template or a gesture-item set that is defined by users before using the device. In either case, the users need to learn or define the gestures first, and then recall and draw them to reach the target item according to the template or predefined set. Evidence has shown that, compared with the GUI, the touch gesture shortcut has an advantage when performing several types of tasks, e.g., text editing, picture drawing and audio control, but it is unknown whether it is quicker or more effective than the traditional GUI for finding target apps. This thesis first conducts an exploratory study to understand user memorisation of Personalized Gesture Shortcuts (PGS) for 15 frequently used mobile apps. An experiment is then conducted to investigate (1) the users' recall accuracy of the PGS for finding both frequently and infrequently used target apps, and (2) the speed with which users are able to access the target apps relative to the GUI. The results show that the PGS produced a clear speed advantage (1.3 s faster on average) over the traditional GUI, while there was an approximately 20% failure rate due to unsuccessful recall of the PGS.

    To address the unsuccessful recall problem, this thesis explores ways of developing a new interactive approach based on the touch gesture shortcut but without requiring recall or predefinition before use. Named the Intelligent Launcher in this thesis, it predicts and launches any intended target app from an unconstrained gesture drawn by the user. To explore how to achieve this, a third experiment investigates the relationship between the reasons underlying the user's gesture creation and the gesture shape (handwriting, non-handwriting or abstract) used as the shortcut. According to the results, and unlike the existing approaches, the thesis proposes that the launcher should predict the user's intended app from three types of gestures: first, non-handwriting gestures, via the visual similarity between the gesture and the app's icon; second, handwriting gestures, via the app's library name plus functionality; and third, abstract gestures, via the app's usage history. In light of these findings, we designed and developed the Intelligent Launcher, which is based on the assumptions drawn from the empirical data.
    This thesis introduces the interaction, the architecture and the technical details of the launcher, and describes how the data from the third experiment are used to improve the predictions with a machine-learning method, i.e., a Markov Model. An evaluation experiment shows that the Intelligent Launcher achieved user satisfaction with a prediction accuracy of 96%. As of now, it is still difficult to know which type of gesture a user tends to use. Therefore, a fourth experiment, focused on the factors that influence the choice of touch gesture shortcut type for accessing a target app, is also conducted in this thesis. The results show that (1) those who preferred a name-based method used it more consistently and used more letter gestures than those who preferred the other three methods; (2) those who preferred the keyword app-search method created more letter gestures than other types; (3) those who preferred an iOS system created more drawing gestures than other types; (4) letter gestures were more often used for apps that were used frequently, whereas drawing gestures were more often used for apps that were used infrequently; and (5) the participants tended to use the same creation method as their preferred method on different days of the experiment.

    This thesis contributes to the body of Human-Computer Interaction knowledge. It proposes two alternative methods that are more efficient and flexible for finding a target item among a large number of items. The PGS method has been confirmed as effective and has a clear speed advantage. The Intelligent Launcher has been developed and demonstrates a novel way of predicting a target item from the gesture the user draws. The findings concerning the relationship between the user's choice of gesture for the shortcut and individual factors have informed the design of a more flexible touch gesture shortcut interface for "target item finding" tasks. Among the different types of data items one might search for, the Intelligent Launcher is a prototype for finding target apps, since the variety in an app's visual appearance and functionality makes it more difficult to predict than other targets, such as a standard phone setting, a contact or a website. However, we believe that the ideas presented in this thesis can be further extended to other types of items, such as videos or photos in a photo library, places on a map or clothes in an online store. What is more, this study also leads the way in exploiting the advantages of machine-learning methods in touch gesture shortcut interactions.
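The abstract names a Markov Model over app usage history as the prediction mechanism for abstract gestures. The sketch below is not the thesis implementation; it only illustrates, under assumed names and a toy launch history, how a first-order Markov model of app launches could rank candidate apps when the drawn gesture carries no handwriting or icon cue.

```python
# Illustrative sketch (not the thesis implementation): a first-order Markov
# model over app-launch history, used to rank candidate apps for an abstract
# gesture. All app names below are made up for the example.
from collections import Counter, defaultdict


class AppUsageMarkovModel:
    def __init__(self):
        # transition_counts[prev_app][next_app] = observed transition count
        self.transition_counts = defaultdict(Counter)

    def observe(self, launch_sequence):
        """Update transition counts from a chronological list of launched apps."""
        for prev_app, next_app in zip(launch_sequence, launch_sequence[1:]):
            self.transition_counts[prev_app][next_app] += 1

    def rank_next(self, current_app, candidates):
        """Rank candidate apps by estimated probability of being launched next."""
        counts = self.transition_counts[current_app]
        total = sum(counts.values())

        def prob(app):
            # Add-one smoothing so unseen apps still get a small score.
            return (counts[app] + 1) / (total + len(candidates))

        return sorted(candidates, key=prob, reverse=True)


model = AppUsageMarkovModel()
model.observe(["Mail", "Calendar", "Mail", "Maps", "Mail", "Calendar"])
print(model.rank_next("Mail", ["Calendar", "Maps", "Camera"]))
# -> ['Calendar', 'Maps', 'Camera'] for this toy history
```

In a full launcher along the lines the thesis describes, such a usage-history score would be combined with icon-similarity and handwriting-recognition scores before the top-ranked app is launched.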

    Army Hand Signal Recognition System using Smartwatch Sensors

    The organized armies of the world all have their own hand signal systems to deliver commands and messages between combatants during operations such as search, reconnaissance, and infiltration. For instance, to command a troop to stop, a commander lifts his or her fist to face height. When an operation is carried out by a small unit, the hand signal system plays a very important role. However, this method has obvious limitations. Each signal must be relayed by individuals, and waiting attentively for a signal can cause soldiers to lose focus on observing the front and become distracted. Another limitation is that it takes a certain amount of time to convey a signal from the first person to the last. Although these limitations concern only brief moments, they can be fatal on the battlefield. Gesture recognition has emerged as a very important and effective means of human-computer interaction (HCI). Applying inertial measurement unit (IMU) sensor data from smart devices has taken gesture recognition to the next level, because people no longer need to rely on external equipment, such as a camera, to read movements. Wearable devices in particular can be more suitable for gesture recognition than hand-held devices because of their distinctive strengths. If soldiers could deliver signals using an off-the-shelf smartwatch, without additional training, many drawbacks of the current hand signal system could be resolved. On the battlefield, cameras that record combatants' movements for image processing can be neither installed nor utilized, and countless obstacles, such as tree branches, trunks, or valleys, hinder a camera from observing the combatants' movements. Because of the unique characteristics of the battlefield, a gesture recognition system using a smartwatch can be the most appropriate solution for making troop movements more efficient and secure. For the system to be used successfully in a combat zone, it requires high precision and prompt processing, although accuracy and operating speed are inversely related in most cases. This paper presents a gesture recognition tool for army hand signals with high accuracy and fast processing speed. It is expected that the army hand signal recognition system (AHSR) will assist small units in carrying out their maneuvers with higher efficiency.
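To make the IMU-based pipeline concrete, the sketch below shows one simple way to classify hand-signal gestures from smartwatch accelerometer and gyroscope windows: statistical features per window and a nearest-centroid classifier. This is not the AHSR system described in the abstract; the window length, the six-axis layout and the classifier choice are all assumptions for illustration.

```python
# Illustrative sketch (not the AHSR implementation): classifying hand signals
# from smartwatch IMU windows with simple statistical features and a
# nearest-centroid classifier.
import numpy as np


def extract_features(window: np.ndarray) -> np.ndarray:
    """window: (n_samples, 6) array of [ax, ay, az, gx, gy, gz] readings."""
    return np.concatenate([
        window.mean(axis=0),                            # average wrist posture
        window.std(axis=0),                             # how vigorous the motion is
        np.abs(np.diff(window, axis=0)).mean(axis=0),   # jerkiness per axis
    ])


class NearestCentroidGestureClassifier:
    def fit(self, windows, labels):
        feats = np.array([extract_features(w) for w in windows])
        self.labels_ = sorted(set(labels))
        self.centroids_ = np.array([
            feats[[lab == l for l in labels]].mean(axis=0) for lab in self.labels_
        ])
        return self

    def predict(self, window):
        f = extract_features(window)
        dists = np.linalg.norm(self.centroids_ - f, axis=1)
        return self.labels_[int(np.argmin(dists))]


# Toy usage with random data standing in for 2-second windows sampled at 50 Hz.
rng = np.random.default_rng(0)
train = [rng.normal(size=(100, 6)) for _ in range(4)]
clf = NearestCentroidGestureClassifier().fit(train, ["stop", "stop", "rally", "rally"])
print(clf.predict(rng.normal(size=(100, 6))))
```

A fielded system would trade this simple classifier for a more accurate model, but the window-features-classifier structure is the part that keeps processing fast on a watch-class device.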

    Chronic Kidney Disease Android Application

    Chronic kidney disease (CKD) is increasingly recognized as a leading public health problem around the world, affecting more than 10 percent of the population worldwide; in CKD, electrolytes and wastes can build up in the body. Kidney failure might not be noticeable until the more advanced stages, at which point it may become fatal without artificial filtering or a transplant. As a result, it is important to detect kidney disease early on to prevent it from progressing to kidney failure. The current main test for the disease is a blood test that measures the level of a waste product called creatinine and requires information such as age, size, gender, and ethnicity. Such tests may be uncomfortable, can lead to infections, and are inconvenient and expensive. I will re-engineer an Android application for chronic kidney disease detection by working on test-strip detection zone localization, detection zone focus, capture quality, and dynamic model loading. The application uses a smartphone's camera and allows users to manually focus on an area of the view to analyze. The camera detects where the test strip and its detection zone are and checks whether they are in focus; the pixels are then sent to the machine learning algorithm. The application can quickly determine the health of a user's kidneys and display it. Because it requires only a few drops of blood and an Android smartphone, it is especially valuable for those who cannot afford insurance or who live in developing countries. This can make a huge difference in the early detection of CKD in areas where people would otherwise forgo the tests for fear of not having enough money.
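Two of the image-handling steps the abstract mentions, checking that the strip's detection zone is in focus and reducing its pixels to an input for the model, can be sketched as below. This is not the application's actual code: the ROI coordinates, the sharpness threshold and the choice of a mean-colour feature are assumptions; only the OpenCV calls (cv2.cvtColor, cv2.Laplacian) are standard.

```python
# Illustrative sketch (not the application's actual code): a sharpness check on
# the test strip's detection zone and a minimal colour feature for a classifier.
import cv2
import numpy as np


def detection_zone_in_focus(frame_bgr, roi, blur_threshold=100.0):
    """Return True if the detection-zone crop looks sharp enough to analyse.

    roi: (x, y, w, h) rectangle around the strip's detection zone, assumed to
    come from an upstream localisation step.
    """
    x, y, w, h = roi
    crop = frame_bgr[y:y + h, x:x + w]
    gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)
    # Variance of the Laplacian is a common, cheap sharpness measure:
    # blurry crops have little high-frequency content and so a low variance.
    return cv2.Laplacian(gray, cv2.CV_64F).var() >= blur_threshold


def detection_zone_feature(frame_bgr, roi):
    """Mean BGR colour of the detection zone, a minimal input for a model."""
    x, y, w, h = roi
    crop = frame_bgr[y:y + h, x:x + w]
    return crop.reshape(-1, 3).mean(axis=0)  # (B, G, R) means
```

Gating the capture on a focus check like this is what keeps poorly focused frames from ever reaching the machine learning model, which is the "capture quality" concern the abstract raises.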
