
    (re)new configurations: Beyond the HCI/Art Challenge: Curating re-new 2011

    Re-new - IMAC 2011 Proceedings

    Smart Home

    The aim of this bachelor's thesis is to present and describe the fundamental principles, characteristics, and application of my own project, called Smart Home. In this thesis, I present the current state of the project as well as future plans for my Smart Home. I also introduce and compare Smart Home systems from companies that have worked with this technology for several years and are already established on the market.

    An analysis of the impact of wireless technology on public vs. private traffic data collection, dissemination and use

    Thesis (M.C.P. and S.M.)--Massachusetts Institute of Technology, Dept. of Urban Studies and Planning, 2001. Includes bibliographical references (leaves 151-154). The collection of data concerning traffic conditions (e.g., incidents, travel times, average speed, traffic volumes, etc.) on roadways has traditionally been carried out by those public entities charged with managing traffic flow, responding to incidents, and maintaining the surface of the roadway. Pursuant to this task, public agencies have employed inductive loop detectors, closed circuit television cameras, technology for tracking electronic toll tags, and other surveillance devices, in an effort to monitor conditions on roads within their jurisdictions. The high cost of deploying and maintaining this surveillance equipment has precluded most agencies from collecting data on roads other than freeways and important arterials. In addition, the "point" nature of most commonly utilized surveillance equipment limits both the variety of data available for analysis and its overall accuracy. Consequently, these problems have limited the usefulness of this traffic data, both to the public agencies collecting it and to the private entities who would like to use it as a resource from which they can generate fee-based traveler information services. Recent Federal Communications Commission (FCC) mandates concerning E-911 have led to the development of new technologies for tracking wireless devices (i.e., cellular phones). Although developed to assist mobile phone companies in meeting the FCC's E-911 mandate, a great deal of interest has arisen concerning their application to the collection of traffic data. The goal of this thesis has therefore been to compare the capabilities and effectiveness of traditional traffic surveillance technologies with those of the wireless tracking systems currently under development.
Our technical research indicates that these newly developed tracking technologies will eventually be able to provide wider geographic surveillance of roads at less expense than traditional surveillance equipment, as well as collect traffic information that is currently unavailable. Even so, our overall conclusions suggest that due to budgetary, institutional, and/or political constraints, some organizations may find themselves unable to procure this high-quality data. Moreover, we believe that even those organizations (both public and private) that are in a position to procure data collected via wireless tracking technology should first consider the needs of their "customers," the strength of the local market for traffic data, and their organization's overall mission, prior to making a final decision. by Armand J. Ciccarelli, III.
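The travel-time estimates at the heart of wireless probe tracking reduce to timing the same handset at two points along a road. A minimal sketch of that computation, assuming a simple along-road coordinate (the `Fix` record, its field names, and the sample values are illustrative, not taken from the thesis):

```python
from dataclasses import dataclass

@dataclass
class Fix:
    """A timestamped position fix for one anonymised handset."""
    t: float       # seconds since some epoch
    pos_km: float  # distance along the roadway, km

def segment_speed_kmh(entry: Fix, exit_: Fix) -> float:
    """Average speed of one probe vehicle over a road segment, in km/h."""
    dt_h = (exit_.t - entry.t) / 3600.0
    if dt_h <= 0:
        raise ValueError("exit fix must be later than entry fix")
    return (exit_.pos_km - entry.pos_km) / dt_h

# A probe entering at km 10 and leaving at km 16 six minutes later
# travels 6 km in 0.1 h, i.e. an average of 60 km/h:
speed = segment_speed_kmh(Fix(t=0, pos_km=10.0), Fix(t=360, pos_km=16.0))
```

Averaging such per-probe speeds over many handsets is what lets wireless tracking cover arterials that loop detectors never reach.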

    Inferring Complex Activities for Context-aware Systems within Smart Environments

    The rising ageing population worldwide and the prevalence of age-related conditions such as physical fragility, mental impairments and chronic diseases have significantly impacted quality of life and caused a shortage of health and care services. Over-stretched healthcare providers have prompted a paradigm shift in public healthcare provisioning. Thus, Ambient Assisted Living (AAL) using Smart Home (SH) technologies has been rigorously investigated to help address the aforementioned problems. Human Activity Recognition (HAR) is a critical component of AAL systems which enables applications such as just-in-time assistance, behaviour analysis, anomaly detection and emergency notifications. This thesis investigates the challenges faced in accurately recognising Activities of Daily Living (ADLs) performed by single or multiple inhabitants within smart environments. Specifically, it explores five complementary research challenges in HAR. The first study contributes to knowledge by developing a semantic-enabled data segmentation approach with user preferences. The second study takes the segmented sensor data and investigates recognising human ADLs at multiple granularities: the coarse- and fine-grained action levels. At the coarse-grained level, semantic relationships between sensors, objects and ADLs are deduced, whereas at the fine-grained level, object usage meeting a satisfactory threshold, with evidence fused from multimodal sensor data, is leveraged to verify the intended actions. Moreover, to handle the imprecise/vague interpretations of multimodal sensors and the challenges of data fusion, fuzzy set theory and the fuzzy web ontology language (fuzzy-OWL) are leveraged. The third study focuses on incorporating the uncertainties introduced into HAR by factors such as technological failure, object malfunction, and human error.
Hence, uncertainty theories and approaches from existing studies are analysed and, based on the findings, a probabilistic ontology (PR-OWL) based HAR approach is proposed. The fourth study extends the first three to distinguish activities conducted by more than one inhabitant in a shared smart environment, using discriminative sensor-based techniques and time-series pattern analysis. The final study investigates a suitable system architecture for a real-time smart environment tailored to AAL systems and proposes a microservices architecture with off-the-shelf and bespoke sensor-based sensing methods. The initial semantic-enabled data segmentation study achieved 100% and 97.8% accuracy in segmenting sensor events under single and mixed activities scenarios, respectively. However, the average classification time taken to segment each sensor event was 3971 ms and 62183 ms for the single and mixed activities scenarios, respectively. The second study, detecting fine-grained user actions, was evaluated with 30 and 153 fuzzy rules to detect two fine-grained movements using a pre-collected dataset from the real-time smart environment. The results indicate good average accuracies of 83.33% and 100%, but with high average durations of 24648 ms and 105318 ms, posing further challenges for the scalability of fusion rule creation. The third study was evaluated by combining the PR-OWL ontology with ADL ontologies and the Semantic Sensor Network (SSN) ontology to define four types of uncertainty present in a kitchen-based activity. The fourth study illustrated a case study extending single-user activity recognition to multi-user activity recognition by combining discriminative sensors (RFID tags and fingerprint sensors) to identify and associate user actions with the aid of time-series analysis. The last study responds to the computational and performance requirements of the four studies by analysing and proposing a microservices-based system architecture for AAL systems.
Future research towards adopting fog/edge computing paradigms from cloud computing is discussed, for higher availability, reduced network traffic/energy and cost, and a decentralised system. As a result of the five studies, this thesis develops a knowledge-driven framework to estimate and recognise multi-user activities at the fine-grained action level. This framework integrates three complementary ontologies to conceptualise factual, fuzzy and uncertain knowledge of the environment/ADLs, time-series analysis and the discriminative sensing environment. Moreover, a distributed software architecture, multimodal sensor-based hardware prototypes, and other supportive utility tools such as a simulator and a synthetic ADL data generator were developed to support the evaluation of the proposed approaches. The distributed system is platform-independent and is currently supported by an Android mobile application and web-browser based client interfaces for retrieving information such as live sensor events and HAR results.
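The fine-grained action verification described above fuses multimodal sensor evidence through fuzzy rules. A minimal sketch of one such rule, using triangular membership functions and min-fusion (the two sensor modalities, the membership ranges, and the "pour" action are illustrative assumptions, not the thesis's actual rule base):

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function: 0 outside (a, c), peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def action_confidence(tilt_deg, grip_pressure):
    """Fuse two modalities with a min (fuzzy AND) rule:
    IF tilt is in the pouring range AND grip is firm THEN action = pour."""
    tilt_ok = tri(tilt_deg, 20, 60, 100)          # assumed pouring-tilt profile
    grip_ok = tri(grip_pressure, 0.3, 0.7, 1.1)   # assumed firm-grip profile
    return min(tilt_ok, grip_ok)

# Both readings at their membership peaks give full confidence (1.0):
conf = action_confidence(60, 0.7)
```

The scalability problem the thesis reports is visible even here: every additional modality or action multiplies the number of membership functions and rules to hand-craft.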

    The Student Movement Volume 107 Issue 2: We Prayed, We Changed, We Glowed: Week Three at Andrews University

    HUMANS: Change Day Interview: Jessica Bowen, interviewed by Gloria Oh; Interview with Brandon Alvarez, interviewed by Grace No; Meet Andrew Rappette, AUSA Executive Vice President, interviewed by Lauren Kim. ARTS & ENTERTAINMENT: Change Day: Art as a Service, Skyler Campbell; Currently..., Solana Campbell; Disney's D23 Expo Concludes, Andrew Francis; In the Rick of Time: Season 6 Launches Off My 2022 School Year, Grace No. NEWS: Almost Anything Goes, Glow Edition, Yoel Kim & Editors; Lead Levels in Benton Harbor, Abigail Kim; Students React to Queen Elizabeth's Passing, Andrew Francis. IDEAS: iOS 16 and the new iPhone: Bop or Flop?, Rachel Ingram-Clay; Meghan Markle and the British Media, Terika Williams; The Little Mermaid and the Importance of Representation, Genevieve Prouty. PULSE: Change Day 2022, Elizabeth Dovich; Clubs & Organizations Ice Cream Fair, Charisse Lapuebla; Scientists Engaging Beyond Classroom & Lab, Desmond Hartwell Murray; Divine Direction: Week of Prayer at Andrews University, Melissa Moore. LAST WORD: Thoughts at 30,000 Feet, Alannah Tjhatra.

    State of the art of audio- and video-based solutions for AAL

    Working Group 3: Audio- and Video-based AAL Applications. It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to the demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, thus highlighting the need for taking action. Active and Assisted Living (AAL) technologies come as a viable approach to help facing these challenges, thanks to the high potential they have in enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, by responding to their necessities of autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairment. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, to be able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them in their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages, in terms of unobtrusiveness and information richness.
Indeed, cameras and microphones are far less obtrusive with respect to the hindrance other wearable sensors may cause to one's activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they can have a large range of sensing, do not require physical presence at a particular location and are physically intangible. Moreover, relevant information about individuals' activities and health status can derive from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate setting where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature. A multidisciplinary debate among experts and stakeholders is paving the way towards AAL ensuring ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL.
It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethical-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability and trust in AAL technology, by surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potentials coming from the silver economy are overviewed.

    2D-barcode for mobile devices

    2D-barcodes were designed to carry significantly more data than their 1D counterparts. These codes are often used in industrial information tagging applications where high data capacity, mobility, and data robustness are required. Wireless mobile devices such as camera phones and Personal Digital Assistants (PDAs) have evolved from mobile voice communication devices into mobile multimedia computing platforms. The recent integration of these two mobile technologies has sparked some interesting applications in which 2D-barcodes work as visual tags and/or information sources and camera phones perform image processing tasks on the device itself. One such application is hyperlink establishment: the 2D symbol captured by a camera phone is decoded by software installed on the phone, and the web site indicated by the data encoded in the symbol is then automatically accessed and shown on the camera phone's display. Nonetheless, this new mobile applications area is still in its infancy. Each proposed mobile 2D-barcode application has its own choice of code, but no standard exists, nor has any study established the criteria for selecting a standard 2D-barcode for mobile phones. This study intends to address this void. The first phase of the study is a qualitative examination. To select the best standard 2D-barcode, features desirable for a standard 2D-barcode optimized for the mobile phone platform are first identified. The second step is to establish criteria based on the identified features, which reflect the operating limitations and attributes of camera phones in general use today. All published and accessible 2D-barcodes are then thoroughly examined against the criteria set for selecting the best 2D-barcode for camera phone applications.
In the second phase, the 2D-barcodes with the highest potential to be chosen as a standard code are experimentally examined against three criteria: lighting conditions, distance, and whether a 2D-barcode supports VGA resolution. Each sample 2D-barcode is captured by a camera phone with VGA resolution and the outcome is tested using an image analysis tool written in MATLAB. The outcome of this study is the selection of the most suitable 2D-barcode for applications in which mobile devices such as camera phones are utilized.
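Whether a symbol "supports VGA resolution" at a given distance comes down to how many image pixels cover each barcode module. A rough pinhole-camera sketch of that relationship (the field-of-view, symbol dimensions, and the 2-pixels-per-module rule of thumb are assumptions for illustration, not figures from the study):

```python
import math

def pixels_per_module(symbol_mm, modules, distance_mm,
                      fov_deg=50.0, image_px=640):
    """Rough pinhole estimate of how many image pixels span one barcode
    module when a VGA (640x480) camera phone captures the symbol."""
    # Width of the scene visible at the symbol's distance.
    scene_mm = 2 * distance_mm * math.tan(math.radians(fov_deg) / 2)
    # Fraction of the image the symbol occupies, in pixels.
    symbol_px = image_px * symbol_mm / scene_mm
    return symbol_px / modules

# A 30 mm symbol of 25 modules per side, shot from 150 mm away:
ppm = pixels_per_module(30, 25, 150)
readable = ppm >= 2.0  # assumed rule of thumb: ~2 px per module to decode
```

Doubling the capture distance halves the pixels per module, which is why distance and resolution had to be tested jointly in the experiments.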

    Designing and evaluating a user interface for continuous embedded lifelogging based on physical context

    PhD Thesis. An increase in both personal information and storage capacity has encouraged people to store and archive their life experiences in multimedia formats. The usefulness of such large amounts of data will remain limited without the development of both retrieval techniques and interfaces that help people access and navigate their personal collections. The research described in this thesis investigates lifelogging technology from the perspectives of the psychology of memory and human-computer interaction. It seeks to increase my understanding of what data can trigger memories and how I might use this insight to retrieve past life experiences in interfaces to lifelogging technology. The review of memory and of previous research on lifelogging technology allowed me to establish a clear understanding of how memory works and to design novel and effective memory cues, while critiquing existing lifelogging systems and approaches to retrieving memories of past actions and activities. In the initial experiments I evaluated the design and implementation of a prototype, which exposed numerous problems in both data visualisation and usability. These findings informed the design of a novel lifelogging prototype to facilitate retrieval. I assessed this second prototype and determined how the improved system supported access to and retrieval of users' past life experiences; in particular, how users group their data into events, how they interact with their data, and the classes of memories that it supported. In this doctoral thesis I found that visualising the movements of users' hands and bodies facilitated grouping activities into events when combined with the photos and other data captured at the same time. In addition, the movements of the user's hands and body, and the movements of some objects, can support activity recognition or help users detect activities and group them into events.
Furthermore, the ability to search for specific movements significantly reduced the amount of time it took to retrieve data related to specific events. I identified three major strategies that users followed to understand the combined data: skimming sequences, cross-sensor jumping, and continued scanning.
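Grouping a continuous movement stream into discrete events, as the prototype does to support retrieval, can be sketched as a simple threshold-based segmentation (the threshold, gap size, and sample data are illustrative; the thesis's actual method is not reproduced here):

```python
def segment_events(magnitudes, threshold=0.2, min_gap=3):
    """Group a stream of movement magnitudes into events: runs of samples
    at or above `threshold`, merged unless separated by at least `min_gap`
    quiet samples. Returns (start, end) sample-index pairs."""
    events, start, quiet = [], None, 0
    for i, m in enumerate(magnitudes):
        if m >= threshold:
            if start is None:
                start = i
            quiet = 0
        elif start is not None:
            quiet += 1
            if quiet >= min_gap:       # long enough lull: close the event
                events.append((start, i - quiet))
                start, quiet = None, 0
    if start is not None:              # stream ended mid-event
        events.append((start, len(magnitudes) - 1 - quiet))
    return events

stream = [0.0, 0.5, 0.6, 0.1, 0.4, 0.0, 0.0, 0.0, 0.7, 0.8, 0.0]
# Two events: samples 1-4 (the single quiet sample at index 3 is bridged)
# and samples 8-9.
events = segment_events(stream)
```

Bridging short lulls inside an event mirrors the finding above that users prefer activities grouped into coherent events rather than fragmented bursts.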