HIPPO: Pervasive Hand-Grip Estimation from Everyday Interactions
Hand-grip strength is widely used to estimate muscle strength and serves as a general indicator of a person's overall health, particularly in aging adults. Hand-grip strength is typically estimated using dynamometers or specialized force-resistive pressure sensors embedded onto objects. Both of these solutions require the user to interact with a dedicated measurement device, which unnecessarily restricts the contexts where estimates are acquired. We contribute HIPPO, a novel non-intrusive and opportunistic method for estimating hand-grip strength from everyday interactions with objects. HIPPO repurposes light sensors available in wearables (e.g., rings or gloves) to capture changes in light reflectivity when people interact with objects. This allows HIPPO to non-intrusively piggyback on everyday interactions to gather health information without affecting the user's everyday routines. We present two prototypes integrating HIPPO: an early smart-glove proof-of-concept and a further optimized solution that uses sensors integrated onto a ring. We validate HIPPO through extensive experiments and compare it against three baselines, including a clinical dynamometer. Our results show that HIPPO operates robustly across a wide range of everyday objects and participants. The force estimates correlate with estimates produced by pressure-based devices and can determine the correct hand-grip strength category with up to 86% accuracy. Our findings also suggest that users prefer our approach to existing solutions, as HIPPO blends the estimation with everyday interactions.
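As a rough illustration of the kind of pipeline such an approach could use, the sketch below (Python) classifies a grip-strength category from a one-dimensional light-reflectivity trace. The feature set, the random-forest classifier, and the synthetic data are assumptions for illustration, not HIPPO's published method.

    # Illustrative sketch only: HIPPO's actual estimation pipeline is not described here.
    # Assumes a wearable light sensor yields a 1-D reflectivity trace per grasp and that
    # grip-strength categories come from dynamometer labels; all data below is synthetic.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def grasp_features(trace: np.ndarray) -> np.ndarray:
        """Summarize a reflectivity trace with simple amplitude/shape statistics."""
        return np.array([
            trace.max() - trace.min(),  # dynamic range during the grasp
            trace.mean(),               # average reflectivity level
            trace.std(),                # variability while the object is held
            np.trapz(trace),            # area under the trace
        ])

    rng = np.random.default_rng(0)
    traces = [rng.normal(loc=i % 3, scale=0.5, size=200) for i in range(60)]  # synthetic grasps
    labels = np.array([i % 3 for i in range(60)])                             # weak / medium / strong

    X = np.stack([grasp_features(t) for t in traces])
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

    new_trace = rng.normal(loc=2, scale=0.5, size=200)
    print("predicted grip-strength category:", clf.predict([grasp_features(new_trace)])[0])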
The SPATIAL Architecture: Design and Development Experiences from Gauging and Monitoring the AI Inference Capabilities of Modern Applications
Despite its enormous economic and societal impact, a lack of human-perceived control and safety is redefining the design and development of emerging AI-based technologies. New regulatory requirements mandate increased human control and oversight of AI, transforming the development practices and responsibilities of individuals interacting with AI. In this paper, we present the SPATIAL architecture, a system that augments modern applications with capabilities to gauge and monitor the trustworthiness of their AI inference capabilities. To design SPATIAL, we first explore the evolution of modern system architectures and how AI components and pipelines are integrated into them. With this information, we then develop a proof-of-concept architecture that analyzes AI models in a human-in-the-loop manner. SPATIAL provides an AI dashboard that allows individuals interacting with applications to obtain quantifiable insights about the AI decision process. This information is then used by human operators to comprehend possible issues that influence the performance of AI models and to adjust or counter them. Through rigorous benchmarks and experiments in real-world industrial applications, we demonstrate that SPATIAL can easily augment modern applications with metrics to gauge and monitor trustworthiness; however, this in turn increases the complexity of developing and maintaining systems that implement AI. Our work highlights lessons learned and experiences from augmenting modern applications with mechanisms that support regulatory compliance of AI. In addition, we present a roadmap of ongoing challenges that require attention to achieve robust trustworthy analysis of AI and greater engagement of human oversight.
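A minimal sketch of the monitoring idea, assuming nothing about SPATIAL's actual implementation: each inference is wrapped so it also emits a quantifiable signal (here, prediction entropy as a stand-in trust metric) that a dashboard or human operator could inspect. The class and field names are hypothetical.

    # Minimal sketch: wrap model inference so every prediction also logs a metric a
    # dashboard could display; entropy is used as a placeholder trustworthiness signal.
    import math
    from dataclasses import dataclass
    from typing import Callable, Sequence

    @dataclass
    class MonitoredPrediction:
        label: int
        probabilities: Sequence[float]
        entropy: float  # higher entropy -> less confident decision

    class InferenceMonitor:
        def __init__(self, predict_proba: Callable[[Sequence[float]], Sequence[float]]):
            self.predict_proba = predict_proba
            self.log: list[MonitoredPrediction] = []  # what a dashboard would read

        def predict(self, x: Sequence[float]) -> MonitoredPrediction:
            probs = self.predict_proba(x)
            entropy = -sum(p * math.log(p) for p in probs if p > 0)
            record = MonitoredPrediction(
                label=max(range(len(probs)), key=probs.__getitem__),
                probabilities=probs, entropy=entropy)
            self.log.append(record)
            return record

    # Usage with a toy two-class model.
    monitor = InferenceMonitor(lambda x: [0.9, 0.1] if sum(x) > 0 else [0.4, 0.6])
    print(monitor.predict([1.0, 2.0]))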
Toward Trustworthy and Responsible Autonomous Drones in Future Smart Cities
Autonomous drones are reaching a level of maturity at which they can be deployed in cities to support tasks ranging from medicine and food delivery to environmental monitoring. These operations rely on powerful AI models integrated into the drones. Ensuring these models are robust is essential for operating in cities, as errors in the decisions of the autonomous drones can cause damage to citizens or the urban infrastructure. We contribute a research vision for trustworthy city-scale deployments of autonomous drones. We highlight key requirements and challenges that must be fulfilled to achieve city-scale autonomous drone deployments. In addition, we analyze the complexity of using XAI methods to monitor drone behavior. We demonstrate this by inducing changes in AI model behavior using data poisoning attacks. Our results demonstrate that XAI methods are sensitive enough to detect the possibility of a data attack, but that combining multiple XAI methods improves the robustness of the estimation. Our results also suggest that the reaction time to counter an attack in a city-scale deployment is currently large due to the complexity of the XAI analysis.
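To make the detection idea concrete, the hedged sketch below compares fresh feature attributions against a clean reference profile and flags possible poisoning only when the average drift across several XAI methods exceeds a threshold. The method names, attribution values, and threshold are invented for illustration and do not reproduce the paper's analysis.

    # Hedged sketch: flag a possible data-poisoning attack when explanations drift from
    # a clean reference, averaging over several XAI methods so one noisy method does not
    # trigger the alarm alone. Values and threshold are purely illustrative.
    import numpy as np

    def attribution_drift(reference: np.ndarray, current: np.ndarray) -> float:
        """Cosine distance between two attribution vectors (0 = identical, 2 = opposite)."""
        ref = reference / np.linalg.norm(reference)
        cur = current / np.linalg.norm(current)
        return float(1.0 - ref @ cur)

    def poisoning_suspected(reference_by_method: dict, current_by_method: dict,
                            threshold: float = 0.3) -> bool:
        """Flag only if the mean drift across all XAI methods crosses the threshold."""
        drifts = [attribution_drift(reference_by_method[m], current_by_method[m])
                  for m in reference_by_method]
        return float(np.mean(drifts)) > threshold

    reference = {"saliency": np.array([0.5, 0.3, 0.2]), "shap": np.array([0.4, 0.4, 0.2])}
    current = {"saliency": np.array([0.1, 0.1, 0.8]), "shap": np.array([0.2, 0.1, 0.7])}
    print("possible data poisoning:", poisoning_suspected(reference, current))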
Social-aware Federated Learning: Challenges and Opportunities in Collaborative Data Training
Federated learning (FL) is a promising privacy-preserving solution for building powerful AI models. In many FL scenarios, such as healthcare or smart city monitoring, users' devices may lack the capabilities required to collect suitable data, which limits their contributions to the global model. We contribute social-aware federated learning as a solution that boosts the contributions of individuals by allowing tasks to be outsourced to social connections. We identify key challenges and opportunities, and establish a research roadmap for the path forward. Through a user study with N = 30 participants, we examine collaborative incentives for FL, showing that social-aware collaboration can significantly boost the number of contributions to a global model, provided that the right incentive structures are in place.
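As a conceptual sketch of how delegation could fit into a training round (not an implementation from the paper), the Python snippet below runs one FedAvg-style round in which a device without suitable data outsources its local update to a trusted social connection. The model shapes, the toy update rule, and the delegation structure are assumptions.

    # Conceptual sketch: one federated-averaging round where a client lacking usable data
    # delegates its update to a social connection. All details are hypothetical.
    import numpy as np

    def local_update(global_model: np.ndarray, local_data: np.ndarray, lr: float = 0.1) -> np.ndarray:
        """One toy gradient step pulling the model toward the mean of the local data."""
        return global_model - lr * (global_model - local_data.mean(axis=0))

    def federated_round(global_model, clients):
        updates = []
        for client in clients:
            # A client without usable data outsources the task to a social connection.
            data = client["data"] if client["data"] is not None else client["delegate"]["data"]
            updates.append(local_update(global_model, data))
        return np.mean(updates, axis=0)  # FedAvg-style aggregation

    rng = np.random.default_rng(1)
    alice = {"data": rng.normal(0.0, 1.0, size=(20, 3)), "delegate": None}
    bob_without_sensor = {"data": None, "delegate": alice}  # contributes via alice
    print(federated_round(np.zeros(3), [alice, bob_without_sensor]))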