
    Detecting Low Rapport During Natural Interactions in Small Groups from Non-Verbal Behaviour

    Rapport, the close and harmonious relationship in which interaction partners are "in sync" with each other, has been shown to result in smoother social interactions, improved collaboration, and improved interpersonal outcomes. In this work, we are the first to investigate automatic prediction of low rapport during natural interactions within small groups. This task is challenging given that rapport only manifests in subtle non-verbal signals that are, in addition, subject to influences of group dynamics as well as interpersonal idiosyncrasies. We record videos of unscripted discussions of three to four people using a multi-view camera system and microphones. We analyse a rich set of non-verbal signals for rapport detection, namely facial expressions, hand motion, gaze, speaker turns, and speech prosody. Using facial features, we can detect low rapport with an average precision of 0.7 (chance level at 0.25), while incorporating prior knowledge of participants' personalities can even achieve early prediction without a drop in performance. We further provide a detailed analysis of different feature sets and the amount of information contained in different temporal segments of the interactions. (Comment: 12 pages, 6 figures)
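    The abstract reports average precision of 0.7 against a 0.25 chance level (the prevalence of the low-rapport class). As a hypothetical illustration of how that metric is computed — the labels and scores below are invented, not the paper's data:

    ```python
    def average_precision(labels, scores):
        """Average precision: mean of the precision values at each true-positive rank."""
        ranked = sorted(zip(scores, labels), key=lambda p: p[0], reverse=True)
        hits, precisions = 0, []
        for rank, (_, label) in enumerate(ranked, start=1):
            if label == 1:
                hits += 1
                precisions.append(hits / rank)
        return sum(precisions) / len(precisions)

    # Toy example: 1 = low rapport, 0 = otherwise (25% positives, matching the chance level).
    labels = [1, 0, 0, 0, 1, 0, 0, 0]
    scores = [0.9, 0.8, 0.3, 0.2, 0.7, 0.1, 0.4, 0.05]
    ap = average_precision(labels, scores)
    ```

    A classifier that ranked positives purely at random would score AP near the positive-class prevalence, which is why 0.25 is the stated chance level.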

    Smartphone Semi-unlock by Grip Authentication

    Viewing notifications on a smartphone or other device requires the user to either unlock their device or allow notification delivery on the device lock screen. Delivery of notifications on the lock screen, while convenient, can potentially leak user information. However, biometric (face, fingerprint, etc.) or password/pattern based authentication can be cumbersome and/or unavailable in many situations. This disclosure describes the use of a user’s grip style to semi-unlock an electronic device such as a smartphone, tablet, laptop, etc. without compromising privacy. Unlike biometric authentication (fingerprint, face, etc.), grip data alone does not contain sufficient personal characteristics to authenticate a user. Per techniques of this disclosure, to authenticate the user via grip, the user is instructed to perform a simple stipulated gesture. The sequential grip dynamics during the time the user performs the gesture are observed. For example, the gesture can be grasping the body of the device. Upon successful grip authentication, low-confidentiality notifications can be displayed, or features such as a virtual assistant can be made available to the user.
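    A minimal sketch of the matching step, assuming the grip dynamics are sampled as a fixed-length pressure sequence and compared to an enrolled template by Euclidean distance — the sensor layout, sample values, and threshold are illustrative assumptions, not details from the disclosure:

    ```python
    import math

    def grip_matches(template, observed, threshold=0.5):
        """Return True if the observed grip sequence is close enough to the enrolled template."""
        if len(template) != len(observed):
            return False
        dist = math.sqrt(sum((t - o) ** 2 for t, o in zip(template, observed)))
        return dist <= threshold

    enrolled = [0.2, 0.5, 0.9, 0.7, 0.4]       # pressure samples captured during enrollment
    attempt  = [0.25, 0.48, 0.88, 0.72, 0.41]  # samples from the current grasp gesture
    semi_unlocked = grip_matches(enrolled, attempt)
    ```

    Because grip data alone is not strongly identifying, a real system would gate only low-confidentiality features behind this check, as the disclosure describes.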

    DETECTING GESTURES UTILIZING MOTION SENSOR DATA AND MACHINE LEARNING

    A computing device is described that uses motion data from motion sensors to detect gestures or user inputs, such as out-of-screen user inputs for mobile devices. In other words, the computing device detects gestures or user touch inputs at locations of the device that do not include a touch screen, such as anywhere on the surface of the housing or the case of the device. The techniques described enable a computing device to utilize a standard, existing motion sensor (e.g., an inertial measurement unit (IMU), accelerometer, gyroscope, etc.) to detect the user input and determine attributes of the user input. Motion data generated by the motion sensor (also referred to as a movement sensor) is processed by an artificial neural network to infer attributes of the user input. That is, the computing device applies a machine-learned model to the motion data (also referred to as sensor data or motion sensor data) to classify or label the various attributes, characteristics, or qualities of the input. In this way, the computing device utilizes machine learning and motion data to classify attributes of the user input or gesture utilizing motion sensors without the need for additional hardware, such as touch-sensitive devices and sensors.
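    As a hypothetical sketch of the inference step: a tiny learned classifier over features extracted from a window of IMU samples. The features, weights, and labels below are invented for illustration; an actual model would be trained on recorded motion data:

    ```python
    import math

    def extract_features(window):
        """Simple per-window features: mean acceleration (drift) and peak absolute acceleration."""
        mean = sum(window) / len(window)
        peak = max(abs(x) for x in window)
        return [mean, peak]

    def classify_gesture(window, weights, bias):
        """One-layer logistic classifier: 'tap' if the motion looks impulsive."""
        feats = extract_features(window)
        z = sum(w * f for w, f in zip(weights, feats)) + bias
        p = 1.0 / (1.0 + math.exp(-z))
        return ("tap" if p > 0.5 else "no_gesture"), p

    # Hand-picked weights that favour a high peak with low mean drift (stands in for training).
    W, B = [-1.0, 4.0], -2.0
    tap_window  = [0.0, 0.1, 2.0, -1.5, 0.2]        # sharp spike: characteristic of a tap on the housing
    idle_window = [0.05, 0.04, 0.06, 0.05, 0.04]    # low-amplitude hum: no gesture
    tap_label, _ = classify_gesture(tap_window, W, B)
    idle_label, _ = classify_gesture(idle_window, W, B)
    ```

    The key design point from the disclosure survives even in this toy: classification runs entirely on existing motion-sensor output, with no extra touch hardware.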

    Aggregating Sensor Data from User Devices at a Fire Scene to Support Rescue Operations

    In structural firefighting and rescue operations, firefighters need to locate individuals to evacuate from the fire zone while protecting themselves from fire-related dangers. Currently, there is no mechanism to use data from user devices within a fire zone that include sensors that can provide fire-related information to support rescue operations. This disclosure describes techniques that enable the use of sensor data from user devices present at or near a fire scene for rescue operations. With appropriate user permissions, data from relevant sensors in user devices present at or near an active fire can be made available for rescue operations. The sensor data can help firefighters estimate the status of the fire and locate individuals present in the fire zone who may need to be rescued. Such data can include, e.g., temperature/pressure distribution, visibility, location and/or motion of victims, alarm sounds, etc.
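    A minimal sketch of the aggregation step, assuming each consenting device reports a (zone, sensor, value) reading — the zone names, sensor key, and danger threshold are illustrative assumptions:

    ```python
    from collections import defaultdict

    def aggregate_readings(readings):
        """Group temperature readings by zone and flag zones above a danger threshold."""
        DANGER_TEMP_C = 60.0  # illustrative cutoff, not from the disclosure
        by_zone = defaultdict(list)
        for zone, sensor, value in readings:
            if sensor == "temperature_c":
                by_zone[zone].append(value)
        return {
            zone: {"max_temp_c": max(vals), "danger": max(vals) >= DANGER_TEMP_C}
            for zone, vals in by_zone.items()
        }

    # Readings collected (with permission) from devices at the scene.
    readings = [
        ("floor2_east", "temperature_c", 72.5),
        ("floor2_east", "temperature_c", 65.0),
        ("lobby", "temperature_c", 24.0),
    ]
    summary = aggregate_readings(readings)
    ```

    A per-zone summary like this is what a rescue dashboard could render; richer inputs (motion, audio, visibility) would be folded in the same way.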

    TACTILE TEXTURES FOR BACK OF SCREEN GESTURE DETECTION USING MOTION SENSOR DATA AND MACHINE LEARNING

    A computing device is described that uses motion data from motion sensors to detect gestures or user inputs, such as out-of-screen user inputs for mobile devices. In other words, the computing device detects gestures or user touch inputs at locations of the device that do not include a touch screen, such as anywhere on the surface of the housing or the case of the device. A tactile texture is applied to a housing of the computing device or a case that is coupled to the housing. The tactile texture causes the computing device to move in response to a user input applied to the tactile texture, such as when a user’s finger slides over the tactile texture. A motion sensor (e.g., an inertial measurement unit (IMU), accelerometer, gyroscope, etc.) generates motion data in response to detecting the motion of the computing device. The motion data is processed by an artificial neural network to infer attributes of the user input. That is, the computing device applies a machine-learned model to the motion data (also referred to as sensor data or motion sensor data) to classify or label the various attributes, characteristics, or qualities of the input. In this way, the computing device utilizes machine learning and motion data to classify attributes of the user input or gesture utilizing motion sensors without the need for additional hardware, such as touch-sensitive devices and sensors.
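    One way to picture the signal side of this idea: a finger sliding over a ridged texture at roughly constant speed produces a near-periodic vibration whose dominant frequency the motion sensor can pick up. A hypothetical sketch using a discrete Fourier transform — the ridge frequency, sampling rate, and clean sinusoidal trace are all invented for illustration:

    ```python
    import math

    def dominant_frequency(signal, sample_rate_hz):
        """Return the nonzero frequency (Hz) whose DFT bin has the largest magnitude."""
        n = len(signal)
        best_k, best_mag = 1, 0.0
        for k in range(1, n // 2):
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(-signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            mag = math.hypot(re, im)
            if mag > best_mag:
                best_k, best_mag = k, mag
        return best_k * sample_rate_hz / n

    # Simulated IMU trace: finger crossing ridges at 25 Hz, sampled at 200 Hz for 1 second.
    rate = 200
    trace = [math.sin(2 * math.pi * 25 * t / rate) for t in range(rate)]
    freq = dominant_frequency(trace, rate)
    ```

    Different ridge spacings (or slide speeds) shift this peak, which is the kind of structure a learned model could exploit to distinguish slides over the texture from ordinary handling noise.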

    DETECTING ATTRIBUTES OF USER INPUTS UTILIZING MOTION SENSOR DATA AND MACHINE LEARNING

    A computing device is described that uses motion data from motion sensors to detect user inputs, such as out-of-screen user inputs for mobile devices. In other words, the computing device detects user touch inputs at locations of the device that do not include a touch screen, such as anywhere on the surface of the housing or case of the device. The techniques described enable a computing device to utilize a standard, existing motion sensor (e.g., an inertial measurement unit (IMU), accelerometer, gyroscope, etc.) to detect the user input and determine attributes of the user input. Motion data generated by the motion sensor (also referred to as a movement sensor) is processed by an artificial neural network to infer characteristics or attributes of the user input, including: a location on the housing where the input was detected; a surface of the housing where the input was detected (e.g., front, back, and edges, such as top, bottom, and sides); and a type of user input (e.g., finger, stylus, fingernail, finger pad, etc.). That is, the computing device applies a machine-learned model to the sensor data to classify or label the various attributes, characteristics, or qualities of the input. In this way, the computing device utilizes machine learning and motion data to classify attributes of the user input or gesture utilizing motion sensors without the need for additional hardware, such as touch-sensitive devices and sensors.
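    The multi-attribute output (location, surface, input type) is what distinguishes this disclosure. A trained network would typically use one output head per attribute; the rule-based stand-in below is purely hypothetical and only illustrates the shape of the result, not how the real model decides:

    ```python
    def infer_attributes(features):
        """Toy stand-in for a multi-head model: one 'head' per attribute.
        features: peak acceleration per axis plus contact duration in seconds."""
        # A tap on the back pushes the device toward the screen (negative z here, by assumption).
        surface = "back" if features["accel_z"] < 0 else "front"
        # A longer, softer contact suggests a finger pad; a short sharp one, a fingernail.
        input_type = "finger_pad" if features["duration_s"] > 0.05 else "fingernail"
        # Dominant y-axis motion suggests contact near the top edge.
        location = "top" if features["accel_y"] > abs(features["accel_x"]) else "side"
        return {"surface": surface, "type": input_type, "location": location}

    attrs = infer_attributes({"accel_x": 0.1, "accel_y": 0.8, "accel_z": -1.2, "duration_s": 0.08})
    ```

    In a real system each of these three decisions would be a learned classification over the full motion window rather than a hand-written threshold.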

    Adaptive and Sequential Methods for Clinical Trials

    This special issue describes state-of-the-art statistical research in adaptive and sequential methods and the application of such methods in clinical trials. It provides one review article and five research articles contributed by some of the leading experts in this field. The review article gives a comprehensive overview of the methodology in the current literature related to adaptive and sequential clinical trials, while each of the five research articles addresses a specific critical issue in contemporary clinical trials.