
    Smartphone Semi-unlock by Grip Authentication

    Viewing notifications on a smartphone or other device requires the user to either unlock the device or allow notification delivery on the lock screen. Delivery of notifications on the lock screen, while convenient, can potentially leak user information. However, biometric (face, fingerprint, etc.) or password/pattern based authentication can be cumbersome or unavailable in many situations. This disclosure describes the use of a user’s grip style to semi-unlock an electronic device such as a smartphone, tablet, or laptop without compromising privacy. Unlike biometric authentication (fingerprint, face, etc.), grip data alone does not contain sufficient personal characteristics to authenticate a user. Per techniques of this disclosure, to authenticate the user via grip, the user is instructed to perform a simple gesture, e.g., grasping the body of the device, and the sequential grip dynamics during the gesture are observed. Upon successful grip authentication, low confidentiality notifications can be displayed, or features such as a virtual assistant can be made available to the user.
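
    As a rough illustration of the idea (not the disclosure's actual implementation), the sketch below assumes a hypothetical grip sensor that reports a vector of capacitance readings per frame while the user performs the prompted gesture, compares the observed sequence against an enrolled template, and gates only low-confidentiality features:

```python
# Minimal sketch of grip-based semi-unlock. The per-frame grip sensor,
# channel count, and threshold are all assumptions for illustration.
import numpy as np

def grip_similarity(observed: np.ndarray, template: np.ndarray) -> float:
    """Mean cosine similarity between time-aligned grip frames.

    observed, template: arrays of shape (frames, sensors), assumed to be
    resampled to the same number of frames beforehand.
    """
    dot = np.sum(observed * template, axis=1)
    norms = np.linalg.norm(observed, axis=1) * np.linalg.norm(template, axis=1)
    return float(np.mean(dot / np.maximum(norms, 1e-9)))

def semi_unlock(observed, template, threshold=0.9):
    """Grant access to low-confidentiality features only; grip alone is
    never treated as full biometric authentication."""
    matched = grip_similarity(observed, template) >= threshold
    return {"show_low_confidentiality_notifications": matched,
            "enable_virtual_assistant": matched,
            "full_unlock": False}  # full unlock still requires biometrics/PIN

# Synthetic demo: enrolled template plus a noisy re-performance of the gesture.
rng = np.random.default_rng(0)
template = rng.random((50, 16))          # 50 frames, 16 grip sensor channels
observed = template + rng.normal(0, 0.05, template.shape)
print(semi_unlock(observed, template))
```

    Note that full_unlock stays false by design: grip similarity gates convenience features only, consistent with grip data being insufficient for full authentication.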

    Detecting Low Rapport During Natural Interactions in Small Groups from Non-Verbal Behaviour

    Rapport, the close and harmonious relationship in which interaction partners are "in sync" with each other, was shown to result in smoother social interactions, improved collaboration, and improved interpersonal outcomes. In this work, we are the first to investigate automatic prediction of low rapport during natural interactions within small groups. This task is challenging given that rapport only manifests in subtle non-verbal signals that are, in addition, subject to the influence of group dynamics as well as inter-personal idiosyncrasies. We record videos of unscripted discussions of three to four people using a multi-view camera system and microphones. We analyse a rich set of non-verbal signals for rapport detection, namely facial expressions, hand motion, gaze, speaker turns, and speech prosody. Using facial features, we can detect low rapport with an average precision of 0.7 (chance level at 0.25), while incorporating prior knowledge of participants' personalities can even achieve early prediction without a drop in performance. We further provide a detailed analysis of different feature sets and the amount of information contained in different temporal segments of the interactions.
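
    A minimal sketch of the evaluation setup described above, using synthetic stand-ins for the facial features (all feature names, dimensions, and data here are invented) and scikit-learn's average precision metric; with roughly 25% positive labels, a chance-level classifier scores an average precision near 0.25:

```python
# Hedged sketch: a classifier over per-interaction facial-feature vectors,
# scored with average precision. The paper's actual features are facial
# expressions, hand motion, gaze, speaker turns, and speech prosody.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 400
X = rng.normal(size=(n, 32))              # stand-in facial feature statistics
y = (rng.random(n) < 0.25).astype(int)    # ~25% "low rapport" -> chance AP ~0.25
X[y == 1] += 0.8                          # inject signal so the demo separates

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]
print("average precision:", average_precision_score(y_te, scores))
```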

    DETECTING GESTURES UTILIZING MOTION SENSOR DATA AND MACHINE LEARNING

    A computing device is described that uses motion data from motion sensors to detect gestures or user inputs, such as out-of-screen user inputs for mobile devices. In other words, the computing device detects gestures or user touch inputs at locations of the device that do not include a touch screen, such as anywhere on the surface of the housing or the case of the device. The techniques described enable a computing device to utilize a standard, existing motion sensor (e.g., an inertial measurement unit (IMU), accelerometer, gyroscope, etc.) to detect the user input and determine attributes of the user input. Motion data generated by the motion sensor (also referred to as a movement sensor) is processed by an artificial neural network to infer attributes of the user input. In other words, the computing device applies a machine-learned model to the motion data (also referred to as sensor data or motion sensor data) to classify or label the various attributes, characteristics, or qualities of the input. In this way, the computing device utilizes machine learning and motion data from existing motion sensors to classify attributes of the user input or gesture without the need for additional hardware, such as touch-sensitive devices and sensors.
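
    The following sketch shows one plausible shape for such a pipeline, assuming 6-axis IMU windows and a small 1-D convolutional classifier; the window length, channel count, and gesture classes are assumptions, not details from the disclosure:

```python
# A minimal sketch in PyTorch: a 1-D convolutional network that labels a
# window of 6-axis IMU data (accelerometer + gyroscope) with a gesture class.
import torch
import torch.nn as nn

class ImuGestureNet(nn.Module):
    def __init__(self, channels=6, classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # pool over time -> fixed-size embedding
        )
        self.head = nn.Linear(64, classes)

    def forward(self, x):              # x: (batch, channels, samples)
        return self.head(self.features(x).squeeze(-1))

# One inference pass over a synthetic 100-sample window (~1 s at 100 Hz).
model = ImuGestureNet()
window = torch.randn(1, 6, 100)        # stand-in for real IMU data
logits = model(window)
print("predicted gesture class:", int(logits.argmax(dim=1)))
```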

    Aggregating Sensor Data from User Devices at a Fire Scene to Support Rescue Operations

    In structural firefighting and rescue operations, firefighters need to locate individuals to evacuate from the fire zone while protecting themselves from fire-related dangers. Currently, there is no mechanism to use data from user devices within a fire zone that include sensors that can provide fire-related information to support rescue operations. This disclosure describes techniques that enable the use of sensor data from user devices present at or near a fire scene for rescue operations. With appropriate user permissions, data from relevant sensors in user devices present at or near an active fire can be made available for rescue operations. The sensor data can help firefighters estimate the status of the fire and locate individuals present in the fire zone who may need to be rescued. Such data can include, e.g., temperature/pressure distribution, visibility, location and/or motion of victims, alarm sounds, etc.
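
    Purely as an illustration of the aggregation step, the sketch below bins permissioned, hypothetical device readings into coarse location cells so responders could see average temperature per cell; the schema, field names, and cell size are invented:

```python
# Illustrative sketch only: aggregate permissioned device readings near a
# fire scene into a coarse grid of temperature hotspots and device locations.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class DeviceReading:
    device_id: str
    lat: float
    lon: float
    temperature_c: float
    user_permitted: bool   # only permissioned data is ever aggregated

def aggregate(readings, cell=0.0005):
    """Average temperature per roughly 50 m grid cell, permissioned only."""
    cells = defaultdict(list)
    for r in readings:
        if not r.user_permitted:
            continue
        key = (round(r.lat / cell), round(r.lon / cell))
        cells[key].append(r)
    return {k: {"avg_temp_c": sum(x.temperature_c for x in v) / len(v),
                "devices": [x.device_id for x in v]}
            for k, v in cells.items()}

demo = [
    DeviceReading("a", 37.42001, -122.08400, 71.0, True),
    DeviceReading("b", 37.42003, -122.08402, 68.5, True),
    DeviceReading("c", 37.42150, -122.08600, 24.0, False),  # no permission
]
for cell_key, info in aggregate(demo).items():
    print(cell_key, info)
```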

    TACTILE TEXTURES FOR BACK OF SCREEN GESTURE DETECTION USING MOTION SENSOR DATA AND MACHINE LEARNING

    A computing device is described that uses motion data from motion sensors to detect gestures or user inputs, such as out-of-screen user inputs for mobile devices. In other words, the computing device detects gestures or user touch inputs at locations of the device that do not include a touch screen, such as anywhere on the surface of the housing or the case of the device. A tactile texture is applied to a housing of the computing device or a case that is coupled to the housing. The tactile texture causes the computing device to move in response to a user input applied to the tactile texture, such as when a user’s finger slides over the tactile texture. A motion sensor (e.g., an inertial measurement unit (IMU), accelerometer, gyroscope, etc.) generates motion data in response to detecting the motion of the computing device. The motion data is processed by an artificial neural network to infer attributes of the user input. In other words, the computing device applies a machine-learned model to the motion data (also referred to as sensor data or motion sensor data) to classify or label the various attributes, characteristics, or qualities of the input. In this way, the computing device utilizes machine learning and motion data from existing motion sensors to classify attributes of the user input or gesture without the need for additional hardware, such as touch-sensitive devices and sensors.
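
    One plausible signal path, sketched under the assumption of a regularly ridged texture: a finger sliding at speed v over ridges spaced d apart vibrates the device at roughly v/d Hz, so a peak in the accelerometer spectrum can flag a candidate swipe before a learned model classifies it further. The sample rate, ridge spacing, and thresholds below are assumptions:

```python
# Sketch of a spectral pre-check for a texture swipe, not the disclosure's
# implementation. A 40 mm/s swipe over 0.5 mm ridges -> ~80 Hz vibration.
import numpy as np

fs = 400.0                      # assumed IMU sample rate (Hz)
t = np.arange(0, 0.5, 1 / fs)   # 0.5 s accelerometer window (200 samples)

# Synthetic swipe vibration plus sensor noise.
rng = np.random.default_rng(2)
signal = 0.2 * np.sin(2 * np.pi * 80 * t) + rng.normal(0, 0.05, t.size)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
print(f"dominant vibration: {peak:.0f} Hz ->",
      "texture swipe candidate" if 40 < peak < 160 else "no swipe")
```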

    DETECTING ATTRIBUTES OF USER INPUTS UTILIZING MOTION SENSOR DATA AND MACHINE LEARNING

    A computing device is described that uses motion data from motion sensors to detect user inputs, such as out-of-screen user inputs for mobile devices. In other words, the computing device detects user touch inputs at locations of the device that do not include a touch screen, such as anywhere on the surface of the housing or case of the device. The techniques described enable a computing device to utilize a standard, existing motion sensor (e.g., an inertial measurement unit (IMU), accelerometer, gyroscope, etc.) to detect the user input and determine attributes of the user input. Motion data generated by the motion sensor (also referred to as a movement sensor) is processed by an artificial neural network to infer characteristics or attributes of the user input, including: a location on the housing where the input was detected; a surface of the housing where the input was detected (e.g., front, back, and edges, such as top, bottom, and sides); and a type of user input (e.g., finger, stylus, fingernail, finger pad, etc.). In other words, the computing device applies a machine-learned model to the sensor data to classify or label the various attributes, characteristics, or qualities of the input. In this way, the computing device utilizes machine learning and motion data from existing motion sensors to classify attributes of the user input or gesture without the need for additional hardware, such as touch-sensitive devices and sensors.
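
    A hedged sketch of the multi-attribute idea: a shared encoder over motion data with separate classification heads for input location, housing surface, and input type. The encoder size, head sizes, and label sets are illustrative assumptions:

```python
# Multi-head attribute classifier over an IMU window (PyTorch sketch).
import torch
import torch.nn as nn

class InputAttributeNet(nn.Module):
    def __init__(self, channels=6, samples=100):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Flatten(), nn.Linear(channels * samples, 128), nn.ReLU())
        self.location = nn.Linear(128, 9)    # e.g., 3x3 grid on the back
        self.surface = nn.Linear(128, 6)     # front/back/top/bottom/left/right
        self.input_type = nn.Linear(128, 4)  # finger pad/fingernail/stylus/other

    def forward(self, x):                    # x: (batch, channels, samples)
        h = self.encoder(x)
        return self.location(h), self.surface(h), self.input_type(h)

model = InputAttributeNet()
loc, surf, kind = model(torch.randn(1, 6, 100))
print(loc.shape, surf.shape, kind.shape)     # one logit vector per attribute
```

    Training such a model would typically sum a cross-entropy loss per head, so a single window is labeled with all three attributes at once.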

    Combined inhibition of BCL-2 and MCL-1 overcomes BAX deficiency-mediated resistance of TP53-mutant acute myeloid leukemia to individual BH3 mimetics

    TP53-mutant acute myeloid leukemia (AML) responds poorly to currently available treatments, including venetoclax-based drug combinations, and poses a major therapeutic challenge. Analyses of RNA sequencing and reverse phase protein array datasets revealed significantly lower BAX RNA and protein levels in TP53-mutant compared to TP53-wild-type (WT) AML, a finding confirmed in isogenic CRISPR-generated TP53-knockout and -mutant AML. The response to either BCL-2 (venetoclax) or MCL-1 (AMG176) inhibition was BAX-dependent and much reduced in TP53-mutant compared to TP53-WT cells, while the combination of the two BH3 mimetics effectively activated BAX, circumventing survival mechanisms in cells treated with either BH3 mimetic alone, and synergistically induced cell death in TP53-mutant AML and stem/progenitor cells. The BH3 mimetic-driven stress response and cell death patterns after dual inhibition were largely independent of TP53 status and affected by apoptosis induction. Co-targeting, but not individual targeting, of BCL-2 and MCL-1 in mice xenografted with TP53-WT and TP53-R248W Molm13 cells suppressed both TP53-WT and TP53-mutant cell growth and significantly prolonged survival. Our results demonstrate that co-targeting BCL-2 and MCL-1 overcomes BAX deficiency-mediated resistance to individual BH3 mimetics in TP53-mutant cells, thus shifting cell fate from survival to death in TP53-deficient and -mutant AML. This concept warrants clinical evaluation.

    Initial Report of a Phase I Study of LY2510924, Idarubicin, and Cytarabine in Relapsed/Refractory Acute Myeloid Leukemia

    Background: The CXCR4/SDF-1α axis plays a vital role in the retention of stem cells within the bone marrow and downstream activation of cell survival signaling pathways. LY2510924, a second-generation CXCR4 antagonist, showed significant anti-leukemia activity in a murine AML model. Methods: We conducted a phase I study to determine the safety and toxicity of LY2510924, idarubicin, and cytarabine (IA) combination therapy in relapsed/refractory (R/R) AML. Eligible patients were 18–70 years of age and receiving up to third salvage therapy. A peripheral blood absolute blast count of < 20,000/μL was required for inclusion. LY2510924 was administered daily for 7 days, followed by IA from day 8. Two dose-escalation levels (10 and 20 mg) were evaluated, with a plan to enroll up to 12 patients in the phase I portion. Results: The median age of the enrolled patients (n = 11) was 55 years (range, 19–70). The median number of prior therapies was 1 (range, 1–3). Six and five patients were treated at dose-levels “0” (10 mg) and “1” (20 mg), respectively. Only one patient experienced a dose-limiting toxicity (grade 3 rash and myelosuppression). Three complete responses were observed at dose-level “0” and one at dose-level “1”; the overall response rate (ORR) was 36% (4 of 11 patients). A ≥ 50% decrease in CXCR4 mean fluorescence intensity was observed in 4 of 9 patients by flow cytometry, indicating incomplete suppression of CXCR4-receptor occupancy. Conclusions: The combination of LY2510924 with IA is safe in R/R AML. Dose-escalation to a 30 mg LY2510924 dose is planned to achieve complete blockade of CXCR4 receptor occupancy, followed by an expansion phase at the recommended phase 2 dose level.

    Assessing the clinical utility of cancer genomic and proteomic data across tumor types

    Molecular profiling of tumors promises to advance the clinical management of cancer, but the benefits of integrating molecular data with traditional clinical variables have not been systematically studied. Here we retrospectively predict patient survival using diverse molecular data (somatic copy-number alteration, DNA methylation, and mRNA, miRNA, and protein expression) from 953 samples of four cancer types from The Cancer Genome Atlas project. We found that incorporating molecular data with clinical variables yielded statistically significantly improved predictions (FDR < 0.05) for three cancer types, but the quantitative gains were limited (2.2–23.9%). Additional analyses revealed little predictive power across tumor types except for one case. In clinically relevant genes, we identified 10,281 somatic alterations across 12 cancer types in 2,928 of 3,277 patients (89.4%), many of which would not be revealed in single-tumor analyses. Our study provides a starting point and resources, including an open-access model evaluation platform, for building reliable prognostic and therapeutic strategies that incorporate molecular data.
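
    As a schematic of the clinical-versus-combined comparison (not the paper's TCGA analysis code), the sketch below fits a Cox proportional hazards model on synthetic clinical covariates alone and then with one synthetic "molecular" feature, comparing concordance; all column names and data are invented:

```python
# Compare survival prediction with and without a molecular feature using
# lifelines' Cox model and its in-sample concordance index.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 300
age = rng.normal(60, 10, n)
stage = rng.integers(1, 5, n)
molecular = rng.normal(0, 1, n)            # stand-in for, e.g., an mRNA signature
risk = 0.02 * age + 0.3 * stage + 0.5 * molecular
T = rng.exponential(np.exp(-risk) * 50)    # survival times shaped by the risk
E = (rng.random(n) < 0.7).astype(int)      # ~70% observed events

df = pd.DataFrame({"T": T, "E": E, "age": age, "stage": stage,
                   "molecular": molecular})

clinical = CoxPHFitter().fit(df[["T", "E", "age", "stage"]],
                             duration_col="T", event_col="E")
combined = CoxPHFitter().fit(df, duration_col="T", event_col="E")
print("clinical only c-index:", round(clinical.concordance_index_, 3))
print("clinical + molecular :", round(combined.concordance_index_, 3))
```

    A rigorous version of this comparison would use held-out data or cross-validation, as in the paper's evaluation platform, rather than in-sample concordance.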