PresenceSense: Zero-training Algorithm for Individual Presence Detection based on Power Monitoring
Non-intrusive presence detection of individuals in commercial buildings is
much easier to implement than intrusive methods such as passive infrared
sensors, acoustic sensors, and cameras. Individual power consumption, while providing
useful feedback and motivation for energy saving, can be used as a valuable
source for presence detection. We conduct pilot experiments in an office
setting to collect individual presence data by ultrasonic sensors, acceleration
sensors, and WiFi access points, in addition to the individual power monitoring
data. PresenceSense (PS), a semi-supervised learning algorithm based on power
measurement that trains itself with only unlabeled data, is proposed, analyzed
and evaluated in the study. Without any labeling efforts, which are usually
tedious and time consuming, PresenceSense outperforms popular models whose
parameters are optimized over a large training set. The results are interpreted
and potential applications of PresenceSense on other data sources are
discussed. This study has significance for space security, occupancy
behavior modeling, and energy saving of plug loads.
Comment: BuildSys 201
Environmental Sensing by Wearable Device for Indoor Activity and Location Estimation
We present results from a set of experiments in this pilot study to
investigate the causal influence of user activity on various environmental
parameters monitored by occupant-carried multi-purpose sensors. Hypotheses with
respect to each type of measurement are verified, including temperature,
humidity, and light level collected during eight typical activities: sitting in
lab / cubicle, indoor walking / running, resting after physical activity,
climbing stairs, taking elevators, and outdoor walking. Our main contribution
is the development of features for activity and location recognition based on
environmental measurements, which exploit location- and activity-specific
characteristics and capture the trends resulting from the underlying
physiological process. The features are statistically shown to have good
separability and are also information-rich. Fusing environmental sensing
together with acceleration is shown to achieve classification accuracy as high
as 99.13%. For building applications, this study motivates a sensor fusion
paradigm for learning individualized activity, location, and environmental
preferences for energy management and user comfort.Comment: submitted to the 40th Annual Conference of the IEEE Industrial
Electronics Society (IECON
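A rough sketch of the kind of window-level features the abstract describes (the feature names and example values are hypothetical, not the paper's exact feature set): slow environmental trends such as a temperature slope, absolute light level, and motion intensity from acceleration, which together separate activities like sitting indoors from walking outdoors.

```python
import numpy as np

def window_features(temp_c, light_lux, accel_mag, fs=1.0):
    """Illustrative per-window features: environmental trend + motion."""
    t = np.arange(len(temp_c)) / fs
    temp_slope = np.polyfit(t, temp_c, 1)[0]   # deg C/s: heating or cooling trend
    light_mean = float(np.mean(light_lux))     # dim indoor vs bright outdoor light
    accel_std = float(np.std(accel_mag))       # motion intensity
    return np.array([temp_slope, light_mean, accel_std])

# Sitting in a lab: stable temperature, dim light, little motion.
sitting = window_features(temp_c=[24.0] * 30,
                          light_lux=[300] * 30,
                          accel_mag=[1.0] * 30)
# Outdoor walking: warming sensor, bright light, strong motion.
walking = window_features(temp_c=np.linspace(24, 27, 30),
                          light_lux=[10_000] * 30,
                          accel_mag=1.0 + 0.5 * np.sin(np.arange(30)))
```

Feeding such feature vectors, fused with raw acceleration features, to a standard classifier is the sensor-fusion paradigm the abstract motivates.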
AdvCheck: Characterizing Adversarial Examples via Local Gradient Checking
Deep neural networks (DNNs) are vulnerable to adversarial examples, which may
lead to catastrophe in security-critical domains. Numerous detection methods
are proposed to characterize the feature uniqueness of adversarial examples, or
to distinguish DNN's behavior activated by the adversarial examples. Detections
based on features cannot handle adversarial examples with large perturbations.
Besides, they require a large number of specific adversarial examples. Another
mainstream, model-based detections, which characterize input properties by
model behaviors, suffer from heavy computation cost. To address the issues, we
introduce the concept of local gradient, and reveal that adversarial examples
have a considerably larger local-gradient bound than benign ones. Motivated
by this observation, we leverage the local gradient to detect adversarial examples,
and propose a general framework, AdvCheck. Specifically, by training a
detector on local gradients calculated from a few benign examples and
noise-added misclassified examples, adversarial examples and even
misclassified natural inputs can be precisely distinguished from benign ones. Through
extensive experiments, we have validated AdvCheck's superior performance over
state-of-the-art (SOTA) baselines, with higher average detection rates on
general adversarial attacks and on misclassified natural inputs, at roughly
1/500 of the baselines' time cost. We also provide interpretable
results for successful detection.
Comment: 26 pages
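The core observation can be demonstrated on a toy model (a sketch, not AdvCheck's exact definition of local gradient: here a finite-difference slope averaged over random directions stands in for it): a classifier's output changes much more sharply near its decision boundary, where adversarial examples concentrate, than deep inside a class region.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_gradient_norm(f, x, eps=1e-3, n_dirs=16):
    """Average finite-difference slope of f around x over random unit
    directions: a rough stand-in for the paper's 'local gradient' bound."""
    slopes = []
    for _ in range(n_dirs):
        d = rng.normal(size=x.shape)
        d /= np.linalg.norm(d)
        slopes.append(abs(f(x + eps * d) - f(x)) / eps)
    return float(np.mean(slopes))

def model(x):
    # Toy 'DNN': a steep sigmoid whose decision boundary is x1 + x2 = 0.
    return 1.0 / (1.0 + np.exp(-20.0 * x.sum()))

benign = np.array([0.5, 0.5])            # deep inside a class: flat region
near_boundary = np.array([0.05, -0.05])  # where adversarial examples live
g_benign = local_gradient_norm(model, benign)
g_adv = local_gradient_norm(model, near_boundary)
```

Thresholding such a statistic is what lets a detector trained only on benign and noise-perturbed examples flag adversarial inputs without ever seeing a specific attack.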
Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models
Current literature, aiming to surpass the "Chain-of-Thought" approach, often
resorts to an external modus operandi involving halting, modifying, and then
resuming the generation process to boost Large Language Models' (LLMs)
reasoning capacities. This mode escalates the number of query requests, leading
to increased costs, memory, and computational overheads. Addressing this, we
propose the Algorithm of Thoughts -- a novel strategy that propels LLMs through
algorithmic reasoning pathways, pioneering a new mode of in-context learning.
By employing algorithmic examples, we exploit the innate recurrence dynamics of
LLMs, expanding their idea exploration with merely one or a few queries. Our
technique outperforms earlier single-query methods and stands on par with a
recent multi-query strategy that employs an extensive tree search algorithm.
Intriguingly, our results suggest that instructing an LLM using an algorithm
can lead to performance surpassing that of the algorithm itself, hinting at
LLM's inherent ability to weave its intuition into optimized searches. We probe
into the underpinnings of our method's efficacy and its nuances in application.
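The single-query idea can be sketched as a prompt builder (the prompt format and task here are hypothetical illustrations, not the paper's actual prompts): one in-context example shows a worked depth-first-search trace with backtracking, and the model is asked to continue the same algorithmic pattern on a new instance in a single generation.

```python
def build_aot_prompt(numbers, target):
    """Build a single-query, Algorithm-of-Thoughts-style prompt: one worked
    DFS trace as the in-context example, then the new task to continue."""
    example = (
        "Task: pick numbers from [1, 4, 6] summing to 7.\n"
        "Search (depth-first, backtrack on overshoot):\n"
        "  take 1 (sum 1) -> take 4 (sum 5) -> take 6 (sum 11, overshoot, backtrack)\n"
        "  take 1 (sum 1) -> take 6 (sum 7) -> solution: [1, 6]\n"
    )
    query = (
        f"Task: pick numbers from {numbers} summing to {target}.\n"
        "Search (depth-first, backtrack on overshoot):\n"
    )
    return example + "\n" + query

prompt = build_aot_prompt([2, 3, 5, 8], 10)
```

Because the whole exploration happens inside one generation, there is no external halt/modify/resume loop and hence no multiplied query cost.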
Excitement Surfeited Turns to Errors: Deep Learning Testing Framework Based on Excitable Neurons
Despite impressive capabilities and outstanding performance, deep neural
networks (DNNs) have raised increasing public concern about their security
problems, due to their frequently occurring erroneous behaviors. It is
therefore necessary to conduct systematic testing of DNNs before they are
deployed to real-world applications. Existing testing methods have provided
fine-grained metrics based on neuron coverage and proposed various approaches
to improve such metrics. However, it has been gradually realized that a higher
neuron coverage does \textit{not} necessarily represent better capabilities in
identifying defects that lead to errors. Besides, coverage-guided methods
cannot catch errors caused by a faulty training procedure, so the robustness
improvement of DNNs achieved by retraining on these testing examples is
unsatisfactory. To address this challenge, we introduce the concept of
excitable neurons based on Shapley value and design a novel white-box testing
framework for DNNs, namely DeepSensor. It is motivated by our observation that
neurons that bear larger responsibility for model-loss changes under small
perturbations are more likely to be related to incorrect corner cases caused
by potential defects. By maximizing the number of excitable neurons concerning
various wrong behaviors of models, DeepSensor can generate testing examples
that effectively trigger more errors due to adversarial inputs, polluted data
and incomplete training. Extensive experiments implemented on both image
classification models and speaker recognition models have demonstrated the
superiority of DeepSensor.
Comment: 32 pages
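The Shapley-value notion of neuron responsibility can be made concrete with a tiny game (a sketch under toy assumptions, not DeepSensor's actual training-time computation): treat each neuron as a player, the "value" of a coalition as the loss change it induces, and score each neuron by its average marginal contribution across all orderings.

```python
import itertools
import math
import numpy as np

def shapley_values(value_fn, n_players):
    """Exact Shapley values for a small game by enumerating permutations:
    each player's average marginal contribution to the coalition value."""
    phi = np.zeros(n_players)
    for perm in itertools.permutations(range(n_players)):
        coalition = set()
        for p in perm:
            before = value_fn(coalition)
            coalition.add(p)
            phi[p] += value_fn(coalition) - before
    return phi / math.factorial(n_players)

# Toy 'loss-change' game over 3 neurons: neuron 0 bears most responsibility.
loss_change = lambda S: 0.9 * (0 in S) + 0.05 * (1 in S) + 0.05 * (2 in S)
phi = shapley_values(loss_change, 3)
most_excitable = int(np.argmax(phi))
```

For a real DNN the exact enumeration is intractable, so sampled or approximate Shapley estimates would be used; the ranking of "excitable" neurons is what guides test-case generation.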
Hiformer: Heterogeneous Feature Interactions Learning with Transformers for Recommender Systems
Learning feature interaction is the critical backbone to building recommender
systems. In web-scale applications, learning feature interaction is extremely
challenging due to the sparse and large input feature space; meanwhile,
manually crafting effective feature interactions is infeasible because of the
exponential solution space. We propose to leverage a Transformer-based
architecture with attention layers to automatically capture feature
interactions. Transformer architectures have witnessed great success in many
domains, such as natural language processing and computer vision. However,
there has not been much adoption of Transformer architecture for feature
interaction modeling in industry. We aim to close this gap. We identify two
key challenges for applying the vanilla Transformer architecture to web-scale
recommender systems: (1) Transformer architecture fails to capture the
heterogeneous feature interactions in the self-attention layer; (2) The serving
latency of Transformer architecture might be too high to be deployed in
web-scale recommender systems. We first propose a heterogeneous self-attention
layer, which is a simple yet effective modification to the self-attention layer
in Transformer, to take into account the heterogeneity of feature interactions.
We then introduce \textsc{Hiformer} (\textbf{H}eterogeneous
\textbf{I}nteraction Trans\textbf{former}) to further improve the model
expressiveness. With low-rank approximation and model pruning,
\textsc{Hiformer} enjoys fast inference for online deployment. Extensive
offline experiment results corroborate the effectiveness and efficiency of
the \textsc{Hiformer} model.
We have successfully deployed the \textsc{Hiformer} model to a real world large
scale App ranking model at Google Play, with significant improvement in key
engagement metrics (up to +2.66\%).
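One reading of the heterogeneous self-attention idea can be sketched as follows (an illustration of the concept, not Google's implementation; all shapes and names are assumptions): each feature embedding picks its query/key/value projections by feature type, so interactions between, say, user-side and item-side features are no longer forced through one shared projection as in the vanilla Transformer.

```python
import numpy as np

rng = np.random.default_rng(0)

def hetero_self_attention(x, types, Wq, Wk, Wv):
    """Self-attention where the q/k/v projection of feature i is selected
    by its feature type types[i], allowing type-aware interactions."""
    q = np.stack([x[i] @ Wq[t] for i, t in enumerate(types)])
    k = np.stack([x[i] @ Wk[t] for i, t in enumerate(types)])
    v = np.stack([x[i] @ Wv[t] for i, t in enumerate(types)])
    scores = (q @ k.T) / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over features
    return weights @ v

d = 8
n_types = 2                                   # e.g. user vs item features
Wq = rng.normal(size=(n_types, d, d))
Wk = rng.normal(size=(n_types, d, d))
Wv = rng.normal(size=(n_types, d, d))
x = rng.normal(size=(4, d))                   # 4 feature embeddings
out = hetero_self_attention(x, [0, 0, 1, 1], Wq, Wk, Wv)
```

The per-type projections add parameters but no query-time depth, which is compatible with the abstract's low-rank and pruning tricks for serving latency.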
A phase 1 study of dimdazenil to evaluate the pharmacokinetics, food effect and safety in Chinese healthy subjects
Background and objective: As a partial positive allosteric modulator of the gamma-aminobutyric acid A (GABAA) receptor, dimdazenil is used for the treatment of insomnia, with the potential to alleviate the side effects associated with full agonists. The objective of this trial was to assess the safety, tolerability, food effect and pharmacokinetics following single and multiple doses of dimdazenil in healthy Chinese subjects.
Methods: In this phase 1 trial, 36 healthy subjects aged ≥18 years were assigned to receive a single dose of 1.5, 2.5, or 5 mg dimdazenil, with each dose cohort consisting of 12 subjects, and 14 subjects were assigned to receive a multiple 2.5 mg daily dose of dimdazenil for 5 days. Safety, tolerability, and pharmacokinetic characteristics were evaluated.
Results: Of the 50 subjects enrolled, 49 completed the trial. The incidences of treatment-emergent adverse events (AEs) in the single-dose groups of 1.5, 2.5, and 5 mg were 16.7%, 58.3% and 66.7% respectively, and 61.5% in the multiple-dose group. There were no serious AEs, deaths, AEs leading to discontinuation, or AEs requiring clinical intervention in any treatment group. The most common treatment-emergent AEs were dizziness (n = 4, 8.2%), hyperuricemia (n = 2, 6.1%), upper respiratory tract infection (n = 2, 6.1%), decreased diastolic blood pressure (n = 2, 6.1%), increased blood TG (n = 2, 6.1%) and RBC-positive urine (n = 2, 6.1%). All AEs were mild-to-moderate and transient, and no severe AEs were documented in any study phase. The PK profile of dimdazenil and its active metabolite Ro 46-1927 was linear across 1.5–5 mg oral doses in humans. The median Tmax for dimdazenil was in the range of 0.5–1.5 h, and the apparent terminal t1/2z ranged from 3.50 to 4.32 h. Taking dimdazenil with food may delay Tmax and decrease Cmax, without affecting the total exposure (AUC). No relevant accumulation of dimdazenil or Ro 46-1927 was observed in the multiple-dose group.
Conclusion: Dimdazenil was generally well tolerated in healthy Chinese subjects after single and 5-day multiple dosing. The pharmacokinetic properties of dimdazenil are compatible with a drug for the treatment of insomnia.
Clinical Trial Registration: chinadrugtrials.org.cn, identifier CTR2020197
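The lack of accumulation is consistent with the reported half-life and the once-daily regimen, as a back-of-the-envelope check shows (assuming simple first-order elimination; the standard steady-state accumulation ratio is R = 1 / (1 - 2^(-tau / t_half))):

```python
def accumulation_ratio(t_half_h, tau_h):
    """Steady-state accumulation ratio for repeated dosing under
    first-order elimination: R = 1 / (1 - 2**(-tau / t_half))."""
    return 1.0 / (1.0 - 2.0 ** (-tau_h / t_half_h))

# Reported terminal half-life 3.50-4.32 h, once-daily dosing (tau = 24 h).
r_low = accumulation_ratio(3.50, 24)
r_high = accumulation_ratio(4.32, 24)
# Both ratios are close to 1.0: essentially full washout between doses.
```

With a half-life of roughly 4 h and a 24 h dosing interval, over five half-lives elapse between doses, so steady-state exposure barely exceeds single-dose exposure.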