Robust and Scalable Hyperdimensional Computing With Brain-Like Neural Adaptations
The Internet of Things (IoT) has facilitated many applications utilizing
edge-based machine learning (ML) methods to analyze locally collected data.
Unfortunately, popular ML algorithms often require intensive computations
beyond the capabilities of today's IoT devices. Brain-inspired hyperdimensional
computing (HDC) has been introduced to address this issue. However, existing
HDCs use static encoders, requiring extremely high dimensionality and hundreds
of training iterations to achieve reasonable accuracy. This results in a huge
efficiency loss, severely impeding the application of HDCs in IoT systems. We
observed that a main cause is that the encoding module of existing HDCs lacks
the capability to utilize and adapt to information learned during training. In
contrast, neurons in the human brain regenerate continually and take on more useful
functions as new information is learned. While the goal of HDC is to exploit the
high dimensionality of randomly generated base hypervectors to represent
information as a pattern of neural activity, it remains challenging for existing
HDCs to support behavior analogous to neural regeneration in the brain. In this
work, we present dynamic HDC learning frameworks that identify and regenerate
undesired dimensions, providing adequate accuracy at significantly lower
dimensionalities and thereby accelerating both training and inference.
Comment: arXiv admin note: substantial text overlap with arXiv:2304.0550
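The core idea of regenerating undesired dimensions can be sketched as follows. This is an illustrative assumption of how such a step might look (the scoring rule, function names, and shapes are hypothetical, not the paper's exact algorithm): score each dimension by how well it separates the class hypervectors, then re-randomize the weakest ones so they can be relearned.

```python
import numpy as np

rng = np.random.default_rng(0)

def regenerate_dimensions(class_hvs, base_hvs, drop_frac=0.1):
    """Sketch of dynamic dimension regeneration (illustrative, not the
    paper's exact procedure). class_hvs: (n_classes, D) trained class
    hypervectors; base_hvs: (n_features, D) random base hypervectors."""
    variance = class_hvs.var(axis=0)          # per-dimension spread across classes
    n_drop = int(drop_frac * class_hvs.shape[1])
    weak = np.argsort(variance)[:n_drop]      # least discriminative dimensions
    # regenerate those dimensions of every base hypervector with fresh noise
    base_hvs[:, weak] = rng.standard_normal((base_hvs.shape[0], n_drop))
    # reset the corresponding class-model entries so they are relearned
    class_hvs[:, weak] = 0.0
    return weak

class_hvs = rng.standard_normal((4, 512))
base_hvs = rng.standard_normal((16, 512))
weak = regenerate_dimensions(class_hvs, base_hvs, drop_frac=0.1)
```

Repeating this scoring/regeneration step during training lets a much smaller dimensionality D carry the same discriminative information as a large static encoding.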
Effect of overweight status at onset on C-peptide levels during first 2 years since diagnosis in children with type 1 diabetes
Background: The growing obesity epidemic is mirrored in the rising incidence of T1D in children; however, the role of overweight status in the progression of T1D remains unknown.
Objective: To assess the relationship between overweight status at onset and insulin reserve during the first two years after diagnosis in children with T1D.
Methods: One hundred sixty-eight children newly diagnosed with T1D, aged 1.5 to 18.9 years, with 2 years of follow-up, ≥4 autoantibodies measured at baseline, and an onset C-peptide plus 3 or more follow-up measurements, were included from the Children’s Hospital of Pittsburgh Registry (2004-2006). Baseline demographic and clinical characteristics were compared between overweight and non-overweight subjects. The change and rate of change of C-peptide were evaluated. The contribution of overweight status to C-peptide levels and to the change in C-peptide from onset over time was estimated using linear mixed models adjusting for other covariates.
Results: Among the 168 subjects with a mean age of 9.7 years and a mean onset C-peptide of 0.76 ng/mL, 22% (36) were overweight at onset (BMI ≥ 85th percentile). The onset C-peptide level of overweight subjects was higher than that of non-overweight subjects (median: 0.88 ng/mL vs. 0.50 ng/mL, P<0.0001). The highest C-peptide levels (median: 1.86 ng/mL vs. 1.47 ng/mL, P=0.30) were observed at 3 months, followed by a continuous decline reaching the lowest level at 24 months (median: 0.29 ng/mL vs. 0.18 ng/mL, P=0.13). Linear mixed models suggest that the overall mean rate of change for overweight subjects differed by 0.7865 ng/mL/month (95% C.I.: 0.2277–1.3452, P=0.0062) relative to non-overweight subjects, adjusting for other baseline covariates. The difference in mean C-peptide levels between the two groups decreased over time, reaching similar levels by the end of the second year.
Conclusion: Compared to the non-overweight T1D children, overweight children had higher C-peptide levels at 3, 6, 12, and 18 months after diagnosis; however, at 24 months, this difference was not statistically significant.
Public health significance: Children with T1D who are overweight may benefit from targeted interventions to help them maintain or extend the duration of elevated C-peptide levels after treatment.
The Power of Menus in Contract Design
We study the power of menus of contracts in principal-agent problems with
adverse selection (agents can be one of several types) and moral hazard (we
cannot observe agent actions directly). For principal-agent problems with
types and actions, we show that the best menu of contracts can obtain a
factor more utility for the principal than the best
individual contract, partially resolving an open question of Guruganesh et al.
(2021). We then turn our attention to randomized menus of linear contracts,
where we likewise show that randomized linear menus can be better
than the best single linear contract. As a corollary, we show this implies an
analogous gap between deterministic menus of (general) contracts and randomized
menus of contracts (as introduced by Castiglioni et al. (2022)).
Comment: EC 202
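For readers outside contract theory, the linear contracts discussed above have a standard one-parameter form (the notation below is a conventional textbook formulation, not taken from this paper): the principal commits to paying the agent a fixed fraction $\alpha$ of the realized reward.

```latex
% Linear contract with parameter \alpha \in [0,1]:
% the agent receives a fraction \alpha of the realized reward r.
t(r) = \alpha r, \qquad
u_{\text{agent}}(a) = \alpha\,\mathbb{E}[r \mid a] - c(a), \qquad
u_{\text{principal}}(a) = (1-\alpha)\,\mathbb{E}[r \mid a],
```

where the agent privately chooses an action $a$ with cost $c(a)$. A menu offers several such contracts (or a distribution over them, in the randomized case) and lets each agent type self-select.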
DOMINO: Domain-invariant Hyperdimensional Classification for Multi-Sensor Time Series Data
With the rapid evolution of the Internet of Things, many real-world
applications utilize heterogeneously connected sensors to capture time-series
information. Edge-based machine learning (ML) methodologies are often employed
to analyze locally collected data. However, a fundamental issue across
data-driven ML approaches is distribution shift. It occurs when a model is
deployed on a data distribution different from what it was trained on, and can
substantially degrade model performance. Additionally, increasingly
sophisticated deep neural networks (DNNs) have been proposed to capture spatial
and temporal dependencies in multi-sensor time series data, requiring intensive
computational resources beyond the capacity of today's edge devices. While
brain-inspired hyperdimensional computing (HDC) has been introduced as a
lightweight solution for edge-based learning, existing HDCs are also vulnerable
to the distribution shift challenge. In this paper, we propose DOMINO, a novel
HDC learning framework addressing the distribution shift problem in noisy
multi-sensor time-series data. DOMINO leverages efficient and parallel matrix
operations on high-dimensional space to dynamically identify and filter out
domain-variant dimensions. Our evaluation on a wide range of multi-sensor time
series classification tasks shows that DOMINO achieves on average 2.04% higher
accuracy than state-of-the-art (SOTA) DNN-based domain generalization
techniques, and delivers 16.34x faster training and 2.89x faster inference.
More importantly, DOMINO performs notably better when learning from partially
labeled and highly imbalanced data, providing 10.93x higher robustness against
hardware noise than SOTA DNNs.
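Filtering out domain-variant dimensions can be sketched with plain matrix operations. The scoring rule below is an illustrative assumption (not DOMINO's exact procedure): dimensions whose values vary most across training domains carry domain-specific, non-generalizing information, so they are masked out.

```python
import numpy as np

def domain_variant_mask(domain_hvs, keep_frac=0.8):
    """Sketch of domain-variant dimension filtering (illustrative).
    domain_hvs: (n_domains, D), one aggregated class hypervector per
    training domain. Returns a boolean mask of dimensions to keep."""
    cross_domain_var = domain_hvs.var(axis=0)     # instability across domains
    D = domain_hvs.shape[1]
    keep = int(keep_frac * D)
    invariant = np.argsort(cross_domain_var)[:keep]  # most stable dimensions
    mask = np.zeros(D, dtype=bool)
    mask[invariant] = True
    return mask

rng = np.random.default_rng(1)
domain_hvs = rng.standard_normal((3, 256))       # 3 toy domains, D = 256
mask = domain_variant_mask(domain_hvs)
```

At inference time, encodings would be multiplied by `mask` so that classification relies only on domain-invariant dimensions; because this is a single vectorized variance-and-sort pass, it parallelizes trivially.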
Exploring the relationship between home environmental characteristics and restorative effect through neural activities
As society and the economy have advanced, the focus of architectural and interior environment design has shifted from practicality to eliciting emotional responses, for example through stimulating environments and innovative inclusive designs. The home environment is of particular interest, as it is best suited to achieving restorative effects, prompting debate over how interior qualities relate to restorative impact. This study explored the relationships between home characteristics, restorative potential, and neural activities using the Neu-VR. Regression analysis revealed statistically significant relationships between interior properties and restorative potential. We examined each characteristic of the home environment that could have a restorative impact and elucidated the environmental characteristics that should be emphasized in residential interior design. These findings contribute evidence-based knowledge for designing therapeutic indoor environments. Furthermore, by linking environments of differing restorative potential to neural activity, we discuss neural activities that may predict restorativeness and identify new neural-activity indicators for environmental design.
Virtual sensing for gearbox condition monitoring based on extreme learning machine
The gearbox, a critical component that converts speed and torque to keep machinery operating normally in industrial processes, has received and still requires considerable attention to ensure reliable operation. Direct and indirect sensing techniques are widely used for gearbox condition monitoring and fault diagnosis, but each has pros and cons. To bridge the gap between them and enhance early fault diagnosis, this paper presents a new virtual sensing technique based on the extreme learning machine (ELM) for estimating gearbox degradation status. By fusing features extracted from indirect sensing measurements (e.g. in-process vibration), the ELM-based virtual sensing model can infer the gearbox condition that is usually indicated directly by direct sensing measurements (e.g. offline oil debris mass (ODM)). Several state-of-the-art dimension reduction techniques were investigated for feature selection and fusion, including principal component analysis (PCA), its kernel version, and the locality preserving projection (LPP) method. The effectiveness of the presented virtual sensing technique is experimentally validated with sensing measurements from a spiral bevel gear test rig. The experimental results show that the gearbox condition estimated by the virtual sensing model based on ELM and kernel PCA closely follows the trend of the ground-truth data and outperforms a support vector regression-based virtual sensing scheme.
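An extreme learning machine is simple enough to sketch in a few lines: a random, fixed hidden layer followed by an output layer fit in closed form by least squares. The toy features and target below are illustrative placeholders, not the paper's vibration features or ODM data.

```python
import numpy as np

rng = np.random.default_rng(0)

class ELM:
    """Minimal extreme learning machine: random fixed hidden weights plus a
    least-squares output layer. A generic sketch of the technique, not the
    paper's exact model."""
    def __init__(self, n_in, n_hidden=64):
        self.W = rng.standard_normal((n_in, n_hidden))
        self.b = rng.standard_normal(n_hidden)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)   # random nonlinear features

    def fit(self, X, y):
        H = self._hidden(X)
        # the only training step: solve H @ beta ≈ y by least squares
        self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# toy regression: infer a smooth "degradation" signal from noisy features
X = rng.standard_normal((200, 8))
y = 0.5 * X[:, 0] + np.sin(X[:, 1]) + 0.05 * rng.standard_normal(200)
model = ELM(n_in=8).fit(X[:150], y[:150])
pred = model.predict(X[150:])
```

Because only `beta` is learned, training reduces to one linear solve, which is what makes ELM attractive for fast virtual-sensing models; in the paper's pipeline the inputs would be PCA- or kernel-PCA-reduced vibration features rather than random data.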
RS2G: Data-Driven Scene-Graph Extraction and Embedding for Robust Autonomous Perception and Scenario Understanding
Human drivers naturally reason about interactions between road users to
understand and safely navigate through traffic. Thus, developing autonomous
vehicles necessitates the ability to mimic such knowledge and model
interactions between road users to understand and navigate unpredictable,
dynamic environments. However, since real-world scenarios often differ from
training datasets, effectively modeling the behavior of various road users in
an environment remains a significant research challenge. This reality
necessitates models that generalize to a broad range of domains and explicitly
model interactions between road users and the environment to improve scenario
understanding. Graph learning methods address this problem by modeling
interactions using graph representations of scenarios. However, existing
methods cannot effectively transfer knowledge gained from the training domain
to real-world scenarios. This constraint is caused by the domain-specific rules
used for graph extraction that can vary in effectiveness across domains,
limiting generalization ability. To address these limitations, we propose
RoadScene2Graph (RS2G): a data-driven graph extraction and modeling approach
that learns to extract the best graph representation of a road scene for
solving autonomous scene understanding tasks. We show that RS2G enables better
performance at subjective risk assessment than rule-based graph extraction
methods and deep-learning-based models. RS2G also improves generalization and
Sim2Real transfer learning, which denotes the ability to transfer knowledge
gained from simulation datasets to unseen real-world scenarios. We also present
ablation studies showing how RS2G produces a more useful graph representation
for downstream classifiers. Finally, we show how RS2G can identify the relative
importance of rule-based graph edges and enable intelligent graph sparsity
tuning.
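The rule-based graph extraction that RS2G improves upon can be illustrated with a minimal sketch. Everything here is a hypothetical baseline (object format, distance threshold, and edge labels are assumptions): nodes are road users, and a fixed rule connects pairs closer than a threshold, whereas RS2G learns the extraction from data.

```python
import math

def extract_scene_graph(objects, max_dist=15.0):
    """Sketch of a rule-based scene-graph baseline (illustrative only).
    objects: list of {"id": str, "pos": (x, y)} road users.
    Edges connect pairs within max_dist metres, labeled by distance."""
    nodes = [o["id"] for o in objects]
    edges = []
    for i, a in enumerate(objects):
        for b in objects[i + 1:]:
            d = math.dist(a["pos"], b["pos"])
            if d <= max_dist:
                edges.append((a["id"], b["id"], round(d, 2)))
    return nodes, edges

scene = [
    {"id": "ego",  "pos": (0.0, 0.0)},
    {"id": "car1", "pos": (3.0, 4.0)},    # 5 m from ego: connected
    {"id": "ped1", "pos": (30.0, 0.0)},   # beyond the threshold: isolated
]
nodes, edges = extract_scene_graph(scene)
```

The hard-coded `max_dist` rule is exactly the kind of domain-specific assumption that can transfer poorly from simulation to the real world, which motivates learning the extraction end-to-end as RS2G does.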