An evolutionary approach to constraint-regularized learning
The success of machine learning methods for inducing models from data
crucially depends on the proper incorporation of background knowledge about
the model to be learned. The idea of constraint-regularized learning is to em-
ploy fuzzy set-based modeling techniques in order to express such knowl-
edge in a flexible way, and to formalize it in terms of fuzzy constraints.
Thus, background knowledge can be used to appropriately bias the learn-
ing process within the regularization framework of inductive inference. After
a brief review of this idea, the paper offers an operationalization of constraint-
regularized learning. The corresponding framework is based on evolutionary
methods for model optimization and employs fuzzy rule bases of the Takagi-
Sugeno type as flexible function approximators.
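As a concrete illustration of the idea, background knowledge can be encoded as a fuzzy constraint whose degree of satisfaction penalizes the empirical risk. The sketch below assumes a sigmoid membership function and a hypothetical monotonicity constraint; the paper's fuzzy set-based formalization is more general.

```python
import numpy as np

def fuzzy_monotonicity_degree(model, grid):
    """Degree in [0, 1] to which the model is increasing on a grid.

    A soft (fuzzy) version of the hard constraint f(x_i) <= f(x_{i+1}):
    each local violation is scored by a sigmoid membership function.
    (Illustrative membership choice, not the paper's exact formalization.)
    """
    diffs = np.diff(model(grid))
    memberships = 1.0 / (1.0 + np.exp(-100.0 * diffs))
    return float(memberships.mean())

def constraint_regularized_loss(model, x, y, lam=1.0):
    """Empirical risk plus a penalty for violating the fuzzy constraint."""
    mse = float(np.mean((model(x) - y) ** 2))
    grid = np.linspace(x.min(), x.max(), 50)
    return mse + lam * (1.0 - fuzzy_monotonicity_degree(model, grid))

# Data generated by an increasing function: an increasing candidate model
# incurs a lower constraint-regularized loss than a decreasing one.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * x + rng.normal(0.0, 0.05, x.size)
increasing = lambda t: 2.0 * t
decreasing = lambda t: -2.0 * t
```

In the full framework, an evolutionary search over Takagi-Sugeno rule bases would minimize a loss of this shape rather than the toy linear models used here.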
Foothill: A Quasiconvex Regularization for Edge Computing of Deep Neural Networks
Deep neural networks (DNNs) have demonstrated success for many supervised
learning tasks, ranging from voice recognition, object detection, to image
classification. However, their increasing complexity can yield poor
generalization error, making them hard to deploy on edge devices.
Quantization is an effective approach to compress DNNs in order to meet these
constraints. Using a quasiconvex base function in order to construct a binary
quantizer helps training binary neural networks (BNNs) and adding noise to the
input data or using a concrete regularization function helps to improve
generalization error. Here we introduce the foothill function, an infinitely
differentiable quasiconvex function. This regularizer is flexible enough to
deform towards $\ell_1$ and $\ell_2$ penalties. Foothill can be used as a binary
quantizer, as a regularizer, or as a loss. In particular, we show this
regularizer reduces the accuracy gap between BNNs and their full-precision
counterparts for image classification on ImageNet.
Comment: Accepted at the 16th International Conference on Image Analysis and
Recognition (ICIAR 2019).
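The deformation behaviour described in the abstract can be sketched numerically. The functional form alpha * x * tanh(beta * x) is assumed here for illustration and may differ in detail from the paper's exact definition of the foothill function:

```python
import math

def foothill(x, alpha=1.0, beta=1.0):
    """Assumed foothill-style penalty: alpha * x * tanh(beta * x).

    Infinitely differentiable and quasiconvex (even, unimodal, minimum 0
    at x = 0). Near zero it behaves like alpha * beta * x**2 (L2-like);
    for large |x| it approaches alpha * |x| (L1-like), so beta deforms
    the penalty between the two regimes.
    """
    return alpha * x * math.tanh(beta * x)
```

With a large beta the function doubles as a smooth surrogate for a binary quantizer's penalty, since it is nearly flat at the minimum and linear in the tails.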
Multitask Learning Deep Neural Networks to Combine Revealed and Stated Preference Data
It is an enduring question how to combine revealed preference (RP) and stated
preference (SP) data to analyze travel behavior. This study presents a
framework of multitask learning deep neural networks (MTLDNNs) for this
question, and demonstrates that MTLDNNs are more generic than the traditional
nested logit (NL) method, owing to their capacity for automatic feature learning and
soft constraints. About 1,500 MTLDNN models are designed and applied to survey
data collected in Singapore, covering the RP of four current travel modes and
the SP with autonomous vehicles (AV) as a new travel mode alongside the four in
RP. We found that MTLDNNs consistently
outperform six benchmark models and particularly the classical NL models by
about 5% prediction accuracy in both RP and SP datasets. This performance
improvement can be mainly attributed to the soft constraints specific to
MTLDNNs, including its innovative architectural design and regularization
methods, but not much to the generic capacity of automatic feature learning
endowed by a standard feedforward DNN architecture. Besides prediction, MTLDNNs
are also interpretable. The empirical results show that AV is mainly a
substitute for driving, and that AV alternative-specific variables are more important
than the socio-economic variables in determining AV adoption. Overall, this
study introduces a new MTLDNN framework to combine RP and SP, and demonstrates
its theoretical flexibility and empirical power for prediction and
interpretation. Future studies can design new MTLDNN architectures to reflect
the specifics of RP and SP, and extend this work to other behavioral analyses.
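The shared-representation-plus-task-heads structure described above can be sketched as follows. Dimensions, the tanh hidden layer, and the exact form of the soft constraint are illustrative assumptions, not the paper's specific architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: 5 input features, 8 shared hidden units,
# 4 RP travel modes, 5 SP alternatives (the 4 modes plus AV).
W_shared = rng.normal(size=(5, 8))   # shared layer: automatic feature learning
W_rp = rng.normal(size=(8, 4))       # revealed-preference head
W_sp = rng.normal(size=(8, 5))       # stated-preference head (adds AV column)

def choice_probs(x, W_head):
    """Shared hidden layer followed by a task-specific softmax head."""
    h = np.tanh(x @ W_shared)
    z = h @ W_head
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def soft_constraint(W_rp, W_sp, lam=0.1):
    """Soft constraint coupling the two tasks: an L2 penalty on the gap
    between the RP head and the matching SP columns. (Assumed form; the
    paper's regularization methods may differ in detail.)"""
    return lam * float(np.sum((W_rp - W_sp[:, :4]) ** 2))
```

The penalty replaces the NL model's hard parameter-sharing with a tunable degree of coupling between the RP and SP tasks.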
Global Structure-Aware Diffusion Process for Low-Light Image Enhancement
This paper studies a diffusion-based framework to address the low-light image
enhancement problem. To harness the capabilities of diffusion models, we delve
into this intricate process and advocate for the regularization of its inherent
ODE-trajectory. To be specific, inspired by the recent research that low
curvature ODE-trajectory results in a stable and effective diffusion process,
we formulate a curvature regularization term anchored in the intrinsic
non-local structures of image data, i.e., global structure-aware
regularization, which gradually facilitates the preservation of complicated
details and the augmentation of contrast during the diffusion process. This
incorporation mitigates the adverse effects of noise and artifacts resulting
from the diffusion process, leading to a more precise and flexible enhancement.
To additionally promote learning in challenging regions, we introduce an
uncertainty-guided regularization technique, which wisely relaxes constraints
on the most extreme regions of the image. Experimental evaluations reveal that
the proposed diffusion-based framework, complemented by rank-informed
regularization, attains distinguished performance in low-light enhancement. The
outcomes indicate substantial advancements in image quality, noise suppression,
and contrast amplification in comparison with state-of-the-art methods. We
believe this innovative approach will stimulate further exploration and
advancement in low-light image processing, with potential implications for
other applications of diffusion models. The code is publicly available at
https://github.com/jinnh/GSAD.
Comment: Accepted to NeurIPS 2023.
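The low-curvature idea behind the regularization above can be sketched with a simple discrete proxy: the second difference of a sampled trajectory. This discretization is illustrative only and is not the paper's structure-aware term:

```python
import numpy as np

def curvature_penalty(traj):
    """Mean squared second difference along a sampled trajectory: a
    discrete stand-in for ODE-trajectory curvature. A value near zero
    means the sampled path is close to a straight line in state space.
    (Illustrative discretization, not the paper's exact term.)"""
    second_diff = traj[2:] - 2.0 * traj[1:-1] + traj[:-2]
    return float(np.mean(second_diff ** 2))

# A straight path between two states has (numerically) zero curvature;
# bending the same path raises the penalty.
t = np.linspace(0.0, 1.0, 10)[:, None]
start = np.zeros((1, 4))
end = np.ones((1, 4))
straight = (1.0 - t) * start + t * end
curved = straight + 0.3 * np.sin(np.pi * t)
```

Minimizing such a term encourages the diffusion process to follow a near-linear path between noise and the enhanced image, which is the stability intuition the abstract appeals to.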