Memory and Computation-Efficient Kernel SVM via Binary Embedding and Ternary Model Coefficients
Kernel approximation is widely used to scale up kernel SVM training and
prediction. However, the memory and computation costs of kernel approximation
models are still too high if we want to deploy them on memory-limited devices
such as mobile phones, smartwatches, and IoT devices. To address this
challenge, we propose a novel memory and computation-efficient kernel SVM model
by using both binary embedding and binary model coefficients. First, we propose
an efficient way to generate compact binary embedding of the data, preserving
the kernel similarity. Second, we propose a simple but effective algorithm to
learn a linear classification model with ternary coefficients that can support
different types of loss functions and regularizers. Our algorithm can achieve
better generalization accuracy than existing works on learning binary
coefficients, since we allow each coefficient to be -1, 0, or +1 during the
training stage, and zero coefficients can be removed during model inference for
binary classification. Moreover, we provide a detailed analysis of the
convergence of our algorithm and the inference complexity of our model. The
analysis shows that the convergence to a local optimum is guaranteed, and the
inference complexity of our model is much lower than other competing methods.
Our experimental results on five large real-world datasets have demonstrated
that our proposed method can build accurate nonlinear SVM models with memory
costs of less than 30 KB.
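As a purely illustrative sketch of the two ingredients above, the toy code below builds a binary embedding from random hyperplanes and runs inference with ternary coefficients. The dimensions, the sign-of-random-projection embedding, and the coefficient values are assumptions for illustration, not the paper's actual training algorithm.

```python
import random

random.seed(0)

D_IN, D_EMB = 8, 16  # toy sizes, not from the paper

# Random hyperplanes: the sign of a random projection yields a binary embedding
# whose Hamming similarity approximates angular (kernel) similarity.
planes = [[random.gauss(0, 1) for _ in range(D_IN)] for _ in range(D_EMB)]

def binary_embed(x):
    return [1 if sum(w * v for w, v in zip(p, x)) >= 0 else 0 for p in planes]

# Hypothetical ternary coefficients in {-1, 0, +1}: zero entries simply drop out.
coef = [random.choice((-1, 0, 1)) for _ in range(D_EMB)]

def predict(x):
    z = binary_embed(x)
    # Only nonzero coefficients contribute, so inference needs no multiplications.
    score = sum(z[i] if c > 0 else -z[i] for i, c in enumerate(coef) if c != 0)
    return 1 if score >= 0 else -1
```

Because the embedding is binary and the surviving coefficients are signs, inference reduces to additions and subtractions over bits, which is what makes the memory and compute footprint small.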
Improved Subsampled Randomized Hadamard Transform for Linear SVM
Subsampled Randomized Hadamard Transform (SRHT), a popular random projection
method that can efficiently project d-dimensional data into an r-dimensional
space (r << d) in O(d log d) time, has been widely used to address the
challenge of high dimensionality in machine learning. SRHT works by rotating
the input data matrix with a randomized Walsh-Hadamard transform and then
uniformly sampling columns of the rotated matrix. Despite its advantages, one
limitation of SRHT is
that it generates the new low-dimensional embedding without considering any
specific properties of a given dataset. Therefore, this data-independent random
projection method may result in inferior and unstable performance when used for
a particular machine learning task, e.g., classification. To overcome this
limitation, we analyze the effect of using SRHT for random projection in the
context of linear SVM classification. Based on our analysis, we propose
importance sampling and deterministic top-r sampling to produce an effective
low-dimensional embedding, instead of the uniform sampling used in SRHT. In
addition, we propose a new supervised non-uniform sampling method. Our experimental
results have demonstrated that our proposed methods can achieve higher
classification accuracies than SRHT and other random projection methods on six
real-life datasets.
Comment: AAAI-2
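The SRHT pipeline the abstract describes (random sign flips, a Walsh-Hadamard rotation, then column sampling) can be sketched in a few lines. The normalization and toy sizes below are standard textbook choices, not taken from this paper.

```python
import random

def fwht(a):
    """Fast Walsh-Hadamard transform in O(d log d); len(a) must be a power of two."""
    a = list(a)
    h = 1
    while h < len(a):
        for i in range(0, len(a), h * 2):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

def srht(x, r, seed=0):
    """SRHT sketch: random signs, Hadamard rotation, then uniform column sampling."""
    rng = random.Random(seed)
    d = len(x)
    signs = [rng.choice((-1, 1)) for _ in range(d)]
    rotated = fwht([s * v for s, v in zip(signs, x)])
    idx = rng.sample(range(d), r)          # the uniform sampling this paper improves on
    scale = (d / r) ** 0.5 / d ** 0.5      # sqrt(d/r) sampling rescale, 1/sqrt(d) Hadamard norm
    return [scale * rotated[i] for i in idx]

emb = srht([1.0, -2.0, 0.5, 3.0, -1.0, 0.0, 2.0, 1.5], 4)
```

The paper's proposals would replace the `rng.sample` line with importance, deterministic top-r, or supervised non-uniform sampling while keeping the rest of the pipeline intact.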
Semantic Arithmetic Coding using Synonymous Mappings
Recent semantic communication methods explore effective ways to expand the
communication paradigm and improve the performance of communication systems.
Nonetheless, a common problem of these methods is that the essence of
semantics is not explicitly identified and directly exploited. A new
epistemology suggests that synonymy, which is revealed as the fundamental
feature of semantics, guides the establishment of the semantic information
theory from a novel viewpoint. Building on this theoretical basis, this paper
proposes a semantic arithmetic coding (SAC) method for semantic lossless
compression using intuitive semantic synonymy. By constructing reasonable
synonymous mappings and performing arithmetic coding procedures over synonymous
sets, SAC can achieve higher compression efficiency for meaning-contained
source sequences at the semantic level and thereby approximate the semantic
entropy limits. Experimental results on edge texture map compression show an
evident improvement in coding efficiency using SAC without semantic losses,
compared to traditional arithmetic coding, which demonstrates its
effectiveness.
Comment: 6 pages, 4 figures. This paper is submitted to the 2024 IEEE International Symposium on Information Theory (ISIT 2024).
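The core idea, coding over synonymous sets rather than individual symbols, can be illustrated with a toy exact arithmetic coder. The synonymous mapping, alphabet, and probabilities below are invented for illustration; a real SAC implementation would use finite-precision arithmetic coding, not exact rationals.

```python
from fractions import Fraction

# Invented synonymous mapping: every word collapses to a synonymous-set label.
SYN_MAP = {"big": "LARGE", "large": "LARGE", "huge": "LARGE",
           "small": "SMALL", "tiny": "SMALL", "cat": "CAT"}

def intervals(probs):
    """Cumulative [lo, hi) interval per symbol."""
    out, c = {}, Fraction(0)
    for s, p in sorted(probs.items()):
        out[s] = (c, c + p)
        c += p
    return out

def ac_encode(seq, probs):
    """Exact arithmetic coding with rationals: narrow one interval per symbol."""
    ivs = intervals(probs)
    low, span = Fraction(0), Fraction(1)
    for s in seq:
        lo, hi = ivs[s]
        low, span = low + span * lo, span * (hi - lo)
    return low + span / 2   # any point in the final interval identifies seq

def ac_decode(x, probs, n):
    ivs = intervals(probs)
    low, span = Fraction(0), Fraction(1)
    out = []
    for _ in range(n):
        t = (x - low) / span
        for s, (lo, hi) in ivs.items():
            if lo <= t < hi:
                out.append(s)
                low, span = low + span * lo, span * (hi - lo)
                break
    return out

words = ["big", "cat", "huge"]
sem = [SYN_MAP[w] for w in words]   # code at the synonymous-set level
probs = {"LARGE": Fraction(1, 2), "SMALL": Fraction(1, 4), "CAT": Fraction(1, 4)}
code = ac_encode(sem, probs)
```

Coding the set labels instead of the raw words shrinks the alphabet and raises per-symbol probabilities, which is where the compression gain at the semantic level comes from.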
Semantic Huffman Coding using Synonymous Mapping
Semantic communication stands out as a highly promising avenue for future
developments in communications. Theoretically, source compression coding based
on semantics can achieve lower rates than Shannon entropy. This paper
introduces a semantic Huffman coding built upon semantic information theory. By
incorporating synonymous mapping and synonymous sets, semantic Huffman coding
can achieve shorter average code lengths. Furthermore, we demonstrate that
semantic Huffman coding theoretically has the capability to approximate
semantic entropy. Experimental results indicate that, under the condition of
semantic lossless, semantic Huffman coding exhibits clear advantages in
compression efficiency over classical Huffman coding.
Comment: 6 pages, 3 figures. This paper is submitted to the 2024 IEEE International Symposium on Information Theory (ISIT 2024).
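A minimal illustration of the claimed gain: collapsing synonyms into synonymous sets before building the Huffman tree shrinks the alphabet and the average code length per source word. The word list and synonymous mapping below are invented for illustration.

```python
import heapq
from collections import Counter

# Invented synonymous sets: words that share a meaning share one codeword.
SYN_MAP = {"quick": "FAST", "fast": "FAST", "rapid": "FAST",
           "slow": "SLOW", "sluggish": "SLOW", "fox": "FOX"}

def huffman_lengths(freqs):
    """Code length per symbol from a standard Huffman merge."""
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    tick = len(heap)           # tiebreaker so dicts are never compared
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)
        f2, _, d2 = heapq.heappop(heap)
        merged = {s: l + 1 for s, l in {**d1, **d2}.items()}
        heapq.heappush(heap, (f1 + f2, tick, merged))
        tick += 1
    return heap[0][2]

def avg_len(freqs):
    lengths = huffman_lengths(freqs)
    total = sum(freqs.values())
    return sum(freqs[s] * lengths[s] for s in freqs) / total

words = ["quick", "fast", "rapid", "slow", "fox", "fox", "rapid", "sluggish"]
plain_avg = avg_len(Counter(words))                     # classical Huffman
sem_avg = avg_len(Counter(SYN_MAP[w] for w in words))   # collapse synonyms first
```

On this toy source the semantic code averages 1.5 bits per word versus 2.5 for the classical code, at the cost of only distinguishing meanings, not exact words, which is the "semantic lossless" condition.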
Compressed sensing in photoacoustic tomography with in vivo experiments
The data acquisition speed in photoacoustic computed tomography (PACT) is limited by the laser repetition rate and the number of parallel ultrasound detection channels. Reconstructing the PACT image from fewer measurements can effectively accelerate data acquisition and reduce the system cost. The recently emerged compressed sensing (CS) theory enables us to reconstruct a compressible image from a small number of projections. This paper adopts CS theory for reconstruction in PACT. The idea is implemented as a nonlinear conjugate gradient descent algorithm and tested with phantom and in vivo experiments.
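A toy version of CS reconstruction in this spirit is sketched below using ISTA, a simple proximal-gradient stand-in for the paper's nonlinear conjugate gradient solver, on an l1-regularized least-squares objective. The sizes, measurement matrix, and sparse "image" are synthetic assumptions.

```python
import random

random.seed(1)

# Synthetic CS setup: m < n measurements of a sparse "image" (sizes are toy).
n, m = 12, 8
x_true = [0.0] * n
x_true[2], x_true[7] = 3.0, -2.0
A = [[random.gauss(0, 1) / m ** 0.5 for _ in range(n)] for _ in range(m)]
y = [sum(A[i][j] * x_true[j] for j in range(n)) for i in range(m)]

def soft(v, t):
    """Soft-thresholding: the proximal step for the l1 sparsity prior."""
    return (abs(v) - t) * (1 if v > 0 else -1) if abs(v) > t else 0.0

def ista(A, y, lam=0.05, step=0.05, iters=3000):
    """Gradient step on the data-fit term followed by l1 shrinkage (ISTA)."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(len(A))]
        g = [sum(A[i][j] * r[i] for i in range(len(A))) for j in range(n)]
        x = [soft(x[j] - step * g[j], step * lam) for j in range(n)]
    return x

x_hat = ista(A, y)
```

Even with fewer measurements than unknowns, the sparsity prior recovers the support of the signal, which is the essential CS mechanism the paper exploits for under-sampled PACT data.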
Monad: Towards Cost-effective Specialization for Chiplet-based Spatial Accelerators
Advanced packaging offers a new design paradigm in the post-Moore era, where
many small chiplets can be assembled into a large system. Based on
heterogeneous integration, a chiplet-based accelerator can be highly
specialized for a specific workload, demonstrating extreme efficiency and cost
reduction. To fully leverage this potential, it is critical to explore both the
architectural design space for individual chiplets and different integration
options to assemble these chiplets, which have yet to be fully exploited by
existing proposals. This paper proposes Monad, a cost-aware specialization
approach for chiplet-based spatial accelerators that explores the tradeoffs
between PPA and fabrication costs. To evaluate a specialized system, we
introduce a modeling framework considering the non-uniformity in dataflow,
pipelining, and communications when executing multiple tensor workloads on
different chiplets. We propose to combine the architecture and integration
design space by uniformly encoding the design aspects for both spaces and
exploring them with a systematic ML-based approach. The experiments demonstrate
that Monad can achieve an average of 16% and 30% EDP reduction compared with
the state-of-the-art chiplet-based accelerators, Simba and NN-Baton,
respectively.
Comment: To be published in ICCAD 202
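The "uniformly encode both spaces, then explore" idea can be sketched with a toy design space and cost model. The knobs, the EDP formula, and the random-search explorer (standing in for the paper's systematic ML-based approach) are all placeholder assumptions, not Monad's actual framework.

```python
import random

random.seed(0)

# Hypothetical uniform encoding: one vector covers chiplet-architecture knobs
# (PE rows, buffer size) and integration knobs (chiplet count, link width).
SPACE = {
    "pe_rows":   [8, 16, 32],
    "buffer_kb": [64, 128, 256],
    "chiplets":  [1, 2, 4],
    "link_bits": [32, 64],
}

def toy_edp(cfg):
    """Made-up energy-delay-product model, not the paper's evaluation framework."""
    compute = cfg["pe_rows"] * cfg["chiplets"]
    delay = 1e6 / compute + 100.0 * cfg["chiplets"] / cfg["link_bits"]  # comms cost grows with chiplets
    energy = 0.5 * compute + 0.01 * cfg["buffer_kb"]
    return delay * energy

def random_search(trials=200):
    """Simplest baseline explorer over the jointly encoded space."""
    best, best_cost = None, float("inf")
    for _ in range(trials):
        cfg = {k: random.choice(v) for k, v in SPACE.items()}
        cost = toy_edp(cfg)
        if cost < best_cost:
            best, best_cost = cfg, cost
    return best, best_cost

best_cfg, best_edp = random_search()
```

Encoding architecture and integration choices in one vector is what lets a single search loop trade them off jointly, instead of fixing one space while tuning the other.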
TAG: Type Auxiliary Guiding for Code Comment Generation
Existing leading code comment generation approaches with the
structure-to-sequence framework ignore the type information in the
interpretation of the code, e.g., operator, string, etc. However, introducing
the type information into the existing framework is non-trivial due to the
hierarchical dependence among the type information. In order to address the
issues above, we propose a Type Auxiliary Guiding encoder-decoder framework for
the code comment generation task which considers the source code as an N-ary
tree with type information associated with each node. Specifically, our
framework is featured with a Type-associated Encoder and a Type-restricted
Decoder, which together enable adaptive summarization of the source code. We further
propose a hierarchical reinforcement learning method to resolve the training
difficulties of our proposed framework. Extensive evaluations demonstrate the
state-of-the-art performance of our framework with both the auto-evaluated
metrics and case studies.
Comment: ACL 2020, Accepted
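The type-restriction idea can be illustrated by masking the decoder's output distribution to tokens whose type matches the current source node. The vocabulary, types, and scores below are invented for illustration, not the framework's actual decoder.

```python
import math

# Invented vocabulary with a type per token; at each decoding step, tokens whose
# type conflicts with the current source node are masked out before softmax.
VOCAB_TYPE = {"+": "operator", "x": "identifier", '"hi"': "string", "y": "identifier"}

def masked_softmax(scores, allowed_type):
    """Softmax restricted to tokens of the permitted type."""
    exps = {t: math.exp(s) for t, s in scores.items() if VOCAB_TYPE[t] == allowed_type}
    z = sum(exps.values())
    return {t: e / z for t, e in exps.items()}

# Decoding at a node that requires an identifier: only "x" and "y" survive.
dist = masked_softmax({"+": 2.0, "x": 1.0, '"hi"': 0.5, "y": 0.2}, "identifier")
```

Restricting the candidate set this way keeps probability mass on type-consistent tokens, which is the intuition behind guiding generation with the N-ary tree's type annotations.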
Regulation of Apical NHE3 Trafficking by Ouabain-Induced Activation of Basolateral Na/K-ATPase Receptor Complex
The long-term effects of ouabain on transepithelial Na+ transport involve transcriptional downregulation of apical Na+/H+ exchanger isoform 3 (NHE3). The aim of this study was to determine whether ouabain could acutely regulate NHE3 via a posttranscriptional mechanism in LLC-PK1 cells. We observed that the basolateral, but not apical, application of ouabain for 1 h significantly reduced transepithelial Na+ transport. This effect was not due to changes in the integrity of tight junctions or increases in the intracellular Na+ concentration. Ouabain regulated the trafficking of NHE3 and subsequently inhibited its activity, a process independent of intracellular Na+ concentration. Ouabain-induced NHE3 trafficking was abolished by either cholesterol depletion or Src inhibition. Moreover, ouabain increased the intracellular Ca2+ concentration. Pretreatment of cells with the intracellular Ca2+ chelator BAPTA-AM blocked ouabain-induced trafficking of NHE3. Also, blockade of Na+-K+-ATPase endocytosis by a phosphatidylinositol 3-kinase inhibitor was equally effective in attenuating ouabain-induced NHE3 trafficking. These data indicate that ouabain acutely stimulates NHE3 trafficking by activating the basolateral Na+-K+-ATPase signaling complex. Taken together with our previous observations, we propose that ouabain can simultaneously regulate basolateral Na+-K+-ATPase and apical NHE3, leading to inhibition of transepithelial Na+ transport. This mechanism may be relevant to proximal tubular Na+ handling during conditions associated with increases in circulating endogenous cardiotonic steroids.
Wireless Deep Video Semantic Transmission
In this paper, we design a new class of high-efficiency deep joint
source-channel coding methods to achieve end-to-end video transmission over
wireless channels. The proposed methods exploit nonlinear transform and
conditional coding architecture to adaptively extract semantic features across
video frames, and transmit semantic feature domain representations over
wireless channels via deep joint source-channel coding. We refer to our
framework as deep video semantic transmission (DVST). In
particular, benefiting from the strong temporal prior provided by the feature
domain context, the learned nonlinear transform function becomes temporally
adaptive, resulting in a richer and more accurate entropy model guiding the
transmission of the current frame. Accordingly, a novel rate-adaptive transmission
mechanism is developed to customize deep joint source-channel coding for video
sources. It learns to allocate the limited channel bandwidth within and among
video frames to maximize the overall transmission performance. The whole DVST
design is formulated as an optimization problem whose goal is to optimize the
end-to-end transmission rate-distortion performance under perceptual quality
metrics or machine vision task performance metrics. Across standard video
source test sequences and various communication scenarios, experiments show
that our DVST can generally surpass traditional wireless video coded
transmission schemes. The proposed DVST framework can well support future
semantic communications due to its video content-aware and machine vision task
integration abilities.
Comment: published in IEEE JSA
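The rate-allocation idea, spending limited channel bandwidth where it buys the most quality within and among frames, can be sketched with a greedy marginal-gain allocator. The concave quality model and complexity weights are invented stand-ins for DVST's learned allocation mechanism.

```python
import math

def allocate(weights, total_units):
    """Greedy marginal-gain allocation of bandwidth units across frames;
    weights model per-frame content complexity (hypothetical)."""
    alloc = [0] * len(weights)

    def gain(i):
        # Marginal quality improvement of one more unit for frame i,
        # under an assumed concave (log-shaped) quality-vs-rate curve.
        return weights[i] * (math.log1p(alloc[i] + 1) - math.log1p(alloc[i]))

    for _ in range(total_units):
        i = max(range(len(weights)), key=gain)
        alloc[i] += 1
    return alloc

frames = [3.0, 1.0, 2.0]     # toy complexity weights for three frames
alloc = allocate(frames, 12)
```

With diminishing returns per frame, the greedy rule naturally sends more bandwidth to complex frames while never starving simple ones, mirroring the overall-performance objective the abstract describes.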