On the Weak Computability of Continuous Real Functions
In computable analysis, sequences of rational numbers which effectively
converge to a real number x are used as the (rho-) names of x. A real number x
is computable if it has a computable name, and a real function f is computable
if there is a Turing machine M which computes f in the sense that, M accepts
any rho-name of x as input and outputs a rho-name of f(x) for any x in the
domain of f. By weakening the effectiveness requirement of the convergence and
classifying the convergence speeds of rational sequences, several interesting
classes of weakly computable real numbers have been introduced in the
literature: in addition to the class of computable real numbers (EC), there are
the classes of semi-computable (SC), weakly computable (WC), divergence-bounded
computable (DBC), and computably approximable (CA) real numbers. In this
paper, we are interested in the weak computability of continuous real functions
and try to introduce an analogous classification of weakly computable real
functions. We present definitions of these functions by Turing machines as well
as by sequences of rational polygons, and prove that the two definitions are
not equivalent. Furthermore, we explore the properties of these functions and,
among other results, establish their closure properties under arithmetic
operations and composition.
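The central notion of a rho-name, a rational sequence converging effectively to x, can be illustrated with a short sketch. The function names `sqrt2_name` and `plus_one_name` are ours, not the paper's; this is only a minimal illustration of computing on names, assuming the standard convention |q_n - x| <= 2^-n.

```python
from fractions import Fraction

def sqrt2_name(n):
    """A rho-name of sqrt(2): return a rational q_n with
    |q_n - sqrt(2)| <= 2**-n, found by interval bisection on [1, 2]."""
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > Fraction(1, 2 ** n):
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid
        else:
            hi = mid
    return lo  # lo <= sqrt(2) <= hi and hi - lo <= 2**-n

def plus_one_name(name):
    """A machine computing f(x) = x + 1 on names: it turns any
    rho-name of x into a rho-name of x + 1, approximation by
    approximation, without ever seeing x itself."""
    return lambda n: name(n) + 1

q10 = sqrt2_name(10)          # rational within 2**-10 of sqrt(2)
g = plus_one_name(sqrt2_name) # a rho-name of sqrt(2) + 1
```

Weakening the requirement that the error bound 2^-n be known in advance is exactly what yields the broader classes (SC, WC, DBC, CA) discussed above.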
Generalized Batch Normalization: Towards Accelerating Deep Neural Networks
Utilizing recently introduced concepts from statistics and quantitative risk
management, we present a general variant of Batch Normalization (BN) that
offers accelerated convergence of Neural Network training compared to
conventional BN. In general, we show that mean and standard deviation are not
always the most appropriate choice for the centering and scaling procedure
within the BN transformation, particularly if ReLU follows the normalization
step. We present a Generalized Batch Normalization (GBN) transformation, which
can utilize a variety of alternative deviation measures for scaling and
statistics for centering, choices which naturally arise from the theory of
generalized deviation measures and risk theory in general. When used in
conjunction with the ReLU non-linearity, the underlying risk theory suggests
natural, arguably optimal choices for the deviation measure and statistic.
Utilizing the suggested deviation measure and statistic, we show experimentally
that training is accelerated more than with conventional BN, often with
improved error rates as well. Overall, we propose a more flexible BN
transformation supported by a complementary theoretical framework that can
potentially guide design choices.
Comment: accepted at AAAI-1
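The core idea, replacing the mean/standard-deviation pair with other centering statistics and deviation measures, can be sketched as follows. The function name `generalized_bn` and the choice of mean absolute deviation are our illustrative assumptions, not the paper's specific recommendation.

```python
import numpy as np

def mean_abs_dev(v, axis):
    """One alternative deviation measure: mean absolute deviation."""
    return np.mean(np.abs(v - np.mean(v, axis, keepdims=True)), axis)

def generalized_bn(x, statistic=np.mean, deviation=mean_abs_dev, eps=1e-5):
    """GBN-style transform over a batch (axis 0): center each feature
    by a chosen statistic and scale it by a chosen deviation measure,
    generalizing BN's (mean, std) pair."""
    center = statistic(x, axis=0)
    scale = deviation(x, axis=0)
    return (x - center) / (scale + eps)
```

Conventional BN is recovered by passing `deviation=np.std`; swapping in other measures from the theory of generalized deviation measures changes only the `scale` line.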
Characterizing Location-based Mobile Tracking in Mobile Ad Networks
Mobile apps nowadays are often packaged with third-party ad libraries to
monetize user data.
Proton-coupled sugar transport in the prototypical major facilitator superfamily protein XylE.
The major facilitator superfamily (MFS) is the largest collection of structurally related membrane proteins that transport a wide array of substrates. The proton-coupled sugar transporter XylE is the first member of the MFS to have been structurally characterized in multiple transporting conformations, including both the outward- and inward-facing states. Here we report the crystal structure of XylE in a new inward-facing open conformation, allowing us to visualize the rocker-switch movement of the N-domain against the C-domain during the transport cycle. Using molecular dynamics simulations and functional transport assays, we describe the movement of XylE that facilitates sugar translocation across a lipid membrane and identify the likely candidate proton-coupling residues as the conserved Asp27 and Arg133. This study addresses the structural basis for proton-coupled substrate transport and the release mechanism for the sugar porter family of proteins.
Topological Data Analysis on Simple English Wikipedia Articles
Single-parameter persistent homology, a key tool in topological data
analysis, has been widely applied to data problems along with statistical
techniques that quantify the significance of the results. In contrast,
statistical techniques for two-parameter persistence, while highly desirable
for real-world applications, have scarcely been considered. We present three
statistical approaches for comparing geometric data using two-parameter
persistent homology; these approaches rely on the Hilbert function, matching
distance, and barcodes obtained from two-parameter persistence modules computed
from the point-cloud data. Our statistical methods are broadly applicable for
analysis of geometric data indexed by a real-valued parameter. We apply these
approaches to analyze high-dimensional point-cloud data obtained from Simple
English Wikipedia articles. In particular, we show how our methods can be
utilized to distinguish certain subsets of the Wikipedia data and to compare
with random data. These results yield insights into the construction of null
distributions and stability of our methods with respect to noisy data.
Comment: 17 pages, 13 figures
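The first of the three statistics above, the Hilbert function of a two-parameter persistence module, can be illustrated in degree 0 for a toy radius-codensity bifiltration. The function `h0_hilbert`, the codensity choice (distance to the k-th nearest neighbor), and all parameter names are our assumptions for illustration; this is not the paper's pipeline.

```python
import numpy as np

def pairwise(points):
    """All pairwise Euclidean distances between rows of `points`."""
    diff = points[:, None, :] - points[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

def h0_hilbert(points, radii, densities, k=2):
    """Hilbert function of degree-0 homology for a two-parameter
    filtration: entry [i, j] counts connected components among points
    whose codensity (distance to the k-th nearest neighbor) is at most
    densities[j], joined by edges of length at most radii[i]."""
    D = pairwise(points)
    codensity = np.sort(D, axis=1)[:, k]
    H = np.zeros((len(radii), len(densities)), dtype=int)
    for i, r in enumerate(radii):
        for j, d in enumerate(densities):
            active = np.where(codensity <= d)[0]
            parent = {p: p for p in active}

            def find(a):
                while parent[a] != a:
                    parent[a] = parent[parent[a]]
                    a = parent[a]
                return a

            for a_idx, a in enumerate(active):
                for b in active[a_idx + 1:]:
                    if D[a, b] <= r:
                        parent[find(a)] = find(b)
            H[i, j] = len({find(p) for p in active})
    return H
```

Comparing such Hilbert-function grids between a dataset and random point clouds is one simple way to build the kind of null-distribution comparison described above.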