    Incubators vs Zombies: Fault-Tolerant, Short, Thin and Lanky Spanners for Doubling Metrics

    Recently, Elkin and Solomon gave a construction of spanners for doubling metrics that has constant maximum degree, hop-diameter O(log n) and lightness O(log n) (i.e., weight O(log n) · w(MST)). This resolves a long-standing conjecture proposed by Arya et al. in a seminal STOC 1995 paper. However, Elkin and Solomon's spanner construction is extremely complicated; we offer a simple alternative construction that is very intuitive and is based on the standard technique of a net tree with cross edges. Indeed, our approach can be readily applied to our previous construction of k-fault-tolerant spanners (ICALP 2012) to achieve k-fault tolerance, maximum degree O(k^2), hop-diameter O(log n) and lightness O(k^3 log n).
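    For readers unfamiliar with the net-tree technique mentioned above, the following Python sketch builds a hierarchical net and adds cross edges between nearby net points at each scale. It only illustrates the standard construction the abstract refers to; it does not reproduce the paper's bounds on degree, hop-diameter or lightness, and all constants and parameter choices are illustrative.

```python
# Minimal sketch of a net-tree-with-cross-edges spanner for a finite metric.
# Illustrative only: degree, hop-diameter and lightness are NOT optimized here.
import math
from itertools import combinations

def greedy_net(points, radius, dist):
    """Pick a maximal subset of points that are pairwise farther apart than `radius`."""
    net = []
    for p in points:
        if all(dist(p, q) > radius for q in net):
            net.append(p)
    return net

def net_tree_spanner(points, dist, eps=0.5):
    """Spanner edges: net-tree (parent) edges plus cross edges at every scale."""
    if len(points) < 2:
        return set()
    d_min = min(dist(p, q) for p, q in combinations(points, 2))
    d_max = max(dist(p, q) for p, q in combinations(points, 2))
    edges = set()
    radius, net = d_min / 2.0, list(points)       # finest net contains every point
    while radius <= 2.0 * d_max:
        coarser = greedy_net(net, radius, dist)
        for p in net:                             # parent edge into the coarser net
            parent = min(coarser, key=lambda q: dist(p, q))
            if parent != p:
                edges.add(tuple(sorted((p, parent))))
        threshold = (4.0 / eps) * radius          # cross edges at this scale
        for p, q in combinations(coarser, 2):
            if dist(p, q) <= threshold:
                edges.add(tuple(sorted((p, q))))
        net, radius = coarser, radius * 2.0
    return edges

# toy usage on points in the plane
pts = [(0, 0), (1, 0), (0, 1), (5, 5), (5, 6)]
euclid = lambda a, b: math.dist(a, b)
print(sorted(net_tree_spanner(pts, euclid)))
```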

    Rational invariants of even ternary forms under the orthogonal group

    In this article we determine a generating set of rational invariants of minimal cardinality for the action of the orthogonal group $\mathrm{O}_3$ on the space $\mathbb{R}[x,y,z]_{2d}$ of ternary forms of even degree $2d$. The construction relies on two key ingredients: on one hand, the Slice Lemma allows us to reduce the problem to determining the invariants for the action of the finite subgroup $\mathrm{B}_3$ of signed permutations on a subspace. On the other hand, our construction relies in a fundamental way on specific bases of harmonic polynomials. These bases provide maps with prescribed $\mathrm{B}_3$-equivariance properties. Our explicit construction of these bases should be relevant well beyond the scope of this paper. The expression of the $\mathrm{B}_3$-invariants can then be given in a compact form as the composition of two equivariant maps. Instead of providing (cumbersome) explicit expressions for the $\mathrm{O}_3$-invariants, we provide efficient algorithms for their evaluation and rewriting. We also use the constructed $\mathrm{B}_3$-invariants to determine the $\mathrm{O}_3$-orbit locus and provide an algorithm for the inverse problem of finding an element in $\mathbb{R}[x,y,z]_{2d}$ with prescribed values for its invariants. These are the computational issues relevant in brain imaging. Comment: v3 changes: reworked presentation of the neuroimaging application, refinement of Definition 3.1. To appear in "Foundations of Computational Mathematics".
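    As a small illustration of the $\mathrm{B}_3$ action on ternary forms, the sketch below averages a polynomial over the 48 signed permutations (a Reynolds operator), which always produces a $\mathrm{B}_3$-invariant. This is only a didactic sketch of the group action; it is not the paper's minimal generating set of rational invariants nor its evaluation algorithm.

```python
# Averaging over the signed-permutation group B3 (order 48) yields a B3-invariant.
from itertools import permutations, product
import sympy as sp

x, y, z = sp.symbols('x y z')
VARS = (x, y, z)

def b3_elements():
    """All 48 signed permutations of (x, y, z), given as substitution maps."""
    for perm in permutations(VARS):
        for signs in product((1, -1), repeat=3):
            yield {v: s * w for v, w, s in zip(VARS, perm, signs)}

def reynolds(f):
    """Average f over B3; the result is a B3-invariant polynomial."""
    total = sum(f.subs(g, simultaneous=True) for g in b3_elements())
    return sp.expand(total / 48)

f = x**4 * y**2                        # an arbitrary degree-6 monomial
inv = reynolds(f)
print(inv)
# sanity check: invariance under one signed permutation, (x, y, z) -> (y, -x, z)
g = {x: y, y: -x, z: z}
assert sp.expand(inv.subs(g, simultaneous=True) - inv) == 0
```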

    Sediment Motion beneath Surges and Bores

    Positive surges and bores can induce significant bed-load transport in estuaries and river channels. Based upon physical modelling, the present study investigated the sediment motion beneath bores on a relatively long gravel bed. The free-surface measurements at a series of locations showed that the bore shape varied during its upstream propagation. An ultra-high-speed camera captured the details of gravel motion at 1200 fps. Frame-by-frame analysis of the slow-motion video footage demonstrated three basic modes of pebble motion: rotation, rolling and saltation; more complicated pebble motion was a combination of two or three of these basic modes. The synchronous measurements of near-bottom velocity and bed-load material trajectories highlighted the importance of the adverse longitudinal pressure gradient and transient flow recirculation in the inception of particle motion. Long durations of gravel motion also indicated that the weak negative flow under secondary waves played some role in maintaining the upstream transient sediment transport.
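    As a rough illustration of the kind of frame-by-frame video analysis described above, the sketch below flags frames of a high-speed recording with substantial pixel change using simple frame differencing in OpenCV. The file name, threshold and pixel-count cutoff are assumptions for illustration; this is not the authors' processing pipeline.

```python
# Flag frames in a high-speed video where gravel motion is likely (frame differencing).
import cv2

def motion_frames(path, thresh=25, min_pixels=200):
    """Return indices of frames whose difference from the previous frame is large."""
    cap = cv2.VideoCapture(path)
    ok, prev = cap.read()
    if not ok:
        raise IOError("could not read video")
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    moving, idx = [], 1
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, prev)
        _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
        if cv2.countNonZero(mask) > min_pixels:   # enough changed pixels -> motion
            moving.append(idx)
        prev, idx = gray, idx + 1
    cap.release()
    return moving

# hypothetical file name for illustration
print(motion_frames("bore_run_1200fps.mp4")[:10])
```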

    Less is More: Reducing Task and Model Complexity for 3D Point Cloud Semantic Segmentation

    Whilst the availability of 3D LiDAR point cloud data has grown significantly in recent years, annotation remains expensive and time-consuming, leading to a demand for semi-supervised semantic segmentation methods in application domains such as autonomous driving. Existing work very often employs relatively large segmentation backbone networks to improve segmentation accuracy, at the expense of computational cost. In addition, many use uniform sampling to reduce the amount of ground-truth data required for learning, often resulting in sub-optimal performance. To address these issues, we propose a new pipeline that employs a smaller architecture, requiring fewer ground-truth annotations to achieve superior segmentation accuracy compared to contemporary approaches. This is facilitated via a novel Sparse Depthwise Separable Convolution module that significantly reduces the network parameter count while retaining overall task performance. To effectively sub-sample our training data, we propose a new Spatio-Temporal Redundant Frame Downsampling (ST-RFD) method that leverages knowledge of sensor motion within the environment to extract a more diverse subset of training data frame samples. To leverage the limited annotated data samples, we further propose a soft pseudo-label method informed by LiDAR reflectivity. Our method outperforms contemporary semi-supervised work in terms of mIoU, using less labeled data, on the SemanticKITTI (59.5@5%) and ScribbleKITTI (58.1@5%) benchmark datasets, based on a 2.3× reduction in model parameters and 641× fewer multiply-add operations, whilst also demonstrating significant performance improvement on limited training data (i.e., Less is More).
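    The parameter saving behind a depthwise-separable factorisation can be seen in a few lines. The sketch below uses dense torch.nn.Conv3d layers purely for illustration; the paper's module operates on sparse voxel tensors, and the channel sizes here are assumptions.

```python
# Depthwise-separable 3D convolution vs a standard 3D convolution (parameter count).
import torch
import torch.nn as nn

class DepthwiseSeparableConv3d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        # depthwise: one k*k*k filter per input channel (groups=in_ch)
        self.depthwise = nn.Conv3d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch, bias=False)
        # pointwise: 1x1x1 convolution mixes channels
        self.pointwise = nn.Conv3d(in_ch, out_ch, 1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

def n_params(m):
    return sum(p.numel() for p in m.parameters())

standard = nn.Conv3d(64, 128, 3, padding=1, bias=False)
separable = DepthwiseSeparableConv3d(64, 128)
x = torch.randn(1, 64, 16, 16, 16)            # (batch, channels, D, H, W) voxel grid
print(separable(x).shape)                      # torch.Size([1, 128, 16, 16, 16])
print(n_params(standard), "vs", n_params(separable))  # 221184 vs 9920
```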

    Nearly Optimal Private Convolution

    We study computing the convolution of a private input $x$ with a public input $h$, while satisfying the guarantees of $(\epsilon, \delta)$-differential privacy. Convolution is a fundamental operation, intimately related to Fourier transforms. In our setting, the private input may represent a time series of sensitive events or a histogram of a database of confidential personal information. Convolution then captures important primitives including linear filtering, which is an essential tool in time series analysis, and aggregation queries on projections of the data. We give a nearly optimal algorithm for computing convolutions while satisfying $(\epsilon, \delta)$-differential privacy. Surprisingly, we follow the simple strategy of adding independent Laplacian noise to each Fourier coefficient and bounding the privacy loss using the composition theorem of Dwork, Rothblum, and Vadhan. We derive a closed-form expression for the optimal noise to add to each Fourier coefficient using convex programming duality. Our algorithm is very efficient -- it is essentially no more computationally expensive than a Fast Fourier Transform. To prove near optimality, we use the recent discrepancy lower bounds of Muthukrishnan and Nikolov and derive a spectral lower bound using a characterization of discrepancy in terms of determinants.
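    The high-level strategy described above can be sketched in a few lines of NumPy: perturb each Fourier coefficient of the private input with independent Laplace noise, then apply the convolution theorem. The uniform noise scale below is a placeholder; the paper calibrates per-coefficient noise via convex programming duality, which is not reproduced here, and no privacy accounting is performed in this sketch.

```python
# Noisy frequency-domain convolution of a private input x with a public filter h.
import numpy as np

def private_convolution(x, h, noise_scale):
    """Approximate circular convolution of private x with public h."""
    n = len(x)
    x_hat = np.fft.fft(x)
    # independent Laplace noise on the real and imaginary part of every coefficient
    noise = (np.random.laplace(scale=noise_scale, size=n)
             + 1j * np.random.laplace(scale=noise_scale, size=n))
    y_hat = (x_hat + noise) * np.fft.fft(h)    # convolution theorem
    return np.real(np.fft.ifft(y_hat))

rng = np.random.default_rng(0)
x = rng.integers(0, 10, size=16).astype(float)   # e.g. a histogram of sensitive counts
h = np.ones(16) / 4.0                            # a public moving-average filter
exact = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))
noisy = private_convolution(x, h, noise_scale=1.0)
print(np.max(np.abs(noisy - exact)))             # error introduced by the noise
```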

    Tackling Data Bias in Painting Classification with Style Transfer

    It is difficult to train classifiers on painting collections due to model bias from domain gaps and data bias from the uneven distribution of artistic styles. Previous techniques like data distillation, traditional data augmentation and style transfer improve classifier training using task-specific training datasets or domain adaptation. We propose a system to handle data bias in small paintings datasets like the Kaokore dataset while simultaneously accounting for domain adaptation in fine-tuning a model trained on real-world images. Our system consists of two stages: style transfer and classification. In the style transfer stage, we generate the stylized training samples per class with uniformly sampled content and style images and train the style transformation network per domain. In the classification stage, we can interpret the effectiveness of the style and content layers at the attention layers when training on the original training dataset and the stylized images. We can trade off model performance and convergence by dynamically varying the proportion of augmented samples in the majority and minority classes. We achieve comparable results to the SOTA with fewer training epochs and a classifier with fewer training parameters.
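    As a sketch of the class-dependent mixing of stylized samples described above, the snippet below augments minority classes with a larger proportion of stylized images than majority classes. The class labels, file names and mixing ratios are illustrative assumptions, not the paper's schedule.

```python
# Mix stylized samples into the training set, with more augmentation for minority classes.
import random
from collections import defaultdict

def mix_stylized(original, stylized, minority_ratio=0.8, majority_ratio=0.2):
    """original/stylized: dicts mapping class label -> list of samples."""
    median = sorted(len(v) for v in original.values())[len(original) // 2]
    mixed = defaultdict(list)
    for label, samples in original.items():
        ratio = minority_ratio if len(samples) < median else majority_ratio
        n_aug = int(ratio * len(samples))
        mixed[label] = samples + random.sample(stylized[label],
                                               min(n_aug, len(stylized[label])))
    return mixed

# toy usage with string placeholders for image paths
original = {"noble": [f"n{i}.png" for i in range(100)],
            "commoner": [f"c{i}.png" for i in range(20)]}
stylized = {k: [p.replace(".png", "_styled.png") for p in v] for k, v in original.items()}
train_set = mix_stylized(original, stylized)
print({k: len(v) for k, v in train_set.items()})   # {'noble': 120, 'commoner': 36}
```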