1,956 research outputs found

    Adaptive edge-based prediction for lossless image compression

    Many lossless image compression methods have been suggested, with established results that are hard to surpass. However, there are aspects that can be considered to improve performance further. This research focuses on the two-phase prediction-encoding method, studying each phase separately and suggesting new techniques. In the prediction module, the proposed Edge-Based Predictor (EBP) and Least-Squares Edge-Based Predictor (LS-EBP) emphasize image edges and make predictions accordingly. EBP is a gradient-based nonlinear adaptive predictor that switches between prediction rules based on a few threshold parameters, determined automatically by a pre-analysis procedure that makes a first pass over the image. LS-EBP also uses these parameters but optimizes the prediction at each edge location assigned by the pre-analysis, thus applying the least-squares approach only at edge points. For the encoding module, a novel method inspired by the Burrows-Wheeler Transform (BWT) is suggested, which performs better than applying the BWT directly to the images. We also present a context-based adaptive error modeling and encoding scheme. When coupled with the above-mentioned prediction schemes, the result is the best-known compression performance among compression schemes with the same time and space complexity.
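    The abstract does not spell out the EBP prediction rules, so the following is only a minimal sketch of the general idea of a gradient-based predictor that switches between prediction rules by comparing local gradient estimates to thresholds (in the style of well-known predictors such as GAP). The neighbour set, gradient estimates and thresholds T1/T2 below are illustrative assumptions, not the EBP/LS-EBP of this work, whose thresholds come from a pre-analysis pass.

    # Hypothetical gradient-based switching predictor for one pixel (illustration only).
    # W, N, NW, NE are the causal neighbours of the current pixel; T1 and T2 are
    # assumed thresholds standing in for the pre-analysis-determined parameters.
    def predict_pixel(W, N, NW, NE, T1=8, T2=32):
        d_h = abs(N - NW) + abs(NE - N)     # horizontal intensity changes in the row above
        d_v = abs(W - NW)                   # vertical intensity change in the column to the left
        if d_v - d_h > T2:                  # strong horizontal edge: predict along the row
            return W
        if d_h - d_v > T2:                  # strong vertical edge: predict along the column
            return N
        pred = (W + N) / 2 + (NE - NW) / 4  # smooth region: planar estimate
        if d_v - d_h > T1:                  # weak horizontal edge: lean towards W
            pred = (pred + W) / 2
        elif d_h - d_v > T1:                # weak vertical edge: lean towards N
            pred = (pred + N) / 2
        return pred

    A least-squares variant in the spirit of LS-EBP would, at pixels flagged as edges by the pre-analysis, instead fit predictor coefficients to a local causal window by least squares rather than using fixed rules.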

    Feature learning in neural networks and kernel machines that recursively learn features

    Neural networks have achieved impressive results on many technological and scientific tasks. Yet, their empirical successes have outpaced our fundamental understanding of their structure and function. By identifying mechanisms driving the successes of neural networks, we can provide principled approaches for improving neural network performance and develop simple and effective alternatives. In this work, we isolate the key mechanism driving feature learning in fully connected neural networks by connecting neural feature learning to the average gradient outer product. We subsequently leverage this mechanism to design Recursive Feature Machines (RFMs), which are kernel machines that learn features. We show that RFMs (1) accurately capture features learned by deep fully connected neural networks, (2) close the gap between kernel machines and fully connected networks, and (3) surpass a broad spectrum of models, including neural networks, on tabular data. Furthermore, we demonstrate that RFMs shed light on recently observed deep learning phenomena such as grokking, lottery tickets, simplicity biases, and spurious features. We provide a Python implementation to make our method broadly accessible (https://github.com/aradha/recursive_feature_machines).
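    For context, the average gradient outer product (AGOP) of a predictor f over inputs x_1, ..., x_n is M = (1/n) * sum_i grad f(x_i) grad f(x_i)^T, and an RFM-style method alternates between fitting a kernel predictor and re-weighting input directions with this matrix. The snippet below is only a rough numpy sketch of the AGOP computation using finite-difference gradients; it is not the authors' implementation, which is available at the GitHub link above.

    # Rough sketch: estimate the average gradient outer product (AGOP) of a
    # scalar-valued predictor f with central finite differences (assumption:
    # f takes a 1-D numpy array and returns a float).
    import numpy as np

    def average_gradient_outer_product(f, X, eps=1e-4):
        n, d = X.shape
        M = np.zeros((d, d))
        for x in X:
            grad = np.zeros(d)
            for j in range(d):                      # finite-difference gradient of f at x
                e = np.zeros(d)
                e[j] = eps
                grad[j] = (f(x + e) - f(x - e)) / (2 * eps)
            M += np.outer(grad, grad)               # accumulate grad * grad^T
        return M / n

    # Example: for f(x) = (3*x[0] - x[1])**2 the AGOP concentrates on the single
    # direction (3, -1) that the function actually depends on.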

    Magnetic behavior of EuCu2As2: Delicate balance between antiferromagnetic and ferromagnetic order

    The Eu-based compound EuCu2As2, crystallizing in the ThCr2Si2-type tetragonal structure, has been synthesized and its magnetic behavior has been investigated by magnetization (M), heat-capacity (C) and electrical-resistivity (rho) measurements as a function of temperature (T) and magnetic field (H), as well as by 151Eu Moessbauer measurements. The results reveal that Eu is divalent, ordering antiferromagnetically below 15 K in the absence of a magnetic field, apparently with the formation of magnetic Brillouin-zone-boundary gaps. A fascinating observation is made in a narrow temperature range before antiferromagnetism sets in: there is a remarkable upturn just below 20 K in the plot of magnetic susceptibility versus T, even at low fields, as though the compound actually tends to order ferromagnetically. There are corresponding anomalies in the magnetocaloric-effect data as well. In addition, application of a small magnetic field (around 1 kOe at 1.8 K) in the antiferromagnetic state causes a spin-reorientation effect. These results suggest that there is a close balance between antiferromagnetism and ferromagnetism in this compound. Comment: Phys. Rev. B, in press

    Refinement of the Equilibrium of Public Goods Games over Networks: Efficiency and Effort of Specialized Equilibria

    Recently, Bramoulle and Kranton presented a model for the provision of public goods over a network and showed the existence of a class of Nash equilibria called specialized equilibria, wherein some agents exert maximum effort while other agents free ride. We examine the efficiency, effort and cost of specialized equilibria in comparison to other equilibria. Our main results show that the welfare of a particular specialized equilibrium approaches the maximum welfare amongst all equilibria as the concavity of the benefit function tends to unity. For forest networks, a similar result also holds as the concavity approaches zero. Moreover, without any such concavity conditions, there exists for any network a specialized equilibrium that requires the maximum weighted effort amongst all equilibria. When the network is a forest, a specialized equilibrium also incurs the minimum total cost amongst all equilibria. For well-covered forest networks we show that all welfare-maximizing equilibria are specialized and all equilibria incur the same total cost. Thus we argue that specialized equilibria may be considered as a refinement of the equilibrium of the public goods game. We show several results on the structure and efficiency of equilibria that highlight the role of dependants in the network.
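    As background for the sketch below: in the Bramoulle-Kranton model each agent chooses a non-negative effort, benefits from the sum of its own and its neighbours' efforts, and pays a linear cost, and a specialized equilibrium puts the full (specialized) effort level on a maximal independent set of the network while all other agents free ride. The code is only an illustration of that correspondence, with the specialized effort level normalised to 1 (an assumption) and using networkx; it is not the refinement or welfare analysis of this paper.

    # Illustration only: build a specialized effort profile on a maximal
    # independent set and check the best-response condition
    # x_i = max(0, 1 - sum of neighbours' efforts), with the specialized
    # effort level normalised to 1 by assumption.
    import networkx as nx

    def specialized_profile(G, seed=0):
        specialists = set(nx.maximal_independent_set(G, seed=seed))
        return {v: 1.0 if v in specialists else 0.0 for v in G.nodes}

    def is_equilibrium(G, x, tol=1e-9):
        for v in G.nodes:
            neighbour_supply = sum(x[u] for u in G.neighbors(v))
            if abs(x[v] - max(0.0, 1.0 - neighbour_supply)) > tol:
                return False                # some agent is not best responding
        return True

    # Example: on the path 0-1-2-3, specialists {0, 2} or {1, 3} give an
    # equilibrium in which the remaining agents free ride.
    G = nx.path_graph(4)
    assert is_equilibrium(G, specialized_profile(G))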