531 research outputs found

    Orbitofrontal cortex and learning predictions of state transitions

    No full text

    Representational structure or task structure? Bias in neural representational similarity analysis and a Bayesian method for reducing bias

    No full text
    The activity of neural populations in the brains of humans and animals can exhibit vastly different spatial patterns when faced with different tasks or environmental stimuli. The degrees of similarity between these neural activity patterns in response to different events are used to characterize the representational structure of cognitive states in a neural population. The dominant methods of investigating this similarity structure first estimate neural activity patterns from noisy neural imaging data using linear regression, and then examine the similarity between the estimated patterns. Here, we show that this approach introduces spurious bias structure in the resulting similarity matrix, in particular when applied to fMRI data. This problem is especially severe when the signal-to-noise ratio is low and in cases where experimental conditions cannot be fully randomized in a task. We propose Bayesian Representational Similarity Analysis (BRSA), an alternative method for computing representational similarity, in which we treat the covariance structure of neural activity patterns as a hyper-parameter in a generative model of the neural data. By marginalizing over the unknown activity patterns, we can directly estimate this covariance structure from imaging data. This method offers significant reductions in bias and allows estimation of neural representational similarity with previously unattained levels of precision at low signal-to-noise ratio, without losing the possibility of deriving an interpretable distance measure from the estimated similarity. The method is closely related to Pattern Component Model (PCM), but instead of modeling the estimated neural patterns as in PCM, BRSA models the imaging data directly and is suited for analyzing data in which the order of task conditions is not fully counterbalanced. The probabilistic framework allows for jointly analyzing data from a group of participants. The method can also simultaneously estimate a signal-to-noise ratio map that shows where the learned representational structure is supported more strongly. Both this map and the learned covariance matrix can be used as a structured prior for maximum a posteriori estimation of neural activity patterns, which can be further used for fMRI decoding. Our method therefore paves the way towards a more unified and principled analysis of neural representations underlying fMRI signals. We make our tool freely available in Brain Imaging Analysis Kit (BrainIAK).
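
    The core of the marginalization step can be sketched compactly: treating the condition-by-condition covariance U of the activity patterns as a hyper-parameter, each voxel's time course becomes a zero-mean Gaussian with covariance X U X^T + sigma^2 I once the unknown patterns are integrated out. Below is a minimal Python sketch of that idea, assuming i.i.d. Gaussian noise and a shared noise level across voxels; the actual BRSA model in BrainIAK additionally handles autocorrelated noise, voxel-wise SNR, and nuisance regressors, so this is an illustration rather than the implementation.

        import numpy as np
        from scipy.stats import multivariate_normal

        def marginal_log_likelihood(Y, X, U, sigma2):
            """Log-likelihood of data Y (time x voxels) under Y = X @ B + noise,
            where each voxel's pattern B[:, v] ~ N(0, U) has been marginalized out,
            so every voxel time course is N(0, X @ U @ X.T + sigma2 * I)."""
            n_t = Y.shape[0]
            cov = X @ U @ X.T + sigma2 * np.eye(n_t)
            return multivariate_normal(mean=np.zeros(n_t), cov=cov).logpdf(Y.T).sum()

    Fitting then amounts to maximizing this quantity over U (and the noise variance); the representational similarity matrix is the correlation form of the fitted U.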

    Online k-taxi via double coverage and time-reverse primal-dual

    Get PDF
    We consider the online k-taxi problem, a generalization of the k-server problem, in which k servers are located in a metric space. A sequence of requests is revealed one by one, where each request is a pair of two points, representing the start and destination of a travel request by a passenger. The goal is to serve all requests while minimizing the distance traveled without carrying a passenger. We show that the classic Double Coverage algorithm has competitive ratio 2^k - 1 on HSTs, matching a recent lower bound for deterministic algorithms. For bounded-depth HSTs, the competitive ratio turns out to be much better and we obtain tight bounds. When the depth is d ≪ k, these bounds are approximately k^d/d!. By standard embedding results, we obtain a randomized algorithm for arbitrary n-point metrics with (polynomial) competitive ratio O(k^c Δ^{1/c} log_Δ n), where Δ is the aspect ratio and c ≄ 1 is an arbitrary positive integer constant. The only previously known bound was O(2^k log n). For general (weighted) tree metrics, we prove the competitive ratio of Double Coverage to be Θ(k^d) for any fixed depth d, but unlike on HSTs it is not bounded by 2^k - 1. We obtain our results by a dual fitting analysis where the dual solution is constructed step-by-step backwards in time. Unlike the forward-time approach typical of online primal-dual analyses, this allows us to combine information from the past and the future when assigning dual variables. We believe this method can be useful also for other problems. Using this technique, we also provide a dual fitting proof of the k-competitiveness of Double Coverage for the k-server problem on trees
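
    For intuition, the Double Coverage rule itself is simple. The sketch below shows the classic rule for the k-server special case on the real line, purely as an illustration; the paper's analysis concerns its behavior for the k-taxi generalization on HSTs and weighted trees, where servers move toward the request along tree paths. Variable names are illustrative.

        def double_coverage_line(servers, request):
            """One step of Double Coverage for k-server on the real line.
            servers: current positions; request: point to serve.
            Moves the two servers adjacent to the request toward it at equal speed,
            or only the nearest server if the request lies outside all servers."""
            servers = sorted(servers)
            left = [s for s in servers if s <= request]
            right = [s for s in servers if s > request]
            if not left:
                right[0] = request          # request lies left of every server
            elif not right:
                left[-1] = request          # request lies right of every server
            else:
                d = min(request - left[-1], right[0] - request)
                left[-1] += d               # both neighbors move distance d;
                right[0] -= d               # one of them reaches the request
            return left + right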

    Metrical service systems with transformations

    Get PDF
    We consider a generalization of the fundamental online metrical service systems (MSS) problem where the feasible region can be transformed between requests. In this problem, which we call T-MSS, an algorithm maintains a point in a metric space and has to serve a sequence of requests. Each request is a map (transformation) f_t : A_t → B_t between subsets A_t and B_t of the metric space. To serve it, the algorithm has to go to a point a_t ∈ A_t, paying the distance from its previous position. Then, the transformation is applied, modifying the algorithm’s state to f_t(a_t). Such transformations can model, e.g., changes to the environment that are outside of an algorithm’s control, and we therefore do not charge any additional cost to the algorithm when the transformation is applied. The transformations also allow us to model the requests occurring in the k-taxi problem. We show that for α-Lipschitz transformations, the competitive ratio is Θ(α)^{n-2} on n-point metrics. Here, the upper bound is achieved by a deterministic algorithm and the lower bound holds even for randomized algorithms. For the k-taxi problem, we prove a competitive ratio of Õ((n log k)^2). For chasing convex bodies, we show that even with contracting transformations no competitive algorithm exists. The problem T-MSS has a striking connection to the following deep mathematical question: Given a finite metric space M, what is the required cardinality of an extension M′ ⊇ M where each partial isometry of M extends to an automorphism? We give partial answers for special cases
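
    A minimal sketch of the cost model just described (not of the competitive algorithm from the paper): the algorithm holds a point, each request exposes a transformation f_t on a feasible set A_t, the algorithm pays the distance to its chosen a_t in A_t, and its state is then replaced by f_t(a_t) free of charge. The greedy rule below is only to make the bookkeeping concrete; all names are illustrative.

        def serve_tmss_greedy(start, requests, dist):
            """Serve a T-MSS request sequence greedily.
            requests: list of dicts mapping each a in A_t to f_t(a);
            dist(x, y): the metric. Returns the total movement cost paid."""
            pos, cost = start, 0.0
            for f in requests:
                a = min(f, key=lambda p: dist(pos, p))  # nearest feasible point
                cost += dist(pos, a)                    # pay to enter A_t
                pos = f[a]                              # transformation is free
            return cost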

    Bitter or not? BitterPredict, a tool for predicting taste from chemical structure

    Get PDF
    Bitter taste is an innately aversive taste modality that is considered to protect animals from consuming toxic compounds. Yet, bitterness is not always noxious and some bitter compounds have beneficial effects on health. Hundreds of bitter compounds have been reported (and are accessible via BitterDB, http://bitterdb.agri.huji.ac.il/dbbitter.php), but numerous additional bitter molecules are still unknown. The dramatic chemical diversity of bitterants makes bitterness prediction a difficult task. Here we present a machine learning classifier, BitterPredict, which predicts whether a compound is bitter or not, based on its chemical structure. BitterDB was used as the positive set, and non-bitter molecules were gathered from the literature to create the negative set. The Adaptive Boosting (AdaBoost) machine-learning algorithm, based on decision trees, was applied to molecules represented using physicochemical and ADME/Tox descriptors. BitterPredict correctly classifies over 80% of the compounds in the hold-out test set, and 70-90% of the compounds in three independent external sets and in a sensory test validation, providing a quick and reliable tool for classifying large sets of compounds into bitter and non-bitter groups. BitterPredict suggests that about 40% of random molecules, a large portion (66%) of clinical and experimental drugs, and 77% of natural products are bitter
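
    A minimal sketch of the modeling setup described above, using scikit-learn's AdaBoost with decision-tree base learners on a precomputed descriptor matrix. The random data stands in for the real physicochemical and ADME/Tox descriptors; nothing in this snippet reproduces the authors' actual pipeline.

        import numpy as np
        from sklearn.ensemble import AdaBoostClassifier
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.model_selection import train_test_split

        # X: molecules x descriptors, y: 1 = bitter, 0 = non-bitter (placeholder data).
        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 50))
        y = rng.integers(0, 2, size=1000)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
        # Note: scikit-learn versions before 1.2 name the first argument base_estimator.
        clf = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=3),
                                 n_estimators=200, random_state=0)
        clf.fit(X_tr, y_tr)
        print("hold-out accuracy:", clf.score(X_te, y_te))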

    Low-complexity weak pseudorandom functions in AC0[MOD2]

    Get PDF
    A weak pseudorandom function (WPRF) is a keyed function f_k : {0,1}^n → {0,1} such that, for a random key k, a collection of samples (x, f_k(x)), for uniformly random inputs x, cannot be efficiently distinguished from totally random input-output pairs (x, y). We study WPRFs in AC0[MOD2], the class of functions computable by AC0 circuits with parity gates, making

    Oblivious Transfer with constant computational overhead

    Get PDF
    The computational overhead of a cryptographic task is the asymptotic ratio between the computational cost of securely realizing the task and that of realizing the task with no security at all. Ishai, Kushilevitz, Ostrovsky, and Sahai (STOC 2008) showed that secure two-party computation of Boolean circuits can be realized with constant computational overhead, independent of the desired level of security, assuming the existence of an oblivious transfer (OT) protocol and a local pseudorandom generator (PRG). However, this only applies to the case of semi-honest parties. A central open question in the area is the possibility of a similar result for malicious parties. This question is open even for the simpler task of securely realizing many instances of a constant-size function, such as OT of bits. We settle the question in the affirmative for the case of OT, assuming: (1) a standard OT protocol, (2) a slightly stronger “correlation-robust” variant of a local PRG, and (3) a standard sparse variant of the Learning Parity with Noise (LPN) assumption. An optimized version of our construction requires fewer than 100 bit operations per party per bit-OT. For 128-bit security, this improves over the best previous protocols by 1–2 orders of magnitude. We achieve this by constructing a constant-overhead pseudorandom correlation generator (PCG) for the bit-OT correlation. Such a PCG generates N pseudorandom instances of bit-OT by locally expanding short, correlated seeds. As a result, we get an end-to-end protocol for generating N pseudorandom instances of bit-OT with o(N) communication, O(N) computation, and security that scales sub-exponentially with N. Finally, we present applications of our main result to realizing other secure computation tasks with constant computational overhead. These include protocols for general circuits with a relaxed notion of security against malicious parties, protocols for realizing N instances of natural constant-size functions, and reducing the main open question to a potentially simpler question about fault-tolerant computation
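
    To make the target correlation concrete: one instance of the random bit-OT correlation gives the sender two random bits (m0, m1) and the receiver a random choice bit b together with m_b. The sketch below simply samples N such instances with a trusted dealer; the point of the PCG in the paper is to produce N instances that look like this from short correlated seeds, with o(N) communication. This is an illustration of the correlation itself, not of the construction.

        import secrets

        def deal_bit_ot(n):
            """Trusted-dealer sampling of n instances of the random bit-OT correlation.
            Sender gets (m0, m1) per instance; receiver gets (b, m_b)."""
            sender, receiver = [], []
            for _ in range(n):
                m0, m1 = secrets.randbits(1), secrets.randbits(1)
                b = secrets.randbits(1)
                sender.append((m0, m1))
                receiver.append((b, m1 if b else m0))
            return sender, receiver

        # Consistency check: the receiver's bit equals the sender's message selected by b.
        s, r = deal_bit_ot(8)
        assert all(mb == (m1 if b else m0) for (m0, m1), (b, mb) in zip(s, r))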

    Was There Shortening of the Interval Between Diagnosis and Treatment of Colorectal Cancer in Southern Netherlands Between 2005 and 2008?

    Get PDF
    Background: The Dutch Cancer Society proposed that the interval between diagnosis and start of treatment should be less than 15 working days. The purpose of this study was to determine whether the interval from diagnosis to treatment for patients with colorectal cancer (CRC) shortened between 2005 and 2008 in hospitals in southern Netherlands. Methods: Patients with CRC diagnosed in six hospitals in southern Netherlands during January to December in 2005 (n = 445) and January to July in 2008 (n = 353) were included. The time between diagnosis and start of treatment was assessed, and the proportion of patients treated within the recommended time (<15 working days) was determined. [...] patients aged >70 years and those with stage I disease. Substantial variation was seen among hospitals. Conclusions: Time to treatment for patients with CRC in southern Netherlands did not shorten between 2005 and 2008. The time to treatment should be reduced to meet the advice of the Dutch Cancer Society
