
    A Perspective on the Potential Role of Neuroscience in the Court

    This Article presents some lessons learned while offering expert testimony on neuroscience in courts. As a biomedical investigator participating in cutting-edge research with clinical and mentoring responsibilities, Dr. Ruben Gur, Ph.D., became involved in court proceedings rather late in his career. Based on the success of Dr. Gur and other research investigators of his generation, who developed and validated advanced methods for linking brain structure and function to behavior, neuroscience findings and procedures became relevant to multiple legal issues, especially those related to culpability and mitigation. Dr. Gur found himself being asked to opine in cases where he could contribute expertise on neuropsychological testing and structural and functional neuroimaging. Most of his medical-legal consulting experience has been in capital cases, both because of the elevated legal requirement for thorough mitigation investigations in such cases and because of his limited availability due to his busy schedule as a full-time professor and research investigator who runs the Brain and Behavior Lab at the University of Pennsylvania (“Penn”). Courtroom testimony, however, has not been a topic of his research, and so he has not published extensively on these issues in the peer-reviewed literature.

    Scanning Electrochemical Microscopy of DNA Monolayers Modified with Nile Blue

    Scanning electrochemical microscopy (SECM) is used to probe long-range charge transport (CT) through DNA monolayers containing the redox-active Nile Blue (NB) intercalator covalently affixed at a specific location in the DNA film. At substrate potentials negative of the formal potential of covalently attached NB, the electrocatalytic reduction of Fe(CN)₆³⁻ generated at the SECM tip is observed only when NB is located at the DNA/solution interface; for DNA films containing NB in close proximity to the DNA/electrode interface, the electrocatalytic effect is absent. This behavior is consistent with both rapid DNA-mediated CT between the NB intercalator and the gold electrode and a rate-limiting electron transfer between NB and the solution-phase Fe(CN)₆³⁻. The DNA-mediated nature of the catalytic cycle is confirmed through sequence-specific and localized detection of attomoles of TATA-binding protein, a transcription factor that severely distorts DNA upon binding. Importantly, the strategy outlined here is general and allows for the local investigation of the surface characteristics of DNA monolayers both in the absence and in the presence of DNA-binding proteins. These experiments highlight the utility of DNA-modified electrodes as versatile platforms for SECM detection schemes that take advantage of CT mediated by the DNA base-pair stack.

    Parameterized Study of the Test Cover Problem

    We carry out a systematic study of a natural covering problem, used for identification across several areas, in the realm of parameterized complexity. In the Test Cover problem we are given a set $[n]=\{1,\ldots,n\}$ of items together with a collection $\mathcal{T}$ of distinct subsets of these items called tests. We assume that $\mathcal{T}$ is a test cover, i.e., for each pair of items there is a test in $\mathcal{T}$ containing exactly one of these items. The objective is to find a minimum-size subcollection of $\mathcal{T}$ which is still a test cover. The generic parameterized version of Test Cover is denoted by $p(k,n,|\mathcal{T}|)$-Test Cover. Here, we are given $([n],\mathcal{T})$ and a positive integer parameter $k$ as input, and the objective is to decide whether there is a test cover of size at most $p(k,n,|\mathcal{T}|)$. We study four parameterizations of Test Cover and obtain the following: (a) $k$-Test Cover and $(n-k)$-Test Cover are fixed-parameter tractable (FPT). (b) $(|\mathcal{T}|-k)$-Test Cover and $(\log n+k)$-Test Cover are W[1]-hard. Thus, it is unlikely that these problems are FPT.
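
    As a concrete illustration of the definitions above (not of the paper's FPT algorithms), the following Python sketch checks whether a collection of tests is a test cover and runs a naive greedy heuristic that repeatedly picks the test separating the most still-unseparated pairs; the instance at the bottom is made up for the example.

        from itertools import combinations

        def separates(test, pair):
            # A test separates a pair if it contains exactly one of the two items.
            a, b = tuple(pair)
            return (a in test) != (b in test)

        def is_test_cover(n, tests):
            # True if every pair of items in 1..n is separated by some test.
            return all(any(separates(t, (a, b)) for t in tests)
                       for a, b in combinations(range(1, n + 1), 2))

        def greedy_test_cover(n, tests):
            # Naive greedy heuristic (illustrative only): keep picking the test
            # that separates the largest number of still-unseparated pairs.
            unseparated = set(combinations(range(1, n + 1), 2))
            chosen = []
            while unseparated:
                best = max(tests, key=lambda t: sum(separates(t, p) for p in unseparated))
                if not any(separates(best, p) for p in unseparated):
                    raise ValueError("the given collection is not a test cover")
                chosen.append(best)
                unseparated = {p for p in unseparated if not separates(best, p)}
            return chosen

        # Hypothetical instance on items {1,2,3,4}.
        tests = [{1, 2}, {1, 3}, {1, 4}, {2, 3}]
        print(is_test_cover(4, tests))      # True
        print(greedy_test_cover(4, tests))  # a smaller subcollection that still separates all pairs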

    Helly-Type Theorems in Property Testing

    Helly's theorem is a fundamental result in discrete geometry, describing the ways in which convex sets intersect with each other. If $S$ is a set of $n$ points in $\mathbb{R}^d$, we say that $S$ is $(k,G)$-clusterable if it can be partitioned into $k$ clusters (subsets) such that each cluster can be contained in a translated copy of a geometric object $G$. In this paper, as an application of Helly's theorem, by taking a constant-size sample from $S$, we present a testing algorithm for $(k,G)$-clustering, i.e., to distinguish between two cases: when $S$ is $(k,G)$-clusterable, and when it is $\epsilon$-far from being $(k,G)$-clusterable. A set $S$ is $\epsilon$-far $(0<\epsilon\leq 1)$ from being $(k,G)$-clusterable if at least $\epsilon n$ points need to be removed from $S$ to make it $(k,G)$-clusterable. We solve this problem for $k=1$ and when $G$ is a symmetric convex object. For $k>1$, we solve a weaker version of this problem. Finally, as an application of our testing result, in clustering with outliers, we show that one can find the approximate clusters by querying a constant-size sample, with high probability.
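
    The paper's tester is Helly-based; as a rough illustration of the property-testing setup only, here is a toy Python sketch for k = 1 with G taken to be an axis-aligned unit square (a symmetric convex object): sample a constant number of points and accept iff the sample fits in some translate of G. Clusterable inputs are always accepted; how large the sample must be to reject ε-far inputs with constant probability is exactly what a Helly-type analysis pins down. All names and parameters below are illustrative.

        import random

        def fits_in_unit_square(points):
            # Can these 2-D points be covered by a translated axis-aligned unit square?
            xs = [p[0] for p in points]
            ys = [p[1] for p in points]
            return max(xs) - min(xs) <= 1.0 and max(ys) - min(ys) <= 1.0

        def test_one_cluster(points, sample_size=200, seed=0):
            # Toy tester for (1, G)-clusterability with G a unit square:
            # accept iff a constant-size random sample itself fits in a translate of G.
            rng = random.Random(seed)
            sample = rng.sample(points, min(sample_size, len(points)))
            return fits_in_unit_square(sample)

        clusterable = [(random.random(), random.random()) for _ in range(10_000)]
        far = clusterable + [(5.0, 5.0)] * 2_000   # roughly 1/6 of the points must be removed
        print(test_one_cluster(clusterable))  # True
        print(test_one_cluster(far))          # False with high probability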

    Balanced Allocations and Double Hashing

    Double hashing has recently found more common usage in schemes that use multiple hash functions. In double hashing, for an item $x$, one generates two hash values $f(x)$ and $g(x)$, and then uses the combinations $(f(x)+kg(x)) \bmod n$ for $k=0,1,2,\ldots$ to generate multiple hash values from the initial two. We first perform an empirical study showing that, surprisingly, the performance difference between double hashing and fully random hashing appears negligible in the standard balanced allocation paradigm, where each item is placed in the least loaded of $d$ choices, as well as in several related variants. We then provide theoretical results that explain the behavior of double hashing in this context. Comment: Further updated, small improvements/typos fixed.
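
    A small simulation in the spirit of that empirical study is sketched below (the parameters and structure are illustrative, not the paper's): m items are thrown into n bins, each item going to the least loaded of d choices, with the choices produced either fully at random or by double hashing.

        import random

        def max_load(n, m, d, rng, double_hashing):
            # Balanced allocation: place each of m items into the least loaded of d bins.
            bins = [0] * n
            for _ in range(m):
                if double_hashing:
                    f, g = rng.randrange(n), rng.randrange(1, n)   # g plays the role of a nonzero step
                    choices = [(f + k * g) % n for k in range(d)]  # f(x), f(x)+g(x), f(x)+2g(x), ...
                else:
                    choices = [rng.randrange(n) for _ in range(d)]  # d fully random choices
                bins[min(choices, key=lambda b: bins[b])] += 1
            return max(bins)

        rng = random.Random(1)
        n = m = 100_000
        for d in (2, 3, 4):
            print(d,
                  max_load(n, m, d, rng, double_hashing=False),
                  max_load(n, m, d, rng, double_hashing=True))

    Since n here is not prime, some of the d double-hashed choices may coincide; for this illustration that is harmless, and in line with the paper's empirical finding the two columns typically come out essentially identical.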

    Online Admission Control and Embedding of Service Chains

    The virtualization and softwarization of modern computer networks enable the definition and fast deployment of novel network services called service chains: sequences of virtualized network functions (e.g., firewalls, caches, traffic optimizers) through which traffic is routed between source and destination. This paper addresses the problem of admitting and embedding a maximum number of service chains, i.e., a maximum number of source-destination pairs which are routed via a sequence of to-be-allocated, capacitated network functions. We consider an online variant of this maximum Service Chain Embedding Problem, OSCEP for short, where requests arrive over time in a worst-case manner. Our main contribution is a deterministic O(log L)-competitive online algorithm, under the assumption that capacities are at least logarithmic in L. We show that this is asymptotically optimal within the class of deterministic and randomized online algorithms. We also explore lower bounds for offline approximation algorithms, and prove that the offline problem is APX-hard for unit capacities and small L > 2, and even Poly-APX-hard in general, when there is no bound on L. These approximation lower bounds may be of independent interest, as they also extend to other problems such as Virtual Circuit Routing. Finally, we present an exact algorithm based on 0-1 programming, implying that the general offline SCEP is in NP, and by the above hardness results it is NP-complete for constant L. Comment: early version of SIROCCO 2015 paper.
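
    To make the 0-1 programming remark concrete, here is a minimal sketch that is my own simplification rather than the paper's formulation: it assumes each request arrives with a precomputed set of candidate embeddings (tuples of capacitated network-function nodes it would occupy) and maximizes the number of admitted requests using the PuLP modeling library. The requests, node names, and capacities are hypothetical.

        import pulp

        def solve_offline_scep(requests, capacity):
            # requests: request id -> list of candidate embeddings (tuples of NF nodes).
            # capacity: NF node -> integer capacity.
            prob = pulp.LpProblem("offline_scep", pulp.LpMaximize)
            x = {(r, i): pulp.LpVariable(f"x_{r}_{i}", cat="Binary")
                 for r, embs in requests.items() for i in range(len(embs))}

            # Admit each request via at most one of its candidate embeddings.
            for r, embs in requests.items():
                prob += pulp.lpSum(x[r, i] for i in range(len(embs))) <= 1

            # Respect the capacity of every network function.
            for v, cap in capacity.items():
                prob += pulp.lpSum(x[r, i]
                                   for r, embs in requests.items()
                                   for i, emb in enumerate(embs) if v in emb) <= cap

            # Objective: maximize the number of admitted (embedded) requests.
            prob += pulp.lpSum(x.values())
            prob.solve(pulp.PULP_CBC_CMD(msg=False))
            return [r for r, embs in requests.items()
                    for i in range(len(embs)) if x[r, i].value() == 1]

        # Hypothetical toy instance: two NFs ("fw", "cache") with small capacities.
        requests = {"r1": [("fw", "cache")], "r2": [("fw",), ("cache",)], "r3": [("fw", "cache")]}
        capacity = {"fw": 1, "cache": 2}
        print(solve_offline_scep(requests, capacity))   # e.g. ['r1', 'r2']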

    Coupled Two-Way Clustering Analysis of Gene Microarray Data

    We present a novel coupled two-way clustering approach to gene microarray data analysis. The main idea is to identify subsets of the genes and samples such that, when one of these is used to cluster the other, stable and significant partitions emerge. The search for such subsets is a computationally complex task: we present an algorithm, based on iterative clustering, which performs such a search. This analysis is especially suitable for gene microarray data, where the contributions of a variety of biological mechanisms to the gene expression levels are entangled in a large body of experimental data. The method was applied to two gene microarray data sets, on colon cancer and leukemia. By identifying relevant subsets of the data and focusing on them, we were able to discover partitions and correlations that were masked and hidden when the full dataset was used in the analysis. Some of these partitions have a clear biological interpretation; others can serve to identify possible directions for future research.
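
    A rough Python sketch of the iterative idea follows; it uses k-means as a stand-in for the paper's clustering engine and a crude size threshold in place of the paper's stability and significance criteria, so it is only meant to convey the coupled two-way iteration, not to reproduce the method. The synthetic expression matrix is made up for the example.

        import numpy as np
        from sklearn.cluster import KMeans

        def stable_clusters(matrix, n_clusters=2, min_size=5, seed=0):
            # Cluster the rows of `matrix`; return the index set of each
            # sufficiently large cluster (a crude proxy for "stable and significant").
            labels = KMeans(n_clusters=n_clusters, n_init=10,
                            random_state=seed).fit_predict(matrix)
            return [np.where(labels == c)[0] for c in range(n_clusters)
                    if np.sum(labels == c) >= min_size]

        def coupled_two_way_clustering(expr, n_iter=2):
            # expr: genes x samples matrix. Alternately cluster genes using the
            # sample subsets found so far, and samples using the gene subsets,
            # collecting the resulting (gene subset, sample subset) pairs.
            gene_sets = [np.arange(expr.shape[0])]
            sample_sets = [np.arange(expr.shape[1])]
            pairs = []
            for _ in range(n_iter):
                new_gene_sets = [g for s in sample_sets for g in stable_clusters(expr[:, s])]
                new_sample_sets = [s for g in gene_sets for s in stable_clusters(expr[g, :].T)]
                gene_sets, sample_sets = new_gene_sets, new_sample_sets
                pairs.extend((g, s) for g in gene_sets for s in sample_sets)
            return pairs

        rng = np.random.default_rng(0)
        expr = rng.normal(size=(200, 40))
        expr[:50, :10] += 3.0   # an embedded gene/sample submatrix signal
        print(len(coupled_two_way_clustering(expr)))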

    How does an interacting many-body system tunnel through a potential barrier to open space?

    The tunneling process in a many-body system is a phenomenon which lies at the very heart of quantum mechanics. It appears in nature in the form of alpha-decay, fusion and fission in nuclear physics, and photoassociation and photodissociation in biology and chemistry. A detailed theoretical description of the decay process in these systems is a very cumbersome problem, either because of very complicated or even unknown interparticle interactions or due to the large number of constituent particles. In this work, we theoretically study the phenomenon of quantum many-body tunneling in a more transparent and controllable physical system: an ultracold atomic gas. We analyze a full, numerically exact many-body solution of the Schrödinger equation of a one-dimensional system with repulsive interactions tunneling to open space. We show how the emitted particles dissociate or fragment from the trapped and coherent source of bosons: the overall many-particle decay process is a quantum interference of single-particle tunneling processes emerging from sources with different particle numbers, taking place simultaneously. The close relation to atom lasers and ionization processes allows us to unveil the great relevance of many-body correlations between the emitted and trapped fractions of the wavefunction in the respective processes. Comment: 18 pages, 4 figures (7 pages, 2 figures supplementary information).

    Discriminants, symmetrized graph monomials, and sums of squares

    Motivated by the needs of the invariant theory of binary forms, J. J. Sylvester constructed in 1878, for each graph with possible multiple edges but without loops, its symmetrized graph monomial, which is a polynomial in the vertex labels of the original graph. In the 20th century this construction was studied by several authors. We pose the question of for which graphs this polynomial is non-negative, respectively a sum of squares. This problem is motivated by a recent conjecture of F. Sottile and E. Mukhin on the discriminant of the derivative of a univariate polynomial, and by an interesting example of P. and A. Lax of a graph with 4 edges whose symmetrized graph monomial is non-negative but not a sum of squares. We present detailed information about symmetrized graph monomials for graphs with four and six edges, obtained by computer calculations.
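
    For readers who want to experiment, the following SymPy sketch builds a symmetrized graph monomial under the standard construction (sum, over all relabelings of the vertices, of the product of (x_i - x_j) over the edges) and probes it numerically at random points. The 4-cycle used as the example is an arbitrary four-edge graph, not the Lax example mentioned above.

        import random
        from itertools import permutations
        import sympy as sp

        def symmetrized_graph_monomial(edges, n):
            # Sum over all permutations of the n vertex labels of the
            # product of (x_i - x_j) taken over the (possibly multiple) edges.
            x = sp.symbols(f"x0:{n}")
            total = sp.Integer(0)
            for perm in permutations(range(n)):
                total += sp.Mul(*[(x[perm[i]] - x[perm[j]]) for i, j in edges])
            return sp.expand(total)

        # Example: the 4-cycle on vertices 0..3 (a graph with four edges).
        edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
        p = symmetrized_graph_monomial(edges, 4)
        print(p)

        # Numerical probe for negative values (suggestive only, not a proof).
        xs = sorted(p.free_symbols, key=str)
        samples = [p.subs(zip(xs, [random.uniform(-1, 1) for _ in xs])) for _ in range(200)]
        print(min(samples))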