
    Global Flow Impacts Time-to-passage Judgments Based on Local Motion Cues

    We assessed the effect of the coherence of optic flow on time-to-passage judgments in order to investigate the strategies that observers use when local expansion information is reduced or lacking. In the standard display, we presented a cloud of dots whose image expanded in a manner consistent with constant observer motion. The dots themselves, however, did not expand and were thus devoid of object expansion cues. Only the separations between the dots expanded. Subjects had to judge which of two colored target dots, presented at different simulated depths and lateral displacements, would pass them first. Image velocities of the target dots were chosen so as to correlate with time-to-passage only some of the time. When optic flow was mainly incoherent, subjects’ responses were biased and relied on image velocities rather than on global flow analysis. However, the bias induced by misleading image-velocity cues diminished as a function of the coherence of the optic flow. We discuss the results in the context of a global tau mechanism and settle the debate over whether local expansion cues or optic-flow analysis form the basis for time-to-passage estimation.
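    As a schematic note on the global tau idea mentioned above (our paraphrase of the standard account, not a formula taken from this abstract): for an observer translating at constant velocity, the time until a target passes the observer's frontoparallel plane is specified by the optical angle between the target and the focus of expansion,

    \[
    \mathrm{TTP} \;=\; \frac{\tan\varphi}{\frac{d}{dt}\tan\varphi} \;\approx\; \frac{\varphi}{\dot{\varphi}} \quad (\text{small } \varphi),
    \]

    where \(\varphi\) is that angle; a purely local strategy would instead rely on the image velocity of the target itself.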

    Learning Graphs from Linear Measurements: Fundamental Trade-offs and Applications

    We consider a specific graph learning task: reconstructing a symmetric matrix that represents an underlying graph using linear measurements. We present a sparsity characterization for distributions of random graphs (that are allowed to contain high-degree nodes), based on which we study fundamental trade-offs between the number of measurements, the complexity of the graph class, and the probability of error. We first derive a necessary condition on the number of measurements. Then, by considering a three-stage recovery scheme, we give a sufficient condition for recovery. Furthermore, assuming the measurements are Gaussian IID, we prove upper and lower bounds on the (worst-case) sample complexity for both noisy and noiseless recovery. In the special cases of the uniform distribution on trees with n nodes and the Erdős-Rényi (n,p) class, the fundamental trade-offs are tight up to multiplicative factors with noiseless measurements. In addition, for practical applications, we design and implement a polynomial-time (in n) algorithm based on the three-stage recovery scheme. Experiments show that the heuristic algorithm outperforms basis pursuit on star graphs. We apply the heuristic algorithm to learn admittance matrices in electric grids. Simulations for several canonical graph classes and IEEE power system test cases demonstrate the effectiveness and robustness of the proposed algorithm for parameter reconstruction.
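    As a point of reference for the baseline mentioned above, here is a minimal basis-pursuit sketch for recovering a sparse, symmetric (vectorized) adjacency matrix from Gaussian IID linear measurements; the graph size, measurement count, and star-graph example are illustrative assumptions, and this is the comparison baseline, not the paper's three-stage algorithm.

```python
# Minimal basis-pursuit baseline for recovering a sparse (vectorized) adjacency
# matrix from Gaussian IID linear measurements. Sizes and the star-graph example
# are illustrative assumptions; this is not the proposed three-stage algorithm.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

n = 10                      # number of nodes (assumed)
d = n * (n - 1) // 2        # free entries of a symmetric, zero-diagonal matrix
m = 40                      # number of linear measurements (assumed)

# Ground-truth sparse graph: a star centred at node 0.
x_true = np.zeros(d)
edge_index = {}
idx = 0
for i in range(n):
    for j in range(i + 1, n):
        edge_index[(i, j)] = idx
        idx += 1
for j in range(1, n):
    x_true[edge_index[(0, j)]] = 1.0

# Gaussian IID measurement matrix and noiseless measurements y = A x.
A = rng.standard_normal((m, d))
y = A @ x_true

# Basis pursuit: min ||x||_1 s.t. A x = y, written as an LP over x = u - v, u, v >= 0.
c = np.ones(2 * d)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * d), method="highs")
x_hat = res.x[:d] - res.x[d:]

print("recovery error:", np.linalg.norm(x_hat - x_true))
```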

    Two-echelon freight transport optimisation: unifying concepts via a systematic review

    Multi-echelon distribution schemes are one of the most common strategies adopted by transport companies with the aim of reducing costs, but their identification in the scientific literature is not always easy due to a lack of unified terminology. This paper presents the main concepts of two-echelon distribution via a systematic review, specifically a meta-narrative analysis, in order to identify and unify the main concepts, issues and methods that can be helpful for scientists and transport practitioners. The problem of system cost optimisation in two-echelon freight transport systems is defined. Moreover, the main variants are synthetically presented and discussed. Finally, future research directions are proposed. Keywords: location-routing problems, multi-echelon distribution, cross-docking, combinatorial optimisation, systematic review.

    Constructing experimental indicators for Open Access documents

    The ongoing paradigm change in the scholarly publication system ('science is turning to e-science') makes it necessary to construct alternative evaluation criteria and metrics which appropriately take into account the unique characteristics of electronic publications and other research output in digital formats. Today, major parts of scholarly Open Access (OA) publications and the self-archiving area are not well covered in the traditional citation and indexing databases. The growing share and importance of freely accessible research output demand new approaches and metrics for measuring and evaluating these new types of scientific publications. In this paper we propose a simple quantitative method which establishes indicators by measuring the access/download patterns of OA documents and other web entities on a single web server. The experimental indicators (search engine, backlink and direct access indicators) are constructed from standard local web usage data. This new type of web-based indicator is developed to address the specific demand for better study and evaluation of the accessibility, visibility and interlinking of openly accessible documents. We conclude that e-science will need new, stable e-indicators. Comment: 9 pages, 3 figures
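    To make the indicator construction concrete, the following is a minimal sketch of bucketing access events from a local web-server log into the three experimental indicator classes (search engine, backlink, direct access); the log format, referrer heuristics, and host names are illustrative assumptions rather than the paper's exact definitions.

```python
# Sketch: classify web-server access events for OA documents into the three
# experimental indicator classes described above. Field names, the referrer
# rules, and the search-engine list are illustrative assumptions.
from collections import Counter
from urllib.parse import urlparse

SEARCH_ENGINE_HOSTS = ("google.", "bing.", "yahoo.", "duckduckgo.")  # assumed list

def classify(referrer: str, own_host: str = "repository.example.org") -> str:
    """Assign one access event to an indicator class based on its referrer."""
    if not referrer or referrer == "-":
        return "direct"                 # no referrer: direct access indicator
    host = urlparse(referrer).netloc.lower()
    if any(se in host for se in SEARCH_ENGINE_HOSTS):
        return "search_engine"          # arrived via a search engine
    if host and own_host not in host:
        return "backlink"               # followed a link from another site
    return "internal"                   # navigation within the repository itself

def indicators(events):
    """Count events per (document, indicator) for (document_id, referrer) pairs."""
    counts = Counter()
    for doc_id, referrer in events:
        counts[(doc_id, classify(referrer))] += 1
    return counts

# Example with made-up log entries.
log = [
    ("oa-doc-42", "https://www.google.com/search?q=open+access+metrics"),
    ("oa-doc-42", "https://someblog.example.net/post/123"),
    ("oa-doc-42", "-"),
]
print(indicators(log))
```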

    Fr-TM-align: a new protein structural alignment method based on fragment alignments and the TM-score

    ©2008 Pandit and Skolnick; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. This article is available from: http://www.biomedcentral.com/1471-2105/9/531 (doi:10.1186/1471-2105-9-531). Background: Protein tertiary structure comparisons are employed in various fields of contemporary structural biology. Most structure comparison methods involve generation of an initial seed alignment, which is extended and/or refined to provide the best structural superposition between a pair of protein structures as assessed by a structure comparison metric. One such metric, the TM-score, was recently introduced to provide a combined structure quality measure of the coordinate root mean square deviation between a pair of structures and coverage. Using the TM-score, the TM-align structure alignment algorithm was developed that was often found to have better accuracy and coverage than the most commonly used structural alignment programs; however, there were a number of situations when this was not true. Results: To further improve structure alignment quality, the Fr-TM-align algorithm has been developed, in which aligned fragment pairs are used to generate the initial seed alignments that are then refined using dynamic programming to maximize the TM-score. For the assessment of the structural alignment quality from Fr-TM-align in comparison to other programs such as CE and TM-align, we examined various alignment quality assessment scores such as PSI and TM-score. The assessment showed that the structural alignment quality from Fr-TM-align is better in comparison to both CE and TM-align. On average, the structural alignments generated using Fr-TM-align have a higher TM-score (~9%) and coverage (~7%) in comparison to those generated by TM-align. Fr-TM-align uses an exhaustive procedure to generate initial seed alignments. Hence, the algorithm is computationally more expensive than TM-align. Conclusion: Fr-TM-align, a new algorithm that employs fragment alignment and assembly, provides better structural alignments in comparison to TM-align. The source code and executables of Fr-TM-align are freely downloadable at: http://cssb.biology.gatech.edu/skolnick/files/FrTMalign/
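    For reference, the TM-score that both TM-align and Fr-TM-align maximize has the standard published form (quoted from the TM-score literature, not from this article):

    \[
    \mathrm{TM\text{-}score} \;=\; \frac{1}{L_{\mathrm{target}}} \sum_{i=1}^{L_{\mathrm{ali}}} \frac{1}{1 + \left( d_i / d_0(L_{\mathrm{target}}) \right)^2},
    \qquad
    d_0(L_{\mathrm{target}}) = 1.24\,\sqrt[3]{L_{\mathrm{target}} - 15} - 1.8,
    \]

    where \(d_i\) is the distance between the i-th pair of aligned residues, \(L_{\mathrm{ali}}\) is the number of aligned pairs, and \(L_{\mathrm{target}}\) is the length of the target protein.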

    Subjectively interesting connecting trees


    Construction of near-optimal vertex clique covering for real-world networks

    We propose a method based on combining a constructive and a bounding heuristic to solve the vertex clique covering problem (CCP), where the aim is to partition the vertices of a graph into the smallest number of classes, which induce cliques. Searching for the solution to CCP is highly motivated by analysis of social and other real-world networks, applications in graph mining, as well as by the fact that CCP is one of the classical NP-hard problems. Combining the construction and the bounding heuristic helped us not only to find high-quality clique coverings but also to determine that in the domain of real-world networks, many of the obtained solutions are optimal, while the rest of them are near-optimal. In addition, the method has a polynomial time complexity and shows much promise for its practical use. Experimental results are presented for a fairly representative benchmark of real-world data. Our test graphs include extracts of web-based social networks, including some very large ones, several well-known graphs from network science, as well as coappearance networks of literary works' characters from the DIMACS graph coloring benchmark. We also present results for synthetic pseudorandom graphs structured according to the Erdős-Rényi model and Leighton's model.
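    As a minimal illustration of the constructive side only (not the paper's combined constructive-and-bounding method, and with no optimality guarantee), a greedy clique-cover heuristic might look like this:

```python
# Minimal greedy constructive heuristic for vertex clique covering: repeatedly
# grow a clique from an uncovered vertex and remove it from the graph. This is
# only a simple sketch of the constructive step; the paper additionally uses a
# bounding heuristic to certify that many solutions are (near-)optimal.
import networkx as nx

def greedy_clique_cover(G: nx.Graph):
    """Partition the vertices of G into cliques (not necessarily minimum in number)."""
    uncovered = set(G.nodes)
    cover = []
    while uncovered:
        # Seed with an uncovered vertex of maximum degree among uncovered vertices.
        v = max(uncovered, key=lambda u: sum(1 for w in G[u] if w in uncovered))
        clique = {v}
        # Greedily add uncovered vertices adjacent to everything already in the clique.
        candidates = set(G[v]) & uncovered
        while candidates:
            u = candidates.pop()
            if all(G.has_edge(u, w) for w in clique):
                clique.add(u)
                candidates &= set(G[u])
        cover.append(clique)
        uncovered -= clique
    return cover

# Example on a small graph: a triangle with a pendant vertex attached.
G = nx.Graph([(0, 1), (1, 2), (0, 2), (2, 3)])
print(greedy_clique_cover(G))   # e.g. [{0, 1, 2}, {3}]
```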