
    The Importance of Forgetting: Limiting Memory Improves Recovery of Topological Characteristics from Neural Data

    We develop a line of work initiated by Curto and Itskov towards understanding the amount of information contained in the spike trains of hippocampal place cells via topological considerations. Previously, it was established that simply knowing which groups of place cells fire together in an animal's hippocampus is sufficient to extract the global topology of the animal's physical environment. We model a system where collections of place cells group and ungroup according to short-term plasticity rules. In particular, we obtain the surprising result that in experiments with spurious firing, the accuracy of the extracted topological information decreases with the persistence (beyond a certain regime) of the cell groups. This suggests that synaptic transience, or forgetting, is a mechanism by which the brain counteracts the effects of spurious place cell activity.
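    The co-firing construction lends itself to a small illustration. The sketch below is a simplification, not the authors' model: it records cell groups that fire together within a time window and discards any group that has not re-fired within a persistence parameter, mimicking forgetting; the surviving groups are the simplices one would hand to a persistent homology tool. The function name and the `window` and `persistence` parameters are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): build transient "cell groups" from
# simulated place-cell co-firing, with a forgetting (persistence) parameter.
from itertools import combinations

def cofiring_groups(spikes, window, persistence):
    """spikes: dict cell_id -> sorted list of spike times.
    Returns the set of cell groups (frozensets) that survive forgetting."""
    events = sorted((t, c) for c, times in spikes.items() for t in times)
    groups = {}  # group -> last time its cells were seen firing together
    for t, _ in events:
        # Cells with a spike in the co-firing window ending at time t.
        active = [c for c, times in spikes.items()
                  if any(t - window <= s <= t for s in times)]
        for k in range(2, len(active) + 1):
            for grp in combinations(sorted(active), k):
                groups[frozenset(grp)] = t
        # Forgetting: drop groups not refreshed within `persistence`.
        groups = {g: last for g, last in groups.items()
                  if t - last <= persistence}
    return set(groups)

# Example: three cells with overlapping firing.
spikes = {0: [0.0, 1.0, 5.0], 1: [0.1, 1.1], 2: [5.1]}
print(cofiring_groups(spikes, window=0.5, persistence=2.0))
# The surviving groups are the simplices one would feed to a persistent
# homology package to estimate the environment's topology.
```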

    Fast and Exact Fiber Surfaces for Tetrahedral Meshes

    Isosurfaces are fundamental geometrical objects for the analysis and visualization of volumetric scalar fields. Recent work has generalized them to bivariate volumetric fields with fiber surfaces, the pre-image of polygons in range space. However, the existing algorithm for their computation is approximate, and is limited to closed polygons. Moreover, its runtime performance does not allow instantaneous updates of the fiber surfaces upon user edits of the polygons. Overall, these limitations prevent a reliable and interactive exploration of the space of fiber surfaces. This paper introduces the first algorithm for the exact computation of fiber surfaces in tetrahedral meshes. It assumes no restriction on the topology of the input polygon, handles degenerate cases and better captures sharp features induced by polygon bends. The algorithm also allows visualization of individual fibers on the output surface, better illustrating their relationship with data features in range space. To enable truly interactive exploration sessions, we further improve the runtime performance of this algorithm. In particular, we show that it is trivially parallelizable and that it scales nearly linearly with the number of cores. Further, we study acceleration data structures in both the geometrical domain and range space, and we show how to generalize interval trees used in isosurface extraction to fiber surface extraction. Experiments demonstrate the superiority of our algorithm over previous work, both in terms of accuracy and running time, with up to two orders of magnitude speedups. This improvement enables interactive edits of range polygons with instantaneous updates of the fiber surface for exploration purposes. A VTK-based reference implementation is provided as additional material to reproduce our results.
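    As a rough illustration of the underlying reduction (a simplified sketch, not the paper's exact algorithm, and without its degenerate-case handling or the clipping of the surface to the edge's endpoints): for one edge of the range polygon, the fiber surface inside a single tetrahedron can be obtained by projecting the bivariate vertex values onto the edge's normal in range space and extracting the zero level set, marching-tetrahedra style.

```python
# Simplified per-tetrahedron fiber surface extraction for one range-space edge.
import numpy as np

def fiber_surface_in_tet(verts, values, p, q):
    """verts: 4 tetrahedron vertex positions (3D); values: 4 bivariate values.
    Returns triangles (lists of 3D points) approximating the pre-image of the
    line through range points p and q inside this tetrahedron. Degenerate
    zero values and clipping to the segment's endpoints are omitted."""
    verts = np.asarray(verts, float)
    p, q = np.asarray(p, float), np.asarray(q, float)
    d = q - p
    normal = np.array([-d[1], d[0]])              # normal of the edge in range space
    f = (np.asarray(values, float) - p) @ normal  # per-vertex signed distance (scaled)

    def cross(i, j):                              # zero crossing on tet edge (i, j)
        t = f[i] / (f[i] - f[j])
        return verts[i] + t * (verts[j] - verts[i])

    neg = [i for i in range(4) if f[i] < 0]
    pos = [i for i in range(4) if f[i] > 0]
    if len(neg) == 1 or len(pos) == 1:            # one isolated vertex: single triangle
        a = neg[0] if len(neg) == 1 else pos[0]
        return [[cross(a, o) for o in range(4) if o != a]]
    if len(neg) == 2 and len(pos) == 2:           # 2-2 split: quad, split into 2 triangles
        (a, b), (c, e) = neg, pos
        quad = [cross(a, c), cross(a, e), cross(b, e), cross(b, c)]
        return [quad[:3], [quad[0], quad[2], quad[3]]]
    return []                                     # no crossing in this tetrahedron

# Example: unit tetrahedron carrying a linear bivariate field.
verts = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]
values = [[0, 0], [1, 0], [0, 1], [1, 1]]
print(fiber_surface_in_tet(verts, values, p=[0.5, -1.0], q=[0.5, 2.0]))
```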

    Context-Based classification of objects in topographic data

    Large-scale topographic databases model real-world features as vector data objects. These can be point, line or area features. Each of these map objects is assigned to a descriptive class; for example, an area feature might be classed as a building, a garden or a road. Topographic data is subject to continual updates from cartographic surveys and ongoing quality improvement. One of the most important aspects of this is the assignment and verification of class descriptions for each area feature. These attributes can be added manually, but, due to the vast volume of data involved, automated techniques are desirable to classify these polygons. Analogy is a key thought process that underpins learning and has been the subject of much research in the field of artificial intelligence (AI). An analogy identifies structural similarity between a well-known source domain and a less familiar target domain. In many cases, information present in the source can then be mapped to the target, yielding a better understanding of the latter. The solution of geometric analogy problems has been a fruitful area of AI research. We observe that there is a correlation between objects in geometric analogy problem domains and map features in topographic data. We describe two topographic area feature classification tools that use descriptions of neighbouring features to identify analogies between polygons: content vector matching (CVM) and context structure matching (CSM). CVM and CSM classify an area feature by matching its neighbourhood context against those of analogous polygons whose class is known. Both classifiers were implemented and then tested on high quality topographic polygon data supplied by Ordnance Survey (Great Britain). Area features were found to exhibit a high degree of variation in their neighbourhoods. CVM correctly classified 85.38% of the 79.03% of features it attempted to classify. The accuracy for CSM was 85.96% of the 62.96% of features it tried to identify. Thus, CVM can classify 25.53% more features than CSM, but is slightly less accurate. Both techniques excelled at identifying the feature classes that predominate in suburban data. Our structure-based classification approach may also benefit other types of spatial data, such as topographic line data, small-scale topographic data, raster data, architectural plans and circuit diagrams.
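    The content-vector idea can be sketched as follows (an illustrative toy, not the authors' or Ordnance Survey's implementation; the similarity threshold, function names and class names are assumptions): describe each polygon by counts of its neighbours' classes, assign the class of the most similar labelled neighbourhood, and abstain when nothing is similar enough, which is why such a classifier attempts only a fraction of the features.

```python
# Toy content-vector matching: classify a polygon by its neighbourhood context.
from collections import Counter
import math

def content_vector(neighbour_classes):
    """Neighbourhood context as a class -> count vector."""
    return Counter(neighbour_classes)

def cosine(u, v):
    keys = set(u) | set(v)
    dot = sum(u[k] * v[k] for k in keys)
    norm = math.sqrt(sum(c * c for c in u.values())) * \
           math.sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

def classify(target_neighbours, labelled_examples, threshold=0.8):
    """labelled_examples: list of (class, neighbour_classes).
    Returns a class, or None when the classifier abstains."""
    tv = content_vector(target_neighbours)
    best_class, best_sim = None, 0.0
    for cls, nbrs in labelled_examples:
        sim = cosine(tv, content_vector(nbrs))
        if sim > best_sim:
            best_class, best_sim = cls, sim
    return best_class if best_sim >= threshold else None

examples = [("building", ["garden", "garden", "road"]),
            ("road", ["building", "building", "road"])]
print(classify(["garden", "road", "garden"], examples))   # -> "building"
```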

    Integrated modelling for 3D GIS

    A three-dimensional (3D) model facilitates the study of the real world objects it represents. A geoinformation system (GIS) should exploit the 3D model in a digital form as a basis for answering questions pertaining to aspects of the real world. With respect to the earth sciences, different kinds of objects of reality can be realized. These objects are components of the reality under study. At the present state of the art, different realizations are usually situated in separate systems or subsystems. This separation results in redundancy and uncertainty when different components sharing some common aspects are combined. Relationships between different kinds of objects, or between components of an object, cannot be represented adequately. This thesis aims at the integration of those components sharing some common aspects in one 3D model. This integration brings related components together and minimizes redundancy and uncertainty. Since the model should permit not only the representation of known aspects of reality, but also the derivation of information from the existing representation, the design of the model is constrained so as to afford these capabilities. The tessellation of space by the network of simplest geometry, the simplicial network, is proposed as a solution. The known aspects of the reality can be embedded in the simplicial network without degrading their quality. The model provides finite spatial units useful for the representation of objects. Relationships between objects can also be expressed through components of these spatial units, which at the same time facilitate various computations and the derivation of information implicitly available in the model. Since the simplicial network is based on concepts in geoinformation science and in mathematics, its design can be generalized to n dimensions. The networks of different dimension are said to be compatible, which enables the incorporation of a simplicial network of a lower dimension into another simplicial network of a higher dimension. The complexity of the 3D model fulfilling the requirements listed calls for a suitable construction method. The thesis presents a simple way to construct the model. The raster technique is used for the formation of the simplicial network embedding the representation of the known aspects of reality as constraints. The prototype implementation in a software package, ISNAP, demonstrates the simplicial network's construction and use. The simplicial network can facilitate spatial and non-spatial queries, computations, and 2D and 3D visualizations. The experimental tests using different kinds of data sets show that the simplicial network can be used to represent real world objects in different dimensionalities. Operations traditionally requiring different systems and spatial models can be carried out in one system using one model as a basis. This possibility makes the GIS more powerful and easy to use.
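    A toy data structure conveys the flavour of such a network (this is not the thesis's ISNAP implementation; the class and method names are assumptions): tetrahedra share vertices, and lower-dimensional simplices such as triangles and edges are derived from them, so relationships between embedded objects can be queried through shared components.

```python
# Toy simplicial network: tetrahedra plus derived faces and edges.
from itertools import combinations

class SimplicialNetwork:
    def __init__(self):
        self.vertices = {}          # id -> (x, y, z)
        self.tetrahedra = []        # each a frozenset of 4 vertex ids

    def add_vertex(self, vid, xyz):
        self.vertices[vid] = xyz

    def add_tetrahedron(self, vids):
        self.tetrahedra.append(frozenset(vids))

    def simplices(self, dim):
        """All distinct simplices of a given dimension (0=vertex .. 3=tet)."""
        out = set()
        for tet in self.tetrahedra:
            out.update(frozenset(c) for c in combinations(sorted(tet), dim + 1))
        return out

    def shared_face(self, t1, t2):
        """Common vertices of two tetrahedra, e.g. a shared triangle or edge."""
        return self.tetrahedra[t1] & self.tetrahedra[t2]

net = SimplicialNetwork()
for vid, xyz in enumerate([(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]):
    net.add_vertex(vid, xyz)
net.add_tetrahedron([0, 1, 2, 3])
net.add_tetrahedron([1, 2, 3, 4])
print(len(net.simplices(1)))        # number of distinct edges in the network
print(net.shared_face(0, 1))        # the triangle shared by the two tetrahedra
```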

    Fitted Value Function Iteration With Probability One Contractions

    This paper studies a value function iteration algorithm that can be applied to almost all stationary dynamic programming problems. Using nonexpansive function approximation and Monte Carlo integration, we develop a randomized fitted Bellman operator and a corresponding algorithm that is globally convergent with probability one. When additional restrictions are imposed, an $O_P(n^{-1/2})$ rate of convergence for Monte Carlo error is obtained.
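    A hedged sketch of the general recipe (not the paper's algorithm or model): fitted value function iteration with piecewise-linear interpolation, which is a nonexpansive approximator, and Monte Carlo integration of the conditional expectation inside the Bellman operator. The growth-model primitives below (log utility, Cobb-Douglas production, lognormal shocks) are purely illustrative.

```python
# Fitted value function iteration with Monte Carlo integration (toy model).
import numpy as np

rng = np.random.default_rng(0)
beta, alpha = 0.95, 0.36                     # discount factor, capital share
grid = np.linspace(0.1, 10.0, 60)            # capital grid
shocks = rng.lognormal(0.0, 0.1, size=200)   # Monte Carlo draws of the shock

def bellman(v_on_grid):
    """One application of the randomized fitted Bellman operator."""
    v_new = np.empty_like(v_on_grid)
    for i, k in enumerate(grid):
        y = k ** alpha                        # output available at state k
        candidates = grid[grid < y]           # feasible next-period capital
        vals = []
        for k_next in candidates:
            # Monte Carlo estimate of E[v(k_next * z)], evaluated with
            # piecewise-linear (nonexpansive) interpolation of the iterate.
            ev = np.mean(np.interp(k_next * shocks, grid, v_on_grid))
            vals.append(np.log(y - k_next) + beta * ev)
        v_new[i] = max(vals)
    return v_new

v = np.zeros_like(grid)
for it in range(500):
    v_new = bellman(v)
    diff = np.max(np.abs(v_new - v))
    v = v_new
    if diff < 1e-5:
        break
print(f"stopped after {it + 1} iterations, last sup-norm change {diff:.2e}")
```

    Because the same shock draws are reused at every iteration, the fitted operator is a contraction given the draws, so successive iterates converge; the Monte Carlo error shows up only in where the fixed point lands.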

    Structuring a Wayfinder's Dynamic and Uncertain Environment

    Wayfinders typically travel in dynamic environments where barriers and requirements change over time. In many cases, uncertainty exists about the future state of this changing environment. Current geographic information systems lack tools to assist wayfinders in understanding the travel possibilities and path selection options in these dynamic and uncertain settings. The goal of this research is a better understanding of the impact of dynamic and uncertain environments on wayfinding travel possibilities. An integrated spatio-temporal framework, populated with barriers and requirements, models wayfinding scenarios by generating four travel possibility partitions based on the wayfinder's maximum travel speed. Using these partitions, wayfinders select paths to meet scenario requirements. When uncertainty exists, wayfinders often cannot discern the future state of barriers and requirements. The model to address indiscernibility employs a three-valued logic to indicate accessible space, inaccessible space, and possibly inaccessible space. Uncertain scenarios generate up to fifteen distinct travel possibility categories. These fifteen categories generalize into three-valued travel possibility partitions based on where travel can occur and where travel is successful. Path selection in these often-complex environments is explored through a specific uncertain scenario that includes a well-defined initial requirement and the possibility of an additional requirement somewhere beforehand. Observations from initial path selection tests with this scenario provide the motivation for the hypothesis that paths arriving as soon as possible at well-defined requirements also maximize the probability of success in meeting possible additional requirements. The hypothesis evaluation occurs within a prototype Travel Possibility Calculator application that employs a set of metrics to test path accessibility in various linear and planar scenarios. The results did not support the hypothesis, but showed instead that path accessibility to possible additional requirements is greatly influenced by the spatio-temporal characteristics of the scenario's barriers.
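    The deterministic core of the partitioning can be illustrated in one dimension (a hedged sketch, not the thesis's Travel Possibility Calculator; the scenario values and category labels are assumptions): a space-time point is reachable if the wayfinder can get there from the start at the maximum speed, and it still permits success if the requirement can be met from it in the remaining time, giving four partitions.

```python
# Toy 1-D travel possibility classification under a maximum travel speed.
def classify(x, t, start, t0, requirement, t_req, v_max):
    reachable = abs(x - start) <= v_max * (t - t0)            # can get here by time t
    can_meet = abs(requirement - x) <= v_max * (t_req - t)    # requirement still attainable
    if reachable and can_meet:
        return "reachable, requirement still attainable"
    if reachable:
        return "reachable, requirement missed"
    if can_meet:
        return "unreachable, but would allow the requirement"
    return "unreachable and too late"

# A wayfinder starts at x=0 at t=0, must be at x=8 by t=10, maximum speed 1.
for x, t in [(3, 4), (3, 8), (9, 4), (20, 5)]:
    print((x, t), "->", classify(x, t, start=0, t0=0, requirement=8, t_req=10, v_max=1))
```

    Barriers and possible additional requirements would further subdivide these regions, which is where the three-valued logic and the fifteen categories of the uncertain case come in.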