
    Visualizing Spacetime Curvature via Frame-Drag Vortexes and Tidal Tendexes I. General Theory and Weak-Gravity Applications

    When one splits spacetime into space plus time, the Weyl curvature tensor (vacuum Riemann tensor) gets split into two spatial, symmetric, and trace-free (STF) tensors: (i) the Weyl tensor's so-called "electric" part or tidal field, and (ii) the Weyl tensor's so-called "magnetic" part or frame-drag field. Being STF, the tidal field and frame-drag field each have three orthogonal eigenvector fields which can be depicted by their integral curves. We call the integral curves of the tidal field's eigenvectors tendex lines, we call each tendex line's eigenvalue its tendicity, and we give the name tendex to a collection of tendex lines with large tendicity. The analogous quantities for the frame-drag field are vortex lines, their vorticities, and vortexes. We build up physical intuition into these concepts by applying them to a variety of weak-gravity phenomena: a spinning, gravitating point particle, two such particles side by side, a plane gravitational wave, a point particle with a dynamical current-quadrupole moment or dynamical mass-quadrupole moment, and a slow-motion binary system made of nonspinning point particles. [Abstract is abbreviated; full abstract also mentions additional results.] Comment: 25 pages, 20 figures, matches the published version
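The eigenvector/eigenvalue language above can be made concrete with the simplest weak-gravity case, a static point mass. The sketch below (not from the paper; geometric units G = c = 1 assumed) builds the Newtonian tidal tensor E_ij = (M/r^3)(delta_ij - 3 n_i n_j) and reads off its tendicities: one negative (stretching, radial) and two positive (squeezing, transverse).

```python
import numpy as np

def tidal_field(point, M=1.0):
    """Newtonian tidal tensor E_ij = (M/r^3)(delta_ij - 3 n_i n_j)."""
    r = np.linalg.norm(point)
    n = point / r
    return (M / r**3) * (np.eye(3) - 3.0 * np.outer(n, n))

E = tidal_field(np.array([0.0, 0.0, 2.0]))   # field point at r = 2
assert abs(np.trace(E)) < 1e-12              # trace-free
assert np.allclose(E, E.T)                   # symmetric
tendicities, axes = np.linalg.eigh(E)        # eigenvalues = tendicities
# radial axis: tendicity -2M/r^3 (stretching); transverse: +M/r^3 (squeezing)
assert np.allclose(tendicities, [-0.25, 0.125, 0.125])
```

Following the eigenvector fields in `axes` from point to point traces out the tendex lines themselves.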

    Address-Event based Platform for Bio-inspired Spiking Systems

    Address Event Representation (AER) is an emergent neuromorphic interchip communication protocol that allows real-time virtual massive connectivity between huge numbers of neurons located on different chips. By exploiting high-speed digital communication circuits (with nanosecond timing), synaptic neural connections can be time-multiplexed, while neural activity signals (with millisecond timing) are sampled at low frequencies. Neurons generate "events" according to their activity levels: more active neurons generate more events per unit time and access the interchip communication channel more frequently, while neurons with low activity consume less communication bandwidth. When building multi-chip, multi-layered AER systems, it is absolutely necessary to have a computer interface that allows (a) reading AER interchip traffic into the computer and visualizing it on the screen, and (b) converting a conventional frame-based video stream in the computer into AER and injecting it at some point of the AER structure. This is necessary for testing and debugging complex AER systems. On the other hand, the use of a commercial personal computer implies depending on software tools and operating systems that can make the system slower and less robust. This paper addresses the problem of connecting several AER-based chips to compose a powerful processing system. The problem was discussed at the Neuromorphic Engineering Workshop of 2006. The platform is based on an embedded computer, a powerful FPGA, and serial links, making the system faster and stand-alone (independent of a PC). A new platform is presented that allows connecting up to eight AER-based chips to a Spartan 3 4000 FPGA. The FPGA is responsible for the network communication, based on Address-Events, and at the same time maps and transforms the address space of the traffic to implement pre-processing. An MMU microprocessor (Intel XScale 400 MHz Gumstix Connex computer) is also connected to the FPGA, allowing the platform to implement event-based algorithms that interact with the AER system, such as control algorithms, network connectivity, USB support, etc. The LVDS transceiver allows a bandwidth of up to 1.32 Gbps, around 66 mega-events per second (Mevps).
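The time-multiplexing idea behind AER can be sketched in a few lines of software (an illustrative model, not the platform's hardware protocol): each spike becomes the address of the firing neuron on a shared serial channel, and per-neuron spike trains are recovered on the receiving side.

```python
# Minimal software model of Address Event Representation: a spike is
# broadcast as the address of the neuron that fired; an arbiter
# serializes all events onto one time-ordered channel.

def encode_events(spike_times):
    """spike_times: {neuron_address: [t0, t1, ...]} -> (t, addr) stream."""
    stream = [(t, addr) for addr, times in spike_times.items() for t in times]
    return sorted(stream)

def decode_events(stream):
    """Rebuild per-neuron spike trains from the multiplexed stream."""
    trains = {}
    for t, addr in stream:
        trains.setdefault(addr, []).append(t)
    return trains

spikes = {3: [0.001, 0.004, 0.007], 9: [0.002]}   # neuron 3 is more active
stream = encode_events(spikes)
assert decode_events(stream) == spikes
# more active neurons occupy more of the channel bandwidth:
assert sum(1 for _, a in stream if a == 3) > sum(1 for _, a in stream if a == 9)
```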

    VIOLA - A multi-purpose and web-based visualization tool for neuronal-network simulation output

    Neuronal network models and corresponding computer simulations are invaluable tools to aid the interpretation of the relationship between neuron properties, connectivity and measured activity in cortical tissue. Spatiotemporal patterns of activity propagating across the cortical surface as observed experimentally can, for example, be described by neuronal network models with layered geometry and distance-dependent connectivity. The interpretation of the resulting stream of multi-modal and multi-dimensional simulation data calls for integrating interactive visualization steps into existing simulation-analysis workflows. Here, we present a set of interactive visualization concepts called views for the visual analysis of activity data in topological network models, and a corresponding reference implementation VIOLA (VIsualization Of Layer Activity). The software is a lightweight, open-source, web-based and platform-independent application combining and adapting modern interactive visualization paradigms, such as coordinated multiple views, for massively parallel neurophysiological data. For a use-case demonstration we consider spiking activity data of a two-population, layered point-neuron network model subject to a spatially confined excitation originating from an external population. With the multiple coordinated views, an explorative and qualitative assessment of the spatiotemporal features of neuronal activity can be performed upfront of a detailed quantitative data analysis of specific aspects of the data. Furthermore, ongoing efforts including the European Human Brain Project aim at providing online user portals for integrated model development, simulation, analysis and provenance tracking, wherein interactive visual analysis tools are one component. Browser-compatible, web-technology based solutions are therefore required. Within this scope, with VIOLA we provide a first prototype. Comment: 38 pages, 10 figures, 3 tables
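A minimal sketch of the kind of preprocessing such spatiotemporal views require: binning point-neuron spike events into a (time, y, x) activity array that can be animated over the 2-D layer. All bin sizes and names here are illustrative, not VIOLA's actual data format.

```python
import numpy as np

def bin_activity(spikes, extent=1.0, n_bins=4, t_max=1.0, n_steps=2):
    """spikes: list of (t, x, y) events -> activity array of shape
    (n_steps, n_bins, n_bins), one frame per time step."""
    act = np.zeros((n_steps, n_bins, n_bins))
    for t, x, y in spikes:
        ti = min(int(t / t_max * n_steps), n_steps - 1)
        xi = min(int(x / extent * n_bins), n_bins - 1)
        yi = min(int(y / extent * n_bins), n_bins - 1)
        act[ti, yi, xi] += 1          # count spikes per space-time bin
    return act

spikes = [(0.1, 0.1, 0.1), (0.1, 0.12, 0.1), (0.9, 0.9, 0.9)]
act = bin_activity(spikes)
assert act[0, 0, 0] == 2.0   # two early spikes in the lower-left bin
assert act[1, 3, 3] == 1.0   # one late spike in the upper-right bin
assert act.sum() == 3.0
```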

    Detection and Simulation of Dangerous Human Crowd Behavior

    Tragically, gatherings of large human crowds quite often end in crowd disasters such as the recent catastrophe at the Loveparade 2010. In the past, research on pedestrian and crowd dynamics focused on simulation of pedestrian motion. As of yet, however, there does not exist any automatic system which can detect hazardous situations in crowds, thus helping to prevent these tragic incidents. In the thesis at hand, we analyze pedestrian behavior in large crowds and observe characteristic motion patterns. Based on our findings, we present a computer vision system that detects unusual events and critical situations from video streams and thus alarms security personnel in order to take necessary actions. We evaluate the system’s performance on synthetic and experimental as well as real-world data. In particular, we show its effectiveness on the surveillance videos recorded at the Loveparade crowd stampede. Since our method is based on optical flow computations, it meets two crucial prerequisites in video surveillance: Firstly, it works in real time and, secondly, the privacy of the people being monitored is preserved. In addition, we integrate the observed motion patterns into models for simulating pedestrian motion and show that the proposed simulation model produces realistic trajectories. We employ this model to simulate large human crowds and use techniques from computer graphics to render synthetic videos for further evaluation of our automatic video surveillance system.
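The underlying idea of flow-based detection can be sketched as flagging frames whose optical-flow energy deviates sharply from a running baseline. This is an illustrative toy (the thesis's actual detector is more sophisticated); the window size and threshold are assumptions.

```python
import numpy as np

def flow_energy(flow):
    """Mean squared magnitude of a dense optical-flow field (H x W x 2)."""
    return float(np.mean(np.sum(flow**2, axis=-1)))

def is_anomalous(energies, window=10, k=3.0):
    """Flag the newest frame if its flow energy deviates more than
    k standard deviations from the preceding `window` frames."""
    if len(energies) <= window:
        return False
    baseline = np.array(energies[-window - 1:-1])
    mu, sigma = baseline.mean(), baseline.std() + 1e-9
    return abs(energies[-1] - mu) > k * sigma

assert flow_energy(np.ones((2, 2, 2))) == 2.0   # unit flow everywhere

# calm crowd: small fluctuations around a steady flow energy
energies = [1.0 + 0.01 * ((i % 3) - 1) for i in range(20)]
assert not is_anomalous(energies)
energies.append(5.0)          # sudden surge in motion energy
assert is_anomalous(energies)
```

Because only aggregate flow statistics are kept, no individual in the footage needs to be identified, which is the privacy property mentioned above.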

    In vivo dynamics of DnaA and its regulators

    Duplication of the chromosome is a crucial step of the bacterial cell cycle. To keep the number of chromosomes per cell constant, this process must be tightly regulated. Replication is initiated at the replication origin by the highly conserved initiator protein DnaA. To do so, DnaA binds to specific DNA sequences in the origin region and forms a helical nucleoprotein complex that brings about local unwinding of the double strand. This must occur in a controlled manner to avoid initiation at the wrong time. For Bacillus subtilis, two regulatory proteins are known, YabA and Soj (ParA). YabA fulfills two functions: on the one hand it acts as an anti-cooperativity factor for DnaA, and on the other hand, mediated by an interaction with DnaN, it recruits DnaA to the replication machinery. The protein Soj, in contrast, can form a dimer in an ATP-dependent manner and as such promote DnaA activity, whereas in its ADP-bound form it is a monomer and prevents formation of the right-handed DnaA helix at the origin. The present work shows that the localization of YabA with respect to the replication origin and the replication machinery follows a pattern similar to that known for DnaA. It turns out that YabA localizes mainly at the replisome, but can also localize at the origin. Using FRAP (fluorescence recovery after photobleaching), I was able to show in vivo that DnaA has a high turnover at the origin and at the replication machinery, on the order of only a few seconds. Furthermore, deletion of yabA and of soj-spo0J reduces the turnover of DnaA. YabA itself shows behavior similar to DnaA. In a second approach, single-molecule microscopy was performed on living cells.
    The analysis of single DnaA molecules confirms that DnaA is a highly dynamic protein that dwells at its binding sites for no more than a few milliseconds. Since DnaA also acts as a transcription factor, these binding sites include not only the origin and the replication machinery but also several promoters distributed across the whole chromosome. Furthermore, deletion of soj-spo0J leads to shortened dwell times, consistent with the FRAP data. In contrast, a DnaA variant carrying an amino-acid substitution that impairs ATPase activity and the initiation rate dwells markedly shorter. Consequently, even small differences in dwell time lead to atypical initiation frequencies. Single YabA molecules, by contrast, are more static, indicating that DnaA and YabA do not scan the DNA together as a complex. Surprisingly, for E. coli DnaA an oscillation of DnaA molecules between the two cell halves was observed. The diffusion constant and dwell time were similar to those of B. subtilis DnaA, demonstrating the high dynamics of DnaA proteins in two bacterial species. The observation that E. coli DnaA oscillates further suggests a mechanism in which a regulatory protein chases DnaA and releases it from its binding sites, which could produce the observed pattern. High exchange rates can be advantageous for integrating many regulatory inputs into the decision about initiation. In B. subtilis, the two regulators YabA and Soj may act in part by stimulating the exchange rate of DnaA at the origin and at the replication machinery.
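How a FRAP experiment yields a turnover time can be illustrated with a toy recovery-curve analysis on synthetic data. The model I(t) = A(1 - exp(-t/tau)) and the crude 1 - 1/e read-off below are illustrative assumptions, not the thesis's actual fitting procedure.

```python
import numpy as np

def recovery(t, A, tau):
    """Idealized fluorescence recovery after photobleaching."""
    return A * (1.0 - np.exp(-t / tau))

# synthetic bleached-spot intensities with a true turnover time of 3 s
t = np.linspace(0.0, 20.0, 50)
signal = recovery(t, A=0.8, tau=3.0)

# crude tau estimate: time at which the curve reaches (1 - 1/e) of its plateau
plateau = signal[-1]
tau_est = t[np.searchsorted(signal, plateau * (1.0 - np.exp(-1.0)))]
assert abs(tau_est - 3.0) < 0.5   # recovers the few-second timescale
```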

    A biologically plausible system for detecting saliency in video

    Neuroscientists and cognitive scientists credit the dorsal and ventral pathways with the ability to detect both still-salient and motion-salient objects. In this work, a framework is developed to explore potential models of still and motion saliency and is an extension of the original VENUS system. The early visual pathway is modeled by using Independent Component Analysis to learn a set of Gabor-like receptive fields similar to those found in the mammalian visual pathway. These spatial receptive fields form a set of 2D basis feature matrices, which are used to decompose complex visual stimuli into their spatial components. A still saliency map is formed by combining the outputs of convolving the learned spatial receptive fields with the input stimuli. The dorsal pathway is primarily focused on motion-based information. In this framework, the model uses simple motion segmentation and tracking algorithms to create a statistical model of the motion and color-related information in video streams. A key feature of the human visual system is the ability to detect novelty. This framework uses a set of Gaussian distributions to model color and motion. When a unique event is detected, Gaussian distributions are created and the event is declared novel. The next time a similar event is detected the framework is able to determine that the event is not novel based on the previously created distributions. A forgetting term is also included that allows events that have not been detected for a long period of time to be forgotten.
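The Gaussian novelty mechanism can be sketched in one dimension (the real system models multi-dimensional color and motion features and includes the forgetting term; the threshold and update rate below are illustrative assumptions):

```python
class NoveltyDetector:
    """Toy 1-D version of Gaussian-based novelty detection: an event
    matching no existing Gaussian is declared novel and spawns a new one."""

    def __init__(self, threshold=3.0, init_sigma=1.0):
        self.models = []              # list of (mu, sigma)
        self.threshold = threshold
        self.init_sigma = init_sigma

    def observe(self, x):
        for i, (mu, sigma) in enumerate(self.models):
            if abs(x - mu) / sigma <= self.threshold:
                # familiar event: nudge the matching Gaussian toward x
                self.models[i] = (0.9 * mu + 0.1 * x, sigma)
                return False
        self.models.append((x, self.init_sigma))   # unseen: create a model
        return True

d = NoveltyDetector()
assert d.observe(10.0) is True     # first event of its kind -> novel
assert d.observe(10.5) is False    # close to an existing model -> familiar
assert d.observe(50.0) is True     # far from everything -> novel again
```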

    Using Web Archives to Enrich the Live Web Experience Through Storytelling

    Much of our cultural discourse occurs primarily on the Web. Thus, Web preservation is a fundamental precondition for multiple disciplines. Archiving Web pages into themed collections is a method for ensuring these resources are available for posterity. Services such as Archive-It exist to allow institutions to develop, curate, and preserve collections of Web resources. Understanding the contents and boundaries of these archived collections is a challenge for most people, resulting in the paradox that the larger the collection, the harder it is to understand. Meanwhile, as the sheer volume of data grows on the Web, storytelling is becoming a popular technique in social media for selecting Web resources to support a particular narrative or story. In this dissertation, we address the problem of understanding archived collections by proposing the Dark and Stormy Archive (DSA) framework, in which we integrate storytelling social media and Web archives. In the DSA framework, we identify, evaluate, and select candidate Web pages from archived collections that summarize the holdings of these collections, arrange them in chronological order, and then visualize these pages using tools that users are already familiar with, such as Storify. To inform our work on generating stories from archived collections, we start by building a baseline for the structural characteristics of popular (i.e., receiving the most views) human-generated stories through investigating stories from Storify. Furthermore, we examined the entire population of Archive-It collections to better understand the characteristics of the collections we intend to summarize. We then filter off-topic pages from the collections using different methods to detect when an archived page in a collection has gone off-topic. We created a gold standard dataset from three Archive-It collections to evaluate the proposed methods at different thresholds. 
    From the gold standard dataset, we identified five behaviors for the TimeMaps (a list of archived copies of a page) based on the page’s aboutness. Based on a dynamic slicing algorithm, we divide the collection and cluster the pages in each slice. We then select the best representative page from each cluster based on different quality metrics (e.g., the replay quality and the quality of the generated snippet from the page). At the end, we put the selected pages in chronological order and visualize them using Storify. For evaluating the DSA framework, we obtained a ground truth dataset of hand-crafted stories from Archive-It collections generated by expert archivists. We used Amazon’s Mechanical Turk to evaluate the automatically generated stories against the stories that were created by domain experts. The results show that the stories automatically generated by the DSA are indistinguishable from those created by human subject domain experts, while at the same time both kinds of stories (automatic and human) are easily distinguished from randomly generated stories.
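The slice-and-select stage can be sketched as follows. All names and the scalar quality score are hypothetical stand-ins for the replay- and snippet-quality metrics; per-slice clustering is collapsed into picking the single best page.

```python
# Toy sketch of a DSA-style selection stage: slice a collection's
# timeline, then keep the highest-quality page from each slice,
# yielding a chronologically ordered story.

def slice_collection(pages, n_slices):
    """pages: list of (timestamp, url, quality), assumed time-sorted."""
    t0, t1 = pages[0][0], pages[-1][0]
    width = (t1 - t0) / n_slices or 1   # avoid zero-width slices
    slices = [[] for _ in range(n_slices)]
    for p in pages:
        i = min(int((p[0] - t0) / width), n_slices - 1)
        slices[i].append(p)
    return slices

def build_story(pages, n_slices=3):
    story = []
    for s in slice_collection(pages, n_slices):
        if s:
            story.append(max(s, key=lambda p: p[2]))  # best page per slice
    return story   # already in chronological order

pages = [(1, "a", 0.2), (2, "b", 0.9), (5, "c", 0.4), (9, "d", 0.7)]
assert [p[1] for p in build_story(pages)] == ["b", "c", "d"]
```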

    Management and display of four-dimensional environmental data sets using McIDAS

    Over the past four years, great strides have been made in the areas of data management and display of 4-D meteorological data sets. A survey was conducted of available and planned 4-D meteorological data sources. The data types were evaluated for their impact on the data management and display system. The data base management requirements generated by the 4-D data display system were analyzed. The suitability of the existing data base management procedures and file structure was evaluated in light of the new requirements. Where needed, new data base management tools and file procedures were designed and implemented. The quality of the basic 4-D data sets was assured. Interpolation and extrapolation techniques for the 4-D data were investigated. The 4-D data from various sources were combined to make a uniform and consistent data set for display purposes. Data display software was designed to create abstract line graphic 3-D displays. Realistic shaded 3-D displays were created. Animation routines for these displays were developed in order to produce a dynamic 4-D presentation. A prototype dynamic color stereo workstation was implemented. A computer functional design specification was produced based on interactive studies and user feedback.
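The temporal interpolation needed to animate 4-D (x, y, z, t) data can be sketched as a linear blend between two 3-D grids sampled at consecutive times. This is an illustrative example, not the McIDAS implementation:

```python
import numpy as np

def interpolate_time(grid_a, grid_b, t_a, t_b, t):
    """Linearly blend two 3-D fields sampled at times t_a < t_b to get
    an intermediate frame at time t, for smooth 4-D animation."""
    w = (t - t_a) / (t_b - t_a)
    return (1.0 - w) * grid_a + w * grid_b

a = np.zeros((2, 2, 2))           # field at t = 0
b = np.full((2, 2, 2), 10.0)      # field at t = 1
mid = interpolate_time(a, b, t_a=0.0, t_b=1.0, t=0.25)
assert np.allclose(mid, 2.5)      # a quarter of the way between frames
```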

    ICASE/LaRC Symposium on Visualizing Time-Varying Data

    Time-varying datasets present difficult problems for both analysis and visualization. For example, the data may be terabytes in size, distributed across mass storage systems at several sites, with time scales ranging from femtoseconds to eons. In response to these challenges, ICASE and NASA Langley Research Center, in cooperation with ACM SIGGRAPH, organized the first symposium on visualizing time-varying data. The purpose was to bring the producers of time-varying data together with visualization specialists to assess open issues in the field, present new solutions, and encourage collaborative problem-solving. These proceedings contain the peer-reviewed papers which were presented at the symposium. They cover a broad range of topics, from methods for modeling and compressing data to systems for visualizing CFD simulations and World Wide Web traffic. Because the subject matter is inherently dynamic, a paper proceedings cannot adequately convey all aspects of the work. The accompanying video proceedings provide additional context for several of the papers

    Exploratory Visualization of Data Pattern Changes in Multivariate Data Streams

    More and more researchers are focusing on the management, querying and pattern mining of streaming data. The visualization of streaming data, however, is still a very new topic. Streaming data is very similar to time-series data since each data point has a time dimension. Although the latter has been well studied in the area of information visualization, a key characteristic of streaming data, unbounded and large-scale input, is rarely investigated. Moreover, most techniques for visualizing time-series data focus on univariate data and seldom convey multidimensional relationships, which is an important requirement in many application areas. Therefore, it is necessary to develop appropriate techniques for streaming data instead of directly applying time-series visualization techniques to it. As one of the main contributions of this dissertation, I introduce a user-driven approach for the visual analytics of multivariate data streams based on effective visualizations via a combination of windowing and sampling strategies. To help users identify and track how data patterns change over time, not only the current sliding window content but also abstractions of past data in which users are interested are displayed. Sampling is applied within each single time window to help reduce visual clutter as well as preserve data patterns. Sampling ratios scheduled for different windows reflect the degree of user interest in the content. A degree of interest (DOI) function is used to represent a user's interest in different windows of the data. Users can apply two types of pre-defined DOI functions, namely RC (recent change) and PP (periodic phenomena) functions. The developed tool also allows users to interactively adjust DOI functions, in a manner similar to transfer functions in volume visualization, to enable a trial-and-error exploration process. In order to visually convey the change of multidimensional correlations, four layout strategies were designed. 
    User studies showed that three of these are effective techniques for conveying data pattern changes compared to traditional time-series data visualization techniques. Based on this evaluation, a guide for the selection of appropriate layout strategies was derived, considering the characteristics of the targeted datasets and data analysis tasks. Case studies were used to show the effectiveness of the DOI functions and the various visualization techniques. A second contribution of this dissertation is a data-driven framework to merge, and thus condense, time windows having small or no changes and to distort the time axis, so that only significant changes are shown to users. Pattern vectors are introduced as a compact format for representing the discovered data model. Three views, juxtaposed views, pattern vector views, and pattern change views, were developed for conveying data pattern changes. The first shows more details of the data but needs more canvas space; the last two need much less canvas space by conveying only the pattern parameters, but lose many data details. The experiments showed that the proposed merge algorithms preserve more change information than intuitive pattern-blind averaging. A user study was also conducted to confirm that the proposed techniques can help users find pattern changes more quickly than with a non-distorted time axis. A third contribution of this dissertation is a set of history views with related interaction techniques, developed to work under two modes: non-merge and merge. In the former mode, the framework can use natural hierarchical time units or ones defined by domain experts to represent timelines. This can help users navigate across long time periods. Grid or virtual calendar views were designed to provide a compact overview for the history data. In addition, MDS pattern starfields, distance maps, and pattern brushes were developed to enable users to quickly investigate the degree of pattern similarity among different time periods. 
    For the merge mode, merge algorithms were applied to selected time windows to generate a merge-based hierarchy. The contiguous time windows having similar patterns are merged first. Users can choose different levels of merging, trading off more details in the data against less visual clutter in the visualizations. The usability evaluation demonstrated that most participants could understand the concepts of the history views correctly and finished assigned tasks with a high accuracy and relatively fast response time.
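The DOI-driven sampling schedule described above can be sketched as follows. The exponential RC function and the minimum ratio are illustrative assumptions, not the tool's exact definitions:

```python
# Toy degree-of-interest (DOI) driven sampling schedule: recent windows
# get higher sampling ratios than older ones.

def rc_doi(age, decay=0.5):
    """Recent-change DOI: the newest window (age 0) has interest 1.0,
    and interest decays exponentially with window age."""
    return decay ** age

def sampling_ratios(n_windows, doi=rc_doi, min_ratio=0.05):
    """Map each window's DOI to a per-window sampling ratio, oldest first."""
    return [max(min_ratio, doi(n_windows - 1 - i)) for i in range(n_windows)]

ratios = sampling_ratios(5)
assert ratios[-1] == 1.0    # current window kept in full
assert all(r1 <= r2 for r1, r2 in zip(ratios, ratios[1:]))  # interest rises toward now
```

Swapping `rc_doi` for a periodic function would give the PP behavior, and letting the user edit the function interactively corresponds to the transfer-function-style adjustment mentioned above.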