Phase-conjugate reflection by degenerate four-wave mixing in a nematic liquid crystal in the isotropic phase
We report the generation of conjugate wave fronts by degenerate four-wave mixing in the isotropic phase of the nematic substance p-methoxy-benzylidene p-n-butylaniline. The temporal and spatial properties of the conjugate wave fronts are verified. The dependence of the nonlinear reflectivity on the pump-wave power and the temperature of the medium is discussed.
Obvious: a meta-toolkit to encapsulate information visualization toolkits. One toolkit to bind them all
This article describes "Obvious": a meta-toolkit that abstracts and encapsulates information visualization toolkits implemented in the Java language. It aims to unify their use and to postpone, until later in the development of visual analytics applications, the choice of which concrete toolkit(s) to use. We also report on the lessons we have learned when wrapping popular toolkits with Obvious, namely Prefuse, the InfoVis Toolkit, partly Improvise, JUNG, and other data management libraries. We show several examples of the uses of Obvious and of how the different toolkits can be combined, for instance by sharing their data models. We also show how Weka and RapidMiner, two popular machine-learning toolkits, have been wrapped with Obvious and can be used directly with all the other wrapped toolkits. We expect Obvious to start a co-evolution process: Obvious is meant to evolve as more components of information visualization systems become consensual. It is also designed to help information visualization systems adhere to best practices, to provide a higher level of interoperability, and to leverage the domain of visual analytics.
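As a rough illustration of the wrapping idea (written in Python rather than Java, with invented class names, so none of this reflects Obvious's actual API), the sketch below shows how a meta-toolkit can expose one table abstraction while delegating storage to interchangeable backend adapters, which is what lets otherwise separate toolkits share a data model.

```python
# Hypothetical illustration of the meta-toolkit/adapter idea: one abstract data-table
# interface, two interchangeable "backend" adapters. Class names are invented and do
# not correspond to Obvious's real (Java) API.
from abc import ABC, abstractmethod

class DataTable(ABC):
    """Minimal unified table abstraction a visualization component could program against."""
    @abstractmethod
    def add_row(self, **fields): ...
    @abstractmethod
    def rows(self): ...

class ListBackedTable(DataTable):
    """Adapter over a plain in-memory list (stands in for one concrete toolkit's model)."""
    def __init__(self):
        self._rows = []
    def add_row(self, **fields):
        self._rows.append(dict(fields))
    def rows(self):
        return iter(self._rows)

class ColumnarTable(DataTable):
    """Adapter over column arrays (stands in for a different toolkit's data model)."""
    def __init__(self, columns):
        self._cols = {c: [] for c in columns}
    def add_row(self, **fields):
        for c in self._cols:
            self._cols[c].append(fields.get(c))
    def rows(self):
        n = len(next(iter(self._cols.values()), []))
        return ({c: v[i] for c, v in self._cols.items()} for i in range(n))

def summarize(table: DataTable, field: str) -> float:
    """A 'toolkit-agnostic' component: it only sees the shared abstraction."""
    values = [r[field] for r in table.rows()]
    return sum(values) / len(values)

for table in (ListBackedTable(), ColumnarTable(["city", "population"])):
    table.add_row(city="Paris", population=2_100_000)
    table.add_row(city="Lyon", population=520_000)
    print(type(table).__name__, summarize(table, "population"))
```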
Entanglement Generation of Clifford Quantum Cellular Automata
Clifford quantum cellular automata (CQCAs) are a special kind of quantum
cellular automata (QCAs) that incorporate Clifford group operations for the
time evolution. Despite being classically simulable, they can be used as basic
building blocks for universal quantum computation. This is due to the
connection to translation-invariant stabilizer states and their entanglement
properties. We will give a self-contained introduction to CQCAs and investigate
the generation of entanglement under CQCA action. Furthermore, we will discuss
finite configurations and applications of CQCAs.
Comment: To appear in the "DPG spring meeting 2009" special issue of Applied Physics
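The classical simulability mentioned above can be made concrete with a stabilizer-tableau toy model. The sketch below is an illustrative example, not the specific automata studied in the paper: it iterates one translation-invariant Clifford rule (a Hadamard layer followed by nearest-neighbour CZ gates on a ring) and tracks the entanglement entropy of a half chain from the GF(2) rank of the restricted stabilizer generator matrix.

```python
# Hypothetical sketch (not the paper's specific automaton): one translation-invariant
# Clifford rule, a Hadamard on every site followed by CZ on nearest neighbours of a ring,
# simulated with a phase-free stabilizer tableau; block entanglement entropy is computed
# from the GF(2) rank of the restricted generator matrix.
import numpy as np

def gf2_rank(m):
    """Rank of a binary matrix over GF(2) by Gaussian elimination."""
    m = m.copy() % 2
    rank, rows, cols = 0, m.shape[0], m.shape[1]
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if m[r, c]), None)
        if pivot is None:
            continue
        m[[rank, pivot]] = m[[pivot, rank]]
        for r in range(rows):
            if r != rank and m[r, c]:
                m[r] ^= m[rank]
        rank += 1
    return rank

def cqca_step(x, z):
    """One automaton step: H on every qubit, then CZ on every ring edge (phases ignored)."""
    n = x.shape[1]
    x, z = z.copy(), x.copy()                 # H swaps the X and Z parts
    new_z = z.copy()
    for a in range(n):                        # CZ(a, a+1): Z_a += X_{a+1}, Z_{a+1} += X_a
        b = (a + 1) % n
        new_z[:, a] ^= x[:, b]
        new_z[:, b] ^= x[:, a]
    return x, new_z

def block_entropy(x, z, block):
    """Entanglement entropy (in bits) of qubit set `block` for the stabilizer state (x|z)."""
    cols = np.concatenate([x[:, block], z[:, block]], axis=1)
    return gf2_rank(cols) - len(block)

n, steps = 12, 6
x = np.zeros((n, n), dtype=np.uint8)          # start from |0...0>: generators Z_1..Z_n
z = np.eye(n, dtype=np.uint8)
half = list(range(n // 2))
for t in range(steps + 1):
    print(f"t={t}  S(half chain)={block_entropy(x, z, half)}")
    x, z = cqca_step(x, z)
```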
Scaling gridded river networks for macroscale hydrology: Development, analysis, and control of error
A simple and robust river network scaling algorithm (NSA) is presented to rescale fine-resolution networks to any coarser resolution. The algorithm was tested over the Danube River basin and the European continent. Coarse-resolution networks, at 2.5, 5, 10, and 30 min resolution, were derived from higher-resolution gridded networks using NSA, and geomorphometric attributes such as river order, shape index, and width function were calculated and compared at each resolution. Simple scaling relationships were found to predict decreasing river lengths with coarser-resolution data; this relationship can be used to correct river length as a function of grid resolution. The length-corrected width functions of the major river basins in Europe were compared at different resolutions to assess river network performance. The discretization errors in representing basin area and river lengths at coarser resolutions were analyzed, and simple relationships were found to calculate the minimum number of grid cells needed to maintain the catchment area and length within a desired level of accuracy. This relationship among geomorphological characteristics, such as shape index and width function (derived from gridded networks at different resolutions), suggests that a minimum of 200–300 grid cells is necessary to maintain the geomorphological characteristics of the river networks with sufficient accuracy.
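The length-correction idea can be illustrated with a small curve-fitting sketch. The power-law functional form and all numbers below are assumptions for illustration only; the paper's actual scaling relationship may differ.

```python
# Hypothetical sketch: fit a power law relating mapped river length to grid resolution
# and use it to correct a coarse-resolution length estimate. The functional form
# L(dx) = L_ref * (dx_ref / dx)**beta and the sample values are illustrative assumptions.
import numpy as np

resolutions_min = np.array([2.5, 5.0, 10.0, 30.0])              # grid resolution (arc minutes)
mapped_length_km = np.array([2850.0, 2760.0, 2610.0, 2380.0])   # made-up mainstem lengths

# Fit log(L) against log(dx / dx_ref), with dx_ref the finest resolution.
dx_ref = resolutions_min[0]
slope, log_L_ref = np.polyfit(np.log(resolutions_min / dx_ref), np.log(mapped_length_km), 1)
beta = -slope                                                    # lengths shrink as dx grows

def correct_length(length_at_dx, dx, dx_ref=dx_ref, beta=beta):
    """Estimate the fine-resolution length from a coarse-resolution measurement."""
    return length_at_dx * (dx / dx_ref) ** beta

print(f"fitted exponent beta = {beta:.3f}")
print(f"30-min length corrected to ~{correct_length(2380.0, 30.0):.0f} km at 2.5-min scale")
```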
Shortest path discovery of complex networks
In this paper we present an analytic study of sampled networks in the case of
some important shortest-path sampling models. We present analytic formulas for
the probability of edge discovery in the case of an evolving and a static
network model. We also show that the number of discovered edges in a finite
network scales much slower than predicted by earlier mean field models.
Finally, we calculate the degree distribution of sampled networks, and we
demonstrate that they are analogous to a degraded network obtained by randomly
removing edges from the original network.
Comment: 10 pages, 4 figures
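A small simulation makes the sampling model concrete. The sketch below is an illustrative experiment, not the paper's analytic treatment: it samples shortest paths between random node pairs, measures the fraction of edges discovered, and compares the sampled graph with a randomly edge-deleted copy of the same size.

```python
# Illustrative shortest-path sampling experiment (assumptions: Barabasi-Albert test graph,
# uniformly random source/target pairs); not the analytic model from the paper.
import random
import networkx as nx

def sample_by_shortest_paths(G, n_pairs, seed=0):
    """Return the set of edges discovered by shortest paths between random node pairs."""
    rng = random.Random(seed)
    nodes = list(G.nodes)
    discovered = set()
    for _ in range(n_pairs):
        s, t = rng.sample(nodes, 2)
        path = nx.shortest_path(G, s, t)
        discovered.update(frozenset(e) for e in zip(path, path[1:]))
    return discovered

G = nx.barabasi_albert_graph(2000, 3, seed=1)
edges = sample_by_shortest_paths(G, n_pairs=500)
print(f"discovered {len(edges) / G.number_of_edges():.1%} of edges with 500 node pairs")

# Compare with uniform random edge deletion that keeps the same number of edges.
rng = random.Random(2)
H_sampled, H_random = nx.Graph(), nx.Graph()
H_sampled.add_edges_from(tuple(e) for e in edges)
H_random.add_edges_from(rng.sample(list(G.edges), len(edges)))
print("mean degree, shortest-path sample :",
      2 * H_sampled.number_of_edges() / H_sampled.number_of_nodes())
print("mean degree, random edge deletion :",
      2 * H_random.number_of_edges() / H_random.number_of_nodes())
```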
Global system of rivers: Its role in organizing continental land mass and defining land-to-ocean linkages
The spatial organization of the Earth's land mass is analyzed using a simulated topological network (STN-30p) representing potential flow pathways across the entire nonglacierized surface of the globe at 30-min (longitude × latitude) spatial resolution. We discuss a semiautomated procedure to develop this topology combining digital elevation models and manual network editing. STN-30p was verified against several independent sources including map products and drainage basin statistics, although we found substantial inconsistency within the extant literature itself. A broad suite of diagnostics is offered that quantitatively describes individual grid cells, river segments, and complete drainage systems spanning orders 1 through 6 based on the Strahler classification scheme. Continental and global-scale summaries of key STN-30p attributes are given. Summaries are also presented which distinguish basins that potentially deliver discharge to an ocean (exorheic) from those that potentially empty into an internal receiving body (endorheic). A total of 59,122 individual grid cells constitute the global nonglacierized land mass. At 30-min spatial resolution, the cells are organized into 33,251 distinct river segments which define 6152 drainage basins. A global total of 133.1 × 10^6 km^2 bears STN-30p flow paths with a total length of 3.24 × 10^6 km. The organization of river networks has an important role in linking land mass to ocean. From a continental perspective, low-order river segments (orders 1–3) drain the largest fraction of land (90%) and thus constitute a primary source area for runoff and constituents. From an oceanic perspective, however, the small number (n=101) of large drainage systems (orders 4–6) predominates, draining 65% of global land area and subsuming a large fraction of the otherwise spatially remote low-order rivers. Along river corridors, only 10% of land mass is within 100 km of a coastline, 25% is within 250 km, and 50% is within 750 km. The global mean distance to river mouth is 1050 km, with individual continental values from 460 to 1340 km. The Mediterranean/Black Sea and Arctic Ocean are the most land-dominated of all oceans, with land:ocean area ratios of 4.4 and 1.2, respectively; the remaining oceans show ratios from 0.55 to 0.13. We discuss limitations of STN-30p together with its potential role in future global change studies. STN-30p is geographically linked to several hundred river discharge and chemistry monitoring stations to provide a framework for calibrating and validating macroscale hydrology and biogeochemical flux models.
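Strahler ordering, used above to classify the drainage systems, is easy to state in code. The sketch below is a generic illustration on a small made-up river tree, not the STN-30p dataset or its processing chain.

```python
# Generic Strahler stream ordering on a small, made-up river tree (not STN-30p data).
# Each node is a confluence/outlet; the mapping points downstream from tributary to sink.
from collections import defaultdict

def strahler_orders(downstream):
    """Compute the Strahler order of every node given child->parent (downstream) links."""
    upstream = defaultdict(list)                 # parent -> list of tributary nodes
    nodes = set(downstream) | set(downstream.values())
    for child, parent in downstream.items():
        upstream[parent].append(child)

    order = {}
    def visit(node):
        kids = upstream[node]
        if not kids:                             # headwater cell: order 1
            order[node] = 1
            return 1
        child_orders = sorted((visit(k) for k in kids), reverse=True)
        top = child_orders[0]
        # Order increases only where two tributaries of the maximal order meet.
        order[node] = top + 1 if child_orders.count(top) >= 2 else top
        return order[node]

    outlet = (nodes - set(downstream)).pop()     # the one node with no downstream link
    visit(outlet)
    return order

# Hypothetical network: a, b, c, d are headwaters; g is the basin outlet.
downstream = {"a": "e", "b": "e", "c": "f", "d": "f", "e": "g", "f": "g"}
print(strahler_orders(downstream))               # headwaters 1; e, f order 2; outlet g order 3
```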
MDCC: Multi-Data Center Consistency
Replicating data across multiple data centers not only allows moving the data
closer to the user and, thus, reduces latency for applications, but also
increases the availability in the event of a data center failure. Therefore, it
is not surprising that companies like Google, Yahoo, and Netflix already
replicate user data across geographically different regions.
However, replication across data centers is expensive. Inter-data center
network delays are in the hundreds of milliseconds and vary significantly.
Synchronous wide-area replication is therefore considered infeasible with
strong consistency, and current solutions either settle for asynchronous
replication, which implies the risk of losing data in the event of failures,
restrict consistency to small partitions, or give up consistency entirely. With
MDCC (Multi-Data Center Consistency), we describe the first optimistic commit
protocol that does not require a master or partitioning and is strongly
consistent at a cost similar to that of eventually consistent protocols. MDCC can
commit transactions in a single round-trip across data centers in the normal
operational case. We further propose a new programming model which empowers the
application developer to handle longer and unpredictable latencies caused by
inter-data center communication. Our evaluation using the TPC-W benchmark with
MDCC deployed across 5 geographically diverse data centers shows that MDCC is
able to achieve throughput and latency similar to eventually consistent quorum
protocols and that MDCC is able to sustain a data center outage without a
significant impact on response times while guaranteeing strong consistency.
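As a rough intuition for single-round-trip commit across data centers, the toy sketch below sends a proposal to all replicas in parallel and declares the transaction committed once a quorum of acknowledgements arrives. The quorum rule (a simple majority), the simulated latencies, and all names are illustrative assumptions; this is not MDCC's actual commit protocol.

```python
# Toy illustration of quorum-based commit across geo-distributed replicas (assumed
# majority quorum and simulated latencies); not MDCC's real Fast-Paxos-style protocol.
import concurrent.futures as cf
import random
import time

DATA_CENTERS = ["us-west", "us-east", "eu-west", "ap-south", "ap-east"]  # hypothetical sites

def send_accept(dc, txn_id, seed=None):
    """Simulate one wide-area round trip to a data center and return its vote."""
    rng = random.Random(seed)
    time.sleep(rng.uniform(0.05, 0.25))          # pretend inter-DC latency of 50-250 ms
    return dc, True                              # this replica accepts the proposal

def commit(txn_id):
    quorum = len(DATA_CENTERS) // 2 + 1          # assumed majority quorum
    acks = 0
    with cf.ThreadPoolExecutor(max_workers=len(DATA_CENTERS)) as pool:
        futures = [pool.submit(send_accept, dc, txn_id, seed=i)
                   for i, dc in enumerate(DATA_CENTERS)]
        for fut in cf.as_completed(futures):     # count acknowledgements as they arrive
            dc, ok = fut.result()
            acks += ok
            if acks >= quorum:
                return f"txn {txn_id} committed after {acks}/{len(DATA_CENTERS)} acks"
    return f"txn {txn_id} aborted"

print(commit("t-42"))
```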
Reaching Approximate Byzantine Consensus with Multi-hop Communication
We address the problem of reaching consensus in the presence of Byzantine
faults. In particular, we are interested in investigating the impact of
message relay on the network connectivity required for a correct iterative
approximate Byzantine consensus algorithm to exist. The network is modeled by a
simple directed graph. We assume a node can send messages to another node that
is up to l hops away via forwarding by the intermediate nodes on the routes,
where l is a natural number. We characterize the necessary and
sufficient topological conditions on the network structure. The tight
conditions we found are consistent with the tight conditions identified for
l = 1, where only local communication is allowed, and are strictly weaker for
l >= 2. Let l* denote the length of a longest path in the given network. For
l >= l* and undirected graphs, our conditions hold if and only if n >= 3f + 1
and the node-connectivity of the given graph is at least 2f + 1, where n is the
total number of nodes and f is the maximal number of Byzantine nodes; and for
l >= l* and directed graphs, our condition is equivalent to the tight condition
found for exact Byzantine consensus.
Our sufficiency is shown by constructing a correct algorithm, wherein the
trim function is constructed based on a newly introduced minimal message cover
property. The proposed trim function also works over multi-graphs.
Comment: 24 pages, 1 figure. arXiv admin note: text overlap with arXiv:1203.188
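For intuition, the sketch below implements the classic single-hop trim-and-average iteration for approximate Byzantine consensus: each fault-free node drops the f largest and f smallest received values before averaging. It is a generic illustration under a complete-graph assumption, not the paper's multi-hop, message-cover-based trim function.

```python
# Classic single-hop trim-and-average iteration for approximate Byzantine consensus,
# shown on a complete graph for illustration; not the paper's multi-hop algorithm.
import random

def trimmed_mean(values, f):
    """Drop the f smallest and f largest values, then average the rest."""
    kept = sorted(values)[f:len(values) - f]
    return sum(kept) / len(kept)

def run(n=10, f=2, rounds=20, seed=0):
    rng = random.Random(seed)
    good = list(range(n - f))                       # fault-free node ids
    state = {i: rng.uniform(0.0, 100.0) for i in good}
    for r in range(rounds):
        new_state = {}
        for i in good:
            received = [state[j] for j in good]     # values from fault-free nodes
            # Byzantine nodes may send arbitrary, inconsistent values to each receiver.
            received += [rng.uniform(-1000.0, 1000.0) for _ in range(f)]
            new_state[i] = trimmed_mean(received, f)
        state = new_state
        spread = max(state.values()) - min(state.values())
        print(f"round {r + 1}: spread of fault-free values = {spread:.4f}")

run()
```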
Dynamic Composite Data Physicalization Using Wheeled Micro-Robots
This paper introduces dynamic composite physicalizations, a new class of physical visualizations that use collections of self-propelled objects to represent data. Dynamic composite physicalizations can be used both to give physical form to well-known interactive visualization techniques, and to explore new visualizations and interaction paradigms. We first propose a design space characterizing composite physicalizations based on previous work in the fields of Information Visualization and Human-Computer Interaction. We illustrate dynamic composite physicalizations in two scenarios demonstrating potential benefits for collaboration and decision making, as well as new opportunities for physical interaction. We then describe our implementation using wheeled micro-robots capable of locating themselves and sensing user input, before discussing limitations and opportunities for future work.