Solving the Bin-Packing Problem by Means of Tissue P System with 2-Division
The ability of tissue P systems with 2-division to solve NP-complete
problems in polynomial time is well known, and the literature contains
solutions to several such problems. Nonetheless, very few papers are
devoted to the Bin-packing problem. The reason may be the difficulty of
handling varying numbers of bins, capacities, and numbers of objects
using exclusively division rules that produce two offspring in each
application. In this paper we present the design of a family of tissue
P systems with 2-division that solves the Bin-packing problem in
polynomial time, combining design techniques that may be useful for
further research.
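The key design idea, reaching any of the k^n object-to-bin assignments through chains of binary choices, can be illustrated with a conventional sequential sketch. This is not the P-system construction itself: the function below enumerates one branch at a time, whereas the tissue P system produces all branches simultaneously via membrane division. The function name and the instance encoding are illustrative choices.

```python
from itertools import product

def bin_packing_decision(weights, capacity, k):
    """Decide whether `weights` fit into k bins of the given capacity.

    Sequential sketch of the search tree that 2-division rules explore
    in parallel: fixing each object's bin is a chain of binary choices,
    so a tissue P system reaches all k**n assignments after a
    polynomial number of division steps, while this loop visits them
    one by one.
    """
    n = len(weights)
    for assignment in product(range(k), repeat=n):  # all k**n branches
        loads = [0] * k
        for w, b in zip(weights, assignment):
            loads[b] += w
        if max(loads) <= capacity:
            return True
    return False
```

For example, `bin_packing_decision([4, 3, 3, 2], capacity=6, k=2)` returns `True` (bins {4, 2} and {3, 3}), while `bin_packing_decision([4, 4, 4], capacity=6, k=2)` returns `False`.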
Roughening of the (1+1) interfaces in two-component surface growth with an admixture of random deposition
We simulate competitive two-component growth on a one-dimensional
substrate. One component is a Poisson-type deposition that generates
Kardar-Parisi-Zhang (KPZ) correlations. The other is random deposition
(RD). We derive the universal scaling function of the interface width
for this model and show that the RD admixture acts as a dilatation
mechanism on the fundamental time and height scales but leaves the KPZ
correlations intact. This observation is generalized to other growth
models. It is shown that the flat-substrate initial condition is
responsible for the existence of an early non-scaling phase in the
interface evolution. The length of this initial phase is a
non-universal parameter, but its presence is universal. In application
to parallel and distributed computations, an important consequence of
the derived scaling is the existence of an upper bound on the
desynchronization in a conservative update algorithm for parallel
discrete-event simulations. It is shown that such algorithms are
generally scalable in a ring communication topology.
Comment: 16 pages, 16 figures, 77 references
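A minimal sketch of such a two-component mixture can be written in a few lines. The abstract does not specify the KPZ-class component, so ballistic deposition is used here as a stand-in KPZ-class rule, and `p` (the RD fraction) is an illustrative parameter; the width of the resulting interface grows with deposition time, as the scaling analysis predicts.

```python
import random

def grow(L=512, steps=200, p=0.5, seed=0):
    """Competitive two-component growth on a ring of L sites.

    With probability p an event is random deposition (RD, no
    correlations); otherwise it is ballistic deposition, a stand-in
    KPZ-class rule. Returns the interface width (RMS height
    fluctuation) after `steps` deposited monolayers.
    """
    rng = random.Random(seed)
    h = [0] * L
    for _ in range(steps * L):          # one monolayer = L deposition events
        i = rng.randrange(L)
        if rng.random() < p:
            h[i] += 1                   # RD: particle lands on top
        else:                           # ballistic: sticks at first contact
            h[i] = max(h[(i - 1) % L], h[i] + 1, h[(i + 1) % L])
    mean = sum(h) / L
    return (sum((x - mean) ** 2 for x in h) / L) ** 0.5
```

Comparing the width at early and late times (e.g. 10 versus 200 monolayers) shows the roughening regime before the width saturates at a system-size-dependent value.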
Synchronization Landscapes in Small-World-Connected Computer Networks
Motivated by a synchronization problem in distributed computing, we
studied a simple growth model on regular and small-world networks
embedded in one and two dimensions. We find that the synchronization
landscape (corresponding to the progress of the individual processors)
exhibits Kardar-Parisi-Zhang-like kinetic roughening on regular
networks with short-range communication links. Although the processors,
on average, progress at a nonzero rate, their spread (the width of the
synchronization landscape) diverges with the number of nodes
(desynchronized state), hindering efficient data management. When
random communication links are added on top of the one- and
two-dimensional regular networks (resulting in a small-world network),
large fluctuations in the synchronization landscape are suppressed and
the width approaches a finite value in the large system-size limit
(synchronized state). In the resulting synchronization scheme, the
processors make close-to-uniform progress at a nonzero rate without
global intervention. We obtain our results by "simulating the
simulations", based on the exact algorithmic rules, supported by
coarse-grained arguments.
Comment: 20 pages, 22 figures
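The width-suppression effect can be illustrated with a small sketch of the underlying growth model. This is an assumption-laden toy version, not the authors' exact rules: a node advances its local virtual time (by an exponentially distributed increment) only when it is a local minimum among its neighbours, and `p_link` adds random long-range shortcuts on top of the one-dimensional ring.

```python
import random

def simulate_width(L=500, steps=800, p_link=0.0, seed=1):
    """Width of the synchronization landscape after `steps` parallel
    update rounds. Each node advances its local virtual time only when
    it is a local minimum among its neighbours; p_link is the
    probability of attaching one random shortcut per node."""
    rng = random.Random(seed)
    nbrs = [[(i - 1) % L, (i + 1) % L] for i in range(L)]
    for i in range(L):
        if rng.random() < p_link:
            j = rng.randrange(L)
            if j != i:                  # attach a small-world shortcut
                nbrs[i].append(j)
                nbrs[j].append(i)
    tau = [0.0] * L
    for _ in range(steps):
        snapshot = tau[:]               # parallel update: frozen landscape
        for i in range(L):
            if all(snapshot[i] <= snapshot[j] for j in nbrs[i]):
                tau[i] += rng.expovariate(1.0)
    mean = sum(tau) / L
    return (sum((t - mean) ** 2 for t in tau) / L) ** 0.5
```

Running this with `p_link=0.0` (regular ring) versus `p_link=1.0` (small world) reproduces the qualitative result: shortcuts suppress the spread of the processors' progress.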
Dense Building Instrumentation Application for City-Wide Structural Health Monitoring
The Community Seismic Network (CSN) has partnered with the NASA Jet Propulsion Laboratory (JPL) to initiate a campus-wide structural monitoring program covering all buildings on the premises. The JPL campus serves as a proxy for a densely instrumented urban city, with localized vibration measurements collected throughout the free field and built environment. Instrumenting the entire campus provides horizontally dense geospatial measurements of soil response; in addition, five buildings have been instrumented on every floor. Each building has a unique structural system as well as varying amounts of structural information available via structural drawings, making several levels of assessment and evaluation possible. Computational studies focused on damage detection applied to the campus structural network are demonstrated for a collection of buildings. For campus-wide real-time and post-event evaluation, ground and building response products using CSN data illustrate the usefulness of higher spatial resolution compared with what was previously typical of sparser instrumentation.
The Alliance for Cellular Signaling Plasmid Collection: A Flexible Resource for Protein Localization Studies and Signaling Pathway Analysis
Cellular responses to inputs that vary both temporally and spatially are determined by complex relationships between the components of cell signaling networks. Analysis of these relationships requires access to a wide range of experimental reagents and techniques, including the ability to express the protein components of the model cells in a variety of contexts. As part of the Alliance for Cellular Signaling, we developed a robust method for cloning large numbers of signaling ORFs into Gateway® entry vectors, and we created a wide range of compatible expression platforms for proteomics applications. To date, we have generated over 3000 plasmids that are available to the scientific community via the American Type Culture Collection. We have established a website at www.signaling-gateway.org/data/plasmid/ that allows users to browse, search, and BLAST Alliance for Cellular Signaling plasmids. The collection primarily contains murine signaling ORFs, with an emphasis on kinases and G protein signaling genes. Here we describe the cloning, databasing, and application of this proteomics resource for large-scale subcellular localization screens in mammalian cell lines.
Community seismic network and localized earthquake situational awareness
Community-hosted seismic networks address the need for large numbers of sensors operating across a seismically active region in order to accurately measure the size and location of an earthquake, assess resulting damage, and provide alerts. The Community Seismic Network is one such strong-motion network, currently comprising hundreds of elements located in California. It consists of low-cost, three-component MEMS accelerometers capable of recording accelerations up to twice the acceleration of gravity. The primary product of the network is measurements of shaking of the ground and of every upper floor in buildings, at multiple locations, in the seconds during and following a major earthquake. Each sensor uses a small, dedicated ARM processor running Linux and analyzes time-series data in real time at hundreds of samples per second. The network reports shaking parameters that indicate the intensity of the structural response, such as maximum floor acceleration and velocity and the displacement of a floor in a building, as well as data products that depend on the response time histories. To do this, cloud computing has been extended through the use of statically defined subsets of sensors called cloudlets: smaller groups of similar sensors that carry out customized calculations for their locations. The measurements are reported as rapidly as possible following an earthquake so that they may be incorporated into structural diagnosis and prognosis applications that first responders can use to prioritize their initial disaster-management efforts. The cloudlet displays are customized for specific buildings, and they show in real time the instantaneous displacement, inter-story drift, and resonant frequencies and mode shapes obtained with system-identification software tools.
The real-time display products support decision-making about whether the potential for damage exists, what level of damage may have occurred and where, and whether total business disruption is necessary. City-wide dense monitoring makes it possible for emergency-response managers to prioritize, on a block-by-block scale, the target locations requiring first response based on reports of shaking intensity.
Downtown Los Angeles 52-Story High-Rise and Free-Field Response to an Oil Refinery Explosion
The ExxonMobil Corp. oil refinery in Torrance, California, experienced an explosion on February 18, 2015, causing ground shaking equivalent to that of a magnitude 2.0 earthquake. The impulse response for the source was computed from Southern California Seismic Network data for a single-force system with a value of 2×10^5 kN directed vertically downward. The refinery explosion produced an air pressure wave that was recorded 22.8 km away in a 52-story high-rise building in downtown Los Angeles by a dense accelerometer array that is a component of the Community Seismic Network. The array recorded anomalous waveforms on each floor displaying coherent arrivals that are consistent with the building's elastic response to a pressure wave caused by the refinery explosion. Using a finite-element model of the building, the force on the building on a floor-by-floor scale was found to range up to 1.42 kN, corresponding to a pressure perturbation of 7.7 Pa.
Update statistics in conservative parallel discrete event simulations of asynchronous systems
We model the performance of an ideal closed chain of L processing
elements that work in parallel in an asynchronous manner. Their state
updates follow a generic conservative algorithm. The conservative
update rule determines the growth of a virtual time surface. The
physics of this growth is reflected in the utilization (the fraction of
working processors) and in the interface width. We show that it is
possible to make an explicit connection between the utilization and the
macroscopic structure of the virtual time interface. We exploit this
connection to derive the theoretical probability distribution of
updates in the system within an approximate model. It follows that the
theoretical lower bound for the computational speed-up is s=(L+1)/4 for
L>3. Our approach uses simple statistics to count distinct
surface-configuration classes consistent with the model growth rule. It
enables one to compute analytically microscopic properties of an
interface that are unavailable by continuum methods.
Comment: 15 pages, 12 figures
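The utilization in the basic conservative model is easy to measure numerically. The sketch below is an illustrative toy version under stated assumptions (exponentially distributed local time increments, simultaneous parallel rounds): a processing element on a ring may update only when its local virtual time does not exceed those of its two neighbours, and the utilization is the long-run fraction of elements that update per round.

```python
import random

def utilization(L=500, steps=500, seed=2):
    """Average fraction of processing elements that update per parallel
    round in the basic conservative model: PE i advances its local
    virtual time tau[i] only when tau[i] is a minimum among
    {i-1, i, i+1} on a ring of L elements."""
    rng = random.Random(seed)
    tau = [0.0] * L
    total = 0
    for _ in range(steps):
        snap = tau[:]                   # frozen landscape for this round
        for i in range(L):
            if snap[i] <= snap[(i - 1) % L] and snap[i] <= snap[(i + 1) % L]:
                tau[i] += rng.expovariate(1.0)
                total += 1
    return total / (steps * L)
```

The measured steady-state utilization comes out near a quarter, so the computational speed-up s = uL grows linearly with the number of processing elements, consistent with the lower bound s = (L+1)/4 quoted in the abstract.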
The extent, nature and distribution of child poverty in India
Despite a long history, research on poverty has only relatively recently examined the issue of child poverty as a distinct topic of concern. This article examines how child poverty and well-being are now conceptualized, defined and measured, and presents a portrait of child poverty in India by social and cultural group and by geographic area. In December 2006, the UN General Assembly adopted a definition of child poverty which noted that children living in poverty are deprived of (among other things) nutrition, water and sanitation facilities, access to basic health care services, shelter and education. The definition noted that while poverty hurts every human being, ‘it is most threatening and harmful to children, leaving them unable to enjoy their rights, to reach their full potential and to participate as full members of the society’. Researchers have developed age-specific and gender-sensitive indicators of deprivation which conform to the UN definition of child poverty and which can be used to examine the extent and nature of child poverty in low- and middle-income countries. These new methods have ‘transformed the way UNICEF and many of its partners both understood and measured the poverty suffered by children’ (UNICEF, 2009). This article uses these methods and presents results on child poverty in India based on nationally representative household survey data.
Automatic generation of hardware/software interfaces
Enabling new applications for mobile devices often requires the use of specialized hardware to reduce power consumption. Because of time-to-market pressure, current design methodologies for embedded applications require an early partitioning of the design, allowing the hardware and software to be developed simultaneously, each adhering to a rigid interface contract. This approach is problematic for two reasons: (1) a detailed hardware-software interface is difficult to specify until one is deep into the design process, and (2) it prevents the later migration of functionality across the interface motivated by efficiency concerns or the addition of features. We address this problem using the Bluespec Codesign Language (BCL), which permits the designer to specify the hardware-software partition in the source code, allowing the compiler to synthesize efficient software and hardware along with transactors for communication between the partitions. The movement of functionality across the hardware-software boundary is accomplished by simply specifying a new partitioning, and since the compiler automatically generates the desired interface specifications, it eliminates yet another error-prone design task. In this paper we present BCL, an extension of a commercially available hardware design language (Bluespec SystemVerilog), a new software compiling scheme, and preliminary results generated using our compiler for various hardware-software decompositions of an Ogg Vorbis audio decoder and a ray-tracing application.
Funding: National Science Foundation (U.S.) (NSF #CCF-0541164); National Research Foundation of Korea (grant from the Korean Government (MEST), #R33-10095).
