Structure and assembly of desmosome junctions - biosynthesis, processing, and transport of the major protein and glycoprotein components in cultured epithelial cells
Analogies between the crossing number and the tangle crossing number
Tanglegrams are special graphs that consist of a pair of rooted binary trees
with the same number of leaves, and a perfect matching between the two
leaf-sets. These objects are of use in phylogenetics and are represented with
straight-line drawings where the leaves of the two plane binary trees are on two
parallel lines and only the matching edges can cross. The tangle crossing
number of a tanglegram is the minimum crossing number over all such drawings
and is related to biologically relevant quantities, such as the number of times
a parasite switched hosts.
Our main results for tanglegrams, which parallel known theorems for crossing
numbers, are as follows. The removal of a single matching edge in a tanglegram
with leaves decreases the tangle crossing number by at most , and this
is sharp. Additionally, if is the maximum tangle crossing number of
a tanglegram with leaves, we prove
. Further,
we provide an algorithm for computing non-trivial lower bounds on the tangle
crossing number in time. This lower bound may be tight, even for
tanglegrams with tangle crossing number .
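For readers who want to experiment with these objects, the crossing count of one fixed drawing is easy to compute: two matching edges cross exactly when their endpoints appear in opposite relative order on the two parallel lines, so evaluating a given layout reduces to counting inversions of a permutation. The sketch below illustrates only this evaluation step on a toy tanglegram (the leaf labels and matching are made up); it is not the paper's lower-bound algorithm, and the minimisation over all plane drawings that defines the tangle crossing number is not shown.

```python
# Illustrative sketch (not the paper's algorithm): count the matching-edge
# crossings of one fixed tanglegram drawing by counting inversions of the
# permutation induced by the two leaf orders. The tangle crossing number is
# the minimum of this count over all plane embeddings of the two trees.

def crossings_in_drawing(left_order, right_order, matching):
    """Count matching-edge crossings for one fixed drawing.

    left_order, right_order: top-to-bottom leaf orders on the two parallel
        lines (hypothetical leaf labels).
    matching: dict mapping each left leaf to its matched right leaf.
    """
    right_pos = {leaf: i for i, leaf in enumerate(right_order)}
    # Permutation obtained by reading the left leaves in drawing order.
    perm = [right_pos[matching[leaf]] for leaf in left_order]
    # Count inversions (quadratic for clarity; O(n log n) is possible).
    return sum(1 for i in range(len(perm))
                 for j in range(i + 1, len(perm))
                 if perm[i] > perm[j])

# Toy example with 4 leaves on each tree.
left = ["a", "b", "c", "d"]
right = ["w", "x", "y", "z"]
match = {"a": "y", "b": "w", "c": "z", "d": "x"}
print(crossings_in_drawing(left, right, match))  # -> 3 crossings for this drawing
```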
Hydrodynamical simulations of the Sunyaev–Zel'dovich effect: the kinetic effect
We use hydrodynamical N-body simulations to study the kinetic Sunyaev–Zel'dovich effect. We construct sets of maps, one square degree in size, in three different cosmological models. We confirm earlier calculations that on the scales studied the kinetic effect is much smaller than the thermal (except close to the thermal null point), with an rms dispersion smaller by about a factor of 5 in the Rayleigh–Jeans region. We study the redshift dependence of the rms distortion and the pixel distribution at the present epoch. We compute the angular power spectra of the maps, including their redshift dependence, and compare them with the thermal Sunyaev–Zel'dovich effect and with the expected cosmic microwave background anisotropy spectrum as well as with determinations by other authors. We correlate the kinetic effect with the thermal effect both pixel-by-pixel and for identified thermal sources in the maps to assess the extent to which the kinetic effect is enhanced in locations of strong thermal signal.
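As background to the quantities being mapped, the thermal and kinetic signals along a single line of sight are the standard integrals y = (sigma_T k_B / m_e c^2) * integral(n_e T_e dl) and dT/T = -(sigma_T / c) * integral(n_e v_r dl). The sketch below accumulates both on a sampled sight line through gridded gas properties; the arrays and values are placeholders, not output of the simulations described above.

```python
# Minimal sketch of how the thermal (y) and kinetic (dT/T) SZ signals are
# accumulated along one line of sight through gridded simulation output.
# The inputs n_e (electron density, m^-3), T_e (K) and v_r (radial peculiar
# velocity, m/s, positive away from the observer) are assumed placeholders.
import numpy as np

SIGMA_T = 6.6524587e-29     # Thomson cross-section [m^2]
K_B     = 1.380649e-23      # Boltzmann constant [J/K]
M_E_C2  = 8.1871057769e-14  # electron rest energy [J]
C       = 2.99792458e8      # speed of light [m/s]

def sz_along_sightline(n_e, T_e, v_r, dl):
    """Return (y, dT_over_T_kinetic) for one sight line.

    n_e, T_e, v_r: 1D arrays sampled along the line of sight.
    dl: path length per sample [m].
    """
    # Thermal effect: Compton y-parameter.
    y = SIGMA_T * K_B / M_E_C2 * np.sum(n_e * T_e) * dl
    # Kinetic effect: Doppler shift from the bulk electron motion.
    dT_over_T = -SIGMA_T / C * np.sum(n_e * v_r) * dl
    return y, dT_over_T

# Toy usage: a ~500 kpc slab of hot gas receding at 300 km/s.
n = 100
y, dT = sz_along_sightline(np.full(n, 1e3), np.full(n, 5e7),
                           np.full(n, 3e5), 1.5e20)
print(y, dT)  # y of order 1e-5, kinetic dT/T of order 1e-6
```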
Catalog Extraction in SZ Cluster Surveys: a matched filter approach
We present a method based on matched multifrequency filters for extracting
cluster catalogs from Sunyaev-Zel'dovich (SZ) surveys. We evaluate its
performance in terms of completeness, contamination rate and photometric
recovery for three representative types of SZ survey: a high resolution single
frequency radio survey (AMI), a high resolution ground-based multiband survey
(SPT), and the Planck all-sky survey. These surveys are not purely flux
limited, and they lose completeness significantly before their point-source
detection thresholds. Contamination remains relatively low at <5% (less than
30%) for a detection threshold set at S/N=5 (S/N=3). We identify photometric
recovery as an important source of catalog uncertainty: dispersion in recovered
flux from multiband surveys is larger than the intrinsic scatter in the Y-M
relation predicted from hydrodynamical simulations, while photometry in the
single frequency survey is seriously compromised by confusion with primary
cosmic microwave background anisotropy. The latter effect implies that
follow-up observations in other wavebands (e.g., 90 GHz, X-ray) of single
frequency surveys will be required. Cluster morphology can cause a bias in the
recovered Y-M relation, but has little effect on the scatter; the bias would be
removed during calibration of the relation. Point source confusion only
slightly decreases multiband survey completeness; single frequency survey
completeness could be significantly reduced by radio point source confusion,
but this remains highly uncertain because we do not know the radio counts at
the relevant flux levels.
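The core of such an extraction pipeline is the matched filter itself. As a simplified illustration, the single-frequency version (which a multifrequency filter generalises by adding the known SZ frequency dependence) is psi(k) proportional to tau(k)/P(k), where tau is the beam-convolved cluster profile and P the noise power spectrum. The sketch below applies this filter to a map with numpy FFTs; the map, profile and noise spectrum are assumed inputs, normalisation constants depend on FFT and pixel-area conventions, and this is not the authors' pipeline.

```python
# Hedged sketch of a single-frequency matched filter: psi(k) ~ tau(k)/P(k),
# normalised so the filtered map estimates the central cluster amplitude
# (up to FFT normalisation conventions). All inputs are placeholders.
import numpy as np

def matched_filter(sky_map, profile, noise_power):
    """Filter a 2D map with a matched filter.

    sky_map:     2D array, observed map.
    profile:     2D array, beam-convolved cluster template (real space,
                 centred, same shape as sky_map).
    noise_power: 2D array, strictly positive noise power on the FFT grid.
    """
    tau = np.fft.fft2(np.fft.ifftshift(profile))
    # Normalisation: sigma^2 = 1 / sum(|tau|^2 / P).
    sigma2 = 1.0 / np.sum(np.abs(tau) ** 2 / noise_power)
    psi = sigma2 * np.conj(tau) / noise_power
    filtered = np.fft.ifft2(psi * np.fft.fft2(sky_map)).real
    return filtered  # peaks above an S/N threshold become catalogue entries
```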
One More Awareness Gap? The Behaviour–Impact Gap Problem
Preceding research has made hardly any attempt to measure the ecological impacts of pro-environmental behaviour in an objective way. Those impacts were instead assumed or calculated. The research described herein scrutinized the ecological impact reductions achieved through pro-environmental behaviour and raised the question of how much of a reduction in carbon footprint can be achieved through voluntary action without actually affecting the socio-economic determinants of life. A survey was carried out in order to measure the difference between the ecological footprint of “green” and “brown” consumers. No significant difference was found between the ecological footprints of the two groups, suggesting that individual pro-environmental attitudes and behaviour do not always reduce the environmental impacts of consumption. This finding resulted in the formulation of a new proposition called the BIG (behaviour–impact gap) problem, which is an interesting addition to research in the field of environmental awareness gaps.
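The central empirical claim, that the two groups' footprints do not differ significantly, rests on a two-sample comparison. The sketch below shows the generic form such a test can take (a Welch t-test on hypothetical footprint values); the paper's actual survey data and statistical procedure may differ.

```python
# Illustrative only: the kind of two-sample comparison behind a statement
# such as "no significant difference between the ecological footprints of
# the two groups". The footprint values below are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
green_footprints = rng.normal(loc=4.9, scale=1.2, size=120)  # gha/person, hypothetical
brown_footprints = rng.normal(loc=5.0, scale=1.3, size=140)

t_stat, p_value = stats.ttest_ind(green_footprints, brown_footprints,
                                  equal_var=False)  # Welch's t-test
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A p-value above 0.05 would be read as: attitude alone does not show up as
# a measurable footprint difference (the behaviour-impact gap).
```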
A comparison of the galaxy peculiar velocity field with the PSCz gravity field - a Bayesian hyper-parameter method
We constructed a Bayesian hyper-parameter statistical method to quantify the
difference between predicted velocities derived from the observed galaxy
distribution in the IRAS-PSCz redshift survey and peculiar
velocities measured using different distance indicators. In our analysis we
find that the model-data comparison becomes unreliable beyond 70 h^-1 Mpc
because of the inadequate sampling by the IRAS survey of prominent,
distant superclusters, like the Shapley Concentration. On the other hand, the
analysis of the velocity residuals shows that the PSCz gravity field provides
an adequate model of the local (≤ 70 h^-1 Mpc) peculiar velocity field. The
hyper-parameter combination of ENEAR, SN, A1SN and SFI++ catalogues in the
Bayesian framework constrains the amplitude of the linear flow to be
. For an rms density fluctuation in the PSCz galaxy
number density , we obtain an estimate of the
growth rate of density fluctuations ,
which is in excellent agreement with independent estimates based on different
techniques.
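The hyper-parameter technique itself has a compact form: each catalogue j receives a weight alpha_j in the likelihood, and marginalising over the weights (Hobson, Bridle & Lahav 2002) turns the usual sum of chi^2_j / 2 terms into a sum of (N_j / 2) ln chi^2_j terms. The sketch below applies that combination rule to a deliberately simplified model in which a single amplitude beta scales the predicted velocities; it is an illustration of the rule, not the paper's full velocity-field analysis.

```python
# Minimal sketch of the hyper-parameter combination rule for merging several
# distance-indicator catalogues: after marginalising over a weight alpha_j
# per dataset, the effective log-likelihood replaces -chi2_j/2 with
# -(N_j/2)*ln(chi2_j). The linear-flow model here is a simplified stand-in.
import numpy as np

def chi2(beta, v_obs, v_pred, sigma):
    """Chi-squared of one catalogue for flow amplitude beta."""
    return np.sum((v_obs - beta * v_pred) ** 2 / sigma ** 2)

def hyperparameter_loglike(beta, catalogues):
    """catalogues: list of (v_obs, v_pred, sigma) arrays, one per survey."""
    logL = 0.0
    for v_obs, v_pred, sigma in catalogues:
        n_j = len(v_obs)
        logL += -0.5 * n_j * np.log(chi2(beta, v_obs, v_pred, sigma))
    return logL

# The posterior on beta follows from evaluating this on a grid, e.g.:
# betas = np.linspace(0.1, 1.0, 200)
# post = np.exp([hyperparameter_loglike(b, catalogues) for b in betas])
```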
Detecting Sunyaev-Zel'dovich clusters with PLANCK: I. Construction of all-sky thermal and kinetic SZ-maps
All-sky thermal and kinetic Sunyaev-Zel'dovich (SZ) maps are presented for
assessing how well the PLANCK-mission can find and characterise clusters of
galaxies, especially in the presence of primary anisotropies of the cosmic
microwave background (CMB) and various galactic and ecliptic foregrounds. The
maps have been constructed from numerical simulations of structure formation in
a standard LCDM cosmology and contain all clusters out to redshifts of z = 1.46
with masses exceeding 5e13 M_solar/h. By construction, the maps properly
account for the evolution of cosmic structure, the halo-halo correlation
function, the evolving mass function, halo substructure and adiabatic gas
physics. The velocities in the kinetic map correspond to the actual density
environment at the cluster positions. We characterise the SZ-cluster sample by
measuring the distribution of angular sizes, the integrated thermal and kinetic
Comptonisations, the source counts in the three relevant PLANCK-channels, and
give the angular power-spectra of the SZ-sky. While our results are broadly
consistent with simple estimates based on scaling relations and spherically
symmetric cluster models, some significant differences are seen which may
affect the number of clusters detectable by PLANCK.
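As an illustration of how a thermal Comptonisation map is assembled from simulation output, each gas particle contributes sigma_T * (k_B / m_e c^2) * N_e * T_e spread over the pixel it projects onto. The sketch below does this with simple nearest-pixel binning; real map-making (including that described above) smooths each particle over its SPH kernel, and the particle arrays here are placeholders.

```python
# Hedged sketch of building a Compton-y map from simulation gas particles
# with nearest-pixel binning (no SPH smoothing). All inputs are placeholders.
import numpy as np

SIGMA_T = 6.6524587e-29                          # [m^2]
KB_OVER_MEC2 = 1.380649e-23 / 8.1871057769e-14   # k_B / (m_e c^2) [1/K]

def y_map(x, y, N_e, T_e, npix, pixel_area):
    """Project particles onto an npix x npix Compton-y map.

    x, y:       pixel coordinates of each particle (floats in [0, npix)).
    N_e:        number of electrons represented by each particle.
    T_e:        electron temperature of each particle [K].
    pixel_area: physical area subtended by one pixel at the particle [m^2].
    """
    # Each particle adds sigma_T * k_B/(m_e c^2) * N_e * T_e / A_pixel to y.
    weights = SIGMA_T * KB_OVER_MEC2 * N_e * T_e / pixel_area
    ymap, _, _ = np.histogram2d(x, y, bins=npix,
                                range=[[0, npix], [0, npix]],
                                weights=weights)
    return ymap
```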
Parameterization Effects in the analysis of AMI Sunyaev-Zel'dovich Observations
Most Sunyaev-Zel'dovich (SZ) and X-ray analyses of galaxy clusters try to
constrain the cluster total mass and/or gas mass using parameterised models and
assumptions of spherical symmetry and hydrostatic equilibrium. By numerically
exploring the probability distributions of the cluster parameters given the
simulated interferometric SZ data in the context of Bayesian methods, and
assuming a beta-model for the electron number density we investigate the
capability of this model and analysis to return the simulated cluster input
quantities via three parameterisations. In parameterisation I we assume that the
temperature T is an input parameter. We find that parameterisation I can hardly constrain
the cluster parameters. We then investigate parameterisations II and III in
which fg(r200) replaces temperature as a main variable. In parameterisation II
we relate M_T(r200) and T assuming hydrostatic equilibrium. We find that
parameterisation II can constrain the cluster physical parameters but the
temperature estimate is biased low. In parameterisation III, the virial theorem
replaces the hydrostatic equilibrium assumption. We find that parameterisation
III results in unbiased estimates of the cluster properties. We generate a
second simulated cluster using a generalised NFW (GNFW) pressure profile and
analyse it with an entropy based model to take into account the temperature
gradient in our analysis and improve the cluster gas density distribution. This
model also constrains the cluster physical parameters and the results show a
radial decline in the gas temperature as expected. The mean cluster total mass
estimates are also within 1 sigma from the simulated cluster true values.
However, we find that, at least for interferometric SZ analysis in practice at
the present time, there are no differences in the AMI visibilities between the
two models. This may of course change as the instruments improve.
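For context on what parameterisations of this kind involve, an isothermal beta-model combined with hydrostatic equilibrium gives the enclosed total mass in closed form, M(<r) = 3 beta k_B T / (G mu m_p) * r^3 / (r^2 + r_c^2). The sketch below evaluates the textbook profile and mass relation; it is not the authors' Bayesian analysis, and the mean molecular weight mu = 0.6 is an assumed value.

```python
# Sketch of the standard ingredients: an isothermal beta-model electron
# density plus hydrostatic equilibrium, giving M(<r) in closed form.
# This is the textbook relation, not the paper's full pipeline.
import numpy as np

G    = 6.674e-11      # [m^3 kg^-1 s^-2]
K_B  = 1.380649e-23   # [J/K]
M_P  = 1.6726219e-27  # [kg]
MU   = 0.6            # mean molecular weight (assumed)
MPC  = 3.0857e22      # [m]
MSUN = 1.989e30       # [kg]

def n_e(r, n_e0, r_c, beta):
    """Beta-model electron density profile."""
    return n_e0 * (1.0 + (r / r_c) ** 2) ** (-1.5 * beta)

def hse_mass(r, T, r_c, beta):
    """Hydrostatic total mass inside r for an isothermal beta-model [kg]."""
    return 3.0 * beta * K_B * T / (G * MU * M_P) * r ** 3 / (r ** 2 + r_c ** 2)

# Example: T ~ 5 keV (5.8e7 K), r_c = 0.15 Mpc, beta = 2/3, r = 1.5 Mpc.
m = hse_mass(1.5 * MPC, 5.8e7, 0.15 * MPC, 2.0 / 3.0)
print(f"M(<1.5 Mpc) ~ {m / MSUN:.2e} Msun")  # of order a few 1e14 Msun
```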
Hydrodynamical simulations of the Sunyaev-Zel'dovich effect
We use a hydrodynamical N-body code to generate simulated maps, of size one
square degree, of the thermal SZ effect. We study three different cosmologies;
the currently-favoured low-density model with a cosmological constant, a
critical-density model and a low-density open model. We stack simulation boxes
corresponding to different redshifts in order to include contributions to the
Compton y-parameter out to the highest necessary redshifts. Our main results
are:
1. The mean y-distortion is around for low-density
cosmologies, and for critical density. These are below
current limits, but not by a wide margin in the former case.
2. In low-density cosmologies, the mean y-distortion comes from a broad range
of redshifts, the bulk coming from and a tail out to . For
critical-density models, most of the contribution comes from .
3. The number of SZ sources above a given depends strongly on instrument
resolution. For a one arcminute beam, there are around 0.1 sources per square
degree with in a critical-density Universe, and around 8 such
sources per square degree in low-density models. Low-density models with and
without a cosmological constant give very similar results.
4. We estimate that the Planck satellite will be able to see of order
25000 SZ sources if the Universe has a low density, or around 10000 if it has
critical density.
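Result 3 above, the strong dependence of source counts on resolution, is easy to reproduce schematically: smooth a y-map with a Gaussian beam and count the connected regions that remain above the chosen threshold. The sketch below does exactly that with scipy; the input map is a placeholder rather than one of the simulated maps described here.

```python
# Illustrative sketch of why source counts above a given y depend on beam
# size: convolve the map with a Gaussian beam, threshold it, and count
# connected regions. The y-map passed in is a placeholder.
import numpy as np
from scipy import ndimage

def source_count(y_map, fwhm_pix, y_threshold):
    """Count peaks above y_threshold after smoothing with a Gaussian beam."""
    sigma = fwhm_pix / 2.355                      # FWHM -> Gaussian sigma
    smoothed = ndimage.gaussian_filter(y_map, sigma)
    _, n_sources = ndimage.label(smoothed > y_threshold)
    return n_sources

# Larger beams dilute compact clusters and push them below the threshold,
# so counts drop as fwhm_pix grows, e.g.:
# for fwhm in (1, 2, 4):
#     print(fwhm, source_count(sim_y_map, fwhm, 1e-5))
```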
3D time series analysis of cell shape using Laplacian approaches
Background:
Fundamental cellular processes such as cell movement, division or food uptake critically depend on cells being able to change shape. Fast acquisition of three-dimensional image time series has now become possible, but we lack efficient tools for analysing shape deformations in order to understand the real three-dimensional nature of shape changes.
Results:
We present a framework for 3D+time cell shape analysis. The main contribution is three-fold: First, we develop a fast, automatic random walker method for cell segmentation. Second, a novel topology fixing method is proposed to fix segmented binary volumes without spherical topology. Third, we show that algorithms used for each individual step of the analysis pipeline (cell segmentation, topology fixing, spherical parameterization, and shape representation) are closely related to the Laplacian operator. The framework is applied to the shape analysis of neutrophil cells.
Conclusions:
The method we propose for cell segmentation is faster than the traditional random walker method or the level set method, and performs better on 3D time-series of neutrophil cells, which are comparatively noisy as stacks have to be acquired fast enough to account for cell motion. Our method for topology fixing outperforms the tools provided by SPHARM-MAT and SPHARM-PDM in terms of their successful fixing rates. The different tasks in the presented pipeline for 3D+time shape analysis of cells can be solved using Laplacian approaches, opening the possibility of eventually combining individual steps in order to speed up computations.
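For readers who want to see the Laplacian connection concretely, the standard random walker segmentation (Grady 2006), which the faster method described above builds on, assigns each unseeded voxel the probability that a random walker started there reaches a given seed first; these probabilities come from solving a sparse linear system derived from the graph Laplacian of the voxel lattice. The sketch below uses scikit-image's reference implementation on one 3D frame; the seed coordinates and volume are hypothetical, and this is not the authors' accelerated variant.

```python
# Sketch of standard random walker segmentation on a single 3D stack using
# scikit-image's reference implementation; seed positions are hypothetical.
import numpy as np
from skimage.segmentation import random_walker

def segment_cell(volume, cell_seed, background_seed, beta=130):
    """Segment one 3D stack into cell (True) vs background (False).

    volume:          3D intensity array (one time point of the series).
    cell_seed:       (z, y, x) voxel known to lie inside the cell.
    background_seed: (z, y, x) voxel known to lie in the background.
    beta:            edge-weight sharpness; higher = more contrast-driven.
    """
    labels = np.zeros(volume.shape, dtype=np.int32)
    labels[cell_seed] = 1
    labels[background_seed] = 2
    # Unseeded voxels get their labels from the Laplacian linear system.
    return random_walker(volume, labels, beta=beta) == 1

# A 3D+time analysis would apply this per frame, e.g.:
# masks = [segment_cell(frame, (20, 64, 64), (0, 0, 0)) for frame in stack]
```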
