Building CMS Pixel Barrel Detector Modules
For the barrel part of the CMS pixel tracker about 800 silicon pixel detector
modules are required. The modules are bump bonded, assembled and tested at the
Paul Scherrer Institute. This article describes the experience acquired during
the assembly of the first ~200 modules.
Comment: 5 pages, 7 figures, Vertex200
Qualification Procedures of the CMS Pixel Barrel Modules
The CMS pixel barrel system will consist of three layers built of about 800
modules. One module contains 66560 readout channels and the full pixel barrel
system about 48 million channels. It is mandatory to test each channel for
functionality, noise level, trimming mechanism, and bump bonding quality.
Different methods to determine the bump bonding yield with electrical
measurements have been developed. Measurements of several operational
parameters are also included in the qualification procedure. Among them are
pixel noise, gains and pedestals. Test and qualification procedures of the
pixel barrel modules are described and some results are presented.
Comment: 7 pages, 7 figures. Contribution to Pixel 2005, September 5-8, 2005,
Bonn, Germany
Radiation hardness of CMS pixel barrel modules
Pixel detectors are used in the innermost part of the multi-purpose
experiments at the LHC and are therefore exposed to the highest fluences of
ionising radiation, which in this region of the detector consists mainly of
charged pions. The radiation hardness of all detector components has been
thoroughly tested up to the fluences expected at the LHC. In case of an LHC upgrade,
the fluence will be much higher and it is not yet clear how long the present
pixel modules will stay operative in such a harsh environment. The aim of this
study was to establish such a limit as a benchmark for other possible detector
concepts considered for the upgrade.
As the sensors and the readout chip are the parts most sensitive to radiation
damage, samples consisting of a small pixel sensor bump-bonded to a CMS-readout
chip (PSI46V2.1) have been irradiated with positive 200 MeV pions at PSI up to
6E14 Neq and with 21 GeV protons at CERN up to 5E15 Neq.
After irradiation the response of the system to beta particles from a Sr-90
source was measured to characterise the charge collection efficiency of the
sensor. Radiation induced changes in the readout chip were also measured. The
results show that the present pixel modules can be expected to be still
operational after a fluence of 2.8E15 Neq. Samples irradiated up to 5E15 Neq
still see the beta particles. However, further tests are needed to confirm
whether a stable operation with high particle detection efficiency is possible
after such a high fluence.
Comment: Contribution to the 11th European Symposium on Semiconductor
Detectors, June 7-11, 2009, Wildbad Kreuth, Germany
A Spectral Algorithm with Additive Clustering for the Recovery of Overlapping Communities in Networks
This paper presents a novel spectral algorithm with additive clustering
designed to identify overlapping communities in networks. The algorithm is
based on geometric properties of the spectrum of the expected adjacency matrix
in a random graph model that we call stochastic blockmodel with overlap (SBMO).
An adaptive version of the algorithm, that does not require the knowledge of
the number of hidden communities, is proved to be consistent under the SBMO
when the degrees in the graph are (slightly more than) logarithmic. The
algorithm is shown to perform well on simulated data and on real-world graphs
with known overlapping communities.
Comment: Journal of Theoretical Computer Science (TCS), Elsevier, to appear
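As a toy illustration of the geometric idea behind the abstract (our own sketch, not the paper's algorithm; the membership matrix Z, rate matrix B, and node counts below are invented for the example): under an overlapping blockmodel, the top-k eigenvectors of the expected adjacency matrix P = Z B Z^T span the column space of Z, so nodes with identical community memberships share the same spectral embedding, while overlap nodes land at a distinct, additive position.

```python
import numpy as np

# Illustrative SBMO-style setup (assumed, not from the paper):
# 60 nodes, 2 communities, nodes 25..34 belong to both (the overlap).
n, k = 60, 2
Z = np.zeros((n, k))
Z[:35, 0] = 1          # community 0: nodes 0..34
Z[25:, 1] = 1          # community 1: nodes 25..59
B = np.array([[0.9, 0.1],
              [0.1, 0.9]])      # intra/inter connection rates (assumed)
P = Z @ B @ Z.T                 # expected adjacency under the model

# Spectral step: embed each node via the top-k eigenvectors of P.
vals, vecs = np.linalg.eigh(P)
idx = np.argsort(-np.abs(vals))[:k]
X = vecs[:, idx]                # one embedding row per node
```

Because P has rank k, nodes with the same membership row of Z get exactly the same row of X, and an overlap node's embedding is the sum of the two pure-community embeddings; the paper's additive-clustering step exploits precisely this structure on the (noisy) observed adjacency matrix.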
A high-precision polarimeter
We have built a polarimeter to measure the electron beam polarization in
Hall C at JLab. Using a superconducting solenoid to drive the pure-iron
target foil into saturation, and a symmetrical setup to detect the Moller
electrons in coincidence, we achieve an accuracy of <1%. This sets a new
standard for Moller polarimeters.
Comment: 17 pages, 9 figures, submitted to N.I.
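For context, the standard Moller polarimetry relation (quoted from the general literature, not from this abstract) connects the measured coincidence asymmetry to the beam polarization through the known target polarization and analyzing power:

```latex
\epsilon = P_b \, P_t \, \langle A_{zz}(\theta)\rangle ,
\qquad
A_{zz}(\theta_{\mathrm{cm}} = 90^\circ) = -\frac{7}{9}
```

so a precisely known target polarization P_t (here, a pure-iron foil driven into magnetic saturation) and a well-determined acceptance-averaged analyzing power are what allow the beam polarization P_b to be extracted at the percent level.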
Contemporary Neighborhood Planning: A Critique Of Two Operating Programs
Contemporary neighborhood planning has developed, in part, as a reaction to the failures of traditional comprehensive planning. Critics of comprehensive planning suggest that it has favored business interests, has accomplished few tangible results, has excluded citizens from meaningful participation, has ignored the needs of local areas, and has failed to achieve a more equal distribution of public goods (Chapin, 1967; Friedman, 1971; Perin, 1967). In response to these criticisms, as well as to federal pressure for citizen participation, neighborhood-based planning programs have been established in a number of cities throughout the country. These neighborhood-level programs are meant to supplement comprehensive planning programs, and differ from them in a number of ways. First, these programs are typically problem-oriented rather than comprehensive in nature. Second, they focus on geographic subareas rather than the city as a functional whole. Third, they allow considerable input from the citizenry. Last, they typically adopt a short-term rather than a long-term perspective (Center for Governmental Studies, 1976; Rafter, 1980; Zuccotti, 1974).
Guaranteed clustering and biclustering via semidefinite programming
Identifying clusters of similar objects in data plays a significant role in a
wide range of applications. As a model problem for clustering, we consider the
densest k-disjoint-clique problem, whose goal is to identify the collection of
k disjoint cliques of a given weighted complete graph maximizing the sum of the
densities of the complete subgraphs induced by these cliques. In this paper, we
establish conditions ensuring exact recovery of the densest k cliques of a
given graph from the optimal solution of a particular semidefinite program. In
particular, the semidefinite relaxation is exact for input graphs corresponding
to data consisting of k large, distinct clusters and a smaller number of
outliers. This approach also yields a semidefinite relaxation for the
biclustering problem with similar recovery guarantees. Given a set of objects
and a set of features exhibited by these objects, biclustering seeks to
simultaneously group the objects and features according to their expression
levels. This problem may be posed as partitioning the nodes of a weighted
bipartite complete graph such that the sum of the densities of the resulting
bipartite complete subgraphs is maximized. As in our analysis of the densest
k-disjoint-clique problem, we show that the correct partition of the objects
and features can be recovered from the optimal solution of a semidefinite
program in the case that the given data consists of several disjoint sets of
objects exhibiting similar features. Empirical evidence from numerical
experiments supporting these theoretical guarantees is also provided.
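To make the model problem concrete, the sketch below computes the objective that the densest k-disjoint-clique problem maximizes: the sum, over the chosen disjoint cliques, of each induced subgraph's density (within-clique edge weight divided by clique size). This is our own illustration of the objective, not the paper's semidefinite relaxation; the toy graph and all names are assumptions.

```python
import numpy as np

def sum_of_densities(W, cliques):
    """Sum of densities of the subgraphs induced by the given node sets.

    W is a symmetric weight matrix with zero diagonal; the density of a
    clique C is (sum of edge weights within C) / |C|.
    """
    total = 0.0
    for C in cliques:
        C = np.asarray(C)
        sub = W[np.ix_(C, C)]
        total += sub.sum() / (2 * len(C))  # sub.sum() counts each edge twice
    return total

# Toy weighted complete graph on 5 nodes: two clear clusters
# {0,1,2} and {3,4} with strong internal and weak cross weights.
W = np.zeros((5, 5))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4)]:
    W[i, j] = W[j, i] = 1.0
for i, j in [(0, 3), (0, 4), (1, 3), (1, 4), (2, 3), (2, 4)]:
    W[i, j] = W[j, i] = 0.1

good = sum_of_densities(W, [[0, 1, 2], [3, 4]])   # the planted partition
bad = sum_of_densities(W, [[0, 1, 3], [2, 4]])    # a mixed-up partition
```

The paper's result says that for inputs like this (k large distinct clusters plus few outliers), the maximizer of this combinatorial objective can be recovered exactly from the optimal solution of a semidefinite relaxation, with no rounding heuristics needed.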
Reduction of Tc due to Impurities in Cuprate Superconductors
In order to explain how impurities affect unconventional superconductivity,
we study the non-magnetic impurity effect on the transition temperature using
the on-site U Hubbard model within the fluctuation exchange (FLEX)
approximation. We find that, on the surface, the reduction of Tc roughly
coincides with the well-known Abrikosov-Gor'kov formula. This coincidence
results from a cancellation between two effects: one is the reduction of the
attractive force due to randomness, and the other is the reduction of the
quasi-particle damping rate arising from the electron interaction. We also
study the impurity effect on underdoped cuprates, as systems showing
pseudogap phenomena. To this end, we adopt the pairing scenario for the
pseudogap and discuss how pseudogap phenomena affect the reduction of Tc by
impurities. We find that 'pseudogap breaking' by impurities plays an
essential role in underdoped cuprates and suppresses the Tc reduction due to
the superconducting (SC) fluctuation.
Comment: 14 pages, 28 figures, to be published in JPS
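For reference, the Abrikosov-Gor'kov formula mentioned above has the standard implicit form (quoted from the general literature, in units with hbar = k_B = 1; Gamma is the pair-breaking scattering rate and T_c0 the clean-limit transition temperature):

```latex
\ln\frac{T_{c0}}{T_c}
  = \psi\!\left(\frac{1}{2} + \frac{\Gamma}{2\pi T_c}\right)
  - \psi\!\left(\frac{1}{2}\right)
```

where psi is the digamma function; Tc is driven to zero at the critical rate Gamma_c = (pi / 2 e^gamma) T_c0, approximately 0.88 T_c0. The abstract's point is that the FLEX result only mimics this curve because two competing corrections to it nearly cancel.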
Saturation of nuclear matter and short-range correlations
A fully self-consistent treatment of short-range correlations in nuclear
matter is presented. Different implementations of the determination of the
nucleon spectral functions for different interactions are shown to be
consistent with each other. The resulting saturation densities are closer to
the empirical result when compared with (continuous-choice)
Brueckner-Hartree-Fock values. Arguments for the dominance of short-range
correlations in determining the nuclear-matter saturation density are
presented. A further survey of the role of long-range correlations suggests
that the inclusion of pionic contributions to ring diagrams in nuclear matter
leads to higher saturation densities than empirically observed. A possible
resolution of the nuclear-matter saturation problem is suggested.
Comment: 5 pages, 1 figure, to be published in Phys.Rev.Let