    Morpho-kinematic properties of field S0 bulges in the CALIFA survey

    We study a sample of 28 S0 galaxies extracted from the integral-field spectroscopic (IFS) survey CALIFA. We combine an accurate two-dimensional (2D) multi-component photometric decomposition with the IFS kinematic properties of their bulges to understand their formation scenario. Our final sample is representative of S0s with high stellar masses (M_star/M_sun > 10^10). They lie mainly on the red sequence and live in relatively isolated environments similar to those of the field and loose groups. We use our 2D photometric decomposition to define the size and photometric properties of the bulges, as well as their location within the galaxies. We perform mock spectroscopic simulations mimicking our observed galaxies to quantify the impact of the underlying disc on our bulge kinematic measurements (λ and v/σ). We compare our bulge-corrected kinematic measurements with the results from Schwarzschild dynamical modelling. The good agreement confirms the robustness of our results and allows us to use the bulge reprojected values of λ and v/σ. We find that the photometric (n and B/T) and kinematic (v/σ and λ) properties of our field S0 bulges are not correlated. We demonstrate that this morpho-kinematic decoupling is intrinsic to the bulges and is not due to projection effects. We conclude that photometric diagnostics to separate different types of bulges (disc-like vs classical) might not be useful for S0 galaxies. The morpho-kinematic properties of S0 bulges derived in this paper suggest that they are mainly formed by dissipation processes happening at high redshift, but dedicated high-resolution simulations are necessary to better identify their origin.
    Comment: 31 pages, 19 figures. Accepted for publication in MNRAS.
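
    The rotational support quoted above is conventionally measured with the λ_R spin parameter of Emsellem et al. (2007) and a flux-weighted v/σ. Below is a minimal sketch of those two standard IFS estimators, not the paper's full pipeline (which also corrects for disc contamination via the mock simulations mentioned in the abstract); the array names are assumptions for illustration.

```python
import numpy as np

def lambda_r(flux, radius, vel, sigma):
    """Spin parameter lambda_R (Emsellem et al. 2007), summed over spaxels."""
    num = np.sum(flux * radius * np.abs(vel))
    den = np.sum(flux * radius * np.sqrt(vel**2 + sigma**2))
    return num / den

def v_over_sigma(flux, vel, sigma):
    """Flux-weighted v/sigma estimator for IFS maps."""
    return np.sqrt(np.sum(flux * vel**2) / np.sum(flux * sigma**2))
```

    Restricting the sums to spaxels inside the bulge-dominated region defined by the photometric decomposition yields bulge values of the kind discussed in the abstract.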

    Encoding problems in logic synthesis


    The Design of a Cube Calculus Machine Using SRAM-Based FPGA Reconfigurable Hardware: DEC's PeRLe-1 Board

    Presented in this thesis are new approaches to column compatibility checking and column-based input/output encoding for Curtis decompositions of switching functions. These approaches can be used in Curtis-type functional decomposition programs for applications in several scientific disciplines. Examples of applications are: minimization of combinational and sequential logic, mapping of logic functions to programmable logic devices such as CPLDs, MPGAs, and FPGAs, data encryption, data compression, pattern recognition, and image refinement. Presently, Curtis-type functional decomposition programs are used primarily for experimental purposes due to performance, quality, and compatibility issues. However, in the past few years a renewal of interest in the area of functional decomposition has resulted in significant improvements in the performance and quality of multi-level decomposition programs. The goal of this thesis is to introduce algorithms that can significantly improve the performance and quality of Curtis-type decomposition programs. In doing so, it is hoped that a Curtis-type decomposition program, complete with efficient, high-quality algorithms for decomposition, will be a feasible tool for use in one or more practical applications. Various tests and analyses were performed to evaluate the potential of the algorithms presented in this thesis for use in a high-quality Curtis-type decomposition program. Testing was done using MULTIS/GUD, a binary-input, binary-output Curtis-type decomposition program implemented at Portland State University by the Portland Oregon Logic Optimization Group.
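
    For a completely specified function, the column compatibility check reduces to counting the distinct columns of the decomposition chart: with column multiplicity μ, the bound-set variables can be encoded on ceil(log2 μ) intermediate wires. The sketch below illustrates only that counting step under this simplifying assumption (the thesis's algorithms additionally handle don't-cares, where merely compatible, not identical, columns may be merged).

```python
from itertools import product
from math import ceil, log2

def column_multiplicity(f, bound_vars, free_vars):
    """Count distinct columns of the decomposition chart of a completely
    specified Boolean function f, given as a callable on {var: 0/1} dicts."""
    columns = set()
    for b in product((0, 1), repeat=len(bound_vars)):
        col = tuple(
            f({**dict(zip(bound_vars, b)), **dict(zip(free_vars, a))})
            for a in product((0, 1), repeat=len(free_vars))
        )
        columns.add(col)
    return len(columns)

# Example: f = x1 XOR x2 XOR x3 with bound set {x1, x2} and free set {x3}.
f = lambda v: v["x1"] ^ v["x2"] ^ v["x3"]
mu = column_multiplicity(f, ["x1", "x2"], ["x3"])
print(mu, "distinct columns ->", ceil(log2(mu)), "intermediate wire(s)")  # 2 -> 1
```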

    A finite state machine synthesizer

    This thesis presents a Finite State Machine (FSM) synthesizer developed at Portland State University. The synthesizer starts from a high-level behavioral description, in which no states are specified, and generates the lower-level FSM descriptions needed for simulation and physical layout generation.
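
    One late step of any such synthesizer is assigning binary codes to the extracted symbolic states. The sketch below illustrates only that state-assignment step on an already-extracted transition list; the tuple format is a hypothetical simplification, since the PSU tool itself starts earlier, from a behavioral description with no explicit states.

```python
def synthesize_state_table(transitions):
    """Assign binary codes to the symbolic states in a transition list of
    (src_state, input, dst_state, output) tuples; return the encoded table."""
    states = sorted({t[0] for t in transitions} | {t[2] for t in transitions})
    width = max(1, (len(states) - 1).bit_length())   # bits needed for the codes
    code = {s: format(i, f"0{width}b") for i, s in enumerate(states)}
    return code, [(code[s], i, code[d], o) for s, i, d, o in transitions]

# Example: a two-state toggler that raises its output when leaving RUN.
code, table = synthesize_state_table([("IDLE", "1", "RUN", "0"),
                                      ("RUN", "1", "IDLE", "1")])
print(code)    # {'IDLE': '0', 'RUN': '1'}
print(table)   # [('0', '1', '1', '0'), ('1', '1', '0', '1')]
```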

    Modelling Orebody Structures: Block Merging Algorithms and Block Model Spatial Restructuring Strategies Given Mesh Surfaces of Geological Boundaries

    This paper describes a framework for capturing geological structures in a 3D block model and improving its spatial fidelity given new mesh surfaces. Using surfaces that represent geological boundaries, the objectives are to identify areas where refinement is needed, increase spatial resolution to minimize surface approximation error, reduce redundancy to increase the compactness of the model, and identify the geological domain on a block-by-block basis. These objectives are fulfilled by four system components which perform block-surface overlap detection, spatial structure decomposition, sub-block consolidation, and block tagging, respectively. The main contributions are a coordinate-ascent merging algorithm and a flexible architecture for updating the spatial structure of a block model when given multiple surfaces, which emphasizes the ability to selectively retain or modify previously assigned block labels. The techniques employed include block-surface intersection analysis based on the separating axis theorem and ray tracing for establishing the location of blocks relative to surfaces. To demonstrate the robustness and applicability of the proposed block merging strategy in a narrower setting, it is used to reduce block fragmentation in an existing model where surfaces are not given and the minimum block size is fixed. To obtain further insight, a systematic comparison with octree sub-blocking subsequently illustrates the inherent constraints of dyadic hierarchical decomposition and the importance of inter-scale merging. The results show that the proposed method produces merged blocks with less extreme aspect ratios and is highly amenable to parallel processing. The overall framework is applicable to orebody modelling given geological boundaries, and to 3D segmentation more generally, where there is a need to delineate spatial regions using mesh surfaces within a block model.
    Comment: Keywords: block merging algorithms, block model structure, spatial restructuring, mesh surfaces, subsurface modelling, geological structures, sub-blocking, boundary correction, domain identification, iterative refinement, geospatial information system. 27-page article, 26 figures, 6 tables, plus supplementary material (17 pages).
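
    The block-tagging component can be illustrated with a standard ray-casting parity test: a block centroid lies inside the region bounded by a closed triangulated surface exactly when a ray from it crosses the mesh an odd number of times. Below is a minimal sketch assuming a watertight mesh given as 3x3 numpy triangle arrays; this is the textbook Möller-Trumbore intersection test, not necessarily the paper's exact implementation.

```python
import numpy as np

def ray_hits_triangle(origin, direction, tri, eps=1e-9):
    """Moller-Trumbore test: does the ray origin + t*direction (t > 0)
    intersect triangle tri (a 3x3 array of vertices)?"""
    v0, v1, v2 = tri
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1 @ p
    if abs(det) < eps:                       # ray parallel to triangle plane
        return False
    inv = 1.0 / det
    s = origin - v0
    u = (s @ p) * inv
    if u < 0.0 or u > 1.0:
        return False
    q = np.cross(s, e1)
    v = (direction @ q) * inv
    if v < 0.0 or u + v > 1.0:
        return False
    return (e2 @ q) * inv > eps              # hit lies in front of the origin

def centroid_inside(centroid, triangles):
    """Parity test: an odd number of crossings along any ray means the
    centroid lies inside the closed surface."""
    ray = np.array([0.0, 0.0, 1.0])
    return sum(ray_hits_triangle(centroid, ray, t) for t in triangles) % 2 == 1
```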

    CO Luminosity Density at High-z (COLDz) Survey: A Sensitive, Large-area Blind Search for Low-J CO Emission from Cold Gas in the Early Universe with the Karl G. Jansky Very Large Array

    We describe the CO Luminosity Density at High-z (COLDz) survey, the first spectral line deep field targeting CO(1–0) emission from galaxies at z = 1.95–2.85 and CO(2–1) emission at z = 4.91–6.70. The main goal of COLDz is to constrain the cosmic density of molecular gas at the peak epoch of cosmic star formation. By targeting both a wide (~51 arcmin^2) and a deep (~9 arcmin^2) area, the survey is designed to robustly constrain the bright end and the characteristic luminosity of the CO(1–0) luminosity function. An extensive analysis of the reliability of our line candidates, together with new techniques, provides the detailed completeness and statistical corrections necessary to determine the best constraints to date on the CO luminosity function. Our blind search for CO(1–0) uniformly selects starbursts and massive main-sequence galaxies based on their cold molecular gas masses. Our search also detects CO(2–1) line emission from optically dark, dusty star-forming galaxies at z > 5. We find a range of spatial sizes for the CO-traced gas reservoirs up to ~40 kpc, suggesting that spatially extended cold molecular gas reservoirs may be common in massive, gas-rich galaxies at z ~ 2. Through CO line stacking, we constrain the gas mass fraction in previously known typical star-forming galaxies at z = 2–3. The stacked CO detection suggests molecular gas mass fractions lower than expected for massive main-sequence galaxies by a factor of ~3–6. We find a total CO line brightness of 0.45 ± 0.2 μK at ~34 GHz, which constrains future line intensity mapping and CMB experiments.
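
    The CO line stacking mentioned above can be sketched as an inverse-variance-weighted average of spectra, each shifted to the channel where the known redshift places the line. The following is a minimal illustration under assumed inputs (shared frequency axis, per-source noise estimates); the actual COLDz analysis involves further steps, such as the completeness and reliability corrections described in the abstract. The CO(1–0) rest frequency of 115.271 GHz is standard.

```python
import numpy as np

def stack_line_spectra(freqs, spectra, rms, redshifts, nu_rest=115.271, half=25):
    """Inverse-variance-weighted stack of spectra, each shifted to the
    channel of the expected observed line frequency nu_rest / (1 + z).
    freqs: shared frequency axis [GHz]; spectra: (N, nchan) array;
    rms: per-source noise; nu_rest defaults to CO(1-0) at 115.271 GHz."""
    stacked = np.zeros(2 * half + 1)
    wsum = 0.0
    for spec, sigma, z in zip(spectra, rms, redshifts):
        i = int(np.argmin(np.abs(freqs - nu_rest / (1.0 + z))))
        if i - half < 0 or i + half + 1 > len(freqs):
            continue                          # line falls too close to the band edge
        w = 1.0 / sigma**2                    # inverse-variance weight
        stacked += w * spec[i - half:i + half + 1]
        wsum += w
    return stacked / wsum
```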

    Enabling Ubiquitous OLAP Analyses

    An OLAP analysis session is carried out as a sequence of OLAP operations applied to multidimensional cubes. At each step of a session, an operation is applied to the result of the previous step in an incremental fashion. Due to its simplicity and flexibility, OLAP is the most widely adopted paradigm for exploring the data stored in data warehouses. With the goal of making OLAP analyses available in more contexts, in this thesis we address several critical topics. We first present our contributions to data extraction from service-oriented sources, which are nowadays used to provide access to many databases and analytic platforms. By addressing data extraction from these sources we take a step towards the integration of external databases into the data warehouse, thus providing richer data that can be analyzed through OLAP sessions. The second topic that we study is the visualization of multidimensional data, which we exploit to enable OLAP on devices with limited screen and bandwidth capabilities (i.e., mobile devices). Finally, we propose solutions for obtaining multidimensional schemata from unconventional sources (e.g., sensor networks), which are crucial for performing multidimensional analyses.
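
    The incremental session semantics described above can be mimicked on a toy cube: each OLAP operation consumes the result of the previous step. The sketch below uses pandas and invented data, with a slice followed by a roll-up.

```python
import pandas as pd

# A toy sales cube: three dimensions (year, country, product), one measure.
cube = pd.DataFrame({
    "year":    [2022, 2022, 2023, 2023],
    "country": ["IT", "FR", "IT", "FR"],
    "product": ["A",  "A",  "B",  "B"],
    "revenue": [100,  80,   120,  90],
})

# Step 1 -- slice: restrict the 'country' dimension to one member.
step1 = cube[cube["country"] == "IT"]

# Step 2 -- roll-up: aggregate 'product' away, applied to the previous result.
step2 = step1.groupby("year", as_index=False)["revenue"].sum()

print(step2)   # revenue per year, Italy only
```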

    Euler Characteristic Curves and Profiles: a stable shape invariant for big data problems

    Tools of Topological Data Analysis provide stable summaries encapsulating the shape of the considered data. Persistent homology, the most standard and well-studied data summary, suffers from a number of limitations: its computations are hard to distribute, it is hard to generalize to multifiltrations, and it is computationally prohibitive for big data sets. In this paper we study the concept of Euler Characteristic Curves for one-parameter filtrations and Euler Characteristic Profiles for multi-parameter filtrations. While they are a weaker invariant in one dimension, we show that Euler Characteristic based approaches do not possess some of the handicaps of persistent homology; we present efficient algorithms to compute them in a distributed way, their generalization to multifiltrations, and their practical applicability for big data problems. In addition we show that Euler Curves and Profiles enjoy a certain type of stability which makes them a robust tool in data analysis. Lastly, to show their practical applicability, multiple use cases are considered.
    Comment: 32 pages, 19 figures. Added remark on multicritical filtrations in section 4, typos corrected.
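
    The computational advantage claimed for Euler Characteristic Curves comes from additivity: each simplex contributes (-1)^dim once it enters the filtration, so contributions can be binned independently and accumulated. Below is a minimal sketch for a one-parameter filtration (not the paper's algorithm), assuming filtration values that fall on the chosen threshold grid.

```python
import numpy as np

def euler_characteristic_curve(simplices, thresholds):
    """ECC of a one-parameter filtration. simplices is a list of
    (filtration_value, dimension) pairs; each simplex adds (-1)**dim
    to the Euler characteristic once it enters the filtration."""
    contrib = np.zeros(len(thresholds))
    for t, dim in simplices:
        i = np.searchsorted(thresholds, t)    # first threshold >= t
        if i < len(contrib):                  # ignore simplices past the grid
            contrib[i] += (-1) ** dim
    return np.cumsum(contrib)                 # chi at each threshold

# Example: a filtered triangle -- two vertices, then a third vertex and an
# edge, then the two remaining edges (a circle), then the filled face.
simplices = [(0, 0), (0, 0), (1, 0), (1, 1), (2, 1), (2, 1), (3, 2)]
print(euler_characteristic_curve(simplices, np.arange(4)))  # [2. 2. 0. 1.]
```

    Because the per-simplex contributions simply add, chunks of the simplex list can be processed on separate workers and their contribution vectors summed before the final cumulative sum, which is the distribution property the abstract highlights.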