
    Advanced parametrical modelling of 24 GHz radar sensor IC packaging components

    This paper deals with the development of an advanced parametrical modelling concept for the packaging components of a 24 GHz radar sensor IC used in automotive driver assistance systems. For the fast and efficient design of packages for system-in-package (SiP) modules, a simplified model of the parasitic electromagnetic effects within the package is desirable, as 3-D field computation becomes inefficient due to the high density of conductive elements along the various signal paths in the package. Lumped-element models of the conductive components give a fast indication of a design's signal quality, but so far they do not offer enough flexibility to cover the whole range of geometric arrangements of signal paths in a contemporary package. This work meets the challenge of developing a flexible and fast package modelling concept by defining parametric lumped-element models for all basic signal path components, e.g. bond wires, vias, strip lines, bumps and balls.
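A quick sketch of why lumped-element models matter at 24 GHz. The ~1 nH/mm bond-wire inductance is a common rule of thumb, not a value from the paper, and the helper function below is purely illustrative:

```python
import math

# Rule-of-thumb lumped-element value (an assumption, not from the paper):
# a bond wire contributes roughly 1 nH of series inductance per mm of length.
HENRY_PER_MM = 1.0e-9

def bond_wire_impedance(length_mm, freq_hz=24e9):
    """Magnitude of the series reactance |Z| = 2*pi*f*L of a bond wire
    modelled as a pure lumped inductor."""
    inductance = length_mm * HENRY_PER_MM
    return 2 * math.pi * freq_hz * inductance

# A 1 mm bond wire at 24 GHz already presents on the order of 150 ohms of
# series reactance, which is why package parasitics dominate at these
# frequencies and a fast parametric model is so valuable.
```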

    An Energetic AGN Outburst Powered by a Rapidly Spinning Supermassive Black Hole or an Accreting Ultramassive Black Hole

    Powering the 10^62 erg nuclear outburst in the MS0735.6+7421 cluster central galaxy by accretion implies that its supermassive black hole (SMBH) grew by ~6x10^8 solar masses over the past 100 Myr. We place upper limits on the amount of cold gas and star formation near the nucleus of <10^9 solar masses and <2 solar masses per year, respectively. These limits imply that an implausibly large fraction of the preexisting cold gas in the bulge must have been consumed by the SMBH at a rate of ~3-5 solar masses per year while leaving no trace of star formation. Such a high accretion rate would be difficult to maintain by stellar accretion or the Bondi mechanism unless the black hole mass approaches 10^11 solar masses. Its feeble nuclear luminosities in the UV, I, and X-ray bands compared to its enormous mechanical power are inconsistent with rapid accretion onto a ~5x10^9 solar mass black hole. We suggest instead that the AGN outburst is powered by a rapidly spinning black hole. A maximally spinning, 10^9 solar mass black hole contains enough rotational energy, ~10^62 erg, to quench a cooling flow over its lifetime and to contribute significantly to the excess entropy found in the hot atmospheres of groups and clusters. Two modes of AGN feedback may be quenching star formation in elliptical galaxies centered in cooling halos at late times: an accretion mode that operates in gas-rich systems, and a spin mode operating at modest accretion rates. The spin conjecture may be avoided in MS0735 by appealing to Bondi accretion onto a central black hole whose mass greatly exceeds 10^10 solar masses. The host galaxy's unusually large, 3.8 kpc stellar core radius (light deficit) may witness the presence of an ultramassive black hole.
    Comment: Accepted for publication in ApJ. Modifications: adopted a slightly higher black hole mass using Lauer's M_SMBH vs L_bulge relation and adjusted related quantities; considered more seriously the consequences of an ultramassive black hole, motivated by the new Kormendy & Bender paper published after our submission; other modifications per referee comments by Ruszkowski.
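The abstract's two headline numbers follow from standard order-of-magnitude relations, sketched below with canonical values (the 10% radiative efficiency and the 29% maximal Kerr spin-energy fraction are textbook assumptions, not quantities taken from the paper):

```python
# Back-of-the-envelope check of the abstract's numbers.
M_SUN = 1.989e33        # solar mass in grams
C = 2.998e10            # speed of light in cm/s
E_OUTBURST = 1e62       # outburst energy in erg

# Accretion-powered case: E = eps * dM * c^2 with canonical efficiency eps ~ 0.1,
# so the hole must grow by dM = E / (eps * c^2).
eps = 0.1
dM_solar = E_OUTBURST / (eps * C**2) / M_SUN   # ~6e8 solar masses, as quoted

# Spin-powered case: a maximally spinning Kerr hole can in principle release
# up to (1 - 1/sqrt(2)) ~ 29% of its rest-mass energy.
M_BH = 1e9 * M_SUN
E_spin = (1 - 2**-0.5) * M_BH * C**2           # a few times 1e62 erg
```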

    A Chandra X-ray Analysis of Abell 1664: Cooling, Feedback and Star Formation in the Central Cluster Galaxy

    The brightest cluster galaxy (BCG) in the Abell 1664 cluster is unusually blue and is forming stars at a rate of ~23 M_{\sun} yr^{-1}. The BCG is located within 5 kpc of the X-ray peak, where the cooling time of 3.5x10^8 yr and entropy of 10.4 keV cm^2 are consistent with other star-forming BCGs in cooling flow clusters. The center of A1664 has an elongated, "bar-like" X-ray structure whose mass is comparable to the mass of molecular hydrogen, ~10^{10} M_{\sun}, in the BCG. We show that this gas is unlikely to have been stripped from interloping galaxies. The cooling rate in this region is roughly consistent with the star formation rate, suggesting that the hot gas is condensing onto the BCG. We use the scaling relations of Birzan et al. (2008) to show that the AGN is underpowered compared to the central X-ray cooling luminosity by roughly a factor of three. We suggest that A1664 is experiencing rapid cooling and star formation during a low state of an AGN feedback cycle that regulates the rates of cooling and star formation. Modeling the emission as a single-temperature plasma, we find that the metallicity peaks 100 kpc from the X-ray center, resulting in a central metallicity dip. However, a multi-temperature cooling flow model improves the fit to the X-ray emission and recovers the expected, centrally peaked metallicity profile.
    Comment: 15 pages, 13 figures

    Nanotechnology and global energy demand: challenges and prospects for a paradigm shift in the oil and gas industry.

    The exploitation of new hydrocarbon discoveries to meet present global energy demand depends on the availability and application of new technologies. The relevance of new technologies stems from the complex subsurface architecture and conditions of offshore petroleum plays. Conventional techniques, from drilling to production, may require adaptation for such subsurface conditions, as they fail under high pressure and high temperature. Over the past decades, the oil and gas industry has witnessed increased research into the use of nanotechnology, with great promise for drilling operations, enhanced oil recovery, reservoir characterization, production, etc. The prospect of a paradigm shift towards the application of nanotechnology in the oil and gas industry is constrained by challenges that evolve with its progression. This paper reviews developments in nano-research in the oil and gas industry, the associated challenges, and recommendations.

    Synthesising executable gene regulatory networks in haematopoiesis from single-cell gene expression data

    A fundamental challenge in biology is to understand the complex gene regulatory networks which control tissue development in the mammalian embryo, and maintain homoeostasis in the adult. The cell fate decisions underlying these processes are ultimately made at the level of individual cells. Recent experimental advances in biology allow researchers to obtain gene expression profiles at single-cell resolution over thousands of cells at once. These single-cell measurements provide snapshots of the states of the cells that make up a tissue, instead of the population-level averages provided by conventional high-throughput experiments. The aim of this PhD was to investigate the possibility of using this new high resolution data to reconstruct mechanistic computational models of gene regulatory networks. In this thesis I introduce the idea of viewing single-cell gene expression profiles as states of an asynchronous Boolean network, and frame model inference as the problem of reconstructing a Boolean network from its state space. I then give a scalable algorithm to solve this synthesis problem. In order to achieve scalability, this algorithm works in a modular way, treating different aspects of a graph data structure separately before encoding the search for logical rules as Boolean satisfiability problems to be dispatched to a SAT solver. Together with experimental collaborators, I applied this method to understanding the process of early blood development in the embryo, which is poorly understood due to the small number of cells present at this stage. The emergence of blood from Flk1+ mesoderm was studied by single cell expression analysis of 3934 cells at four sequential developmental time points. A mechanistic model recapitulating blood development was reconstructed from this data set, which was consistent with known biology and the bifurcation of blood and endothelium. 
Several model predictions were validated experimentally, demonstrating that HoxB4 and Sox17 directly regulate the haematopoietic factor Erg, and that Sox7 blocks primitive erythroid development. A general-purpose graphical tool was then developed based on this algorithm, which can be used by biological researchers as new single-cell data sets become available. This tool can deploy computations to the cloud in order to scale to larger high-throughput data sets. The results in this thesis demonstrate that single-cell analysis of a developing organ, coupled with computational approaches, can reveal the gene regulatory networks that underpin organogenesis. Rapid technological advances in our ability to perform single-cell profiling suggest that my tool will be applicable to other organ systems and may inform the development of improved cellular programming strategies.
Microsoft Research PhD Scholarship
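The core synthesis idea can be illustrated in miniature: treat binarised expression profiles as states, and search for an update rule for one gene that reproduces every observed asynchronous transition. The thesis encodes this search as SAT instances for a solver; the toy below (with made-up genes A, B and target T) simply enumerates all truth tables over two hypothetical regulators:

```python
from itertools import product

# Observed asynchronous transitions (state, successor) over genes (A, B, T).
# Each successor differs from its state only in the target gene T, as in an
# asynchronous Boolean network. Data here are illustrative, not from the thesis.
transitions = [
    ((0, 1, 0), (0, 1, 1)),   # T switches on when B is on
    ((1, 0, 1), (1, 0, 0)),   # T switches off when A is on, B off
]

def consistent_rules(transitions, target_index=2):
    """Enumerate all truth tables f(A, B) for the target gene that
    reproduce every observed transition (brute force stands in for the
    SAT encoding used at scale)."""
    rules = []
    for table in product([0, 1], repeat=4):       # f over (A, B) in {0,1}^2
        f = lambda a, b, t=table: t[2 * a + b]
        if all(f(s[0], s[1]) == succ[target_index]
               for s, succ in transitions):
            rules.append(table)
    return rules
```

With only two transitions, several rules remain consistent; more single-cell states prune the candidate set, which is exactly why scale (thousands of cells) makes the synthesis informative.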

    Large-scale unit commitment under uncertainty: an updated literature survey

    The Unit Commitment problem in energy management aims at finding the optimal production schedule of a set of generation units, while meeting various system-wide constraints. It has always been a large-scale, non-convex, difficult problem, especially in view of the fact that, due to operational requirements, it has to be solved in an unreasonably small time for its size. Recently, growing renewable energy shares have strongly increased the level of uncertainty in the system, making the (ideal) Unit Commitment model a large-scale, non-convex and uncertain (stochastic, robust, chance-constrained) program. We provide a survey of the literature on methods for the Uncertain Unit Commitment problem, in all its variants. We start with a review of the main contributions on solution methods for the deterministic versions of the problem, focussing on those based on mathematical programming techniques that are more relevant for the uncertain versions of the problem. We then present and categorize the approaches to the latter, while providing entry points to the relevant literature on optimization under uncertainty. This is an updated version of the paper "Large-scale Unit Commitment under uncertainty: a literature survey" that appeared in 4OR 13(2), 115--171 (2015); this version has over 170 more citations, most of which appeared in the last three years, demonstrating how quickly the literature on uncertain Unit Commitment evolves, and hence the interest in this subject.
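The deterministic core of the problem can be stated very compactly: pick an on/off status per unit and period so demand is covered at minimum cost. The sketch below uses tiny illustrative data and brute-force enumeration in place of the MILP techniques the survey reviews (capacities, costs and demands are invented for the example):

```python
from itertools import product

# Illustrative fleet: capacity in MW and a simplified linear energy cost.
units = [
    {"cap": 100, "cost": 10},   # $/MWh
    {"cap": 60,  "cost": 8},
]
demand = [50, 120]              # MW required in each period

def solve_uc(units, demand):
    """Brute-force unit commitment: enumerate all on/off schedules,
    dispatch committed units cheapest-first, keep the feasible minimum."""
    best, best_cost = None, float("inf")
    n = len(units)
    for schedule in product([0, 1], repeat=n * len(demand)):
        cost, feasible = 0.0, True
        for t, d in enumerate(demand):
            on = schedule[t * n:(t + 1) * n]
            if sum(u["cap"] for u, s in zip(units, on) if s) < d:
                feasible = False            # committed capacity short
                break
            remaining = d
            for u, s in sorted(zip(units, on), key=lambda x: x[0]["cost"]):
                if s:
                    gen = min(u["cap"], remaining)
                    cost += gen * u["cost"]
                    remaining -= gen
        if feasible and cost < best_cost:
            best, best_cost = schedule, cost
    return best, best_cost

schedule, total_cost = solve_uc(units, demand)
```

Enumeration is 2^(units x periods), which is exactly why realistic instances need the mathematical programming machinery, and uncertainty in `demand` turns the problem into the stochastic/robust variants the survey categorizes.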

    Economic and Economic Statistical Design of Hotelling’s T2 Control Chart with Two-State Adaptive Sample Size

    The Hotelling’s T2 control chart, a direct analogue of the univariate Shewhart X̄ chart, is perhaps the most commonly used tool in industry for simultaneous monitoring of several quality characteristics. Recent studies have shown that using variable sample size (VSS) schemes results in charts with more statistical power when detecting small to moderate shifts in the process mean vector. In this paper, we build a cost model of a VSS T2 control chart for the economic and economic statistical design using the general model of Lorenzen and Vance [The economic design of control charts: A unified approach, Technometrics 28 (1986), pp. 3–11]. We optimize this model using a genetic algorithm approach. We also study the effects of the costs and operating parameters on the VSS T2 parameters, and show, through an example, the advantage of economic design over statistical design for VSS T2 charts, and measure the economic advantage of VSS sampling versus fixed sample size sampling.
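The statistic and the two-state adaptive rule are simple to state in code. Below is a minimal sketch: the standard T2 formula plus a VSS rule that switches to a larger sample when the current T2 falls above a warning limit (the thresholds and sample sizes are illustrative, not the paper's optimised design values):

```python
def t2_statistic(xbar, mu, sigma_inv, n):
    """Hotelling's T2 = n * (xbar - mu)' Sigma^-1 (xbar - mu), with the
    inverse covariance Sigma^-1 given as nested lists."""
    d = [x - m for x, m in zip(xbar, mu)]
    sd = [sum(sigma_inv[i][j] * d[j] for j in range(len(d)))
          for i in range(len(d))]
    return n * sum(di * si for di, si in zip(d, sd))

def next_sample_size(t2, warning, n_small=5, n_large=20):
    """Two-state VSS rule: take the larger sample when the current T2 is
    in the warning region, the smaller one otherwise."""
    return n_large if t2 > warning else n_small

# Example: 2 quality characteristics, identity covariance, sample of n=5.
t2 = t2_statistic([0.5, -0.5], [0.0, 0.0], [[1, 0], [0, 1]], n=5)
# t2 = 5 * (0.5^2 + 0.5^2) = 2.5
```

The economic design problem the paper solves is choosing `warning`, the control limit, and the two sample sizes to minimise the Lorenzen and Vance cost model.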

    The Optimal Design of the VSI T2 Control Chart

    Recent studies have shown that the variable sampling interval (VSI) scheme helps practitioners detect process shifts more quickly than the classical fixed-rate sampling (FRS) scheme. In this paper, the economically and statistically optimal design of the VSI T2 control chart for monitoring the process mean vector is investigated. The cost model proposed by Lorenzen and Vance (1986) is minimized through a genetic algorithm (GA) approach. The effects of the costs and operating parameters on the optimal design (OD) of the chart parameters and the resulting operating loss are then systematically studied through a fractional factorial design. Finally, based on the ANOVA results, a meta-model is proposed to determine the OD of the VSI T2 control chart parameters from the process and cost parameters, facilitating implementation in industry.
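Where VSS adapts the sample size, VSI adapts the time to the next sample. A minimal sketch of the rule, with illustrative intervals and limits rather than the optimised design the paper derives:

```python
def next_interval(t2, warning, ucl, h_long=2.0, h_short=0.25):
    """VSI rule: return hours until the next sample given the current T2.
    A statistic above the upper control limit (ucl) signals out of control
    (returned as None); a value in the warning zone triggers the short
    interval; otherwise the chart relaxes to the long interval."""
    if t2 > ucl:
        return None                 # alarm: stop and search for a cause
    return h_short if t2 > warning else h_long
```

The economic design then amounts to choosing `warning`, `ucl`, and the two intervals so that expected sampling, false-alarm, and off-target costs are minimised.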

    Depth profilometry via multiplexed optical high-coherence interferometry.

    Depth profilometry involves the measurement of the depth profile of objects, and has significant potential for various industrial applications that benefit from non-destructive sub-surface profiling, such as defect detection, corrosion assessment, and dental assessment, to name a few. In this study, we investigate the feasibility of depth profilometry using a Multiplexed Optical High-coherence Interferometry (MOHI) instrument. The MOHI instrument utilizes the spatial coherence of a laser and the interferometric properties of light to probe the reflectivity as a function of depth of a sample. The axial and lateral resolutions, as well as imaging depth, are decoupled in the MOHI instrument. The MOHI instrument is capable of multiplexing interferometric measurements into 480 one-dimensional interferograms at a location on the sample and is built with axial and lateral resolutions of 40 μm at a maximum imaging depth of 700 μm. Preliminary results, where a piece of sand-blasted aluminum, an NBK7 glass piece, and an optical phantom were successfully probed using the MOHI instrument to produce depth profiles, demonstrate the feasibility of such an instrument for performing depth profilometry.

    Importance analysis considering time-varying parameters and different perturbation occurrence times

    Importance measures are integral parts of risk assessment for risk-informed decision making. Because the parameters of a risk model, such as the component failure rates, are functions of time and a perturbation (change) in their values can occur during the mission time, time dependence must be considered in the evaluation of the importance measures. In this paper, it is shown that the change in system performance at time t, and consequently the importance of the parameters at time t, depends on the parameters' perturbation time and their value functions during the system mission time. We consider a nonhomogeneous continuous-time Markov model of a series-parallel system to develop the mathematical proofs and simulations, while the ideas are also shown to be consistent with general models having non-exponential failure rates. Two new measures of importance and a simulation scheme for their computation are introduced to account for the effect of perturbation time and time-varying parameters.
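The paper's central point, that the impact of a parameter change depends on *when* it occurs, is easy to demonstrate by simulation. The Monte Carlo sketch below uses a two-component series system with invented rates: component 0's failure rate jumps by a factor `k` at perturbation time `tau`, and the estimated unreliability at mission time `t` differs depending on whether the perturbation comes early or late:

```python
import random

def series_unreliability(tau, t=10.0, lam0=0.05, lam1=0.05, k=3.0,
                         trials=20000, seed=1):
    """Estimate P(system fails by t) for a two-component series system
    where component 0's rate is lam0 before tau and k*lam0 after.
    All numerical values are illustrative, not from the paper."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        # Piecewise-constant rate for component 0: sample a failure time
        # at rate lam0; if it survives past tau, restart the (memoryless)
        # clock at the perturbed rate k*lam0.
        t0 = rng.expovariate(lam0)
        if t0 > tau:
            t0 = tau + rng.expovariate(k * lam0)
        t1 = rng.expovariate(lam1)          # component 1: constant rate
        if min(t0, t1) <= t:                # series: first failure kills it
            failures += 1
    return failures / trials
```

Running this with an early perturbation (`tau=2.0`) versus a late one (`tau=8.0`) gives a visibly higher unreliability in the early case, even though the perturbation itself is identical, which is what the proposed time-dependent importance measures are built to capture.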