25,443 research outputs found

    Analyzing the Spillover Mechanism on the Semiconductor Industry in the Silicon Valley and Route 128

    Get PDF
    Understanding the impact of science and engineering innovations on economic growth requires relating discoveries to products and identifying the scientists and engineers responsible for the knowledge transfer. Studies reliant on geographic proximity alone can show only that economic activity varies positively with the amount of research being done at a university [David (1992), Nelson and Romer (1996), Jaffe (1989, 1993)]. These "geographically localized knowledge spillovers" have proved unable to explain what it is about research universities that is crucial for their local economic impact (the training? the research findings?) and are therefore unconvincing to both policy makers and the public. This paper analyzes the spillover mechanism and identifies its main components: the effect of university-based star scientists, through explicit and implicit ties, and the effect of other neighboring firms on the performance of semiconductor enterprises as measured by patents. Explicit ties are modeled by the full-time and part-time job mobility of university scientists; implicit ties, by the presence of positive externalities (spillover effects) flowing to firms from untied scientists at universities in the same economic area. Specifically, this study examines the Silicon Valley and Route 128 cases in detail, identifying the differences and similarities in the spillover mechanisms of these two major semiconductor regions. Previous research on high-technology industries has demonstrated the importance of geographically localized "knowledge spillovers" by building specific links between university scientists and firms and estimating the local effects of different types of links. This research goes a step further by measuring not only the effect of university research through direct ties to firms (Zucker, Darby, and Armstrong, 1998), but also the importance of within-industry R&D spillovers for the growth of the region.
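    The abstract describes a count-data analysis in which firm patenting is regressed on measures of explicit ties, implicit ties, and neighboring-firm R&D. Below is a minimal sketch of that kind of specification, assuming a Poisson patent-count model; the variable names (star_ft_moves, star_pt_moves, untied_local_stars, neighbor_rd) and the simulated data are hypothetical stand-ins, not the authors' actual measures or dataset.

```python
# Hedged sketch: a patent-count regression of the kind described above.
# All variable names and data here are hypothetical illustrations.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "star_ft_moves": rng.poisson(1.0, n),       # explicit ties: full-time moves of university stars
    "star_pt_moves": rng.poisson(2.0, n),       # explicit ties: part-time links
    "untied_local_stars": rng.poisson(5.0, n),  # implicit ties: untied stars in the same region
    "neighbor_rd": rng.normal(10.0, 2.0, n),    # R&D of neighboring firms (spillover proxy)
})
# Simulated outcome: patents granted to the firm (toy data only).
lam = np.exp(0.2 * df["star_ft_moves"] + 0.1 * df["untied_local_stars"]
             + 0.05 * df["neighbor_rd"] - 0.5)
df["patents"] = rng.poisson(lam)

X = sm.add_constant(df[["star_ft_moves", "star_pt_moves",
                        "untied_local_stars", "neighbor_rd"]])
model = sm.GLM(df["patents"], X, family=sm.families.Poisson()).fit()
print(model.summary())
```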

    Data Engineering for the Analysis of Semiconductor Manufacturing Data

    Get PDF
    We have analyzed manufacturing data from several different semiconductor manufacturing plants, using decision tree induction software called Q-YIELD. The software generates rules for predicting when a given product should be rejected. The rules are intended to help process engineers improve the yield of the product by helping them discover the causes of rejection. Experience with Q-YIELD has taught us the importance of data engineering -- preprocessing the data to enable or facilitate decision tree induction. This paper discusses some of the data engineering problems we have encountered with semiconductor manufacturing data. The paper deals with two broad classes of problems: engineering the features in a feature-vector representation and engineering the definition of the target concept (the classes). Manufacturing process data present special problems for feature engineering, since the data have multiple levels of granularity (detail, resolution). Engineering the target concept is important because of our focus on understanding the past, as opposed to the more common focus in machine learning on predicting the future.
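    The two data-engineering steps named in the abstract (building feature vectors from multi-granularity process data, and defining the target classes) can be illustrated with a small sketch. This is not Q-YIELD: scikit-learn's DecisionTreeClassifier stands in for it, and the column names and thresholds are hypothetical examples.

```python
# Illustrative sketch only: scikit-learn replaces Q-YIELD, and lot_id, step,
# temp, and yield_pct are hypothetical manufacturing columns.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Fine-grained, per-process-step measurements (many rows per lot).
steps = pd.DataFrame({
    "lot_id": [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "step":   ["etch", "implant", "deposit"] * 4,
    "temp":   [410, 900, 350, 425, 905, 355, 400, 890, 345, 430, 910, 360],
})

# Feature engineering: roll the detailed rows up to one vector per lot.
features = steps.pivot(index="lot_id", columns="step", values="temp")

# Target engineering: turn a continuous yield into a reject/accept class.
lots = pd.DataFrame({"lot_id": [1, 2, 3, 4],
                     "yield_pct": [92, 55, 88, 49]}).set_index("lot_id")
target = (lots["yield_pct"] < 70).astype(int)  # 1 = reject

tree = DecisionTreeClassifier(max_depth=2).fit(features, target)
print(export_text(tree, feature_names=list(features.columns)))
```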

    Data acquisition software for the CMS strip tracker

    Get PDF
    The CMS silicon strip tracker, providing a sensitive area of approximately 200 m² and comprising 10 million readout channels, has recently been completed at the tracker integration facility at CERN. The strip tracker community is currently working to develop and integrate the online and offline software frameworks, known as XDAQ and CMSSW respectively, for the purposes of data acquisition and detector commissioning and monitoring. Recent developments have seen the integration of many new services and tools within the online data acquisition system, such as event building, online distributed analysis, an online monitoring framework, and data storage management. We review the various software components that comprise the strip tracker data acquisition system and the software architectures used for stand-alone and global data-taking modes. Our experiences in commissioning and operating one of the largest silicon micro-strip tracking systems ever built are also reviewed.
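    For readers unfamiliar with the event-building step mentioned above, the sketch below shows the general idea of assembling event fragments from many readout sources keyed by a common event number. This is not XDAQ or CMSSW code; the source count and fragment format are assumptions for illustration only.

```python
# Schematic sketch of event building: NOT XDAQ/CMSSW code.
from collections import defaultdict

N_SOURCES = 4  # hypothetical number of front-end readout units

def build_events(fragment_stream):
    """Group fragments by event number and emit complete events."""
    pending = defaultdict(dict)
    for event_id, source_id, payload in fragment_stream:
        pending[event_id][source_id] = payload
        if len(pending[event_id]) == N_SOURCES:     # all fragments arrived
            yield event_id, pending.pop(event_id)   # complete event

# Toy usage: fragments arrive out of order from four sources.
fragments = [(1, s, f"data-{s}") for s in range(N_SOURCES)] + \
            [(2, s, f"data-{s}") for s in reversed(range(N_SOURCES))]
for event_id, event in build_events(fragments):
    print(event_id, sorted(event))
```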

    First results from the LUCID-Timepix spacecraft payload onboard the TechDemoSat-1 satellite in Low Earth Orbit

    Full text link
    The Langton Ultimate Cosmic ray Intensity Detector (LUCID) is a payload onboard the satellite TechDemoSat-1, used to study the radiation environment in Low Earth Orbit (∼635 km). LUCID operated from 2014 to 2017, collecting over 2.1 million frames of radiation data from its five on-board Timepix detectors. LUCID is one of the first uses of Timepix detector technology in open space, and the data provide useful insight into the performance of this technology in new environments. It provides high-sensitivity imaging measurements of the mixed radiation field, with a wide dynamic range in terms of spectral response, particle type, and direction. The data have been analysed using computing resources provided by GridPP and a new machine learning algorithm built on the TensorFlow framework. This algorithm provides a new approach to processing Medipix data: trained on a set of human-labelled tracks, it achieves greater particle classification accuracy than previous algorithms. For managing the LUCID data we have developed an online platform called the Timepix Analysis Platform at School (TAPAS), which provides a swift and simple way for users to analyse data collected with Timepix detectors, from LUCID and other experiments alike. We also present possible future uses of the LUCID data and of Medipix detectors in space. Comment: Accepted for publication in Advances in Space Research.
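    The abstract mentions a TensorFlow classifier trained on human-labelled tracks. Below is a minimal, hypothetical sketch of such a network operating on small crops around track clusters; the input shape, layer sizes, and the five assumed particle classes are illustrative assumptions, not the actual LUCID/TAPAS pipeline.

```python
# Minimal, hypothetical sketch of a TensorFlow track classifier in the spirit
# of the approach described above. Input shape and class list are assumptions.
import tensorflow as tf

NUM_CLASSES = 5  # e.g. alpha, beta, gamma, proton, muon (assumed labels)

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu",
                           input_shape=(32, 32, 1)),  # cropped track window
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# Training would use human-labelled track crops, e.g.:
# model.fit(x_train, y_train, validation_split=0.2, epochs=10)
```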