
    Lessons from an Open Source Business

    Creating a successful company is difficult, but creating a successful company, a successful open source project, and a successful ecosystem all at the same time is much more difficult. This article takes a retrospective look at some of the lessons we have learned in building BigBlueButton, an open source web conferencing system for distance education, and in building Blindside Networks, a company following the traditional business model of providing support and services to paying customers. Our main message is that the focus must be on creating a successful open source project first, for without it, no company in the ecosystem can flourish.

    Unified System on Chip RESTAPI Service (USOCRS)

    This thesis investigates the development of a Unified System on Chip REST API Service (USOCRS) to enhance the efficiency and effectiveness of SoC verification reporting. The research aims to overcome the challenges associated with the transfer, utilization, and interpretation of SoC verification reports by creating a unified platform that integrates various tools and technologies. The research methodology follows a design science approach. A thorough literature review was conducted to explore existing approaches and technologies related to SoC verification reporting, automation, data visualization, and API development. The review revealed gaps in the current state of the field, providing a basis for further investigation. Using the insights gained from the literature review, a system design and implementation plan was developed. The plan makes use of technologies such as FastAPI, SQL and NoSQL databases, Azure Active Directory for authentication, and cloud services. The Verification Toolbox was employed to validate SoC reports against the organization's standards. The system went through manual testing, and user satisfaction was evaluated to ensure its functionality and usability. The results demonstrate the successful design and implementation of the USOCRS, offering SoC engineers a unified and secure platform for uploading, validating, storing, and retrieving verification reports. The USOCRS facilitates seamless communication between users and the API, granting easy access to vital information such as successes, failures, and test coverage derived from submitted SoC verification reports. By automating and standardizing the SoC verification reporting process, the USOCRS eliminates manual and repetitive tasks usually done by developers, thereby enhancing productivity and establishing a robust and reliable framework for report storage and retrieval. Through the integration of diverse tools and technologies, the USOCRS presents a comprehensive solution that adheres to the required specifications of the SoC schema used within the organization. Furthermore, the USOCRS significantly improves the efficiency and effectiveness of SoC verification reporting: it streamlines the submission process, reduces latency through optimized data storage, and enables meaningful extraction and analysis of report data.
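    The abstract gives no implementation details, but the workflow it describes (upload, validate, store, retrieve) maps naturally onto a small set of REST endpoints. The following is a minimal, hypothetical FastAPI sketch of such a service; the endpoint paths, report fields, validate_report rules, and in-memory store are illustrative assumptions, not the USOCRS implementation.

        # Hypothetical sketch of a verification-report REST service in the spirit of USOCRS.
        # Endpoint paths, schema fields, and the in-memory store are illustrative assumptions.
        from typing import Dict, List
        from uuid import uuid4

        from fastapi import FastAPI, HTTPException
        from pydantic import BaseModel

        app = FastAPI(title="SoC verification report service (sketch)")

        class VerificationReport(BaseModel):
            block_name: str            # SoC block the report refers to
            passed_tests: int
            failed_tests: int
            coverage_percent: float    # test coverage reported by the verification flow

        _reports: Dict[str, VerificationReport] = {}   # stand-in for the SQL/NoSQL backend

        def validate_report(report: VerificationReport) -> List[str]:
            """Return a list of schema violations; an empty list means the report is accepted."""
            errors = []
            if report.passed_tests < 0 or report.failed_tests < 0:
                errors.append("test counts must be non-negative")
            if not 0.0 <= report.coverage_percent <= 100.0:
                errors.append("coverage_percent must be between 0 and 100")
            return errors

        @app.post("/reports")
        def upload_report(report: VerificationReport) -> dict:
            errors = validate_report(report)
            if errors:
                raise HTTPException(status_code=422, detail=errors)
            report_id = str(uuid4())
            _reports[report_id] = report
            return {"id": report_id}

        @app.get("/reports/{report_id}")
        def get_report(report_id: str) -> VerificationReport:
            if report_id not in _reports:
                raise HTTPException(status_code=404, detail="report not found")
            return _reports[report_id]

    In the actual system the in-memory dictionary would be replaced by the SQL/NoSQL storage layer, and the endpoints would sit behind Azure Active Directory authentication, as the abstract describes.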

    A gravity model analysis of South Korean semiconductor exports to a selected OECD group of countries

    Bachelor's thesis (Treball Final de Grau) in Economics. Code: EC1049. Academic year: 2021/2022. This paper estimates semiconductor trade flows between South Korea and a selected group of importing countries from 2001 to 2020. Semiconductor exports are defined as the sum of HS8541 and HS8542 exports. The paper employs two specifications of the gravity model, a classic one and an augmented one. For the estimation, I use a panel data regression and choose a fixed effects model after rejecting the null hypothesis of the Hausman test. I find that the standard variables of the gravity model, e.g. the national incomes of both countries and the distance between them, are statistically significant with their expected signs. In the augmented model, I find a positive effect of the Economic Complexity Index of the importing country on South Korean semiconductor exports.
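    The abstract does not reproduce the estimated equations, but a log-linear gravity specification of the kind it describes, with the augmented version adding the importer's Economic Complexity Index (ECI), would take roughly the following form; the exact controls and fixed effects used in the thesis may differ.

        % Classic specification: South Korean semiconductor exports X to importer j in year t
        \ln X_{jt} = \beta_0 + \beta_1 \ln GDP^{KOR}_{t} + \beta_2 \ln GDP_{jt}
                   + \beta_3 \ln DIST_{j} + u_{jt}

        % Augmented specification: adds the importer's Economic Complexity Index
        \ln X_{jt} = \beta_0 + \beta_1 \ln GDP^{KOR}_{t} + \beta_2 \ln GDP_{jt}
                   + \beta_3 \ln DIST_{j} + \beta_4 ECI_{jt} + u_{jt}

    Both equations would be estimated on the 2001-2020 panel; the abstract reports a fixed effects estimator chosen after the Hausman test, with u_{jt} the error term.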

    Dynamic exploitation of production data to improve DFM methods in the microelectronics industry

    Design for Manufacturing (DFM) is by now a classic method for ensuring, at design time, the feasibility, quality, and yield of production simultaneously. In the microelectronics industry, the Design Rule Manual (DRM) worked well up to the 250nm technology node by capturing systematic variations in rules and/or models based on root cause analysis, but beyond that node it has reached its limits because of the inability to capture correlations between spatial variations. Moreover, the rapid evolution of products and technologies requires the DRM to be updated dynamically according to the improvements found in the fabs. In this context, the contributions of this thesis are (i) an interdisciplinary definition of FMEA and risk analysis to address the challenges of dynamic DFM, (ii) a mapping and alignment model (MAM) for the spatial localization of test data, (iii) a data referential based on the ROMMII ontology (referential ontology meta-model for information integration) to map heterogeneous data from varied sources, and (iv) a spatial positioning model (SPM) that integrates spatial factors into microelectronics DFM methods in order to analyze and model spatial variations accurately, based on the dynamic exploitation of large volumes of manufacturing data.
    DFM (design for manufacturing) methods are used during technology alignment and adoption processes in the semiconductor industry (SI) for manufacturability and yield assessments. These methods worked well up to the 250nm technology node for the transformation of systematic variations into rules and/or models based on single-source data analyses, but beyond this node they have turned into ineffective R&D efforts. The reason is our inability to capture newly emerging spatial variations, which has led to an exponential increase in technology lead times and costs that must be addressed; hence, this thesis focuses on identifying and removing the causes of DFM ineffectiveness. The fabless, foundry, and traditional integrated device manufacturer (IDM) business models are first analyzed for coherence against a recent shift in business objectives from time-to-market (T2M) and time-to-volume (T2V) towards ramp-up rate. Increasing technology lead times and costs are identified as a major challenge in achieving quick ramp-up rates; hence, an extended IDM (e-IDM) business model is proposed to support quick ramp-up rates, based on improving DFM effectiveness followed by its smooth integration. We found (i) single-source analyses and (ii) the inability to exploit huge manufacturing data volumes to be the core limiting factors (failure modes) behind DFM ineffectiveness during technology alignment and adoption efforts within an IDM. The causes of single-source root cause analysis are identified as (i) varying metrology reference frames and (ii) test structure orientations that require wafer rotation prior to measurement, resulting in varying metrology coordinates (die/site level mismatches). A generic coordinates mapping and alignment model (MAM) is proposed to remove these die/site level mismatches; however, to accurately capture the emerging spatial variations we propose a spatial positioning model (SPM) that performs multi-source parametric correlation based on the shortest distance between the respective test structures used to measure the parameters. The (i) unstructured model evolution, (ii) ontology issues, and (iii) missing links among production databases are found to be the causes of our inability to exploit huge manufacturing data volumes. The ROMMII (referential ontology meta-model for information integration) framework is then proposed to remove these issues and enable dynamic and efficient multi-source root cause analyses. An interdisciplinary failure mode and effect analysis (i-FMEA) methodology is also proposed to find cyclic failure modes and causes across business functions, which require generic solutions rather than operational fixes. The proposed e-IDM, MAM, SPM, and ROMMII framework results in accurate analysis and modeling of emerging spatial variations based on dynamic exploitation of the huge manufacturing data volumes.
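    The SPM idea of correlating parameters from different sources via the shortest distance between the test structures that measured them can be illustrated with a small sketch; the coordinates, structure names, and brute-force nearest-neighbour search below are illustrative assumptions, not the thesis implementation.

        # Hypothetical sketch of shortest-distance pairing between test structures,
        # in the spirit of the spatial positioning model (SPM). All data are illustrative.
        import math
        from typing import Dict, List, Tuple

        # (x, y) wafer coordinates, already aligned to a common reference frame (the MAM step)
        source_a = {"A1": (10.0, 12.0), "A2": (40.0, 35.0)}                     # e.g. electrical test structures
        source_b = {"B1": (11.5, 13.0), "B2": (38.0, 36.5), "B3": (80.0, 5.0)}  # e.g. metrology sites

        def nearest_pairs(a: Dict[str, Tuple[float, float]],
                          b: Dict[str, Tuple[float, float]]) -> List[Tuple[str, str, float]]:
            """For each structure in source A, find the closest structure in source B."""
            pairs = []
            for name_a, (xa, ya) in a.items():
                name_b, dist = min(
                    ((nb, math.hypot(xa - xb, ya - yb)) for nb, (xb, yb) in b.items()),
                    key=lambda item: item[1],
                )
                pairs.append((name_a, name_b, dist))
            return pairs

        for name_a, name_b, dist in nearest_pairs(source_a, source_b):
            print(f"{name_a} <-> {name_b}: distance {dist:.2f}")
            # The parameters measured at each paired set of structures would then be
            # correlated against one another in the multi-source analysis.

    In practice the pairing would run on aligned die/site coordinates produced by the MAM step and feed a multi-source parametric correlation rather than a printout.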

    Innovation ecosystems for Industry 4.0: a collaborative perspective for the provision of digital technologies and platforms

    Industry 4.0 considers complex, interrelated IoT-based technologies for the provision of digital solutions. This complexity demands a vast set of capabilities that are hard to find in a single technology provider, especially in small and medium-sized enterprises (SMEs). Innovation ecosystems allow SMEs to integrate resources and cocreate Industry 4.0 solutions. This thesis investigates the role of collaboration in the development of technologies and solutions in the Industry 4.0 context. To this end, the thesis is organized into three papers, whose objectives are: (i) to verify whether collaboration through inbound Open Innovation activities with different actors in the supply chain positively moderates the relationship between Industry 4.0 technologies and their expected benefits; (ii) to identify how the characteristics of an innovation ecosystem focused on solutions for Industry 4.0 change at each evolutionary lifecycle stage, using elements from social exchange theory; and (iii) to identify which technologies can be configured as platforms through boundary-spanning activities and how they operate collaboratively to develop solutions for Industry 4.0. As a result, this thesis proposes a model that explains the role of collaboration at different levels (supply chains, ecosystems, and platforms) in the development of solutions in the Industry 4.0 context. The research approach combines qualitative (focus groups, interviews, and case studies) and quantitative (survey research with multivariate data analysis) methods. The main results are: (i) we show how collaboration with different actors in the supply chain through an Open Innovation strategy has both positive and negative impacts on three strategies associated with product development (cost reduction, focalization, and innovation); (ii) we define the main characteristics of innovation ecosystems focused on the provision of Industry 4.0 solutions, considering an evolutionary lifecycle perspective and a social exchange view; and (iii) we identify the different technology platforms of the Industry 4.0 context at different operational levels using a boundary-spanning view. In conclusion, from an academic perspective, these results help to understand how collaboration for the development of new solutions in Industry 4.0 can be analyzed from different perspectives (Open Innovation, Social Exchange Theory, and Boundary Spanning) and in different contexts of integration (supply chains, ecosystems, and platforms). From a practical perspective, the results shed light on a trending business topic by showing how collaboration among technology providers for Industry 4.0 should be fostered and developed.

    Prediction of Host-Microbe Interactions from Community High-Throughput Sequencing Data

    Microbial ecology is a diverse field, with a broad range of taxa, habitats, and trophic structures studied. Many of the major areas of research were developed independently, each with its own unique methods and standards and its own questions and focus. This has changed in recent decades with the widespread implementation of culture-independent techniques, which exploit mechanisms shared by all life, regardless of habitat. In particular, high-throughput sequencing of environmentally isolated DNA and RNA has done much to expand our knowledge of the planet's microbial diversity and has allowed us to explore the complex interplay between community members. Additionally, metatranscriptomic data can be used to parse relationships between individual members of the community, allowing researchers to propose hypotheses that can be tested in a laboratory or field setting. However, this technology is still relatively young, and there is a considerable need for broader consideration of its pitfalls, as well as for the development of novel approaches that allow those without a computational background, or with fewer resources, to navigate its challenges and reap its rewards. To address these needs, we have developed targeted computational approaches that simplify next-generation sequencing datasets to a more manageable size, and we have used these techniques to address specific questions in environmental ecosystems. In a dataset sequenced to identify the ecological factors that drive Microcystis aeruginosa to dominate cyanobacterial harmful algal blooms worldwide, we used a targeted approach to predict replication and lysogenic dormancy in bacteriophages. We used RNA-seq data to characterize viral diversity in the Sphagnum peat bog microbiome, identifying a wealth of novel viruses and proposing several host-virus pairs. We assembled and described the genome of a freshwater giant virus, as well as that of a virophage that may infect it, and used our techniques to describe its activity in publicly available datasets. Lastly, we extended our efforts into the realm of medicine, where we showed the influence exerted by the mouse gut microbiome on the host immune response to malaria, identifying several genes that may play a key role in reducing disease severity.
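    As one illustration of how a targeted approach can shrink a sequencing dataset, the toy sketch below keeps only reads that share a k-mer with a set of marker sequences of interest; the sequences, k value, and filtering rule are illustrative assumptions, not the pipelines used in this dissertation.

        # Toy stand-in for a targeted read filter: keep only reads sharing a k-mer
        # with a set of marker sequences. Illustrative only.
        from typing import Iterable, List, Set

        def kmers(seq: str, k: int) -> Set[str]:
            return {seq[i:i + k] for i in range(len(seq) - k + 1)}

        def targeted_filter(reads: Iterable[str], markers: Iterable[str], k: int = 21) -> List[str]:
            """Return only the reads that share at least one k-mer with any marker sequence."""
            marker_kmers: Set[str] = set()
            for m in markers:
                marker_kmers |= kmers(m, k)
            return [r for r in reads if kmers(r, k) & marker_kmers]

        # Tiny example with a short marker and a small k so the overlap is visible.
        markers = ["ATGGCCATTGTAATGGGCCGC"]           # e.g. a fragment of a viral marker gene
        reads = [
            "TTTTATGGCCATTGTAATGGGCCGCAAA",           # overlaps the marker -> kept
            "GGGGGGGGGGGGGGGGGGGGGGGGGGGG",           # unrelated -> discarded
        ]
        print(targeted_filter(reads, markers, k=15))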

    Improving the Scalability of High Performance Computer Systems

    Improving the performance of future computing systems will depend on the ability to increase the scalability of current technology. New paths need to be explored, as operating principles that were applied up to now are becoming irrelevant for upcoming computer architectures. It appears that scaling the number of cores, processors, and nodes within a system represents the only feasible alternative to achieve Exascale performance. To accomplish this goal, we propose three novel techniques addressing different layers of computer systems. The Tightly Coupled Cluster technique significantly improves inter-node communication within compute clusters. By improving latency by an order of magnitude over existing solutions, the cost of communication is considerably reduced. This makes it possible to exploit fine-grained parallelism within applications, thereby extending scalability considerably. The mechanism virtually moves the network interconnect into the processor, bypassing the latency of the I/O interface and rendering protocol conversions unnecessary. The technique is implemented entirely through firmware and kernel-layer software utilizing off-the-shelf AMD processors. We present a proof-of-concept implementation and real-world benchmarks to demonstrate the superior performance of our technique. In particular, our approach achieves a software-to-software communication latency of 240 ns between two remote compute nodes. The second part of the dissertation introduces a new framework for scalable Networks-on-Chip. A novel rapid prototyping methodology is proposed that accelerates design and implementation substantially. Due to its flexibility and modularity, a large application space is covered, ranging from systems-on-chip to high-performance many-core processors. The Network-on-Chip compiler generates complex networks in the form of synthesizable register-transfer-level code from an abstract design description. Our engine supports different target technologies, including Field Programmable Gate Arrays and Application Specific Integrated Circuits. The framework makes it possible to build large designs while minimizing development and verification effort. Many topologies and routing algorithms are supported by partitioning the tasks into several layers and by introducing a protocol-agnostic architecture. We provide a thorough evaluation of the design that shows excellent results regarding performance and scalability. The third part of the dissertation addresses the processor-memory interface within computer architectures. The increasing compute power of many-core processors leads to an equally growing demand for memory bandwidth and capacity. Current processor designs exhibit physical limitations that restrict the scalability of main memory. To address this issue, we propose a memory extension technique that attaches large amounts of DRAM to the processor via a low-pin-count interface using high-speed serial transceivers. Our technique transparently integrates the extension memory into the system architecture by providing full cache coherency, so applications can utilize the memory extension through regular shared-memory programming techniques. By supporting daisy-chained memory extension devices and by introducing an asymmetric probing approach, the proposed mechanism ensures high scalability. We furthermore propose a DMA offloading technique to improve the performance of the processor-memory interface.
    The design has been implemented in a Field Programmable Gate Array based prototype. Driver software and firmware modifications have been developed to bring up the prototype in a Linux-based system. We show microbenchmarks that demonstrate the feasibility of our design.
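    Software-to-software latency figures such as the 240 ns quoted above are typically obtained with a ping-pong microbenchmark: one side sends a small message, the other echoes it back, and the one-way latency is taken as half the averaged round-trip time. The sketch below shows that measurement pattern over ordinary TCP sockets purely for illustration; a commodity loopback or network path measures microseconds, not the nanosecond range achievable with the tightly coupled interconnect described in the dissertation.

        # Illustrative ping-pong latency microbenchmark over TCP sockets.
        # Demonstrates the measurement methodology only, not the interconnect itself.
        import socket
        import threading
        import time

        HOST, PORT, ROUNDS, MSG = "127.0.0.1", 50007, 10000, b"x" * 8

        def echo_server() -> None:
            with socket.create_server((HOST, PORT)) as srv:
                conn, _ = srv.accept()
                with conn:
                    while data := conn.recv(64):
                        conn.sendall(data)          # echo each message straight back

        threading.Thread(target=echo_server, daemon=True).start()
        time.sleep(0.2)                             # give the server time to start listening

        with socket.create_connection((HOST, PORT)) as sock:
            sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
            start = time.perf_counter()
            for _ in range(ROUNDS):
                sock.sendall(MSG)
                sock.recv(64)                       # wait for the echo before the next round
            elapsed = time.perf_counter() - start

        print(f"average one-way latency: {elapsed / ROUNDS / 2 * 1e9:.0f} ns")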