90 research outputs found

    Is there a Moore's law for quantum computing?

    There is a common wisdom according to which many technologies progress according to some exponential law, like the empirical Moore's law that was validated for over half a century by the growth of the number of transistors in chips. As a technology still in the making, with many potential promises, quantum computing is supposed to follow the pack and grow inexorably to maturity. The Holy Grail in that domain is a large quantum computer with thousands of error-corrected logical qubits, themselves made of thousands, if not more, physical qubits. These would enable molecular simulations as well as factoring 2048-bit RSA keys, among other use cases drawn from the catalogue of problems intractable for classical computing. How far are we from this? Less than 15 years according to many predictions. We will see in this paper that Moore's empirical law cannot easily be translated to an equivalent in quantum computing. Qubits have various figures of merit that will not progress magically thanks to some new manufacturing capability. However, some equivalents of Moore's law may be at play inside and outside the quantum realm, for example in quantum computing's enabling technologies such as cryogenics and control electronics. Algorithms, software tools and engineering also play a key role as enablers of quantum computing progress. While much of quantum computing's future outcome depends on qubit fidelities, these are progressing rather slowly, particularly at scale. We will finally see that other figures of merit, such as the quality of computed results and the energetics of quantum computing, will come into play and potentially change the landscape. Although scientific and technological in nature, this inventory has broad business implications for investment, education and cybersecurity related decision-making processes.
    Comment: 32 pages, 24 figures
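    The notion of a Moore's-law-like trend can be made concrete with a small curve fit. The Python sketch below uses purely illustrative qubit counts, not data from the paper, and estimates a doubling time by fitting a straight line to log2 of the counts; projecting it forward is exactly the kind of extrapolation the abstract cautions against.

```python
# Minimal sketch: fitting an exponential "Moore's law" trend to a series of
# physical-qubit counts. The (year, qubits) data below are hypothetical
# placeholders, not figures taken from the paper.
import numpy as np

years = np.array([2016, 2017, 2018, 2019, 2020, 2021])
qubits = np.array([5, 16, 20, 53, 65, 127])  # illustrative values only

# Fit log2(qubits) = a * year + b, so the doubling time is 1 / a years.
a, b = np.polyfit(years, np.log2(qubits), 1)
doubling_time = 1.0 / a

print(f"Estimated doubling time: {doubling_time:.1f} years")
print(f"Naive projection for 2030: {2 ** (a * 2030 + b):.0f} qubits")
```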

    Addressing the feasibility of USI-based threads scheduler on polymorphic computing system

    The consistent advances in IC technology result in an ever increasing number of transistors, and there is growing interest in using these transistors for more efficient computing. The CMP (Chip Multi-Processor) is predicted to be one of the most promising solutions to this problem in the future. A heterogeneous CMP is expected to provide more computing efficiency than a homogeneous CMP architecture, but it requires a complex manufacturing process, which made it less competitive in earlier eras. Nowadays, complex SoC (System on Chip) manufacturing techniques are advancing fast. This is leading us inexorably to heterogeneous CMPs with diverse computing resources such as general purpose CPU, GPU, FPGA, and ASIC cores. In the heterogeneous CMP architecture, the general purpose CPU provides coverage for all computing, while the non-von Neumann cores save energy and processing time on specific computations. A polymorphic system is defined as a heterogeneous system that enables a computing thread to be dynamically selected and mapped to multiple kinds of cores. A polymorphic thread is compiled for the multiple morphisms afforded by these diverse cores. The resulting polymorphic computing systems address two problems. (1) Polymorphic threads enable more complex, dynamic trade-offs between delay and power consumption: piecing together the energy-delay profiles of multiple morphisms offers a richer Energy-Delay (ED) profile for the entire application, which in turn helps scale the proverbial ITRS "red-brick power wall". (2) The OS scheduler not only picks a thread to run, it also chooses its morphism. Previously, scientists and engineers preferred to evaluate design trade-offs with numerical E · T results; this thesis argues that this approach does not fit future mobile system design. In mobile systems, whose primary role is that of "enhanced terminals", user interfaces to a cloud-hosted computing backbone, user satisfaction ought to be the primary goal. We propose a scheduler that targets User Satisfaction Index (USI) functions. In this thesis, we develop a model for a mobile polymorphic embedded system; this model primarily abstracts the queuing of threads in OS operation. We integrate a polymorphic scheduler into this model to assess the application design space offered by polymorphic computing. We explore several greedy versions of a polymorphic scheduler to improve user-satisfaction-driven QoS. We build a polymorphic system simulation platform based on SystemC to validate our theoretical analysis of a polymorphic system. We evaluate our polymorphic scheduler on a variety of application mixes with various metrics. We further discuss the feasibility of the USI-based polymorphic scheduler by identifying its strengths and weaknesses in relation to the application design space, based on the simulation results.
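    As a rough illustration of the scheduling idea described above, the Python sketch below greedily picks a (thread, morphism) pair from per-core energy-delay profiles using a hypothetical User Satisfaction Index. The USI shape, the energy budget and all the numbers are invented for illustration; this is not the model developed in the thesis.

```python
# Minimal sketch of a greedy polymorphic scheduler: each ready thread has one
# (energy, delay) profile per morphism (CPU, GPU, FPGA, ...), and the scheduler
# picks the (thread, morphism) pair that maximises a user-satisfaction score.
from dataclasses import dataclass

@dataclass
class Morphism:
    core: str         # e.g. "cpu", "gpu", "fpga"
    energy_mj: float  # estimated energy for this thread on this core
    delay_ms: float   # estimated completion time on this core

def usi(delay_ms: float, deadline_ms: float) -> float:
    """Hypothetical User Satisfaction Index: 1.0 if the deadline is met,
    decaying linearly to 0.0 at twice the deadline."""
    return max(0.0, min(1.0, 2.0 - delay_ms / deadline_ms))

def greedy_pick(ready, deadline_ms=50.0, energy_budget_mj=10.0):
    """Return the (thread_id, morphism) maximising USI, breaking ties on energy."""
    best = None
    for thread_id, morphisms in ready.items():
        for m in morphisms:
            if m.energy_mj > energy_budget_mj:
                continue  # skip morphisms that would exceed the energy budget
            key = (usi(m.delay_ms, deadline_ms), -m.energy_mj)
            if best is None or key > best[0]:
                best = (key, thread_id, m)
    return (best[1], best[2]) if best else None

ready = {
    "decode": [Morphism("cpu", 8.0, 40.0), Morphism("gpu", 4.0, 25.0)],
    "render": [Morphism("cpu", 6.0, 90.0), Morphism("fpga", 3.0, 45.0)],
}
print(greedy_pick(ready))
```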

    Linking image processing and numerical modeling to identify potential geohazards

    Faults, along with natural fractures, may enhance production when confined within the reservoir. However, if a fault is connected to an aquifer, it may cause early water breakthrough in the reservoir. Even when they are not conductive, faults pose a significant geohazard during drilling, since fault slippage can shear the casing/tubing, leading to either sidetracking or complete abandonment of the well. In this thesis, I propose a simple approximation of the dynamic conductivity of faults based on the steady-state flow equation. I use a geometric attribute, coherence, as a proxy for fault hydraulic conductivity in a steady-state flow equation to model dynamic flow. This thesis was inspired by problems faced by several companies working the Eagle Ford shale reservoir of south Texas. Surveys often exceed 1,000 km² and exhibit hundreds of faults. Most faults are not problematic; however, some connect with the deeper Edwards limestone aquifer, and wells completed near these faults produce water. This algorithm can provide early warnings of water production and simple, easy-to-compute input for field development in the absence of the more complete datasets required by the more rigorous reservoir simulation implemented in commercial software. The tool is designed to be used in a statistical, rather than deterministic, manner, identifying problematic faults by comparing their orientation and connectivity to those known to be bad from previous drilling history. The computational time is less than 17% of that of a more rigorous conventional reservoir simulation. Because it ignores parameters important for single-well production, such as facies, petrophysics and flow equations, the model can be updated easily as more data become available during the various stages of field development. This algorithm is a fast and simple approximation that can be very useful in overall field management where one wishes to quickly identify problematic faults or fault sets.
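    A minimal sketch of the proxy idea is given below in Python: coherence is mapped to a nominal hydraulic conductivity and inserted into a Darcy-type steady-state flow estimate used to rank faults. The coherence-to-conductivity scaling and all physical constants are hypothetical placeholders, not the calibration used in the thesis.

```python
# Minimal sketch: map a seismic coherence attribute to a proxy hydraulic
# conductivity and plug it into a steady-state (Darcy-type) flow estimate.
# Scaling constants, areas and head values are hypothetical placeholders.
import numpy as np

def conductivity_from_coherence(coherence, k_max=1e-4, k_min=1e-9):
    """Low coherence (a discontinuity, i.e. a likely fault) maps to a high
    proxy conductivity; high coherence (intact rock) maps to a low one.
    Units are nominal m/s."""
    coherence = np.clip(coherence, 0.0, 1.0)
    # Log-linear interpolation between k_min (coherence = 1) and k_max (coherence = 0).
    return 10 ** (np.log10(k_min) + (1.0 - coherence) * (np.log10(k_max) - np.log10(k_min)))

def steady_state_rate(coherence, area_m2, head_drop_m, path_length_m):
    """Darcy-style steady-state flow rate Q = K * A * (dh / L) along a fault."""
    k = conductivity_from_coherence(coherence)
    return k * area_m2 * head_drop_m / path_length_m

# Rank hypothetical faults by estimated water influx (m^3/s) from a deeper aquifer.
faults = {"F1": 0.15, "F2": 0.60, "F3": 0.35}   # mean coherence along each fault
for name, coh in sorted(faults.items(), key=lambda kv: kv[1]):
    q = steady_state_rate(coh, area_m2=5e4, head_drop_m=300.0, path_length_m=1500.0)
    print(f"{name}: coherence={coh:.2f}, Q={q:.3e} m^3/s")
```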

    High-Performance Modelling and Simulation for Big Data Applications

    This open access book was prepared as a Final Publication of the COST Action IC1406 "High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)" project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predict and analyse natural and complex systems in science and engineering. As their level of abstraction rises to allow better discernment of the domain at hand, their representation becomes increasingly demanding of computational and data resources. High Performance Computing, on the other hand, typically entails the effective use of parallel and distributed processing units, coupled with efficient storage, communication and visualisation systems, to underpin complex data-intensive applications in distinct scientific and technical domains. A seamless interaction of High Performance Computing with Modelling and Simulation is therefore arguably required in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.

    Mapping the evolving landscape of child-computer interaction research: structures and processes of knowledge (re)production

    Implementing an iterative sequential mixed methods design (Quantitative → Qualitative → Quantitative) framed within a sociology of knowledge approach to discourse, this study offers an account of the structure of the field of Child-Computer Interaction (CCI), its development over time, and the practices through which researchers have (re)structured the knowledge comprising the field. The thematic structure of knowledge within the field, and its evolution over time, is quantified through a Correlated Topic Model (CTM), an automated inductive content analysis method, applied to 4,771 CCI research papers published between 2003 and 2021. A detailed understanding of the practices through which researchers (re)structure knowledge within the field, including the factors influencing these practices, is obtained through thematic analysis of online workshops involving prominent contributors to the field (n=7). Strategic practices used by researchers to negotiate tensions impeding the integration of novel concepts into the field are investigated by analysing semantic features of the retrieved papers with linear and negative binomial regression models. Contributing an extensive mapping, the results portray the field of CCI as a varied research landscape, comprising 48 major themes of study, which has evolved dynamically over time. Research priorities throughout the field have been subject to influence from a range of endogenous and exogenous factors, which researchers actively negotiate through research and publication practices. Tacitly structuring research practices, these factors have broadly sustained a technology-driven, novelty-dominated paradigm throughout the field which has failed to substantively progress cumulative knowledge. Through strategic negotiation of the persistent tensions arising as a consequence of these factors, researchers have nonetheless effected structural change within the field, contributing to a shift towards a user-needs-driven agenda and the progression of knowledge therein. The findings demonstrate that the field of CCI is proceeding through an intermediary phase in its maturation, forming an increasingly distinct disciplinary shape and identity through the cumulative structuring effect of community members' continued negotiation of tensions.
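    For readers unfamiliar with the quantitative step, the short Python sketch below shows topic inference over toy paper abstracts. scikit-learn does not ship a Correlated Topic Model, so plain LDA is used here as a stand-in, and the example documents are invented; the study itself analysed 4,771 CCI papers with a CTM.

```python
# Minimal sketch of inferring topics from paper abstracts. LDA is used as a
# stand-in for the study's Correlated Topic Model, and the three toy
# "abstracts" below are invented for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "children tangible interaction design classroom tablet learning",
    "game play motivation children engagement evaluation study",
    "participatory design children co-design workshop methods",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

terms = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:4]]
    print(f"topic {k}: {', '.join(top)}")
```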

    Towards a living earth simulator

    The Living Earth Simulator (LES) is one of the core components of the FuturICT architecture. It will work as a federation of methods, tools, techniques and facilities supporting all of the FuturICT simulation-related activities, to allow and encourage interactive exploration and understanding of societal issues. Society-relevant problems will be targeted by leaning on approaches based on complex systems theories and data science, in tight interaction with the other components of FuturICT. The LES will evaluate and provide answers to real-world questions by taking into account multiple scenarios. It will build on present approaches such as agent-based simulation and modelling, multiscale modelling, statistical inference, and data mining, moving beyond disciplinary borders to achieve a new perspective on complex social systems. © The Author(s) 2012
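    As a toy illustration of one technique the abstract names, the Python sketch below runs a minimal agent-based opinion-spread simulation; it is a generic example of the method, not a component of the actual LES design.

```python
# Toy agent-based simulation: agents on a random contact structure adopt the
# majority opinion among a few random contacts. Purely illustrative.
import random

random.seed(0)
N, STEPS, CONTACTS = 100, 200, 4
opinion = [random.choice([0, 1]) for _ in range(N)]  # each agent holds 0 or 1

for _ in range(STEPS):
    i = random.randrange(N)
    neighbours = random.sample(range(N), CONTACTS)
    votes = sum(opinion[j] for j in neighbours)
    if votes * 2 > CONTACTS:     # clear majority for opinion 1
        opinion[i] = 1
    elif votes * 2 < CONTACTS:   # clear majority for opinion 0
        opinion[i] = 0

print(f"share holding opinion 1 after {STEPS} steps: {sum(opinion) / N:.2f}")
```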