
    Approximate Computing Survey, Part I: Terminology and Software & Hardware Approximation Techniques

    The rapid growth of demanding applications in domains such as multimedia processing and machine learning has marked a new era for edge and cloud computing. These applications involve massive data and compute-intensive tasks, and thus typical computing paradigms in embedded systems and data centers are stressed to meet the worldwide demand for high performance. Concurrently, over the last 15 years the semiconductor field has established power as a first-class design concern. As a result, the computing-systems community has been forced to find alternative design approaches that facilitate high-performance and/or power-efficient computing. Among the examined solutions, Approximate Computing has attracted ever-increasing interest, with research works applying approximations across the entire traditional computing stack, i.e., at the software, hardware, and architectural levels. The last decade has produced a plethora of approximation techniques in software (programs, frameworks, compilers, runtimes, languages), hardware (circuits, accelerators), and architectures (processors, memories). The current article is Part I of our comprehensive survey on Approximate Computing: it reviews its motivation, terminology, and principles, and it classifies and presents the technical details of state-of-the-art software and hardware approximation techniques. Comment: Under review at ACM Computing Surveys.
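As a concrete illustration of the software-level approximations such surveys cover, the following is a minimal sketch of loop perforation, a classic compiler/runtime technique that skips a fraction of loop iterations to trade accuracy for work. The stride and the data used here are illustrative assumptions, not values from the survey:

```python
import math

def exact_mean(xs):
    """Exact mean over all samples."""
    return sum(xs) / len(xs)

def perforated_mean(xs, skip=2):
    """Loop perforation: visit only every `skip`-th element,
    trading a small accuracy loss for proportionally less work."""
    kept = xs[::skip]
    return sum(kept) / len(kept)

xs = [math.sin(i * 0.01) for i in range(10_000)]
# skip=4 does roughly a quarter of the work of the exact loop
error = abs(exact_mean(xs) - perforated_mean(xs, skip=4))
```

On smooth data like this, the perforated loop's result stays close to the exact one, which is the quality/efficiency trade-off at the heart of approximate computing.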

    Comparative Multiple Case Study into the Teaching of Problem-Solving Competence in Lebanese Middle Schools

    This multiple case study investigates how problem-solving competence is integrated into teaching practices in private schools in Lebanon. Its purpose is to compare instructional approaches to problem-solving across three different programs: the American (Common Core State Standards and Next Generation Science Standards), the French (Socle Commun de Connaissances, de Compétences et de Culture), and the Lebanese, with a focus on middle school (grades 7, 8, and 9). The project was conducted in nine schools equally distributed among three categories based on the programs they offered: category 1 schools offered the Lebanese program, category 2 the French and Lebanese programs, and category 3 the American and Lebanese programs. Each school was treated as a separate case. Structured observation data were collected using observation logs that focused on lesson objectives and specific cognitive problem-solving processes. The two logs were created based on a document review of the requirements of the three programs. The structured observations were followed by semi-structured interviews conducted to explore teachers' beliefs and understandings of problem-solving competence. The comparative analysis of within-category structured observations revealed instruction ranging from teacher-led practices, particularly in category 1 schools, to more student-centered approaches in categories 2 and 3. The cross-category analysis showed a reliance on cognitive processes primarily promoting exploration, understanding, and demonstrating understanding, with less emphasis on planning and executing and on monitoring and reflecting, thus uncovering a weakness in addressing these processes. The findings of the post-observation semi-structured interviews disclosed a range of definitions of problem-solving competence prevalent amongst teachers, with clear divergences across the three school categories.
This research is unique in that it compares problem-solving teaching approaches across three different programs and explores teachers' underlying beliefs and understandings of problem-solving competence in the Lebanese context. It is hoped that this project will inform curriculum developers about future directions and much-anticipated reforms of the Lebanese program, and practitioners about areas that need to be addressed to further improve the teaching of problem-solving competence.

    Challenge and Research Trends of Solar Concentrators

    Primary and secondary solar concentrators are of vital importance for advanced solar energy and solar laser research. Some of the most recent developments in primary and secondary solar concentrators were first presented. A novel three-dimensional elliptical-shaped Fresnel lens analytical model was put forward to maximize the solar concentration ratio of Fresnel-lens-based solar concentrators. By combining a Fresnel lens with a modified parabolic mirror, a significant improvement in solar laser efficiency was numerically calculated. A fixed fiber light guide system using concave outlet concentrators was proposed; the absence of a solar tracking structure distinguishes this research. By shaping a luminescent solar concentrator as an elliptical array, its emission losses were drastically reduced. A simple conical secondary concentrator proved effective for thermal applications. Recent progress in solar-pumped lasers at NOVA University of Lisbon was also presented. By adopting a rectangular fused silica light guide, a maximum solar laser power of 40 W was emitted from a single Ce:Nd:YAG rod. An aspheric fused silica secondary concentrator and a small-diameter Ce:Nd:YAG rod were essential for attaining a record solar-to-laser power conversion efficiency of 4.5%. A novel solar concentrator design for the efficient production of doughnut-shaped and top-hat solar laser beams was also reported. More importantly, a novel solar concentrator approach for the emission of 5 kW-class TEM00-mode solar laser beams from a one-megawatt solar furnace was put forward at the end of this book, revealing a promising future for solar-pumped lasers.

    Intelligent computing : the latest advances, challenges and future

    Computing is a critical driving force in the development of human civilization. In recent years, we have witnessed the emergence of intelligent computing, a new computing paradigm that is reshaping traditional computing and promoting the digital revolution in the era of big data, artificial intelligence, and the Internet of Things with new computing theories, architectures, methods, systems, and applications. Intelligent computing has greatly broadened the scope of computing, extending it from traditional computing on data to increasingly diverse computing paradigms such as perceptual intelligence, cognitive intelligence, autonomous intelligence, and human-computer fusion intelligence. Intelligence and computing have followed different paths of evolution and development for a long time but have become increasingly intertwined in recent years: intelligent computing is not only intelligence-oriented but also intelligence-driven. Such cross-fertilization has prompted the emergence and rapid advancement of intelligent computing.

    Modelling, Monitoring, Control and Optimization for Complex Industrial Processes

    This reprint includes 22 research papers and an editorial, collected from the Special Issue "Modelling, Monitoring, Control and Optimization for Complex Industrial Processes", highlighting recent research advances and emerging research directions in complex industrial processes. This reprint aims to promote the research field and to benefit readers in both academia and industry.

    Distributed MAP-Elites and its Application in Evolutionary Design

    Quality-Diversity search is the process of finding diverse solutions within the search space that do not sacrifice performance. MAP-Elites is a quality-diversity algorithm that measures n phenotypes/behaviours of a solution and places it into an n-dimensional hypercube based on its phenotype values. This thesis proposes an approach to addressing MAP-Elites' problem of exponential growth in the number of hypercube cells. As the number of phenotypes/behaviours grows, evaluation and computation time grow exponentially, which can harm optimization performance, and the corresponding growth in individuals leaves the user with too many candidate solutions at the end of a run. MAP-Elites therefore highlights diversity, but with this exponential growth the resulting diversity is arguably impractical. This research proposes an enhancement to MAP-Elites based on distributed island-model evolution, which introduces linear growth in population size and yields a reasonable number of candidate solutions to consider. Each island consists of a two-dimensional MAP, which allows for realistic analysis and visualization of its individuals. Since the proposed system scales linearly while MAP-Elites scales exponentially, high-dimensional problems show an even greater decrease in total candidate solution counts, which aids the realistic analysis of a run. The system was tested on procedural texture generation with multiple computer vision fitness functions. This Distributed MAP-Elites algorithm was evaluated against vanilla GP, island-model evolution, and traditional MAP-Elites on multiple fitness functions and target images. The proposed algorithm was found to be at least competitive in fitness with the other algorithms and in some cases outperformed them. Moreover, when the best solutions were inspected visually, the algorithm was able to produce visually interesting textures.
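The hypercube archive at the core of MAP-Elites can be sketched in a few lines. The toy objective, the two-dimensional descriptor, and the 10x10 grid below are illustrative assumptions, not the thesis's texture-generation setup; note that with 10 bins per dimension an n-dimensional descriptor would need 10^n cells, which is exactly the exponential growth the thesis targets:

```python
import random

def fitness(x, y):
    # toy objective: prefer points near the origin
    return -(x * x + y * y)

def descriptor(x, y):
    # phenotype/behaviour: the coordinates, binned into a 10x10 grid
    return (min(int((x + 1) * 5), 9), min(int((y + 1) * 5), 9))

def map_elites(iterations=5000, seed=0):
    rng = random.Random(seed)
    archive = {}  # cell -> (fitness, genome)
    for _ in range(iterations):
        if archive and rng.random() < 0.9:
            # mutate a randomly chosen elite
            _, (px, py) = archive[rng.choice(list(archive))]
            x = max(-1.0, min(1.0, px + rng.gauss(0, 0.1)))
            y = max(-1.0, min(1.0, py + rng.gauss(0, 0.1)))
        else:
            # occasional random restart for exploration
            x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)
        f, cell = fitness(x, y), descriptor(x, y)
        # keep the new solution only if its cell is empty or it is better
        if cell not in archive or f > archive[cell][0]:
            archive[cell] = (f, (x, y))
    return archive

archive = map_elites()
```

Each island in the distributed variant would maintain one such two-dimensional archive, so total archive size grows linearly with the number of islands rather than exponentially with descriptor dimensions.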

    EXAMINING PROTEIN CONFORMATIONAL DYNAMICS USING COMPUTATIONAL TECHNIQUES: STUDIES ON PHOSPHATIDYLINOSITOL-3-KINASE AND THE SODIUM-IODIDE SYMPORTER

    Experimental biophysics techniques used to study proteins, the polymers of amino acids that comprise most therapeutic targets of human disease, face limitations in their ability to interrogate the continual structural fluctuations exhibited by these macromolecules in the context of their myriad cellular functions. This dissertation aims to illustrate case studies that demonstrate how protein conformational dynamics can be characterized using computational methods, yielding novel insights into their functional regulation and activity. Towards this end, the work presented here describes two specific membrane proteins of therapeutic relevance: phosphoinositide 3-kinase (PI3Kα) and the Na+/I- symporter (NIS). The PIK3CA gene, encoding the catalytic subunit of the PI3Kα protein that phosphorylates phosphatidylinositol-4,5-bisphosphate (PIP2) to generate phosphatidylinositol-3,4,5-trisphosphate (PIP3), is highly mutated in human cancer. As such, a deeper mechanistic understanding of PI3Kα could facilitate the development of novel chemotherapeutic approaches. The second chapter of this dissertation describes molecular dynamics (MD) simulations that were conducted to determine how PI3Kα conformations are influenced by physiological effectors and the nSH2 domain of a regulatory subunit, p85. The results reported here suggest that dynamic allostery plays a role in populating the catalytically competent conformation of PI3Kα. NIS, a thirteen-helix transmembrane protein found in the thyroid and other tissues, transports iodide, a required constituent of the thyroid hormones T3 and T4. Despite extensive experimental information and clinical data, many mechanistic details about NIS remain unresolved. The third chapter of this dissertation describes the results of unbiased and enhanced-sampling MD simulations of inwardly and outwardly open models of bound NIS under an enforced ion gradient. Simulations of NIS in the absence or presence of perchlorate are also described.
The work presented in this dissertation aims to add to our mechanistic understanding of NIS ion transport and to elucidate the conformational states that occur between the inward and outward transitions of NIS in the absence and presence of bound Na+ and I- ions, which can provide valuable insight into its physiological activity and inform therapeutic interventions. Taken together, these case studies demonstrate the ability of computational techniques to provide novel insights into the impact of structural dynamics on the functional regulation of therapeutically important biological macromolecules.

    ON EXPRESSIVENESS, INFERENCE, AND PARAMETER ESTIMATION OF DISCRETE SEQUENCE MODELS

    Huge neural autoregressive sequence models have achieved impressive performance across applications such as NLP, reinforcement learning, and bioinformatics. However, some lingering problems (e.g., the consistency and coherency of generated texts) persist regardless of parameter count. In the first part of this thesis, we chart a taxonomy of the expressiveness of various sequence model families (Ch 3). In particular, we put forth complexity-theoretic proofs that string latent-variable sequence models are strictly more expressive than energy-based sequence models, which in turn are more expressive than autoregressive sequence models. Based on these findings, we introduce residual energy-based sequence models (Ch 4), a family of energy-based sequence models whose sequence weights can be evaluated efficiently and that perform competitively against autoregressive models. However, we show how unrestricted energy-based sequence models can suffer from uncomputability, and how such a problem is generally unfixable without knowledge of the true sequence distribution (Ch 5). In the second part of the thesis, we study practical sequence model families and algorithms based on the theoretical findings of the first part. We introduce neural particle smoothing (Ch 6), a family of approximate sampling methods that work with conditional latent-variable models. We also introduce neural finite-state transducers (Ch 7), which extend weighted finite-state transducers with mark strings, allowing transduction paths in a finite-state transducer to be scored with a neural network. Finally, we propose neural regular expressions (Ch 8), a family of neural sequence models that are easy to engineer, allowing a user to design flexible weighted relations using Marked FSTs and to combine these weighted relations with various operations.
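The decomposition behind residual energy-based sequence models (a tractable, locally normalised autoregressive base score plus a global correction term) can be illustrated with a toy example. The bigram table and the repeat-penalising energy below are hypothetical stand-ins for the neural models in the thesis, chosen only to show how the two terms combine into an unnormalised sequence weight:

```python
import math

# toy autoregressive bigram model over the alphabet {a, b, c}
# (each row of conditionals sums to 1; values are illustrative)
P = {
    ("a", "a"): 0.1, ("a", "b"): 0.6, ("a", "c"): 0.3,
    ("b", "a"): 0.5, ("b", "b"): 0.2, ("b", "c"): 0.3,
    ("c", "a"): 0.4, ("c", "b"): 0.4, ("c", "c"): 0.2,
}

def ar_logprob(seq, start="a"):
    """Autoregressive base score: sum of conditional log-probabilities."""
    lp, prev = 0.0, start
    for sym in seq:
        lp += math.log(P[(prev, sym)])
        prev = sym
    return lp

def residual_energy(seq):
    """Hypothetical global energy: penalise immediate symbol repeats."""
    return sum(1.0 for x, y in zip(seq, seq[1:]) if x == y)

def residual_score(seq):
    # unnormalised log-weight: base model score minus residual energy
    return ar_logprob(seq) - residual_energy(seq)
```

The residual term can express global preferences (here, avoiding repeats) that the left-to-right base model alone does not enforce, at the cost of losing local normalisation.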

    Uncertain Quality-Diversity: Evaluation methodology and new methods for Quality-Diversity in Uncertain Domains

    Quality-Diversity optimisation (QD) has proven to yield promising results across a broad set of applications. However, QD approaches struggle in the presence of uncertainty in the environment, as it impacts their ability to quantify the true performance and novelty of solutions. This problem has been highlighted multiple times independently in previous literature. In this work, we propose to unify the view on this problem through four main contributions. First, we formalise a common framework for uncertain domains: the Uncertain QD setting, a special case of QD in which the fitness and descriptors of each solution are no longer fixed values but distributions over possible values. Second, we propose a new methodology to evaluate Uncertain QD approaches, relying on a new per-generation sampling budget and a set of existing and new metrics specifically designed for Uncertain QD. Third, we propose three new Uncertain QD algorithms: Archive-sampling, Parallel-Adaptive-sampling, and Deep-Grid-sampling. These approaches take into account recent advances in the QD community toward hardware acceleration, which enables large numbers of parallel evaluations and makes sampling an affordable approach to uncertainty. Our fourth and final contribution is to use this new framework and the associated comparison methods to benchmark existing and novel approaches. We demonstrate once again the limitations of MAP-Elites in uncertain domains and highlight the performance of the existing Deep-Grid approach and of our new algorithms. The goal of this framework and these methods is to become an instrumental benchmark for future work on Uncertain QD. Comment: Submitted to Transactions on Evolutionary Computation.
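The core idea behind sampling-based Uncertain QD methods such as Archive-sampling (re-evaluating solutions rather than trusting a single noisy measurement) can be sketched as follows. The noisy objective, the candidate set, and the re-evaluation budget are illustrative assumptions, not the paper's tasks or algorithms:

```python
import random

def noisy_fitness(x, rng):
    """Fitness as a distribution: true value -4*(x - 0.5)**2 plus Gaussian noise."""
    return -4 * (x - 0.5) ** 2 + rng.gauss(0, 0.1)

def pick_elite(candidates, reevals=50, seed=0):
    rng = random.Random(seed)
    # naive update: trust a single noisy evaluation per solution
    one_shot = max(candidates, key=lambda x: noisy_fitness(x, rng))

    # archive-style resampling: average many re-evaluations per solution,
    # which is what cheap parallel evaluation makes affordable
    def mean_fit(x):
        return sum(noisy_fitness(x, rng) for _ in range(reevals)) / reevals

    resampled = max(candidates, key=mean_fit)
    return one_shot, resampled

candidates = [i / 10 for i in range(11)]  # 0.0, 0.1, ..., 1.0
one_shot, resampled = pick_elite(candidates)
```

The single-sample choice can land far from the true optimum at 0.5 because one lucky noise draw is enough to win a cell, whereas averaged re-evaluations estimate the mean of each solution's fitness distribution and rank solutions far more reliably.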