8,085 research outputs found

    Approximate Computing Survey, Part I: Terminology and Software & Hardware Approximation Techniques

    The rapid growth of demanding applications in domains applying multimedia processing and machine learning has marked a new era for edge and cloud computing. These applications involve massive data and compute-intensive tasks, and thus, typical computing paradigms in embedded systems and data centers are stressed to meet the worldwide demand for high performance. Concurrently, the landscape of the semiconductor field over the last 15 years has established power as a first-class design concern. As a result, the computing-systems community is forced to find alternative design approaches that facilitate high-performance and/or power-efficient computing. Among the examined solutions, Approximate Computing has attracted ever-increasing interest, with research works applying approximations across the entire traditional computing stack, i.e., at the software, hardware, and architectural levels. Over the last decade, a plethora of approximation techniques has emerged in software (programs, frameworks, compilers, runtimes, languages), hardware (circuits, accelerators), and architectures (processors, memories). The current article is Part I of our comprehensive survey on Approximate Computing: it reviews its motivation, terminology, and principles, and it classifies and presents the technical details of the state-of-the-art software and hardware approximation techniques. Comment: under review at ACM Computing Surveys.
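    As a concrete illustration of what a software-level approximation looks like (an illustrative example, not a technique taken from the survey itself), the sketch below applies loop perforation: a fraction of loop iterations is skipped to trade result accuracy for execution time. The workload and the perforation stride are hypothetical choices.

```python
# Illustrative sketch of loop perforation, a common software approximation:
# skip a fraction of loop iterations to trade accuracy for speed.
# The workload (mean of a large array) and the stride are made-up examples.

def exact_mean(values):
    """Baseline: visit every element."""
    total = 0.0
    for v in values:
        total += v
    return total / len(values)

def perforated_mean(values, stride=4):
    """Approximate: visit only every `stride`-th element."""
    total = 0.0
    count = 0
    for i in range(0, len(values), stride):
        total += values[i]
        count += 1
    return total / count

if __name__ == "__main__":
    data = [float(i % 100) for i in range(1_000_000)]
    print("exact      :", exact_mean(data))            # 49.5
    print("perforated :", perforated_mean(data))       # approximate result from ~4x fewer iterations
```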

    Architecture and Advanced Electronics Pathways Toward Highly Adaptive Energy-Efficient Computing

    With the explosion of the number of compute nodes, the bottleneck of future computing systems lies in the network architecture connecting the nodes. Addressing the bottleneck requires replacing current backplane-based network topologies. We propose to revolutionize computing electronics by realizing embedded optical waveguides for onboard networking and wireless chip-to-chip links at 200-GHz carrier frequency connecting neighboring boards in a rack. The control of novel rate-adaptive optical and mm-wave transceivers needs tight interlinking with the system software for runtime resource management

    Towards Scalable Real-time Analytics: An Architecture for Scale-out of OLxP Workloads

    We present an overview of our work on the SAP HANA Scale-out Extension, a novel distributed database architecture designed to support large-scale analytics over real-time data. This platform permits high-performance OLAP with massive scale-out capabilities, while concurrently allowing OLTP workloads. This dual capability enables analytics over real-time changing data and allows fine-grained, user-specified service level agreements (SLAs) on data freshness. We advocate the decoupling of core database components such as query processing, concurrency control, and persistence, a design choice made possible by advances in high-throughput, low-latency networks and storage devices. We provide full ACID guarantees and build on a logical timestamp mechanism to provide MVCC-based snapshot isolation, while not requiring synchronous updates of replicas. Instead, we use asynchronous update propagation, guaranteeing consistency with timestamp validation. We provide a view into the design and development of a large-scale data management platform for real-time analytics, driven by the needs of modern enterprise customers.
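    A minimal sketch of the general idea behind timestamp-ordered MVCC visibility (a generic toy model, not SAP HANA's implementation; all class and variable names are assumptions): each committed row version carries the logical commit timestamp of its writer, and a reader with snapshot timestamp ts sees the newest version committed at or before ts, which is what lets replicas apply updates asynchronously and still serve consistent snapshots after timestamp validation.

```python
# Generic toy model of MVCC snapshot visibility based on logical commit
# timestamps. Illustrative only; not SAP HANA's implementation.

from dataclasses import dataclass
from itertools import count
from typing import Any, Optional

_logical_clock = count(1)   # monotonically increasing logical timestamps

@dataclass
class Version:
    value: Any
    commit_ts: int          # logical timestamp assigned when the write committed

class VersionedStore:
    def __init__(self):
        self._versions = {}  # key -> list[Version], append-only, ordered by commit_ts

    def write(self, key, value) -> int:
        """Commit a new version and return its logical commit timestamp."""
        ts = next(_logical_clock)
        self._versions.setdefault(key, []).append(Version(value, ts))
        return ts

    def read(self, key, snapshot_ts: int) -> Optional[Any]:
        """Return the newest version visible in the snapshot, if any."""
        visible = None
        for v in self._versions.get(key, []):
            if v.commit_ts <= snapshot_ts:
                visible = v
        return visible.value if visible else None

if __name__ == "__main__":
    store = VersionedStore()
    ts1 = store.write("balance", 100)
    ts2 = store.write("balance", 80)
    print(store.read("balance", ts1))  # 100 -- the later update is not visible
    print(store.read("balance", ts2))  # 80
```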

    Analysis and Mitigation of Shared Resource Contention on Heterogeneous Multicore: An Industrial Case Study

    In this paper, we address the industrial challenge put forth by ARM at ECRTS 2022. We systematically analyze the effect of shared resource contention on an augmented reality head-up display (AR-HUD) case-study application of the industrial challenge on a heterogeneous multicore platform, the NVIDIA Jetson Nano. We configure the AR-HUD application such that it can process incoming image frames in real-time at 20 Hz on the platform. We use micro-architectural denial-of-service (DoS) attacks as the aggressor tasks of the challenge and show that they can dramatically impact the latency and accuracy of the AR-HUD application, which results in significant deviations of the estimated trajectories from the ground truth, despite our best effort to mitigate their influence by using cache partitioning and real-time scheduling of the AR-HUD application. We show that dynamic LLC (or DRAM, depending on the aggressor) bandwidth throttling of the aggressor tasks is an effective means to ensure real-time performance of the AR-HUD application without resorting to over-provisioning the system.
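    The mitigation hinges on regulating the memory traffic of aggressor cores per regulation period, broadly in the spirit of MemGuard-style bandwidth throttling. The toy simulation below only illustrates that budget-per-period logic; the traffic numbers, budget, and core behaviour are invented, and a real deployment would read hardware performance counters and stall cores through the OS or hypervisor rather than simulate them.

```python
# Toy simulation of per-core memory-bandwidth regulation: each period every
# core receives a byte budget; a core that exhausts its budget is stalled
# (throttled) until the next period. Numbers are invented for illustration;
# a real system would use PMU counters and kernel-level throttling.

import random

BUDGET_BYTES = 2_000_000   # assumed per-core budget per regulation period
NUM_PERIODS = 5

def simulate(cores):
    for period in range(NUM_PERIODS):
        used = {c: 0 for c in cores}
        throttled = set()
        # Model each core issuing DRAM traffic in small bursts during the period.
        for _ in range(100):
            for c in cores:
                if c in throttled:
                    continue                      # stalled core issues no traffic
                used[c] += random.randint(0, 50_000)
                if used[c] > BUDGET_BYTES:
                    throttled.add(c)              # over budget: stall until next period
        print(f"period {period}: used={used}, throttled={sorted(throttled)}")

if __name__ == "__main__":
    simulate(cores=[0, 1, 2, 3])
```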

    The Metaverse: Survey, Trends, Novel Pipeline Ecosystem & Future Directions

    The Metaverse offers a second world beyond reality, where boundaries are non-existent and possibilities are endless, through engagement and immersive experiences using virtual reality (VR) technology. Many disciplines can benefit from the advancement of the Metaverse when accurately developed, including the fields of technology, gaming, education, art, and culture. Nevertheless, developing the Metaverse environment to its full potential is an ambiguous task that needs proper guidance and directions. Existing surveys on the Metaverse focus only on a specific aspect and discipline of the Metaverse and lack a holistic view of the entire process. To this end, a more holistic, multi-disciplinary, in-depth, and academia- and industry-oriented review is required to provide a thorough study of the Metaverse development pipeline. To address these issues, we present in this survey a novel multi-layered pipeline ecosystem composed of (1) the Metaverse computing, networking, communications and hardware infrastructure, (2) environment digitization, and (3) user interactions. For every layer, we discuss the components that detail the steps of its development. Also, for each of these components, we examine the impact of a set of enabling technologies and empowering domains (e.g., Artificial Intelligence, Security & Privacy, Blockchain, Business, Ethics, and Social) on its advancement. In addition, we explain the importance of these technologies in supporting decentralization, interoperability, user experiences, interactions, and monetization. Our study highlights the existing challenges for each component, followed by research directions and potential solutions. To the best of our knowledge, this survey is the most comprehensive to date and allows users, scholars, and entrepreneurs to gain an in-depth understanding of the Metaverse ecosystem and to identify their opportunities and potential for contribution.

    Binaural virtual auditory display for music discovery and recommendation

    Emerging patterns in audio consumption present renewed opportunity for searching or navigating music via spatial audio interfaces. This thesis examines the potential benefits and considerations for using binaural audio as the sole or principal output interface in a music browsing system. Three areas of enquiry are addressed. Specific advantages and constraints in the spatial display of music tracks are explored in preliminary work. A voice-led binaural music discovery prototype is shown to offer a contrasting interactive experience compared to a mono smart speaker. Results suggest that touch or gestural interaction may be more conducive input modes in the former case. The limit of three binaurally spatialised streams is identified from separate data as a usability threshold for simultaneous presentation of tracks, with no evident advantages derived from visual prompts to aid source discrimination or localisation. The challenge of implementing personalised binaural rendering for end-users of a mobile system is addressed in detail. A custom framework for assessing head-related transfer function (HRTF) selection is applied to data from an approach using 2D rendering on a personal computer. That HRTF selection method is then developed to encompass 3D rendering on a mobile device. Evaluation against the same criteria shows encouraging results in reliability, validity, usability and efficiency. Computational analysis of a novel approach for low-cost, real-time, head-tracked binaural rendering demonstrates measurable advantages compared to first-order virtual Ambisonics. Further perceptual evaluation establishes working parameters for interactive auditory display use cases. In summation, the renderer and identified tolerances are deployed with a method for synthesised, parametric 3D reverberation (developed through related research) in a final prototype for mobile immersive playlist editing. Task-oriented comparison with a graphical interface reveals high levels of usability and engagement, plus some evidence of an enhanced flow state when using the eyes-free binaural system.
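    For readers unfamiliar with the rendering step itself, the core operation behind binaural spatialisation is convolution of a mono signal with the left- and right-ear head-related impulse responses (HRIRs) for the desired direction. The sketch below shows only that basic operation; the HRIRs here are synthetic placeholder decays, whereas the thesis is concerned with selecting and personalising measured HRTF data and with efficient head-tracked rendering.

```python
# Minimal sketch of binaural spatialisation: convolve a mono signal with the
# left/right head-related impulse responses (HRIRs) for one direction.
# The HRIRs below are synthetic placeholders; a real renderer would use
# measured (and ideally personalised) HRTF data.

import numpy as np

def binauralise(mono: np.ndarray, hrir_left: np.ndarray,
                hrir_right: np.ndarray) -> np.ndarray:
    """Return a (num_samples, 2) stereo array with the source spatialised."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=1)

if __name__ == "__main__":
    fs = 48_000
    t = np.arange(fs) / fs
    mono = 0.5 * np.sin(2 * np.pi * 440.0 * t)     # 1 s test tone
    rng = np.random.default_rng(0)
    decay = np.exp(-np.arange(256) / 32.0)
    hrir_l = rng.standard_normal(256) * decay      # placeholder HRIRs
    hrir_r = rng.standard_normal(256) * decay
    stereo = binauralise(mono, hrir_l, hrir_r)
    print(stereo.shape)                            # (48255, 2)
```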

    Environmental analysis for application layer networks

    The increasing interconnection of computers via the Internet gave rise to the vision of Application Layer Networks. These comprise overlay networks, such as peer-to-peer networks and Grid infrastructures, built on the TCP/IP protocol. Their common characteristic is the redundant, distributed provision of and access to data, compute, and application services, while hiding the heterogeneity of the underlying infrastructure from the user. This thesis examines the requirements that these networks place on economic allocation mechanisms. The analysis is carried out by means of a market analysis process for a central auction mechanism and a catallactic market.

    2023-2024 Lynn University Academic Catalog

    The 2023-2024 Academic Catalog was initially published as a web-only document. The Department of Marketing and Communication created a PDF version, which is available for download here. https://spiral.lynn.edu/accatalogs/1052/thumbnail.jp

    Integration of design and NMPC-based control of processes under uncertainty

    The implementation of a Nonlinear Model Predictive Control (NMPC) scheme for the integration of design and control requires the solution of a complex optimization formulation, in which the solution of the design problem depends on the decisions of a lower-tier problem for the NMPC. A formulation with two decision levels of this kind is known as a bilevel optimization problem. Solving a bilevel problem with traditional linear programming (LP), nonlinear programming (NLP), or mixed-integer nonlinear programming (MINLP) solvers is very difficult, and the bilevel problem becomes particularly complex if uncertainties or discrete decisions are considered. Therefore, alternative methodologies are necessary to solve the bilevel problem arising from the integration of design and NMPC-based control. The lack of studies and practical methodologies on the integration of design and NMPC-based control motivates the development of novel methodologies to address this complex formulation.
A systematic methodology is proposed in this research to address the integration of design and control involving NMPC. This method is based on determining the amount of back-off necessary to move the design and control variables from an optimal steady-state design to a new dynamically feasible and economic operating point. The method reduces the complexity of the bilevel formulation by approximating the problem in terms of power series expansion (PSE) functions, which leads to a single-level problem formulation. These functions are obtained around the point that shows the worst-case variability in the process dynamics. The resulting PSE-based optimization model is easily solved with traditional NLP solvers. The method moves the design and control decision variables in a systematic fashion that accommodates the worst-case scenario at a dynamically feasible operating point. Since approximation techniques are involved, the feasible solutions may deviate from a local optimum.
A transformation methodology is also implemented to restate the bilevel problem as a single-level mathematical program with complementarity constraints (MPCC). This single-level MPCC is obtained by restating the NMPC optimization problem in terms of its optimality conditions. The single-level problem is still difficult to solve; however, conventional NLP or MINLP solvers can be used to search for a solution to the MPCC problem and thus provide optimality guarantees for the MPCC solution. Nevertheless, an optimal solution of the MPCC-based problem may not be an optimal solution of the original bilevel problem.
The introduction of structural decisions, such as the arrangement of equipment or the selection of the number of process units, requires the solution of formulations involving discrete decisions. This PhD thesis proposes a discrete-steepest descent algorithm (D-SDA) for the integration of design and NMPC-based control under uncertainty with structural decisions that follow a naturally ordered sequence, i.e., structural decisions that follow the order of the natural numbers. In this approach, the corresponding mixed-integer bilevel problem (MIBLP) is first transformed into a single-level mixed-integer nonlinear program (MINLP).
Then, the MINLP is decomposed into an integer master problem and a set of continuous sub-problems. The set of problems is solved systematically, enabling exploration of the neighborhoods defined by subsets of the integer variables. The search direction is determined by the neighbor that produces the largest improvement in the objective function. As this method does not require the relaxation of integer variables, it can determine local solutions that may not be efficiently identified using conventional MINLP solvers. To assess the performance of the proposed discrete-steepest descent approach, an alternative methodology based on the distributed stream-tray optimization (DSTO) method is presented. In that methodology, the integer variables are treated as continuous variables through differentiable distribution functions (DDFs), which are derived from the discretization of Gaussian distributions. This allows a continuous formulation (i.e., an NLP) to be solved for the integration of design and NMPC-based control under uncertainty with structural decisions from a naturally ordered set.
Most applications for the integration of design and control implement direct transcription approaches for the solution of the optimization formulation, i.e., the optimization problem is fully discretized. In chemical engineering, the most widely used discretization strategy is orthogonal collocation on finite elements (OCFE). OCFE offers adequate accuracy and numerical stability if the number of collocation points and the number of finite elements are properly selected. For the discretization of integrated design and control formulations, the number of finite elements is commonly chosen based on a priori simulations or process heuristics. In this PhD study, a novel methodology for the selection and refinement of the number of finite elements in the integrated design and control framework is presented. This methodology implements two criteria for the selection of finite elements: the estimation of the collocation error and the profile of the Hamiltonian function. The Hamiltonian function is continuous and constant over time for autonomous systems; nevertheless, it shows a nonconstant profile when the discretization mesh is too coarse. The methodology systematically adds or removes finite elements depending on the magnitude of the estimated collocation error and the fluctuations in the Hamiltonian function profile.
The proposed methodologies have been tested on case studies with different features. An existing wastewater treatment plant is considered to illustrate the implementation of the back-off strategy. A reaction system with two continuous stirred-tank reactors (CSTRs) is considered to illustrate the implementation of the MPCC-based formulation for design and control. The D-SDA approach is tested on the integration of design, NMPC-based control, and superstructure decisions for a binary distillation column. Lastly, a reaction system illustrates the effect of the selection and refinement of the discretization mesh in the integrated design and control framework. The results show that the implementation of NMPC controllers leads to more economically attractive process designs with improved control performance compared to applications with classical decentralized PID or linear MPC controllers.
The discrete-steepest descent approach made it possible to skip sub-optimal solution regions and led to more economical designs with better control performance than the solutions obtained with the benchmark methodology using DDFs. Meanwhile, the refinement strategy for the discretization of integrated design and control formulations demonstrated that attractive solutions with improved control performance can be obtained with a reduced number of finite elements.
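As a schematic illustration of the discrete-steepest descent idea described above (not the thesis implementation), the sketch below searches over naturally ordered integer variables: at each step the continuous design-and-control sub-problem is notionally solved at every neighbouring integer point, the search moves to the neighbour with the largest improvement, and it stops when no neighbour improves. Here solve_subproblem is a stand-in toy cost; in the actual method it would be the NLP for design and NMPC-based control at a fixed structure.

```python
# Schematic sketch of a discrete-steepest descent (D-SDA-style) search over
# naturally ordered integer variables (e.g., number of trays, feed stage).
# solve_subproblem() is a stand-in for the continuous design/control NLP
# solved at a fixed integer point; here it is a toy quadratic cost.

def solve_subproblem(z):
    n_units, feed_stage = z
    return (n_units - 7) ** 2 + 0.5 * (feed_stage - 4) ** 2   # toy cost surface

def neighbours(z, lower, upper):
    """Unit moves along each ordered integer coordinate, within bounds."""
    for i in range(len(z)):
        for step in (-1, +1):
            cand = list(z)
            cand[i] += step
            if lower[i] <= cand[i] <= upper[i]:
                yield tuple(cand)

def discrete_steepest_descent(z0, lower, upper):
    z, cost = z0, solve_subproblem(z0)
    while True:
        best_z, best_cost = z, cost
        for cand in neighbours(z, lower, upper):
            c = solve_subproblem(cand)
            if c < best_cost:          # keep the neighbour with the largest improvement
                best_z, best_cost = cand, c
        if best_z == z:                # no improving neighbour: local optimum reached
            return z, cost
        z, cost = best_z, best_cost

if __name__ == "__main__":
    z_opt, cost = discrete_steepest_descent(z0=(10, 2), lower=(2, 1), upper=(30, 20))
    print(z_opt, cost)                 # (7, 4), 0.0 for the toy cost
```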