7,876 research outputs found
An Efficient Monte Carlo-based Probabilistic Time-Dependent Routing Calculation Targeting a Server-Side Car Navigation System
Incorporating the speed probability distribution into the route-planning
computation of car navigation systems yields more accurate and precise
responses. In this paper, we propose a novel approach for dynamically selecting
the number of samples used in the Monte Carlo simulation that solves the
Probabilistic Time-Dependent Routing (PTDR) problem, thus improving
computational efficiency. The proposed method proactively determines the number
of simulations needed to extract the travel-time estimate for each specific
request while respecting an error threshold as the output quality level. The
methodology requires little effort on the application-development side: we
adopted an aspect-oriented programming language (LARA) together with a flexible
dynamic autotuning library (mARGOt), respectively to instrument the code and to
make tuning decisions on the number of samples, improving execution efficiency.
Experimental results demonstrate that the proposed adaptive approach saves a
large fraction of simulations (between 36% and 81%) with respect to a static
approach across different traffic situations, paths and error requirements.
Given the negligible runtime overhead of the proposed approach, it yields an
execution-time speedup between 1.5x and 5.1x. At the infrastructure level, this
speedup translates into a reduction of around 36% in the computing resources
needed to support the whole navigation pipeline.
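The adaptive sample-count idea can be sketched in a few lines. This is a minimal illustration with an invented travel-time model (`sample_travel_time`, the batch size, and the thresholds are all assumptions), not the paper's PTDR code or the mARGOt autotuner:

```python
import random
import statistics

def estimate_travel_time(sample_once, error_threshold=0.01,
                         batch=100, max_samples=10_000):
    """Draw Monte Carlo samples in batches until the relative standard
    error of the mean travel-time estimate falls below error_threshold,
    then stop -- the adaptive sample-count idea in a minimal form."""
    samples = []
    while len(samples) < max_samples:
        samples.extend(sample_once() for _ in range(batch))
        mean = statistics.fmean(samples)
        sem = statistics.stdev(samples) / len(samples) ** 0.5
        if sem / mean < error_threshold:
            break
    return mean, len(samples)

# Hypothetical travel-time model: 20 road segments, each with a random
# speed factor (a stand-in for real per-segment speed distributions).
def sample_travel_time():
    return sum(1.0 / random.uniform(0.5, 1.5) for _ in range(20))

random.seed(0)
mean, n = estimate_travel_time(sample_travel_time)
```

A tighter error threshold or a noisier speed model drives the sample count up; a static approach would have to provision for the worst case on every request.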
Efficient Monte Carlo Based Methods for Variability Aware Analysis and Optimization of Digital Circuits.
Process variability is of increasing concern in modern nanometer-scale CMOS. This
work explores the suitability of Monte Carlo based algorithms for efficient
analysis and optimization of digital circuits under variability. Random-sampling
Monte Carlo techniques incur a high computational cost because of the large
sample size required to achieve the target accuracy. This motivates intelligent
sample-selection techniques that reduce the number of samples. Because such
techniques depend on information about the system under analysis, they must be
tailored to the specific application context. We propose efficient
smart-sampling techniques for timing and leakage-power analysis of digital
circuits. For timing analysis, we show that the proposed method requires 23.8X
fewer samples on average to achieve accuracy comparable to a random-sampling
approach for the benchmark circuits studied. We further illustrate that the
parallelism available in such techniques can be exploited on parallel machines,
especially Graphics Processing Units (GPUs): SH-QMC implemented on multiple GPUs
runs twice as fast as a single STA on a CPU for the benchmark circuits
considered. Next, we study the possibility of using such statistical-analysis
information to optimize digital circuits under variability, for example to
achieve minimum silicon area through gate sizing while meeting a timing
constraint. Though several circuit-optimization techniques have been proposed in
the literature, it is not clear how much of their gain comes specifically from
the use of statistical information. We therefore propose an effective
lower-bound computation technique that enables efficient comparison of
statistical design-optimization techniques, and show that even techniques which
use only limited statistical information can achieve results within 10% of the
proposed lower bound. We conclude that future optimization research should shift
its focus from using more statistical information to achieving more efficiency
and parallelism to obtain speedups.

Ph.D. thesis, Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/78936/1/tvvin_1.pd
Formal and Informal Methods for Multi-Core Design Space Exploration
We propose a tool-supported methodology for design-space exploration for
embedded systems. It provides means to define high-level models of applications
and multi-processor architectures and evaluate the performance of different
deployment (mapping, scheduling) strategies while taking uncertainty into
account. We argue that this extension of the scope of formal verification is
important for the viability of the domain.

Comment: In Proceedings QAPL 2014, arXiv:1406.156
Smart and high-performance digital-to-analog converters with dynamic-mismatch mapping
The trends of advanced communication systems, such as the high data rate in multi-channel base-stations and digital IF conversion in software-defined radios, have caused a continuously increasing demand for high-performance interface circuits between the analog and the digital domain. A Digital-to-Analog Converter (DAC) is such an interface circuit in the transmitter path. High bandwidth, high linearity and low noise are the main design challenges in high-performance DACs. Current-steering is the most suitable architecture to meet these performance requirements. The aim of this thesis is to develop design techniques for high-speed, high-performance Nyquist current-steering DACs, especially DACs with high dynamic performance, i.e. high linearity and low noise.
The thesis starts with an introduction to DACs in chapter 2. The function in the time/frequency domain, performance specifications, architectures and physical implementations of DACs are briefly discussed. Benchmarks of state-of-the-art published Nyquist DACs are also given.
Chapter 3 analyzes the performance limitations caused by various error sources in Nyquist current-steering DACs. The outcome shows that in the frequency range from DC to hundreds of MHz, mismatch errors, i.e. amplitude and timing errors, dominate the DAC linearity. Moreover, as frequencies increase, the effect of timing errors becomes more and more dominant over that of amplitude errors. Two new parameters, dynamic-INL and dynamic-DNL, are proposed to evaluate the matching of current cells. Compared to the traditional static INL/DNL, the dynamic-INL/DNL describe the matching between current cells more accurately and completely. By reducing the dynamic-INL/DNL, the non-linearities caused by all mismatch errors can be reduced, so both the static and the dynamic DAC performance can be improved. The dynamic-INL/DNL are frequency-dependent parameters based on the measurement modulation frequency fm. This fm determines the weight between amplitude and timing errors in the dynamic-INL/DNL, which gives the freedom to optimize the DAC performance for different applications, e.g. a low fm for low-frequency applications and a high fm for high-frequency applications.
Chapter 4 summarizes the existing design techniques for intrinsic and smart DACs. Due to technology limitations, it is difficult to reduce the mismatch errors by intrinsic DAC design alone with reasonable chip area and power consumption; therefore, calibration techniques are required. An intrinsic DAC with calibration is called a smart DAC. Existing analog calibration techniques mainly focus on current-source calibration, so that the amplitude error can be reduced. Dynamic element matching is a digital calibration technique that can reduce the non-linearities caused by all mismatch errors, but at the cost of an increased noise floor. Mapping is another digital calibration technique and does not increase the noise. As a highly digitized calibration technique, mapping has many advantages: since it corrects the error effects in the digital domain, the DAC analog core can be made clean and compact, which reduces the parasitics and the interference generated in the analog part. Traditional mapping is static-mismatch mapping, i.e. mapping only for amplitude errors, which many publications have already addressed. Several concepts have also been proposed for mapping of timing errors. However, mapping for amplitude or timing errors alone is not enough to guarantee good performance. This work focuses on developing mapping techniques that can correct both amplitude and timing errors at the same time.
Chapter 5 introduces a novel mapping technique, called dynamic-mismatch mapping (DMM). By modulating current cells as square-wave outputs and measuring the dynamic-mismatch errors as vectors, DMM optimizes the switching sequence of the current cells based on dynamic-mismatch error cancelation such that the dynamic-INL is reduced. After reducing the dynamic-INL, the non-linearities caused by both amplitude and timing errors are significantly reduced over the whole Nyquist band, which is confirmed by Matlab behavioral-level Monte Carlo simulations. Compared to traditional static-mismatch mapping (SMM), DMM reduces the non-linearities caused by both amplitude and timing errors; compared to dynamic element matching (DEM), DMM does not increase the noise floor.
The dynamic-mismatch error has to be measured accurately in order to gain the maximal benefit from DMM. An on-chip dynamic-mismatch error sensor based on a zero-IF receiver is proposed in chapter 6. This sensor is especially designed for low 1/f noise, since the signal is directly down-converted to DC. Its signal transfer function and noise analysis are also given and confirmed by transistor-level simulations.
Chapter 7 gives a design example of a 14-bit current-steering DAC in 0.14 μm CMOS technology. The DAC can be configured in an intrinsic-DAC mode or a smart-DAC mode. In the intrinsic-DAC mode, the 14-bit 650 MS/s intrinsic DAC core achieves SFDR > 65 dBc across the whole 325 MHz Nyquist band. In the smart-DAC mode, compared to the intrinsic DAC performance, DMM improves the DAC performance over the whole Nyquist band, providing at least 5 dB linearity improvement at 200 MS/s without increasing the noise floor. This 14-bit 200 MS/s smart DAC with DMM achieves SFDR > 78 dBc, IM
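The error-cancellation idea behind switching-sequence optimization can be sketched with scalar errors. This is a simplified stand-in (greedy ordering of invented, gain-corrected amplitude errors); actual DMM works on measured dynamic-mismatch error vectors that combine amplitude and timing contributions:

```python
import random
import statistics

def greedy_sequence(errors):
    """Order unary current cells so that the running sum of their mismatch
    errors stays near zero. Simplified scalar stand-in for dynamic-mismatch
    mapping, which optimizes over measured error vectors instead."""
    remaining = list(range(len(errors)))
    order, acc = [], 0.0
    while remaining:
        # Greedily pick the cell that best cancels the accumulated error.
        i = min(remaining, key=lambda k: abs(acc + errors[k]))
        acc += errors[i]
        order.append(i)
        remaining.remove(i)
    return order

random.seed(1)
raw = [random.gauss(0.0, 1.0) for _ in range(255)]   # 255 unary cells (toy)
m = statistics.fmean(raw)
errors = [e - m for e in raw]   # remove the gain error (end-point convention)

seq = greedy_sequence(errors)

def worst_running_error(order):
    # INL-like figure of merit: worst accumulated mismatch along a sequence.
    acc, worst = 0.0, 0.0
    for i in order:
        acc += errors[i]
        worst = max(worst, abs(acc))
    return worst

inl_mapped = worst_running_error(seq)
inl_natural = worst_running_error(range(255))
```

The natural ordering lets the mismatch random-walk to a large peak, while the optimized sequence keeps the running error bounded, which is the mechanism by which a better switching sequence lowers the INL.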
LEGaTO: first steps towards energy-efficient toolset for heterogeneous computing
LEGaTO is a three-year EU H2020 project which started in December 2017. The LEGaTO project will leverage task-based programming models to provide a software ecosystem for Made-in-Europe heterogeneous hardware composed of CPUs, GPUs, FPGAs and dataflow engines. The aim is to attain one order of magnitude energy savings from the edge to the converged cloud/HPC.

Peer Reviewed. Postprint (author's final draft
Summary of CPAS EDU Testing Analysis Results
The Orion program's Capsule Parachute Assembly System (CPAS) project is currently conducting its third generation of testing, the Engineering Development Unit (EDU) series. This series utilizes two test articles, a dart-shaped Parachute Compartment Drop Test Vehicle (PCDTV) and a capsule-shaped Parachute Test Vehicle (PTV), both of which include a full-size, flight-like parachute system and require a pallet delivery system for aircraft extraction. To date, 15 tests have been completed, including six with PCDTVs and nine with PTVs. Two of the PTV tests included the Forward Bay Cover (FBC) provided by Lockheed Martin. Advancements in modeling techniques applicable to parachute fly-out, vehicle rate of descent, torque, and load train also occurred during the EDU testing series. An upgrade from a composite to an independent parachute simulation allowed parachute modeling at a higher level of fidelity than during previous generations. The complexity of separating the test vehicles from their pallet delivery systems necessitated the use of the Automatic Dynamic Analysis of Mechanical Systems (ADAMS) simulator for modeling mated-vehicle aircraft extraction and separation. This paper gives an overview of each EDU test and summarizes the development of CPAS analysis tools and techniques during EDU testing.
Investigation of domestic level EV chargers in the Distribution Network: An Assessment and mitigation solution
This research focuses on the electrification of the transport sector. Such electrification could potentially pose challenges to the distribution system operator (DSO) in terms of reliability, power quality and cost-effective implementation. This thesis contributes both to Electric Vehicle (EV) load-demand profiling and to the advanced use of reactive power compensation (D-STATCOM) to facilitate flexible and secure network operation. The main aim of this research is to investigate the planning and operation of low-voltage distribution networks (LVDN) with increasing EV proliferation and the effects of higher-demand charging systems. This work is based on two independent strands of research.
Firstly, the thesis illustrates how the flexibility and composition of aggregated EV demand can be obtained with very limited information available. Once the composition of demand is available, future energy scenarios are analysed with respect to the impact of higher EV charging rates on single-phase connections at the LV distribution network level. A novel planning model based on energy-scenario simulations, suitable for the utilization of existing assets, is developed. The proposed framework can provide a probabilistic risk assessment of the power quality (PQ) variations that may arise from the proliferation of significant numbers of EV chargers; Monte Carlo (MC) based simulation is applied in this regard. This probabilistic approach is used to estimate the likely impact of EV chargers against extreme-case scenarios.
Secondly, in relation to increased EV penetration, dynamic reactive-power reserve management through network voltage control is considered. In this regard, a generic distribution static synchronous compensator (D-STATCOM) model is adapted to achieve network voltage stability. The main emphasis is on a generic D-STATCOM modelling technique, where each individual EV charger is represented through a probability density function that is inclusive of dynamic D-STATCOM support. It demonstrates how optimal techniques can exploit the demand flexibility at each bus to meet the requirements of the network operator while maintaining the relevant steady-state and/or dynamic performance indicators (voltage level) of the network. The results show that reactive-power compensation through D-STATCOM, in the context of EV integration, can provide continuous voltage support and thereby facilitate a 90% penetration of network customers with EV connections at the normal EV charging rate (3.68 kW). The results are further improved by using optimal power flow: if fast charging (up to 11 kW) is employed, up to 50% of network EV customers can be accommodated by utilising the optimal planning approach. During the case study, it is observed that the transformer loading increases significantly in the presence of the D-STATCOM, reaching approximately 300% in one of the contingencies at 11 kW EV charging, so transformer upgrading is still required. A three-phase connected D-STATCOM is normally used by the DSO to control power-quality issues in the network; however, a three-phase connected device cannot maintain the voltage level at each individual phase. Therefore, a single-phase connected D-STATCOM is used to control the voltage at each individual phase and is able to maintain it at 1 p.u.
This research will be of interest to DSOs, as it provides insight into the issues associated with higher penetration of EV chargers that arise in the realization of a sustainable transport electrification agenda.
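The Monte Carlo risk-assessment step can be sketched on a toy feeder model. All parameters here (base load, voltage-sensitivity coefficient, 0.94 p.u. limit) are illustrative assumptions, not the thesis's case-study network data:

```python
import random

def undervoltage_probability(n_customers=100, ev_share=0.9,
                             charger_kw=3.68, trials=2000):
    """Monte Carlo sketch of a probabilistic PQ assessment: each trial
    draws a random set of EV connections on a toy single-feeder model and
    checks whether the end-of-feeder voltage falls below 0.94 p.u."""
    base_kw = 1.2        # assumed per-customer background load, kW
    sens = 0.00013       # assumed p.u. voltage drop per kW of feeder load
    violations = 0
    for _ in range(trials):
        evs = sum(random.random() < ev_share for _ in range(n_customers))
        load_kw = n_customers * base_kw + evs * charger_kw
        v_end = 1.0 - sens * load_kw
        violations += v_end < 0.94
    return violations / trials

random.seed(42)
p_normal = undervoltage_probability(charger_kw=3.68)   # normal-rate charging
p_fast = undervoltage_probability(charger_kw=11.0)     # fast charging
```

Even this toy model reproduces the qualitative finding: at the normal charging rate undervoltage is an occasional risk, while at the fast-charging rate it becomes near-certain without reinforcement or compensation.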