798 research outputs found
Vehicle-to-Grid and ancillary services: a profitability analysis under uncertainty
The rapid and massive diffusion of electric vehicles poses new challenges to
the electric system, which must be able to supply these new loads, but at the
same time opens up new opportunities thanks to the possible provision of
ancillary services. Indeed, in the so-called Vehicle-to-Grid (V2G) set-up, the
charging power can be modulated throughout the day so that a fleet of vehicles
can absorb an excess of power from the grid or provide extra power during a
shortage. To this end, many works in the literature focus on optimizing each
vehicle's daily charging profile to offer the requested ancillary services
while guaranteeing a charged battery for each vehicle at the end of the day.
However, the size of the economic benefits related to the provision of
ancillary services varies significantly with the modeling approach, the
assumptions made, and the scenarios considered. In this paper we propose a
profitability analysis with reference to a recently proposed framework for V2G
optimal operation in the presence of uncertainty. We provide necessary and
sufficient conditions for profitability in a simplified case and show via
simulation that they also hold in the general case.
Comment: Accepted by IFAC for publication under a Creative Commons Licence CC-BY-NC-N
Towards a comprehensive framework for V2G optimal operation in presence of uncertainty
As the global fleet of Electric Vehicles keeps increasing in number, the Vehicle-to-Grid (V2G) paradigm is gaining more and more attention. From the grid's point of view, an aggregate of electric vehicles can act as a flexible load and is thus able to provide balancing services. Computing the optimal day-ahead charging schedule for all vehicles in the fleet is a challenging problem, especially because it is affected by many sources of uncertainty. In this paper we consider the uncertainty deriving from arrival and departure times, arrival energy, and services market outcomes. We propose a general optimization framework for the day-ahead planning that encompasses different kinds of use cases. We adopt a robust paradigm to enforce the constraints and an expectation paradigm for the cost function. For all constraints and cost terms we propose an exact formulation or a very tight approximation, even in the case of piecewise linear battery dynamics. Numerical results corroborate the theoretical findings.
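The day-ahead scheduling idea described above can be reduced to a small linear program. The sketch below is a minimal single-vehicle illustration with assumed prices, charger limit and energy target; the robust constraints and expectation-based cost of the actual framework are deliberately collapsed into a plain deterministic LP.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative day-ahead charging LP for ONE vehicle (all data assumed):
# minimize energy cost while guaranteeing a full battery at departure.
T = 24                                                     # hourly slots
price = 0.2 + 0.1 * np.sin(2 * np.pi * np.arange(T) / T)   # EUR/kWh, assumed
p_max = 7.0                                                # kW charger limit (assumed)
e_req = 40.0                                               # kWh needed by departure (assumed)

# Decision variables: charging power p[t], with 0 <= p[t] <= p_max and
# sum(p) * 1h == e_req (energy balance over the day).
res = linprog(c=price,
              A_eq=np.ones((1, T)), b_eq=[e_req],
              bounds=[(0.0, p_max)] * T)
schedule = res.x             # cheapest feasible charging profile
print(round(res.fun, 2))     # total cost in EUR
```

The LP naturally concentrates charging in the cheapest hours; the fleet-level problem in the paper adds per-vehicle availability windows and market constraints on top of this basic structure.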
Model reduction of discrete time hybrid systems: A structural approach based on observability
This paper addresses model reduction for discrete time hybrid systems that are described by a Mixed Logical Dynamical (MLD) model. The goal is to simplify the MLD model while preserving its input/output behavior. This is useful when considering a reachability property that depends on the output and should be enforced by appropriately setting the input. The proposed procedure for model reduction rests on the analysis of the structure of the MLD system and on its observability properties. It is also applicable to PieceWise Affine (PWA) systems that can be equivalently represented as MLD systems. In the case of PWA systems, mode merging can be adopted to further simplify the model.
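The observability analysis underlying the reduction can be illustrated on a plain discrete-time linear system (the MLD logic is beyond this sketch): states that never reach the output can be dropped without changing input/output behavior. The matrices below are a made-up toy example.

```python
import numpy as np

def observability_matrix(A, C):
    """Stack C, CA, ..., CA^(n-1) for a discrete-time linear system."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

# Toy example (assumed data): the second state never influences the output,
# so it is unobservable and can be removed from the model.
A = np.array([[0.5, 0.0],
              [0.0, 0.9]])
C = np.array([[1.0, 0.0]])
O = observability_matrix(A, C)
rank = np.linalg.matrix_rank(O)
print(rank)  # 1 < 2: one state is unobservable and removable
```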
Minimum resource commitment for reachability specifications in a discrete time linear setting.
This paper addresses control input design for a discrete time linear system. The goal is to satisfy a reachability specification and, at the same time, minimize the number of inputs that need to be set (influential inputs). To this purpose, we introduce an appropriate input parametrization so that, depending on the parameter values, some of the inputs act as control variables, while the others are treated as disturbances and can take an arbitrary value in their range. We then enforce the specification while maximizing the number of disturbance inputs. Two approaches are developed: one based on an open loop scheme and one based on a compensation scheme. In the former, we end up solving a linear program. In the latter, the parametrization is extended so as to allow the influential inputs to depend on the non-influential ones, and the problem is reduced to a mixed integer linear program. A comparison between the two approaches is carried out, showing the superiority of the latter. Possible applications to system design and security of networked control systems are briefly discussed in the introduction.
Control Input Design: Detecting Non Influential Inputs While Satisfying a Reachability Specification
We address the problem of designing the control input for a discrete time dynamical system so as to make its state reach some target set in finite time. Among the feasible solutions to the reachability problem, we look for those where only few input variables need to be set to some specific value, whereas the others can take an arbitrary value within their admissible range without compromising the desired reachability condition. This input design problem is not standard and the optimality criterion cannot be easily expressed in terms of some performance index to be optimized. Here, we propose a solution that rests on an appropriate parametrization of the input variables as set-valued signals, and rephrase the input design problem as a robust optimization program. In turn, if the target set is a polytope, the optimization problem reduces to a linear program for linear systems, and to a mixed integer linear program for mixed logical dynamical systems. Some numerical examples show the efficacy of the approach.
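The "non-influential input" notion shared by the two abstracts above can be sketched by brute force on a toy one-step linear system (all numbers assumed, and the robust program replaced by direct enumeration): an input is non-influential if the target is reached for every value it may take in its range, with the remaining inputs held at a chosen control value (zero here). By linearity and interval ranges, checking the extremes suffices.

```python
import numpy as np

# Toy setting (assumed): x1 = A x0 + B u, target set is the box |x1| <= 1.
A = np.array([[0.5, 0.0], [0.0, 0.5]])
B = np.array([[1.0, 0.1], [0.0, 0.4]])
x0 = np.array([1.0, 1.0])
u_range = (-1.0, 1.0)            # admissible interval for each input

def in_target(x):
    return np.all(np.abs(x) <= 1.0 + 1e-9)

# An input is non-influential if the reachability condition holds for BOTH
# extremes of its range while the other inputs are fixed to zero.
free_inputs = []
for i in range(2):
    u = np.zeros(2)
    if all(in_target(A @ x0 + B @ np.where(np.arange(2) == i, v, u))
           for v in u_range):
        free_inputs.append(i)
print(free_inputs)               # [1]: only the second input is non-influential
```

In the papers this enumeration is avoided by encoding the same robustness condition as an LP or MILP, which scales to longer horizons and more inputs.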
Use of image analysis for the evaluation of rolling bottle test results
The adhesion between bitumen and aggregates is of paramount importance for asphalt mixtures, since a weak bond strength results in premature failure of the pavement. Methods for determining the affinity or the adhesion between components are applied to both loose and compacted samples. Among the former, the rolling bottle method, standardized in EN 12697-11 Part A, is very common. It is a simple, rapid and low-cost test that gives an indication of the affinity between aggregate and bitumen and of its influence on the susceptibility of the mixture to stripping. This paper proposes the use of 2D image analysis to evaluate the rolling bottle test results, overcoming the limits and shortcomings of the visual analysis prescribed by the reference standard. To demonstrate its applicability to a broad range of materials, the procedure was applied to both light and dark aggregates, mixed with a wax-modified binder. The mixing temperature was varied so that the influence of the binder viscosity on the adhesion could be assessed. A comparison between visual and semi-automatic estimation is presented, demonstrating that the latter yields far better results. The accuracies were determined through confusion matrices that allow the errors made during the classification process to be identified.
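The core of such an image-analysis evaluation is pixel classification plus a confusion matrix against a reference labelling. The sketch below uses a synthetic grayscale "image" and a naive intensity threshold, purely to illustrate the coverage estimate and accuracy computation; the paper's actual segmentation is more sophisticated.

```python
import numpy as np

# Synthetic 2D image of an aggregate sample (assumed data): bitumen-covered
# pixels are dark (low intensity), stripped aggregate is light.
rng = np.random.default_rng(1)
truth = rng.random((64, 64)) < 0.7            # True = actually covered (~70%)
image = np.where(truth, 40, 200) + rng.normal(0, 10, (64, 64))

predicted = image < 120                       # simple intensity threshold
coverage = predicted.mean() * 100             # estimated % bitumen coverage

# Confusion matrix of the pixel classification against the ground truth.
tp = np.sum(predicted & truth)
fp = np.sum(predicted & ~truth)
fn = np.sum(~predicted & truth)
tn = np.sum(~predicted & ~truth)
accuracy = (tp + tn) / truth.size
print(round(coverage, 1), round(accuracy, 3))
```

On real photographs the threshold would have to adapt to aggregate colour, which is exactly why dark aggregates are the challenging case the paper addresses.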
Energy Management of a Building Cooling System With Thermal Storage: An Approximate Dynamic Programming Solution
This paper concerns the design of an energy management system for a building cooling system that includes a chiller plant (with two or more chiller units), a thermal storage unit, and a cooling load. The latter is modeled in a probabilistic framework to account for the uncertainty in the building occupancy. The energy management task essentially consists in the minimization of the energy consumption of the cooling system, while preserving comfort in the building. This is achieved by a twofold strategy. The cooling power request is optimally distributed among the chillers and the thermal storage unit. At the same time, a slight modulation of the temperature set-point of the zone is allowed, trading energy saving for comfort. The problem can be decoupled into a static optimization problem (mainly addressing the chiller plant optimization) and a dynamic programming (DP) problem for a discrete time stochastic hybrid system (SHS) that takes care of the overall energy minimization. The DP problem is solved by abstracting the SHS to a (finite) controlled Markov chain, where costs associated with state transitions are computed by simulating the original model and determining the corresponding energy consumption. A numerical example shows the efficacy of the approach.
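Once the stochastic hybrid system is abstracted to a finite controlled Markov chain, the DP step is standard value iteration. The sketch below runs it on a tiny made-up chain (two states, two actions, assumed transition probabilities and stage costs loosely labelled after the storage/chiller trade-off); it is not the paper's model, only the solution machinery.

```python
import numpy as np

# Hypothetical 2-state, 2-action controlled Markov chain: state 0 = "storage
# charged", state 1 = "storage empty"; action 0 = draw on storage, action 1 =
# run an extra chiller. All probabilities and costs below are assumed.
P = np.array([[[0.2, 0.8],    # action 0: transition rows for states 0, 1
               [0.0, 1.0]],
              [[0.9, 0.1],    # action 1: transition rows for states 0, 1
               [0.7, 0.3]]])
cost = np.array([[1.0, 5.0],  # stage cost per (action, state)
                 [3.0, 3.0]])
gamma = 0.95                  # discount factor

V = np.zeros(2)
for _ in range(500):          # value iteration to (near) convergence
    Q = cost + gamma * P @ V  # Q[a, s] = stage cost + expected future cost
    V = Q.min(axis=0)
policy = Q.argmin(axis=0)     # best action in each state
print(policy)                 # [0 1]: use storage while charged, else chiller
```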
Is the haematopoietic effect of testosterone mediated by erythropoietin? The results of a clinical trial in older men
The stimulatory effects of testosterone on erythropoiesis are very well known, but the mechanisms underlying the erythropoietic action of testosterone are still poorly understood, although erythropoietin has long been considered a potential mediator. A total of 108 healthy men >65 years old with serum testosterone concentration <475 ng/dL were recruited by direct mailings to alumni of the University of Pennsylvania and Temple University, and randomized to receive a 60 cm² testosterone or placebo patch for 36 months. Ninety-six subjects completed the trial. We used information and stored serum specimens from this trial to test the hypothesis that increasing testosterone increases haemoglobin by stimulating erythropoietin production. We used data from 67 men, 43 in the testosterone group and 24 in the placebo group, who had banked specimens available for assays of testosterone, haemoglobin and erythropoietin at baseline and after 36 months. The original randomized clinical study was primarily designed to verify the effects of testosterone on bone mineral density. The primary outcome of this report was to investigate whether or not transdermal testosterone increases haemoglobin by increasing erythropoietin levels. The mean age +/- SD of the 67 subjects at baseline was 71.8 +/- 4.9 years. Testosterone replacement therapy for 36 months, as compared with placebo, induced a significant increase in haemoglobin (0.86 +/- 0.31 g/dL, p = 0.01), but no change in erythropoietin levels (-0.24 +/- 2.16 mIU/mL, p = 0.91). Including a time-varying measure of erythropoietin in the model did not account for the effect of testosterone on haemoglobin, which remained significant (treatment-by-time: beta = 0.93, SE = 0.33, p = 0.01). No serious adverse effect was observed. Transdermal testosterone treatment of older men for 36 months significantly increased haemoglobin, but not erythropoietin levels. The haematopoietic effect of testosterone does not appear to be mediated by stimulation of erythropoietin production.
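The mediation logic of this analysis can be illustrated numerically: if erythropoietin mediated the effect, adding it as a covariate would shrink the treatment coefficient towards zero. The sketch below uses entirely synthetic data (not the trial's) with a direct treatment effect and a mediator unrelated to treatment, mirroring the reported pattern.

```python
import numpy as np

# Synthetic data (assumed, NOT the trial's): 43 treated, 24 placebo, a direct
# treatment effect on haemoglobin, and a mediator independent of treatment.
rng = np.random.default_rng(2)
n = 67
treat = (np.arange(n) < 43).astype(float)
epo = rng.normal(10, 2, n)                       # erythropoietin, unrelated
hgb = 14 + 0.86 * treat + rng.normal(0, 0.5, n)  # direct effect of 0.86 g/dL

def ols_coef(X, y):
    """Least-squares coefficients with an intercept column prepended."""
    beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(y)), X]), y,
                               rcond=None)
    return beta

b_without = ols_coef(treat, hgb)[1]                       # treatment alone
b_with = ols_coef(np.column_stack([treat, epo]), hgb)[1]  # plus mediator

# Under mediation b_with would shrink towards zero; here it barely moves,
# which is the pattern the trial reports.
print(round(b_without, 2), round(b_with, 2))
```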
X-ray redshifts for obscured AGN: a case study in the J1030 deep field
We present a procedure to constrain the redshifts of obscured ( cm) Active Galactic Nuclei (AGN) based on low count-statistics X-ray spectra, which can be adopted when photometric and/or spectroscopic redshifts are unavailable or difficult to obtain. We selected a sample of 54 obscured AGN candidates on the basis of their X-ray hardness ratio in the Chandra deep field (479 ks, 335 arcmin) around the QSO SDSS J1030+0524. The sample has a median value of net counts in the 0.5-7 keV energy band. We estimate reliable X-ray redshift solutions taking advantage of the main features in obscured AGN spectra, like the Fe Kα 6.4 keV emission line, the 7.1 keV Fe absorption edge and the photoelectric absorption cut-off. The significance of such features is investigated through spectral simulations, and the derived X-ray redshift solutions are then compared with photometric redshifts. Both photometric and X-ray redshifts are derived for 33 sources. When multiple solutions are derived by either method, we find that combining the redshift solutions of the two techniques improves the rms by a factor of two. Using our redshift estimates, we derived absorbing column densities in the range cm and absorption-corrected, 2-10 keV rest-frame luminosities between and erg s, with median values of cm and erg s, respectively. Our results suggest that the adopted procedure can be applied to current and future X-ray surveys, for sources detected only in the X-rays or that have uncertain photometric or single-line spectroscopic redshifts.
Comment: 22 pages, 18 figures
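The basic geometry of an X-ray redshift is simple: a spectral feature with known rest-frame energy observed at a lower energy directly yields the redshift. The sketch below shows this for the Fe Kα line and the cross-check against the Fe edge; the observed line energy is a hypothetical example, not a value from the paper.

```python
# X-ray redshift from an identified Fe K-alpha line: the rest-frame energy
# is 6.4 keV, so a line observed at energy E_obs implies z = 6.4/E_obs - 1.
E_REST_FE_KA = 6.4    # keV, neutral Fe K-alpha emission line
E_REST_FE_EDGE = 7.1  # keV, Fe absorption edge (used as a consistency check)

def xray_redshift(e_obs_kev, e_rest_kev=E_REST_FE_KA):
    return e_rest_kev / e_obs_kev - 1.0

z = xray_redshift(2.0)               # line detected at 2.0 keV (hypothetical)
print(round(z, 2))                   # 2.2
edge_obs = E_REST_FE_EDGE / (1 + z)  # where the edge should then fall
print(round(edge_obs, 2))            # ~2.22 keV, just above the line
```

Finding the edge and the absorption cut-off at the energies implied by the line is what makes a single-feature redshift solution reliable despite the low count statistics.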
Testing the paradigm: First spectroscopic evidence of a quasar-galaxy Mpc-scale association at cosmic dawn
State-of-the-art models of massive black hole formation postulate that quasars at z > 6 reside in extreme peaks of the cosmic density structure in the early universe. Even so, direct observational evidence of these overdensities is elusive, especially on large scales (≳1 Mpc), as the spectroscopic follow-up of z > 6 galaxies is observationally expensive. Here we present Keck/DEIMOS optical and IRAM/NOEMA millimeter spectroscopy of a z ∼ 6 Lyman-break galaxy candidate, originally discovered via broadband selection, at a projected separation of 4.65 physical Mpc (13.94 arcmin) from the luminous z = 6.308 quasar J1030+0524. This well-studied field presents the strongest indication to date of a large-scale overdensity around a z > 6 quasar. The Keck observations suggest a z ∼ 6.3 dropout identification of the galaxy. The NOEMA 1.2 mm spectrum shows a 3.5σ line that, if interpreted as [C II], would place the galaxy at z = 6.318 (i.e., at a line-of-sight separation of 3.9 comoving Mpc, assuming that relative proper motion is negligible). The measured [C II] luminosity is 3.7 × 10⁸ L⊙, in line with expectations for a galaxy with a star formation rate of ∼15 M⊙ yr⁻¹, as inferred from the rest-frame UV photometry. Our combined observations place the galaxy at the same redshift as the quasar, thus strengthening the overdensity scenario for this z > 6 quasar. This pilot experiment demonstrates the power of millimeter-wavelength observations in the characterization of the environment of early quasars.