3 research outputs found
Enabling Privacy-preserving Auctions in Big Data
We study how to enable auctions in the big-data context to solve the many
data-driven decision problems expected in the near future. We consider the
characteristics of big data, including but not limited to velocity, volume,
variety, and veracity, and we argue that any future auction mechanism design
should take the following factors into account: 1) generality (variety);
2) efficiency and scalability (velocity and volume); 3) truthfulness and
verifiability (veracity). In this paper, we propose a privacy-preserving
construction for auction mechanism design in big data that prevents
adversaries from learning any information beyond what is implied by the valid
output of the auction. More specifically, we consider one of the most general
forms of auction (to address variety), greatly improve efficiency and
scalability by approximating the NP-hard problems and avoiding designs based
on garbled circuits (to address velocity and volume), and prevent stakeholders
from lying to each other for their own benefit (to address veracity). We
achieve this by introducing a novel privacy-preserving winner determination
algorithm and a novel payment mechanism. Additionally, we employ a blind
signature scheme as a building block so that bidders can verify the
authenticity of the payment reported by the auctioneer. Comparison with peer
work shows that we improve the asymptotic overhead of prior schemes from
exponential to linear growth and from linear to logarithmic growth, which
greatly improves scalability.
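The abstract does not spell out its winner determination and payment rules, but the truthfulness property it targets is classically achieved by critical-value payments. As an illustrative sketch only (not the paper's construction), a sealed-bid second-price (Vickrey) auction shows the idea: the winner pays the highest losing bid, so truthful bidding is a dominant strategy.

```python
# Illustrative sketch (not the paper's algorithm): a sealed-bid
# second-price (Vickrey) auction, the classic truthful payment rule
# that winner-determination-plus-payment schemes generalize.

def vickrey_auction(bids):
    """bids: dict mapping bidder -> valuation.
    Returns (winner, payment), where the winner pays the
    second-highest bid, making truthful bidding dominant."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    payment = ranked[1][1] if len(ranked) > 1 else 0
    return winner, payment

winner, price = vickrey_auction({"a": 10, "b": 7, "c": 4})
# "a" wins and pays 7, the second-highest bid
```

The privacy-preserving and verifiable versions described in the abstract would additionally hide the losing bids and let bidders check the reported payment, e.g. via the blind signature building block the authors mention.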
Big data optimization in electric power systems: a review
There are different definitions of big data; the most common refers to three or five
characteristics: volume, velocity, variety, value, and veracity (Laney, 2001). Volume can
range from terabytes and petabytes up to exabytes and zettabytes. Velocity describes how
fast data are retrieved and processed ("batch or streaming"). Variety covers structured,
semi-structured, and unstructured data (Laney, 2001; Zikopoulos and Eaton, 2011).
Veracity concerns the integrity and disorderliness of the data, while value refers to how
much value we derive from analyzing the data (Zicari et al., 2016).
Electrical power systems are networks of components arranged to supply, transfer, and use
electric power. In power systems, models are used to predict and characterize operations.
As data sizes grow, however, powerful optimization algorithms are needed to process
information, learn these models, and solve large-scale optimization problems. Any
optimization problem involves a real-valued function to be maximized or minimized by
systematically determining input values from an allowed set. The richness and quantity of
large data sets offer the potential to enhance statistical learning performance, but
require smart models that exploit latent low-dimensional structure for effective
data separation.
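The optimization notion described above, a real objective minimized by systematically choosing inputs from an allowed set, can be sketched minimally. Here projected gradient descent minimizes the illustrative function f(x) = (x - 3)^2 over the box constraint 0 <= x <= 10 (both the function and the bounds are hypothetical, chosen only to make the idea concrete):

```python
# A minimal sketch of constrained minimization: gradient descent on
# f(x) = (x - 3)^2, projecting each iterate back into [lo, hi].

def minimize(f_grad, x0, lo, hi, step=0.1, iters=200):
    x = x0
    for _ in range(iters):
        x = x - step * f_grad(x)   # move against the gradient
        x = max(lo, min(hi, x))    # project back into the allowed set
    return x

# gradient of (x - 3)^2 is 2 * (x - 3)
x_star = minimize(lambda x: 2 * (x - 3), x0=9.0, lo=0.0, hi=10.0)
# converges to approximately x = 3, the minimizer, which lies inside the box
```

Large-scale power system problems replace this toy objective with high-dimensional cost functions and network constraints, which is why the specialized techniques surveyed in the chapter are needed.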
This chapter reviews the most recent scientific articles on large-scale and big data
optimization in power systems. Optimization issues such as logistics in power systems,
and techniques including nonsmooth, nonconvex, and unconstrained large-scale
optimization, are presented. After a brief review of big data, a scientometric analysis
is carried out using the keywords "big data" and "power system." In addition, keyword
analysis, network visualization, journal mapping, and bibliographic coupling analysis
are performed to chart the body of work on big data in power system problems. The most
common and useful techniques in large-scale power system optimization are also reviewed.
At the end of the chapter, metaheuristic techniques in big data optimization are
surveyed, showing the substantial effort devoted to big data optimization in power
systems, and some perspectives on big data optimization are highlighted.
Optimal determination of power to supply medium-term demand through reverse energy auctions using linear programming (LP)
The supply of electricity demand is instantaneous and arises automatically when required by the consumer, so the generating fleet must be ready to meet this requirement. However, insufficient generation capacity leads to shortages, while excess unused generation capacity causes negative economic effects. It is therefore very important to establish mechanisms that allow the optimal incorporation of generating plants based on medium-term planning, since the decisions taken commit resources and carry possible economic risks for the user and the economy in general.
In this sense, this work develops an optimization model, solved with GAMS, that determines, through auctions of energy blocks, the combination of generators that will supply demand in the medium term, analyzing the Ecuadorian electrical system as a case study.
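The abstract's auction of energy blocks can be sketched in simplified form. With a single demand constraint and per-block capacity limits, the least-cost LP reduces to merit order: accept the cheapest offers first until demand is covered. This is an illustrative toy, not the GAMS model itself, and the block names, capacities, and prices below are hypothetical:

```python
# Illustrative merit-order dispatch: with one demand constraint and
# per-block capacity bounds, the cost-minimizing LP solution is to
# accept offered energy blocks in ascending price order.

def merit_order_dispatch(blocks, demand):
    """blocks: list of (name, capacity_MWh, price_per_MWh) offers.
    Returns {name: accepted_MWh} covering `demand` at minimum cost."""
    accepted = {}
    remaining = demand
    for name, capacity, _price in sorted(blocks, key=lambda b: b[2]):
        take = min(capacity, remaining)
        if take > 0:
            accepted[name] = take
            remaining -= take
    if remaining > 0:
        raise ValueError("offered capacity cannot cover demand")
    return accepted

# Hypothetical offers: (block, capacity in MWh, price in $/MWh)
offers = [("hydro", 60, 12.0), ("thermal", 50, 35.0), ("gas", 40, 28.0)]
plan = merit_order_dispatch(offers, demand=90)
# accepts all 60 MWh of hydro, then 30 MWh of gas (cheaper than thermal)
```

The full medium-term model in the paper adds the temporal and system constraints that make a general LP solver such as the one in GAMS necessary.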