An optimal power flow based dispatch model for distributed generation embedded network
The installation of distributed generation (DG) introduces challenges to distribution system operation. The distribution network operator needs to schedule DG outputs subject to constraints such as DG characteristics, the reactive power control mode of generators, automatic voltage regulation, compensators and power quality standards. Based on an optimal power flow (OPF) model, this paper proposes a dispatch model for DG-embedded distribution systems, built on energy prices, weather forecasts and load forecasts. The objective is to minimize the electricity supply cost of the distribution company (DisCo). The proposed model is tested on the 33-bus system. The results show that the DisCo's cost and the losses of the distribution system can be reduced by enhancing the flexibility of system operation.
The 9th International Conference on Environment and Electrical Engineering (EEEIC 2010), Prague, Czech Republic, 16-19 May 2010. In Proceedings of EEEIC, 2010, p. 101-10
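The abstract does not reproduce the formulation; as a hedged illustration (the symbols below are our own notation, not the authors'), a cost-minimizing OPF dispatch of this kind typically takes the form

    \min_{P^{DG}, Q^{DG}} \; \sum_{t} \Big( \lambda_t P^{grid}_t + \sum_{g} c_g\big(P^{DG}_{g,t}\big) \Big)
    \quad \text{s.t.} \quad \text{power-flow equations}, \quad
    V^{min} \le V_{i,t} \le V^{max}, \quad
    0 \le P^{DG}_{g,t} \le \bar{P}^{DG}_{g,t},

where \lambda_t is the energy price, P^{grid}_t the power purchased from the grid, and c_g the operating cost of DG unit g; the forecast-driven inputs enter through \lambda_t and the availability bounds \bar{P}^{DG}_{g,t}.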
Long-Term Optimal Generation Expansion Planning considering CO2 Reduction Policies and Mechanisms
In a deregulated power system, a well-designed market mechanism can promote the optimal allocation of resources within the system. As one of the major sources of CO2 emissions, the power industry is strongly affected by carbon emission-related policies and regulations. This paper considers three widely promoted emission policies, using a long-term power system planning model to obtain the optimal generation expansion plan over a twenty-year horizon. By analysing and comparing system investment schemes under the different emission-related mechanisms, the processes by which these policies influence generation investment are revealed. The effectiveness of the policies is then compared to determine the optimal carbon emission mechanism.
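The abstract does not spell the planning model out; a minimal single-period sketch (technology names, costs and the carbon price are invented for illustration, and the paper's model is multi-period) shows how a carbon price enters a capacity-expansion linear program:

    # Minimal capacity-expansion sketch: choose new capacity (GW) per technology
    # to meet demand at least cost, with emissions priced by a carbon charge.
    # All numbers are illustrative, not taken from the paper.
    import numpy as np
    from scipy.optimize import linprog

    techs = ["coal", "gas", "wind"]
    invest_cost = np.array([1.0, 0.7, 1.2])    # annualized cost per GW
    fuel_cost = np.array([0.3, 0.5, 0.0])      # cost per GWh produced
    emission_rate = np.array([0.9, 0.4, 0.0])  # tCO2 per GWh
    carbon_price = 0.2                         # cost per tCO2
    demand = 10.0                              # GW of peak demand to cover

    # Decision variables: capacity x_i >= 0 per technology.
    # Objective: investment + fuel + carbon cost per unit of capacity built.
    c = invest_cost + fuel_cost + carbon_price * emission_rate

    # Constraint: total capacity meets demand  ->  -sum(x) <= -demand
    A_ub = -np.ones((1, len(techs)))
    b_ub = np.array([-demand])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * len(techs))
    for name, x in zip(techs, res.x):
        print(f"{name}: {x:.1f} GW")

Raising carbon_price shifts the optimum away from the carbon-intensive technologies, which is the mechanism the paper traces over its twenty-year horizon.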
Model selection in high dimensions: A quadratic-risk-based approach
In this article we propose a general class of risk measures which can be used for data-based evaluation of parametric models. The loss function is defined as a generalized quadratic distance between the true density and the proposed model. These distances are characterized by a simple quadratic form structure that is adaptable through the choice of a nonnegative definite kernel and a bandwidth parameter. Using asymptotic results for the quadratic distances we build a quick-to-compute approximation to the risk function. Its derivation is analogous to that of the Akaike Information Criterion (AIC), but unlike AIC, the quadratic risk is a global comparison tool. The method does not require resampling, a great advantage when point estimators are expensive to compute. The method is illustrated on the problem of selecting the number of components in a mixture model, where it is shown that, with an appropriate kernel, the method is computationally straightforward in arbitrarily high data dimensions. In this same context it is shown that the method has some clear advantages over AIC and BIC.
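The abstract leaves the distance implicit; one standard way to write such a kernel-based quadratic distance between a true density f and a model density g (our notation, not necessarily the authors') is

    d_K(f, g) = \int\!\!\int K_h(x, y)\, \big(f(x) - g(x)\big)\, \big(f(y) - g(y)\big)\, dx\, dy,

where K_h is a nonnegative definite kernel with bandwidth h; nonnegative definiteness guarantees d_K(f, g) \ge 0, and the AIC-style criterion approximates the expected value of this distance at the fitted model.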
Overcoming data scarcity of Twitter: using tweets as bootstrap with application to autism-related topic content analysis
Notwithstanding recent work which has demonstrated the potential of using Twitter messages for content-specific data mining and analysis, the depth of such analysis is inherently limited by the scarcity of data imposed by the 140-character tweet limit. In this paper we describe a novel approach for targeted knowledge exploration which uses tweet content analysis as a preliminary step. This step is used to bootstrap more sophisticated data collection from directly related but much richer content sources. In particular we demonstrate that valuable information can be collected by following URLs included in tweets. We automatically extract content from the corresponding web pages and, treating each web page as a document linked to the original tweet, show how a temporal topic model based on a hierarchical Dirichlet process can be used to track the evolution of a complex topic structure of a Twitter community. Using autism-related tweets we demonstrate that our method is capable of capturing a much more meaningful picture of information exchange than user-chosen hashtags.
IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, 201
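The bootstrap step is straightforward to sketch; the hedged Python fragment below (the tweet format and the HTML handling are generic assumptions, not the authors' pipeline) expands each tweet into the richer documents behind its URLs:

    # Hedged sketch: expand tweets into richer documents by following their URLs.
    # Tweet format and HTML extraction are illustrative assumptions.
    import re
    import requests
    from html.parser import HTMLParser

    URL_RE = re.compile(r"https?://\S+")

    class TextExtractor(HTMLParser):
        """Collect the visible text of an HTML page (crude but dependency-free)."""
        def __init__(self):
            super().__init__()
            self.chunks = []

        def handle_data(self, data):
            self.chunks.append(data)

    def documents_from_tweets(tweets):
        """Yield (tweet, page_text) pairs for every URL found in each tweet."""
        for tweet in tweets:
            for url in URL_RE.findall(tweet):
                try:
                    resp = requests.get(url, timeout=10)
                    resp.raise_for_status()
                except requests.RequestException:
                    continue  # skip dead or slow links
                parser = TextExtractor()
                parser.feed(resp.text)
                yield tweet, " ".join(parser.chunks)

The resulting (tweet, document) pairs are what a topic model would consume; the paper's temporal hierarchical Dirichlet process itself is beyond this sketch.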
Bayesian Networks for Max-linear Models
We study Bayesian networks based on max-linear structural equations as introduced in Gissibl and Klüppelberg [16] and provide a summary of their independence properties. In particular we emphasize that distributions for such networks are generally not faithful to the independence model determined by their associated directed acyclic graph. In addition, we consider some of the basic issues of estimation and discuss generalized maximum likelihood estimation of the coefficients, using the concept of a generalized likelihood ratio for non-dominated families as introduced by Kiefer and Wolfowitz [21]. Finally we argue that the structure of a minimal network can asymptotically be identified completely from observational data.
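For orientation, a max-linear structural equation model in the sense of Gissibl and Klüppelberg [16] expresses each node variable as a weighted maximum over its parents:

    X_i = \bigvee_{j \in pa(i)} c_{ij} X_j \;\vee\; c_{ii} Z_i, \qquad c_{ij} > 0,

where \vee denotes the maximum, pa(i) the parents of node i in the DAG, and the Z_i independent positive innovations. The maximum, unlike a sum, lets a single dominant path determine X_i, which underlies the failure of faithfulness noted above.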
How to Educate Entrepreneurs?
Entrepreneurship education has two purposes: to improve students' entrepreneurial skills and to provide impetus to those suited to entrepreneurship while discouraging the rest. While entrepreneurship education helps students to make a vocational decision, its effects may conflict for those not suited to entrepreneurship. This study shows that the vocational and skill-formation effects of entrepreneurship education can be identified empirically by drawing on the Theory of Planned Behavior. The theory is embedded in a structural equation model which we estimate and test using a robust 2SLS estimator. We find that the attitudinal factors posited by the Theory of Planned Behavior are positively correlated with students' entrepreneurial intentions. While conflicting effects of vocational and skill-directed course content are observed in some individuals, overall these two types of content are complements. This finding contradicts previous results in the literature. We reconcile the conflicting findings and discuss implications for the design of entrepreneurship courses.
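For readers unfamiliar with the estimator, a generic two-stage least squares sketch (variable names and data are made up; this is not the authors' specification) first regresses the endogenous regressor on the instruments, then regresses the outcome on the fitted values:

    # Generic 2SLS sketch with numpy; variables are illustrative only.
    import numpy as np

    def two_stage_least_squares(y, X_endog, Z_instr):
        """2SLS: regress y on X_endog using instruments Z_instr.
        Both design matrices should already include an intercept column."""
        # Stage 1: project the endogenous regressors onto the instruments.
        gamma, *_ = np.linalg.lstsq(Z_instr, X_endog, rcond=None)
        X_hat = Z_instr @ gamma
        # Stage 2: regress the outcome on the fitted regressors.
        beta, *_ = np.linalg.lstsq(X_hat, y, rcond=None)
        return beta

    rng = np.random.default_rng(0)
    n = 500
    z = rng.normal(size=n)                 # instrument
    u = rng.normal(size=n)                 # unobserved confounder
    x = 0.8 * z + u + rng.normal(size=n)   # endogenous regressor
    y = 1.5 * x + u + rng.normal(size=n)   # outcome; OLS on x would be biased

    Z = np.column_stack([np.ones(n), z])
    X = np.column_stack([np.ones(n), x])
    print(two_stage_least_squares(y, X, Z))  # slope estimate should be near 1.5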
The Emerging Scholarly Brain
It is now a commonplace observation that human society is becoming a coherent super-organism, and that the information infrastructure forms its emerging brain. Perhaps, as the underlying technologies are likely to become billions of times more powerful than those we have today, we could say that we are now building the lizard brain for the future organism.
To appear in Future Professional Communication in Astronomy-II (FPCA-II), editors A. Heck and A. Accomazz
Quantitative Analysis of Bloggers' Collective Behavior Powered by Emotions
Large-scale data resulting from users' online interactions provide the ultimate source of information for studying emergent social phenomena on the Web. From individual actions of users to observable collective behaviors, different mechanisms involving emotions expressed in the posted text play a role. Here we combine approaches of statistical physics with machine-learning methods of text analysis to study the emergence of emotional behavior among Web users. Mapping the high-resolution data from digg.com onto a bipartite network of users and their comments on posted stories, we identify user communities centered around certain popular posts and determine the emotional content of the related comments using an emotion classifier developed for this type of text. Applied over different time periods, this framework reveals strong correlations between the excess of negative emotions and the evolution of communities. We observe avalanches of emotional comments exhibiting significant self-organized critical behavior and temporal correlations. To explore the robustness of these critical states, we design a network automaton model on realistic network connections with several control parameters, which can be inferred from the dataset. Dissemination of emotions by a small fraction of very active users appears to critically tune the collective states.
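A hedged sketch of the mapping step (the library and record format are our choices, and we simplify the paper's user-comment network to user-story edges for brevity) builds the bipartite network and the user projection in which communities are sought:

    # Hedged sketch: map (user, comment, story) records from a Digg-like dataset
    # onto a bipartite user-story network; one edge unit per comment.
    import networkx as nx
    from networkx.algorithms import bipartite

    records = [  # illustrative stand-in for the digg.com data
        ("alice", "c1", "story42"),
        ("bob",   "c2", "story42"),
        ("alice", "c3", "story7"),
    ]

    B = nx.Graph()
    for user, comment, story in records:
        B.add_node(user, bipartite=0)   # user partition
        B.add_node(story, bipartite=1)  # story partition
        if B.has_edge(user, story):
            B[user][story]["weight"] += 1
        else:
            B.add_edge(user, story, weight=1)

    # Project onto users: two users are linked if they commented on the
    # same story, which is where post-centered communities show up.
    users = {n for n, d in B.nodes(data=True) if d["bipartite"] == 0}
    P = bipartite.weighted_projected_graph(B, users)
    print(list(P.edges(data=True)))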
Challenges in modelling the random structure correctly in growth mixture models and the impact this has on model mixtures
Lifecourse trajectories of clinical or anthropological attributes are useful for identifying how our early-life experiences influence later-life morbidity and mortality. Researchers often use growth mixture models (GMMs) to estimate such phenomena. It is common to place constraints on the random part of the GMM to improve parsimony or to aid convergence, but this can lead to an autoregressive structure that distorts the nature of the mixtures and subsequent model interpretation. This is especially true if changes in the outcome within individuals are gradual compared with the magnitude of differences between individuals. This is not widely appreciated, nor is its impact well understood. Using repeat measures of body mass index (BMI) for 1528 US adolescents, we estimated GMMs that required variance-covariance constraints to attain convergence. We contrasted constrained models with and without an autocorrelation structure to assess the impact this had on the ideal number of latent classes, their size and composition. We also contrasted model options using simulations. When the GMM variance-covariance structure was constrained, a within-class autocorrelation structure emerged. When not modelled explicitly, this led to poorer model fit and to models that differed substantially in the ideal number of latent classes, as well as in class size and composition. Failure to carefully consider the random structure of data within a GMM framework may lead to erroneous model inferences, especially for outcomes with greater within-person than between-person homogeneity, such as BMI. It is crucial to reflect on the underlying data-generating processes when building such models.
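In equation form (a generic two-level sketch in our notation, not the authors' exact specification), a GMM for repeated BMI measures with a within-class AR(1) residual structure reads

    y_{ij} \mid \text{class } k \;=\; \beta_{0k} + \beta_{1k} t_{ij} + u_{ik} + e_{ij},
    \qquad e_{ij} = \rho\, e_{i,j-1} + \varepsilon_{ij},

where u_{ik} is a class-specific random intercept and \rho the within-person autocorrelation; fixing \rho = 0 when within-person change is gradual relative to between-person differences is exactly the mis-specification the abstract warns against.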