
    Autonomous Optimization and Control for Central Plants with Energy Storage

    A model predictive control (MPC) framework is used to determine how to optimize the distribution of energy resources across a central energy facility including chillers, water heaters, and thermal energy storage; present the results to an operator; and execute the plan. The objective of this MPC framework is to minimize cost in real time in response to both real-time energy prices and demand charges, while allowing the operator to interact with the system appropriately. Operators must be given the correct interaction points in order to build trust before they are willing to turn the tool over and put it into fully autonomous mode. Once in autonomous mode, operators need to be able to intervene and inject their knowledge of the facilities they are serving into the system without disengaging optimization. For example, an operator may be working on a central energy facility that serves a college campus on the Friday night before a home football game. The optimization system is predicting the electrical load, but has no knowledge of the football game. Rather than trying to include every possible factor in the load prediction, a daunting task, the optimization system empowers the operator to make human-in-the-loop decisions in these rare scenarios without exiting autonomous (auto) mode. Without this empowerment, the operator either takes the system out of auto mode or allows the system to make poor decisions. Both scenarios result in an optimization system that has low "on time" and thus saves little money. A cascaded model predictive control framework lends itself well to allowing an operator to intervene. The system presented is a four-tiered approach to central plant optimization. The first tier is the prediction of the energy loads of the campus, i.e., the inputs to the optimization system. The predictions are made a week in advance, giving the operator ample time to react to predictions they do not agree with and override them if necessary. The predictions are inputs to the subplant-level optimization. The subplant-level optimization determines the optimal distribution of energy across major equipment classes (subplants and storage) for the prediction horizon and sends the current distribution to the equipment-level optimization. The operators are able to use the subplant-level optimization in an "advisory" role only and enter their own load distribution into the equipment-level optimization; they might do this if they feel they need to be conservative with the charge of the tank. Finally, the equipment-level optimization determines which devices to turn on and their setpoints in each subplant and sends those setpoints to the building automation system. These decisions can be overridden, but overrides should be extremely rare, as the system takes device availability, accumulated runtime, etc. as inputs. Building an optimization system that empowers the operator ensures that the campus owner realizes the full potential of their investment. Optimal plant control has shown savings of over 10%; for large plants this can translate to more than US $1 million per year.
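
    As a rough illustration of the cascaded structure described above, the sketch below shows one receding-horizon step with operator override hooks at the prediction and subplant tiers. Every class and method name (predictor, subplant_opt, equipment_opt, bas, operator) is a hypothetical placeholder, not the interface of the system in the abstract.

    ```python
    # Structural sketch of one receding-horizon step in the four-tier cascade.
    def run_mpc_step(predictor, subplant_opt, equipment_opt, bas, operator):
        # Tier 1: predict campus loads a week ahead (the inputs to the optimization).
        loads = predictor.forecast(horizon_hours=168)
        # The operator may override predictions they disagree with,
        # e.g. extra load from a home football game the model cannot know about.
        loads = operator.review_loads(loads)

        # Tier 2: allocate energy across subplants and storage over the horizon.
        allocation = subplant_opt.solve(loads)
        # In "advisory" mode the operator substitutes their own load distribution
        # without taking the system out of autonomous mode.
        allocation = operator.review_allocation(allocation)

        # Tier 3: choose which devices to run and their setpoints for the
        # current interval only; later intervals are re-optimized next step.
        setpoints = equipment_opt.solve(allocation.current_interval())

        # Tier 4: dispatch to the building automation system, then repeat.
        bas.write(setpoints)
    ```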

    Model Predictive Control for Central Plant Optimization with Thermal Energy Storage

    An optimization framework is used to determine how to distribute both hot and cold water loads across a central energy plant including heat pump chillers, conventional chillers, water heaters, and hot and cold water (thermal energy) storage. The objective of the optimization framework is to minimize cost in response to both real-time energy prices and demand charges. The linear programming framework used allows the optimal solution to be found in real time. Real-time optimization led to two separate applications: a planning tool and a real-time optimization tool. In the planning tool, the optimization is performed repeatedly with a sliding horizon, accepting a subset of the optimized distribution trajectory as each subsequent optimization problem is solved. This is the same strategy as model predictive control, except that in the design and planning tool the optimization works on a given set of loads, weather (e.g., TMY data), and real-time pricing data and does not need to predict these values. By varying the length of the horizon (2 to 10 days) and the size of the accepted subset (1 to 24 hours), the design and planning tool can find the design year's optimal distribution trajectory in less than 5 minutes for interactive plant design, or it can perform a high-fidelity run in a few hours. The fast solution times also allow the optimization framework to be used in real time to optimize the load distribution of an operational central plant using a desktop computer or a microcontroller in an onsite Enterprise controller. In the real-time optimization tool, model predictive control is used; estimation, prediction, and optimization are performed to find the optimal distribution of loads over the duration of the horizon in the presence of disturbances. The first distribution trajectory in the horizon is then applied to the central energy plant, and the estimation, prediction, and optimization are repeated 15 minutes later using new plant telemetry and forecasts. Prediction is performed using a deterministic-plus-stochastic model, where the deterministic portion is a simplified system representing the load of all buildings connected to the central energy plant and the stochastic model is used to respond to disturbances in the load. The deterministic system uses forecasted weather, time of day, and day type to determine a predicted load. The estimator uses past data to determine the current state of the stochastic model; the current state is then projected forward and added to the deterministic system's projection. In simulation, the system has demonstrated more than 10% savings over schedule-based control trajectories, even when the subplants are assumed to be running optimally in both cases (i.e., optimal chiller staging, etc.). For large plants this can mean savings of more than US $1 million per year.
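
    A minimal linear program in the spirit of this framework can be sketched with off-the-shelf tools. The toy problem below (a single chiller plus a cold-water tank, hourly steps, a time-of-use electricity price, and a demand charge) is an illustrative assumption; the plant data, efficiencies, and structure of the paper's actual formulation are not reproduced here.

    ```python
    # Toy LP: schedule a chiller and thermal storage against energy and demand charges.
    import numpy as np
    from scipy.optimize import linprog

    T = 24                                                        # hourly steps in the horizon
    load = 3.0 + 2.0 * np.sin(np.linspace(0, 2 * np.pi, T))      # MW thermal cooling load (assumed)
    price = np.where((np.arange(T) >= 12) & (np.arange(T) < 20), 120.0, 40.0)  # $/MWh electric
    cop, q_max, s_max, s0, demand_rate = 5.0, 6.0, 12.0, 6.0, 15000.0           # assumed plant data

    # Decision vector x = [q_0..q_{T-1}, c_0..c_{T-1}, d_0..d_{T-1}, p_peak]
    # q: chiller thermal output, c: tank charge rate, d: tank discharge rate.
    n = 3 * T + 1
    cost = np.zeros(n)
    cost[:T] = price / cop                 # energy cost of chiller electricity
    cost[-1] = demand_rate                 # demand charge on the peak electric draw

    # Equality: chiller output + discharge - charge meets the load every hour.
    A_eq = np.zeros((T, n))
    A_eq[:, :T] = np.eye(T)
    A_eq[:, 2 * T:3 * T] = np.eye(T)
    A_eq[:, T:2 * T] = -np.eye(T)
    b_eq = load

    # Inequalities: tank state of charge stays within [0, s_max]; p_peak >= q_t / cop.
    lower_tri = np.tril(np.ones((T, T)))
    A_ub = np.zeros((3 * T, n))
    A_ub[:T, T:2 * T] = lower_tri          #  s0 + cum(c - d) <= s_max
    A_ub[:T, 2 * T:3 * T] = -lower_tri
    A_ub[T:2 * T, T:2 * T] = -lower_tri    # -(s0 + cum(c - d)) <= 0
    A_ub[T:2 * T, 2 * T:3 * T] = lower_tri
    A_ub[2 * T:, :T] = np.eye(T) / cop     #  q_t / cop - p_peak <= 0
    A_ub[2 * T:, -1] = -1.0
    b_ub = np.concatenate([np.full(T, s_max - s0), np.full(T, s0), np.zeros(T)])

    bounds = [(0, q_max)] * T + [(0, None)] * (2 * T) + [(0, None)]
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    print("optimal cost ($):", round(res.fun, 2))
    ```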

    SentiCircles for contextual and conceptual semantic sentiment analysis of Twitter

    Lexicon-based approaches to Twitter sentiment analysis are gaining popularity due to their simplicity, domain independence, and relatively good performance. These approaches rely on sentiment lexicons, in which a collection of words is marked with fixed sentiment polarities. However, a word's sentiment orientation (positive, neutral, negative) and/or sentiment strength can change depending on context and targeted entities. In this paper we present SentiCircle, a novel lexicon-based approach that takes into account the contextual and conceptual semantics of words when calculating their sentiment orientation and strength in Twitter. We evaluate our approach on three Twitter datasets using three different sentiment lexicons. Results show that our approach significantly outperforms two lexicon baselines. Results are competitive but inconclusive when compared to the state-of-the-art SentiStrength, and vary from one dataset to another. SentiCircle outperforms SentiStrength in accuracy on average, but falls marginally behind in F-measure.
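
    The sketch below illustrates the general idea of a contextual, lexicon-based polarity decision: each context term of a target word becomes a point whose angle comes from a prior lexicon score and whose radius from a co-occurrence weight, and the centre of those points decides the orientation. The weighting, the neutral band, and the use of a simple mean in place of the paper's SentiMedian are assumptions for illustration, not the reference SentiCircle implementation.

    ```python
    # Illustrative contextual lexicon scoring in the spirit of SentiCircle.
    import math
    from collections import Counter

    def senti_circle_orientation(target, corpus, lexicon, neutral_band=0.05):
        """corpus: list of token lists; lexicon: word -> prior polarity in [-1, 1]."""
        n_docs = len(corpus)
        doc_freq = Counter(w for doc in corpus for w in set(doc))
        context = Counter()
        for doc in corpus:
            if target in doc:
                context.update(w for w in doc if w != target)

        points = []
        for word, cooc in context.items():
            prior = lexicon.get(word, 0.0)
            radius = cooc * math.log(1 + n_docs / doc_freq[word])  # TF-IDF-like weight (assumed)
            theta = prior * math.pi                                # prior sentiment sets the angle
            points.append((radius * math.cos(theta), radius * math.sin(theta)))

        if not points:
            return "neutral"
        # Only the y-coordinate (sentiment axis) decides orientation in this sketch.
        y = sum(p[1] for p in points) / len(points)
        if abs(y) < neutral_band:
            return "neutral"
        return "positive" if y > 0 else "negative"

    # Toy usage with a tiny lexicon and two "tweets".
    lex = {"great": 0.8, "awful": -0.9, "slow": -0.4}
    tweets = [["service", "was", "great"], ["service", "was", "awful", "and", "slow"]]
    print(senti_circle_orientation("service", tweets, lex))
    ```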

    Efficient Recognition of Partially Visible Objects Using a Logarithmic Complexity Matching Technique

    An important task in computer vision is the recognition of partially visible two-dimensional objects in a gray-scale image. Recent works addressing this problem have attempted to match spatially local features from the image to features generated by models of the objects. However, many algorithms are considerably less efficient than they might be, typically being O(IN) or worse, where I is the number of features in the image and N is the number of features in the model set. This is invariably due to the feature-matching portion of the algorithm. In this paper we discuss an algorithm that significantly improves the efficiency of feature matching. In addition, we show experimentally that our recognition algorithm is accurate and robust. Our algorithm uses the local shape of contour segments near critical points, represented in slope angle-arclength space (θ-s space), as fundamental feature vectors. These feature vectors are further processed by projecting them onto a subspace in θ-s space that is obtained by applying the Karhunen-Loève expansion to all such features in the set of models, yielding the final feature vectors. This allows the data needed to store the features to be reduced, while retaining nearly all information important for recognition. The heart of the algorithm is a technique for performing matching between the observed image features and the precomputed model features, which reduces the runtime complexity from O(IN) to O(I log I + I log N), where I and N are as above. The matching is performed using a tree data structure, called a kD tree, which enables multidimensional searches to be performed in O(log N) time. Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/66975/2/10.1177_027836498900800608.pd
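
    A compact illustration of the matching stage, projecting feature vectors with a Karhunen-Loève (principal component) expansion and querying a k-d tree for nearest neighbours, is sketched below. The dimensions, the number of retained components, and the random stand-in data are assumptions; the paper's actual θ-s contour descriptors are not reproduced here.

    ```python
    # KL (PCA) compression of model features followed by O(log N) k-d tree matching.
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(0)
    model_feats = rng.normal(size=(500, 32))   # N model features, 32-D stand-ins for theta-s vectors
    image_feats = rng.normal(size=(200, 32))   # I features extracted from the image

    # Karhunen-Loeve expansion: project onto the top principal components of the model set.
    mean = model_feats.mean(axis=0)
    _, _, vt = np.linalg.svd(model_feats - mean, full_matrices=False)
    basis = vt[:8]                             # keep 8 components (assumed)
    model_proj = (model_feats - mean) @ basis.T
    image_proj = (image_feats - mean) @ basis.T

    # k-d tree over the compressed model features; each query is ~O(log N).
    tree = cKDTree(model_proj)
    dists, idx = tree.query(image_proj, k=1)
    print("nearest model feature for the first 5 image features:", idx[:5])
    ```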

    Control of Insects in The Home Garden.

    8 p

    SHCal13 Southern Hemisphere calibration, 0–50,000 years cal BP

    The Southern Hemisphere SHCal04 radiocarbon calibration curve has been updated with the addition of new data sets extending measurements back to 2145 cal BP and including the ANSTO Younger Dryas Huon pine data set. Outside the range of measured data, the curve is based upon the Northern Hemisphere data sets as presented in IntCal13, with an interhemispheric offset averaging 43 ± 23 yr modeled by an autoregressive process to represent the short-term correlations in the offset.
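
    As a toy illustration of modelling a roughly 43 ± 23 yr interhemispheric offset with a first-order autoregressive process, as described above, the snippet below simulates such a series. The correlation coefficient phi is an assumed value, not the one used for SHCal13.

    ```python
    # AR(1) simulation of a short-term-correlated offset around a fixed mean.
    import numpy as np

    rng = np.random.default_rng(1)
    mean_offset, sd, phi, n = 43.0, 23.0, 0.8, 1000
    innovation_sd = sd * np.sqrt(1 - phi ** 2)   # keeps the stationary sd near 23 yr

    offset = np.empty(n)
    offset[0] = mean_offset
    for t in range(1, n):
        offset[t] = (mean_offset + phi * (offset[t - 1] - mean_offset)
                     + rng.normal(0.0, innovation_sd))

    print(f"simulated offset: mean = {offset.mean():.1f} yr, sd = {offset.std():.1f} yr")
    ```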

    Context-Guided Self-supervised Relation Embeddings

    A semantic relation between two given words a and b can be represented using two complementary sources of information: (a) the semantic representations of a and b (expressed as word embeddings) and (b) the contextual information obtained from the co-occurrence contexts of the two words (expressed in the form of lexico-syntactic patterns). Pattern-based approaches suffer from sparsity, while methods that rely only on the word embeddings of the related pairs lack relational information. Prior work on relation embeddings has predominantly focused on one of these two resources exclusively, with a few notable exceptions. In this paper, we propose a self-supervised Context-Guided Relation Embedding method (CGRE) that uses both sources of information. We evaluate the learnt method by creating relation representations for word pairs that do not co-occur. Experimental results on the SemEval-2012 Task 2 dataset show that the proposed operator outperforms other methods in representing relations for unobserved word pairs.
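
    A hedged sketch of combining the two signals named above, the word-embedding offset of a pair and an embedding pooled from the pair's co-occurrence patterns, is given below. The mean-pooling, the concatenation, and the zero-vector fallback for pairs with no observed contexts are illustrative choices, not the CGRE architecture.

    ```python
    # Combine pair-based and pattern-based signals into one relation vector.
    import numpy as np

    def relation_embedding(a, b, word_vecs, pattern_vecs_for_pair):
        """word_vecs: dict word -> vector; pattern_vecs_for_pair: list of context vectors."""
        offset = word_vecs[b] - word_vecs[a]          # pair-based signal
        if pattern_vecs_for_pair:                     # context-based signal, when available
            context = np.mean(pattern_vecs_for_pair, axis=0)
        else:                                         # unobserved pair: no co-occurrence contexts
            context = np.zeros_like(offset)
        return np.concatenate([offset, context])

    # Toy usage with random 5-D vectors standing in for trained embeddings.
    rng = np.random.default_rng(0)
    vecs = {w: rng.normal(size=5) for w in ("lion", "cat", "oak", "tree")}
    patterns = [rng.normal(size=5)]                   # e.g. a vector for "X is a kind of Y"
    print(relation_embedding("lion", "cat", vecs, patterns).shape)   # (10,)
    ```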

    Large Scale Optimization Problems for Central Energy Facilities with Distributed Energy Storage

    On large campuses, energy facilities are used to serve the heating and cooling needs of all the buildings, while utilizing cost-saving strategies to manage operational cost. Strategies range from shifting loads to participating in utility programs that offer payouts. Among the available strategies are central plant optimization, electrical energy storage, participation in utility demand response programs, and manipulation of the temperature setpoints in the campus buildings. However, simultaneously optimizing all of the central plant assets, temperature setpoints, and participation in utility programs can be a daunting task even for a powerful computer if the desire is real-time control. These strategies may be implemented separately across several optimization systems without a coordinating algorithm. Due to system interactions, decentralized control may be far from optimal and, worse yet, may try to use the same asset for different goals. In this work, a hierarchical optimization system has been created to coordinate the optimization of the central plant, the battery, participation in demand response programs, and temperature setpoints. In the hierarchical controller, the high-level coordinator determines the load allocations across the campus or facility. The coordinator also determines the participation in utility incentive programs. It is shown that these incentive programs can be grouped into reservation programs and price adjustment programs. The second tier of control is split into three portions: control of the central energy facility, control of the battery system, and control of the temperature setpoints. The second tier is responsible for converting load allocations into central plant temperature setpoints and flows, battery charge and discharge setpoints, and building temperature setpoints, which are delivered to the Building Automation System for execution. It is shown that the whole system can be coordinated by representing the second-tier controllers with a smaller set of data that can be used by the coordinating controller. The central plant optimizer must supply an operational domain that constrains how each group of equipment can operate. The high-level controller uses this information to send down loadings for each resource that a group of equipment in the plant produces or consumes. For battery storage, the coordinating controller uses a simple integrator model of the battery and is responsible for providing a demand target and the amount of participation in any incentive programs. Finally, to perform temperature setpoint optimization, a dynamic model of each zone is provided to the coordinating controller. This information is used to determine load allocations for groups of zones. The hierarchical control strategy is successful at optimizing the entire energy facility fast enough to allow the algorithms to control the energy facility, building setpoints, and program bids in real time.
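
    The "simple integrator" battery model mentioned above can be illustrated with a short sketch: the state of charge integrates charged and discharged energy, and a greedy rule discharges whenever the metered load would exceed the coordinator's demand target. The sizing numbers and the greedy dispatch rule are assumptions for demonstration, not the coordination algorithm from this work.

    ```python
    # Integrator battery model with greedy peak shaving against a demand target.
    import numpy as np

    def dispatch_battery(load_kw, demand_target_kw, capacity_kwh, power_kw, dt_h=0.25):
        soc = capacity_kwh / 2.0                    # assume the battery starts half full
        net = np.empty_like(load_kw)
        for i, p_load in enumerate(load_kw):
            p_batt = np.clip(p_load - demand_target_kw, -power_kw, power_kw)    # track the target
            p_batt = np.clip(p_batt, -(capacity_kwh - soc) / dt_h, soc / dt_h)  # respect energy limits
            soc -= p_batt * dt_h                    # integrator: SOC accumulates energy in/out
            net[i] = p_load - p_batt                # load seen by the utility meter
        return net

    load = 800.0 + 300.0 * np.sin(np.linspace(0, 2 * np.pi, 96))   # one day of 15-min campus loads, kW
    shaved = dispatch_battery(load, demand_target_kw=950.0, capacity_kwh=2000.0, power_kw=250.0)
    print(f"peak without battery: {load.max():.0f} kW, with battery: {shaved.max():.0f} kW")
    ```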

    Technical note: Optimizing the utility of combined GPR, OSL, and Lidar (GOaL) to extract paleoenvironmental records and decipher shoreline evolution

    Records of past sea levels, storms, and their impacts on coastlines are crucial for forecasting and managing future changes resulting from anthropogenic global warming. Coastal barriers that have prograded over the Holocene preserve within their accreting sands a history of storm erosion and changes in sea level. High-resolution geophysics, geochronology, and remote sensing techniques offer an optimal way to extract these records and decipher shoreline evolution. These methods include light detection and ranging (lidar) to image the lateral extent of relict shoreline dune morphology in 3-D, ground-penetrating radar (GPR) to record paleo-dune, beach, and nearshore stratigraphy, and optically stimulated luminescence (OSL) to date the deposition of sand grains along these shorelines. Utilization of these technological advances has recently become more prevalent in coastal research. The resolution and sensitivity of these methods offer unique insights on coastal environments and their relationship to past climate change. However, discrepancies in the analysis and presentation of the data can result in erroneous interpretations. When utilized correctly on prograded barriers, these methods (independently or in various combinations) have produced storm records, constructed sea-level curves, quantified sediment budgets, and deciphered coastal evolution. Therefore, combining the application of GPR, OSL, and Lidar (GOaL) on one prograded barrier has the potential to generate three detailed records of (1) storms, (2) sea level, and (3) sediment supply for that coastline. Obtaining all three for one barrier (a GOaL hat-trick) can provide valuable insights into how these factors influenced past and future barrier evolution. Here we argue that systematically achieving GOaL hat-tricks on some of the 300+ prograded barriers worldwide would allow us to disentangle local patterns of sediment supply from the regional effects of storms or global changes in sea level, providing for a direct comparison to climate proxy records. Fully realizing this aim requires standardization of methods to optimize results. The impetus for this initiative is to establish a framework for consistent data collection and analysis that maximizes the potential of GOaL to contribute to climate change research that can assist coastal communities in mitigating future impacts of global warming.