
    Preliminary study of VTO thrust requirements for a V/STOL aircraft with lift plus lift/cruise propulsion

    A preliminary assessment was made of the VTO thrust requirements for a supersonic (Type B) aircraft with a Lift plus Lift/Cruise propulsion system. A baseline aircraft with a takeoff gross weight (TOGW) of 13 608 kg (30,000 lb) was assumed. Pitch, roll, and yaw control thrusts (i.e., the thrusts needed for aircraft attitude control in the flight hover mode) were estimated based on a specified set of maneuver acceleration requirements for V/STOL aircraft. Other effects (such as installation losses, suckdown, reingestion, etc.), which add to the thrust requirements for VTO, were also estimated. For the baseline aircraft, the excess thrust required for attitude control of the aircraft during VTO and flight hover was estimated to range from 36.9 to 50.9 percent of the TOGW. It was concluded that the total thrust requirements for the aircraft/propulsion system are large and significant. In order to achieve the performance expected of this aircraft/propulsion system, reductions must be made in the excess thrust requirements.
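    To make the quoted range concrete, here is a back-of-the-envelope Python sketch (not from the report) that converts the 36.9 to 50.9 percent excess into total VTO thrust, under the assumption that the excess is counted as a fraction of TOGW added on top of the 1 g hover thrust:

```python
G = 9.80665          # standard gravity, m/s^2
TOGW_KG = 13608.0    # baseline takeoff gross weight from the abstract, kg

for excess_fraction in (0.369, 0.509):
    hover_thrust_n = TOGW_KG * G                       # thrust for a 1 g hover, N
    total_thrust_n = hover_thrust_n * (1.0 + excess_fraction)
    print(f"excess {excess_fraction:.1%}: total VTO thrust ~ {total_thrust_n / 1000:.0f} kN")
```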

    Alien Registration- Turney, Effie J. (Fort Fairfield, Aroostook County)


    Comparison of two parallel/series flow turbofan propulsion concepts for supersonic V/STOL

    The thrust, specific fuel consumption, and relative merits of the tandem fan and the dual reverse flow front fan propulsion systems for a supersonic V/STOL aircraft are discussed. Consideration is given to fan pressure ratio, fan air burning, and variable core supercharging. The special propulsion system components required are described, namely the deflecting front inlet/nozzle, the aft subsonic inlet, the reverse pitch fan, the variable core supercharger, and the low pressure forward burner. The potential benefits of these unconventional systems are indicated.

    System Identification for Model Predictive Control of Building Region Temperature

    Model predictive control (MPC) is a promising technology for energy cost optimization of buildings because it provides a natural framework for optimally controlling such systems by computing control actions that minimize the energy cost while meeting constraints. In our previous work, we developed a cascaded MPC framework capable of minimizing the energy cost of building zone temperature control applications. The outer loop MPC computes power set-points to minimize the energy cost while ensuring that the zone temperature is maintained within its comfort constraints. The inner loop MPC receives the power set-points from the outer loop MPC and manipulates the zone temperature set-point to ensure that the zone power consumption tracks the power set-points computed by the outer layer MPC. Since both MPCs require a predictive model, a modeling framework and system identification (SI) methodology must be developed that are capable of accurately predicting the energy usage and zone temperature for a diverse range of building zones. In this work, two grey-box models for the outer and inner loop MPCs are developed and parameterized. The model parameters are fit to input-output data for a particular zone application so that the resulting model accurately predicts the behavior of the zone. State and disturbance estimation, which is required by the MPCs, is performed via a Kalman filter with a steady-state Kalman gain. The model parameters and Kalman gains of each grey-box model are updated in a sequential fashion. The significant disturbances affecting the zone temperature (e.g., outside temperature and occupancy) can typically be treated as slowly varying disturbances (with respect to the control time-scale). To prevent steady-state offset in the identified model caused by the slowly time-varying disturbance, a high-pass filter is applied to the input-output data to filter out the effect of the disturbance. The model parameters are subsequently computed from the filtered input-output data without the Kalman filter applied. The Kalman gain is also adjusted as the model parameters are updated to ensure stability of the resulting observer and for optimal estimation. After the model parameters are computed, the steady-state Kalman gain matrix is parameterized and the parameters are updated using the prediction error method with the unfiltered input-output data and the updated model parameters. The Kalman gain update methodology is advantageous because it avoids the need to estimate the noise statistics. Stability of the observer is verified after the parameters are updated. If the updated parameters result in an unstable observer, the update is rejected and the previous parameters are retained. Additionally, since a standard quadratic cost function that penalizes the squared prediction error is sensitive to data outliers in the prediction error method, a piecewise-defined cost function is employed to reduce sensitivity to outliers and to improve the robustness of the SI methodology. The cost function penalizes the squared prediction error when the error is within certain thresholds. When the error is outside the thresholds, the cost function evaluates to a constant. The SI algorithm is applied to a building zone to assess the approach.
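    As an illustration of the outlier-robust cost described above, the following Python sketch implements the piecewise-defined objective: squared prediction error inside a threshold and a constant outside it. The function name and threshold value are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def robust_pem_cost(prediction_errors, threshold):
    """Sum of a piecewise cost: e**2 when |e| <= threshold, threshold**2 otherwise."""
    e = np.asarray(prediction_errors, dtype=float)
    per_sample = np.where(np.abs(e) <= threshold, e**2, threshold**2)
    return per_sample.sum()

# Usage: compare against a plain quadratic cost on data with one large outlier.
errors = np.array([0.1, -0.2, 0.05, 5.0])        # last entry is an outlier
print(robust_pem_cost(errors, threshold=1.0))    # outlier contributes only 1.0
print(np.sum(errors**2))                         # quadratic cost is dominated by the outlier
```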

    Economic Model Predictive Control for Variable Refrigerant Systems

    Variable refrigerant flow (VRF) systems are in a unique position to be combined with economic model predictive control (MPC) in order to reap significant benefits. In buildings with a variable utility price, it is feasible to use the building mass to shift a portion of the building heating, ventilation, and air conditioning (HVAC) load from the high-priced (peak) period to the low-priced (off-peak) period. Further savings can be realized through a reduction of the monthly demand charge. By employing the building mass as an element to store thermal energy, one can see a significant reduction in utility costs. The MPC algorithm accomplishes this by using the building mass to store and release heat at the appropriate times to reduce HVAC usage during the peak utility price periods. This is accomplished through MPC of the indoor air temperature within the acceptable temperature set-point limits. With suitable linear models, a linear programming (LP) algorithm can be employed to perform the economic optimization over the future time horizon. Estimates for commercial buildings suggest annual HVAC cost savings of --% to --%.
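    The kind of linear program described above can be sketched in a few lines with SciPy. The thermal-mass model, parameter values, and price profile below are illustrative assumptions rather than the paper's, but they show how comfort limits, a simple zone model, and a time-varying price combine into an LP that shifts cooling out of the peak period:

```python
import numpy as np
from scipy.optimize import linprog

N = 24                                                                     # hourly steps over one day
price = np.where((np.arange(N) >= 12) & (np.arange(N) < 18), 0.30, 0.10)   # $/kWh, peak noon-6pm
T_out = 28.0 + 4.0 * np.sin(np.linspace(0, 2 * np.pi, N))                  # outdoor temperature, degC
T0, T_min, T_max, p_max = 23.0, 21.0, 25.0, 10.0                           # degC limits, kW cooling limit
a, b = 0.9, 0.3        # simple discrete thermal-mass model: T_next = a*T - b*p + (1 - a)*T_out

# Decision vector x = [p_0..p_{N-1}, T_1..T_N]; minimize total energy cost.
c = np.concatenate([price, np.zeros(N)])
A_eq = np.zeros((N, 2 * N))
b_eq = np.zeros(N)
for t in range(N):
    A_eq[t, t] = b                                   # + b * p_t
    A_eq[t, N + t] = 1.0                             # + T_{t+1}
    if t > 0:
        A_eq[t, N + t - 1] = -a                      # - a * T_t
    b_eq[t] = (1 - a) * T_out[t] + (a * T0 if t == 0 else 0.0)

bounds = [(0.0, p_max)] * N + [(T_min, T_max)] * N   # comfort band enforced on every T_t
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print("optimal daily cooling cost: $%.2f" % res.fun)
```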

    Greigite formation in aqueous solutions: critical constraints into the role of iron and sulphur ratios, pH and Eh, and temperature using reaction pathway modelling

    Greigite forms as an intermediate phase along the pyrite reaction pathway. Despite being considered metastable, it is observed in numerous shallow natural systems, suggesting it could be a unique proxy for diagenetic and environmental conditions. We use thermodynamic reaction pathway modelling in the PHREEQC software to understand the role of iron and sulphur ratios, pH and Eh, and temperature on the formation and retention of greigite in aqueous solutions. With newly available experimental thermodynamic properties, this work identifies the chemical boundary conditions for greigite formation in aqueous solutions. Greigite precipitation is likely favourable in anoxic and alkaline aqueous solutions at or below 25 °C. Our numerical experiments show that greigite is closer to saturation in iron-rich solutions with minor sulphur input. Greigite precipitation in strongly alkaline solutions suggests that polysulfides and ferric iron-bearing minerals may be favourable reactants for its formation. Greigite precipitates at iron and sulphur concentrations that are over two orders of magnitude greater than those of iron sulphide-hosted natural porewaters. This disparity between model and field observations suggests that microenvironments within bulk solutions may be important for greigite formation and retention. These constraints suggest greigite is more likely to form alongside pyrite in shallow, non-steady-state aqueous solutions.
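    For illustration, the quantity a reaction-pathway model evaluates when deciding whether greigite can precipitate is its saturation index. The Python sketch below computes SI = log10(IAP/K) for one way of writing the dissolution of Fe3S4; the ion activities and the log K value are placeholders, not the experimentally derived properties used in the paper:

```python
import numpy as np

# Assumed dissolution reaction (mass- and charge-balanced):
#   Fe3S4 + 4 H+  =  Fe2+ + 2 Fe3+ + 4 HS-

def greigite_si(a_fe2, a_fe3, a_hs, pH, log_k):
    """Saturation index from ion activities; SI > 0 means supersaturated."""
    a_h = 10.0 ** (-pH)
    log_iap = (np.log10(a_fe2) + 2 * np.log10(a_fe3)
               + 4 * np.log10(a_hs) - 4 * np.log10(a_h))
    return log_iap - log_k

# Hypothetical anoxic, alkaline solution; log_k is a placeholder value only.
print(greigite_si(a_fe2=1e-4, a_fe3=1e-12, a_hs=1e-5, pH=8.5, log_k=-45.0))
```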

    Autonomous Optimization and Control for Central Plants with Energy Storage

    A model predictive control (MPC) framework is used to determine how to optimize the distribution of energy resources across a central energy facility including chillers, water heaters, and thermal energy storage; present the results to an operator; and execute the plan. The objective of this MPC framework is to minimize cost in real time in response to both real-time energy prices and demand charges, as well as to allow the operator to appropriately interact with the system. Operators must be given the correct interaction points in order to build trust before they are willing to turn the tool over and put it into fully autonomous mode. Once in autonomous mode, operators need to be able to intervene and incorporate their knowledge of the facilities they are serving into the system without disengaging optimization. For example, an operator may be working on a central energy facility that serves a college campus on the Friday night before a home football game. The optimization system is predicting the electrical load, but does not have knowledge of the football game. Rather than try to include every possible factor into the prediction of the loads, a daunting task, the optimization system empowers the operator to make human-in-the-loop decisions in these rare scenarios without exiting autonomous (auto) mode. Without this empowerment, the operator either takes the system out of auto mode or allows the system to make poor decisions. Both scenarios will result in an optimization system that has low “on time” and thus saves little money. A cascaded model predictive control framework lends itself well to allowing an operator to intervene. The system presented is a four-tiered approach to central plant optimization. The first tier is the prediction of the energy loads of the campus, i.e., the inputs to the optimization system. The predictions are made for a week in advance, giving the operator ample time to react to predictions they do not agree with and override the predictions if they feel it necessary. The predictions are inputs to the subplant-level optimization. The subplant-level optimization determines the optimal distribution of energy across major equipment classes (subplants and storage) for the prediction horizon and sends the current distribution to the equipment-level optimization. The operators are able to use the subplant-level optimization for “advisory” only and enter their own load distribution into the equipment-level optimization. This could be done if they feel that they need to be conservative with the charge of the tank. Finally, the equipment-level optimization determines the devices to turn on and their setpoints in each subplant and sends those setpoints to the building automation system. These decisions can be overridden, but overrides should be extremely rare as the system takes device availability, accumulated runtime, etc. as inputs. Building an optimization system that empowers the operator ensures that the campus owner realizes the full potential of their investment. Optimal plant control has shown over 10% savings; for large plants this can translate to savings of more than US $1 million per year.
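    The operator override points described above can be sketched schematically. The Python stub below is not the product's API; the function and key names are illustrative, and it only shows where the load override and advisory-only choices enter the four-tier flow:

```python
def predict_loads(weather_forecast, history):
    """Tier 1: week-ahead campus load prediction (stub)."""
    return [100.0] * 168                                   # kW, hourly for 7 days

def subplant_optimization(load_profile):
    """Tier 2: optimal split of each hour's load across subplants and storage (stub)."""
    return [{"chillers": q * 0.8, "storage": q * 0.2} for q in load_profile]

def equipment_optimization(distribution_now):
    """Tier 3: device on/off decisions and setpoints for the current interval (stub)."""
    return {"chiller_1": "on", "chw_setpoint_c": 6.5}

def run_interval(weather_forecast, history, operator):
    """One pass through the cascade, honoring any operator overrides."""
    loads = predict_loads(weather_forecast, history)
    loads = operator.get("load_override", loads)           # e.g. bump Friday-night loads for the game
    distribution = subplant_optimization(loads)
    if operator.get("advisory_only", False):               # subplant result is advisory; operator's split wins
        distribution[0] = operator["manual_distribution"]
    return equipment_optimization(distribution[0])         # setpoints sent to the building automation system

print(run_interval(weather_forecast=None, history=None, operator={}))  # fully autonomous pass
```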

    Model Predictive Control for Central Plant Optimization with Thermal Energy Storage

    An optimization framework is used to determine how to distribute both hot and cold water loads across a central energy plant including heat pump chillers, conventional chillers, water heaters, and hot and cold water (thermal energy) storage. The objective of the optimization framework is to minimize cost in response to both real-time energy prices and demand charges. The linear programming framework used allows the optimal solution to be found in real time. This real-time capability leads to two separate applications: a planning tool and a real-time optimization tool. In the planning tool, the optimization is performed repeatedly with a sliding horizon, accepting a subset of the optimized distribution trajectory as each subsequent optimization problem is solved. This is the same strategy as model predictive control, except that in the design and planning tool the optimization works on a given set of loads, weather (e.g., TMY data), and real-time pricing data and does not need to predict these values. By varying the length of the horizon (2 to 10 days) and the size of the accepted subset (1 to 24 hours), the design and planning tool can find the design year’s optimal distribution trajectory in less than 5 minutes for interactive plant design, or it can perform a high-fidelity run in a few hours. The fast solution times also allow the optimization framework to be used in real time to optimize the load distribution of an operational central plant using a desktop computer or microcontroller in an onsite Enterprise controller. In the real-time optimization tool, model predictive control is used: estimation, prediction, and optimization are performed to find the optimal distribution of loads over the duration of the horizon in the presence of disturbances. The first step of the distribution trajectory is then applied to the central energy plant, and the estimation, prediction, and optimization are repeated 15 minutes later using new plant telemetry and forecasts. Prediction is performed using a deterministic-plus-stochastic model, where the deterministic portion of the model is a simplified system representing the load of all buildings connected to the central energy plant and the stochastic model is used to respond to disturbances in the load. The deterministic system uses forecasted weather, time of day, and day type to determine a predicted load. The estimator uses past data to determine the current state of the stochastic model; the current state is then projected forward and added to the deterministic system’s projection. In simulation, the system has demonstrated more than 10% savings over schedule-based control trajectories even when the subplants are assumed to be running optimally in both cases (i.e., optimal chiller staging, etc.). For large plants this can mean savings of more than US $1 million per year.
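    A minimal sketch of the receding-horizon loop described above is given below, assuming a 15-minute control interval and a deterministic-plus-stochastic load forecast; the model structure, horizon length, and function names are illustrative, not the tool's implementation:

```python
import numpy as np

HORIZON_STEPS = 4 * 24 * 2            # 15-minute steps over a two-day horizon

def deterministic_load(step, day_type="weekday"):
    """Simplified load model driven by time of day and day type (kW)."""
    hour = (step // 4) % 24
    base = 800.0 if day_type == "weekday" else 500.0
    return base + 300.0 * np.exp(-((hour - 14) ** 2) / 18.0)   # afternoon peak

def estimate_stochastic_state(recent_errors, alpha=0.3):
    """Exponentially weighted estimate of the current load-prediction error (kW)."""
    state = 0.0
    for e in recent_errors:
        state = (1 - alpha) * state + alpha * e
    return state

def optimize_distribution(load_forecast):
    """Placeholder for the linear program that splits load between chillers and storage."""
    return [{"chillers": q, "storage_discharge": 0.0} for q in load_forecast]

recent_errors = [25.0, 40.0, 30.0]        # measured minus predicted load, kW
for _ in range(3):                        # three 15-minute control intervals
    bias = estimate_stochastic_state(recent_errors)
    forecast = [deterministic_load(k) + bias for k in range(HORIZON_STEPS)]
    plan = optimize_distribution(forecast)
    apply_now = plan[0]                   # only the first step is sent to the plant
    recent_errors.append(float(np.random.normal(0.0, 20.0)))   # new telemetry arrives
```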

    Interpreting high-temperature magnetic susceptibility data of natural systems

    High-temperature susceptibility (HT-χ) data are routinely measured in Earth, planetary, and environmental sciences to rapidly identify the magnetic mineralogy of natural systems. The interpretation of such data can be complicated. Whilst some minerals are relatively unaltered by heating and are easy to identify through their Curie or Néel temperature, other common magnetic phases, e.g., iron sulphides, are very unstable to heating. This makes HT-χ interpretation challenging, especially in multi-mineralogical samples. Here, we report a review of HT-χ data, measured primarily at Imperial College London, for common magnetic minerals found in natural samples. We show examples of “near pure” natural samples, in addition to examples of the interpretation of multi-phase HT-χ data. We hope that this paper will act as a first reference for HT-χ data interpretation.
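    As one concrete example of reading a heating curve, a common way to pick a Curie temperature is to take the temperature of the steepest drop in susceptibility (the minimum of dχ/dT). The Python sketch below applies this to a synthetic magnetite-like curve; it illustrates the general technique, not the paper's procedure:

```python
import numpy as np

temperature = np.linspace(20, 700, 500)                      # degC
# Synthetic curve: chi collapses near ~580 degC plus a small paramagnetic tail.
chi = (1.0 / (1.0 + np.exp((temperature - 580.0) / 5.0))
       + 0.02 * 300.0 / (temperature + 273.0))

dchi_dT = np.gradient(chi, temperature)                      # numerical derivative
curie_estimate = temperature[np.argmin(dchi_dT)]             # steepest decrease
print(f"estimated Curie temperature ~ {curie_estimate:.0f} degC")   # expect ~580 degC
```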

    SentiCircles for contextual and conceptual semantic sentiment analysis of Twitter

    Lexicon-based approaches to Twitter sentiment analysis are gaining much popularity due to their simplicity, domain independence, and relatively good performance. These approaches rely on sentiment lexicons, where a collection of words are marked with fixed sentiment polarities. However, words’ sentiment orientation (positive, neutral, negative) and/or sentiment strengths could change depending on context and targeted entities. In this paper we present SentiCircle, a novel lexicon-based approach that takes into account the contextual and conceptual semantics of words when calculating their sentiment orientation and strength in Twitter. We evaluate our approach on three Twitter datasets using three different sentiment lexicons. Results show that our approach significantly outperforms two lexicon baselines. Results are competitive but inconclusive when compared to the state-of-the-art SentiStrength, and vary from one dataset to another. SentiCircle outperforms SentiStrength in accuracy on average, but falls marginally behind in F-measure.
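    For context, the following Python sketch shows a plain lexicon baseline of the kind the paper compares against, where each word carries a fixed prior polarity; SentiCircle departs from this by adjusting each word's score using its co-occurrence context. The lexicon entries and example tweet are made up:

```python
LEXICON = {"love": 2.0, "great": 1.5, "slow": -1.0, "terrible": -2.0}   # fixed prior polarities

def baseline_sentiment(tweet):
    """Sum of fixed word polarities; sign gives orientation, magnitude gives strength."""
    tokens = tweet.lower().split()
    score = sum(LEXICON.get(tok, 0.0) for tok in tokens)
    if score > 0:
        return "positive", score
    if score < 0:
        return "negative", score
    return "neutral", score

print(baseline_sentiment("great phone but terrible battery"))   # ('negative', -0.5)
```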