
    Mining sequences in distributed sensors data for energy production.

    Brief Overview of the Problem: The Environmental Protection Agency (EPA), a government-funded agency, holds both rule-making and enforcement powers for emissions monitoring in the United States. The agency crafts regulations that compel companies to operate within the limits of the law, resulting in environmentally safe operation. Specifically, power companies operate electric generating facilities under guidelines drawn up and enforced by the EPA. Acid rain and other harmful factors require that electric generating facilities report hourly emissions recorded via a Supervisory Control and Data Acquisition (SCADA) system. SCADA is a control and reporting system present in all power plants, consisting of sensors and control mechanisms that monitor all equipment within the plants. The data recorded by a SCADA system is collected by the EPA and allows it to enforce proper plant operation with respect to emissions. This data includes many generating-unit- and plant-specific details, including hourly generation. This hourly generation (termed grossunitload by the EPA) is the actual hourly average output of the generator on a per-unit basis. The questions to be answered are: do any of these units operate in tandem, and do any of the units start, stop, or change operation as a result of another's change in generation? These questions will be answered for the period April 2002 through April 2003 for facilities that operate pipeline natural-gas-fired generating units.

    Purpose of Research: The research conducted has two uses if fruitful. First, a local model relating generating units would be highly valuable to energy traders: betting that a plant will operate a unit based on another's current characteristics could be extremely profitable. This profitability varies with fuel type. For instance, if the price of coal is extremely high due to shortages, knowing a semi-operating characteristic shared by two generating units is highly valuable. Second, the same characteristic can be used in regulatory and operational modeling. This second use is of great importance to government agencies: if regulatory committees are aware of past (or current) similarities between power producers, they may be able to avoid a power struggle in a region caused by greedy traders or companies. Setting profit motives aside, the Department of Energy could use something similar to build a model of power-grid generation availability from historical data for reliability purposes.

    Type of Problem: The problem tackled in this Master's thesis is multiple-time-series pattern recognition. This field is expansive and well studied, so the research performed benefits from previously known techniques. The author has chosen to experiment with conventional techniques such as correlation, principal component analysis, and k-means clustering for feature and, eventually, pattern extraction. For the primary analysis, the author chose a conventional sequence discovery algorithm. The sequence discovery algorithm has no prior knowledge of space limitations; it therefore searches over the entire space, resulting in an expensive but complete process. Prior to sequence discovery, the author applies a uniform coding schema to the raw data, an adaptation of a coding schema presented by Keogh. This combined coding and discovery process is termed USD, or Uniform Sequence Discovery.
The data is high-dimensional as well as extremely dynamic and sporadic in magnitude. The energy market that demands power generation is profit-driven and somewhat reliability-driven. The obvious factors are reliability-based: for instance, to keep system frequency at 60 Hz, units may operate in an idle state, resulting in a constant or very low value for a period of time (idle time). Also, to avoid large frequency swings on the power grid, companies are required …
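The abstract gives enough detail to sketch the shape of USD, though not its exact parameters. Below is a minimal Python sketch, assuming "uniform coding" means equal-width binning of each unit's hourly output (loosely in the spirit of Keogh-style discretization) and that discovery means exhaustively indexing fixed-length symbol windows; the function names, alphabet size, and window length are illustrative, not the thesis's actual choices.

```python
# A minimal sketch of the USD idea (uniform coding + exhaustive sequence
# discovery). Bin counts and window lengths are illustrative assumptions.
import numpy as np
from collections import defaultdict

def uniform_code(series, n_symbols=5):
    """Map a raw hourly-generation series onto a small uniform alphabet.

    Assumption: 'uniform' means equal-width bins between the series min
    and max, loosely in the spirit of Keogh-style discretization.
    """
    series = np.asarray(series, dtype=float)
    lo, hi = series.min(), series.max()
    if hi == lo:                      # idle unit: constant output
        return np.zeros(len(series), dtype=int)
    bins = np.linspace(lo, hi, n_symbols + 1)[1:-1]
    return np.digitize(series, bins)  # one symbol per hour, 0..n_symbols-1

def discover_shared_sequences(coded_units, window=6):
    """Exhaustively index every length-`window` symbol sequence and report
    those occurring in more than one generating unit -- the expensive but
    complete search the abstract describes."""
    index = defaultdict(set)
    for unit_id, codes in coded_units.items():
        for t in range(len(codes) - window + 1):
            index[tuple(codes[t:t + window])].add(unit_id)
    return {seq: units for seq, units in index.items() if len(units) > 1}

# Hypothetical usage with two units' hourly gross unit load:
raw = {"unit_a": [0, 10, 50, 80, 80, 40, 10, 0],
       "unit_b": [5, 12, 55, 78, 79, 42, 12, 1]}
coded = {u: uniform_code(x) for u, x in raw.items()}
shared = discover_shared_sequences(coded, window=4)
```

On this toy data the two units code to the identical symbol string, so every window is reported as shared: the kind of tandem operation the thesis sets out to detect.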

    Smart Home and Artificial Intelligence as Environment for the Implementation of New Technologies

    The technologies of the smart home and artificial intelligence (AI) are now inextricably linked. Perceiving and considering these technologies as a single system makes it possible to significantly simplify the approach to their study, design, and implementation. The introduction of AI into managing the infrastructure of a smart home is an irreversible development of the near future, on par with personal assistants and autopilots. It is extremely important to standardize, create, and follow typical models of information gathering and device management in the smart home, which should in the future lead to a data-analysis and decision-making model realized through the software implementation of a specialized AI. AI techniques such as multi-agent systems, neural networks, and fuzzy logic will form the basis for the functioning of the smart home in the future. The diversity of data and models and the absence of widely adopted, community-driven solutions in this area significantly slow further development. Another major problem is the low share of open-source data and code in smart home and AI research: results are mostly unpublished and difficult to reproduce or implement independently. The proposed ways of finding solutions for models and standards can significantly accelerate the development of specialized AIs for managing a smart home and create an environment for the emergence of native innovative solutions based on the analysis of sensor data collected by smart home monitoring systems. Particular attention should be paid to the search for resource savings and profit from surpluses, which will push the development of these technologies and the transition from the level of prospect to technology exchange and the acquisition of benefits.
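To make the call for "typical models of information gathering and device management" concrete, here is a minimal Python sketch of one such standardized model; the field names and rule format are illustrative assumptions, not an existing smart-home standard.

```python
# A toy standardized sensor-reading model plus a rule engine standing in
# for the specialized AI layer. All names here are illustrative.
from dataclasses import dataclass
import time

@dataclass
class SensorReading:
    device_id: str      # stable identifier shared by all subsystems
    kind: str           # e.g. "temperature", "motion", "power"
    value: float
    unit: str           # explicit unit string, so models stay comparable
    timestamp: float

class RuleEngine:
    """Maps readings to device commands through explicit, reproducible
    rules -- the kind of open, inspectable behavior the abstract argues
    current closed systems lack."""
    def __init__(self):
        self.rules = []   # (predicate, command) pairs

    def add_rule(self, predicate, command):
        self.rules.append((predicate, command))

    def handle(self, reading):
        return [cmd(reading) for pred, cmd in self.rules if pred(reading)]

# Hypothetical usage: turn on heating when temperature drops below 18 C.
engine = RuleEngine()
engine.add_rule(lambda r: r.kind == "temperature" and r.value < 18.0,
                lambda r: f"heating:on@{r.device_id}")
commands = engine.handle(SensorReading("living_room_t1", "temperature",
                                       16.5, "degC", time.time()))
```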

    Index to 1984 NASA Tech Briefs, volume 9, numbers 1-4

    Short announcements of new technology derived from the R&D activities of NASA are presented. These briefs emphasize information considered likely to be transferable across industrial, regional, or disciplinary lines and are issued to encourage commercial application. This index for the 1984 Tech Briefs contains abstracts and four indexes: subject, personal author, originating center, and Tech Brief number. The following areas are covered: electronic components and circuits, electronic systems, physical sciences, materials, life sciences, mechanics, machinery, fabrication technology, and mathematics and information sciences.

    A brief network analysis of Artificial Intelligence publication

    In this paper, we present an illustration of the history of Artificial Intelligence (AI) through a statistical analysis of publications since 1940. We collected and mined the IEEE publication database to analyze the geographical and chronological variation in the activeness of AI research. The connections between different institutes are shown. The results show that the leading communities of AI research are mainly in the USA, China, Europe, and Japan. The key institutes, authors, and research hotspots are revealed. We find that the research institutes in fields such as Data Mining, Computer Vision, Pattern Recognition, and some other fields of Machine Learning are quite consistent, implying a strong interaction between the communities of each field. It is also shown that research in Electronic Engineering and industrial or commercial applications is very active in California, and that Japan publishes many papers in robotics. Due to the limitation of the data source, the results might be overly influenced by the number of published articles; we mitigate this as best we can by applying network key-node analysis to the research community instead of merely counting publications.
    Comment: 18 pages, 7 figures
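As a rough illustration of the key-node analysis mentioned above, the following Python sketch ranks institutes by network centrality rather than raw publication counts, using networkx; the institution graph and its edge weights are hypothetical, not the paper's IEEE data.

```python
# Centrality-based "key node" ranking of a collaboration network.
# The graph below is made up for illustration only.
import networkx as nx

# Edges weighted by number of co-authored publications (invented numbers).
G = nx.Graph()
G.add_weighted_edges_from([
    ("MIT", "Stanford", 42),
    ("MIT", "Tsinghua", 17),
    ("Stanford", "Tsinghua", 23),
    ("Tsinghua", "Univ. of Tokyo", 9),
    ("Stanford", "Univ. of Tokyo", 6),
])

# Raw degree would just echo publication volume; centrality measures
# instead rank institutes by their position in the network.
betweenness = nx.betweenness_centrality(G)          # topological brokerage
pagerank = nx.pagerank(G, weight="weight")          # weight-aware influence

for node in sorted(G, key=pagerank.get, reverse=True):
    print(f"{node:16s} pagerank={pagerank[node]:.3f} "
          f"betweenness={betweenness[node]:.3f}")
```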

    A Survey on IT-Techniques for a Dynamic Emergency Management in Large Infrastructures

    This deliverable is a survey of the IT techniques relevant to the three use cases of the project EMILI. It describes the state of the art in four complementary IT areas: data cleansing, supervisory control and data acquisition, wireless sensor networks, and complex event processing. Even though the deliverable's authors have tried to avoid overly technical language and to explain every concept referred to, the deliverable may still seem rather technical to readers as yet little familiar with the techniques it describes.

    Big Data in Critical Infrastructures Security Monitoring: Challenges and Opportunities

    Critical Infrastructures (CIs), such as smart power grids, transport systems, and financial infrastructures, are increasingly vulnerable to cyber threats due to the adoption of commodity computing facilities. Despite the use of several monitoring tools, recent attacks have proven that current defensive mechanisms for CIs are not effective enough against most advanced threats. In this paper we explore the idea of a framework leveraging multiple data sources to improve the protection capabilities of CIs. Challenges and opportunities are discussed along three main research directions: i) use of distinct and heterogeneous data sources, ii) monitoring with adaptive granularity, and iii) attack modeling and runtime combination of multiple data analysis techniques.
    Comment: EDCC-2014, BIG4CIP-201
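Direction ii), monitoring with adaptive granularity, lends itself to a small illustration. The Python sketch below tightens a component's polling interval as an anomaly score rises; the thresholds and the score itself are illustrative assumptions, not the paper's framework.

```python
# A minimal sketch of adaptive-granularity monitoring: sample coarsely in
# normal conditions and tighten the interval when an anomaly score rises.
# All numeric parameters are illustrative assumptions.
def next_interval_seconds(anomaly_score,
                          base=60.0, fine=1.0,
                          low=0.2, high=0.8):
    """Return the next polling interval for a monitored component.

    anomaly_score: 0.0 (nominal) .. 1.0 (almost certainly under attack),
    as produced by whatever detectors the framework combines at runtime.
    """
    if anomaly_score >= high:
        return fine                       # maximum-resolution monitoring
    if anomaly_score <= low:
        return base                       # cheap, coarse-grained polling
    # Linear interpolation between the two regimes.
    frac = (anomaly_score - low) / (high - low)
    return base + frac * (fine - base)

assert next_interval_seconds(0.0) == 60.0   # nominal: poll once a minute
assert next_interval_seconds(1.0) == 1.0    # under attack: poll every second
```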