
    Energy Management of Distributed Generation Systems

    The book contains 10 chapters and is divided into five sections. The first section includes three chapters that provide an overview of Energy Management of Distributed Systems, outlining typical concepts such as Demand-Side Management, Demand Response, and Distributed and Hierarchical Control for Smart Micro-Grids. The second section contains three chapters and presents different control algorithms, software architectures, and simulation tools dedicated to Energy Management Systems. The third section shows the importance and role of energy storage technology in a Distribution System, describing and comparing different types of energy storage systems. The fourth section shows how to identify and address potential threats to a Home Energy Management System. Finally, the fifth section discusses the Economical Optimization of Operational Cost for Micro-Grids, pointing out the effect of renewable energy sources, active loads, and energy storage systems on economic operation.

    An investigation into the use of B-Nodes and state models for computer network technology and education

    This thesis consists of a series of internationally published, peer-reviewed conference research papers and one journal paper. The papers evaluate and further develop two modelling methods for use in Information Technology (IT) design and for the educational and training needs of students within the area of computer and network technology. The IT age requires technical talent to fill positions such as network manager, web administrator, e-commerce consultant and network security expert. Because IT is changing rapidly, this places considerable demands on higher educational institutions, both within Australia and internationally, to respond to these changes.

    Adaptive Caching of Distributed Components

    Locality of reference is an important property of distributed applications. Caching, i.e., locally storing queried remote data, is typically employed during the development of such applications to exploit this property: subsequent accesses to these data can be accelerated by serving them immediately from the local store. Current middleware architectures, however, offer the application programmer hardly any support for this non-functional aspect. The thesis at hand therefore tries to outsource caching as a separate, configurable middleware service. Integration into the software development process provides for early modeling and later reuse of caching-related metadata. At runtime, the implemented system can additionally adapt to changing usage behavior with respect to the cacheability of data, thus healing misconfigurations and optimizing itself toward an appropriate configuration. Speculative prefetching of data likely to be queried in the immediate future complements the presented approach.
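
As an illustration of the caching-as-a-middleware-service idea sketched in this abstract, the following minimal Python example wraps a remote fetch function in a cache that adapts per-key cacheability at runtime. The class name, the TTL/staleness heuristic and the threshold values are assumptions made for this sketch only; they are not the thesis's actual middleware design, which also covers design-time modeling of caching metadata and speculative prefetching.

```python
import time
from collections import defaultdict

class AdaptiveCache:
    """Cache for results of a remote fetch that adapts per-key cacheability.

    A key whose cached values expire (go stale) too often relative to its
    hits is treated as non-cacheable and is always fetched remotely.
    """

    def __init__(self, fetch, ttl=5.0, max_staleness_rate=0.5):
        self.fetch = fetch                    # function key -> value (the "remote" call)
        self.ttl = ttl                        # seconds a cached value is considered fresh
        self.max_staleness_rate = max_staleness_rate
        self.store = {}                       # key -> (value, timestamp)
        self.hits = defaultdict(int)
        self.stale = defaultdict(int)

    def cacheable(self, key):
        total = self.hits[key] + self.stale[key]
        return total == 0 or self.stale[key] / total <= self.max_staleness_rate

    def get(self, key):
        now = time.time()
        if self.cacheable(key) and key in self.store:
            value, ts = self.store[key]
            if now - ts <= self.ttl:
                self.hits[key] += 1
                return value                  # served from the local store
            self.stale[key] += 1              # expired: counts against cacheability
        value = self.fetch(key)               # remote access
        self.store[key] = (value, now)
        return value

# Usage sketch with a stand-in for a remote component accessor.
remote_calls = 0

def fetch_from_server(key):
    global remote_calls
    remote_calls += 1
    return f"value-of-{key}"

cache = AdaptiveCache(fetch_from_server, ttl=60.0)
for _ in range(3):
    cache.get("customer/42")
print("remote calls:", remote_calls)          # prints 1: later reads hit the cache
```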

    A digital polar transmitter for multi-band OFDM Ultra-WideBand

    Linear power amplifiers used to implement the Ultra-WideBand (UWB) standard must be backed off from their point of optimum power efficiency to meet the standard's specifications, so power efficiency suffers. The problem of low efficiency can be mitigated by polar modulation. Digital polar architectures have been employed for numerous wireless standards such as GSM, EDGE, and WLAN, where the fractional bandwidths achieved are only about 1% and the power levels achieved are often in the vicinity of 20 dBm. Can the architecture be employed for wireless standards with low-power and high fractional bandwidth requirements and yet achieve good power efficiency? To answer this question, this thesis studies the application of a digital polar transmitter architecture with parallel amplifier stages for UWB. The concept of the digital transmitter is motivated and inspired by three factors. First, unrelenting advances in deep-submicron CMOS technology and the prevalence of low-cost digital signal processing have enabled higher levels of integration using digitally intensive approaches. Furthermore, the architecture is an evolution of polar modulation, which is known for high power efficiency in other wireless applications. Finally, the architecture is operated as a digital-to-analog converter, which circumvents the use of separate converters in conventional transmitters. Modeling and simulation of the system architecture are performed on the Agilent Advanced Design System Ptolemy simulation platform. First, by studying the envelope signal, we found that envelope clipping results in a reduction in the peak-to-average power ratio, which in turn improves the error vector magnitude (EVM) performance, the figure of merit for this study. In addition, we have demonstrated that a resolution of three bits suffices for the digital polar transmitter when envelope clipping is performed. Next, this thesis covers a theoretical derivation of an estimate of the error vector magnitude based on resolution, quantization and phase noise errors. An analysis of process variations, which result in gain and delay mismatches, for a digital transmitter architecture with four bits follows. These studies allow RF designers to estimate the number of bits required and the amount of distortion that can be tolerated in the system. Next, a study of the circuit implementation was conducted. The DPA comprises 7 parallel RF amplifiers driven by a constant-envelope, phase-modulated RF signal and 7 cascode transistors (individually connected in series with the bottom amplifiers) that are digitally controlled by a 3-bit digitized envelope signal to reconstruct the UWB signal at the output. Through the use of NFET models from the IBM 130-nm technology, our simulation reveals that the DPA is able to achieve an EVM of -22 dB. The DPA simulations have been performed at a centre frequency of 3.432 GHz with a channel bandwidth of 528 MHz, which translates to a fractional bandwidth of 15.4%. Drain efficiencies of 13.2/19.5/21.0% have been obtained while delivering -1.9/2.5/5.5 dBm of output power and consuming 5/9/17 mW of power. In addition, we performed a yield analysis of the digital polar amplifier, based on unit-weighted and binary-weighted architectures, when gain variations are introduced in all the individual stages. The dynamic element matching method is also introduced for the unit-weighted digital polar transmitter.
Monte Carlo simulations reveal that when the gains of the amplifiers are allowed to vary with a mean of 1 and a standard deviation of 0.2, the binary-weighted architecture obtained a yield of 79%, while the yields of the unit-weighted architectures are in the neighbourhood of 95%. Moreover, the dynamic element matching technique improves the yield by approximately 3%. Finally, a hardware implementation of this architecture based on software-defined arbitrary waveform generators is studied. In this section, we demonstrate that the error vector magnitude results obtained with a four-stage binary-weighted digital polar transmitter under ideal combining conditions fulfill the European Computer Manufacturers Association requirements. The proposed experimental setup, believed to be the first ever attempted, confirms the feasibility of a digital polar transmitter architecture for Ultra-WideBand. In addition, we propose a number of power combining techniques suitable for the hardware implementation. Spatial power combining, in particular, shows high potential for the digital polar transmitter architecture. The above studies demonstrate the feasibility of the digital polar architecture with good power efficiency for a wideband wireless standard with low-power and high fractional bandwidth requirements.
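
The yield comparison described above can be illustrated with a small Monte Carlo sketch in Python. The static amplitude model below, the RMS amplitude-error metric used as a stand-in for EVM, and the pass/fail threshold are assumptions made for this sketch only; it does not reproduce the thesis's ADS Ptolemy simulation setup or its exact yield figures, only the qualitative comparison of unit-weighted and binary-weighted stages under per-stage gain variation (mean 1, standard deviation 0.2).

```python
import numpy as np

rng = np.random.default_rng(0)

def dpa_output(code, gains, weights):
    """Output amplitude of the DPA for a 3-bit envelope code.

    Binary weighting: each bit of the code switches one weighted stage.
    Unit weighting: thermometer coding, the first `code` unit cells are on.
    """
    if len(weights) == 3:
        on = [(code >> b) & 1 for b in range(3)]
    else:
        on = [1 if i < code else 0 for i in range(len(weights))]
    return sum(o * w * g for o, w, g in zip(on, weights, gains))

def evm_like_error(gains, weights):
    """RMS amplitude error over all non-zero codes, normalised to the ideal RMS."""
    ideal = np.array([dpa_output(c, np.ones(len(weights)), weights) for c in range(1, 8)])
    real = np.array([dpa_output(c, gains, weights) for c in range(1, 8)])
    return np.sqrt(np.mean((real - ideal) ** 2)) / np.sqrt(np.mean(ideal ** 2))

def estimate_yield(weights, sigma=0.2, threshold=0.12, trials=10_000):
    """Fraction of Monte Carlo trials whose error metric stays below the threshold."""
    ok = 0
    for _ in range(trials):
        gains = rng.normal(1.0, sigma, size=len(weights))
        ok += evm_like_error(gains, weights) < threshold
    return ok / trials

print("binary-weighted (1,2,4) yield:", estimate_yield([1, 2, 4]))
print("unit-weighted (7 cells) yield:", estimate_yield([1] * 7))
```

With these assumptions the unit-weighted array averages out per-stage gain errors at higher codes, so its estimated yield comes out higher than the binary-weighted one, mirroring the trend reported above.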

    The Third NASA Goddard Conference on Mass Storage Systems and Technologies

    This report contains copies of nearly all of the technical papers and viewgraphs presented at the Third Goddard Conference on Mass Storage Systems and Technologies, held in October 1993. The conference served as an informational exchange forum for topics primarily relating to the ingestion and management of massive amounts of data and the attendant problems involved. Discussion topics include the necessary use of computers in the solution of today's infinitely complex problems; the need for greatly increased storage densities in both optical and magnetic recording media; currently popular storage media and magnetic media storage risk factors; and data archiving standards, including a talk on the current status of the IEEE Storage Systems Reference Model (RM). Additional topics addressed include system performance, data storage system concepts, communications technologies, data distribution systems, data compression, and error detection and correction.

    Toward timely, predictable and cost-effective data analytics

    Modern industrial, government, and academic organizations are collecting massive amounts of data at an unprecedented scale and pace. The ability to perform timely, predictable and cost-effective analytical processing of such large data sets in order to extract deep insights is now a key ingredient for success. Traditional database systems (DBMS) are, however, not the first choice for servicing these modern applications, despite 40 years of database research. This is because modern applications exhibit different behavior from the one assumed by DBMS: a) timely data exploration as a new trend is characterized by ad-hoc queries and a short user interaction period, leaving little time for the DBMS to do good performance tuning; b) accurate statistics representing relevant summary information about distributions of ever-increasing data are frequently missing, resulting in suboptimal plan decisions and consequently poor and unpredictable query execution performance; and c) cloud service providers, a major winner in the data analytics game due to the low cost of (shared) storage, have shifted the control over data storage from the DBMS to the cloud providers, making it harder for the DBMS to optimize data access. This thesis demonstrates that database systems can still provide timely, predictable and cost-effective analytical processing if they use an agile and adaptive approach. In particular, DBMS need to adapt at three levels (to workload, data and hardware characteristics) in order to stabilize and optimize performance and cost when faced with the requirements posed by modern data analytics applications. Workload-driven data ingestion, introduced with NoDB, is a means to enable efficient data exploration and reduce the data-to-insight time (i.e., the time to load the data and tune the system) by performing these steps lazily and incrementally as a side-effect of posed queries rather than as mandatory first steps. Data-driven runtime access path decision making, introduced with Smooth Scan, alleviates suboptimal query execution by postponing the decision on access paths from query optimization, where statistics are heavily exploited, to query execution, where the system can obtain more details about data distributions. Smooth Scan uses access path morphing from one physical alternative to another to fit the observed data distributions, which removes the need for a priori access path decisions and substantially improves the predictability of the DBMS. Hardware-driven query execution, introduced with Skipper, enables the use of cold storage devices (CSD) as a cost-effective solution for storing ever-increasing customer data. Skipper uses an out-of-order, CSD-driven query execution model based on multi-way joins, coupled with efficient cache and I/O scheduling policies, to hide the non-uniform access latencies of CSD. This thesis advocates runtime adaptivity as a key to dealing with the rising uncertainty about workload characteristics that modern data analytics applications exhibit. Overall, the techniques introduced in this thesis through the three levels of adaptivity (workload-, data- and hardware-driven adaptivity) increase the usability of database systems and user satisfaction in the case of big data exploration, making low-cost data analytics a reality.
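
As a toy illustration of the runtime access-path adaptation idea described above, the following Python sketch starts a range query as an index scan and morphs into a sequential scan once the fraction of the table already fetched suggests the predicate is unselective. The data structures, switch heuristic, and threshold are assumptions made for this sketch; it is not the actual Smooth Scan operator or its cost logic.

```python
import bisect

def adaptive_range_scan(table, index, lo, hi, switch_fraction=0.1):
    """Return rows with key in [lo, hi], adapting the access path at run time.

    Starts as an index range scan (random row fetches); if the observed result
    fraction exceeds `switch_fraction`, it morphs into a sequential scan of the
    remaining rows, trading random I/O for sequential I/O.
    """
    keys = [k for k, _ in index]                   # index: sorted (key, rowid) pairs
    start = bisect.bisect_left(keys, lo)
    result, fetched = [], 0

    for pos in range(start, len(index)):
        key, rowid = index[pos]
        if key > hi:
            return result                          # finished while still in index mode
        result.append(table[rowid])                # random access to the base table
        fetched += 1
        if fetched / len(table) > switch_fraction:
            break                                  # predicate looks unselective: morph

    already_seen = {rowid for _, rowid in index[start:start + fetched]}
    for rowid, row in enumerate(table):            # sequential scan of the rest
        if rowid not in already_seen and lo <= row["key"] <= hi:
            result.append(row)
    return result

# Usage sketch on toy data: 100 rows, keys 0..99.
table = [{"key": k, "payload": k * k} for k in range(100)]
index = sorted((row["key"], rowid) for rowid, row in enumerate(table))
print(len(adaptive_range_scan(table, index, lo=10, hi=80)))   # 71 rows; the scan morphs part-way
```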

    An investigation into computer and network curricula

    This thesis consists of a series of internationally published, peer-reviewed journal and conference research papers that analyse the educational and training needs of undergraduate Information Technology (IT) students within the area of Computer and Network Technology (CNT) education. Research by Maj et al. has found that accredited computing science curricula can fail to meet the expectations of employers in the field of CNT: "It was found that none of these students could perform first line maintenance on a Personal Computer (PC) to a professional standard with due regard to safety, both to themselves and the equipment. Neither could they install communication cards, cables and network operating system or manage a population of networked PCs to an acceptable commercial standard without further extensive training. It is noteworthy that none of the students interviewed had ever opened a PC. It is significant that all those interviewed for this study had successfully completed all the units on computer architecture and communication engineering" (Maj, Robbins, Shaw, & Duley, 1998). The students' curricula at that time lacked units in which they gained hands-on experience with modern PC hardware or networking skills. This was despite the fact that their computing science course was level one accredited, the highest accreditation level offered by the Australian Computer Society (ACS). The results of the initial survey in Western Australia led to the introduction of two new units within the Computing Science degree at Edith Cowan University (ECU): Computer Installation & Maintenance (CIM) and Network Installation & Maintenance (NIM) (Maj, Fetherston, Charlesworth, & Robbins, 1998). Uniquely within an Australian university context, these new syllabi require students to work on real equipment. Such experience excludes digital circuit investigation, which is still a recommended approach by the Association for Computing Machinery (ACM) for computer architecture units (ACM, 2001, p. 97). Instead, the CIM unit employs a top-down approach based initially upon students' everyday experiences, which is more in accordance with constructivist educational theory and practice.

These papers propose an alternative model of IT education that helps to accommodate the educational and vocational needs of IT students in the context of continual rapid changes and developments in technology. The ACM have recognised the need for variation, noting that "there are many effective ways to organize a curriculum even for a particular set of goals and objectives" (Tucker et al., 1991, p. 70). A possible major contribution to new knowledge of these papers relates to how high-level abstract bandwidth (B-Node) models may contribute to the understanding of why and how computer and networking technology systems have developed over time. Because these models are de-coupled from the underlying technology, which is subject to rapid change, they may help to future-proof student knowledge and understanding of the ongoing and future development of computer and networking systems. The de-coupling is achieved through abstraction based upon bandwidth or throughput rather than the specific implementation of the underlying technologies. One of the underlying problems is that computing systems tend to change faster than the ability of most educational institutions to respond.

Abstraction and the use of B-Node models could help educational models respond more quickly to changes in the field, and can also help to introduce an element of future-proofing into the education of IT students. The importance of abstraction has been noted by the ACM, who state: "Levels of Abstraction: the nature and use of abstraction in computing; the use of abstraction in managing complexity, structuring systems, hiding details, and capturing recurring patterns; the ability to represent an entity or system by abstractions having different levels of detail and specificity" (ACM, 1991b). Bloom et al. note the importance of abstraction, listing under the heading "Knowledge of the universals and abstractions in a field" the objective: "Knowledge of the major schemes and patterns by which phenomena and ideas are organized. These are large structures, theories, and generalizations which dominate a subject or field or problems. These are the highest levels of abstraction and complexity" (Bloom, Engelhart, Furst, Hill, & Krathwohl, 1956, p. 203). Abstractions can be applied to computer and networking technology to provide students with common fundamental concepts regardless of the particular underlying technological implementation, and to help avoid the rapid redundancy of a detailed knowledge of modern computer and networking technology implementation and hands-on skills acquisition. Again, the ACM note that "enduring computing concepts include ideas that transcend any specific vendor, package or skill set... While skills are fleeting, fundamental concepts are enduring and provide long lasting benefits to students, critically important in a rapidly changing discipline" (ACM, 2001, p. 70). These abstractions can also be reinforced by experiential learning aligned with commercial practices. In this context, the other possibly major contribution of new knowledge provided by this thesis is an efficient, scalable and flexible model for assessing the hands-on skills and understanding of IT students. This is a form of Competency-Based Assessment (CBA), which has been successfully tested as part of this research and subsequently implemented at ECU. This is the first time within this field that this specific type of research has been undertaken within the university sector in Australia. Hands-on experience and understanding can become outdated, hence the need for the future-proofing provided by B-Node models.

The three major research questions of this study are:
• Is it possible to develop a new, high-level abstraction model for use in CNT education?
• Is it possible to have CNT curricula that are more directly relevant to both student and employer expectations without suffering from rapid obsolescence?
• Can an effective, efficient and meaningful assessment be undertaken to test students' hands-on skills and understandings?

The ACM Special Interest Group on Data Communication (SIGCOMM) workshop report on Computer Networking, Curriculum Designs and Educational Challenges notes a list of teaching approaches: "... the more 'hands-on' laboratory approach versus the more traditional in-class lecture-based approach; the bottom-up approach towards subject matter versus the top-down approach" (Kurose, Leibeherr, Ostermann, & Ott-Boisseau, 2002, para 1). Bandwidth considerations are approached from the PC hardware level and at each of the seven layers of the International Standards Organisation (ISO) Open Systems Interconnection (OSI) reference model.
    It is believed that this research is of significance to computing education. However, further research is needed.
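
As a rough illustration of the bandwidth-based abstraction on which B-Node models are built, the following Python sketch represents a computing path as a chain of nodes characterised only by throughput and reports the bottleneck. The node names, the figures, and the simple min-of-chain rule are assumptions made for this sketch rather than the thesis's formal B-Node definition.

```python
from dataclasses import dataclass

@dataclass
class BNode:
    """A node in an abstract, implementation-independent bandwidth model."""
    name: str
    bandwidth_mbps: float      # sustained throughput in Mb/s

def path_throughput(nodes):
    """End-to-end throughput of a chain of B-Nodes: limited by the slowest node."""
    bottleneck = min(nodes, key=lambda n: n.bandwidth_mbps)
    return bottleneck.name, bottleneck.bandwidth_mbps

# Usage sketch: a PC fetching data over a network (illustrative figures only).
chain = [
    BNode("disk subsystem", 6000),
    BNode("system bus", 32000),
    BNode("network interface card", 1000),
    BNode("switch port", 1000),
    BNode("WAN uplink", 100),
]
print(path_throughput(chain))   # ('WAN uplink', 100)
```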