12 research outputs found
Operations Management
Global competition has caused fundamental changes in the competitive environment of the manufacturing and service industries. Firms should develop strategic objectives that, upon achievement, result in a competitive advantage in the marketplace. The forces of globalization, on the one hand, and rapidly growing marketing opportunities overseas, especially in emerging economies, on the other, have led to the expansion of operations on a global scale. The book covers the main topics of operations management, including both strategic issues and practical applications, and analyzes a global business environment comprising both manufacturing and services. It contains original research and application chapters written from different perspectives and is enriched by the analysis of case studies
Applied Metaheuristic Computing
For decades, Applied Metaheuristic Computing (AMC) has been a prevailing optimization technique for tackling perplexing engineering and business problems such as scheduling, routing, ordering, bin packing, assignment, and facility layout planning. This is partly because classic exact methods are constrained by prior assumptions, and partly because heuristics are problem-dependent and lack generalization. AMC, by contrast, guides the course of low-level heuristics to search beyond the local optimality that impairs traditional computational methods. This topic series has collected quality papers proposing cutting-edge methodologies and innovative applications that drive the advances of AMC
Factors affecting the adoption of green data centres in Nigeria.
Master's Degree. University of KwaZulu-Natal, Durban. Green technology adoption is a reasonable effort that organisations operating data centres should endorse, given the global environmental crisis concerning electronic waste and the emission of harmful gases, among other environmental concerns. Countries worldwide, especially developed countries such as the United States of America, have improved their data centres for environmental sustainability. However, most organisations in developing countries have yet to improve the level of environmental sustainability in the area of Information Technology. The adoption of green data centres in Nigeria is essential because of its impact on the environment. Anecdotal evidence suggests that most organisations in developing countries lack efforts to go green; this may be attributed to a lack of knowledge on reducing land space and technological components, ultimately affecting productivity. Various factors influence the adoption of green technology, and this study aims to determine these factors in the context of green data centres. The study identified the factors that affect the adoption of green data centres in Nigeria using a descriptive qualitative research approach. Interview questions were aligned to the technology-organisation-environment (TOE) framework, and thematic data analysis using NVivo software was used to find themes showing the factors affecting the adoption of green data centres in Nigeria. Results indicate that a lack of awareness, technical difficulty, a lack of management support, and inadequate policies for green data centres are the predominant factors affecting green data centre adoption
A selective list of acronyms and abbreviations
A glossary of acronyms, abbreviations, initials, code words, and phrases used at the John F. Kennedy Space Center is presented. The revision contains more than 12,100 entries
Adaptive monitoring and control framework in Application Service Management environment
The economics of data centres and cloud computing services have pushed hardware and software requirements to their limits, leaving only a very small performance overhead before systems reach saturation. For Application Service Management (ASM), this carries a growing risk of impacting the execution times of various processes. To deliver a stable service at times of great demand for computational power, enterprise data centres and cloud providers must implement fast and robust control mechanisms that can adapt to changing operating conditions while satisfying service-level agreements. In ASM practice, there are normally two ways of dealing with increased load: increasing computational power or shedding load. The first approach typically involves allocating additional machines, which must be available, waiting idle, to handle high-demand situations. The second approach is implemented by terminating incoming actions that are less important under new activity demand patterns, by throttling, or by rescheduling jobs. Although most modern cloud platforms and operating systems do not allow adaptive or automatic termination of processes, tasks, or actions, it is common practice for administrators to manually end or stop tasks or actions at any level of the system, such as at the level of a node, function, or process, or to kill a long session executing on a database server. In this context, adaptive control of action termination remains a significantly underutilised subject in Application Service Management and deserves further consideration. For example, this approach may be eminently suitable for systems with strict execution-time Service Level Agreements, such as real-time systems, systems running under hard pressure on power supplies, systems running under variable priorities, or systems running under constraints set by the green computing paradigm. Along this line of work, the thesis investigates the potential of dimension relevance and metric-signal decomposition as methods that would enable more efficient action termination. These methods are integrated into adaptive control emulators and actuators, powered by neural networks, that adjust the operation of the system towards better conditions in environments with established goals, seen from both system-performance and economic perspectives. The behaviour of the proposed control framework is evaluated using complex load and service-agreement scenarios for systems compatible with the requirements of on-premises and elastic compute cloud deployments, serverless computing, and microservices architectures
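The load-shedding idea described above (terminating the least important actions when the system approaches saturation) can be sketched as a simple threshold controller. This is only a minimal illustration of the concept, not the thesis's neural-network-based framework; the class names, priorities, and load model are hypothetical:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Action:
    priority: int                     # lower value = less important, shed first
    name: str = field(compare=False)  # name does not participate in ordering

class TerminationController:
    """Shed the lowest-priority running actions when load exceeds a threshold."""

    def __init__(self, load_threshold: float):
        self.load_threshold = load_threshold
        self.running: list[Action] = []   # min-heap keyed on priority

    def submit(self, action: Action) -> None:
        heapq.heappush(self.running, action)

    def control_step(self, current_load: float, load_per_action: float) -> list[str]:
        """Terminate actions, least important first, until the projected
        load falls back below the threshold; return their names."""
        terminated = []
        while current_load > self.load_threshold and self.running:
            victim = heapq.heappop(self.running)
            terminated.append(victim.name)
            current_load -= load_per_action   # crude model: each action frees a fixed share
        return terminated

# Hypothetical usage: three actions, load 1.0 against a 0.8 threshold.
controller = TerminationController(load_threshold=0.8)
for prio, name in [(3, "billing"), (1, "report"), (2, "sync")]:
    controller.submit(Action(prio, name))
killed = controller.control_step(current_load=1.0, load_per_action=0.15)
```

In a real deployment the "load" signal and per-action cost would come from monitoring data rather than constants, and an adaptive controller would tune the threshold itself.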
Study on the gender dimension of trafficking in human beings
The purpose of this study is to contribute to the identification and understanding of what it means to be ‘taking into account the gender perspective, to strengthen the prevention of this crime and protection of the victims thereof’, as required in Article 1 of European Union (EU) Directive 2011/36/EU on Preventing and Combating Trafficking in Human Beings and Protecting its Victims in the context of the EU Strategy (COM(2012) 286 final) Towards the Eradication of Trafficking in Human Beings.
The study contributes to Priority E Action 2 of the Strategy, which states that ‘the Commission will develop knowledge on the gender dimensions of human trafficking, including the gender consequences of the various forms of trafficking and potential differences in the vulnerability of men and women to victimisation and its impact on them.’ Its specific objectives and tasks are to address: the ‘gender dimension of vulnerability, recruitment, and victimisation’; ‘gender issues related to traffickers and to those creating demand’; and ‘an examination of law and policy responses on trafficking in human beings from a gender perspective’.
The study addresses the five priorities of the EU Strategy: identifying, protecting, and assisting victims of trafficking; stepping up the prevention of trafficking in human beings; better law enforcement; enhanced coordination and cooperation among key actors and policy coherence; and increased knowledge of an effective response to emerging concerns.
This study, according to its terms of reference, aims to look specifically at the gender dimension of trafficking for the purpose of sexual exploitation. This follows statistical evidence from Eurostat, as well as data from the European Police Office (Europol) and the United Nations Office on Drugs and Crime (UNODC), according to which the most reported form of exploitation of victims is sexual exploitation, which has a strong gender dimension (96 % women and girls). It further addresses recommendations in the Resolution of the European Parliament of 26 February 2014 on sexual exploitation and prostitution and its impact on gender equality (2013/2103(INI)), urging the European Commission to evaluate the impact that the European legal framework designed to eliminate trafficking for sexual exploitation has had to date, to undertake further research on patterns of prostitution, on human trafficking for the purpose of sexual exploitation and on the increased level of sex tourism in the EU, with particular reference to minors, and to promote the exchange of best practices among the Member States.
The study identifies and draws on EU law and policy competence in gender equality in its identification of the gender dimensions of trafficking. The gender dimensions are clustered into five issues: gender specificity and equal treatment; gender expertise, gender balance in decision-making and gender mainstreaming; the relationship between prostitution and trafficking; gendered policy fields and strategic priorities; and gendered systems and the theory of prevention
A reference model for integrated energy and power management of HPC systems
Optimizing a computer for highest performance dictates the efficient use of its limited resources.
Computers are complex systems, so it is not sufficient to optimize hardware and software components independently. Instead, a holistic view that manages the interactions of all components is essential to achieve system-wide efficiency.
For High Performance Computing (HPC) systems today, the major limiting resources are energy and power. The hardware mechanisms to measure and control energy and power are exposed to software. The software systems using these mechanisms range from firmware, the operating system, and system software to tools and applications. Efforts to improve the energy and power efficiency of HPC systems and of the infrastructure of HPC centers are advancing continuously, but in isolation these efforts cannot cope with the rising energy and power demands of large-scale systems. A systematic way to integrate multiple optimization strategies, building on complementary, interacting hardware and software systems, is missing.
This work provides a reference model for integrated energy and power management of HPC systems: the Open Integrated Energy and Power (OIEP) reference model. The goal is to enable the implementation, setup, and maintenance of modular, system-wide energy and power management solutions. The proposed model goes beyond current practices, which focus on individual HPC centers or implementations, in that it can universally describe any hierarchical energy and power management system with a multitude of requirements. The model builds solid foundations: it is understandable and verifiable, it guarantees stable interaction of hardware and software components, and it establishes a known and trusted chain of command. This work identifies the main building blocks of the OIEP reference model, describes their abstract setup, and shows concrete instances thereof. A principal aspect is how the individual components are connected and interface in a hierarchical manner, so that they can optimize for the global policy pursued as a computing center's operating strategy. In addition to the reference model itself, a method for applying it is presented and used to show the practicality of the reference model and its application.
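The hierarchical chain-of-command idea can be made concrete with a small sketch: a power budget is delegated down a tree of components, each splitting its budget among its children. This is a minimal illustration under assumed semantics (weighted proportional splitting), not the OIEP model's actual specification; all component names and weights are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    """One node in a hierarchical energy/power management tree."""
    name: str
    weight: float = 1.0                                   # share relative to siblings
    children: list["Component"] = field(default_factory=list)
    budget_w: float = 0.0                                 # assigned power budget (watts)

    def delegate(self, budget_w: float) -> None:
        """Accept a budget and recursively split it among children by weight,
        so each level only ever talks to its direct parent and children."""
        self.budget_w = budget_w
        if not self.children:
            return                                        # leaf: nothing to delegate
        total = sum(c.weight for c in self.children)
        for c in self.children:
            c.delegate(budget_w * c.weight / total)

# Hypothetical center with two systems; system_a has twice-weighted priority plus nodes.
node0, node1 = Component("node_0"), Component("node_1")
sys_a = Component("system_a", 3.0, [node0, node1])
sys_b = Component("system_b", 1.0)
center = Component("center", children=[sys_a, sys_b])
center.delegate(4000.0)   # site-wide budget flows down the hierarchy
```

The point of the hierarchy is that the top level enforces a global policy (the site budget) without needing to know about individual nodes; each component re-delegates locally.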
For future research in energy and power management of HPC systems, the OIEP reference model forms a cornerstone for realizing --- planning, developing, and integrating --- innovative energy and power management solutions. For HPC systems themselves, it supports transparent management of current systems with their inherent complexity, allows novel solutions to be integrated into existing setups, and enables new systems to be designed from scratch. In fact, the OIEP reference model represents a basis for holistic, efficient optimization.