217 research outputs found

    Maintenance Modelling


    An Automated Approach of Detection of Memory Leaks for Remote Server Controllers

    Memory leaks are a major concern for long-running applications such as servers, because they cause the working set to grow over the lifetime of the program and eventually lead to system crashes. This paper discusses a staged approach to detecting leaks in the firmware of a remote server controller. A remote server controller monitors the server remotely, with many processes running in the background, and any memory leak in these long-running processes poses a threat to system performance. The approach adopted here first filters the processes running in the system for suspected leaks based on a time threshold. The suspected processes are then passed to the next stage, where precise memory leak detection is performed using the open-source dynamic instrumentation tool Valgrind. The system leverages an automated leak detection approach that invokes the leak detection process whenever a severity condition is encountered in the system and generates a consolidated leak report. The proposed approach has little impact on system performance and is faster than many available systems, as there is no need to modify or recompile the program. In addition, the automated approach offers an effective technique for detecting possible leaks in early software development phases.
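    To make the two-stage idea concrete, here is a minimal, hypothetical sketch in Python: the process names, the sampling-based filter and the Valgrind memcheck invocation are illustrative assumptions, not the controller firmware tooling described in the paper.

```python
import re
import subprocess

TIME_THRESHOLD_SAMPLES = 3  # assumed: samples needed before a process is screened

def suspect_processes(rss_samples):
    """Stage 1 (illustrative filter): flag processes whose resident set size
    grows monotonically across the monitoring samples."""
    suspects = []
    for name, history in rss_samples.items():
        if len(history) >= TIME_THRESHOLD_SAMPLES and all(b > a for a, b in zip(history, history[1:])):
            suspects.append(name)
    return suspects

def valgrind_definitely_lost(command):
    """Stage 2 (hypothetical wrapper): rerun the candidate under Valgrind
    memcheck and extract the 'definitely lost' figure from the leak summary."""
    result = subprocess.run(
        ["valgrind", "--leak-check=full"] + command,
        capture_output=True, text=True)
    match = re.search(r"definitely lost: ([\d,]+) bytes", result.stderr)
    return match.group(1) if match else "0"

if __name__ == "__main__":
    # Resident set sizes (KiB) sampled over the monitoring window -- dummy data.
    rss_samples = {"sensor_poller": [1024, 1400, 1900], "idle_task": [512, 512, 512]}
    for proc in suspect_processes(rss_samples):
        print(proc, "definitely lost:", valgrind_definitely_lost(["./" + proc]), "bytes")
```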

    A failure index for high performance computing applications

    This dissertation introduces a new metric in the area of High Performance Computing (HPC) application reliability and performance modeling. Derived via a time-dependent implementation of an existing inequality measure, the Failure Index (FI) generates a coefficient representing the level of volatility of the failures incurred by an application running on a given HPC system in a given time interval. This coefficient presents a normalized, cross-system representation of the failure volatility of applications running on failure-rich HPC platforms. Further, the origin and ramifications of application failures are investigated, from which certain mathematical conclusions yield greater insight into the behavior of these applications in failure-rich system environments. This work also includes background information on the problems facing HPC applications at the highest scale, the lack of standardized application-specific metrics within this arena, and a means of generating such metrics in a low-latency manner. A case study containing detailed analysis showcasing the benefits of the FI is also included.
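    For a rough sense of what a normalized volatility coefficient over failure records might look like, the sketch below computes a Gini-style inequality measure over per-interval failure counts; the choice of the Gini coefficient, the equal-length intervals and the interpretation are assumptions for illustration, not necessarily the inequality measure underlying the FI.

```python
def failure_index(failure_counts):
    """Illustrative volatility coefficient: a Gini-style inequality measure over
    failure counts observed in equal-length time intervals.  Returns 0.0 when
    failures are spread evenly and approaches 1.0 when they are concentrated in
    a few intervals (assumed interpretation)."""
    n = len(failure_counts)
    total = sum(failure_counts)
    if n == 0 or total == 0:
        return 0.0
    counts = sorted(failure_counts)
    # Standard Gini formulation on ascending-sorted counts.
    cumulative = sum((i + 1) * c for i, c in enumerate(counts))
    return (2.0 * cumulative) / (n * total) - (n + 1.0) / n

# Example: the same application on two hypothetical HPC systems.
print(failure_index([3, 3, 3, 3]))   # evenly spread failures -> 0.0
print(failure_index([0, 0, 0, 12]))  # bursty failures        -> 0.75
```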

    Do Memories Haunt You? An Automated Black Box Testing Approach for Detecting Memory Leaks in Android Apps

    Memory leaks represent a remarkable problem for mobile app developers, since memory wasted through bad programming practices may reduce the available memory of the device, slow down apps, reduce their responsiveness and, in the worst cases, cause the app to crash. A common cause of memory leaks in the specific context of Android apps is the improper handling of the events tied to the Activity Lifecycle. In order to detect and characterize these memory leaks, we present FunesDroid, a tool-supported black-box technique for the automatic detection of memory leaks tied to the Activity Lifecycle in Android apps. FunesDroid implements a testing approach that can find memory leaks by analyzing unnecessary heap object replications after the execution of three different sequences of Activity Lifecycle events. In the paper, we present an exploratory study that shows the capability of the proposed technique to detect memory leaks and to characterize them in terms of their size, persistence and growth trend. The study also illustrates how memory leak causes can be detected with the support of the information provided by the FunesDroid tool.
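    A rough sketch of the kind of heap comparison the abstract describes is shown below; the class names, the snapshot format and the simple growth heuristic are assumptions for illustration, not the actual FunesDroid implementation.

```python
from collections import Counter

def leaked_classes(heap_snapshots):
    """Illustrative core of a lifecycle-based leak check: given per-class
    instance counts taken after each repetition of the same Activity Lifecycle
    event sequence, flag classes whose count grows every time (unnecessary
    heap object replications)."""
    suspects = {}
    for cls in heap_snapshots[0]:
        counts = [snap.get(cls, 0) for snap in heap_snapshots]
        if all(b > a for a, b in zip(counts, counts[1:])):
            suspects[cls] = counts[-1] - counts[0]  # growth over the run
    return suspects

# Dummy per-class instance counts after 1, 2 and 3 rotations of an Activity;
# in practice these figures would come from parsed Android heap dumps.
snapshots = [
    Counter({"com.example.MainActivity": 1, "android.os.Handler": 4}),
    Counter({"com.example.MainActivity": 2, "android.os.Handler": 4}),
    Counter({"com.example.MainActivity": 3, "android.os.Handler": 4}),
]
print(leaked_classes(snapshots))  # -> {'com.example.MainActivity': 2}
```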

    Machine and component residual life estimation through the application of neural networks

    Analysis of reliability data plays an important role in the maintenance decision-making process. The accurate estimation of residual life in components and systems can be a great asset when planning the preventive replacement of components on machines. Artificial intelligence is a field that has developed rapidly over the last twenty years, and practical applications have been found in many diverse areas. The use of such methods in the maintenance field has, however, not yet been fully explored. With the common availability of condition monitoring data, another dimension has been added to the analysis of reliability data. Neural networks allow explanatory variables to be incorporated into the analysis process. This is expected to improve the quality of predictions compared to the results achieved with methods that rely solely on failure time data. Neural networks can therefore be seen as an alternative to the various regression models, such as the proportional hazards model, which also incorporate such covariates into the analysis.

    To investigate their applicability to the problem of predicting the residual life of machines and components, neural networks were trained and tested on two different reliability-related datasets. The first dataset represents the renewal case, where repair leads to complete restoration of the system. A typical maintenance situation was simulated in the laboratory by subjecting a series of similar test pieces to different loading conditions. Measurements were taken at regular intervals during testing with a number of sensors, which provided an indication of the test piece's condition at the time of measurement. The dataset was split into a training set and a test set, and a number of neural network variations were trained using the first set. The networks' ability to generalize was then tested by presenting the data from the test set to each of these networks. The second dataset contained data collected from a group of pumps working in a coal mining environment, and therefore represented an example of the situation encountered with a repaired system. The performance of different neural network variations was subsequently compared through the use of cross-validation.

    It was shown that, in most cases, the use of condition monitoring data as network inputs improved the accuracy of the neural networks' predictions. The average prediction error of the various neural networks under comparison varied between 431 and 841 seconds on the renewal dataset, where test pieces had a characteristic life of 8971 seconds. When optimized, the multi-layer perceptron neural networks trained with the Levenberg-Marquardt algorithm and the general regression neural network produced sums of squares errors within 11.1% of each other for the data of the repaired system. This result emphasizes the importance of adjusting parameters, network architecture and training targets for optimal performance. The advantage of using neural networks for predicting residual life was clearly illustrated when comparing their performance to the results achieved through the use of traditional statistical methods. The potential of using neural networks for residual life prediction was therefore illustrated in both cases.

    Dissertation (MEng (Mechanical Engineering))--University of Pretoria, 2007. Mechanical and Aeronautical Engineering.
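    As a minimal sketch of the general setup (measured condition and age as inputs, remaining life as the target), the following uses a scikit-learn multi-layer perceptron on synthetic data; the feature names, the synthetic degradation model and the optimizer are illustrative assumptions and differ from the Levenberg-Marquardt and general regression networks evaluated in the dissertation.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for the renewal dataset: each row is one measurement of a
# test piece [age in seconds, vibration level, temperature], and the target is
# the residual life in seconds (characteristic life of roughly 9000 s assumed).
n = 500
age = rng.uniform(0, 9000, n)
vibration = 0.5 + age / 9000 + rng.normal(0, 0.05, n)   # degrades with age
temperature = 40 + age / 300 + rng.normal(0, 1.0, n)
X = np.column_stack([age, vibration, temperature])
y = np.maximum(9000 - age + rng.normal(0, 300, n), 0)   # residual life

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)

errors = np.abs(model.predict(X_test) - y_test)
print(f"mean absolute prediction error: {errors.mean():.0f} s")
```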

    Theoretical and Computational Research in Various Scheduling Models

    Nine manuscripts were published in this Special Issue on “Theoretical and Computational Research in Various Scheduling Models, 2021” of the MDPI Mathematics journal, covering a wide range of topics connected to the theory and applications of various scheduling models and their extensions/generalizations. These topics include a road network maintenance project, cost reduction of subcontracted resources, a variant of the relocation problem, the analysis of a network of activities with generally distributed durations through a Markov chain, an idea for improving the return loading rate problem by integrating the sub-tour reversal approach with the method of the theory of constraints, an extended solution method for optimizing the bi-objective no-idle permutation flowshop scheduling problem, the burn-in (B/I) procedure, the Pareto-scheduling problem with two competing agents, and three preemptive Pareto-scheduling problems with two competing agents, among others. We hope that the book will be of interest to those working in the area of various scheduling problems and will provide a bridge facilitating the interaction between researchers and practitioners in scheduling questions. Although discrete mathematics is a common means of solving scheduling problems, its further development is limited by the lack of general principles, which poses a major challenge in this research field.

    DEPENDABILITY IN CLOUD COMPUTING

    The technological advances and success of Service-Oriented Architectures and the Cloud computing paradigm have produced a revolution in Information and Communications Technology (ICT). Today, a wide range of services is provisioned to users in a flexible and cost-effective manner, thanks to the encapsulation of several technologies within modern business models. These services not only offer high-level software functionalities such as social networks or e-commerce, but also middleware tools that simplify application development, and low-level data storage, processing, and networking resources. Hence, with the advent of the Cloud computing paradigm, today's ICT allows users to completely outsource their IT infrastructure and benefit significantly from the economies of scale. At the same time, with the widespread use of ICT, the amount of data being generated, stored and processed by private companies, public organizations and individuals is rapidly increasing. The in-house management of data and applications is proving to be highly cost intensive, and Cloud computing is becoming the destination of choice for an increasing number of users. As a consequence, Cloud computing services are being used to realize a wide range of applications, each having unique dependability and Quality-of-Service (QoS) requirements. For example, a small enterprise may use a Cloud storage service as a simple backup solution, requiring high data availability, while a large government organization may execute a real-time mission-critical application using the Cloud compute service, requiring high levels of dependability (e.g., reliability, availability, security) and performance. Service providers are presently able to offer sufficient resource heterogeneity, but are failing to satisfy users' dependability requirements, mainly because failures and vulnerabilities in Cloud infrastructures are the norm rather than the exception.

    This thesis provides a comprehensive solution for improving the dependability of Cloud computing, so that users can justifiably trust Cloud computing services for building, deploying and executing their applications. A number of approaches, ranging from the use of trustworthy hardware to secure application design, have been proposed in the literature. The proposed solution consists of three inter-operable yet independent modules, each designed to improve dependability under a different system context and/or use-case. A user can selectively apply a single module or combine them suitably to improve the dependability of her applications both at design time and at runtime. Based on the modules applied, the overall proposed solution can increase dependability at three distinct levels. In the following, we provide a brief description of each module.

    The first module comprises a set of assurance techniques that validate whether a given service supports a specified dependability property with a given level of assurance and, accordingly, award it a machine-readable certificate. To achieve this, we define a hierarchy of dependability properties, where a property represents the dependability characteristics of the service and its specific configuration. A model of the service is also used to verify the validity of the certificate using runtime monitoring, thus complementing the dynamic nature of the Cloud computing infrastructure and making the certificate usable both at discovery time and at runtime. This module also extends the service registry to allow users to select services with a set of certified dependability properties, hence offering the basic support required to implement dependable applications. We note that this module directly considers services implemented by service providers and provides awareness tools that allow users to be aware of the QoS offered by potential partner services. We denote this passive technique as the solution that offers the first level of dependability in this thesis.

    Service providers typically implement a standard set of dependability mechanisms that satisfy the basic needs of most users. Since each application has unique dependability requirements, assurance techniques are not always effective, and a pro-active approach to dependability management is also required. The second module of our solution advocates the innovative approach of offering dependability as a service to users' applications and realizes a framework containing all the mechanisms required to achieve this. We note that this approach relieves users from implementing low-level dependability mechanisms and system management procedures during application development, and satisfies the specific dependability goals of each application. We denote the module offering dependability as a service as the solution that offers the second level of dependability in this thesis.

    The third, and last, module of our solution concerns secure application execution. This module considers complex applications and presents advanced resource management schemes that deploy applications with improved optimality when compared to the algorithms of the second module. It improves the dependability of a given application by minimizing its exposure to existing vulnerabilities, while being subject to the same dependability policies and resource allocation conditions as in the second module. Our approach to secure application deployment and execution denotes the third level of dependability offered in this thesis.

    The contributions of this thesis can be summarized as follows.

    • With respect to assurance techniques, our contributions are: i) definition of a hierarchy of dependability properties, an approach to service modeling, and a model transformation scheme; ii) definition of a dependability certification scheme for services; iii) an approach to service selection that considers users' dependability requirements; iv) definition of a solution to the dependability certification of composite services, where the dependability properties of a composite service are calculated on the basis of the dependability certificates of its component services.

    • With respect to offering dependability as a service, our contributions are: i) definition of a delivery scheme that functions transparently on users' applications and satisfies their dependability requirements; ii) design of a framework that encapsulates all the components necessary to offer dependability as a service to users; iii) an approach to translate high-level users' requirements into low-level dependability mechanisms; iv) formulation of constraints that allow enforcement of deployment conditions inherent to dependability mechanisms, and an approach to satisfy such constraints during resource allocation; v) a resource management scheme that masks the effect of system changes by adapting the current allocation of the application.

    • With respect to security management, our contributions are: i) an approach that deploys users' applications in the Cloud infrastructure such that their exposure to vulnerabilities is minimized; ii) an approach to build interruptible elastic algorithms whose optimality improves as the processing time increases, eventually converging to an optimal solution.
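    As a toy illustration of what deploying an application "such that its exposure to vulnerabilities is minimized" can mean, the sketch below greedily places components on the least vulnerable hosts that can still accommodate them; the host names, scores, single-resource capacity model and greedy heuristic are assumptions for illustration, far simpler than the resource management schemes developed in the thesis.

```python
def min_exposure_placement(components, hosts):
    """Illustrative greedy placement: assign each component (with a CPU demand)
    to the feasible host with the lowest vulnerability score, keeping the
    application's overall exposure to vulnerable hosts small."""
    placement = {}
    capacity = {name: spec["capacity"] for name, spec in hosts.items()}
    # Place the most demanding components first so they still fit on the
    # safest hosts that can accommodate them.
    for comp, demand in sorted(components.items(), key=lambda kv: -kv[1]):
        feasible = [h for h in hosts if capacity[h] >= demand]
        if not feasible:
            raise ValueError(f"no host can accommodate component {comp}")
        best = min(feasible, key=lambda h: hosts[h]["vuln_score"])
        placement[comp] = best
        capacity[best] -= demand
    return placement

# Dummy hosts with a single-resource capacity and a normalized vulnerability score.
hosts = {
    "host-a": {"capacity": 8, "vuln_score": 0.7},
    "host-b": {"capacity": 4, "vuln_score": 0.1},
    "host-c": {"capacity": 6, "vuln_score": 0.3},
}
components = {"web": 2, "db": 4, "worker": 3}
print(min_exposure_placement(components, hosts))
# -> {'db': 'host-b', 'worker': 'host-c', 'web': 'host-c'}
```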

    Responsible Inventory Models for Operation and Logistics Management

    The industrialization and subsequent economic development that occurred in the last century led industrialized societies to pursue increasingly ambitious economic and financial goals, temporarily laying aside the safeguarding of the environment and the protection of human health. Over the last decade, however, modern societies have begun to reconsider the importance of social and environmental issues alongside economic and financial goals. In the real industrial environment, as well as in today's research activities, new concepts have been introduced, such as sustainable development (SD), the green supply chain and the ergonomics of the workplace. The notion of “triple bottom line” (3BL) accounting has become increasingly important in industrial management over the last few years (Norman and MacDonald, 2004). The main idea behind the 3BL paradigm is that a company's ultimate success should not be measured only by traditional financial results, but also by its ethical and environmental performance. Social and environmental responsibility is essential, because a healthy society cannot be achieved and maintained if the population is in poor health. The increasing interest in sustainable development spurs companies and researchers to treat operations management and logistics decisions as a whole, by integrating economic, environmental and social goals (Bouchery et al., 2012). Because of the breadth of the field under consideration, this Ph.D. thesis focuses on a restricted selection of topics, namely Inventory Management and, in particular, the Lot Sizing problem. The lot sizing problem is undoubtedly one of the most traditional operations management interests, so much so that the first research on lot sizing was carried out more than a century ago (Harris, 1913). The main objectives of this thesis are listed below:

    1) The study and detailed analysis of the existing literature concerning Inventory Management and Lot Sizing, supporting the management of production and logistics activities. In particular, this thesis aims to highlight the different factors and decision-making approaches behind the existing models in the literature. Moreover, it develops a conceptual framework identifying the associated sub-problems, the decision variables and the sources of sustainable achievement in logistics decisions. The last part of the literature analysis outlines the requirements for future research.

    2) The development of new computational models supporting Inventory Management and Sustainable Lot Sizing. As a result, an integrated methodological procedure has been developed through a complete mathematical modeling of the Sustainable Lot Sizing problem. This method has been validated with data derived from real cases.

    3) Understanding and applying multi-objective optimization techniques, in order to analyze the economic, environmental and social impacts of choices concerning the supply, transport and management of materials incoming to a production system.

    4) The analysis of the feasibility and convenience of governmental incentive systems aimed at promoting the reduction of emissions due to the procurement and storage of purchased materials. A new method based on multi-objective theory is presented, applying the models developed and conducting a sensitivity analysis. This method is able to quantify the effectiveness of carbon reduction incentives as the input parameters of the problem vary.

    5) Extending the method developed in the first part of the research for the “single-buyer” case to a “multi-buyer” setting, by introducing the possibility of Horizontal Cooperation. One kind of cooperation among companies in the different stages of purchasing and transporting raw materials and components on a global scale is the Haulage Sharing approach, which is considered here in depth.

    This research was supported by a fruitful collaboration with Prof. Robert W. Grubbström (Linköping University, Sweden), and its aim has been, from the beginning, to make a breakthrough both in the theoretical basis of sustainable Lot Sizing and in its subsequent practical application in today's industrial contexts.
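    For reference, the lot sizing tradition the thesis builds on goes back to the economic order quantity of Harris (1913); a common way of sketching a "sustainable" variant is to price emissions into the ordering and holding terms. The emission-augmented form below is an illustrative assumption, not the specific model developed in this thesis.

```latex
% Classic economic order quantity (Harris, 1913): demand rate D, fixed
% ordering cost K, holding cost h per unit per period.
Q^{*} = \sqrt{\frac{2 D K}{h}}

% Illustrative emission-aware variant (assumption): emissions K_e per order and
% h_e per unit held, monetized at a carbon price c, added to the economic terms.
Q^{*}_{\mathrm{s}} = \sqrt{\frac{2 D \left(K + c\,K_{e}\right)}{h + c\,h_{e}}}
```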

    Law & The Good Life

    Meeting proceedings of a seminar by the same name, held November 10, 2022

    Law & The Good Life

    Meeting proceedings of a seminar by the same name, held November 12, 2021