
    Utilization of fuzzy critical path method and fuzzy program evaluation and review technique for building a hydroelectric power station

    In this paper, the fuzzy critical path method and the fuzzy program evaluation and review technique are used to calculate the earliest project completion time for the construction of a hydroelectric power plant. Fuzzy trapezoidal numbers are used to estimate activity times and to capture the range of variation from pessimistic to optimistic estimates. The minimum and maximum project completion durations are then calculated using arithmetic operations on, and ranking of, fuzzy trapezoidal numbers. These hybrid methods address the limitations of the classical critical path method and program evaluation and review technique. The fuzzy techniques were applied to the network activities in a manner similar to the classical methods in order to optimize the project completion duration and thereby minimize project cost. An analysis was carried out to determine the critical path using the fuzzy critical path method, and the fuzzy program evaluation and review technique was used to determine the probability of completing the project at the scheduled time. The two methods were then compared and the most probable scenarios analyzed. It was concluded that the fuzzy program evaluation and review technique outperforms the fuzzy critical path method and is more efficient in terms of early project completion time.
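    To make the fuzzy arithmetic concrete, the following minimal sketch shows how trapezoidal fuzzy activity durations can be added along a path and ranked with a graded-mean index to pick a critical path. The activity network, the duration values, and the choice of ranking formula are illustrative assumptions, not the project data or the exact procedure used in the paper.

    # Minimal sketch: fuzzy critical path with trapezoidal fuzzy durations.
    # All activity data below are hypothetical; the graded-mean ranking is one
    # common defuzzification choice, assumed here for illustration.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class TrapFuzzy:
        """Trapezoidal fuzzy number with parameters a <= b <= c <= d."""
        a: float
        b: float
        c: float
        d: float

        def __add__(self, other: "TrapFuzzy") -> "TrapFuzzy":
            # Addition of trapezoidal fuzzy numbers is component-wise.
            return TrapFuzzy(self.a + other.a, self.b + other.b,
                             self.c + other.c, self.d + other.d)

        def rank(self) -> float:
            # Graded mean integration value used to order fuzzy durations.
            return (self.a + 2 * self.b + 2 * self.c + self.d) / 6

    # Hypothetical activities (pessimistic-to-optimistic spread encoded in the
    # four parameters) and the candidate paths through the network.
    durations = {
        "A": TrapFuzzy(2, 3, 4, 6),
        "B": TrapFuzzy(4, 5, 6, 8),
        "C": TrapFuzzy(1, 2, 2, 3),
        "D": TrapFuzzy(3, 4, 5, 7),
    }
    paths = [["A", "B"], ["A", "C", "D"]]

    def path_duration(path):
        total = TrapFuzzy(0, 0, 0, 0)
        for activity in path:
            total = total + durations[activity]
        return total

    # The critical path is the one with the largest ranked (defuzzified) duration.
    critical = max(paths, key=lambda p: path_duration(p).rank())
    print("critical path:", critical, "fuzzy duration:", path_duration(critical))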

    The Project Scheduling Problem with Non-Deterministic Activities Duration: A Literature Review

    Purpose: The goal of this article is to provide an extensive literature review of the models and solution procedures proposed by researchers interested in the Project Scheduling Problem with non-deterministic activity durations. Design/methodology/approach: This paper presents an exhaustive literature review identifying the existing models in which activity durations are treated as uncertain or random parameters. The Scopus database was used to retrieve articles published since 1996. Articles were selected on the basis of reviews of abstracts, methodologies, and conclusions, and the results were classified according to the following characteristics: year of publication, mathematical representation of the activity durations, solution techniques applied, and type of problem solved. Findings: Genetic Algorithms (GA) were identified as the main solution technique employed by researchers, and the Resource-Constrained Project Scheduling Problem (RCPSP) as the most studied type of problem. The application of new solution techniques and the possibility of incorporating traditional methods into new PSP variants were identified as research trends. Originality/value: This literature review contains not only a descriptive analysis of the published articles but also a statistical information section that examines the state of research activity on the Project Scheduling Problem with non-deterministic activity durations. Peer Reviewed

    Project scheduling under uncertainty – survey and research potentials.

    The vast majority of research efforts in project scheduling assume complete information about the scheduling problem to be solved and a static, deterministic environment within which the pre-computed baseline schedule will be executed. In the real world, however, project activities are subject to considerable uncertainty, which is gradually resolved during project execution. In this survey we review the fundamental approaches for scheduling under uncertainty: reactive scheduling, stochastic project scheduling, stochastic GERT network scheduling, fuzzy project scheduling, robust (proactive) scheduling, and sensitivity analysis. We discuss the potential of these approaches for scheduling projects under uncertainty. Keywords: Management; Project management; Robustness; Scheduling; Stability.

    A Survey on Software Testing Techniques using Genetic Algorithm

    The overall aim of the software industry is to ensure the delivery of high-quality software to the end user. To ensure high quality, software must be tested; testing verifies that the software meets user specifications and requirements. However, the field of software testing has a number of underlying issues, such as the effective generation of test cases and the prioritisation of test cases, which need to be tackled. These issues increase the effort, time, and cost of testing. Different techniques and methodologies have been proposed to address them. The use of evolutionary algorithms for automatic test generation has been an area of interest for many researchers, and the Genetic Algorithm (GA) is one such evolutionary algorithm. In this paper, we present a survey of GA-based approaches for addressing the various issues encountered during software testing. Comment: 13 Pages
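    As a concrete illustration of how a genetic algorithm can generate test cases, the sketch below evolves integer inputs toward covering a hard-to-reach branch of a small function under test. The function, the branch-distance style fitness, and all parameter choices are illustrative assumptions rather than a method taken from any of the surveyed papers.

    # Minimal sketch: GA-based test input generation for branch coverage.
    # The function under test and the fitness definition are hypothetical.
    import random

    def function_under_test(x: int, y: int) -> bool:
        # Target branch: taken only for a narrow combination of inputs.
        return x * 2 == y + 100

    def fitness(individual):
        x, y = individual
        # Branch-distance style fitness: the closer the predicate is to being
        # true, the fitter the test case (0 means the branch is covered).
        return -abs(x * 2 - (y + 100))

    def evolve(pop_size=50, generations=200):
        population = [(random.randint(-500, 500), random.randint(-500, 500))
                      for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)
            if function_under_test(*population[0]):
                return population[0]          # covering test case found
            parents = population[: pop_size // 2]
            children = []
            while len(children) < pop_size - len(parents):
                (x1, _), (_, y2) = random.sample(parents, 2)
                child = (x1, y2)              # simple one-point crossover
                if random.random() < 0.3:     # mutation
                    child = (child[0] + random.randint(-10, 10),
                             child[1] + random.randint(-10, 10))
                children.append(child)
            population = parents + children
        return None

    print("covering input:", evolve())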

    Training of Crisis Mappers and Map Production from Multi-sensor Data: Vernazza Case Study (Cinque Terre National Park, Italy)

    The aim of this paper is to present the development of a multidisciplinary project carried out in cooperation between Politecnico di Torino and ITHACA (Information Technology for Humanitarian Assistance, Cooperation and Action). The goal of the project was to train students attending Architecture and Engineering courses in geospatial data acquisition and processing, in order to start up a team of "volunteer mappers". The project aims to document environmental and built heritage subject to disaster, and its purpose is to improve the capabilities of the actors involved in geospatial data collection, integration and sharing. The proposed area for testing the training activities is the Cinque Terre National Park, registered in the World Heritage List since 1997, which was affected by a flood on 25 October 2011. In line with other international experiences, the group is expected to be active after emergencies in order to update maps, using data acquired by typical geomatic methods and techniques such as terrestrial and aerial Lidar, close-range and aerial photogrammetry, and topographic and GNSS instruments, or by non-conventional systems and instruments such as UAVs and mobile mapping. The ultimate goal is to implement a WebGIS platform to share all the collected data with local authorities and the Civil Protection.

    Spatial Pattern Learning, Catastrophic Forgetting and Optimal Rules of Synaptic Transmission

    It is a neural network truth universally acknowledged that the signal transmitted to a target node must be equal to the product of the path signal times a weight. Analysis of catastrophic forgetting by distributed codes leads to the unexpected conclusion that this universal synaptic transmission rule may not be optimal in certain neural networks. The distributed outstar, a network designed to support stable codes with fast or slow learning, generalizes the outstar network for spatial pattern learning. In the outstar, signals from a source node cause weights to learn and recall arbitrary patterns across a target field of nodes. The distributed outstar replaces the outstar source node with a source field of arbitrarily many nodes, where the activity pattern may be arbitrarily distributed or compressed. Learning proceeds according to a principle of atrophy due to disuse, whereby a path weight decreases in joint proportion to the transmitted path signal and the degree of disuse of the target node. During learning, the total signal to a target node converges toward that node's activity level. Weight changes at a node are apportioned according to the distributed pattern of converging signals. Three types of synaptic transmission, a product rule, a capacity rule, and a threshold rule, are examined for this system. The three rules are computationally equivalent when source field activity is maximally compressed, or winner-take-all. When source field activity is distributed, catastrophic forgetting may occur, and only the threshold rule solves this problem. Analysis of spatial pattern learning by distributed codes thereby leads to the conjecture that the optimal unit of long-term memory in such a system is a subtractive threshold, rather than a multiplicative weight. Advanced Research Projects Agency (ONR N00014-92-J-4015); Office of Naval Research (N00014-91-J-4100, N00014-92-J-1309)
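    The convergence property described above, with the total signal to a target node tracking that node's activity and weight decay apportioned by each path's transmitted signal, can be illustrated numerically. The toy update below uses product-rule transmission and a single target node; the learning rate, the sign convention for the error term, and the simplified dynamics are assumptions for illustration and are not the distributed outstar's actual equations.

    # Toy numerical sketch of atrophy-due-to-disuse style learning for one
    # target node. Not the distributed outstar model itself; the rate and the
    # error term are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    n_source = 5
    source = rng.random(n_source)
    source /= source.sum()                 # distributed source field activity
    weights = rng.random(n_source) * 2.0   # adaptive path weights to the target
    target_activity = 0.6                  # activity level of the target node
    rate = 0.5

    for _ in range(200):
        transmitted = source * weights     # product-rule transmission
        excess = transmitted.sum() - target_activity
        # Each weight changes in proportion to its own transmitted signal and
        # to how far the total signal exceeds (or falls short of) the target.
        weights -= rate * transmitted * excess

    print("total signal:", float((source * weights).sum()),
          "target activity:", target_activity)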

    Designing an expert knowledge-based Systemic Importance Index for financial institutions

    Defining whether a financial institution is systemically important (or not) is challenging due to (i) the inevitability of combining complex importance criteria such as an institution's size, connectedness and substitutability; (ii) the ambiguity of what an appropriate threshold for those criteria may be; and (iii) the involvement of expert knowledge as a key input for combining those criteria. The proposed method, a Fuzzy Logic Inference System, uses four key systemic importance indicators that capture institutions' size, connectedness and substitutability, together with a convenient deconstruction of expert knowledge, to obtain a Systemic Importance Index. This method allows dissimilar concepts to be combined in a non-linear, consistent and intuitive manner, whilst treating them as continuous (non-binary) functions. Results reveal that the method imitates the way experts themselves think about the decision process regarding what a systemically important financial institution is within the financial system under analysis. The Index is a comprehensive relative assessment of each financial institution's systemic importance. It may serve financial authorities as a quantitative tool for focusing their attention and resources where the severity resulting from an institution failing or nearly failing is estimated to be greatest. It may also serve for enhanced policy-making (e.g. prudential regulation, oversight and supervision) and decision-making (e.g. resolving, restructuring or providing emergency liquidity). Keywords: Systemic Importance, Systemic Risk, Fuzzy Logic, Approximate Reasoning, Too-connected-to-fail, Too-big-to-fail. JEL Classification: D85, C63, E58, G28.
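    To show the flavour of a fuzzy-inference combination of importance indicators, the sketch below fuzzifies two normalized indicators, applies two hand-written rules, and defuzzifies to a single index. The membership functions, the rule base, and the indicator values are illustrative assumptions; the paper's actual system uses four indicators and a richer, expert-derived rule base.

    # Minimal sketch of a fuzzy inference step producing an importance index.
    # Membership functions, rules, and inputs are hypothetical.
    def tri(x, a, b, c):
        """Triangular membership function with support [a, c] and peak at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def importance_index(size, connectedness):
        # Fuzzify the normalized (0-1) indicators.
        size_high = tri(size, 0.4, 1.0, 1.6)           # "large institution"
        conn_high = tri(connectedness, 0.4, 1.0, 1.6)  # "highly connected"
        size_low, conn_low = 1.0 - size_high, 1.0 - conn_high

        # Rule 1: IF size is high OR connectedness is high THEN importance is high.
        r_high = max(size_high, conn_high)
        # Rule 2: IF size is low AND connectedness is low THEN importance is low.
        r_low = min(size_low, conn_low)

        # Sugeno-style weighted average over crisp consequents (high=1, low=0).
        return (r_high * 1.0 + r_low * 0.0) / (r_high + r_low + 1e-9)

    print(importance_index(size=0.9, connectedness=0.3))  # sizeable, lightly connected
    print(importance_index(size=0.2, connectedness=0.1))  # small and peripheral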

    Big Data in Critical Infrastructures Security Monitoring: Challenges and Opportunities

    Critical Infrastructures (CIs), such as smart power grids, transport systems, and financial infrastructures, are increasingly vulnerable to cyber threats due to the adoption of commodity computing facilities. Despite the use of several monitoring tools, recent attacks have shown that current defensive mechanisms for CIs are not effective enough against the most advanced threats. In this paper we explore the idea of a framework that leverages multiple data sources to improve the protection capabilities of CIs. Challenges and opportunities are discussed along three main research directions: i) the use of distinct and heterogeneous data sources, ii) monitoring with adaptive granularity, and iii) attack modeling and runtime combination of multiple data analysis techniques. Comment: EDCC-2014, BIG4CIP-201