87 research outputs found

    Entropy based Software Reliability Growth Modelling for Open Source Software Evolution

    During Open Source Software (OSS) development, users submit "new features (NFs)", "feature improvements (IMPs)", and bug reports. A proportion of these issues is fixed before the next software release. When NFs and IMPs are introduced, the source code files change, and a proportion of these changes may in turn generate bugs. We have developed calendar-time and entropy-dependent mathematical models that represent the growth of OSS in terms of the rates at which NFs are added, IMPs are added, and bugs are introduced. The empirical validation was conducted on five products of the Apache open source project, namely Avro, Pig, Hive, jUDDI, and Whirr. We compared the proposed models with the eminent reliability growth models of Goel and Okumoto (1979) and Yamada et al. (1983) and found that the proposed models exhibit better goodness of fit.
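
    For context, the two baseline models can be fitted to cumulative bug counts in a few lines. The sketch below uses the standard Goel-Okumoto mean value function m(t) = a(1 - e^(-bt)) and the Yamada et al. delayed S-shaped function m(t) = a(1 - (1 + bt)e^(-bt)); the monthly bug counts and starting parameters are invented for illustration and are not the paper's data.

        # Minimal sketch: fitting two classical NHPP reliability growth models
        # to cumulative defect counts. All data and starting values are
        # illustrative, not taken from the paper.
        import numpy as np
        from scipy.optimize import curve_fit

        def goel_okumoto(t, a, b):
            # Goel-Okumoto (1979): expected cumulative faults by time t
            return a * (1.0 - np.exp(-b * t))

        def yamada_s_shaped(t, a, b):
            # Yamada et al. (1983) delayed S-shaped mean value function
            return a * (1.0 - (1.0 + b * t) * np.exp(-b * t))

        t = np.arange(1, 13)  # release months (hypothetical)
        bugs = np.array([5, 11, 20, 31, 40, 48, 54, 59, 62, 64, 66, 67])

        for name, model in [("Goel-Okumoto", goel_okumoto),
                            ("Yamada S-shaped", yamada_s_shaped)]:
            params, _ = curve_fit(model, t, bugs, p0=[70.0, 0.1], maxfev=10000)
            sse = np.sum((bugs - model(t, *params)) ** 2)
            print(f"{name}: a={params[0]:.1f}, b={params[1]:.3f}, SSE={sse:.1f}")

    Comparing such fits by SSE, R^2, or similar criteria is the usual way goodness of fit is judged across growth models.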

    The Impact of Artificial Intelligence on Strategic and Operational Decision Making

    Effective decision making lies at the core of organizational success. In the era of digital transformation, businesses are increasingly adopting data-driven approaches to gain a competitive advantage. According to existing literature, Artificial Intelligence (AI) represents a significant advancement in this area, with the ability to analyze large volumes of data, identify patterns, make accurate predictions, and provide decision support to organizations. This study aims to explore the impact of AI technologies on different levels of organizational decision making. By separating these decisions into strategic and operational according to their properties, the study provides a more comprehensive understanding of the feasibility, current adoption rates, and barriers hindering AI implementation in organizational decision making.

    Task scheduling mechanisms for fog computing: A systematic survey

    In the Internet of Things (IoT) ecosystem, combining Fog Computing (FC) with cloud computing allows some processing to be performed near the data production sites, at higher speed and without the need for high bandwidth. Fog computing offers advantages for real-time systems that require high-speed connectivity. Because fog nodes have limited resources, one of the most important challenges of FC is meeting dynamic needs in real time, and the optimal assignment of tasks to fog nodes is therefore a central issue in the fog environment. An efficient scheduling algorithm should reduce quality parameters such as cost and energy consumption while taking into account the heterogeneity of fog nodes and the commitment to complete tasks within their deadlines. This study provides a detailed taxonomy to give a better understanding of the research issues and distinguishes important challenges in existing work. A systematic overview of existing task scheduling techniques for the cloud-fog environment, together with their benefits and drawbacks, is therefore presented in this article. Four main categories are introduced to study these techniques: machine learning-based, heuristic-based, metaheuristic-based, and deterministic mechanisms, and a number of papers are studied in each category. This survey also compares different task scheduling techniques in terms of execution time, resource utilization, delay, network bandwidth, energy consumption, execution deadline, response time, cost, uncertainty, and complexity. The outcomes revealed that 38% of the scheduling algorithms use metaheuristic-based mechanisms, 30% heuristic-based, 23% machine learning, and the remaining 9% deterministic methods. Energy consumption is the most significant parameter, addressed in the largest share of articles (19%). Finally, a number of important directions for improving task scheduling methods in FC in the future are presented.
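
    To make the heuristic-based category concrete, the following is a minimal sketch of a deadline-aware greedy scheduler: tasks are sorted by deadline (earliest deadline first), and each is assigned to the fog node that would finish it soonest. The node speeds, task lengths, and deadlines are hypothetical and not drawn from the surveyed papers.

        # Minimal sketch of a heuristic scheduler for heterogeneous fog nodes:
        # sort tasks by deadline (EDF), then greedily assign each task to the
        # node that would finish it earliest. All numbers are hypothetical.
        from dataclasses import dataclass

        @dataclass
        class FogNode:
            name: str
            speed: float             # relative processing speed
            busy_until: float = 0.0  # time at which the node becomes free

        @dataclass
        class Task:
            name: str
            length: float            # work units
            deadline: float          # absolute deadline

        def schedule(tasks, nodes):
            plan = []
            for task in sorted(tasks, key=lambda t: t.deadline):
                node = min(nodes, key=lambda n: n.busy_until + task.length / n.speed)
                finish = node.busy_until + task.length / node.speed
                node.busy_until = finish
                plan.append((task.name, node.name, finish, finish <= task.deadline))
            return plan

        nodes = [FogNode("edge-1", speed=2.0), FogNode("edge-2", speed=1.0)]
        tasks = [Task("t1", 4.0, 3.0), Task("t2", 2.0, 2.5), Task("t3", 6.0, 8.0)]
        for name, node, finish, met in schedule(tasks, nodes):
            print(f"{name} -> {node}, finish={finish:.1f}, met deadline: {met}")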

    Intensional Cyberforensics

    This work focuses on the application of intensional logic to cyberforensic analysis and compares its benefits and difficulties with the finite-state-automata approach. It extends the use of the intensional programming paradigm to the modeling and implementation of a cyberforensics investigation process with backtracing of event reconstruction, in which evidence is modeled by multidimensional hierarchical contexts and proofs or disproofs of claims are undertaken in an eductive manner of evaluation. This approach is a practical, context-aware improvement over the finite-state-automata (FSA) approach seen in previous work. As a base implementation language model, we use a new dialect of the Lucid programming language, called Forensic Lucid, and we focus on defining hierarchical contexts based on intensional logic for the distributed evaluation of cyberforensic expressions. We also augment the work with credibility factors surrounding digital evidence and witness accounts, which have not been previously modeled. The Forensic Lucid programming language, used for this intensional cyberforensic analysis, is formally presented through its syntax and operational semantics. In large part, the language is based on its predecessor and codecessor Lucid dialects, such as GIPL, Indexical Lucid, Lucx, Objective Lucid, and JOOIP, bound by the underlying intensional programming paradigm. (PhD thesis; 412 pages, 94 figures, 18 tables, 19 algorithms and listings; v2 corrects some typos and refs; also available on Spectrum at http://spectrum.library.concordia.ca/977460.)
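
    Forensic Lucid itself is a specialized dialect, but the underlying intensional idea, a value queried at a multidimensional context with credibility factors attached to evidence, can be gestured at in ordinary Python. The dimensions, evidence, and the multiplicative combination of credibilities below are illustrative assumptions, not the language's actual semantics.

        # Toy illustration of intensional-style evaluation: evidence streams
        # are looked up at a multidimensional context (dimensions: case, time),
        # and each observation carries a credibility factor. The multiplicative
        # combination below is an illustrative assumption, not Forensic Lucid's
        # actual semantics.
        evidence = {
            ("case-7", 1): [("log entry", 0.90)],
            ("case-7", 2): [("witness account", 0.60), ("file hash", 0.95)],
        }

        def observations_at(context):
            # Intensional lookup: the stream's value depends on the context.
            return evidence.get(context, [])

        def claim_credibility(contexts):
            # Credibility of a claim supported by all observations in scope.
            cred = 1.0
            for ctx in contexts:
                for _, factor in observations_at(ctx):
                    cred *= factor
            return cred

        print(claim_credibility([("case-7", 1), ("case-7", 2)]))  # 0.513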

    Location-Enabled IoT (LE-IoT): A Survey of Positioning Techniques, Error Sources, and Mitigation

    The Internet of Things (IoT) has started to empower the future of many industrial and mass-market applications. Localization techniques are becoming key to adding location context to IoT data without human perception and intervention. Meanwhile, the newly emerged Low-Power Wide-Area Network (LPWAN) technologies have advantages such as long range, low power consumption, low cost, massive connections, and the capability for communication in both indoor and outdoor areas. These features make LPWAN signals strong candidates for mass-market localization applications. However, various error sources limit the localization performance achievable with such IoT signals. This paper reviews IoT localization systems in the following sequence: IoT localization system review -- localization data sources -- localization algorithms -- localization error sources and mitigation -- localization performance evaluation. Compared to the related surveys, this paper provides a more comprehensive and state-of-the-art review of IoT localization methods, an original review of IoT localization error sources and mitigation, an original review of IoT localization performance evaluation, and a more comprehensive review of IoT localization applications, opportunities, and challenges. Thus, this survey provides comprehensive guidance for peers who are interested in enabling localization in existing IoT systems, using IoT systems for localization, or integrating IoT signals with existing localization sensors.
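
    As a concrete instance of the localization-algorithm stage, the sketch below converts received signal strength (RSS) to range with a log-distance path-loss model and then estimates position by least-squares multilateration, a pipeline commonly applied to LPWAN signals. The anchor coordinates, path-loss exponent, and RSS readings are hypothetical.

        # Minimal sketch of one localization pipeline such surveys cover:
        # RSS ranging with a log-distance path-loss model, then least-squares
        # multilateration. All positions and readings are hypothetical.
        import numpy as np
        from scipy.optimize import least_squares

        def rss_to_distance(rss, rss_d0=-40.0, n=2.7, d0=1.0):
            # Log-distance model: rss = rss_d0 - 10 * n * log10(d / d0)
            return d0 * 10.0 ** ((rss_d0 - rss) / (10.0 * n))

        anchors = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 50.0]])  # gateways (m)
        rss = np.array([-72.0, -68.0, -75.0])                       # readings (dBm)
        ranges = rss_to_distance(rss)

        def residuals(p):
            # Mismatch between geometric distances and RSS-derived ranges
            return np.linalg.norm(anchors - p, axis=1) - ranges

        estimate = least_squares(residuals, x0=np.array([25.0, 25.0])).x
        print(f"estimated position: ({estimate[0]:.1f}, {estimate[1]:.1f}) m")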

    An Integrated Method for Optimizing Bridge Maintenance Plans

    Bridges are among the vital civil infrastructure assets, essential for economic development and public welfare. Their large numbers, deteriorating condition, public demands for safe and efficient transportation networks, and limited maintenance and intervention budgets pose a challenge, particularly when coupled with the need to respect environmental constraints. This state of affairs creates a wide gap between critical needs for intervention actions and tight maintenance and rehabilitation funds. In an effort to meet this challenge, a newly developed integrated method for optimized maintenance and intervention plans for reinforced concrete bridge decks is introduced. The method encompasses five models: surface defects evaluation, corrosion severities evaluation, deterioration modeling, integrated condition assessment, and optimized maintenance plans. These models were automated in a set of standalone computer applications, coded using C#.NET in the Matlab environment, which were subsequently combined to form an integrated method for optimized maintenance and intervention plans. Four bridges and a dataset of bridge images were used in testing and validating the developed optimization method and its five models. The developed models have unique features and demonstrated noticeable gains in performance and accuracy over methods used in practice and those reported in the literature. For example, the surface defects detection and evaluation model outperforms widely recognized machine learning and deep learning models, reducing detection, recognition, and evaluation errors by 56.08%, 20.2%, and 64.23%, respectively. The corrosion evaluation model comprises the design of a standardized amplitude rating system that circumvents the limitations of numerical amplitude-based corrosion maps. For integrated condition assessment, the developed model accomplished consistent improvement over the visual inspection procedures in use by the Ministry of Transportation in Quebec. Similarly, the deterioration model improved prediction accuracy by 60% on average when compared against the most commonly utilized Weibull distribution. The developed multi-objective optimization model yielded 49% and 25% improvements over a genetic algorithm for five-year and twenty-five-year study periods, respectively. For a thirty-five-year study period, unlike the developed model, classical meta-heuristics failed to find feasible solutions within the assigned constraints. The developed integrated platform is expected to provide an efficient tool that enables decision makers to formulate sustainable maintenance plans that optimize budget allocations and ensure efficient utilization of resources.
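
    For reference, the Weibull baseline that the deterioration model is benchmarked against can be sketched as a survival function over deck age; the shape and scale parameters below are illustrative, not calibrated values from the study.

        # Minimal sketch of the Weibull deterioration baseline: probability
        # that a bridge deck remains in its current condition state beyond
        # age t. Shape and scale values are illustrative only.
        import math

        def weibull_survival(t, shape=2.2, scale=18.0):
            # S(t) = exp(-(t / scale) ** shape)
            return math.exp(-((t / scale) ** shape))

        for age in (5, 10, 15, 20, 25):
            print(f"age {age:2d} yr: P(no transition yet) = {weibull_survival(age):.2f}")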

    THE DEVELOPMENT OF A HOLISTIC EXPERT SYSTEM FOR INTEGRATED COASTAL ZONE MANAGEMENT

    Coastal data and information comprise a massive and complex resource, which is vital to the practice of Integrated Coastal Zone Management (ICZM), an increasingly important application. ICZM is just as complex, and uses the holistic paradigm to manage that complexity. The application domain and its resource require a tool of matching characteristics, which is facilitated by the current wide availability of high-performance computing. An object-oriented expert system, COAMES, has been constructed to prove this concept. The application of expert systems to ICZM in particular has been flagged as a viable challenge, and yet very few have taken it up. COAMES uses the Dempster-Shafer theory of evidence to reason with uncertainty and, importantly, introduces the power of ignorance and integration to model the holistic approach. In addition, object orientation enables a modular approach, embodied in the separation of the inference engine and the knowledge base. Two case studies were developed to test COAMES. In both case studies, knowledge was successfully used to drive data and actions using metadata, thus achieving a holism of data, information, and knowledge. A technological holism was also demonstrated through the effective classification of landforms on the rapidly eroding Holderness coast, and a holism across disciplines and CZM institutions was effected by intelligent metadata management of a Fal Estuary dataset. Finally, the differing spatial and temporal scales at which the two case studies operate implicitly demonstrate a holism of scale, though explicit means of managing scale were suggested. In all cases the same knowledge structure was used to effectively manage and disseminate coastal data, information, and knowledge.
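
    The Dempster-Shafer machinery at the heart of COAMES can be illustrated with Dempster's rule of combination, in which mass assigned to the full frame of discernment models ignorance. The landform hypotheses and mass values below are invented for illustration and are not taken from the thesis.

        # Minimal sketch of Dempster's rule of combination, the evidence-fusion
        # rule used to reason under uncertainty. Hypotheses and masses are
        # invented; mass on the full frame represents ignorance.
        from itertools import product

        def combine(m1, m2):
            # Keys are frozensets of hypotheses (focal elements).
            combined, conflict = {}, 0.0
            for (a, wa), (b, wb) in product(m1.items(), m2.items()):
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + wa * wb
                else:
                    conflict += wa * wb
            # Normalize by the non-conflicting mass (Dempster's rule).
            return {k: v / (1.0 - conflict) for k, v in combined.items()}

        frame = frozenset({"cliff", "beach", "dune"})
        m1 = {frozenset({"cliff"}): 0.6, frame: 0.4}            # 0.4 = ignorance
        m2 = {frozenset({"cliff", "dune"}): 0.7, frame: 0.3}
        for focal, mass in combine(m1, m2).items():
            print(sorted(focal), round(mass, 3))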

    Multispace & Multistructure. Neutrosophic Transdisciplinarity (100 Collected Papers of Sciences), Vol. IV

    The fourth volume in my book series of "Collected Papers" includes 100 published and unpublished articles, notes, (preliminary) drafts containing ideas to be further investigated, scientific souvenirs, scientific blogs, project proposals, small experiments, solved and unsolved problems and conjectures, updated or alternative versions of previous papers, and short or long humanistic essays and letters to the editors, all collected over the previous three decades (1980-2010), though most are from the last decade (2000-2010); some were lost and found, and others are extended, diversified, or improved versions. This is an eclectic tome of 800 pages with papers in various fields of science, listed alphabetically: astronomy, biology, calculus, chemistry, computer programming codification, economics and business and politics, education and administration, game theory, geometry, graph theory, information fusion, neutrosophic logic and set, non-Euclidean geometry, number theory, paradoxes, philosophy of science, psychology, quantum physics, scientific research methods, and statistics. It reflects my long-time preoccupation and collaboration, as author, co-author, translator, or co-translator, and editor, with many scientists from around the world. Many topics in this book are incipient and need to be expanded in future explorations.

    Management: A bibliography for NASA managers

    This bibliography lists 630 reports, articles, and other documents introduced into the NASA Scientific and Technical Information System in 1991. Items are selected and grouped according to their usefulness to the manager as manager. Citations are grouped into ten subject categories: human factors and personnel issues; management theory and techniques; industrial management and manufacturing; robotics and expert systems; computers and information management; research and development; economics, costs, and markets; logistics and operations management; reliability and quality control; and legality, legislation, and policy.
