
    A Hierarchical Framework for Estimating Heterogeneous Architecture-based Software Reliability

    Problem. The composite-model approach, which follows a DTMC process with a constant failure rate, is not analytically tractable, so its method of solution for estimating software reliability is difficult to improve. In such cases a hierarchical approach is preferred to improve the accuracy of the solution method. Very few studies have examined heterogeneous architecture-based software reliability, and those that exist use the composite model for reliability estimation. To my knowledge, no research has taken a hierarchical approach to estimating heterogeneous architecture-based software reliability. This paper explores the use and effectiveness of a hierarchical framework for estimating heterogeneous architecture-based software reliability. -- Method. Concepts of reliability and reliability prediction models for heterogeneous software architectures were surveyed. The architectural styles considered were batch-sequential, parallel filter, fault tolerance, and call and return. A method for evaluating these four styles solely on the basis of transition probability was proposed. Four case studies were selected from similar published studies to test the effectiveness of the proposed hierarchical framework. The study assumes that the information about each software architecture was extracted accurately and that the actual reliability values reported for the systems used were free of errors. -- Results. The percentage differences between the reliability estimated by the proposed hierarchical framework and the actual reliability were 5.12%, 11.09%, 0.82%, and 52.14% for Cases 1, 2, 3, and 4, respectively. The proposed hierarchical framework did not work for Case 4, which showed much higher component utilization, and therefore more interaction between components, than the other cases. -- Conclusions. The reliability estimated by the proposed hierarchical framework was generally close to the actual reliability of the software systems used in the case studies. However, the framework's results disagreed with the actual reliability for Case 4. This is due to the higher component interactions in Case 4 compared with the other cases, and it shows that there are limits to how far the proposed hierarchical framework can be applied. The reasons for these limitations of the hierarchical approach have not been discussed in prior research on the subject. Even with these limitations, the hierarchical framework for estimating heterogeneous architecture-based software reliability can still be applied when high accuracy is not required and interactions among components are not too high. Thesis (M.S.) -- Andrews University, College of Arts and Sciences, 201
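    As a rough illustration of the hierarchical idea described above, the sketch below first solves an architecture-level DTMC for the expected number of visits to each component and then combines those visit counts with per-component reliabilities. The transition matrix, component reliabilities, and starting component are invented for illustration and are not taken from the thesis or its case studies.

```python
import numpy as np

# Hypothetical 4-component architecture. Q[i][j] is the probability of
# transferring control from component i to component j; any probability
# mass missing from a row corresponds to terminating execution there.
Q = np.array([
    [0.0, 0.7, 0.2, 0.0],
    [0.0, 0.0, 0.6, 0.3],
    [0.1, 0.0, 0.0, 0.5],
    [0.0, 0.0, 0.0, 0.0],
])

# Assumed per-component reliabilities (e.g., estimated from lower-level
# models or from testing data).
R = np.array([0.999, 0.995, 0.990, 0.998])

# Architecture level: expected visit counts V = e0^T (I - Q)^(-1),
# assuming execution always starts in component 0.
e0 = np.eye(len(Q))[0]
V = np.linalg.solve((np.eye(len(Q)) - Q).T, e0)

# Component level: combine visit counts with component reliabilities.
system_reliability = float(np.prod(R ** V))
print(f"expected visits: {V.round(3)}")
print(f"estimated system reliability: {system_reliability:.4f}")
```

    The two-level solution is what makes the approach hierarchical: the architecture model and the component failure behaviour are solved separately, rather than as a single composite DTMC with failure states folded in.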

    Towards robust and reliable multimedia analysis through semantic integration of services

    Thanks to ubiquitous Web connectivity and portable multimedia devices, it has never been easier to produce and distribute new multimedia resources such as videos, photos, and audio. This ever-increasing production leads to an information overload for consumers, which calls for efficient multimedia retrieval techniques. Multimedia resources can be retrieved efficiently using their metadata, but the multimedia analysis methods that can automatically generate this metadata are currently not reliable enough for highly diverse multimedia content. A reliable and automatic method for analyzing general multimedia content is therefore needed. We introduce a domain-agnostic framework that annotates multimedia resources using currently available multimedia analysis methods. Through a three-step reasoning cycle, this framework can assess and improve the quality of multimedia analysis results by consecutively (1) combining analysis results effectively, (2) predicting which results might need improvement, and (3) invoking compatible analysis methods to retrieve new results. By using semantic descriptions of the Web services that wrap the multimedia analysis methods, compatible services can be selected automatically. With additional semantic reasoning on these descriptions, the services can be repurposed across different use cases. We evaluated this domain-agnostic framework in the context of video face detection and showed that it is capable of providing the best analysis results regardless of the input video. The proposed methodology can serve as a basis for a generic multimedia annotation platform that returns reliable results for diverse multimedia analysis problems. This allows for better metadata generation and improves the efficiency of multimedia retrieval.
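    The three-step reasoning cycle can be pictured as a small control loop. The sketch below is a minimal, hypothetical rendering of that loop: the service interface, the combination strategy, and the confidence threshold are assumptions made for illustration, and the semantic selection of compatible services is reduced to a plain list.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Annotation:
    label: str
    confidence: float

# A "service" wraps one multimedia analysis method: it takes a resource
# identifier and returns annotations with confidence scores.
Service = Callable[[str], List[Annotation]]

def combine(results: List[List[Annotation]]) -> List[Annotation]:
    """Step 1: merge results from all invoked services, keeping the most
    confident annotation per label (one simple combination strategy)."""
    best: Dict[str, Annotation] = {}
    for annotations in results:
        for a in annotations:
            if a.label not in best or a.confidence > best[a.label].confidence:
                best[a.label] = a
    return list(best.values())

def needs_improvement(merged: List[Annotation], threshold: float = 0.8) -> bool:
    """Step 2: predict whether the combined result still needs improvement."""
    return not merged or min(a.confidence for a in merged) < threshold

def annotate(resource: str, compatible_services: List[Service]) -> List[Annotation]:
    """Step 3: keep invoking compatible services until the quality check
    is satisfied or no further services are available."""
    results: List[List[Annotation]] = []
    for service in compatible_services:
        results.append(service(resource))
        if not needs_improvement(combine(results)):
            break
    return combine(results)

# Hypothetical usage with two dummy face-detection services.
fast_detector = lambda r: [Annotation("face", 0.6)]
slow_detector = lambda r: [Annotation("face", 0.92)]
print(annotate("video.mp4", [fast_detector, slow_detector]))
```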

    Alternative sweetener from Curculigo fruits

    This study gives an overview of the advantages of Curculigo latifolia as an alternative sweetener and a health product. The purpose of this research is to provide another option for people who suffer from diabetes. Curculigo latifolia was chosen for its unique properties and because it is a widely known species in Malaysia. To obtain the sweet protein from the fruit, several steps are required. First, the fruits were harvested from Curculigo trees growing wild in the garden. Next, the fruits were dried in an oven at 50 °C for 3 days. Finally, the dried fruits were blended to obtain a fine powder. Curculin is a sweet protein with a taste-modifying activity that converts sourness to sweetness. The curculin content of the samples was directly proportional to the mass of Curculigo fine powder. The FTIR spectrum of the sample shows a peak at 1634 cm–1, indicating secondary amines, and a peak at 3307 cm–1, indicating alkynes.

    A comparison study for two fuzzy-based systems: improving reliability and security of JXTA-overlay P2P platform

    The reliability of peers is very important for safe communication in peer-to-peer (P2P) systems. The reliability of a peer can be evaluated based on its reputation and its interactions with other peers when providing different services. However, deciding peer reliability requires many parameters, which makes the problem NP-hard. In this paper, we present two fuzzy-based systems (called FBRS1 and FBRS2) to improve the reliability of the JXTA-overlay P2P platform. In FBRS1, we consider three input parameters: number of interactions (NI), security (S), and packet loss (PL) to decide the peer reliability (PR). In FBRS2, we consider four input parameters: NI, S, PL, and local score to decide the PR. We compare the proposed systems by computer simulations. Comparing the complexity of FBRS1 and FBRS2, FBRS2 is more complex than FBRS1. However, it also considers the local score, which makes it more reliable than FBRS1.
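    To make the fuzzy-based idea concrete, here is a minimal sketch of a simplified fuzzy inference over the three FBRS1 inputs (NI, S, PL) with weighted-average defuzzification. The membership functions, rule base, and input scaling are illustrative assumptions, not the rule base used in the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def peer_reliability_sketch(ni, s, pl):
    """Illustrative fuzzy inference for peer reliability (PR) from number of
    interactions (NI), security (S) and packet loss (PL), all normalised to
    [0, 1]. Rules and membership functions are made up for illustration."""
    low  = lambda x: tri(x, -0.5, 0.0, 0.6)
    high = lambda x: tri(x,  0.4, 1.0, 1.5)

    # Each rule: (firing strength, PR value it votes for).
    rules = [
        (min(high(ni), high(s), low(pl)), 0.9),  # active, secure, low loss -> high PR
        (min(low(ni),  high(s), low(pl)), 0.6),  # little history but safe -> medium PR
        (high(pl),                        0.2),  # lossy peer -> low PR
        (low(s),                          0.1),  # insecure peer -> very low PR
    ]
    num = sum(w * pr for w, pr in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

print(round(peer_reliability_sketch(ni=0.8, s=0.9, pl=0.1), 2))  # likely-reliable peer
```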

    AI and OR in management of operations: history and trends

    The last decade has seen considerable growth in the use of Artificial Intelligence (AI) for operations management, with the aim of finding solutions to problems of increasing complexity and scale. This paper begins by setting the context for the survey through a historical perspective on OR and AI. An extensive survey of applications of AI techniques to operations management, covering over 1,200 papers published from 1995 to 2004, is then presented. The survey uses Elsevier's ScienceDirect database as its source; hence it may not cover all relevant journals, but it includes a sufficiently wide range of publications to be representative of research in the field. The papers are categorized into four areas of operations management: (a) design, (b) scheduling, (c) process planning and control, and (d) quality, maintenance and fault diagnosis. Each of the four areas is further categorized in terms of the AI techniques used: genetic algorithms, case-based reasoning, knowledge-based systems, fuzzy logic, and hybrid techniques. The trends over the last decade are identified and discussed with respect to expected trends, and directions for future work are suggested.

    Machine learning and its applications in reliability analysis systems

    In this thesis, we are interested in exploring some aspects of Machine Learning (ML) and its application in Reliability Analysis systems (RAs). We begin by investigating some ML paradigms and their techniques, go on to discuss possible applications of ML for improving RA performance, and lastly give guidelines for the architecture of learning RAs. Our survey of ML covers both neural network learning and symbolic learning. In symbolic learning, five types of learning and their applications are discussed: rote learning, learning from instruction, learning from analogy, learning from examples, and learning from observation and discovery. The Reliability Analysis systems (RAs) presented in this thesis are mainly designed for maintaining plant safety and are supported by two functions: a risk analysis function, i.e., failure mode and effect analysis (FMEA), and a diagnosis function, i.e., real-time fault location (RTFL). Three approaches to creating the RAs have been discussed. Based on the results of our survey, we suggest that currently the best design for RAs is to embed model-based RAs, i.e., MORA (as software), in a neural-network-based computer system (as hardware). However, there are still improvements that can be made through applications of Machine Learning. By embedding a 'learning element', MORA becomes a learning MORA (La MORA) system: a learning Reliability Analysis system with the power of automatic knowledge acquisition, inconsistency checking, and more. To conclude the thesis, we propose an architecture for La MORA.
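    As a small illustration of the risk analysis function (FMEA) that the RAs support, the sketch below ranks hypothetical failure modes by the classical risk priority number (severity x occurrence x detection). The failure modes and ratings are invented; MORA's actual models are not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int     # 1-10, consequence of the failure
    occurrence: int   # 1-10, how often it is expected
    detection: int    # 1-10, 10 = hardest to detect

    @property
    def rpn(self) -> int:
        """Risk priority number used to rank failure modes in classical FMEA."""
        return self.severity * self.occurrence * self.detection

# Hypothetical plant failure modes, not taken from the thesis.
modes = [
    FailureMode("coolant pump seal leak", severity=7, occurrence=4, detection=6),
    FailureMode("pressure sensor drift",  severity=5, occurrence=6, detection=8),
]
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{m.name}: RPN={m.rpn}")
```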

    Quality measures for ETL processes: from goals to implementation

    Extraction-transformation-loading (ETL) processes play an increasingly important role in supporting modern business operations. These business processes are centred around artifacts with high variability and diverse lifecycles, which correspond to key business entities. The apparent complexity of these activities has been examined through the prism of business process management, mainly focusing on functional requirements and performance optimization. However, the quality dimension has not yet been thoroughly investigated, and a more human-centric approach is needed to bring these processes closer to business users' requirements. In this paper, we take a first step in this direction by defining a sound model of ETL process quality characteristics and quantitative measures for each characteristic, based on the existing literature. Our model shows dependencies among quality characteristics and can provide the basis for subsequent analysis using goal modeling techniques. We showcase the use of goal modeling for ETL process design through a use case, in which we employ a goal model that includes quantitative components (i.e., indicators) for evaluating and analysing alternative design decisions.
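    The sketch below shows, in a minimal and hypothetical form, how quantitative indicators attached to quality characteristics could be used to compare alternative ETL design decisions. The characteristics, measures, weights, and design alternatives are invented for illustration and do not reproduce the paper's model.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Indicator:
    """A quantitative measure attached to one quality characteristic,
    e.g. freshness measured via data latency (higher score is better)."""
    characteristic: str
    measure: Callable[[Dict[str, float]], float]
    weight: float

# Illustrative indicators; names and weights are assumptions.
indicators: List[Indicator] = [
    Indicator("performance",    lambda d: 1.0 / d["load_minutes"],    weight=0.4),
    Indicator("freshness",      lambda d: 1.0 / d["latency_minutes"], weight=0.4),
    Indicator("recoverability", lambda d: d["checkpoint_coverage"],   weight=0.2),
]

def score(design: Dict[str, float]) -> float:
    """Aggregate weighted indicator values into a single score for
    comparing alternative ETL design decisions."""
    return sum(i.weight * i.measure(design) for i in indicators)

# Two hypothetical design alternatives for the same ETL process.
incremental_load = {"load_minutes": 10, "latency_minutes": 15, "checkpoint_coverage": 0.9}
full_reload      = {"load_minutes": 45, "latency_minutes": 60, "checkpoint_coverage": 0.5}

best = max([incremental_load, full_reload], key=score)
print("preferred alternative:", "incremental" if best is incremental_load else "full reload")
```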