    Multi-source Information Fusion Technology and Its Engineering Application

    With the continuous development of information technology in recent years, information fusion technology, which originated in military applications, has come to play an important role in many fields. The rapidly growing volume of data and the changing lifestyles of the information age are also shaping how the technology evolves, and a growing number of researchers have turned their attention to image, audio, and video fusion and to distributed fusion techniques. This article summarizes the origin and development of information fusion technology and its typical algorithms, as well as future development trends and challenges.
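
    As a concrete taste of the "typical algorithms" such a survey covers, the sketch below shows inverse-variance weighting, a classic way to combine independent sensor estimates into one fused estimate. The sensor readings and variances are invented for the example.

    ```python
    import numpy as np

    def fuse_estimates(means, variances):
        """Fuse independent sensor estimates by inverse-variance weighting.

        Each estimate is weighted by 1/variance, so more certain sensors
        dominate the result, and the fused variance is smaller than any
        single sensor's variance.
        """
        means = np.asarray(means, dtype=float)
        variances = np.asarray(variances, dtype=float)
        weights = 1.0 / variances
        fused_mean = np.sum(weights * means) / np.sum(weights)
        fused_variance = 1.0 / np.sum(weights)
        return fused_mean, fused_variance

    # Hypothetical example: three sensors measuring the same range, in metres.
    mean, var = fuse_estimates([10.2, 9.8, 10.5], [0.25, 0.16, 0.64])
    print(f"fused estimate: {mean:.2f} m (variance {var:.3f})")
    ```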

    Designing a Framework to Handle Context Information

    In recent years, a number of context-aware frameworks have been proposed to facilitate the development of context-aware applications. Drawing on the experience gained, in this paper we explore the design principles that context-aware platforms should conform to, the functionalities they have to provide, and the technologies and tools that can be used for their implementation. Subsequently, we propose a context-aware framework and describe the architecture it adopts, making our own technological selections from the options previously identified.
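
    The paper's concrete architecture is not reproduced here; the sketch below only illustrates the kind of provider/broker contract context-aware frameworks commonly expose. ContextProvider, ContextBroker, and the demo provider are assumed names for the example, not identifiers from the paper.

    ```python
    from abc import ABC, abstractmethod
    from typing import Any, Callable, Dict, List

    class ContextProvider(ABC):
        """A source of context information, e.g. a location or temperature sensor."""

        @abstractmethod
        def current_context(self) -> Dict[str, Any]:
            ...

    class ContextBroker:
        """Collects context from providers and notifies subscribed applications."""

        def __init__(self) -> None:
            self._providers: List[ContextProvider] = []
            self._subscribers: List[Callable[[Dict[str, Any]], None]] = []

        def register(self, provider: ContextProvider) -> None:
            self._providers.append(provider)

        def subscribe(self, callback: Callable[[Dict[str, Any]], None]) -> None:
            self._subscribers.append(callback)

        def publish(self) -> None:
            # Merge context from all providers and push it to every subscriber.
            merged: Dict[str, Any] = {}
            for provider in self._providers:
                merged.update(provider.current_context())
            for callback in self._subscribers:
                callback(merged)

    class LocationProvider(ContextProvider):
        """Hypothetical provider returning a fixed location for the demo."""

        def current_context(self) -> Dict[str, Any]:
            return {"location": "room-42"}

    broker = ContextBroker()
    broker.register(LocationProvider())
    broker.subscribe(lambda ctx: print("context update:", ctx))
    broker.publish()
    ```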

    Knowledge Acquisition Analytical Games: games for cognitive systems design

    Knowledge discovery from data and knowledge acquisition from experts are steps of paramount importance when designing cognitive systems. The literature discusses extensively the issues related to current knowledge acquisition techniques. In this doctoral work we explore the use of gaming approaches as knowledge acquisition tools, capitalising on aspects such as engagement, ease of use, and the ability to access tacit knowledge. More specifically, we explore the use of analytical games for this purpose. Analytical games for decision making are not a new class of games, but rather a set of platform-independent simulation games, designed not for entertainment, whose main purpose is research on decision making, either across its complete dynamic cycle or on a portion of it (e.g. situational awareness).

    The work focuses on the use of analytical games as knowledge acquisition tools. To this end, the Knowledge Acquisition Analytical Game (K2AG) method is introduced. K2AG is an innovative game framework for supporting the knowledge acquisition task. The framework was born as a generalisation of the Reliability Game, which in turn was inspired by the Risk Game. More specifically, K2AGs aim at collecting information and knowledge to be used in the design of cognitive systems and their algorithms. Two main aspects characterise these games: the use of knowledge cards to present information and meta-information to the players, and the use of an innovative data-gathering method that exploits geometrical features of simple shapes (e.g. a triangle) to easily collect players' beliefs. These beliefs can be mapped to subjective probabilities, or to masses in an evidence theory framework, and used for algorithm design purposes. However, K2AGs may also use different means of conveying information to the players and of collecting data.

    Part of the work has been devoted to a detailed articulation of the design cycle of K2AGs. More specifically, van der Zee's simulation gaming design framework has been extended so that the design cycle steps account for the different kinds of models that characterise the design of simulation games and simulations in general, namely a conceptual model (platform independent), a design model (platform independent) and one or more implementation models (platform dependent). In addition, the processes that lead from one model to the next have been mapped to the design phases of analytical wargaming. Aspects of game validation and player experience evaluation have also been addressed: based on the literature, a set of validation criteria for K2AGs has been proposed, and a player experience questionnaire for K2AGs has been developed. This questionnaire extends work proposed in the literature, but its validation was not possible at the time of writing. Finally, two instantiations of the K2AG framework, the Reliability Game and the MARISA Game, have been designed and analysed in detail to validate the approach and show its potential.
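
    The abstract stops short of spelling out how a token placed inside a triangle becomes a belief assignment. A natural reading, assumed here rather than taken from the thesis, is barycentric coordinates: a point inside a triangle yields three non-negative weights summing to one, usable directly as subjective probabilities or as masses over three hypotheses. The vertex labels and coordinates below are hypothetical.

    ```python
    def barycentric_masses(p, a, b, c):
        """Map a point p inside triangle (a, b, c) to three weights summing to 1.

        Assumed interpretation of the K2AG triangle: each vertex stands for a
        hypothesis, and the closer the player's token is to a vertex, the more
        mass that hypothesis receives. Points are (x, y) tuples.
        """
        (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
        det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
        w_a = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
        w_b = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det
        w_c = 1.0 - w_a - w_b
        return w_a, w_b, w_c

    # Hypothetical example: hypotheses "reliable", "unreliable", "unknown".
    masses = barycentric_masses(p=(0.4, 0.3), a=(0.0, 0.0), b=(1.0, 0.0), c=(0.5, 1.0))
    print(dict(zip(("reliable", "unreliable", "unknown"), masses)))
    ```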

    Towards Autonomic Computing: Effective Event Management

    Autonomic Computing is emerging as a significant new approach to the design of computing systems. Its goal is the production of systems that are self-managing, self-healing, self-protecting, and self-optimizing. Achieving this goal will involve techniques from both Software Engineering and Artificial Intelligence. This paper discusses one particular aspect of Autonomic Computing: event management. It considers the range of event handling techniques in use, particularly in relation to distributed systems. Intelligent approaches are illustrated using the example of event handling in telecommunication systems. In particular, the telecom survivable network architecture is analyzed to identify lessons and potential pitfalls for Autonomic Computing.
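
    To make the distributed-systems flavour of this concrete, here is a minimal sketch of one common telecom event-management technique, root-cause suppression of downstream alarms. The event names, topology, and suppression window are invented for the example and are not taken from the paper.

    ```python
    import time

    # Hypothetical correlation rule: a LINK_DOWN on a node explains (and should
    # suppress) the flood of CONN_LOST events reported against that node.
    SUPPRESSION_WINDOW_S = 30.0

    class EventCorrelator:
        def __init__(self):
            self._root_causes = {}  # node -> timestamp of its last LINK_DOWN

        def handle(self, event_type, node, ts=None):
            ts = ts if ts is not None else time.time()
            if event_type == "LINK_DOWN":
                self._root_causes[node] = ts
                return f"ALERT: {event_type} on {node}"
            root_ts = self._root_causes.get(node)
            if root_ts is not None and ts - root_ts <= SUPPRESSION_WINDOW_S:
                return None  # symptom of a known root cause: suppress it
            return f"ALERT: {event_type} on {node}"

    correlator = EventCorrelator()
    print(correlator.handle("LINK_DOWN", "router-7"))
    print(correlator.handle("CONN_LOST", "router-7"))  # suppressed -> None
    ```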

    Path To Gain Functional Transparency In Artificial Intelligence With Meaningful Explainability

    Artificial Intelligence (AI) is rapidly integrating into various aspects of our daily lives, influencing decision-making processes in areas such as targeted advertising and matchmaking algorithms. As AI systems become increasingly sophisticated, ensuring their transparency and explainability becomes crucial. Functional transparency is a fundamental aspect of algorithmic decision-making systems, allowing stakeholders to comprehend the inner workings of these systems and enabling them to evaluate their fairness and accuracy. However, achieving functional transparency poses significant challenges that need to be addressed. In this paper, we propose a user-centered, compliant-by-design approach to transparency in such systems. We emphasize that the development of transparent and explainable AI systems is a complex and multidisciplinary endeavor, necessitating collaboration among researchers from diverse fields such as computer science, artificial intelligence, ethics, law, and social science. By providing a comprehensive understanding of the challenges associated with transparency in AI systems and proposing a user-centered design framework, we aim to facilitate the development of AI systems that are accountable, trustworthy, and aligned with societal values.

    Comment: Hosain, M. T., Anik, M. H., Rafi, S., Tabassum, R., Insia, K. & Sıddıky, M. M. (). Path To Gain Functional Transparency In Artificial Intelligence With Meaningful Explainability. Journal of Metaverse, 3 (2), 166-180. DOI: 10.57019/jmv.130668

    A Cybersecurity Model for a Roblox-based Metaverse Architecture Framework

    The adoption of the virtual reality (VR) and augmented reality (AR) headsets envisioned in futuristic and science fiction has made it possible for the Metaverse to exist as a single, universal, immersive virtual universe. By extending technology beyond our physical reality, the Metaverse alters the human experience. We categorize metaverse definitions into four categories: environment, interface, interaction, and social value. Currently, it is unclear what the metaverse's structure and elements are. As the world grows more interconnected and immersive technologies see increasingly wide use in business, government, and consumer markets, a cybersecurity framework for these devices becomes necessary. A literature review was used.

    Development of computer model and expert system for pneumatic fracturing of geologic formations

    The objective of this study was the development of a new computer program called PF-Model to analyze pneumatic fracturing of geologic formations. Pneumatic fracturing is an in situ remediation process that involves injecting high pressure gas into soil or rock matrices to enhance permeability, as well as to introduce liquid and solid amendments. PF-Model has two principal components: (1) Site Screening, which heuristically evaluates sites with regard to process applicability; and (2) System Design, which uses the numerical solution of a coupled algorithm to generate preliminary design parameters.

    Designed as an expert system, the Site Screening component is a high performance computer program capable of simulating human expertise within a narrow domain. The reasoning process is controlled by the inference engine, which uses subjective probability theory (based on Bayes' theorem) to handle uncertainty. The expert system also contains an extensive knowledge base of geotechnical data related to field performance of pneumatic fracturing. The hierarchical order of importance established for the geotechnical properties was formation type, depth, consistency/relative density, plasticity, fracture frequency, weathering, and depth of water table. The expert system was validated by a panel of five experts who rated selected sites on the applicability of the three main variants of pneumatic fracturing. Overall, PF-Model demonstrated better than 80% agreement with the expert panel.

    The System Design component was programmed with structured algorithms to accomplish two main functions: (1) to estimate fracture aperture and radius (Fracture Prediction Mode); and (2) to calibrate post-fracture Young's modulus and pneumatic conductivity (Calibration Mode). The Fracture Prediction Mode uses numerical analysis to converge on a solution by considering the three coupled physical processes that affect fracture propagation: pressure distribution, leakoff, and deflection. The Calibration Mode regresses modulus using a modified deflection equation, and then converges on the conductivity in a method similar to the Fracture Prediction Mode. The System Design component was validated and calibrated for each of the 14 different geologic formation types supported by the program. Validation was done by comparing the results of PF-Model to the original mathematical model. For the calibration process, default values for flow rate, density, Poisson's ratio, modulus, and pneumatic conductivity were established by regression until the model simulated, in general, actual site behavior.

    PF-Model was programmed in Visual Basic 5.0 and features a menu-driven GUI. Three extensive default libraries are provided: probabilistic knowledge base, flownet shape factors, and geotechnical defaults. Users can conveniently access and modify the default libraries to reflect evolving trends and knowledge. Recommendations for future study are included in the work.
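
    As a rough illustration of the inference style described above, the sketch below applies Bayes' theorem sequentially to a subjective prior, one evidence item at a time. The prior, likelihoods, and evidence descriptions are invented placeholders, not values from PF-Model's knowledge base.

    ```python
    def bayes_update(prior, likelihood_given_h, likelihood_given_not_h):
        """Update the probability of hypothesis H after one evidence item E.

        posterior = P(E|H) * P(H) / (P(E|H) * P(H) + P(E|~H) * P(~H))
        """
        numerator = likelihood_given_h * prior
        denominator = numerator + likelihood_given_not_h * (1.0 - prior)
        return numerator / denominator

    # Hypothetical screening run: H = "site is amenable to pneumatic fracturing".
    # Each tuple is (P(evidence | H), P(evidence | not H)) for one observation.
    prior = 0.5
    observations = [
        (0.8, 0.3),  # e.g. favourable formation type observed
        (0.7, 0.4),  # e.g. suitable depth observed
        (0.6, 0.5),  # e.g. acceptable water table depth observed
    ]
    for lh, lnh in observations:
        prior = bayes_update(prior, lh, lnh)
    print(f"posterior applicability estimate: {prior:.2f}")
    ```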