
    Impact of Positioning Error on Achievable Spectral Efficiency in Database-Aided Networks

    Database-aided user association, where users are associated with data base stations (BSs) based on a database that stores their geographical location with signal-to-noise-ratio tagging, will play a vital role in the futuristic cellular architecture with separated control and data planes. However, such an approach can lead to inaccurate user-data BS association as a result of inaccuracies in the positioning technique, thus leading to sub-optimal performance. In this paper, we investigate the impact of the database-aided user association approach on the average spectral efficiency (ASE). We model the data plane base stations using their fluid-model equivalent and derive the ASE for the channel model with pathloss only and when shadowing is incorporated. Our results show that the ASE in database-aided networks degrades as the accuracy of the user positioning technique decreases. Hence, system specifications for database-aided networks must take account of inaccuracies in positioning techniques.
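The core failure mode described above — a noisy database position mapping a user to the wrong data BS — can be sketched in a few lines. This is an illustrative Monte Carlo toy, not the paper's fluid model; the BS layout, noise model, and `mismatch_rate` function are all assumptions for the example.

```python
import math
import random

def nearest_bs(pos, bs_positions):
    """Index of the base station closest to pos (Euclidean distance)."""
    return min(range(len(bs_positions)),
               key=lambda i: math.dist(pos, bs_positions[i]))

def mismatch_rate(bs_positions, n_users=2000, sigma=0.0, seed=0):
    """Fraction of users whose database-reported (noisy) position
    yields a different BS than their true nearest BS."""
    rng = random.Random(seed)
    mismatches = 0
    for _ in range(n_users):
        true = (rng.uniform(0, 10), rng.uniform(0, 10))
        noisy = (true[0] + rng.gauss(0, sigma),
                 true[1] + rng.gauss(0, sigma))
        if nearest_bs(true, bs_positions) != nearest_bs(noisy, bs_positions):
            mismatches += 1
    return mismatches / n_users

# Four hypothetical data BSs on a 10x10 km plane.
bss = [(2.5, 2.5), (7.5, 2.5), (2.5, 7.5), (7.5, 7.5)]
```

Running `mismatch_rate` with increasing `sigma` reproduces the qualitative trend in the abstract: association errors, and hence ASE loss, grow as positioning accuracy degrades.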

    Design and Development of the Architecture and Framework of a Knowledge-Based Expert System for Environmental Impact Assessment

    The architecture and framework of a knowledge-based expert system (ES) named "JESEIA" for environmental impact assessment (EIA) were developed using the C Language Integrated Production System (CLIPS), incorporating relevant expert knowledge on EIA and integrating a computational tool to support the preparation of an EIA study. The research was based on the conceptualization and development of an architecture and framework that demonstrate the feasibility of integrating the following aspects: an expert knowledge-based system approach; object-oriented techniques and rule structuring as the knowledge modeling paradigm; a database management system as a repository connecting domain knowledge sources to the expert system kernel; and, finally, EIA as a significant knowledge domain with an incremental approach as the development model. This work describes the functional framework for combining shared knowledge from various experts as knowledge sources through a blackboard system approach that organizes the solution elements and determines which information has the highest certainty to contribute to the inference solution. The rules in the rule base were developed according to the environmental component classification characteristics, with attributes modeled in an object-oriented technique. The developed system considers robustness, expandability, and modularity throughout its development process. The raw knowledge and data were kept in a supporting database developed within the system for further reference or updating, both through the expert system's built-in functionality and through a connection to an external database environment via an open database connectivity mechanism.
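The rule-base-plus-inference pattern described above can be sketched as a minimal forward-chaining engine in the spirit of a CLIPS rule base. The engine, the fact names, and the two EIA rules below are invented for illustration, not taken from JESEIA.

```python
def run_rules(facts, rules):
    """Fire rules until no new facts can be asserted.
    Each rule is (conditions, conclusion): when every condition fact
    holds, the conclusion fact is added to working memory."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical EIA-flavored rules (fact names are illustrative only).
eia_rules = [
    (("project:near_wetland", "project:large_footprint"),
     "impact:habitat_loss"),
    (("impact:habitat_loss",),
     "recommend:full_eia_study"),
]
```

Chaining the second rule off the first mirrors how an ES kernel derives recommendations from intermediate impact conclusions rather than from raw project facts directly.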

    A System Integration Model Using the Service Oriented Architecture and Model View Controller Approaches at the Center for Science and Technology Development Studies, Lembaga Ilmu Pengetahuan Indonesia

    Administrative management, the institute's main activity (also called the Back Office System), is supported by several information systems. The data and information needs of one information system cannot be met by a single source; they require a composition of two or more sources within one institute or organization. To solve this problem, an architecture model is needed that can integrate different information systems. This research on integrated information system design uses the Service Oriented Architecture (SOA) approach as the architectural base and the Model View Controller (MVC) method as the programming (coding) model. The SOA development applies Service Oriented Modelling and Architecture (SOMA), a system design method that classifies business processes into service groups. SOA is applied because its loosely coupled, highly interoperable, and reusable characteristics make it reliable for information system development and integration, while an integrated information system built with the MVC method is easier to maintain and develop. This research produced an employment service, an asset service, a supplies service, and a financial service, as well as a system prototype serving as a dashboard for the employment service, using the SOA approach and the MVC method with Representational State Transfer (REST) technology.
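The loosely coupled, REST-style data-provider idea described above can be sketched as a minimal WSGI application. The endpoint path, data store, and record fields are invented for the example; a real deployment would sit in front of the institute's databases.

```python
import json

# Hypothetical stand-in for a backing data store (e.g., the employment service).
EMPLOYEES = {"1": {"name": "A. Example", "unit": "Research"}}

def app(environ, start_response):
    """Tiny REST-style provider: GET /employees/<id> returns JSON."""
    path = environ.get("PATH_INFO", "/")
    if path.startswith("/employees/"):
        emp_id = path.rsplit("/", 1)[-1]
        record = EMPLOYEES.get(emp_id)
        if record is not None:
            body = json.dumps(record).encode()
            start_response("200 OK", [("Content-Type", "application/json")])
            return [body]
    start_response("404 Not Found", [("Content-Type", "application/json")])
    return [b'{"error": "not found"}']
```

Because consumers see only the HTTP/JSON contract, the underlying database or implementation can change without touching the dashboard or other client applications — the loose coupling the abstract attributes to SOA.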

    Self-Organizing Fuzzy Inference Ensemble System for Big Streaming Data Classification

    An evolving intelligent system (EIS) is able to self-update its system structure and meta-parameters from streaming data. However, since the majority of EISs are implemented on a single-model architecture, their performance on large-scale, complex data streams is often limited. To address this deficiency, a novel self-organizing fuzzy inference ensemble framework is proposed in this paper. As the base learner of the proposed ensemble system, the self-organizing fuzzy inference system is capable of self-learning a highly transparent predictive model from streaming data on a chunk-by-chunk basis through a human-interpretable process. Very importantly, the base learner can continuously self-adjust its decision boundaries based on the inter-class and intra-class distances between prototypes identified from successive data chunks for higher classification precision. Thanks to its parallel distributed computing architecture, the proposed ensemble framework can achieve great classification precision while maintaining high computational efficiency on large-scale problems. Numerical examples based on popular benchmark big data problems demonstrate the superior performance of the proposed approach over the state-of-the-art alternatives in terms of both classification accuracy and computational efficiency.
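The chunk-by-chunk, prototype-based learning idea can be illustrated with a toy nearest-prototype classifier. This is a deliberately simplified stand-in for the paper's self-organizing fuzzy base learner: one mean prototype per class per chunk, with no fuzzy membership or boundary self-adjustment.

```python
import math
from collections import defaultdict

class PrototypeChunkClassifier:
    """Toy nearest-prototype classifier updated chunk by chunk
    (illustrative simplification, not the paper's algorithm)."""

    def __init__(self):
        self.prototypes = defaultdict(list)  # label -> list of prototype points

    def fit_chunk(self, X, y):
        # One prototype per class per chunk: the class mean within the chunk.
        by_class = defaultdict(list)
        for x, label in zip(X, y):
            by_class[label].append(x)
        for label, pts in by_class.items():
            dim = len(pts[0])
            mean = tuple(sum(p[d] for p in pts) / len(pts) for d in range(dim))
            self.prototypes[label].append(mean)

    def predict(self, x):
        # The nearest prototype over all classes decides the label.
        best_label, best_dist = None, float("inf")
        for label, protos in self.prototypes.items():
            for p in protos:
                d = math.dist(x, p)
                if d < best_dist:
                    best_label, best_dist = label, d
        return best_label
```

Because each chunk only appends prototypes, the model grows incrementally from the stream — the same property that lets an ensemble of such learners be trained in parallel on disjoint chunks.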

    Hybrid quantum computing with ancillas

    In the quest to build a practical quantum computer, it is important to use efficient schemes for enacting the elementary quantum operations from which quantum computer programs are constructed. The opposing requirements of well-protected quantum data and fast quantum operations must be balanced to maintain the integrity of the quantum information throughout the computation. One important approach to quantum operations is to use an extra quantum system - an ancilla - to interact with the quantum data register. Ancillas can mediate interactions between separated quantum registers, and by using fresh ancillas for each quantum operation, data integrity can be preserved for longer. This review provides an overview of the basic concepts of the gate model quantum computer architecture, including the different possible forms of information encodings - from base two up to continuous variables - and a more detailed description of how the main types of ancilla-mediated quantum operations provide efficient quantum gates. Comment: Review paper. An introduction to quantum computation with qudits and continuous variables, and a review of ancilla-based gate methods.
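A minimal qubit-level illustration of ancilla mediation: a CNOT between two data qubits can be composed from three data-ancilla CNOTs, with the ancilla prepared in and returned to |0⟩ so it disentangles from the data. This toy state-vector simulation is only an example of the general idea; the review also covers qudit and continuous-variable schemes not shown here.

```python
def cnot(state, control, target, n=3):
    """Apply a CNOT to an n-qubit state vector (list of 2**n amplitudes).
    Qubit 0 is the most significant bit of the basis index."""
    out = [0j] * len(state)
    for idx, amp in enumerate(state):
        bits = [(idx >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[control]:
            bits[target] ^= 1
        new = 0
        for b in bits:
            new = (new << 1) | b
        out[new] += amp
    return out

def ancilla_mediated_cnot(state):
    """CNOT(qubit 0 -> qubit 1) built only from data-ancilla interactions;
    qubit 2 is the ancilla, assumed prepared in |0>."""
    state = cnot(state, 0, 2)  # copy control into the ancilla (basis states)
    state = cnot(state, 2, 1)  # the ancilla flips the target
    state = cnot(state, 0, 2)  # uncompute: ancilla returns to |0>
    return state
```

Tracking a basis state |c, t, 0⟩ through the three gates gives |c, t⊕c, 0⟩, i.e., the desired data-data CNOT with the ancilla fresh for reuse - the data-integrity benefit the abstract highlights.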

    Development of a creep data base management system for engineering materials

    A fully menu-driven creep database management system has been developed for various high-temperature materials using the client/server (C/S) architecture, with Sybase System 10 as the backend and PowerBuilder 4.0 as the interface. The relational database consists of various classes of materials, their heat treatment, prior history, and the related creep properties at different test conditions, in addition to the source, process route, and chemical composition details. A top-down approach has been adopted in designing the entity-relationship (E-R) model. The creep data is organized into the third normal form, and the entire system is divided into manageable modules. Coding for the system is done using Transact-SQL for data definition, manipulation, and control operations, and the PowerScript language for application development. This article briefly outlines the formulation of the database design and the implemented E-R model, in addition to the presentation of various screen formats used for the data entry and retrieval modules.
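A simplified slice of such an E-R model can be sketched as a relational schema. The table and column names below are hypothetical illustrations of the normalized design described above (materials, composition, and creep tests in separate tables), shown here with SQLite rather than Sybase/Transact-SQL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- One material, many creep tests; composition in its own table
-- (third normal form: no repeating groups in the material row).
CREATE TABLE material (
    material_id    INTEGER PRIMARY KEY,
    name           TEXT NOT NULL,
    heat_treatment TEXT
);
CREATE TABLE composition (
    material_id INTEGER REFERENCES material(material_id),
    element     TEXT NOT NULL,
    weight_pct  REAL NOT NULL,
    PRIMARY KEY (material_id, element)
);
CREATE TABLE creep_test (
    test_id       INTEGER PRIMARY KEY,
    material_id   INTEGER REFERENCES material(material_id),
    temperature_c REAL NOT NULL,
    stress_mpa    REAL NOT NULL,
    rupture_h     REAL
);
""")
# Illustrative rows only - not data from the described system.
conn.execute("INSERT INTO material VALUES (1, '9Cr-1Mo steel', 'normalized and tempered')")
conn.execute("INSERT INTO creep_test VALUES (1, 1, 600.0, 100.0, 12500.0)")
row = conn.execute(
    "SELECT m.name, t.temperature_c FROM creep_test t "
    "JOIN material m ON m.material_id = t.material_id").fetchone()
```

Keeping composition and test results in child tables keyed on `material_id` is what lets the retrieval modules join arbitrary slices (e.g., all tests at a given temperature) without duplicating material records.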

    Desain Dan Implementasi Layanan Penyedia Data Penerimaan Mahasiswa Baru Berbasis Web Services Untuk Menunjang Executive Support System

    This work designs and implements a web-services-based data provider (service provider) for new student admissions to supply data to an executive support system without increasing the workload of the academic database server, while ensuring interoperability and system security at Lampung State Polytechnic. With this web-services-based data provider, the academic database server can in the future be accessed and processed by multi-platform applications. The system development method used in this study is a software engineering approach, the linear model. It starts with analysis, collecting and analyzing data through field studies, followed by the design stage: the architecture of the data provider services, the data design, the interfaces, and the required applications. The next stage is implementation of the designed service architecture, data, interfaces, and applications under real conditions. The final step is testing with the black-box method, testing the service provider architecture to verify that the entire system runs well.

    Mixing Context Granularities for Improved Entity Linking on Question Answering Data across Entity Categories

    The first stage of every knowledge base question answering approach is to link entities in the input question. We investigate entity linking in the context of a question answering task and present a jointly optimized neural architecture for entity mention detection and entity disambiguation that models the surrounding context on different levels of granularity. We use the Wikidata knowledge base and available question answering datasets to create benchmarks for entity linking on question answering data. Our approach outperforms the previous state-of-the-art system on this data, resulting in an average 8% improvement of the final score. We further demonstrate that our model delivers a strong performance across different entity categories. Comment: Accepted as *SEM 2018 Long Paper (co-located with NAACL 2018), 9 pages.
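The idea of mixing context granularities can be illustrated with a toy linker that combines a score from the narrow mention context with one from the whole sentence. This is not the paper's neural architecture; the token-overlap scores, the weight `w_local`, and the tiny candidate dictionary (a stand-in for a knowledge base such as Wikidata) are all invented for the example.

```python
def overlap(tokens_a, tokens_b):
    """Jaccard token overlap between two token lists."""
    a, b = set(tokens_a), set(tokens_b)
    return len(a & b) / max(1, len(a | b))

def link_entity(mention, sentence, candidates, w_local=0.5):
    """Toy two-granularity linker: blend a narrow (mention-level) score
    with a broad (sentence-level) score for each candidate entity.
    `candidates` maps entity id -> description tokens."""
    mention_toks = mention.lower().split()
    sent_toks = sentence.lower().split()

    def score(ent):
        desc = candidates[ent]
        return (w_local * overlap(mention_toks, desc)
                + (1 - w_local) * overlap(sent_toks, desc))

    return max(candidates, key=score)
```

With an ambiguous mention like "Paris", the mention-level score alone cannot separate the city from the celebrity; the sentence-level context breaks the tie, which is the intuition behind modeling several granularities jointly.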