
    Multi-layer Architecture For Storing Visual Data Based on WCF and Microsoft SQL Server Database

    In this paper we present a novel architecture for storing visual data. Effective storing, browsing and searching of image collections is one of the most important challenges of computer science. The design of an architecture for storing such data requires a set of tools and frameworks, such as SQL database management systems and service-oriented frameworks. The proposed solution is based on a multi-layer architecture that allows any component to be replaced without recompiling the others. The approach comprises five components: Model, Base Engine, Concrete Engine, CBIR service and Presentation. They are based on two well-known design patterns: Dependency Injection and Inversion of Control. For experimental purposes we implemented the SURF local interest point detector as a feature extractor and k-means clustering as an indexer. The presented architecture is intended for simulating content-based image retrieval systems as well as for real-world CBIR tasks.
    Comment: Accepted for the 14th International Conference on Artificial Intelligence and Soft Computing, ICAISC, June 14-18, 2015, Zakopane, Poland
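    The layering described above can be sketched with constructor-based Dependency Injection. This is a minimal illustration, not the paper's actual code: the class and method names (`FeatureExtractor`, `SurfExtractor`, `CbirService`) are hypothetical, and the "extractor" below returns a placeholder descriptor rather than real SURF features.

```python
from abc import ABC, abstractmethod

class FeatureExtractor(ABC):
    """Abstract contract between layers (hypothetical 'Base Engine' role)."""
    @abstractmethod
    def extract(self, image_bytes: bytes) -> list[float]: ...

class SurfExtractor(FeatureExtractor):
    """A 'Concrete Engine' stand-in; a real one would wrap a SURF detector."""
    def extract(self, image_bytes: bytes) -> list[float]:
        return [float(len(image_bytes))]  # placeholder descriptor, not SURF

class CbirService:
    """Service layer: the extractor is injected via the constructor,
    so implementations can be swapped without touching this class."""
    def __init__(self, extractor: FeatureExtractor):
        self._extractor = extractor

    def index(self, image_bytes: bytes) -> list[float]:
        return self._extractor.extract(image_bytes)

# Composition root: the only place that wires concrete classes together.
service = CbirService(SurfExtractor())
print(service.index(b"abc"))  # → [3.0]
```

    Because `CbirService` depends only on the abstract interface, replacing the extractor (or the indexer, in the same style) requires no change to the service layer, which is the property the multi-layer design aims for.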

    BUSINESS INFORMATION SYSTEMS: TRENDS AND TECHNOLOGICAL CHALLENGES

    Although many companies have been using computer-based business information systems (BIS) for more than 30 years, this area is still characterized by very rapid technological development, "technological jumps", and changing user requirements. In recent years in particular, new technological trends and solutions such as object-oriented and object-relational database management systems, decentralization, client/server computing, data warehouses, multimedia support, and workflow management systems have been developed, offering new possibilities and opportunities but also challenges and risks. This paper describes and characterizes these developments, explains the driving forces behind them, and discusses some of the impacts and potential of these trends and technologies on the development of business information systems.

    Query processing on multi-core architectures

    The upcoming generation of computer hardware poses several new challenges for database developers and engineers. Software in general, and database management systems (DBMSs) in particular, will no longer benefit from performance gains in future hardware due to increased clock speed, as was the case for the last 35 years; instead, the number of cores per CPU will increase steadily. Today's approach is to run each query on a single core, or on only a few cores using parallel query execution. This approach suffers from several problems (e.g. contention) and therefore leads to poor speed-up and scale-up behavior. These observations raise several important research questions on how to use new multi-core CPU architectures to improve the overall performance of DBMSs. This paper outlines our approach to query processing on multi-core CPU architectures. We present an abstract architecture view of multi-core CPUs, meta operators to control and interact with the hardware, and a new query operator model that uses the meta operators to control the parallel execution of a query across different cores. We illustrate how each of these parts fits into our framework for query processing on multi-core architectures.
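    The core idea of distributing one query's work across cores can be sketched as partitioned parallel aggregation. The paper's actual meta-operator API is not given, so the names below (`partition`, `local_sum`, `parallel_sum`) are purely illustrative: each worker aggregates its own partition privately and a final merge combines the partial results, which is what avoids contention on shared state.

```python
from concurrent.futures import ThreadPoolExecutor

def partition(rows, n):
    """'Meta operator' stand-in: split the input into n chunks, one per core."""
    return [rows[i::n] for i in range(n)]

def local_sum(chunk):
    """Per-core operator: aggregates only its own partition, no shared state."""
    return sum(chunk)

def parallel_sum(rows, cores=4):
    """Run the local operator on each partition, then merge the partials."""
    with ThreadPoolExecutor(max_workers=cores) as pool:
        partials = pool.map(local_sum, partition(rows, cores))
    return sum(partials)  # final merge step

print(parallel_sum(list(range(10))))  # → 45
```

    A real DBMS operator model would additionally pin workers to cores and size partitions to cache hierarchies, which is what the abstract hardware view and meta operators in the paper are for.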

    Search extension transforms Wiki into a relational system: A case for flavonoid metabolite database

    Background: In computer science, database systems are based on the relational model founded by Edgar Codd in 1970. In biology, on the other hand, the word 'database' often refers to loosely formatted, very large text files. Although such bio-databases may describe conflicts or ambiguities (e.g. a protein pair that both does and does not interact, or unknown parameters) in a positive sense, the flexibility of the data format sacrifices a systematic query mechanism equivalent to the widely used SQL.
    Results: To overcome this disadvantage, we propose embeddable string-search commands on a Wiki-based system and designed a half-formatted database. As proof of principle, a database of flavonoids with 6,902 molecular structures from over 1,687 plant species was implemented on MediaWiki, the system behind Wikipedia. Registered users can describe any information in an arbitrary format. The structured part is subject to text-string searches that realize relational operations. The system was written in PHP as an extension of MediaWiki. All modifications are open source and publicly available.
    Conclusion: This scheme benefits from both the free-formatted Wiki style and the concise, structured relational-database style. MediaWiki supports multi-user environments for document management, and the cost of database maintenance is reduced.
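    The "half-formatted" idea, structured template fields embedded in free text with string searches acting as relational selection, can be sketched as follows. This is an illustrative approximation, not the published PHP extension: the `{{flavonoid|...}}` template layout and the field names are assumptions for the example.

```python
import re

# A page mixing free text with a MediaWiki-style structured template.
page = """
Free-form notes about quercetin go here; the query ignores them.
{{flavonoid|name=Quercetin|class=Flavonol|species=Allium cepa}}
{{flavonoid|name=Naringenin|class=Flavanone|species=Citrus paradisi}}
"""

def select(text, **conditions):
    """Relational-style selection: return template rows whose fields
    match every key=value condition; free text is skipped entirely."""
    rows = []
    for block in re.findall(r"\{\{flavonoid\|([^}]*)\}\}", text):
        row = dict(field.split("=", 1) for field in block.split("|"))
        if all(row.get(k) == v for k, v in conditions.items()):
            rows.append(row)
    return rows

# 'class' is a Python keyword, so pass it via dict unpacking.
print(select(page, **{"class": "Flavonol"})[0]["name"])  # → Quercetin
```

    The free-form notes stay arbitrary, while the template fields behave like columns of a relation, which is exactly the trade-off the abstract describes.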

    Collaborative Engineering Environments. Two Examples of Process Improvement

    Companies are recognising that innovative processes are determining factors in competitiveness. Two examples from aircraft-development projects describe the introduction of collaborative engineering environments as a way to improve engineering processes. A multi-disciplinary simulation environment integrates models from all disciplines involved into a common functional structure. Quick configuration for specific design problems and powerful feedback and visualisation capabilities enable engineering teams to concentrate on the integrated behaviour of the design. An engineering process management system allows engineering teams to work concurrently on tasks, following a defined flow of activities and applying tools on a shared database. Automated management of workspaces, including data consistency, enables engineering teams to concentrate on design activities. The vast body of experience within companies must be transformed for effective application in engineering processes. Compatible concepts, notations and implementation platforms make tangible knowledge such as models and algorithms accessible. Computer-based design management makes knowledge about engineering processes and methods explicit.

    Development and Application of a Digital Twin for Chiller Plant Performance Assessment

    As the complexity of industrial equipment continues to increase, the management of individual machines and integrated operations becomes difficult without computer tools. The streaming data available from manufacturing floors, plant operations, and deployed fleets can be overwhelming to analyze, although it provides opportunities to improve performance. Dedicated monitoring systems used in the plant and field to troubleshoot machinery can be integrated within a product lifecycle management (PLM) architecture to offer greater capability. PLM offers virtual processes and software tools for the design, analysis, monitoring, and support of engineering systems and products. Within this paradigm, a digital twin can estimate system behavior from assembled physical models and operating data to support preventive maintenance efforts. PLM software can store computer-aided-design, computer-aided-engineering, advanced-manufacturing, and operational data in the cloud for remote access. Integrating physical and performance data into a single database provides flexibility and adaptability while allowing remote commanding and health monitoring of dynamic systems. The recent attention to global warming and the minimization of energy consumption can be partially addressed by examining economic sectors that use large quantities of electric power. Across the United States, heating, ventilation, and air conditioning (HVAC) systems consume a collective $14 billion of resources to control the temperature of commercial and residential spaces. A typical commercial HVAC system consists of a chiller plant, water pumps for fluid circulation, multiple heat exchangers, and forced-air blowers. In this research project, a digital twin is created for a single-compressor chilled-water HVAC system using a multi-disciplinary CAE software package.
    The system-level models are assembled to describe a 1,400-ton chiller located in the East-side chiller plant on the Clemson University (Clemson, SC) campus. The dynamic models that estimate the fluid pressures, temperatures, and flow rates, as well as the electrical and mechanical power consumption, are validated against operating data streamed through the OptiCX System. To demonstrate the capabilities of this digital twin in a preventive maintenance mode, various degradations are virtually investigated in the chiller plant's components. The mechanical pump efficiency, electric pump motor friction, pipe blockage, air flow rate sensor, and expansion valve opening were degraded by 3% to 5%, which impacted component behavior and system performance. The analysis of these predicted plant signals helped to establish preventive maintenance thresholds on these components, which should promote improved plant reliability. A digital twin provides more flexibility than stand-alone monitoring technologies because it can simulate customized scenarios for analyzing failure-prone conditions and overall equipment effectiveness (OEE). The PLM-based digital twin offers a design and prognostic platform for HVAC systems.
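    The degradation study described above can be sketched as a simple parameter sweep against a maintenance threshold. The baseline efficiency and threshold values below are illustrative assumptions, not figures from the Clemson chiller plant; only the 3-5% degradation range comes from the abstract.

```python
# Assumed values for illustration only.
BASELINE_PUMP_EFFICIENCY = 0.80   # hypothetical nominal pump efficiency
MAINTENANCE_THRESHOLD = 0.77      # hypothetical preventive-maintenance alert level

def degraded_efficiency(baseline, percent):
    """Apply a virtual degradation of the given percentage to a parameter."""
    return baseline * (1 - percent / 100)

# Sweep the 3-5% degradation range studied in the project and flag
# any case that crosses the maintenance threshold.
for pct in (3, 4, 5):
    eff = degraded_efficiency(BASELINE_PUMP_EFFICIENCY, pct)
    flag = "MAINTAIN" if eff < MAINTENANCE_THRESHOLD else "OK"
    print(f"{pct}% degradation -> efficiency {eff:.3f} [{flag}]")
```

    In the actual digital twin, each degraded parameter feeds the full dynamic plant model rather than a single formula, and the thresholds are derived from the predicted plant signals rather than chosen up front.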