
    Collaborative CAD Modeling Process Analysis to Support Teamwork for Building Design

    Collaborative tools are information systems that allow document sharing through local area networks, intranets and extranets. Collaborative design can increase the productivity and the final quality of the product in a building design office, since it ensures information integration and data integrity during a design process based on computer network communication. The goal of this article is to analyze how a CAD system based on the BIM concept (ArchiCAD software, Graphisoft/Nemetschek) can support collaborative teamwork structured on an integrated model for different design views. In this model, tasks are assigned by a coordinator and executed by designers in different places following a client-server scheme. The article is intended to contribute to the diffusion of this information technology tool and to present its potential for improving design performance. The research method was a case study of design development, in which communication guidelines were applied to verify the software's behavior during task execution in a shared framework. The use of collaborative CAD modeling in design development provided information sharing, tracking and control of document versions, and the automatic and simultaneous integration of design modifications across the different computers used.

    Decision Support for Healthcare ICT Network System Appraisal

    A framework to support the appraisal process for improving the quality of service (QoS) of an Information and Communication Technology (ICT) network system in healthcare is presented. Most health-related activities stand to benefit from ICT adoption; however, technical problems may appear, such as an inadequate physical infrastructure, insufficient user access to the hardware/software communication infrastructure, and QoS issues. The aim is to develop a prototype assessment model based on data collected from the main users of a health network system. An evaluation process is carried out to analyze and assess the QoS of the ICT system, its infrastructure and the users' perception of the QoS offered, through a case study of hospitals in Chile. Performance has been evaluated by simulating and modelling the network architecture. The Optimization Network Engineering Tool (OPNET) simulation platform is used to examine network behaviour and performance to ensure consistency and reliability for thousands of staff across the hospital network.

    A machine learning-based framework for preventing video freezes in HTTP adaptive streaming

    HTTP Adaptive Streaming (HAS) is the dominant technology for delivering video over the Internet, owing to its ability to adapt the video quality to the available bandwidth. Despite that, HAS clients can still suffer from freezes in the video playout, the main factor degrading users' Quality of Experience (QoE). To reduce video freezes, we propose a network-based framework in which a network controller prioritizes the delivery of particular video segments to prevent freezes at the clients. This framework is based on OpenFlow, a widely adopted protocol implementing the software-defined networking principle. The main element of the controller is a Machine Learning (ML) engine based on the random undersampling boosting algorithm and fuzzy logic, which can detect when a client is close to a freeze and drive the network prioritization to avoid it. This decision is based on measurements collected from the network nodes only, without any knowledge of the streamed videos or of the clients' characteristics. In this paper, we detail the design of the proposed ML-based framework and compare its performance with other benchmark HAS solutions under various video streaming scenarios. In particular, we show through extensive experimentation that the proposed approach can reduce video freezes and freeze time by about 65% and 45%, respectively, compared to the benchmark algorithms. These results represent a major improvement for the QoE of users watching multimedia content online.
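As a rough illustration of the controller's decision step, the sketch below scores freeze risk from network-side measurements with simple fuzzy membership functions. All names and thresholds here (the 10 s buffer reference, the 0.25 risk cutoff) are hypothetical; the paper's actual engine combines a random undersampling boosting classifier with fuzzy logic.

```python
def freeze_risk(buffer_s, throughput_bps, bitrate_bps):
    """Toy freeze-risk score: low client buffer combined with
    throughput below the segment bitrate raises the risk.
    (Simplified stand-in for the RUSBoost + fuzzy-logic engine.)"""
    # Membership of "low buffer": 1 when empty, 0 at/above 10 s.
    low_buffer = max(0.0, min(1.0, (10.0 - buffer_s) / 10.0))
    # Membership of "starved link": 1 when throughput << bitrate.
    starved = max(0.0, min(1.0, 1.0 - throughput_bps / bitrate_bps))
    # Fuzzy AND (product t-norm): both conditions must hold.
    return low_buffer * starved

def should_prioritize(buffer_s, throughput_bps, bitrate_bps, threshold=0.25):
    """Ask the controller to prioritize this client's next segment."""
    return freeze_risk(buffer_s, throughput_bps, bitrate_bps) > threshold
```

A client with a 2 s buffer on a link delivering a quarter of the video bitrate would be prioritized; a client with a 20 s buffer would not.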

    Global-Scale Resource Survey and Performance Monitoring of Public OGC Web Map Services

    One of the most widely implemented service standards provided by the Open Geospatial Consortium (OGC) to the user community is the Web Map Service (WMS). WMS is widely employed globally, but there is limited knowledge of the global distribution, adoption status or service quality of these online WMS resources. To fill this void, we investigated global WMS resources and performed distributed performance monitoring of these services. This paper explicates a crawling method to discover WMSs and a distributed monitoring framework that was used to monitor 46,296 WMSs continuously for over one year. We analyzed server locations, provider types, themes, the spatiotemporal coverage of map layers and the service versions for 41,703 valid WMSs. Furthermore, we appraised the stability and performance of the basic operations (i.e., GetCapabilities and GetMap) for 1,210 selected WMSs. We discuss the major reasons for request errors and performance issues, as well as the relationship between service response times and the spatiotemporal distribution of client monitoring sites. This paper will help service providers, end users and standards developers grasp the status of global WMS resources and understand the adoption status of OGC standards. The conclusions drawn in this paper can benefit geospatial resource discovery and service performance evaluation, and guide service performance improvements. (24 pages, 15 figures)
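For context, a monitoring client probes a WMS by issuing GetCapabilities and GetMap requests like those sketched below. The endpoint URL and layer name are placeholders, and the parameter set is a simplification of the WMS 1.3.0 convention (STYLES, among others, is omitted).

```python
from urllib.parse import urlencode

def wms_request(base_url, operation, **extra):
    """Build a WMS 1.3.0 request URL for the given operation
    (GetCapabilities or GetMap). base_url is a placeholder;
    substitute a real WMS endpoint before issuing the request."""
    params = {"SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": operation}
    params.update(extra)  # operation-specific parameters
    return base_url + "?" + urlencode(params)

# A capabilities probe and a small map-tile probe:
caps = wms_request("https://example.org/wms", "GetCapabilities")
tile = wms_request("https://example.org/wms", "GetMap",
                   LAYERS="roads", CRS="EPSG:4326",
                   BBOX="-90,-180,90,180", WIDTH=256, HEIGHT=256,
                   FORMAT="image/png")
```

Timing such requests from geographically distributed clients yields the response-time and error statistics discussed in the paper.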

    Boosting in Image Quality Assessment

    In this paper, we analyze the effect of boosting in image quality assessment through multi-method fusion. Existing multi-method studies focus on proposing a single quality estimator; in contrast, we investigate the generalizability of multi-method fusion as a framework. In addition to the support vector machines commonly used in multi-method fusion, we propose using neural networks for the boosting. To span different types of image quality assessment algorithms, we use quality estimators based on fidelity, perceptually-extended fidelity, structural similarity, spectral similarity, color, and learning. In the experiments, we perform k-fold cross-validation using the LIVE, the multiply distorted LIVE, and the TID 2013 databases, and the performance of the image quality assessment algorithms is measured via accuracy-, linearity-, and ranking-based metrics. Based on the experiments, we show that boosting methods generally improve the performance of image quality assessment, and that the level of improvement depends on the type of boosting algorithm. Our experimental results also indicate that boosting the worst-performing quality estimator with two or more additional methods leads to statistically significant performance enhancements independent of the boosting technique, and that neural network-based boosting outperforms support vector machine-based boosting when two or more methods are fused. (Paper: 6 pages, 5 tables, 1 figure; Presentation: 16 slides; ancillary files)
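A minimal stand-in for the fusion stage: fit linear weights that combine two quality estimators' scores against subjective scores by least squares. The paper's fusion uses support vector machines and neural networks; this two-estimator, no-intercept fit is only illustrative, and all names are hypothetical.

```python
def fuse_weights(scores_a, scores_b, mos):
    """Fit weights (w_a, w_b) minimizing the squared error between
    w_a*a + w_b*b and subjective mean opinion scores (MOS) --
    a linear stand-in for the learned fusion stage."""
    # Normal equations for a 2-parameter least-squares fit (no intercept).
    saa = sum(a * a for a in scores_a)
    sbb = sum(b * b for b in scores_b)
    sab = sum(a * b for a, b in zip(scores_a, scores_b))
    say = sum(a * y for a, y in zip(scores_a, mos))
    sby = sum(b * y for b, y in zip(scores_b, mos))
    det = saa * sbb - sab * sab  # assumes the two estimators disagree somewhere
    wa = (say * sbb - sby * sab) / det
    wb = (sby * saa - say * sab) / det
    return wa, wb
```

On synthetic data generated as mos = 0.7·a + 0.3·b, the fit recovers those weights.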

    Synthetic Iris Presentation Attack using iDCGAN

    The reliability and accuracy of the iris biometric modality have prompted its large-scale deployment for critical applications such as border control and national ID projects. The extensive growth of iris recognition systems has raised apprehensions about the susceptibility of these systems to various attacks. In the past, researchers have examined the impact of various iris presentation attacks such as textured contact lenses and print attacks. In this research, we present a novel presentation attack using deep learning-based synthetic iris generation. Utilizing the generative capability of deep convolutional generative adversarial networks and iris quality metrics, we propose a new framework, named iDCGAN (iris deep convolutional generative adversarial network), for generating realistic-appearing synthetic iris images. We demonstrate the effect of these synthetically generated iris images as a presentation attack on iris recognition using a commercial system. The state-of-the-art presentation attack detection framework DESIST is utilized to analyze whether it can discriminate these synthetically generated iris images from real images. The experimental results illustrate that mitigating the proposed synthetic presentation attack is of paramount importance. (International Joint Conference on Biometrics 201)

    On the Challenges and KPIs for Benchmarking Open-Source NFV MANO Systems: OSM vs ONAP

    NFV management and orchestration (MANO) systems are being developed to meet the agile and flexible management requirements of virtualized network services in the 5G era and beyond. In this regard, ETSI ISG NFV has specified a standard NFV MANO system that is used as a reference by MANO system vendors as well as open-source MANO projects. However, in the absence of MANO-specific KPIs, it is difficult for users to make an informed decision on the choice of the MANO system best suited to their needs. Given the absence of any formal MANO-specific KPIs with which the performance of a MANO system can be quantified, benchmarked and compared, users are left with simply comparing the claimed feature sets. The motivation of this paper is thus to highlight the challenges of testing and validating MANO systems in general, and to propose MANO-specific KPIs. Based on the proposed KPIs, we analyze and compare the performance of the two most popular open-source MANO projects, ONAP and OSM, using a complex open-source vCPE VNF, and identify the feature/performance gaps. In addition, we provide a sketch of a test-jig that has been designed for benchmarking MANO systems. (12 pages, 11 figures)
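One of the simplest KPIs of this kind is the wall-clock duration of a lifecycle operation (e.g., VNF onboarding or instantiation time). A generic timing harness might look like the sketch below; `operation` stands in for a call to the MANO system's API and is purely hypothetical, not OSM's or ONAP's actual interface.

```python
import time

def measure_kpi(operation, label):
    """Time a MANO lifecycle operation and report it as a KPI sample.
    `operation` is any callable standing in for a MANO API call
    (e.g., onboard a package, instantiate a network service)."""
    start = time.perf_counter()
    result = operation()            # the lifecycle operation under test
    elapsed = time.perf_counter() - start
    return {"kpi": label, "seconds": elapsed, "result": result}

# Usage with a stand-in operation:
kpi = measure_kpi(lambda: "vnf-instantiated", "instantiation_time")
```

Repeating such measurements across runs and MANO systems yields the comparable KPI distributions the paper argues for.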

    Towards Data-driven Simulation of End-to-end Network Performance Indicators

    Novel vehicular communication methods are mostly analyzed simulatively or analytically, as real-world performance tests are highly time-consuming and cost-intensive. Moreover, the high number of uncontrollable effects makes it practically impossible to reevaluate different approaches under exactly the same conditions. However, because these methods massively simplify the effects of the radio environment and various cross-layer interdependencies, the results for end-to-end indicators (e.g., the resulting data rate) often differ significantly from real-world measurements. In this paper, we present a data-driven approach that exploits a combination of multiple machine learning methods for modeling the end-to-end behavior of network performance indicators within vehicular networks. The proposed approach can be exploited for fast and close-to-reality evaluation and optimization of new methods in a controllable environment, as it implicitly considers cross-layer dependencies between measurable features. In an example case study on opportunistic vehicular data transfer, the proposed approach is validated against real-world measurements and a classical system-level network simulation setup. Although the proposed method requires only a fraction of the computation time of the latter, it achieves a significantly better match with the real-world evaluations.
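The core idea, predicting an end-to-end indicator directly from measured context features rather than simulating every protocol layer, can be illustrated with a toy k-nearest-neighbour regressor. The paper combines multiple, more capable machine learning models; the feature layout here is an assumption for illustration.

```python
def predict_data_rate(history, features, k=3):
    """k-NN sketch of the data-driven idea: predict an end-to-end
    indicator (e.g., data rate in Mbit/s) from previously measured
    context features (e.g., signal strength, vehicle speed).
    history: list of (feature_vector, measured_rate) pairs."""
    def dist(u, v):
        # Euclidean distance between two feature vectors.
        return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5
    # Average the measured rates of the k closest past observations.
    nearest = sorted(history, key=lambda item: dist(item[0], features))[:k]
    return sum(rate for _, rate in nearest) / len(nearest)
```

Because the model is trained on real measurements, its predictions implicitly carry the cross-layer effects a simplified simulator would miss.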

    Listening to Chaotic Whispers: A Deep Learning Framework for News-oriented Stock Trend Prediction

    Stock trend prediction plays a critical role in seeking maximized profit from stock investment. However, precise trend prediction is very difficult given the highly volatile and non-stationary nature of the stock market. Exploding information on the Internet, together with the advancing development of natural language processing and text mining techniques, has enabled investors to unveil market trends and volatility from online content. Unfortunately, the quality, trustworthiness and comprehensiveness of online content related to the stock market vary drastically, and a large portion consists of low-quality news, comments, or even rumors. To address this challenge, we imitate the learning process of human beings facing such chaotic online news, driven by three principles: sequential content dependency, diverse influence, and effective and efficient learning. To capture the first two principles, we design a Hybrid Attention Network to predict the stock trend based on the sequence of recent related news. Moreover, we apply a self-paced learning mechanism to capture the third principle. Extensive experiments on real-world stock market data demonstrate the effectiveness of our approach.
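The "diverse influence" principle, weighting each news item by its relevance before aggregating, is what an attention layer provides. A toy soft-attention pooling over news embeddings might look like this; the dot-product scoring and the tiny vectors are simplifications of the paper's Hybrid Attention Network, not its actual architecture.

```python
import math

def attention_pool(news_vectors, query):
    """Soft-attention pooling over one day's news embeddings:
    score each item against a learned query, softmax the scores,
    and return the weighted average embedding plus the weights."""
    # Relevance score of each news item: dot product with the query.
    scores = [sum(q * x for q, x in zip(query, v)) for v in news_vectors]
    # Numerically stable softmax over the scores.
    m = max(scores)
    weights = [math.exp(s - m) for s in scores]
    z = sum(weights)
    weights = [w / z for w in weights]
    # Weighted average of the embeddings.
    dim = len(news_vectors[0])
    pooled = [sum(w * v[i] for w, v in zip(weights, news_vectors))
              for i in range(dim)]
    return pooled, weights
```

A relevant item (here, one aligned with the query) dominates the pooled representation, while irrelevant items are almost ignored.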

    Uncovering the Social Interaction in Swarm Intelligence with Network Science

    Swarm intelligence is the collective behavior emerging in systems with locally interacting components. Because of their self-organization capabilities, swarm-based systems show properties essential for handling real-world problems, such as robustness, scalability, and flexibility. Yet we do not know why swarm-based algorithms work well, nor can we compare the different approaches in the literature. The lack of a common framework capable of characterizing these swarm-based algorithms, transcending their particularities, has led to a stream of publications inspired by different aspects of nature without a systematic comparison against existing approaches. Here, we address this gap by introducing a network-based framework, the interaction network, to examine computational swarm-based systems through the lens of the social dynamics encoded in that network; a clear example of network science being applied to bring further clarity to a complicated field within artificial intelligence. We discuss the social interactions of four well-known swarm-based algorithms and provide an in-depth case study of Particle Swarm Optimization. The interaction network enables researchers to study swarm algorithms as systems, removing algorithm particularities from the analyses while focusing on the structure of the social interactions. (23 pages, 6 figures)
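To make the interaction-network idea concrete, the sketch below runs a toy global-best PSO on f(x) = x² and records an edge each time a particle is informed by the current global best. The coefficients and topology are illustrative assumptions, not the paper's exact construction.

```python
import random

def pso_interaction_edges(swarm, steps=50):
    """Toy 1-D PSO (global-best topology) that logs interaction-network
    edges (informer, informed) at every iteration."""
    random.seed(1)                     # deterministic for the example
    pos = list(swarm)
    vel = [0.0] * len(pos)
    pbest = list(pos)                  # each particle's personal best
    edges = []
    f = lambda x: x * x                # objective to minimize
    for _ in range(steps):
        # Index of the current global best.
        g = min(range(len(pos)), key=lambda i: f(pbest[i]))
        for i in range(len(pos)):
            if i != g:
                edges.append((g, i))   # g informs particle i this step
            vel[i] = (0.7 * vel[i]
                      + 1.5 * random.random() * (pbest[i] - pos[i])
                      + 1.5 * random.random() * (pbest[g] - pos[i]))
            pos[i] += vel[i]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i]
    return edges
```

Aggregating these edges over a run gives the weighted interaction network whose structure, rather than the update rule itself, is what the framework analyzes.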