
    No-reference bitstream-based visual quality impairment detection for high definition H.264/AVC encoded video sequences

    Ensuring and maintaining adequate Quality of Experience towards end-users are key objectives for video service providers, not only for increasing customer satisfaction but also as a service differentiator. However, in the case of High Definition video streaming over IP-based networks, network impairments such as packet loss can severely degrade the perceived visual quality. Several standards organizations have established a minimum set of performance objectives which should be achieved to obtain satisfactory quality. Therefore, video service providers should continuously monitor the network and the quality of the received video streams in order to detect visual degradations. Objective video quality metrics enable automatic measurement of perceived quality. Unfortunately, the most reliable metrics require access to both the original and the received video streams, which makes them inappropriate for real-time monitoring. In this article, we present a novel no-reference bitstream-based visual quality impairment detector which enables real-time detection of visual degradations caused by network impairments. By only incorporating information extracted from the encoded bitstream, network impairments are classified as visible or invisible to the end-user. Our results show that impairment visibility can be classified with high accuracy, which enables real-time validation of the existing performance objectives.
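    The classification step described above can be pictured with a small, hypothetical sketch: a handful of features parsed from the received bitstream (here invented ones such as the number of lost macroblocks, average motion-vector magnitude, and distance to the next IDR frame) feed a simple classifier that labels each impairment as visible or invisible. The features, data, and decision-tree model below are illustrative assumptions, not the detector proposed in the paper.

```python
# Hypothetical sketch: classify packet-loss impairments as visible/invisible
# from features parsed out of the received H.264/AVC bitstream.
# Feature names and training data are illustrative assumptions, not the
# paper's actual parameter set.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Each row: [lost_macroblocks, avg_motion_vector_magnitude, frames_to_next_IDR]
X_train = np.array([
    [120, 8.5, 20],   # large loss, high motion, long error propagation -> visible
    [  4, 0.3,  1],   # tiny loss, static scene, quick refresh          -> invisible
    [ 60, 5.0, 12],
    [ 10, 1.2,  2],
])
y_train = np.array([1, 0, 1, 0])  # 1 = visible impairment, 0 = invisible

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X_train, y_train)

# Features extracted from a newly received, impaired bitstream segment.
x_new = np.array([[80, 6.1, 15]])
print("impairment visible" if clf.predict(x_new)[0] else "impairment invisible")
```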

    Assessing the role of EO in biodiversity monitoring: options for integrating in-situ observations with EO within the context of the EBONE concept

    The European Biodiversity Observation Network (EBONE) is a European contribution on terrestrial monitoring to GEO BON, the Group on Earth Observations Biodiversity Observation Network. EBONE's aim is to develop a system of biodiversity observation at regional, national and European levels by assessing existing approaches in terms of their validity and applicability, starting in Europe and then expanding to regions in Africa. The objective of EBONE is to deliver:
    1. a sound scientific basis for the production of statistical estimates of stock and change of key indicators;
    2. the development of a system for estimating past changes and for forecasting and testing policy options and management strategies for threatened ecosystems and species;
    3. a proposal for a cost-effective biodiversity monitoring system.
    There is a consensus that Earth Observation (EO) has a role to play in monitoring biodiversity. With its capacity to observe detailed spatial patterns and variability across large areas at regular intervals, EO is intuitively well placed to deliver the type of spatial and temporal coverage that is beyond the reach of in-situ efforts. Furthermore, when considering the emerging networks of in-situ observations, the prospect of enhancing the quality of the information whilst reducing cost through integration is compelling. This report gives a realistic assessment of the role of EO in biodiversity monitoring and the options for integrating in-situ observations with EO within the context of the EBONE concept (cf. EBONE-ID1.4). The assessment is mainly based on a set of targeted pilot studies. Building on this assessment, the report then presents a series of recommendations on the best options for using EO in an effective, consistent and sustainable biodiversity monitoring scheme. The issues we faced were many:
    1. Integration can be interpreted in different ways: one interpretation is the combined use of independent data sets to deliver a different but improved data set; another is the use of one data set to complement another.
    2. The targeted improvement varies with stakeholder group: some seek more efficiency, others more reliable estimates (accuracy and/or precision), others more detail in space and/or time, or more of everything.
    3. Integration requires a link between the datasets (EO and in-situ). The strength of the link between reflected electromagnetic radiation and the habitats and their biodiversity observed in-situ is a function of many variables, for example the spatial scale and timing of the observations, the adopted nomenclature for classification, the complexity of the landscape in terms of composition, spatial structure and the physical environment, and the habitat and land cover types under consideration.
    4. The type of EO data available varies (as a function of, e.g., budget, size and location of the region, cloudiness, and national and/or international investment in airborne campaigns or space technology), which determines its capability to deliver the required output.
    EO and in-situ data could be combined in different ways, depending on the type of integration to be achieved and the targeted improvement. We aimed for an improvement in accuracy (i.e. a reduction in the error of our indicator estimate calculated for an environmental zone). Furthermore, EO would also provide the spatial patterns for correlated in-situ data.
    EBONE, in its initial development, focused on three main indicators: (i) the extent and change of habitats of European interest, in the context of a general habitat assessment; (ii) the abundance and distribution of selected species (birds, butterflies and plants); and (iii) the fragmentation of natural and semi-natural areas. For habitat extent, we decided that it did not matter how in-situ data were integrated with EO as long as we could demonstrate that acceptable accuracies could be achieved and that precision could be consistently improved. The nomenclature used to map habitats in-situ was the General Habitat Classification. We considered the following options, in which EO and in-situ data play different roles: using in-situ samples to re-calibrate a habitat map independently derived from EO; improving the accuracy of in-situ sampled habitat statistics by post-stratification with correlated EO data; and using in-situ samples to train the classification of EO data into habitat types where the EO data deliver full coverage or a larger number of samples. For some of the above cases we also considered the impact that the sampling strategy employed to deliver the samples would have on the accuracy and precision achieved. Restricted access to Europe-wide species data prevented work on the indicator ‘abundance and distribution of species’. With respect to the indicator ‘fragmentation’, we investigated ways of delivering EO-derived measures of habitat patterns that are meaningful to sampled in-situ observations.
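    Of the integration options listed above, post-stratification is the easiest to make concrete. The sketch below, with invented stratum weights and sample values, shows how an EO-derived habitat map could supply area weights that turn a simple in-situ sample mean into a post-stratified estimate with (typically) lower variance; it is a schematic illustration, not an EBONE computation.

```python
# Illustrative post-stratification of an in-situ habitat indicator using
# EO-derived strata. Stratum area shares and sample values are invented
# numbers, not EBONE results.
import numpy as np

# EO map gives the area share (weight) of each stratum in the environmental zone.
strata_weights = {"forest": 0.5, "grassland": 0.3, "wetland": 0.2}

# In-situ sample values of the indicator, grouped by the EO stratum they fall in.
samples = {
    "forest":    np.array([0.62, 0.58, 0.70, 0.66]),
    "grassland": np.array([0.35, 0.41, 0.38]),
    "wetland":   np.array([0.80, 0.76]),
}

# Simple (unweighted) sample mean ignores the EO information.
all_values = np.concatenate(list(samples.values()))
print("simple mean:              %.3f" % all_values.mean())

# Post-stratified estimate: weight each stratum mean by its EO-derived area share.
post_stratified = sum(w * samples[s].mean() for s, w in strata_weights.items())
print("post-stratified mean:     %.3f" % post_stratified)

# Variance of the post-stratified estimator (per-stratum sample variance / n,
# weighted by squared area share); typically smaller than that of the simple
# estimator when the EO strata correlate well with the indicator.
var_ps = sum(w**2 * samples[s].var(ddof=1) / len(samples[s])
             for s, w in strata_weights.items())
print("post-stratified variance: %.5f" % var_ps)
```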

    Predicting growing stock volume of Eucalyptus plantations using 3-D point clouds derived from UAV imagery and ALS data

    Estimating forest inventory variables is important in monitoring forest resources and mitigating climate change. In this respect, forest managers require flexible, non-destructive methods for estimating volume and biomass. High-resolution and low-cost remote sensing data are increasingly available to measure three-dimensional (3D) canopy structure and to model forest structural attributes. The main objective of this study was to evaluate and compare individual tree volume estimates derived from high-density point clouds obtained from airborne laser scanning (ALS) and digital aerial photogrammetry (DAP) in Eucalyptus spp. plantations. Object-based image analysis (OBIA) techniques were applied for individual tree crown (ITC) delineation. The ITC algorithm correctly detected and delineated 199 trees from ALS-derived data, while 192 trees were correctly identified using DAP-based point clouds acquired from Unmanned Aerial Vehicles (UAVs), representing accuracy levels of 62% and 60%, respectively. Addressing volume modelling, a non-linear regression fit based on individual tree height and individual crown area derived from the ITC delineation provided the following results: Model Efficiency (Mef) = 0.43 and 0.46, Root Mean Square Error (RMSE) = 0.030 m³ and 0.026 m³, rRMSE = 20.31% and 19.97%, and approximately unbiased results (0.025 m³ and 0.0004 m³), using DAP- and ALS-based estimations, respectively. No significant difference was found between the observed values (field data) and the volume estimates from ALS and DAP (p-values from the t-test statistic = 0.99 and 0.98, respectively). The proposed approaches could also be used to estimate basal area or biomass stocks in Eucalyptus spp. plantations.
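    The volume modelling step can be illustrated schematically: fit a non-linear model of volume as a function of ITC-derived tree height and crown area, then report RMSE, rRMSE, and model efficiency. The allometric model form, the data, and the use of SciPy's curve_fit below are assumptions made for illustration only, not the study's fitted model or results.

```python
# Sketch of fitting an allometric-style non-linear model
#   volume = a * height**b * crown_area**c
# to ITC-derived tree height and crown area, then computing RMSE, rRMSE and
# model efficiency (Mef). Model form and data are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def volume_model(X, a, b, c):
    height, crown_area = X
    return a * height**b * crown_area**c

# Hypothetical field-measured volumes (m3) and ITC-derived predictors.
height     = np.array([18.0, 22.5, 15.2, 25.1, 20.3, 17.8])   # m
crown_area = np.array([ 9.5, 14.2,  7.1, 16.8, 12.0,  8.9])   # m2
volume_obs = np.array([0.11, 0.19, 0.07, 0.24, 0.15, 0.10])   # m3

params, _ = curve_fit(volume_model, (height, crown_area), volume_obs,
                      p0=[0.0005, 1.5, 0.5], maxfev=10000)
volume_pred = volume_model((height, crown_area), *params)

rmse  = np.sqrt(np.mean((volume_obs - volume_pred) ** 2))
rrmse = 100 * rmse / volume_obs.mean()
mef   = 1 - (np.sum((volume_obs - volume_pred) ** 2)
             / np.sum((volume_obs - volume_obs.mean()) ** 2))

print(f"RMSE = {rmse:.3f} m3, rRMSE = {rrmse:.1f}%, Mef = {mef:.2f}")
```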

    Constructing a no-reference H.264/AVC bitstream-based video quality metric using genetic programming-based symbolic regression

    In order to ensure optimal quality of experience toward end users during video streaming, automatic video quality assessment has become an important field of interest for video service providers. Objective video quality metrics try to estimate perceived quality with high accuracy and in an automated manner. In traditional approaches, these metrics model the complex properties of the human visual system. More recently, however, it has been shown that machine learning approaches can also yield competitive results. In this paper, we present a novel no-reference bitstream-based objective video quality metric that is constructed by genetic programming-based symbolic regression. A key benefit of this approach is that it produces reliable white-box models that allow us to determine the importance of the parameters. Additionally, these models can provide human insight into the underlying principles of subjective video quality assessment. Numerical results show that perceived quality can be modeled with high accuracy using only parameters extracted from the received video bitstream.
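    As a rough illustration of the approach, the sketch below runs genetic-programming-based symbolic regression over a few synthetic bitstream-level features to evolve a readable, white-box expression for perceived quality. The gplearn library, the feature set, and the synthetic targets are assumptions made for this example; the paper does not prescribe a specific toolkit.

```python
# Minimal sketch of GP-based symbolic regression for a no-reference quality
# metric. gplearn is an assumed toolkit; features and targets are synthetic.
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)

# Hypothetical per-sequence bitstream features:
# [avg_QP, bitrate_Mbps, fraction_of_lost_slices]
X = rng.uniform([20, 1.0, 0.0], [45, 12.0, 0.05], size=(200, 3))
# Synthetic "subjective quality" target just to make the example runnable.
y = 5.0 - 0.06 * X[:, 0] + 0.15 * X[:, 1] - 30.0 * X[:, 2] + rng.normal(0, 0.1, 200)

gp = SymbolicRegressor(population_size=500, generations=20,
                       function_set=("add", "sub", "mul", "div"),
                       parsimony_coefficient=0.01, random_state=0)
gp.fit(X, y)

# The evolved model is a readable ("white-box") expression over the features.
print(gp._program)
print("predicted quality for one sequence:", gp.predict(X[:1])[0])
```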

    Predicting Defects in Software Using Grammar-Guided Genetic Programming

    Knowledge of software quality allows an organization to allocate the resources needed for code maintenance. Maintaining software is considered a high-cost factor for most organizations. Consequently, there is a need to assess software modules with respect to the defects that will arise. Addressing the prediction of software defects by means of computational intelligence has only recently emerged. In this paper, we investigate the capability of the genetic programming approach to produce solutions composed of decision rules. We applied the model to four NASA software engineering databases. The overall performance of this system demonstrates its competitiveness compared with past methodologies, and it is shown to be capable of producing simple, highly accurate, tangible rules.
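    A toy example of the kind of output such a system aims for: a single decision rule over module metrics, scored by how often it matches the known defect labels (the sort of accuracy measure that could drive the GP fitness). The rule, thresholds, and module data below are invented; they are not the evolved rules or the NASA datasets.

```python
# Toy illustration: scoring one candidate decision rule of the kind a
# grammar-guided GP might evolve for defect prediction. Rule, thresholds and
# module metrics are invented, not the paper's rules or the NASA MDP data.

# Each module: (lines_of_code, cyclomatic_complexity, actually_defective)
modules = [
    (120,  4, False),
    (850, 22, True),
    (300, 15, True),
    ( 60,  2, False),
    (500,  9, False),
    (940, 30, True),
]

def candidate_rule(loc, complexity):
    """IF complexity > 10 OR (loc > 600 AND complexity > 5) THEN defect-prone."""
    return complexity > 10 or (loc > 600 and complexity > 5)

# The rule's accuracy on labeled modules would serve as (part of) its GP fitness.
hits = sum(candidate_rule(loc, cc) == defective for loc, cc, defective in modules)
print(f"rule accuracy: {hits}/{len(modules)} = {hits / len(modules):.2f}")
```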

    The Challenges in SDN/ML Based Network Security : A Survey

    Machine Learning is gaining popularity in the network security domain as many more network-enabled devices get connected, as malicious activities become stealthier, and as new technologies like Software Defined Networking (SDN) emerge. Sitting at the application layer and communicating with the control layer, machine-learning-based SDN security models exercise a huge influence on the routing/switching of the entire SDN; compromising these models is consequently a very desirable goal for attackers. Previous surveys have covered either adversarial machine learning or the general vulnerabilities of SDNs, but not both. Through an examination of the latest ML-based SDN security applications and a close look at ML/SDN-specific vulnerabilities, together with common attack methods on ML, this paper serves as a unique survey, making a case for more secure development processes for ML-based SDN security applications.
    Comment: 8 pages. arXiv admin note: substantial text overlap with arXiv:1705.0056

    Daily Stress Recognition from Mobile Phone Data, Weather Conditions and Individual Traits

    Research has shown that stress reduces quality of life and causes many diseases. For this reason, several researchers have devised stress detection systems based on physiological parameters. However, these systems require the user to continuously carry obtrusive sensors. In this paper, we propose an alternative approach, providing evidence that daily stress can be reliably recognized from behavioral metrics derived from the user's mobile phone activity and from additional indicators, such as weather conditions (data pertaining to transitory properties of the environment) and personality traits (data concerning permanent dispositions of individuals). Our multifactorial statistical model, which is person-independent, achieves an accuracy of 72.28% on a 2-class daily stress recognition problem. The model is efficient to implement in most multimedia applications thanks to its highly reduced, low-dimensional feature space (32 dimensions). Moreover, we identify and discuss the indicators that have strong predictive power.
    Comment: ACM Multimedia 2014, November 3-7, 2014, Orlando, Florida, US
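    A schematic sketch of such a person-independent, 2-class recognition pipeline is given below: daily mobile-phone activity counts, weather, and a personality-trait score are stacked into a feature vector and fed to a cross-validated classifier. The feature choices, synthetic data, and logistic-regression model are illustrative assumptions, not the multifactorial model of the paper.

```python
# Schematic person-independent daily-stress classifier combining mobile-phone,
# weather, and personality features. All features and data are synthetic
# placeholders used only to make the pipeline runnable.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_days = 300

# Hypothetical daily feature vector (a few of the ~32 dimensions):
# [calls_made, sms_sent, unique_contacts, mean_temperature, neuroticism_score]
X = np.column_stack([
    rng.poisson(6, n_days),        # calls made
    rng.poisson(10, n_days),       # SMS sent
    rng.poisson(4, n_days),        # unique contacts
    rng.normal(15, 8, n_days),     # mean temperature (Celsius)
    rng.uniform(1, 5, n_days),     # neuroticism trait score
])
# Synthetic stressed / not-stressed labels just to make the example run.
y = (0.2 * X[:, 0] - 0.1 * X[:, 3] + 0.8 * X[:, 4]
     + rng.normal(0, 1, n_days)) > 2.5

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y.astype(int), cv=5, scoring="accuracy")
print("cross-validated accuracy: %.2f" % scores.mean())
```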