
    CORRELATION OF ARTIFICIAL INTELLIGENCE TECHNIQUES WITH SOFT COMPUTING IN VARIOUS AREAS

    Artificial Intelligence (AI) is a branch of computer science concerned with designing intelligent computer systems, that is, systems that exhibit the characteristics we associate with intelligence in human behavior. It can be defined as the field that studies and designs intelligent agents. Traditional AI deals with cognitive and biological models that imitate and describe human information-processing skills, which help systems perceive and interact with their environment. In the modern era, however, developers can build systems that meet the demanding information-processing needs of government and industry by choosing from a large set of mature technologies. Soft Computing (SC) is a subfield of AI focused on the design of intelligent systems that process uncertain, imprecise, and incomplete information. Applied to real-world problems, it frequently offers more robust, tractable, and less costly solutions than those obtained by more conventional mathematical techniques. This paper reviews the correlation of artificial intelligence techniques with soft computing in various areas.

    Multi-Scale Fusion of Enhanced Hazy Images Using Particle Swarm Optimization and Fuzzy Intensification Operators

    Dehazing from a single image is still a challenging task, where the thickness of the haze depends on depth information. Researchers focus on this area by eliminating haze from the single image using restoration techniques based on the hazy image model. Using the hazy image model, the haze is eliminated by estimating atmospheric light, transmission, and depth. A few researchers have focused on enhancement-based methods for eliminating haze from images. Enhancement-based dehazing algorithms can lead to saturation of pixels in the enhanced image, because fixed values are assigned to the parameters used to enhance the image. Therefore, enhancement-based methods fail in the proper tuning of these parameters. This can be overcome by optimizing the parameters that are used to enhance the images. This paper describes the research work carried out to derive two enhanced images from a single input hazy image using particle swarm optimization and fuzzy intensification operators. The two derived images are further fused using a multi-scale fusion technique. The objective evaluation shows that the entropy of the haze-eliminated images is comparatively better than that of state-of-the-art algorithms. Also, fog density is measured using an evaluator known as the fog aware density evaluator (FADE), which considers all the statistical parameters to differentiate a hazy image from a highly visible natural image. Using this evaluator, we found that the fog density is lower in our proposed method when compared with other enhancement-based haze elimination algorithms.
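    The fuzzy intensification step the abstract refers to can be illustrated with Zadeh's classic INT operator; a minimal sketch (the fixed threshold 0.5 and exponent 2 are the textbook defaults, whereas the paper tunes such parameters with particle swarm optimization):

```python
import numpy as np

def fuzzy_intensification(img, iterations=1):
    """Apply the classic fuzzy INT operator to a grayscale image in [0, 1].

    Membership values below 0.5 are suppressed and values above 0.5 are
    boosted, increasing contrast. The threshold and exponent here are the
    standard fixed choices; the paper instead optimizes such parameters.
    """
    mu = np.clip(img, 0.0, 1.0)
    for _ in range(iterations):
        low = mu < 0.5
        mu = np.where(low, 2.0 * mu**2, 1.0 - 2.0 * (1.0 - mu)**2)
    return mu

# A flat, hazy-looking patch gains contrast after the operator.
patch = np.array([[0.45, 0.55], [0.40, 0.60]])
enhanced = fuzzy_intensification(patch)
```

    Values are pushed away from the crossover point 0.5, which is why fixed parameters can over-saturate: repeated application drives memberships toward 0 or 1.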

    A delay-based dynamic scheduling algorithm for bag-of-task workflows with stochastic task execution times in clouds

    [EN] Bag-of-Tasks (BoT) workflows are widespread in many big data analysis fields. However, there are very few cloud resource provisioning and scheduling algorithms tailored for BoT workflows. Furthermore, existing algorithms fail to consider the stochastic task execution times of BoT workflows, which leads to deadline violations and increased resource renting costs. In this paper, we propose a dynamic cloud resource provisioning and scheduling algorithm which aims to fulfill the workflow deadline by using the sum of the task execution time expectation and its standard deviation to estimate real task execution times. A bag-based delay scheduling strategy and a single-type based virtual machine interval renting method are presented to decrease the resource renting cost. The proposed algorithm is evaluated using the cloud simulator ElasticSim, which is extended from CloudSim. The results show that the dynamic algorithm decreases the resource renting cost while guaranteeing the workflow deadline, compared to existing algorithms. (C) 2017 Elsevier B.V. All rights reserved.

    The authors would like to thank the reviewers for their constructive and useful comments. This work is supported by the National Natural Science Foundation of China (Grant Nos. 61602243 and 61572127), the Natural Science Foundation of Jiangsu Province (Grant No. BK20160846), the Jiangsu Key Laboratory of Image and Video Understanding for Social Safety (Nanjing University of Science and Technology, Grant No. 30916014107), and the Fundamental Research Funds for the Central Universities (Grant No. 30916015104). Ruben Ruiz is partially supported by the Spanish Ministry of Economy and Competitiveness under the project "SCHEYARD" (No. DP12015-65895-R), co-financed by FEDER funds.

    Cai, Z.; Li, X.; Ruiz García, R.; Li, Q. (2017). A delay-based dynamic scheduling algorithm for bag-of-task workflows with stochastic task execution times in clouds. Future Generation Computer Systems. 71:57-72. https://doi.org/10.1016/j.future.2017.01.020
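    The abstract's estimate of real task execution times, the sum of the expectation and the standard deviation, can be sketched as follows (the function names and the sequential single-VM deadline check are illustrative assumptions, not the paper's full scheduler):

```python
def estimated_runtime(mean, std, k=1.0):
    """Estimate a stochastic task's execution time as E[t] + k*sigma.

    The paper uses the sum of expectation and standard deviation (k = 1),
    so deadline planning budgets for variability rather than relying on
    the mean alone. The factor k is our generalisation for illustration.
    """
    return mean + k * std

def bag_deadline_ok(tasks, deadline, k=1.0):
    """Check whether a bag of tasks, run sequentially on one rented VM,
    fits the deadline under the pessimistic per-task estimate."""
    return sum(estimated_runtime(m, s, k) for m, s in tasks) <= deadline

tasks = [(10.0, 2.0), (5.0, 1.0)]            # (mean, std) pairs, e.g. minutes
fits = bag_deadline_ok(tasks, deadline=20.0)  # 12 + 6 = 18 <= 20
```

    Planning against mean + std rather than the mean alone is what lets the algorithm trade a slightly higher estimated cost for far fewer deadline violations when actual runtimes fluctuate.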

    Sugarcane Crop Row Detection from UAV Images Using Semantic Segmentation and the Radon Transform

    In recent years, UAVs (Unmanned Aerial Vehicles) have become increasingly popular in the agricultural sector, promoting and enabling the application of aerial image monitoring in both scientific and business contexts. Images captured by UAVs are fundamental for precision farming practices, as they allow activities that deal with low- and medium-altitude images. After the effective sowing, the scenario of the planted area may change drastically over time due to the appearance of erosion, gaps, death and drying of part of the crop, animal interventions, etc. Thus, the process of detecting the crop rows is strongly important for planning the harvest, estimating the use of inputs, controlling production costs, plant stand counts, early correction of sowing failures, more efficient watering, etc. In addition, the geolocation information of the detected lines allows the use of autonomous machinery and a better application of inputs, reducing financial costs and aggression to the environment. In this work we address the problem of detection and segmentation of sugarcane crop lines using UAV imagery. First, we experimented with an approach based on a Genetic Algorithm (GA) combined with Otsu's method to produce binarized images. Then, for several reasons, including the recent relevance of Semantic Segmentation in the literature, its levels of abstraction, and the unsatisfactory results of Otsu's method combined with the GA, we proposed a new approach based on a Semantic Segmentation Network (SSN), divided into two steps. First, we use a Convolutional Neural Network (CNN) to automatically segment the images, classifying their regions as crop lines or as non-planted soil. Then, we use the Radon transform to reconstruct and improve the already segmented lines, making them more uniform and grouping fragments of lines and loose plants belonging to the same planting line.
    We compare our results with segmentation performed manually by experts, and the results demonstrate the efficiency and feasibility of our approach to the proposed task. (Master's dissertation.)
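    The Radon-transform idea of the second step, recovering dominant planting-line orientations from projections of the segmented mask, can be sketched with a discrete rotate-and-project approximation (a simplification of the paper's line reconstruction; the synthetic mask and function names are our own):

```python
import numpy as np
from scipy.ndimage import rotate

def dominant_line_angle(mask, angles=np.arange(0, 180, 1)):
    """Estimate the dominant orientation of rows in a binary crop mask
    via a discrete Radon transform: rotate the mask and sum its columns.
    The angle whose projection has the highest variance corresponds to
    rows seen exactly edge-on (thin, tall peaks in the projection)."""
    best_angle, best_score = 0.0, -np.inf
    for a in angles:
        proj = rotate(mask.astype(float), a, reshape=False, order=1).sum(axis=0)
        score = proj.var()
        if score > best_score:
            best_angle, best_score = a, score
    return best_angle

# Synthetic mask with vertical "crop rows" every 4 pixels.
mask = np.zeros((64, 64))
mask[:, ::4] = 1.0
angle = dominant_line_angle(mask)  # vertical rows project sharpest near 0 degrees
```

    Once the dominant angle is known, peaks of the projection at that angle give candidate row positions, which is how fragmented line segments and loose plants can be regrouped onto the same planting line.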

    The Department of Electrical and Computer Engineering Newsletter

    Summer 2017 news and notes for the University of Dayton's Department of Electrical and Computer Engineering.

    Factor Graphs for Computer Vision and Image Processing

    Factor graphs have been used extensively in the decoding of error correcting codes such as turbo codes, and in signal processing. However, while computer vision and pattern recognition are awash with graphical model usage, it is somewhat surprising that factor graphs remain under-researched in these communities. This is surprising because factor graphs naturally generalise both Markov random fields and Bayesian networks. Moreover, they are useful in modelling relationships between variables that are not necessarily probabilistic and allow for efficient marginalisation via a sum-product of probabilities. In this thesis, we present and illustrate the utility of factor graphs in the vision community through some of the field's popular problems. The thesis does so with a particular focus on maximum a posteriori (MAP) inference in graphical structures with layers. To this end, we are able to break down complex problems into factored representations and more computationally realisable constructions. Firstly, we present a sum-product framework that uses the explicit factorisation in local subgraphs from the partitioned factor graph of a layered structure to perform inference. This provides an efficient method to perform inference, since exact inference is attainable in the resulting local subtrees. Secondly, we extend this framework to the entire graphical structure without partitioning, and discuss preliminary ways to combine outputs from a multilevel construction. Lastly, we further our endeavour to combine evidence from different methods through a simplicial spanning tree reparameterisation of the factor graph in a way that ensures consistency, producing an ensembled and improved result. Throughout the thesis, the underlying feature we make use of is the enforcement of adjacency constraints using Delaunay triangulations, computed by adding points dynamically or by using a convex hull algorithm.
    The adjacency relationships from Delaunay triangulations help the factor graph approaches in this thesis to be both efficient and competitive for computer vision tasks. This is because of the low treewidth they provide in local subgraphs, as well as the reparameterised interpretation of the graph they form through the spanning tree of simplexes. While exact inference is known to be intractable for junction trees obtained from the loopy graphs in computer vision, in this thesis we are able to perform exact inference on our spanning tree of simplexes. More importantly, the approaches presented here are not restricted to the computer vision and image processing fields, but extend to more general applications that involve distributed computations.
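    The sum-product computation the thesis builds on can be illustrated on the smallest possible factor graph, a two-variable chain (the numeric factors are arbitrary illustrative values; real vision models attach such factors to pixels or regions):

```python
import numpy as np

# A minimal sum-product pass on the chain factor graph x1 -- f12 -- x2,
# with unary factors f1, f2 and all variables binary. Messages are
# vectors over variable states; marginalisation is the "sum" and factor
# application the "product" of the algorithm's name.

f1 = np.array([0.7, 0.3])            # unary factor on x1
f2 = np.array([0.4, 0.6])            # unary factor on x2
f12 = np.array([[0.9, 0.1],          # pairwise factor f12(x1, x2)
                [0.2, 0.8]])

# Message from x1 into f12 is just x1's unary factor; the factor node
# multiplies by f12 and sums out x1 to produce its message to x2.
msg_f12_to_x2 = f12.T @ f1

# Belief (unnormalised marginal) at x2 = product of incoming messages.
belief_x2 = msg_f12_to_x2 * f2
belief_x2 /= belief_x2.sum()
```

    Because the graph is a tree, this single pass yields the exact marginal of x2, which is precisely the property the thesis exploits by working on local subtrees and a spanning tree of simplexes.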

    Overview of Environment Perception for Intelligent Vehicles

    This paper presents a comprehensive literature review on environment perception for intelligent vehicles. The state-of-the-art algorithms and modeling methods for intelligent vehicles are given, with a summary of their pros and cons. Special attention is paid to methods for lane and road detection, traffic sign recognition, vehicle tracking, behavior analysis, and scene understanding. In addition, we provide information about datasets, common performance analysis, and perspectives on future research directions in this area.

    A Study on Moving Object Detection and Dust Image Restoration

    Ph.D. dissertation, Department of Mathematical Sciences, College of Natural Sciences, Seoul National University, February 2021 (advisor: Myungjoo Kang). Robust principal component analysis (RPCA), a method used to decompose a matrix into the sum of a low-rank matrix and a sparse matrix, has been proven effective in modeling the static background of videos. However, because a dynamic background cannot be represented by a low-rank matrix, measures additional to RPCA are required. In this thesis, we propose masked RPCA to process backgrounds containing moving textures. A first-order Markov random field (MRF) is used to generate a mask that roughly labels moving objects and backgrounds. To estimate the background, the rank minimization process is then applied with the mask multiplied in. During the iteration, the background rank increases as the object mask expands, and the weight of the rank constraint term decreases, which increases the accuracy of the background. We compared the proposed method with state-of-the-art, end-to-end methods to demonstrate its advantages. Subsequently, we suggest a novel dedusting method based on a dust-optimized transmission map and a deep image prior. This method consists of estimating atmospheric light and transmission, in that order, which is similar to dark channel prior-based dehazing methods. However, the existing atmospheric light estimation methods widely used in dehazing schemes give an overly bright estimate, which results in unrealistically dark dedusting results. To address this problem, we propose a segmentation-based method that gives a new estimate of the atmospheric light. The dark channel prior-based transmission map with the new atmospheric light gives unnatural intensity ordering and zero values at low-transmission regions. Therefore, the transmission map is refined by a scattering model-based transformation and dark channel adaptive non-local total variation (NLTV) regularization.
    Parameter optimization steps with a deep image prior (DIP) give the final dedusting result.
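    The RPCA decomposition underlying the first part of the thesis can be sketched with a standard principal component pursuit solver built from its two proximal operators (this is the plain, unmasked baseline via an inexact augmented-Lagrangian loop, not the proposed masked variant with the MRF mask term):

```python
import numpy as np

def svt(X, tau):
    # Singular value thresholding: proximal operator of the nuclear
    # norm, the low-rank (background) update in RPCA.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(X, lam):
    # Entrywise soft thresholding: proximal operator of the l1 norm,
    # the sparse (moving object) update in RPCA.
    return np.sign(X) * np.maximum(np.abs(X) - lam, 0.0)

def rpca(D, iters=100):
    # Inexact augmented-Lagrangian scheme for D = L + S; a didactic
    # baseline with conventional parameter choices, not the thesis's
    # masked formulation.
    m, n = D.shape
    lam = 1.0 / np.sqrt(max(m, n))
    mu = m * n / (4.0 * np.abs(D).sum())
    Y = np.zeros_like(D)
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(iters):
        L = svt(D - S + Y / mu, 1.0 / mu)
        S = soft(D - L + Y / mu, lam / mu)
        Y = Y + mu * (D - L - S)
        mu = min(mu * 1.5, 1e7)
    return L, S

# Static rank-1 "background" plus one bright "moving object" pixel.
bg = np.outer(np.ones(6), np.linspace(1.0, 2.0, 8))
D = bg.copy()
D[2, 3] += 10.0
L, S = rpca(D)
```

    The masked variant proposed in the thesis replaces the plain l1 term with a mask generated by the MRF, so that rank minimization acts only on pixels labeled as background.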

    Nonlinear kernel based feature maps for blur-sensitive unsharp masking of JPEG images

    In this paper, a method for estimating the blurred regions of an image is first proposed, resorting to a mixture of linear and nonlinear convolutional kernels. The blur map obtained is then utilized to enhance images such that the enhancement strength is an inverse function of the amount of measured blur. The blur map can also be used for tasks such as attention-based object classification, low-light image enhancement, and more. A CNN architecture with nonlinear upsampling layers is trained on a standard blur detection benchmark dataset, with the help of blur target maps. Further, it is proposed to use the same architecture to build maps of areas affected by the typical JPEG artifacts, ringing and blockiness. Together, the blur map and the artifact map make it possible to build an activation map for the enhancement of a (possibly JPEG-compressed) image. Extensive experiments on standard test images verify the quality of the maps obtained with the algorithm and their effectiveness in locally controlling the enhancement, for superior perceptual quality. Last but not least, the computation time for generating these maps is much lower than that of other comparable algorithms.
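    The idea of enhancement strength as an inverse function of measured blur can be sketched with a gain-modulated unsharp mask (the Gaussian base filter, the linear gain law, and all parameter values are our assumptions; in the paper the blur map is predicted by the CNN):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blur_adaptive_unsharp(img, blur_map, max_gain=1.5, sigma=2.0):
    """Unsharp masking whose strength decreases with local blur:
    sharp regions (blur ~ 0) receive the full detail boost, while
    blurred regions (blur ~ 1) are left untouched, so that blur and
    JPEG artifacts are not amplified by the enhancement."""
    base = gaussian_filter(img, sigma)      # low-pass estimate
    detail = img - base                     # high-frequency residual
    gain = max_gain * (1.0 - np.clip(blur_map, 0.0, 1.0))
    return img + gain * detail

img = np.random.default_rng(0).random((32, 32))
blur_map = np.zeros_like(img)               # pretend everything is sharp
out = blur_adaptive_unsharp(img, blur_map)
```

    An artifact map can be folded into the same scheme by further reducing the gain wherever ringing or blockiness is detected, which is the role of the activation map described in the abstract.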

    Visibility recovery on images acquired in attenuating media. Application to underwater, fog, and mammographic imaging

    When acquired in attenuating media, digital images often suffer from a particularly complex degradation that reduces their visual quality, hindering their suitability for further computational applications, or simply decreasing their visual pleasantness for the user. In these cases, mathematical image processing reveals itself as an ideal tool to recover some of the information lost during the degradation process. In this dissertation, we deal with three such practical scenarios in which this problem is especially relevant, namely, underwater image enhancement, fog removal, and mammographic image processing. In the case of digital mammograms, X-ray beams traverse human tissue, and electronic detectors capture them as they reach the other side. However, the superposition on a bidimensional image of three-dimensional structures produces low-contrast images in which structures of interest suffer from diminished visibility, obstructing diagnosis tasks. Regarding fog removal, the loss of contrast is produced by the atmospheric conditions, and white colour takes over the scene uniformly as distance increases, also reducing visibility. For underwater images, there is an added difficulty, since colour is not lost uniformly; instead, red colours decay the fastest, and green and blue colours typically dominate the acquired images. To address all these challenges, in this dissertation we develop new methodologies that rely on: a) physical models of the observed degradation, and b) the calculus of variations. Equipped with this powerful machinery, we design novel theoretical and computational tools, including image-dependent functional energies that capture the particularities of each degradation model. These energies are composed of different integral terms that are simultaneously minimized by means of efficient numerical schemes, producing a clean, visually pleasant and useful output image, with better contrast and increased visibility. In every considered application, we provide comprehensive qualitative (visual) and quantitative experimental results to validate our methods, confirming that the developed techniques outperform other existing approaches in the literature.
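    The physical degradation model shared by the fog and underwater scenarios can be written down directly; a minimal sketch of the forward model and its pointwise inversion, assuming the transmission and veiling light are already known (the dissertation estimates them variationally, which is the hard part):

```python
import numpy as np

def degrade(J, t, A):
    """Koschmieder-style attenuation model: observed I = J*t + A*(1 - t),
    where J is the true scene radiance, t the per-pixel transmission
    (decaying with distance), and A the veiling atmospheric/water light."""
    return J * t + A * (1.0 - t)

def recover(I, t, A, t_min=0.1):
    """Invert the model given estimates of t and A; clamping t avoids
    amplifying noise where almost no signal survives the medium."""
    return (I - A) / np.maximum(t, t_min) + A

J = np.array([0.2, 0.5, 0.8])   # true radiance
t = np.array([0.9, 0.6, 0.3])   # per-pixel transmission
A = 1.0                         # white veiling light (fog case)
I = degrade(J, t, A)
J_hat = recover(I, t, A)
```

    In the underwater case the same model is applied per colour channel with channel-dependent transmission, which captures the faster decay of red described in the abstract.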