11 research outputs found

    Text Detection in Natural Scenes and Technical Diagrams with Convolutional Feature Learning and Cascaded Classification

    An enormous number of digital images are generated and stored every day. Understanding text in these images is an important challenge with large impacts for academic, industrial and domestic applications. Recent studies address the difficulty of separating text targets from noise and background, all of which vary greatly in natural scenes. To tackle this problem, we develop a text detection system to analyze and utilize visual information in a data-driven, automatic and intelligent way. The proposed method incorporates features learned from data, including patch-based coarse-to-fine detection (Text-Conv), connected component extraction using region growing, and graph-based word segmentation (Word-Graph). Text-Conv is a sliding-window-based detector, with convolution masks learned using the Convolutional k-means algorithm (Coates et al., 2011). Unlike convolutional neural networks (CNNs), a single vector/layer of convolution mask responses is used to classify patches. An initial coarse detection considers both local and neighboring patch responses, followed by refinement using varying aspect ratios and rotations for a smaller local detection window. Different levels of visual detail from the ground truth are utilized in each step, first using constraints on bounding box intersections, and then a combination of bounding box and pixel intersections. Combining masks from different Convolutional k-means initializations, e.g., seeded using random vectors and then support vectors, improves performance. The Word-Graph algorithm uses contextual information to improve word segmentation and to prune false character detections based on visual features and spatial context. Our system obtains pixel, character, and word detection f-measures of 93.14%, 90.26%, and 86.77%, respectively, on the ICDAR 2015 Robust Reading Focused Scene Text dataset, outperforming state-of-the-art systems and producing highly accurate text detection masks at the pixel level. To investigate the utility of our feature learning approach for other image types, we perform tests on 8-bit greyscale USPTO patent drawing diagram images. An ensemble of AdaBoost classifiers with different convolutional features (MetaBoost) is used to classify patches as text or background. The Tesseract OCR system is used to recognize characters in detected labels and enhance performance. With appropriate pre-processing and post-processing, f-measures of 82% for part label location, and 73% for valid part label locations and strings, are obtained, which are the best obtained to date for the USPTO patent diagram dataset used in our experiments. To sum up, an intelligent refinement of Convolutional k-means-based feature learning and novel automatic classification methods are proposed for text detection, obtaining state-of-the-art results without the need for strong prior knowledge. Different ground truth representations, along with features including edges, color, shape and spatial relationships, are used coherently to improve accuracy. Different variations of feature learning are explored, e.g. support-vector-seeded clustering and MetaBoost, with results suggesting that increased diversity in learned features benefits convolution-based text detectors.
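
Below is a minimal, illustrative sketch of the Convolutional k-means step the abstract relies on (Coates et al., 2011): patches are contrast-normalized and clustered with a spherical (dot-product) k-means whose centroids serve as convolution masks. This is not the thesis implementation; whitening, support-vector seeding and the coarse-to-fine classification stage are omitted, and the function name and parameters are assumptions.

```python
import numpy as np

def convolutional_kmeans(patches, k=96, iters=10, seed=0):
    """Spherical k-means over flattened image patches.

    patches: (n, d) array of flattened, contrast-normalized patches.
    Returns a (k, d) array of unit-norm filters ("convolution masks").
    Illustrative sketch only, not the thesis implementation.
    """
    rng = np.random.default_rng(seed)
    # Normalize patches to unit length (epsilon guards flat patches).
    X = patches / (np.linalg.norm(patches, axis=1, keepdims=True) + 1e-8)
    # Seed centroids with randomly chosen patches (random-vector seeding).
    D = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each patch to the centroid with the largest dot product.
        labels = (X @ D.T).argmax(axis=1)
        # Update each centroid as the re-normalized mean of its patches.
        for j in range(k):
            members = X[labels == j]
            if len(members):
                c = members.sum(axis=0)
                D[j] = c / (np.linalg.norm(c) + 1e-8)
    return D

# Example: learn 96 masks from random 8x8 patches (DC component removed).
patches = np.random.rand(5000, 64)
masks = convolutional_kmeans(patches - patches.mean(axis=1, keepdims=True))
```

A sliding window of responses from such masks could then feed the patch classifier described above; the support-vector-seeded variant would only change how the initial centroids are chosen.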

    Detectando agrupamientos y contornos: un estudio doble sobre representación de formas

    Shape plays a key role in our cognitive system: in the perception of shape lies the beginning of concept formation. Following this line of thought, the Gestalt school has extensively studied shape perception as the grasping of structural features found in or imposed upon the stimulus material. In summary, we have two models for shapes: they can exist physically or be a product of our cognitive processes. The first group is formed by shapes that can be defined by extracting contours from solid objects. In this work we restrict ourselves to the two-dimensional case and call these shapes of the first type planar shapes. We address the problem of detecting and recognizing planar shapes. A few theoretical and practical restrictions lead us to define a planar shape as any piece of meaningful level line of an image. We begin by showing that previous a contrario methods to detect level lines are often too restrictive: a curve must be entirely salient to be detected. This is clearly in contradiction with the observation that pieces of level lines coincide with object boundaries. We therefore propose a modification in which the detection criterion is relaxed by permitting the detection of partially salient level lines. As a second approach, we study the interaction between two different ways of determining level-line saliency: contrast and regularity. We propose a feature-competition scheme in which contrast and regularity contend with each other, so that only contrasted and regular level lines are considered salient. A third contribution is a clean-up algorithm that analyses salient level lines, discarding the non-salient pieces and returning the salient ones. It is based on an algorithm for multisegment detection, which was extended to work with periodic inputs. Finally, we propose a shape descriptor to encode the detected shapes, based on the global Shape Context. Each level line is encoded using shape contexts, thus generating a new semi-local descriptor. We then adapt an existing a contrario shape matching algorithm to our particular case.
The second group is composed of shapes that do not correspond to a single solid object but are formed by integrating several solid objects. The simplest shapes in this group are arrangements of points in two dimensions, and clustering techniques can be helpful in these situations. In a seminal work from 1971, Zahn faced the problem of finding perceptual clusters according to the proximity gestalt and proposed three basic principles for clustering algorithms: (1) only inter-point distances matter, (2) results are stable across executions, and (3) results are independent of the exploration strategy. A last, implicit requirement is crucial: clusters may have arbitrary shapes, and detection algorithms must be able to deal with this. In this part we focus on designing clustering methods that completely fulfil the aforementioned requirements and impose minimal assumptions on the data to be clustered. We begin by addressing the problem of validating clusters in a hierarchical structure. Based on nonparametric density estimation methods, we propose to compute the saliency of a given cluster; it is then possible to select the most salient clusters in the hierarchy. In practice, the method shows a preference toward compact clusters, and we propose a simple heuristic to correct this issue. In general, graph-based hierarchical methods require first computing the complete graph of interpoint distances, which is why hierarchical methods are often considered slow. The most commonly used, and the fastest, hierarchical clustering algorithm is based on the Minimum Spanning Tree (MST). We therefore propose an algorithm to compute the MST while avoiding the intermediate step of computing the complete set of interpoint distances; moreover, the algorithm can easily be fully parallelized. It exhibits good performance for low-dimensional datasets and provides an approximate but robust solution for higher dimensions. Finally, we propose a method to select clustered subtrees from the MST by computing simple edge statistics. The method naturally retrieves clusters with arbitrary shapes. It also works well in noisy situations, where noise is regarded as unclustered data, allowing it to be separated from clustered data. We also show that iterative application of the algorithm solves a phenomenon called masking, in which highly populated clusters prevent the detection of less populated ones. Fil: Tepper, Mariano. Universidad de Buenos Aires. Facultad de Ciencias Exactas y Naturales; Argentina
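
As a rough illustration of the last step (selecting clustered subtrees from the MST via simple edge statistics), the sketch below cuts MST edges that are unusually long and treats the resulting connected components as clusters, with singletons playing the role of noise. Unlike the thesis algorithm, it builds the full pairwise-distance matrix, and the threshold rule (mean + c·std), names and toy data are assumptions rather than the statistics actually proposed.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
from scipy.spatial.distance import pdist, squareform

def mst_edge_clusters(points, c=2.0):
    """Label points by cutting unusually long MST edges (> mean + c*std).

    Note: this sketch computes the full pairwise-distance matrix, which is
    precisely the intermediate step the thesis algorithm avoids.
    """
    n = len(points)
    dist = squareform(pdist(points))                    # (n, n) distances
    mst = minimum_spanning_tree(dist).tocoo()           # n-1 weighted edges
    keep = mst.data <= mst.data.mean() + c * mst.data.std()
    pruned = coo_matrix((mst.data[keep], (mst.row[keep], mst.col[keep])),
                        shape=(n, n))
    _, labels = connected_components(pruned, directed=False)
    return labels                                       # singletons ~ noise

# Two compact blobs plus scattered noise points.
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0.0, 0.3, (80, 2)),
                 rng.normal(4.0, 0.3, (80, 2)),
                 rng.uniform(-2.0, 6.0, (20, 2))])
print(np.bincount(mst_edge_clusters(pts)))
```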

    Un arbre des formes pour les images multivariées

    Nowadays, the demand for multi-scale and region-based analysis in many computer vision and pattern recognition applications is obvious; no one would consider a pixel-based approach a good candidate to solve such problems. To meet this need, the Mathematical Morphology (MM) framework has supplied region-based hierarchical representations of images such as the Tree of Shapes (ToS). The ToS represents the image as a tree of the inclusion of its level lines. The ToS is thus self-dual and contrast-change invariant, which makes it well adapted to high-level image processing. Yet it is only defined on grayscale images, and most attempts to extend it to multivariate images, e.g. by imposing an "arbitrary" total ordering, are not satisfactory. In this dissertation, we present the Multivariate Tree of Shapes (MToS) as a novel approach to extend the grayscale ToS to multivariate images. This representation is a fusion of the ToSs computed marginally on each channel of the image; it aims at merging the marginal shapes in a "sensible" way by preserving the maximum number of inclusions. The proposed method has theoretical foundations expressing the ToS in terms of a topographic map of the curvilinear total variation computed from the image border, which has allowed its extension to multivariate data. In addition, the MToS features properties similar to those of the grayscale ToS, the most important one being its invariance to any marginal change of contrast and any marginal inversion of contrast (a form of "self-duality" in the multidimensional case). As the need for efficient image processing techniques is obvious given the ever larger amount of data to process, we propose an efficient algorithm that can build the MToS in quasi-linear time w.r.t. the number of pixels and quadratic time w.r.t. the number of channels. We also propose tree-based processing algorithms to demonstrate in practice that the MToS is a versatile, easy-to-use, and efficient structure. Finally, to validate the soundness of our approach, we propose experiments testing the robustness of the structure to non-relevant components (e.g. with noise or low dynamics) and show that such defects do not affect the overall structure of the MToS. In addition, we propose many real-case applications using the MToS; many of them are slight modifications of methods employing the "regular" ToS, adapted to our new structure. For example, we successfully use the MToS for image filtering, image simplification, image segmentation, image classification and object detection. From these applications, we show that the MToS generally outperforms its ToS-based counterpart, demonstrating the potential of our approach.
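
Constructing the MToS itself is beyond a short snippet, but the "marginal shapes" it starts from can be illustrated in a self-contained way: the sketch below counts the connected components of upper level sets on each channel separately. It is only a simplified, hypothetical illustration under assumed thresholds; the real ToS also uses lower level sets (hence its self-duality), and the MToS merges the marginal trees while preserving inclusions.

```python
import numpy as np
from scipy.ndimage import label

def marginal_upper_level_shapes(image, thresholds):
    """Count connected components of upper level sets, channel by channel.

    image: (H, W, C) array. Returns a dict mapping (channel, threshold)
    to the number of connected components ("marginal shapes") at that level.
    Illustrative only; not the tree construction of the dissertation.
    """
    shapes = {}
    for c in range(image.shape[2]):
        for t in thresholds:
            mask = image[..., c] >= t          # upper level set of channel c
            _, n_components = label(mask)      # 4-connected components
            shapes[(c, t)] = n_components
    return shapes

# Example on a random 3-channel image with three grey levels per channel.
img = np.random.rand(64, 64, 3)
print(marginal_upper_level_shapes(img, thresholds=(0.25, 0.5, 0.75)))
```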

    Multi-target tracking and performance evaluation on videos

    Multi-target tracking is the process that allows the extraction of object motion patterns of interest from a scene. Motion patterns are often described through metadata representing object locations and shape information. In the first part of this thesis we discuss the state-of-the-art methods aimed at accomplishing this task on monocular views and also analyse the methods for evaluating their performance. The second part of the thesis describes our research contributions to these topics. We begin by presenting a method for multi-target track-before-detect (MT-TBD) formulated as a particle filter. The novelty is the inclusion of the target identity (ID) in the particle state, which enables the algorithm to deal with an unknown and unlimited number of targets. We propose a probabilistic model of particle birth and death based on Markov Random Fields, which allows us to overcome the problem of mixing the IDs of close targets. We then propose three evaluation measures that take into account target-size variations, combine accuracy and cardinality errors, quantify long-term tracking accuracy at different accuracy levels, and evaluate ID changes relative to the duration of the track in which they occur. This set of measures does not require pre-setting of parameters and allows one to evaluate tracking performance holistically, in an application-independent manner. Lastly, we present a framework for multi-target localisation applied to scenes with a high density of compact objects. Candidate target locations are initially generated by extracting object features from intensity maps using an iterative method based on a gradient-climbing technique and an isocontour slicing approach. A graph-based data association method for multi-target tracking is then applied to link valid candidate target locations over time and to discard spurious ones. This method can deal with point targets having indistinguishable appearance and unpredictable motion. MT-TBD is evaluated and compared with state-of-the-art methods on real-world surveillance videos. This work was supported by the EU under the FP7 project APIDIS (ICT-216023), and by the Artemis JU and TSB as part of the COPCAMS project (332913).
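
To make the track-before-detect ingredients concrete (predict, weight against the observations, resample), here is a generic single-target bootstrap particle filter on a 1-D constant-velocity model. It is only an illustrative baseline under assumed noise parameters; the MT-TBD filter of the thesis additionally carries the target ID in the particle state and uses an MRF-based birth/death model to handle an unknown number of targets.

```python
import numpy as np

def bootstrap_particle_filter(observations, n_particles=500, seed=0):
    """Generic 1-D constant-velocity bootstrap particle filter.

    Illustrative only: the thesis filter also carries a target ID per
    particle and an MRF birth/death model; none of that is shown here.
    """
    rng = np.random.default_rng(seed)
    # State: (position, velocity) with broad, assumed priors.
    particles = np.column_stack([rng.normal(0.0, 5.0, n_particles),
                                 rng.normal(0.0, 1.0, n_particles)])
    estimates = []
    for z in observations:
        # Predict: constant-velocity motion plus process noise.
        particles[:, 0] += particles[:, 1]
        particles += rng.normal(0.0, 0.2, size=particles.shape)
        # Update: weight particles by the (Gaussian) measurement likelihood.
        w = np.exp(-0.5 * (z - particles[:, 0]) ** 2) + 1e-300
        w /= w.sum()
        estimates.append(float(w @ particles[:, 0]))
        # Resample to avoid weight degeneracy.
        particles = particles[rng.choice(n_particles, n_particles, p=w)]
    return estimates

# Track a target moving at +0.5 units per step, observed with unit noise.
true_pos = 0.5 * np.arange(50)
obs = true_pos + np.random.default_rng(1).normal(0.0, 1.0, 50)
print(bootstrap_particle_filter(obs)[-5:])
```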

    A perceptual learning model to discover the hierarchical latent structure of image collections

    Biology has been an unparalleled source of inspiration for the work of researchers in several scientific and engineering fields, including computer vision. The starting point of this thesis is the neurophysiological properties of the human early visual system, in particular the cortical mechanism that mediates learning by exploiting information about stimulus repetition. Repetition has long been considered a fundamental correlate of skill acquisition and memory formation in biological as well as computational learning models. However, recent studies have shown that biological neural networks have different ways of exploiting repetition in forming memory maps. The thesis focuses on a perceptual learning mechanism called repetition suppression, which exploits the temporal distribution of neural activations to drive an efficient neural allocation for a set of stimuli. This explores the neurophysiological hypothesis that repetition suppression serves as an unsupervised perceptual learning mechanism that can drive efficient memory formation by reducing the overall size of stimulus representations while strengthening the responses of the most selective neurons. This interpretation of repetition differs from its traditional role in computational learning models, where it is used mainly to induce convergence and reach training stability, without using this information to provide focus for the neural representations of the data. The first part of the thesis introduces a novel computational model with repetition suppression, which forms an unsupervised competitive system termed CoRe, for Competitive Repetition-suppression learning. The model is applied to general problems in the fields of computational intelligence and machine learning. Particular emphasis is placed on validating the model as an effective tool for the unsupervised exploration of bio-medical data. In particular, it is shown that the repetition suppression mechanism efficiently addresses the issues of automatically estimating the number of clusters within the data, as well as filtering noise and irrelevant input components in high-dimensional data, e.g. gene expression levels from DNA microarrays. The CoRe model produces relevance estimates for each covariate, which is useful, for instance, to discover the most discriminating bio-markers. The description of the model includes a theoretical analysis using Huber's robust statistics to show that the model is robust to outliers and noise in the data. The convergence properties of the model are also studied. It is shown that, besides its biological underpinning, the CoRe model has useful properties in terms of asymptotic behaviour. By exploiting a kernel-based formulation of the CoRe learning error, a theoretically sound motivation is provided for the model's ability to avoid local minima of its loss function. To do this, a necessary and sufficient condition for global error minimization in vector quantization is generalized by extending it to distance metrics in generic Hilbert spaces. This leads to the derivation of a family of kernel-based algorithms that address the local-minima issue of unsupervised vector quantization in a principled way. The experimental results show that the algorithm can achieve a consistent performance gain compared with state-of-the-art learning vector quantizers, while retaining a lower computational complexity (linear with respect to the dataset size).
Bridging the gap between the low-level representation of the visual content and the underlying high-level semantics is a major research issue of current interest. The second part of the thesis focuses on this problem by introducing a hierarchical and multi-resolution approach to visual content understanding. On the spatial level, CoRe learning is used to pool together local visual patches by organizing them into perceptually meaningful intermediate structures. On the semantic level, it provides an extension of the probabilistic Latent Semantic Analysis (pLSA) model that allows discovery and organization of the visual topics into a hierarchy of aspects. The proposed hierarchical pLSA model is shown to effectively address the unsupervised discovery of relevant visual classes from pictorial collections, at the same time learning to segment the image regions containing the discovered classes. Furthermore, by drawing on a recent pLSA-based image annotation system, the hierarchical pLSA model is extended to process and represent multi-modal collections comprising textual and visual data. The results of the experimental evaluation show that the proposed model learns to attach textual labels (available only at the level of the whole image) to the discovered image regions, while increasing the precision/recall performance with respect to the flat pLSA annotation model.
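
The CoRe update rule is not reproduced here, but the flavour of repetition suppression can be suggested with a toy winner-take-all quantizer in which a unit's learning rate decays the more often it wins, so over-repeated stimuli stop monopolizing resources. The decay schedule, parameters and function name below are illustrative assumptions, not the model analysed in the thesis.

```python
import numpy as np

def suppressive_competitive_learning(X, n_units=5, lr=0.2, epochs=20, seed=0):
    """Winner-take-all quantizer whose units learn less the more they win.

    A loose, illustrative stand-in for repetition-style suppression; this
    is not the CoRe learning rule from the thesis.
    """
    rng = np.random.default_rng(seed)
    W = X[rng.choice(len(X), n_units, replace=False)].astype(float)
    wins = np.zeros(n_units)
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            j = np.argmin(np.linalg.norm(W - x, axis=1))    # winning unit
            wins[j] += 1
            # Attenuate the step for frequently winning (repeated) units.
            W[j] += (lr / (1.0 + 0.05 * wins[j])) * (x - W[j])
    return W, wins

# Three Gaussian clusters; surplus units end up winning rarely.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.2, (100, 2)) for m in (0.0, 3.0, 6.0)])
codebook, wins = suppressive_competitive_learning(X)
print(np.round(codebook, 2), wins.astype(int))
```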

    Scientific Advances in STEM: From Professor to Students

    This book collects the publications of the Special Topic "Scientific Advances in STEM: From Professor to Students". The aim is to contribute to the advancement of the science and engineering fields and their impact on the industrial sector, which requires a multidisciplinary approach. The university generates and transmits knowledge to serve society. Social demands continuously evolve, mainly because of cultural, scientific, and technological development. Researchers must contextualize the subjects they investigate with respect to their application to local industry and community organizations, frequently from a multidisciplinary point of view, to enhance progress in a wide variety of fields (aeronautics, automotive, biomedical, electrical and renewable energy, communications, environmental, electronic components, etc.). Most investigations in the fields of science and engineering require the work of multidisciplinary teams, representing a stockpile of research projects at different stages (final-year projects, master's or doctoral studies). In this context, this Topic offers a framework for integrating interdisciplinary research, drawing together experimental and theoretical contributions in a wide variety of fields.

    Graph matching using position coordinates and local features for image analysis

    Finding the correspondences between two images is a crucial problem in computer vision and pattern recognition. It is relevant to a wide range of purposes, from object recognition applications in the areas of biometrics, document analysis and shape analysis, to applications related to multiple-view geometry such as pose recovery, structure from motion, and localization and mapping. Most existing techniques approach this problem either using local image features or using point-set registration methods (or a mixture of both). In the former, a sparse set of features is first extracted from the images and then characterized in the form of descriptor vectors using local image evidence; features are then associated according to the similarity between their descriptors. In the latter, the feature sets are regarded as point sets, which are associated using non-linear optimization techniques; these are iterative procedures that estimate the correspondence and alignment parameters in alternating steps. Graphs are representations that account for binary relations between features. Incorporating binary relations into the correspondence problem often leads to the so-called graph-matching problem. A number of methods in the literature aim at finding approximate solutions to different instances of the graph-matching problem, which in most cases is NP-hard. One part of our work is devoted to investigating the benefits of cross-bin measures for comparing local image features. The rest is devoted to formulating both the image-feature association and point-set registration problems as instances of the graph-matching problem. In all cases we propose approximate algorithms to solve these problems and compare them with a number of existing methods belonging to different areas, such as outlier rejectors, point-set registration methods and other graph-matching methods. The experiments show that in most cases the proposed methods outperform the others. Occasionally the proposed methods either share the best performance with some competing method or obtain slightly worse results; in these cases, the proposed methods usually exhibit lower computational times.
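
For contrast with the graph-matching formulation, the sketch below shows the purely local baseline the thesis moves beyond: one-to-one feature association from descriptor distances alone, solved with the Hungarian algorithm and with no pairwise (structural) term. The distance threshold and toy data are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def match_by_descriptors(desc1, desc2, max_dist=0.7):
    """One-to-one feature matching from descriptor distances alone.

    Uses the Hungarian algorithm on the cost matrix; pairwise geometric
    consistency (the graph-matching part) is deliberately left out.
    """
    cost = cdist(desc1, desc2)                 # (n1, n2) descriptor distances
    rows, cols = linear_sum_assignment(cost)
    keep = cost[rows, cols] < max_dist         # drop implausible matches
    return list(zip(rows[keep], cols[keep]))

# Toy example: the second set is a noisy, shuffled copy of the first.
rng = np.random.default_rng(0)
d1 = rng.random((20, 32))
perm = rng.permutation(20)
d2 = d1[perm] + rng.normal(0.0, 0.01, (20, 32))
matches = match_by_descriptors(d1, d2)
correct = sum(perm[j] == i for i, j in matches)
print(correct, "correct out of", len(matches), "matches")
```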

    LWA 2013. Lernen, Wissen & Adaptivität ; Workshop Proceedings Bamberg, 7.-9. October 2013

    LWA Workshop Proceedings: LWA stands for "Lernen, Wissen, Adaption" (Learning, Knowledge, Adaptation). It is the joint forum of four special interest groups of the German Computer Science Society (GI). Following the tradition of previous years, LWA provides a joint forum for experienced and young researchers to present insights into recent trends, technologies and applications, and to promote interaction among the SIGs.

    Combining SOA and BPM Technologies for Cross-System Process Automation

    This paper summarizes the results of an industry case study that introduced a cross-system business process automation solution based on a combination of SOA and BPM standard technologies (i.e., BPMN, BPEL, WSDL). Besides discussing major weaknesses of the existing custom-built solution and comparing them against experiences with the developed prototype, the paper presents a course of action for transforming the current solution into the proposed one. This includes a general approach, consisting of four distinct steps, as well as specific action items to be performed in every step. The discussion also covers language and tool support and the challenges arising from the transformation.

    Climate Change and Environmental Sustainability- Volume 5

    This volume of Climate Change and Environmental Sustainability covers topics on greenhouse gas emissions, climatic impacts, climate models and prediction, and analytical methods. Issues related to two major greenhouse gases, namely carbon dioxide and methane, particularly in wetlands and the agriculture sector, and radiative energy flux variations along with cloudiness are explored in this volume. Further, climate change impacts such as rainfall, heavy lake-effect snowfall, extreme temperature, impacts on grassland phenology, impacts on wind and wave energy, and heat island effects are explored. A major focus of this volume is on climate models, which are significant for projecting and visualising future climate pathways and possible impacts and vulnerabilities. Such models are widely used by scientists and for the generation of mitigation and adaptation scenarios. However, dealing with uncertainties has always been a critical issue in climate modelling. Therefore, methods are explored for improving climate projection accuracy by addressing the stochastic properties of the distributions of climate variables, addressing variational problems with unknown weights, and improving grid resolution in climate models. The results reported in this book are conducive to a better understanding of global warming mechanisms, climate-induced impacts, and forecasting models. We expect the book to benefit decision makers, practitioners, and researchers in different fields and to contribute to climate change adaptation and mitigation.