
    Intelligent Circuits and Systems

    ICICS-2020 is the third conference in the series initiated by the School of Electronics and Electrical Engineering at Lovely Professional University. It explored recent innovations by researchers working on the development of smart and green technologies in the fields of Energy, Electronics, Communications, Computers, and Control. ICICS enables innovators to identify new opportunities for the social and economic benefit of society. The conference bridges the gap between academia, R&D institutions, social visionaries, and experts from all strata of society, giving them a venue to present their ongoing research and fostering research relationships between them. It provides opportunities for exchanging new ideas, applications, and experiences in the field of smart technologies and for finding global partners for future collaboration. ICICS-2020 was organised in two broad tracks: Intelligent Circuits & Intelligent Systems, and Emerging Technologies in Electrical Engineering.

    The 1991 3rd NASA Symposium on VLSI Design

    Papers from the symposium are presented, drawn from the following sessions: (1) featured presentations 1; (2) very large scale integration (VLSI) circuit design; (3) VLSI architecture 1; (4) featured presentations 2; (5) neural networks; (6) VLSI architectures 2; (7) featured presentations 3; (8) verification 1; (9) analog design; (10) verification 2; (11) design innovations 1; (12) asynchronous design; and (13) design innovations 2.

    Content-based image analysis with applications to the multifunction printer imaging pipeline and image databases

    Image understanding is one of the most important topics for a wide range of applications. Most image understanding studies follow a content-based approach, while others also rely on image metadata. Image understanding covers several heavily studied sub-topics, such as classification, segmentation, retrieval, and automatic annotation. This thesis proposes several new methods and algorithms for image classification, retrieval, and automatic tag generation, which have been tested and verified on multiple platforms. For image classification, the proposed method performs classification in real time under the hardware constraints of an all-in-one printer and adaptively improves itself through online learning. A second image understanding engine, combining classification and image quality analysis, is designed to solve the optimal compression problem of the printing system. The proposed image retrieval algorithm can be deployed on either a PC or a mobile device to improve the hybrid learning experience. A new matrix factorization algorithm is also developed to better recover image metadata (tags); it outperforms existing matrix factorization methods.
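    The abstract does not spell out the factorization model used for tag recovery; the following is only a minimal sketch of the general idea, assuming a binary image-tag matrix with some unobserved entries, a low-rank factorization, and plain gradient updates. All function names and hyperparameters here are illustrative, not the thesis's actual algorithm.

        import numpy as np

        def factorize_tags(R, mask, rank=10, lr=0.01, reg=0.1, epochs=200, seed=0):
            """Recover missing entries of an image-tag matrix R (images x tags).
            R    : observed matrix (zeros where unknown)
            mask : 1 where an entry was observed, 0 where it is missing
            Returns the completed matrix U @ V.T."""
            rng = np.random.default_rng(seed)
            n_images, n_tags = R.shape
            U = 0.1 * rng.standard_normal((n_images, rank))
            V = 0.1 * rng.standard_normal((n_tags, rank))
            for _ in range(epochs):
                # Reconstruction error only on observed entries.
                E = mask * (R - U @ V.T)
                # Gradient steps with L2 regularisation on both factors.
                U += lr * (E @ V - reg * U)
                V += lr * (E.T @ U - reg * V)
            return U @ V.T

        # Toy usage: 5 images, 4 tags, one entry treated as unobserved.
        R = np.array([[1, 0, 1, 0],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [0, 1, 0, 1],
                      [1, 0, 0, 0]], dtype=float)
        mask = np.ones_like(R)
        mask[4, 2] = 0   # pretend this tag is missing
        completed = factorize_tags(R, mask)
        print(completed[4, 2])   # predicted relevance of the missing tag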

    Enhanced Living Environments

    This open access book was prepared as the final publication of the COST Action IC1303 “Algorithms, Architectures and Platforms for Enhanced Living Environments (AAPELE)”. The concept of Enhanced Living Environments (ELE) refers to the area of Ambient Assisted Living (AAL) that is most closely related to Information and Communication Technologies (ICT). Effective ELE solutions require appropriate ICT algorithms, architectures, platforms, and systems that keep pace with the advance of science and technology in this area and support new and innovative solutions able to improve quality of life for people in their homes and to reduce the financial burden on healthcare providers. The aim of this book is to serve as a state-of-the-art reference, discussing progress made as well as prompting future directions for theories, practices, standards, and strategies in the ELE area. The book contains 12 chapters and can serve as a valuable reference for undergraduate and postgraduate students, educators, faculty members, researchers, engineers, medical doctors, healthcare organisations, insurance companies, and research strategists working in this area.

    Acta Cybernetica: Volume 19, Number 1.


    Application of Geographic Information Systems

    The importance of Geographic Information Systems (GIS) can hardly be overemphasized in today’s academic and professional arena. More professionals and academics are using GIS than ever before: urban and regional planners, civil engineers, geographers, spatial economists, sociologists, environmental scientists, criminal justice professionals, political scientists, and the like. As such, it is extremely important to understand the theories and applications of GIS in our teaching, professional work, and research. “The Application of Geographic Information Systems” presents research findings that explain GIS’s applications in different subfields of the social sciences. With several case studies conducted in different parts of the world, the book blends the theories of GIS with their practical implementation under different conditions. It deals with GIS applications across a broad spectrum of geospatial analysis and modeling, including water resources analysis, land use analysis, and infrastructure network analysis such as transportation and water distribution networks. The book is expected to be a useful source of knowledge for users of GIS who envision its application in their teaching and research. This easy-to-understand book is surely not an end in itself but a small contribution toward our understanding of the rich and wonderful subject of GIS.

    Uncertain Multi-Criteria Optimization Problems

    Most real-world search and optimization problems naturally involve multiple criteria as objectives. Generally, symmetry, asymmetry, and anti-symmetry are basic characteristics of the binary relations used when modeling optimization problems, and the notion of symmetry appears in many articles on the uncertainty theories employed in multi-criteria problems. Different solutions may produce trade-offs (conflicting scenarios) among the objectives: a solution that is better with respect to one objective may compromise the others. Various factors need to be considered to address such problems in multidisciplinary research, which is critical for the overall sustainability of human development and activity. In this regard, decision-making theory has been the subject of intense research activity in recent decades owing to its wide applications in different areas, and it has become an important means of providing real-time solutions to uncertainty problems. Theories available in the existing literature, such as probability theory, fuzzy set theory, type-2 fuzzy set theory, rough set theory, and uncertainty theory, deal with such uncertainties. Nevertheless, the uncertain multi-criteria characteristics of these problems have not yet been explored in depth, and much remains to be achieved in this direction. Hence, different mathematical models of real-life multi-criteria optimization problems can be developed in various uncertain frameworks, with special emphasis on optimization problems.
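    As a concrete illustration of the trade-offs mentioned above (not taken from the book), here is a minimal sketch of a Pareto-dominance check and a filter for non-dominated solutions, assuming all objectives are to be minimized; the example data are made up.

        import numpy as np

        def dominates(a, b):
            """True if objective vector a Pareto-dominates b (minimization):
            a is no worse in every objective and strictly better in at least one."""
            a, b = np.asarray(a), np.asarray(b)
            return bool(np.all(a <= b) and np.any(a < b))

        def pareto_front(points):
            """Return the non-dominated subset of a list of objective vectors."""
            return [p for p in points
                    if not any(dominates(q, p) for q in points if q is not p)]

        # Toy usage with two conflicting objectives, e.g. cost vs. error.
        solutions = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0), (3.0, 3.0)]
        print(pareto_front(solutions))   # (3.0, 3.0) is dominated by (2.0, 2.0)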

    Reinforced Segmentation of Images Containing One Object of Interest

    In many image-processing applications, a single object of interest must be segmented. The techniques used for segmentation vary depending on the particular situation and the specifications of the problem at hand. In methods that rely on a learning process, the lack of a sufficient number of training samples is usually an obstacle, especially when the samples must be prepared manually by an expert. The performance of other methods may suffer from frequent user interaction to determine the critical segmentation parameters. Moreover, none of the existing approaches uses continuous online feedback from the user to evaluate the generated results. Considering these factors, a new multi-stage image segmentation system based on Reinforcement Learning (RL) is introduced as the main contribution of this research. In this system, the RL agent takes specific actions, such as changing the parameters of the processing tasks, to modify the quality of the segmented image. The approach starts with a limited number of training samples and improves its performance over time, continuously incorporating expert knowledge to increase the segmentation capabilities of the method. Learning occurs first through interaction with an offline simulation environment and later online through interaction with the user. The offline mode uses a limited number of manually segmented samples to provide the segmentation agent with basic information about the application domain; afterwards, the agent can choose appropriate parameter values for the different processing tasks based on its accumulated knowledge. The online mode then ensures that the system keeps training and increases its accuracy the more the user works with it. During this mode, the agent captures the user's preferences and learns how the segmentation parameters must be changed so that the best result is achieved. By combining these two learning modes, the RL agent allows us to optimally determine the decisive parameters for the entire segmentation process.
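    The abstract does not give the agent's exact state, action, or reward definitions; the sketch below only illustrates the general idea, assuming a tabular Q-learning agent that nudges a single segmentation threshold up or down and receives a quality score as reward. The environment, reward function, and parameter names are illustrative stand-ins, not the system described in the thesis.

        import random

        ACTIONS = (-0.05, 0.0, +0.05)          # decrease, keep, or increase the threshold

        def quality(threshold):
            """Stand-in for the segmentation quality score; in the thesis this comes
            from manually segmented samples (offline) or from the user (online)."""
            return 1.0 - abs(threshold - 0.6)   # pretend 0.6 is the ideal threshold

        def train(episodes=2000, alpha=0.3, gamma=0.9, eps=0.2, seed=0):
            rng = random.Random(seed)
            q = {}                              # (discretised state, action) -> value
            threshold = 0.5
            for _ in range(episodes):
                state = round(threshold, 2)
                # Epsilon-greedy action selection.
                if rng.random() < eps:
                    action = rng.choice(ACTIONS)
                else:
                    action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
                new_threshold = min(1.0, max(0.0, threshold + action))
                reward = quality(new_threshold)
                next_state = round(new_threshold, 2)
                best_next = max(q.get((next_state, a), 0.0) for a in ACTIONS)
                old = q.get((state, action), 0.0)
                q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
                threshold = new_threshold
            return threshold

        print(train())   # should settle near the threshold with the best quality score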

    Efficient development of fuzzy classification algorithms in Big Data environments

    We are witnessing a period of transition in which "data" are the main protagonists. Today, an enormous amount of information is generated every day in what is known as the Big Data era. Decision-making based on these data, their structuring and organisation, and their correct integration and analysis constitute a key factor for many strategic sectors of society. When handling large amounts of data, the storage and analysis techniques associated with Big Data provide great help. Prominent among these techniques are machine learning algorithms, which are essential for predictive analysis over large amounts of data. Within the field of machine learning, fuzzy classification algorithms are frequently used to solve a wide variety of problems, mainly those related to the control of complex industrial processes, decision systems in general, problem resolution, and data compression. Classification systems are also widespread in everyday technology, for example in digital cameras, air-conditioning systems, etc. The successful use of machine learning techniques is limited by the constraints of current computational resources, especially when working with large datasets and real-time requirements. In this context, these algorithms need to be redesigned, or even rethought, in order to take full advantage of the massively parallel architectures that offer the highest performance today. This doctoral thesis is set within this context: it analyses the current landscape of classification algorithms from a computational point of view and proposes parallel classification algorithms that deliver adequate solutions within a reduced time frame. Specifically, an in-depth study of well-known machine learning techniques was carried out through a practical case study that predicts the ozone level in different areas of the Region of Murcia. The analysis was based on the collection of several pollution parameters for each day during 2013 and 2014. The study revealed that Random Forest obtained the best results, and a regionalisation into two large zones was derived from the processed data. The focus then turned to fuzzy classification algorithms. Here, a modification of the Fuzzy C-Means (FCM) algorithm, mFCM, was used as a discretization technique to convert the input data from continuous to discrete values. This process is particularly important because certain algorithms require discrete values in order to work, and even techniques that do handle continuous data obtain better results with discrete data. The technique was validated by applying it to Anderson's well-known Iris dataset, where it was statistically compared with K-Means (KM) and gave better results. Once the study of fuzzy classification algorithms was completed, it was found that these techniques are sensitive to the amount of data, which increases their computational time, so the efficiency of their implementation is a critical factor for their applicability to Big Data.
    Therefore, the parallelisation of a fuzzy classification algorithm is proposed, so that the application becomes faster as the degree of parallelism of the system increases. To this end, the Parallel Fuzzy Minimals (PFM) fuzzy classification algorithm was proposed and compared with the FCM and Fuzzy Minimals (FM) algorithms on different datasets. In terms of quality, the classification obtained by the three algorithms was similar; in terms of scalability, however, the parallelised PFM algorithm achieved linear speed-up with respect to the number of processors used. Having identified the need for these techniques to be developed in massively parallel environments, a high-performance hardware and software infrastructure is proposed to process, in real time, data obtained from several vehicles concerning variables that characterise pollution and traffic problems. The results showed adequate system performance when working with large amounts of data, and the executions were satisfactory in terms of scalability. Major challenges lie ahead in identifying further Big Data applications and in using these techniques for prediction in areas as relevant as pollution, traffic, and smart cities.
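    The thesis's mFCM variant is not specified in the abstract; as a rough illustration, the sketch below uses standard Fuzzy C-Means as a discretizer, replacing each continuous sample by the index of the cluster with the highest membership. The cluster count, fuzzifier, and tolerance are illustrative assumptions, not the thesis's settings.

        import numpy as np

        def fuzzy_c_means(X, c=3, m=2.0, max_iter=100, tol=1e-5, seed=0):
            """Plain Fuzzy C-Means: returns cluster centres and the membership matrix."""
            rng = np.random.default_rng(seed)
            n = X.shape[0]
            U = rng.random((n, c))
            U /= U.sum(axis=1, keepdims=True)              # memberships sum to 1 per sample
            for _ in range(max_iter):
                Um = U ** m
                centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
                dist = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
                new_U = 1.0 / (dist ** (2.0 / (m - 1.0)))
                new_U /= new_U.sum(axis=1, keepdims=True)
                if np.abs(new_U - U).max() < tol:
                    U = new_U
                    break
                U = new_U
            return centres, U

        def discretize(X, c=3):
            """Replace each continuous sample by the index of its most likely fuzzy cluster."""
            _, U = fuzzy_c_means(X, c=c)
            return U.argmax(axis=1)

        # Toy usage: discretize a single continuous feature into 3 levels.
        values = np.array([[0.1], [0.2], [0.15], [0.5], [0.55], [0.9], [0.95]])
        print(discretize(values, c=3))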

    A lightweight, graph-theoretic model of class-based similarity to support object-oriented code reuse.

    The work presented in this thesis is principally concerned with the development of a method and set of tools designed to support the identification of class-based similarity in collections of object-oriented code. Attention is focused on enhancing the potential for software reuse in situations where a reuse process is either absent or informal, and where the characteristics of the organisation are unsuitable, or resources unavailable, to promote and sustain a systematic approach to reuse. The approach builds on the definition of a formal, attributed, relational model that captures the inherent structure of class-based, object-oriented code. Based on code-level analysis, it relies solely on the structural characteristics of the code and on the peculiarly object-oriented features of the class as an organising principle: classes, the entities comprising a class, and the intra- and inter-class relationships existing between them are the significant factors in defining a two-phase similarity measure that serves as the basis of the comparison process. Established graph-theoretic techniques are adapted and applied via this model to the problem of determining similarity between classes. The thesis illustrates a successful transfer of techniques from the domains of molecular chemistry and computer vision, both of which provide an existing template for the analysis and comparison of structures as graphs. The inspiration for representing classes as attributed relational graphs, and for applying graph-theoretic techniques and algorithms to their comparison, arose from a well-founded intuition that a common basis in graph theory was sufficient to enable a reasonable transfer of these techniques to the problem of determining similarity in object-oriented code. The practical application of this work relates to the identification and indexing of instances of recurring, class-based, common structure present in established and evolving collections of object-oriented code. The classification so generated additionally provides a framework for class-based matching over an existing code base, both from the perspective of newly introduced classes and from that of search "templates" provided by the incomplete, iteratively constructed and refined classes associated with current and ongoing development. The tools and techniques developed here support and improve shared awareness of reuse opportunity, based on analysing structural similarity in past and ongoing development; they can in turn be seen as part of a process of domain analysis capable of stimulating the evolution of a systematic reuse ethic.
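    The thesis's two-phase similarity measure is not reproduced in the abstract; as a loose illustration of the underlying idea only, the sketch below represents each class as a small attributed graph (labelled nodes for members, labelled edges for relationships) and scores similarity by the overlap of node and edge labels, a deliberate simplification of the graph-matching techniques the work actually adapts. All class and function names are illustrative.

        from dataclasses import dataclass, field

        @dataclass
        class ClassGraph:
            """Attributed relational graph of one class: labelled nodes and labelled edges."""
            name: str
            nodes: set = field(default_factory=set)        # e.g. ("method", "draw")
            edges: set = field(default_factory=set)        # e.g. ("calls", "draw", "area")

        def jaccard(a, b):
            return len(a & b) / len(a | b) if (a | b) else 1.0

        def similarity(g1, g2, w_nodes=0.5, w_edges=0.5):
            """Crude two-part score: label overlap of nodes, then of edges."""
            return w_nodes * jaccard(g1.nodes, g2.nodes) + w_edges * jaccard(g1.edges, g2.edges)

        # Toy usage: two shape-like classes with partially overlapping structure.
        circle = ClassGraph("Circle",
                            nodes={("attr", "radius"), ("method", "area"), ("method", "draw")},
                            edges={("calls", "draw", "area")})
        square = ClassGraph("Square",
                            nodes={("attr", "side"), ("method", "area"), ("method", "draw")},
                            edges={("calls", "draw", "area")})
        print(similarity(circle, square))   # higher values suggest reusable common structure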