40 research outputs found

    Latency and accuracy optimized mobile face detection

    Abstract. Face detection is a preprocessing step in many computer vision applications. Important factors are the accuracy, inference latency, and energy efficiency of the detection framework. Computationally light detectors that execute in real time are a requirement for many application areas, such as face tracking and recognition. Typical operating platforms in everyday use are smartphones and embedded devices, which have limited computation capacity. In easy detection tasks, face detectors perform comparably to humans, but the difficulty changes with the conditions: current challenges include atypically posed and very small faces, partially occluded faces, and dim or overly bright environments. State-of-the-art face detection employs deep learning methods, namely neural networks, which loosely imitate the mammalian brain. The most relevant technology is the convolutional neural network, which is designed for local feature description. In this thesis, the main computational optimization approach is neural network quantization. The network models were delegated to digital signal processors and graphics processing units. Quantization was shown to reduce computation latency substantially, and the most energy-efficient inference was achieved through digital signal processor delegation. Multithreading was used to accelerate inference and reduced the energy consumed per algorithm run.
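
    The thesis itself does not name a specific inference toolchain; purely as a hedged illustration of the quantization, delegation, and multithreading steps described above, the sketch below uses TensorFlow Lite (the model path, thread count, and optimization flag are assumptions, not details taken from the thesis).

```python
# Hedged sketch: post-training quantization and multithreaded inference with
# TensorFlow Lite. The model path and thread count are illustrative assumptions.
import numpy as np
import tensorflow as tf

# Convert a trained detector to a quantized TFLite model.
converter = tf.lite.TFLiteConverter.from_saved_model("face_detector/")  # hypothetical path
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enables post-training quantization
tflite_model = converter.convert()

# Run inference with several CPU threads; hardware delegates (e.g., a GPU or
# DSP delegate) would be passed via experimental_delegates on supported devices.
interpreter = tf.lite.Interpreter(model_content=tflite_model, num_threads=4)
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
image = np.zeros(inp["shape"], dtype=inp["dtype"])  # placeholder input image
interpreter.set_tensor(inp["index"], image)
interpreter.invoke()
detections = interpreter.get_tensor(out["index"])
```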

    Investigating the attainment of optimum data quality for EHR Big Data: proposing a new methodological approach

    The value derivable from the use of data has been increasing continuously for some years. Both commercial and non-commercial organisations have realised the immense benefits that might be derived if all the data at their disposal could be analysed and form the basis of decision making. The technological tools required to produce, capture, store, transmit, and analyse huge amounts of data form the background to the development of the phenomenon of Big Data. With Big Data, the aim is to generate value from huge amounts of data, often in non-structured formats and produced extremely frequently. However, the potential value derivable depends on the general level of data governance, and more precisely on the quality of the data. The field of data quality is well researched for traditional data uses but is still in its infancy in the Big Data context. This dissertation focused on investigating effective methods to enhance data quality for Big Data. The principal deliverable of this research is a methodological approach which can be used to optimize the level of data quality in the Big Data context. Since data quality is contextual (that is, a non-generalizable field), this research study focuses on applying the methodological approach to one use case, namely Electronic Health Records (EHR). The first main contribution to knowledge of this study is a systematic investigation of which data quality dimensions (DQDs) are most important for EHR Big Data. The two most important dimensions ascertained by the research methods applied in this study are accuracy and completeness. These are two well-known dimensions, and this study confirms that they are also very important for EHR Big Data. The second important contribution to knowledge is an investigation into whether artificial intelligence, with a special focus on machine learning, could be used to improve the detection of dirty data, focusing on the two data quality dimensions of accuracy and completeness. Based on the experiments carried out, regression and clustering algorithms proved to be more adequate for accuracy-related and completeness-related issues, respectively. However, the limits of implementing and using machine learning algorithms for detecting data quality issues in Big Data were also revealed and discussed in this research study. It can safely be deduced from this part of the research study that the use of machine learning for enhancing the detection of data quality issues is a promising area but not yet a panacea that automates the entire process. The third important contribution is a proposed guideline to undertake data repairs most efficiently for Big Data; this involved surveying and comparing existing data cleansing algorithms against a prototype developed for data reparation. Weaknesses of existing algorithms are highlighted and are considered as areas of practice on which efficient data reparation algorithms must focus. Those three important contributions form the nucleus of a new data quality methodological approach which could be used to optimize Big Data quality, as applied in the context of EHR. Some of the activities and techniques discussed in the proposed methodological approach can be transposed to other industries and use cases to a large extent. The proposed data quality methodological approach can be used by practitioners of Big Data quality who follow a data-driven strategy. As opposed to existing Big Data quality frameworks, the proposed data quality methodological approach has the advantage of being more precise and specific: it gives clear and proven methods to undertake the main identified stages of a Big Data quality lifecycle and can therefore be applied by practitioners in the area. This research study provides some promising results and deliverables, and it paves the way for further research in the area. Technology in Big Data is evolving rapidly, and future research should focus on new representations of Big Data, the real-time streaming aspect, and replicating the research methods used in this study on new technologies to validate the current results.
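
    As a hedged illustration of the kind of dirty-data detection described above (not the study's actual pipeline; the column names, thresholds, and toy records are assumptions), regression residuals can flag accuracy suspects while clustering can flag completeness suspects:

```python
# Hedged sketch: regression- and clustering-based detection of suspect EHR
# records. Column names, thresholds, and the toy data are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.cluster import DBSCAN
from sklearn.linear_model import LinearRegression

records = pd.DataFrame({
    "height_cm": [165, 180, 172, 158, 175, 169],
    "weight_kg": [63, 84, 300, 55, 78, 66],       # 300 kg is a deliberate outlier
    "visits_last_year": [2, 5, 1, np.nan, 3, 4],  # NaN models a completeness gap
})

# Accuracy check (illustrative): predict weight from height and flag records
# whose residual is far larger than typical.
model = LinearRegression().fit(records[["height_cm"]], records["weight_kg"])
residuals = records["weight_kg"] - model.predict(records[["height_cm"]])
suspect_accuracy = records[residuals.abs() > 1.5 * residuals.std()]

# Completeness check (illustrative): impute gaps with column means, cluster the
# records, and flag rows that contain gaps or fail to join any cluster (label -1).
filled = records.fillna(records.mean(numeric_only=True))
labels = DBSCAN(eps=15.0, min_samples=2).fit_predict(filled)
suspect_completeness = records[(labels == -1) | records.isna().any(axis=1)]

print("accuracy suspects:", suspect_accuracy.index.tolist())
print("completeness suspects:", suspect_completeness.index.tolist())
```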

    Analysis of airport telematic data using data mining and machine learning

    The rapid development of technology in all scientific fields has caused them to converge into a single field of science. An example of this is telematics: a synergistic combination of informatics and telecommunications. Telematics is not yet very common in the context of airport technologies, even though much of the paperwork, data, and services are computer-based and then retransmitted over telecommunications systems. In this thesis, the characteristics that surround telematics technology are studied, and the benefits of its use are analyzed through case studies. Machine learning and data science techniques were used to develop models that extract relevant information from the data obtained through telematics. The models include different procedures, such as prediction and data interpretation techniques, and could therefore be useful in different fields or environments. In this thesis, the models are adapted to extracting information about the characteristics of vehicle telematics; if they are to be used in other domains, they should be adapted to the new context while keeping the same core. The thesis suggests several application uses for the models. By joining data and telematics services, these applications could provide many advantages in ground operations management at the airport, such as optimizing fuel consumption through accurate use of the features that affect fuel waste, thereby reducing emissions and avoiding monetary penalties under certain environmental protocols.
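
    Purely as a hedged sketch of the kind of fuel-consumption prediction the abstract alludes to (not the thesis's model; the telematics feature names and synthetic data are assumptions), a regression on vehicle telematics features might look like this:

```python
# Hedged sketch: predicting ground-vehicle fuel consumption from telematics
# features. Feature names and the synthetic data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
idle_minutes = rng.uniform(0, 60, n)
distance_km = rng.uniform(1, 30, n)
payload_tonnes = rng.uniform(0, 5, n)
# Synthetic target: fuel burn grows with idling, distance, and payload.
fuel_litres = (0.05 * idle_minutes + 0.4 * distance_km
               + 0.8 * payload_tonnes + rng.normal(0, 0.5, n))

X = np.column_stack([idle_minutes, distance_km, payload_tonnes])
X_train, X_test, y_train, y_test = train_test_split(X, fuel_litres, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("MAE (litres):", mean_absolute_error(y_test, model.predict(X_test)))
# Feature importances hint at which telematics signals drive fuel waste.
print(dict(zip(["idle_minutes", "distance_km", "payload_tonnes"],
               model.feature_importances_)))
```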

    High-level compiler analysis for OpenMP

    Nowadays, applications from dissimilar domains, such as high-performance computing and high-integrity systems, require levels of performance that can only be achieved by means of sophisticated heterogeneous architectures. However, the complex nature of such architectures hinders the production of efficient code at acceptable levels of time and cost. Moreover, the need to exploit parallelism adds complications of its own (e.g., deadlocks and race conditions). In this context, compiler analysis is fundamental for optimizing parallel programs. There is, however, a trade-off between complexity and profit: low-complexity analyses (e.g., reaching definitions) provide information that may be insufficient for many relevant transformations, and complex analyses based on mathematical representations (e.g., the polyhedral model) give accurate results at a high computational cost. A range of parallel programming models, providing different levels of programmability, performance, and portability, enable the exploitation of current architectures. However, OpenMP has proven to offer many advantages over its competitors: 1) it delivers levels of performance comparable to highly tunable models such as CUDA and MPI, and better robustness than low-level libraries such as Pthreads; 2) the extensions included in the latest specification meet the characteristics of current heterogeneous architectures (i.e., the coupling of a host processor to one or more accelerators, and the capability of expressing fine-grained, both structured and unstructured, and highly dynamic task parallelism); 3) OpenMP is widely implemented by several chip (e.g., Kalray MPPA, Intel) and compiler (e.g., GNU, Intel) vendors; and 4) although the model currently lacks resiliency and reliability mechanisms, many works, including this thesis, pursue their introduction into the specification. This thesis addresses the study of compiler analysis techniques for OpenMP with two main purposes: 1) enhance the programmability and reliability of OpenMP, and 2) prove that OpenMP is a suitable model to exploit parallelism in safety-critical domains. In particular, the thesis focuses on the tasking model because it offers the flexibility to tackle the parallelization of algorithms with load imbalance, recursion, and uncountable-loop-based kernels. Additionally, recent works have proven the time predictability of this model, shortening the distance towards its introduction in safety-critical domains. To enable the analysis of applications using the OpenMP tasking model, the first contribution of this thesis is the extension of a set of classic compiler techniques with support for OpenMP. As a basis for including reliability mechanisms, the second contribution consists of the development of a series of algorithms to statically detect situations involving OpenMP tasks, which may lead to a loss of performance, non-deterministic results, or run-time failures. A well-known problem of parallel processing related to compilers is the static scheduling of a program represented by a directed graph. Although the literature on static scheduling techniques is extensive, the work related to generating the task graph at compile time is very scarce. Compilers are limited by the knowledge they can extract, which depends on the application and the programming model.
The third contribution of this thesis is the generation of a predicated task dependency graph for OpenMP that can be interpreted by the runtime in such a way that the cost of resolving dependences is reduced to a minimum. With the previous contributions as a basis for determining the functional safety of OpenMP, the final contribution of this thesis is the adaptation of OpenMP to the safety-critical domain along two directions: 1) indicating how OpenMP can be safely used in such a domain, and 2) integrating OpenMP into Ada, a language widely used in the safety-critical domain.
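
    As a hedged, simplified illustration of the static task analysis described above (not the thesis's compiler pass; the task list and dependence model are assumptions), one can build a dependency graph from declared depend clauses and flag conflicting accesses between unordered tasks:

```python
# Hedged sketch: build a task dependency graph from declared OpenMP-style
# depend clauses, then flag shared-variable conflicts between unordered tasks.
# The task list and the dependence model are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    dep_in: set = field(default_factory=set)    # depend(in: ...)
    dep_out: set = field(default_factory=set)   # depend(out: ...)
    reads: set = field(default_factory=set)     # all shared reads in the body
    writes: set = field(default_factory=set)    # all shared writes in the body

tasks = [
    Task("t1", dep_out={"a"}, writes={"a"}),
    Task("t2", dep_in={"a"}, reads={"a"}, writes={"sum"}),  # "sum" not in any clause
    Task("t3", dep_in={"a"}, reads={"a"}, writes={"sum"}),  # conflicts with t2
]

# Dependency edges follow depend-clause semantics between sibling tasks created
# in program order: RAW, WAW, and WAR on the declared sets create an edge.
edges = set()
for i, ti in enumerate(tasks):
    for tj in tasks[i + 1:]:
        if ti.dep_out & (tj.dep_in | tj.dep_out) or ti.dep_in & tj.dep_out:
            edges.add((ti.name, tj.name))

def ordered(a, b):
    """True if there is a path a -> b in the dependency graph."""
    frontier, seen = {a}, set()
    while frontier:
        n = frontier.pop()
        seen.add(n)
        frontier |= {y for x, y in edges if x == n and y not in seen}
    return b in seen

# Report conflicting accesses between tasks whose relative order is undefined.
for i, ti in enumerate(tasks):
    for tj in tasks[i + 1:]:
        conflict = (ti.writes & (tj.reads | tj.writes)) | (ti.reads & tj.writes)
        if conflict and not ordered(ti.name, tj.name) and not ordered(tj.name, ti.name):
            print(f"possible race between {ti.name} and {tj.name} on {sorted(conflict)}")
```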

    Resilience for large ensemble computations

    With the increasing power of supercomputers, ever more detailed models of physical systems can be simulated, and ever larger problem sizes can be considered for any kind of numerical system. During the last twenty years, the performance of the fastest clusters went from the teraFLOPS domain (ASCI RED: 2.3 teraFLOPS) to the pre-exaFLOPS domain (Fugaku: 442 petaFLOPS), and we will soon have the first supercomputer with a peak performance cracking the exaFLOPS barrier (El Capitan: 1.5 exaFLOPS). Ensemble techniques are experiencing a renaissance with the availability of those extreme scales, and especially recent techniques, such as particle filters, will benefit from it. Current ensemble methods in climate science, such as ensemble Kalman filters, exhibit a linear dependency between the problem size and the ensemble size, while particle filters show an exponential dependency. Nevertheless, with the prospect of massive computing power come challenges such as power consumption and fault tolerance. The mean time between failures shrinks with the number of components in the system, and failures are expected every few hours at exascale. In this thesis, we explore and develop techniques to protect large ensemble computations from failures. We present novel approaches in differential checkpointing, elastic recovery, fully asynchronous checkpointing, and checkpoint compression. Furthermore, we design and implement a fault-tolerant particle filter with pre-emptive particle prefetching and caching. Finally, we design and implement a framework for the automatic validation and application of lossy compression in ensemble data assimilation. Altogether, we present five contributions in this thesis, of which the first two improve state-of-the-art checkpointing techniques and the last three address the resilience of ensemble computations. The contributions represent stand-alone fault-tolerance techniques; however, they can also be used to improve each other. For instance, we utilize elastic recovery (second contribution) to provide resilience in an online ensemble data assimilation framework (third contribution), and we build our validation framework (fifth contribution) on top of our particle filter implementation (fourth contribution). We further demonstrate that our contributions improve resilience and performance with experiments on various architectures, such as Intel, IBM, and ARM processors.
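
    Purely as a hedged sketch of the differential-checkpointing idea mentioned above (not the thesis's implementation; the block size and state layout are assumptions), only blocks whose content hash has changed since the previous checkpoint need to be written:

```python
# Hedged sketch of differential checkpointing: split the state into blocks,
# hash each block, and write only the blocks whose hash changed since the last
# checkpoint. Block size and state layout are illustrative assumptions.
import hashlib
import numpy as np

BLOCK = 4096  # bytes per block; an arbitrary illustrative choice

def block_hashes(buf: bytes):
    return [hashlib.sha1(buf[i:i + BLOCK]).digest() for i in range(0, len(buf), BLOCK)]

def differential_checkpoint(state: np.ndarray, previous_hashes):
    """Return (dirty_blocks, new_hashes); only dirty blocks need to be written."""
    buf = state.tobytes()
    hashes = block_hashes(buf)
    dirty = {
        i: buf[i * BLOCK:(i + 1) * BLOCK]
        for i, h in enumerate(hashes)
        if previous_hashes is None or i >= len(previous_hashes) or h != previous_hashes[i]
    }
    return dirty, hashes

state = np.zeros(100_000)          # simulated ensemble-member state
dirty, hashes = differential_checkpoint(state, None)
print("first checkpoint writes", len(dirty), "blocks")

state[42] = 3.14                   # a small update touches only one block
dirty, hashes = differential_checkpoint(state, hashes)
print("second checkpoint writes", len(dirty), "blocks")
```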

    Parallel and Distributed Computing

    The 14 chapters presented in this book cover a wide variety of representative works, ranging from hardware design to application development. In particular, the topics addressed are programmable and reconfigurable devices and systems, dependability of GPUs (graphics processing units), network topologies, cache coherence protocols, resource allocation, scheduling algorithms, peer-to-peer networks, large-scale network simulation, and parallel routines and algorithms. In this way, the articles included in this book constitute an excellent reference for engineers and researchers who have particular interests in each of these topics in parallel and distributed computing.

    Radio and computing resource management in SDR clouds

    The aim of this thesis is to define and develop the concept of efficient management of radio and computing resources in an SDR (software-defined radio) cloud. The SDR cloud breaks with today's cellular architecture: a set of distributed antennas is connected by optical fibre to data processing centres. The radio and computing infrastructure can be shared between different operators (virtualization), reducing costs and risks while increasing capacity and creating new business models and opportunities. The data centre centralizes the management of all system resources: antennas, spectrum, computing, routing, etc. Especially relevant is computing resource management (CRM), whose objective is to dynamically provide sufficient computing resources for the real-time execution of signal-processing algorithms. Current CRM techniques are not designed for wireless applications, and we demonstrate that this imposes a limit on the wireless traffic a CRM entity is capable of supporting. Based on this, a distributed management scheme is proposed, in which multiple CRM entities each manage a cluster of processors whose optimal size is derived from the traffic density. Radio resource management (RRM) techniques also need to be adapted to the characteristics of the new SDR cloud architecture. We introduce a linear cost model to measure the cost associated with the infrastructure resources consumed according to the pay-per-use model. Based on this model, we formulate the efficiency-maximization power allocation problem (EMPA). The operational costs per transmitted bit achieved by EMPA are 6 times lower than with traditional power allocation methods. Analytical solutions are obtained for the single-channel case, with and without channel state information at the transmitter. It is shown that the optimal transmission rate is an increasing function of the product of the channel gain with the operational costs divided by the power costs. The EMPA solution for multiple channels has the water-filling form present in many power allocation problems. In order to obtain insight into how the optimal solution behaves as a function of the problem parameters, a novel technique based on ordered statistics has been developed. This technique allows solving general water-filling problems based on the channel statistics rather than their realization, and it has allowed designing a low-complexity EMPA algorithm (2 to 4 orders of magnitude faster than state-of-the-art algorithms). Using the ordered-statistics technique, we have shown that the behaviour of the optimal transmission rate with respect to the average channel gains and cost parameters is equivalent to the single-channel case, and that the efficiency increases with the number of available channels. The results can be applied to design more efficient SDR clouds. As an example, we have derived the optimal ratio of the number of antennas per user that maximizes the efficiency. As new users enter and leave the network, this ratio should be kept constant by enabling and disabling antennas dynamically. This approach exploits the dynamism and elasticity provided by the SDR cloud.
In summary, this dissertation aims to promote a change in the communications system management model (typically RRM), considering the introduction of a new infrastructure model (the SDR cloud), new business models (based on cloud computing), and a more integrative view of efficient resource management that is not focused solely on optimizing spectrum usage.
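
    The abstract does not give the EMPA formulas; as a hedged illustration of the water-filling structure it refers to (a generic rate-oriented water-filling solver, not the thesis's efficiency-maximizing variant), a bisection on the water level can be sketched as follows:

```python
# Hedged sketch: generic water-filling power allocation p_i = max(0, mu - 1/g_i)
# subject to sum(p_i) = P_total, solved by bisection on the water level mu.
# This illustrates the structure EMPA shares, not the thesis's exact solution.
import numpy as np

def water_filling(gains, p_total, tol=1e-9):
    gains = np.asarray(gains, dtype=float)
    inv = 1.0 / gains                      # "floor" heights of the vessels
    lo, hi = inv.min(), inv.max() + p_total
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)               # candidate water level
        power = np.maximum(0.0, mu - inv)  # power poured above each floor
        if power.sum() > p_total:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - inv)

gains = [2.0, 1.0, 0.25, 0.05]             # illustrative channel gains
p = water_filling(gains, p_total=1.0)
rates = np.log2(1.0 + np.asarray(gains) * p)
print("powers:", np.round(p, 3), "sum:", round(p.sum(), 3))
print("rates :", np.round(rates, 3))
```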

    Intelligence artificielle et optimisation avec parallélisme

    This document is devoted to artificial intelligence and optimization. This first part will be devoted to having fun with high-level ideas and to introducing the subject. Thereafter, Part II will be devoted to Monte-Carlo Tree Search, a recent and powerful tool for sequential decision making; we will only briefly discuss other tools for sequential decision making, and the complexity of sequential decision making will be reviewed. Part III will then discuss optimization, with a particular focus on robust optimization and especially evolutionary optimization. Part IV will present some machine learning tools useful in everyday life, such as supervised learning and active learning. A conclusion (Part V) will come back to fun and to high-level ideas. In short, we discuss Monte-Carlo Tree Search, UCT, evolutionary algorithms, and other AI tricks and tips, with an emphasis on parallelization.
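
    As a hedged, generic illustration of the UCT rule behind Monte-Carlo Tree Search (standard UCB1-style selection, not code from this document; the node statistics are invented), the child-selection step can be sketched as:

```python
# Hedged sketch: the UCT (UCB1-based) child-selection rule at the heart of
# Monte-Carlo Tree Search. Node statistics here are invented for illustration.
import math
from dataclasses import dataclass, field

@dataclass
class Node:
    visits: int = 0
    total_reward: float = 0.0
    children: list = field(default_factory=list)

def uct_select(parent: Node, c: float = math.sqrt(2)) -> Node:
    """Pick the child maximizing mean reward plus an exploration bonus."""
    def score(child: Node) -> float:
        if child.visits == 0:
            return float("inf")            # always try unvisited children first
        exploit = child.total_reward / child.visits
        explore = c * math.sqrt(math.log(parent.visits) / child.visits)
        return exploit + explore
    return max(parent.children, key=score)

root = Node(visits=30, children=[
    Node(visits=10, total_reward=6.0),     # mean 0.60
    Node(visits=15, total_reward=10.5),    # mean 0.70
    Node(visits=5,  total_reward=2.0),     # mean 0.40, but large exploration bonus
])
best = uct_select(root)
print("selected child mean reward:", best.total_reward / best.visits)
```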

    Intelligent Circuits and Systems

    ICICS-2020 is the third conference initiated by the School of Electronics and Electrical Engineering at Lovely Professional University, exploring recent innovations of researchers working on the development of smart and green technologies in the fields of energy, electronics, communications, computers, and control. ICICS enables innovators to identify new opportunities for the social and economic benefit of society. The conference bridges the gap between academia, R&D institutions, social visionaries, and experts from all strata of society, allowing them to present their ongoing research activities and fostering research relations between them. It provides opportunities for exchanging new ideas, applications, and experiences in the field of smart technologies and for finding global partners for future collaboration. ICICS-2020 was conducted in two broad tracks: Intelligent Circuits & Intelligent Systems, and Emerging Technologies in Electrical Engineering.