483 research outputs found

    Autonomous agents for multi-function radar resource management

    The multifunction radar, aided by advances in electronically steered phased array technology, is capable of supporting numerous, differing and potentially conflicting tasks. However, the full potential of the radar system is only realised through its ability to automatically manage and configure the finite resource it has available. This thesis details the novel application of agent systems to this multifunction radar resource management problem. Agent systems are computational societies in which the synergy of local interactions between agents produces desirable emergent global behaviour. This thesis explores the measures and models which can be used to allocate radar resource; this choice of objective function is crucial, as it determines which attribute is allocated resource and consequently constitutes a description of the problem to be solved. A variety of task-specific and information-theoretic measures are derived and compared. It is shown that utilising as wide a variety of measures and models as possible enhances the radar's multifunction capability. An agent-based radar resource manager is developed using the JADE framework, and it is used to apply the sequential first-price auction and the continuous double auction to the multifunction radar resource management problem. The application of the sequential first-price auction leads to the development of the Sequential First Price Auction Resource Management algorithm, from which numerous novel conclusions on radar resource management algorithm design are drawn. The application of the continuous double auction leads to the development of the Continuous Double Auction Parameter Selection (CDAPS) algorithm. CDAPS improves on the current state of the art by producing a better allocation with low computational burden. The algorithm is shown to give worthwhile improvements in task performance over a conventional rule-based approach for the tracking and surveillance functions, as well as exhibiting graceful degradation and adaptation to a dynamic environment.
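    The sequential first-price auction mentioned above can be illustrated with a minimal sketch. The task names, bid values, time budget and diminishing-returns rule below are all invented for illustration; the thesis's actual SFPA Resource Management algorithm is considerably more involved:

    ```python
    # Minimal sketch of a sequential first-price auction over radar time slots.
    # Each task agent bids its marginal utility for one more slot; the highest
    # bidder wins the slot (first price). Values here are purely illustrative.

    def marginal_utility(base, slots_held):
        # Invented diminishing-returns rule: each extra slot is worth half
        # the previous one.
        return base / (2 ** slots_held)

    def sequential_first_price_auction(tasks, n_slots):
        """tasks: dict name -> base utility. Returns slots won per task."""
        allocation = {name: 0 for name in tasks}
        for _ in range(n_slots):
            # Every agent submits a bid for the current slot.
            bids = {name: marginal_utility(base, allocation[name])
                    for name, base in tasks.items()}
            winner = max(bids, key=bids.get)  # highest bid wins the slot
            allocation[winner] += 1
        return allocation

    if __name__ == "__main__":
        tasks = {"track_A": 8.0, "track_B": 5.0, "surveillance": 6.0}
        print(sequential_first_price_auction(tasks, 6))
    ```

    Note how the diminishing marginal bids naturally spread the finite time budget across tasks, which is the resource-balancing behaviour the auction mechanism is meant to produce.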

    Joint 1D and 2D Neural Networks for Automatic Modulation Recognition

    The digital communication and radar community has recently shown increased interest in using data-driven approaches for tasks such as modulation recognition, channel estimation and distortion correction. In this research we apply an object detector for parameter estimation to perform waveform separation in the time and frequency domains prior to classification. This enables the full automation of detecting and classifying simultaneously occurring waveforms. We leverage a 1D ResNet implemented by O'Shea et al. in [1] and the YOLO v3 object detector designed by Redmon et al. in [2]. We conducted an in-depth study of the performance of these architectures and integrated the models to perform joint detection and classification. To our knowledge, the present research is the first to study and successfully combine a 1D ResNet classifier and a YOLO v3 object detector to fully automate the process of automatic modulation recognition (AMR) for parameter estimation, pulse extraction and waveform classification in non-cooperative scenarios. The overall accuracy of the joint detector/classifier is 90% at 10 dB signal-to-noise ratio across 24 digital and analog modulations.
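    The 1D ResNet referenced above classifies raw time-series samples using residual blocks. A toy NumPy sketch of a single 1D residual block follows; the kernels, signal and sizes are invented, and a real classifier would of course use a trained deep-learning framework rather than hand-written convolutions:

    ```python
    import numpy as np

    def conv1d_same(x, w):
        # 'same'-padded 1D convolution (correlation) of signal x with kernel w.
        pad = len(w) // 2
        xp = np.pad(x, pad)
        return np.array([np.dot(xp[i:i + len(w)], w) for i in range(len(x))])

    def residual_block_1d(x, w1, w2):
        # y = ReLU(conv(ReLU(conv(x))) + x): the skip connection lets the block
        # learn a residual on top of the identity, the core ResNet idea.
        h = np.maximum(conv1d_same(x, w1), 0.0)
        return np.maximum(conv1d_same(h, w2) + x, 0.0)

    x = np.sin(np.linspace(0, 8 * np.pi, 64))   # stand-in for one I/Q channel
    w1 = np.array([0.25, 0.5, 0.25])            # invented smoothing kernel
    w2 = np.array([-0.5, 1.0, -0.5])            # invented high-pass kernel
    y = residual_block_1d(x, w1, w2)
    print(y.shape)  # output length matches the input
    ```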

    Management: A continuing bibliography with indexes

    This bibliography lists 344 reports, articles, and other documents introduced into the NASA scientific and technical information system in 1978.

    Running stream-like programs on heterogeneous multi-core systems

    All major semiconductor companies are now shipping multi-cores. Phones, PCs, laptops, and mobile internet devices will all require software that can make effective use of these cores. Writing high-performance parallel software is difficult, time-consuming and error-prone, increasing both time-to-market and cost. Software outlives hardware: it typically takes longer to develop new software than new hardware, and legacy software tends to survive for a long time, during which the number of cores per system will increase. Development and maintenance productivity will be improved if parallelism and technical details are managed by the machine, while the programmer reasons about the application as a whole. Parallel software should be written using domain-specific high-level languages or extensions. These languages reveal implicit parallelism, which would be obscured by a sequential language such as C. When memory allocation and program control are managed by the compiler, the program's structure and data layout can be safely and reliably modified by high-level compiler transformations. One important application domain contains so-called stream programs, which are structured as independent kernels interacting only through one-way channels, called streams. Stream programming is not applicable to all programs, but it arises naturally in audio and video encoding and decoding, 3D graphics, and digital signal processing. This representation enables high-level transformations, including kernel unrolling and kernel fusion. This thesis develops new compiler and run-time techniques for stream programming. The first part of the thesis is concerned with a statically scheduled stream compiler. It introduces a new static partitioning algorithm, which determines which kernels should be fused in order to balance the loads on the processors and interconnects. A good partitioning algorithm is crucial if the compiler is to produce efficient code. The algorithm also takes account of downstream compiler passes (specifically software pipelining and buffer allocation) and models the compiler's ability to fuse kernels; the latter is important because the compiler may not be able to fuse arbitrary collections of kernels. This thesis also introduces a static queue-sizing algorithm. This algorithm is important when memory is distributed, especially when local stores are small. The algorithm takes account of latencies and variations in computation time, and is constrained by the sizes of the local memories. The second part of the thesis is concerned with dynamic scheduling of stream programs. First, it investigates the performance of known online, non-preemptive, non-clairvoyant dynamic schedulers. Second, it proposes two dynamic schedulers for stream programs. The first is specifically for one-dimensional stream programs. The second is more general: it does not need to be told the stream graph, but it has slightly larger overhead. The thesis also introduces some support tools related to stream programming. StarssCheck is a debugging tool, based on Valgrind, for the StarSs task-parallel programming language. It generates a warning whenever the program's behaviour contradicts a pragma annotation; such behaviour could otherwise lead to exceptions or race conditions. StreamIt to OmpSs is a tool to convert a streaming program in the StreamIt language into a dynamically scheduled task-based program using StarSs.
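    The static partitioning problem described above, deciding which kernels to fuse so that processor loads balance, can be illustrated with a minimal greedy sketch. The kernel names and load figures are invented, and the thesis's actual algorithm additionally models interconnect traffic, downstream passes and fusibility constraints:

    ```python
    # Greedy longest-processing-time partitioning of stream kernels onto
    # processors: take kernels in decreasing load order and always assign to
    # the least-loaded processor. Kernels mapped to the same processor are
    # then candidates for fusion.

    def partition_kernels(kernel_loads, n_procs):
        procs = [{"load": 0.0, "kernels": []} for _ in range(n_procs)]
        for name, load in sorted(kernel_loads.items(), key=lambda kv: -kv[1]):
            target = min(procs, key=lambda p: p["load"])  # least-loaded proc
            target["kernels"].append(name)
            target["load"] += load
        return procs

    # Invented pipeline: source -> fir -> fft -> quant -> sink, with loads.
    kernels = {"source": 1.0, "fir": 6.0, "fft": 5.0, "quant": 2.0, "sink": 1.0}
    for p in partition_kernels(kernels, 2):
        print(p["load"], p["kernels"])
    ```

    This sketch only balances computation; a production partitioner, as the abstract notes, must also keep communicating kernels close together and respect which fusions the compiler can actually perform.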

    Advances in analytical models and applications for RFID, WSN and AmI systems

    Internet of Things (IoT) is being built upon many different elements acting as sources and sinks of information, rather than the previous human-centric conception of the Internet. Developments in IoT span a vast set of fields, from data sensing to the development of new protocols and applications. Indeed, a key concept underlying IoT is the smart and autonomous processing of the huge new data flows available. In this work, we study three different aspects of IoT. First, we focus on the sensing infrastructure.
    Among the different kinds of sensing technologies available to IoT systems, Radio Frequency Identification (RFID) is widely considered one of the leading technologies. RFID is the enabling technology behind applications such as access control, tracking and tracing of containers, file management, baggage sorting and equipment location. With the growth of RFID, many facilities require multiple RFID readers, usually operating close to each other. These are known as Dense Reader Environments (DREs). The co-existence of several readers operating concurrently is known to cause severe interference in the identification process. One of the key aspects to solve in RFID DREs is achieving proper coordination among readers. This is the focus of the first part of this doctoral thesis. Unlike previous works based on heuristics, we address this problem through an optimization-based approach. The goal is identifying the maximum mean number of tags while network constraints are met. To formulate these optimization problems, we have analytically obtained the mean number of identifications in a bounded (discrete or continuous) time period, an additional novel contribution of our work. Results show that our approach is overwhelmingly better than previously known methods. Among the sensing technologies of IoT, Wireless Sensor Networks (WSNs) play a fundamental role. WSNs have been studied extensively, and largely theoretically, in the past decade, and many of their initial problems related to communication aspects have been successfully solved. However, with the adoption of WSNs in real-life projects, new issues have arisen, one of them being the development of realistic strategies to deploy WSNs. We have studied different ways of solving this problem by focusing on different optimality criteria and evaluating the trade-offs that occur when a balanced solution must be selected. On the one hand, deterministic placements subject to conflicting goals have been addressed; results can be obtained in the form of Pareto frontiers, allowing proper solution selection. On the other hand, a number of situations correspond to deployments where the nodes' position is inherently random. We have analyzed these situations, leading first to a theoretical model, which has later been particularized to a Moon WSN survey. Our work is the first to consider a full model with realistic properties such as 3D topography, propellant consumption, and network lifetime and mass limitations. Furthermore, the development of smart applications within IoT is the focus of the Ambient Intelligence (AmI) field. Rather than having people adapt to the surrounding environment, AmI pursues the development of sensitive environments able to anticipate and support people's actions. AmI is progressively being introduced in many real-life environments such as education, homes and healthcare. In this thesis we develop a sport-oriented AmI system designed to improve athletes' training. The goal is to develop an assistant able to provide real-time training orders based on both the environment and the athletes' biometry, aimed at controlling aerobic and technical-tactical training. Validation experiments with the honor-league UCAM Volleyball Murcia team have shown the suitability of this approach.
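    The central analytical quantity in the RFID part, the mean number of tags identified in a bounded time period, can be sketched for the common framed-slotted-ALOHA setting. This is a textbook illustration, not the thesis's exact model: with n tags each picking one of L slots uniformly at random, a tag is identified when no other tag picks its slot, so the expected number identified per frame is n(1 - 1/L)^(n-1).

    ```python
    # Mean tags identified in one framed-slotted-ALOHA frame, compared
    # against a Monte-Carlo simulation. Parameters are illustrative.
    import random

    def mean_identified(n_tags, n_slots):
        # A tag succeeds when none of the other n-1 tags picks its slot.
        return n_tags * (1 - 1 / n_slots) ** (n_tags - 1)

    def simulate(n_tags, n_slots, trials=20000, seed=1):
        rng = random.Random(seed)
        total = 0
        for _ in range(trials):
            slots = [rng.randrange(n_slots) for _ in range(n_tags)]
            # A slot chosen by exactly one tag is a successful identification.
            total += sum(1 for s in slots if slots.count(s) == 1)
        return total / trials

    print(mean_identified(10, 16))   # analytical mean
    print(simulate(10, 16))          # simulated mean, should be close
    ```

    An optimization-based coordinator of the kind the abstract describes would maximize such an expression over the readers' configuration parameters, subject to the network's interference constraints.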

    Probabilistic grid scheduling based on job statistics and monitoring information

    This transfer thesis presents a novel, probabilistic approach to scheduling applications on computational Grids based on their historical behaviour, the current state of the Grid, and predictions of the future execution times and resource utilisation of such applications. The work lays a foundation for enabling a more intuitive, user-friendly and effective scheduling technique termed deadline scheduling. Initial work has established the motivation and requirements for a more efficient Grid scheduler, able to adaptively handle the dynamic nature of Grid resources and the submitted workload. Preliminary scheduler research identified the need for detailed monitoring of Grid resources at the process level, and for a tool to simulate the non-deterministic behaviour and statistical properties of Grid applications. A simulation tool, GridLoader, has been developed to enable modelling of application loads similar to a number of typical Grid applications. GridLoader is able to simulate CPU utilisation, memory allocation and network transfers according to limits set through command-line parameters or a configuration file. Its specific strength is in achieving set resource utilisation targets in a probabilistic manner, thus creating a dynamic environment suitable for testing the scheduler's adaptability and its prediction algorithm. To enable highly granular monitoring of Grid applications, a monitoring framework based on the Ganglia Toolkit was developed and tested. The suite is able to collect resource usage information for individual Grid applications, integrate it into a standard XML-based information flow, provide visualisation through a Web portal, and export data into a format suitable for off-line analysis. The thesis also presents an initial investigation of the utilisation of the University College London Central Computing Cluster facility running Sun Grid Engine middleware. The feasibility of basic prediction concepts based on historical information and process metadata has been successfully established, and possible scheduling improvements using such predictions have been identified. The thesis is structured as follows: Section 1 introduces Grid computing and its major concepts; Section 2 presents open research issues and the specific focus of the author's research; Section 3 gives a survey of the related literature, schedulers, monitoring tools and simulation packages; Section 4 presents the platform for the author's work, the Self-Organising Grid Resource management project; Sections 5 and 6 give detailed accounts of the monitoring framework and simulation tool developed; Section 7 presents the initial data analysis; and Section 8 concludes the thesis, followed by appendices and references.
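    The deadline-scheduling idea above, predicting from historical behaviour whether a job will finish in time, can be sketched with an empirical distribution. The runtimes and deadline below are invented; the thesis's predictor also incorporates current Grid state and resource monitoring data:

    ```python
    # Estimate P(job finishes before its deadline) from historical run times
    # using the empirical CDF. A deadline scheduler can then submit the job
    # to the resource whose estimated probability is highest.

    def p_meets_deadline(history, deadline):
        """Fraction of past runs that finished within `deadline` seconds."""
        if not history:
            raise ValueError("no historical runs to base a prediction on")
        return sum(1 for t in history if t <= deadline) / len(history)

    past_runtimes = [110, 95, 130, 102, 98, 145, 90, 120]  # seconds, invented
    print(p_meets_deadline(past_runtimes, deadline=125))
    ```

    With the sample data, 6 of the 8 past runs finished within 125 seconds, so the estimate is 0.75; richer predictors would condition on input size and current load rather than treating all past runs as exchangeable.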

    Parallel architectures and runtime systems co-design for task-based programming models

    The increasing levels of parallelism in modern computing systems have heightened the need for a holistic vision when designing multiprocessor architectures, one that takes into account the needs of the programming models and applications. Nowadays, system design consists of several layers stacked on top of each other, from the architecture up to the application software. Although this design allows a separation of concerns, where it is possible to change layers independently thanks to a well-known interface between them, it hampers future system design as Moore's Law reaches its end. Current performance improvements in computer architecture are driven by the shrinkage of the transistor channel width, allowing faster and more power-efficient chips to be made. However, technology is reaching physical limits where the transistor size cannot be reduced further, which requires a change of paradigm in systems design. This thesis proposes to break this layered design, and advocates a system where the architecture and the programming-model runtime system can exchange information towards a common goal: improving performance and reducing power consumption. By making the architecture aware of runtime information, such as the Task Dependency Graph (TDG) in the case of dataflow task-based programming models, it is possible to reduce power consumption by exploiting the critical path of the graph. Moreover, the architecture can provide hardware support to create such a graph in order to reduce runtime overheads and make the execution of fine-grained tasks possible, increasing the available parallelism. Finally, the current status of inter-node communication primitives can be exposed to the runtime system to enable more efficient communication scheduling, which also creates new opportunities for overlapping computation and communication that were not possible before. An evaluation of the proposals introduced in this thesis is provided, and a methodology to simulate and characterize application behavior is also presented.
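    The critical-path idea above, letting tasks that are not on the TDG's critical path run on slower, lower-power cores without delaying the program, rests on computing the longest path through the graph. A minimal sketch follows; the graph and task costs are invented:

    ```python
    # Longest (critical) path through a task dependency graph. Tasks not on
    # this path have slack, so a power-aware runtime could run them at lower
    # frequency (e.g. via DVFS) without extending total execution time.

    def critical_path(cost, deps):
        """cost: task -> execution time; deps: task -> prerequisite tasks."""
        finish = {}  # memoized earliest finish time per task

        def ef(t):
            if t not in finish:
                finish[t] = cost[t] + max((ef(d) for d in deps.get(t, [])),
                                          default=0)
            return finish[t]

        end = max(cost, key=ef)  # task with the latest earliest-finish time
        # Walk back through the predecessor that determines each finish time.
        path = [end]
        while deps.get(path[-1]):
            path.append(max(deps[path[-1]], key=lambda d: finish[d]))
        return list(reversed(path)), finish[end]

    cost = {"a": 2, "b": 4, "c": 1, "d": 3}
    deps = {"c": ["a", "b"], "d": ["c"]}
    print(critical_path(cost, deps))   # expect (['b', 'c', 'd'], 8)
    ```

    Here task "a" finishes by time 2 while the path through "b" takes until time 4, so "a" has two units of slack: exactly the kind of information a TDG-aware architecture could exploit to save power.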