3 research outputs found

    Coordination of multiagent systems for object-localization tasks

    As its name indicates, "multiagent system for object localization", this project develops a multiagent system designed to locate a specified kind of object, in our case shipwrecks and ships. The system could be used with minimal modifications for the aerial detection of other kinds of objects, such as fires or lost mountaineers, but the detection target we chose to demonstrate its capabilities is the one mentioned. How is this "system" able to detect castaways on the open sea? What does this "system" consist of? It is a set of intelligent planes (three by default) that fly in formation over a selected zone, photographing the terrain they pass over and identifying autonomously and in situ whether any of the photographs shows the remains of a shipwreck. But to do all this, the system must be piloted remotely, right? No. The system (that is, the planes) is completely autonomous. Once the take-off base tells them which area to explore, they decide how to explore it, fly out, explore it, and return after a while with the results. So you have a set of planes and can use them right now for this task? No. What we have is a three-dimensional simulator of the scenario in which the planes operate. The system is intended to be installed in real robotic planes, but at present it runs on a flight simulator. We have designed the artificial intelligence of these planes; installing it in real robots would have to be done by others as a continuation of this project. One example of a candidate platform is the planes of INTA's SIVA project.
Regarding what these intelligent planes, or agents, can do: they can calculate the route to follow to explore a zone shaped like an irregular polygon using little time and fuel; communicate and coordinate with each other; fly in formation to enlarge the area they can photograph together; identify whether the object they are searching for appears in the photographs they take; and avoid getting lost if communication with the formation leader or the base is interrupted. Regarding what they cannot do: they cannot take off or land on solid ground (in the simulator the planes begin and end their flight in the air); they cannot rescue the castaways themselves, only report their position to the base; and they are not yet programmed to handle every possible unforeseen eventuality, although the system is designed so that such capabilities can be added easily.
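The route-planning capability mentioned above (covering a zone shaped like an irregular polygon) is often implemented as a boustrophedon, or "lawnmower", sweep. The sketch below is a hypothetical illustration of that idea, not the project's code; the function name, the fixed swath width, and sweeping the polygon's bounding box rather than the exact polygon outline are all assumptions.

```python
# Hypothetical sketch of a boustrophedon ("lawnmower") coverage route, the kind
# of exploration path the abstract describes. For simplicity it sweeps the
# polygon's axis-aligned bounding box; a real planner would clip to the polygon.

def coverage_waypoints(polygon, swath_width):
    """Return back-and-forth waypoints covering the polygon's bounding box.

    polygon     -- list of (x, y) vertices of the zone to explore
    swath_width -- ground width photographed in a single pass (assumed fixed)
    """
    xs = [p[0] for p in polygon]
    ys = [p[1] for p in polygon]
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)

    waypoints = []
    y = y_min
    left_to_right = True
    while y <= y_max:
        row = [(x_min, y), (x_max, y)]
        if not left_to_right:
            row.reverse()              # alternate direction on each pass
        waypoints.extend(row)
        left_to_right = not left_to_right
        y += swath_width               # step over by one camera footprint
    return waypoints

path = coverage_waypoints([(0, 0), (10, 0), (12, 6), (2, 8)], swath_width=4)
```

Alternating the sweep direction on each pass is what keeps the route short: the plane never flies back across ground it has already photographed just to start the next strip.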

    ADAPTIVE MODEL BASED COMBUSTION PHASING CONTROL FOR MULTI FUEL SPARK IGNITION ENGINES

    This research describes a physics-based, control-oriented feed-forward model, combined with cylinder pressure feedback, to regulate combustion phasing in a spark-ignition engine operating on an unknown mix of fuels. This research may help enable internal combustion engines that are capable of on-the-fly adaptation to a wide range of fuels. These engines could: (1) facilitate a reduction in bio-fuel processing, (2) encourage locally appropriate bio-fuels that reduce transportation needs, (3) allow new fuel formulations to enter the market with minimal infrastructure, and (4) enable engine adaptation to pump-to-pump fuel variations. These outcomes will help make bio-fuels cost-competitive with other transportation fuels, lessen dependence on traditional sources of energy, and reduce greenhouse gas emissions from automobiles; all of which are pivotal societal issues. Spark-ignition engines are equipped with a large number of control actuators to satisfy fuel economy targets and maintain regulated emissions compliance. The increased control flexibility also allows for adaptability to a wide range of fuel compositions, while maintaining efficient operation when the input fuel is altered. Ignition timing control is of particular interest because it is the last control parameter prior to the combustion event, and it significantly influences engine efficiency and emissions. Although map-based ignition timing control and calibration routines are the state of the art, they become cumbersome as the number of control degrees of freedom used in the engine increases. The increased system complexity motivates the use of model-based methods to minimize product development time and ensure calibration flexibility when the engine is altered during the design process.
A closed-loop, model-based ignition timing control algorithm is formulated with: 1) a feed-forward, fuel-type-sensitive combustion model that predicts combustion duration from spark to 50% mass burned; 2) two virtual fuel-property observers providing octane number and laminar flame speed feedback; 3) an adaptive combustion phasing target model that can self-calibrate over a wide range of input fuels. The proposed closed-loop algorithm is experimentally validated in real time on a dynamometer. Satisfactory results are observed, and it is concluded that the closed-loop approach is able to regulate combustion phasing for multi-fuel adaptive SI engines.
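The closed-loop structure described above can be sketched as a feed-forward burn-duration estimate trimmed by PI feedback on measured CA50 (the crank angle of 50% mass burned). Everything below is an illustrative assumption, not the paper's calibration: the class name, the linear stand-in for the physics-based duration model, the CA50 target, and the gains.

```python
# Hypothetical sketch of the closed-loop combustion phasing controller described
# above: a feed-forward model sets a nominal spark advance from estimated fuel
# properties, and cylinder-pressure-based CA50 feedback trims it.

class PhasingController:
    def __init__(self, ca50_target=8.0, kp=0.5, ki=0.1):
        self.ca50_target = ca50_target  # deg ATDC; illustrative target
        self.kp, self.ki = kp, ki       # illustrative PI gains
        self.integral = 0.0

    def feed_forward_duration(self, octane_number, flame_speed):
        # Stand-in for the physics-based spark-to-CA50 duration model:
        # higher octane and slower laminar flames lengthen the burn (deg CA).
        return 25.0 + 0.1 * (octane_number - 91) - 10.0 * (flame_speed - 0.4)

    def spark_advance(self, octane_number, flame_speed, ca50_measured):
        # Feed-forward: fire early enough that 50% mass burned lands on target.
        advance = (self.feed_forward_duration(octane_number, flame_speed)
                   - self.ca50_target)
        # Feedback: PI trim on the CA50 error seen by the pressure sensor.
        error = ca50_measured - self.ca50_target
        self.integral += error
        return advance + self.kp * error + self.ki * self.integral

ctrl = PhasingController()
adv = ctrl.spark_advance(octane_number=95, flame_speed=0.35, ca50_measured=11.0)
```

The virtual fuel-property observers in the abstract would supply `octane_number` and `flame_speed` here; in this sketch they are simply passed in as known values.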

    Studies on object recognition from degraded images using neural networks

    No full text
    The objective of this paper is to study the performance of artificial neural network models for recognition of objects from poorly resolved, noisy, and transformed (scaled, rotated, translated) images, such as images reconstructed from sparse and noisy data in a sensor array imaging context. Noise and sparsity of data in the imaging context degrade the quality of the reconstructed image as a whole, instead of affecting it in the form of local corruption of the image pixel information as in many image processing situations. Hence, (i) neighbourhood processing methods for noise cleaning may not be suitable, (ii) feature extraction cannot be reliably performed, and (iii) model-based methods for classification cannot easily be applied. In this paper, we show that neural network models can be used to overcome some of the difficulties in dealing with degraded images as obtained in an imaging context.
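The core idea, classifying directly from degraded pixels with a learned model rather than hand-crafted features, can be sketched with a toy perceptron trained on globally noisy versions of two templates. The templates, noise level, and single-layer model are illustrative assumptions, not the paper's architecture.

```python
# Hypothetical sketch: learn to classify from globally degraded pixels directly,
# skipping feature extraction, by training on noisy versions of each template.
import random

random.seed(0)

TEMPLATES = {0: [1, 1, 0, 0], 1: [0, 0, 1, 1]}  # two toy 4-pixel "objects"

def degraded(label, sigma=0.3):
    # Global degradation: every pixel is perturbed, as with reconstruction
    # noise in the imaging context, rather than a few locally corrupted pixels.
    return [p + random.gauss(0, sigma) for p in TEMPLATES[label]]

# Train a single-layer perceptron (weights w, bias b) on degraded samples.
w, b = [0.0] * 4, 0.0
for _ in range(200):
    label = random.choice([0, 1])
    x = degraded(label)
    pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
    if pred != label:  # classic perceptron update, only on mistakes
        sign = 1 if label == 1 else -1
        w = [wi + sign * xi for wi, xi in zip(w, x)]
        b += sign

def classify(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Evaluate on fresh degraded samples the model has never seen.
accuracy = sum(classify(degraded(l)) == l for l in [0, 1] * 50) / 100
```

Because the degradation is global, no individual pixel is trustworthy, yet the learned weights pool evidence across all pixels, which is the robustness property the paper studies in larger networks.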