22 research outputs found
First zooarchaeological studies in Moreta (Puna of Jujuy, Argentina, S. VII-XVI d.C.)
We present the first results of the analysis of archaeofaunal remains found in Moreta, an archaeological site located on the eastern edge of the Pozuelos basin (Rinconada Department, Jujuy, Argentina) and occupied between the 7th and 16th centuries A.D. According to the qualitative (direct comparison) and quantitative (osteometric and statistical) analyses carried out, both wild (Vicugna vicugna) and domestic camelids (Lama glama), adult as well as juvenile, were the main animal resource exploited by the inhabitants of Moreta.
Fil: Camuñas, José Luis. Universidad Nacional de Tucumán. Facultad de Ciencias Naturales e Instituto Miguel Lillo. Instituto de Arqueología y Museo; Argentina
Fil: Angiorama, Carlos Ignacio. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Tucumán. Instituto Superior de Estudios Sociales. Universidad Nacional de Tucumán. Instituto Superior de Estudios Sociales; Argentina. Universidad Nacional de Tucumán. Facultad de Ciencias Naturales e Instituto Miguel Lillo. Instituto de Arqueología y Museo; Argentina
Fil: Nasif, Norma. Universidad Nacional de Tucumán. Facultad de Ciencias Naturales e Instituto Miguel Lillo; Argentina
An Event-Driven Multi-Kernel Convolution Processor Module for Event-Driven Vision Sensors
Event-Driven vision sensing is a new way of sensing visual reality in a frame-free manner. That is, the vision sensor (camera) does not capture a sequence of still frames, as in conventional video and computer vision systems. In Event-Driven sensors each pixel autonomously and asynchronously decides when to send its address out. This way, the sensor output is a continuous stream of address events that represents reality dynamically and continuously, without being constrained to frames. In this paper we present an Event-Driven Convolution Module for computing 2D convolutions on such event streams. The Convolution Module has been designed so that many of them can be assembled to build modular and hierarchical Convolutional Neural Networks for robust, shape- and pose-invariant object recognition. The Convolution Module has multi-kernel capability: it selects the convolution kernel depending on the origin of the event. A proof-of-concept test prototype has been fabricated in a 0.35 μm CMOS process, and extensive experimental results are provided. The Convolution Processor has also been combined with an Event-Driven Dynamic Vision Sensor (DVS) for high-speed recognition examples. The chip can discriminate propellers rotating at 2,000 revolutions per second, detect symbols on a 52-card deck while browsing all cards in 410 ms, or detect and follow the center of a phosphor oscilloscope trace rotating at 5 kHz.
European Union 216777 (NABAB); Ministerio de Ciencia e Innovación TEC2009-10639-C04-0
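To make the event-driven operation concrete, here is a minimal behavioral sketch in Python of how a multi-kernel convolution module can process an address-event stream: each incoming event projects a kernel, selected by the event's origin, onto an array of integrate-and-fire accumulators, which emit output address events when they cross a threshold. The event format, kernel-selection rule, and threshold value are illustrative assumptions, not the chip's actual implementation.

```python
import numpy as np

# Behavioral sketch of an event-driven convolution module (illustrative only).
# Each input event is (x, y, source_id): the pixel address plus an identifier
# of the originating module, which selects the convolution kernel to apply.

class EventDrivenConvModule:
    def __init__(self, width, height, kernels, threshold=0.2):
        self.state = np.zeros((height, width))   # integrate-and-fire accumulators
        self.kernels = kernels                    # dict: source_id -> 2D kernel
        self.threshold = threshold

    def process_event(self, x, y, source_id):
        """Project the selected kernel around (x, y) and emit output events
        wherever an accumulator crosses the firing threshold."""
        k = self.kernels[source_id]
        kh, kw = k.shape
        h, w = self.state.shape
        out_events = []
        for dy in range(kh):
            for dx in range(kw):
                yy = y + dy - kh // 2
                xx = x + dx - kw // 2
                if 0 <= yy < h and 0 <= xx < w:
                    self.state[yy, xx] += k[dy, dx]
                    if self.state[yy, xx] >= self.threshold:
                        self.state[yy, xx] = 0.0       # reset after firing
                        out_events.append((xx, yy))    # output address event
        return out_events

# Example: two kernels, selected by the origin of each incoming event.
kernels = {0: np.ones((3, 3)) / 9.0, 1: np.eye(5) / 5.0}
module = EventDrivenConvModule(32, 32, kernels, threshold=0.2)
for ev in [(10, 10, 0), (10, 11, 0), (20, 20, 1)]:
    print(module.process_event(*ev))
```

In this toy run the first event only charges accumulators, while the second and third push some of them past threshold and produce output address events, mirroring how processing proceeds event by event rather than frame by frame.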
Fully Digital AER Convolution Chip for Vision Processing
We present a neuromorphic, fully digital convolution microchip for Address Event Representation (AER) spike-based processing systems. This microchip computes 2-D convolutions with a programmable kernel in real time. It operates on a pixel array of size 32 x 32, and the kernel is programmable and can be of arbitrary shape and size up to 32 x 32 pixels. The chip receives and generates data in AER format, which is asynchronous and digital. The paper describes the architecture of the chip, the test setup, and experimental results obtained from a fabricated prototype.
European Union IST-2001-34124 (CAVIAR); Comisión Interministerial de Ciencia y Tecnología TIC-2003-08164-C03-01; Ministerio de Educación y Ciencia TEC2006-11730-C03-01; Junta de Andalucía P06-TIC-0141
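As a small illustration of the AER format mentioned above, the sketch below packs a pixel address into a single event word and unpacks it again. The field widths and layout are assumptions chosen to address a 32 x 32 array, not the chip's actual protocol.

```python
# Illustrative packing/unpacking of an address event into one integer word.
# Field layout (assumed, not the chip's actual format): 5 bits x, 5 bits y,
# 1 sign bit, which is enough to address a 32 x 32 pixel array.

X_BITS, Y_BITS = 5, 5

def pack_event(x: int, y: int, sign: int) -> int:
    assert 0 <= x < (1 << X_BITS) and 0 <= y < (1 << Y_BITS) and sign in (0, 1)
    return (sign << (X_BITS + Y_BITS)) | (y << X_BITS) | x

def unpack_event(word: int):
    x = word & ((1 << X_BITS) - 1)
    y = (word >> X_BITS) & ((1 << Y_BITS) - 1)
    sign = (word >> (X_BITS + Y_BITS)) & 1
    return x, y, sign

word = pack_event(17, 5, 1)
print(word, unpack_event(word))   # -> packed word, (17, 5, 1)
```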
Fast vision through frameless event-based sensing and convolutional processing: Application to texture recognition
Address-event representation (AER) is an emerging hardware technology that shows high potential for providing, in the near future, a solid technological substrate for emulating brain-like processing structures. When used for vision, AER sensors and processors are not restricted to capturing and processing still image frames, as in commercial frame-based video technology, but sense and process visual information in a pixel-level, event-based, frameless manner. As a result, vision processing is practically simultaneous to vision sensing, since there is no need to wait for full frames to be sensed. Also, only meaningful information is sensed, communicated, and processed. Of special interest for brain-like vision processing are some already reported AER convolutional chips, which have revealed a very high computational throughput as well as the possibility of assembling large convolutional neural networks in a modular fashion. It is expected that in the near future we may witness the appearance of large-scale convolutional neural networks with hundreds or thousands of individual modules. In the meantime, some research is needed to investigate how to assemble and configure such large-scale convolutional networks for specific applications. In this paper, we analyze AER spiking convolutional neural networks for texture recognition hardware applications. Based on the performance figures of already available individual AER convolution chips, we emulate large-scale networks using a custom-made event-based behavioral simulator. We have developed a new event-based processing architecture that emulates, with AER hardware, Manjunath's frame-based feature recognition software algorithm, and we have analyzed its performance using our behavioral simulator. Recognition-rate performance is not degraded. However, regarding speed, we show that recognition can be achieved before an equivalent frame is fully sensed and transmitted.
Ministerio de Educación y Ciencia TEC-2006-11730-C03-01; Junta de Andalucía P06-TIC-01417; European Union IST-2001-34124, 216777
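The abstract mentions a custom-made event-based behavioral simulator used to emulate large networks of convolution modules. The following is a minimal sketch of how such an event-driven simulation loop could be organized: timestamped address events are kept in a priority queue, each module consumes an event and may emit new ones, and a routing table decides where emitted events go. The module interface, delay value, and routing scheme are illustrative assumptions, not the authors' simulator.

```python
import heapq

# Minimal event-driven behavioral simulation loop (illustrative sketch).
# Events are (timestamp, module_id, x, y); each module consumes an event,
# may emit new events after a processing delay, and a routing table decides
# which module receives each emitted event.

def simulate(initial_events, modules, routing, t_end):
    """modules: dict id -> callable(x, y) returning a list of (x, y) outputs.
    routing: dict id -> destination module id (None means network output)."""
    queue = list(initial_events)
    heapq.heapify(queue)
    outputs = []
    while queue:
        t, mod_id, x, y = heapq.heappop(queue)
        if t > t_end:
            break
        for ox, oy in modules[mod_id](x, y):
            dest = routing[mod_id]
            if dest is None:
                outputs.append((t, ox, oy))            # network-level output event
            else:
                heapq.heappush(queue, (t + 1e-6, dest, ox, oy))  # assumed 1 us delay
    return outputs

# Toy network: module 0 shifts events right and feeds module 1, the output stage.
modules = {0: lambda x, y: [(x + 1, y)], 1: lambda x, y: [(x, y)]}
routing = {0: 1, 1: None}
print(simulate([(0.0, 0, 5, 5)], modules, routing, t_end=1.0))
```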
Combining Software-Defined Radio Learning Modules and Neural Networks for Teaching Communication Systems Courses
The paradigm known as Cognitive Radio (CR) proposes continuous sensing of the electromagnetic spectrum in order to dynamically modify transmission parameters, making intelligent use of the environment by taking advantage of techniques such as Neural Networks. This paradigm is becoming especially relevant due to the spectrum congestion produced by the increasing number of IoT (Internet of Things) devices. Nowadays, many different Software-Defined Radio (SDR) platforms provide tools to implement CR systems in a teaching laboratory environment. Within the framework of a 'Communication Systems' course, this paper presents a methodology for learning the fundamentals of radio transmitters and receivers in combination with Convolutional Neural Networks (CNNs).
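As a hedged illustration of the kind of network such a laboratory exercise might use (the paper's actual architecture is not described in the abstract), the sketch below defines a small 1-D CNN in PyTorch that classifies short windows of raw I/Q samples into modulation classes; the layer sizes, window length, and number of classes are assumptions.

```python
import torch
import torch.nn as nn

# Illustrative sketch (not the course's actual network): a small 1-D CNN that
# classifies short windows of raw I/Q samples into modulation classes, the kind
# of task a cognitive-radio exercise might use for spectrum awareness.

class ModulationCNN(nn.Module):
    def __init__(self, n_classes=4, window=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=7, padding=3),  # 2 input channels: I and Q
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(32 * (window // 4), n_classes)

    def forward(self, x):                 # x: (batch, 2, window)
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Example forward pass on a random batch of I/Q windows.
model = ModulationCNN(n_classes=4, window=128)
iq = torch.randn(8, 2, 128)
print(model(iq).shape)                    # -> torch.Size([8, 4])
```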
Two new rupicolous syntaxa from the Calatayud area, a refuge for endemics with restricted distribution
Two new syntaxa are described to include plant communities growing on schistose rocks near Calatayud (Zaragoza province, NE Spain): Centaureo pinnatae-Dianthetum lusitani ass. nov. (optimum in the Mesomediterranean Dry-Subhumid bioclimate) and Hieracio schmidtii-Dianthetum lusitani biscutelletosum bilbilitanae subass. nov. (optimum in the Supra-Oromediterranean Dry-Subhumid bioclimate). Affinities to other syntaxa are discussed, and data concerning the interest of these plant communities as shelters for endemics with narrow distribution areas are also reported. Finally, the conservation status of Centaurea pinnata is commented on.