19 research outputs found

    Autonomous Multicamera Tracking on Embedded Smart Cameras

    There is currently a strong trend towards the deployment of advanced computer vision methods on embedded systems. This deployment is very challenging since embedded platforms often provide limited resources such as computing performance, memory, and power. In this paper we present a multicamera tracking method on distributed, embedded smart cameras. Smart cameras combine video sensing, processing, and communication on a single embedded device that is equipped with a multiprocessor computation and communication infrastructure. Our multicamera tracking approach focuses on a fully decentralized handover procedure between adjacent cameras. The basic idea is to initiate a single tracking instance in the multicamera system for each object of interest. The tracker follows the supervised object over the camera network, migrating to the camera which currently observes the object. Thus, no central coordination is required, resulting in an autonomous and scalable tracking approach. We have fully implemented this novel multicamera tracking approach on our embedded smart cameras. Tracking is achieved by the well-known CamShift algorithm; the handover procedure is realized using a mobile agent system available on the smart camera network. Our approach has been successfully evaluated on tracking persons at our campus.
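    The decentralized handover idea described above can be sketched in a few lines: one tracker "agent" exists per object and migrates to whichever adjacent camera observes the object, so no central coordinator is involved. The camera names, the simple 1-D fields of view, and the `TrackerAgent` class below are illustrative assumptions, not the paper's actual implementation.

```python
class Camera:
    """A smart camera with a 1-D field of view and a list of adjacent cameras."""
    def __init__(self, name, fov):
        self.name = name
        self.fov = fov            # (start, end) of the field of view
        self.neighbors = []

    def observes(self, position):
        return self.fov[0] <= position < self.fov[1]


class TrackerAgent:
    """One tracking instance per object; migrates with the object (mobile-agent style)."""
    def __init__(self, camera):
        self.camera = camera

    def update(self, position):
        if self.camera.observes(position):
            return self.camera    # object still in view: keep tracking locally
        # Handover: migrate to the adjacent camera that now sees the object.
        for neighbor in self.camera.neighbors:
            if neighbor.observes(position):
                self.camera = neighbor
                break
        return self.camera


# Three cameras with adjacent, non-overlapping fields of view.
c1, c2, c3 = Camera("cam1", (0, 10)), Camera("cam2", (10, 20)), Camera("cam3", (20, 30))
c1.neighbors, c2.neighbors, c3.neighbors = [c2], [c1, c3], [c2]

agent = TrackerAgent(c1)
trace = [agent.update(p).name for p in (2, 8, 12, 19, 25)]
print(trace)  # ['cam1', 'cam1', 'cam2', 'cam2', 'cam3']
```

    The per-camera tracking step (CamShift in the paper) would run inside `update`; the sketch only shows the migration logic that makes the approach scale without central coordination.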

    The robotic multiobject focal plane system of the Dark Energy Spectroscopic Instrument (DESI)

    [Article written by a large number of authors; only the first-listed author, the authors affiliated with UAM, and the name of the collaboration group, if any, are referenced.] A system of 5020 robotic fiber positioners was installed in 2019 on the Mayall Telescope, at Kitt Peak National Observatory. The robots automatically retarget their optical fibers every 10-20 minutes, each to a precision of several microns, with a reconfiguration time of less than 2 minutes. Over the next 5 yr, they will enable the newly constructed Dark Energy Spectroscopic Instrument (DESI) to measure the spectra of 35 million galaxies and quasars. DESI will produce the largest 3D map of the universe to date and measure the expansion history of the cosmos. In addition to the 5020 robotic positioners and optical fibers, DESI's Focal Plane System includes six guide cameras, four wavefront cameras, 123 fiducial point sources, and a metrology camera mounted at the primary mirror. The system also includes associated structural, thermal, and electrical systems. In all, it contains over 675,000 individual parts. We discuss the design, construction, quality control, and integration of all these components. We include a summary of the key requirements, the review and acceptance process, on-sky validations of requirements, and lessons learned for future multiobject, fiber-fed spectrographs.

    Domain-Specific Computing Architectures and Paradigms

    We live in an exciting era where artificial intelligence (AI) is fundamentally shifting the dynamics of industries and businesses around the world. AI algorithms such as deep learning (DL) have drastically advanced state-of-the-art cognition and learning capabilities. However, the power of modern AI algorithms can only be enabled if the underlying domain-specific computing hardware can deliver orders of magnitude more performance and energy efficiency. This work focuses on this goal and explores three parts of the domain-specific computing acceleration problem, encapsulating specialized hardware and software architectures and paradigms that support the ever-growing processing demand of modern AI applications from the edge to the cloud. The first part of this work investigates the optimizations of a sparse spatio-temporal (ST) cognitive system-on-a-chip (SoC). This design extracts ST features from videos and leverages sparse inference and kernel compression to efficiently perform action classification and motion tracking. The second part of this work explores the significance of dataflows and reduction mechanisms for sparse deep neural network (DNN) acceleration. This design features a dynamic, look-ahead index matching unit in hardware to efficiently discover fine-grained parallelism, achieving high energy efficiency and low control complexity for a wide variety of DNN layers. Lastly, this work expands the scope to real-time machine learning (RTML) acceleration. A new high-level architecture modeling framework is proposed. Specifically, this framework consists of a set of high-performance RTML-specific architecture design templates, and a Python-based high-level modeling and compiler tool chain for efficient cross-stack architecture design and exploration. PhD dissertation, Electrical and Computer Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/162870/1/lchingen_1.pd
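    The index-matching idea behind sparse DNN acceleration can be illustrated in software: multiplications are performed only for those weight/activation pairs whose nonzero indices coincide, which a two-pointer scan over sorted compressed (index, value) lists captures. This is a minimal sketch of the general technique, assuming a compressed sparse-vector encoding; it is not the dissertation's hardware design, and the `sparse_dot` helper is hypothetical.

```python
def sparse_dot(weights, activations):
    """Dot product of two sparse vectors stored as sorted (index, value) pairs."""
    acc = 0.0
    i = j = 0
    while i < len(weights) and j < len(activations):
        wi, wv = weights[i]
        aj, av = activations[j]
        if wi == aj:          # index match: this multiply contributes to the output
            acc += wv * av
            i += 1
            j += 1
        elif wi < aj:         # no match: skip the zero operand by advancing the pointer
            i += 1
        else:
            j += 1
    return acc


# Nonzeros only: indices {0, 3, 7} vs {3, 5, 7} -> matches at 3 and 7.
w = [(0, 2.0), (3, 1.5), (7, -1.0)]
a = [(3, 4.0), (5, 9.0), (7, 2.0)]
print(sparse_dot(w, a))  # 1.5*4.0 + (-1.0)*2.0 = 4.0
```

    A hardware index-matching unit parallelizes this scan (e.g. by comparing several index pairs per cycle), but the payoff is the same: work scales with the number of matched nonzeros rather than the dense vector length.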

    A survey on wireless indoor localization from the device perspective
