Field Programmable Gate Array (FPGA) Based Fish Detection Using Haar Classifiers
The quantification of abundance, size, and distribution of fish is critical to properly manage and protect marine ecosystems and regulate marine fisheries. Currently, fish surveys are conducted using fish tagging, scientific diving, and/or capture-and-release methods (i.e., net trawls), all of which are costly and time consuming. An automated way to conduct fish surveys would therefore be a real benefit to marine managers. To provide automated fish counts and classification, we propose an automated fish species classification system using computer vision. This system can count and classify fish found in underwater video images using a classification method known as Haar classification. We have partnered with the Birch Aquarium to obtain underwater images of a variety of fish species, and present in this paper the implementation of our vision system and its detection results for our first test species, the Scythe Butterfly fish, subject of the Birch Aquarium logo.
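The core of Haar classification is evaluating rectangular "Haar-like" features cheaply via an integral image. The sketch below is illustrative only, assuming a grayscale image given as a list of pixel rows; the function names and the toy two-rectangle edge feature are hypothetical, not taken from the paper.

```python
# Minimal sketch of Haar-like feature evaluation with an integral image,
# the building block of Haar cascade detectors. Hypothetical names; not
# the paper's actual implementation.

def integral_image(img):
    """ii[y][x] holds the sum of all pixels above and left of (x, y)."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of the w-by-h rectangle at top-left (x, y): four lookups."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def haar_two_rect_vertical(ii, x, y, w, h):
    """Left-half sum minus right-half sum: a simple vertical-edge feature."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)
```

A cascade classifier thresholds many such features at successive stages, rejecting non-fish windows early; the integral image makes each feature constant-time regardless of rectangle size.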
The systems engineering of a network-centric distributed intelligent system of systems for robust human behavior classifications
Automating intelligence within sensor networks for situational awareness and responses is the overall motivating application for this dissertation. Traditionally, intelligence is manually gathered and extracted by intelligence analysts. However, there will never be enough intelligence analysts, intelligence centers, or even bandwidth (for mobile sensors) to manually extract intelligence from raw sensor data. Fusing a large number of sensor types and inputs is also required. All of this can be implemented and automated in the artificially intelligent (AI) hierarchy described herein, and therefore does not require human labor to observe, fuse, and interpret. This objective is fulfilled in this systems dissertation with several independent systems combined to form an intelligent system of systems (SoS). In order to design and implement an intelligent SoS, there are a number of unique contributions from this author in this dissertation. The first six listed contributions are systems developments as Chief Engineer on the intelligent SoS, and the last six are novel technological developments. The following are the SoS systems developments: (1) a Fixed Camera System containing a multi-camera network (thirty-six PoE cameras) and six processing units; (2) a Kiosk System containing dual Pan/Tilt/Zoom cameras, a microphone network, and two processing units; and (3) a Command and Control System containing a database on a server with dual monitors displaying (4) an interactive executive graphical user interface displaying (5) mustered personnel and (6) abnormal behavior alarms. This SoS was designed and built with novel technologies that the author developed for it: (7) high-level syntactical classifiers for classifying human/object behaviors that are predefined based on sequences of (8) identified combinations of fused (9) object recognitions (e.g., body postures and face recognitions) by low-level classifiers on video data, including a (10) generalized parts-based object recognition low-level classifier. The system uses a (11) high-level syntactical classifier to recover from low-level classification errors. This intelligent SoS was built and implemented as a prototype. Additionally, preliminary transitions are underway from the prototype to a product system, such as (12) providing a Field Programmable Gate Array (FPGA) architecture for the generalized object recognition low-level classifier.
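The idea of a high-level syntactical classifier over low-level outputs can be sketched as pattern matching on per-frame symbols: low-level classifiers emit one symbol per frame (e.g. a posture code), and each high-level behavior is a pattern over symbol sequences. The behavior names, posture codes, and patterns below are invented for illustration; they are not the dissertation's actual grammar.

```python
# Hypothetical sketch: behaviors defined as regular patterns over
# per-frame posture symbols emitted by low-level classifiers.
import re

# Single-character posture codes: S = standing, W = walking, C = crouching.
BEHAVIORS = {
    "loitering": re.compile(r"S{5,}"),      # standing still for many frames
    "sneaking":  re.compile(r"(CW?){3,}"),  # repeated crouch/walk cycles
}

def classify_sequence(frame_labels):
    """Return the set of behavior names whose pattern matches the sequence."""
    seq = "".join(frame_labels)
    return {name for name, pat in BEHAVIORS.items() if pat.search(seq)}
```

Because the patterns tolerate surrounding symbols, a single misclassified frame inside a long run need not suppress a detection, which hints at how a syntactic layer can absorb low-level errors.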
Human posture recognition for intelligent vehicles
The article of record as published may be found at http://dx.doi.org/10.1007/s11554-010-0150-0

Pedestrian detection systems are finding their way into many modern "intelligent" vehicles. The body posture could reveal further insight about the pedestrian's intent and her awareness of the oncoming car. This article details the algorithms and implementation of a library for real-time body posture recognition. It requires prior person detection and then calculates overall stance, torso orientation in four increments, and head location and orientation, all based on individual frames. A syntactic post-processing module takes temporal information into account and smoothes the results over time while correcting improbable configurations. We show accuracy and timing measurements for the library and its utilization in a training application.

Office of Naval Research
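One simple way to smooth per-frame results over time, in the spirit of the post-processing module described above, is a sliding-window majority vote that suppresses one-frame glitches. This is a minimal sketch under assumed parameters (window size, label names); it is not the article's actual algorithm.

```python
# Hypothetical temporal-smoothing sketch: replace each frame's posture
# label with the majority label in a small centered window, so an
# improbable single-frame flip is overruled by its neighbors.
from collections import Counter

def smooth_labels(labels, window=3):
    """Majority-vote each frame's label over a centered window."""
    half = window // 2
    smoothed = []
    for i in range(len(labels)):
        lo, hi = max(0, i - half), min(len(labels), i + half + 1)
        votes = Counter(labels[lo:hi])
        smoothed.append(votes.most_common(1)[0][0])
    return smoothed
```

A real module would also encode which transitions are physically plausible (e.g. standing cannot become prone for exactly one frame), but the windowed vote already removes most isolated errors.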
Your Town Radio & Television Program: Guests Larry Reeves, Rachel Goshorn and Deborah Goshorn [video]
Host: John Sanders. Guests: Larry Reeves, President, Monterey Bay Chapter AFCEA; Dr. Rachel Goshorn, C4I Chair, Naval Postgraduate School; Deborah Goshorn, MOVES Institute, Naval Postgraduate School.
The Multi-level Learning and Classification of Multi-class Parts-Based Representations of U.S. Marine Postures
The article of record as published may be found at http://dx.doi.org/10.1007/978-3-642-10268-4_59

This paper primarily investigates the possibility of using multi-level learning of sparse parts-based representations of US Marine postures in an outside and often crowded environment for training exercises. To do so, the paper discusses two approaches to learning parts-based representations for each posture needed. The first approach uses a two-level learning method which consists of simple clustering of interest patches extracted from a set of training images for each posture, in addition to learning the nonparametric spatial frequency distribution of the clusters that represents one posture type. The second approach uses a two-level learning method which involves convolving interest patches with filters and in addition performing joint boosting on the spatial locations of the first level of learned parts in order to create a global set of parts that the various postures share in representation. Experimental results on video from actual US Marine training exercises are included.
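The first learning level above groups interest patches into clusters of visually similar "parts." As an illustration only, the sketch below uses greedy leader clustering on flattened patch vectors; the actual clustering method, distance metric, and threshold in the paper may differ.

```python
# Hypothetical sketch of first-level parts learning: cluster interest
# patches (flattened pixel vectors) with greedy leader clustering. The
# distance threshold is an invented parameter, not from the paper.

def l2_dist(a, b):
    """Euclidean distance between two equal-length patch vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def leader_cluster(patches, threshold):
    """Assign each patch to the first cluster center within `threshold`,
    otherwise start a new cluster; return the list of cluster centers."""
    centers = []
    for p in patches:
        if not any(l2_dist(p, c) <= threshold for c in centers):
            centers.append(p)
    return centers
```

Each resulting center stands for one learned "part"; the second level would then model where those parts tend to occur spatially within each posture.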