The Space Object Ontology
Achieving space domain awareness requires the
identification, characterization, and tracking of space objects.
Storing and leveraging associated space object data for purposes
such as hostile threat assessment, object identification, and
collision prediction and avoidance present further challenges.
Space objects are characterized according to a variety of
parameters including their identifiers, design specifications,
components, subsystems, capabilities, vulnerabilities, origins,
missions, orbital elements, patterns of life, processes, operational
statuses, and associated persons, organizations, or nations. The
Space Object Ontology provides a consensus-based realist
framework for formulating such characterizations in a
computable fashion. Space object data are aligned with classes
and relations in the Space Object Ontology and stored in a
dynamically updated Resource Description Framework triple
store, which can be queried to support space domain awareness
and the needs of spacecraft operators. This paper presents the
core of the Space Object Ontology, discusses its advantages over
other approaches to space object classification, and demonstrates
its ability to combine diverse sets of data from multiple sources
within an expandable framework. Finally, we show how the
ontology provides benefits for enhancing and maintaining long-term
space domain awareness.
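The abstract describes aligning space object data with ontology classes and relations and storing them as RDF triples in a queryable triple store. The following is a minimal, self-contained sketch of that storage-and-query pattern, using an in-memory store and wildcard pattern matching in place of a real RDF engine; the identifiers (`sat:ISS`, `soo:SpaceStation`, etc.) are illustrative placeholders, not the actual Space Object Ontology vocabulary.

```python
# Minimal in-memory triple store sketch. A real deployment would use an
# RDF store queried via SPARQL; here, None acts as a wildcard in a basic
# (subject, predicate, object) pattern match.

class TripleStore:
    def __init__(self):
        self.triples = set()

    def add(self, subject, predicate, obj):
        self.triples.add((subject, predicate, obj))

    def query(self, subject=None, predicate=None, obj=None):
        return [
            t for t in self.triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)
        ]

store = TripleStore()
# Hypothetical facts aligned with ontology-style classes and relations:
store.add("sat:ISS", "rdf:type", "soo:SpaceStation")
store.add("sat:ISS", "soo:operatedBy", "org:NASA")
store.add("sat:Hubble", "rdf:type", "soo:Telescope")

# Everything recorded about one space object:
iss_facts = store.query(subject="sat:ISS")
```

The wildcard query is the essential operation: characterizations accumulated from multiple sources about the same object can be retrieved together, which is what makes the dynamically updated store useful for threat assessment or collision-avoidance queries.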
Detecting and tracking multiple interacting objects without class-specific models
We propose a framework for detecting and tracking multiple interacting objects from a single, static, uncalibrated camera. The number of objects is variable and unknown, and object-class-specific models are not available. We use background subtraction results as measurements for object detection and tracking. Given these constraints, the main challenge is to associate pixel measurements with (possibly interacting) object targets. We first track clusters of pixels, and note when they merge or split. We then build an inference graph, representing relations between the tracked clusters. Using this graph and a generic object model based on spatial connectedness and coherent motion, we label the tracked clusters as whole objects, fragments of objects or groups of interacting objects. The outputs of our algorithm are entire tracks of objects, which may include corresponding tracks from groups of objects during interactions. Experimental results on multiple video sequences are shown.
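The merge/split bookkeeping above can be sketched as a small graph computation: clusters are nodes, an edge links a cluster in frame t to every overlapping cluster in frame t+1, and a node with two or more predecessors marks a merge (a candidate group of interacting objects). This is a hedged illustration of the idea, not the paper's algorithm; the bounding-box overlap test and the toy frames are assumptions made for the example.

```python
# Clusters as axis-aligned boxes (x0, y0, x1, y1); edges of the inference
# graph connect temporally overlapping clusters.

def overlaps(a, b):
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def build_inference_graph(frames):
    """frames: list of {cluster_id: bbox} dicts, one per frame."""
    edges = []
    for prev, curr in zip(frames, frames[1:]):
        for pid, pbox in prev.items():
            for cid, cbox in curr.items():
                if overlaps(pbox, cbox):
                    edges.append((pid, cid))
    return edges

def label_clusters(edges):
    preds = {}
    for src, dst in edges:
        preds.setdefault(dst, []).append(src)
    # Two or more predecessors -> the cluster arose from a merge.
    return {c: ("group" if len(p) > 1 else "object") for c, p in preds.items()}

frames = [
    {"A": (0, 0, 10, 10), "B": (20, 0, 30, 10)},  # two separate clusters
    {"C": (5, 0, 25, 10)},                         # A and B merge into C
]
labels = label_clusters(build_inference_graph(frames))
```

Splits are the symmetric case (a node with multiple successors); the generic object model then decides whether such fragments belong to one coherent object.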
TrajectoryFormer: 3D Object Tracking Transformer with Predictive Trajectory Hypotheses
3D multi-object tracking (MOT) is vital for many applications including
autonomous driving vehicles and service robots. With the commonly used
tracking-by-detection paradigm, 3D MOT has made important progress in recent
years. However, these methods only use the detection boxes of the current frame
to obtain trajectory-box association results, which makes it impossible for the
tracker to recover objects missed by the detector. In this paper, we present
TrajectoryFormer, a novel point-cloud-based 3D MOT framework. To recover
objects missed by the detector, we generate multiple trajectory hypotheses with
hybrid candidate boxes, including temporally predicted boxes and current-frame
detection boxes, for trajectory-box association. The predicted boxes can
propagate an object's historical trajectory information to the current frame, so
the network can tolerate short-term missed detections of tracked objects. We
combine long-term object motion features and short-term object appearance
features to create a per-hypothesis feature embedding, which reduces the
computational overhead for spatial-temporal encoding. Additionally, we
introduce a Global-Local Interaction Module to conduct information interaction
among all hypotheses and model their spatial relations, leading to accurate
estimation of hypotheses. Our TrajectoryFormer achieves state-of-the-art
performance on the Waymo 3D MOT benchmarks.
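The hybrid-candidate idea above can be illustrated in a few lines: a track's temporally predicted box is pooled with the current-frame detections, so that when the detector misses the object, association can still fall back on the prediction. This is an illustrative sketch under strong simplifications (2-D centers, constant-velocity prediction, nearest-neighbour association), not TrajectoryFormer's actual network.

```python
# Boxes reduced to (x, y) centers for clarity.

def predict_box(track):
    # Constant-velocity extrapolation from the last two track positions.
    (x0, y0), (x1, y1) = track[-2], track[-1]
    return (2 * x1 - x0, 2 * y1 - y0)

def associate(track, detections, max_dist=2.0):
    # Hybrid candidate set: current detections + the predicted box.
    candidates = detections + [predict_box(track)]
    last = track[-1]
    dist = lambda p: ((p[0] - last[0]) ** 2 + (p[1] - last[1]) ** 2) ** 0.5
    best = min(candidates, key=dist)
    return best if dist(best) <= max_dist else None

track = [(0.0, 0.0), (1.0, 0.0)]      # object moving +1 in x per frame
detections = []                        # detector missed the object this frame
best = associate(track, detections)    # falls back to the predicted box
```

In the paper, the choice among hypotheses is made by learned motion and appearance embeddings rather than a distance threshold; the sketch only shows why the predicted box keeps the track alive through a short-term miss.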
Meta-data alignment in open Tracking & Tracing systems
In Tracking and Tracing systems, attributes of objects (such as location, time, status and temperature) are recorded as these objects move through a supply chain. In closed, dedicated systems, the attributes to record and store are determined at design time. However, in open Tracking and Tracing systems, the attributes are not known beforehand, as the type of objects and the set of stakeholders may evolve over time. Many supply chains require open Tracking and Tracing systems. The participants in the supply chain are individual companies, spread over many countries. Their trading relations change constantly. Usually they participate in multiple supply chains. E.g., a company producing chemicals may serve the chemical industry, the food industry and the textile industry at the same time. Transport companies carry goods for multiple industry sectors. Yet, they play a role in the traceability of all goods they produce or carry. Open Tracking and Tracing systems are not dedicated to a certain type of product or object, nor to a specific industry sector. They simply record the location, time and other attributes of the identified objects, and store that information in the data store of the object owner, based on the identification (e.g. RFID) tag. What attributes are to be stored is determined by stakeholders, such as (end) users of the object. In some cases (e.g. food) legislation prescribes what to record. An open Tracking and Tracing system therefore needs to be able to dynamically handle the set of attributes to be recorded and stored. In this chapter, a method is presented that enables components of Tracking and Tracing systems to negotiate at run time what attributes may be stored for a particular object type. Components may include scanning equipment, data stores and query clients. Attributes may be of any data type, including time, location, status, temperature and ownership.
Apart from simple attributes, associations between objects may be recorded and stored, e.g. when an object is packed in another object, loaded in a truck or container, or assembled to be a new object. The method makes use of findings in ontology engineering and of type theory. New types are based on existing types, with some restrictions. Both the range of values of a type and its meta-attributes (such as cardinality) may be restricted to define a new type. Programmatically, concepts of co- and contravariance are used to make the method implementable. The method was developed in two European funded research projects: TraSer and ADVANCE. In TraSer, a truly open and extensible Tracking and Tracing system was developed (TraSer project consortium, 2006; Monostori et al., 2009). In ADVANCE, a distributed management information system for logistics operations was designed and implemented, that makes use of Tracking and Tracing information (ADVANCE project consortium, 2010; Kemény et al., 2011a).
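The run-time negotiation described above can be sketched as follows, under the deliberately crude assumption that each component simply advertises the attribute types it supports and the negotiated schema is their typed intersection. The component names and attribute sets are invented for illustration; the chapter's actual method additionally restricts value ranges and meta-attributes using co-/contravariance rules.

```python
# Each component advertises {attribute_name: type}. Two components agree
# on an attribute only when both support it with the same type.

scanner = {"id": str, "time": float, "location": str, "temperature": float}
data_store = {"id": str, "time": float, "location": str, "ownership": str}

def negotiate(a, b):
    # Intersection of attribute names, kept only where the types agree --
    # a stand-in for the chapter's type-restriction rules.
    return {k: a[k] for k in a.keys() & b.keys() if a[k] is b[k]}

schema = negotiate(scanner, data_store)
```

A full implementation would treat each attribute as a restrictable type (narrowing value ranges covariantly for produced data and widening them contravariantly for accepted data), which is what makes the negotiation sound rather than a plain set intersection.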
Generic colour image segmentation via multi-stage region merging
We present a non-parametric unsupervised colour image segmentation system that is fast and retains significant perceptual correspondence with the input data. The method uses a region merging approach based on statistics of growing local structures. A two-stage algorithm is employed during which neighbouring regions of homogeneity are traced using feature gradients between groups of pixels, thus giving priority to topological relations. The system finds spatially cohesive and globally salient image regions usually without losing smaller localised areas of high saliency. Unoptimised implementations of the method work nearly in real-time, handling multiple frames a second. The system is successfully applied to problems such as object detection and tracking.
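The core merge test in such region-merging schemes can be illustrated on a 1-D "image": neighbouring regions are merged whenever the gap between their statistics (here just the mean intensity) falls below a threshold. This is a toy sketch of the general technique; the actual system works on 2-D colour images with multi-stage merging and feature gradients, and the threshold here is an arbitrary illustrative choice.

```python
# Start with one region per pixel and repeatedly merge adjacent regions
# whose mean intensities are close, until no merge applies.

def merge_regions(values, threshold=10.0):
    regions = [[v] for v in values]
    merged = True
    while merged:
        merged = False
        for i in range(len(regions) - 1):
            a, b = regions[i], regions[i + 1]
            if abs(sum(a) / len(a) - sum(b) / len(b)) < threshold:
                regions[i:i + 2] = [a + b]   # merge homogeneous neighbours
                merged = True
                break
    return regions

regions = merge_regions([10, 12, 11, 90, 95])
# Two salient regions survive: the dark run and the bright run.
```

Merging on region statistics rather than raw pixel differences is what lets such methods keep small high-saliency areas while still producing spatially cohesive segments.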
BEVTrack: A Simple and Strong Baseline for 3D Single Object Tracking in Bird's-Eye View
3D Single Object Tracking (SOT) is a fundamental task of computer vision,
proving essential for applications like autonomous driving. It remains
challenging to localize the target from surroundings due to appearance
variations, distractors, and the high sparsity of point clouds. The spatial
information indicating objects' spatial adjacency across consecutive frames is
crucial for effective object tracking. However, existing trackers typically
employ point-wise representation with irregular formats, leading to
insufficient use of this important spatial knowledge. As a result, these
trackers usually require elaborate designs and solving multiple subtasks. In
this paper, we propose BEVTrack, a simple yet effective baseline that performs
tracking in Bird's-Eye View (BEV). This representation greatly retains spatial
information owing to its ordered structure and inherently encodes the implicit
motion relations of the target as well as distractors. To achieve accurate
regression for targets with diverse attributes (e.g., sizes and motion
patterns), BEVTrack constructs the likelihood function with the learned
underlying distributions adapted to different targets, rather than making a
fixed Laplace or Gaussian assumption as in previous works. This provides
valuable priors for tracking and thus further boosts performance. While only
using a single regression loss with a plain convolutional architecture,
BEVTrack achieves state-of-the-art performance on three large-scale datasets
(KITTI, NuScenes, and the Waymo Open Dataset), while maintaining a high inference
speed of about 200 FPS. The code will be released at
https://github.com/xmm-prio/BEVTrack.
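The BEV representation the abstract credits for preserving spatial adjacency can be illustrated by rasterizing 3-D points onto a regular ground-plane grid: nearby points land in nearby cells, giving the ordered structure that plain point-wise representations lack. The grid size and cell resolution below are illustrative assumptions, not BEVTrack's configuration.

```python
# Drop (x, y, z) points onto a ground-plane occupancy grid; height (z) is
# discarded in this minimal sketch, and each cell counts its points.

def to_bev(points, grid=8, cell=1.0):
    bev = [[0] * grid for _ in range(grid)]
    for x, y, _z in points:
        col, row = int(x / cell), int(y / cell)
        if 0 <= row < grid and 0 <= col < grid:
            bev[row][col] += 1
    return bev

bev = to_bev([(0.5, 0.5, 1.2), (0.7, 0.4, -0.3), (3.2, 2.1, 0.0)])
```

Because the grid is an ordered 2-D array, plain convolutions can then read off spatial adjacency between the target and distractors across frames, which is what lets BEVTrack avoid the elaborate point-wise designs the abstract mentions.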