Robust Mobile Object Tracking Based on Multiple Feature Similarity and Trajectory Filtering
This paper presents a new algorithm for tracking mobile objects under different scene conditions. The main idea of the proposed tracker combines state estimation, multi-feature similarity measures, and trajectory filtering. A feature set (distance, area, shape ratio, color histogram) is defined for each tracked object to search for its best matching object. This best match and the object state estimated by a Kalman filter are combined to update the position and size of the tracked object. However, mobile object trajectories are usually fragmented because of occlusions and misdetections. We therefore also propose a trajectory filter, named the global tracker, which aims at removing noisy trajectories and fusing fragmented trajectories belonging to the same mobile object. The method has been tested on five videos of different scene conditions. Three of them are provided by the ETISEO benchmarking project (http://www-sop.inria.fr/orion/ETISEO), on which the proposed tracker's performance has been compared with seven other tracking algorithms. The advantages of our approach over existing state-of-the-art ones are: (i) no prior knowledge is required (e.g. no calibration and no contextual models are needed); (ii) the tracker is more reliable because it combines multiple feature similarities; (iii) the tracker performs under different scene conditions: single/several mobile objects, weak/strong illumination, indoor/outdoor scenes; (iv) a trajectory filter is defined and applied to improve tracker performance; and (v) the tracker outperforms many state-of-the-art algorithms.
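As a rough illustration of the multi-feature matching step described above, the following Python sketch scores candidate detections against a tracked object using distance, area, shape-ratio and color-histogram similarities. The weighting scheme, similarity formulas and the 50-pixel distance scale are illustrative assumptions, not the authors' published definitions.

```python
# A minimal sketch of multi-feature matching, assuming illustrative
# similarity formulas and equal weights (not the paper's exact definitions).
import numpy as np

def feature_similarity(track, detection, weights=(0.25, 0.25, 0.25, 0.25)):
    """Combine distance, area, shape-ratio and color-histogram similarities."""
    # Spatial proximity: decays with center distance (scale is an assumption).
    d = np.linalg.norm(np.array(track["center"]) - np.array(detection["center"]))
    s_dist = np.exp(-d / 50.0)
    # Area similarity: ratio of the smaller to the larger bounding-box area.
    s_area = min(track["area"], detection["area"]) / max(track["area"], detection["area"])
    # Shape-ratio similarity (width/height).
    s_shape = min(track["ratio"], detection["ratio"]) / max(track["ratio"], detection["ratio"])
    # Color similarity: histogram intersection of normalized histograms.
    s_color = np.minimum(track["hist"], detection["hist"]).sum()
    return float(np.dot(weights, [s_dist, s_area, s_shape, s_color]))

# Example: match a tracked object against two candidate detections.
hist = np.ones(16) / 16
track = {"center": (100, 80), "area": 900.0, "ratio": 0.5, "hist": hist}
cands = [
    {"center": (104, 82), "area": 880.0, "ratio": 0.48, "hist": hist},
    {"center": (300, 40), "area": 400.0, "ratio": 1.2, "hist": hist},
]
best = max(cands, key=lambda c: feature_similarity(track, c))
print(best["center"])  # -> (104, 82): the nearby, similar detection wins
```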
A comprehensive insight towards Pre-processing Methodologies applied on GPS data
Reliable use of Global Positioning System (GPS) data demands a high degree of accuracy in the time and positional information delivered to the user. However, various extrinsic and intrinsic factors disrupt data transmission from GPS satellite to GPS receiver, which calls the trustworthiness of such data into question. This manuscript therefore offers a comprehensive insight into the data preprocessing methodologies developed and adopted by present-day researchers. The discussion covers standard data-cleaning methods as well as a diverse range of existing research-based approaches. The review finds that, despite the considerable body of work addressing data cleaning, there are critical loopholes in almost all existing studies. The paper extracts open research problems and offers evidential insight through use cases, showing that data cleaning methods still require critical investigation.
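As one concrete example of the standard cleaning steps such surveys cover, the sketch below drops GPS fixes that imply a physically implausible travel speed. The 50 m/s threshold and the tuple-based record format are assumptions chosen for illustration.

```python
# A common GPS-cleaning step: discard fixes implying an implausible speed.
# The max_speed threshold is an assumption, not a value from the survey.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS84 points, in meters."""
    R = 6371000.0
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

def drop_speed_outliers(fixes, max_speed=50.0):
    """fixes: list of (timestamp_s, lat, lon); keeps points reachable in time."""
    cleaned = [fixes[0]]
    for t, lat, lon in fixes[1:]:
        t0, lat0, lon0 = cleaned[-1]
        dt = t - t0
        if dt > 0 and haversine_m(lat0, lon0, lat, lon) / dt <= max_speed:
            cleaned.append((t, lat, lon))
    return cleaned

track = [(0, 48.137, 11.575), (1, 48.1371, 11.5751), (2, 49.0, 12.0), (3, 48.1372, 11.5752)]
print(drop_speed_outliers(track))  # the ~100 km jump to (49.0, 12.0) is removed
```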
A taxonomy of multi-industry labour force skills
This paper proposes an empirical study of the skill repertoires of 290 sectors in the United States over the period 2002–2011. We use information on employment structures and the job content of occupations to flesh out structural characteristics of industry-specific know-how. Mapping the skill structures embedded in the workforce yields a taxonomy that discloses novel nuances in the organization of industry. In so doing, we also take an initial step towards integrating labour and employment into the area of innovation studies.
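A hedged sketch of the kind of mapping exercise described above: industry skill profiles are computed as employment-weighted averages of occupational skill scores and then grouped by hierarchical clustering. The matrices and cluster count below are toy stand-ins for the occupation-level data the paper draws on, not its actual inputs.

```python
# Toy industry-by-skill mapping: weight occupational skill content by
# employment shares, then cluster industries into a taxonomy.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# employment[i, o]: share of occupation o in industry i (rows sum to 1)
employment = np.array([[0.7, 0.3, 0.0],
                       [0.1, 0.2, 0.7],
                       [0.6, 0.4, 0.0]])
# skill_content[o, s]: importance of skill s in occupation o
skill_content = np.array([[0.9, 0.1],
                          [0.4, 0.6],
                          [0.1, 0.9]])

industry_skills = employment @ skill_content       # industry-by-skill matrix
Z = linkage(industry_skills, method="ward")        # hierarchical clustering
taxonomy = fcluster(Z, t=2, criterion="maxclust")  # two top-level groups
print(taxonomy)  # industries 1 and 3 share a skill profile; industry 2 differs
```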
Cluster Complexes: A Framework for Understanding the Internationalisation of Innovation Systems
The literature on clustering that has developed over the last two decades or so has given us a wealth of information on the formation and competitiveness of places in the global economy. Similarly, the systems literature on innovation has been valuable in moving the debate around technology from a focus on the entrepreneur to one that encompasses institutions, government, suppliers, customers and universities. However, there remains an important limit to this research: the borders of political jurisdictions, usually nation states, typically delineate the studies. It is argued in this paper that, during an era when the international architecture of production relationships is changing, this view of systems is hindering its further development. This paper briefly examines what we have learnt about innovation systems, including clustering, and also explores the limitations of this work. From this foundation it is proposed that a framework which understands clusters as nodes within extra-territorial networks is a promising approach for internationalising the systems-of-innovation perspective. The advantage of the approach presented here is that it can simultaneously capture regional specialisations and be disaggregated enough to apply on a technology/sectoral basis. Another principal advantage is that such a framework goes some way towards an understanding of interregional and international trade that is consistent with what other studies have shown about the development of innovation within particular geographic locations. The paper draws on extensive analysis of industrial interdependencies that cross national borders to support the case for cluster complexes that transcend regional and national borders.
Keywords: innovation systems; clusters; internationalisation
Analyzing complex data using domain constraints
Data-driven research approaches are becoming increasingly popular in a growing number of scientific disciplines. While a data-driven research approach can yield superior results, generating the required data can be very costly. This frequently leads to small and complex data sets, in which it is impossible to rely on volume alone to compensate for all shortcomings of the data. To counter this problem, other reliable sources of information must be incorporated. In this work, domain knowledge, as a particularly reliable type of additional information, is used to inform data-driven analysis methods. This domain knowledge is represented as constraints on the possible solutions, which the presented methods can use to guide their analysis. The focus is on spatial constraints, a particularly common type of constraint, but the proposed techniques are general enough to be applied to other types.
In this thesis, new methods using domain constraints for data-driven science applications are discussed. These methods have applications in feature evaluation, route database repair, and Gaussian mixture modeling of spatial data. The first application focuses on feature evaluation. The presented method receives two representations of the same data: one as the intended target and the other for investigation. It calculates a score indicating how well the two representations agree. One presented application uses this technique to compare a reference attribute set with different subsets to determine the importance and relevance of individual attributes.
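A minimal sketch of this representation-agreement idea, using k-nearest-neighbour overlap as an illustrative proxy score; this is an assumption for demonstration, not the thesis's exact measure.

```python
# Score how well the k-NN structure of a candidate representation matches
# that of the reference representation (an illustrative proxy score).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def neighbourhood_agreement(reference, candidate, k=5):
    """Mean overlap of k-NN sets computed in the two representations."""
    _, idx_ref = NearestNeighbors(n_neighbors=k + 1).fit(reference).kneighbors(reference)
    _, idx_cand = NearestNeighbors(n_neighbors=k + 1).fit(candidate).kneighbors(candidate)
    # Drop the first neighbour (the point itself) before comparing sets.
    overlaps = [len(set(a[1:]) & set(b[1:])) / k for a, b in zip(idx_ref, idx_cand)]
    return float(np.mean(overlaps))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
print(neighbourhood_agreement(X, X[:, :4]))                    # informative subset scores higher
print(neighbourhood_agreement(X, rng.normal(size=(200, 4))))   # unrelated noise scores near chance
```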
A second technique analyzes route data for constraint compliance. The presented framework allows the user to specify constraints and possible actions to modify the data. The presented method then uses these inputs to generate a version of the data that satisfies the constraints while otherwise keeping the impact of the modifications as small as possible. Two extensions of this schema are presented: an extension to continuously valued costs, which are minimized, and an extension to constraints involving more than one moving object.
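The following toy instance of this repair schema assumes a single constraint (a maximum step length between consecutive route points) and a single action (deleting a point), greedily keeping as many points as possible; the real framework supports user-defined constraints, actions and continuous costs, and need not be greedy.

```python
# Toy constraint repair: drop the fewest points (greedily) so that every
# consecutive step in the route is no longer than max_step.
def repair_route(points, max_step=10.0):
    def step_ok(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5 <= max_step
    kept, removed = [points[0]], []
    for p in points[1:]:
        # Keep the point only if it is reachable from the last kept point.
        (kept if step_ok(kept[-1], p) else removed).append(p)
    return kept, removed

route = [(0, 0), (3, 4), (50, 50), (6, 8), (9, 12)]
kept, removed = repair_route(route)
print(kept)     # [(0, 0), (3, 4), (6, 8), (9, 12)]
print(removed)  # [(50, 50)]: the constraint-violating outlier
```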
Another application area addressed is the modeling of multivariate measurement data collected at spatially distributed locations. The spatial information recorded with the data can be used as the basis for constraints. This thesis presents multiple approaches to building a model of this kind of data while complying with spatial constraints. The first approach is an interactive tool that lets domain scientists generate a model of the data consistent with their knowledge about it. The second is a Monte Carlo approach, which generates a large number of possible models, tests them for compliance with the constraints, and returns the best one. The final two approaches are based on the EM algorithm and use different ways of incorporating the constraint information into their models.
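The Monte Carlo variant can be sketched in a few lines: fit many candidate Gaussian mixtures, discard those violating a spatial constraint, and keep the best-scoring survivor. The bounding-box constraint below is an illustrative stand-in for real domain knowledge, and the data is synthetic.

```python
# Monte Carlo model search: many random-restart GMM fits, filtered by a
# spatial constraint, best log-likelihood wins.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.concatenate([rng.normal((0, 0), 0.5, (100, 2)),
                    rng.normal((4, 4), 0.5, (100, 2))])

def satisfies_constraint(gmm, lo=-2.0, hi=6.0):
    """Illustrative spatial constraint: every component mean lies in [lo, hi]^2."""
    return bool(np.all((gmm.means_ >= lo) & (gmm.means_ <= hi)))

best, best_ll = None, -np.inf
for seed in range(50):                      # Monte Carlo over random restarts
    gmm = GaussianMixture(n_components=2, random_state=seed, n_init=1).fit(X)
    ll = gmm.score(X)                       # mean log-likelihood on the data
    if satisfies_constraint(gmm) and ll > best_ll:
        best, best_ll = gmm, ll

print(best.means_.round(2))  # component means near (0, 0) and (4, 4)
```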
At the end of the thesis, two applications of the models generated in the previous chapter are presented. The first is the prediction of the origin of samples; the other is the visual representation of the extracted models on a map. Domain scientists can use these tools to augment their tried-and-tested workflows.
The developed techniques are applied to a real-world data set collected in the archaeobiological research project FOR 1670 (Transalpine mobility and cultural transfer) of the German Research Foundation (DFG). The data set contains isotope ratio measurements of samples discovered at archaeological sites in the Alpine region of central Europe. Using the presented data analysis methods, the data is analyzed to answer relevant domain questions. In a first application, the attributes of the measurements are analyzed for their relative importance and their ability to predict the spatial location of samples. Another presented application is the reconstruction of potential migration routes between the investigated sites. Spatial models are then built using the presented modeling approaches. Univariate outliers are determined and used to predict locations based on the generated models, and these predictions are cross-referenced with the recorded origins. Finally, maps of the isotope distribution in the investigated regions are presented.
The described methods and demonstrated analyses show that domain knowledge can be used to formulate constraints that inform the data analysis process, yielding valid models from relatively small data sets and supporting domain scientists in their analyses.
Post-automation: report from an international workshop
The purpose of this report is to share lessons from an international research workshop dedicated to post-automation. Twenty-seven researchers from eleven different countries in Africa, Asia, Latin America and Europe met at the Science Policy Research Unit at Sussex University on 11-13 September 2019, where we discussed empirical research papers and explored post-automation in group activities. We write this report primarily for researchers, but also for activists and policy advisors looking for more imaginative approaches to governing technology, work and sustainability in society than the dominant agendas that adapt automatically to the interests behind automation.
The report is structured as follows. Section two introduces the workshop topic and the papers presented, which lead into two related areas that became a focus for discussion: first, challenges in the foundations of automation theory (section three); and second, post-automation as a more constructive response to the challenges of automation, one that is happening right now (section four). Section five summarises key points arising from the workshop, based on empirical observations from the margins of digital technology development, which give a flavour of the workshop and help elaborate the post-automation proposition. Analytical and strategic themes are discussed in section six. We conclude in section seven with proposals for a post-automation agenda.
A robust and efficient video representation for action recognition
This paper introduces a state-of-the-art video representation and applies it
to efficient action recognition and detection. We first propose to improve the
popular dense trajectory features by explicit camera motion estimation. More
specifically, we extract feature point matches between frames using SURF
descriptors and dense optical flow. The matches are used to estimate a
homography with RANSAC. To improve the robustness of homography estimation, a
human detector is employed to remove matches on the human body, since
human motion is in general not consistent with camera motion. Trajectories
consistent with the homography are attributed to camera motion, and thus removed. We also
use the homography to cancel out camera motion from the optical flow. This
results in significant improvement on motion-based HOF and MBH descriptors. We
further explore the recent Fisher vector as an alternative feature encoding
approach to the standard bag-of-words histogram, and consider different ways to
include spatial layout information in these encodings. We present a large and
varied set of evaluations, considering (i) classification of short basic
actions on six datasets, (ii) localization of such actions in feature-length
movies, and (iii) large-scale recognition of complex events. We find that our
improved trajectory features significantly outperform previous dense
trajectories, and that Fisher vectors are superior to bag-of-words encodings
for video recognition tasks. In all three tasks, we show substantial
improvements over state-of-the-art results.
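A minimal sketch of the camera-motion compensation step described above: match keypoints between consecutive frames, fit a homography with RANSAC, and warp the previous frame so that residual motion reflects objects rather than the camera. ORB is used here in place of SURF, which requires OpenCV's non-free contrib build; feature counts and the RANSAC threshold are assumptions.

```python
# Estimate camera motion between two grayscale frames as a homography,
# following the idea in the abstract (ORB substituted for SURF).
import cv2
import numpy as np

def camera_motion_homography(prev_gray, curr_gray):
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    # In the paper, matches falling inside human detections would be removed
    # here, since human motion is not consistent with camera motion.
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H

# Usage sketch: warping the previous frame with H stabilizes the pair, so
# optical flow recomputed on it no longer contains the camera-induced motion.
# H = camera_motion_homography(prev_gray, curr_gray)
# stabilized = cv2.warpPerspective(prev_gray, H, prev_gray.shape[::-1])
```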
LMGP: Lifted Multicut Meets Geometry Projections for Multi-Camera Multi-Object Tracking
Multi-Camera Multi-Object Tracking is currently drawing attention in the
computer vision field due to its superior performance in real-world
applications such as video surveillance in crowded scenes or in wide spaces. In
this work, we propose a mathematically elegant multi-camera multiple object
tracking approach based on a spatial-temporal lifted multicut formulation. Our
model utilizes state-of-the-art tracklets produced by single-camera trackers as
proposals. As these tracklets may contain ID-Switch errors, we refine them
through a novel pre-clustering obtained from 3D geometry projections. As a
result, we derive a better tracking graph without ID switches and more precise
affinity costs for the data association phase. Tracklets are then matched to
multi-camera trajectories by solving a global lifted multicut formulation that
incorporates short and long-range temporal interactions on tracklets located in
the same camera as well as inter-camera ones. Experimental results on the
WildTrack dataset yield near-perfect performance, outperforming
state-of-the-art trackers on the Campus dataset while being on par on PETS-09.
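To make the multicut objective concrete, the toy sketch below enumerates partitions of four tracklets and picks the labelling that minimises the total cost of cut edges. The pairwise costs are made up, and real lifted multicut solvers scale far beyond this brute force.

```python
# Toy multicut on a tracklet graph: choose the node partition minimising the
# total cost of edges whose endpoints land in different clusters.
from itertools import product

# Pairwise association costs between four tracklets (possibly across cameras):
# positive = likely the same object (cutting is penalised),
# negative = likely different objects (cutting is rewarded).
costs = {(0, 1): 2.0, (2, 3): 1.5, (0, 2): -1.0,
         (1, 3): -0.5, (0, 3): -2.0, (1, 2): -0.8}

def multicut_value(labels):
    """Cost of the cut edges under a given labelling of nodes into clusters."""
    return sum(c for (u, v), c in costs.items() if labels[u] != labels[v])

best = min(product(range(4), repeat=4), key=multicut_value)
print(best)  # tracklets 0,1 share one label and 2,3 another: two trajectories
```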