Absolute height measurement of specular surfaces with modified active fringe reflection photogrammetry
Deflectometric methods have existed for more than a decade for slope measurement of specular freeform surfaces, exploiting the deformation of a sample pattern after reflection from a test surface. Usually, these approaches require two-directional fringe patterns to be projected on an LCD screen or ground glass and require slope integration, which adds complexity to the whole measuring process.
This paper proposes a new mathematical measurement model for measuring topography information of freeform specular surfaces, which integrates a virtual reference specular surface into the method of active fringe reflection deflectometry and presents a straightforward relation between height and phase. This method only requires one direction of horizontal or vertical sinusoidal fringe patterns to be projected on an LCD screen, resulting in a significant reduction in capture time over established methods. Assuming the whole system has been pre-calibrated, during the measurement process the fringe patterns are captured separately via the virtual reference and the inspected freeform surfaces by a CCD camera. The reference phase can be solved according to the spatial geometric relation between the LCD screen and the CCD camera. The captured phases can be unwrapped with a heterodyne technique and an optimum frequency selection method. Based on the calculated unwrapped phase and the proposed mathematical model, the absolute height of the inspected surface can be computed. Simulated and experimental results show that this methodology can conveniently calculate topography information for freeform and structured specular surfaces without integration and reconstruction processes.
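The phase retrieval mentioned above builds on standard phase-shifting fringe analysis. As a minimal sketch (not the paper's full height-phase model), the common four-step variant recovers the wrapped phase from four fringe images shifted by quarter periods; the function name and frame layout here are assumptions for illustration:

```python
import numpy as np

def wrapped_phase(frames):
    # frames: four fringe images I_k = A + B * cos(phi + k*pi/2), k = 0..3,
    # as captured by the camera for four quarter-period phase shifts.
    I0, I1, I2, I3 = frames
    # I3 - I1 = 2B*sin(phi) and I0 - I2 = 2B*cos(phi), so the standard
    # four-step formula recovers phi in (-pi, pi], independent of A and B.
    return np.arctan2(I3 - I1, I0 - I2)
```

The result is wrapped modulo 2*pi; recovering the continuous phase then requires an unwrapping step such as the heterodyne technique the abstract refers to.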
Multivariate Time Series Anomaly Detection: Fancy Algorithms and Flawed Evaluation Methodology
Multivariate Time Series (MVTS) anomaly detection is a long-standing and
challenging research topic that has attracted tremendous research effort from
both industry and academia recently. However, a careful study of the literature
makes us realize that 1) the community is active but not as organized as other
sibling machine learning communities such as Computer Vision (CV) and Natural
Language Processing (NLP), and 2) most proposed solutions are evaluated using
either inappropriate or highly flawed protocols, with an apparent lack of
scientific foundation. So flawed is one very popular protocol, the so-called
point-adjust (PA) protocol, that a random guess can be shown to systematically
outperform all algorithms developed so far. In this paper, we review and evaluate
many recent algorithms using more robust protocols and discuss how a normally
good protocol may have weaknesses in the context of MVTS anomaly detection and
how to mitigate them. We also share our concerns about benchmark datasets,
experiment design and evaluation methodology we observe in many works.
Furthermore, we propose a simple, yet challenging, baseline algorithm based on
Principal Components Analysis (PCA) that surprisingly outperforms many recent
Deep Learning (DL) based approaches on popular benchmark datasets. The main
objective of this work is to stimulate more effort towards important aspects of
the research such as data, experiment design, evaluation methodology and result
interpretability, instead of putting the highest weight on the design of
increasingly more complex and "fancier" algorithms.
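The PCA baseline described above can be sketched in a few lines: score each observation by its reconstruction error after projecting onto the top-k principal components. The function name and the choice of k are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def pca_anomaly_scores(X, k=2):
    # Center the data and obtain principal directions via SVD.
    mu = X.mean(axis=0)
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:k].T                       # top-k principal directions
    # Project onto the k-dimensional subspace and back; the residual norm
    # (reconstruction error) serves as the anomaly score per observation.
    recon = Xc @ V @ V.T
    return np.linalg.norm(Xc - recon, axis=1)
```

Points lying close to the dominant low-dimensional structure score near zero, while observations that deviate from it receive large scores; thresholding the scores yields anomaly labels.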
A Benchmarking Study of Matching Algorithms for Knowledge Graph Entity Alignment
Identifying equivalent entities across knowledge graphs (KGs), a task known
as Entity Alignment (EA), is a long-standing challenge. So far,
many methods have been proposed, with recent focus on leveraging Deep Learning
to solve this problem. However, we observe that most of the effort has gone
into learning better representations of entities, rather than into improving
entity matching from the learned representations. In fact, how to efficiently
infer the entity pairs from the resulting similarity matrix, which is
essentially a matching problem, has been largely ignored by the community.
Motivated by this
observation, we conduct an in-depth analysis on existing algorithms that are
particularly designed for solving this matching problem, and propose a novel
matching method, named Bidirectional Matching (BMat). Our extensive
experimental results on public datasets indicate that there is currently no
single silver bullet solution for EA. In other words, different classes of
entity similarity estimation may require different matching algorithms to reach
the best EA results for each class. We finally conclude that using PARIS, the
state-of-the-art EA approach, with BMat gives the best combination in terms of
EA performance and the algorithm's time and space complexity.
Comment: 11 pages, 1 figure, 7 tables
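The matching step this abstract highlights operates on an entity-similarity matrix. BMat's details are not given here, so the following is only a sketch of a simple bidirectional (mutual-best) matching rule in the same spirit; the function name and matrix layout are assumptions:

```python
import numpy as np

def mutual_best_match(sim):
    # sim[i, j]: similarity between source entity i and target entity j.
    # Keep a pair (i, j) only if each entity is the other's best candidate
    # in both directions -- a bidirectional agreement criterion.
    row_best = sim.argmax(axis=1)   # best target for each source entity
    col_best = sim.argmax(axis=0)   # best source for each target entity
    return [(i, j) for i, j in enumerate(row_best) if col_best[j] == i]
```

Compared with taking the row-wise maximum alone, the bidirectional check suppresses one-sided matches where a target entity is claimed by several sources, at the cost of leaving some entities unmatched.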