AI-based design methodologies for hot form quench (HFQ®)
This thesis aims to develop advanced design methodologies that fully exploit the capabilities of the Hot Form Quench (HFQ®) stamping process for forming complex geometric features in high-strength aluminium alloy structural components. While previous research has focused on material models for FE simulations, such simulations are unsuitable for early-phase design due to their high computational cost and expertise requirements. This project has two main objectives: first, to develop design guidelines for the early-stage design phase; and second, to create a machine learning-based platform that can optimise 3D geometries under hot stamping constraints, for both early- and late-stage design. With these methodologies, the aim is to facilitate the incorporation of HFQ capabilities into component geometry design, enabling the full realisation of its benefits.
To achieve the objectives of this project, two main efforts were undertaken. Firstly, the analysis of aluminium alloys for stamping deep corners was simplified by identifying the effects of corner geometry and material characteristics on post-form thinning distribution. New equation sets were proposed to model trends and design maps were created to guide component design at early stages. Secondly, a platform was developed to optimise 3D geometries for stamping, using deep learning technologies to incorporate manufacturing capabilities. This platform combined two neural networks: a geometry generator based on Signed Distance Functions (SDFs), and an image-based manufacturability surrogate model. The platform used gradient-based techniques to update the inputs to the geometry generator based on the surrogate model's manufacturability information. The effectiveness of the platform was demonstrated on two geometry classes, Corners and Bulkheads, with five case studies conducted to optimise under post-stamped thinning constraints. Results showed that the platform allowed for free morphing of complex geometries, leading to significant improvements in component quality.
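The optimisation loop at the heart of the platform can be illustrated with a toy sketch: a differentiable generator maps a latent code to geometry parameters, a surrogate scores manufacturability (here, predicted thinning), and gradient descent updates the generator's inputs. The functions below are invented stand-ins, not the thesis's trained SDF generator or image-based surrogate.

```python
import numpy as np

# Hypothetical stand-ins for the two trained networks: a geometry
# generator g(z) and a manufacturability surrogate f(g) predicting
# peak post-form thinning. Both are differentiable toy functions.
def generator(z):
    return np.tanh(z)            # latent code -> geometry parameters

def surrogate(geom):
    return np.sum(geom ** 2)     # predicted peak thinning (toy)

def grad_z(z):
    # Chain rule: d(thinning)/dz = df/dg * dg/dz
    geom = np.tanh(z)
    df_dg = 2.0 * geom
    dg_dz = 1.0 - geom ** 2
    return df_dg * dg_dz

z = np.array([0.9, -1.2, 0.4])   # initial latent code
for _ in range(200):             # gradient descent on the generator input
    z -= 0.1 * grad_z(z)

print(surrogate(generator(z)))   # thinning driven toward its minimum
```

In the thesis's platform the gradients would flow through the neural networks via backpropagation rather than the hand-derived chain rule used here.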
The research outcomes represent a significant contribution to the field of technologically advanced manufacturing methods and offer promising avenues for future research. The developed methodologies provide practical solutions for designers to identify optimal component geometries, ensuring manufacturing feasibility and reducing design development time and costs. The potential applications of these methodologies extend to real-world industrial settings and can significantly contribute to the continued advancement of the manufacturing sector.
Possibilities of Evaluating the Dimensional Acceptability of Workpieces Using Computer Vision
This paper discusses the possibilities of an automated solution for distinguishing dimensionally accurate from defective products using a computer vision system. In a real industrial environment, research was conducted on a prototype of a quality control machine, i.e. a machine that evaluates from product images whether a product is accurate or defective. Various geometric features are extracted from the obtained product images, and a fuzzy inference system is created from these features using Fuzzy C-means clustering. The extracted geometric features represent the input variables, and the output variable has two values: true and false. The root mean square error in the evaluation of the accuracy and defectiveness of products ranges between 0.07 and 0.16. Through this research, valuable findings and conclusions were reached for future research, since this topic is poorly examined in the most renowned databases.
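The clustering step can be illustrated with a minimal Fuzzy C-means implementation. The feature values below are invented 2-D stand-ins for the paper's extracted geometric features, and the fuzzy inference system itself is not reproduced.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=50, seed=0):
    """Minimal Fuzzy C-means: returns memberships U (n x c) and centers."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                  # rows sum to 1
    for _ in range(iters):
        W = U ** m                                     # fuzzified weights
        centers = (W.T @ X) / W.sum(axis=0)[:, None]   # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        # standard membership update: U[i,k] = 1 / sum_j (d_ik/d_ij)^(2/(m-1))
        U = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1))).sum(axis=2)
    return U, centers

# Invented 2-D "geometric features" for four products: the first two
# should cluster together (accurate), the last two together (defective).
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
U, centers = fuzzy_c_means(X)
labels = U.argmax(axis=1)
print(labels)
```

The soft memberships in U, rather than the hard labels, are what a fuzzy inference system would consume downstream.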
COGNITIVE CONSEQUENCES OF VISUAL COMPLEXITY
Objects and events frequently strike us as being simple or complex. From looking at a child’s drawings to following a weaving storyline, the impression of complexity spans an extremely wide range of categories and domains. What is the nature of this experience, how is complexity represented in the mind, and why do we bother to represent complexity in the first place? This thesis explores these questions in the light of information theory. First, I argue that, in the case of visual objects, complexity is perceived efficiently and automatically, and is captured by the entropy of objects' internal descriptions over and above low-level visual features that may be correlated with complexity. Second, I ask whether objects are remembered as simpler or more complex than they really are. Several studies reveal a "caricature effect" whereby random-looking objects are misremembered as more complex than how they were actually presented. Third, I investigate the relationship between "objective" and "subjective" complexity. By taking a new approach in which subjects freely describe images and animations, my work reveals a striking and surprising quadratic relationship between the raw complexity of these stimuli and the length of their spoken descriptions. Finally, I show that complexity modulates aesthetic preferences: structural complexity predicts subjective preferences in a Goldilocks fashion, with moderately complex drawings selected as the most visually appealing in a gallery room. Collectively, this thesis sheds light on the nature and function of complexity (in vision and beyond), showcasing the power of information-theoretic approaches to understanding core perceptual and cognitive processes.
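The entropy-based notion of complexity used here can be made concrete with a small sketch: given a symbolic description of an object (the stroke-direction codes below are hypothetical), Shannon entropy over the empirical symbol distribution is higher for varied descriptions than for repetitive ones.

```python
import math
from collections import Counter

def shannon_entropy(description):
    """Entropy (bits/symbol) of a sequence's empirical symbol distribution."""
    counts = Counter(description)
    n = len(description)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A repetitive ("simple") description vs. a varied ("complex") one;
# the letters are invented stand-ins for an object's internal description.
simple  = "ababababababab"
varied  = "acbdbadcacdbda"
print(shannon_entropy(simple), shannon_entropy(varied))
```

A two-symbol alternation gives exactly 1 bit per symbol, while the four-symbol description approaches 2 bits, matching the intuition that it is the harder one to compress.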
Differential operators on sketches via alpha contours
A vector sketch is a popular and natural geometry representation depicting a 2D shape. When viewed from afar, the disconnected vector strokes of a sketch and the empty space around them visually merge into positive space and negative space, respectively. Positive and negative spaces are the key elements in the composition of a sketch and define what we perceive as the shape. Nevertheless, the notion of positive or negative space is mathematically ambiguous: while the strokes unambiguously indicate the interior or boundary of a 2D shape, the empty space may or may not belong to the shape's exterior.
For standard discrete geometry representations, such as meshes or point clouds, some of the most robust pipelines rely on discretizations of differential operators, such as Laplace-Beltrami. Such discretizations are not available for vector sketches; defining them may enable numerous applications of classical methods on vector sketches. However, to do so, one needs to define the positive space of a vector sketch, or the sketch shape.
Even though extracting this 2D sketch shape is mathematically ambiguous, we propose a robust algorithm, Alpha Contours, constructing a conservative estimate of it: a 2D shape containing all the input strokes, which lie in its interior or on its boundary, and aligning tightly to the sketch. This allows us to define popular differential operators on vector sketches, such as the Laplacian and Steklov operators.
We demonstrate that our construction enables robust tools for vector sketches, such as As-Rigid-As-Possible sketch deformation and functional maps between sketches, as well as solving partial differential equations on a vector sketch.
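To give a flavour of what a discrete differential operator on stroke geometry looks like, the sketch below builds a k-nearest-neighbour graph Laplacian over points sampled from a circular stroke. This is a generic stand-in, not the paper's Alpha Contours construction or its Laplace-Beltrami discretization.

```python
import numpy as np

def knn_graph_laplacian(P, k=3):
    """Symmetric graph Laplacian L = D - W over a k-NN graph of points P."""
    n = len(P)
    D = np.linalg.norm(P[:, None] - P[None], axis=2)   # pairwise distances
    W = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(D[i])[1:k + 1]:            # k nearest neighbours
            W[i, j] = W[j, i] = 1.0                    # symmetrize adjacency
    return np.diag(W.sum(axis=1)) - W

t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
P = np.c_[np.cos(t), np.sin(t)]                        # points on a circular stroke
L = knn_graph_laplacian(P)
print(L.shape)
```

Like any valid graph Laplacian, L is symmetric, positive semi-definite, and annihilates constant functions (rows sum to zero), which is what makes operators of this kind usable for deformation and functional-map pipelines.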
Automated Automotive Radar Calibration With Intelligent Vehicles
While automotive radar sensors are widely adopted and have been used for automatic cruise control and collision avoidance tasks, their application outside of vehicles is still limited. As they have the ability to resolve multiple targets in 3D space, radars can also be used for improving environment perception. This application, however, requires a precise calibration, which is usually a time-consuming and labor-intensive task. We therefore present an approach for automated and geo-referenced extrinsic calibration of automotive radar sensors that is based on a novel hypothesis filtering scheme. Our method does not require external modifications of a vehicle and instead uses the location data obtained from automated vehicles. This location data is then combined with filtered sensor data to create calibration hypotheses. Subsequent filtering and optimization recovers the correct calibration. Our evaluation on data from a real testing site shows that our method can correctly calibrate infrastructure sensors in an automated manner, thus enabling cooperative driving scenarios.
Comment: 5 pages, 4 figures, accepted for presentation at the 31st European Signal Processing Conference (EUSIPCO), September 4 - September 8, 2023, Helsinki, Finland.
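The geometric core of such an extrinsic calibration, once hypothesis filtering has produced correspondences, is a rigid alignment between radar-frame detections and geo-referenced vehicle positions. The sketch below solves that sub-problem in 2D with the standard Kabsch least-squares method on invented noiseless data; the paper's hypothesis filtering scheme is not reproduced.

```python
import numpy as np

def fit_rigid_2d(src, dst):
    """Least-squares R, t with dst ~= src @ R.T + t (Kabsch method)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)                  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))             # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    return R, mu_d - R @ mu_s

# Hypothetical data: radar-frame detections vs. geo-referenced
# positions reported by the automated vehicles.
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([12.0, -3.0])
radar = np.random.default_rng(0).random((20, 2)) * 50.0
geo = radar @ R_true.T + t_true
R, t = fit_rigid_2d(radar, geo)
print(np.round(R, 3), np.round(t, 3))
```

With real, noisy correspondences this least-squares step would sit inside the paper's filtering-and-optimization loop rather than being applied once.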
Automating Inspection of Tunnels With Photogrammetry and Deep Learning
Asset management of large underground transportation infrastructure requires frequent and detailed inspections to assess its overall structural condition and to focus available funds where required. At the time of writing, the common approach to performing visual inspections is heavily manual, and therefore slow, expensive, and highly subjective.
This research evaluates the applicability of an automated pipeline for performing visual inspections of underground infrastructure for asset management purposes. It also analyses the benefits of using lightweight, low-cost hardware versus high-end technology. The aim is to increase automation in performing such tasks, overcoming the main drawbacks of the traditional regime: the subjectivity, approximation and limited repeatability of manual inspection are replaced with objectivity and consistent accuracy. Moreover, automation reduces the overall end-to-end time required for the inspection and the associated costs. This might translate to more frequent inspections per given budget, resulting in increased service life of the infrastructure. Shorter inspections have social benefits as well, since local communities can rely on safe transportation with minimal disservice. Last but not least, automation drastically improves health and safety conditions for the inspection engineers, who need to spend less time in this hazardous environment.
The proposed pipeline combines photogrammetric techniques for photo-realistic 3D reconstruction with machine learning-based defect detection algorithms. This approach makes it possible to detect and map visible defects on the tunnel’s lining in a local coordinate system, providing the asset manager with a clear overview of the critical areas across the entire infrastructure.
The outcomes of the research show that the accuracy of the proposed pipeline largely outperforms manual results, both in three-dimensional mapping and in defect detection performance, pushing the benefit-cost ratio strongly in favour of the automated approach. Such outcomes will impact the way the construction industry approaches visual inspections, shifting it towards automated strategies.
Learnable Graph Matching: A Practical Paradigm for Data Association
Data association is at the core of many computer vision tasks, e.g., multiple object tracking, image matching, and point cloud registration. Existing methods usually solve the data association problem by network flow optimization, bipartite matching, or direct end-to-end learning. Despite their popularity, we find defects in the current solutions: they mostly ignore intra-view context information; besides, they either train deep association models end-to-end, hardly utilizing the advantages of optimization-based assignment methods, or only use an off-the-shelf neural network to extract features. In this paper, we propose a general learnable graph matching method to address these issues. Specifically, we model the intra-view relationships as an undirected graph. Data association then turns into a general graph matching problem between graphs. Furthermore, to make the optimization end-to-end differentiable, we relax the original graph matching problem into continuous quadratic programming and then incorporate training into a deep graph neural network using the KKT conditions and the implicit function theorem. On the MOT task, our method achieves state-of-the-art performance on several MOT datasets. For image matching, our method outperforms state-of-the-art methods with half the training data and iterations on a popular indoor dataset, ScanNet. Code will be available at https://github.com/jiaweihe1996/GMTracker.
Comment: Submitted to TPAMI on Mar 21, 2022. arXiv admin note: substantial text overlap with arXiv:2103.1617
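The differentiation trick named in the abstract can be shown on a toy equality-constrained QP: because the KKT conditions of such a QP are linear, the solution and its derivative with respect to a learned cost vector both come from one KKT system via the implicit function theorem. This is a sketch of the mechanism, not the paper's graph matching formulation; the matrices below are invented.

```python
import numpy as np

# Toy QP: min 0.5 x^T Q x - p^T x  s.t.  A x = b.
# Stationarity: Q x - p + A^T lam = 0, plus A x = b, i.e. a linear KKT system.
def solve_qp(Q, p, A, b):
    n, m = Q.shape[0], A.shape[0]
    K = np.block([[Q, A.T], [A, np.zeros((m, m))]])    # KKT matrix
    sol = np.linalg.solve(K, np.concatenate([p, b]))
    return sol[:n], K

def dx_dp(K, n):
    # Implicit function theorem: K [dx/dp; dlam/dp] = [I; 0],
    # so dx/dp is the top-left n x n block of K^{-1}.
    return np.linalg.inv(K)[:n, :n]

Q = np.array([[2.0, 0.5], [0.5, 1.0]])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
p = np.array([0.3, -0.2])
x, K = solve_qp(Q, p, A, b)
J = dx_dp(K, 2)

# Sanity check against a finite difference in p[0]
eps = 1e-6
x2, _ = solve_qp(Q, p + np.array([eps, 0.0]), A, b)
print(np.allclose((x2 - x) / eps, J[:, 0], atol=1e-4))
```

In the paper this Jacobian is what lets gradients flow from the matching loss back through the QP solution into the graph neural network's learned costs.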
Clutter Detection and Removal in 3D Scenes with View-Consistent Inpainting
Removing clutter from scenes is essential in many applications, ranging from privacy-concerned content filtering to data augmentation. In this work, we present an automatic system that removes clutter from 3D scenes and inpaints the result with coherent geometry and texture. We propose techniques for its two key components: 3D segmentation from shared properties and 3D inpainting, both of which are important problems. The definition of 3D scene clutter (frequently-moving objects) is not well captured by commonly-studied object categories in computer vision. To tackle the lack of well-defined clutter annotations, we group noisy fine-grained labels, leverage virtual rendering, and impose an instance-level area-sensitive loss. Once clutter is removed, we inpaint geometry and texture in the resulting holes by merging inpainted RGB-D images. This requires novel voting and pruning strategies that guarantee multi-view consistency across individually inpainted images for mesh reconstruction. Experiments on the ScanNet and Matterport datasets show that our method outperforms baselines for clutter segmentation and 3D inpainting, both visually and quantitatively.
Comment: 18 pages. ICCV 2023. Project page: https://weify627.github.io/clutter
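The voting-and-pruning idea can be illustrated with a much-simplified stand-in: each scene point receives one inpainted RGB estimate per view, a robust consensus is taken, and points are kept only where enough views agree. The data, tolerance, and vote threshold below are invented, not the paper's strategy.

```python
import numpy as np

def fuse_views(estimates, tol=10.0, min_votes=2):
    """estimates: (n_views, n_points, 3) RGB predictions per view.
    Returns a consensus color per point and a keep mask from view voting."""
    consensus = np.median(estimates, axis=0)                   # robust consensus
    agree = np.linalg.norm(estimates - consensus, axis=2) < tol
    keep = agree.sum(axis=0) >= min_votes                      # prune low-support points
    return consensus, keep

views = np.array([
    [[200.0, 10.0, 10.0], [10.0, 200.0, 10.0]],
    [[202.0, 12.0,  9.0], [10.0, 198.0, 12.0]],
    [[205.0,  9.0, 11.0], [90.0,  90.0, 90.0]],   # third view disagrees on point 1
])
color, keep = fuse_views(views)
print(keep)
```

Here point 1 survives because two of the three views still agree; the paper's strategies additionally enforce consistency at the mesh-reconstruction stage rather than per point.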
Classification and Segmentation of Galactic Structures in Large Multi-spectral Images
Extensive and exhaustive cataloguing of astronomical objects is imperative for studies seeking to understand the mechanisms which drive the universe. Such cataloguing tasks can be tedious, time consuming and demand a high level of domain-specific knowledge. Past astronomical imaging surveys have been catalogued through mostly manual effort. Imminent imaging surveys, however, will produce a volume of data that cannot feasibly be processed through manual cataloguing. Furthermore, these surveys will capture objects fainter than the night sky, termed low surface brightness objects, at unprecedented spatial resolution owing to advancements in astronomical imaging. In this thesis, we investigate the use of deep learning to automate cataloguing processes, such as detection, classification and segmentation of objects. A common theme throughout this work is the adaptation of machine learning methods to challenges specific to the domain of low surface brightness imaging.
We begin by creating an annotated dataset of structures in low surface brightness images. To facilitate supervised learning in neural networks, a dataset comprising inputs and corresponding ground truth target labels is required. An online tool is presented, allowing astronomers to classify and draw over objects in large multi-spectral images. A dataset produced using the tool is then detailed, containing 227 low surface brightness images from the MATLAS survey and labels made by four annotators. We then present a method for synthesising images of galactic cirrus which appear similar to MATLAS images, allowing pretraining of neural networks.
A method for integrating sensitivity to orientation in convolutional neural networks is then presented. Objects in astronomical images can present in any given orientation, and thus the ability of neural networks to handle rotations is desirable. We modify convolutional filters with sets of Gabor filters at different orientations. These orientations are learned alongside network parameters during backpropagation, allowing exact optimal orientations to be captured. The method is validated extensively on multiple datasets and use cases.
We propose an attention-based neural network architecture to process global contaminants in large images. Performing analysis of low surface brightness images requires plenty of contextual information and local textural patterns. As a result, a network for processing low surface brightness images should ideally be able to accommodate large high-resolution images without compromising on either local or global features. We utilise attention to capture long-range dependencies, and propose an efficient attention operator which significantly reduces computational cost, allowing the input of large images. We also use Gabor filters to build an attention mechanism that better captures long-range orientational patterns. These techniques are validated on the task of cirrus segmentation in MATLAS images, and cloud segmentation on the SWIMSEG database, where state-of-the-art performance is achieved.
Following this, cirrus segmentation in MATLAS images is further investigated, and a comprehensive study is performed on the task. We discuss challenges associated with cirrus segmentation and low surface brightness images in general, and present several techniques to accommodate them. A novel loss function is proposed to facilitate training of the segmentation model on probabilistic targets. Results are presented on the annotated MATLAS images, with extensive ablation studies and a final benchmark to test the limits of the detailed segmentation pipeline.
Finally, we develop a pipeline for multi-class segmentation of galactic structures and surrounding contaminants. Techniques from previous chapters are combined with a popular instance segmentation architecture to create a neural network capable of segmenting localised objects and extended amorphous regions. The process of data preparation for training instance segmentation models is thoroughly detailed. The method is tested on segmentation of five object classes in MATLAS images. We find that unifying the tasks of galactic structure segmentation and contaminant segmentation improves model performance in comparison to isolating each task.
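The Gabor-modulation idea can be sketched in a few lines: generate a bank of oriented Gabor kernels and multiply a convolutional filter by each, yielding orientation-sensitive copies. This is a static sketch; in the thesis the orientations themselves are learned by backpropagation, and the filter below is an invented stand-in.

```python
import numpy as np

def gabor(size, theta, sigma=2.0, lam=4.0):
    """Real Gabor kernel at orientation theta (radians)."""
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

# Modulate one 5x5 "learned" filter with a bank of 4 orientations,
# yielding 4 orientation-sensitive copies of the same filter.
rng = np.random.default_rng(0)
w = rng.standard_normal((5, 5))                   # stand-in learned filter
bank = [gabor(5, t) for t in np.linspace(0, np.pi, 4, endpoint=False)]
oriented = np.stack([w * g for g in bank])        # elementwise modulation
print(oriented.shape)
```

Because the modulation is elementwise, each orientation copy shares the underlying learned weights, which is what keeps the parameter count low while adding orientation sensitivity.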
Ambient Intelligence for Next-Generation AR
Next-generation augmented reality (AR) promises a high degree of context-awareness: a detailed knowledge of the environmental, user, social and system conditions in which an AR experience takes place. This will facilitate both the closer integration of the real and virtual worlds, and the provision of context-specific content or adaptations. However, environmental awareness in particular is challenging to achieve using AR devices alone; not only is these mobile devices' view of an environment spatially and temporally limited, but the data obtained by onboard sensors is frequently inaccurate and incomplete. This, combined with the fact that many aspects of core AR functionality and user experiences are impacted by properties of the real environment, motivates the use of ambient IoT devices, wireless sensors and actuators placed in the surrounding environment, for the measurement and optimization of environment properties. In this book chapter we categorize and examine the wide variety of ways in which these IoT sensors and actuators can support or enhance AR experiences, including quantitative insights and proof-of-concept systems that will inform the development of future solutions. We outline the challenges and opportunities associated with several important research directions which must be addressed to realize the full potential of next-generation AR.
Comment: This is a preprint of a book chapter which will appear in the Springer Handbook of the Metaverse.