Applications of Diversity and the Self-Attention Mechanism in Neural Networks
This thesis covers three contributions in applications of neural networks. The first relates to diversity and ensemble learning, while the other two cover novel applications of the self-attention mechanism.

An important aspect of training a neural network is the choice of objective function. Regression via Classification (RvC) is often used to tackle deep learning problems where the target variable is continuous but standard regression objectives fail to capture the underlying distance metric of the domain. RvC can improve the performance of the trained model, but the optimal choice of the discrete classes it uses is not well understood. In Paper 1, we introduce the concept of label diversity by generalizing the RvC method. Exploiting the fact that labels can be generated in arbitrary ways for continuous and ordinal target variables, we show that using multiple labels can improve the prediction accuracy of a neural network compared to using a single label, and we provide theoretical justification from ensemble theory. We apply our method to several tasks in computer vision and show increased performance compared to regression and RvC baselines.

The performance of a neural network is also influenced by the choice of network architecture, and the design process must account for the domain of the inputs and its symmetries. Graph neural networks (GNNs) are the family of networks that operate on graphs, where information is propagated between the graph nodes using, for example, self-attention. However, self-attention can also be applied to other data domains if the inputs can be converted into graphs, which is not always trivial. In Paper 2, we do this for audio by constructing a complete graph over audio features extracted from different time slots. We apply this technique to the task of keyword spotting and show that a neural network based solely on self-attention is more accurate than previously considered architectures.
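The label-diversity idea from Paper 1 can be illustrated with a small sketch. Everything below is a hedged toy construction: the bin counts, the edge offsets, and the assumption of perfect classifiers are illustrative choices, not the thesis's actual setup.

```python
import numpy as np

# Regression via Classification (RvC) with label diversity: the same
# continuous target is discretized in several ways, and the decoded
# predictions are ensembled. Bin widths/offsets here are illustrative.

def make_bins(lo, hi, n_bins, offset=0.0):
    """Bin edges over [lo, hi], shifted by an offset to diversify labels."""
    return np.linspace(lo, hi, n_bins + 1) + offset

def encode(y, edges):
    """Map continuous targets to discrete class indices."""
    return np.clip(np.digitize(y, edges) - 1, 0, len(edges) - 2)

def decode(classes, edges):
    """Map class indices back to bin centers."""
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers[classes]

y = np.array([0.12, 0.47, 0.81])          # continuous targets in [0, 1]

# Three diverse labelings of the same range: shifted bin edges.
edge_sets = [make_bins(0.0, 1.0, 10, off) for off in (0.0, 0.033, 0.066)]

# Pretend each "classifier" predicts its training label perfectly;
# the ensemble averages the decoded bin centers.
preds = np.mean([decode(encode(y, e), e) for e in edge_sets], axis=0)
print(np.round(preds, 3))
```

Because the labelings discretize the range differently, their quantization errors partly cancel when averaged, which is the intuition behind the ensemble argument.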
Finally, in Paper 3 we apply attention-based learning to point cloud processing, where the permutation symmetry must be preserved. In order to make the self-attention mechanism both more efficient and more expressive, we propose a hierarchical approach that allows individual points to interact on both a local and a global scale. Through extensive experiments on several benchmarks, we show that this approach improves the descriptiveness of the learned features, while simultaneously reducing the computational complexity compared to an architecture that applies self-attention naively to all input points.
Improving the Performance of OTDOA based Positioning in NB-IoT Systems
In this paper, we consider positioning with observed time difference of arrival (OTDOA) for a device deployed in long-term evolution (LTE) based narrowband Internet-of-things (NB-IoT) systems. We propose an iterative expectation-maximization based successive interference cancellation (EM-SIC) algorithm that jointly estimates the residual frequency offset (FO), the fading-channel taps, and the time of arrival (ToA) of the first arrival path for each detected cell. To allow a low-complexity ToA detector, and because of the limits of low-cost analog circuits, we assume an NB-IoT device operating at a low sampling rate such as 1.92 MHz or lower. The proposed EM-SIC algorithm comprises two stages of ToA detection, from which the OTDOA is calculated. In the first stage, after running the EM-SIC block for a predefined number of iterations, a coarse ToA is estimated for each detected cell. In the second stage, to improve the ToA resolution, a low-pass filter interpolates the correlations of the time-domain PRS signal, evaluated at the low sampling rate, to a high sampling rate such as 30.72 MHz. To keep the complexity low, only the correlations inside a small search window centered at the coarse ToA estimate are upsampled, and the refined ToA is estimated from the upsampled correlations. If at least three cells are detected, the position of the NB-IoT device can be estimated from the OTDOA and the locations of the detected cell sites. We show through numerical simulations that the proposed EM-SIC based ToA detector is robust against impairments introduced by inter-cell interference, the fading channel, and residual FO, yielding significant signal-to-noise ratio (SNR) gains over traditional ToA detectors that do not account for these impairments.

Comment: Accepted at GLOBECOM 2017, 7 pages, 11 figures
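The second-stage refinement can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the reference sequence, window size, and noise level are made-up stand-ins, and `scipy.signal.resample` plays the role of the low-pass interpolation filter.

```python
import numpy as np
from scipy.signal import resample

# Two-stage ToA refinement: a coarse peak is found in the low-rate
# cross-correlation, then only a small window around it is interpolated
# to the high rate (1.92 MHz -> 30.72 MHz, a factor of 16). The "PRS"
# here is a surrogate random sequence, not an actual LTE reference signal.

rng = np.random.default_rng(0)
fs_low, fs_high = 1.92e6, 30.72e6
up = int(fs_high / fs_low)                 # upsampling factor: 16

prs = rng.standard_normal(64)              # surrogate reference signal
true_delay = 23                            # in low-rate samples
rx = np.zeros(128)
rx[true_delay:true_delay + 64] = prs       # received copy of the signal
rx += 0.1 * rng.standard_normal(128)       # additive noise

# Stage 1: coarse ToA from the low-rate cross-correlation.
corr = np.correlate(rx, prs, mode="valid")
coarse = int(np.argmax(np.abs(corr)))

# Stage 2: upsample only a small window around the coarse peak
# (Fourier/low-pass interpolation), then refine the peak location.
# Assumes the window lies inside the correlation array.
half = 4
win = corr[coarse - half: coarse + half + 1]
fine = resample(win, len(win) * up)        # interpolate to the 30.72 MHz grid
toa_high_rate = (coarse - half) * up + int(np.argmax(np.abs(fine)))
print(coarse, toa_high_rate / up)
```

Upsampling only the windowed correlations, rather than the full received signal, is what keeps the refinement cheap: the interpolation length is fixed by the window, not by the capture duration.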
How artificial intelligence can be used to improve lean manufacturing and production processes: A case study of Hennig Olsen
The implementation of Lean together with artificial intelligence (AI) has shown positive results across different industries. By integrating AI techniques, the efficiency and effectiveness of Lean processes can be enhanced: the combination contributes to improved decision-making, increased productivity, and reduced waste. Moreover, AI can identify and rectify process errors, enabling streamlined and more efficient operations.

In 2014, Hennig Olsen initiated the implementation of lean thinking, which initially yielded mixed results. However, the company adapted lean principles to its specific requirements, leading to significantly improved outcomes. With the rapid advancement of technology, Hennig Olsen began experimenting with artificial intelligence, particularly in the realm of vision control, in 2019, and has since consistently adopted increasingly advanced technologies to continuously enhance its production lines.

This case study examined the impact of implementing artificial intelligence on the company's performance. The findings revealed that as Hennig Olsen incorporated more artificial intelligence into its production lines, it experienced a significant reduction in customer complaints. However, the company continues to face challenges in meeting its overall equipment effectiveness (OEE) goals. The thesis also identified potential areas for improvement, emphasizing the benefits of integrating Six Sigma processes through AI initiatives. More specifically, implementing predictive maintenance to minimize unexpected downtime and improve OEE emerged as a key opportunity. Leveraging AI to analyze vast amounts of data could also prove advantageous in optimizing cycle times and reducing waste within the organization.

Finally, this report has examined the readiness of Hennig Olsen to further integrate AI tools into its operations. To fully capitalize on the potential benefits of AI and evolve into a comprehensive smart factory, the company needs to invest in additional technologies such as the Internet of Things, big data analytics, and cloud computing. However, a significant hurdle arises from the limitations of its existing machinery, which cannot gather extensive data or establish interconnectivity. Moreover, sourcing qualified personnel proficient in developing these technologies poses a challenge. A more effective strategy, along with support from stakeholders, is needed to encourage investment in new technologies; this will facilitate the successful implementation of AI and foster greater acceptance of new technology among employees.
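For concreteness, the OEE metric referenced above is conventionally the product of availability, performance, and quality, and predictive maintenance feeds into it by raising availability. The figures below are invented for illustration.

```python
# Overall equipment effectiveness (OEE) as the product of its three
# standard factors. All numbers are made up for illustration only.

def oee(availability, performance, quality):
    """OEE is the product of availability, performance, and quality."""
    return availability * performance * quality

planned_time = 480.0                # minutes in a shift
downtime = 60.0                     # unplanned stops (what predictive
                                    # maintenance aims to reduce)
run_time = planned_time - downtime

availability = run_time / planned_time           # 0.875
performance = 0.90                               # actual vs ideal cycle time
quality = 0.98                                   # good units / total units

print(round(oee(availability, performance, quality), 3))   # 0.772
```

Cutting the unplanned downtime in half in this toy example would lift availability to 0.9375 and OEE above 0.82, which is why predictive maintenance is singled out as the key opportunity.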
Points to Patches: Enabling the Use of Self-Attention for 3D Shape Recognition
While the Transformer architecture has become ubiquitous in the machine learning field, its adaptation to 3D shape recognition is non-trivial. Due to its quadratic computational complexity, the self-attention operator quickly becomes inefficient as the set of input points grows larger. Furthermore, we find that the attention mechanism struggles to find useful connections between individual points on a global scale. To alleviate these problems, we propose a two-stage Point Transformer-in-Transformer (Point-TnT) approach that combines local and global attention mechanisms, enabling both individual points and patches of points to attend to each other effectively. Experiments on shape classification show that this approach provides more useful features for downstream tasks than the baseline Transformer, while also being more computationally efficient. In addition, we extend our method to feature matching for scene reconstruction, showing that it can be used in conjunction with existing scene reconstruction pipelines.

Comment: Accepted to the 26th International Conference on Pattern Recognition
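The local/global split described above can be sketched in a few lines of numpy. This is a toy sketch under stated assumptions: the patch grouping is a plain reshape and the patch summary is mean pooling, whereas the actual Point-TnT architecture uses learned projections and neighborhood-based patches.

```python
import numpy as np

# Two-stage attention: self-attention runs within each patch of points
# (local), and then across one summary token per patch (global).

def attention(q, k, v):
    """Scaled dot-product self-attention with a numerically stable softmax."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

rng = np.random.default_rng(0)
n_points, dim, patch = 64, 8, 16
x = rng.standard_normal((n_points, dim))   # toy point features

# Local stage: attention inside each patch of 16 points.
patches = x.reshape(n_points // patch, patch, dim)
local = np.stack([attention(p, p, p) for p in patches])

# Global stage: attention across one summary token per patch.
summaries = local.mean(axis=1)                  # (4, dim) patch features
global_feats = attention(summaries, summaries, summaries)

print(local.shape, global_feats.shape)          # (4, 16, 8) (4, 8)
```

With N points and patch size p, the local stage costs O((N/p)·p²) and the global stage O((N/p)²), instead of O(N²) for naive self-attention over all points, which is the source of the efficiency gain the abstract claims.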
Single Parent Families and Poverty in Continental Welfare States: Examining Dutch Policy Responses to New Social Risks
An interdisciplinary literature demonstrates that lone-parent families, an overwhelmingly feminized group, confront new social risks and face a higher risk of poverty. Recent research also documents cross-national differences in single-parent poverty and emphasizes the role of social policy settings in shaping the economic security of single-mother families across welfare states. How do welfare regimes in Europe respond to new social risks such as the increasing income insecurity of lone-parent families? This research examines the redesign of social policies in response to the new risk structures, using the situation of lone parents in the Netherlands as empirical terrain to address the question of stability or change in continental welfare states.
Keyword Transformer: A Self-Attention Model for Keyword Spotting
The Transformer architecture has been successful across many domains, including natural language processing, computer vision and speech recognition. In keyword spotting, self-attention has primarily been used on top of convolutional or recurrent encoders. We investigate a range of ways to adapt the Transformer architecture to keyword spotting and introduce the Keyword Transformer (KWT), a fully self-attentional architecture that exceeds state-of-the-art performance across multiple tasks without any pre-training or additional data. Surprisingly, this simple architecture outperforms more complex models that mix convolutional, recurrent and attentive layers. KWT can be used as a drop-in replacement for these models, setting two new benchmark records on the Google Speech Commands dataset with 98.6% and 97.7% accuracy on the 12- and 35-command tasks respectively.

Comment: Proceedings of INTERSPEECH