The Error is the Feature: how to Forecast Lightning using a Model Prediction Error
Despite the progress of recent decades, weather forecasting remains a
challenging and computationally expensive task. Current satellite-based
approaches to predict thunderstorms are usually based on the analysis of the
observed brightness temperatures in different spectral channels and emit a
warning if a critical threshold is reached. Recent progress in data science,
however, demonstrates that machine learning can be applied successfully to many
fields of research, especially those dealing with large datasets. We
therefore present a new approach to the problem of predicting thunderstorms
based on machine learning. The core idea of our work is to use the error of
two-dimensional optical flow algorithms applied to images of meteorological
satellites as a feature for machine learning models. We interpret that optical
flow error as an indication of convection potentially leading to thunderstorms
and lightning. To factor in spatial proximity we use various manual convolution
steps. We also consider effects such as the time of day or the geographic
location. We train different tree classifier models as well as a neural network
to predict lightning within the next few hours (called nowcasting in
meteorology) based on these features. In our evaluation section we compare the
predictive power of the different models and the impact of different features
on the classification result. Our results show a high accuracy of 96% for
predictions over the next 15 minutes, which decreases slightly with increasing
forecast period but remains above 83% for forecasts of up to five hours.
However, the false positive rate of nearly 6% needs further investigation
before our approach can be used operationally.

Comment: 10 pages, 7 figures
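The core feature described above, the residual left over after advecting one satellite image toward the next, can be sketched in a few lines. The following is a minimal, hypothetical illustration and not the authors' implementation: it estimates a single integer displacement by brute force instead of a full two-dimensional optical flow field, and treats the per-pixel advection residual as the feature map.

```python
import numpy as np

def estimate_shift(prev, curr, max_shift=3):
    # Brute-force search for the integer (dy, dx) shift that best
    # aligns the previous image with the current one.
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = np.mean((np.roll(prev, (dy, dx), axis=(0, 1)) - curr) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def flow_error_feature(prev, curr, max_shift=3):
    # Residual after advecting prev by the estimated flow: large
    # residuals mark change that pure advection cannot explain,
    # e.g. convective growth between the two images.
    dy, dx = estimate_shift(prev, curr, max_shift)
    predicted = np.roll(prev, (dy, dx), axis=(0, 1))
    return np.abs(predicted - curr)
```

Regions where the residual stays large even after the best advective shift are candidates for convective development; in the approach above, such per-pixel errors (after convolution and contextual features) feed the tree classifiers and the neural network.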
From Big Data to Big Displays: High-Performance Visualization at Blue Brain
Blue Brain has pushed high-performance visualization (HPV) to complement its
HPC strategy since its inception in 2007. In 2011, this strategy was
accelerated to develop innovative visualization solutions through increased
funding and strategic partnerships with other research institutions.
We present the key elements of this HPV ecosystem, which integrates C++
visualization applications with novel collaborative display systems. We
show how our strategy of transforming visualization engines into services
enables a variety of use cases: not only integration with high-fidelity
displays, but also building service-oriented architectures, linking into web
applications, and providing remote services to Python applications.

Comment: ISC 2017 Visualization at Scale workshop
Conedy: a scientific tool to investigate Complex Network Dynamics
We present Conedy, a performant scientific tool for the numerical
investigation of dynamics on complex networks. Conedy allows users to create
networks and provides automatic code generation and compilation to ensure
efficient treatment of arbitrary node dynamics. Conedy can be interfaced via an
internal script interpreter or via a Python module.
Time-Efficient Hybrid Approach for Facial Expression Recognition
Facial expression recognition is an emerging research area for improving human-computer interaction. This research plays a significant role in the fields of social communication, commercial enterprise, law enforcement, and other computer interactions. In this paper, we propose a time-efficient hybrid design for facial expression recognition that combines image pre-processing steps with different Convolutional Neural Network (CNN) structures, providing better accuracy and greatly reduced training time. We predict seven basic emotions of human faces: sadness, happiness, disgust, anger, fear, surprise, and neutral. The model performs well on challenging cases in which the expressed emotion could be one of several with quite similar facial characteristics, such as anger, disgust, and sadness. Experiments were conducted across multiple databases and different facial orientations; to the best of our knowledge, the model achieved an accuracy of about 89.58% on the KDEF dataset, 100% on the JAFFE dataset, and 71.975% on the combined (KDEF + JAFFE + SFEW) dataset across these different scenarios. Performance evaluation was done by cross-validation techniques to avoid bias towards a specific set of images from a database.
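The k-fold cross-validation used for evaluation can be sketched generically. In this minimal sketch, the `fit` and `predict` callables are hypothetical stand-ins for the CNN training and inference steps, not the authors' code; only the fold splitting and accuracy averaging are shown.

```python
import numpy as np

def kfold_indices(n_samples, k, seed=0):
    # Shuffle sample indices, then split them into k roughly equal folds.
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(n_samples), k)

def cross_validate(X, y, k, fit, predict):
    # Hold out each fold in turn, train on the rest, and average
    # the per-fold accuracies.
    folds = kfold_indices(len(X), k)
    accs = []
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(X[train_idx], y[train_idx])
        accs.append(np.mean(predict(model, X[test_idx]) == y[test_idx]))
    return float(np.mean(accs))
```

Averaging accuracy over folds, each held out exactly once, is what prevents the reported numbers from being biased toward one particular subset of images from a database.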