Visualization of flow past a marine turbine: the information-assisted search for sustainable energy
Anomaly detection in spatiotemporal data via regularized non-negative tensor analysis
Anomaly detection in multidimensional data is a challenging task. Detecting anomalous mobility patterns in a city needs to take spatial, temporal, and traffic information into consideration. Although existing techniques are able to extract spatiotemporal features for anomaly analysis, little systematic analysis has been proposed of how different factors contribute to or affect the anomalous patterns. In this paper, we propose a novel technique to localize spatiotemporal anomalous events based on tensor decomposition. The proposed method employs a spatial-feature-temporal tensor model and analyzes latent mobility patterns through unsupervised learning. We first train the model on historical data and then use it to capture anomalies, i.e., mobility patterns that differ significantly from the normal patterns. The proposed technique is evaluated on the yellow-cab dataset collected from New York City. The results reveal several interesting latent mobility patterns and traffic anomalies that can be deemed anomalous events in the city, suggesting the effectiveness of the proposed anomaly detection method.
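A minimal sketch of the general idea behind such a spatial-feature-temporal tensor model, assuming a hypothetical (region × feature × hour) tensor of taxi counts and a recent TensorLy install; the shapes, rank and threshold are illustrative placeholders, and plain non-negative CP decomposition stands in for the paper's regularized non-negative tensor analysis.

```python
# Sketch: flag spatiotemporal anomalies as entries poorly explained by
# low-rank non-negative latent mobility patterns (all data synthetic).
import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_parafac

rng = np.random.default_rng(0)
# Hypothetical (region x feature x hour) tensor of historical taxi counts.
historical = rng.poisson(20.0, size=(50, 4, 24)).astype(float)

# Learn low-rank latent mobility patterns from historical data.
cp = non_negative_parafac(tl.tensor(historical), rank=5, n_iter_max=200)
normal = tl.cp_to_tensor(cp)          # the "normal" pattern reconstruction

# Score a new day against the learned normal patterns: large residuals are
# candidate anomalous (region, feature, hour) events.
new_day = rng.poisson(20.0, size=(50, 4, 24)).astype(float)
new_day[12, 0, 18] = 400.0            # inject an unusual spike
residual = np.abs(new_day - normal)
threshold = residual.mean() + 4 * residual.std()
print(np.argwhere(residual > threshold))
```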
Sensitivity of European glaciers to precipitation and temperature - two case studies
A nonlinear backpropagation network (BPN) has been trained with high-resolution multiproxy reconstructions of temperature and precipitation (input data) and glacier length variations of the Alpine Lower Grindelwald Glacier, Switzerland (output data). The model was then forced with two regional climate scenarios of temperature and precipitation derived from a probabilistic approach. The first scenario ("no change") assumes no changes in temperature and precipitation for the 2000-2050 period compared to the 1970-2000 mean. In the second scenario ("combined forcing"), linear warming rates of 0.036-0.054°C per year and changing precipitation rates between −17% and +8% compared to the 1970-2000 mean were used for the 2000-2050 period. In the first case the Lower Grindelwald Glacier shows a continuous retreat until the 2020s, when it reaches an equilibrium followed by a minor advance. For the second scenario a strong and continuous retreat of approximately −30 m/year since the 1990s has been modelled. By processing the climate parameters used with a sensitivity analysis based on neural networks, we investigate the relative importance of different climate configurations for the Lower Grindelwald Glacier during four well-documented historical advance periods (1590-1610, 1690-1720, 1760-1780, 1810-1820) and retreat periods (1640-1665, 1780-1810, 1860-1880, 1945-1970). It is shown that different combinations of seasonal temperature and precipitation have led to glacier variations. In a similar manner, we establish the significance of precipitation and temperature for the well-known early eighteenth-century advance and the twentieth-century retreat of Nigardsbreen, a glacier in western Norway. We show that the maritime Nigardsbreen Glacier is more influenced by winter and/or spring precipitation than the Lower Grindelwald Glacier.
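A schematic of the train-then-force workflow described above, using scikit-learn's MLPRegressor as a generic backpropagation network; the input arrays, warming rate and precipitation change are synthetic placeholders, not the reconstructions or scenarios used in the study.

```python
# Schematic: train a backpropagation network on climate inputs vs. glacier
# length, then force it with a warming scenario (all data synthetic).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n_years = 300
# Placeholder seasonal temperature/precipitation reconstructions (inputs)
# and glacier length variations (output), both standardized.
X_hist = rng.normal(size=(n_years, 8))      # e.g. 4 seasons x (T, P)
y_hist = rng.normal(size=n_years)           # length change per year

bpn = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
bpn.fit(X_hist, y_hist)

# Force the trained network with a "combined forcing"-style scenario:
# roughly 0.045 degC/yr warming and reduced precipitation over 50 years.
years = np.arange(50)
X_scenario = np.tile(X_hist[-1], (50, 1))
X_scenario[:, :4] += 0.045 * years[:, None]   # warm the temperature inputs
X_scenario[:, 4:] *= 0.9                      # ~-10% precipitation
length_response = bpn.predict(X_scenario)
```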
Figure-Ground Segmentation Using Multiple Cues
The theme of this thesis is figure-ground segmentation. We address the problem in the context of a visual observer, e.g. a mobile robot, moving around in the world and capable of shifting its gaze to and fixating on objects in its environment. We consider only bottom-up processes: how the system can detect and segment out objects because they stand out from their immediate background in some feature dimension. Since this implies that the distinguishing cues cannot be predicted but depend on the scene, the system must rely on multiple cues. The integrated use of multiple cues forms a major theme of the thesis. In particular, we note that an observer in our real environment has access to 3-D cues. Inspired by psychophysical findings about human vision, we try to demonstrate their effectiveness in figure-ground segmentation and grouping in machine vision as well.
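A toy illustration of the multiple-cue idea: several normalized bottom-up cue maps (here invented stand-ins for disparity, motion and colour contrast) are fused into one figure-ground map, so that whichever cue happens to distinguish the object in a given scene can dominate. This is not the thesis's actual system.

```python
# Toy illustration: fuse several bottom-up cue maps into one
# figure-ground map (all cue maps here are synthetic stand-ins).
import numpy as np

def normalize(cue):
    """Rescale a cue map to [0, 1] so no single cue dominates by scale."""
    cue = cue - cue.min()
    return cue / (cue.max() + 1e-9)

h, w = 64, 64
rng = np.random.default_rng(2)
disparity = rng.random((h, w))       # stand-in for a 3-D (stereo) cue
motion    = rng.random((h, w))       # stand-in for a motion cue
contrast  = rng.random((h, w))       # stand-in for a colour-contrast cue

# Because the distinguishing cue cannot be predicted in advance,
# combine all available cues and threshold the result.
combined = sum(normalize(c) for c in (disparity, motion, contrast)) / 3
figure_mask = combined > np.percentile(combined, 80)   # figure vs. ground
```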
A Stochastic Grammar of Images
This exploratory paper quests for a stochastic and context-sensitive grammar of images. The grammar should achieve the following four objectives and thus serve as a unified framework of representation, learning, and recognition for a large number of object categories. (i) The grammar represents both the hierarchical decompositions from scenes to objects, parts, primitives and pixels by terminal and non-terminal nodes, and the contexts for spatial and functional relations by horizontal links between the nodes. It formulates each object category as the set of all possible valid configurations produced by the grammar. (ii) The grammar is embodied in a simple And-Or graph representation where each Or-node points to alternative sub-configurations and an And-node is decomposed into a number of components. This representation supports recursive top-down/bottom-up procedures for image parsing under the Bayesian framework and makes it convenient to scale up in complexity. Given an input image, the image parsing task constructs a most probable parse graph on the fly as the output interpretation; this parse graph is a subgraph of the And-Or graph obtained after making choices at the Or-nodes. (iii) A probabilistic model is defined on this And-Or graph representation to account for the natural occurrence frequency of objects and parts as well as their relations. This model is learned from a relatively small training set per category and then sampled to synthesize a large number of configurations to cover novel object instances in the test set. This generalization capability is mostly missing in discriminative machine learning methods and can largely improve recognition performance in experiments. (iv) To fill the well-known semantic gap between symbols and raw signals, the grammar includes a series of visual dictionaries and organizes them through graph composition. At the bottom level the dictionary is a set of image primitives, each having a number of anchor points with open bonds to link with other primitives. These primitives can be combined to form larger and larger graph structures for parts and objects. The ambiguities in inferring local primitives are resolved through top-down computation using larger structures. Finally, these primitives form a primal sketch representation that generates the input image with every pixel explained. The proposed grammar integrates three prominent representations in the literature: stochastic grammars for composition, Markov (or graphical) models for contexts, and sparse coding with primitives (wavelets). It also combines the structure-based and appearance-based methods in the vision literature. Finally, the paper presents three case studies to illustrate the proposed grammar.
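A minimal sketch of the And-Or graph idea from objective (ii): And-nodes decompose into components, Or-nodes select one of several alternative sub-configurations, and a parse graph is the subgraph obtained after fixing the Or choices. The node names and the "face" category are invented for illustration; this is not the paper's representation or inference procedure.

```python
# Minimal And-Or graph sketch: And-nodes decompose into parts, Or-nodes
# select one alternative; sampling fixes the Or choices into a parse graph.
from dataclasses import dataclass, field
from typing import List
import random

@dataclass
class Node:
    name: str
    kind: str                      # "and", "or", or "terminal"
    children: List["Node"] = field(default_factory=list)

def sample_parse(node, rng):
    """Sample one valid configuration (a parse graph) top-down."""
    if node.kind == "terminal":
        return node.name
    if node.kind == "or":                       # choose one alternative
        return sample_parse(rng.choice(node.children), rng)
    # "and": keep every component
    return {node.name: [sample_parse(c, rng) for c in node.children]}

# Tiny hypothetical category: a "face" decomposes into eyes and a mouth,
# and the mouth is either open or closed.
mouth = Node("mouth", "or", [Node("open_mouth", "terminal"),
                             Node("closed_mouth", "terminal")])
face = Node("face", "and", [Node("left_eye", "terminal"),
                            Node("right_eye", "terminal"),
                            mouth])

print(sample_parse(face, random.Random(0)))
```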
ShapeGraFormer: GraFormer-Based Network for Hand-Object Reconstruction from a Single Depth Map
3D reconstruction of hand-object manipulations is important for emulating human actions. Most methods dealing with challenging object manipulation scenarios focus on reconstructing hands in isolation, ignoring the physical and kinematic constraints imposed by object contact. Some approaches produce more realistic results by jointly reconstructing 3D hand-object interactions, but they focus on coarse pose estimation or rely on known hand and object shapes. We propose the first approach for realistic 3D hand-object shape and pose reconstruction from a single depth map. Unlike previous work, our voxel-based reconstruction network regresses the vertex coordinates of a hand and an object and reconstructs more realistic interactions. Our pipeline additionally predicts voxelized hand-object shapes that have a one-to-one mapping to the input voxelized depth. Thereafter, we exploit the graph nature of the hand and object shapes by utilizing the recent GraFormer network with positional embedding to reconstruct shapes from template meshes. In addition, we show the impact of adding another GraFormer component that refines the reconstructed shapes based on the hand-object interactions, and its ability to reconstruct more accurate object shapes. We perform an extensive evaluation on the HO-3D and DexYCB datasets and show that our method outperforms existing approaches in hand reconstruction and produces plausible reconstructions for the object.
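A rough sketch of the kind of preprocessing the voxel-based pipeline implies: back-projecting a depth map into 3-D points and voxelizing them into an occupancy grid aligned with the predicted hand-object shapes. The camera intrinsics, depth values and grid size are placeholders, not the paper's configuration.

```python
# Rough sketch: back-project a depth map and voxelize it into an occupancy
# grid of the kind a voxel-based reconstruction network consumes.
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project an (H, W) depth map (metres) into camera-frame points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                   # drop invalid pixels

def voxelize(points, grid=64):
    """Map points into a (grid, grid, grid) binary occupancy volume."""
    lo, hi = points.min(0), points.max(0)
    idx = ((points - lo) / (hi - lo + 1e-9) * (grid - 1)).astype(int)
    vol = np.zeros((grid, grid, grid), dtype=np.float32)
    vol[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return vol

# Placeholder depth map and intrinsics (not the datasets' cameras).
depth = np.full((480, 640), 0.5, dtype=np.float32)
occupancy = voxelize(depth_to_points(depth, fx=475.0, fy=475.0,
                                     cx=320.0, cy=240.0))
```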
Information and resource management systems for Internet of Things: Energy management, communication protocols and future applications
The idea of the Internet of Things (IoT) has enabled
the objects of our surroundings to intercommunicate with each
other in diverse working environments by utilizing their embedded
architectural and communication technologies. IoT has
provided humans the capability to manipulate the operations
and data available from different information systems using these
intelligent objects available in the surroundings. The scope of IoT
is to serve humanity across different domains of life covering industrial,
health, home and day-to-day operations of Information
Systems (IS). Due to the huge number of heterogeneous network
elements interacting and working under IoT based information
systems, there is an enormous need for resource management
for the smooth running of IoT operations. The key aspect in
IoT implementations is to have resource-constrained embedded
devices and objects participating in IoT operations. It is important
to meet the challenges raised during management and
sharing of resources in IoT based information systems. Managing
resources by implementing protocols, algorithms and techniques
are required to enhance the scalability, reliability and stability in
IoT operations across different fields of technology. This special
issue opens the new areas of interest for the researchers in the
domain of resource management in IoT operations
The Influence Of Social Presence On Virtual Community Participation: The Relational View Based On Community-Trust Theory
Virtual communities constitute an online environment that offers not only a new form of communication through which community members share information and interact with each other, but also an arena in which members develop social relationships. Prior research on the conceptualization of social presence, the degree to which a person is perceived as real in a mediated communication, has yielded two perspectives. The media richness view conceives social presence as a media attribute, while the relational view considers social presence a quality of relational systems, emphasizing the relational aspects of communication. Drawing upon the relational view of social presence, this research incorporates the commitment-trust theory to investigate the influence of social presence on virtual community members' continual participation. Moreover, this research considers sense of virtual community (SOVC) as the mediator between social presence and virtual community participation. The contributions of this research are three-fold. First, this research contributes to the social presence literature by focusing on the social relational aspects of communication that depend on the participants rather than on the medium. Second, this research examines the role and importance of social presence in SOVC and virtual community participation. Lastly, it helps clarify how social presence contributes to continual participation in virtual communities.
Self-supervised Lidar place recognition in overhead imagery using unpaired data
Although place recognition is crucial for navigation and mapping, collecting training ground truth, namely sensor data pairs across different locations, is costly and time-consuming. This paper tackles these issues by learning lidar place recognition on public overhead imagery in a self-supervised fashion, with no need for paired lidar and overhead imagery data. We learn the cross-modal comparison between lidar and overhead imagery with a multi-step framework. First, images are transformed into synthetic lidar data and a latent projection is learned. Next, we discover pseudo pairs of lidar and satellite data from unpaired and asynchronous sequences, and use them to train a final embedding-space projection in a cross-modality place recognition framework. We train and test our approach on real data from various environments and show performance approaching that of a supervised method using paired data.
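A heavily simplified sketch of the pseudo-pair idea under stated assumptions: precomputed descriptors for unpaired lidar scans and overhead tiles, toy MLP encoders in place of the paper's networks, and one-sided nearest-neighbour mining standing in for the paper's pseudo-pair discovery. It is not the authors' architecture or training recipe.

```python
# Sketch: embed unpaired lidar scans and overhead tiles in a shared latent
# space, mine nearest neighbours as pseudo pairs, and train with a triplet
# loss (all descriptors and encoders here are toy placeholders).
import torch
import torch.nn as nn

torch.manual_seed(0)
lidar_feats = torch.randn(200, 128)      # placeholder per-scan descriptors
tile_feats = torch.randn(300, 128)       # placeholder per-tile descriptors

lidar_enc = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))
tile_enc = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))
opt = torch.optim.Adam(list(lidar_enc.parameters()) +
                       list(tile_enc.parameters()), lr=1e-3)
triplet = nn.TripletMarginLoss(margin=0.5)

for _ in range(10):
    with torch.no_grad():                # mine pseudo pairs in latent space
        zl = nn.functional.normalize(lidar_enc(lidar_feats), dim=1)
        zt = nn.functional.normalize(tile_enc(tile_feats), dim=1)
        nn_idx = (zl @ zt.T).argmax(dim=1)       # nearest tile per scan
    anchor = lidar_enc(lidar_feats)
    positive = tile_enc(tile_feats[nn_idx])      # pseudo-positive tiles
    negative = tile_enc(tile_feats[torch.randint(0, 300, (200,))])
    loss = triplet(anchor, positive, negative)
    opt.zero_grad(); loss.backward(); opt.step()
```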
