Cognitively plausible representations for the alignment of sketch and geo-referenced maps
In many geo-spatial applications, freehand sketch maps are considered an intuitive way to collect user-generated spatial information. The task of automatically mapping information from such hand-drawn sketch maps to geo-referenced maps is known as the alignment task. Researchers have proposed various qualitative representations to capture distorted and generalized spatial information in sketch maps; however, the effectiveness of these representations has not yet been evaluated in the context of an alignment task. This paper empirically evaluates a set of cognitively plausible representations for alignment using real sketch maps collected from two different study areas, together with the corresponding geo-referenced maps. Firstly, the representations are evaluated in a single-aspect alignment approach by demonstrating the alignment of maps for each individual sketch aspect. Secondly, the representations are evaluated across multiple sketch aspects, using more than one representation in the alignment task. The evaluations demonstrate the suitability of the chosen representations for aligning user-generated content with geo-referenced maps in a real-world scenario.
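One family of qualitative representations of the kind this abstract discusses can be pictured with cardinal-direction relations: derive the direction relation between every landmark pair in the sketch and in the geo-referenced map, then score how many relations agree. The sketch below is a hypothetical illustration under that assumption, not the authors' implementation; all names and coordinates are invented.

```python
import math

def cardinal(p, q):
    """Qualitative cardinal direction from point p to point q (8-sector model)."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    angle = math.degrees(math.atan2(dy, dx)) % 360
    sectors = ["E", "NE", "N", "NW", "W", "SW", "S", "SE"]
    return sectors[int(((angle + 22.5) % 360) // 45)]

def alignment_score(sketch, geo):
    """Fraction of landmark pairs whose direction relation agrees
    between the (distorted) sketch and the geo-referenced map."""
    names = sorted(set(sketch) & set(geo))
    pairs = [(a, b) for a in names for b in names if a < b]
    agree = sum(cardinal(sketch[a], sketch[b]) == cardinal(geo[a], geo[b])
                for a, b in pairs)
    return agree / len(pairs) if pairs else 0.0

# Distorted sketch coordinates vs. geo-referenced coordinates
sketch = {"church": (0, 0), "park": (10, 2), "lake": (1, 9)}
geo    = {"church": (52.1, 7.6), "park": (52.9, 7.7), "lake": (52.2, 8.4)}
print(alignment_score(sketch, geo))  # all three pairwise relations agree -> 1.0
```

Because only the qualitative relation is compared, the score is unaffected by the metric distortions typical of hand-drawn maps, which is the point of using such representations for alignment.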
Fine-grained sketch-based image retrieval by matching deformable part models
© 2014. The copyright of this document resides with its authors. It may be distributed unchanged freely in print or electronic forms.
An important characteristic of sketches, compared with text, rests with their ability to intrinsically capture object appearance and structure. Nonetheless, akin to traditional text-based image retrieval, conventional sketch-based image retrieval (SBIR) principally focuses on retrieving images of the same category, neglecting the fine-grained characteristics of sketches. In this paper, we advocate the expressiveness of sketches and examine their efficacy under a novel fine-grained SBIR framework. In particular, we study how sketches enable fine-grained retrieval within object categories. Key to this problem is introducing a mid-level sketch representation that not only captures object pose, but also possesses the ability to traverse sketch and image domains. Specifically, we learn a deformable part-based model (DPM) as a mid-level representation to discover and encode the various poses in the sketch and image domains independently, after which graph matching is performed on the DPMs to establish pose correspondences across the two domains. We further propose an SBIR dataset that covers the unique aspects of fine-grained SBIR. Through in-depth experiments, we demonstrate the superior performance of our SBIR framework and showcase its unique ability in fine-grained retrieval.
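The pose-correspondence step can be pictured with a toy example: normalize part locations, then find the part-to-part assignment with minimal geometric cost. This is a hypothetical simplification (exhaustive assignment on part centers), not the paper's actual graph matching over DPMs; all names and coordinates are invented.

```python
from itertools import permutations

def centered(parts):
    """Express part centers relative to their centroid, so matching is
    invariant to where the object sits in the sketch or the image."""
    cx = sum(x for x, y in parts) / len(parts)
    cy = sum(y for x, y in parts) / len(parts)
    return [(x - cx, y - cy) for x, y in parts]

def match_parts(sketch_parts, image_parts):
    """Brute-force minimal-cost correspondence between sketch and image
    parts -- a toy stand-in for graph matching over DPM parts."""
    s, t = centered(sketch_parts), centered(image_parts)
    best_cost, best_perm = float("inf"), None
    for perm in permutations(range(len(t))):
        cost = sum((s[i][0] - t[j][0]) ** 2 + (s[i][1] - t[j][1]) ** 2
                   for i, j in enumerate(perm))
        if cost < best_cost:
            best_cost, best_perm = cost, perm
    return list(best_perm)

# Sketch parts: head, left paw, right paw -- the image lists them reordered.
sketch = [(5, 0), (0, 10), (10, 10)]
image = [(2, 12), (12, 12), (7, 2)]   # left paw, right paw, head
print(match_parts(sketch, image))  # [2, 0, 1]: head->head, paws->paws
```

Real graph matching also scores pairwise (edge) compatibility between parts, which makes it robust when individual part appearances differ across the sketch and image domains.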
Learning Aligned Cross-Modal Representations from Weakly Aligned Data
People can recognize scenes across many different modalities beyond natural
images. In this paper, we investigate how to learn cross-modal scene
representations that transfer across modalities. To study this problem, we
introduce a new cross-modal scene dataset. While convolutional neural networks
can categorize cross-modal scenes well, they also learn an intermediate
representation not aligned across modalities, which is undesirable for
cross-modal transfer applications. We present methods to regularize cross-modal
convolutional neural networks so that they have a shared representation that is
agnostic of the modality. Our experiments suggest that our scene representation
can help transfer representations across modalities for retrieval. Moreover,
our visualizations suggest that units emerge in the shared representation that
tend to activate on consistent concepts independently of the modality.
Comment: Conference paper at CVPR 201
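The idea of regularizing two modality-specific networks toward a shared representation can be sketched with a generic statistics-matching penalty. This is only loosely in the spirit of the paper (a simple moment-matching term, not the authors' method), and all names are hypothetical.

```python
import numpy as np

def modality_alignment_penalty(feat_a, feat_b):
    """Penalty that is zero when the two modalities produce feature
    distributions with matching per-dimension means and variances.
    A simple moment-matching regularizer; the paper's method differs."""
    mean_gap = np.sum((feat_a.mean(axis=0) - feat_b.mean(axis=0)) ** 2)
    var_gap = np.sum((feat_a.var(axis=0) - feat_b.var(axis=0)) ** 2)
    return mean_gap + var_gap

rng = np.random.default_rng(0)
natural = rng.normal(0.0, 1.0, size=(256, 8))   # e.g. photo features
sketchy = rng.normal(0.5, 1.0, size=(256, 8))   # e.g. clip-art features
print(modality_alignment_penalty(natural, sketchy))  # > 0: not yet aligned
print(modality_alignment_penalty(natural, natural))  # exactly 0: identical
```

Added to the classification loss of each branch, such a term pushes the branches toward a representation that is agnostic of the input modality.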
Ontologies and representation spaces for sketch map interpretation
In this paper, we present a systematic approach to sketch map interpretation. The method decomposes the elements of a sketch map into a hierarchy of categories, from the material sketch map level to the non-material representational sketch map level, and then interprets the sketch map using the five formal representation spaces that we develop. These spaces (including set, graph, metric and Euclidean spaces) provide a tiered formal representation based on standard mathematical structures. We take the view that a sketch map bears information about the physical world, and we systematise this using extensions of existing formal ontologies. The motivation for this work is the partially automatic extraction and integration of information from sketch maps. We propose a set of ontologies and methods as a first step towards a formalisation of partially automatic extraction and integration of sketch map content. We also see this work as a contribution to spatial cognition, where researchers externalise spatial knowledge using sketch mapping. The paper concludes by working through an example that demonstrates sketch map interpretation at different levels using the underlying method.
Network geography: relations, interactions, scaling and spatial processes in GIS
This chapter argues that the representational basis of GIS largely avoids even the most rudimentary distortions of Euclidean space as reflected, for example, in the notion of the network. Processes acting on networks which involve both short and longer term dynamics are often absent from GI science. However, a sea change is taking place in the way we view the geography of natural and man-made systems. This is emphasising their dynamics and the way they evolve from the bottom up, with networks an essential constituent of this decentralized paradigm. Here we will sketch these developments, showing how ideas about graphs, in terms of the way they evolve as connected, self-organised structures reflected in their scaling, are generating new and important views of geographical space. We argue that GI science must respond to such developments and needs to find new forms of representation which enable both theory and applications through software to be extended to embrace this new science of networks.
GIS and urban design
Although urban planning has used computer models and information systems since the 1950s, and architectural practice has recently restructured to the use of computer-aided design (CAD) and computer drafting software, urban design has hardly been touched by the digital world. This is about to change as very fine scale spatial data relevant to such design becomes routinely available, as 2-dimensional GIS (geographic information systems) become linked to 3-dimensional CAD packages, and as other kinds of photorealistic media are increasingly being fused with these software. In this chapter, we present the role of GIS in urban design, outlining what current desktop software is capable of and showing how various new techniques can be developed which make such software highly suitable as a basis for urban design. We first outline the nature of urban design and then present ideas about how various software might form a tool kit to aid its process. We then look in turn at: utilising standard mapping capabilities within GIS relevant to urban design; building functional extensions to GIS which measure local scale accessibility; providing sketch planning capability in GIS; and linking 2-d to 3-d visualisations using low cost net-enabled CAD browsers. We finally conclude with some speculations on the future of GIS for urban design across networks, whereby a wide range of participants might engage in the design process digitally but remotely.
How human schematization and systematic errors take effect on sketch map formalizations
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Sketch maps are an important way to represent spatial information and are used in many geospatial reasoning tasks (Forbus, Usher, & Chapman, 2004). Compared with verbal or textual language, a sketch map is a more interactive mode that more directly supports human spatial thinking, and is thus a more natural way to reflect how people perceive the properties of spatial objects and their spatial relations. One challenging application of sketch maps is Spatial-Query-by-Sketch, proposed by Egenhofer. Designed as a query language for geographic information systems (GISs), it allows a user to formulate a spatial query by drawing the desired spatial configuration with a pen on a touch-sensitive computer screen and have it translated into a symbolic representation to be processed against a geographic database (Egenhofer, 1997).
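The translation into a symbolic representation can be pictured as follows: reduce each sketched object to a coarse footprint, derive a qualitative relation for each object pair, and retain only database scenes satisfying the same relations. The sketch below is a deliberately simplified, hypothetical illustration using axis-aligned rectangles and a four-relation vocabulary, not Egenhofer's actual system; all names are invented.

```python
def rect_relation(a, b):
    """Coarse topological relation between axis-aligned rectangles
    (xmin, ymin, xmax, ymax) -- a simplified stand-in for the richer
    relation sets used in Spatial-Query-by-Sketch."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    if ax1 < bx0 or bx1 < ax0 or ay1 < by0 or by1 < ay0:
        return "disjoint"
    if ax0 <= bx0 and ay0 <= by0 and bx1 <= ax1 and by1 <= ay1:
        return "contains"
    if bx0 <= ax0 and by0 <= ay0 and ax1 <= bx1 and ay1 <= by1:
        return "inside"
    return "overlaps"

def query_by_sketch(sketched, database):
    """Translate a sketch into symbolic relations, then return the
    database scenes whose objects satisfy the same relations."""
    wanted = {(p, q): rect_relation(sketched[p], sketched[q])
              for p in sketched for q in sketched if p < q}
    return [name for name, scene in database.items()
            if all(rect_relation(scene[p], scene[q]) == rel
                   for (p, q), rel in wanted.items())]

sketched = {"lake": (0, 0, 4, 4), "pier": (1, 1, 2, 2)}   # pier inside lake
database = {
    "scene1": {"lake": (10, 10, 20, 20), "pier": (12, 12, 14, 14)},
    "scene2": {"lake": (0, 0, 5, 5), "pier": (8, 8, 9, 9)},
}
print(query_by_sketch(sketched, database))  # ['scene1']
```

Because matching is purely qualitative, scene1 is retrieved even though its absolute coordinates differ entirely from the sketch, while scene2 is rejected because its pier is disjoint from its lake.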
While a sketch map is being drawn, errors rooted in human spatial cognition may occur. A ready example: distances along a route are judged longer when the route has many turns, landmarks, or intersections (Tversky, 2002). Directions get straightened in memory: when Parisians were asked to sketch maps of their city, the Seine was drawn as a curve, but straighter than it actually is (Milgram & Jodelet, 1976). Similarly, buildings and streets with different shapes are often simply depicted as schematic figures like blobs and lines. These errors are neither random nor due solely to ignorance; rather, they appear to be a consequence of ordinary perceptual and cognitive processes (Tversky, 2003). Therefore, when sketch map analysis is processed and its result represented formally, as in Egenhofer's analytical approach to Spatial-Query-by-Sketch, the resulting formalization must necessarily be wrong if it does not account for the fact that some spatial information is distorted or omitted by humans. Though Spatial-Query-by-Sketch overcomes the limitations of conventional spatial query languages by offering an alternative mode of interaction between users and data, the accuracy of its query results is therefore not always reliable. (...)
Cross-Paced Representation Learning with Partial Curricula for Sketch-based Image Retrieval
In this paper we address the problem of learning robust cross-domain
representations for sketch-based image retrieval (SBIR). While most SBIR
approaches focus on extracting low- and mid-level descriptors for direct
feature matching, recent works have shown the benefit of learning coupled
feature representations to describe data from two related sources. However,
cross-domain representation learning methods are typically cast into non-convex
minimization problems that are difficult to optimize, leading to unsatisfactory
performance. Inspired by self-paced learning, a learning methodology designed
to overcome convergence issues related to local optima by exploiting the
samples in a meaningful order (i.e. easy to hard), we introduce the cross-paced
partial curriculum learning (CPPCL) framework. Compared with existing
self-paced learning methods which only consider a single modality and cannot
deal with prior knowledge, CPPCL is specifically designed to assess the
learning pace by jointly handling data from dual sources and modality-specific
prior information provided in the form of partial curricula. Additionally,
thanks to the learned dictionaries, we demonstrate that the proposed CPPCL
embeds robust coupled representations for SBIR. Our approach is extensively
evaluated on four publicly available datasets (i.e. CUFS, Flickr15K, QueenMary
SBIR and TU-Berlin Extension datasets), showing superior performance over
competing SBIR methods.
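The self-paced principle this abstract builds on — exploit samples in a meaningful easy-to-hard order — can be sketched in a few lines: include only samples whose current loss falls below a pace threshold, then raise the threshold each round. This is a hypothetical toy of plain self-paced selection, not the CPPCL framework itself; all names are invented.

```python
import numpy as np

def self_paced_rounds(losses, start, step, rounds):
    """Self-paced sample selection: train on samples whose loss is below
    a pace threshold, raising the threshold each round (easy -> hard).
    Returns the selected sample indices for each round. In practice the
    losses would be recomputed after training on each round's subset."""
    selected = []
    pace = start
    for _ in range(rounds):
        selected.append(np.flatnonzero(losses < pace).tolist())
        pace += step   # loosen the curriculum: admit harder samples
    return selected

losses = np.array([0.1, 0.9, 0.4, 2.0, 0.2])
for round_idx, idx in enumerate(self_paced_rounds(losses, 0.3, 0.5, 3)):
    print(round_idx, idx)
```

A partial curriculum, in the paper's sense, would additionally inject prior knowledge by forcing some samples into (or out of) the early rounds regardless of their loss.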