The Role of Ethological Observation for Measuring Animal Reactions to Biotelemetry Devices
This paper presents a methodological approach used to assess the wearability of biotelemetry devices in animals. A detailed protocol to gather quantitative and qualitative ethological observations was adapted and tested in an experimental study of 13 cat participants wearing two different GPS devices. The aim was twofold: firstly, to ascertain the potential interference generated by the devices on the animal's body and behavior by quantifying and characterizing it; secondly, to identify device features potentially responsible for the influence registered, and establish design requirements. This research contributes towards the development of a framework for evaluating the design of wearer-centered biotelemetry interventions for animals, consistent with values advocated by Animal-Computer Interaction researchers.
Escape distance in ground-nesting birds differs with individual level of camouflage
This is the author accepted manuscript. The final version is available from University of Chicago Press via the DOI in this record.
Camouflage is one of the most widespread anti-predator strategies in the animal kingdom, yet no animal can match its background perfectly in a complex environment. Therefore, selection should favour individuals that use information on how effective their camouflage is in their
immediate habitat when responding to an approaching threat. In a field study of African ground-nesting birds (plovers, coursers, and nightjars), we tested the hypothesis that individuals adaptively modulate their escape behaviour in relation to their degree of background matching. We used digital imaging and models of predator vision to quantify differences in color, luminance, and pattern between eggs and their background, as well as the plumage of incubating adult nightjars. We found that plovers and coursers showed greater
escape distances when their eggs were a poorer pattern match to the background. Nightjars sit on their eggs until a potential threat is nearby, and correspondingly they showed greater escape distances when the pattern and color match of the incubating adult's plumage, rather than its eggs, was a poorer match to the background. Finally, escape distances were shorter in the middle of the day, suggesting that escape behaviour is mediated by both camouflage and thermoregulation.
In Zambia we thank the Bruce-Miller, Duckett and Nicolle families, Collins Moya and numerous other nest-finding assistants and land-owners, Lackson Chama, and the Zambia Wildlife Authority. We also thank Tony Fulford and are grateful for the helpful comments provided by Tim Caro, Innes Cuthill, Daniel Osorio, and two anonymous referees. J.T., J.W-A. and M.S. were funded by a Biotechnology and Biological Sciences Research Council (BBSRC) grant BB/J018309/1 to M.S., and a BBSRC David Phillips Research Fellowship (BB/G022887/1) to M.S., and C.N.S was funded by a Royal Society Dorothy Hodgkin
Fellowship, a BBSRC David Phillips Fellowship (BB/J014109/1) and the DST-NRF Centre of Excellence at the Percy FitzPatrick Institute
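The study quantifies background matching as differences in color, luminance, and pattern between an egg (or adult plumage) and its surroundings, as seen by a predator's visual system. A minimal sketch of one such measure, assuming stimuli are represented as hypothetical cone-catch vectors from a predator vision model (the actual metrics and vision models used in the paper are more elaborate):

```python
import math

def camouflage_mismatch(stimulus, background):
    """Euclidean distance between two stimuli in an n-dimensional
    receptor space (e.g., cone-catch values from a predator vision
    model). Larger values mean a poorer background match."""
    return math.sqrt(sum((s - b) ** 2 for s, b in zip(stimulus, background)))

# Hypothetical cone-catch values (long, medium, short wavelengths)
egg = (0.62, 0.55, 0.30)
sand = (0.60, 0.52, 0.28)
leaf_litter = (0.35, 0.48, 0.15)

print(camouflage_mismatch(egg, sand))         # well matched: small distance
print(camouflage_mismatch(egg, leaf_litter))  # poorly matched: large distance
```

Under the paper's hypothesis, escape distance would then be predicted to increase with this mismatch score.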
Animal coloration patterns: linking spatial vision to quantitative analysis
Animal coloration patterns, from zebra stripes to bird egg speckles, are remarkably varied. With research on the perception, function, and evolution of animal patterns growing rapidly, we require a convenient framework for quantifying their diversity, particularly in the contexts of camouflage, mimicry, mate choice, and individual recognition. Ideally, patterns should be defined by their locations in a low-dimensional pattern space that represents their appearance to their natural receivers, much as color is represented by color spaces. This synthesis explores the extent to which animal patterns, like colors, can be described by a few perceptual dimensions in a pattern space. We begin by reviewing biological spatial vision, focusing on early stages during which neurons act as spatial filters or detect simple features such as edges. We show how two methods from computational vision—spatial filtering and feature detection—offer qualitatively distinct measures of animal coloration patterns. Spatial filters provide a measure of the image statistics, captured by the spatial frequency power spectrum. Image statistics give a robust but incomplete representation of the appearance of patterns, whereas feature detectors are essential for sensing and recognizing physical objects, such as distinctive markings and animal bodies. Finally, we discuss how pattern space analyses can lead to new insights into signal design and macroevolution of animal phenotypes. Overall, pattern spaces open up new possibilities for exploring how receiver vision may shape the evolution of animal pattern signals
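The spatial-filtering approach described above summarizes a pattern by its spatial frequency power spectrum. A minimal sketch of this idea, assuming a grayscale image as a NumPy array (bin counts and normalization are illustrative choices, not the paper's method):

```python
import numpy as np

def radial_power_spectrum(image, n_bins=8):
    """Summarize a grayscale pattern by its spatial-frequency power
    spectrum: 2D FFT -> power -> mean power within radial frequency
    bands. This captures image statistics (coarse vs fine markings)
    but not the locations of individual features."""
    f = np.fft.fftshift(np.fft.fft2(image))
    power = np.abs(f) ** 2
    h, w = image.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h / 2, x - w / 2)        # radial frequency per pixel
    bins = np.linspace(0, r.max() + 1e-9, n_bins + 1)
    return np.array([power[(r >= lo) & (r < hi)].mean()
                     for lo, hi in zip(bins[:-1], bins[1:])])

# Coarse vs fine stripes concentrate power in different bands
xx = np.arange(64)
coarse = np.tile(np.sin(2 * np.pi * xx / 32), (64, 1))
fine = np.tile(np.sin(2 * np.pi * xx / 4), (64, 1))
print(radial_power_spectrum(coarse).argmax())  # low-frequency band
print(radial_power_spectrum(fine).argmax())    # higher-frequency band
```

As the review notes, such statistics give a robust but incomplete description: two patterns with identical spectra can still differ in the distinctive features a receiver detects.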
Generic 3D Representation via Pose Estimation and Matching
Though a large body of computer vision research has investigated developing
generic semantic representations, efforts towards developing a similar
representation for 3D has been limited. In this paper, we learn a generic 3D
representation through solving a set of foundational proxy 3D tasks:
object-centric camera pose estimation and wide baseline feature matching. Our
method is based upon the premise that by providing supervision over a set of
carefully selected foundational tasks, generalization to novel tasks and
abstraction capabilities can be achieved. We empirically show that the internal
representation of a multi-task ConvNet trained to solve the above core problems
generalizes to novel 3D tasks (e.g., scene layout estimation, object pose
estimation, surface normal estimation) without the need for fine-tuning and
shows traits of abstraction abilities (e.g., cross-modality pose estimation).
In the context of the core supervised tasks, we demonstrate our representation
achieves state-of-the-art wide baseline feature matching results without
requiring a priori rectification (unlike SIFT and the majority of learned
features). We also show 6DOF camera pose estimation given a pair of local image
patches. The accuracy on both supervised tasks is comparable to that of humans.
Finally, we contribute a large-scale dataset composed of object-centric street
view scenes along with point correspondences and camera pose information, and
conclude with a discussion on the learned representation and open research
questions.
Comment: Published in ECCV16. See the project website
http://3drepresentation.stanford.edu/ and dataset website
https://github.com/amir32002/3D_Street_Vie
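The object-centric camera pose task above regresses the relative 6DOF transform between the two cameras that captured a patch pair. A minimal sketch of that ground-truth quantity, assuming 4x4 homogeneous pose matrices (the learned predictor itself is omitted; the yaw-only rotation helper is an illustrative simplification):

```python
import numpy as np

def relative_pose(T_a, T_b):
    """Relative 6DOF transform taking camera A's frame to camera B's
    frame, given their 4x4 world-from-camera pose matrices. This is
    the target a pose-estimation network regresses from a patch pair."""
    return np.linalg.inv(T_a) @ T_b

def pose_to_matrix(yaw, t):
    """Build a 4x4 pose from a yaw angle (radians) and a 3-vector translation."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    T[:3, 3] = t
    return T

T_a = pose_to_matrix(0.0, [0, 0, 0])
T_b = pose_to_matrix(np.pi / 2, [1, 0, 0])
rel = relative_pose(T_a, T_b)  # supervision signal for this camera pair
```

The dataset contributed by the paper pairs such pose information with point correspondences across street-view scenes.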
Optimized Custom Dataset for Efficient Detection of Underwater Trash
Accurately quantifying and removing submerged underwater waste plays a
crucial role in safeguarding marine life and preserving the environment. While
detecting floating and surface debris is relatively straightforward,
quantifying submerged waste presents significant challenges due to factors like
light refraction, absorption, suspended particles, and color distortion. This
paper addresses these challenges by proposing the development of a custom
dataset and an efficient detection approach for submerged marine debris. The
dataset encompasses diverse underwater environments and incorporates
annotations for precise labeling of debris instances. Ultimately, the primary
objective of this custom dataset is to enhance the diversity of litter
instances and improve their detection accuracy in deep submerged environments
by leveraging state-of-the-art deep learning architectures
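Precise per-instance labeling of debris, as described above, is commonly expressed in a COCO-style annotation structure consumed by modern detection architectures. A minimal sketch of one such record; the file names, category names, and coordinates are hypothetical, not taken from the paper's dataset:

```python
# One image, two debris categories, one annotated instance.
dataset = {
    "images": [{"id": 1, "file_name": "dive_0001.jpg",
                "width": 1280, "height": 720}],
    "categories": [{"id": 1, "name": "plastic"},
                   {"id": 2, "name": "metal"}],
    "annotations": [{"id": 1, "image_id": 1, "category_id": 1,
                     "bbox": [412, 310, 96, 64],  # x, y, width, height
                     "area": 96 * 64, "iscrowd": 0}],
}
print(len(dataset["annotations"]))  # one labeled debris instance
```

Keeping annotations in a standard format like this lets the same labels drive training across different state-of-the-art detectors without conversion.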
A Case Study on Effectively Identifying Technical Debt
Context: The technical debt (TD) concept describes a tradeoff between short-term and long-term goals in software development. While it is highly useful as a metaphor, it has utility beyond the facilitation of discussion, to inspire a useful set of methods and tools that support the identification, measurement, monitoring, management, and payment of TD. Objective: This study focuses on the identification of TD. We evaluate human elicitation of TD and compare it to automated identification. Method: We asked a development team to identify TD items in artifacts from a software project on which they were working. We provided the participants with a TD template and a short questionnaire. In addition, we also collected the output of three tools to automatically identify TD and compared it to the results of human elicitation. Results: There is little overlap between the TD reported by different developers, so aggregation, rather than consensus, is an appropriate way to combine TD reported by multiple developers. The tools used are especially useful for identifying defect debt but cannot help in identifying many other types of debt, so involving humans in the identification process is necessary. Conclusion: We have conducted a case study that focuses on the practical identification of TD, one area that could be facilitated by tools and techniques. It contributes to the TD landscape, which depicts an understanding of relationships between different types of debt and how they are best discovered
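The study's conclusion that aggregation, rather than consensus, is the right way to combine TD reported by multiple developers can be sketched as a set operation over per-developer reports (the item names below are hypothetical examples, not the study's data):

```python
def aggregate_td(reports):
    """Union of TD items across developers (aggregation)."""
    return set().union(*reports)

def consensus_td(reports):
    """Only items every developer independently reported (consensus)."""
    return set.intersection(*map(set, reports))

# Hypothetical per-developer TD reports
reports = [{"god class in Parser", "missing tests for API"},
           {"missing tests for API", "hardcoded DB credentials"},
           {"outdated dependency"}]
print(len(aggregate_td(reports)))  # 4 distinct items survive aggregation
print(len(consensus_td(reports)))  # 0: little overlap between developers
```

With little overlap between reporters, consensus discards nearly everything, which is why the study favors the union.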
Quantifying Aphantasia through drawing: Those without visual imagery show deficits in object but not spatial memory
Congenital aphantasia is a recently characterized variation of experience defined by the inability to form voluntary visual imagery, in individuals who are otherwise high performing. Because of this specific deficit to visual imagery, individuals with aphantasia serve as an ideal group for probing the nature of representations in visual memory, particularly the interplay of object, spatial, and symbolic information. Here, we conducted a large-scale online study of aphantasia and revealed a dissociation in object and spatial content in their memory representations. Sixty-one individuals with aphantasia and matched controls with typical imagery studied real-world scene images, and were asked to draw them from memory, and then later copy them during a matched perceptual condition. Drawings were objectively quantified by 2,795 online scorers for object and spatial details. Aphantasic participants recalled significantly fewer objects than controls, with less color in their drawings, and an increased reliance on verbal scaffolding. However, aphantasic participants showed high spatial accuracy equivalent to controls, and made significantly fewer memory errors. These differences between groups only manifested during recall, with no differences between groups during the matched perceptual condition. This object-specific memory impairment in individuals with aphantasia provides evidence for separate systems in memory that support object versus spatial information. The study also provides an important experimental validation for the existence of aphantasia as a variation in human imagery experience
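The dissociation reported above rests on scoring each drawing separately for object content and spatial accuracy. A minimal sketch of that two-axis scoring, assuming drawings are reduced to object names with (x, y) positions (the actual study used 2,795 human scorers and richer criteria):

```python
import math

def score_drawing(drawn, truth):
    """Score a memory drawing against the studied scene. `drawn` and
    `truth` map object names to (x, y) positions. Returns object
    recall (fraction of scene objects drawn) and mean spatial error
    over the recalled objects (None if nothing was recalled)."""
    recalled = set(drawn) & set(truth)
    recall = len(recalled) / len(truth)
    if not recalled:
        return recall, None
    err = sum(math.dist(drawn[k], truth[k]) for k in recalled) / len(recalled)
    return recall, err

# Hypothetical scene and drawing: few objects, but placed accurately,
# the profile the study reports for aphantasic participants.
truth = {"chair": (10, 40), "lamp": (80, 15), "rug": (50, 60), "clock": (70, 5)}
drawn = {"chair": (12, 41), "rug": (49, 58)}
print(score_drawing(drawn, truth))
```

Low object recall combined with low spatial error is exactly the signature that points to separate object and spatial memory systems.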