387 research outputs found

    Continuous, Evolutionary and Large-Scale: A New Perspective for Automated Mobile App Testing

    Full text link
    Mobile app development involves a unique set of challenges, including device fragmentation and rapidly evolving platforms, making testing a difficult task. The design space for a comprehensive mobile testing strategy includes features, inputs, potential contextual app states, and large combinations of devices and underlying platforms. Therefore, automated testing is an essential activity of the development process. However, the current state of the art in automated testing tools for mobile apps has limitations that have driven a preference for manual testing in practice. As of today, there is no comprehensive automated solution for mobile testing that overcomes fundamental issues such as automated oracles, history awareness in test cases, or automated evolution of test cases. In this perspective paper we survey the current state of the art in terms of the frameworks, tools, and services available to developers to aid in mobile testing, highlighting present shortcomings. Next, we provide commentary on the key challenges that currently restrict the possibility of a comprehensive, effective, and practical automated testing solution. Finally, we offer our vision of a comprehensive mobile app testing framework, complete with a research agenda, succinctly summarized along three principles: Continuous, Evolutionary and Large-scale (CEL). Comment: 12 pages, accepted to the Proceedings of the 33rd IEEE International Conference on Software Maintenance and Evolution (ICSME'17).

    Image-based Communication on Social Coding Platforms

    Full text link
    Visual content in the form of images and videos has taken over general-purpose social networks in a variety of ways, streamlining and enriching online communications. We are interested in understanding whether, and to what extent, the use of images is popular and helpful on social coding platforms. We mined nine years of data from two popular software developers' platforms: the Mozilla issue tracking system, i.e., Bugzilla, and the most well-known platform for developers' Q&A, i.e., Stack Overflow. We further triangulated and extended our mining results by performing a survey with 168 software developers. We observed that, between 2013 and 2022, the number of posts containing image data on Bugzilla and Stack Overflow doubled. Furthermore, we found that sharing images makes other developers engage more and faster with the content. In the majority of cases in which an image is included in a developer's post, the information in that image is complementary to the text provided. Finally, our results showed that when an image is shared, understanding the content without the information in the image is unlikely in 86.9% of the cases. Based on these observations, we discuss the importance of considering visual content when analyzing developers and designing automation tools.

    Capturing perceived everyday lived landscapes through gamification and active crowdsourcing

    Get PDF
    Summary: Landscapes are distinguishable areas of the earth with distinct characters comprising tangible and intangible dimensions and entities. Interactions between humans and landscapes influence social, physical and mental well-being as well as guide behaviour. Understanding how landscapes are perceived has thus gained traction in sustainable and inclusive policy- and decision-making processes, and public participation is called for. The recognised importance of understanding landscapes from an experiential and perceptual perspective and of incorporating public participation in data generation efforts is reflected in overarching conventions, policy guidelines and frameworks including the European Landscape Convention (ELC), the Millennium Ecosystem Assessment (MEA), Nature's Contributions to People (NCP) and the Landscape Character Assessment (LCA) framework. Major challenges for these conventions and frameworks are 1) how to collect data on landscape experiences and perceptions from a diverse group of individuals, 2) how to integrate and link physical entities, sensory experiences and intangible dimensions of landscapes and 3) how to identify other potential sources of landscape-relevant information. The abundance of storage space and the accessibility of broadband internet have led to a burgeoning of user-generated natural language content. In parallel, various paradigms for exploiting ubiquitous internet access for research purposes have emerged, including crowdsourcing, citizen science, volunteered geographic information and public participation geographic information systems. These low-cost approaches have shown great potential in generating large amounts of data; however, they struggle with motivating and retaining participants. Gamification - broadly defined as adding entertaining or playful elements to applications or processes - has been found to increase user motivation and has explicitly been called for in landscape perception and preference research to diversify participant demographics. Meanwhile, natural language has been found to be deeply intertwined with thought and emotion and has been identified as a rich source of semantic data on how landscapes are perceived and experienced. Written texts and the ways in which they can be analysed have gained particular interest. Therefore, the overall goal of this thesis is to develop and implement a gamified crowdsourcing application to collect natural language landscape descriptions and to analyse and explore the contributions in terms of how landscapes are perceived through sensory experiences and how additional landscape-relevant natural language can be identified. To approach this goal, I first elicit key data and feature requirements for collecting landscape-relevant information from a heterogeneous audience. Guided by the identified requirements, I develop and implement Window Expeditions, a gamified active crowdsourcing platform geared towards collecting natural language descriptions of everyday lived landscapes. The generated corpus of natural language is explored using computational methods, and I present and discuss the results in light of who the contributors are, the locations from which participants contribute and salient terms found in English and German. In a further step I annotate a subset of English contributions according to the contained biophysical elements, sensory experiences and cultural ecosystem (dis)services and explore how these are linked.
Finally, I present a novel approach that uses a curated, high-quality, landscape-specific dataset to computationally identify similar documents in other corpora using sentence-transformers. Using the Mechanics, Dynamics and Aesthetics (MDA) framework, the aesthetics of discovery, expression and fellowship were identified as most fitting for an active crowdsourcing platform. In addition, four groups of main dynamics were found, namely general dynamics of user interactions, contribution dynamics, exploration dynamics and moderation dynamics. The application was gamified by introducing points and leaderboards, and the platform was implemented in German and English (with French added at a later point) to collect landscape descriptions in multiple languages. Demographic information was collected about the users, including their year of birth, their gender, whether they were at home whilst contributing and which languages they believed themselves to be fluent in. Contributors reporting not being at home (n = 172) were more likely to contribute from areas of herbaceous vegetation. Terms describing salient elements of everyday lived environments such as "tree", "house", "garden" and "street", as well as weather-related phenomena and colours, were found frequently in both English and German contributions in the generated corpus. Further, terms related to space, time and people were found significantly more frequently in the generated corpus compared to general natural language and representative landscape image descriptions, highlighting the importance of spatial features as well as people and the times at which these were observed. Notably, descriptions referring to trees and birds were frequently found in the contributed texts, underlining their saliency in everyday lived landscapes. The results show biophysical terms related to vegetation (n = 556) and the built environment (n = 468), as well as weather-related terms (n = 452), to be most prominent. Further, contributions referencing visual (n = 186) and auditory (n = 96) sensory experiences were found most often, with positive sensory experiences being most common (n = 168), followed by neutral (n = 86) and negative (n = 68). With regard to the intangible dimensions captured in the contributed landscape descriptions, recreation (n = 68) was found most often, followed by heritage (n = 36), identity (n = 26) and tranquillity (n = 23). Through linking biophysical elements, sensory experiences and cultural ecosystem (dis)services, the results show that the biophysical category of animals appears often with the sensory experience of smell/taste, and the biophysical category of moving objects appears more than expected with the sensory experience of sound.
Further, the results show that the cultural ecosystem service of inspiration often appears with the biophysical category of natural features, and tranquillity with weather. Using a curated subcorpus of English natural language landscape descriptions (n = 428) collected with Window Expeditions, similar documents in other collections were identified. Through translating documents into vectors by means of sentence-transformers and calculating cosine similarity scores, a total of 6075 to 8172 documents were identified as similar to contributions to Window Expeditions, depending on whether the initial dataset was prefiltered for biophysical noun lemmas (a list of biophysical landscape elements derived from the Window Expeditions corpus) and Craik's list adjectives (a list of common adjectives used to describe landscapes). Latent Dirichlet allocation topic modelling, a clustering approach commonly used to identify overarching topics or themes in collections of natural language, shows four distinct clusters in both Window Expeditions and the corpus of identified similar documents, namely urban and residential, rural and natural, autumn and colours, and snow and weather. Overall, the results presented in this thesis provide further evidence that natural language is a rich source of landscape-specific information, capturing underlying semantics of a multitude of referenced landscape dimensions. In particular, this thesis demonstrates that computationally aided approaches to analysing and exploring landscape-relevant textual data can give detailed insights into salient features of landscapes and how individuals perceive and experience them. Especially when complemented by human annotation, natural language landscape descriptions are a welcome source of data about a landscape's biophysical elements, individual sensory experiences in landscapes and the perceived cultural ecosystem (dis)services. The findings of this thesis are accompanied by various limitations, chief amongst which are the possibility for users to falsify their locations, the rather small amount of data collected through Window Expeditions and the Eurocentric definitions and approaches common in landscape perception research. The former two limitations can be addressed through further implementation iterations and promotional efforts, whereas the latter calls for further consideration of the socio-culturally induced construction of landscape perception research and a rethinking of holistic approaches, especially in multicultural participatory contexts. The work presented in this thesis shows great potential for complementing landscape perception research with gamified methods of data generation. Active crowdsourcing can be a cost-efficient and scalable approach to generating much-needed data from a diverse audience. Exploring landscape-relevant natural language with both quantitative and qualitative methods from various disciplines, including geographic information science, linguistics and machine learning, can lead to new insights into landscape perception, sensory landscape experiences and how these are expressed.
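
    The similarity-retrieval step described above can be illustrated with a minimal sentence-transformers sketch: curated landscape descriptions and candidate documents are encoded as vectors and compared by cosine similarity, keeping candidates whose best score exceeds a threshold. The model name, example texts and threshold below are assumptions for illustration, not the thesis' reported configuration.

```python
# Minimal sketch (assumed configuration): embed curated landscape descriptions and
# candidate documents, then keep candidates whose best cosine similarity to the
# curated corpus exceeds a threshold.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model, not the thesis' choice

# Placeholder stand-ins for Window Expeditions descriptions and an external corpus.
curated = [
    "A large oak tree stands in front of a row of red-brick houses.",
    "Light rain falls on the quiet street and a blackbird sings in the garden.",
]
candidates = [
    "The committee approved the quarterly budget without amendments.",
    "Snow covers the rooftops and the hedges along the lane are bare.",
]

curated_emb = model.encode(curated, convert_to_tensor=True)
candidate_emb = model.encode(candidates, convert_to_tensor=True)

# One row per candidate, one column per curated description.
scores = util.cos_sim(candidate_emb, curated_emb)

THRESHOLD = 0.5  # assumed cut-off; the reported document counts depend on this choice
for doc, row in zip(candidates, scores):
    best = row.max().item()
    if best >= THRESHOLD:
        print(f"similar ({best:.2f}): {doc}")
```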

    Soundscape Generation Using Web Audio Archives

    Get PDF
    The large and growing archives of audio content on the web have been transforming sound design practice. In this context, sampling -- a fundamental sound design tool -- has shifted from mechanical recording to the realms of copying and cutting on the computer. Effectively browsing these large archives and retrieving content has become a well-identified problem in Music Information Retrieval, namely through the adoption of audio content-based methodologies. Despite their robustness and effectiveness, current technological solutions rely mostly on (statistical) signal processing methods, whose terminology does not attain a level of user-centered explanatory adequacy. This dissertation advances a novel semantically-oriented strategy for browsing and retrieving audio content, in particular environmental sounds, from large web audio archives. Ultimately, we aim to streamline the retrieval of user-defined queries to foster a fluid generation of soundscapes. In our work, querying web audio archives is done by affective dimensions that relate to emotional states (e.g., low arousal and low valence) and semantic audio source descriptions (e.g., rain). To this end, we map human annotations of affective dimensions to spectral audio-content descriptors extracted from the signal content. Retrieving new sounds from web archives is then done by specifying a query which combines a point in a 2-dimensional affective plane and semantic tags. A prototype application, MScaper, implements the method in the Ableton Live environment.
    An evaluation of our research assesses the perceptual soundness of the spectral audio-content descriptors in capturing affective dimensions and the usability of MScaper. The results show that spectral audio features significantly capture affective dimensions and that MScaper was perceived by expert users as having excellent usability.
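
    As a rough illustration of the querying mechanism described above (a point in a 2-dimensional affective plane combined with semantic tags), the sketch below filters a toy archive of sounds, assumed to already carry arousal/valence coordinates derived from their spectral descriptors, by tag and ranks them by distance to the query point. All names and values are hypothetical and do not reflect MScaper's actual data model.

```python
# Hypothetical sketch of affective-plane plus semantic-tag querying; the data model
# and example values are assumptions, not MScaper's implementation.
from dataclasses import dataclass
from math import dist

@dataclass
class Sound:
    name: str
    arousal: float   # assumed range [-1, 1], predicted from spectral descriptors
    valence: float   # assumed range [-1, 1], predicted from spectral descriptors
    tags: set[str]

ARCHIVE = [
    Sound("gentle_rain.wav", arousal=-0.6, valence=-0.2, tags={"rain", "water"}),
    Sound("thunderstorm.wav", arousal=0.7, valence=-0.5, tags={"rain", "thunder"}),
    Sound("birdsong.wav", arousal=-0.3, valence=0.6, tags={"birds", "forest"}),
]

def query(archive, point, tags, k=2):
    """Return the k sounds closest to `point` in the affective plane
    whose tags intersect the requested semantic tags."""
    matches = [s for s in archive if s.tags & tags]
    return sorted(matches, key=lambda s: dist(point, (s.arousal, s.valence)))[:k]

# Example query: low arousal, low valence, tagged "rain".
for s in query(ARCHIVE, point=(-0.5, -0.3), tags={"rain"}):
    print(s.name)
```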

    Fluid Transformers and Creative Analogies: Exploring Large Language Models' Capacity for Augmenting Cross-Domain Analogical Creativity

    Full text link
    Cross-domain analogical reasoning is a core creative ability that can be challenging for humans. Recent work has shown some proofs of concept of Large Language Models' (LLMs) ability to generate cross-domain analogies. However, the reliability and potential usefulness of this capacity for augmenting human creative work has received little systematic exploration. In this paper, we systematically explore LLMs' capacity to augment cross-domain analogical reasoning. Across three studies, we found: 1) LLM-generated cross-domain analogies were frequently judged as helpful in the context of a problem reformulation task (median helpfulness rating of 4 out of 5), and frequently (~80% of cases) led to observable changes in problem formulations, and 2) there was an upper bound of 25% of outputs being rated as potentially harmful, with the majority due to potentially upsetting content rather than biased or toxic content. These results demonstrate the potential utility -- and risks -- of LLMs for augmenting cross-domain analogical creativity.

    On Supporting Android Software Developers And Testers

    Get PDF
    Users entrust mobile applications (apps) to help them with different tasks in their daily lives. However, for each app that helps to finish a given task, there is a plethora of other apps in popular marketplaces that offer similar or nearly identical functionality. This makes for a competitive market where users will tend to favor the highest quality apps in most cases. Given that users can easily get frustrated by apps which repeatedly exhibit bugs, failures, and crashes, it is imperative that developers promptly fix problems both before and after release. However, implementing and maintaining high quality apps is difficult due to unique problems and constraints associated with the mobile development process such as fragmentation, quick feature changes, and agile software development. This dissertation presents an empirical study, as well as several approaches for developers, testers and designers to overcome some of these challenges during the software development life cycle. More specifically, we first perform an in-depth analysis of developers' needs regarding automated testing techniques. This included surveying 102 contributors to open source Android projects about practices for testing their apps. The major findings from this survey illustrate that developers: (i) rely on usage models for designing app test cases, (ii) prefer expressive automatically generated test cases organized around use cases, (iii) prefer manual testing over automation due to reproducibility issues, and (iv) do not perceive code coverage as an important measure of test case quality. Based on the findings from the survey, this dissertation presents several approaches to support developers and testers of Android apps in their daily tasks. In particular, we present the first taxonomy of faults in Android apps. This taxonomy is derived from a manual analysis of 2,023 software artifacts extracted from six different sources (e.g., Stack Overflow and bug reports). The taxonomy is divided into 14 categories containing 262 specific types. Then, we derive 38 Android-specific mutation operators from the taxonomy. Additionally, we implement MDroid+, an infrastructure that automatically introduces mutations into Android apps. Third, we present V2S, a practical automation approach for crowdsourced videos of mobile apps. This solution automatically translates video recordings of mobile executions into replayable user scenarios. V2S uses computer vision and deep learning techniques to identify user interactions from video recordings that illustrate bugs or faulty behaviors in mobile apps. Last but not least, we present an approach that aims at supporting the maintenance process by facilitating the way users report bugs for Android apps. It comprises the interaction between an Android app and a web app that assist the reporter by automatically collecting relevant information.
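
    To make the notion of an Android-specific mutation operator concrete, the sketch below implements a hypothetical, illustrative operator that deletes declared permissions from AndroidManifest.xml, producing one mutant per permission. It is meant only to convey the style of taxonomy-derived, text-level operators and is not MDroid+'s actual implementation.

```python
# Illustrative, hypothetical mutation operator (not MDroid+'s actual code): delete
# one declared <uses-permission> element from AndroidManifest.xml per mutant.
import re
from pathlib import Path

PERMISSION_RE = re.compile(r'^\s*<uses-permission\b[^>]*/>\s*$', re.MULTILINE)

def mutate_manifest(manifest_path: str, out_dir: str) -> list[Path]:
    """Write one mutant manifest per removed permission and return the mutant paths."""
    source = Path(manifest_path).read_text(encoding="utf-8")
    out_paths = []
    for i, match in enumerate(PERMISSION_RE.finditer(source)):
        mutated = source[:match.start()] + source[match.end():]  # drop this permission
        out_path = Path(out_dir) / f"AndroidManifest_mutant_{i}.xml"
        out_path.write_text(mutated, encoding="utf-8")
        out_paths.append(out_path)
    return out_paths
```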

    StateLens: A Reverse Engineering Solution for Making Existing Dynamic Touchscreens Accessible

    Full text link
    Blind people frequently encounter inaccessible dynamic touchscreens in their everyday lives that are difficult, frustrating, and often impossible to use independently. Touchscreens are often the only way to control everything from coffee machines and payment terminals to subway ticket machines and in-flight entertainment systems. Interacting with dynamic touchscreens is difficult non-visually because the visual user interfaces change, interactions often occur over multiple different screens, and it is easy to accidentally trigger interface actions while exploring the screen. To solve these problems, we introduce StateLens - a three-part reverse engineering solution that makes existing dynamic touchscreens accessible. First, StateLens reverse engineers the underlying state diagrams of existing interfaces from point-of-view videos found online or taken by users, using a hybrid crowd-computer vision pipeline. Second, using the state diagrams, StateLens automatically generates conversational agents to guide blind users through specifying the tasks that the interface can perform, allowing the StateLens iOS application to provide interactive guidance and feedback so that blind users can access the interface. Finally, a set of 3D-printed accessories enable blind people to explore capacitive touchscreens without the risk of triggering accidental touches on the interface. Our technical evaluation shows that StateLens can accurately reconstruct interfaces from stationary, hand-held, and web videos; and a user study of the complete system demonstrates that StateLens successfully enables blind users to access otherwise inaccessible dynamic touchscreens. Comment: ACM UIST 2019.
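
    A minimal sketch of the kind of state diagram such a pipeline could recover is shown below: screens observed in videos become nodes, user actions become labelled transitions, and a breadth-first search suggests the next action towards a target screen. The class and method names are hypothetical and are not StateLens' actual API.

```python
# Hypothetical sketch of a recovered interface state diagram: nodes are screens,
# edges are observed (action -> next screen) transitions extracted from videos.
from collections import defaultdict

class InterfaceStateDiagram:
    def __init__(self):
        # state -> action label -> next state
        self.transitions = defaultdict(dict)

    def add_observation(self, screen: str, action: str, next_screen: str) -> None:
        """Record one transition observed in a video frame sequence."""
        self.transitions[screen][action] = next_screen

    def next_step(self, current: str, goal: str):
        """Breadth-first search; return the first action on a path toward `goal`."""
        frontier = [(current, None)]
        seen = {current}
        while frontier:
            state, first_action = frontier.pop(0)
            if state == goal:
                return first_action
            for action, nxt in self.transitions[state].items():
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, first_action or action))
        return None

# Example: guide a user from the home screen toward order confirmation.
diagram = InterfaceStateDiagram()
diagram.add_observation("home", "tap 'Order'", "menu")
diagram.add_observation("menu", "tap 'Coffee'", "payment")
diagram.add_observation("payment", "tap 'Confirm'", "done")
print(diagram.next_step("home", "done"))  # -> "tap 'Order'"
```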