Users, Queries, and Bad Abandonment in Web Search
After a user submits a query and receives a list of search results, the user may abandon the query without clicking on any of the results. A bad query abandonment occurs when a searcher abandons the SERP because they were dissatisfied with the quality of the search results, often prompting them to reformulate the query in the hope of receiving better results. Understanding when and why users abandon their queries under different levels of search quality brings us closer to an overall understanding of user behavior with search engines. In this thesis, we describe three user studies that investigate bad query abandonment.
First, we report on a study investigating the rate and time at which users abandon their queries at different levels of search quality. We had users search for answers to questions, but showed them manipulated SERPs containing a single relevant document placed at different ranks. We show that as the quality of the search results decreases, the probability of abandonment increases, and that users decide to abandon their queries quickly. Not all users behave the same way, however: there appear to be two types of users, with one group both more likely to abandon their queries and quicker to find answers than the other.
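As an illustration only (not the study's actual analysis code), the minimal sketch below estimates the abandonment rate as a function of the rank at which the single relevant document was placed; the `Session` structure and the example data are hypothetical.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Session:
    relevant_rank: int   # 1-based rank of the single relevant result on the SERP
    abandoned: bool      # True if the user left the SERP without clicking any result

def abandonment_rate_by_rank(sessions):
    """Estimate P(abandonment) conditioned on the rank of the relevant result."""
    counts = defaultdict(lambda: [0, 0])          # rank -> [abandoned, total]
    for s in sessions:
        counts[s.relevant_rank][1] += 1
        if s.abandoned:
            counts[s.relevant_rank][0] += 1
    return {rank: abandoned / total
            for rank, (abandoned, total) in sorted(counts.items())}

# Hypothetical data: abandonment grows as the relevant result moves down the list.
sessions = [Session(1, False), Session(1, False), Session(3, False),
            Session(3, True), Session(8, True), Session(8, True)]
print(abandonment_rate_by_rank(sessions))         # {1: 0.0, 3: 0.5, 8: 1.0}
```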
Second, we describe an eye-tracking experiment focused on understanding what drives users' willingness to examine SERPs and what motivates them to continue or discontinue their examination. Using eye-tracking data, we found that a user's decision to abandon a query is best explained by their examination pattern not including a relevant search result: if a user sees a relevant result, they are very likely to click it. However, users' examination of results differs and may be influenced by other factors. The key factors we found are the rank of the search results, the user type, and the query quality. For example, we show that regardless of where the relevant document is placed in the SERP, the type of query submitted affects examination; if a user enters an ambiguous query, they are likely to examine fewer results.
Third, we show how the nature of non-relevant material affects users' willingness to further explore a ranked list of search results. We constructed and showed participants manipulated SERPs containing different types of non-relevant documents. We found that users' examination of search results and time to query abandonment are influenced by the coherence and type of non-relevant documents included in the SERP. For SERPs that are coherent around off-topic results, users spend the least time before abandoning and are less likely to request to view more results. The time they spend increases as SERP quality improves, and users are more likely to request to view more results when the SERP contains diversified non-relevant results covering multiple subtopics.
Using a Dynamic Domain-Specific Modeling Language for the Model-Driven Development of Cross-Platform Mobile Applications
There has been a gradual but steady convergence of dynamic programming languages with modeling languages. One area that can benefit from this convergence is model-driven development (MDD), especially in the domain of mobile application development. By using a dynamic language to construct a domain-specific modeling language (DSML), it is possible to create models that are executable, exhibit flexible type checking, and present a smaller cognitive gap between business users, modelers, and developers than more traditional model-driven approaches.
Dynamic languages have found strong adoption by practitioners of Agile development processes. These processes often rely on developers to rapidly produce working code that meets business needs and to do so in an iterative and incremental way. Such methodologies tend to eschew "throwaway" artifacts and models as wasteful, except as a communication vehicle toward producing executable code. These needs are not readily supported by traditional heavyweight approaches to model-driven development, such as the Object Management Group's Model-Driven Architecture approach.
This research asks whether it is possible for a domain-specific modeling language written in a dynamic programming language to define a cross-platform model that can produce native code, and to do so such that developer productivity and code quality are at least as good as with hand-written code produced using native tools.
Using a prototype modeling tool, AXIOM (Agile eXecutable and Incremental Object-oriented Modeling), we examine this question through small- and mid-scale experiments and find that the AXIOM approach improved developer productivity by almost 400%, albeit only after some up-front investment. We also find that the generated code can be of equal if not better quality than the equivalent hand-written code. Finally, we find that there are significant challenges in the synthesis of a DSML that can be used to model applications across platforms as diverse as today's mobile operating systems, which point to intriguing avenues for further research.
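To convey the general idea of an executable DSML hosted in a dynamic language, the following sketch shows a hypothetical internal DSL in Python; it is not AXIOM's actual notation, and the widget classes and `generate` walker are assumptions made purely for illustration.

```python
# Hypothetical internal DSML sketch: a declarative screen model that a
# code generator could traverse to emit native mobile UI code.
class Widget:
    def __init__(self, **props):
        self.props = props
        self.children = []

    def add(self, child):
        self.children.append(child)
        return self

class Screen(Widget): pass
class Label(Widget): pass
class Button(Widget): pass

# The "model" is ordinary executable code, so it can be unit-tested,
# type-checked dynamically, and inspected at runtime.
login = Screen(title="Login").add(
    Label(text="Welcome")).add(
    Button(text="Sign in", on_tap="authenticate"))

def generate(widget, indent=0):
    """Walk the model and emit a platform-neutral description; a real
    generator would instead target Swift or Kotlin/Java templates."""
    pad = "  " * indent
    yield f"{pad}{type(widget).__name__} {widget.props}"
    for child in widget.children:
        yield from generate(child, indent + 1)

print("\n".join(generate(login)))
```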
Feature-based generation of pervasive systems architectures utilizing software product line concepts
As the need for pervasive systems continues to grow and to occupy a larger share of the computing discipline, software engineering approaches must evolve at a similar pace to facilitate the construction of such systems in an efficient manner. In this thesis, we provide a vision of a framework that helps in the construction of software product lines for pervasive systems by devising an approach to automatically generate architectures for this domain. Using this framework, designers of pervasive systems can select a set of desired system features, and the framework automatically generates architectures that support the presence of these features. Our approach does not compromise the quality of the architecture, which we verified by comparing the generated architectures to those designed manually by human architects. As an initial step, and in order to determine the most commonly required features of the most widely known pervasive systems, we surveyed more than fifty existing architectures for pervasive systems in various domains. We captured the most essential features along with the commonalities and variabilities between them. The features were categorized according to the domain and the environment that they target; those categories are: general pervasive systems, domain-specific, privacy, bridging, fault-tolerance, and context-awareness. We coupled the identified features with well-designed components, and connected the components based on the features initially selected by a system designer to generate an architecture. We evaluated our generated architectures against architectures designed by human architects. When metrics such as coupling, cohesion, complexity, reusability, adaptability, modularity, modifiability, packing density, and average interaction density were used to test our framework, our generated architectures were found to be comparable to, if not better than, the human-designed architectures.
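The sketch below illustrates the feature-to-component mapping idea in miniature; the feature names, components, and connector catalogue are hypothetical and do not reflect the framework's actual contents.

```python
# Hypothetical feature/component catalogue for a pervasive-systems product line.
FEATURE_COMPONENTS = {
    "context-awareness": ["ContextManager", "SensorAdapter"],
    "privacy": ["PolicyEngine", "Anonymizer"],
    "fault-tolerance": ["HealthMonitor", "ReplicaManager"],
}

CONNECTORS = {
    ("SensorAdapter", "ContextManager"): "event bus",
    ("ContextManager", "PolicyEngine"): "query interface",
    ("HealthMonitor", "ReplicaManager"): "heartbeat channel",
}

def generate_architecture(selected_features):
    """Collect the components implied by the selected features and keep
    only the connectors whose endpoints are both present."""
    components = {c for f in selected_features for c in FEATURE_COMPONENTS.get(f, [])}
    connectors = {pair: kind for pair, kind in CONNECTORS.items()
                  if pair[0] in components and pair[1] in components}
    return components, connectors

components, connectors = generate_architecture(["context-awareness", "privacy"])
print(sorted(components))
print(connectors)
```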
Enabling lock-free concurrent workers over temporal graphs composed of multiple time-series
Time series are commonly used to store temporal data, e.g., sensor measurements. However, when it comes to complex analytics and learning tasks, these measurements have to be combined with structural context data. Temporal graphs, connecting multiple time-series, have proven very suitable for organizing such data and ultimately empowering analytic algorithms. Computationally intensive tasks often need to be distributed and parallelized among different workers. For tasks that cannot be split into independent parts, several workers have to concurrently read and update these shared temporal graphs. This leads to inconsistency risks, especially in the case of frequent updates. Distributed locks can mitigate these risks but come with a very high performance cost. In this paper, we present a lock-free approach that allows temporal graphs to be modified concurrently. Our approach is based on a composition operator that performs online reconciliation of concurrent modifications of temporal graphs. We evaluate the efficiency and scalability of our approach compared to lock-based approaches.
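A minimal sketch of the reconciliation-by-composition idea follows; the merge rule (union of timestamps with a simple conflict preference) and the data shapes are assumptions made for illustration, not the paper's actual composition operator.

```python
# Two workers edit private copies of a node's time-series (timestamp -> value);
# their edits are afterwards composed against the shared base version
# instead of being serialized through a distributed lock.
def compose(base, left, right):
    """Merge two concurrently modified versions of a timestamp->value map."""
    merged = dict(base)
    for edited in (left, right):
        for ts, value in edited.items():
            if base.get(ts) != value:      # keep only genuine modifications
                merged[ts] = value          # later worker wins on a true conflict
    return merged

base  = {10: 1.0, 20: 2.0}
left  = {10: 1.0, 20: 2.5, 30: 3.0}   # worker A updated t=20 and appended t=30
right = {10: 0.5, 20: 2.0}            # worker B corrected t=10
print(compose(base, left, right))      # {10: 0.5, 20: 2.5, 30: 3.0}
```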
Air Force Institute of Technology Research Report 2009
This report summarizes the research activities of the Air Force Institute of Technology's Graduate School of Engineering and Management. It describes research interests and faculty expertise; lists student theses/dissertations; identifies research sponsors and contributions; and outlines the procedures for contacting the school. Included in the report are faculty publications, conference presentations, consultations, and funded research projects. Research was conducted in the areas of Aeronautical and Astronautical Engineering, Electrical Engineering and Electro-Optics, Computer Engineering and Computer Science, Systems and Engineering Management, Operational Sciences, Mathematics, Statistics, and Engineering Physics.
Evolutionary unit-testing of third party object-oriented Java software
Dissertation presented to the Universidad de Extremadura for the Diploma de Estudios Avanzados, supervised by Francisco Fernández de Vega.

Evolutionary Testing is an emerging methodology for automatically generating high-quality test data. The focus of this work is on presenting a search-based approach for the unit-testing of third-party object-oriented Java software.

Test cases are represented and evolved using the Strongly Typed Genetic Programming paradigm, which effectively mimics the inheritance and polymorphic properties of object-oriented programs and enables the maintenance of call dependences when applying tree construction, mutation, or crossover. Our strategy for evaluating the quality of test cases includes instrumenting the test object for basic block analysis and structural event dispatch, and executing the instrumented test object using the generated test cases as "inputs", in order to collect trace information and derive coverage metrics.

Static analysis, instrumentation, and execution tracing are performed solely on the basis of the high-level information extracted from the Java Bytecode of the test object. Given that the test object's source code is often unavailable, working at the Bytecode level broadens the scope of applicability of our approach; it can be used, for instance, to perform structural testing on third-party Java components.

Test objects are represented internally by weighted control-flow graphs; strategies involving dynamic weight reevaluation are introduced for favouring test cases that exercise problematic structures and difficult control-flow paths. The aim is to efficiently guide the search process towards achieving full structural coverage, which often involves promoting the definition of complex and intricate test cases that set up elaborate state scenarios.

The work performed so far has allowed us to develop a prototype test case generation tool, called eCrash. Experiments have been carried out and quality solutions have been found, demonstrating the pertinence of our methodology and encouraging further studies.
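The sketch below illustrates the weighted-coverage and dynamic weight-reevaluation idea in simplified form; the basic-block names, fitness function, and decay rule are illustrative assumptions, not eCrash's actual implementation.

```python
# Coverage-driven fitness over a weighted control-flow graph: rarely
# exercised nodes keep high weights, so test cases reaching them score better.
def fitness(executed_nodes, weights):
    """Reward test cases whose traces cover highly weighted (hard) nodes."""
    return sum(weights[n] for n in set(executed_nodes) if n in weights)

def reevaluate(weights, executed_nodes, decay=0.9):
    """Decay the weight of every node a trace has just covered, pushing the
    search toward the structures that remain uncovered."""
    return {n: (w * decay if n in executed_nodes else w) for n, w in weights.items()}

weights = {"B0": 1.0, "B1": 1.0, "B2": 1.0}   # basic blocks of the instrumented test object
trace = ["B0", "B1"]                           # blocks hit by one generated test case
print(fitness(trace, weights))                 # 2.0
weights = reevaluate(weights, trace)
print(weights)                                 # {'B0': 0.9, 'B1': 0.9, 'B2': 1.0}
```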
Technical Debt Decision-Making Framework
Software development companies strive to produce high-quality software. In commercial software development environments, due to resource and time constraints, software is often developed hastily, which gives rise to technical debt. Technical debt refers to the consequences of taking shortcuts when developing software. These consequences include making the system difficult to maintain and defect-prone. Technical debt can have financial consequences and impede feature enhancements. Identifying technical debt and deciding which debt to address is challenging given resource constraints. Project managers must decide which debt has the highest priority and is most critical to the project. This decision-making process is not standardized and sometimes differs from project to project. My research goal is to develop a framework that project managers can use in their decision-making process to prioritize technical debt based on its potential impact. To achieve this goal, we survey software practitioners, conduct literature reviews, and mine software repositories for historical data to build a framework that models the technical debt decision-making process and informs practitioners of the most critical debt items.
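As a purely illustrative sketch of impact-based prioritization, the snippet below scores hypothetical debt items; the attributes and weighting are assumptions for the example, not the framework's actual model.

```python
from dataclasses import dataclass

@dataclass
class DebtItem:
    name: str
    defect_proneness: float   # 0..1, e.g. from mined defect history of the affected files
    change_frequency: float   # 0..1, how often the affected code is touched
    fix_cost: float           # estimated repair effort in person-days

def priority(item, weights=(0.6, 0.4)):
    """Higher score = address sooner: estimated impact scaled down by repair cost."""
    impact = weights[0] * item.defect_proneness + weights[1] * item.change_frequency
    return impact / max(item.fix_cost, 0.1)

items = [DebtItem("duplicated payment logic", 0.8, 0.7, 3.0),
         DebtItem("missing tests in reporting", 0.4, 0.2, 1.0)]
for item in sorted(items, key=priority, reverse=True):
    print(f"{item.name}: {priority(item):.2f}")
```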