BlogForever D2.4: Weblog spider prototype and associated methodology
The purpose of this document is to present the evaluation of different solutions for capturing blogs and the methodology established, and to describe the blog spider prototype that was developed.
C to O-O Translation: Beyond the Easy Stuff
Can we reuse some of the huge code-base developed in C to take advantage of
modern programming language features such as type safety, object-orientation,
and contracts? This paper presents a source-to-source translation of C code
into Eiffel, a modern object-oriented programming language, and the supporting
tool C2Eif. The translation is completely automatic and supports the entire C
language (ANSI, as well as many GNU C Compiler extensions, through CIL) as used
in practice, including its usage of native system libraries and inlined
assembly code. Our experiments show that C2Eif can handle C applications and
libraries of significant size (such as vim and libgsl), as well as challenging
benchmarks such as the GCC torture tests. The produced Eiffel code is
functionally equivalent to the original C code, and takes advantage of some of
Eiffel's object-oriented features to produce safe and easy-to-debug
translations.
Putting the Semantics into Semantic Versioning
The long-standing aspiration for software reuse has made astonishing strides
in the past few years. Many modern software development ecosystems now come
with rich sets of publicly-available components contributed by the community.
Downstream developers can leverage these upstream components, boosting their
productivity.
However, components evolve at their own pace. This imposes obligations on and
yields benefits for downstream developers, especially since changes can be
breaking, requiring additional downstream work to adapt to. Upgrading too late
leaves downstream vulnerable to security issues and missing out on useful
improvements; upgrading too early results in excess work. Semantic versioning
has been proposed as an elegant mechanism to communicate levels of
compatibility, enabling downstream developers to automate dependency upgrades.
While it is questionable whether a version number can adequately characterize
version compatibility in general, we argue that developers would greatly
benefit from tools such as semantic version calculators to help them upgrade
safely. The time is now for the research community to develop such tools: large
component ecosystems exist and are accessible, component interactions have
become observable through automated builds, and recent advances in program
analysis make the development of relevant tools feasible. In particular,
contracts (both traditional and lightweight) are a promising input to semantic
versioning calculators, which can suggest whether an upgrade is likely to be
safe.
Comment: to be published as Onward! Essays 202
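As a toy illustration (not from the essay), a semantic version calculator might start from the compatibility signal that version numbers alone declare, before folding in observed evidence such as passing downstream builds; the `Version` and `upgrade_kind` names below are invented for this sketch:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Version:
    major: int
    minor: int
    patch: int

    @classmethod
    def parse(cls, text: str) -> "Version":
        major, minor, patch = (int(p) for p in text.split("."))
        return cls(major, minor, patch)

def upgrade_kind(current: Version, candidate: Version) -> str:
    """Classify an upgrade using semantic-versioning conventions alone."""
    if candidate.major != current.major:
        return "breaking"   # API may have changed incompatibly
    if candidate.minor > current.minor:
        return "feature"    # additive changes, declared safe to adopt
    if candidate.patch > current.patch:
        return "patch"      # bug fixes only
    return "none"

print(upgrade_kind(Version.parse("1.4.2"), Version.parse("2.0.0")))  # breaking
```

A real calculator, as the essay argues, would not stop at the declared number: contracts and program analysis could check whether a "breaking" bump actually breaks this particular downstream client.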
The Effect of Credit Market Competition on Lending Relationships
This paper provides a simple model showing that the extent of competition in credit markets is important in determining the value of lending relationships. Creditors are more likely to finance credit constrained firms when credit markets are concentrated because it is easier for these creditors to internalize the benefits of assisting the firms. The model has implications about the availability and the price of credit as firms age in different markets. The paper offers evidence for these implications from small business data. It concludes with conjectures on the costs and benefits of liberalizing financial markets, as well as the timing of such reforms.
Open Data, Grey Data, and Stewardship: Universities at the Privacy Frontier
As universities recognize the inherent value in the data they collect and
hold, they encounter unforeseen challenges in stewarding those data in ways
that balance accountability, transparency, and protection of privacy, academic
freedom, and intellectual property. Two parallel developments in academic data
collection are converging: (1) open access requirements, whereby researchers
must provide access to their data as a condition of obtaining grant funding or
publishing results in journals; and (2) the vast accumulation of 'grey data'
about individuals in their daily activities of research, teaching, learning,
services, and administration. The boundaries between research and grey data are
blurring, making it more difficult to assess the risks and responsibilities
associated with any data collection. Many sets of data, both research and grey,
fall outside privacy regulations such as HIPAA, FERPA, and PII. Universities
are exploiting these data for research, learning analytics, faculty evaluation,
strategic decisions, and other sensitive matters. Commercial entities are
besieging universities with requests for access to data or for partnerships to
mine them. The privacy frontier facing research universities spans open access
practices, uses and misuses of data, public records requests, cyber risk, and
curating data for privacy protection. This paper explores the competing values
inherent in data stewardship and makes recommendations for practice, drawing on
the pioneering work of the University of California in privacy and information
security, data governance, and cyber risk.
Comment: Final published version, Sept 30, 201
A heuristic-based approach to code-smell detection
Encapsulation and data hiding are central tenets of the object-oriented paradigm. Deciding what data and behaviour to form into a class, and where to draw the line between its public and private details, can make the difference between a class that is an understandable, flexible and reusable abstraction and one which is not. This decision is difficult and may easily result in poor encapsulation, which can then have serious implications for a number of system qualities. It is often hard to identify such encapsulation problems within large software systems until they cause a maintenance problem (by which point it is usually too late), and attempting to perform such analysis manually is tedious and error prone. Two of the common encapsulation problems that can arise as a consequence of this decomposition process are data classes and god classes. Typically, these two problems occur together: data classes lack functionality that has typically been sucked into an over-complicated and domineering god class. This paper describes the architecture of a tool, developed as a plug-in for the Eclipse IDE, that automatically detects data classes and god classes. The technique has been evaluated in a controlled study on two large open source systems, comparing the tool's results to similar work by Marinescu, who employs a metrics-based approach to detecting such features. The study provides some valuable insights into the strengths and weaknesses of the two approaches.
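A minimal sketch of the kind of metric-threshold heuristic such a detector might apply per class; the metric names and thresholds here are illustrative assumptions, not those used by the paper's Eclipse plug-in:

```python
from dataclasses import dataclass

@dataclass
class ClassMetrics:
    name: str
    methods: int            # substantive public methods, excluding accessors
    accessors: int          # getters/setters
    foreign_data_uses: int  # accesses to other classes' data

def smells(m: ClassMetrics) -> list[str]:
    """Flag candidate data classes and god classes from simple metrics.
    Thresholds are invented for illustration only."""
    found = []
    if m.methods <= 1 and m.accessors >= 3:
        found.append("data class")  # mostly state, almost no behaviour
    if m.methods >= 20 and m.foreign_data_uses >= 10:
        found.append("god class")   # concentrates behaviour, reaches into others' data
    return found

print(smells(ClassMetrics("Order", methods=0, accessors=5, foreign_data_uses=0)))
# ['data class']
```

The interesting engineering questions, which the paper's evaluation against Marinescu's metrics-based approach probes, are how to choose such metrics and thresholds and how many false positives they admit.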
Kaskade: Graph Views for Efficient Graph Analytics
Graphs are an increasingly popular way to model real-world entities and
relationships between them, ranging from social networks to data lineage graphs
and biological datasets. Queries over these large graphs often involve
expensive subgraph traversals and complex analytical computations. These
real-world graphs are often substantially more structured than a generic
vertex-and-edge model would suggest, but this insight has remained mostly
unexplored by existing graph engines for graph query optimization purposes.
Therefore, in this work, we focus on leveraging structural properties of graphs
and queries to automatically derive materialized graph views that can
dramatically speed up query evaluation. We present KASKADE, the first graph
query optimization framework to exploit materialized graph views for query
optimization purposes. KASKADE employs a novel constraint-based view
enumeration technique that mines constraints from query workloads and graph
schemas, and injects them during view enumeration to significantly reduce the
search space of views to be considered. Moreover, it introduces a graph view
size estimator to pick the most beneficial views to materialize given a query
set and to select the best query evaluation plan given a set of materialized
views. We evaluate its performance over real-world graphs, including the
provenance graph that we maintain at Microsoft to enable auditing, service
analytics, and advanced system optimizations. Our results show that KASKADE
substantially reduces the effective graph size and yields significant
performance speedups (up to 50X), in some cases making otherwise intractable
queries possible.
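As a toy illustration of the materialized-view idea (not KASKADE's constraint-based view enumeration), one can precompute a view of two-hop reachability so that repeated two-hop queries are answered by lookup instead of traversal; the function name and graph are invented for this sketch:

```python
from collections import defaultdict

def two_hop_view(edges):
    """Materialize a 'view' mapping each vertex to the vertices reachable
    in exactly two hops, trading storage for traversal work at query time."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
    return {u: {w for v in adj[u] for w in adj.get(v, ())} for u in adj}

edges = [("a", "b"), ("b", "c"), ("b", "d"), ("a", "c")]
view = two_hop_view(edges)
print(sorted(view["a"]))  # ['c', 'd']
```

A system like KASKADE must additionally decide *which* such views pay off, which is where its view-size estimator and workload-mined constraints come in.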
Automatically generating complex test cases from simple ones
While source code expresses and implements design considerations for a software system, test cases capture and represent the domain knowledge of the software developer: her assumptions about the implicit and explicit interaction protocols in the system, and the expected behavior of the system's modules in normal and exceptional conditions. Moreover, test cases capture information about the environment and the data the system operates on. As such, together with the system source code, test cases integrate important system and domain knowledge. Besides being an important project artifact, test cases account for up to half of the overall software development cost and effort. Software projects produce many test cases of different kinds and granularity to thoroughly check the system functionality, aiming to prevent, detect, and remove different types of faults. Simple test cases exercise small parts of the system, aiming to detect faults in single modules. More complex integration and system test cases exercise larger parts of the system, aiming to detect problems in module interactions and to verify the functionality of the system as a whole. Not surprisingly, this complexity comes at a cost: developing complex test cases is a laborious and expensive task that is hard to automate. Our intuition is that important information naturally present in test cases can be reused to reduce the effort of generating new ones. This thesis develops this intuition and investigates the phenomenon of information reuse among test cases. We first empirically investigated many test cases from real software projects and demonstrated that test cases of different granularity indeed share code fragments and build upon each other. We then proposed an approach for automatically generating complex test cases by extracting and exploiting information in existing simple ones. In particular, our approach automatically generates integration test cases from unit ones.
We implemented our approach in a prototype to evaluate its ability to generate new and useful test cases for real software systems. Our studies show that test cases generated with our approach reveal new interaction faults even in well-tested applications. We evaluated the effectiveness of our approach by comparing it with state-of-the-art test generation techniques. The results show that our approach is effective: it finds relevant faults, while the other approaches tend to find different and usually less relevant ones.
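A minimal sketch of the underlying intuition, with invented names rather than the thesis's actual algorithm: treat each unit test as a reusable fragment of setup and calls, and chain fragments into candidate integration scenarios that exercise module interactions.

```python
import itertools

# Toy "unit test" fragments: each one sets up and exercises a single module.
def unit_fragments():
    return {
        "cart": [("Cart", "__init__"), ("Cart", "add")],
        "checkout": [("Checkout", "__init__"), ("Checkout", "pay")],
    }

def compose_integration_tests(fragments):
    """Chain unit-test fragments into candidate integration scenarios.
    A real generator would also carry over fixtures and test data, and
    prune sequences that violate interaction protocols."""
    scenarios = []
    for a, b in itertools.permutations(fragments, 2):
        scenarios.append(fragments[a] + fragments[b])
    return scenarios

tests = compose_integration_tests(unit_fragments())
print(len(tests))  # 2: one scenario per ordering of the two fragments
```

The hard part, which the thesis's empirical study of fragment sharing motivates, is selecting orderings and data flows that correspond to feasible, fault-revealing interactions rather than arbitrary concatenations.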