Reducing the Barrier to Entry of Complex Robotic Software: a MoveIt! Case Study
Developing robot agnostic software frameworks involves synthesizing the
disparate fields of robotic theory and software engineering while
simultaneously accounting for a large variability in hardware designs and
control paradigms. As the capabilities of robotic software frameworks increase,
the setup difficulty and learning curve for new users also increase. If the
entry barriers for configuring and using the software on robots are too high,
even the most powerful of frameworks are useless. A growing need exists in
robotic software engineering to aid users in getting started with, and
customizing, the software framework as necessary for particular robotic
applications. In this paper, a case study is presented of the best practices
found for lowering the barrier to entry in the MoveIt! framework, an
open-source tool for mobile manipulation in ROS, that allows users to 1)
quickly get basic motion planning functionality with minimal initial setup, 2)
automate its configuration and optimization, and 3) easily customize its
components. A graphical interface that assists the user in configuring MoveIt!
is the cornerstone of our approach, coupled with the use of an existing
standardized robot model for input, automatically generated robot-specific
configuration files, and a plugin-based architecture for extensibility. These
best practices are summarized into a set of barrier to entry design principles
applicable to other robotic software. The approaches for lowering the entry
barrier are evaluated through usage statistics and a user survey, and are
compared against our design objectives to assess their effectiveness for users.
Image Watermarking With Biometric Data For Copyright Protection
In this paper, we deal with the proof of ownership or legitimate usage of a
digital content, such as an image, in order to tackle illegitimate copying.
The proposed scheme, based on a combination of watermarking and cancelable
biometrics, does not require a trusted third party: all the exchanges are
between the provider and the customer. The use of cancelable biometrics makes
it possible to provide a privacy-compliant proof of identity. We illustrate
the robustness of this method against intentional and unintentional attacks
on the watermarked content.
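The combination of cancelable biometrics with robust watermarking is beyond a short sketch, but the basic idea of embedding identity-bearing bits into an image can be illustrated with a generic least-significant-bit (LSB) scheme. This is our illustrative assumption, not the paper's method, which uses robust watermarking rather than fragile LSB embedding:

```python
def embed_watermark(pixels, bits):
    """Embed watermark bits into the least significant bit of the
    first len(bits) pixels (a generic LSB sketch for illustration)."""
    out = list(pixels)
    for k, bit in enumerate(bits):
        # clear the LSB, then set it to the watermark bit
        out[k] = (out[k] & ~1) | bit
    return out

def extract_watermark(pixels, length):
    """Recover the embedded bits by reading the LSB of each pixel."""
    return [p & 1 for p in pixels[:length]]
```

Because only the least significant bit changes, each pixel value moves by at most 1, so the watermark is imperceptible but also fragile; robust schemes such as the one in the paper survive attacks that LSB embedding does not.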
Multi-cultural visualization: how functional programming can enrich visualization (and vice versa)
The past two decades have seen visualization flourish as a research field in its own right, with advances on computational challenges such as faster algorithms and new techniques for datasets too large for in-core processing, and advances in understanding the perceptual and cognitive processes recruited by visualization systems, and through this, how to improve the representation of data. However, progress within visualization has sometimes proceeded in parallel with that in other branches of computer science, and there is a danger that when novel solutions ossify into "accepted practice" the field can easily overlook significant advances elsewhere in the community. In this paper we describe recent advances in the design and implementation of pure functional programming languages that, significantly, contain important insights into questions raised by the recent NIH/NSF report on Visualization Challenges. We argue and demonstrate that modern functional languages combine high-level mathematically-based specifications of visualization techniques, concise implementation of algorithms through fine-grained composition, support for writing correct programs through strong type checking, and a different kind of modularity inherent in the abstractive power of these languages. And to cap it off, we have initial evidence that in some cases functional implementations are faster than their imperative counterparts.
Platforms and the Fall of the Fourth Estate: Looking Beyond the First Amendment to Protect Watchdog Journalism
Journalists see the First Amendment as an amulet, and with good reason. It has long protected the Fourth Estate—an independent institutional press—in its exercise of editorial discretion to check government power. This protection helped the Fourth Estate flourish in the second half of the twentieth century and ably perform its constitutional watchdog role.
But in the last two decades, the media ecology has changed. The Fourth Estate has been subsumed by a Networked Press in which journalists are joined by engineers, algorithms, audience, and other human and non-human actors in creating and distributing news. The Networked Press’s most powerful members are platforms. These platforms—companies like Facebook, Google, and Twitter—shun the media label even as they function as information gatekeepers and news editors. Their norms and values, including personalization and speed, stymie watchdog reporting.
The Networked Press regime significantly threatens watchdog journalism, speech that is at the core of the press’s constitutional role. Yet, limited by the state action doctrine, the First Amendment cannot shield this speech from a threat by private actors like platforms. Today, the First Amendment is insufficient to protect a free press that can serve as a check on government tyranny.
This article argues that we must look beyond the First Amendment to protect watchdog journalism from the corrosive power of platforms. It describes the limits of the First Amendment and precisely how platforms threaten watchdog journalism. It also proposes a menu of extra-constitutional options for bolstering this essential brand of speech.
RRR: Rank-Regret Representative
Selecting the best items in a dataset is a common task in data exploration.
However, the concept of "best" lies in the eyes of the beholder: different
users may consider different attributes more important, and hence arrive at
different rankings. Nevertheless, one can remove "dominated" items and create a
"representative" subset of the data set, comprising the "best items" in it. A
Pareto-optimal representative is guaranteed to contain the best item of each
possible ranking, but it can be almost as big as the full data set. A smaller
representative can be found if we relax the requirement to include the best
item for every possible user, and instead just limit the users' "regret". Existing work
defines regret as the loss in score by limiting consideration to the
representative instead of the full data set, for any chosen ranking function.
However, the score is often not a meaningful number and users may not
understand its absolute value. Sometimes small ranges in score can include
large fractions of the data set. In contrast, users do understand the notion of
rank ordering. Therefore, alternatively, we consider the position of the items
in the ranked list for defining the regret and propose the {\em rank-regret
representative} as the minimal subset of the data containing at least one of
the top-k of any possible ranking function. This problem is NP-complete. We
use the geometric interpretation of items to bound their ranks on ranges of
functions and to utilize combinatorial geometry notions for developing
effective and efficient approximation algorithms for the problem. Experiments
on real datasets demonstrate that we can efficiently find small subsets with
small rank-regrets.
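The rank-regret of a candidate subset can be made concrete with a tiny brute-force sketch over sampled linear ranking functions. The 2-D data, the sampling of scoring functions, and all function names below are our illustration, not the paper's combinatorial-geometry algorithms:

```python
import itertools

def rankings(points, num_funcs=50):
    """Rankings induced by sampled linear scoring functions
    f(x) = w*x[0] + (1-w)*x[1] for w in [0, 1]."""
    out = []
    for i in range(num_funcs):
        w = i / (num_funcs - 1)
        scores = [w * x + (1 - w) * y for x, y in points]
        # item indices sorted by descending score = one ranked list
        out.append(sorted(range(len(points)), key=lambda j: -scores[j]))
    return out

def rank_regret(subset, ranked_lists):
    """Worst case, over the sampled functions, of the best (1-based)
    rank achieved by any member of `subset`."""
    worst = 0
    for order in ranked_lists:
        best = min(order.index(j) for j in subset) + 1
        worst = max(worst, best)
    return worst

def smallest_k_regret_subset(points, k):
    """Brute-force smallest subset whose rank-regret is at most k."""
    lists = rankings(points)
    n = len(points)
    for size in range(1, n + 1):
        for subset in itertools.combinations(range(n), size):
            if rank_regret(subset, lists) <= k:
                return set(subset)
    return set(range(n))
```

On four 2-D items, a single near-dominant point like (0.9, 0.9) suffices for k = 2, while k = 1 forces the subset to include the top item of every sampled function; the paper's contribution is doing this without enumerating subsets or sampling functions.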
Dialectic tensions in the financial markets: a longitudinal study of pre- and post-crisis regulatory technology
This article presents the findings from a longitudinal research study on regulatory technology in the UK
financial services industry. The financial crisis, together with serious corporate and mutual fund scandals,
raised the profile of compliance as governmental bodies and institutional and private investors introduced a
'tsunami' of financial regulations. Adopting a multi-level analysis, this study examines how regulatory
technology was used by financial firms to meet their compliance obligations, pre- and post-crisis. Empirical
data collected over 12 years cover the deployment of an investment management system in eight financial
firms. Interviews with public regulatory bodies, financial institutions and technology providers reveal a
culture of compliance with increased transparency, surveillance and accountability. Findings show that
dialectic tensions arise as the pursuit of transparency, surveillance and accountability in compliance
mandates is simultaneously rationalized, facilitated and obscured by regulatory technology. Responding to
these challenges, regulatory bodies continue to impose revised compliance mandates on financial firms,
forcing them to adapt their financial technologies in an ever-changing multi-jurisdictional regulatory
landscape.
A parallel Heap-Cell Method for Eikonal equations
Numerous applications of Eikonal equations prompted the development of many
efficient numerical algorithms. The Heap-Cell Method (HCM) is a recent serial
two-scale technique that has been shown to have advantages over other serial
state-of-the-art solvers for a wide range of problems. This paper presents a
parallelization of HCM for a shared memory architecture. The numerical
experiments show that the parallel HCM exhibits good algorithmic
behavior and scales well, resulting in a very fast and practical solver.
We further explore the influence on performance and scaling of data
precision, early termination criteria, and the hardware architecture. A shorter
version of this manuscript (omitting these more detailed tests) has been
submitted to SIAM Journal on Scientific Computing in 2012.
Comment: (a minor update to address the reviewers' comments) 31 pages; 15
figures; this is an expanded version of a paper accepted by SIAM Journal on
Scientific Computing.
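The two-scale Heap-Cell Method itself is beyond a short sketch, but the Eikonal update it builds on can be illustrated with a basic fast sweeping solver for |grad u| = 1 (constant speed) on a uniform grid. The solver choice, grid setup, and all names are our assumptions for illustration, not the paper's method:

```python
import math

def fast_sweep_eikonal(n, sources, h=1.0, sweeps=4):
    """Solve |grad u| = 1 on an n x n grid by Gauss-Seidel sweeps in
    the four alternating orderings (the fast sweeping method).
    `sources` are (i, j) cells where u = 0."""
    INF = float("inf")
    u = [[INF] * n for _ in range(n)]
    for i, j in sources:
        u[i][j] = 0.0

    def neighbor_min(i, j, axis):
        # smaller of the two upwind neighbors along one axis
        if axis == 0:
            lo = u[i - 1][j] if i > 0 else INF
            hi = u[i + 1][j] if i < n - 1 else INF
        else:
            lo = u[i][j - 1] if j > 0 else INF
            hi = u[i][j + 1] if j < n - 1 else INF
        return min(lo, hi)

    orders = [(range(n), range(n)),
              (range(n - 1, -1, -1), range(n)),
              (range(n), range(n - 1, -1, -1)),
              (range(n - 1, -1, -1), range(n - 1, -1, -1))]
    for _ in range(sweeps):
        for ri, rj in orders:
            for i in ri:
                for j in rj:
                    if u[i][j] == 0.0:
                        continue
                    a = neighbor_min(i, j, 0)
                    b = neighbor_min(i, j, 1)
                    # standard upwind update: one-sided if the two
                    # neighbor values differ by at least h, else the
                    # two-sided quadratic solution
                    if abs(a - b) >= h:
                        cand = min(a, b) + h
                    else:
                        cand = (a + b + math.sqrt(2 * h * h - (a - b) ** 2)) / 2
                    u[i][j] = min(u[i][j], cand)
    return u
```

HCM's contribution is a different ordering strategy: grouping cells and prioritizing them with a heap so that far less recomputation is needed than in plain sweeping.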
An Efficient Local Search for Partial Latin Square Extension Problem
A partial Latin square (PLS) is a partial assignment of n symbols to an n x n
grid such that, in each row and in each column, each symbol appears at most
once. The partial Latin square extension problem is an NP-hard problem that
asks for a largest extension of a given PLS. In this paper we propose an
efficient local search for this problem. We focus on the local search such that
the neighborhood is defined by (p,q)-swap, i.e., removing exactly p symbols and
then assigning symbols to at most q empty cells. For p in {1,2,3}, our
neighborhood search algorithm finds an improved solution or concludes that no
such solution exists in O(n^{p+1}) time. We also propose a novel swap
operation, Trellis-swap, which is a generalization of (1,q)-swap and
(2,q)-swap. Our Trellis-neighborhood search algorithm takes O(n^{3.5}) time to
do the same thing. Using these neighborhood search algorithms, we design a
prototype iterated local search algorithm and show its effectiveness in
comparison with state-of-the-art optimization solvers such as IBM ILOG CPLEX
and LocalSolver.
Comment: 17 pages, 2 figures.
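The flavor of the extension problem and of swap-based moves can be shown with a much simpler baseline: a feasibility check, a greedy extension, and a naive remove-one-and-refill move. This toy sketch is ours; the paper's O(n^{p+1}) neighborhood search and Trellis-swap are considerably more involved:

```python
def is_valid_pls(grid):
    """Check that each symbol appears at most once per row and per
    column. Empty cells are None; symbols are 0..n-1."""
    for line in list(grid) + [list(col) for col in zip(*grid)]:
        seen = [s for s in line if s is not None]
        if len(seen) != len(set(seen)):
            return False
    return True

def greedy_extend(grid):
    """Assign some feasible symbol to each empty cell in scan order
    (a baseline extension heuristic, not an optimal one)."""
    n = len(grid)
    grid = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            if grid[i][j] is None:
                used = set(grid[i]) | {grid[r][j] for r in range(n)}
                free = [s for s in range(n) if s not in used]
                if free:
                    grid[i][j] = free[0]
    return grid

def filled(grid):
    """Number of assigned cells (the objective to maximize)."""
    return sum(cell is not None for row in grid for cell in row)

def one_swap_improve(grid):
    """One pass of a naive swap move: remove a single symbol, refill
    greedily, and keep the change only if the extension grows."""
    base = filled(grid)
    n = len(grid)
    for i in range(n):
        for j in range(n):
            if grid[i][j] is not None:
                trial = [row[:] for row in grid]
                trial[i][j] = None
                trial = greedy_extend(trial)
                if filled(trial) > base:
                    return trial
    return grid
```

Greedy refilling after a removal is a crude stand-in for the paper's (p,q)-swap, which searches exactly for the q cells that can be filled after removing p symbols.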