LiveSketch: Query Perturbations for Guided Sketch-based Visual Search
LiveSketch is a novel algorithm for searching large image collections using
hand-sketched queries. LiveSketch tackles the inherent ambiguity of sketch
search by creating visual suggestions that augment the query as it is drawn,
making query specification an iterative rather than one-shot process that helps
disambiguate users' search intent. Our technical contributions are: a triplet
convnet architecture that incorporates an RNN based variational autoencoder to
search for images using vector (stroke-based) queries; real-time clustering to
identify likely search intents (and so, targets within the search embedding);
and the use of backpropagation from those targets to perturb the input stroke
sequence, so suggesting alterations to the query in order to guide the search.
We show improvements in accuracy and time-to-task over contemporary baselines
using a 67M image corpus. Comment: Accepted to CVPR 201
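The backpropagation-from-target idea can be illustrated with a toy example. The linear map `W`, the learning rate, and the step count below are hypothetical stand-ins for the paper's triplet convnet and training details; the sketch only shows how gradients flowing from a target embedding can perturb the query input itself:

```python
import numpy as np

# Toy "embedding network": a fixed linear map standing in for the
# triplet convnet. W is an invented stand-in, not the paper's model.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))          # maps 4-D stroke features -> 8-D embedding

def embed(x):
    return W @ x

def perturb_toward(x, target, lr=0.01, steps=500):
    """Gradient-descend the *input* so its embedding approaches `target`,
    mirroring how LiveSketch backpropagates from an inferred search
    target to suggest alterations to the query strokes."""
    for _ in range(steps):
        e = embed(x)
        # d/dx ||Wx - t||^2 = 2 W^T (Wx - t)
        grad = 2.0 * W.T @ (e - target)
        x = x - lr * grad
    return x

query = rng.standard_normal(4)           # stands in for an encoded stroke sequence
target = embed(rng.standard_normal(4))   # stands in for a search-intent cluster centre
perturbed = perturb_toward(query, target)
```

In the paper the "input" is a stroke sequence decoded by an RNN variational autoencoder, so the gradient step reshapes the drawn strokes rather than a raw feature vector; the optimisation structure is the same.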
The Web of Law
Scientists and mathematicians in recent years have become intensely interested in the structure of networks. Networks turn out to be crucial to understanding everything from physics and biology, to economics and sociology. This article proposes that the science of networks has important contributions to make to the study of law as well. Legal scholars have yet to study, or even recognize as such, one of the largest, most accessible, and best documented human-created networks in existence. This is the centuries-old network of case law and other legal authorities into which lawyers, judges, and legal scholars routinely delve in order to discover what the law is on any given topic. The network of American case law closely resembles the Web in structure. It has the peculiar mathematical and statistical properties that networks have. It can be studied using techniques that are now being used to describe many other networks, some found in nature, and others created by human action. Studying the legal network can shed light on how the legal system evolves, and many other questions. To initiate what I hope will become a fruitful new type of legal scholarship, I present in this article the preliminary results of a significant citation study of nearly four million American legal precedents, which was undertaken at my request by the LexisNexis corporation using their well-known Shepard's citation service. This study demonstrates that the American case law network has the overall structure that network theory predicts it would.
This article has three parts. First, I introduce some basic concepts of network science, including such important ideas as nodes, links, random graphs, evolving networks, scale-free networks, small worlds, the rich get richer dynamic, node fitness, and clusters. Oddly enough, the mathematical tools that have proven most useful for studying networks (or at least scale-free networks) come from statistical mechanics, a branch of physics. Having introduced network theory in Part I, and having presented evidence that American case law is a scale-free network in Part II, I argue for the significance of this discovery in Part III. I hope that by the time they reach Part III, readers will already be realizing the potential richness of applying network theory to legal systems. In Part III, I describe some insights that appear from this application and suggest areas for future research.
The most famous hypothesis about the structure of law is that it is a seamless web. This old phrase, however, is just a metaphor we have used to grope for a reality we have not been in a position to express more precisely. Network science changes that. The Web of Law can be considered as a mathematical object whose topology can be analyzed using the tools pioneered by physicists and others who wanted to explore the structure of the Web and other real networks. The Web of Law has a structure very similar to that of other real networks, such as the Web and the network of scientific papers. The Web of Law is in substantial part a scale-free network, organized with hub cases that have many citations and the vast majority of cases, which have very few. The distribution of citation frequency approximates a power-law distribution, as is common with real scale-free networks, with truncations at either extreme of its distribution, which is also common.
Many promising hypotheses can be generated by considering the law as a scale-free network. State and federal systems can be examined empirically to measure how well integrated each is with itself, and with each other, and how this is changing over time. Legal authorities can be measured to determine whether their authority is emerging or declining. Institutional bodies, such as courts, can be examined in the same way. Clusters of cases, which will reveal the semantic topology of law, can be mapped to determine whether traditional legal categories are accurate or require reform. These methods can be used to develop computer programs to improve the efficiency of searching electronic legal databases. The topology of American law can be compared to that of other legal systems to determine whether legal systems share universal architectural features, and in what respects different systems are unique. Changing dynamics of the citation frequency and the fitness of particular cases can be studied over historical periods to test historiographical hypotheses. So, for example, Farber's hypothesis that changes in constitutional interpretation occur suddenly, and many others, may be tested rigorously. The dynamics of authority in law generally can be studied much more rigorously. The mere fact that law is a scale-free, not a random, network suggests a high degree of intellectual coherence, contrary to what some critics have suggested. The shape of the degree distribution graph of the Web of Law, in its similarity to the scientific citation network, also suggests that cases age, in the sense of losing the ability to attract citations, over time, just as scientific papers do. Yet Supreme Court cases seem to age more slowly. How nodes age profoundly affects overall network structure and therefore affects the shape of the Web of Law.
Network theory hints at complex, but analyzable, interactions between the legal doctrines of precedent, and the systems of common law and multiple sovereignties.
Because law grows and because it has doctrines of authority, it creates a network of a certain shape, which spontaneously organizes itself. This is the product of laws that govern networks of computers as inexorably as they govern networks of cases, laws arising from the underlying mathematics of networks. Legal databases, which are huge, precisely documented, and readily accessible, present a perfect opportunity for the application of network science. This research would produce new knowledge of general jurisprudence that has simply been impossible until now, when we have the necessary advances in network science, the fast computers, and the existence of a complete record of the legal network in electronic form, waiting to be explored.
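The "rich get richer" growth dynamic that produces scale-free citation structure is easy to simulate. The node counts and citation rates below are arbitrary toy parameters, not drawn from the Shepard's study; the sketch only shows how preferential attachment concentrates citations in a few hub cases while most cases receive almost none:

```python
import random

def grow_citation_network(n_cases=2000, cites_per_case=3, seed=1):
    """Toy 'rich get richer' growth: each new case cites existing cases
    with probability proportional to the citations they have already
    received. A simplified stand-in for the preferential-attachment
    dynamic the article attributes to case law."""
    random.seed(seed)
    in_citations = [0] * cites_per_case       # a few seed cases
    targets = list(range(cites_per_case))     # sampling pool: one entry per citation received
    for new in range(cites_per_case, n_cases):
        cited = set()
        while len(cited) < cites_per_case:    # sampling from `targets` implements
            cited.add(random.choice(targets)) # preferential attachment
        for t in cited:
            in_citations[t] += 1
            targets.append(t)
        in_citations.append(0)
        targets.append(new)                   # new case gets a base chance of citation
    return in_citations

cites = grow_citation_network()
hub = max(cites)                              # a few hub cases dominate
typical = sorted(cites)[len(cites) // 2]      # the median case has very few citations
```

Plotting the resulting citation counts on log-log axes would show the approximately straight line characteristic of a power-law degree distribution, the signature the article reports for American case law.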
Object-aware Inversion and Reassembly for Image Editing
By comparing the original and target prompts in an editing task, we can obtain
numerous editing pairs, each comprising an object and its corresponding editing
target. To allow editability while maintaining fidelity to the input image,
existing editing methods typically involve a fixed number of inversion steps
that project the whole input image to its noisier latent representation,
followed by a denoising process guided by the target prompt. However, we find
that the optimal number of inversion steps for achieving ideal editing results
varies significantly among different editing pairs, owing to varying editing
difficulties. Therefore, the current literature, which relies on a fixed number
of inversion steps, produces sub-optimal generation quality, especially when
handling multiple editing pairs in a natural image. To this end, we propose a
new image editing paradigm, dubbed Object-aware Inversion and Reassembly (OIR),
to enable object-level fine-grained editing. Specifically, we design a new
search metric, which determines the optimal inversion steps for each editing
pair, by jointly considering the editability of the target and the fidelity of
the non-editing region. Guided by this metric, we then edit each editing pair
separately to avoid concept mismatch. Subsequently, we propose an
additional reassembly step to seamlessly integrate the respective editing
results and the non-editing region to obtain the final edited image. To
systematically evaluate the effectiveness of our method, we collect two
datasets for benchmarking single- and multi-object editing, respectively.
Experiments demonstrate that our method achieves superior performance in
editing object shapes, colors, materials, categories, etc., especially in
multi-object editing scenarios. Comment: Project Page: https://aim-uofa.github.io/OIR-Diffusion
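The per-pair search over inversion steps can be sketched abstractly. The `editability` and `fidelity` callables and the weighting `alpha` below are hypothetical placeholders, not the paper's actual metric; the point is only that each editing pair gets its own optimal inversion step rather than sharing a fixed one:

```python
def optimal_inversion_step(candidate_steps, editability, fidelity, alpha=0.6):
    """Pick the inversion step that best trades off editability of the
    target against fidelity of the non-editing region. `editability` and
    `fidelity` are assumed to score an edit at a given step in [0, 1];
    the real OIR search metric is defined in the paper -- this only
    sketches the per-pair search it describes."""
    def score(t):
        return alpha * editability(t) + (1 - alpha) * fidelity(t)
    return max(candidate_steps, key=score)

# Toy scoring functions: more inversion makes the image more editable
# but degrades fidelity of the untouched region.
steps = range(0, 51, 10)
best = optimal_inversion_step(
    steps,
    editability=lambda t: t / 50,
    fidelity=lambda t: 1 - (t / 50) ** 2,
)
```

Under these toy curves the search settles on an intermediate step, illustrating why a fixed global step count is sub-optimal when different editing pairs have different difficulties.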
Carnegie Mellon Team Tartan: Mission-level Robustness with Rapidly Deployed Autonomous Aerial Vehicles in the MBZIRC 2020
For robotics systems to be used in high risk, real-world situations, they
have to be quickly deployable and robust to environmental changes,
under-performing hardware, and mission subtask failures. Robots are often
designed to consider a single sequence of mission events, with complex
algorithms lowering individual subtask failure rates under some critical
constraints. Our approach is to leverage common techniques in vision and
control and encode robustness into mission structure through outcome monitoring
and recovery strategies, aided by a system infrastructure that allows for quick
mission deployments under tight time constraints and no central communication.
We also detail lessons in rapid field robotics development and testing. Systems
were developed and evaluated through real-robot experiments at an outdoor test
site in Pittsburgh, Pennsylvania, USA, as well as in the 2020 Mohamed Bin Zayed
International Robotics Challenge. All competition trials were completed in
fully autonomous mode without RTK-GPS. Our system led to 4th place in Challenge
2 and 7th place in the Grand Challenge, and achievements like popping five
balloons (Challenge 1), successfully picking and placing a block (Challenge 2),
and autonomously dispensing more water onto an outdoor, real fire than any
other team's UAV (Challenge 3). Comment: 28 pages, 26 figures. To appear in Field Robotics, Special Issue on
MBZIRC 202
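The outcome-monitoring-and-recovery pattern can be sketched generically. The subtask names and retry budget below are invented for illustration; this is not Team Tartan's actual mission executive, only the structural idea of encoding robustness into the mission rather than into any single algorithm:

```python
def run_mission(subtasks, max_retries=2):
    """Minimal outcome-monitoring loop: attempt each subtask, monitor its
    outcome, retry on failure, and fall back to a recovery action when
    the retry budget is exhausted, so one subtask failure does not end
    the mission."""
    log = []
    for name, attempt, recover in subtasks:
        for trial in range(1 + max_retries):
            if attempt():                      # outcome monitoring
                log.append((name, "ok", trial))
                break
        else:
            recover()                          # e.g. return to a safe hover and re-plan
            log.append((name, "recovered", max_retries))
    return log

# Toy subtasks: the second fails twice before succeeding.
flaky = iter([False, False, True])
mission = [
    ("takeoff", lambda: True, lambda: None),
    ("pick_block", lambda: next(flaky), lambda: None),
]
result = run_mission(mission)
```

The same structure extends naturally to the paper's setting, where "recover" might mean re-detecting a target or re-sequencing remaining subtasks instead of aborting.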
A SEMIOTIC ANALYSIS OF LINGUISTIC AND CONCEPTUAL DEVELOPMENT IN MATHEMATICS FOR ENGLISH LANGUAGE LEARNERS
This study explores how an elementary mathematics teacher supported English language learners' (ELLs') academic language and concept development in the context of current high-stakes school reform. The conceptual frameworks informing this study include Halliday's theory of systemic functional linguistics (e.g., Halliday & Matthiessen, 2014) and Vygotsky's sociocultural theory of concept development (Vygotsky, 1986). Specifically, this study analyzes the interplay between academic and everyday language and how this interplay can facilitate the development of what Vygotsky referred to as "real" or complete concepts as students shift from "spontaneous" to more "scientific" understanding of phenomena (Vygotsky, 1986, p. 173). This year-long qualitative study combines case study methods with discourse analysis using SFL tools. Participants included an English-as-a-Second-Language teacher and her 14 ELL students. At the time of the study these students had varying degrees of English proficiency and were enrolled in a mixed-age classroom in an urban elementary school in Massachusetts. In SFL terms, the findings from this investigation indicate that the teacher used language in a structured way to interweave everyday language connected with familiar or "Given" information with academic language regarding "New" information. In addition, the data suggest that student talk, over time, mirrored the way the teacher used language to "bind" everyday language representing spontaneous concepts with academic language representing mathematical concepts. Moreover, mathematics classroom discourse in this context often related multiple semiotic resources as "Token" to their meanings as "Value." Drawing on Halliday and Matthiessen's (2014) concepts of "decoding" and "encoding" activities associated with Token-Value relationships, students were guided in verbalizing mathematical reasoning that promoted both spontaneous and scientific concept development.
In addition, the participant teacher made linguistic choices differently depending on the multisemiotic resources she used during instruction. The findings of this study suggest that teachers' use of language plays a pivotal role in developing students' language and mathematical conceptual knowledge simultaneously. Drawing teachers' attention to the role discourse plays in classroom interactions and students' disciplinary literacy development is especially consequential given the discourses of high-stakes testing, standardization, and accountability systems in K-12 schools in the United States.
The 3rd Anti-UAV Workshop & Challenge: Methods and Results
The 3rd Anti-UAV Workshop & Challenge aims to encourage research in
developing novel and accurate methods for multi-scale object tracking. The
Anti-UAV dataset used for the Anti-UAV Challenge has been publicly released.
There are two main differences between this year's competition and the previous
two. First, we have expanded the existing dataset, and for the first time,
released a training set so that participants can focus on improving their
models. Second, we set up two tracks for the first time, i.e., Anti-UAV
Tracking and Anti-UAV Detection & Tracking. Around 76 participating teams from
around the globe competed in the 3rd Anti-UAV Challenge. In this paper, we provide a
brief summary of the 3rd Anti-UAV Workshop & Challenge, including short
introductions to the top three methods in each track. The submission
leaderboard will be reopened for researchers who are interested in the
Anti-UAV challenge. The benchmark dataset and other information can be found
at: https://anti-uav.github.io/. Comment: Technical report for 3rd Anti-UAV Workshop and Challenge. arXiv admin
note: text overlap with arXiv:2108.0990