Estimation of Sparsity via Simple Measurements
We consider several related problems of estimating the 'sparsity', or number of nonzero elements, of an unknown vector x by observing only the result of applying a predesigned test matrix M to x, where M is chosen independently of x and the operation combining M and x varies between problems. We aim to provide a constant-factor approximation of the sparsity with a minimal number of measurements (rows of M). This framework generalizes multiple problems, such as estimation of sparsity in group testing and compressed sensing. Using techniques from coding theory as well as probabilistic methods, we bound the number of rows that are sufficient when the operation is logical OR (i.e., group testing), and show that nearly this many are necessary, given a known upper bound on the sparsity. When instead the operation is matrix multiplication over the reals or over a finite field, we determine, in each case, the number of measurements that is necessary and sufficient.

Comment: 13 pages; shortened version presented at ISIT 201
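As a concrete illustration of the group-testing (logical OR) setting, the sketch below estimates sparsity from random pooled tests by inverting the expected fraction of negative tests. This is only a simple illustration of the measurement model, not the paper's coding-theoretic construction; the Bernoulli test design and all function names are assumptions.

```python
import math
import random

def or_measure(pool, x):
    """One group-testing measurement: the logical OR of the entries of x in the pool."""
    return any(x[i] for i in pool)

def estimate_sparsity(x, p, num_tests, rng):
    """Estimate the sparsity k = |{i : x[i] != 0}| from random pooled OR tests.

    Each test includes every coordinate independently with probability p, so a
    test comes back negative with probability (1 - p)**k; inverting the
    observed negative fraction gives a constant-factor estimate of k.
    """
    n = len(x)
    negatives = 0
    for _ in range(num_tests):
        pool = [i for i in range(n) if rng.random() < p]
        if not or_measure(pool, x):
            negatives += 1
    frac = max(negatives / num_tests, 1.0 / num_tests)  # avoid log(0)
    return math.log(frac) / math.log(1.0 - p)
```

With p chosen so that roughly half the tests are negative, a few thousand tests already give an estimate well within a constant factor of the true sparsity.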
Fuzzy Mouse Cursor Control System for Computer Users with Spinal Cord Injuries
People with severe motor impairments due to Spinal Cord Injury (SCI) or Spinal Cord Dysfunction (SCD) often experience difficulty with accurate and efficient control of pointing devices (Keates et al., 02). Usually this leads to their limited integration into society as well as limited unassisted control over their environment. The questions "How can someone with severe motor impairments perform mouse pointer control as accurately and efficiently as an able-bodied person?" and "How can these interactions be advanced through use of Computational Intelligence (CI)?" are the driving forces behind the research described in this paper. Through this research, a novel fuzzy mouse cursor control system (FMCCS) is developed. The goal of this system is to simplify and improve the efficiency of cursor control and its interactions on the computer screen by applying fuzzy logic in its decision-making, so that disabled Internet users can operate a networked computer conveniently and easily. The FMCCS core consists of several fuzzy control functions, which define different user interactions with the system. The development of the novel cursor control system is based on utilization of motor functions that are still available to most complete paraplegics, who retain limited vision and breathing control. One of the biggest obstacles in developing human-computer interfaces for disabled people that rely primarily on eyesight and breath control is the user's limited strength, stamina, and reaction time. Within the FMCCS developed in this research, these limitations are minimized through the use of a novel pneumatic input device and intelligent control algorithms for soft data analysis, fuzzy logic, and user feedback assistance during operation. The new system is developed using a reliable and inexpensive sensory system and readily available computing techniques.
Initial experiments with healthy and SCI subjects have clearly demonstrated the benefits and promising performance of the new system: the FMCCS is accessible to people with severe SCI; it is adaptable to user-specific capabilities and wishes; it is easy to learn and operate; and point-to-point movement is responsive, precise, and fast. The sophisticated integrated interaction features, good movement control without strain or clinical risk, as well as the fact that quadriplegics whose breathing is assisted by a respirator machine still possess enough control to use the new system with ease, provide a promising framework for future FMCCS applications. The strongest motivation for further FMCCS development, however, is the positive feedback from the persons who tested the first system prototype.
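The fuzzy decision-making at the core of the FMCCS can be illustrated with a toy rule base that maps a breath-pressure reading to a cursor speed. The membership functions, rule outputs, and names below are invented for illustration and are not the actual FMCCS control functions.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def cursor_speed(pressure):
    """Map a normalized breath-pressure reading in [0, 1] to a cursor speed
    (pixels per tick) using three fuzzy sets and weighted-average defuzzification."""
    rules = [  # (membership degree, crisp output speed for that rule)
        (tri(pressure, -0.5, 0.0, 0.5), 2.0),   # soft puff   -> slow
        (tri(pressure, 0.0, 0.5, 1.0), 10.0),   # medium puff -> moderate
        (tri(pressure, 0.5, 1.0, 1.5), 25.0),   # hard puff   -> fast
    ]
    num = sum(w * v for w, v in rules)
    den = sum(w for w, v in rules)
    return num / den if den else 0.0
```

Inputs between the set peaks blend neighboring rules smoothly, which is what gives fuzzy controllers their gentle, graded response compared with hard thresholds.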
Algebras of one-sided subshifts over arbitrary alphabets
We introduce two algebras associated with a subshift over an arbitrary alphabet, one unital and the other not necessarily unital. We focus on the unital case and describe conjugacy of Ott-Tomforde-Willis subshifts both in terms of a homeomorphism between the Stone duals of suitable Boolean algebras and in terms of a diagonal-preserving isomorphism of the associated unital algebras. For this, we realise the unital algebra associated with a subshift as a groupoid algebra and as a partial skew group ring.

Comment: 40 pages
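For readers unfamiliar with subshifts, a small computational example may help: a subshift of finite type is determined by a finite set of forbidden words, and its language consists of all words avoiding them. The sketch below is illustrative only and does not touch the algebras constructed in the paper; it enumerates the allowed words of the golden mean shift (binary words with no two consecutive 1s), whose word counts follow the Fibonacci numbers.

```python
def allowed(word, forbidden):
    """A word is allowed in the subshift if it contains no forbidden factor."""
    return not any(f in word for f in forbidden)

def language(alphabet, forbidden, length):
    """All allowed words of a given length: the finite part of the subshift's language,
    built by extending shorter allowed words one letter at a time."""
    words = [""]
    for _ in range(length):
        words = [w + a for w in words for a in alphabet if allowed(w + a, forbidden)]
    return words
```

For the golden mean shift the counts at lengths 1, 2, 3, 4 are 2, 3, 5, 8.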
Securing Whose Peace? The Effects of Peace-Agreement Provisions on Physical Integrity Rights After Civil War
When civil wars are resolved via negotiated settlement, peace-agreement provisions like power-sharing agreements and third-party security guarantees often are advocated for their purported benefits of ensuring a long-lasting and durable peace. Although scholars have explored the effects of peace-agreement provisions on enhancing the security of states, their influence on shaping individual security outcomes is largely unknown. There is a strong potential that the same provisions that improve a government's ability to deter future violence also increase that government's violation of its citizens' physical integrity rights as a means of coercion and governance. Also rare in the power-sharing literature is exploration of the effects of individual, disaggregated provisions.
This dissertation, therefore, asks: Under what conditions do peace-agreement provisions significantly improve the state's protection of its citizens' physical integrity rights? Two models are proposed. Model 1 considers aggregated power-sharing provisions. Model 2 considers disaggregated peace-agreement provisions, and includes both power-sharing agreements and robust third-party security guarantees.
Both models are evaluated in light of the situational and historical contexts relevant to each state's civil war experience. The universe of cases includes thirty-six civil wars in twenty-seven states where conflict terminated between 1989 and 2007 via negotiated settlement. This project leverages a mixed-method research design, including contingency tables and fuzzy-set Qualitative Comparative Analysis (fsQCA), to address the challenge posed by a small number of cases and many variables, and to account for causal complexity.
Four central claims are advanced in this dissertation: First, the common technique of evaluating peace-agreement provisions by aggregating them according to common political, military, and territorial dimensions obscures and misleads scholars; the disaggregation of peace-agreement provisions reveals how measures often act in opposition. Second, a number of commonly present provisions--including integration of rebels into the main military ranks and the granting of territorial autonomy--are consistently inhibitory to individual security after civil war ends. Third, other provisions such as robust third-party security guarantees and the granting of territorial federalism consistently lead to a reduction in the level of political repression used by states after civil war has ended. Fourth, significant human-rights improvement results from favorable causal recipes (i.e., combinations of disaggregated conditions) that together reduce both the motivation and opportunity of a government to repress.
These findings will assist decision-makers involved in negotiated settlements as they (1) identify the appropriate blends of peace-agreement provisions for resolving different civil wars, and (2) balance the need for a post-conflict government both to reassure its population and to deter future violence.
A Microscopic Simulation Laboratory for Evaluation of Off-street Parking Systems
The parking industry produces an enormous amount of data every day that, properly analyzed, will change the way the industry operates. The collected data form patterns that, in most cases, would allow parking operators and property owners to better understand how to maximize revenue and decrease operating expenses, and would support decisions such as setting specific parking policies (e.g., electric-charging-only parking spaces) to achieve sustainable and eco-friendly parking.
However, an intelligent tool for assessing the layout design and operational performance of parking lots, so as to reduce externalities and increase revenue, has been lacking. To address this issue, this research presents a comprehensive agent-based framework for microscopic off-street parking system simulation. A rule-based parking simulation logic programming model is formulated. The proposed simulation model can effectively capture the behaviors of drivers and pedestrians as well as the spatial and temporal interactions of traffic dynamics in the parking system. A methodology for data collection, processing, and extraction of user behaviors in the parking system is also developed. A Long Short-Term Memory (LSTM) neural network is used to predict the arrivals and departures of vehicles. The proposed simulator is implemented in Java, and a Software as a Service (SaaS) graphical user interface is designed to analyze and visualize the simulation results. This study finds the active capacity of the parking system, defined as the largest number of actively moving vehicles in the parking system under the facility layout. In the application to a real-world testbed, numerical tests show that (a) the smart check-in device has marginal benefits for vehicle waiting time; (b) a flexible pricing policy may increase average daily revenue if price elasticity is not involved; (c) the number of electric-charging-only spots has a negative impact on the performance of the parking facility; and (d) the rear-in-only policy may increase the duration of parking maneuvers and reduce efficiency during the arrival rush hour. Application of the developed simulation system to a real-world case demonstrates its capability of providing informative quantitative measures to support decisions in designing, maintaining, and operating smart parking facilities.
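As a rough illustration of the kind of quantity such a simulator computes, the toy model below tracks cruising versus parked vehicles in discrete time and reports the peak number of simultaneously moving vehicles, a much-simplified analogue of the active capacity described above. It is not the paper's rule-based logic programming model; the discrete-time dynamics and all names are assumptions.

```python
def simulate_lot(arrival_times, num_spots, search_ticks, dwell_ticks):
    """Toy discrete-time off-street lot: each arriving vehicle cruises for
    search_ticks ticks, parks for dwell_ticks if a spot is free (otherwise
    it gives up and exits), and departs. Returns the peak number of
    simultaneously cruising vehicles."""
    cruising = []   # remaining search ticks for each moving vehicle
    parked = []     # departure tick for each parked vehicle
    peak_moving = 0
    horizon = max(arrival_times) + search_ticks + dwell_ticks + 1
    for t in range(horizon):
        parked = [d for d in parked if d > t]                # departures free spots
        cruising += [search_ticks] * arrival_times.count(t)  # arrivals start cruising
        peak_moving = max(peak_moving, len(cruising))
        still_cruising = []
        for remaining in cruising:
            if remaining <= 0:
                if len(parked) < num_spots:
                    parked.append(t + dwell_ticks)           # found a spot
                # else: lot full, vehicle balks and exits
            else:
                still_cruising.append(remaining - 1)
        cruising = still_cruising
    return peak_moving
```

Even this crude model shows how arrival surges, search time, and dwell time jointly determine how many vehicles are in motion at once inside the facility.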
Fast, Scalable, and Accurate Algorithms for Time-Series Analysis
Time is a critical element for the understanding of natural processes (e.g., earthquakes and weather) or human-made artifacts (e.g., stock market and speech signals). The analysis of time series, the result of sequentially collecting observations of such processes and artifacts, is becoming increasingly prevalent across scientific and industrial applications. The extraction of non-trivial features (e.g., patterns, correlations, and trends) in time series is a critical step for devising effective time-series mining methods for real-world problems and the subject of active research for decades. In this dissertation, we address this fundamental problem by studying and presenting computational methods for efficient unsupervised learning of robust feature representations from time series. Our objective is to (i) simplify and unify the design of scalable and accurate time-series mining algorithms; and (ii) provide a set of readily available tools for effective time-series analysis. We focus on applications operating solely over time-series collections and on applications where the analysis of time series complements the analysis of other types of data, such as text and graphs.
For applications operating solely over time-series collections, we propose a generic computational framework, GRAIL, to learn low-dimensional representations that natively preserve the invariances offered by a given time-series comparison method. GRAIL represents a departure from classic approaches in the time-series literature where representation methods are agnostic to the similarity function used in subsequent learning processes. GRAIL relies on the attractive idea that once we construct the data-to-data similarity matrix most time-series mining tasks can be trivially solved. To overcome scalability issues associated with approaches relying on such matrices, GRAIL exploits time-series clustering to construct a small set of landmark time series and learns representations to reduce the data-to-data matrix to a data-to-landmark points matrix. To demonstrate the effectiveness of GRAIL, we first present domain-independent, highly accurate, and scalable time-series clustering methods to facilitate exploration and summarization of time-series collections. Then, we show that GRAIL representations, when combined with suitable methods, significantly outperform, in terms of efficiency and accuracy, state-of-the-art methods in major time-series mining tasks, such as querying, clustering, classification, sampling, and visualization. Overall, GRAIL rises as a new primitive for highly accurate, yet scalable, time-series analysis.
For applications where the analysis of time series complements the analysis of other types of data, such as text and graphs, we propose generic, simple, and lightweight methodologies to learn features from time-varying measurements. Such applications often organize operations over different types of data in a pipeline such that one operation provides input---in the form of feature vectors---to subsequent operations. To reason about the temporal patterns and trends in the underlying features, we need to (i) track the evolution of features over different time periods; and (ii) transform these time-varying features into actionable knowledge (e.g., forecasting an outcome). To address this challenging problem, we propose principled approaches to model time-varying features and study two large-scale, real-world applications. Specifically, we first study the problem of predicting the impact of scientific concepts through temporal analysis of characteristics extracted from the metadata and full text of scientific articles. Then, we explore the promise of harnessing temporal patterns in behavioral signals extracted from web search engine logs for early detection of devastating diseases. In both applications, combining domain features with time-series-derived features yielded greater impact than any other indicator considered in our analysis. We believe that our simple methodology, along with the interesting domain-specific findings that our work revealed, will motivate new studies across different scientific and industrial settings.
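The landmark idea at the heart of GRAIL can be sketched in a few lines. The sketch below is a simplified stand-in, not GRAIL itself: it picks landmarks at random (GRAIL uses time-series clustering) and uses a generic RBF similarity in place of a given time-series comparison method; all names are hypothetical.

```python
import math
import random

def rbf(a, b, gamma=1.0):
    """Stand-in similarity between two equal-length time series (values in (0, 1])."""
    d2 = sum((x - y) ** 2 for x, y in zip(a, b))
    return math.exp(-gamma * d2)

def landmark_features(series, num_landmarks, rng):
    """Replace the n x n data-to-data similarity matrix with an n x m
    data-to-landmark matrix: each row is a low-dimensional representation
    of one series, given by its similarities to m landmark series."""
    landmarks = rng.sample(series, num_landmarks)
    return [[rbf(s, l) for l in landmarks] for s in series]
```

The point of the reduction is scalability: downstream tasks operate on n x m features instead of the full n x n similarity matrix, with m much smaller than n.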
Algorithms to Exploit Data Sparsity
While data in the real world is very high-dimensional, it generally has some underlying structure; for instance, if we think of an image as a set of pixels with associated color values, most possible settings of color values correspond to something more like random noise than what we typically think of as a picture. With an appropriate transformation of basis, this underlying structure can often be converted into sparsity in the data, giving an equivalent representation whose magnitude is large in only a few directions relative to the ambient dimension. This motivates a variety of theoretical questions around designing algorithms that can exploit this data sparsity to achieve better performance than would be possible naively, and in this thesis we tackle several such questions.

We first examine the question of simply approximating the level of sparsity of a signal under several different measurement models, a natural first step if the sparsity is to be exploited by other algorithms. Second, we look at a particular sparse-signal recovery problem called nonadaptive probabilistic group testing, and investigate exactly how sparse the signal needs to be before the methods used for recovering sparse signals outperform those used for non-sparse signals. Third, we prove novel upper bounds on the number of measurements needed to recover a sparse signal in the universal one-bit compressed sensing model of sparse signal recovery. Fourth, we give some approximations of an information-theoretic quantity called the index coding rate of a network modeled by a graph, in the special case that the graph is sparse or otherwise highly structured. For each of the problems considered, we also discuss remaining open questions and conjectures, as well as possible directions towards their solutions.
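The third problem above, one-bit compressed sensing, can be illustrated with a minimal sketch: each measurement reveals only the sign of a random projection, so magnitude information is lost but the direction of a sparse signal can still be estimated. This is a generic illustration of the measurement model with a naive sign-weighted-average decoder, not the thesis's universal constructions or bounds; all names are hypothetical.

```python
import math
import random

def one_bit_measure(x, num_measurements, rng):
    """One-bit compressed sensing measurements: observe only sign(<a_i, x>)
    for random Gaussian rows a_i. Returns the list of (row, sign) pairs."""
    n = len(x)
    measurements = []
    for _ in range(num_measurements):
        a = [rng.gauss(0.0, 1.0) for _ in range(n)]
        dot = sum(ai * xi for ai, xi in zip(a, x))
        measurements.append((a, 1 if dot >= 0 else -1))
    return measurements

def direction_estimate(measurements):
    """Naive decoder: average the sign-weighted rows. In expectation this is
    proportional to x / ||x||, so it recovers the direction, not the magnitude."""
    n = len(measurements[0][0])
    est = [0.0] * n
    for a, s in measurements:
        for i in range(n):
            est[i] += s * a[i]
    norm = math.sqrt(sum(v * v for v in est)) or 1.0
    return [v / norm for v in est]
```

Running this on a 1-sparse signal shows the support coordinate dominating the estimate even though every measurement carried only a single bit.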