Local Testing for Membership in Lattices
Motivated by the structural analogies between point lattices and linear error-correcting codes, and by the mature theory of locally testable codes, we initiate a systematic study of local testing for membership in lattices. Testing membership in lattices is also motivated in practice by applications to integer programming, error detection in lattice-based communication, and cryptography. Apart from establishing the conceptual foundations of lattice testing, our results include the following:
1. We demonstrate upper and lower bounds on the query complexity of local testing for the well-known family of code formula lattices. Furthermore, we instantiate our results with code formula lattices constructed from Reed-Muller codes, and obtain nearly-tight bounds.
2. We show that in order to achieve low query complexity, it is sufficient to design one-sided non-adaptive canonical tests. This result is akin to, and based on, an analogous result for error-correcting codes due to Ben-Sasson et al. (SIAM J. Computing 35(1), pp. 1-21).
PHP/HTML design and build of a computer adaptive test to assess English fluency among native Spanish speakers
The following is a review of key findings from the implementation of a PHP/HTML web-based application to assess English fluency among native Spanish speakers. The scope of this professional report covers mainly the design, build, and implementation of a web-based system accessible through www.babelous.com. This written portion is intended to briefly summarize initial results from the implementation of the successfully built application, provide information on how to replicate the application, and detail areas of focus for future development.
Psychometrics in Practice at RCEC
A broad range of topics is dealt with in this volume: from combining the psychometric generalizability and item response theories to ideas for an integrated formative use of data-driven decision making, assessment for learning, and diagnostic testing. A number of chapters pay attention to computerized (adaptive) and classification testing. Other chapters treat the quality of testing in a general sense, but for topics like maintaining standards or the testing of writing ability, the quality of testing is dealt with more specifically.
All authors are connected to RCEC as researchers. They each present one of their current research topics and provide some insight into the focus of RCEC. The topics were selected and edited with the intent that the book be of special interest to educational researchers, psychometricians, and practitioners in educational assessment.
Mining Bad Credit Card Accounts from OLAP and OLTP
Credit card companies classify accounts as good or bad based on historical data, where a bad account may default on payments in the near future. If an account is classified as bad, further action can be taken to investigate its actual nature and to take preventive measures. In addition, marking an account as "good" when it is actually bad could lead to loss of revenue, while marking an account as "bad" when it is actually good could lead to loss of business. However, detecting bad credit card accounts in real time from Online Transaction Processing (OLTP) data is challenging due to the volume of data that must be processed to compute the risk factor. We propose an approach that precomputes and maintains the risk probability of an account based on historical transaction data from offline sources or a data warehouse. Furthermore, using the most recent OLTP transactional data, a risk probability is calculated for the latest transaction and combined with the previously computed risk probability from the data warehouse. If the accumulated risk probability crosses a predefined threshold, the account is treated as a bad account and is flagged for manual verification.
Comment: Conference proceedings of ICCDA, 201
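The flagging step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the blending weight, threshold value, and all function names are assumptions introduced here for clarity.

```python
# Hypothetical sketch of combining a precomputed (warehouse) risk probability
# with the risk of the latest OLTP transaction, then thresholding.
# THRESHOLD and the 50/50 blending weight are illustrative assumptions.

THRESHOLD = 0.8  # assumed cutoff for routing an account to manual verification


def combined_risk(offline_risk: float, online_risk: float,
                  weight: float = 0.5) -> float:
    """Blend the precomputed historical risk with the latest transaction's risk."""
    return weight * offline_risk + (1.0 - weight) * online_risk


def flag_account(offline_risk: float, online_risk: float) -> bool:
    """Return True if the accumulated risk crosses the predefined threshold."""
    return combined_risk(offline_risk, online_risk) >= THRESHOLD
```

In a real deployment the offline term would be refreshed by a batch job over the warehouse, so only the cheap online term is computed per transaction.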
interAdapt -- An Interactive Tool for Designing and Evaluating Randomized Trials with Adaptive Enrollment Criteria
The interAdapt R package is designed to be used by statisticians and clinical investigators to plan randomized trials. It can be used to determine whether certain adaptive designs offer tangible benefits compared to standard designs, in the context of investigators' specific trial goals and constraints. Specifically, interAdapt compares the performance of trial designs with adaptive enrollment criteria versus standard (non-adaptive) group sequential trial designs. Performance is compared in terms of power, expected trial duration, and expected sample size. Users can either work directly in the R console or with a user-friendly shiny application that requires no programming experience. Several added features are available when using the shiny application. For example, the application allows users to immediately download the results of the performance comparison as a CSV table, or as a printable, HTML-based report.
Comment: 14 pages, 2 figures (software screenshots); v2 includes command line function descriptions
Transportation mode recognition fusing wearable motion, sound and vision sensors
We present the first work that investigates the potential of improving the performance of transportation mode recognition through fusing multimodal data from wearable sensors: motion, sound and vision. We first train three independent deep neural network (DNN) classifiers, which work with the three types of sensors, respectively. We then propose two schemes that fuse the classification results from the three mono-modal classifiers. The first scheme makes an ensemble decision with fixed rules, including Sum, Product, Majority Voting, and Borda Count. The second scheme is an adaptive fuser built as another classifier (including Naive Bayes, Decision Tree, Random Forest and Neural Network) that learns enhanced predictions by combining the outputs from the three mono-modal classifiers. We verify the advantage of the proposed method with the state-of-the-art Sussex-Huawei Locomotion and Transportation (SHL) dataset, recognizing eight transportation activities: Still, Walk, Run, Bike, Bus, Car, Train and Subway. We achieve F1 scores of 79.4%, 82.1% and 72.8% with the mono-modal motion, sound and vision classifiers, respectively. The F1 score is remarkably improved to 94.5% and 95.5% by the two data fusion schemes, respectively. The recognition performance can be further improved with a post-processing scheme that exploits the temporal continuity of transportation modes. When assessing generalization of the model to unseen data, we show that while performance is reduced - as expected - for each individual classifier, the benefits of fusion are retained, with performance improved by 15 percentage points. Beyond the performance increase itself, this work, most importantly, opens up the possibility of dynamically fusing modalities to achieve distinct power-performance trade-offs at run time.
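The fixed-rule fusion schemes named above (Sum, Product, Majority Voting) can be sketched generically as follows. This is an illustrative example, not the authors' code: the class-probability vectors are made up, and only three of the eight SHL classes are shown.

```python
import numpy as np

# Fixed-rule fusion of per-classifier class-probability vectors.
# Each row of `probs` is one mono-modal classifier's output distribution.


def sum_rule(probs: np.ndarray) -> int:
    """Pick the class with the largest summed probability."""
    return int(np.argmax(np.sum(probs, axis=0)))


def product_rule(probs: np.ndarray) -> int:
    """Pick the class with the largest product of probabilities."""
    return int(np.argmax(np.prod(probs, axis=0)))


def majority_vote(probs: np.ndarray) -> int:
    """Each classifier votes for its top class; the most common vote wins."""
    votes = [int(np.argmax(p)) for p in probs]
    return max(set(votes), key=votes.count)


# Made-up outputs of three mono-modal classifiers (motion, sound, vision)
# over three example classes:
probs = np.array([
    [0.6, 0.3, 0.1],   # motion
    [0.3, 0.5, 0.2],   # sound
    [0.5, 0.4, 0.1],   # vision
])
```

The adaptive fuser in the second scheme would instead treat the concatenated probability vectors as features and train a second-stage classifier on them.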