
    Tight Bounds for Adversarially Robust Streams and Sliding Windows via Difference Estimators

    In the adversarially robust streaming model, a stream of elements is presented to an algorithm and is allowed to depend on the output of the algorithm at earlier times during the stream. In the classic insertion-only model of data streams, Ben-Eliezer et al. (PODS 2020, best paper award) show how to convert a non-robust algorithm into a robust one with a roughly $1/\varepsilon$ factor overhead. This was subsequently improved to a $1/\sqrt{\varepsilon}$ factor overhead by Hassidim et al. (NeurIPS 2020, oral presentation), suppressing logarithmic factors. For general functions, the latter is known to be best possible, by a result of Kaplan et al. (CRYPTO 2021). We show how to bypass this impossibility result by developing data stream algorithms for a large class of streaming problems, with no overhead in the approximation factor. Our class of streaming problems includes the most well-studied problems, such as the $L_2$-heavy hitters problem, $F_p$-moment estimation, and empirical entropy estimation. We substantially improve upon all prior work on these problems, giving the first optimal dependence on the approximation factor. As in previous work, we obtain a general transformation that applies to any non-robust streaming algorithm and depends on the so-called flip number. However, the key technical innovation is that we apply the transformation to what we call a difference estimator for the streaming problem, rather than an estimator for the streaming problem itself. We then develop the first difference estimators for a wide range of problems. Our difference estimator methodology is not only applicable to the adversarially robust model, but to other streaming models where temporal properties of the data play a central role. (Abstract shortened to meet arXiv limit.) Comment: FOCS 202
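    As a toy illustration of the flip-number idea in this abstract: if a monotone statistic's published output is refreshed only when the current estimate leaves a $(1+\varepsilon)$ window, the output changes at most roughly $\varepsilon^{-1}\log n$ times, and each change can switch to a fresh sub-estimator that the adaptive adversary has never seen. The sketch below shows only this switching pattern, with a plain counter standing in for a sketch; it is not the paper's difference-estimator construction, and all names and parameters are hypothetical.

```python
# A toy illustration of flip-number-based robustness (illustration only;
# not the paper's difference-estimator construction). A plain counter
# stands in for a sketch: the published output moves only when the
# estimate leaves a (1 + eps) window, and each such "flip" retires the
# exposed sub-estimator, so the output changes O(log(n)/eps) times.

import math
import random


class ToyRobustEstimator:
    def __init__(self, eps: float, num_copies: int):
        self.eps = eps
        # Independent sub-estimators; a real algorithm keeps sketches, here
        # each copy is just a small random multiplicative error (hypothetical).
        self.offsets = [random.uniform(-eps / 4, eps / 4) for _ in range(num_copies)]
        self.active = 0        # sub-estimator currently exposed to the stream
        self.true_count = 0    # stand-in for the underlying statistic
        self.published = 0.0   # last output revealed to the adaptive adversary

    def update(self, _item) -> float:
        self.true_count += 1
        estimate = self.true_count * (1 + self.offsets[self.active])
        # Flip: refresh the output only when it leaves the (1 + eps) window.
        if estimate > (1 + self.eps) * max(self.published, 1.0):
            self.published = estimate
            self.active = min(self.active + 1, len(self.offsets) - 1)
        return self.published


n, eps = 10_000, 0.1
copies = math.ceil(math.log(n) / math.log(1 + eps)) + 1  # bound on flips
est = ToyRobustEstimator(eps, copies)
for i in range(n):
    est.update(i)
print(f"final output ~ {est.published:.0f}, flips used: {est.active}")
```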

    THE FUNDAMENTAL APPLICATION OF DECISION ANALYSIS TO MANUFACTURING

    Machining models are available to predict nearly every aspect of machining processes. In milling, for example, models are available to relate stability, part accuracy (from forced vibrations during stable machining), and tool wear to the selected operating parameters, material and tool properties, tool geometry, and part-tool-holder-spindle-machine dynamics. The models capture the underlying physics. However, the models are deterministic and do not take into account the uncertainty that exists due to the model assumptions, model inputs, and factors that are unknown. Therefore, to enable reliable parameter selection using process models, uncertainty should be included in the formulation. This research will apply the normative mathematical framework of decision theory to select optimal machining parameters while taking into account the inherent uncertainty in milling processes. The objective function will be profit because it (arguably) represents the decision maker's primary motivation in the manufacturing environment. The objective of this research is to select the optimal machining parameters that minimize cost while considering the uncertainty in tool life and stability for a given machine, tool, tool path, and workpiece material.
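    As a hedged illustration of the decision-analytic formulation described above, the sketch below selects a spindle speed by comparing Monte Carlo expected profit under an uncertain tool life. The cost model, the tool-life distribution, and every number are invented placeholders, not the physics-based milling models this research draws on.

```python
# A minimal sketch of decision-theoretic parameter selection under
# uncertainty, in the spirit described above. All numbers and the toy
# Taylor-style tool-life model are hypothetical placeholders, not the
# physics-based milling models this research builds on. For each candidate
# spindle speed, we sample an uncertain tool life and pick the speed with
# the highest expected profit per part.

import numpy as np

rng = np.random.default_rng(0)

REVENUE_PER_PART = 50.0   # $ per part (assumed)
MACHINE_RATE = 2.0        # $ per minute of machining (assumed)
TOOL_COST = 25.0          # $ per tool change (assumed)


def expected_profit(speed_rpm: float, samples: int = 10_000) -> float:
    """Monte Carlo estimate of expected profit per part at one speed."""
    cycle_time = 600.0 / speed_rpm  # minutes per part (toy model)
    # Uncertain tool life in minutes: lognormal around a Taylor-style
    # median that falls off quickly with spindle speed (assumed).
    tool_life = rng.lognormal(mean=np.log(1e8 / speed_rpm**2), sigma=0.3,
                              size=samples)
    parts_per_tool = np.maximum(tool_life / cycle_time, 1.0)
    cost = MACHINE_RATE * cycle_time + TOOL_COST / parts_per_tool
    return float(np.mean(REVENUE_PER_PART - cost))


candidates = [1000, 2000, 4000, 8000, 12000]  # spindle speeds in rpm
profits = {s: round(expected_profit(s), 2) for s in candidates}
print(profits, "-> choose", max(profits, key=profits.get))
```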

    G’3-stable semantics and inconsistency

    We present an overview of how to perform non-monotonic reasoning based on paraconsistent logics. In particular, we show that one can define a logic programming semantics based on the paraconsistent logic G’3, which is called G’3-stable semantics. This semantics defines a framework for performing non-monotonic reasoning in domains which are pervaded by vagueness and inconsistencies. In fact, we show that, by also considering a possibilistic logic point of view, one can use this extended framework to define a possibilistic logic programming approach able to deal with reasoning that is at the same time non-monotonic and uncertain.
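    For context, the classical two-valued stable-model construction that G’3-stable semantics generalizes can be sketched in a few lines: a candidate set of atoms is stable exactly when it is the least model of the Gelfond-Lifschitz reduct of the program with respect to that set. The sketch below implements only this standard two-valued version, not the three-valued paraconsistent G’3 variant.

```python
# Background sketch: the classical two-valued stable-model construction
# that G'3-stable semantics generalizes (this code does NOT implement the
# three-valued paraconsistent version). A rule is (head, positive body,
# negative body); a candidate set is stable iff it equals the least model
# of the Gelfond-Lifschitz reduct of the program with respect to it.

from itertools import chain, combinations


def least_model(positive_rules):
    """Least model of a negation-free program, by forward chaining."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos in positive_rules:
            if pos <= model and head not in model:
                model.add(head)
                changed = True
    return model


def is_stable(program, candidate):
    # Reduct: drop rules whose negative body meets the candidate, then
    # erase the remaining negative literals.
    reduct = [(h, pos) for h, pos, neg in program if not (neg & candidate)]
    return least_model(reduct) == candidate


# p :- not q.   q :- not p.   (two stable models: {p} and {q})
prog = [("p", frozenset(), frozenset({"q"})),
        ("q", frozenset(), frozenset({"p"}))]
atoms = {"p", "q"}
subsets = chain.from_iterable(combinations(sorted(atoms), r)
                              for r in range(len(atoms) + 1))
print([set(s) for s in subsets if is_stable(prog, set(s))])
```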

    A Framework for Testing Peer-to-Peer Systems

    Developing peer-to-peer (P2P) systems is hard because they must be deployed on a large number of nodes, which can be autonomous, refusing to answer some requests or even unexpectedly leaving the system. Such volatility of nodes is a common behavior in P2P systems and can be interpreted as faults during tests. In this paper, we propose a framework for testing P2P systems. This framework is based on the individual control of nodes, allowing test cases to precisely control the volatility of nodes during execution. We validated this framework through implementation and experimentation on an open-source P2P system.
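    A minimal sketch of the framework's central idea follows: the test case itself scripts when each node joins or leaves, so volatility becomes deterministic and replayable rather than background noise. The classes and methods below are hypothetical in-memory stand-ins, not the framework's actual API.

```python
# A minimal sketch of individually controlled node volatility: the test
# case, not chance, decides exactly when each node joins or leaves, so
# churn becomes a deterministic, replayable part of the test. Everything
# here is a hypothetical in-memory stand-in, not the paper's actual API.


class Node:
    def __init__(self, node_id):
        self.node_id = node_id
        self.alive = False
        self.store = {}

    def join(self):
        self.alive = True

    def leave(self):
        self.alive = False  # abrupt, unannounced departure

    def put(self, key, value):
        if self.alive:
            self.store[key] = value


class VolatilityController:
    """Gives a test case individual control over node join/leave events."""

    def __init__(self, n):
        self.nodes = [Node(i) for i in range(n)]

    def start_all(self):
        for node in self.nodes:
            node.join()

    def kill(self, *ids):
        for i in ids:
            self.nodes[i].leave()


# Test case: replicate a value, kill a precise subset of nodes mid-test,
# then assert the survivors still serve it.
ctl = VolatilityController(8)
ctl.start_all()
for node in ctl.nodes:
    node.put("key", "value")
ctl.kill(1, 3, 5)  # scripted volatility, repeatable across runs
survivors = [node for node in ctl.nodes if node.alive]
assert all(node.store.get("key") == "value" for node in survivors)
print(len(survivors), "nodes survived and still serve the data")
```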

    A Model-Based Approach for Testing Large Scale Systems

    This document summarizes the author's experience over six years of testing large-scale systems. We outline that experience in four points. First, we present a methodology for testing large-scale systems. The methodology takes into account three dimensions of these systems: functionality, scalability, and volatility. It proposes to execute tests in different workloads, from a small-scale static system up to a large-scale dynamic system. Experiments show that altering these three dimensions improves code coverage, thus improving confidence in the tests. Second, we introduce a distributed test architecture that uses both a broadcast protocol, to send messages from the test controller to testers, and a convergecast protocol, to send messages from testers back to the test controller. Experiments show that the architecture is more scalable than traditional centralized architectures when testing systems with more than 1,000 nodes. Third, we present an approach for using models as dynamic oracles for testing global properties of large-scale systems. This approach focuses on properties that are global, liveness, observable, and controllable. We propose to efficiently keep a global model of the system updated during its execution. This model is instantiated and evolved at runtime, by monitoring the corresponding distributed system, and serves as an oracle for the distributed tests. We illustrate this approach by testing the reliability of two routing algorithms under churn; the results show common flaws in both algorithms. Finally, we present a model-driven approach for software artifact deployment. We consider software artifacts as a product line and use feature models to represent their configurations, together with model-based techniques to handle automatic artifact deployment and reconfiguration. Experiments show that this approach reduces network traffic when deploying software in cloud environments.
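    The broadcast/convergecast architecture in the second point can be sketched as a small tree of testers: the controller pushes a test action down the tree and verdicts are aggregated on the way back up, so no single process talks to every tester. The sketch below is a hypothetical in-process illustration, not the thesis's distributed implementation.

```python
# A minimal sketch of the broadcast/convergecast pattern described above
# (hypothetical, in-process stand-in for a networked implementation). The
# controller broadcasts a test action down a tree of testers, and verdicts
# are aggregated back up the same tree, so the controller handles only
# fan-out-many messages instead of one message per tester.

from dataclasses import dataclass, field


@dataclass
class Tester:
    name: str
    children: list = field(default_factory=list)

    def broadcast(self, action):
        local_ok = action(self.name)  # run the test action locally
        # Convergecast: aggregate child verdicts before forwarding up.
        return local_ok and all(child.broadcast(action) for child in self.children)


# Small tester tree: controller -> 2 relays -> 4 leaves.
leaves = [Tester(f"leaf{i}") for i in range(4)]
relays = [Tester("relay0", leaves[:2]), Tester("relay1", leaves[2:])]
controller = Tester("controller", relays)

verdict = controller.broadcast(lambda name: True)  # every tester passes
print("global verdict:", "PASS" if verdict else "FAIL")
```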

    Search for the doubly heavy baryons $\Omega_{bc}^{0}$ and $\Xi_{bc}^{0}$ decaying to $\Lambda_{c}^{+}\pi^{-}$ and $\Xi_{c}^{+}\pi^{-}$

    The first search for the doubly heavy $\Omega_{bc}^{0}$ baryon and a search for the $\Xi_{bc}^{0}$ baryon are performed using proton-proton collision data collected by the LHCb experiment from 2016 to 2018 at a centre-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 5.2 fb$^{-1}$. The baryons are reconstructed via their decays to $\Lambda_{c}^{+}\pi^{-}$ and $\Xi_{c}^{+}\pi^{-}$. No significant excess is found for invariant masses between 6700 and 7300 MeV/$c^{2}$, in a rapidity range from 2.0 to 4.5 and a transverse momentum range from 2 to 20 GeV/$c$. Upper limits are set on the ratio of the $\Omega_{bc}^{0}$ and $\Xi_{bc}^{0}$ production cross-section times the branching fraction to $\Lambda_{c}^{+}\pi^{-}$ ($\Xi_{c}^{+}\pi^{-}$) relative to that of the $\Lambda_{b}^{0}$ ($\Xi_{b}^{0}$) baryon, for different lifetime hypotheses, at 95% confidence level. The upper limits range from $0.5\times10^{-4}$ to $2.5\times10^{-4}$ for the $\Omega_{bc}^{0}\to\Lambda_{c}^{+}\pi^{-}$ ($\Xi_{bc}^{0}\to\Lambda_{c}^{+}\pi^{-}$) decay, depending on the considered mass and lifetime of the $\Omega_{bc}^{0}$ ($\Xi_{bc}^{0}$) baryon.

    The Influence of Financial Performance on Higher Education Academic Quality

    A variety of academic and financial performance metrics are used to assess higher education institution performance. However, there is no consensus on the best performance measures. Signaling theory and agency theory are used to frame the challenges of assessing post-secondary institution performance related to information asymmetry between the institution and stakeholders. Agency costs may be reduced with a better understanding of the relationship among assessment variables. This quantitative study uses multiple linear regressions to identify and describe the relationship between financial performance and academic quality in 1,045 public and private not-for-profit U.S. colleges and universities. U.S. News & World Report rankings serve as a measure of perceived academic quality performance, and ratios developed by KPMG and Prager, Sealy & Co., LLC (2005) are used to measure financial performance. Initial findings provide evidence that a large number of schools could be considered financially weak performers. However, results also reveal a positive relationship between financial performance and perceived academic quality in groups with a high concentration of financially strong schools. Findings suggest that financial performance may be used to signal academic performance, reducing information asymmetry and simplifying monitoring of providers. Furthermore, better performance information has the potential to inform college choice and, therefore, influence access and student success. Recommendations for research, practice, and policy have the potential to create opportunities for better stakeholder decisions.
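    As a hedged illustration of the study's method, the sketch below fits an ordinary least-squares regression of a quality score on a few financial-ratio predictors using synthetic data; the variable names are placeholders, not the study's actual measures.

```python
# A minimal sketch of the study's statistical approach: ordinary
# least-squares regression of a perceived-quality score on financial
# ratios. The data are synthetic and the variable names are placeholders,
# not the study's actual U.S. News scores or KPMG/Prager ratios.

import numpy as np

rng = np.random.default_rng(42)
n = 1_045  # number of institutions in the study

# Synthetic predictors, with quality depending weakly and positively on
# them (mirroring the reported direction of the relationship).
primary_reserve = rng.normal(0.4, 0.2, n)
viability = rng.normal(1.0, 0.5, n)
net_income = rng.normal(0.02, 0.03, n)
quality = (50 + 8 * primary_reserve + 4 * viability + 60 * net_income
           + rng.normal(0, 5, n))

X = np.column_stack([np.ones(n), primary_reserve, viability, net_income])
coef, residuals, *_ = np.linalg.lstsq(X, quality, rcond=None)
r_squared = 1 - residuals[0] / np.sum((quality - quality.mean()) ** 2)
print("intercept and slopes:", np.round(coef, 2))
print("R^2:", round(float(r_squared), 3))
```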