    Advanced data extraction infrastructure: Web based system for management of time series data

    During the operation of high-energy physics experiments, a large amount of slow-control data is recorded. All collected data must be examined to check the integrity and validity of the measurements. With the growing maturity of AJAX technologies, it has become possible to construct sophisticated interfaces using web technologies only. Our solution for handling time series, generally slow-control data, has a modular architecture: a backend system for data analysis and preparation, a web-service interface for data access, and a fast AJAX web display. In order to provide fast interactive access, the time series are aggregated over time slices of a few predefined lengths. The aggregated values are stored in a temporary caching database and then used to create overview data plots. These plots may include indications of data quality and are generated within a few hundred milliseconds even at very high data rates. The extensible export subsystem provides data in multiple formats, including CSV, Excel, ROOT, and TDMS. The search engine can be used to find periods of time in which the readings of selected sensors fall into specified ranges; the caching database allows most such lookups to complete within a second. Based on this functionality, a web interface providing fast (Google-Maps-style) navigation through the data has been implemented. The solution is currently used by several slow-control systems at the Test Facility for Fusion Magnets (TOSKA) and the KArlsruhe TRItium Neutrino (KATRIN) experiment.
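
    The slice-and-cache idea described above lends itself to a compact illustration. The following Python sketch is our own construction; the slice lengths, field names, and cache layout are illustrative assumptions, not the system's actual schema. It precomputes per-slice summaries from raw (timestamp, value) samples so that plots and range searches touch only the small aggregate tables instead of the raw series.

        from dataclasses import dataclass

        # Illustrative slice lengths in seconds; the real system's choices may differ.
        SLICE_LENGTHS = [60, 3600, 86400]  # minute, hour, day

        @dataclass
        class Aggregate:
            count: int
            minimum: float
            maximum: float
            mean: float

        def aggregate(samples, slice_len):
            """Group (timestamp, value) samples into fixed-length time slices."""
            buckets = {}
            for ts, value in samples:
                buckets.setdefault(int(ts) // slice_len, []).append(value)
            return {k: Aggregate(len(v), min(v), max(v), sum(v) / len(v))
                    for k, v in buckets.items()}

        def build_cache(samples):
            """One aggregate table per slice length -- the 'caching database'."""
            return {sl: aggregate(samples, sl) for sl in SLICE_LENGTHS}

        def find_periods(cache, slice_len, lo, hi):
            """Cheap first pass of a range search: slices whose [min, max]
            envelope overlaps [lo, hi] are candidate periods."""
            return sorted(k * slice_len for k, a in cache[slice_len].items()
                          if a.maximum >= lo and a.minimum <= hi)

    Because each plotted point or search probe reads one precomputed aggregate rather than the raw samples, response time is governed by the number of slices on screen, not by the data rate.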

    Locally Optimal Load Balancing

    This work studies distributed algorithms for locally optimal load balancing: we are given a graph of maximum degree $\Delta$, and each node holds up to $L$ units of load. The task is to distribute the load more evenly so that the loads of adjacent nodes differ by at most $1$. If the graph is a path ($\Delta = 2$), it is easy to solve the fractional version of the problem in $O(L)$ communication rounds, independently of the number of nodes. We show that this is tight, and that the discrete version of the problem can also be solved in $O(L)$ rounds on paths. For the general case ($\Delta > 2$), we show that fractional load balancing can be solved in $\operatorname{poly}(L, \Delta)$ rounds and discrete load balancing in $f(L, \Delta)$ rounds for some function $f$, independently of the number of nodes. (Comment: 19 pages, 11 figures.)
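
    As a concrete illustration of the discrete problem, the toy Python simulation below (our own construction, not the paper's algorithm) runs a natural local rule on a path: in every synchronous round, each node pushes one unit toward any neighbour whose load is lower by at least 2, until adjacent loads differ by at most 1.

        def balance_path(load, max_rounds=10_000):
            """Toy synchronous simulation of a local balancing rule on a path.
            Returns the number of rounds until adjacent loads differ by at
            most 1 (capped at max_rounds as a safety net)."""
            n = len(load)
            for rounds in range(max_rounds):
                if all(abs(load[i] - load[i + 1]) <= 1 for i in range(n - 1)):
                    return rounds
                delta = [0] * n
                for i in range(n - 1):
                    if load[i] - load[i + 1] >= 2:      # push one unit right
                        delta[i] -= 1; delta[i + 1] += 1
                    elif load[i + 1] - load[i] >= 2:    # push one unit left
                        delta[i + 1] -= 1; delta[i] += 1
                load = [x + d for x, d in zip(load, delta)]
            return max_rounds

        print(balance_path([5, 0, 0, 0, 3]))  # converges in a handful of rounds

    The paper's contribution is that such a local process can be organized so that it provably finishes in $O(L)$ rounds on paths, independently of the number of nodes.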

    Worst case and probabilistic analysis of the 2-Opt algorithm for the TSP

    2-Opt is probably the most basic local search heuristic for the TSP. It achieves amazingly good results on "real-world" Euclidean instances, both with respect to running time and approximation ratio. There are numerous experimental studies of the performance of 2-Opt, but theoretical knowledge about the heuristic is still very limited: not even its worst-case running time on 2-dimensional Euclidean instances was known so far. We clarify this issue by presenting, for every $p \in \mathbb{N}$, a family of $L_p$ instances on which 2-Opt can take an exponential number of steps. Previous probabilistic analyses were restricted to instances in which $n$ points are placed uniformly at random in the unit square $[0,1]^2$, where it was shown that the expected number of steps is bounded by $\tilde{O}(n^{10})$ for Euclidean instances. We consider a more general model of probabilistic instances in which the points are placed independently according to general distributions on $[0,1]^d$, for an arbitrary $d \ge 2$; in particular, we allow different distributions for different points. We study the expected number of local improvements in terms of the number $n$ of points and the maximal density $\phi$ of the probability distributions, and show an upper bound of $\tilde{O}(n^{4+1/3} \cdot \phi^{8/3})$ on the expected length of any 2-Opt improvement path. When starting with an initial tour computed by an insertion heuristic, the upper bound on the expected number of steps improves to $\tilde{O}(n^{4+1/3-1/d} \cdot \phi^{8/3})$. If distances are measured according to the Manhattan metric, the expected number of steps is bounded by $\tilde{O}(n^{4-1/d} \cdot \phi)$. In addition, we prove an upper bound of $O(\sqrt[d]{\phi})$ on the expected approximation factor with respect to all $L_p$ metrics. Note that our probabilistic analysis covers as special cases the uniform input model with $\phi = 1$ and a smoothed analysis with Gaussian perturbations of standard deviation $\sigma$, for which $\phi \sim 1/\sigma^d$.
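
    For reference, the heuristic itself fits in a few lines. The Python sketch below is a plain textbook implementation (not the paper's experimental setup): repeatedly remove two edges of the tour and reconnect the two resulting paths the other way round, i.e. reverse a segment, whenever this shortens the tour.

        import math

        def two_opt(points, tour):
            """Apply improving 2-Opt steps until the tour is locally optimal.
            points: list of coordinate tuples; tour: list of point indices."""
            n, improved = len(tour), True
            while improved:
                improved = False
                for i in range(n - 1):
                    for j in range(i + 2, n):
                        if i == 0 and j == n - 1:
                            continue  # would remove the same pair of edges
                        a, b = points[tour[i]], points[tour[i + 1]]
                        c, d = points[tour[j]], points[tour[(j + 1) % n]]
                        # Swap edges (a,b),(c,d) for (a,c),(b,d) if shorter.
                        if (math.dist(a, c) + math.dist(b, d)
                                < math.dist(a, b) + math.dist(c, d) - 1e-12):
                            tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                            improved = True
            return tour

    Each accepted segment reversal is one "local improvement"; the bounds quoted above count exactly how many such steps can occur before a local optimum is reached.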

    Scalable Multi-Party Private Set-Intersection

    In this work we study the problem of private set intersection in the multi-party setting and design two protocols with the following improvements over prior work. First, our protocols are designed for the so-called star network topology, where a designated party communicates with everyone else, and take a new approach of leveraging the two-party protocol of [FreedmanNP04]. This approach minimizes the usage of a broadcast channel: our semi-honest protocol makes no use of such a channel, and all communication is via point-to-point channels. In addition, the communication complexity of our protocols scales with the number of parties. More concretely: (1) Our first protocol, secure in the semi-honest setting, has communication complexity linear in the input sizes, namely $O((\sum_{i=1}^n m_i) \cdot \kappa)$ bits, where $\kappa$ is the security parameter and $m_i$ is the size of party $P_i$'s input set; the overall computational overhead is quadratic in the input sizes only for a designated party and linear for the rest. We further reduce this overhead by employing two types of hashing schemes. (2) Our second protocol is proven secure in the malicious setting and induces communication complexity $O((n^2 + n m_{\max} + n m_{\min} \log m_{\max}) \cdot \kappa)$ bits, where $m_{\min}$ (resp. $m_{\max}$) is the minimum (resp. maximum) over all input set sizes and $n$ is the number of parties.
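
    The [FreedmanNP04] building block that both protocols leverage has a simple algebraic core: a party encodes its set as the roots of a polynomial, and an element lies in the intersection exactly when the polynomial vanishes on it. In the actual protocol this evaluation is performed under additively homomorphic encryption (randomized so that non-roots decrypt to garbage) and therefore reveals nothing else; the plaintext Python sketch below shows only the algebra and is, of course, not private.

        def set_polynomial(s):
            """Coefficients, lowest degree first, of P(x) = product of (x - a) for a in s."""
            coeffs = [1]
            for a in s:
                new = [0] * (len(coeffs) + 1)
                for i, c in enumerate(coeffs):
                    new[i + 1] += c   # the c * x term
                    new[i] -= a * c   # the c * (-a) term
                coeffs = new
            return coeffs

        def evaluate(coeffs, x):
            """Evaluate the polynomial at x using Horner's rule."""
            result = 0
            for c in reversed(coeffs):
                result = result * x + c
            return result

        client = {3, 7, 11}
        server = {5, 7, 9, 11}
        p = set_polynomial(client)  # encodes the client's set as polynomial roots
        print(sorted(y for y in server if evaluate(p, y) == 0))  # [7, 11]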

    Focal-plane detector system for the KATRIN experiment

    The focal-plane detector system for the KArlsruhe TRItium Neutrino (KATRIN) experiment consists of a multi-pixel silicon p-i-n-diode array, custom readout electronics, two superconducting solenoid magnets, an ultra-high-vacuum system, a high-vacuum system, calibration and monitoring devices, a scintillating veto, and a custom data-acquisition system. It is designed to detect the low-energy electrons selected by the KATRIN main spectrometer. We describe the system and summarize its performance after its final installation. (Comment: 28 pages; two figures revised for clarity. Final version published in Nucl. Inst. Meth.)

    Randomized rumor spreading

    We investigate the class of so-called epidemic algorithms that are commonly used for the lazy transmission of updates to distributed copies of a database. These algorithms use a simple randomized communication mechanism to ensure robustness. Suppose $n$ players communicate in parallel rounds, in each of which every player calls a randomly selected communication partner. In every round, players can generate rumors (updates) that are to be distributed among all players. Whenever communication is established between two players, each one must decide which of the rumors to transmit. The major problem (arising due to the randomization) is that players might not know which rumors their partners have already received. For example, a standard algorithm forwarding each rumor from the calling to the called players for $O(\ln n)$ rounds needs to transmit the rumor $\Theta(n \ln n)$ times in order to ensure that every player finally receives the rumor with high probability. We investigate whether such a large communication overhead is inherent to epidemic algorithms. On the positive side, we show that the communication overhead can be reduced significantly: we give an algorithm using only $O(n \ln \ln n)$ transmissions and $O(\ln n)$ rounds, and we prove the robustness of this algorithm, e.g., against adversarial failures. On the negative side, we show that any address-oblivious algorithm (i.e., an algorithm that does not use the addresses of communication partners) needs to send $\Omega(n \ln \ln n)$ messages for each rumor, regardless of the number of rounds. Furthermore, we give a general lower bound showing that time- and communication-optimality cannot be achieved simultaneously using random phone calls: every algorithm that distributes a rumor in $O(\ln n)$ rounds needs $\omega(n)$ message transmissions.
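
    The baseline behaviour is easy to reproduce in simulation. The Python sketch below models the standard push algorithm (a toy model with global stopping, not the paper's improved algorithm): running it until everyone is informed exhibits both the logarithmic round count and the roughly $n \ln n$ transmissions that the discussion above refers to.

        import random

        def push_gossip(n, seed=0):
            """Basic push protocol: in each round, every informed player calls a
            uniformly random player and forwards the rumor. Returns (rounds,
            transmissions) until all n players are informed."""
            rng = random.Random(seed)
            informed = {0}                   # player 0 generates the rumor
            rounds = transmissions = 0
            while len(informed) < n:
                calls = [rng.randrange(n) for _ in range(len(informed))]
                transmissions += len(calls)  # one transmission per call
                informed.update(calls)
                rounds += 1
            return rounds, transmissions

        print(push_gossip(10_000))  # roughly log2(n) + ln(n) rounds, ~n ln n sends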