
    Modeling and Testing Implementations of Protocols with Complex Messages

    This paper presents a new language called APSL for formally describing protocols to facilitate automated testing. Many real-world communication protocols exchange messages whose structures are not trivial: they may consist of multiple and nested fields, some of which are optional, and some of which have values that depend on other fields. To properly test implementations of such a protocol, it is not sufficient to only explore different orders of sending and receiving messages. We also need to investigate whether the implementation indeed produces correctly formatted messages, and whether it responds correctly when it receives different variations of every message type. APSL's main contribution is its sublanguage, which is expressive enough to describe complex message formats, both text-based and binary. As an example, this paper also presents a case study in which APSL is used to model and test a subset of the Courier IMAP email server.
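    As a rough illustration of the kind of message-format checking the abstract describes (nested fields, optional fields, and fields whose values depend on other fields), here is a minimal Python sketch; the message layout and field names are hypothetical and are not APSL syntax.

    import struct

    # Hypothetical binary message: 1-byte type, 1-byte flags, 2-byte
    # big-endian payload length, the payload itself, and a trailing
    # 2-byte checksum that is present only when bit 0 of flags is set.
    # This layout is illustrative, not APSL's actual notation.
    def parse_message(data: bytes) -> dict:
        if len(data) < 4:
            raise ValueError("truncated header")
        msg_type, flags, length = struct.unpack(">BBH", data[:4])
        body_end = 4 + length
        if len(data) < body_end:
            raise ValueError("payload shorter than declared length")
        payload = data[4:body_end]
        checksum = None
        if flags & 0x01:  # dependent field: checksum present only if flagged
            if len(data) < body_end + 2:
                raise ValueError("checksum flag set but checksum missing")
            (checksum,) = struct.unpack(">H", data[body_end:body_end + 2])
            if checksum != sum(payload) & 0xFFFF:
                raise ValueError("bad checksum")
        return {"type": msg_type, "payload": payload, "checksum": checksum}

    A tester generating "different variations of every message type" would enumerate values for each field of such a description, including the optional and dependent ones, and check the implementation's response to each.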

    A Critic Evaluation of Methods for COVID-19 Automatic Detection from X-Ray Images

    In this paper, we compare and evaluate different testing protocols used for automatic COVID-19 diagnosis from X-ray images in the recent literature. We show that similar results can be obtained using X-ray images that do not contain most of the lungs. We remove the lungs from the images by blacking out the center of the X-ray scan and training our classifiers only on the outer part of the images. Hence, we deduce that several testing protocols for the recognition task are not fair and that the neural networks are learning patterns in the dataset that are not correlated to the presence of COVID-19. Finally, we show that creating a fair testing protocol is a challenging task, and we provide a method to measure how fair a specific testing protocol is. In future research, we suggest checking the fairness of a testing protocol using our tools, and we encourage researchers to look for better techniques than the ones we propose.
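    A minimal sketch of the masking step the authors describe: blacking out the central region of each scan so that a classifier sees only the outer part of the image. The mask fraction is an assumption; the paper's exact crop is not specified here.

    import numpy as np

    def mask_center(img: np.ndarray, fraction: float = 0.6) -> np.ndarray:
        """Black out the central `fraction` of the image along each axis,
        keeping only the outer border visible (assumed mask size)."""
        out = img.copy()
        h, w = img.shape[:2]
        dh, dw = int(h * fraction / 2), int(w * fraction / 2)
        out[h // 2 - dh : h // 2 + dh, w // 2 - dw : w // 2 + dw] = 0
        return out

    If a classifier trained on images masked this way still matches the published accuracies, the benchmark is picking up dataset artifacts rather than lung pathology, which is the paper's point.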

    Smooth Multirate Multicast Congestion Control

    A significant impediment to deployment of multicast services is the daunting technical complexity of developing, testing and validating congestion control protocols fit for wide-area deployment. Protocols such as pgmcc and TFMCC have recently made considerable progress on the single rate case, i.e. where one dynamic reception rate is maintained for all receivers in the session. However, these protocols have limited applicability, since scaling to session sizes beyond tens of participants necessitates the use of multiple rate protocols. Unfortunately, while existing multiple rate protocols exhibit better scalability, they are both less mature than single rate protocols and suffer from high complexity. We propose a new approach to multiple rate congestion control that leverages proven single rate congestion control methods by orchestrating an ensemble of independently controlled single rate sessions. We describe SMCC, a new multiple rate equation-based congestion control algorithm for layered multicast sessions that employs TFMCC as the primary underlying control mechanism for each layer. SMCC combines the benefits of TFMCC (smooth rate control, equation-based TCP friendliness) with the scalability and flexibility of multiple rates to provide a sound multiple rate multicast congestion control policy. Funding: National Science Foundation (ANI-9986397, ANI-0092196).
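    A rough sketch of the ensemble idea: each multicast layer adds an increment of bandwidth, and a receiver subscribes to as many cumulative layers as its equation-based fair rate allows. The layer rates and the externally supplied rate estimate are placeholders, not SMCC's or TFMCC's actual computations.

    # Per-layer rate increments in kbit/s (placeholders, not SMCC's values).
    LAYER_RATES = [32, 64, 128, 256, 512, 1024]

    def layers_to_join(fair_rate_kbps: float) -> int:
        """Number of cumulative layers to subscribe to so that the total
        subscribed rate stays within the receiver's fair rate, as a
        TFMCC-style estimator would report it per layer in SMCC."""
        total, n = 0.0, 0
        for rate in LAYER_RATES:
            if total + rate > fair_rate_kbps:
                break
            total += rate
            n += 1
        return n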

    Testing performance of standards-based protocols in DPM

    In the interests of promoting the increased use of non-proprietary protocols in grid storage systems, we test the performance of WebDAV and pNFS transport with the DPM storage solution. We find that the standards-based protocols behave similarly to the proprietary protocols currently in use, despite encountering some issues with the state of the implementation itself. We thus conclude that there is no performance-based reason to avoid using such protocols for data management in the future.
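    For flavor, a minimal sketch of the kind of measurement involved: timing an HTTP(S) GET, which is how WebDAV transfers data, against a storage endpoint and reporting throughput. The endpoint URL and token are placeholders, and this is not the authors' DPM test harness.

    import time
    import requests  # WebDAV downloads are plain HTTP GETs

    def time_get(url: str, token: str = "") -> float:
        """Download `url` and return throughput in MB/s (placeholder
        endpoint and auth; not the authors' actual test setup)."""
        headers = {"Authorization": f"Bearer {token}"} if token else {}
        start = time.monotonic()
        nbytes = 0
        with requests.get(url, headers=headers, stream=True) as r:
            r.raise_for_status()
            for chunk in r.iter_content(chunk_size=1 << 20):
                nbytes += len(chunk)
        return nbytes / (time.monotonic() - start) / 1e6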

    Protocols for the field testing

    The COMMON SENSE project has been designed and planned to meet the general and specific scientific and technical objectives set out in its Description of Work (page 77). In the overall strategy of the work plan, the eleven work packages can be grouped into three key phases: (1) an R&D basis for cost-effective sensor development, (2) sensor development, sensor web platform and integration, and (3) field testing. In the first two phases, WP1 and WP2 partners have provided a general understanding and an integrated basis for cost-effective sensor development. Within WPs 4 to 8, the new sensors are created and integrated into the identified platforms. During the third phase, characterized by WP9, partners will deploy precompetitive prototypes on the chosen platforms (e.g. research vessels, oil platforms, buoys and submerged moorings, ocean racing yachts, drifting buoys). Starting from August 2015 (month 22; task 9.2), these platforms will allow the partnership to test the adaptability and performance of the in-situ sensors and to verify that data transmission works properly, correcting deviations.

    In task 9.1, all stakeholders identified in WP2, and other relevant agents, have been contacted in order to finalize a coordinated agenda for the field-testing phase for each of the platforms. Field-testing procedures (WP2) and deployment specificities, defined during sensor development in WPs 4 to 8, are studied closely by all stakeholders involved in field-testing activities so that everyone knows their role, how to proceed, and what material and equipment to provide (e.g. transport of instruments). All this information will provide the basis for designing and coordinating field-testing activities. The type and characteristics of each system used for field testing (vessel or mooring, surface or deep, open sea or coastal area, duration, etc.) are planned around the indicators included in the above-mentioned descriptors, taking into account that they must be of interest for eutrophication, concentration of contaminants, marine litter and underwater noise.

    To obtain the necessary information, two tables were compiled, starting from the information acquired for D2.2 (delivered in June 2014): one for sensor developers and one for the partners that will test the sensors at sea. The six developers in COMMON SENSE have provided information on the seven sensors: CEFAS and IOPAN for underwater noise; IDRONAUT and LEITAT for microplastics; CSIC for an innovative pyro- and piezo-resistive polymeric temperature and pressure sensor and for heavy metals; DCU for the eutrophication sensor. This information is still incomplete because in most cases the novel sensors are far from ready and will be developed over the course of COMMON SENSE. The sensors therefore cannot be fully designed yet and, consequently, their technical characteristics cannot be precisely defined. This produces some lag in the acquired information and, consequently, in the planning of their testing on specific platforms, which will be resolved in the near future. In the table for testers, partners have provided information on fifteen available platforms, with specific answers on the number and type of sensors on each platform, their availability and technical characteristics, compatibility issues and, importantly when new sensors are tested, the comparative measurements to be implemented to verify them.

    Finally, IOPAN has described two more platforms: a motorboat not listed in the DoW but already introduced in D2.2, and their oceanographic buoy in the Gdansk Bay, which was previously unavailable. The OBSEA underwater observatory from CSIC is now likewise available, while their Aqualog undulating mooring is still not ready for use. In the following months, new information on sensors and platforms will be provided and the planning of testing activities will improve. Further updates of this report will therefore be necessary in order to identify the most suitable platforms for testing each kind of sensor.

    Objectives and rationale: the objective of deliverable 9.1 is the definition of field-testing procedures (WP2), the study of deployment specificities during the sensor development work packages (WP4 to WP8), and the preparation of protocols, with the participation of all stakeholders involved in field-testing activities so that everyone knows their role, how to proceed, and what material and equipment to provide.

    Self-testing protocols based on the chained Bell inequalities

    Self-testing is a device-independent technique based on non-local correlations whose aim is to certify the effective uniqueness of the quantum state and measurements needed to produce those correlations. It is known that the maximal violation of some Bell inequalities suffices for this purpose. However, most of the existing self-testing protocols for two devices exploit the well-known Clauser-Horne-Shimony-Holt Bell inequality or modifications of it, and always with two measurements per party. Here, we generalize the previous results by demonstrating that one can construct self-testing protocols based on the chained Bell inequalities, defined for two devices implementing an arbitrary number of two-output measurements. On the one hand, this proves that the quantum state and measurements leading to the maximal violation of the chained Bell inequality are unique. On the other hand, in the limit of a large number of measurements, our approach allows one to self-test the entire plane of measurements spanned by the Pauli matrices X and Z. Our results also imply that the chained Bell inequalities can be used to certify two bits of perfect randomness. Comment: 16 pages + appendix, 2 figures; close to published version.
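    For reference, the chained Bell inequality for n two-output measurements per party is usually written as below; this is the standard form from the literature, not an excerpt from the paper.

    % Chained Bell inequality (Braunstein-Caves form), n measurements
    % per party; the local hidden variable bound is 2n - 2.
    \begin{equation*}
      I_n = \Bigl|\sum_{i=1}^{n-1}\bigl(\langle A_i B_i\rangle
            + \langle A_{i+1} B_i\rangle\bigr)
            + \langle A_n B_n\rangle - \langle A_1 B_n\rangle\Bigr|
        \le 2n - 2.
    \end{equation*}
    % Quantum mechanics allows up to 2n cos(pi/(2n)), attained by a
    % maximally entangled two-qubit state with measurements in the
    % X-Z plane; this maximal violation is what the paper self-tests.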

    The Role of Interactivity in Local Differential Privacy

    We study the power of interactivity in local differential privacy. First, we focus on the difference between fully interactive and sequentially interactive protocols. Sequentially interactive protocols may query users adaptively in sequence, but they cannot return to previously queried users. The vast majority of existing lower bounds for local differential privacy apply only to sequentially interactive protocols, and before this paper it was not known whether fully interactive protocols were more powerful. We resolve this question. First, we classify locally private protocols by their compositionality, the multiplicative factor k ≥ 1 by which the sum of a protocol's single-round privacy parameters exceeds its overall privacy guarantee. We then show how to efficiently transform any fully interactive k-compositional protocol into an equivalent sequentially interactive protocol with an O(k) blowup in sample complexity. Next, we show that our reduction is tight by exhibiting a family of problems such that for any k, there is a fully interactive k-compositional protocol which solves the problem, while no sequentially interactive protocol can solve the problem without at least an Ω̃(k) factor more examples. We then turn our attention to hypothesis testing problems. We show that for a large class of compound hypothesis testing problems (which include all simple hypothesis testing problems as a special case) a simple noninteractive test is optimal among the class of all (possibly fully interactive) tests.
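    Spelling out the compositionality notion from the abstract in symbols (a direct formalization; the round count T is introduced here for clarity): for a protocol that runs in rounds t = 1, ..., T with per-round privacy parameters ε_t and an overall ε-local differential privacy guarantee, the compositionality factor is

    % k-compositionality: how far the naive composition of per-round
    % privacy parameters overshoots the protocol's overall guarantee.
    \begin{equation*}
      k = \frac{\sum_{t=1}^{T} \varepsilon_t}{\varepsilon} \ge 1,
    \end{equation*}
    % and the paper's reduction converts any fully interactive
    % k-compositional protocol into a sequentially interactive one
    % with an O(k) blowup in sample complexity.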