
    Detection and identification of sparse audio tampering using distributed source coding and compressive sensing techniques

    In most practical applications, for the sake of information integrity it is useful not only to detect whether a multimedia content has been modified, but also to identify which kind of attack has been carried out. In the case of audio streams, for example, it may be useful to localize the tampering in the time and/or frequency domain. In this paper we devise a hash-based tampering detection and localization system that exploits compressive sensing principles. The multimedia content provider produces a small hash signature using a limited number of random projections of a time-frequency representation of the original audio stream. At the content user side, the hash signature is used to estimate the distortion between the original and the received stream and, provided that the tampering is sufficiently sparse or sparsifiable in some orthonormal basis expansion or redundant dictionary (e.g. DCT or wavelet), to identify the time-frequency portion of the stream that has been manipulated. In order to keep the hash length small, the algorithm exploits distributed source coding techniques.
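The random-projection hashing idea described above can be sketched in a few lines. This is a minimal toy illustration, not the authors' exact system: the time-frequency representation here is a plain framed FFT magnitude, the frame size and projection count are arbitrary, and the distributed-source-coding stage is omitted.

```python
import numpy as np

def audio_hash(signal, n_proj=64, frame=256, seed=0):
    """Hash an audio stream: random projections of a coarse
    time-frequency representation (framed FFT magnitudes)."""
    n_frames = len(signal) // frame
    tf = np.abs(np.fft.rfft(signal[:n_frames * frame].reshape(n_frames, frame), axis=1))
    x = tf.ravel()
    # Projection matrix is shared with the verifier via the seed.
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n_proj, x.size)) / np.sqrt(n_proj)
    return A @ x

def distortion_estimate(hash_orig, received, **kw):
    """Estimate distortion between original and received streams
    from the compact hash alone."""
    return np.linalg.norm(hash_orig - audio_hash(received, **kw))

# Toy check: an unmodified stream yields zero distortion, a tampered one does not.
sig = np.sin(np.linspace(0, 100, 4096))
h = audio_hash(sig)
tampered = sig.copy()
tampered[1000:1100] += 0.5
```

Because the projection is linear, the hash of the difference equals the difference of the hashes, which is what makes distortion estimation from the hash alone possible.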

    Therapeutic education for cancer patients: the experience of Italian nurses

    Introduction: Therapeutic patient education is a complex process requiring a proper level of communication between the patient and the healthcare professional. Nurses play a key role in providing patients and their families with educational activities. Objective: This paper reports on a study investigating the experiences of Italian nurses with regard to their role in therapeutic education for cancer patients. Methods: Qualitative research. Semi-structured interviews were carried out with 52 nurses working in different Local Health Service Units of two northern Italian regions, Piedmont and Valle d'Aosta. To identify categories and items arising from the data, the researchers used qualitative content analysis. Results: The interview material was classified into six main categories: a) patient education as a daily care activity; b) relevance of communication and dialogue for educational purposes; c) relative usefulness of written information; d) recording of therapeutic education; e) patients' feedback as a tool for assessing therapeutic education; and f) difficult communication. Conclusions: The nurses' experience of their professional role in therapeutic education for cancer patients shows a steady presence of educational activities carried out in an unplanned way. This research confirms the need to launch educational interventions for nurses. Implications for Practice: It is essential to implement an action plan to promote opportunities for professional training in the field, since one of the most frequent reasons for project failure in therapeutic education is the lack of expert human resources.

    Effect of season, late embryonic mortality and progesterone production on pregnancy rates in pluriparous buffaloes (Bubalus bubalis) after artificial insemination with sexed semen

    The use of sexed semen technology in buffaloes is nowadays becoming more and more accepted by farmers, both to overcome the burden of unwanted male calves with the related costs and to improve production and genetic gain more efficiently. The aim of this study was to assess the effect of several variables on pregnancy outcome after deposition of sexed semen through artificial insemination (AI). Pluriparous buffaloes from two different farms (N = 152) were screened, selected, and subjected to an Ovsynch protocol for AI using nonsexed and sexed semen from four tested bulls. AI was performed in two distinct periods of the year: September to October and January to February. Neither farm nor bull had a significant effect on pregnancy rates pooled across the two periods. The sperm-sexing process did not affect pregnancy rates at 28 days after AI (44/73, 60.2%, for nonsexed versus 50/79, 63.2%, for sexed semen; P = 0.70) or at 45 days after AI (33/73, 45.2%, versus 33/79, 49.3%; P = 0.60). The pregnancy rate at 28 days after AI during the transitional period of January to February was higher than in September to October (47/67, 70.1%, versus 47/85, 55.2%; P = 0.06). When the same pregnant animals were checked at Day 45 after AI, the difference between the two periods disappeared because of higher embryonic mortality (32/67, 47.7%, versus 40/85, 47.0%; P = 0.93). Hematic progesterone concentration at Day 10 after AI did not distinguish animals pregnant at Day 28 that would or would not maintain pregnancy until Day 45 (P = 0.21). On the contrary, when blood samples were taken at Day 20 after AI, the difference in progesterone concentration between pregnant animals that would and would not maintain their pregnancy until Day 45 was significant for pooled (P = 0.00), nonsexed (P = 0.00), and sexed semen (P = 0.09). A similar trend was observed when blood samples were taken at Day 25, being highly significant for pooled, nonsexed, and sexed semen (P = 0.00). The difference in hematic progesterone concentration between the two periods of the year was highly significant for animals pregnant at 28 days after AI when blood samples were taken at Day 20 after AI (pooled, nonsexed, and sexed semen: P = 0.00, 0.00, and 0.06, respectively) and for animals pregnant at Day 45 (P = 0.00, 0.00, and 0.01, respectively). From these results, it can be stated that measuring hematic progesterone concentration from Day 20 after AI can be predictive of pregnancy maintenance until Day 45. Furthermore, the transitional period of January to February, although characterized by a higher pregnancy outcome than September to October, suffers from higher late embryonic mortality, as evidenced by a significantly different hematic progesterone concentration between the two periods at Day 20 after AI.
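The reported pregnancy-rate P values are consistent with a standard two-sided two-proportion z-test; a quick sketch of that test (our assumption, since the abstract does not name the statistical method used) reproduces two of the figures:

```python
import math

def two_proportion_z_test(s1, n1, s2, n2):
    """Two-sided z-test for the difference of two binomial proportions.
    Returns (z, p_value), using the pooled-proportion standard error."""
    p1, p2 = s1 / n1, s2 / n2
    pooled = (s1 + s2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, math.erfc(abs(z) / math.sqrt(2))

# 28-day pregnancy rate, January-February (47/67) versus September-October (47/85).
z_season, p_season = two_proportion_z_test(47, 67, 47, 85)
# 28-day pregnancy rate, nonsexed (44/73) versus sexed (50/79) semen.
z_semen, p_semen = two_proportion_z_test(44, 73, 50, 79)
```

With these counts the test gives p-values close to the reported P = 0.06 and P = 0.70.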

    Specifying and Analysing SOC Applications with COWS

    COWS is a recently defined process calculus for specifying and combining service-oriented applications while modelling their dynamic behaviour. Since its introduction, a number of methods and tools have been devised to analyse COWS specifications, such as a type system to check confidentiality properties, and a logic and a model checker to express and check functional properties of services. In this paper, by means of a case study in the area of automotive systems, we demonstrate that COWS, with some mild linguistic additions, can model all the phases of the life cycle of service-oriented applications, such as publication, discovery, negotiation, orchestration, deployment, reconfiguration and execution. We also provide a flavour of the properties that can be analysed by using the tools mentioned above.

    Palatoschisis in the canine species

    Palatoschisis, or secondary cleft palate, is one of the most frequent congenital abnormalities found in newborn puppies. The most commonly used correction techniques, as well as the one used by the authors, are described.

    Efficient Parallel Statistical Model Checking of Biochemical Networks

    We consider the problem of verifying stochastic models of biochemical networks against behavioural properties expressed in temporal logic. Exact probabilistic verification approaches, such as CSL/PCTL model checking, are undermined by a huge computational demand that rules them out for most real case studies. Less demanding approaches, such as statistical model checking, estimate the likelihood that a property is satisfied by sampling executions of the stochastic model. We propose a methodology for efficiently estimating the likelihood that an LTL property P holds for a stochastic model of a biochemical network. As with other statistical verification techniques, the proposed methodology uses a stochastic simulation algorithm to generate execution samples; however, three key aspects improve its efficiency. First, sample generation is driven by on-the-fly verification of P, which results in optimal overall simulation time. Second, the confidence interval estimation for the probability that P holds is based on an efficient variant of the Wilson method, which ensures faster convergence. Third, the whole methodology is designed in a parallel fashion, and a prototype software tool has been implemented that performs the sampling/verification process in parallel on an HPC architecture.
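The Wilson interval mentioned in the abstract is a textbook construction; a minimal sketch of how it would drive a sampling loop follows. This is the standard (non-sequential) Wilson score interval, not the paper's exact variant, and the stopping rule and width threshold are illustrative assumptions.

```python
import math
import random

def wilson_interval(successes, trials, z=1.96):
    """Wilson score interval for a binomial proportion.
    Returns (low, high); z = 1.96 gives roughly 95% confidence."""
    if trials == 0:
        return (0.0, 1.0)
    p = successes / trials
    denom = 1 + z * z / trials
    centre = (p + z * z / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / trials + z * z / (4 * trials * trials))
    return (centre - half, centre + half)

def estimate_probability(sample, width=0.05, max_runs=100_000):
    """Draw Boolean samples (e.g. 'did this simulated run satisfy P?')
    until the Wilson interval is narrower than `width`."""
    succ = trials = 0
    while trials < max_runs:
        succ += sample()
        trials += 1
        lo, hi = wilson_interval(succ, trials)
        if trials > 30 and hi - lo < width:
            break
    return succ / trials, (lo, hi)

# Toy stand-in for a simulation run: property holds with probability 0.3.
random.seed(0)
p_hat, (lo, hi) = estimate_probability(lambda: random.random() < 0.3)
```

In an actual statistical model checker, `sample` would run one stochastic simulation of the network and verify the LTL property on the resulting trace.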

    Identification of Sparse Audio Tampering Using Distributed Source Coding and Compressive Sensing Techniques

    In the past few years, a large number of techniques have been proposed to identify whether a multimedia content has been illegally tampered with. Nevertheless, very few efforts have been devoted to identifying which kind of attack has been carried out, especially because of the large amount of data required for this task. We propose a novel hashing scheme that exploits the paradigms of compressive sensing and distributed source coding to generate a compact hash signature, and we apply it to the case of audio content protection. The audio content provider produces a small hash signature by computing a limited number of random projections of a perceptual, time-frequency representation of the original audio stream; the audio hash is given by the syndrome bits of an LDPC code applied to the projections. At the content user side, the hash is decoded using distributed source coding tools. If the tampering is sparsifiable or compressible in some orthonormal basis or redundant dictionary, it is possible to identify the time-frequency position of the attack with a hash size as small as 200 bits per second; the bit saving obtained by introducing distributed source coding ranges from 20% to 70%.
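The localization step relies on a standard compressive-sensing fact: by linearity, the difference between the two hash vectors is the projection of the (sparse) tampering, so the attack's position can be recovered with a sparse solver. A toy sketch using orthogonal matching pursuit follows; the dimensions, the Gaussian sensing matrix, and OMP itself are illustrative assumptions (the paper's LDPC syndrome coding stage is not reproduced here).

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = A @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        sub = A[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Toy setup: n-cell time-frequency plane, m random projections, k-sparse attack.
rng = np.random.default_rng(1)
n, m, k = 400, 80, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)
tamper = np.zeros(n)
tamper[[17, 200, 350]] = [1.0, -0.8, 0.5]
# Difference of hashes = projection of the tampering, by linearity.
delta = A @ tamper
recovered = omp(A, delta, k)
```

With enough projections relative to the sparsity of the attack, the recovered vector pinpoints which time-frequency cells were manipulated.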

    Designing human-centric software artifacts with future users: a case study

    The quality and quantity of participation supplied by human beings during the different phases of the design and development of a software artifact are central to studies in human-centered computing. In this paper, we investigate what kind of experienced people should be engaged to design a new computational artifact when a participatory approach is adopted. We compared two approaches: the former including only future users (i.e., novices) in the design process, and the latter enlarging the community to expert users. We experimented with the design of a large software artifact, in use at the University of Bologna, engaging almost 1500 users, and employed statistical methodologies to validate our findings. Our analysis provides mounting evidence that expert users contributed to the design of the artifact only to a small extent. Instead, most of the innovative initiatives came from future users, thus overcoming some traditional limitations that tend to exclude future users from this kind of process. We challenge the traditional opinion that expert users typically provide a more reliable contribution in a participatory software design process, demonstrating instead that future users are often better suited. Along this line, this is the first paper in the field of human-centric computing that discusses the relevance of offering future users a larger design space, intended as a higher level of freedom in a software design situation demarcated by precise design constraints. In this sense, the outcome has been positive.