
    Suspensions of finite-size neutrally-buoyant spheres in turbulent duct flow

    We study the turbulent square duct flow of dense suspensions of neutrally buoyant spherical particles. Direct numerical simulations (DNS) are performed over the range of volume fractions ϕ = 0–0.2, using the immersed boundary method (IBM) to account for the dispersed phase. A Reynolds number of 5600, based on the hydraulic diameter, is considered. We report flow features and particle statistics specific to this geometry and compare the results to the case of two-dimensional channel flows. In particular, we observe that for ϕ = 0.05 and 0.1 particles preferentially accumulate on the corner bisectors, close to the duct corners, as also observed for laminar square duct flows with the same duct-to-particle size ratio. At the highest volume fraction, particles preferentially accumulate in the core region. For channel flows, in the absence of lateral confinement, particles are instead found to be uniformly distributed across the channel. We also observe that the intensity of the cross-stream secondary flows increases (with respect to the unladen case) with the volume fraction up to ϕ = 0.1, as a consequence of the high concentration of particles along the corner bisectors. For ϕ = 0.2 the turbulence activity is strongly reduced and the intensity of the secondary flows falls below that of the unladen case. The friction Reynolds number increases with ϕ in dilute conditions, as observed for channel flows. However, for ϕ = 0.2 the mean friction Reynolds number decreases below the value for ϕ = 0.1. (Submitted to Journal of Fluid Mechanics.)
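The Reynolds number quoted above is based on the hydraulic diameter of the duct. As a small illustrative sketch (the function names and numerical values below are my own, not the paper's), the hydraulic diameter and bulk Reynolds number can be computed as:

```python
# Illustrative helpers (not from the paper): the bulk Reynolds number used
# in duct-flow studies is based on the hydraulic diameter D_h = 4A / P,
# which for a square duct of side h reduces to D_h = h.

def hydraulic_diameter(area: float, perimeter: float) -> float:
    """Hydraulic diameter D_h = 4A / P of a duct cross-section."""
    return 4.0 * area / perimeter

def bulk_reynolds(u_bulk: float, d_h: float, nu: float) -> float:
    """Bulk Reynolds number Re = U_b * D_h / nu."""
    return u_bulk * d_h / nu

# For a square duct of side h, D_h = 4*h^2 / (4*h) = h.
h = 0.01  # hypothetical duct side [m]
assert abs(hydraulic_diameter(h * h, 4 * h) - h) < 1e-12
```

For a square duct the hydraulic diameter simply equals the side length, which is why the paper can use it interchangeably with the duct size.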

    Comparison of Three Rapid Commercial Canine Parvovirus Antigen Detection Tests with Quantitative Polymerase Chain Reaction (qPCR)

    ABSTRACT Objective: To evaluate the effectiveness of three commercial rapid ELISA tests in comparison with qPCR for the diagnosis of canine parvovirus infection using fecal samples. Background: Canine parvovirus-2 (CPV-2) infection is an acute, life-threatening, and highly contagious viral disease. Infected dogs shed virus in their stool, and a variety of diagnostic methods have been developed for diagnosing the infection from fecal samples. Rapid ELISA tests are commonly used in veterinary practice; however, the accuracy of their results has been questioned in many reports, and a low sensitivity has been reported for these tests. Methods: The effectiveness of three commercial parvovirus rapid ELISA tests (Zoetis, Abaxis, and IDEXX) was compared with qPCR, a quantitative laboratory assay with high sensitivity and specificity that also provides information on the amount of virus present. Fecal samples from 80 dogs suspected of CPV-2 infection based on clinical signs were tested for canine parvovirus by the three rapid ELISA tests and by qPCR. Sensitivity, specificity, and positive and negative predictive values (PPV and NPV) were calculated and compared for all tests. Results: A total of 42 samples qualified for testing based on the inclusion criteria. qPCR identified 22 positive samples; however, only 10 of those samples were diagnosed as positive by the ELISA kits. There was no difference among the results of the three ELISA tests from the different manufacturers. Ct values for the qPCR tests ranged from 12.03 to 34.21; Ct values for the samples that were false negatives by ELISA ranged from 21.12 to 34.21. The sensitivity and specificity of the ELISA tests were 64% and 100%, respectively, versus 100% sensitivity and specificity for the qPCR method. The PPV and NPV for the ELISA tests were 100% and 62.5%, respectively. Conclusion: Rapid ELISA tests have low sensitivity; negative results should therefore be confirmed by PCR.
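The four accuracy measures reported above follow from a standard 2×2 confusion matrix against the reference test. A minimal sketch of those formulas, using hypothetical counts rather than the study's raw data:

```python
# Standard diagnostic-accuracy formulas, treating qPCR as the reference
# standard. The counts below are illustrative, NOT the study's raw data.

def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Sensitivity, specificity, PPV, and NPV from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),  # positives detected among reference-positives
        "specificity": tn / (tn + fp),  # negatives detected among reference-negatives
        "ppv": tp / (tp + fp),          # probability a positive result is truly positive
        "npv": tn / (tn + fn),          # probability a negative result is truly negative
    }

# Example: with zero false positives, specificity and PPV are both 100%,
# the same pattern reported for the ELISA kits.
m = diagnostic_metrics(tp=10, fp=0, fn=12, tn=20)
```

With no false positives, every ELISA-positive is a true positive, which is why PPV and specificity are both 100% while sensitivity and NPV suffer from the missed cases.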

    Countering internet packet classifiers to improve user online privacy

    Internet traffic classification, or packet classification, is the act of classifying packets using statistical data extracted from packets transmitted on a computer network. Internet traffic classification is an essential tool for Internet service providers to manage network traffic, provide users with the intended quality of service (QoS), and perform surveillance. QoS measures prioritize one type of network traffic over others based on preset criteria; for instance, they give higher priority or bandwidth to video traffic over website browsing traffic. Internet packet classification methods are also used for automated intrusion detection: they analyze incoming traffic patterns and identify malicious packets used for denial-of-service (DoS) or similar attacks. Internet traffic classification may also be used for website fingerprinting attacks, in which an intruder analyzes the encrypted traffic of a user to find behavior or usage patterns and infer the user's online activities. Protecting users' online privacy against traffic classification attacks is the primary motivation of this work. This dissertation shows the effectiveness of machine learning algorithms in identifying user traffic by comparing 11 state-of-the-art classifiers, and proposes three anonymization methods for masking generated user network traffic to counter Internet packet classifiers: equalized packet length, equalized packet count, and equalized inter-arrival times of TCP packets. This work compares the results of these anonymization methods to show their effectiveness in reducing the performance of machine learning algorithms for traffic classification. The results are validated using newly generated user traffic. Additionally, a novel model based on a generative adversarial network (GAN) is introduced to automate countering adversarial traffic classifiers.
This model, called the GAN tunnel, generates pseudo traffic patterns imitating the distributions of real traffic generated by actual applications, and encapsulates the actual network packets inside the generated traffic packets. The GAN tunnel's performance is tested against random forest and extreme gradient boosting (XGBoost) traffic classifiers. In the scenarios tested in this dissertation, these classifiers were unable to detect the actual source application of the data exchanged through the GAN tunnel.
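The first of the three countermeasures, equalized packet length, removes payload size as a classification feature by forcing every transmitted unit to one fixed size. A minimal sketch of the idea (my own illustration under assumed parameters, not the dissertation's code):

```python
# Illustrative sketch of the "equalized packet length" idea: split and pad
# every outgoing payload into chunks of one fixed size so a classifier can
# no longer use packet length as a distinguishing feature.
# TARGET_LEN is a hypothetical fixed payload size, not a value from the work.

PAD_BYTE = b"\x00"
TARGET_LEN = 1448  # hypothetical fixed payload size in bytes

def equalize_length(payload: bytes, target: int = TARGET_LEN) -> list[bytes]:
    """Split a payload into chunks of exactly `target` bytes, padding the last."""
    chunks = [payload[i:i + target] for i in range(0, len(payload), target)] or [b""]
    # pad the final (short) chunk up to the target length
    chunks[-1] = chunks[-1] + PAD_BYTE * (target - len(chunks[-1]))
    return chunks

out = equalize_length(b"GET / HTTP/1.1\r\n")
assert all(len(chunk) == TARGET_LEN for chunk in out)
```

The trade-off is bandwidth overhead: every short packet is inflated to the target size, which is exactly the cost the dissertation's comparison of anonymization methods has to weigh against classifier degradation.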

    On the role of exchange of power and information signals in control and stability of the human-robot interaction

    A human's ability to perform physical tasks is limited not only by his intelligence but also by his physical strength. If, in an appropriate environment, a machine's mechanical power is closely integrated with a human arm's mechanical power under the control of the human intellect, the resulting system will be superior to a loosely integrated combination of a human and a fully automated robot. Therefore, we must develop a fundamental solution to the problem of 'extending' human mechanical power. The work presented here defines 'extenders' as a class of robot manipulators worn by humans to increase human mechanical strength, while the wearer's intellect remains the central control system for manipulating the extender. The human, in physical contact with the extender, exchanges power and information signals with it. The aim is to determine the fundamental building blocks of an intelligent controller: a controller which allows interaction between humans and a broad class of computer-controlled machines via simultaneous exchange of both power and information signals. The prevalent trend in automation has been to physically separate the human from the machine, so the human must always send information signals via an intermediary device (e.g., joystick, pushbutton, light switch). Extenders, however, are perfect examples of self-powered machines that are built and controlled for the optimal exchange of power and information signals with humans. The human wearing the extender is in physical contact with the machine, so power transfer is unavoidable, and information signals from the human help to control the machine. Commands are transferred to the extender via the contact forces and the EMG signals between the wearer and the extender. The extender augments human motor ability without accepting any explicit commands: it accepts the EMG signals and the contact force between the person's arm and the extender, and 'translates' them into a desired position.
In this unique configuration, mechanical power transfer between the human and the extender occurs because the human is pushing against the extender. The extender transfers to the human's hand, in feedback fashion, a scaled-down version of the actual external load which the extender is manipulating. This natural feedback force on the human's hand allows him to 'feel' a modified version of the external forces on the extender. The information signals from the human (e.g., EMG signals) to the computer reflect human cognitive ability, and the power transfer between the human and the machine (e.g., physical interaction) reflects human physical ability. Thus the information transfer to the machine augments cognitive ability, and the power transfer augments motor ability. These two actions are coupled through the human's cognitive/motor dynamic behavior. The goal is to derive the control rules for a class of computer-controlled machines that augment human physical and cognitive abilities in certain manipulative tasks.
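The two interaction channels described above, scaled-down force reflection to the hand and motion commanded by the wearer's contact force, can be caricatured by two toy laws. This is an illustration under my own assumed gains, not the authors' controller:

```python
# Toy sketch of the extender's two interaction channels (gains are
# arbitrary illustrative values, not from the work).

def felt_force(f_external: float, scale: float = 0.1) -> float:
    """Feedback force on the human hand: a scaled-down copy of the
    external load the extender is carrying (here 10:1)."""
    return scale * f_external

def admittance_velocity(f_contact: float, gain: float = 0.002) -> float:
    """Simple admittance-style law: the commanded extender velocity is
    proportional to the contact force the wearer applies [m/s per N]."""
    return gain * f_contact

# A 500 N payload is reflected to the wearer as 50 N, and a 100 N push
# commands a 0.2 m/s motion with this (arbitrary) gain.
assert felt_force(500.0) == 50.0
```

The scaling factor is the key design parameter: it determines how much of the load the wearer 'feels' while the extender supplies the rest.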

    Development of Technology Identification model focusing on Technology Intelligence and Components and Indicators extracted from Upstream Documents of the Islamic Republic of Iran

    In today's world, intelligence on science, technology, and innovation can be highly influential: a wrong decision can have irreparable consequences and may even harm a country's economy for several years. Given that the upstream documents are the guidelines for determining the macro-policies of science and technology, it is necessary to analyze these documents to identify the components of national power from their perspective. In this study, we investigated the upstream documents and the statements of the Supreme Leader to enumerate the dimensions and components of national power. The method was based on documentary study, and the Content Analysis method was used to analyze the findings. In the policy section, upstream documents such as the constitution, the vision of the I.R. Iran on the horizon of 1404, the declaration of the second step of the Islamic Revolution of Iran, and other documents were examined. In the speeches section, statements of the Supreme Leader to scientific experts, university professors, industry and technology activists, and others were reviewed and analyzed. The themes extracted from these documents were classified into six dimensions: political, defense-security, environmental, economic, scientific-technological, and cultural discourse. Under these dimensions, 75 components were extracted. Based on the extracted components, technologies can be identified and ranked.

    Cost-effectiveness analysis of intravenous levetiracetam versus intravenous phenytoin for early onset seizure prophylaxis after neurosurgery and traumatic brain injury

    Rashid Kazerooni (Pharmacoeconomics/Formulary Management, Veterans Affairs San Diego Healthcare System, San Diego, CA, USA); Mark Bounthavong (Veterans Affairs San Diego Healthcare System and UCSD Skaggs School of Pharmacy and Pharmaceutical Sciences, San Diego, CA, USA)
    Objective: There has been growing interest in newer anti-epileptic drugs (AEDs) for seizure prophylaxis in the intensive care setting because of the safety and monitoring issues associated with conventional AEDs such as phenytoin. This analysis assessed the cost-effectiveness of levetiracetam versus phenytoin for early onset seizure prophylaxis after neurosurgery and traumatic brain injury (TBI).
    Methods: A cost-effectiveness analysis was conducted from the US hospital perspective using a decision analysis model. Probabilities for the model were taken from three studies comparing levetiracetam and phenytoin in post-neurosurgery or TBI patients. The outcome measure was a successful seizure prophylaxis regimen (SSPR) within 7 days, defined as patients who did not seize and did not require discontinuation of the AED due to adverse drug reactions (ADRs). One-way sensitivity analyses and a probabilistic sensitivity analysis were conducted to test the robustness of the base-case results.
    Results: The total direct costs of seizure prophylaxis were $8,784.63 for levetiracetam and $8,743.78 for phenytoin. The cost-effectiveness ratio of levetiracetam was $10,044.91 per SSPR compared to $11,525.63 per SSPR with phenytoin. The effectiveness probability (patients with no seizures and no ADR requiring a change in therapy) was higher in the levetiracetam group (87.5%) than in the phenytoin group (75.9%). The incremental cost-effectiveness ratio for levetiracetam versus phenytoin was $360.82 per additional SSPR gained.
    Conclusions: Levetiracetam has the potential to be more cost-effective than phenytoin for early onset seizure prophylaxis after neurosurgery if the payer's willingness-to-pay is greater than $360.82 per additional SSPR gained.
    Keywords: phenytoin, levetiracetam, seizure prophylaxis, cost-effectiveness, traumatic brain injury (TBI), neurosurgery
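The incremental cost-effectiveness ratio above divides the extra cost of the new therapy by its extra effectiveness. As a sketch using the rounded figures from the abstract (so the result comes out near $352 rather than exactly the reported $360.82, which presumably uses unrounded model probabilities):

```python
# Sketch of the incremental cost-effectiveness ratio (ICER) calculation
# behind the analysis. Inputs are the rounded figures from the abstract,
# so the result differs slightly from the reported $360.82.

def icer(cost_new: float, cost_old: float, eff_new: float, eff_old: float) -> float:
    """ICER = (C_new - C_old) / (E_new - E_old):
    extra cost per additional successful outcome."""
    return (cost_new - cost_old) / (eff_new - eff_old)

# Levetiracetam vs phenytoin: cost difference $40.85, effectiveness
# difference 0.116 SSPR -> roughly $352 per additional SSPR gained.
delta = icer(8784.63, 8743.78, 0.875, 0.759)
```

A therapy is preferred under this framework whenever the payer's willingness-to-pay per additional SSPR exceeds the ICER, which is exactly the decision rule stated in the conclusion.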