3,320 research outputs found

    Radial Dynamics of Pickering-stabilised Endoskeletal Antibubbles and Their Components in Pulsed Ultrasound

    Get PDF
    Liquids containing microscopic antibubbles may have theranostic applications in harmonic diagnostic ultrasonic imaging and in ultrasound-assisted drug delivery. Presently there are no known agents available with the acoustic properties required for use in both of these applications. The Pickering-stabilised antibubble may possess the desired acoustic properties to be such a theranostic agent. An antibubble is a gas bubble containing at least one incompressible core. An antibubble is inherently unstable and thus needs to be stabilised to exist for longer than a moment. One such stabilising method, involving the adsorption of nanoparticles to gas–liquid interfaces, is called Pickering stabilisation. A Pickering-stabilised antibubble responds to an incident sound field by means of radial pulsation and other, more complicated, dynamics. Despite the potential application of microscopic antibubbles in theranostics, their dynamic behaviour and the acoustic regimes in which this behaviour occurs are not known. The purpose of this research was to predict the dynamic response of Pickering-stabilised antibubbles to pulsed ultrasound, and to identify and quantify the contribution of each of the Pickering-stabilised antibubble components to that behaviour. Radial excursions of antibubbles and their components during ultrasound exposure were extracted from high-speed footage. The applied ultrasound had a centre frequency of 1 MHz and pressure amplitudes between 0.20 MPa and 1.30 MPa. Moreover, damping coefficients, pulsation phases, and excursions of antibubbles and antibubble components were computed with equations describing a forced mass–spring–dashpot system and an adapted Rayleigh-Plesset equation. Over a range of driving pressure amplitudes, fragmentation thresholds were computed for antibubbles of varying size, core volume, shell stiffness, and driving frequency. In addition, the feasibility of an antibubble component for the disruption of cell walls was tested. From the experimental data, it was found that antibubble contractions and expansions were symmetrical and predictable at an acoustic amplitude of 0.20 MPa, whilst the pulsations were asymmetrical and less predictable at an acoustic amplitude of 1.00 MPa. These results show that the presence of the core inside the antibubble hampers the contraction of a collapsing antibubble and improves its stability. Consequently, Pickering-stabilised antibubbles appear to be feasible candidates for ultrasonic imaging, with greater stability than the agents currently in use. Micron-sized antibubbles, much smaller than resonant size, were computed to have a pulsation phase difference of up to a 16th of a cycle with respect to free gas bubbles. This difference in oscillation phase is a result of the increased damping coefficient caused by the friction of the internal components and shell of the antibubble. This indicates that altering the damping of the shell or skeletal material of minute antibubbles can alter the degree to which the particle's oscillation is in phase with the sound field. The shell stiffness of Pickering-stabilised microbubbles without incompressible contents was measured to be 7.6 N m⁻¹ throughout low-amplitude sonication. Under high-amplitude sonication, the maximum expansions of microbubbles, measured from high-speed camera footage, either agreed with those computed for Pickering-stabilised microbubbles or exceeded them.
The differing oscillation amplitudes of similarly sized microbubbles are attributed to shell disruption of differing severity. For a 3-μm-radius antibubble with a 90% core radius, subjected to a pulse of centre frequency 1 MHz, the fragmentation threshold was computed to increase drastically with shell stiffness. At a driving frequency of 13 MHz, the fragmentation threshold was computed to correspond to a mechanical index of less than 0.4, irrespective of shell stiffness. Shell stiffness changes the resonance frequency, and thus the fragmentation threshold, of antibubbles. This means that the resonance frequency of an extremely low concentration and quantity of a homogeneous agent can be determined using microscopy. At driving frequencies above 1 MHz, the fragmentation threshold was computed to correspond to a mechanical index of less than 0.5, irrespective of shell stiffness. Antibubbles exposed to high-amplitude ultrasound were found to have an exponential fragment size distribution. This brings us closer to understanding and controlling disruption and material release for these particles. If the pressure of the regime is known, the number of antibubble fragments produced can be determined theoretically. Under low-amplitude ultrasound exposure, hydrophobic particles, a common component of antibubbles, were observed to jet through wood fibre cell walls without causing visible internal structural damage to these cells. Hydrophobic particles can thus act as inertial cavitation nuclei which collapse asymmetrically close to solid boundaries such as wood pulp fibres. This indicates that hydrophobic particles on their own may be used for applications such as trans-dermal drug delivery. The dynamic response of Pickering-stabilised antibubbles to ultrasound has been predicted. Furthermore, the respective behaviour of Pickering-stabilised antibubble components under theranostic ultrasound conditions has been identified. This work has led to a straightforward way to determine the elasto-mechanical properties of small samples of contrast agent. Whilst possessing some theranostic properties, Pickering-stabilised antibubbles may be more suitable as replacements for current diagnostic agents. Hydrophobic particles, a current constituent of the Pickering-stabilised antibubble, may, however, prove to be promising theranostic agents.
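    As a point of reference for the fragmentation thresholds quoted above, the mechanical index used in diagnostic ultrasound is conventionally defined from the peak negative pressure and the centre frequency. The worked example below uses this standard definition (an assumption here, since the abstract does not restate it) together with the highest pressure amplitude applied in the experiments, 1.30 MPa, to show why even that pressure corresponds to a modest mechanical index at 13 MHz.

```latex
\mathrm{MI} = \frac{p_{-}\,[\mathrm{MPa}]}{\sqrt{f_{\mathrm{c}}\,[\mathrm{MHz}]}},
\qquad
\left.\mathrm{MI}\right|_{p_{-}=1.30~\mathrm{MPa},\; f_{\mathrm{c}}=13~\mathrm{MHz}}
  = \frac{1.30}{\sqrt{13}} \approx 0.36 < 0.4 .
```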

    TANDEM: taming failures in next-generation datacenters with emerging memory

    Get PDF
    The explosive growth of online services, leading to unforeseen scales, has made modern datacenters highly prone to failures. Taming these failures hinges on fast and correct recovery, minimizing service interruptions. Applications, owing to recovery, entail additional measures to maintain a recoverable state of data and computation logic during their failure-free execution. However, these precautionary measures have severe implications for performance, correctness, and programmability, making recovery incredibly challenging to realize in practice. Emerging memory, particularly non-volatile memory (NVM) and disaggregated memory (DM), offers a promising opportunity to achieve fast recovery with maximum performance. However, incorporating these technologies into datacenter architecture presents significant challenges: their distinct architectural attributes, which differ significantly from those of traditional memory devices, introduce new semantic challenges for implementing recovery, complicating correctness and programmability. Can emerging memory enable fast, performant, and correct recovery in the datacenter? This thesis aims to answer this question while addressing the associated challenges. When architecting datacenters with emerging memory, system architects face four key challenges: (1) how to guarantee correct semantics; (2) how to efficiently enforce correctness with optimal performance; (3) how to validate end-to-end correctness, including recovery; and (4) how to preserve programmer productivity (programmability). This thesis addresses these challenges through the following approaches: (a) defining precise consistency models that formally specify correct end-to-end semantics in the presence of failures (consistency models also play a crucial role in programmability); (b) developing new low-level mechanisms to efficiently enforce the prescribed models given the capabilities of emerging memory; and (c) creating robust testing frameworks to validate end-to-end correctness and recovery. We start our exploration with non-volatile memory (NVM), which offers fast persistence capabilities directly accessible through the processor's load-store (memory) interface. Notably, these capabilities can be leveraged to enable fast recovery for Log-Free Data Structures (LFDs) while maximizing performance. However, due to the complexity of modern cache hierarchies, data hardly persist in any specific order, jeopardizing recovery and correctness. Therefore, recovery needs primitives that explicitly control the order of updates to NVM (known as persistency models). We outline the precise specification of a novel persistency model, Release Persistency (RP), which provides a consistency guarantee for LFDs on what remains in non-volatile memory upon failure. To efficiently enforce RP, we propose a novel microarchitecture mechanism, lazy release persistence (LRP). Using standard LFD benchmarks, we show that LRP achieves fast recovery while incurring minimal overhead on performance. We continue our discussion with memory disaggregation, which decouples memory from traditional monolithic servers, offering a promising pathway to very high availability in replicated in-memory data stores. Achieving such availability hinges on transaction protocols that can efficiently handle recovery in this setting, where compute and memory are independent. However, there is a challenge: disaggregated memory (DM) fails to work with RPC-style protocols, mandating one-sided transaction protocols.
Exacerbating the problem, one-sided transactions expose critical low-level ordering to architects, posing a threat to correctness. We present a highly available transaction protocol, Pandora, that is specifically designed to achieve fast recovery in disaggregated key-value stores (DKVSes). Pandora is the first one-sided transactional protocol that ensures correct, non-blocking, and fast recovery in DKVS. Our experimental implementation artifacts demonstrate that Pandora achieves fast recovery and high availability while causing minimal disruption to services. Finally, we introduce a novel targeted litmus-testing framework, DART, to validate the end-to-end correctness of transactional protocols with recovery. Using DART's targeted testing capabilities, we have found several critical bugs in Pandora, highlighting the need for robust end-to-end testing methods in the design loop to iteratively fix correctness bugs. Crucially, DART is lightweight and black-box, thereby eliminating any intervention from the programmers.
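    The litmus-testing idea can be pictured with a deliberately simplified sketch (hypothetical code, not part of the thesis artifacts): given the sequence of writes a protocol issues to persistent or disaggregated memory, a black-box test enumerates every crash point, replays only the prefix of writes assumed to have survived, runs the recovery routine, and checks an application-level invariant. Real frameworks such as DART must also account for reordering and partial persistence, which this sketch ignores.

```python
def litmus_crash_points(writes, recover, invariant):
    """Enumerate a crash after each prefix of `writes` and check recovery.

    writes    : list of (address, value) pairs in issue order
    recover   : function(memory: dict) -> dict, the recovery routine
    invariant : function(memory: dict) -> bool, the end-to-end correctness check
    Returns the crash points (prefix lengths) whose recovered state is incorrect.
    Simplifying assumption: writes persist strictly in issue order.
    """
    failures = []
    for crash_after in range(len(writes) + 1):
        surviving = {}
        for addr, value in writes[:crash_after]:
            surviving[addr] = value          # only the persisted prefix is visible
        if not invariant(recover(dict(surviving))):
            failures.append(crash_after)
    return failures


# Toy protocol: write a payload, then set a validity flag; recovery discards
# state without the flag. Invariant: a visible payload implies a visible flag.
writes = [("payload", "data"), ("valid", 1)]
recover = lambda m: m if m.get("valid") == 1 else {}
invariant = lambda m: ("payload" in m) <= ("valid" in m)   # boolean implication
print(litmus_crash_points(writes, recover, invariant))      # [] -> no violations
```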

    A survey on vulnerability of federated learning: A learning algorithm perspective

    Get PDF
    Federated Learning (FL) has emerged as a powerful paradigm for training Machine Learning (ML), and particularly Deep Learning (DL), models on multiple devices or servers while keeping data localized at owners' sites. Without centralizing data, FL holds promise for scenarios where data integrity, privacy, and security are critical. However, this decentralized training process also opens up new avenues for adversaries to launch unique attacks, making it urgent to understand the vulnerabilities and the corresponding defense mechanisms from a learning algorithm perspective. This review paper takes a comprehensive look at malicious attacks against FL, categorizing them from new perspectives on attack origins and targets, and providing insights into their methodology and impact. In this survey, we focus on threat models targeting the learning process of FL systems. Based on the source and target of the attack, we categorize existing threat models into four types: Data to Model (D2M), Model to Data (M2D), Model to Model (M2M), and composite attacks. For each attack type, we discuss the defense strategies proposed, highlighting their effectiveness, assumptions, and potential areas for improvement. Defense strategies have evolved from using a single metric to exclude malicious clients to employing a multifaceted approach that examines client models at various phases. Our research indicates that the training data, the learning gradients, and the learned model at different stages can all be manipulated to initiate malicious attacks, ranging from undermining model performance and reconstructing private local data to inserting backdoors. These threats are also becoming more insidious: while earlier studies typically amplified malicious gradients, recent endeavors subtly alter the least significant weights in local models to bypass defense measures. This literature review provides a holistic understanding of the current FL threat landscape and highlights the importance of developing robust, efficient, and privacy-preserving defenses to ensure the safe and trusted adoption of FL in real-world applications. The categorized bibliography can be found at: https://github.com/Rand2AI/Awesome-Vulnerability-of-Federated-Learning
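    As a concrete illustration of the single-metric style of defense discussed above, the sketch below (hypothetical code, not taken from the paper) excludes client updates whose norm deviates strongly from the robust median before averaging; the multifaceted defenses surveyed would additionally inspect client models at several phases of training.

```python
import numpy as np

def robust_aggregate(client_updates, z_thresh=2.5):
    """Average FL client updates after dropping norm outliers.

    client_updates : list of 1-D numpy arrays (flattened model deltas)
    z_thresh       : allowed deviation from the median norm, in robust
                     (MAD-based) standard deviations, before exclusion
    """
    norms = np.array([np.linalg.norm(u) for u in client_updates])
    median = np.median(norms)
    mad = np.median(np.abs(norms - median)) + 1e-12        # robust spread estimate
    keep = np.abs(norms - median) / (1.4826 * mad) <= z_thresh
    kept = [u for u, k in zip(client_updates, keep) if k]
    if not kept:                                           # fall back to plain averaging
        kept = client_updates
    return np.mean(kept, axis=0)

# Example: nine benign clients plus one client submitting an amplified update.
rng = np.random.default_rng(0)
updates = [rng.normal(0, 1, 100) for _ in range(9)] + [50 * rng.normal(0, 1, 100)]
print(robust_aggregate(updates).shape)   # (100,)
```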

    Authentication enhancement in command and control networks: (a study in Vehicular Ad-Hoc Networks)

    Get PDF
    Intelligent transportation systems contribute to improved traffic safety by facilitating real-time communication between vehicles. By using wireless channels for communication, vehicular networks are susceptible to a wide range of attacks, such as impersonation, modification, and replay. In this context, securing data exchange between intercommunicating terminals, e.g., vehicle-to-everything (V2X) communication, constitutes a technological challenge that needs to be addressed. Hence, message authentication is crucial to safeguard vehicular ad-hoc networks (VANETs) from malicious attacks. The current state of the art for authentication in VANETs relies on conventional cryptographic primitives, introducing significant computation and communication overheads. In this challenging scenario, physical (PHY)-layer authentication has gained popularity; it leverages the inherent characteristics of wireless channels and hardware imperfections to discriminate between wireless devices. However, PHY-layer-based authentication cannot be an alternative to crypto-based methods, as the initial legitimacy detection must be conducted using cryptographic methods to extract the communicating terminal's secret features. Nevertheless, it can be a promising complementary solution for the re-authentication problem in VANETs, introducing what is known as "cross-layer authentication." This thesis focuses on designing efficient cross-layer authentication schemes for VANETs, reducing the communication and computation overheads associated with transmitting and verifying a crypto-based signature for each transmission. The following provides an overview of the methodologies employed in the contributions presented in this thesis.
    1. The first cross-layer authentication scheme: a four-step process comprising initial crypto-based authentication, shared key extraction, re-authentication via a PHY challenge-response algorithm (see the simplified sketch after this list), and adaptive adjustments based on channel conditions. Simulation results validate its efficacy, especially in low signal-to-noise ratio (SNR) scenarios, while proving its resilience against active and passive attacks.
    2. The second cross-layer authentication scheme: leveraging the spatially and temporally correlated wireless channel features, this scheme extracts high-entropy shared keys that can be used to create dynamic PHY-layer signatures for authentication. A 3-dimensional (3D) scattering Doppler emulator is designed to investigate the scheme's performance at different speeds of a moving vehicle and different SNRs. Theoretical and hardware implementation analyses prove the scheme's capability to support a high detection probability for an acceptable false alarm value ≤ 0.1 at SNR ≥ 0 dB and speed ≤ 45 m/s.
    3. The third proposal, reconfigurable intelligent surface (RIS) integration for improved authentication: focusing on enhancing PHY-layer re-authentication, this proposal explores integrating RIS technology to improve the SNR directed at designated vehicles. Theoretical analysis and practical implementation of the proposed scheme are conducted using a 1-bit RIS consisting of 64 × 64 reflective units. Experimental results show a significant improvement in the detection probability (Pd), increasing from 0.82 to 0.96 at SNR = −6 dB for multicarrier communications.
    4. The fourth proposal, RIS-enhanced vehicular communication security: tailored for challenging SNR in non-line-of-sight (NLoS) scenarios, this proposal optimises key extraction and defends against denial-of-service (DoS) attacks through selective signal strengthening. Hardware implementation studies prove its effectiveness, showcasing improved key extraction performance and resilience against potential threats.
    5. The fifth cross-layer authentication scheme: integrating PKI-based initial legitimacy detection and blockchain-based reconciliation techniques, this scheme ensures secure data exchange. Rigorous security analyses and performance evaluations using network simulators and computation metrics showcase its effectiveness, ensuring its resistance against common attacks and its time efficiency in message verification.
    6. The final proposal, group key distribution: employing smart-contract-based blockchain technology alongside PKI-based authentication, this proposal distributes group session keys securely. Its lightweight symmetric-key-cryptography-based method maintains privacy in VANETs, validated via Ethereum's main network (MainNet) and comprehensive computation and communication evaluations.
    The analysis shows that the proposed methods yield a noteworthy reduction, approximately ranging from 70% to 99%, in both computation and communication overheads compared to conventional approaches. This reduction pertains to the verification and transmission of 1000 messages in total.
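    The re-authentication step referenced in the first scheme can be pictured with a simplified sketch (hypothetical code, substituting a generic HMAC-based response for the PHY-layer signature actually used in the thesis): after initial crypto-based authentication yields a shared key, the verifier issues a fresh challenge and accepts the responder only if the keyed response matches.

```python
import hashlib
import hmac
import secrets

def issue_challenge() -> bytes:
    """Verifier side: a fresh random challenge for each re-authentication round."""
    return secrets.token_bytes(16)

def respond(shared_key: bytes, challenge: bytes) -> bytes:
    """Prover side: keyed response derived from the previously extracted shared key."""
    return hmac.new(shared_key, challenge, hashlib.sha256).digest()

def verify(shared_key: bytes, challenge: bytes, response: bytes) -> bool:
    """Verifier side: constant-time comparison against the expected response."""
    expected = hmac.new(shared_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# One re-authentication round; the key stands in for one extracted from channel probing.
shared_key = secrets.token_bytes(32)
challenge = issue_challenge()
response = respond(shared_key, challenge)
assert verify(shared_key, challenge, response)
```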

    Exploring the impact of food composition and eating context on food choice, energy intake and body mass index

    Get PDF
    Research suggests that traditional behaviour-based weight loss approaches, which require individuals to change their dietary habits or physical activity, result in only modest weight loss that often isn't maintained. Given the need to improve population health, the broader food environment (e.g., food price or composition) and the more immediate eating context have been identified as possible intervention targets. For product reformulation to be a successful public health strategy, consumers are required to be 'insensitive' to changes to the reformulated product. This raises a more general question regarding whether humans are sensitive to food composition and whether this influences food choice and energy intake. The studies presented in Part A suggest that people are sensitive to both the energy content and the macronutrient composition of food. Specifically, the results presented in chapters two to four indicate a non-linear pattern in meal caloric intake in response to meal energy density (kcal/g), and this pattern was captured in a theoretical two-component model of meal size (g; chapter five). The remaining two chapters in Part A (chapters six and seven) explore human sensitivity to food macronutrient composition. Chapter six describes the development of a new paradigm and task to assess protein discrimination by humans. Chapter seven focuses on the remaining two macronutrients, fat and carbohydrate, and demonstrates that, alongside being more liked, foods containing a combination of fat and carbohydrate are selected in larger portions than foods high in either fat or carbohydrate. The effect of eating contexts (e.g., social or distracted eating) on acute energy intake is well researched, but their chronic impact on energy balance is unclear. The results of chapter nine (Part B) indicated that more frequently watching TV was associated with a higher body mass index (BMI) in young adults. More generally, the work identified eating contexts as potential targets for public health messaging which could effect changes in BMI at a population level. Together, the work presented in this thesis highlights new complexity in human dietary behaviour, which presents both challenges and opportunities for successful food reformulation as a public health strategy, and it also demonstrates that the context in which we eat our meals could be leveraged to improve population-level health.

    Hybrid chaotic map with L-shaped fractal Tromino for image encryption and decryption

    Get PDF
    Insecure communication and insecure storage of digital images are important challenges in image security. Moreover, existing approaches suffer from inadequate security during image encryption and decryption. In this research work, a wavelet environment is obtained by transforming the cover image using the integer wavelet transform (IWT) and a hybrid discrete cosine transform (DCT) to completely prevent false errors. The proposed hybrid chaotic map with an L-shaped fractal Tromino then offers better security, maintaining image secrecy through encryption and decryption. The proposed work uses fractal encryption combined with the L-shaped Tromino theorem to enhance information hiding. The regions of the L-shaped fractal Tromino are sensitive to variations and are therefore embedded in the watermark using a visual watermarking technique known as reversible watermarking. The experimental results showed that the proposed method obtained a peak signal-to-noise ratio (PSNR) of 56.82 dB, which is higher than that of the existing methods: the Beddington, Free, and Lawton (BFL) map with a PSNR of 8.10 dB; permutation, substitution, and Boolean operation with a PSNR of 21.19 dB; and the deoxyribonucleic acid (DNA)-level permutation-based logistic map with a PSNR of 21.27 dB.
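    To make the role of a chaotic map in image encryption concrete, the sketch below (hypothetical code, using a plain logistic map rather than the paper's hybrid map with the L-shaped fractal Tromino) derives a key-dependent pixel permutation from the map's trajectory; supplying the same secret initial condition regenerates the permutation and inverts it for decryption.

```python
import numpy as np

def logistic_permutation(n, x0, r=3.99):
    """Key-dependent permutation of n indices driven by a logistic-map trajectory."""
    x = x0
    trajectory = np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)            # logistic map: x_{k+1} = r * x_k * (1 - x_k)
        trajectory[i] = x
    return np.argsort(trajectory)        # sorting order serves as the permutation

def scramble(image, x0):
    flat = image.reshape(-1)
    perm = logistic_permutation(flat.size, x0)
    return flat[perm].reshape(image.shape)

def unscramble(scrambled, x0):
    flat = scrambled.reshape(-1)
    perm = logistic_permutation(flat.size, x0)   # same key -> same permutation
    restored = np.empty_like(flat)
    restored[perm] = flat                        # invert the permutation
    return restored.reshape(scrambled.shape)

# Round trip on a toy 4x4 "image"; x0 acts as the secret key.
img = np.arange(16, dtype=np.uint8).reshape(4, 4)
assert np.array_equal(img, unscramble(scramble(img, x0=0.613), x0=0.613))
```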

    Regulating ChatGPT and other Large Generative AI Models

    Full text link
    Large generative AI models (LGAIMs), such as ChatGPT or Stable Diffusion, are rapidly transforming the way we communicate, illustrate, and create. However, AI regulation, in the EU and beyond, has primarily focused on conventional AI models, not LGAIMs. This paper situates these new generative models in the current debate on trustworthy AI regulation and asks how the law can be tailored to their capabilities. After laying technical foundations, the legal part of the paper proceeds in four steps, covering (1) direct regulation, (2) data protection, (3) content moderation, and (4) policy proposals. It suggests a novel terminology to capture the AI value chain in LGAIM settings by differentiating between LGAIM developers, deployers, professional and non-professional users, and recipients of LGAIM output. We tailor regulatory duties to these different actors along the value chain and suggest four strategies to ensure that LGAIMs are trustworthy and deployed for the benefit of society at large. Rules in the AI Act and other direct regulation must match the specificities of pre-trained models. In particular, regulation should focus on concrete high-risk applications, not the pre-trained model itself, and should include (i) obligations regarding transparency and (ii) risk management. Non-discrimination provisions (iii) may, however, apply to LGAIM developers. Lastly, (iv) the core of the DSA content moderation rules should be expanded to cover LGAIMs, including notice-and-action mechanisms and trusted flaggers. In all areas, regulators and lawmakers need to act fast to keep pace with the dynamics of ChatGPT et al. Comment: under review

    Information Encoding for Flow Watermarking and Binding Keys to Biometric Data

    Get PDF
    Due to the current level of telecommunications development, fifth-generation (5G) communication systems are expected to provide higher data rates, lower latency, and improved scalability. To ensure the security and reliability of data traffic generated by wireless sources, 5G networks must be designed to support security protocols and reliable communication applications. The operations of coding and processing of information during the transmission of both binary and non-binary data over nonstandard communication channels are described. A subclass of linear binary codes is considered, namely the Varshamov-Tenengolts codes, which are used for channels with insertions and deletions of symbols. The use of these codes is compared with Hidden Markov Model (HMM)-based systems for detecting intrusions in networks using flow watermarking; both provide a high true positive rate. The principles of using Bose-Chaudhuri-Hocquenghem (BCH) codes, non-binary Reed-Solomon codes, and turbo codes, as well as concatenated code structures, to ensure noise immunity when reproducing information in Helper-Data Systems are considered. Examples of biometric systems built on these codes, operating on the basis of the Fuzzy Commitment Scheme (FCS) and providing FRR < 1% for authentication, are given.
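    The Varshamov-Tenengolts codes mentioned above admit a compact membership test, shown in the sketch below (hypothetical code; a full single-deletion decoder is more involved): a binary word x_1 ... x_n belongs to VT_a(n) when the weighted checksum sum(i * x_i) is congruent to a modulo n + 1.

```python
from itertools import product

def vt_syndrome(word):
    """Weighted checksum sum(i * x_i) over 1-based positions."""
    return sum(i * bit for i, bit in enumerate(word, start=1))

def in_vt_code(word, a):
    """True if `word` (a sequence of 0/1) lies in the Varshamov-Tenengolts code VT_a(n)."""
    return vt_syndrome(word) % (len(word) + 1) == a

def vt_codewords(n, a=0):
    """Enumerate VT_a(n) by brute force (practical only for small n)."""
    return [w for w in product((0, 1), repeat=n) if in_vt_code(w, a)]

# VT_0(4) contains 4 of the 16 length-4 binary words.
print(vt_codewords(4))
# [(0, 0, 0, 0), (0, 1, 1, 0), (1, 0, 0, 1), (1, 1, 1, 1)]
```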

    The Appropriation of Value from Knowledge: Three Essays on Technological Discontinuities, Market Entry, and Patent Strategy

    Get PDF
    Knowledge accumulation and protection are critical considerations for the firm. How does the capability to appropriate value from knowledge affect firm strategies in their industries? To answer this question, I develop new theory and evidence to argue that appropriating value from knowledge is a central consideration in firms' capabilities and decisions when dealing with technological change and intellectual property issues. In particular, I examine the relatedness of products and markets, the strategic uses of patents, and how firms can successfully adapt to concerns regarding technological change and intellectual property leakage. Across my three dissertation chapters, I find evidence that the capability to appropriate value from knowledge affects how firms behave in consistent and essential ways. These findings carry important implications for knowledge-based views of the firm and strategy-based recommendations for the management of knowledge assets.