
    Design and Evaluation of a Scalable Engine for 3D-FFT Computation in an FPGA Cluster

    The Three-Dimensional Fast Fourier Transform (3D-FFT) is commonly used to solve the partial differential equations describing the system evolution in several physical phenomena, such as the motion of viscous fluids described by the Navier–Stokes equations. Simulation of such problems requires a parallel High-Performance Computing architecture, since the size of the problem grows with the cube of the FFT size and the representation of a single point comprises several double-precision floating-point complex numbers. Modern High-Performance Computing (HPC) systems are considering the inclusion of FPGAs as components of this computing architecture because they can combine effective hardware acceleration capabilities and dedicated communication facilities. Furthermore, the network topology can be optimized for the specific calculation that the cluster must perform, especially in the case of algorithms limited by the data exchange delay between the processors. In this paper, we explore an HPC design that uses FPGA accelerators to compute the 3D-FFT. We devise a scalable FFT engine based on a custom radix-2 double-precision core that is used to implement the Decimation in Frequency version of the Cooley–Tukey FFT algorithm. The FFT engine can be adapted to different technology constraints and networking topologies by adjusting the number of cores and configuration parameters in order to minimize the overall calculation time. We compare the various possible configurations with the technological limits of available hardware. Finally, we evaluate the bandwidth required for continuous FFT execution in the APEnet toroidal mesh network.
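    As an illustrative aside (not taken from the paper, whose engine is a double-precision FPGA core): the radix-2 decimation-in-frequency Cooley–Tukey butterfly that the abstract refers to can be sketched in software as follows. The function names are hypothetical; the DIF variant produces output in bit-reversed order, which the second helper undoes.

    ```python
    import cmath

    def fft_dif(x):
        """In-place radix-2 decimation-in-frequency FFT.
        Input length must be a power of two; the output is left in
        bit-reversed order, as produced by the DIF butterfly network."""
        n = len(x)
        assert n and n & (n - 1) == 0, "length must be a power of two"
        span = n
        while span > 1:
            half = span // 2
            for start in range(0, n, span):
                for k in range(half):
                    # twiddle factor for this butterfly
                    w = cmath.exp(-2j * cmath.pi * k / span)
                    a, b = x[start + k], x[start + k + half]
                    x[start + k] = a + b
                    x[start + k + half] = (a - b) * w
            span = half
        return x

    def bit_reverse(x):
        """Reorder a bit-reversed DIF output into natural order."""
        n = len(x)
        bits = n.bit_length() - 1
        return [x[int(format(i, f'0{bits}b')[::-1], 2)] for i in range(n)]
    ```

    In the hardware engine described by the abstract, each butterfly stage maps to a pipeline of radix-2 cores; the sketch above only shows the data flow those cores implement.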


    Can AI be used ethically to assist peer review?

    As the rate and volume of academic publications have risen, so too has the pressure on journal editors to quickly find reviewers to assess the quality of academic work. In this context, the potential of Artificial Intelligence (AI) to boost productivity and reduce workload has received significant attention. Drawing on evidence from an experiment using AI to learn and assess peer review outcomes, Alessandro Checco, Lorenzo Bracciale, Pierpaolo Loreti, Stephen Pinfield, and Giuseppe Bianchi discuss the prospects of AI for assisting peer review and the potential ethical dilemmas its application might produce.

    Privacy-Aware Architectures for NFC and RFID Sensors in Healthcare Applications

    World population and life expectancy have increased steadily in recent years, raising issues regarding access to medical treatments and related expenses. Through last-generation medical sensors, NFC (Near Field Communication) and radio frequency identification (RFID) technologies can enable healthcare internet of things (H-IoT) systems to improve the quality of care while reducing costs. Moreover, the adoption of point-of-care (PoC) testing, performed whenever care is needed to return prompt feedback to the patient, can generate great synergy with NFC/RFID H-IoT systems. However, medical data are extremely sensitive and require careful management and storage to protect patients from malicious actors, so secure system architectures must be conceived for real scenarios. Existing studies do not analyze the security of raw data from the radiofrequency link to cloud-based sharing. Therefore, two novel cloud-based system architectures for data collected from NFC/RFID medical sensors are proposed in this paper. Privacy during data collection is ensured using a set of classical countermeasures selected based on the scientific literature. Then, data can be shared with the medical team using one of two architectures: in the first one, the medical system manages all data accesses, whereas in the second one, the patient defines the access policies. Comprehensive analysis of the H-IoT system can be useful for fostering research on the security of wearable wireless sensors. Moreover, the proposed architectures can be implemented for deploying and testing NFC/RFID-based healthcare applications, such as domestic PoCs.

    Privacy and Transparency in Blockchain-based Smart Grid Operations

    In the past few years, blockchain technology has emerged in numerous smart grid applications, enabling the construction of systems without the need for a trusted third party. Blockchain offers transparency, traceability, and accountability, allowing various energy management system functionalities to be executed through smart contracts, such as monitoring, consumption analysis, and intelligent energy adaptation. Nevertheless, revealing sensitive energy consumption information could render users vulnerable to digital and physical assaults. This paper presents a novel method for achieving a dual balance between privacy and transparency, as well as accountability and verifiability. This equilibrium requires the incorporation of cryptographic tools like Secure Multiparty Computation and Verifiable Secret Sharing within the distributed components of a multi-channel blockchain and its associated smart contracts. We corroborate the suggested architecture throughout the entire process of a Demand Response scenario, from the collection of energy data to the ultimate reward. To address our proposal’s constraints, we present countermeasures against accidental crashes and Byzantine behavior while ensuring that the solution remains appropriate for low-performance IoT devices.
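    As background for the cryptographic tools the abstract names (and not the paper's own implementation): Verifiable Secret Sharing builds on Shamir's scheme, in which a secret is the constant term of a random polynomial over a prime field, and any threshold-sized subset of evaluation points reconstructs it. A minimal sketch, with hypothetical function names:

    ```python
    import random

    PRIME = 2**61 - 1  # Mersenne prime; all share arithmetic is mod PRIME

    def make_shares(secret, threshold, n_shares):
        """Shamir secret sharing: place `secret` as the constant term of a
        random degree-(threshold-1) polynomial and hand out n_shares points.
        Any `threshold` shares reconstruct the secret; fewer reveal nothing."""
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
        def eval_poly(x):
            acc = 0
            for c in reversed(coeffs):  # Horner's rule
                acc = (acc * x + c) % PRIME
            return acc
        return [(x, eval_poly(x)) for x in range(1, n_shares + 1)]

    def reconstruct(shares):
        """Lagrange interpolation of the polynomial at x = 0."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret
    ```

    The verifiable variant used in the paper additionally publishes commitments to the coefficients so that shareholders can check their shares; that step is omitted here.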

    Assessment and validation of miniaturized technology for the remote tracking of critically endangered Galápagos pink land iguana (Conolophus marthae)

    Abstract Background: Gathering ecological data for species of conservation concern inhabiting remote regions can be daunting and, sometimes, logistically infeasible. We built a custom-made GPS tracking device that allows us to remotely and accurately collect animal position, environmental, and ecological data, including animal temperature and UVB radiation. We designed the device to track the critically endangered Galápagos pink land iguana, Conolophus marthae. Here we illustrate some technical solutions adopted to respond to challenges associated with such a task and present some preliminary results from controlled trial experiments and field implementation. Results: Our tests show that estimates of temperature and UVB radiation are affected by the design of our device, in particular by its casing. The introduced bias, though, is systematic and can be corrected using linear and quadratic regressions on collected values. Our data show that GPS accuracy loss, although introduced by vegetation and orientation of the devices when attached to the animals, is acceptable, leading to an average error gap of less than 15 m in more than 50% of the cases. Conclusions: We address some technical challenges related to the design, construction, and operation of a custom-made GPS tracking device to collect data on animals in the wild. Systematic bias introduced by the technological implementation of the device exists. Understanding the nature of the bias is crucial to provide correction models. Although designed to track land iguanas, our device could be used in other circumstances and is particularly useful to track organisms inhabiting locations that are difficult to reach or for which classic telemetry approaches are unattainable.
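    To illustrate the regression-based bias correction the abstract describes (the actual calibration data and model are in the paper; the numbers below are invented for the example), a closed-form ordinary least squares fit suffices for the linear case:

    ```python
    def linear_fit(xs, ys):
        """Ordinary least squares for y = a + b*x, closed form."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
        a = my - b * mx
        return a, b

    # Hypothetical calibration pairs: (device reading, reference thermometer),
    # collected in a controlled trial as the abstract describes.
    device    = [20.5, 25.1, 30.4, 35.2, 40.3]
    reference = [20.0, 24.5, 29.8, 34.5, 39.6]
    a, b = linear_fit(device, reference)
    corrected = [a + b * r for r in device]  # bias-corrected temperatures
    ```

    The quadratic correction mentioned in the abstract follows the same pattern with one more coefficient in the model.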

    ‘Better Than Nothing’ privacy with Bloom filters: To what extent?

    Abstract. Bloom filters are probabilistic data structures that compactly represent set membership. Their performance/memory efficiency makes them appealing in a huge variety of scenarios. Their probabilistic operation, along with the implicit data representation, yields some ambiguity about the actual data stored, which, in scenarios where cryptographic protection is unviable or impractical, may be considered a better-than-nothing privacy asset. Oddly enough, even if frequently mentioned, to the best of our knowledge the (soft) privacy properties of Bloom filters have never been explicitly quantified. This work aims to fill this gap. Starting from the adaptation of probabilistic anonymity metrics to the Bloom filter setting, we derive exact and (tightly) approximate formulae that readily relate privacy properties to filter (and universe set) parameters. Using such relations, we quantitatively investigate the emerging privacy/utility trade-offs. We finally provide a preliminary assessment of the advantages that a tailored insertion of a few extra (covert) bits achieves over the commonly employed strategy of increasing ambiguity via addition of random bits.
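    As a minimal sketch of the mechanism the abstract analyzes (not the paper's formulae): a Bloom filter sets k bit positions per inserted element, and its classic false-positive approximation is the quantity that drives the ambiguity, since every non-member that tests positive enlarges the anonymity set of the stored elements. Class and function names here are illustrative.

    ```python
    import hashlib
    import math

    class BloomFilter:
        def __init__(self, m_bits, k_hashes):
            self.m, self.k = m_bits, k_hashes
            self.bits = bytearray((m_bits + 7) // 8)

        def _positions(self, item):
            # derive k positions from one SHA-256 digest via double hashing
            d = hashlib.sha256(item.encode()).digest()
            h1 = int.from_bytes(d[:8], "big")
            h2 = int.from_bytes(d[8:16], "big") | 1
            return [(h1 + i * h2) % self.m for i in range(self.k)]

        def add(self, item):
            for p in self._positions(item):
                self.bits[p // 8] |= 1 << (p % 8)

        def __contains__(self, item):
            return all(self.bits[p // 8] >> (p % 8) & 1
                       for p in self._positions(item))

    def false_positive_rate(m, k, n):
        """Classic approximation (1 - e^(-kn/m))^k after n insertions.
        A higher rate means more non-inserted elements are indistinguishable
        from inserted ones, i.e. a larger anonymity set per stored element."""
        return (1 - math.exp(-k * n / m)) ** k
    ```

    The paper's contribution is to replace this folklore intuition with exact and approximate anonymity formulae; the sketch only shows the underlying data structure.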

    Fractional frequency reuse planning for WiMAX over frequency selective channels

    Fourth-generation broadband wireless access systems based on OFDMA/OFDM techniques operate in point-to-multipoint (PMP) configuration, so conventional cellular planning methods can be used for network design. As an alternative planning method, the fractional frequency reuse (FFR) strategy has recently been proposed for OFDMA/OFDM cellular systems such as WiMAX. Regarding system outage prediction, planning procedures for multi-carrier systems are usually based on bit-error-rate curves as a function of the average signal-to-interference-plus-noise ratio (SINR). Such curves are obtained by simulating the entire transmitter-receiver chain, including the multipath channel. However, these approaches do not explicitly evidence the role of the number M of sub-carriers allocated to each user and are computationally intensive. To avoid these inconveniences, we express the outage probability as a function of the effective SINR, which is a function of the M SINRs (one for each sub-carrier). This function already includes the decoding effects, thus avoiding re-simulation of the decoder behavior. Furthermore, this approach makes explicit the role in planning of the number of sub-carriers allocated to one user. For evaluation purposes, we apply the procedure to a simple two-cell FFR interference scenario. It is shown that the number of sub-carriers can play a significant role in the entire planning process, i.e., the reuse distance can be lowered at the expense of an increased number of sub-carriers to be allocated per block. © 2008 IEEE
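    As an illustrative sketch of the kind of mapping the abstract relies on (the paper's specific effective-SINR function and calibration are not reproduced here): the widely used Exponential Effective SINR Mapping (EESM) compresses the M per-subcarrier SINRs into one scalar that can be looked up against an AWGN performance curve, and outage is then the fraction of channel realizations whose effective SINR falls below the scheme's threshold. Function names and the `beta` calibration value are assumptions.

    ```python
    import math

    def eesm(sinrs_db, beta):
        """Exponential Effective SINR Mapping: compress M per-subcarrier
        SINRs (dB) into one effective SINR (dB). `beta` is a per
        modulation/coding-scheme calibration constant (assumed known)."""
        lin = [10 ** (s / 10) for s in sinrs_db]
        avg = sum(math.exp(-s / beta) for s in lin) / len(lin)
        return 10 * math.log10(-beta * math.log(avg))

    def outage_probability(channel_draws, beta, sinr_threshold_db):
        """Fraction of channel realizations whose effective SINR falls
        below the threshold required by the chosen scheme."""
        fails = sum(1 for sinrs in channel_draws
                    if eesm(sinrs, beta) < sinr_threshold_db)
        return fails / len(channel_draws)
    ```

    When all M subcarriers see the same SINR the mapping returns that SINR unchanged; frequency selectivity pulls the effective value below the arithmetic mean, which is exactly the effect the planning procedure in the abstract exploits.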