
    Customizable FPGA-based hardware accelerator for standard convolution processes empowered with quantization applied to LiDAR data

    In recent years there has been an increase in research and development of deep learning solutions for object detection in driverless vehicles. This application has benefited from the growing adoption of innovative perception devices such as LiDAR sensors, currently the preferred device for these tasks in autonomous vehicles. There is a broad variety of research on models based on point clouds, which stand out for being efficient and robust in their intended tasks, but they are also characterized by point cloud processing times that exceed the limits imposed by the safety-critical nature of the application. This research work provides the design and implementation of a hardware IP optimized for computing convolutions, rectified linear unit (ReLU) activations, padding, and max pooling. The engine was designed so that features such as feature map size, filter size, stride, number of inputs, number of filters, and the amount of hardware resources devoted to a specific convolution can be configured. Performance results show that, by resorting to parallelism and a quantization approach, the proposed solution reduces the amount of FPGA logic resources by 40 to 50% and improves processing time by 50% while maintaining the accuracy of the deep learning operations. Funded by the European Structural and Investment Funds in the FEDER component, through the Operational Competitiveness and Internationalization Programme (COMPETE 2020) (Project no. 037902; Funding Reference: POCI-01-0247-FEDER-037902).
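    To make the operations concrete, the following is a minimal software reference for what such an IP computes — quantized convolution, ReLU, zero padding, and 2x2 max pooling — written in NumPy under the assumption of symmetric int8 quantization with a single per-tensor scale; the function names and interface are illustrative and do not correspond to the hardware block's actual configuration registers.

```python
# Minimal software reference for the accelerated operations: quantized 2D
# convolution, ReLU, zero padding, and max pooling. Hypothetical sketch, not
# the hardware interface; assumes symmetric int8 quantization, one per-tensor scale.
import numpy as np

def quantize(x, scale):
    """Map float values to int8 with a per-tensor scale."""
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

def conv2d_relu(x_q, w_q, stride=1, pad=1):
    """Integer convolution over a (C, H, W) input with (F, C, K, K) filters,
    followed by ReLU. Accumulation is done in int32, as on FPGA MAC units."""
    c, h, wd = x_q.shape
    f, _, k, _ = w_q.shape
    xp = np.pad(x_q.astype(np.int32), ((0, 0), (pad, pad), (pad, pad)))
    oh = (h + 2 * pad - k) // stride + 1
    ow = (wd + 2 * pad - k) // stride + 1
    out = np.zeros((f, oh, ow), dtype=np.int32)
    for fi in range(f):
        for i in range(oh):
            for j in range(ow):
                patch = xp[:, i * stride:i * stride + k, j * stride:j * stride + k]
                out[fi, i, j] = np.sum(patch * w_q[fi].astype(np.int32))
    return np.maximum(out, 0)  # ReLU

def maxpool2x2(x):
    """2x2 max pooling with stride 2 over a (F, H, W) tensor."""
    f, h, w = x.shape
    return x[:, :h // 2 * 2, :w // 2 * 2].reshape(f, h // 2, 2, w // 2, 2).max(axis=(2, 4))

# Example: one 3x3 convolution layer on a small feature map.
x = quantize(np.random.rand(4, 16, 16).astype(np.float32), scale=1 / 127)
w = quantize(np.random.randn(8, 4, 3, 3).astype(np.float32) * 0.1, scale=1 / 127)
y = maxpool2x2(conv2d_relu(x, w, stride=1, pad=1))
print(y.shape)  # (8, 8, 8)
```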

    Efficient hardware design and implementation of the voting scheme-based convolution

    Due to a point cloud’s sparse nature, a sparse convolution block must be designed to deal with its particularities. Computer vision has recently explored the advantages of processing data on more energy-efficient hardware, such as FPGAs, in response to the need to run these algorithms on resource-constrained edge devices. However, sparse convolution has not been properly explored in hardware, and only a small number of studies analyze its potential and efficiency on resource-constrained hardware platforms. This article presents the design of a customizable hardware block for the voting convolution. We carried out an in-depth analysis to determine under which conditions the use of the voting scheme is justified instead of dense convolutions. The proposed hardware design achieves an energy consumption about 8.7 times lower than similar works in the literature by ignoring unnecessary arithmetic operations with null weights and leveraging data dependency. Accesses to data memory were also reduced to the minimum necessary, leading to improvements of around 55% in processing time. To evaluate both the performance and the applicability of the proposed solution, the voting convolution was integrated into the well-known PointPillars model, where it achieves improvements between 23.05% and 80.44% without a significant effect on detection performance. Funded by the European Structural and Investment Funds in the FEDER component, through the Operational Competitiveness and Internationalization Programme (COMPETE 2020) (Project no. 037902; Funding Reference: POCI-01-0247-FEDER-037902).
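    As a rough illustration of the idea behind the voting scheme, the sketch below scatters each active input cell's contribution into the output, skipping null weights, so the work grows with the number of occupied cells rather than with the grid size; it is a simplified 2D, single-channel model with hypothetical names, not the hardware block itself.

```python
# Illustrative sketch of a voting-style sparse convolution on a 2D grid.
# Each active input cell "votes" its value, weighted by the kernel, into the
# output cells it influences; null weights and empty cells are skipped.
import numpy as np

def voting_conv2d(active, values, weight, out_shape):
    """active: list of (row, col) of non-empty cells; values: matching 1D array
    of cell features; weight: (K, K) kernel; out_shape: (H, W) of the output."""
    k = weight.shape[0]
    off = k // 2
    out = np.zeros(out_shape, dtype=np.float32)
    for (r, c), v in zip(active, values):
        for dr in range(k):
            for dc in range(k):
                w = weight[dr, dc]
                if w == 0.0:          # skip null weights entirely
                    continue
                orow, ocol = r + off - dr, c + off - dc
                if 0 <= orow < out_shape[0] and 0 <= ocol < out_shape[1]:
                    out[orow, ocol] += w * v   # accumulate the vote
    return out

# A 16x16 grid with only 5 active cells: work scales with the active cells,
# not the full grid, which is where the savings on sparse LiDAR data come from.
active = [(2, 3), (2, 4), (7, 7), (12, 1), (15, 15)]
values = np.ones(len(active), dtype=np.float32)
kernel = np.array([[0, 1, 0], [1, 2, 1], [0, 1, 0]], dtype=np.float32)
print(voting_conv2d(active, values, kernel, (16, 16)).sum())
```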

    Libra: Achieving Efficient Instruction- and Data- Parallel Execution for Mobile Applications.

    Mobile computing, as exemplified by the smartphone, has become an integral part of our daily lives. The next generation of these devices will be driven by richer user experiences and more compelling capabilities: higher-definition multimedia, 3D graphics, augmented reality, and voice interfaces. To meet these goals, the core computing capabilities of the smartphone must be scaled up, but energy budgets are increasing at a much lower rate, so fundamental improvements in computing efficiency must be found. To meet this challenge, computer architects employ hardware accelerators in the form of SIMD and VLIW. Single-instruction multiple-data (SIMD) accelerators provide high degrees of scalability for applications rich in data-level parallelism (DLP). Very long instruction word (VLIW) accelerators provide moderate scalability for applications with high degrees of instruction-level parallelism (ILP). Unfortunately, applications are not so nicely partitioned into two groups: many applications have some DLP but also contain significant fractions of code with low trip-count loops, complex control/data dependences, or non-uniform execution behavior for which no DLP exists. Therefore, a more adaptive accelerator is required, one that can deploy resources as needed: exploit DLP on SIMD when it is available, but fall back to ILP on the same hardware when necessary. In this thesis, we first focus on compiler solutions that address the inefficiency problems of both VLIW and SIMD accelerators. For SIMD accelerators, a new vectorization pass called the SIMD Defragmenter is introduced to uncover hidden DLP using subgraph identification. CGRA Express effectively accelerates sequential code regions using a bypass network in VLIW accelerators, and Resource Recycling leverages a stream-graph modulo scheduling technique for scheduling multiple code regions in multi-core accelerators. Second, we propose a new scalable multicore accelerator for mobile systems, referred to as Libra, which can support the execution of code regions having DLP, ILP, or hybrid combinations of the two. We believe that, as industry requires higher performance, the proposed flexible accelerator and compiler support will put more resources to work in order to meet performance and power efficiency requirements. Ph.D. thesis, Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/99840/1/yjunpark_1.pd
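    The DLP/ILP distinction driving this design can be illustrated with two toy loops (not taken from the thesis): the first is trivially data-parallel and maps onto SIMD lanes, while the second carries a dependence across iterations and leaves only instruction-level parallelism to exploit.

```python
# Toy illustration of the two loop shapes discussed above.
import numpy as np

def dlp_kernel(a, b):
    """Every element is independent: a SIMD engine can process many at once."""
    return a * 2.0 + b          # vectorized elementwise work

def ilp_kernel(a):
    """Naive prefix sum: each iteration needs the previous result, so the work
    cannot be split across SIMD lanes the same way."""
    out = np.empty_like(a)
    acc = 0.0
    for i in range(a.size):
        acc = acc + a[i]        # loop-carried dependence on acc
        out[i] = acc
    return out

a = np.arange(8, dtype=np.float64)
b = np.ones(8)
print(dlp_kernel(a, b))
print(ilp_kernel(a))
```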

    Co-design of architecture and algorithms for model-based mobile robot localization and obstacle detection

    This thesis proposes SoPC (System on a Programmable Chip) architectures for efficiently embedding vision-based localization and obstacle detection tasks in a navigational pipeline on autonomous mobile robots. The obtained results are equivalent to or better than the state of the art. For localization, an efficient hardware architecture is developed that supports EKF-SLAM's local map management with seven-dimensional landmarks in real time. For obstacle detection, a novel object recognition method is proposed: a detection-by-identification framework based on a single detection-window scale. This framework achieves adequate algorithmic precision and execution speed on embedded hardware platforms.
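    For context, the arithmetic that an EKF-SLAM architecture must sustain per landmark is the standard EKF measurement update; the sketch below shows that textbook step in NumPy with a seven-dimensional landmark state, and is generic rather than a description of the thesis' SoPC design.

```python
# Generic EKF measurement-update step (standard textbook form), shown only to
# indicate the per-landmark arithmetic such an architecture must compute in
# real time; it is not the thesis' specific hardware design.
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """x, P: state mean/covariance; z: measurement; h: predicted measurement
    h(x); H: measurement Jacobian at x; R: measurement noise covariance."""
    y = z - h                       # innovation
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Example with a 7-dimensional landmark state and a 2D image measurement.
x = np.zeros(7)
P = np.eye(7)
H = np.zeros((2, 7)); H[0, 0] = 1.0; H[1, 1] = 1.0
z = np.array([0.3, -0.1])
x, P = ekf_update(x, P, z, h=H @ x, H=H, R=0.01 * np.eye(2))
print(x[:2])
```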

    Novel Privacy Preserving Non-Invasive Sensing-Based Diagnoses of Pneumonia Disease Leveraging Deep Network Model

    This article presents a non-invasive, sensing-based diagnosis of pneumonia that exploits a deep learning model and couples it with security preservation. Sensing and securing healthcare and medical images such as X-rays, which can be used to diagnose viral diseases such as pneumonia, is a challenging task for researchers. In the past few years, patients’ medical records have been shared using various wireless technologies, and the wirelessly transmitted data are prone to attacks that can result in the misuse of patients’ medical records; it is therefore important to secure medical data that are in the form of images. The proposed work is divided into two parts. In the first part, primary data in the form of images are encrypted using the proposed technique based on chaos and a convolutional neural network: multiple chaotic maps are combined into a random number generator, and the generated random sequence is used for pixel permutation and substitution. In the second part, a new technique for pneumonia diagnosis using deep learning is proposed, with X-ray images used as the dataset. Several physiological features (cough, fever, chest pain, flu, low energy, sweating, shaking, chills, shortness of breath, fatigue, loss of appetite, and headache) and statistical features extracted from the X-ray images (entropy, correlation, contrast, dissimilarity, etc.) are used for the pneumonia diagnosis. Machine learning algorithms such as support vector machines, decision trees, random forests, and naive Bayes are also implemented and compared with the proposed CNN-based model, and transfer learning and fine-tuning are incorporated to further improve the CNN-based model. The CNN performs better than the other machine learning algorithms: the accuracy of the proposed work is 89% with naive Bayes and 97% with the CNN, the latter exceeding the average accuracy of existing schemes (90%). K-fold analysis and voting techniques are also incorporated to improve the accuracy of the proposed model. Metrics such as entropy, correlation, contrast, and energy are used to gauge the performance of the proposed encryption technique, while precision, recall, F1 score, and support are used to evaluate the proposed machine learning-based model for pneumonia diagnosis. The entropy and correlation of the proposed work are 7.999 and 0.0001, respectively, which reflects that the proposed encryption algorithm offers a high level of security for the digital data. A detailed comparison with existing work reveals that both of the proposed models perform better than the existing work.
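    A minimal sketch of the kind of chaos-driven permutation-plus-substitution cipher described above is shown below; it uses a single logistic map where the paper combines multiple chaotic maps, and the parameter values and function names are illustrative assumptions, not the proposed algorithm itself.

```python
# Hedged sketch of chaos-based image encryption: a logistic-map keystream drives
# pixel permutation and XOR substitution. Illustrative only; the paper combines
# several chaotic maps rather than the single map used here.
import numpy as np

def logistic_sequence(n, x0=0.654321, r=3.99):
    """Generate n chaotic values in (0, 1) with the logistic map x <- r*x*(1-x)."""
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        seq[i] = x
    return seq

def encrypt(image, x0=0.654321):
    """Permute pixel positions, then substitute values by XOR with a keystream."""
    flat = image.astype(np.uint8).ravel()
    chaos = logistic_sequence(2 * flat.size, x0)
    perm = np.argsort(chaos[:flat.size])              # chaotic permutation order
    keystream = (chaos[flat.size:] * 256).astype(np.uint8)
    return (flat[perm] ^ keystream).reshape(image.shape)

def decrypt(cipher, x0=0.654321):
    """Regenerate the same chaotic sequence from the key, then invert both steps."""
    flat = cipher.ravel()
    chaos = logistic_sequence(2 * flat.size, x0)
    perm = np.argsort(chaos[:flat.size])
    keystream = (chaos[flat.size:] * 256).astype(np.uint8)
    plain = np.empty_like(flat)
    plain[perm] = flat ^ keystream                     # undo substitution, then permutation
    return plain.reshape(cipher.shape)

img = np.arange(64, dtype=np.uint8).reshape(8, 8)      # stand-in for an X-ray tile
assert np.array_equal(decrypt(encrypt(img)), img)
```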

    Optimising runtime reconfigurable designs for high performance applications

    This thesis proposes novel optimisations for high performance runtime reconfigurable designs. For a reconfigurable design, the proposed approach identifies idle resources introduced by static design approaches and exploits runtime reconfiguration to eliminate them. The approach covers the circuit level, the function level, and the system level. At the circuit level, a method is proposed for tuning reconfigurable designs with two analytical models: a resource model covering computational resources, memory resources, and memory bandwidth, and a performance model for estimating execution time. This method is applied to tuning implementations of finite-difference algorithms, optimising arithmetic operators and memory bandwidth based on algorithmic parameters and eliminating idle resources through runtime reconfiguration. At the function level, a method is proposed to automatically identify and exploit runtime reconfiguration opportunities while optimising resource utilisation. The method is based on the Reconfiguration Data Flow Graph, a new hierarchical graph structure that enables runtime reconfigurable designs to be synthesised in three steps: function analysis, configuration organisation, and runtime solution generation. At the system level, a method is proposed for optimising reconfigurable designs by dynamically adapting them to the resources available at runtime in a reconfigurable system. This method includes two steps, compile-time optimisation and runtime scaling, which enable efficient workload distribution, asynchronous communication scheduling, and domain-specific optimisations, and it can be used to develop effective servers for high performance applications.
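    The flavour of the circuit-level analytical models can be conveyed with a generic roofline-style estimate (parameter names and values are made up for illustration and are not the thesis' exact model): execution time is bounded either by arithmetic throughput or by memory bandwidth, whichever dominates for a given design point.

```python
# Generic roofline-style performance estimate: runtime is the larger of the
# compute-bound and memory-bound times for a chosen design point.
def estimate_runtime(flops, bytes_moved, peak_flops_per_s, bandwidth_bytes_per_s):
    compute_time = flops / peak_flops_per_s
    memory_time = bytes_moved / bandwidth_bytes_per_s
    return max(compute_time, memory_time)

# Example: a finite-difference sweep over a 512^3 grid, assuming 8 FLOPs and
# 16 bytes moved per cell (illustrative numbers only).
cells = 512 ** 3
t = estimate_runtime(flops=8 * cells, bytes_moved=16 * cells,
                     peak_flops_per_s=200e9, bandwidth_bytes_per_s=38e9)
print(f"estimated sweep time: {t * 1e3:.1f} ms")  # memory-bound in this configuration
```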

    Instrumenting and analyzing platform-independent communication in applications

    The performance of microprocessors is limited by communication. This limitation, sometimes alluded to as the memory wall, refers to the hardware-level cost of communicating with memory. Recent studies have found that the promise of speedup from transistor scaling, or from employing heterogeneous processors such as GPUs, is diminished when such hardware communication costs are included. Based on the insight that hardware communication at run time is a manifestation of communication in software, this dissertation proposes that automatically capturing and classifying software-level communication is the first step in performing fast, early-stage design space exploration of future multicore systems. Software-level communication refers to the exchange of data between software entities such as functions, threads, or basic blocks. Communication classification helps differentiate the first-time use from the reuse of communicated data, and distinguishes communication external to a software entity from local communication within a software entity. We present Sigil, a novel tool that automatically captures and classifies software-level communication in an efficient way. Due to its platform-independent nature, software-level communication can be useful during the early-stage design of future multicore systems. Using the two different representations of output data that Sigil produces, we show that the measurement of software-level communication can be used to analyze i) function-level interaction in single-threaded programs, to determine which specialized logic can be included in future heterogeneous multicore systems, and ii) thread-level interaction in multi-threaded programs, to aid in chip multi-processor (CMP) design space exploration. Ph.D., Electrical Engineering, Drexel University, 201
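    A hypothetical sketch of how software-level communication can be captured and classified in the spirit described here (not Sigil's actual implementation) keeps a shadow map from each address to its last writer, counts a read by a different function as external communication, and splits it into first-time use versus reuse:

```python
# Toy communication tracker: classifies reads as local, unique (first-time use
# of externally produced data), or reuse. Names and structure are illustrative.
from collections import defaultdict

class CommTracker:
    def __init__(self):
        self.last_writer = {}                 # address -> function that produced it
        self.seen = set()                     # (reader, address) pairs already counted
        self.stats = defaultdict(int)

    def write(self, func, addr):
        self.last_writer[addr] = func

    def read(self, func, addr):
        writer = self.last_writer.get(addr)
        if writer is None or writer == func:
            self.stats[(func, "local")] += 1           # produced and consumed locally
        elif (func, addr) not in self.seen:
            self.seen.add((func, addr))
            self.stats[(writer, func, "unique")] += 1  # first-time use of communicated data
        else:
            self.stats[(writer, func, "reuse")] += 1   # repeated use of the same data

# Example trace: producer writes two addresses, consumer reads one of them twice.
t = CommTracker()
t.write("producer", 0x10); t.write("producer", 0x14)
t.read("consumer", 0x10); t.read("consumer", 0x10); t.read("consumer", 0x14)
print(dict(t.stats))
# {('producer', 'consumer', 'unique'): 2, ('producer', 'consumer', 'reuse'): 1}
```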