
    Development of Some Efficient Lossless and Lossy Hybrid Image Compression Schemes

    Digital imaging generates large amounts of data that must be compressed, without loss of relevant information, to economize storage space and allow speedy data transfer. Although both storage and transmission-medium capacities have increased continuously over the last two decades, they do not match present requirements. Many lossless and lossy image compression schemes exist for compressing images in the spatial and transform domains. Combining more than one traditional image compression algorithm yields a hybrid image compression technique. Building on existing schemes, this doctoral research develops novel hybrid image compression schemes that compress images effectively while maintaining quality.
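A minimal sketch of the hybrid idea (a lossy stage chained with a lossless stage) is given below; the quantization step, the `hybrid_compress` name and the toy pixel data are illustrative assumptions, not the schemes developed in the thesis:

```python
import zlib

def hybrid_compress(pixels, step=16):
    # Lossy stage: uniform quantization collapses nearby grey values,
    # bounding the per-pixel error by `step`.
    quantized = bytes((p // step) * step for p in pixels)
    # Lossless stage: DEFLATE entropy coding exploits the redundancy
    # the quantizer introduced.
    return zlib.compress(quantized)

def hybrid_decompress(blob):
    return list(zlib.decompress(blob))

pixels = [17, 18, 16, 200, 201, 199, 30, 31] * 64   # toy 8-bit image data
blob = hybrid_compress(pixels)
restored = hybrid_decompress(blob)
max_err = max(abs(a - b) for a, b in zip(pixels, restored))
print(len(pixels), len(blob), max_err)   # error stays below the step size
```

The split mirrors many practical hybrids (for instance, quantized transform coefficients followed by entropy coding): the lossy stage controls the quality loss, while the lossless stage performs the actual size reduction.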

    Survey of FPGA applications in the period 2000 – 2015 (Technical Report)

    Romoth J, Porrmann M, Rückert U. Survey of FPGA applications in the period 2000 – 2015 (Technical Report); 2017. Since their introduction, FPGAs have appeared in an ever-growing range of application fields. Their key advantage is the combination of software-like flexibility with performance otherwise reserved for dedicated hardware. Nevertheless, every application field places its own requirements on the computational architecture used. This paper provides an overview of the topics FPGAs have been used for in the last 15 years of research, and of why they have been chosen over other processing units such as CPUs.

    Design and implementation of an FPGA-based framework for embedded systems, applied to an Optimum-Path Forest classifier (Concepção e realização de um framework para sistemas embarcados baseados em FPGA aplicado a um classificador Floresta de Caminhos Ótimos)

    Advisors: Eurípedes Guilherme de Oliveira Nóbrega, Isabelle Fantoni-Coichot, Vincent Frémont. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica / Université de Technologie de Compiègne. Abstract: Many modern applications rely on Artificial Intelligence methods such as automatic classification. However, the computational cost associated with these techniques limits their use in resource-constrained embedded platforms. The volume of data may exceed the computational power available in such embedded environments, making their design a challenging task. Common processing pipelines use many functions of high computational cost, which brings the necessity of combining high computational capacity with energy efficiency. One strategy to overcome this limitation and provide sufficient computational power allied with low energy consumption is the use of specialized hardware such as FPGAs. This class of devices is widely known for its performance-to-consumption ratio, making it an interesting alternative for building capable embedded systems. This thesis proposes an FPGA-based framework for performance acceleration of a classification algorithm to be implemented in an embedded system. Acceleration is achieved using a SIMD parallelization scheme that takes advantage of the fine-grained parallelism of FPGAs. The proposed system is implemented and tested on actual FPGA hardware. To validate the architecture, a graph-based classifier, the Optimum-Path Forest (OPF), is evaluated in a proposed application and afterwards implemented on the proposed architecture.
    The study of the OPF led to the proposal of a new learning algorithm for it, based on evolutionary computation concepts and aimed at reducing classification processing time; combined with the hardware implementation, this offers sufficient performance acceleration for a variety of embedded systems. Doctorate in Mechanical Engineering, area of Solid Mechanics and Mechanical Design; grant 3077/2013-09, CAPE.
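The OPF classification step can be sketched in a few lines of Python; the trained nodes, their path costs and the two-feature samples below are made-up illustrative values, not the thesis's framework, its learning algorithm or its FPGA implementation:

```python
import math

# Hypothetical trained OPF nodes: (feature vector, label, path cost).
trained = [
    ((0.0, 0.0), 0, 0.0),
    ((0.2, 0.1), 0, 0.2),
    ((1.0, 1.0), 1, 0.0),
    ((0.9, 0.8), 1, 0.2),
]

def opf_classify(sample):
    # A test sample is conquered by the trained node offering the cheapest
    # path, where the cost is max(node's own cost, distance to the sample).
    best = min(trained, key=lambda n: max(n[2], math.dist(n[0], sample)))
    return best[1]

print(opf_classify((0.1, 0.1)), opf_classify((0.95, 0.9)))  # 0 1
```

The per-node costs are independent of one another, which is what makes the step amenable to the SIMD-style parallel evaluation and reduction described in the thesis.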

    Slantlet transform-based segmentation and α-shape theory-based 3D visualization and volume calculation methods for MRI brain tumour

    Magnetic Resonance Imaging (MRI) is a foremost component of medical diagnosis, requiring careful, efficient, precise and reliable image analysis for brain tumour detection, segmentation, visualisation and volume calculation. The inherently varying nature of tumour shapes, locations and image intensities makes brain tumour detection highly intricate. A perfect result for brain tumour detection and segmentation is certainly advantageous, yet despite the several available methods, detection and segmentation are far from being resolved. Meanwhile, progress on 3D visualisation and volume calculation of brain tumours is very limited, owing to the absence of ground truth. This study therefore proposes four new methods: abnormal MRI slice detection, brain tumour segmentation based on the Slantlet Transform (SLT), and 3D visualisation and volume calculation of the brain tumour based on Alpha (α) shape theory. In addition, two new datasets with ground truth are created to validate the shape and volume of the brain tumour. The methodology involves three main phases. The first phase begins with cerebral tissue extraction, followed by abnormal block detection and its fine-tuning mechanism, and ends with abnormal slice detection based on the detected abnormal blocks. The second phase covers brain tumour segmentation in three steps: the abnormal slice is first decomposed using the SLT, its significant coefficients are then selected using the Donoho universal threshold, and the result is reconstructed using the inverse SLT to obtain the tumour region. Finally, the third phase rests on four original ideas for visualising the tumour and calculating its volume. The first is the determination of an optimal α value using a new formula. The second is to merge all tumour points across the abnormal slices, using the α value, to form a set of tetrahedra.
    The third is to select the most relevant tetrahedra using the α value as the threshold. The fourth is to calculate the volume of the tumour from the selected tetrahedra. To evaluate the performance of the proposed methods, a series of experiments is conducted on three standard datasets comprising 4567 MRI slices from 35 patients. The methods are evaluated using standard practices and benchmarked against the best and most up-to-date techniques. In the experiments, the proposed methods produce very encouraging results, with an accuracy rate of 96% for abnormal slice detection and sensitivity and specificity of 99% for brain tumour segmentation. A perfect result is also attained for the 3D visualisation and volume calculation of the brain tumour. These results suggest that the proposed methods may constitute a basis for reliable MRI brain tumour diagnosis and treatment.
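The final volume step reduces to the standard determinant formula for a tetrahedron, summed over the tetrahedra retained by the α test; the function names and the unit-cube decomposition below are a generic sanity check, not the thesis's implementation or data:

```python
def tet_volume(a, b, c, d):
    # Volume of a tetrahedron: |det(b - a, c - a, d - a)| / 6.
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    w = [d[i] - a[i] for i in range(3)]
    det = (u[0] * (v[1] * w[2] - v[2] * w[1])
           - u[1] * (v[0] * w[2] - v[2] * w[0])
           + u[2] * (v[0] * w[1] - v[1] * w[0]))
    return abs(det) / 6.0

def mesh_volume(tetrahedra):
    # Total volume: sum over the tetrahedra kept after the alpha-test.
    return sum(tet_volume(*t) for t in tetrahedra)

# Sanity check: five tetrahedra that tile the unit cube sum to volume 1.
cube_tets = [
    ((0, 0, 0), (1, 1, 0), (1, 0, 1), (0, 1, 1)),   # central tetrahedron
    ((1, 0, 0), (0, 0, 0), (1, 1, 0), (1, 0, 1)),
    ((0, 1, 0), (0, 0, 0), (1, 1, 0), (0, 1, 1)),
    ((0, 0, 1), (0, 0, 0), (1, 0, 1), (0, 1, 1)),
    ((1, 1, 1), (1, 1, 0), (1, 0, 1), (0, 1, 1)),
]
print(abs(mesh_volume(cube_tets) - 1.0) < 1e-9)  # True
```

Because each tetrahedron's volume is computed independently, the accuracy of the final figure depends entirely on the α-based selection of tetrahedra in the previous step.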

    Research & Technology Report Goddard Space Flight Center

    The main theme of this edition of the annual Research and Technology Report is Mission Operations and Data Systems. The report covers the shift from centralized to distributed mission operations, and from human-interactive to highly automated operations. The following aspects are addressed: mission planning and operations; TDRSS, positioning systems and orbit determination; hardware and software associated with ground systems and networks; data processing and analysis; and the World Wide Web. Flight projects are described, along with achievements in the space and earth sciences. Spacecraft subsystems, cryogenic developments, and new tools and capabilities are also discussed.

    Machine Learning

    Machine Learning can be defined in various ways, all relating to a scientific domain concerned with designing and developing theoretical and implementation tools for building systems with some human-like intelligent behaviour. More specifically, machine learning addresses the ability of a system to improve automatically through experience.

    Parallel Genetic Algorithm based Thresholding Schemes for Image Segmentation

    In this thesis, the problem of image segmentation is addressed using the notion of thresholding. Since the focus of this work is primarily on object/background classification and fault detection in a given scene, the segmentation problem is viewed as a classification problem. In this regard, thresholding is used to classify the range of grey values and hence classify the image. The grey-level distributions of the original image, or of a proposed feature image, are used to obtain the optimal threshold. Initially, Parallel Genetic Algorithm (PGA) based class models are developed to classify the different classes of a nonlinear multimodal function. The problem is formulated by viewing the nonlinear multimodal function as consisting of multiple class distributions, each represented by the niche, or peak, of that class. Hence, the problem becomes one of detecting the peaks of the function. A PGA-based clustering algorithm is proposed to maintain stable sub-populations in the niches so that the peaks can be detected, and a new interconnection model is proposed for the PGA to accelerate convergence to the optimal solution. Convergence analysis of the proposed PGA-based algorithm is carried out, and the algorithm is shown to converge to the solution. The proposed PGA-based clustering algorithm was successfully tested on different classes and is found to converge much faster than the corresponding GA-based clustering algorithm.
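A serial genetic-algorithm search for a grey-level threshold, with an Otsu-style between-class-variance fitness, illustrates the thresholding-as-optimization idea; the population size, mutation range and toy bimodal histogram are illustrative assumptions, and the parallel multi-population structure and interconnection model of the thesis are not reproduced here:

```python
import random

random.seed(0)   # deterministic toy run

def between_class_variance(pixels, t):
    # Otsu-style fitness: weighted squared distance between class means.
    lo = [p for p in pixels if p <= t]
    hi = [p for p in pixels if p > t]
    if not lo or not hi:
        return 0.0
    w0, w1 = len(lo) / len(pixels), len(hi) / len(pixels)
    m0, m1 = sum(lo) / len(lo), sum(hi) / len(hi)
    return w0 * w1 * (m0 - m1) ** 2

def ga_threshold(pixels, pop_size=20, generations=40):
    pop = [random.randint(0, 255) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: between_class_variance(pixels, t), reverse=True)
        elite = pop[:pop_size // 2]
        # Next generation: keep the fittest thresholds and mutate copies.
        pop = elite + [min(255, max(0, t + random.randint(-8, 8))) for t in elite]
    return pop[0]

# Toy bimodal image: a dark object (grey ~40) on a bright background (~200).
pixels = [40] * 100 + [200] * 100
t = ga_threshold(pixels)
print(40 <= t < 200)   # the found threshold separates the two modes
```

A PGA would run several such populations concurrently and exchange their best individuals over an interconnection topology, which is where the convergence speed-up reported in the thesis comes from.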

    Performance analysis for wireless G (IEEE 802.11G) and wireless N (IEEE 802.11N) in outdoor environment

    This paper analyses the different capabilities and limitations of two IEEE 802.11 technologies used for data transmission to mobile devices. In this work, we compare IEEE 802.11g and IEEE 802.11n in an outdoor environment to determine which technology performs better. The comparison considers coverage area (mobility), throughput, and interference measurements. The work presented here helps researchers select the best technology for their deployment case, and investigates the best variant for outdoor use. The tool used is the Iperf software, which measures the data transmission performance of IEEE 802.11n and IEEE 802.11g.
