
    MAC and baseband processors for RF-MIMO WLAN

    The article describes hardware solutions for the IEEE 802.11 medium access control (MAC) layer and the IEEE 802.11a digital baseband in an RF-MIMO WLAN transceiver that performs signal combining in the analogue domain. Architecture and implementation details of the MAC processor are presented, including a hardware accelerator and a 16-bit interface to the physical layer (PHY). The proposed hardware solution is tested and verified using a PHY link emulator. The architecture, design, implementation, and testing of a reconfigurable digital baseband processor are also described, covering the baseband algorithms (the main blocks being MIMO channel estimation and Tx-Rx analogue beamforming), their FPGA-based implementation, the baseband printed circuit board, and real-time tests.
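
    As a rough illustration of the kind of processing the baseband blocks above perform, the sketch below computes a least-squares MIMO channel estimate from known pilot symbols and then derives Tx and Rx beamforming weights from the dominant singular vectors of that estimate. The dimensions, the pilot scheme, and the NumPy formulation are illustrative assumptions; the paper's actual FPGA algorithms and fixed-point implementation are not reproduced here.

    import numpy as np

    # Hypothetical setup: 4 receive antennas, 2 transmit antennas, 16 pilot symbols.
    N_RX, N_TX, N_PILOT = 4, 2, 16
    rng = np.random.default_rng(0)

    # Known pilot matrix P and simulated received pilots Y = H P + noise.
    P = (rng.standard_normal((N_TX, N_PILOT)) + 1j * rng.standard_normal((N_TX, N_PILOT))) / np.sqrt(2)
    H_true = (rng.standard_normal((N_RX, N_TX)) + 1j * rng.standard_normal((N_RX, N_TX))) / np.sqrt(2)
    noise = 0.05 * (rng.standard_normal((N_RX, N_PILOT)) + 1j * rng.standard_normal((N_RX, N_PILOT)))
    Y = H_true @ P + noise

    # Least-squares channel estimate: H_hat = Y P^H (P P^H)^-1.
    H_hat = Y @ P.conj().T @ np.linalg.inv(P @ P.conj().T)

    # Dominant-eigenmode beamforming: the strongest left/right singular vectors of the
    # channel estimate give the Rx combining and Tx beamforming weight vectors
    # (the role played by the analogue weights in an RF-MIMO front end).
    U, s, Vh = np.linalg.svd(H_hat)
    w_rx = U[:, 0]
    w_tx = Vh[0, :].conj()

    effective_gain = abs(w_rx.conj() @ H_true @ w_tx)
    print(f"estimated dominant singular value: {s[0]:.3f}, gain on true channel: {effective_gain:.3f}")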

    Optimal Histopathological Magnification Factors for Deep Learning-Based Breast Cancer Prediction

    Pathologists use histopathology to examine tissues or cells under a microscope and compare healthy and abnormal tissue structures. Differentiating benign from malignant tumors is the most critical aspect of cancer histopathology. Pathologists work at a range of magnification factors, including 40x, 100x, 200x, and 400x, to identify abnormal tissue structures. The process is laborious because specialists must spend long periods gazing into microscope lenses, so overwork and fatigue make diagnostic errors more likely. Automating cancer detection in histopathology is an effective way to reduce such errors. Multiple approaches in the literature propose methods to automate breast cancer detection from histopathological images. This work performs a comprehensive analysis to identify which of the magnification factors 40x, 100x, 200x, and 400x yields higher prediction accuracy. The study finds that training Convolutional Neural Networks (CNNs) on 200x and 400x magnifications increases prediction accuracy compared to training on 40x and 100x; more specifically, the CNN model performs better when trained on 200x than on 400x.
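
    As a minimal sketch of the kind of comparison described above, the code below trains one small CNN per magnification factor and reports its accuracy. The tiny network, the synthetic stand-in batches, and all hyperparameters are assumptions for illustration only; the study's actual histopathology dataset, model, and evaluation protocol are not reproduced here.

    import torch
    import torch.nn as nn

    class SmallCNN(nn.Module):
        """Tiny stand-in for a benign-vs-malignant patch classifier."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(32, 2)

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    def train_and_evaluate(images, labels, epochs=5):
        """Train one model on patches from a single magnification factor; return training accuracy."""
        model = SmallCNN()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            optimizer.zero_grad()
            loss_fn(model(images), labels).backward()
            optimizer.step()
        with torch.no_grad():
            return (model(images).argmax(1) == labels).float().mean().item()

    # Synthetic batches standing in for patches grouped by magnification factor;
    # in the real experiment each group would hold images captured at that magnification.
    torch.manual_seed(0)
    for factor in ("40x", "100x", "200x", "400x"):
        images = torch.randn(32, 3, 64, 64)
        labels = torch.randint(0, 2, (32,))
        print(factor, f"training accuracy: {train_and_evaluate(images, labels):.2f}")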