
    Planning Wireless Cellular Networks of Future: Outlook, Challenges and Opportunities

    Cell planning (CP) is the most important phase in the life cycle of a cellular system, as it determines the operational expenditure, the capital expenditure, and the long-term performance of the system. It is therefore not surprising that CP problems have been studied extensively for the past three decades across all four generations of cellular systems. However, small cells, a major component of future networks, are anticipated to be deployed in an impromptu fashion, which makes CP for future networks vis-a-vis 5G a conundrum. Furthermore, emerging cellular systems incorporate a variety of cell sizes and types, heterogeneous networks (HetNets), energy-efficiency requirements, self-organizing network features, control and data plane split architectures (CDSA), massive multiple-input multiple-output (MIMO), coordinated multipoint (CoMP), cloud radio access networks, and millimetre-wave-based cells, and must also support Internet of Things (IoT) and device-to-device (D2D) communication; together, these require a major paradigm shift in the way cellular networks have been planned in the past. The objective of this paper is to characterize this paradigm shift by concisely reviewing past developments, analyzing state-of-the-art challenges, and identifying future trends, challenges, and opportunities in CP in the wake of 5G. More specifically, we investigate the problem of planning future cellular networks in detail. To this end, we first provide a brief tutorial on the CP process to identify the peculiarities that make CP one of the most challenging problems in wireless communications. This tutorial is followed by a concise recap of past research in CP. We then review key findings from recent studies that have attempted to address the aforementioned challenges in planning emerging networks. Finally, we discuss the range of technical factors that need to be taken into account while planning future networks and the promising research directions that the paradigm shift necessitates.

    A Low Power Multi-Class Migraine Detection Processor Based on Somatosensory Evoked Potentials

    Migraine is a disabling neurological disorder that can be recurrent and persist for long durations. Continuous monitoring of brain activity can enable the patient to respond in time, before the onset of an approaching migraine episode, and thereby minimize its severity. There is therefore a need for a wearable device that enables the early diagnosis of a migraine attack. This brief presents a low-latency, power-efficient feature extraction and classification processor for the early detection of a migraine attack. Somatosensory evoked potentials (SEPs) are utilized to monitor migraine patterns in an ambulatory environment, with the aim of integrating the processor on-sensor for power-efficient and timely intervention. In this work, a complete digital design of the wearable environment is proposed. It allows the extraction of multiple features, including power in several spectral bands computed with a 256-point fast Fourier transform (FFT), the root mean square (RMS) of late high-frequency oscillation (HFO) bursts, and the latency of the N20 peak. These features are then classified using a multi-class artificial neural network (ANN) classifier, which is also realized on the chip. The proposed processor is placed and routed in a 180 nm CMOS process with an active area of 0.5 mm². The total power consumption is 249 µW while operating at a 20 MHz clock, with full computation completed in 1.31 ms.
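    A minimal sketch of the three feature types named above (spectral band powers from a 256-point FFT, RMS of a late-HFO window, and N20 peak latency), written in Python for illustration. The sampling rate, band edges, and window bounds are assumptions rather than values from the paper, and the actual on-chip design is fixed-point hardware, not floating-point software.

    import numpy as np

    FS = 5000      # assumed SEP sampling rate in Hz (not from the paper)
    N_FFT = 256    # 256-point FFT, as stated in the abstract

    def band_powers(epoch, bands=((30, 100), (100, 250), (250, 1000))):
        """Sum FFT power inside each (lo, hi) band in Hz (band edges assumed)."""
        power = np.abs(np.fft.rfft(epoch[:N_FFT], n=N_FFT)) ** 2
        freqs = np.fft.rfftfreq(N_FFT, d=1.0 / FS)
        return [power[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands]

    def late_hfo_rms(epoch, start_ms=25.0, stop_ms=60.0):
        """RMS amplitude of the late-HFO window (window bounds assumed)."""
        i0, i1 = int(start_ms * FS / 1000), int(stop_ms * FS / 1000)
        return np.sqrt(np.mean(epoch[i0:i1] ** 2))

    def n20_latency_ms(epoch, search_ms=(15.0, 30.0)):
        """Latency of the N20 peak: the most negative sample inside a
        post-stimulus search window (window assumed)."""
        i0, i1 = (int(t * FS / 1000) for t in search_ms)
        return (i0 + int(np.argmin(epoch[i0:i1]))) * 1000.0 / FS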

    2019 IEEE Biomedical Circuits and Systems Conference (BioCAS)

    An electrocardiography (ECG)-based processor for the detection of eight cardiac arrhythmias (CAs), with smart priority logic to minimize false alarms, is presented. The processor utilizes multi-level linear support vector machine (ML-LSVM) classifiers with a one-vs-all approach to distinguish the different CAs. The classification is based solely on five features: the R-wave, S-wave, T-wave, R-R interval, and Q-S interval. The processor employs priority logic to prioritize the detected conditions when more than one condition is detected. The system is implemented in a 180 nm CMOS process with an area of 0.18 mm² and is validated using 83 patients' recordings from the Physionet Arrhythmia Database and the Creighton University Database. The proposed processor consumes 0.91 µW with an average classification accuracy of 98.5% while reducing false alarms by 99%, a 30% improvement over conventional systems.
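    A minimal Python sketch of the one-vs-all classification with priority logic described above. The eight condition labels, the urgency ordering, and the classifier weights are placeholders; the paper's trained ML-LSVM parameters and actual priority table are not reproduced here.

    import numpy as np

    # Hypothetical condition labels, ordered by urgency (lower index = more urgent).
    CLASSES = ["VF", "VT", "AF", "heart_block", "bradycardia",
               "tachycardia", "PVC", "PAC"]
    PRIORITY = {c: i for i, c in enumerate(CLASSES)}

    def classify(features, weights, biases):
        """features: 5-vector (R, S, T amplitudes; R-R and Q-S intervals).
        weights: (8, 5) array and biases: (8,) array, one linear SVM per condition."""
        scores = weights @ features + biases   # one-vs-all: score_i = w_i . x + b_i
        detected = [c for c, s in zip(CLASSES, scores) if s > 0]
        if not detected:
            return "normal"
        # Priority logic: when several conditions fire, report only the most urgent.
        return min(detected, key=PRIORITY.get)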

    Electrochemical Sensing of Lead in Drinking Water Using Copper Foil Bonded with Polymer

    Levels of lead (Pb) in tap water that are well below established guidelines are now considered harmful, so the detection of sub-parts-per-billion (ppb) Pb levels is crucial. In this work, we developed a facile, inexpensive, two-step fabrication approach that involves direct bonding of copper (Cu) and liquid crystal polymer (LCP), followed by polyester resin printing for masking onto Cu/LCP, to fabricate Cu thin-film-based Pb sensors. Oxygen plasma treatment of both surfaces, which renders them highly hydrophilic, produced a strongly bonded Cu/LCP stack with a high peel strength of 500 N/m. The bonded specimen withstands wet etching of the electrode and resists delamination during prolonged use in application environments. The Cu-foil-based electrochemical sensor showed a sensitivity of ~11 nA/ppb/cm² and a limit of detection (LOD) of 0.2 ppb (0.2 µg/L) for Pb ions in water. The sensor required only 30 s and a 100 µL sample to detect Pb; to date, this is the most rapid detection of Pb performed using an all-Cu-based sensor. A selectivity test for Pb with interference from cadmium and zinc showed that their peaks were separated by a few hundred millivolts. This approach has strong potential towards realizing low-cost, highly reliable integrated water quality monitoring systems.
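    A back-of-the-envelope check of the reported figures, in Python: with a sensitivity of ~11 nA/ppb/cm², a measured current response converts directly to a Pb concentration estimate. Only the sensitivity and the 0.2 ppb LOD come from the abstract; the electrode area is an assumption for illustration.

    SENSITIVITY = 11.0   # nA per ppb per cm², from the abstract
    AREA = 0.1           # electrode area in cm² (assumed, not from the paper)

    def pb_concentration_ppb(delta_current_nA):
        """Estimate the Pb concentration (ppb) from the sensor current response (nA)."""
        return delta_current_nA / (SENSITIVITY * AREA)

    # Example: a 1.1 nA response on a 0.1 cm² electrode implies ~1.0 ppb Pb,
    # five times the reported 0.2 ppb limit of detection.
    print(pb_concentration_ppb(1.1))   # -> 1.0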

    Delayed Cerebral Ischemia after Subarachnoid Hemorrhage: Beyond Vasospasm and Towards a Multifactorial Pathophysiology

    PURPOSE OF REVIEW: Delayed cerebral ischemia (DCI) is common after subarachnoid hemorrhage (SAH) and represents a significant cause of poor functional outcome. DCI was long thought to be caused by cerebral vasospasm; however, recent clinical trials have been unable to confirm this hypothesis. Studies in humans and animal models have since supported the notion of a multifactorial pathophysiology of DCI. This review summarizes some of the main mechanisms under investigation, including cerebral vascular dysregulation, microthrombosis, cortical spreading depolarizations, and neuroinflammation. RECENT FINDINGS: Recent guidelines have differentiated between DCI and angiographic vasospasm and have highlighted the roles of the microvasculature, the coagulation and fibrinolytic systems, cortical spreading depressions, and the immune system in DCI. Many therapeutic interventions targeting these novel mechanisms are underway in both preclinical and clinical studies, as are studies connecting these mechanisms to one another. SUMMARY: Clinical trials to date have been largely unsuccessful at preventing or treating DCI after SAH; the only successful pharmacologic intervention is the calcium channel antagonist nimodipine. Recent studies have provided evidence that cerebral vasospasm is not the sole contributor to DCI and that additional mechanisms may play equally important, if not more important, roles.