Towards a cyber physical system for personalised and automatic OSA treatment
Obstructive sleep apnea (OSA) is a breathing disorder that occurs during sleep and is caused by a complete or partial obstruction of the upper airway, manifesting as frequent pauses and resumptions of breathing. Evaluating in real time whether or not a patient is undergoing an OSA episode is an important task in many medical scenarios, for example when making the instantaneous pressure adjustments required by Automatic Positive Airway Pressure (APAP) devices used in the treatment of OSA. This paper describes the design of a possible Cyber Physical System (CPS) suited to real-time monitoring of OSA, and details its software architecture and possible hardware sensing components. It should be emphasized that this paper does not deal with a full CPS, but rather with its software part, under a set of assumptions about the environment. The paper also reports some preliminary experiments on the cognitive and learning capabilities of the designed CPS, involving its use on a publicly available sleep apnea database.
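The real-time evaluation described above can be illustrated with a minimal sketch: a threshold-based detector that flags an apnea episode when airflow stays near zero for at least a clinically common minimum duration. This is not the paper's actual CPS logic (which involves learned models); the function name, threshold, and duration are illustrative assumptions.

```python
# Illustrative sketch only: a simple threshold detector for apnea-like
# episodes in an airflow signal. Names and parameters are hypothetical,
# not the CPS described in the paper.

def detect_apnea_episodes(airflow, sample_rate_hz, threshold=0.2,
                          min_duration_s=10.0):
    """Return (start, end) sample-index pairs where |airflow| stays below
    `threshold` for at least `min_duration_s` seconds (a commonly cited
    minimum duration for an apnea event)."""
    min_samples = int(min_duration_s * sample_rate_hz)
    episodes = []
    start = None  # start index of the current low-airflow run, if any
    for i, value in enumerate(airflow):
        if abs(value) < threshold:
            if start is None:
                start = i
        else:
            if start is not None and i - start >= min_samples:
                episodes.append((start, i))
            start = None
    # Handle a low-airflow run that lasts until the end of the signal.
    if start is not None and len(airflow) - start >= min_samples:
        episodes.append((start, len(airflow)))
    return episodes
```

A real system would run this over a sliding window of the live sensor stream and feed detected episodes to the APAP pressure-adjustment logic.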
Dissertation submitted in partial fulfillment of the requirements for the Bachelor of Technology (Hons) (Information System)
Until recently, video conferencing was an expensive and tricky technology to work with. Most video conferencing deployments take the form of large-room systems with sophisticated and expensive conferencing equipment. At the same time, teaching and learning are still limited by physical boundaries. The main idea of this project is to improve communication and collaboration among students, lecturers and tutors. The methodology chosen for the development of this project is the prototyping system development methodology, which consists of Requirement Analysis, Design Prototype, Evaluate Prototype and Project Completion. In the Requirement Analysis phase, the requirements of the application and the functional specification are determined, followed by the Design Prototype phase, in which the critical parts of the project are developed, including the Graphical User Interface (GUI) and the coding of the application. The third phase is the Evaluate Prototype phase, in which testing takes place: each subcomponent is tested to make sure it meets all requirements. Once all components of the application have been tested and all requirements are satisfied, the last phase, Project Completion, is carried out, in which the final documentation is prepared before the final presentation. In conclusion, this project aims to improve the current style of communication. It makes effective use of communication technology: the processing power of desktop computers has almost reached a level comfortable for processing multimedia data, and advances in bandwidth availability on the Internet and on LANs/WANs have given networks the ability to handle real-time streaming media.
FAST ROTATED BOUNDING BOX ANNOTATIONS FOR OBJECT DETECTION
Traditionally, object detection models use a large amount of annotated data, and axis-aligned bounding boxes (AABBs) are often chosen as the image annotation technique for both training and predictions. The purpose of annotating the objects in the images is to indicate the regions of interest with the corresponding labels. Accurate object annotations help computer vision models understand the distinct patterns of the image features to recognize and localize different classes of objects. However, AABBs are often a poor fit for elongated object instances. It is also challenging to localize objects with AABBs in densely packed aerial images because of overlapping adjacent bounding boxes. Alternatively, rectangular annotations that can be oriented diagonally, also known as rotated bounding boxes (RBBs), can provide a much tighter fit for elongated objects and reduce the potential bounding box overlap between adjacent objects. However, RBBs are much more time-consuming and tedious to annotate than AABBs for large datasets.
In this work, we propose a novel annotation tool named FastRoLabelImg (Fast Rotated LabelImg) for producing high-quality RBB annotations with low time and effort. The tool generates accurate RBB proposals for objects of interest as the annotator makes progress through the dataset. It can also adapt available AABBs to generate RBB proposals. Furthermore, a multipoint box drawing system is provided to reduce manual RBB annotation time compared to the existing methods. Across three diverse datasets, we show that the proposal generation methods can achieve a maximum of 88.9% manual workload reduction. We also show, through a participant study, that our proposed manual annotation method is twice as fast as the existing system at the same accuracy. Lastly, we publish the RBB annotations for two public datasets in order to motivate future research that will contribute to developing more competent object detection algorithms capable of RBB predictions.
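A rotated bounding box is commonly parameterized as a center, width, height, and rotation angle; the sketch below shows how the four corner points follow from that parameterization. This is a generic illustration of the RBB representation, not FastRoLabelImg's internal code.

```python
# Generic sketch of the (cx, cy, w, h, angle) rotated-bounding-box
# parameterization; not taken from the FastRoLabelImg implementation.
import math

def rbb_corners(cx, cy, w, h, angle_deg):
    """Return the four corner points of a rotated bounding box, obtained
    by rotating the axis-aligned corners about the box center."""
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    corners = []
    # Offsets of the four corners relative to the center, before rotation.
    for dx, dy in ((-w/2, -h/2), (w/2, -h/2), (w/2, h/2), (-w/2, h/2)):
        corners.append((cx + dx * cos_a - dy * sin_a,
                        cy + dx * sin_a + dy * cos_a))
    return corners
```

With angle 0 this reduces to an ordinary AABB; at other angles the box hugs a diagonally oriented object much more tightly than the axis-aligned rectangle that would have to enclose the same corners.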
Improving next-generation wireless network performance and reliability with deep learning
A fundamental question is often raised: can machine learning in general, or deep learning in particular, add anything to the well-established field of wireless communications, which has been evolving for close to a century? While the use of deep learning based methods is likely to help build intelligent wireless solutions, this use becomes particularly challenging for the lower layers of the wireless communication stack. The introduction of the fifth generation of wireless communications (5G) has triggered the demand for “network intelligence” to support its promises of very high data rates and extremely low latency. Consequently, 5G wireless operators face the challenges of network complexity, diversification of services, and personalized user experience. Industry standards have created enablers (such as the network data analytics function), but these enablers focus on post-mortem analysis at higher stack layers and operate with a periodicity on the time scale of seconds (or larger). The goal of this dissertation is to address these challenges and to show how a data-driven approach using deep learning can add to the field of wireless communications. In particular, I propose intelligent predictive and prescriptive abilities to boost reliability and eliminate performance bottlenecks in 5G cellular networks and beyond, present contributions that justify the value of deep learning in wireless communications across several different layers, and offer in-depth analysis and comparisons with baselines and industry standards. First, to improve multi-antenna network reliability against wireless impairments with power control and interference coordination for both packetized voice and beamformed data bearers, I propose a joint beamforming, power control, and interference coordination algorithm based on deep reinforcement learning.
This algorithm uses a string of bits and logic operations to enable simultaneous actions to be performed by the reinforcement learning agent. Consequently, a joint reward function is also proposed. I compare the performance of my proposed algorithm with the brute-force approach and show that similar performance is achievable but with faster run-time as the number of transmit antennas increases. Second, to enhance the performance of coordinated multipoint, I propose the use of deep learning binary classification to learn a surrogate function that triggers a second transmission stream, instead of depending on the popular signal to interference plus noise measurement quantity. This surrogate function improves the users' sum-rate by focusing on the pre-logarithmic terms in the sum-rate formula, which have a larger impact on this rate. Third, the performance of band switching can be improved without the need for a full channel estimation. My proposal of using deep learning to classify the quality of two frequency bands prior to granting the band switch leads to a significant improvement in users' throughput. This is due to the elimination of the industry-standard measurement gap requirement—a period of silence where no data is sent to the users so they can measure the frequency bands before switching. In this dissertation, a group of algorithms for downlink wireless network performance and reliability is proposed. My results show that the introduction of user coordinates enhances the accuracy of the predictions made with deep learning. Also, the choice of signal to interference plus noise ratio as the optimization objective may not always be the best choice for improving user throughput rates. Further, exploiting the spatial correlation of channels in different frequency bands can improve certain network procedures without the need for perfect knowledge of the per-band channel state information.
Hence, an understanding of these results helps develop novel solutions for enhancing these wireless networks at a much smaller time scale than today's industry standards.
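The bit-string mechanism for simultaneous actions mentioned in the abstract can be sketched as packing several sub-actions into one integer and recovering them with masks and shifts. The field widths and sub-action names below are assumptions for illustration, not the dissertation's actual encoding.

```python
# Hypothetical sketch of encoding simultaneous RL actions as a bit string,
# in the spirit of the joint beamforming / power control / interference
# coordination agent. Field widths are illustrative assumptions.

def decode_joint_action(action, n_beam_bits=3, n_power_bits=2, n_ic_bits=2):
    """Split one integer action into three sub-actions via bit slicing:
    low bits select the beam, middle bits the power step, high bits the
    interference-coordination command."""
    beam = action & ((1 << n_beam_bits) - 1)
    action >>= n_beam_bits
    power = action & ((1 << n_power_bits) - 1)
    action >>= n_power_bits
    ic = action & ((1 << n_ic_bits) - 1)
    return beam, power, ic
```

One scalar action from the agent's discrete action space thus drives all three controls at once, which is what allows a single reward function to score the joint effect.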
Genetic algorithms
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem-solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts are introduced, genetic algorithm applications are described, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
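The basic concepts named above (a population of candidate solutions, fitness-based selection, crossover, and mutation) can be sketched as a minimal generational GA. This is a generic textbook illustration, not the software tool described in the abstract.

```python
# Minimal generational genetic algorithm: tournament selection,
# one-point crossover, bit-flip mutation. A generic illustration,
# not the tool described in the abstract.
import random

def genetic_algorithm(fitness, n_bits=10, pop_size=20, generations=60,
                      p_mut=0.05, seed=0):
    """Maximize `fitness` over fixed-length bit strings."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    best = max(pop, key=fitness)  # track the best individual ever seen
    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            # Tournament selection: the fitter of two random individuals.
            p1 = max(rng.sample(pop, 2), key=fitness)
            p2 = max(rng.sample(pop, 2), key=fitness)
            # One-point crossover at a random cut position.
            cut = rng.randrange(1, n_bits)
            child = p1[:cut] + p2[cut:]
            # Bit-flip mutation with per-bit probability p_mut.
            child = [b ^ 1 if rng.random() < p_mut else b for b in child]
            new_pop.append(child)
        pop = new_pop
        best = max(pop + [best], key=fitness)
    return best

# Example: maximize the number of ones ("OneMax"), fitness = sum of bits.
best = genetic_algorithm(sum)
```

The "highly parallel" character of GAs comes from the population: each generation evaluates many candidate solutions independently, so fitness evaluation parallelizes trivially.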
Data prediction for cases of incorrect data in multi-node electrocardiogram monitoring
The development of a mesh topology in multi-node electrocardiogram (ECG) monitoring based on the ZigBee protocol still has limitations. When more than one active ECG node sends a data stream, incorrect or damaged data arise due to synchronization failure. The incorrect data affect signal interpretation. Therefore, a mechanism is needed to correct or predict the damaged data. In this study, the expectation-maximization (EM) and regression imputation (RI) methods were proposed to overcome these problems. Real data from previous studies are the main modalities used in this study. The predicted ECG signal data are then compared with the actual ECG data stored in the main controller memory. Root mean square error (RMSE) is calculated to measure system performance. The simulation was performed on 13 ECG waves, each with 1000 samples. The simulation results show that the EM method has a lower prediction error than the RI method: the average RMSE for the EM and RI methods is 4.77 and 6.63, respectively. The proposed method is expected to be used in multi-node ECG monitoring, especially in ZigBee applications, to minimize errors.
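The regression-imputation idea and the RMSE score used above can be illustrated with a minimal sketch: predict a damaged sample from a least-squares line fitted to its valid neighbours, then score predictions against the stored ground truth. The window size and function names are illustrative assumptions, not the study's implementation.

```python
# Illustrative sketch (not the study's implementation): regression
# imputation of a single damaged ECG sample from nearby valid samples,
# plus the RMSE metric used to score predictions.
import math

def regression_impute(samples, missing_idx, window=3):
    """Predict samples[missing_idx] from a least-squares line fitted to
    up to `window` valid samples on each side of the gap."""
    xs = [i for i in range(missing_idx - window, missing_idx + window + 1)
          if i != missing_idx and 0 <= i < len(samples)]
    ys = [samples[i] for i in xs]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    # Evaluate the fitted line at the missing index.
    return mean_y + slope * (missing_idx - mean_x)

def rmse(predicted, actual):
    """Root mean square error between predicted and actual samples."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual))
                     / len(actual))
```

In the study's setting, the EM method plays the role of the predictor here, and the RMSE of each method's predictions over the 13 test waves gives the reported averages of 4.77 and 6.63.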