
    Multiple transmission optimization of medical images in resource-constrained mobile telemedicine systems

    Background and objective: In state-of-the-art image transmission methods, multiple large medical images are usually transmitted one by one, which is very inefficient. The objective of our study is to devise an effective and efficient multiple-transmission optimization scheme for medical images, called Mto, by analyzing the visual content of the images based on the characteristics of a resource-constrained mobile telemedicine system (MTS) and of the medical images themselves. Methods: To facilitate efficient Mto processing, two enabling techniques are developed: 1) an NIB grouping scheme and 2) adaptive RIB replica selection. Given a set of transmission images (Ω), the correlation among these images is first explored while the pixel resolutions of the corresponding MIBs are kept high; the NIBs are then grouped into k clusters based on visual similarity, from which the k RIBs are obtained. An optimal pixel resolution for the RIBs is derived from the current network bandwidth, their corresponding areas, and related factors. The candidate MIBs and the k RIBs are then transmitted to the receiver node according to their transmission priorities. Finally, the IBs are reconstructed and displayed at the receiver node for different users. Results: The experimental results show that our approach is about 45% more efficient than the state-of-the-art methods, significantly reducing response time by decreasing network communication cost while improving transmission throughput. Conclusions: Our proposed Mto method can be seamlessly applied in a resource-constrained MTS environment, guaranteeing high transmission efficiency and acceptable image quality. Keywords: Medical image; Multi-resolution; Mobile telemedicine system; Batch transmission
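
    The abstract does not spell out the grouping algorithm, so the following is only a minimal sketch of the idea, assuming k-means over a crude mean/variance descriptor as the visual-similarity measure and a hypothetical bandwidth-to-resolution rule; the actual Mto scheme may handle blocks differently.

```python
# Hedged sketch (not the paper's implementation): groups non-important image
# blocks (NIBs) into k clusters by a simple visual feature, picks one
# representative block (RIB) per cluster, and chooses a resolution scale from
# the available bandwidth. Feature choice, k, and the bandwidth rule are assumptions.

import numpy as np
from sklearn.cluster import KMeans

def block_feature(block: np.ndarray) -> np.ndarray:
    """Crude visual descriptor: mean and standard deviation of pixel values."""
    return np.array([block.mean(), block.std()])

def select_rib_per_cluster(blocks, k=4):
    """Cluster blocks by visual similarity and return one representative per cluster."""
    feats = np.stack([block_feature(b) for b in blocks])
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(feats)
    ribs = {}
    for label in range(k):
        members = np.where(labels == label)[0]
        centroid = feats[members].mean(axis=0)
        # pick the member closest to the cluster centroid as the representative
        rib_idx = members[np.argmin(np.linalg.norm(feats[members] - centroid, axis=1))]
        ribs[label] = int(rib_idx)
    return labels, ribs

def rib_scale_for_bandwidth(bandwidth_kbps: float) -> float:
    """Assumed rule: lower bandwidth -> coarser RIB resolution (scale in (0, 1])."""
    if bandwidth_kbps > 2000:
        return 1.0
    if bandwidth_kbps > 500:
        return 0.5
    return 0.25

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    blocks = [rng.integers(0, 256, size=(64, 64)).astype(float) for _ in range(32)]
    labels, ribs = select_rib_per_cluster(blocks, k=4)
    print("representative block per cluster:", ribs)
    print("RIB scale at 800 kbps:", rib_scale_for_bandwidth(800))
```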

    On the analysis of big data indexing execution strategies

    Efficient response to search queries is crucial for data analysts to obtain timely results from big data spread over heterogeneous machines. A number of big-data processing frameworks are currently available in which search operations are performed in a distributed and parallel manner; implementing an indexing mechanism on top of them can noticeably reduce overall query processing time. There is therefore a need to assess the feasibility and impact of indexing on query execution performance. This paper investigates the performance of state-of-the-art clustered indexing approaches over the Hadoop framework, the de facto standard for big data processing. Moreover, this study provides a comparative analysis of non-clustered indexing overhead in terms of the time and space taken by the indexing process for data sets of varying volume with increasing Index Hit Ratio. Furthermore, the experiments evaluate the performance of search operations in terms of data access and retrieval time for queries that use indexes. We then validated the obtained results using Petri net mathematical modeling. We used multiple data sets in our experiments to show the impact of growing data volume on indexing and on data search and retrieval performance. The results and highlighted challenges point researchers towards improved application of indexing mechanisms for data retrieval from big data. Additionally, this study advocates selecting a non-clustered indexing solution to obtain optimized search performance over big data.
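
    As a rough illustration of why a non-clustered (secondary) index can pay off, the sketch below builds an in-memory value-to-position map and compares it against a full scan, also reporting an Index Hit Ratio computed as the fraction of queried values covered by the index; the data layout and this hit-ratio definition are assumptions, not the paper's Hadoop setup.

```python
# Minimal sketch (assumption, not the paper's experimental environment):
# contrasts a full scan with a non-clustered index kept as a separate
# value -> row-position map, and reports a simple Index Hit Ratio.

from collections import defaultdict
import random
import time

def build_nonclustered_index(rows, key_field):
    """Secondary index: maps a field value to the positions of matching rows."""
    index = defaultdict(list)
    for pos, row in enumerate(rows):
        index[row[key_field]].append(pos)
    return index

def full_scan(rows, key_field, value):
    return [row for row in rows if row[key_field] == value]

def indexed_lookup(rows, index, value):
    return [rows[pos] for pos in index.get(value, [])]

if __name__ == "__main__":
    random.seed(0)
    rows = [{"id": i, "dept": random.randrange(100)} for i in range(200_000)]
    index = build_nonclustered_index(rows, "dept")

    # some query values fall outside the indexed key range, so they miss the index
    queries = [random.randrange(120) for _ in range(200)]
    hits = sum(1 for q in queries if q in index)
    print(f"Index Hit Ratio: {hits / len(queries):.2f}")

    t0 = time.perf_counter(); full_scan(rows, "dept", 42); t1 = time.perf_counter()
    indexed_lookup(rows, index, 42); t2 = time.perf_counter()
    print(f"full scan: {t1 - t0:.4f}s  indexed lookup: {t2 - t1:.4f}s")
```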

    Predictive Internet of Things Based Detection Model of Comatose Patient using Deep Learning

    The needs and demands of the healthcare sector are increasing exponentially, and there has been rapid development across diverse technologies. Hence, advances in technologies such as the Internet of Things (IoT) and deep learning are being utilised and play a vital role in the healthcare sector. In the healthcare domain specifically, there is also an increasing need to determine the likelihood of a patient going into a coma, because if this can be detected early, preventive steps can be initiated that may save the patient's life. The work proposed in this paper is in this direction: advances in technology are used to build a predictive model for forecasting the chances of a patient entering a comatose state. The proposed system consists of various sensing devices that take inputs from the patient and help monitor the patient's condition, recording details such as blood pressure (B.P.), pulse rate, heart rate, and brain signals, and continuously monitoring the motion of the coma patient. The vital parameters are captured continuously and displayed on a graphical display unit. Whenever even one vital parameter exceeds a certain threshold, the probability that the patient will go into a coma increases and an alert is raised immediately. All records indicating a risk of the patient entering a comatose state are stored in the cloud. Subsequently, based on the data retrieved from the cloud, a predictive model using a Convolutional Neural Network (CNN) is built to forecast the status of the coma patient for any set of health-related parameters. The effectiveness of the predictive model is evaluated in terms of performance metrics such as accuracy, precision, and recall; the forecasting model achieves an accuracy of up to 98%. Such a system will greatly benefit the health sector and coma patients, enabling superior predictive and preventive models that help reduce the number of patients going into a comatose state.
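
    A minimal sketch of the threshold-and-alert stage described above, assuming hypothetical parameter names and normal ranges (they are not the paper's values); the CNN stage built on the cloud-stored records is only indicated by a comment.

```python
# Illustrative sketch only: checks each incoming set of vital-sign readings
# against assumed normal ranges and raises an alert when any parameter falls
# outside its range; flagged records would then be stored (e.g., in the cloud)
# as training/inference input for the CNN-based predictive model.

NORMAL_RANGES = {
    "systolic_bp": (90, 140),   # mmHg
    "pulse_rate": (60, 100),    # beats per minute
    "heart_rate": (60, 100),    # beats per minute
    "spo2": (94, 100),          # percent oxygen saturation
}

def out_of_range(reading: dict) -> list[str]:
    """Return the names of parameters outside their assumed normal range."""
    flagged = []
    for name, (low, high) in NORMAL_RANGES.items():
        value = reading.get(name)
        if value is not None and not (low <= value <= high):
            flagged.append(name)
    return flagged

def process_reading(reading: dict, history: list[dict]) -> None:
    flagged = out_of_range(reading)
    if flagged:
        print(f"ALERT: abnormal {', '.join(flagged)} -> store record for CNN model")
        history.append(reading)   # stand-in for uploading the record to the cloud

if __name__ == "__main__":
    history = []
    process_reading({"systolic_bp": 150, "pulse_rate": 55, "heart_rate": 72, "spo2": 97}, history)
    process_reading({"systolic_bp": 120, "pulse_rate": 80, "heart_rate": 75, "spo2": 98}, history)
    print(f"{len(history)} record(s) queued for the predictive model")
```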

    High-Performance Modelling and Simulation for Big Data Applications

    This open access book was prepared as the final publication of the COST Action IC1406 “High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)” project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predict and analyse natural and complex systems in science and engineering. As their level of abstraction is raised to gain better discernment of the domain at hand, their representations become increasingly demanding of computational and data resources. On the other hand, High Performance Computing typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. A seamless interaction of High Performance Computing with Modelling and Simulation is therefore arguably required in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.

    Edge Intelligence : Empowering Intelligence to the Edge of Network

    Edge intelligence refers to a set of connected systems and devices that perform data collection, caching, processing, and analysis in proximity to where the data are captured, based on artificial intelligence. Edge intelligence aims at enhancing data processing and protecting the privacy and security of the data and users. Although it emerged only recently, spanning the period from 2011 to now, this field of research has shown explosive growth over the past five years. In this article, we present a thorough and comprehensive survey of the literature surrounding edge intelligence. We first identify four fundamental components of edge intelligence, i.e., edge caching, edge training, edge inference, and edge offloading, based on theoretical and practical results pertaining to proposed and deployed systems. We then aim for a systematic classification of the state of the solutions by examining research results and observations for each of the four components and present a taxonomy that includes practical problems, adopted techniques, and application goals. For each category, we elaborate, compare, and analyze the literature from the perspectives of adopted techniques, objectives, performance, advantages and drawbacks, and so on. This article provides a comprehensive survey of edge intelligence and its application areas. In addition, we summarize the development of the emerging research fields and the current state of the art, and discuss important open issues and possible theoretical and technical directions.
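
    To make the edge-offloading component concrete, here is a toy decision rule of the kind the survey categorises, comparing an estimated on-device latency against transmission-plus-server latency; the cost model and all numbers are assumptions rather than anything taken from the article.

```python
# Hedged illustration (not from the survey itself): a simple latency-based
# edge-offloading decision. The per-kilobyte compute costs and the uplink
# model are made-up values for demonstration only.

from dataclasses import dataclass

@dataclass
class Task:
    input_bytes: int         # size of the data to process
    local_ms_per_kb: float   # estimated on-device compute cost
    server_ms_per_kb: float  # estimated edge-server compute cost

def offload_decision(task: Task, uplink_kbps: float) -> str:
    kb = task.input_bytes / 1024
    local_ms = kb * task.local_ms_per_kb
    # transmission time in ms for an uplink given in kilobits per second
    tx_ms = (task.input_bytes * 8) / uplink_kbps
    remote_ms = tx_ms + kb * task.server_ms_per_kb
    return "offload to edge server" if remote_ms < local_ms else "run on device"

if __name__ == "__main__":
    frame = Task(input_bytes=200_000, local_ms_per_kb=1.5, server_ms_per_kb=0.2)
    print(offload_decision(frame, uplink_kbps=20_000))  # fast uplink -> offload
    print(offload_decision(frame, uplink_kbps=200))     # slow uplink -> run locally
```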

    Ubiquitous Scalable Graphics: An End-to-End Framework using Wavelets

    Advances in ubiquitous displays and wireless communications have fueled the emergence of exciting mobile graphics applications, including 3D virtual product catalogs, 3D maps, security monitoring systems, and mobile games. Current trends that use cameras to capture geometry, material reflectance, and other graphics elements mean that very high resolution inputs are available to render extremely photorealistic scenes. However, captured graphics content can be many gigabytes in size and must be simplified before it can be used on small mobile devices, which have limited resources such as memory, screen size, and battery energy. Scaling and converting graphics content to a suitable rendering format involves running several software tools, and selecting the best resolution for a target mobile device is often done by trial and error, all of which takes time. Wireless errors can also corrupt transmitted content, and aggressive compression is needed for low-bandwidth wireless networks. Most rendering algorithms are currently optimized for visual realism and speed, but are not resource or energy efficient on mobile devices. This dissertation focuses on improving rendering performance by reducing the impact of these problems with UbiWave, an end-to-end framework that enables real-time mobile access to high resolution graphics using wavelets. The framework tackles the simplification, transmission, and resource-efficient rendering of graphics content on mobile devices based on wavelets, utilizing 1) a Perceptual Error Metric (PoI) for automatically computing the best resolution of graphics content for a given mobile display, eliminating guesswork and saving resources, 2) Unequal Error Protection (UEP) to improve resilience to wireless errors, 3) an Energy-efficient Adaptive Real-time Rendering (EARR) heuristic to balance energy consumption, rendering speed, and image quality, and 4) an energy-efficient streaming technique. The results facilitate a new class of mobile graphics applications that can gracefully adapt the lowest acceptable rendering resolution to the wireless network conditions and the availability of resources and battery energy on the mobile device.
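
    As a rough sketch of the resolution-selection idea only (not UbiWave's PoI metric or its wavelet codec), the code below builds a Haar-style averaging pyramid and picks the coarsest level that still covers an assumed display resolution and fits a per-frame byte budget derived from the available bandwidth.

```python
# Sketch under stated assumptions: repeated 2x2 averaging approximates the
# coarse levels of a wavelet decomposition; level selection trades resolution
# against an assumed per-frame byte budget. Not the dissertation's algorithm.

import numpy as np

def build_pyramid(image: np.ndarray, levels: int):
    """Return [finest, ..., coarsest] by halving resolution with 2x2 averages."""
    pyramid = [image]
    for _ in range(levels):
        h, w = pyramid[-1].shape
        trimmed = pyramid[-1][: h - h % 2, : w - w % 2]
        coarse = trimmed.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(coarse)
    return pyramid

def choose_level(pyramid, display_px: int, budget_bytes: int):
    """Prefer the coarsest level that still matches the display and the budget."""
    for level in reversed(pyramid):       # coarsest first
        size_bytes = level.size           # assume roughly 1 byte per pixel after coding
        if max(level.shape) >= display_px and size_bytes <= budget_bytes:
            return level
    return pyramid[-1]                    # fall back to the coarsest level

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    texture = rng.random((1024, 1024))
    pyramid = build_pyramid(texture, levels=4)   # 1024, 512, 256, 128, 64
    chosen = choose_level(pyramid, display_px=320, budget_bytes=300_000)
    print("transmit resolution:", chosen.shape)
```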