
    Mapping Digital Media: Russia

    Examines trends in Russia's media system, including media consumption, media ownership, the use of television as an organ of executive power, and the effect of digital media on freedom of speech, pluralism, civic participation, and news quality

    Towards the Advanced Data Processing for Medical Applications Using Task Offloading Strategy

    Broad adoption of resource-constrained devices for medical use imposes additional limitations on the execution of delay-sensitive medical applications. As one possible solution, new ways of computational offloading could be developed and integrated. The recently emerged Mobile Edge Computing (MEC) and Mobile Cloud Computing (MCC) paradigms attempt to address this problem by offloading tasks to a resource-rich server. In the context of making eHealth services available to all patients, independently of location, implementing MEC and MCC could help ensure high availability of medical services. Remote medical examination, robotic surgery, and cardiac telemetry require efficient computing solutions. This work discusses three alternative computing models: local computing, MEC, and MCC. We have designed a Matlab-based tool to calculate and compare response time and energy efficiency. We show that local computing demands 48 times more power than MEC/MCC as the packet workload increases. On the other hand, the throughput of MEC/MCC depends heavily on the parameters of the communication channel. Finding an optimal trade-off between response time and energy consumption is an important research question that cannot be solved without investigating the system's bottlenecks.
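    The local-versus-offloaded trade-off this abstract describes can be sketched with a simple analytical model. All parameters below (CPU rates, link rate, power figures) are illustrative assumptions, not values from the paper's Matlab tool.

```python
def local_metrics(workload_cycles, cpu_hz=1e9, power_w=2.0):
    """Response time (s) and device energy (J) when the task runs locally."""
    t = workload_cycles / cpu_hz
    return t, t * power_w

def offload_metrics(workload_cycles, packet_bits,
                    server_hz=20e9, link_bps=50e6, tx_power_w=0.5):
    """Response time and device-side energy when the task is offloaded:
    the device only pays for transmission; the server does the computing."""
    t_tx = packet_bits / link_bps          # uplink transfer delay
    t_exec = workload_cycles / server_hz   # remote execution delay
    return t_tx + t_exec, t_tx * tx_power_w

# Example: a 1-gigacycle task carried in a 1-Mbit packet.
t_loc, e_loc = local_metrics(1e9)
t_off, e_off = offload_metrics(1e9, 1e6)
print(f"local:   {t_loc:.3f} s, {e_loc:.3f} J")
print(f"offload: {t_off:.3f} s, {e_off:.3f} J")
```

    The sketch makes the paper's observation concrete: offloaded energy depends only on the transmission phase, so it is dominated by the link rate, which is why MEC/MCC throughput is so sensitive to the communication channel.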

    Demystifying Usability of Open Source Computational Offloading Simulators: Performance Evaluation Campaign

    Along with analysis and practical implementation, simulations play a key role in wireless networks and computational offloading research for several reasons. First, simulations make it easy to obtain data for evaluating a complex system's model. Second, simulated data provides a controlled environment for experimentation, allowing models and algorithms to be tested for robustness and potential limitations to be identified before deployment in real-world applications. Choosing the most appropriate simulation tool can be challenging and depends on several factors, such as the main purpose, complexity of the data, researcher skills, community support, and available budget. At the time of the present analysis, several system-level open-source tools for modeling computational offloading also cover the systems' communication side: CloudSim, CloudSim Plus, IoTSim-Edge, EdgeCloudSim, iFogSim2, PureEdgeSim, and YAFS. This work evaluates these tools based on their unique features and on performance results from workload-intensive and delay-tolerant scenarios: XR with an extremely high data rate and workload; remote monitoring with a low data rate and moderate delay and workload requirements; and data streaming as general human traffic with a relatively high bit rate but moderate workload. The work concludes that CloudSim provides a reliable environment for virtualization on the host resources and YAFS shows minimal hardware usage, while IoTSim-Edge, PureEdgeSim, and EdgeCloudSim have fewer implemented features.

    Low-Level Laser Treatment Induces the Blood-Brain Barrier Opening and the Brain Drainage System Activation: Delivery of Liposomes into Mouse Glioblastoma

    The progress of brain disease treatment is limited by the blood-brain barrier (BBB), which prevents delivery of the vast majority of drugs from the blood into the brain. In this study, we describe a previously unknown phenomenon of opening of the BBB (BBBO) by low-level laser treatment (LLLT, 1268 nm) in the mouse cortex. LLLT-BBBO is accompanied by activation of the brain drainage system, contributing to the effective delivery of liposomes into glioblastoma (GBM). LLLT induces the generation of singlet oxygen without photosensitizers (PSs) in the blood endothelial cells and astrocytes, which can be a trigger mechanism of BBBO. LLLT-BBBO causes activation of the ABC transport system with a temporary decrease in the expression of tight junction proteins. BBB recovery is accompanied by activation of neuronal metabolic activity and stabilization of BBB permeability. LLLT-BBBO can be used as a new option for interstitial PS-free photodynamic therapy (PDT) to modulate brain tumor immunity and improve immunotherapy for GBM in infants, for whom PDT with PSs, radiotherapy, and chemotherapy are strongly limited, as well as in adults with severe allergic reactions to PSs.

    Applying Machine Learning to LTE Traffic Prediction: Comparison of Bagging, Random Forest, and SVM

    Today, a significant share of smartphone applications use Artificial Intelligence (AI) elements that, in turn, are based on Machine Learning (ML) principles. In particular, ML is also applied within the Edge paradigm to predict and optimize the network load conventionally caused by human-generated traffic, which is growing rapidly each year. The application of both standard and deep ML techniques is expected to improve network operation in highly complex heterogeneous environments. In this work, we propose a method to predict LTE network edge traffic by utilizing various ML techniques. The analysis is based on a public cellular traffic dataset and presents a comparison of the quality metrics. The Support Vector Machines method allows much faster training than Bagging and Random Forest, which in turn handle a mixture of numerical and categorical features well.
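    The kind of comparison this abstract reports can be reproduced in miniature with scikit-learn. The synthetic diurnal traffic below stands in for the (unspecified) public cellular dataset; model hyperparameters are left at library defaults, which is an assumption, not the paper's setup.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor, RandomForestRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
hour = rng.integers(0, 24, size=500)
# synthetic diurnal traffic: a daily cycle plus noise
traffic = 50 + 30 * np.sin(2 * np.pi * hour / 24) + rng.normal(0, 5, 500)
X = hour.reshape(-1, 1).astype(float)

X_tr, X_te, y_tr, y_te = train_test_split(X, traffic, random_state=0)

models = {
    "SVM": SVR(),
    "Bagging": BaggingRegressor(random_state=0),
    "Random Forest": RandomForestRegressor(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    mae = mean_absolute_error(y_te, model.predict(X_te))
    print(f"{name}: MAE = {mae:.2f}")
```

    On a real dataset the comparison would also track training time, which is where the abstract reports SVM's advantage over the two ensemble methods.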

    The future of computing paradigms for medical and emergency applications

    Healthcare is of particular importance in everyone's life, and keeping it advancing at a good pace is a priority for any country, as it strongly influences the overall well-being of its citizens. Each government strives to build a modern, intelligent medical system that provides maximum population coverage with high-quality medical services. The development of Information and Communication Technologies (ICT) significantly improves the accessibility and effectiveness of the healthcare system by forming the eHealth environment, providing an opportunity to enhance the quality of patient care, significantly speed up the work of medical experts, and reduce the cost of medical services. Shifting medical services to digital and remote operation requires substantial computational capabilities. Implementing new computing paradigms is therefore prominent: remote services face new requirements due to increasing data volumes and demand new computing solutions. Computing paradigms such as Cloud, Edge, and Mobile Edge Computing, among others, are used to process the collected medical data, improving the quality of patient healthcare. This paper focuses on computing solutions for medical use cases by offering a comprehensive survey of standardization aspects, use cases, applicable computing paradigms, security limitations, and design considerations for ICT use in medical applications. Finally, it outlines the most critical integration challenges and solutions from the literature.

    Emotional intelligence, empathy, extraversion, alexithymia, and environmentally responsible behavior in student carriers of different MAOA and COMT gene genotypes

    The article studies the association of genotypes of the monoamine oxidase A (MAOA) and catechol-O-methyltransferase (COMT) genes with emotional intelligence and personality traits of young people, such as extraversion-introversion, empathy, and alexithymia. The participants were 100 psychology students. The following methods were used: the Test of Emotional Intelligence (D.V. Lyusin); the Emotional Empathy Questionnaire (A. Mehrabian, N. Epstein); the "Big Five" test; and the Toronto Alexithymia Scale. For statistical processing of the results, we used multivariate analysis of variance (ANOVA) with Tukey's post hoc analysis for unequal sample sizes. The genes of the monoaminergic system, COMT and MAOA, were found to be associated with the general level of emotional intelligence. Women, in general, showed a lower level of emotional intelligence. The Met/Met genotype of the COMT gene is associated with a higher level of emotional intelligence and high extraversion. The Val/Met genotype of the COMT gene in women is associated with low emotional intelligence and low empathy. The Val/Val genotype of the COMT gene in men is associated with extraversion. No associations were found between the genotypes of the MAOA and COMT genes and the level of alexithymia.
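    The ANOVA-with-post-hoc workflow used above can be sketched in a few lines. The group means and sample sizes below are synthetic illustrations, not the study's data, and the scores are invented for the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# three COMT genotype groups with assumed (illustrative) mean EI scores
met_met = rng.normal(110, 10, 40)   # Met/Met: higher EI in the study
val_met = rng.normal(95, 10, 35)
val_val = rng.normal(100, 10, 25)

# one-way ANOVA: does mean EI differ across genotype groups?
f_stat, p_value = stats.f_oneway(met_met, val_met, val_val)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")
# a Tukey HSD post hoc test (e.g. scipy.stats.tukey_hsd) would then
# identify which genotype pairs drive the overall difference
```

    The unequal group sizes mirror why the study needed a Tukey variant suited to unbalanced samples.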

    Logistic approach to intellectual property

    In the age of the economy's digitalization, the importance of intangible assets in the activity of enterprises increases. This is driven by fundamental trends under which the behavior of economic entities and their way of functioning transform into a new operating model of companies, especially in the banking and telecommunications sectors, aimed at increasing cost efficiency and identifying new market opportunities, mainly through methods of analyzing large amounts of data to generate new knowledge and make effective management decisions. Accordingly, under the economy's digitalization, intellectual property plays an increasingly significant role as a backbone asset of enterprises, driving the development of the intellectual property market and the need to formalize its operation and create an efficient market infrastructure. This article discusses the turnover of intellectual property objects in the Russian economy under digitalization, with funds being invested into a specific energy project.

    Recognition of Facial Expressions Based on Information From the Areas of Highest Increase in Luminance Contrast

    It is generally accepted that using the most informative areas of the input image significantly optimizes visual processing. Several authors agree that areas of spatial heterogeneity are the most interesting for the visual system, and that the degree of difference between those areas and their surroundings determines their saliency. The purpose of our study was to test the hypothesis that the most informative areas of an image are those with the largest increase in total luminance contrast, and that information from these areas is used in the process of categorizing facial expressions. Using our own program, developed to imitate the work of second-order visual mechanisms, we created stimuli from initial photographic images of faces with the 6 basic emotions and a neutral expression. These images consisted only of the areas of highest increase in total luminance contrast. Initially, we determined the spatial frequency ranges in which the selected areas contain the most useful information for recognizing each expression. We then compared expression recognition accuracy between images of real faces and those synthesized from the areas of highest contrast increase. The results indicate that recognition of expressions in synthesized images is somewhat worse than in real ones (73% versus 83%). At the same time, the partial loss of information that occurs when real images are replaced with synthesized ones does not disrupt the overall logic of recognition. Possible ways to compensate for the missing information in the synthesized images are suggested.
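    The stimulus-construction idea, keeping only the image areas of highest local luminance contrast, can be approximated with plain NumPy. This is a stand-in for the authors' own second-order-mechanism program, which is not available; the window size and threshold fraction are assumptions.

```python
import numpy as np

def local_contrast_mask(img, win=8, frac=0.5):
    """Keep only pixels in windows whose local RMS luminance contrast
    (standard deviation within the window) reaches frac * max contrast;
    everything else is zeroed out."""
    h, w = img.shape
    contrast = np.zeros_like(img, dtype=float)
    for y in range(0, h - win + 1, win):
        for x in range(0, w - win + 1, win):
            patch = img[y:y + win, x:x + win]
            contrast[y:y + win, x:x + win] = patch.std()  # RMS contrast
    thresh = frac * contrast.max()
    return np.where(contrast >= thresh, img, 0.0)

# toy image: flat background with one textured (high-contrast) block
img = np.full((32, 32), 0.5)
img[8:16, 8:16] = np.random.default_rng(0).random((8, 8))
masked = local_contrast_mask(img)   # only the textured block survives
```

    A face image processed this way would retain high-contrast regions such as eyes and mouth while flattening uniform skin areas, which is roughly the kind of synthesized stimulus the study compares against real faces.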

    Comparison of Machine Learning Techniques Applied to Traffic Prediction of Real Wireless Network

    Today, the amount of traffic is growing inexorably due to the increase in the number of devices on the network. Researchers analyze traffic by identifying sophisticated dependencies, anomalies, and novel traffic patterns to improve system performance. One fast-developing niche in this domain relates to classic and deep Machine Learning techniques, which are expected to improve network operation in highly complex heterogeneous environments. In this work, we first outline existing applications of Machine Learning in the communications domain and then list the most significant challenges and potential solutions in implementing them. Finally, we compare classical methods for predicting traffic on the LTE network Edge, utilizing Linear Regression, Gradient Boosting, Random Forest, Bootstrap Aggregation (Bagging), Huber Regression, Bayesian Regression, and Support Vector Machines (SVM). We develop the corresponding Machine Learning environment based on a public cellular traffic dataset and present a comparison table of quality metrics and execution time for each model. The analysis shows that the SVM method allows much faster training than the other algorithms. Gradient Boosting showed the best prediction quality. Random Forest shows the worst result, since it depends on the number of features, which may be limited. The probabilistic Bayesian Regression method showed slightly worse results than Gradient Boosting, but its training time was shorter. The performance evaluation demonstrated good results for linear models with the Huber loss function, which optimizes the model parameters better. As a standalone contribution, we offer the source code of the analyzed algorithms in Open Access.
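    The quality-versus-training-time table this abstract describes can be sketched with scikit-learn by timing each fit. The synthetic regression data below replaces the paper's LTE dataset, and default hyperparameters are an assumption.

```python
import time
import numpy as np
from sklearn.linear_model import HuberRegressor, BayesianRidge
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.svm import SVR
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.random((400, 3))
y = 4 * X[:, 0] - 2 * X[:, 1] + rng.normal(0, 0.1, 400)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

results = {}
for name, model in [("Huber", HuberRegressor()),
                    ("Bayesian", BayesianRidge()),
                    ("Gradient Boosting", GradientBoostingRegressor(random_state=0)),
                    ("SVM", SVR())]:
    t0 = time.perf_counter()
    model.fit(X_tr, y_tr)
    elapsed = time.perf_counter() - t0
    results[name] = (r2_score(y_te, model.predict(X_te)), elapsed)
    print(f"{name}: R2 = {results[name][0]:.3f}, fit = {elapsed * 1e3:.1f} ms")
```

    On real traffic data the rankings would differ, as the abstract reports; the point of the sketch is the two-axis comparison (a quality metric next to wall-clock fit time) rather than any particular winner.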