Advanced image-based automated security system
As security is a serious concern nowadays, it is important to develop a product that addresses security issues without human intervention. Hence, this work proposes an automatic security system that ensures the security of the premises. Using both emerging technologies and specialized hardware, the safety goals can be achieved and the proposed device developed. It is an IoT-based approach that combines cloud computing, OpenCV, and a web application to build the security system. Using a Raspberry Pi and accompanying software, the authors design an automated security system in which all of the connected electrical items are controlled. The system protects possessions, minimizes break-ins, and helps avoid dangerous situations. A further salient feature is COVID-19 alerting, with alerts generated from a temperature sensor. The system therefore protects the premises not only from unauthorized access but also from entry by a potentially infected person.
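To make the described design concrete, the following minimal sketch pairs OpenCV motion detection with a temperature check, in the spirit of the abstract's Raspberry Pi system. The camera index, the 5% foreground threshold, the 38.0 °C fever cutoff, and the read_temperature_c() stub are illustrative assumptions, not details from the paper.

```python
# Minimal sketch: motion-triggered alert plus a temperature check.
# Thresholds and the sensor stub are illustrative assumptions.
import cv2

FEVER_THRESHOLD_C = 38.0  # illustrative cutoff for a COVID-19 alert

def read_temperature_c() -> float:
    """Stub for a contactless temperature sensor (e.g. an I2C IR sensor)."""
    return 36.6  # replace with a real sensor read on the Raspberry Pi

def run(camera_index: int = 0) -> None:
    cap = cv2.VideoCapture(camera_index)
    subtractor = cv2.createBackgroundSubtractorMOG2()  # models the static scene
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        # A large foreground area suggests an intruder on the premises.
        if cv2.countNonZero(mask) > 0.05 * mask.size:
            print("ALERT: motion detected")
            if read_temperature_c() >= FEVER_THRESHOLD_C:
                print("ALERT: elevated temperature, possible infection")
    cap.release()

if __name__ == "__main__":
    run()
```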
RLIS: resource limited improved security beyond fifth generation networks using deep learning algorithms.
This study explores the feasibility of allocating finite resources in beyond-fifth-generation networks for extended reality applications through enhanced security measures via offloading analysis (RLIS). Resources are quantified using the parameters energy, capacity, and power, each subject to proximity constraints; these constraints are then integrated with activation functions in both multilayer perceptron and long short-term memory models. Furthermore, the system model is developed using vision-based computing, which manages data queues in terms of waiting periods to minimize congestion when transmitting data with limited resources. The main significance of the proposed method is that it utilizes the allocated spectrum of future-generation networks by assigning only the necessary resources, so that excessive resource usage across all users is avoided. A further advantage is that it secures networks operating beyond 5G, where a growing number of users will attempt to share the allocated resources and therefore require strong security guarantees.
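As a rough illustration of the MLP + LSTM pairing the abstract describes, the sketch below (PyTorch) scores a candidate allocation from a snapshot of energy, capacity, and power features together with their recent history. All dimensions, the feature layout, and the sigmoid scoring head are assumptions for illustration only, not the paper's architecture.

```python
# Minimal sketch of an MLP + LSTM allocation scorer; all shapes are assumptions.
import torch
import torch.nn as nn

class AllocationScorer(nn.Module):
    def __init__(self, n_features: int = 3, hidden: int = 32):
        super().__init__()
        # MLP branch: scores a single snapshot of (energy, capacity, power).
        self.mlp = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # LSTM branch: summarizes the recent history of the same features,
        # e.g. queue waiting periods observed over time.
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(2 * hidden, 1)  # allocation score in [0, 1]

    def forward(self, snapshot, history):
        # snapshot: (batch, n_features); history: (batch, time, n_features)
        _, (h_n, _) = self.lstm(history)
        joint = torch.cat([self.mlp(snapshot), h_n[-1]], dim=-1)
        return torch.sigmoid(self.head(joint))

scorer = AllocationScorer()
score = scorer(torch.rand(4, 3), torch.rand(4, 10, 3))
print(score.shape)  # torch.Size([4, 1])
```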
A Systematic Literature Review on Resilient Digital Transformation, Examining How Organizations Sustain Digital Capabilities
In an era marked by relentless technological shifts and market volatility, digital transformation (DT) alone is insufficient. Organizations must develop Resilient Digital Transformation (RDT)—the organizational capabilities required to sustain DT over a medium-term horizon—to navigate these challenges effectively. This study primarily aims to propose a guideline for fostering RDT. Drawing on the PRISMA guidelines and a systematic review of 77 peer-reviewed papers, this study identifies and synthesizes key targets and drivers across three core pillars: Technology, Organization, and External Environment. These elements collectively foster organizational resilience. Specifically, this study highlights how adaptability, innovation, and scalability form the technological underpinnings of sustained digital maturity; meanwhile, effective governance frameworks, ongoing workforce development, and supportive cultures promote organizational agility. Externally, proactive stakeholder engagement, responsiveness to market shifts, and robust regulatory compliance help ensure the long-term viability of digital initiatives. The findings contribute to the existing literature by offering an integrative framework illustrating how organizations can sense, seize, and reconfigure resources to embed resilience across strategic and operational processes. By moving beyond static maturity models, the framework stresses the continuous nature of digital transformation, offering both academics and practitioners a structured approach to sustaining competitive advantage amid incessant disruptions.
Machine learning methods for service placement: a systematic review
With the growth of real-time and latency-sensitive applications in the Internet of Everything (IoE), service placement cannot rely on cloud computing alone. In response, several computing paradigms, such as Mobile Edge Computing (MEC), Ultra-dense Edge Computing (UDEC), and Fog Computing (FC), have emerged. These paradigms aim to bring computing resources closer to the end user, reducing delay and wasted backhaul bandwidth. Major challenges of these new paradigms are the limitation of edge resources and the dependencies between different service parts. Some solutions, such as microservice architecture, allow different parts of an application to be processed simultaneously. However, due to the ever-increasing number of devices and incoming tasks, the service placement problem can no longer be solved by rule-based deterministic solutions. In such a dynamic and complex environment, many factors can influence the solution. Optimization and Machine Learning (ML) are the two tools most often applied to service placement, and both typically rely on a cost function, usually defined as the difference between predicted and actual values. In simpler terms, ML aims to minimize the gap between prediction and reality: instead of relying on explicit rules, it predicts from historical data. Due to the NP-hard nature of the service placement problem, classical optimization methods are not sufficient; instead, metaheuristic and heuristic methods are widely used. In addition, the ever-changing big data in IoE environments requires specialized ML methods. In this systematic review, we present a taxonomy of ML methods for the service placement problem. Our findings show that 96% of applications use a distributed microservice architecture, 51% of the studies are based on on-demand resource estimation methods, and 81% are multi-objective. This article also outlines open questions and future research trends. Our literature review shows that one of the most important trends in ML for service placement is reinforcement learning, with a 56% share of the research.
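Since the review singles out reinforcement learning as the dominant trend, a minimal tabular Q-learning sketch of service placement may help fix ideas: an agent assigns each incoming service to an edge node, and the reward penalizes load (a latency proxy) and overload. The toy environment, capacities, and hyperparameters below are assumptions, not a model from any surveyed paper.

```python
# Toy Q-learning for service placement on a handful of edge nodes.
import random

N_NODES = 3
CAPACITY = 5
EPISODES = 2000
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# State: tuple of per-node load; action: index of the node hosting the service.
Q: dict = {}

def q(state, action):
    return Q.get((state, action), 0.0)

def reward(state, action):
    load = state[action]
    # Higher load stands in for latency; exceeding capacity is heavily penalized.
    return -1.0 * load - (10.0 if load >= CAPACITY else 0.0)

for _ in range(EPISODES):
    state = (0,) * N_NODES
    for _ in range(10):  # ten service requests per episode
        if random.random() < EPSILON:
            action = random.randrange(N_NODES)  # explore
        else:
            action = max(range(N_NODES), key=lambda a: q(state, a))  # exploit
        r = reward(state, action)
        next_state = tuple(l + (1 if i == action else 0) for i, l in enumerate(state))
        best_next = max(q(next_state, a) for a in range(N_NODES))
        Q[(state, action)] = q(state, action) + ALPHA * (r + GAMMA * best_next - q(state, action))
        state = next_state

print("learned placement for an empty cluster:",
      max(range(N_NODES), key=lambda a: q((0,) * N_NODES, a)))
```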
Intravenous Formulation of HET0016 Decreased Human Glioblastoma Growth and Implicated Survival Benefit in Rat Xenograft Models
Glioblastoma (GBM) is a hypervascular primary brain tumor with poor prognosis. HET0016 is a selective CYP450 inhibitor that has been shown to inhibit angiogenesis and tumor growth. To explore novel treatments, we generated an improved intravenous (IV) formulation of HET0016 with HPssCD and tested it in animal models of human and syngeneic GBM. Administration of a single IV dose resulted in 7-fold higher levels of HET0016 in plasma and 3.6-fold higher levels in tumor at 60 min than the IP route. IV treatment with HPssCD-HET0016 decreased tumor growth and altered vascular kinetics in both early and late treatment groups (p < 0.05). Similar growth inhibition was observed in syngeneic GL261 GBM (p < 0.05). Survival studies using patient-derived xenografts of GBM811 showed prolonged survival to 26 weeks in animals treated with focal radiation in combination with HET0016 and TMZ (p < 0.05). We observed reduced expression of markers of cell proliferation (Ki-67) and decreased neovascularization (laminin and alphaSMA), in addition to reduced inflammation and angiogenesis markers, in the treatment group (p < 0.05). Our results indicate that HPssCD-HET0016 is effective in inhibiting tumor growth by decreasing proliferation and neovascularization. Furthermore, HPssCD-HET0016 significantly prolonged survival in the PDX GBM811 model.
A multimodal screening framework for C-19 using deep learning-inspired data fusion.
In recent times, there has been a notable rise in the use of Internet of Medical Things (IoMT) frameworks, particularly those based on edge computing, to enhance remote monitoring in healthcare applications. Most existing models in this field have developed temperature screening methods using RCNN, a face temperature encoder (FTE), and a combination of data from wearable sensors for predicting respiratory rate (RR) and monitoring blood pressure. These methods aim to facilitate remote screening and monitoring of Severe Acute Respiratory Syndrome Coronavirus (SARS-CoV) and COVID-19. However, these models demand considerable computing resources and are not suitable for lightweight environments. We propose a multimodal screening framework that leverages deep learning-inspired data fusion models to enhance screening results. A Variation Encoder (VEN) is proposed to measure skin temperature using Regions of Interest (RoI) identified by YOLO. Subsequently, a multi-data fusion model integrates electronic-record features with data from wearable human sensors. To optimize computational efficiency, a data reduction mechanism eliminates unnecessary features. Furthermore, we employ a contingent probability method to estimate distinct feature weights for each cluster, deepening our understanding of variations in thermal and sensory data when predicting abnormal COVID-19 instances. Simulation results on our lab dataset demonstrate a precision of 95.2%, surpassing state-of-the-art models thanks to the design of the multimodal feature fusion model, the weight prediction factor, and the feature selection model.
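To illustrate the fusion step in spirit, the sketch below concatenates toy thermal RoI features with wearable-sensor features, clusters the fused vectors, and scales each modality by a per-cluster weight before classification. The feature layout, the variance-based weighting (standing in for the paper's contingent probability method), and the classifier choice are all assumptions.

```python
# Toy late fusion of thermal RoI features and wearable-sensor features
# with per-cluster modality weights; data and weighting are placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
thermal = rng.normal(36.8, 0.6, size=(200, 4))   # e.g. RoI temperature statistics
wearable = rng.normal(0.0, 1.0, size=(200, 6))   # e.g. RR / blood-pressure features
labels = (thermal[:, 0] > 37.5).astype(int)      # toy "abnormal" target

fused = np.hstack([thermal, wearable])
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(fused)

# Per-cluster modality weights from within-cluster variance, a simple
# stand-in for estimating distinct feature weights for each cluster.
weighted = fused.copy()
for c in np.unique(clusters):
    idx = clusters == c
    w_thermal = 1.0 / (thermal[idx].var() + 1e-6)
    w_wear = 1.0 / (wearable[idx].var() + 1e-6)
    weighted[idx, :4] *= w_thermal / (w_thermal + w_wear)
    weighted[idx, 4:] *= w_wear / (w_thermal + w_wear)

clf = LogisticRegression(max_iter=1000).fit(weighted, labels)
print("training accuracy:", clf.score(weighted, labels))
```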
Growth performance of planted population of Pinus roxburghii in central Nepal
Background
Climate change has altered various ecosystem processes, including those of forest ecosystems, in the Himalayan region. Although the high-mountain natural forests of the region, including treelines, are mainly reported to be temperature sensitive, temperature-related water stress is an important growth-limiting factor in the middle-elevation mountains. Moreover, there is very little evidence on the growth performance of planted forests under a changing climate in the Himalayan region. A dendrochronological study was carried out to verify and record the impact of warming temperatures on tree growth, using tree cores of Pinus roxburghii from Batase village of Dhulikhel in Central Nepal, which lies in a sub-tropical climatic zone. In total, 29 tree cores from 25 trees of P. roxburghii were measured and analyzed.
Results
A 44-year-long tree-ring width chronology was constructed from the cores. The results showed that the radial growth of P. roxburghii was positively, although not significantly, correlated with pre-monsoon (April) rainfall and negatively correlated with summer rainfall. The strongest negative correlation was found between radial growth and June rainfall, followed by January rainfall. Radial growth also showed a significant positive correlation with the previous year's August mean and maximum temperatures, and a significant negative correlation with the maximum temperature (Tmax) of May and of the spring season (March-May), indicating moisture as the key factor for radial growth. Despite the overall positive trend in basal area increment (BAI), we found an abrupt decline between 1995 and 2005 AD.
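The correlation analysis behind these results can be sketched as Pearson correlations between the ring-width chronology and monthly climate series; the arrays below are random placeholders for the Batase chronology and station records, not the study's data.

```python
# Sketch: Pearson correlation of a ring-width index with monthly climate series.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
ring_width_index = rng.normal(1.0, 0.2, size=44)   # 44-year chronology (placeholder)
may_tmax = rng.normal(28.0, 1.5, size=44)          # May maximum temperature (placeholder)
june_rain = rng.normal(250.0, 60.0, size=44)       # June rainfall in mm (placeholder)

for name, series in [("May Tmax", may_tmax), ("June rainfall", june_rain)]:
    r, p = pearsonr(ring_width_index, series)
    print(f"{name}: r = {r:.2f}, p = {p:.3f}")
```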
Conclusion
The results indicated that the planted chir pine population is moisture sensitive, and the negative impact of higher temperatures during the early growth season (March-May) on radial growth was clearly visible. We emphasize that the forest would experience further moisture stress if the warming trend continues. The unusual decreasing BAI trend might be associated with forest management practices, including resin collection, and other disturbances. Our results showed that the planted pine forest stand is sub-healthy due to major human intervention at times. Further exploration of growth-climate responses across different climatic zones and management regimes is important to improve our understanding of the growth performance of mid-hill pine forests in Nepal.
Topic Modeling-based Text Classification regarding Islamophobia using Word Embedding and Transformer Techniques
Islamophobia is a rising area of concern in the current era, in which Muslims face discrimination and negative perceptions of their religion, Islam. Islamophobia is a type of racism practiced by individuals, groups, and organizations worldwide. Moreover, the ease of access to social media platforms and their growing usage have contributed to the spread of hate speech, false information, and negative opinions about Islam. In this research study, we focus on detecting Islamophobic textual content shared on various social media platforms. We explore the state-of-the-art techniques in text data mining and Natural Language Processing (NLP). The topic modelling algorithm Latent Dirichlet Allocation (LDA) is used to find the top topics. Word embedding approaches such as Word2Vec and Global Vectors for word representation (GloVe) are then used as feature extraction techniques. For text classification, we utilize transformer-based Deep Learning algorithms, namely Bidirectional Encoder Representations from Transformers (BERT) and the Generative Pre-trained Transformer (GPT). For comparison, we conduct an extensive empirical analysis of Machine Learning and Deep Learning algorithms using conventional textual features such as Term Frequency-Inverse Document Frequency (TF-IDF), N-grams, and Bag of Words (BoW). The empirical results, evaluated using standard performance measures, show that the proposed approach effectively detects textual content related to Islamophobia. On the study corpus, the Support Vector Machine (SVM) performed best among the Machine Learning models, with an F1 score of 91%. The transformer-based NLP models and the Deep Learning model, a Convolutional Neural Network (CNN) combined with GloVe, performed best among all techniques except SVM with BoW: GPT, SVM with BoW, and BERT yielded the best F1 scores of 92%, 92%, and 91.9%, respectively, while CNN performed slightly worse with an F1 score of 91%.
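As a concrete baseline in the spirit of the paper's strongest classical result (SVM with BoW/TF-IDF-style features), the following sketch builds a scikit-learn pipeline; the toy texts and labels are placeholders, since the study's corpus is not reproduced here.

```python
# Sketch of a TF-IDF + linear SVM text classifier; data are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.metrics import f1_score

texts = ["example islamophobic post", "neutral news about a mosque",
         "hateful slur against muslims", "friendly interfaith event"]
labels = [1, 0, 1, 0]  # 1 = Islamophobic content, 0 = neutral

pipeline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # unigrams + bigrams, echoing the N-gram features
    LinearSVC(),
)
pipeline.fit(texts, labels)
preds = pipeline.predict(texts)
print("F1 on the toy training set:", f1_score(labels, preds))
```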
Targeting bone marrow to potentiate the anti-tumor effect of a tyrosine kinase inhibitor in a preclinical rat model of human glioblastoma
Antiangiogenic agents have caused a paradoxical increase in pro-growth and pro-angiogenic factors and promoted tumor growth in glioblastoma (GBM). It is hypothesized that this paradoxical increase in pro-angiogenic factors mobilizes Bone Marrow Derived Cells (BMDCs) to the treated tumor and causes refractory tumor growth. The purpose of these studies was to determine whether whole body irradiation (WBIR) or a CXCR4 antagonist (AMD3100) would potentiate the effect of vatalanib (a VEGFR2 tyrosine kinase inhibitor) and prevent the refractory growth of GBM. Human GBM was grown orthotopically in three groups of rats (control, pretreated with WBIR, and pretreated with AMD3100) randomly assigned to vehicle or vatalanib treatment for 2 weeks. All animals then underwent Magnetic Resonance Imaging (MRI) followed by euthanasia and histochemical analysis. Tumor volume and different vascular parameters (plasma volume (vp), forward transfer constant (Ktrans), backflow constant (kep), and extravascular extracellular space volume (ve)) were determined from MRI. In the control group, vatalanib treatment increased tumor growth significantly compared with vehicle treatment; however, by preventing the mobilization of BMDCs and the CXCR4-SDF-1 interaction using WBIR and AMD3100, respectively, this paradoxical tumor growth was controlled. Pretreatment with WBIR or AMD3100 also decreased tumor cell migration, despite the fact that AMD3100 increased the accumulation of M1 and M2 macrophages in the tumors. Vatalanib also increased Ktrans and ve in control animals, but both vascular parameters decreased when the animals were pretreated with WBIR or AMD3100. In conclusion, depleting bone marrow cells or blocking the CXCR4 interaction can potentiate the effect of vatalanib.
