1,644 research outputs found

    Enhancing cloud security through the integration of deep learning and data mining techniques: A comprehensive review

    Cloud computing is crucial in all areas of data storage and online service delivery. It adds various benefits to conventional storage and sharing systems, such as simple access, on-demand storage, scalability, and cost savings. Its rapidly expanding technologies may also offer several benefits for protecting the Internet of Things (IoT) and cyber-physical systems (CPS), which support people in their everyday lives, from various cyber threats. Because malware is on the rise and there is no well-established strategy for detecting it, leveraging the cloud environment to identify malware might be a viable way forward. To avoid detection, new kinds of malware employ complex obfuscation and packing methods, which makes it very hard to identify sophisticated malware with typical detection methods. The article presents a detailed assessment of cloud-based malware detection technologies and offers insight into how the cloud can be used to protect the Internet of Things and critical infrastructure from intrusions. This study examines the benefits and drawbacks of cloud environments for malware detection, presents a methodology for detecting cloud-based malware using deep learning and data mining, and highlights new research on the propagation of existing malware. Finally, similarities and differences across detection approaches are exposed, along with the flaws of each detection technique. The findings of this work may be used to highlight the open issues to be tackled in future malware research.
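    As a rough, illustrative sketch of the kind of deep-learning detection pipeline this review surveys (not the authors' own method), the following Python snippet trains a small neural network on numeric features mined from samples submitted to a cloud detector; the feature matrix, labels, and layer sizes are hypothetical placeholders.

        import numpy as np
        from tensorflow import keras
        from tensorflow.keras import layers

        # Hypothetical data: one row per submitted sample, columns are mined
        # features (e.g. API-call counts); 1 = malware, 0 = benign.
        X = np.random.rand(1000, 64).astype("float32")
        y = np.random.randint(0, 2, size=1000)

        # Small fully connected classifier standing in for the deep model.
        model = keras.Sequential([
            layers.Input(shape=(64,)),
            layers.Dense(128, activation="relu"),
            layers.Dropout(0.3),
            layers.Dense(64, activation="relu"),
            layers.Dense(1, activation="sigmoid"),
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=["accuracy"])
        model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2)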

    Effective Secure Data Agreement Approach-based cloud storage for a healthcare organization

    In recent years, there has been significant development in the field of computing, as vast resources must be handled through cloud computing while various cloud services are performed. The cloud helps manage resources dynamically based on user demand and delivers them to multiple users in healthcare organizations. The cloud mainly helps to reduce operating cost and to enhance data scalability and flexibility. The main challenges faced by existing technologies integrated with the cloud, namely managing the data and coping with data heterogeneity, need to be solved. Mitigating these challenges makes the services more stable with respect to data, provided the healthcare organization can identify malware. Developed countries increasingly consume services through the cloud, which therefore requires stronger security. In this work, a secure data agreement approach combined with feature extraction in cloud computing is proposed for healthcare, to examine the data and help the user parties make effective decisions. The proposed method consists of two components. The first component is a modified data formulation algorithm, used to identify the relationship among variables, i.e., data correlation, and to validate the data against trained data; it helps achieve data reduction and data scaling. In the second component, feature selection is used to validate the model through subset selection, determining model fitness based on the data. More samples of different Android applications are needed to evaluate the framework with measures such as data correctness and the F-measure. As far as feature selection is concerned, this study focuses on Chi-square, gain ratio, information gain, logistic regression analysis, OneR, and PCA.
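    To make the feature-selection component described above concrete, here is a minimal sketch (using scikit-learn, which the abstract does not name, so treat it as an assumption) of applying Chi-square, an information-gain-style criterion, and PCA to a hypothetical matrix of Android application features.

        import numpy as np
        from sklearn.feature_selection import SelectKBest, chi2, mutual_info_classif
        from sklearn.decomposition import PCA

        # Hypothetical non-negative feature matrix (e.g. permission/API counts
        # per Android app) and malware/benign labels.
        X = np.random.randint(0, 10, size=(500, 40)).astype(float)
        y = np.random.randint(0, 2, size=500)

        # Chi-square keeps the k features most dependent on the class label.
        X_chi2 = SelectKBest(chi2, k=10).fit_transform(X, y)

        # Mutual information approximates the information-gain criterion.
        X_ig = SelectKBest(mutual_info_classif, k=10).fit_transform(X, y)

        # PCA projects onto uncorrelated components rather than selecting
        # original features.
        X_pca = PCA(n_components=10).fit_transform(X)

        print(X_chi2.shape, X_ig.shape, X_pca.shape)  # (500, 10) each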

    DATDroid: Dynamic Analysis Technique in Android Malware Detection

    The Android system has become a target for malware developers in recent years due to its huge global market. The emergence of 5G and its limited protocols pose a great challenge to security on Android. Hence, researchers have adopted various techniques to ensure high security on Android devices. Three types of analysis, namely static, dynamic, and hybrid analysis, are used to detect and analyze malicious Android applications. Due to the evolving nature of malware, it is very challenging for existing techniques to detect and analyze it efficiently and accurately. This paper proposes a dynamic analysis technique for Android malware detection called DATDroid. The proposed technique consists of three phases: feature extraction, feature selection, and classification. A total of five features are extracted, namely system calls, errors and execution time of system call processes, CPU usage, memory usage, and network packets. During classification, 70% of the dataset was allocated to training and 30% to testing using machine learning algorithms. Our experimental results achieved an overall accuracy of 91.7% with lower false positive rates compared to the benchmarked method. DATDroid also achieved higher precision and recall rates of 93.1% and 90.0%, respectively. Hence, our proposed technique has proven able to classify malware more accurately and to significantly reduce the misclassification of malware applications as benign.
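    The classification phase described above (a 70/30 split over dynamically extracted features) can be sketched roughly as follows; the random forest classifier and the synthetic five-column feature matrix are illustrative assumptions, not DATDroid's exact algorithm.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import accuracy_score, precision_score, recall_score

        # Hypothetical dataset: each row aggregates the five dynamic features
        # (system calls, call errors/time, CPU, memory, network packets) for
        # one app run; 1 = malware, 0 = benign.
        X = np.random.rand(800, 5)
        y = np.random.randint(0, 2, size=800)

        # 70% training / 30% testing, matching the evaluation setup above.
        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.3, random_state=42, stratify=y)

        clf = RandomForestClassifier(n_estimators=100, random_state=42)
        clf.fit(X_train, y_train)
        pred = clf.predict(X_test)

        print("accuracy :", accuracy_score(y_test, pred))
        print("precision:", precision_score(y_test, pred))
        print("recall   :", recall_score(y_test, pred))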

    Deep Learning for Mobile Multimedia: A Survey

    Deep Learning (DL) has become a crucial technology for multimedia computing. It offers a powerful instrument to automatically produce high-level abstractions of complex multimedia data, which can be exploited in a number of applications, including object detection and recognition, speech-to-text, media retrieval, multimodal data analysis, and so on. The availability of affordable large-scale parallel processing architectures, and the sharing of effective open-source code implementing the basic learning algorithms, have caused a rapid diffusion of DL methodologies, bringing a number of new technologies and applications that outperform, in most cases, traditional machine learning technologies. In recent years, the possibility of implementing DL technologies on mobile devices has attracted significant attention. Thanks to this technology, portable devices may become smart objects capable of learning and acting. The path toward these exciting future scenarios, however, entails a number of important research challenges. DL architectures and algorithms are not easily adapted to the storage and computation resources of a mobile device. Therefore, there is a need for new generations of mobile processors and chipsets, small-footprint learning and inference algorithms, new models of collaborative and distributed processing, and a number of other fundamental building blocks. This survey reports the state of the art in this exciting research area, looking back at the evolution of neural networks and arriving at the most recent results in terms of methodologies, technologies, and applications for mobile environments.
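    As a sketch of the small-footprint inference direction the survey highlights, the snippet below builds a compact Keras model with depthwise-separable convolutions and converts it to TensorFlow Lite for on-device execution; the input size and layer widths are illustrative assumptions.

        import tensorflow as tf
        from tensorflow.keras import layers

        # Compact classifier using depthwise-separable convolutions, a common
        # way to cut parameters and compute for mobile hardware.
        model = tf.keras.Sequential([
            layers.Input(shape=(96, 96, 3)),
            layers.SeparableConv2D(16, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.SeparableConv2D(32, 3, activation="relu"),
            layers.GlobalAveragePooling2D(),
            layers.Dense(10, activation="softmax"),
        ])

        # Convert the (trained) model to TensorFlow Lite for mobile deployment.
        converter = tf.lite.TFLiteConverter.from_keras_model(model)
        converter.optimizations = [tf.lite.Optimize.DEFAULT]  # weight quantization
        with open("mobile_model.tflite", "wb") as f:
            f.write(converter.convert())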

    Overcoming Language Dichotomies: Toward Effective Program Comprehension for Mobile App Development

    Mobile devices and platforms have become an established target for modern software developers due to performant hardware and a large and growing user base numbering in the billions. Despite their popularity, the software development process for mobile apps comes with a set of unique, domain-specific challenges rooted in program comprehension. Many of these challenges stem from developer difficulties in reasoning about different representations of a program, a phenomenon we define as a "language dichotomy". In this paper, we reflect upon the various language dichotomies that contribute to open problems in program comprehension and development for mobile apps. Furthermore, to help guide the research community towards effective solutions for these problems, we provide a roadmap of directions for future work. (Invited keynote paper for the 26th IEEE/ACM International Conference on Program Comprehension, ICPC'18.)

    An efficient human activity recognition model based on deep learning approaches

    Human Activity Recognition (HAR) has gained traction in recent years in diverse areas such as observation, entertainment, teaching, and healthcare, using wearable and smartphone sensors. Such environments and systems necessitate and subsume activity recognition, which aims to recognize the actions, characteristics, and goals of one or more individuals from a temporal series of observations streamed from one or more sensors. Different models developed for HAR have been described in the literature. Deep learning systems and algorithms have been shown to perform well in HAR in recent years, but these algorithms require substantial computation to be deployed efficiently in applications. This paper presents a lightweight, low-computation deep learning model for HAR that is suitable for real-time applications. A generic HAR framework for smartphone sensor data is proposed, based on Long Short-Term Memory (LSTM) networks for the time-series domain and a standard Convolutional Neural Network (CNN) used for classification. The findings demonstrate that the proposed model surpasses many of the deployed deep learning and machine learning techniques.
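    A minimal sketch of an LSTM-plus-CNN architecture for windows of smartphone sensor data, in the spirit of the framework described above, is shown below; the window length, channel count, class count, and layer sizes are illustrative assumptions rather than the paper's exact configuration.

        from tensorflow import keras
        from tensorflow.keras import layers

        NUM_CLASSES = 6   # e.g. walking, sitting, standing, ... (assumed)
        WINDOW = 128      # time steps per sensor window (assumed)
        CHANNELS = 6      # accelerometer + gyroscope axes (assumed)

        model = keras.Sequential([
            layers.Input(shape=(WINDOW, CHANNELS)),
            # Convolutional feature extraction over the raw time series.
            layers.Conv1D(64, kernel_size=5, activation="relu"),
            layers.MaxPooling1D(pool_size=2),
            # LSTM captures temporal dependencies in the extracted features.
            layers.LSTM(64),
            layers.Dropout(0.3),
            layers.Dense(NUM_CLASSES, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        model.summary()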

    Next Generation Cloud Computing: New Trends and Research Directions

    The landscape of cloud computing has significantly changed over the last decade. Not only have more providers and service offerings crowded the space, but cloud infrastructure that was traditionally limited to single-provider data centers is now evolving. In this paper, we first discuss the changing cloud infrastructure and consider the use of infrastructure from multiple providers and the benefit of decentralising computing away from data centers. These trends have resulted in the need for a variety of new computing architectures that will be offered by future cloud infrastructure. These architectures are anticipated to impact areas such as connecting people and devices, data-intensive computing, the service space, and self-learning systems. Finally, we lay out a roadmap of challenges that will need to be addressed for realising the potential of next generation cloud systems. (Accepted to Future Generation Computer Systems, 07 September 201.)