
    GRASE: Granulometry Analysis with Semi Eager Classifier to Detect Malware

    Technological advances in communication, culminating in 5G, are connecting not only people but also 'devices' to the internet, a development known as the Web of Things (WoT). The community benefits from this large-scale network, which allows physical devices to be monitored and controlled, but this benefit often comes at the cost of security: MALicious softWARE (MalWare) developers try to invade the network, for whom these devices act as a 'backdoor' offering easy entry. To keep intruders out of the network, identifying malware and its variants is of great significance for cyberspace. Traditional static and dynamic malware detection methods work, but fall short against newer techniques used by malware developers, such as obfuscation, polymorphism and encryption. Machine learning approaches that train a classifier on handcrafted features are likewise not robust against these techniques and demand substantial feature-engineering effort. This paper proposes malware classification using a visualization methodology in which the disassembled malware code is transformed into grayscale images. It demonstrates the efficacy of granulometry texture analysis for improving malware classification. Furthermore, a Semi Eager (SemiE) classifier, which combines eager and lazy learning, is used to obtain robust classification of malware families. The experimental outcome is promising: the proposed technique requires less training time to learn the semantics of higher-level malicious behaviours, and identifying malware (the testing phase) is also faster. The benchmark malimg and Microsoft Malware Classification Challenge (BIG-2015) datasets are used to analyse the performance of the system, achieving overall average classification accuracies of 99.03% and 99.11%, respectively.
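
    The abstract does not spell out implementation details; as a hedged, minimal sketch of the general visualization-plus-granulometry idea (Python, with an invented file path and structuring-element sizes, not the paper's actual pipeline), a raw binary can be reshaped into a grayscale image and summarized by a granulometric pattern spectrum:

    import numpy as np
    from scipy import ndimage

    def bytes_to_grayscale(data: bytes, width: int = 256) -> np.ndarray:
        """Reshape a raw byte stream into a 2-D grayscale image (one byte per pixel)."""
        arr = np.frombuffer(data, dtype=np.uint8)
        height = len(arr) // width
        return arr[: height * width].reshape(height, width)

    def pattern_spectrum(image: np.ndarray, max_size: int = 8) -> np.ndarray:
        """Granulometry: grey-scale openings with growing square structuring elements;
        the drop in total intensity between successive openings is a texture descriptor."""
        volumes = [
            ndimage.grey_opening(image, size=(s, s)).sum(dtype=np.int64)
            for s in range(1, max_size + 1)
        ]
        return -np.diff(np.asarray(volumes, dtype=np.float64))

    # Hypothetical usage: turn one sample into a feature vector for a classifier.
    with open("sample.bin", "rb") as f:  # placeholder path, not from the paper
        features = pattern_spectrum(bytes_to_grayscale(f.read()))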

    Securing the Internet of Things: A Study on Machine Learning-Based Solutions for IoT Security and Privacy Challenges

    The Internet of Things (IoT) is a rapidly growing technology that connects and integrates billions of smart devices, generating vast volumes of data and impacting various aspects of daily life and industrial systems. However, the inherent characteristics of IoT devices, including limited battery life, universal connectivity, resource-constrained design, and mobility, make them highly vulnerable to cybersecurity attacks, which are increasing at an alarming rate. As a result, IoT security and privacy have gained significant research attention, with a particular focus on developing anomaly detection systems. In recent years, machine learning (ML) has made remarkable progress, evolving from a lab novelty to a powerful tool in critical applications. ML has been proposed as a promising solution for addressing IoT security and privacy challenges. In this article, we conducted a study of the existing security and privacy challenges in the IoT environment. Subsequently, we present the latest ML-based models and solutions to address these challenges, summarizing them in a table that highlights the key parameters of each proposed model. Additionally, we thoroughly studied available datasets related to IoT technology. Through this article, readers will gain a detailed understanding of IoT architecture, security attacks, and countermeasures using ML techniques, utilizing available datasets. We also discuss future research directions for ML-based IoT security and privacy. Our aim is to provide valuable insights into the current state of research in this field and contribute to the advancement of IoT security and privacy
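
    As a generic illustration of the kind of ML-based anomaly detection the survey discusses (a hedged sketch with synthetic per-flow features, not a model proposed in the article), an unsupervised Isolation Forest can flag unusual IoT traffic:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Synthetic per-flow features: packet count, mean packet size, flow duration.
    # In practice these would come from one of the IoT datasets the survey catalogues.
    rng = np.random.default_rng(0)
    benign = rng.normal(loc=[100, 500, 10], scale=[10, 50, 2], size=(500, 3))
    attack = rng.normal(loc=[1200, 60, 0.5], scale=[100, 10, 0.1], size=(10, 3))

    detector = IsolationForest(contamination=0.02, random_state=0).fit(benign)
    labels = detector.predict(np.vstack([benign[:5], attack[:5]]))  # +1 normal, -1 anomalous
    print(labels)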

    Data-Driven Evaluation of In-Vehicle Information Systems

    Today’s In-Vehicle Information Systems (IVISs) are feature-rich systems that provide the driver with numerous options for entertainment, information, comfort, and communication. Drivers can stream their favorite songs, read reviews of nearby restaurants, or change the ambient lighting to their liking. To do so, they interact with large center stack touchscreens that have become the main interface between the driver and IVISs. To interact with these systems, drivers must take their eyes off the road, which can impair their driving performance. This makes IVIS evaluation critical not only to meet customer needs but also to ensure road safety. The growing number of features, the distraction caused by large touchscreens, and the impact of driving automation on driver behavior pose significant challenges for the design and evaluation of IVISs. Traditionally, IVISs are evaluated qualitatively or through small-scale user studies using driving simulators. However, these methods do not scale to the growing number of features and the variety of driving scenarios that influence driver interaction behavior. We argue that data-driven methods can be a viable solution to these challenges and can assist automotive User Experience (UX) experts in evaluating IVISs. Therefore, we need to understand how data-driven methods can facilitate the design and evaluation of IVISs, how large amounts of usage data need to be visualized, and how drivers allocate their visual attention when interacting with center stack touchscreens. In Part I, we present the results of two empirical studies and create a comprehensive understanding of the role that data-driven methods currently play in the automotive UX design process. We found that automotive UX experts face two main conflicts: First, results from qualitative or small-scale empirical studies are often not valued in the decision-making process. Second, UX experts often do not have access to customer data and lack the means and tools to analyze it appropriately. As a result, design decisions are often not user-centered and are based on subjective judgments rather than evidence-based customer insights. Our results show that automotive UX experts need data-driven methods that leverage large amounts of telematics data collected from customer vehicles. They need tools that help them visualize and analyze customer usage data, and computational methods to automatically evaluate IVIS designs. In Part II, we present ICEBOAT, an interactive user behavior analysis tool for automotive user interfaces. ICEBOAT processes interaction data, driving data, and glance data collected over-the-air from customer vehicles and visualizes them at different levels of granularity. Leveraging our multi-level user behavior analysis framework, it enables UX experts to effectively and efficiently evaluate driver interactions with touchscreen-based IVISs with respect to performance- and safety-related metrics. In Part III, we investigate drivers’ multitasking behavior and visual attention allocation when interacting with center stack touchscreens while driving. We present the first naturalistic driving study to assess drivers’ tactical and operational self-regulation with center stack touchscreens. Our results show significant differences in drivers’ interaction and glance behavior in response to different levels of driving automation, vehicle speed, and road curvature.
During automated driving, drivers perform more interactions per touchscreen sequence and spend more time looking at the center stack touchscreen. These results emphasize the importance of context-dependent distraction assessment of driver interactions with IVISs. Motivated by this, we present a machine learning-based approach to predict and explain the visual demand of in-vehicle touchscreen interactions based on customer data. By predicting the visual demand of yet unseen touchscreen interactions, our method lays the foundation for automated, data-driven evaluation of early-stage IVIS prototypes. The local and global explanations provide additional insights into how design artifacts and driving context affect drivers’ glance behavior. Overall, this thesis identifies current shortcomings in the evaluation of IVISs and proposes novel solutions based on visual analytics and statistical and computational modeling that generate insights into driver interaction behavior and assist UX experts in making user-centered design decisions.
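
    The prediction-and-explanation step can be pictured with a hedged sketch (synthetic data and invented feature names, not the thesis's actual model or feature set): a gradient-boosted regressor predicts a glance-time proxy from interaction and driving-context features, and permutation importance serves as a simple global explanation.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.inspection import permutation_importance

    # Hypothetical features per touchscreen sequence:
    # number of taps, UI element size, vehicle speed, road curvature.
    rng = np.random.default_rng(42)
    X = rng.uniform(size=(1000, 4))
    # Synthetic stand-in for total glance time onto the center stack touchscreen.
    y = 2.0 * X[:, 0] + 1.5 * X[:, 2] + 0.5 * X[:, 3] + rng.normal(scale=0.1, size=1000)

    model = GradientBoostingRegressor(random_state=0).fit(X, y)

    # Global explanation: which features drive the predicted visual demand overall.
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for name, score in zip(["n_taps", "element_size", "speed", "curvature"],
                           result.importances_mean):
        print(f"{name}: {score:.3f}")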

    Evaluation Methodologies in Software Protection Research

    Man-at-the-end (MATE) attackers have full control over the system on which the attacked software runs, and try to break the confidentiality or integrity of assets embedded in the software. Both companies and malware authors want to prevent such attacks. This has driven an arms race between attackers and defenders, resulting in a plethora of different protection and analysis methods. However, it remains difficult to measure the strength of protections, because MATE attackers can reach their goals in many different ways and a universally accepted evaluation methodology does not exist. This survey systematically reviews the evaluation methodologies of papers on obfuscation, a major class of protections against MATE attacks. For 572 papers, we collected 113 aspects of their evaluation methodologies, ranging from sample set types and sizes, through sample treatment, to the measurements performed. We provide detailed insights into how the academic state of the art evaluates both the protections and the analyses applied to them. In summary, there is a clear need for better evaluation methodologies. We identify nine challenges for software protection evaluations, which represent threats to the validity, reproducibility, and interpretation of research results in the context of MATE attacks.

    A Cybersecurity review of Healthcare Industry

    Background: Cybersecurity is not a new concept. Since the 1960s it has been a field of discussion and research. Although defensive security mechanisms have evolved, attackers' capabilities have grown to an equal or greater extent. Proof of this is the precarious cybersecurity posture of many companies, which has led to a rise in ransomware attacks and to the establishment of large criminal organizations dedicated to cybercrime. This situation shows the need for progress and investment in cybersecurity across many sectors, and it is especially relevant to the protection of critical infrastructure. Critical infrastructures are those strategic infrastructures whose operation is indispensable and admits no alternative, so that their disruption or destruction would severely impact essential services. Healthcare services and infrastructures fall within this category. These infrastructures provide a service whose interruption has serious consequences, including the loss of human life. A cyberattack can bring healthcare services to a total or partial standstill, as recent incidents have shown, even leading to the loss of human life. Moreover, such services hold a wealth of highly sensitive personal information. Medical data are highly valued on illegal markets and are therefore the target of attacks aimed at stealing them. It should also be noted that, like other sectors, healthcare services are currently undergoing digitalization. This evolution has largely ignored cybersecurity, contributing to the growth and severity of the attacks mentioned above.
    Methodology and research: The work presented in this thesis clearly follows an experimental and deductive method. The research has focused on assessing the state of cybersecurity in healthcare infrastructures and on proposing improvements and mechanisms for detecting cyberattacks. The three scientific publications included in this thesis seek to provide solutions to, and evaluate, current problems in healthcare infrastructures and systems. The first publication, 'Mobile malware detection using machine learning techniques', focused on developing new threat detection techniques based on artificial intelligence and machine learning. This research produced a method for detecting potentially unwanted and malicious applications in Android mobile environments. Its design and implementation took into account the specific needs of healthcare environments, aiming for a simple, viable deployment suited to these centres, and obtained satisfactory results. The second publication, 'Interconnection Between Darknets', sought to identify and detect the theft and sale of medical data on darknets. This research led to the discovery and demonstration of the interconnection between different darknets: searching and analysing information in these networks showed how different networks share information and references with one another, so analysing one darknet requires analysing others to obtain a more complete picture of the first. Finally, the last publication, 'Security and privacy issues of data-over-sound technologies used in IoT healthcare devices', investigated and evaluated the security of IoT (Internet of Things) medical devices. For this research, a medical device was acquired: a portable electrocardiograph currently in use in several hospitals. The tests performed on this device uncovered multiple cybersecurity flaws. These findings highlighted the lack of mandatory cybersecurity certifications and reviews for healthcare products currently on the market. Unfortunately, the lack of research funding did not allow several medical devices to be acquired for further cybersecurity evaluation.
    Conclusions: The work described above led to the following conclusions. Starting from healthcare infrastructures' need for cybersecurity mechanisms, their particular design and operation must be taken into account: the cybersecurity tests and mechanisms designed must be applicable in real environments. Unfortunately, healthcare infrastructures today contain technological systems that cannot be updated or modified; many treatment and diagnostic machines run proprietary software and operating systems to which administrators and staff have no access. Given this situation, measures must be developed that can be applied in this ecosystem and that, as far as possible, reduce and mitigate the risk these systems pose. This conclusion is tied to the lack of security in medical devices. Most medical devices have not followed a secure design process and have not been subjected to security testing by their manufacturers, since this would add a direct cost to product development. The only solution here is legislation that forces manufacturers to meet security standards, and although regulatory progress has been made, replacing insecure devices will take years or decades. The impossibility of updating them, or flaws rooted in the products' hardware, makes it impossible to fix every security flaw that is discovered, forcing device replacement once a satisfactory alternative exists from a cybersecurity standpoint. For this reason, new cybersecurity mechanisms must be designed that can be applied today and mitigate these risks during this transition period. Finally, regarding data theft: although the preliminary investigations in this thesis did not yield significant findings on the theft and sale of data, darknets, and the Tor network in particular, have become a key element of the Ransomware as a Business (RaaB) model, hosting extortion sites and contact points for these groups.
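
    As a purely illustrative, hedged sketch of the kind of ML-based detection described in the first publication (invented permission features and labels, not the thesis's method or data), a classifier over binary Android permission vectors could look like this:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Toy data: each row is a binary vector of requested Android permissions
    # (e.g. INTERNET, SEND_SMS, READ_CONTACTS, ...); labels mark malicious/PUA apps.
    rng = np.random.default_rng(1)
    X = rng.integers(0, 2, size=(600, 20))
    y = (X[:, [3, 7, 11]].sum(axis=1) >= 2).astype(int)  # synthetic labelling rule

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
    clf = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_tr, y_tr)
    print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")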

    Test Flakiness Prediction Techniques for Evolving Software Systems


    TRANSEUNTIS MUNDI, A NOMADIC ARTISTIC PRACTICE

    In this practice-led Ph.D. research, I investigate how an artistic practice can respond to the migration phenomena performed by human beings across the planet over millennia, what I refer to as the millennial global human journey. Based on the idea of mobility, I chose to frame this research through the articulation of concepts deriving from the prefix trans: transculture, transhumance and transmediality. This research contributes to studies in art composition by developing the processes and concept of transmedial composition, contributing mainly to the field of New Media Art. The investigation resulted in the Transeuntis Mundi (TM) Project, a nomadic artistic practice that encompasses the TM Derive and its manual, the TM Archive, the TM VR work Derive 01 and two forms for its notation. Transeuntis mundi, from the Latin, means 'the passersby of the world' and in this work metaphorically personifies the millennial migrants and their global journeys. Based on proposals from the Realism art movement and the walking-based methodologies of Walkscapes and the Dérive, the TM Derive was created as a nomadic methodology of composition in response to the ideas of migration and ancestry. It is framed by the minimal stories, the narrative form of this work, captured worldwide from field recordings of everyday life using 3D technology. This material formed the TM Archive, presented in the TM VR work. The TM VR work Transeuntis Mundi Derive 01 is an immersive and interactive performative experience for virtual reality that artistically brings together stories, sounds, images, people, and places worldwide as a metaphor for the millennial global human migration. The work is realized as a VR application using 3D technology with 360º images and ambisonic sound, promoting an engaged experience through the participant's immersion and interactivity. This thesis presents and contextualizes these creations: the scope, references, concepts, origin, collaborations, methodology, technologies, and results of the work. It is informed and accompanied by reflexive and critical writing, including an articulation with references to works across different artistic media and fields.

    A Study of Founder-entrepreneurs from High-tech Software SMEs in China in Managing Risks and Exploring Market Opportunities

    Small and medium-sized enterprises (SMEs) are the backbone of the national economy (Franco et al., 2014), and the rapid growth of the software market has brought many new opportunities for SMEs, especially in the high-tech software sector in China. However, the development of these SMEs still faces many uncertainties and competitive pressures that threaten their survival and sustainability (Li, 2019). In such scenarios, good risk management strategies can be a significant factor in the development and success of enterprises (Gourinchas et al., 2020; Bartik et al., 2020). However, limited by size and resources, SMEs commonly lack a well-developed risk management structure. In most cases, the founder-entrepreneurs play a key role in the risk management of their SMEs (Islam and Tedford, 2012; Oláh et al., 2019). Their personal risk attitudes and judgments therefore directly shape the company's direction and development, and sometimes even its survival (Carland et al., 1995). To understand the risk management status of Chinese high-tech software SMEs, this study adopted a qualitative research approach, using multiple case studies to explore in depth the role of the founder-entrepreneur in the risk management process. Nine founder-entrepreneurs from Chinese high-tech software SMEs were selected as the sample for this study. The findings show the important role founder-entrepreneurs play in the risk management of Chinese high-tech software SMEs. In this study, founder-entrepreneurs were primarily driven by 'pull' motivations and saw entrepreneurship as a self-directed pursuit. In addition, the study finds that different personality traits significantly affect founder-entrepreneurs' risk attitudes and in turn lead to different risk strategies in the company's operation. Openness and neuroticism are the two personality traits that most strongly influence founder-entrepreneurs' decision-making when facing risk. Participants with a higher level of openness tended to accept new challenges and tasks and showed a more positive attitude towards market uncertainties and challenges. Those who scored lower in neuroticism were better able to manage their emotions, a quality that helps them make more objective decisions, especially during crises and when facing risks and setbacks.

    Large Language Models for Software Engineering: A Systematic Literature Review

    Large Language Models (LLMs) have significantly impacted numerous domains, notably including Software Engineering (SE). Nevertheless, a well-rounded understanding of the application, effects, and possible limitations of LLMs within SE is still in its early stages. To bridge this gap, our systematic literature review takes a deep dive into the intersection of LLMs and SE, with a particular focus on understanding how LLMs can be exploited in SE to optimize processes and outcomes. Through a comprehensive review approach, we collect and analyze a total of 229 research papers from 2017 to 2023 to answer four key research questions (RQs). In RQ1, we categorize and provide a comparative analysis of different LLMs that have been employed in SE tasks, laying out their distinctive features and uses. For RQ2, we detail the methods involved in data collection, preprocessing, and application in this realm, shedding light on the critical role of robust, well-curated datasets for successful LLM implementation. RQ3 allows us to examine the specific SE tasks where LLMs have shown remarkable success, illuminating their practical contributions to the field. Finally, RQ4 investigates the strategies employed to optimize and evaluate the performance of LLMs in SE, as well as the common techniques related to prompt optimization. Armed with insights drawn from addressing the aforementioned RQs, we sketch a picture of the current state-of-the-art, pinpointing trends, identifying gaps in existing research, and flagging promising areas for future study

    Undergraduate and Graduate Course Descriptions, 2023 Spring

    Wright State University undergraduate and graduate course descriptions from Spring 2023