
    Development of a Client-Side Evil Twin Attack Detection System for Public Wi-Fi Hotspots based on Design Science Approach

    Users and providers benefit considerably from public Wi-Fi hotspots. Users receive wireless Internet access and providers draw new prospective customers. While users can enjoy the convenience of public Wi-Fi hotspot networks, they are more susceptible to a particular type of fraud and identity theft, referred to as the evil twin attack (ETA). By setting up an ETA, an attacker can intercept sensitive data such as passwords or credit card information by snooping on the communication links. Since the objective of free open (unencrypted) public Wi-Fi hotspots is to provide easy accessibility and to entice customers, no security mechanisms are in place. The public’s lack of awareness of the security threat posed by free open public Wi-Fi hotspots makes this problem even more serious. Client-side systems that help wireless users detect and protect themselves from evil twin attacks in public Wi-Fi hotspots are greatly needed. In this dissertation report, the author explored the need for client-side detection systems that allow wireless users to protect their data from evil twin attacks while using free open public Wi-Fi. The client-side evil twin attack detection system constructed as part of this dissertation bridged the gap between the need for wireless security in free open public Wi-Fi hotspots and the limitations of existing client-side evil twin attack detection solutions. Based on design science research (DSR) literature, Hevner’s seven guidelines of DSR, Peffer’s design science research methodology (DSRM), Gregor’s IS design theory, and Hossen & Wenyuan’s (2014) study evaluation methodology, the author developed design principles, procedures, and specifications to guide the construction, implementation, and evaluation of a prototype client-side evil twin attack detection artifact. The client-side evil twin attack detection system was evaluated in a hotel public Wi-Fi environment. The goal of this research was to develop a more effective, efficient, and practical client-side detection system that allows wireless users to independently detect and protect themselves from mobile evil twin attacks while using free open public Wi-Fi hotspots. The experimental results showed that the client-side evil twin attack detection system can effectively detect and protect users from mobile evil twin AP attacks in public Wi-Fi hotspots in various real-world scenarios, despite time delays caused by several factors.
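    The abstract does not spell out the detection mechanism, so the following is only a minimal illustrative sketch of one common client-side heuristic: flagging an SSID that is advertised by more than one BSSID during a scan. The scan data, field names, and the find_suspect_ssids helper are hypothetical, not the dissertation's artifact.

```python
from collections import defaultdict

def find_suspect_ssids(scan_results):
    """Group scan results by SSID and flag any SSID advertised by more
    than one BSSID. Legitimate multi-AP deployments also trigger this,
    so a real detector would combine further evidence (security mode,
    signal characteristics, gateway fingerprinting, etc.)."""
    by_ssid = defaultdict(set)
    for ap in scan_results:
        by_ssid[ap["ssid"]].add(ap["bssid"])
    return {ssid: sorted(bssids)
            for ssid, bssids in by_ssid.items() if len(bssids) > 1}

# Hypothetical scan results for illustration only.
scan = [
    {"ssid": "HotelGuest", "bssid": "aa:bb:cc:00:11:22", "security": "open"},
    {"ssid": "HotelGuest", "bssid": "de:ad:be:ef:00:01", "security": "open"},
    {"ssid": "Lobby",      "bssid": "aa:bb:cc:00:11:33", "security": "wpa2"},
]
print(find_suspect_ssids(scan))   # flags 'HotelGuest' with two BSSIDs
```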

    Development and application of distributed computing tools for virtual screening of large compound libraries

    In the current drug discovery process, the identification of new target proteins and potential ligands is tedious, expensive, and time-consuming. The use of in silico techniques is therefore of utmost importance and has proved to be a valuable strategy for detecting complex structural and bioactivity relationships. The increasing demand for computational power in scientific fields and the need for timely analysis of the generated data require innovative strategies for the efficient utilization of distributed computing resources in the form of computational grids. Such grids add a new aspect to the emerging information technology paradigm by providing and coordinating heterogeneous resources such as various organizations, people, computing, storage, and networking facilities as well as data, knowledge, software, and workflows. The aim of this study was to develop a university-wide applicable grid infrastructure, UVieCo (University of Vienna Condor pool), which can be used to implement standard structure- and ligand-based drug discovery applications using freely available academic software. Firewall and security issues were resolved with a virtual private network setup, whereas virtualization of computer hardware was achieved using the CoLinux concept, allowing Linux-executable jobs to run inside Windows machines. The effectiveness of the grid was assessed by performance measurement experiments using sequential and parallel tasks. Subsequently, the association of expression/sensitivity profiles of ABC transporters with activity profiles of anticancer compounds was analyzed by mining data from the NCI (National Cancer Institute). The datasets generated in this analysis were used with ligand-based computational methods such as shape similarity and classification algorithms to identify P-glycoprotein (P-gp) substrates and separate them from non-substrates. While developing predictive classification models, the problem of highly imbalanced class distribution was addressed using the cost-sensitive bagging approach. Applicability domain experiments revealed that our model not only predicts NCI compounds well but can also be applied to drug-like molecules. The developed models were relatively simple yet precise enough to be applicable for virtual screening of large chemical libraries for the early identification of P-gp substrates, which can potentially be useful for removing compounds with poor ADMET properties in an early phase of drug discovery. Additionally, shape-similarity and self-organizing map techniques were used to screen an in-house database as well as a large vendor database for novel compounds that are similar to selective serotonin reuptake inhibitors (SSRIs) and can induce apoptosis. The retrieved hits possess novel chemical scaffolds and can be considered starting points for lead optimization studies. The work described in this thesis will be useful for creating a distributed computing environment that uses available resources within an organization and can be applied to various applications, such as the efficient handling of imbalanced data classification problems or multistep virtual screening.
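    The cost-sensitive bagging step is described only at a high level; the sketch below illustrates one generic way to counter extreme class imbalance in an ensemble, training each bagged decision tree on a class-balanced bootstrap sample. The BalancedBagging class and the synthetic data are assumptions for illustration, not the classification models actually built in the thesis.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class BalancedBagging:
    """Bagging ensemble in which each bootstrap sample contains equal
    numbers of substrates (1) and non-substrates (0), a simple stand-in
    for cost-sensitive bagging on imbalanced data."""

    def __init__(self, n_estimators=25, random_state=0):
        self.n_estimators = n_estimators
        self.rng = np.random.default_rng(random_state)
        self.trees = []

    def fit(self, X, y):
        X, y = np.asarray(X), np.asarray(y)
        pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
        n = min(len(pos), len(neg))   # per-class size of each bootstrap
        for _ in range(self.n_estimators):
            idx = np.concatenate([
                self.rng.choice(pos, n, replace=True),
                self.rng.choice(neg, n, replace=True),
            ])
            self.trees.append(
                DecisionTreeClassifier(max_depth=5).fit(X[idx], y[idx]))
        return self

    def predict(self, X):
        votes = np.mean([t.predict(X) for t in self.trees], axis=0)
        return (votes >= 0.5).astype(int)

# Synthetic, highly imbalanced toy data (only a few percent positives).
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 8))
y = (X[:, 0] + 0.5 * X[:, 1] > 2.2).astype(int)
model = BalancedBagging().fit(X, y)
print("predicted positives:", model.predict(X).sum(), "actual:", y.sum())
```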

    Building Cloud-Based Information Systems Lab Architecture: Deriving Design Principles that Facilitate the Effective Construction and Evaluation of a Cloud-Based Lab Environment

    The problem explored in this dissertation report was that, at the time of this study, there were no design principles or methodologies based on design science research (DSR) available for artifact construction, implementation, and effective evaluation of cloud-based networking lab environments that can be used to foster hands-on technology skills in students. Primarily based on Hevner’s seven guidelines of DSR, Peffer’s design science research methodology (DSRM), and Gregor’s IS design theory, this study forms the groundwork for the development of procedures and specifications derived from DSR literature to facilitate the construction, implementation, and evaluation of a comprehensive cloud-based computer and information systems (CIS) laboratory artifact that is globally accessible 24 hours a day, 7 days a week. Secondarily, this study guided the construction and implementation of a prototype cloud-based lab environment using the procedures and specifications derived from DSR. The cloud-based lab environment was then evaluated based on the skill level attained by students enrolled in courses that leveraged the proposed system. Results of this study showed that the overwhelming majority of the students who used the cloud-based lab environment showed statistically significant gains from pretest to posttest scores compared to the students who used the classroom-based physical equipment. These results fully supported the first hypothesis of this study, that participation in the cloud-based lab environment would promote positive student outcomes. The second hypothesis was also supported: the majority of the experimental group students completed most of the labs and spent significantly more time on the system compared to the control group students using the traditional classroom-based physical lab equipment, which indicated that the specifications derived from DSR positively influenced the use of the cloud-based system. An argument was made that the proposed study advances IS and education research through artifact construction and evaluation by mapping Hevner’s seven guidelines of DSR, Peffer’s DSRM, and Gregor’s IS design theory to the problem statement, research questions, and hypotheses in order to develop guiding principles and specifications for building and assessing a cloud-based lab environment.
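    The pretest/posttest comparison reported above could, for example, be run as an independent-samples t-test on gain scores; a minimal sketch with made-up scores (not the study's data) follows.

```python
import numpy as np
from scipy import stats

# Hypothetical pretest/posttest scores for illustration only.
cloud_pre      = np.array([55, 60, 48, 62, 58, 51, 66, 59])
cloud_post     = np.array([78, 85, 70, 88, 80, 74, 90, 83])
classroom_pre  = np.array([54, 61, 50, 63, 57, 52, 65, 60])
classroom_post = np.array([63, 70, 58, 72, 66, 60, 75, 69])

# Compute per-student gain scores, then compare the two groups.
cloud_gain = cloud_post - cloud_pre
classroom_gain = classroom_post - classroom_pre
t_stat, p_value = stats.ttest_ind(cloud_gain, classroom_gain)
print(f"mean gains: cloud={cloud_gain.mean():.1f}, "
      f"classroom={classroom_gain.mean():.1f}, p={p_value:.4f}")
```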

    EMPIRICAL STUDIES BASED ON HONEYPOTS FOR CHARACTERIZING ATTACKERS' BEHAVIOR

    The cybersecurity community has made substantial efforts to understand and mitigate security flaws in information systems. Often, when a compromise is discovered, it is difficult to identify the actions performed by an attacker. In this study, we explore the compromise phase, i.e., when an attacker exploits the host he/she gained access to through a vulnerability exposed by an information system. More specifically, we look at the main actions performed during the compromise and the factors deterring attackers from exploiting the compromised systems. Because of the lack of security datasets on compromised systems, we need to deploy systems to more adequately study attackers and the different techniques they employ to compromise computers. Security researchers employ target computers, called honeypots, that are not used by normal or authorized users. In this study we first describe the distributed honeypot network architecture deployed at the University of Maryland and the different honeypot-based experiments enabling the data collection required to conduct the studies on attackers' behavior. In the first experiment we explored the attackers' skill levels and the purpose of the malicious software installed on the honeypots. We determined the relative skill levels of the attackers and classified the different software installed. We then focused on the crimes committed by the attackers, i.e., the attacks launched from the honeypots by the attackers. We defined the different computer crimes observed (e.g., brute-force attacks and denial-of-service attacks) and their characteristics (whether they were coordinated and/or destructive). We looked at the impact of computer resource restrictions on the crimes and then at the deterrent effect of warnings and surveillance. Lastly, we used different metrics related to the attack sessions to investigate the impact of surveillance on the attackers based on their country of origin. During attacks, we found that attackers mainly installed IRC-based bot tools and sometimes shared their honeypot access. From the analysis of crimes, it appears that deterrence does not work; we showed that attackers seem to favor certain computer resources. Finally, we observed that the presence of surveillance had no significant impact on the attack sessions; however, surveillance altered the behavior of attackers originating from a few countries.
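    Brute-force attacks of the kind characterized here typically show up in honeypot logs as bursts of failed logins from a single source; the following is a small, self-contained sketch of that style of log characterization over hypothetical auth-log lines, not the instrumentation used in the study.

```python
import re
from collections import Counter

FAILED = re.compile(r"Failed password for .+ from (\d+\.\d+\.\d+\.\d+)")

def bruteforce_sources(log_lines, threshold=5):
    """Count failed SSH logins per source IP and return the sources
    exceeding a threshold -- a simple proxy for brute-force activity."""
    counts = Counter()
    for line in log_lines:
        match = FAILED.search(line)
        if match:
            counts[match.group(1)] += 1
    return {ip: n for ip, n in counts.items() if n >= threshold}

# Hypothetical log excerpt for illustration.
log = ["sshd: Failed password for root from 203.0.113.9 port 4222"] * 7 + \
      ["sshd: Failed password for admin from 198.51.100.4 port 5110"] * 2
print(bruteforce_sources(log))   # only 203.0.113.9 crosses the threshold
```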

    SYSTEMATIC DISCOVERY OF ANDROID CUSTOMIZATION HAZARDS

    The open nature of the Android ecosystem has naturally laid the foundation for a highly fragmented operating system. In fact, the official AOSP versions have been aggressively customized into thousands of system images by everyone in the customization chain, such as device manufacturers, vendors, and carriers. If not well thought out, the customization process can result in serious security problems. This dissertation performs a systematic investigation of the inconsistencies introduced by Android customization with regard to security at various Android layers. It brings to light new vulnerabilities, never investigated before, caused by the under-regulated and complex Android customization. It first describes a novel vulnerability, Hare, and shows that it is security-critical and widespread, affecting devices from major vendors. A new tool is proposed to detect the Hare problem and to protect affected devices. This dissertation further discovers security configuration changes through a systematic differential analysis among custom devices from different vendors and demonstrates that they could lead to severe vulnerabilities if introduced unintentionally.
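    The differential analysis mentioned above compares security configurations extracted from different vendor images; the sketch below illustrates the idea on hypothetical permission protection-level maps, flagging entries that appear weaker on a custom image than on the AOSP reference. The data and the weakened helper are illustrative assumptions, not the dissertation's tooling.

```python
# Hypothetical protection levels extracted from two system images;
# a real analysis would parse framework resources and manifests.
PROTECTION_RANK = {"normal": 0, "dangerous": 1, "signature": 2}

aosp = {
    "android.permission.INSTALL_PACKAGES": "signature",
    "android.permission.READ_LOGS": "signature",
    "android.permission.CAMERA": "dangerous",
}
custom = {
    "android.permission.INSTALL_PACKAGES": "signature",
    "android.permission.READ_LOGS": "normal",       # weakened by customization
    "android.permission.CAMERA": "dangerous",
    "vendor.permission.DIAG_ACCESS": "normal",      # vendor-added permission
}

def weakened(reference, customized):
    """Return permissions whose protection level is lower on the
    customized image than on the reference (AOSP) image."""
    issues = {}
    for perm, ref_level in reference.items():
        cus_level = customized.get(perm)
        if cus_level and PROTECTION_RANK[cus_level] < PROTECTION_RANK[ref_level]:
            issues[perm] = (ref_level, cus_level)
    return issues

print(weakened(aosp, custom))   # flags READ_LOGS as downgraded
```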

    Virtual Cluster Management for Analysis of Geographically Distributed and Immovable Data

    Thesis (Ph.D.) - Indiana University, Informatics and Computing, 2015
    Scenarios exist in the era of Big Data where computational analysis needs to utilize widely distributed and remote compute clusters, especially when the data sources are sensitive or extremely large, and thus unable to move. A large dataset in Malaysia could be ecologically sensitive, for instance, and unable to be moved outside the country's boundaries. Controlling an analysis experiment in this virtual cluster setting can be difficult on multiple levels: with setup and control, with managing the behavior of the virtual cluster, and with interoperability issues across the compute clusters. Further, datasets can be distributed among clusters, or even across data centers, so it becomes critical to utilize data locality information to optimize the performance of data-intensive jobs. Finally, datasets are increasingly sensitive and tied to certain administrative boundaries, though once the data has been processed, the aggregated or statistical results can be shared across those boundaries. This dissertation addresses the management and control of a widely distributed virtual cluster holding sensitive or otherwise immovable datasets through a controller. The Virtual Cluster Controller (VCC) gives control back to the researcher. It creates virtual clusters across multiple cloud platforms. In recognition of sensitive data, it can establish a single network overlay over widely distributed clusters. We define a novel class of data, notably immovable data that we call "pinned data", where the data is treated as a first-class citizen instead of being moved to where it is needed. We draw from our earlier work with a hierarchical data processing model, Hierarchical MapReduce (HMR), to process geographically distributed data, some of which are pinned data. The applications implemented in HMR use an extended MapReduce model in which computations are expressed as three functions: Map, Reduce, and GlobalReduce. Further, by facilitating information sharing among resources, applications, and data, the overall performance is improved. Experimental results show that the overhead of VCC is minimal. HMR outperforms the traditional MapReduce model when processing a particular class of applications. The evaluations also show that information sharing between resources and applications through the VCC shortens the hierarchical data processing time, while satisfying the constraints on the pinned data.
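    The extended MapReduce model adds a GlobalReduce stage that combines the per-cluster reduce outputs. A compact, purely local sketch of the three-function flow (word counting over two hypothetical "clusters" of pinned data) follows, as an illustration of the model rather than the HMR implementation.

```python
from collections import defaultdict

def map_fn(record):
    # Emit (word, 1) pairs from one input record.
    return [(word, 1) for word in record.split()]

def reduce_fn(key, values):
    # Local (per-cluster) reduce: sum counts for one word.
    return key, sum(values)

def global_reduce_fn(key, values):
    # Global reduce: combine the per-cluster partial counts.
    return key, sum(values)

def run_cluster(records):
    """Simulate Map plus local Reduce on data pinned to one cluster."""
    grouped = defaultdict(list)
    for record in records:
        for k, v in map_fn(record):
            grouped[k].append(v)
    return dict(reduce_fn(k, vs) for k, vs in grouped.items())

# Hypothetical pinned datasets on two geographically separate clusters.
cluster_a = ["forest species survey", "species count survey"]
cluster_b = ["survey of forest cover", "forest cover change"]

partials = [run_cluster(cluster_a), run_cluster(cluster_b)]
merged = defaultdict(list)
for partial in partials:
    for k, v in partial.items():
        merged[k].append(v)
final = dict(global_reduce_fn(k, vs) for k, vs in merged.items())
print(final)
```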

    UNIX Administrator Information Security Policy Compliance: The Influence of a Focused SETA Workshop and Interactive Security Challenges on Heuristics and Biases

    Information Security Policy (ISP) compliance is crucial to the success of healthcare organizations due to security threats and the potential for security breaches. UNIX Administrators (UXAs) in healthcare Information Technology (IT) maintain critical servers that house Protected Health Information (PHI). Their compliance with ISP is crucial to the confidentiality, integrity, and availability of PHI data housed on or accessed by their servers. The use of cognitive heuristics and biases may negatively influence threat appraisal, coping appraisal, and ultimately ISP compliance behavior. These failures may result in insufficiently protected servers and put organizations at greater risk of data breaches and financial loss. The goal was to empirically assess the effect of a focused Security Education, Training, and Awareness (SETA) workshop, an Interactive Security Challenge (ISC), and periodic security update emails on UXAs' knowledge sharing, use of cognitive heuristics and biases, and ISP compliance behavior. This quantitative study employed a pretest and posttest experimental design to evaluate the effectiveness of a SETA workshop and an ISC on the ISP compliance of UXAs. The survey instrument was developed from prior validated instrument questions and augmented with newly designed questions related to the use of cognitive heuristics and biases. Forty-two participants completed the survey before and after the SETA workshop, ISC, and security update emails. Actual compliance (AC) behavior was assessed by comparing the results of security scans on administrators' servers before and 90 days after the SETA workshop and ISC. SmartPLS was used to analyze the pre-workshop data, post-workshop data, and combined data to evaluate the proposed structural and measurement models. The results indicated that Confirmation Bias (CB) and the Availability Heuristic (AH) were significantly influenced by Information Security Knowledge Sharing (ISKS). Optimism Bias (OB) did not reach statistically significant levels in relation to ISKS. OB did, however, significantly influence perceived severity (TA-PS), perceived vulnerability (TA-PV), response-efficacy (CA-RE), and self-efficacy (CA-SE). Also, all five security implementation data points collected to assess pre- and post-workshop compliance showed statistically significant change. In total, eight hypotheses were accepted and nine hypotheses were rejected.

    Creation of value with open source software in the telecommunications field

    Doctoral thesis (Tese de doutoramento) in Electrical and Computer Engineering. Faculdade de Engenharia, Universidade do Porto. 200

    Content rendering and interaction technologies for digital heritage systems

    Existing digital heritage systems accommodate a huge amount of digital repository information; however, their content rendering and interaction components generally lack the more interesting functionality that allows better interaction with heritage contents. Many digital heritage libraries are simply collections of 2D images with associated metadata and textual content, i.e. little more than museum catalogues presented online. However, over the last few years, largely as a result of EU framework projects, some 3D representations of digital heritage objects are beginning to appear in a digital library context. In the cultural heritage domain, where researchers and museum visitors like to observe cultural objects as closely as possible and to feel their existence and use in the past, giving the user only 2D images along with textual descriptions significantly limits interaction and hence understanding of their heritage. The availability of powerful content rendering technologies, such as 3D authoring tools to create 3D objects and heritage scenes, grid tools for rendering complex 3D scenes, game engines to display 3D content interactively, and recent advances in motion capture technologies for embodied immersion, allows the development of unique solutions for enhancing user experience and interaction with digital heritage resources and objects, giving a higher level of understanding and greater benefit to the community. This thesis describes DISPLAYS (Digital Library Services for Playing with Shared Heritage Resources), a novel conceptual framework in which five unique services are proposed for digital content: creation, archival, exposition, presentation, and interaction services. These services or tools are designed to allow the heritage community to create, interpret, use, and explore digital heritage resources organised as an online exhibition (or virtual museum). This thesis presents innovative solutions for two of these services or tools: a content creation service, for which a cost-effective render grid is proposed; and an interaction service, in which a heritage scenario is presented online using a real-time motion capture and digital puppeteer solution that lets the user explore their digital heritage through embodied immersive interaction.

    Policy Conflict Management in Distributed SDN Environments

    The ease of programmability in Software-Defined Networking (SDN) makes it a great platform for implementation of various initiatives that involve application deployment, dynamic topology changes, and decentralized network management in a multi-tenant data center environment. However, implementing security solutions in such an environment is fraught with policy conflicts and consistency issues, and the hardness of this problem is affected by the distribution scheme for the SDN controllers. In this dissertation, a formalism for flow rule conflicts in SDN environments is introduced. This formalism is realized in Brew, a security policy analysis framework implemented on an OpenDaylight SDN controller. Brew has comprehensive conflict detection and resolution modules to ensure that no two flow rules in a distributed SDN-based cloud environment conflict at any layer, thereby assuring consistent, conflict-free security policy implementation and preventing information leakage. Techniques for global prioritization of flow rules in a decentralized environment are presented, using which all SDN flow rule conflicts are recognized and classified. Strategies for unassisted resolution of these conflicts are also detailed. Alternatively, if administrator input is desired to resolve conflicts, a novel visualization scheme is implemented to help administrators view the conflicts in an aesthetic manner. The correctness, feasibility, and scalability of the Brew proof-of-concept prototype are demonstrated. Flow rule conflict avoidance using a buddy address space management technique is studied as an alternative to conflict detection and resolution in highly dynamic cloud systems attempting to implement SDN-based Moving Target Defense (MTD) countermeasures.
    Doctoral Dissertation, Computer Science, 201
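    Flow rule conflicts of the kind Brew detects arise when rules that match overlapping traffic prescribe different actions; the sketch below shows a highly simplified check over hypothetical rules with exact-match or wildcard fields only, not Brew's actual formalism or algorithm.

```python
from dataclasses import dataclass

@dataclass
class FlowRule:
    src: str      # source prefix or "*" wildcard
    dst: str      # destination prefix or "*" wildcard
    action: str   # e.g. "allow" or "drop"
    priority: int

def fields_overlap(a, b):
    # Two simplified match fields overlap if either is a wildcard
    # or they are identical.
    return a == "*" or b == "*" or a == b

def conflicts(rules):
    """Return pairs of rules that match overlapping traffic but
    prescribe different actions at the same priority."""
    found = []
    for i, r1 in enumerate(rules):
        for r2 in rules[i + 1:]:
            overlapping = (fields_overlap(r1.src, r2.src)
                           and fields_overlap(r1.dst, r2.dst))
            if overlapping and r1.action != r2.action and r1.priority == r2.priority:
                found.append((r1, r2))
    return found

# Hypothetical rules pushed by two tenants' controllers.
rules = [
    FlowRule("10.0.1.0/24", "*", "allow", 100),
    FlowRule("*", "10.0.2.0/24", "drop", 100),
    FlowRule("10.0.3.0/24", "10.0.4.0/24", "allow", 50),
]
for r1, r2 in conflicts(rules):
    print("conflict:", r1, "<->", r2)
```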