    A Comprehensive Cybersecurity Defense Framework for Large Organizations

    There is a growing need to understand and identify overarching organizational requirements for cybersecurity defense in large organizations. Applying proper cybersecurity defense ensures that the right capabilities are fielded at the right locations to safeguard critical assets while minimizing duplication of effort and taking advantage of efficiencies. Exercising cybersecurity defense without an understanding of comprehensive foundational requirements leads to an ad hoc and, in many cases, conservative approach to network security. Organizations must be synchronized across federal and civil agencies to achieve adequate cybersecurity defense. Understanding what constitutes comprehensive cybersecurity defense will ensure organizations are better protected and more efficient. This work, conducted through design science research, developed a model for understanding comprehensive cybersecurity defense, addressing the lack of standard requirements in large organizations. A systematic literature review and content analysis were conducted to form seven criteria statements for understanding comprehensive cybersecurity defense. The seven criteria statements were then validated by a panel of expert cyber defenders using the Delphi consensus process. Based on the approved criteria, the team of cyber defenders facilitated the development of a Comprehensive Cybersecurity Defense Framework prototype for understanding cybersecurity defense. Through the Delphi process, the team of cyber defense experts ensured the framework matched the seven criteria statements. An additional, separate panel of stakeholders conducted the Delphi consensus process to ensure a non-biased evaluation of the framework. The comprehensive cybersecurity defense framework was thus developed from the data collected from two distinct and separate Delphi panels. The framework maps risk management, behavioral, and defense-in-depth frameworks to cyber defense roles to offer a comprehensive approach to cyber defense in large companies, agencies, or organizations. By defining the cyber defense tasks, what those tasks are trying to achieve, and where best to accomplish those tasks on the network, a comprehensive approach is reached.
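The Delphi consensus process described above is iterative: panelists rate candidate criteria statements, consensus is measured, and items that fall short are revised for another round. The abstract does not state which consensus metric was used, so the following is a minimal, hypothetical Python sketch, assuming a 1-5 rating scale, a 75% agreement threshold, and an interquartile-range cutoff, of how one round of panel ratings might be checked for consensus.

# Hypothetical Delphi consensus check; the 1-5 scale, the 75% agreement
# threshold, and the IQR cutoff are illustrative assumptions, not values
# reported by the study.
from statistics import quantiles

AGREEMENT_THRESHOLD = 0.75   # assumed share of experts rating 4 or 5
MAX_IQR = 1.0                # assumed maximum interquartile range

def has_consensus(ratings):
    """Return True if one criterion's 1-5 ratings reach consensus."""
    agreement = sum(r >= 4 for r in ratings) / len(ratings)
    q1, _, q3 = quantiles(ratings, n=4)
    return agreement >= AGREEMENT_THRESHOLD and (q3 - q1) <= MAX_IQR

# Example round: two of the seven criteria statements, each rated by ten experts.
panel_ratings = {
    "criterion_1": [5, 4, 4, 5, 4, 5, 4, 4, 5, 4],
    "criterion_2": [2, 5, 3, 4, 1, 5, 2, 4, 3, 5],
}
for criterion, ratings in panel_ratings.items():
    verdict = "consensus" if has_consensus(ratings) else "revise for next round"
    print(f"{criterion}: {verdict}")

Criteria that fail the check would be reworded and re-rated in a subsequent round, mirroring the repeated panel validation described in the abstract.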

    Analyzing Small Business Strategies to Prevent External Cybersecurity Threats

    Some small businesses' cybersecurity analysts lack strategies to prevent their organizations from compromising personally identifiable information (PII) via external cybersecurity threats. Small business leaders are concerned because small businesses are the most targeted critical infrastructure in the United States and a vital part of the economic system, and data breaches threaten the viability of these organizations. Grounded in routine activity theory, the purpose of this pragmatic qualitative inquiry was to explore strategies small business organizations utilize to prevent external cybersecurity threats. The participants were nine cybersecurity analysts who utilized strategies to defend small businesses from external threats. Data were collected via online semistructured interviews and National Institute of Standards and Technology documentation and were analyzed thematically. Six major themes emerged: (a) applying standards regarding external threats, (b) evaluation of cybersecurity strategies and effectiveness, (c) consistent awareness of the external threat landscape, (d) assessing threat security posture, (e) measuring the ability to address risk and prevent attacks related to external threats, and (f) centralizing communication across departments to provide a holistic perspective on threats. A key recommendation for cybersecurity analysts is to employ moving target defenses to prevent external cybersecurity threats. The implications for positive social change include the potential to provide small business cybersecurity analysts with additional strategies to effectively mitigate the compromise of customer PII, creating more resilient economic infrastructures while strengthening communities.
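Moving target defense, the key recommendation above, works by continually shifting attack-surface parameters so that an attacker's reconnaissance goes stale. The study does not prescribe an implementation, so the sketch below is a minimal, hypothetical Python illustration of one common variant, periodic rotation of a service's listening port; the port pool, rotation interval, and rebind_service placeholder are assumptions for illustration only.

# Hypothetical moving-target-defense sketch: periodically move a service to a
# new port so previously scanned ports stop being useful to an attacker.
# The port pool, interval, and rebind_service placeholder are assumptions.
import random
import time

PORT_POOL = range(20000, 60000)      # assumed pool of candidate ports
ROTATION_INTERVAL_SECONDS = 300      # assumed five-minute rotation period

def rebind_service(port):
    # Placeholder: a real deployment would rebind the listener here and
    # update firewall rules and legitimate clients' connection details.
    print(f"service now listening on port {port}")

def rotate_forever():
    current = None
    while True:
        new_port = random.choice([p for p in PORT_POOL if p != current])
        rebind_service(new_port)
        current = new_port
        time.sleep(ROTATION_INTERVAL_SECONDS)

if __name__ == "__main__":
    rotate_forever()

Rotating a single parameter is the simplest case; the same pattern extends to rotating IP addresses, credentials, or host configurations on a schedule.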

    An investigation into trust and security in the mandatory and imposed use of financial ICTs upon older people

    Care needs to be taken to reduce the number of people who are fearful and mistrustful of using ICT where that usage is forced upon them without choice or alternative. The growing incidence of mandatory and imposed online systems can result in confusion, misuse, fear, and rejection by people with only rudimentary ICT skills. A cohort with a high percentage of such people is older people, defined in this study as people over the age of 60. Examples of compulsory ICT interactions include some banks limiting bank statement access to online rather than paper-based options. Other examples include the purchase of theatre or sports event tickets through ticketing systems that require an online transaction to take place. Increasingly, people are living beyond the normal retiring age. As the older cohort increases in size and in overall global population percentage, the problem of forced technology usage affects technology acceptance, technology trust, and technology rejection. These issues matter because reduced trusted acceptance of technology diminishes the advantages of digital health care, the perceived security of banking and shopping, and the autonomy of ICT-driven lifestyle choices. This study aims to solve one of the puzzles of ICT-driven change, where older people can show trepidation towards using technology. By understanding the drivers that influence the choices older people make in relation to ICT systems, it may be possible to introduce a much higher level of trusted acceptance of ICT systems. Although many people adopt ICTs into their lives, many older people face difficulty in using technology when it is forced upon them. This study aims to understand how choice (or the lack of it) can lead to rejection of, or resistance towards, ICT usage. Older people sometimes opt for practices that place them at risk of financial or informational disadvantage. This study used a qualitative approach to understanding the factors that influenced the trusted acceptance, trepidation, and in some cases rejection of ICT usage, by interviewing a sample of older people. Participants were asked to consider a wide range of ICT-usage scenarios and to describe their intentions. The study focussed on circumstances where ICT usage fell under mandatory, imposed, or voluntary conditions in order to compare user behaviour. Settings included a range of technology-reliant states that examined IT security, volition and choice, aging, trusted acceptance, and technology adoption. Participants were interviewed to discover and sort the conditions (whether singly or in combination) under which the expectation of ICT acceptance was in some way altered, diminished, or prevented. This research found that older people made poor decisions when the choice to use a technology was replaced with a mandatory or strongly imposed pathway. Mandatory ICT usage across the broad area of financial transactions brought about widespread fear and distrust of online technology usage. The results revealed that many older people not only find these innovations daunting and confronting, but also have difficulty placing their trust in ICT systems and applications that have become mandatory. In normative conditions, increased ICT acceptance and ICT usage is expected. When ICTs are mandatory in their usage, acceptance is replaced with compulsory procedure.
This does not mean that mandatory things cannot be accepted, but rather that older people will accept the need to use a technology according to their perception of what is necessary for their daily and routine interactions. This study showed that voluntary ICT usage that includes choices increases informed decision-making, the security of online financial interactions, and trusted reliance upon ICTs. Choice in ICT usage carries greater trust than mandatory, obligated, or heavily imposed ICTs. The study revealed that mandatory ICT systems can create perceptions of fear, mistrust, and uncertainty. In situations where a mandatory ICT system becomes the normative method of transaction, a strong risk to the trusted acceptance of a technology is not merely the lack of ICT-based choice, but also the inability to gain reassurance or secondary confirmation through either face-to-face or telephone-based communication. Trust in not just the usage, but the implied secure usage, of mandated and imposed ICTs is problematic for older people. This study revealed the significance of mandated ICT systems that limit choices, because older people more readily validate and associate their trust in new innovations when they can access a variety of professional, technical, peer-based, social, and popular opinions. The research also showed that older people are fearful and less trusting of mandatory and imposed systems because they have less financial resilience and less opportunity to bounce back from loss and disadvantage brought about by digital and online interactions. Older people were worried about and reluctant to accept technology at first glance because they knew that they had spent more time than others in a pre-internet, pre-digital environment, and their seminal life experiences are correspondingly less technology-related. The results showed that many older people preferred human communication and interaction rather than communicating, buying, paying, and trusting in purely digital, ICT-based experiences. This demonstrated a gap in the trust and security of digital systems, and the need to address those ICTs that impose and mandate instruments and procedures for daily life. Specifically, this study looked at what could reduce unsafe and insecure banking practices by understanding the role of choice in the trusted usage of ICT systems. This study is significant because it shows that older people make financial and social decisions under reactionary, insecure, and under-informed conditions as a result of a gap in terms of trust, security, and choice. On the one hand, older people develop trust towards a new innovation based on accumulated human discussion, information, and reputation. On the other hand, older people hold the perception that online systems offer reduced choices. This study led to the development of a trusted technology choice model (TTCM). It differs from traditional acceptance and diffusion thinking by having outputs of either ICT acceptance or ICT rejection. It diverges from diffusion and technology acceptance models (TAM) because technology acceptance is not regarded as a foregone conclusion. Instead, it places a very high value upon choice and volition, trust, security, and human interaction. The TTCM, together with a framework for identifying volition barriers, provides a different set of criteria for understanding the needs of older people and their meaningful interactions with new innovations and ICTs.
The practical applications of such a model bear directly upon financial and social stability for older people. Where choices are either removed or limited due to ICT usage, older citizens are unfairly disadvantaged. A model that accurately predicts the trusted usage of ICT innovations can have a widespread effect on the implementation of large-scale public and private systems, where the trusted acceptance (or rejection) of each system has a flow-on impact on financial, health, and other critical services that include the growing population of older people.

    Counteracting phishing through HCI

    Computer security is a very technical topic that is in many cases hard to grasp for the average user. Especially when using the Internet, the biggest network connecting computers globally, security and safety are important. In many cases they can be achieved without the user's active participation: securely storing user and customer data on Internet servers is the task of the respective company or service provider. However, there are also many cases where the user is involved in the security process, especially when he or she is intentionally attacked. Socially engineered phishing attacks are such a security issue, where users are directly attacked so that they reveal private data and credentials to an unauthorized attacker. These types of attacks are the main focus of the research presented within my thesis. I look at how these attacks can be counteracted, both by detecting them in the first place and by mediating the detection results to the user. In prior research and development these two areas have most often been regarded separately, and new security measures were developed without taking the final step of interacting with the user into account. This interaction mainly means presenting the detection results and receiving final decisions from the user. As an overarching goal within this thesis I look at these two aspects united, stating the overall protection as the sum of detection and "user intervention". Within nine different research projects about phishing protection, this thesis gives answers to ten different research questions in the areas of creating new phishing detectors (phishing detection) and providing usable user feedback for such systems (user intervention). The ten research questions cover five different topics in both areas, from the definition of the respective topic, through ways to measure and enhance each area, to reasoning about what makes sense. The research questions have been chosen to cover the range of both areas and the interplay between them. They are mostly answered by developing and evaluating different prototypes built within the projects, which cover a range of human-centered detection properties and evaluate how well these are suited for phishing detection. I also take a look at different possibilities for user intervention (e.g. what should a warning look like? should it be blocking or non-blocking, or perhaps even something else?). As a major contribution I finally present a model that combines phishing detection and user intervention, and I propose development and evaluation recommendations for similar systems. The research results show that a security detector whose results are relevant for end users can only be successful if the final user feedback has already been taken into account during the development process.
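The thesis frames protection as the sum of detection and user intervention: a detector scores a page or URL, and the result is then mediated to the user, for example as a blocking or non-blocking warning. The concrete detectors and warning designs are not reproduced here, so the following is a minimal, hypothetical Python sketch pairing a simple URL heuristic with an intervention stub; the features, weights, and threshold are illustrative assumptions, not the prototypes evaluated in the thesis.

# Hypothetical sketch of "detection + user intervention": a toy URL heuristic
# feeds a warning decision. Features, weights, and the threshold are assumptions.
from urllib.parse import urlparse

def phishing_score(url):
    """Return a 0..1 suspicion score from a few simple URL features."""
    host = urlparse(url).hostname or ""
    score = 0.0
    if host.count(".") >= 4:
        score += 0.3                 # deeply nested subdomains
    if "@" in url:
        score += 0.2                 # userinfo trick hiding the real host
    if "-" in host:
        score += 0.2                 # lookalike domains often use hyphens
    if not url.startswith("https://"):
        score += 0.3                 # no transport security
    return min(score, 1.0)

def intervene(url, threshold=0.5):
    """User intervention: mediate the detection result instead of deciding silently."""
    score = phishing_score(url)
    if score >= threshold:
        return f"blocking warning for {url} (score {score:.2f})"
    return f"allow {url} (score {score:.2f})"

print(intervene("http://login.example.com.account-verify.example-payments.biz/"))

A blocking warning is only one intervention option; the point of coupling the detector and the intervention step is that the warning design can be developed and evaluated together with the detector rather than bolted on afterwards.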

    User Experience Design for Cybersecurity & Privacy: addressing user misperceptions of system security and privacy

    The increasing magnitude and sophistication of malicious cyber activities by various threat actors pose major risks to our increasingly digitized and inter-connected societies. However, threats can also come from non-malicious users who are assigned overly complex security or privacy-related tasks, who are not motivated to comply with security policies, or who lack the capability to make good security decisions. This thesis posits that UX design methods and practices are necessary to complement security and privacy engineering practices in order to (1) identify and address user misperceptions of system security and privacy; and (2) inform the design of secure systems that are useful and appealing from end-users' perspective. The first research objective in this thesis is to provide new empirical accounts of UX aspects in three distinct contexts that encompass security and privacy considerations, namely: cyber threat intelligence, secure and private communication, and digital health technology. The second objective is to empirically contribute to the growing research domain of mental models in security and privacy by investigating user perceptions and misperceptions in the aforementioned contexts. Our third objective is to explore and propose methodological approaches to incorporating users' perceptions and misperceptions in the socio-technical security analyses of systems. Qualitative and quantitative user research methods with experts as well as end users of the applications and systems under investigation were used to achieve the first two objectives. To achieve the third objective, we also employed simulation and computational methods.
Cyber Threat Intelligence: CTI Sharing Platforms. Reporting on a number of user studies conducted over a period of two years, this thesis offers a unique contribution towards understanding the constraining and enabling factors of security information sharing within one of the leading CTI sharing platforms, called MISP. Further, we propose a conceptual workflow and toolchain that would seek to detect user (mis)perceptions of key tasks in the context of CTI sharing, such as verifying whether users have an accurate comprehension of how far information travels when shared in a CTI sharing platform, and we discuss the benefits of our socio-technical approach as a potential security analysis tool, simulation tool, or educational and training support tool.
Secure & Private Communication: Secure Email. We propose and describe multi-layered user journeys, a conceptual framework that serves to capture the interaction of a user with a system as she pursues certain goals, along with the associated user beliefs and perceptions about specific security or privacy-related aspects of that system. We instantiate the framework within a use case, a recently introduced secure email system called p≡p, and demonstrate how the approach can be used to detect misperceptions of security and privacy by comparing user opinions and behavior against system values and objective technical guarantees offered by the system. We further present two sets of user studies focusing on the usability and effectiveness of p≡p's security and privacy indicators and their traffic-light inspired metaphor to represent different privacy states and guarantees.
Digital Health Technology: Contact Tracing Apps. Considering human factors when exploring the adoption as well as the security and privacy aspects of COVID-19 contact tracing apps is a timely societal challenge, as the effectiveness and utility of these apps depend highly on their widespread adoption by the general population. We present the findings of eight focus groups, conducted with participants living in France and Germany, on the factors that impact people's decisions to adopt, or not to adopt, a contact tracing app. We report how our participants perceived the benefits, drawbacks, and threat model of the contact tracing apps in their respective countries, and discuss the similarities and differences between and within the study groups. Finally, we consolidate the findings from these studies and discuss future challenges and directions for UX design methods and practices in cybersecurity and digital privacy.
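One of the misperception checks mentioned above is whether users accurately grasp how far information travels when shared in a CTI platform such as MISP. The thesis's actual workflow and toolchain are not reproduced here; the following is a minimal, hypothetical Python sketch that simulates propagation of a shared event across a peer-synchronization graph and compares it with a user's stated expectation. The sharing graph, hop limit, and expectation set are illustrative assumptions, not MISP data.

# Hypothetical sketch: simulate how far a shared event spreads across a
# peer-synchronization graph and contrast it with what a user expected.
# The graph, hop limit, and expectation set are illustrative assumptions.
from collections import deque

# Assumed directed sharing graph: organization -> peers it synchronizes to.
SHARING_GRAPH = {
    "org_a": ["org_b", "org_c"],
    "org_b": ["org_d"],
    "org_c": ["org_d", "org_e"],
    "org_d": [],
    "org_e": ["org_f"],
    "org_f": [],
}

def reachable_orgs(origin, max_hops):
    """Breadth-first propagation of an event for at most max_hops synchronizations."""
    seen = {origin}
    frontier = deque([(origin, 0)])
    while frontier:
        org, hops = frontier.popleft()
        if hops == max_hops:
            continue
        for peer in SHARING_GRAPH.get(org, []):
            if peer not in seen:
                seen.add(peer)
                frontier.append((peer, hops + 1))
    return seen - {origin}

actual_spread = reachable_orgs("org_a", max_hops=3)
user_expectation = {"org_b", "org_c"}        # e.g. elicited in a user study
print("actual spread:", sorted(actual_spread))
print("underestimated by the user:", sorted(actual_spread - user_expectation))

Comparing the simulated spread with elicited expectations is one way such a toolchain could flag a misperception for follow-up in training or interface design.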

    Essentials of forensic accounting


    Annual Report of the University, 2007-2008, Volumes 1-6

    Project Summary and Goals. Historically, affirmative action policies have evolved from initial programs aimed at providing equal educational opportunities to all students, to the legitimacy of programs aimed at achieving diversity in higher education. In June 2003, a U.S. Supreme Court ruling on affirmative action pushed higher education across the threshold toward creating a new paradigm for diversity in the 21st century. The court clearly stated that affirmative action is still viable but that our institutions must reconsider our traditional concepts for building diversity in the next few decades. This shift in the historical context of diversity in our society has led to an important objective: if a diverse student body is an essential factor in a quality higher education, then it is imperative that elementary, secondary, and undergraduate schools fulfill their missions to successfully educate a diverse population. In New Mexico, the success of graduate programs depends on the state's P-12 schools, the community and institutions of higher education, and their shared task of educating all students. Further, when the lens is broadened to view the entire P-20 educational pipeline, it becomes apparent that the loss of students from elementary school to high school is enormous, constricting the number of students who go on to college. This is of concern not only for students' academic education but also for the ability of the affected communities to make critical decisions and to become and stay involved in the political and policy world that affects them.
Guiding Principles. Engaging Latino Communities for Education New Mexico (ENLACE NM) is a statewide collaboration of gente who represent the voices of underrepresented children and families, people who have historically not had a say in policy initiatives that directly impact them and their communities. Therefore, they, and others from our community, are at the forefront of this initiative. We have developed this collaboration based on a process that empowers these communities to find their voice in the pursuit of social justice and educational access, equity, and success.