TOWARDS AN UNDERSTANDING OF EFFORTFUL FUNDRAISING EXPERIENCES: USING INTERPRETATIVE PHENOMENOLOGICAL ANALYSIS IN FUNDRAISING RESEARCH
Physical-activity-oriented community fundraising has grown exponentially in popularity over the past 15 years. The aim of this study was to explore the value of effortful fundraising experiences from the point of view of participants, and to explore the impact that these experiences have on people’s lives. This study used an IPA approach to interview 23 individuals, recognising the role of participants as proxy (non-professional) fundraisers for charitable organisations, and the unique organisation-donor dynamic that this creates. It also brought together relevant psychological theory related to physical-activity fundraising experiences (through a narrative literature review) and used primary interview data to substantiate these theories. Effortful fundraising experiences are examined in detail to understand their significance to participants, and how such experiences influence their connection with a charity or cause. This was done with an idiographic focus at first, before examining convergences and divergences across the sample. This study found that effortful fundraising experiences can have a profound positive impact upon community fundraisers in both the short and the long term. Additionally, it found that these experiences can be opportunities for charitable organisations to create lasting, meaningful relationships with participants, and to foster mutually beneficial lifetime relationships with them. Further research is needed to test specific psychological theory in this context, including self-esteem theory, self-determination theory, and the martyrdom effect (among others).
Cost-effective non-destructive testing of biomedical components fabricated using additive manufacturing
Biocompatible titanium-alloys can be used to fabricate patient-specific medical components using additive manufacturing (AM). These novel components have the potential to improve clinical outcomes in various medical scenarios. However, AM introduces stability and repeatability concerns, which are potential roadblocks for its widespread use in the medical sector. Micro-CT imaging for non-destructive testing (NDT) is an effective solution for post-manufacturing quality control of these components. Unfortunately, current micro-CT NDT scanners require expensive infrastructure and hardware, which translates into prohibitively expensive routine NDT. Furthermore, the limited dynamic-range of these scanners can cause severe image artifacts that may compromise the diagnostic value of the non-destructive test. Finally, the cone-beam geometry of these scanners makes them susceptible to the adverse effects of scattered radiation, which is another source of artifacts in micro-CT imaging.
In this work, we describe the design, fabrication, and implementation of a dedicated, cost-effective micro-CT scanner for NDT of AM-fabricated biomedical components. Our scanner reduces the limitations of costly image-based NDT by optimizing the scanner’s geometry and the image acquisition hardware (i.e., X-ray source and detector). Additionally, we describe two novel techniques to reduce image artifacts caused by photon starvation and scatter radiation in cone-beam micro-CT imaging.
Our cost-effective scanner was designed to match the image requirements of medium-size titanium-alloy medical components. We optimized the image acquisition hardware by using an 80 kVp low-cost portable X-ray unit and developing a low-cost lens-coupled X-ray detector. Image artifacts caused by photon-starvation were reduced by implementing dual-exposure high-dynamic-range radiography. For scatter mitigation, we describe the design, manufacturing, and testing of a large-area, highly-focused, two-dimensional, anti-scatter grid.
Our results demonstrate that cost-effective NDT using low-cost equipment is feasible for medium-sized, titanium-alloy, AM-fabricated medical components. Our proposed high-dynamic-range strategy improved the penetration capabilities of an 80 kVp micro-CT imaging system by 37% for a total X-ray path length of 19.8 mm. Finally, our novel anti-scatter grid provided a 65% improvement in CT number accuracy and a 48% improvement in low-contrast visualization. Our proposed cost-effective scanner and artifact reduction strategies have the potential to improve patient care by accelerating the widespread use of patient-specific, biocompatible, AM-fabricated medical components.
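The dual-exposure high-dynamic-range strategy described above can be sketched in a few lines. This is a minimal illustration of the general idea, not the authors' implementation: pixels behind thick metal in a standard (short) exposure are photon-starved, so they are replaced with readings from a longer exposure rescaled by the exposure ratio. The function name, threshold value, and normalised-intensity convention are all assumptions made for the sketch.

```python
import numpy as np

def hdr_radiograph(short_exp, long_exp, exposure_ratio, starvation_thresh=0.02):
    """Merge a standard (short) and a long exposure into one radiograph.

    Pixels in the short exposure whose normalised intensity falls below
    `starvation_thresh` are treated as photon-starved and replaced by the
    long-exposure reading divided by `exposure_ratio` (long/short exposure
    time), which puts it on the short exposure's intensity scale.
    """
    short_exp = np.asarray(short_exp, dtype=float)
    long_exp = np.asarray(long_exp, dtype=float)
    rescaled = long_exp / exposure_ratio        # long exposure on the short-exposure scale
    starved = short_exp < starvation_thresh     # too few photons to be reliable
    return np.where(starved, rescaled, short_exp)
```

Well-exposed pixels pass through untouched; only the starved regions (e.g., behind the full 19.8 mm metal path) take the rescaled long-exposure values, extending the usable dynamic range without modifying the rest of the image.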
The Adirondack Chronology
The Adirondack Chronology is intended to be a useful resource for researchers and others interested in the Adirondacks and Adirondack history.
Foundations for programming and implementing effect handlers
First-class control operators provide programmers with an expressive and efficient
means for manipulating control through reification of the current control state as a first-class object, enabling programmers to implement their own computational effects and
control idioms as shareable libraries. Effect handlers provide a particularly structured
approach to programming with first-class control by naming control-reifying operations
and separating them from their handling.
This thesis is composed of three strands of work in which I develop operational
foundations for programming and implementing effect handlers as well as exploring
the expressive power of effect handlers.
The first strand develops a fine-grain call-by-value core calculus of a statically
typed programming language with a structural notion of effect types, as opposed to the
nominal notion of effect types that dominates the literature. With the structural approach,
effects need not be declared before use. The usual safety properties of statically typed
programming are retained by making crucial use of row polymorphism to build and
track effect signatures. The calculus features three forms of handlers: deep, shallow,
and parameterised. They each offer a different approach to manipulate the control state
of programs. Traditional deep handlers are defined by folds over computation trees,
and are the original construct proposed by Plotkin and Pretnar. Shallow handlers are
defined by case splits (rather than folds) over computation trees. Parameterised handlers
are deep handlers extended with a state value that is threaded through the folds over
computation trees. To demonstrate the usefulness of effects and handlers as a practical
programming abstraction I implement the essence of a small UNIX-style operating
system complete with multi-user environment, time-sharing, and file I/O.
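The distinction between deep and shallow handlers described above can be illustrated with a toy encoding of computation trees. This is a hypothetical sketch, not the thesis's calculus: `Op` nodes name an operation and carry a resumption, and the only difference between the two handlers is whether the resumption is re-wrapped in the handler (a fold) or returned raw (a case split).

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Return:
    value: Any

@dataclass
class Op:
    label: str
    arg: Any
    cont: Callable[[Any], Any]   # resumption: operation result -> rest of the tree

def handle_deep(tree, clauses, ret=lambda v: v):
    """Deep handler: a fold over the computation tree. The handler is
    re-applied inside the resumption, so every later occurrence of the
    same operation is handled too."""
    if isinstance(tree, Return):
        return ret(tree.value)
    resume = lambda x: handle_deep(tree.cont(x), clauses, ret)
    return clauses[tree.label](tree.arg, resume)

def handle_shallow(tree, clauses, ret=lambda v: v):
    """Shallow handler: a case split. The resumption returns the raw
    remainder of the tree, which the clause must handle explicitly."""
    if isinstance(tree, Return):
        return ret(tree.value)
    return clauses[tree.label](tree.arg, tree.cont)  # cont NOT re-wrapped
```

For example, given a computation that performs `ask` twice, `Op("ask", None, lambda x: Op("ask", None, lambda y: Return(x + y)))`, a deep handler whose `ask` clause resumes with 21 produces 42, while the shallow handler answers only the first `ask` and hands back the remaining unhandled tree. A parameterised handler (not shown) would additionally thread a state value through each resumption.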
The second strand studies continuation passing style (CPS) and abstract machine
semantics, which are foundational techniques that admit a unified basis for implementing deep, shallow, and parameterised effect handlers in the same environment. The
CPS translation is obtained through a series of refinements of a basic first-order CPS
translation for a fine-grain call-by-value language into an untyped language. Each refinement moves toward a more intensional representation of continuations, eventually
arriving at the notion of generalised continuations, which admit simultaneous support for
deep, shallow, and parameterised handlers. The initial refinement adds support for deep
handlers by representing stacks of continuations and handlers as a curried sequence of
arguments. The image of the resulting translation is not properly tail-recursive, meaning some function application terms do not appear in tail position. To rectify this, the
CPS translation is refined once more to obtain an uncurried representation of stacks
of continuations and handlers. Finally, the translation is made higher-order in order to
contract administrative redexes at translation time. The generalised continuation representation is used to construct an abstract machine that provides simultaneous support for
deep, shallow, and parameterised effect handlers.
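The refinement from closure-built continuations to an uncurried, stack-based representation can be illustrated with a deliberately simple example. This sketch is only an analogy for the general idea, not the thesis's typed CPS translation: the first version builds the continuation as nested closures, while the second "uncurries" it into an explicit stack consumed by an iterative driver, the shape in which no application is left in non-tail position.

```python
def fact_cps(n, k):
    """First-order CPS: the continuation k is built up as nested
    closures, one per recursive call (the 'curried' representation)."""
    if n == 0:
        return k(1)
    return fact_cps(n - 1, lambda r: k(n * r))

def fact_stack(n):
    """'Uncurried' refinement: the continuation is an explicit stack of
    frames consumed by an iterative driver loop."""
    stack = []
    while n > 0:
        stack.append(n)   # push a 'multiply by n' frame
        n -= 1
    acc = 1
    while stack:          # run the continuation stack to completion
        acc *= stack.pop()
    return acc
```

Both compute the same function; the stack version makes the continuation's structure first-class data, which is the step that lets an abstract machine manipulate stacks of continuations and handlers directly.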
The third strand explores the expressiveness of effect handlers. First, I show that
deep, shallow, and parameterised notions of handlers are interdefinable by way of typed
macro-expressiveness, which provides a syntactic notion of expressiveness that affirms
the existence of encodings between handlers, but it provides no information about the
computational content of the encodings. Second, using the semantic notion of expressiveness, I show that, for a class of programs, a programming language with first-class control (e.g. effect handlers) admits asymptotically faster implementations than are possible in a language without first-class control.
Investigating and mitigating the role of neutralisation techniques on information security policies violation in healthcare organisations
Healthcare organisations today rely heavily on Electronic Medical Records systems (EMRs), which have become highly crucial IT assets that require significant security efforts to safeguard patients’ information. Individuals who have legitimate access to an organisation’s assets to perform their day-to-day duties but intentionally or unintentionally violate information security policies can jeopardise their organisation’s information security efforts and cause significant legal and financial losses. In the information security (InfoSec) literature, several studies emphasised the necessity to understand why employees behave in ways that contradict information security requirements but have offered widely different solutions. In an effort to respond to this situation, this thesis addressed the gap in the information security academic research by providing a deep understanding of the problem of medical practitioners’ behavioural justifications to violate information security policies and then determining proper solutions to reduce this undesirable behaviour. Neutralisation theory was used as the theoretical basis for the research. This thesis adopted a mixed-method research approach that comprises four consecutive phases, and each phase represents a research study that was conducted in light of the results from the preceding phase. The first phase of the thesis started by investigating the relationship between medical practitioners’ neutralisation techniques and their intention to violate information security policies that protect a patient’s privacy. A quantitative study was conducted to extend the work of Siponen and Vance [1] through a study of the Saudi Arabia healthcare industry. The data was collected via an online questionnaire from 66 Medical Interns (MIs) working in four academic hospitals. 
The study found that six neutralisation techniques—(1) appeal to higher loyalties, (2) defence of necessity, (3) the metaphor of the ledger, (4) denial of responsibility, (5) denial of injury, and (6) condemnation of condemners—significantly contribute to the justifications of the MIs in hypothetically violating information security policies. The second phase of this research used a series of semi-structured interviews with IT security professionals in one of the largest academic hospitals in Saudi Arabia to explore the environmental factors that motivated the medical practitioners to evoke various neutralisation techniques. The results revealed that social, organisational, and emotional factors all stimulated the behavioural justifications to breach information security policies. During these interviews, it became clear that the IT department needed to ensure that security policies fit the daily tasks of the medical practitioners by providing alternative solutions to ensure the effectiveness of those policies. Based on these interviews, the objective of the following two phases was to improve the effectiveness of InfoSec policies against the use of behavioural justification by engaging the end users in the modification of existing policies via a collaborative writing process. Those two phases were conducted in the UK and Saudi Arabia to determine whether the collaborative writing process could produce a more effective security policy that balanced the security requirements with daily business needs, thus leading to a reduction in the use of neutralisation techniques to violate security policies. The overall result confirmed that the involvement of the end users via a collaborative writing process positively improved the effectiveness of the security policy to mitigate the individual behavioural justifications, showing that the process is a promising one to enhance security compliance.
After Creation: Intergovernmental Organizations and Member State Governments as Co-Participants in an Authority Relationship
This is a re-amalgamation of what started as one manuscript and became two when its length proved to be more than any publisher wanted to consider. The splitting consisted of removing what are now Parts 3, 4, and 5 so that the manuscript focused on the outcome-related shared beliefs holding an authority relationship together. Those parts were last worked on in 2018; the remaining parts were last worked on in late 2021 but also remain incomplete.
The relational approach adopted in this study treats intergovernmental organizations (IGOs) and the governments of their member states as co-participants in an authority relationship. Authority relationships link two types of actor, defined by their authority-holder or addressee role in the relationship, through a set of shared beliefs about why the relationship exists and how the participants should fulfill their respective roles. The IGO as authority holder has a role that includes a right to instruct other actors about what they should or should not do; the governments of member states as addressees are expected to comply with the instructions. Three sets of shared beliefs provide the conceptual “glue” holding the relationship together. The first defines the goal of the collective effort, providing both the rationale for having the authority relationship and a lodestar for assessments of the collective effort’s success or lack of success. The second defines the shared understanding about the allocation of roles and the process of interaction by establishing shared expectations about a) the selection process by which particular actors acquire authority-holder roles, b) the definitions identifying one or more categories of addressees expected to follow instructions, and c) the procedures through which the authority holder issues instructions. The third focuses on the outcomes of cooperation through the relationship by defining a) the substantive areas in which the authority holder may issue instructions, b) the bases for assessing the relevance of actions mandated in instructions to reaching the goal, and c) the relative efficacy of the action paths chosen for reaching the goal as compared to other possible action paths.
Using an authority relationship framework for analyzing cooperation through IGOs highlights the inherently bi-directional nature of IGO-member government activity by viewing their interaction as a three-step process: the IGO as authority holder decides when to issue what instruction; the member state governments as followers react to the instruction with anything from prompt and full compliance through various forms of pushback to outright rejection; and the IGO as authority holder responds to how the followers react with efforts to increase individual compliance with instructions and to reinforce continuing acceptance of the authority relationship. Foregrounding the dynamics produced by the interaction of these two streams of perception and action reveals more clearly how far intergovernmental organizations acquire the capacity to operate as independent actors, the dynamic ways they maintain that capacity, and how much they influence member governments’ beliefs and actions at different times. The approach fosters better understanding of why, when, and for how long governments choose cooperation through an IGO even in periods of rising unilateralism.
Insights into the effectuation entrepreneurial approach of small artisan entrepreneurs in Thailand
The aim of this thesis is to gain insights into the role of effectuation in influencing small artisans’ entrepreneurial decisions, actions and performance in Thailand. Specifically, this study examines the impact and role of effectuation on small artisan entrepreneurs’ performance, such as improving business performance, strengthening long-term partnership commitment and managing the Covid-19 crisis.
Reliable Decision-Making with Imprecise Models
The rapid growth in the deployment of autonomous systems across various sectors has generated considerable interest in how these systems can operate reliably in large, stochastic, and unstructured environments. Despite recent advances in artificial intelligence and machine learning, it is challenging to assure that autonomous systems will operate reliably in the open world. One of the causes of unreliable behavior is the imprecision of the model used for decision-making. Due to the practical challenges in data collection and precise model specification, autonomous systems often operate based on models that do not represent all the details in the environment. Even if the system has access to a comprehensive decision-making model that accounts for all the details in the environment and all possible scenarios the agent may encounter, it may be intractable to solve this complex model optimally. Consequently, this complex, high-fidelity model may be simplified to accelerate planning, introducing imprecision. Reasoning with such imprecise models affects the reliability of autonomous systems. A system’s actions may sometimes produce unexpected, undesirable consequences, which are often identified after deployment. How can we design autonomous systems that can operate reliably in the presence of uncertainty and model imprecision?
This dissertation presents solutions to address three classes of model imprecision in a Markov decision process, along with an analysis of the conditions under which bounded performance can be guaranteed. First, an adaptive outcome selection approach is introduced to devise risk-aware reduced models of the environment that efficiently balance the trade-off between model simplicity and fidelity, to accelerate planning in resource-constrained settings. Second, a framework that extends the stochastic shortest path formulation to problems with imperfect information about the goal state during planning is introduced, along with two solution approaches to solve this problem. Finally, two complementary solution approaches are presented to minimize the negative side effects of agent actions. The techniques presented in this dissertation enable an autonomous system to detect and mitigate undesirable behavior, without redesigning the model entirely.
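As background on the kind of decision-making model the dissertation builds on, a standard value-iteration solver for a finite Markov decision process can be written compactly. This is generic textbook material, not the dissertation's reduced-model or side-effect-mitigation algorithms; the array shapes, function name, and discount factor are assumptions.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-6):
    """Solve a finite MDP by value iteration.

    P: transition probabilities, shape (A, S, S), P[a, s, s'] = Pr(s' | s, a)
    R: immediate rewards, shape (A, S)
    Returns the optimal value function and a greedy policy.
    """
    V = np.zeros(P.shape[1])
    while True:
        Q = R + gamma * (P @ V)        # Q[a, s] = R[a, s] + gamma * E[V(s')]
        V_new = Q.max(axis=0)          # best achievable value per state
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new
```

A "reduced model" in the dissertation's sense would replace `P` with a simplified transition function (e.g., fewer outcomes per action) before running a solver like this, trading fidelity for planning speed.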
REDESIGNING THE COUNTER UNMANNED SYSTEMS ARCHITECTURE
Includes supplementary material. Please contact [email protected] for access.
When the Islamic State used Unmanned Aerial Vehicles (UAVs) to target coalition forces in 2014, the use of UAVs rapidly expanded, giving weak states and non-state actors an asymmetric advantage over their technologically superior foes. This asymmetry led the Department of Defense (DOD) and the Department of Homeland Security (DHS) to spend vast sums of money on counter-unmanned aircraft systems (C-UAS). Despite the market density, many C-UAS technologies use expensive, bulky, and high-power-consuming electronic attack methods for ground-to-air interdiction. This thesis outlines the current technology used for C-UAS and proposes a defense-in-depth framework using airborne C-UAS patrols outfitted with cyber-attack capabilities. Using aerial interdiction, this thesis develops a novel C-UAS device called the Detachable Drone Hijacker: a low-size, weight, and power C-UAS device designed to deliver cyber-attacks against commercial UAVs using the IEEE 802.11 wireless communication specification. The experimentation results show that the Detachable Drone Hijacker, which weighs 400 grams, consumes one Watt of power, and costs $250, can interdict adversarial UAVs with no unintended collateral damage. This thesis recommends that the DOD and DHS incorporate aerial interdiction to support C-UAS defense-in-depth, using technologies similar to the Detachable Drone Hijacker.
DASN-OE, Washington DC, 20310
Captain, United States Marine Corps
Approved for public release. Distribution is unlimited.
Estudo do IPFS como protocolo de distribuição de conteúdos em redes veiculares [A study of IPFS as a content distribution protocol in vehicular networks]
Over the last few years, vehicular ad-hoc networks (VANETs) have been the
focus of great progress due to the interest in autonomous vehicles and in
distributing content not only between vehicles, but also to the Cloud. Performing
a download/upload to/from a vehicle typically requires the existence
of a cellular connection, but the costs associated with mobile data transfers
in hundreds or thousands of vehicles quickly become prohibitive. A VANET
allows the costs to be several orders of magnitude lower, while keeping the
same large volumes of data, because it is strongly based on the communication
between vehicles (the nodes of the network) and the infrastructure.
The InterPlanetary File System (IPFS) is a protocol for storing and distributing
content, where information is addressed by its content, instead of
its location. It was created in 2014 and it seeks to connect all computing
devices with the same system of files, comparable to a BitTorrent swarm
exchanging Git objects. It has been tested and deployed in wired networks,
but never in an environment where nodes have intermittent connectivity,
such as a VANET. This work focuses on understanding IPFS, how/if it can
be applied to the vehicular network context, and comparing it with other
content distribution protocols.
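The content-addressing idea at the heart of IPFS, addressing data by its hash rather than by its location, can be shown with a toy store. This sketch is a deliberate simplification: real IPFS keys are multihash-based CIDs and blocks are exchanged over a peer-to-peer network, neither of which is modelled here, and the class and method names are invented for illustration.

```python
import hashlib

class ContentStore:
    """Toy content-addressed block store.

    Blocks are keyed by the SHA-256 hash of their bytes, so any node that
    holds the same bytes serves the same address; unlike location-based
    addressing, the key is independent of which peer or host stores it.
    """
    def __init__(self):
        self._blocks = {}

    def put(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()   # real IPFS uses a multihash-based CID
        self._blocks[key] = data
        return key

    def get(self, key: str) -> bytes:
        return self._blocks[key]
```

In a VANET this property matters: a vehicle can fetch a block from whichever neighbour happens to be in range, and can verify the data by re-hashing it, without caring where the block came from.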
In this dissertation, IPFS was first tested in a small, controlled network
to assess its applicability to VANETs. Issues such as long neighbor-discovery
times and poor hashing performance were addressed.
To compare IPFS with other protocols (such as Veniam’s proprietary solution
or BitTorrent) in a relevant way and in a large scale, an emulation platform
was created. The tests in this emulator were performed in different times of
the day, with a variable number of files and file sizes. Emulated results show
that IPFS is on par with Veniam’s custom V2V protocol built specifically for
V2V, and greatly outperforms BitTorrent regarding neighbor discoverability
and data transfers.
An analysis of IPFS’ performance in a real scenario was also conducted, using
a subset of STCP’s vehicular network in Oporto, with the support of
Veniam. Results from these tests show that IPFS can be used as a content
dissemination protocol, that it is up to the challenge posed by a constantly
changing network topology, and that it achieves throughputs of up to 2.8
MB/s, similar to or in some cases better than Veniam’s proprietary
solution.
Mestrado em Engenharia de Computadores e Telemátic