The security implications of geoengineering: blame, imposed agreement and the security of critical infrastructure
The prospect of solar geoengineering in response to climate change (on the basis of its supposedly much lower cost and/or more rapid impact on global temperature than carbon-reduction strategies) raises a number of security concerns that have traditionally been understood within a standard geopolitical framing of security: either unrealistic direct application in inter-state warfare, or a securitization of climate change. However, the indirect security implications are potentially significant. Current capability, security threats and loopholes in international law suggest that the military, rather than scientists, would undertake geoengineering, and solar radiation management (SRM) in particular. SRM activity would be covered by Critical National Infrastructure policies, and as such would require a significant level of secondary security infrastructure. Concerns about termination effects, the need to impose international policy agreement (given the ability of 'rogue states' to disrupt SRM and the existing difficulty of producing global agreement on climate policy), and a world of extreme weather events in which weather is engineered, and hence blameworthy rather than natural, suggest these costs would be large. Evidence on how blame is attributed suggests that blame for extreme weather events may be directed towards more technologically advanced nations (such as the USA) even if they are not engaged in geoengineering. From a security perspective, SRM is costly, ungovernable, and raises security concerns of a sufficient magnitude to make it a non-viable policy option.
A qualitative approach to HCI research
Whilst science has a strong reliance on quantitative and experimental methods, there are many complex, socially based phenomena in HCI that cannot be easily quantified or experimentally manipulated or, for that matter, ethically researched with experiments. For example, the role of privacy in HCI is not obviously reduced to numbers and it would not be appropriate to limit a person's privacy in the name of research. In addition, technology is rapidly changing (just think of developments in mobile devices, tangible interfaces and so on), making it harder to abstract technology from the context of use if we are to study it effectively. Developments such as mediated social networking and the dispersal of technologies in ubiquitous computing also loosen the connection between technologies and work tasks that were the traditional cornerstone of HCI. Instead, complex interactions between technologies and ways of life are coming to the fore. Consequently, we frequently find that we do not know what the real HCI issues are before we start our research. This makes it hard, if not actually impossible, to define the variables necessary to do quantitative research (see Chapter 2).
Within HCI, there is also the recognition that a focus on tasks is not enough to design and implement an effective system. There is also a growing need to understand how usability issues are subjectively and collectively experienced and perceived by different user groups (Pace, 2004; Razavim and Iverson, 2006). This means identifying users' emotional and social drives and perspectives: their motivations, expectations, trust, identity, social norms and so on. It also means relating these concepts to work practices, communities and organisational social structures as well as organisational, economic and political drivers. Increasingly, these issues need to be understood, both in isolation and as part of the whole, in the design, development and implementation of systems.
HCI researchers are therefore turning to more qualitative methods in order to deliver the research results that HCI needs. With qualitative research, the emphasis is not on measuring and producing numbers but instead on understanding the qualities of a particular technology and how people use it in their lives, how they think about it and how they feel about it. There are many varied approaches to qualitative research within the social sciences, depending on what is being studied, how it can be studied and what the goals of the research are. Within HCI, though, grounded theory has been found to provide good insights that address well the issues raised above (Pace, 2004; Adams, Blandford and Lunt, 2005; Razavim and Iverson, 2006).
The purpose of this chapter is to give an overview of how grounded theory works as a method. Quantitative research methods adopt measuring instruments and experimental manipulations that can be repeated by any researcher (at least in principle), and every effort is made to reduce the influence of the researcher on the researched, which is regarded as a source of bias or error. In contrast, in qualitative research, where the goal is understanding rather than measuring and manipulating, the subjectivity of the researcher is an essential part of the production of an interpretation. The chapter therefore discusses how the influence of the researcher can be ameliorated through the grounded theory methodology whilst also acknowledging the subjective input of the researcher through reflexivity. The chapter also presents a case study of how grounded theory was used in practice to study people's use and understanding of computer passwords and related security.
Wattsup? Motivating reductions in domestic energy consumption using social networks
This paper reports on the design, deployment and evaluation of 'Wattsup', an innovative application which displays live autonomously logged data from the Wattson energy monitor, allowing users to compare domestic energy consumption on Facebook. Discussions and sketches from a workshop with Facebook users were used to develop a final design implemented using the Facebook API. Wattson energy monitors and the Wattsup app were deployed and trialled in eight homes over an eighteen-day period in two conditions. In the first condition participants could only access their personal energy data, whilst in the second they could access each other's data to make comparisons. A significant reduction in energy was observed in the socially enabled condition. Comments on discussion boards and semi-structured interviews with the participants indicated that the element of competition helped motivate energy savings. The paper argues that socially-mediated banter and competition made for a more enjoyable user experience.
Competitive carbon counting: can social networking sites make saving energy more enjoyable?
This paper reports on the design, deployment and initial evaluation of 'Wattsup', an innovative Facebook application which displays live data from a commercial off-the-shelf energy monitor. The Wattsup application was deployed and trialled in eight homes over an eighteen-day period in two conditions: personal energy data viewable, and friends' energy data viewable. A significant reduction in energy was observed in the socially enabled condition. The paper argues that socially-mediated discussion and competition made for a more enjoyable user experience.
A note on the evaluation of a beta-casein variant in bovine breeds by allele-specific PCR and relevance to β-casomorphin
Peer-reviewed. This work was supported by Enterprise Ireland and by a Teagasc Walsh fellowship to A.F. Keating. Two genetic variants of the bovine β-casein gene (A1 and B) encode a histidine residue at codon 67, resulting in potential liberation of a bioactive peptide, β-casomorphin, upon digestion. An allele-specific PCR (AS-PCR) was evaluated to distinguish between the β-casomorphin-releasing variants (A1 and B) and the non-releasing variants. AS-PCR successfully distinguished β-casein variants in 41 of 42 animals, as confirmed by sequence analysis. Overall, while the incidence of animals homozygous for the histidine residue at codon 67 (i.e., A1 and B homozygotes; 21.4%) was lower than that of animals without the histidine residue (30.9%), 69% of animals carried at least one allele for the histidine residue at codon 67.
Modelling the liquidity premium on corporate bonds
The liquidity premium on corporate bonds has been high on the agenda of Solvency regulators owing to its potential relationship to an additional discount factor on long-dated insurance liabilities. We analyse components of the credit spread as a function of standard bond characteristics during 2003–2014 on a daily basis by regression analyses, after introducing a new liquidity proxy. We derive daily distributions of illiquidity contributions to the credit spread at the individual bond level and find that liquidity premia were close to zero just before the financial crisis. We observe the time-varying nature of liquidity premia as well as a widening in the daily distribution in the years after the credit crunch. We find evidence to support higher liquidity premia, on average, on bonds of lower credit quality. The evolution of model parameters is economically intuitive and brings additional insight into investors' behaviour. The frequent and bond-level estimation of liquidity premia, combined with few data restrictions, makes the approach suitable for ALM modelling, especially when future work is directed towards arriving at forward-looking estimates at both the aggregate and bond-specific level.
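The general idea of the abstract, decomposing credit spreads into a liquidity contribution via a daily cross-sectional regression, can be sketched as follows. This is a minimal illustration on simulated data, not the paper's actual specification: the regressors, the liquidity proxy and all coefficients are invented assumptions.

```python
import numpy as np

# Hypothetical one-day cross-section of corporate bonds. Variable names
# and the liquidity proxy are illustrative assumptions only.
rng = np.random.default_rng(0)
n = 200  # bonds observed on one day

rating = rng.integers(1, 8, n)        # credit rating coded 1 (AAA) .. 7 (CCC)
maturity = rng.uniform(1, 30, n)      # years to maturity
liq_proxy = rng.uniform(0.0, 1.0, n)  # e.g. a scaled bid-ask-type measure

# Simulated spreads (bp): credit component + liquidity component + noise
spread = 20 * rating + 2 * maturity + 60 * liq_proxy + rng.normal(0, 5, n)

# OLS: spread_i = b0 + b1*rating_i + b2*maturity_i + b3*liq_proxy_i
X = np.column_stack([np.ones(n), rating, maturity, liq_proxy])
beta, *_ = np.linalg.lstsq(X, spread, rcond=None)

# The fitted liquidity contribution of each bond is its liquidity premium;
# repeating this regression each day yields a daily distribution of premia.
liq_premium = beta[3] * liq_proxy
print(f"estimated liquidity coefficient: {beta[3]:.1f} bp")
```

Running this per trading day, as the abstract describes, would produce the time series of bond-level illiquidity contributions whose distribution can then be tracked through the crisis period.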
Gold Standards Training and Evaluator Calibration of Pilot School Check Instructors
A key component of air carrier advanced qualification programs is the calibration and training of instructors and evaluators and assurance of reliable and valid data in support of such programs. A significant amount of research is available concerning the calibration of air carrier evaluators, but no research exists regarding the calibration of pilot school check instructors. This study was designed to determine if pilot school check instructors can be calibrated against a gold standard to perform reliable and accurate evaluations. Calibration followed the principles and theories of andragogy and adult learning and teaching, including emphasis on the cognitive domain of learning, learner-centered instruction, and human resource development. These, in combination with methods commonly used in aviation instruction, aimed to increase the effectiveness of the calibration. Discussion of these combinations is included. A specific method for delivery of the calibration was provided along with a complete lesson plan. This study used a one-group pretest-posttest design. A group of 10 pilot school check instructors was measured before and after receiving rater calibration training. Statistical measures included raw inter- and referent-rater agreement percentages, Cohen's kappa and kappa-like statistics for inter- and referent-rater reliability, Pearson product-moment correlations for sensitivity to true changes in pilot performance, and a standardized mean absolute difference for grading accuracy. Improvement in all the measurements from pretest to posttest was expected, but actual results were mixed. However, a holistic interpretation of the results combined with feedback from the check instructors showed promise in calibration training for pilot school check instructors. Thorough discussion of the limitations and lessons learned from the study, recommendations for pilot schools, and recommendations for future research are included.
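One of the measures named in the abstract, Cohen's kappa for referent-rater reliability, can be illustrated with a minimal sketch: a check instructor's grades scored against a gold-standard referent. The grades below are invented for illustration and are not data from the study.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters
    grading the same items on a nominal/ordinal scale."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of exact agreements
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's marginal counts
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical maneuver grades: check instructor vs gold-standard referent
instructor = [4, 3, 5, 4, 2, 4, 3, 5, 4, 3]
referent   = [4, 3, 4, 4, 2, 4, 3, 5, 3, 3]
print(f"referent-rater kappa = {cohens_kappa(instructor, referent):.2f}")
```

Comparing such a kappa value before and after calibration training is the kind of pretest-posttest contrast the study's design implies, with values near 1 indicating agreement well beyond chance.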
From premature semantics to mature interaction programming
As HCI has progressed as a discipline, or perhaps just as time has passed, the engineering work of programming has become increasingly separated from HCI, the core user-interface design work. At the same time, the sophistication of digital devices, across multiple dimensions, has grown exponentially. The result is that HCI and User Experience (UX) professionals and programmers now work in very different worlds. This separation causes problems for users: the UX is attractive but the program is unreliable, or the program is reliable but unattractive or unhelpful to use, correctly implementing the wrong thing. In this chapter, we dig down from this high-level view to get to what we identify as a new sort of fundamental problem, one we call premature semantics. Premature semantics must be recognised and understood by name by UX and HCI practitioners, and addressed by programmers.