    A pilot test of the effect of mild-hypoxia on unrealistically optimistic risk judgements

    Although hypoxia is generally believed to occur above altitudes of 10,000 ft, some have suggested that effects may occur at lower altitudes. This pilot study explored risk judgements under conditions of mild hypoxia (a simulated altitude of 8,000 ft). Some evidence of increased optimism was found at this level, suggesting the need for a larger-scale study with greater experimental power.

    Does science need computer science?

    No full text
    IBM Hursley Talks, Series 3. An afternoon of talks, to be held on Wednesday March 10 from 2:30pm in Bldg 35 Lecture Room A, arranged by the School of Chemistry in conjunction with IBM Hursley and the Combechem e-Science Project. The talks are aimed at science students (undergraduate and postgraduate) from across the faculty. This is the third series of talks we have organized, but the first time we have put them together in an afternoon. The talks are general in nature, and knowledge of computer science is certainly not necessary. After the talks there will be an opportunity for a discussion with the lecturers from IBM.

    Does Science Need Computer Science? Chair and moderator: Jeremy Frey, School of Chemistry.
    - 14:00 "Computer games for fun and profit" (*) - Andrew Reynolds
    - 14:45 "Anyone for tennis? The science behind WIBMledon" (*) - Matt Roberts
    - 15:30 Tea (Chemistry Foyer, Bldg 29, opposite Bldg 35)
    - 15:45 "Disk drive physics from grandmothers to gigabytes" (*) - Steve Legg
    - 16:35 "What could happen to your data?" (*) - Nick Jones
    - 17:20 Panel session, comprising the four IBM speakers and May Glover-Gunn (IBM)
    - 18:00 Reception

    UC-153 Station 17

    Station 17 is a classic-style first-person shooter in which you are sent to clear out a derelict space station in a facility on a foreign planet. Utilize your archetypal weapons and handy grapple shot to take down all the creatures the hive mind can throw at you while navigating the levels within the facility.

    Using Emerging Technologies to Bolster Long-Term Monitoring of Wetlands

    Freshwater wetlands support a disproportionately high diversity of species relative to other ecosystems, and they are particularly vulnerable to climate change. Across Grand Teton and Yellowstone National Parks, wetlands represent just 3% of the landscape, yet 70% of Wyoming bird species and all native amphibians in the region use wetlands for some stage of their life. The Greater Yellowstone Inventory and Monitoring Network has monitored amphibians in wetlands since 2006 and found that over 40% of the region’s isolated wetlands are dry in years with above-average temperatures and reduced precipitation. Adding novel technologies to these monitoring efforts will increase our understanding of species diversity in wetlands susceptible to drying. We outfitted three wetland sites in Grand Teton National Park with acoustic (i.e., audible and ultrasonic) monitoring technology and wildlife camera traps in summer 2016, collecting data over a four-week period to test the efficacy of automated technology for wetland monitoring. Based on preliminary results from the ultrasonic monitoring and wildlife cameras, we detected four times more species with these tools than with visual surveys of amphibians alone. Additionally, automated methods allowed us to detect species over a longer time window than is feasible with visual surveys. We will continue our work in 2017, using environmental DNA, acoustic monitoring, and wildlife camera traps to capture information about a broader diversity of taxa using wetlands and to expand and enrich current monitoring efforts.

    Functionality-preserving adversarial machine learning for robust classification in cybersecurity and intrusion detection domains: A survey

    Machine learning has become widely adopted as a strategy for dealing with a variety of cybersecurity issues, ranging from insider threat detection to intrusion and malware detection. However, by their very nature, machine learning systems can introduce vulnerabilities into a security defence, whereby a learnt model is unaware of so-called adversarial examples that may intentionally cause misclassification and therefore bypass a system. Adversarial machine learning has been a research topic for over a decade and is now an accepted but open problem. Much of the early research on adversarial examples addressed issues related to computer vision, yet as machine learning continues to be adopted in other domains, it is likewise important to assess the potential vulnerabilities that may occur. A key part of transferring to new domains is functionality preservation: any crafted attack must still execute its original intended functionality when inspected by a human and/or a machine. In this literature survey, our main objective is to address the domain of adversarial machine learning attacks and examine the robustness of machine learning models in the cybersecurity and intrusion detection domains. We identify the key trends observed in the current literature and explore how they relate to the research challenges that remain open for future work.

    Inclusion criteria were: articles related to functionality preservation in adversarial machine learning for cybersecurity or intrusion detection, with insight into robust classification. Generally, we excluded works that are not yet peer-reviewed; however, we included some significant papers that make a clear contribution to the domain. There is a risk of subjective bias in the selection of non-peer-reviewed articles; this was mitigated by co-author review. We selected the following databases with a sizeable computer science element to search and retrieve literature: IEEE Xplore, ACM Digital Library, ScienceDirect, Scopus, SpringerLink, and Google Scholar. The literature search was conducted up to January 2022. We have striven to ensure comprehensive coverage of the domain to the best of our knowledge, performing systematic searches of the literature, noting our search terms and results, and following up on all materials that appear relevant and fit within the topic domains of this review. This research was funded by the Partnership PhD scheme at the University of the West of England in collaboration with Techmodal Ltd.
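
    The adversarial-example idea at the heart of this survey can be made concrete with a short sketch. Below is a minimal, illustrative FGSM-style perturbation (the fast gradient sign method) against a toy logistic-regression classifier; the weights, inputs, and epsilon are invented for illustration and none of this comes from the surveyed systems. Note that this naive feature-space perturbation is precisely what functionality preservation rules out in cybersecurity settings: a real attack must also keep the underlying binary or network flow valid and executable.

        # Hedged sketch: an FGSM-style adversarial perturbation of a toy
        # logistic-regression classifier (NumPy only). All weights, inputs
        # and epsilon values are invented for illustration.
        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def fgsm_perturb(x, w, b, y_true, epsilon=0.1):
            # For binary cross-entropy, the input gradient is
            # dL/dx = (p - y) * w, where p = sigmoid(w . x + b).
            p = sigmoid(np.dot(w, x) + b)
            grad_x = (p - y_true) * w
            # Step in the sign of the gradient to increase the loss.
            return x + epsilon * np.sign(grad_x)

        w = np.array([1.5, -2.0, 0.5])   # toy model weights
        b = 0.1
        x = np.array([0.2, 0.4, -0.3])   # benign sample, true class 0
        x_adv = fgsm_perturb(x, w, b, y_true=0.0)
        print("score before:", sigmoid(np.dot(w, x) + b))
        print("score after: ", sigmoid(np.dot(w, x_adv) + b))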

    State of the UK climate 2018

    This report provides a summary of the UK weather and climate through the calendar year 2018, alongside the historical context for a number of essential climate variables. It is the fifth in a series of annual “State of the UK climate” publications and an update to the 2017 report (Kendon et al., 2018), providing an accessible, authoritative and up-to-date assessment of UK climate trends, variations and extremes based on the most up-to-date observational datasets of climate quality. The majority of this report is based on observations of temperature, precipitation, sunshine and wind speed from the UK land weather station network managed by the Met Office and a number of key partners and co-operating volunteers. The observations are carefully managed so that they conform to current best-practice observational standards as defined by the World Meteorological Organization (WMO), and they pass through a range of quality-assurance procedures at the Met Office before being used for climate monitoring. In addition, time series of near-coast sea-surface temperature (SST) and sea-level rise are presented.

    The process for generating national and regional statistics from these observations has been updated since Kendon et al. (2018). This report makes use of a new dataset, HadUK-Grid, which provides improved quality and traceability for these national statistics, along with temperature and rainfall series that extend back into the 19th century. Differences from previous data are described in the relevant sections and appendices.

    The report presents summary statistics for the year 2018 and the most recent decade (2009–2018) against 1961–1990 and 1981–2010 averages. The period 2009–2018 is a non-standard reference period, but it provides a 10-year “snapshot” of the most recent experience of the UK's climate and how that compares to historical records. This means that differences between 2009–2018 and the baseline reference averages may reflect shorter-term decadal variations as well as long-term trends; these data show what has happened in recent years, not necessarily what is expected to happen in a changing climate. The majority of maps in this report show the year 2018 against the 1981–2010 baseline reference averaging period; that is, they are anomaly maps showing the spatial variation in this difference from average. Maps of actual values are in most cases not displayed because they are dominated by the underlying climatology, which for this report is of lesser interest than the year-to-year variability. Throughout the report's text the terms “above normal” and “above average”, etc., refer to the 1981–2010 baseline reference averaging period unless otherwise stated. Values quoted in tables throughout this report are rounded, but where the difference between two such values is quoted in the text (for example, comparing the most recent decade with 1981–2010), the difference is calculated from the original unrounded values.
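
    Two of the report's numeric conventions, anomalies against a baseline average and differences computed from unrounded values, are simple to state in code. The sketch below uses made-up annual mean temperatures, not HadUK-Grid data, purely to illustrate the arithmetic.

        # Hedged sketch of the report's two numeric conventions, using
        # invented values rather than HadUK-Grid data.
        import numpy as np

        rng = np.random.default_rng(42)
        # Fake 1981-2010 annual mean temperatures (deg C) for one region.
        baseline_temps = 9.0 + 0.3 * rng.standard_normal(30)
        baseline_mean = baseline_temps.mean()

        value_2018 = 9.5                      # fake 2018 annual mean
        anomaly = value_2018 - baseline_mean  # the mapped "difference from average"
        print(f"2018 anomaly vs 1981-2010: {anomaly:+.2f} degC")

        # Why differences are taken from unrounded values: rounding each
        # term first can change the quoted difference.
        a, b = 9.247, 9.152
        print(round(a, 1) - round(b, 1))  # 0.0: both terms round to 9.2
        print(round(a - b, 1))            # 0.1: difference first, then round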

    Imagining Fantastica: the direction and puppet design of The Neverending Story

    In The Neverending Story, a novel by Michael Ende adapted for the stage by David S. Craig, the child protagonist reads a fantastical world into existence. When directing and designing the puppets for a theatre production of The Neverending Story at the University of Lethbridge in February of 2013, I sought to populate that world using mundane objects and character situations from the real world, repurposed into object puppets and animated by actor-puppeteers. In this paper, I assess the conception, design and performance of the puppets in my production of The Neverending Story.

    Automated registration of multimodal optic disc images: clinical assessment of alignment accuracy

    Purpose: To determine the accuracy of automated alignment algorithms for the registration of optic disc images obtained by two different modalities: fundus photography and scanning laser tomography. Materials and Methods: Images obtained with the Heidelberg Retina Tomograph II and paired photographic optic disc images of 135 eyes were analyzed. Three state-of-the-art automated registration techniques were used to align these image pairs: Regional Mutual Information (RMI), rigid Feature Neighbourhood Mutual Information (FNMI), and nonrigid FNMI (NRFNMI). Custom software generated an image mosaic in which the modalities were interleaved as a series of alternating 5×5-pixel blocks, and alignment of each composite picture was assessed independently by three clinically experienced observers on a grading scale ranging from “Fail” (no alignment, with no vessel contact) through “Weak” (vessels have slight contact) and “Good” (vessels with 50% contact) to “Excellent” (complete alignment). Results: A total of 810 image pairs were assessed. All three registration techniques achieved a score of “Good” or better in >95% of the image sets. NRFNMI had the highest percentage of “Excellent” gradings (mean 99.6%; range 95.2% to 99.6%), followed by RMI (mean 81.6%; range 78.5% to 86.3%) and FNMI (mean 73.1%; range 54.4% to 85.2%). Conclusions: Automated registration of optic disc images from different modalities is a feasible option for clinical application. All three methods provided useful levels of alignment, but the NRFNMI technique consistently outperformed the others and is recommended as a practical approach to the automated registration of multimodal disc images.
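
    All three techniques compared above build on mutual information as the similarity measure between modalities. The sketch below is not the authors' implementation and shows only plain MI, not the regional or feature-neighbourhood variants; it computes MI from a joint intensity histogram, the quantity a registration loop would maximize over candidate transforms. The toy images and parameters are assumptions for illustration.

        # Illustrative mutual-information score for two image patches,
        # computed from a joint intensity histogram (NumPy only).
        import numpy as np

        def mutual_information(img_a, img_b, bins=32):
            # MI(A, B) = sum_ab p(a,b) * log( p(a,b) / (p(a) * p(b)) )
            joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
            pab = joint / joint.sum()
            pa = pab.sum(axis=1, keepdims=True)  # marginal over B
            pb = pab.sum(axis=0, keepdims=True)  # marginal over A
            nz = pab > 0                         # avoid log(0)
            return float((pab[nz] * np.log(pab[nz] / (pa @ pb)[nz])).sum())

        # Two correlated random "modalities" stand in for a photograph and
        # a tomograph patch; a registration loop would keep the transform
        # that maximizes this score.
        rng = np.random.default_rng(0)
        photo = rng.random((64, 64))
        tomo = 0.8 * photo + 0.2 * rng.random((64, 64))
        print(mutual_information(photo, tomo))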