25 research outputs found

    The Ecological Effects of Increasing Potting Density in the Lyme Bay Marine Protected Area

    In light of past and present ecosystem and fisheries management failures, which have led to the continued decline of global fish stocks and the ongoing degradation of many marine habitats, ecosystem-based approaches to management have been favoured. Marine Protected Areas (MPAs) have been championed as tools that allow the holistic management of marine resources. Faced with national and international marine conservation targets, the UK has adopted and implemented MPAs based on guidelines from various legislation. Many of these MPAs are multi-use, meaning the MPA restricts some anthropogenic activities, typically certain types of commercial fishing, while permitting other activities to continue on account of their compatibility with conservation and fisheries management. The Lyme Bay MPA, introduced in 2008 to protect sensitive benthic reef habitats, is one such multi-use MPA: it has restricted bottom-towed fishing while authorising alternative commercial fisheries to continue within its boundaries. Commercial potting inside the Lyme Bay MPA has increased in response to the removal of bottom-towed fishing and is now the dominant Lyme Bay fishery. The ecological effects of current, and increasing, levels of commercial potting effort were unknown, so increases in potting were cause for concern, particularly while allowed to continue unregulated within the MPA. This thesis therefore developed a three-year experimental potting study that manipulated potting densities to gather evidence on the ecological impacts of increasing potting density on both the ecosystem and fisheries, addressing existing knowledge gaps. This thesis contains original contributions to knowledge in each of its chapters.
Ecological research into the ecosystem impacts of commercial potting (Chapters Two and Three) was carried out using underwater video methods, showing that, contrary to previous understanding, commercial potting reduced the abundance of two key sensitive sessile reef species in Lyme Bay when potting density was high. Research into the fishery impacts of increasing potting density (Chapter Four) showed that under medium to high potting densities, selective fishing pressures alter the population densities and overall condition of key species targeted by commercial potting, brown crab (Cancer pagurus L.) in particular. The knock-on ecological consequences of these impacts are theorised (Chapter Six), and using this new evidence MPA regulators should now decide what level of commercial potting is compatible with the individual ecosystem and fisheries objectives of Lyme Bay and other MPAs. Overall the thesis has provided evidence on the potential ecological impacts of increasing commercial potting, as well as robust evidence for the ecological sustainability of low levels of commercial potting. It has highlighted that when applying an ecosystem-based approach to ecosystem and fisheries management, it is necessary to consider commercial potting, alongside other fisheries, when managing multi-use MPAs.

    DO LARGE, INFREQUENT DISTURBANCES RELEASE ESTUARINE WETLANDS FROM COASTAL SQUEEZING?

    As disturbance frequencies, intensities, and types have changed and continue to change in response to changing climate and land-use patterns, coastal communities undergo shifts in both species composition and dominant vegetation type. Over the past 100 years, fire suppression throughout the Northern Gulf of Mexico coast has resulted in shifts towards woody species dominance at the expense of marsh cover. Over the next 100 years, sea levels will rise and tropical storm activity is projected to increase; resultant changes in salinity could reduce cover of salt-intolerant fresh marsh species. Together, the effects of fire suppression upslope and rising salinities downslope could squeeze fresh marsh species, reducing cover and potentially threatening persistence. To mitigate the effects of fire suppression, the use of prescribed fire as a management tool to mimic historic conditions is becoming increasingly widespread and will likely gain further popularity during the 21st century. The ecological shifts that will result from these changing disturbance regimes are unknown. It was hypothesized that two recent hurricanes (Ivan in 2004 and Katrina in 2005) and a prescribed fire in 2010 differentially affected species along the estuarine gradient and drove overall shifts away from woody dominance. Overall community composition did not change significantly in the intermediate and fresh marsh zones. However, significant changes occurred in the salt and brackish marshes and in the woody-dominated fresh marsh-scrub ecotone zones. Relative to 2004, woody species abundance decreased significantly in all zones in 2006, following Hurricanes Ivan and Katrina, and in 2012, following the hurricanes and fire, though woody species regeneration in the marsh-scrub ecotone had begun by 2012. It is hypothesized that interacting changes in fire and tropical storm regimes could release upslope areas from coastal squeezing.

    OS-level Attacks and Defenses: from Software to Hardware-based Exploits

    Run-time attacks have plagued computer systems for more than three decades, with control-flow hijacking attacks such as return-oriented programming representing the long-standing state of the art in memory-corruption-based exploits. These attacks exploit memory-corruption vulnerabilities in widely deployed software, e.g., through malicious inputs, to gain full control over the platform remotely at run time, and many defenses have been proposed and thoroughly studied in the past. Among those defenses, control-flow integrity emerged as a powerful and effective protection against code-reuse attacks in practice. As a result, we now see attackers shifting their focus towards novel techniques through a number of increasingly sophisticated attacks that combine software and hardware vulnerabilities to construct successful exploits. These emerging attacks have a high impact on computer security, since they completely bypass existing defenses that assume either hardware or software adversaries. For instance, they leverage physical effects to provoke hardware faults or force the system into transient micro-architectural states. This enables adversaries to exploit hardware vulnerabilities from software without requiring physical presence or software bugs. In this dissertation, we explore the real-world threat of hardware- and software-based run-time attacks against operating systems. While memory-corruption-based exploits have been studied for more than three decades, we show that data-only attacks can completely bypass state-of-the-art defenses such as Control-Flow Integrity, which are also deployed in practice. Additionally, hardware vulnerabilities such as Rowhammer, CLKScrew, and Meltdown enable sophisticated adversaries to exploit the system remotely at run time without requiring any memory-corruption vulnerabilities in the system's software.
We develop novel design strategies to defend the OS against hardware-based attacks such as Rowhammer and Meltdown, tackling the limitations of existing defenses. First, we present two novel data-only attacks that completely break current code-reuse defenses deployed in real-world software, and propose a randomization-based defense against such data-only attacks in the kernel. Second, we introduce a compiler-based framework to automatically uncover memory-corruption vulnerabilities in real-world kernel code. Third, we demonstrate the threat of Rowhammer-based attacks in security-sensitive applications and show how a partitioning policy in the system's physical memory allocator can effectively and efficiently defend against such attacks. We demonstrate feasibility and real-world performance through our prototype for the popular and widely used Linux kernel. Finally, we develop a side-channel defense to eliminate Meltdown-style cache attacks by strictly isolating the address spaces of kernel and user memory.
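The Rowhammer defense described here hinges on a partitioning policy in the physical memory allocator: frames of different security domains are kept in disjoint DRAM row ranges separated by unallocated guard rows, so disturbance errors cannot cross a domain boundary. A minimal toy model of the idea follows; the row geometry, domain names, and one-row guard band are illustrative assumptions, not the dissertation's actual design.

```python
GUARD = 1  # unallocated guard rows between neighbouring domains (illustrative)

def build_partitions(total_rows, domains, guard=GUARD):
    """Split [0, total_rows) into equal per-domain row ranges with
    `guard` unallocated rows between neighbouring domains."""
    usable = total_rows - guard * (len(domains) - 1)
    per = usable // len(domains)
    parts, row = {}, 0
    for d in domains:
        parts[d] = range(row, row + per)
        row += per + guard
    return parts

def adjacent_cross_domain(parts):
    """True if any two rows owned by different domains are physically adjacent."""
    owner = {r: d for d, rows in parts.items() for r in rows}
    return any(owner.get(r + 1) not in (None, d) for r, d in owner.items())

parts = build_partitions(16, ["kernel", "user"])
# kernel gets rows 0-6, user rows 8-14; row 7 is an unallocated guard row
assert not adjacent_cross_domain(parts)
```

A real allocator would additionally handle bank/rank geometry and dynamic allocation; the invariant checked by `adjacent_cross_domain` is the security property the partitioning policy is meant to preserve.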

    Cutting force component-based rock differentiation utilising machine learning

    This dissertation evaluates the possibilities and limitations of rock type identification in rock cutting with conical picks. For this, machine learning is used in conjunction with features derived from high-frequency cutting force measurements. On the basis of linear cutting experiments, it is shown that boundary layers can be identified to within 3.7 cm when using the developed programme routine. It is further shown that rocks weakened by cracks can be identified well, and that anisotropic rock behaviour may be problematic for classification success. In a case study, it is shown that the supervised algorithms, artificial neural networks and distributed random forests, perform relatively well, while unsupervised k-means clustering provides limited accuracy in complex situations. The 3D results are visualised in a web app. The results suggest that a possible rock classification system can achieve good results that are robust to changes in the cutting parameters when using the proposed evaluation methods.
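The unsupervised branch of this pipeline can be pictured with a toy k-means pass over per-position cutting force features, so that positions cut in different rock zones fall into different clusters. The feature values, units, and choice of two clusters below are invented for illustration and are not measurements from the dissertation.

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means on small feature vectors (e.g. mean cutting force
    and a normal/cutting force component ratio per position in a cut)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[nearest].append(p)
        # recompute each centroid; keep the old one if a cluster emptied
        centers = [
            tuple(sum(x) / len(cl) for x in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers, clusters

# Toy features: (mean cutting force [kN], normal/cutting force ratio)
soft_rock = [(2.1, 0.80), (2.3, 0.90), (1.9, 0.85)]
hard_rock = [(6.0, 1.40), (6.4, 1.50), (5.8, 1.45)]
centers, clusters = kmeans(soft_rock + hard_rock, k=2)
# the two toy rock types separate into two clusters of three positions each
assert sorted(len(c) for c in clusters) == [3, 3]
```

As the abstract notes, such unsupervised clustering struggles in complex situations; the supervised ANN and distributed random forest models consume the same kind of features but learn class boundaries from labelled cuts instead.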

    High Availability and Scalability of Mainframe Environments using System z and z/OS as example

    Mainframe computers are the backbone of industrial and commercial computing, hosting the most relevant and critical data of businesses. One of the most important mainframe environments is IBM System z with the operating system z/OS. This book introduces the mainframe technology of System z and z/OS with respect to high availability and scalability. It highlights their presence at different levels within the hardware and software stack to satisfy the needs of large IT organizations.

    MICROBIAL ECOLOGY AND ENDOLITH COLONIZATION: SUCCESSION AT A GEOTHERMAL SPRING IN THE HIGH ARCTIC

    A critical question in microbial ecology concerns how environmental conditions affect community makeup. Arctic thermal springs enable study of this question due to steep environmental gradients that impose strong selective pressures. I use microscopic and molecular methods to quantify community makeup at Troll Springs on Svalbard in the high arctic. Troll has two ecosystems, aquatic and terrestrial, in proximity, shaped by different environmental factors. Microorganisms exist in warm water as periphyton, in moist granular materials, and in cold, dry rock as endoliths. Environmental conditions modulate community composition. The strongest relationships of environmental parameters to composition are pH and temperature in aquatic samples, and water content in terrestrial samples. Periphyton becomes trapped by calcite precipitation, and is a precursor for endolithic communities. Microbial succession takes place at Troll in response to incremental environmental disturbances. Photosynthetic organisms are dominantly eukaryotic algae in the wet, high-illumination environments, and Cyanobacteria in the drier, lower-illumination endolithic environments. Periphyton communities vary strongly from pool to pool, with a few dominant taxa. Endolithic communities are more even, with bacterial taxa and cyanobacterial diversity similar to alpine and other Arctic endoliths. Richness and evenness increase with successional age, except in the most mature endolith where they diminish because of sharply reduced resource and niche availability. Evenness is limited in calcite-poor environments by competition with photosynthetic eukaryotes, and in the driest endolith by competition for water. Richness is influenced by availability of physical niches, increasing as calcite grain surfaces become available for colonization, and then decreasing as pore volume decreases. In most endoliths, rock predates microbial colonization; the reverse is true at Troll. 
The harsh Arctic environment likely favours a lifestyle in which microbes survive best embedded within rock or mineral substrates, preserving live inocula for regrowth. ARISA is commonly used to assess variations in microbial community structure. Applying a uniform threshold across a sample set, as is normally done, treats samples non-optimally and unequally. I present an algorithm for optimal threshold selection that maximizes similarity between replicate pairs, improving results.
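The threshold-selection idea in the last sentence can be sketched directly: scan candidate abundance thresholds, retain only fragments above each threshold, and pick the threshold that maximizes similarity between replicate profiles. The fragment profiles, candidate grid, and use of Jaccard similarity below are illustrative assumptions, not the thesis's actual procedure.

```python
def jaccard(a, b):
    """Similarity between two sets of retained ARISA fragment lengths."""
    return len(a & b) / len(a | b) if a | b else 1.0

def best_threshold(replicate_pairs, candidates):
    """Pick the relative-abundance threshold that maximizes mean
    similarity between replicate profiles (toy version of the idea)."""
    def retained(profile, t):
        return {frag for frag, rel in profile.items() if rel >= t}
    def score(t):
        return sum(jaccard(retained(p, t), retained(q, t))
                   for p, q in replicate_pairs) / len(replicate_pairs)
    return max(candidates, key=score)

# Toy profiles: fragment length -> relative abundance. The tiny peaks
# (133, 157) are noise that differs between replicates; real peaks agree.
rep1 = {100: 0.40, 120: 0.35, 140: 0.20, 133: 0.003}
rep2 = {100: 0.42, 120: 0.33, 140: 0.22, 157: 0.004}
t = best_threshold([(rep1, rep2)], candidates=[0.0, 0.001, 0.005, 0.01])
assert t == 0.005  # the lowest threshold that drops the noise peaks
```

A production version would also guard against trivially high thresholds (which make all profiles identically empty), e.g. by preferring the lowest threshold attaining the maximum, as `max` does here over an ordered candidate list.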

    Development and validation of spatial distribution models of marine habitats, in support of the ecological valuation of the seabed

    The marine environment is subjected to increasing anthropogenic pressure. Although there is a willingness among the sectors behind these activities to minimize their impacts, there is a strong need for assessment of the ecological value of the seabed, comprising both the abiotic substrate and the living organisms associated with it (together called a ‘habitat’). ‘Habitat mapping’ is therefore crucial, not only for assessing ecological value at a given moment, but also for following its evolution over time. Because marine habitat mapping is applied world-wide, there is currently great variety in approaches and methodologies, as well as in the ways habitats are classified. It is therefore of utmost importance that attempts are made to propose more common approaches to marine habitat mapping. The general aim of this study is to apply and develop straightforward and statistically sound methodologies for highly reliable sedimentological and habitat modelling, in support of a more sustainable management of our seas. To achieve these aims, this thesis is subdivided into two themes: 1) best coverage data for habitat mapping; and 2) integration of datasets in view of habitat mapping.

    Reef fish associations with benthic habitats at a remote protected coral reef ecosystem in the Western Indian Ocean-Aldabra Atoll, Seychelles

    The aim of the thesis is to develop an understanding of the associations between reef fish and benthic habitats, and to assess the modifying effects of environmental processes on these relationships at Aldabra, a pristine atoll in the Western Indian Ocean (WIO). Conducting research in a pristine, or reference, coral reef ecosystem removes the impact of direct anthropogenic disturbances and provides essential information on natural ecosystem structure and functioning. Three primary hypotheses were tested: 1) environmental drivers such as depth and exposure to wave energy determine the spatial distribution of benthic habitats; 2) the reef fish assemblage structure is explained by habitat at multiple scales and modified by the effects of environmental drivers such as depth and wave energy and cyclical temporal drivers such as time and tides; 3) the reef fish assemblage at Aldabra represents a pristine reef fish assemblage, comprising high levels of herbivores and predators. The research focussed on the benthic habitat on the seaward reefs between the shoreline and 50 m depth. The first objective was to characterise the benthic habitats on Aldabra Atoll's seaward reefs and map their spatial distributions using remotely sensed imagery and ground-truthing data. The second was to assess the influence of depth and exposure to wave energy on the distribution of benthic habitats. The third was to identify the most suitable standardised method to survey the reef fish assemblage structure on Aldabra's reefs, and the fourth to determine the effect of tide and time of day on the reef fish assemblage. The fifth objective was to establish the association between reef fish assemblage structure and benthic habitats, and to test how species size influenced the scale of habitat at which the associations were most apparent.
Four categories of geomorphic reef zone (reef flat (19.2 km2), top of the forereef slope (7.8 km2), deep forereef slope (11.6 km2), and reef platform (14.3 km2)) were manually delineated following the visual outlines of reef features in satellite imagery. Six broad-scale and twelve fine-scale benthic habitats were mapped using a supervised maximum likelihood classification, and the spatial coverage of each was determined. The broad-scale habitats were 1) epilithic algal matrix, 2) hard and soft coral, 3) rubble, 4) macroalgae, 5) seagrass and 6) sand. Similarly, twelve fine-scale benthic habitats were characterised and mapped, for example hard coral (19 %), including massive and submassive forms with Millepora and Rhytisma. The broad-scale benthic habitat map had an overall producer accuracy of 54 % and the fine-scale habitat map 29 %, which is consistent with studies using similar habitat classification methods. The prevailing wave energy, depth and the directional orientation of the reef (aspect) significantly influenced the probability of occurrence of each of the broad-scale benthic habitats, and the peak probability of occurrence of all habitat categories shifted to greater depth with increasing wave energy. The strong relationship of benthic habitats with depth and wave energy suggests that their distributions are likely to change with sea-level rise and the increased intensity and frequency of storms in future. Overall, 338 fish species from 51 families, including 14 species of elasmobranch, were recorded using Baited Remote Underwater Video systems (BRUVs) and unbaited Remote Underwater Video systems (RUVs) from 231 samples. Fish were significantly more abundant when observed using BRUVs (119 ± 7) relative to RUVs (92 ± 7), and the assemblage structures differed significantly between the two sampling methods.
Abundance and species richness of generalist carnivores and piscivores were significantly greater in BRUVs, while RUVs recorded significantly greater numbers of herbivores and more species of herbivore and corallivore. The results suggest that BRUVs are better suited to studying predatory fish, which may not be detected without bait. However, when surveying a taxonomically and functionally diverse assemblage of fishes at a pristine reef, RUVs may provide a more accurate estimate of natural reef fish assemblage structure. Reef fish assemblages observed using RUVs differed significantly between morning high tide, midday low tide and evening high tide for all trophic groups. However, the reef fish assemblage structure observed using BRUVs was insensitive to changes in tide and time of day, which may be explained by the attraction effect of bait dampening these effects. While RUVs appear better able to detect subtle variations in fish assemblage structure, care needs to be taken when designing research programmes that use RUVs: the sampling design should account for tide and time of day to avoid misinterpreting cyclical variation, which may confound results. Reef fish assemblages differed significantly among habitats within geomorphic reef zones, broad-scale habitats and fine-scale habitats. Species turnover rates differed significantly for all Actinopterygii size-class categories between the three scales of habitat. No marked differences in species turnover rates among habitats were detected for the majority of elasmobranch size-class categories. The strong habitat dependency over various spatial scales indicates that effective conservation of Actinopterygii fish at Aldabra, and elsewhere in similar ecosystems, requires protection of representative sets of benthic habitats.
However, elasmobranch conservation requires sufficiently large areas, as these species utilise multiple habitats over multiple scales, which are likely to exceed the confines of Aldabra's reef.
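The supervised maximum likelihood classification used for the habitat maps can be illustrated with a minimal single-band Gaussian version: fit a mean and variance per habitat class from training pixels, then label each pixel with the class of highest likelihood. The class names and reflectance values below are invented toy numbers; real classifiers operate on multiple spectral bands with full covariance matrices.

```python
import math

def fit(train):
    """Per-class Gaussian parameters (mean, variance) from training pixels."""
    params = {}
    for cls, vals in train.items():
        m = sum(vals) / len(vals)
        v = sum((x - m) ** 2 for x in vals) / len(vals)
        params[cls] = (m, v)
    return params

def classify(x, params):
    """Assign pixel value x to the class with maximum Gaussian likelihood."""
    def loglik(m, v):
        return -0.5 * (math.log(2 * math.pi * v) + (x - m) ** 2 / v)
    return max(params, key=lambda c: loglik(*params[c]))

# Toy single-band reflectance samples for two hypothetical habitat classes
train = {"sand": [0.60, 0.65, 0.62], "coral": [0.20, 0.25, 0.22]}
p = fit(train)
assert classify(0.58, p) == "sand"
assert classify(0.24, p) == "coral"
```

The producer accuracies quoted in the abstract (54 % broad-scale, 29 % fine-scale) reflect how often such a classifier's per-class spectral distributions overlap; finer class schemes overlap more, which is why fine-scale accuracy drops.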

    Dark, Beyond Deep: A Paradigm Shift to Cognitive AI with Humanlike Common Sense

    Recent progress in deep learning is essentially based on a "big data for small tasks" paradigm, under which massive amounts of data are used to train a classifier for a single narrow task. In this paper, we call for a shift that flips this paradigm upside down. Specifically, we propose a "small data for big tasks" paradigm, wherein a single artificial intelligence (AI) system is challenged to develop "common sense", enabling it to solve a wide range of tasks with little training data. We illustrate the potential power of this new paradigm by reviewing models of common sense that synthesize recent breakthroughs in both machine and human vision. We identify functionality, physics, intent, causality, and utility (FPICU) as the five core domains of cognitive AI with humanlike common sense. When taken as a unified concept, FPICU is concerned with the questions of "why" and "how", beyond the dominant "what" and "where" framework for understanding vision. These domains are invisible in terms of pixels but nevertheless drive the creation, maintenance, and development of visual scenes. We therefore coin them the "dark matter" of vision. Just as our universe cannot be understood by merely studying observable matter, we argue that vision cannot be understood without studying FPICU. We demonstrate the power of this perspective to develop cognitive AI systems with humanlike common sense by showing how to observe and apply FPICU with little training data to solve a wide range of challenging tasks, including tool use, planning, utility inference, and social learning. In summary, we argue that the next generation of AI must embrace "dark" humanlike common sense for solving novel tasks.
    Comment: For high quality figures, please refer to http://wellyzhang.github.io/attach/dark.pd

    Designing for adaptability in architecture

    The research is framed on the premise that designing buildings that can adapt, accommodating change more easily and more cost-effectively, provides an effective means to a desired end: a more sustainable built environment. In this context, adaptability can be viewed as a means to decrease the amount of new construction (reduce), (re)activate underused or vacant building stock (reuse) and enhance disassembly/deconstruction of components (reuse, recycle), prolonging the useful life of buildings (reduce, reuse, recycle). The aim of the research is to gain a holistic overview of the concept of adaptability in the construction industry and provide an improved framework to design for, deploy and implement adaptability. An over-arching research question was posited to guide the inquiry: how can architects understand, communicate, design for and test the concept of adaptability in the context of the design process? The research followed Dubois and Gadde's (2002) systematic combining as an over-arching approach that continuously moves between the empirical world and theoretical models, allowing the co-evolution of data collection and theory from the beginning as part of a non-linear process, with the objective of matching theory with reality. An initial framework was abducted from a preliminary collection of data, from which a set of mixed research methods was deployed to explore adaptability (interviews, building case studies, dependency structural matrices, practitioner surveys and a workshop). Emergent from the data is an expanded and revised theory on designing for adaptability consisting of concepts, models and propositions. The models illustrate many of the causal links between the physical design structure of the building (e.g. plan depth, storey height) and the soft contingencies of a messy design/construction/occupation process (e.g. procurement route, funding methods, stakeholder mindsets).
In an effort to enhance building adaptability, the abducted propositions suggest a shift in the way the industry values buildings and conducts aspects of the design process, and in how designers approach designing for adaptability.
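One of the methods listed, the dependency structural matrix (DSM), lends itself to a short sketch: encode which building components are affected by changes to which others, then compute how far a change propagates. The components and dependency links below are hypothetical, not taken from the thesis's case studies.

```python
def change_propagation(dsm, start):
    """Transitive set of components affected when `start` changes.
    `dsm[a]` lists the components that depend on (are affected by) a."""
    affected, stack = set(), [start]
    while stack:
        comp = stack.pop()
        for dep in dsm.get(comp, []):
            if dep not in affected:
                affected.add(dep)
                stack.append(dep)
    return affected

# Hypothetical dependencies: changing the structural grid forces changes
# to the facade and floor plates, which in turn affect services, etc.
dsm = {
    "structural grid": ["facade", "floor plate"],
    "floor plate": ["services zone"],
    "services zone": ["ceiling"],
    "facade": [],
    "ceiling": [],
}
assert change_propagation(dsm, "structural grid") == {
    "facade", "floor plate", "services zone", "ceiling"}
assert change_propagation(dsm, "facade") == set()
```

In this framing, designing for adaptability amounts to shaping the DSM so that likely changes (fit-out, services) have small affected sets, while long-lived elements (structure) sit upstream of as few dependencies as possible.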