Explainable AI: The new 42?
Explainable AI is not a new field. Since at least the early exploitation of C. S. Peirce's abductive reasoning in the expert systems of the 1980s, there have been reasoning architectures to support an explanation function for complex AI systems, including applications in medical diagnosis, complex multi-component design, and reasoning about the real world. Explainability is therefore at least as old as early AI, and a natural consequence of the design of AI systems. While early expert systems consisted of handcrafted knowledge bases that enabled reasoning over narrowly well-defined domains (e.g., INTERNIST, MYCIN), such systems had no learning capabilities and only primitive uncertainty handling. The evolution of formal reasoning architectures to incorporate principled probabilistic reasoning helped address the capture and use of uncertain knowledge.
The recent and relatively rapid success of AI/machine learning solutions arises from neural network architectures. A new generation of neural methods now scales to exploit the practical applicability of statistical and algebraic learning approaches in arbitrarily high-dimensional spaces. But despite their huge successes, largely on problems that can be cast as classification problems, their effectiveness is still limited by their un-debuggability and their inability to "explain" their decisions in a human-understandable and reconstructable way. So while AlphaGo or DeepStack can crush the best humans at Go or Poker, neither program has any internal model of its task; their representations defy interpretation by humans, there is no mechanism to explain their actions and behaviour, and, furthermore, there is no obvious instructional value: these high-performance systems cannot help humans improve. Even when we understand the underlying mathematical scaffolding of current machine learning architectures, it is often impossible to get insight into the internal working of the models; we need explicit modeling and reasoning tools to explain how and why a result was achieved. We also know that a significant challenge for future AI is contextual adaptation, i.e., systems that incrementally help to construct explanatory models for solving real-world problems. Here it would be beneficial not to exclude human expertise, but to augment human intelligence with artificial intelligence.
Extreme 13C depletion of CCl2F2 in firn air samples from NEEM, Greenland
A series of 12 high-volume air samples collected from the S2 firn core during the North Greenland Eemian Ice Drilling (NEEM) 2009 campaign have been measured for mixing ratio and stable carbon isotope composition of the chlorofluorocarbon CFC-12 (CCl2F2). While the mixing ratio measurements compare favorably to other firn air studies, the isotope results show extreme 13C depletion at the deepest measurable depth (65 m), to values lower than δ13C = −80‰ vs. VPDB (the international stable carbon isotope scale), compared to present-day surface tropospheric measurements near −40‰. Firn air modeling was used to interpret these measurements. Reconstructed atmospheric time series indicate even larger depletions (to −120‰) near 1950 AD, with subsequent rapid enrichment of the atmospheric reservoir of the compound to the present-day value. Mass-balance calculations show that this change is likely to have been caused by a large change in the isotopic composition of anthropogenic CFC-12 emissions, probably due to technological advances in the CFC production process over the last 80 years, though direct evidence is lacking.
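The mass-balance reasoning alluded to above can be illustrated with a single-box model of the tropospheric CFC-12 reservoir; the notation below is an illustrative sketch, not the paper's own formulation:

```latex
% One-box isotope mass balance for a long-lived trace gas (illustrative sketch).
% M: atmospheric burden, E: emission flux, L: loss flux,
% \delta_a: atmospheric \delta^{13}C, \delta_E: \delta^{13}C of emissions,
% \epsilon: isotopic fractionation on loss.
\begin{align}
  \frac{dM}{dt} &= E - L, \\
  \frac{d\!\left(M\,\delta_a\right)}{dt} &\approx E\,\delta_E - L\,(\delta_a + \epsilon).
\end{align}
% While the burden grows rapidly (L \ll E), \delta_a is pulled toward \delta_E,
% so a change in the isotopic signature of emissions propagates almost directly
% into the atmospheric \delta^{13}C history reconstructed from firn air.
```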
On the challenges and opportunities in visualization for machine learning and knowledge extraction: A research agenda
We describe a selection of challenges at the intersection of machine learning and data visualization and outline a subjective research agenda based on professional and personal experience. The unprecedented increase in the amount, variety and value of data has been significantly transforming the way scientific research is carried out and businesses operate. Data science has emerged as a practice to enable this data-intensive innovation by gathering together and advancing knowledge from fields such as statistics, machine learning, knowledge extraction, data management, and visualization. Within data science, visualization plays a unique, and perhaps the ultimate, role as an approach to facilitate human-computer cooperation, and in particular to enable the analysis of diverse and heterogeneous data using complex computational methods whose algorithmic results are challenging to interpret and operationalize. Whilst algorithm development is surely at the center of the whole pipeline in disciplines such as Machine Learning and Knowledge Discovery, it is visualization which ultimately makes the results accessible to the end user. Visualization can thus be seen as a mapping from arbitrarily high-dimensional abstract spaces to lower dimensions, and it plays a central and critical role in interacting with machine learning algorithms, particularly in interactive machine learning (iML) with the human-in-the-loop. The central goal of the CD-MAKE VIS workshop is to spark discussions at this intersection of visualization, machine learning and knowledge discovery and to bring together experts from these disciplines. This paper presents a perspective on the challenges and opportunities in the integration of these disciplines and outlines a number of directions and strategies for further research.
Air mass factor formulation for spectroscopic measurements from satellites: Application to formaldehyde retrievals from the Global Ozone Monitoring Experiment
We present a new formulation for the air mass factor (AMF) to convert slant column measurements of optically thin atmospheric species from space into total vertical columns. Because of atmospheric scattering, the AMF depends on the vertical distribution of the species. We formulate the AMF as the integral of the relative vertical distribution (shape factor) of the species over the depth of the atmosphere, weighted by altitude-dependent coefficients (scattering weights) computed independently from a radiative transfer model. The scattering weights are readily tabulated, and one can then obtain the AMF for any observation scene by using shape factors from a three-dimensional (3-D) atmospheric chemistry model for the period of observation. This approach subsequently allows objective evaluation of the 3-D model with the observed vertical columns, since the shape factor and the vertical column in the model represent two independent pieces of information. We demonstrate the AMF method using slant column measurements of formaldehyde at 346 nm from the Global Ozone Monitoring Experiment satellite instrument over North America during July 1996. Shape factors are computed with the Global Earth Observing System CHEMistry (GEOS-CHEM) global 3-D model and are checked for consistency with the few available aircraft measurements. Scattering weights increase by an order of magnitude from the surface to the upper troposphere. The AMFs are typically 20-40% less over continents than over the oceans and are approximately half the values calculated in the absence of scattering. Model-induced errors in the AMF are estimated to be approximately 10%. The GEOS-CHEM model captures 50% and 60% of the variances in the observed slant and vertical columns, respectively. Comparison of the simulated and observed vertical columns allows assessment of model bias.
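The formulation described above can be summarized schematically as follows; the notation is an illustrative paraphrase rather than the paper's own symbols:

```latex
% Schematic AMF formulation (illustrative notation).
% \Omega_s: slant column, \Omega_v: vertical column,
% w(z): scattering weight from a radiative transfer model,
% S(z): shape factor, the normalized relative vertical distribution of the species.
\begin{align}
  \Omega_s &= \mathrm{AMF}\;\Omega_v, \\
  \mathrm{AMF} &= \int_{0}^{z_{\mathrm{top}}} w(z)\, S(z)\, dz,
  \qquad \int_{0}^{z_{\mathrm{top}}} S(z)\, dz = 1 .
\end{align}
% Tabulated w(z) depends on viewing geometry, surface albedo, and clouds;
% S(z) is taken from the 3-D chemistry model for the observation scene.
```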
Total Observed Organic Carbon (TOOC): A synthesis of North American observations
Measurements of organic carbon compounds in both the gas and particle phases, made upwind, over and downwind of North America, are synthesized to examine the total observed organic carbon (TOOC) over this region. These include measurements made aboard the NOAA WP-3 and BAe-146 aircraft, the NOAA research vessel Ronald H. Brown, and at the Thompson Farm and Chebogue Point surface sites during the summer 2004 ICARTT campaign. Both winter and summer 2002 measurements during the Pittsburgh Air Quality Study are also included. Lastly, the spring 2002 observations at Trinidad Head, CA, surface measurements made in March 2006 in Mexico City and coincidentally aboard the C-130 aircraft during the MILAGRO campaign, and later measurements during the IMPEX campaign off the northwestern United States are incorporated. Concentrations of TOOC in these datasets span more than two orders of magnitude. The daytime mean TOOC ranges from 4.0 to 456 μgC m^−3 from the cleanest site (Trinidad Head) to the most polluted (Mexico City). Organic aerosol makes up 3-17% of this mean TOOC, with the highest fractions reported over the northeastern United States, where organic aerosol can comprise up to 50% of TOOC. Carbon monoxide concentrations explain 46 to 86% of the variability in TOOC, with the highest TOOC/CO slopes in regions with fresh anthropogenic influence, where we also expect the highest degree of mass closure for TOOC. Correlation with isoprene, formaldehyde, methyl vinyl ketone and methacrolein also indicates that biogenic activity contributes substantially to the variability of TOOC, yet these tracers of biogenic oxidation sources do not explain the variability in organic aerosol observed over North America. We highlight the critical need to develop measurement techniques to routinely detect total gas-phase VOCs, and to deploy comprehensive suites of TOOC instruments in diverse environments to quantify the ambient evolution of organic carbon from source to sink.
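As an illustration of the bookkeeping such a synthesis involves, here is a minimal sketch of summing gas- and particle-phase organic carbon into a single μgC m^−3 total; the species list, carbon numbers, and unit conversion are illustrative assumptions, not the paper's protocol:

```python
# Minimal sketch: combine gas-phase VOCs (ppbv) and organic-carbon aerosol (ugC/m3)
# into a "total observed organic carbon" estimate. Illustrative only.

# Carbon atoms per molecule for a few example gas-phase species (assumed list).
CARBON_NUMBER = {"isoprene": 5, "formaldehyde": 1, "methanol": 1, "toluene": 7}

MOLAR_VOLUME_L = 24.45   # L/mol for an ideal gas at 25 degC and 1 atm
MW_CARBON = 12.011       # g/mol

def ppbv_to_ugc_per_m3(mixing_ratio_ppbv: float, n_carbon: int) -> float:
    """Convert a VOC mixing ratio in ppbv to carbon mass concentration (ugC/m3)."""
    # 1 ppbv = 1 nmol of trace gas per mol of air; scale by carbon number and carbon mass.
    return mixing_ratio_ppbv * n_carbon * MW_CARBON / MOLAR_VOLUME_L

def total_observed_organic_carbon(voc_ppbv: dict, oc_aerosol_ugc_m3: float) -> float:
    """Sum gas-phase organic carbon and particle-phase organic carbon (ugC/m3)."""
    gas_carbon = sum(
        ppbv_to_ugc_per_m3(value, CARBON_NUMBER[name]) for name, value in voc_ppbv.items()
    )
    return gas_carbon + oc_aerosol_ugc_m3

# Example usage with made-up concentrations:
if __name__ == "__main__":
    tooc = total_observed_organic_carbon(
        {"isoprene": 2.0, "formaldehyde": 3.5, "methanol": 5.0, "toluene": 0.4},
        oc_aerosol_ugc_m3=2.1,
    )
    print(f"TOOC ~ {tooc:.1f} ugC m-3")
```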
Usability Challenges in Smartphone Web Access: A Systematic Literature Review
Part 8: International Workshop on Information Engineering and Management. Systematic literature reviews facilitate methodical understanding of current advances in a field. With the increasing popularity of smartphones, they have become an important means to access the web. Although the literature on this topic has been growing in recent times, there has been no effort yet to systematically review it. This paper reports on a systematic literature review of primary studies from 2007 to 2012 that concern mobile web usability. We identify the usability dimensions tested and the testing procedures adopted in the literature. We anticipate that our work will not only help researchers understand the current state of usability testing of the mobile web but also identify the areas where further research is needed in addressing the challenges identified.
Reasoning Under Uncertainty: Towards Collaborative Interactive Machine Learning
In this paper, we present the current state of the art of decision making (DM) and machine learning (ML) and bridge the two research domains to create an integrated approach to complex problem solving based on human and computational agents. We present a novel classification of ML, emphasizing the human-in-the-loop in interactive ML (iML) and, more specifically, collaborative interactive ML (ciML), which we understand as a deeply integrated version of iML in which humans and algorithms work hand in hand to solve complex problems. Both humans and computers have specific strengths and weaknesses, and integrating humans into machine learning processes might be a very efficient way of tackling problems. This approach bears immense research potential for various domains, e.g., in health informatics or in industrial applications. We outline open questions and name future challenges that have to be addressed by the research community to enable the use of collaborative interactive machine learning for problem solving at large scale.
SP-0489: HPV-transformation in the cervix and at non-cervical sites
General view of one of the horizontal panels on green spaces in Barcelona at the exhibition Ciutat. Barcelona projecta at the Edifici Fòrum. Exhibition on the urban and architectural planning of Barcelona.
