The VEX-93 environment as a hybrid tool for developing knowledge systems with different problem solving techniques
The paper describes VEX-93, a hybrid environment for developing
knowledge-based and problem-solver systems. It integrates methods and
techniques from artificial intelligence, image and signal processing, and
data analysis, which can be freely combined. Two hierarchical levels of
reasoning contain an intelligent toolbox with one upper strategic inference
engine and four lower ones implementing specific reasoning models:
truth-functional (rule-based), probabilistic (causal networks), fuzzy
(rule-based) and case-based (frames). Image/signal processing and analysis
capabilities are provided in the form of programming languages with more
than one hundred primitive functions.
User-made programs can be embedded within knowledge bases, allowing the
combination of perception and reasoning. The data-analyzer toolbox contains
a collection of numerical classification, pattern recognition and ordination
methods, together with neural network tools and a database query language at
the inference engines' disposal.
VEX-93 is an open system able to communicate with external computer programs
relevant to a particular application. Metaknowledge can be used to elaborate
conclusions, and man-machine interaction includes, besides windows and
graphical interfaces, acceptance of voice commands and production of speech
output.
The system was conceived for real-world applications in general domains; as
an example, a concrete medical diagnostic support system, currently being
completed as a Cuban-Spanish project, is mentioned.
The present version of VEX-93 is a large system comprising about one and a
half million lines of C code; it runs on microcomputers under Windows 3.1.
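The two-level reasoning architecture described above can be illustrated with a minimal sketch: an upper strategic engine that dispatches each goal to one of several lower-level reasoning models. All names, rules and thresholds here are hypothetical illustrations, not VEX-93's actual API.

```python
# Hypothetical sketch of a two-level inference architecture: a strategic
# engine selects which specific reasoning model handles each goal.

def rule_based(facts):
    # Truth-functional (rule-based) reasoning: fire a simple crisp rule.
    if facts.get("pressure_high") and facts.get("temp_high"):
        return "fault"
    return "ok"

def fuzzy(facts):
    # Fuzzy reasoning: degree of membership in "overheated",
    # ramping linearly from 60 to 100 degrees.
    t = facts.get("temperature", 0.0)
    return min(max((t - 60.0) / 40.0, 0.0), 1.0)

# Registry mapping goals to lower-level reasoning models.
REASONERS = {"diagnosis": rule_based, "overheating_degree": fuzzy}

def strategic_engine(goal, facts):
    """Upper-level engine: route the goal to the registered model."""
    return REASONERS[goal](facts)

facts = {"pressure_high": True, "temp_high": True, "temperature": 90.0}
print(strategic_engine("diagnosis", facts))           # -> fault
print(strategic_engine("overheating_degree", facts))  # -> 0.75
```

In a full system each lower engine would be a separate inference machine (causal network, frame-based case matcher, etc.); the point of the sketch is only the dispatch structure.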
Case-based reasoning combined with statistics for diagnostics and prognosis
Many approaches used for diagnostics today are based on a precise model. This excludes diagnostics of many complex types of machinery that cannot be modelled and simulated easily or without great effort. Our aim is to show that by including human experience it is possible to diagnose complex machinery when no models or simulations are available, or only limited ones. This also enables diagnostics in a dynamic application where conditions change and new cases are often added. In fact, every new solved case increases the diagnostic power of the system. We present a number of successful projects where we have used feature extraction together with case-based reasoning to diagnose faults in industrial robots, welding and cutting machinery, and we also present our latest project for diagnosing transmissions by combining Case-Based Reasoning (CBR) with statistics. We view the fault diagnosis process as three consecutive steps. In the first step, sensor fault signals from machines and/or input from human operators are collected. Then, the second step consists of extracting relevant fault features. In the final diagnosis/prognosis step, status and faults are identified and classified. We view prognosis as a special case of diagnosis where the prognosis module predicts a stream of future features.
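The retrieval core of such a CBR diagnostic step can be sketched in a few lines: a library of solved cases (extracted feature vectors paired with diagnoses), and a nearest-neighbour lookup for a new fault signature. The case library, feature values and fault labels below are invented for illustration; a real system would use features extracted from sensor signals and a richer similarity measure.

```python
import math

# Hypothetical case library: each solved case pairs an extracted
# fault-feature vector with the confirmed diagnosis.
case_library = [
    ([0.9, 0.1, 0.3], "bearing wear"),
    ([0.2, 0.8, 0.1], "gear misalignment"),
    ([0.1, 0.2, 0.9], "lubrication failure"),
]

def retrieve(query, library):
    """Retrieve the most similar solved case: 1-nearest neighbour
    by Euclidean distance over the extracted features."""
    return min(library, key=lambda case: math.dist(query, case[0]))

def diagnose(query, library):
    _features, diagnosis = retrieve(query, library)
    return diagnosis

# A new fault signature; adding each newly solved case back into the
# library is what makes the system's diagnostic power grow over time.
print(diagnose([0.85, 0.15, 0.25], case_library))  # -> bearing wear
```

Prognosis, viewed as the paper suggests, would reuse the same retrieval machinery but match against predicted future feature streams rather than current ones.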
Stay Tuned: Whether Cloud-Based Service Providers Can Have Their Copyrighted Cake and Eat It Too
Copyright owners have the exclusive right to perform their works publicly and the ability to license their work to others who want to share that right. Subsections 106(4) and (5) of the Copyright Act govern this exclusive public performance right, but neither subsection elaborates on what constitutes a performance made “to the public” versus one that remains private. This lack of clarity has made it difficult for courts to apply the Copyright Act consistently, especially in the face of changing technology.
Companies like Aereo, Inc. and AereoKiller, Inc. developed novel ways to transmit content over the internet to be viewed instantly by their subscribers and declined to procure the licenses that would have been required if these transmissions were being made “to the public.” However, while these companies claimed that their activities were outside of the purview of § 106(4) and (5), their rivals, copyright owners, and the U.S. Supreme Court disagreed. Likening Aereo to a cable company for purposes of § 106(4) and (5), the Supreme Court determined that the company would need to pay for the material it streamed. Perhaps more problematic for Aereo (and other similar companies) is the fact that the Court declined to categorize Aereo as an actual cable company, such that it would qualify to pay compulsory licensing fees—the more affordable option given to cable companies under § 111—to copyright holders.
This Comment shows that, while the Court correctly ruled that companies like Aereo and AereoKiller should pay for the content transmitted, its failure to address whether Aereo is a cable company could frustrate innovation to the detriment of the public. It suggests, therefore, that these companies should be required to pay for the content that they transmit in the same way that cable companies do until Congress develops another system.
Artefacts and Errors: Acknowledging Issues of Representation in the Digital Imaging of Ancient Texts
It is assumed, in palaeography, papyrology and epigraphy, that a certain amount of
uncertainty is inherent in the reading of damaged and abraded texts. Yet we have
not really grappled with the fact that, nowadays, as many scholars tend to deal with
digital images of texts, rather than handling the texts themselves, the procedures for
creating digital images of texts can insert further uncertainty into the representation
of the text created. Technical distortions can lead to the unintentional introduction
of ‘artefacts’ into images, which can have an effect on the resulting representation. If
we cannot trust our digital surrogates of texts, can we trust the readings from them?
How do scholars acknowledge the quality of digitised images of texts? Furthermore,
this leads us to the type of discussions of representation that have been present in
Classical texts since Plato: digitisation can be considered as an alternative form of
representation, bringing to the modern debate of the use of digital technology in Classics
the familiar theories of mimesis (imitation) and ekphrasis (description): the conversion
of visual evidence into explicit descriptions of that information, stored in computer
files in distinct linguistic terms, with all the difficulties of conversion understood in the
ekphrastic process. The community has not yet considered what becoming dependent
on digital texts means for the field, both in practical and theoretical terms. Issues of
quality, copying, representation, and substance should be part of our dialogue when
we consult digital surrogates of documentary material, yet we are just constructing
understandings of what it means to rely on virtual representations of artefacts. It is
necessary to relate our understandings of uncertainty in palaeography and epigraphy
to our understanding of the mechanics of visualization employed by digital imaging
techniques, if we are to fully understand the impact that these will have.
Context-aware Captions from Context-agnostic Supervision
We introduce an inference technique to produce discriminative context-aware
image captions (captions that describe differences between images or visual
concepts) using only generic context-agnostic training data (captions that
describe a concept or an image in isolation). For example, given images and
captions of "siamese cat" and "tiger cat", we generate language that describes
the "siamese cat" in a way that distinguishes it from "tiger cat". Our key
novelty is that we show how to do joint inference over a language model that is
context-agnostic and a listener which distinguishes closely-related concepts.
We first apply our technique to a justification task, namely to describe why an
image contains a particular fine-grained category as opposed to another
closely-related category of the CUB-200-2011 dataset. We then study
discriminative image captioning to generate language that uniquely refers to
one of two semantically-similar images in the COCO dataset. Evaluations with
discriminative ground truth for justification and human studies for
discriminative image captioning reveal that our approach outperforms baseline
generative and speaker-listener approaches for discrimination.
Comment: Accepted to CVPR 2017 (Spotlight).
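The joint inference idea above — combining a context-agnostic language model with a listener that discriminates between closely related concepts — can be sketched as a reranking of candidate captions. The captions, scores and weighting below are toy values invented for illustration; the paper's actual method uses trained neural models, not fixed dictionaries.

```python
import math

# Toy scores for two candidate captions of a target "siamese cat" image
# with a "tiger cat" distractor. All numbers are illustrative.
# speaker[c]:  log p(caption | target image), a context-agnostic LM score
# listener[c]: log p(target | caption) among {target, distractor}
speaker = {
    "a cat sitting on a sofa": -2.0,                        # fluent, generic
    "a siamese cat with blue eyes and dark points": -4.0,   # less likely a priori
}
listener = {
    "a cat sitting on a sofa": math.log(0.2),               # ambiguous
    "a siamese cat with blue eyes and dark points": math.log(0.9),
}

def joint_score(caption, lam=0.3):
    """Trade off context-agnostic fluency against discriminative power."""
    return lam * speaker[caption] + (1 - lam) * listener[caption]

best = max(speaker, key=joint_score)
print(best)  # the discriminative caption wins despite a lower LM score
```

The weight `lam` controls the fluency/discrimination trade-off: at `lam=1` the reranker reduces to the generic captioner, while smaller values favour captions the listener can use to pick out the target image.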
Quick and energy-efficient Bayesian computing of binocular disparity using stochastic digital signals
Reconstruction of the tridimensional geometry of a visual scene using the
binocular disparity information is an important issue in computer vision and
mobile robotics, which can be formulated as a Bayesian inference problem.
However, computation of the full disparity distribution with an advanced
Bayesian model is usually an intractable problem, and proves computationally
challenging even with a simple model. In this paper, we show how probabilistic
hardware using distributed memory and alternate representation of data as
stochastic bitstreams can solve that problem with high performance and energy
efficiency. We put forward a way to express discrete probability distributions
using stochastic data representations and perform Bayesian fusion using those
representations, and show how that approach can be applied to disparity
computation. We evaluate the system using a simulated stochastic implementation
and discuss possible hardware implementations of such architectures and their
potential for sensorimotor processing and robotics.
Comment: Preprint of an article submitted for publication in the
International Journal of Approximate Reasoning and accepted pending minor
revision.
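The core trick of representing probabilities as stochastic bitstreams can be demonstrated in a few lines: a probability p is encoded as a stream whose bits are 1 with probability p, and the (unnormalised) Bayesian fusion of two independent likelihoods reduces to a bitwise AND, since P(a=1 and b=1) = pa·pb. This is a software simulation of the idea, assuming independent streams; the paper's hardware would realise the AND with a single gate per fused bit.

```python
import random

def bitstream(p, n, rng):
    """Encode probability p as a stochastic bitstream of length n:
    each bit is independently 1 with probability p."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def fuse(a, b):
    """Fuse two likelihood streams: a bitwise AND multiplies the
    encoded probabilities (assuming independent streams)."""
    return [x & y for x, y in zip(a, b)]

def decode(stream):
    """Recover the encoded probability as the fraction of 1-bits."""
    return sum(stream) / len(stream)

rng = random.Random(0)
n = 100_000
s1 = bitstream(0.6, n, rng)  # e.g. likelihood from the left camera
s2 = bitstream(0.5, n, rng)  # e.g. likelihood from the right camera
est = decode(fuse(s1, s2))
print(round(est, 2))  # close to 0.6 * 0.5 = 0.3
```

The precision of the result scales with stream length (standard error roughly sqrt(p(1-p)/n)), which is exactly the speed/accuracy trade-off such stochastic hardware exploits.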