Distributed human computation framework for linked data co-reference resolution
Distributed Human Computation (DHC) is a technique for solving computational problems by incorporating the collaborative effort of a large number of humans. It is also a solution to AI-complete problems such as natural language processing. The Semantic Web, with its roots in AI, is envisioned as a decentralised world-wide information space for sharing machine-readable data with minimal integration costs. Many research problems in the Semantic Web are considered AI-complete. An example is co-reference resolution, which involves determining whether different URIs refer to the same entity; this is considered a significant hurdle to overcome in realising large-scale Semantic Web applications. In this paper, we propose a framework for building a DHC system on top of the Linked Data Cloud to solve various computational problems. To demonstrate the concept, we focus on handling co-reference resolution in the Semantic Web when integrating distributed datasets. The traditional way to solve this problem is to design machine-learning algorithms, but these are often computationally expensive, error-prone and do not scale. We designed a DHC system named iamResearcher, which solves the scientific-publication author-identity co-reference problem when integrating distributed bibliographic datasets. In our system, we aggregated 6 million bibliographic records from various publication repositories. Users can sign up to the system to audit and align their own publications, thus solving the co-reference problem in a distributed manner. The aggregated results are published to the Linked Data Cloud.
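The core mechanism the abstract describes, users confirming that two URIs denote the same author, and those confirmations being published as identity links, can be sketched as a union-find over URIs. This is an illustrative sketch only; the class and method names are hypothetical and do not reflect iamResearcher's actual API.

```python
# Minimal sketch of crowd-sourced co-reference resolution: each user
# confirmation that two author URIs refer to the same person merges
# their identity clusters, and clusters are exported as owl:sameAs
# triples for the Linked Data Cloud. All names here are illustrative.
from collections import defaultdict

class CoreferenceStore:
    """Union-find over URIs; user confirmations merge identity clusters."""

    def __init__(self):
        self.parent = {}

    def _find(self, uri):
        self.parent.setdefault(uri, uri)
        while self.parent[uri] != uri:
            # Path halving keeps lookups near-constant time.
            self.parent[uri] = self.parent[self.parent[uri]]
            uri = self.parent[uri]
        return uri

    def confirm_same(self, uri_a, uri_b):
        """A user audits two records and confirms they are the same author."""
        self.parent[self._find(uri_a)] = self._find(uri_b)

    def same_entity(self, uri_a, uri_b):
        return self._find(uri_a) == self._find(uri_b)

    def sameas_triples(self):
        """Export confirmed links as owl:sameAs triples, one per non-canonical URI."""
        clusters = defaultdict(list)
        for uri in self.parent:
            clusters[self._find(uri)].append(uri)
        triples = []
        for members in clusters.values():
            canonical = min(members)  # arbitrary but deterministic choice
            triples += [(u, "owl:sameAs", canonical)
                        for u in members if u != canonical]
        return triples

store = CoreferenceStore()
store.confirm_same("http://repo-a.org/author/1", "http://repo-b.org/person/42")
store.confirm_same("http://repo-b.org/person/42", "http://repo-c.org/j-smith")
assert store.same_entity("http://repo-a.org/author/1", "http://repo-c.org/j-smith")
```

Because alignment decisions come from many independent users, the merge operation is commutative and order-independent, which is what lets the resolution happen in a distributed manner.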
Cooperation in Industrial Systems
ARCHON is an ongoing ESPRIT II project (P-2256), approximately half way through its five-year duration. It is concerned with defining and applying techniques from the area of Distributed Artificial Intelligence to the development of real-size industrial applications. Such techniques enable multiple problem solvers (e.g. expert systems, databases and conventional numerical software systems) to communicate and cooperate with each other to improve both their individual problem-solving behaviour and the behaviour of the community as a whole. This paper outlines the niche of ARCHON in the Distributed AI world and provides an overview of the philosophy and architecture of our approach, the essence of which is to be both general (applicable to the domain of industrial process control) and powerful enough to handle real-world problems.
Integer Sparse Distributed Memory and Modular Composite Representation
Challenging AI applications, such as cognitive architectures, natural language understanding, and visual object recognition, share some basic operations, including pattern recognition, sequence learning, clustering, and association of related data. Both the representations used and the structure of a system significantly influence which tasks and problems are most readily supported. A memory model and a representation that facilitate these basic tasks would greatly improve the performance of these challenging AI applications. Sparse Distributed Memory (SDM), based on large binary vectors, has several desirable properties: auto-associativity, content addressability, distributed storage, and robustness to noisy inputs, all of which would facilitate the implementation of challenging AI applications. Here I introduce two variations on the original SDM, the Extended SDM and the Integer SDM, that significantly improve these desirable properties, as well as a new form of reduced description representation named MCR. Extended SDM, which uses word vectors larger than its address vectors, enhances hetero-associativity, improving the storage of sequences of vectors as well as of other data structures. A novel sequence-learning mechanism is introduced, and several experiments demonstrate the capacity and sequence-learning capability of this memory. Integer SDM uses modular integer vectors rather than binary vectors, improving the representation capabilities of the memory and its noise robustness. Several experiments show its capacity and noise robustness, and theoretical analyses of its capacity and fidelity are also presented. A reduced description represents a whole hierarchy using a single high-dimensional vector, which can recover individual items and be used directly for complex calculations and procedures, such as making analogies. Furthermore, the hierarchy can be reconstructed from the single vector.
Modular Composite Representation (MCR), a new reduced description model for the representations used in challenging AI applications, provides an attractive tradeoff between expressiveness and simplicity of operations. A theoretical analysis of its noise robustness, several experiments, and comparisons with similar models are presented. My implementations of these memories include an object-oriented version using a RAM cache, a version for distributed and multi-threaded execution, and a GPU version for fast vector processing.
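The modular integer vectors underlying Integer SDM and MCR can be sketched with a toy example. The dimensionality and modulus below are my own illustrative choices, not the paper's parameters; the sketch only shows the generic hyperdimensional pattern of binding by component-wise modular addition and comparing vectors by circular distance.

```python
# Toy sketch of modular integer ("hyperdimensional") vectors:
# binding is component-wise addition mod MOD, unbinding is subtraction,
# and similarity is measured by per-component circular distance.
# DIM and MOD are illustrative choices, not the dissertation's values.
import numpy as np

DIM, MOD = 1000, 16
rng = np.random.default_rng(0)

def random_vector():
    """A random modular integer vector."""
    return rng.integers(0, MOD, size=DIM)

def bind(a, b):
    return (a + b) % MOD          # combine role and filler into one vector

def unbind(c, a):
    return (c - a) % MOD          # recover b from bind(a, b), given a

def distance(a, b):
    """Mean circular (wrap-around) distance per component, in [0, MOD/2]."""
    d = np.abs(a - b) % MOD
    return np.minimum(d, MOD - d).mean()

role, filler = random_vector(), random_vector()
recovered = unbind(bind(role, filler), role)
assert distance(recovered, filler) == 0.0   # binding is exactly invertible
```

Two unrelated random vectors sit near the expected circular distance of MOD/4 per component, so bound structures remain easily distinguishable from noise, the tradeoff between expressiveness and simplicity the abstract refers to.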
How Artificial Intelligence for Healthcare Look Like in the Future?
Research on artificial intelligence (AI) for healthcare has gained interest in recent years. However, the use of AI in daily clinical practice is still rare. We created and distributed an online survey among professionals working in the health informatics field to explore their views. The answers provided were classified according to whether or not they referred to: 1) application areas; 2) medical specialities; 3) specific technologies; 4) use cases; 5) citizens' involvement; and 6) challenges. We received 42 valid responses. With regard to sentiment, 71.4% of the answers were classified by the AFINN tool as positive. In response to the open question, 76.2% of the respondents referred to possible application areas; they think the most frequent uses will be for diagnosis, decision making and treatment. 54.8% of respondents referred to use cases, with personalised care and daily practice being the most popular scenarios. 28.6% mentioned citizens' involvement, and 23.8% mentioned medical specialities in which AI might be used. There is a mostly positive attitude towards the application of AI in healthcare, in particular regarding its future use for routine tasks. From these results, we conclude that research should further focus on AI-based applications that relieve health professionals from repetitive tasks and optimise healthcare processes.
Blockchain: The Next Breakthrough in the Rapid Progress of AI
Blockchain technologies, once used exclusively for buying and selling bitcoins, have entered the mainstream of computer applications, fundamentally changing the way Internet transactions can be implemented by establishing trust between unknown parties. In addition, they ensure immutability (once information is entered, it cannot be modified) and enable disintermediation (as trust is assured, no third party is required to verify transactions). These advantages can produce disruptive changes when properly exploited, inspiring a large number of applications. These applications are forming the backbone of what can be called the Internet of Value, bound to bring changes as significant as those brought over the last 20 years by the traditional Internet. This chapter investigates blockchain and the technologies behind it, explaining their technological might and outstanding potential, not only for transactions but also as distributed databases. It also discusses blockchain's future prospects and the disruptive changes it promises to bring, while considering the challenges that would need to be overcome for its widespread adoption. Finally, the chapter considers combining blockchain with Artificial Intelligence (AI) and discusses the revolutionary changes that would result from rapidly advancing the AI field.
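The immutability property described above comes from hash chaining: each block stores the hash of its predecessor, so altering any earlier entry invalidates every later link. A minimal sketch, not a real blockchain client and with no consensus or mining, can make this concrete:

```python
# Toy hash chain illustrating blockchain-style immutability:
# tampering with any block breaks every subsequent prev_hash link.
import hashlib
import json

def block_hash(block):
    # Canonical JSON serialisation so the hash is deterministic.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, data):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev_hash": prev})

def verify(chain):
    """True iff every block's stored prev_hash matches its predecessor's hash."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
add_block(chain, "Alice pays Bob 5")
add_block(chain, "Bob pays Carol 2")
assert verify(chain)
chain[0]["data"] = "Alice pays Bob 500"   # tampering with history...
assert not verify(chain)                  # ...is immediately detectable
```

In a real deployment the chain is replicated across many nodes and extended only by consensus, which is what removes the need for a trusted third party.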
APACE: AlphaFold2 and advanced computing as a service for accelerated discovery in biophysics
The prediction of protein 3D structure from amino acid sequence is a computational grand challenge in biophysics that plays a key role in applications ranging from drug discovery to genome interpretation. The advent of AI models such as AlphaFold is revolutionising applications that depend on robust protein structure prediction algorithms. To maximise the impact, and ease the usability, of these AI tools, we introduce APACE (AlphaFold2 and advanced computing as a service), a computational framework that effectively handles this AI model and its TB-size database to conduct accelerated protein structure prediction analyses in modern supercomputing environments. We deployed APACE on the Delta and Polaris supercomputers and quantified its performance for accurate protein structure predictions using four exemplar proteins: 6AWO, 6OAN, 7MEZ, and 6D6U. Using up to 300 ensembles distributed across 200 NVIDIA A100 GPUs, we found that APACE is up to two orders of magnitude faster than off-the-shelf AlphaFold2 implementations, reducing time-to-solution from weeks to minutes. This computational approach may be readily linked with robotics laboratories to automate and accelerate scientific discovery.