Knowledge-based automatic tolerance analysis system
Tolerance specification is an important part of engineering; to date, however, the
application of this technology has been left to the judgement of the engineer,
working from appropriate guidelines. This work offers a major departure from the
trial-and-error and random-number-generation techniques used previously, by
employing a knowledge-based system to ensure intelligent optimisation within the
manufacturing system. A system to
optimise manufacturing tolerance allocation to a part known as Knowledge-based Automatic
Tolerance Analysis (KATA) has been developed. KATA is a knowledge-based system shell
built within AutoCAD. It supports geometry creation in CAD and can optimise
tolerances heuristically as an expert system. Besides the worst-case tolerancing
equation to optimise the tolerance allocation, KATA's algorithm is supported by actual
production information such as machine capability, types of cutting tools, materials, process
capabilities, etc. The KATA prototype is currently able to analyse a cylindrically
shaped workpiece and a simple prismatic part. The tolerance analyses cover both
dimensional and geometrical tolerances. KATA can also handle angular cuts such as
tapers and chamfers. The
investigation has also led to the significant development of the single tolerance reference
technique. This method departs from the common practice of multiple tolerance
referencing when optimising tolerance allocation, and its use has eliminated
tolerance stack-up error. Three tests have been undertaken: two on cylindrical
parts, covering dimensional tolerance and an angular cut, and a third on a simple
prismatic part, exercising the geometrical tolerance analysis.
Because tolerance allocation is optimised from real production data rather than
imaginary or randomly generated numbers, the accuracy of the expected result after
manufacturing is improved. Any failure caused by machining parameters is flagged
at an early stage, before an actual production run has commenced. Thus, the
manufacturer is assured that the manufactured product will be within the required
tolerance limits. Being the central database for all production capability
information enables KATA to offer several processing approaches and techniques,
giving the user the flexibility to select the process plan best suited to any
required situation.
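The worst-case tolerancing calculation the system builds on can be sketched briefly. A minimal illustration, assuming a hypothetical process-capability table (this is not data from the KATA system): under worst-case tolerancing, feature tolerances add directly, and allocation must keep the sum within the assembly limit.

```python
# Worst-case tolerance stack-up sketch. The process-capability table below
# is an illustrative assumption, not KATA data; tolerances in micrometres.

def worst_case_stack(tolerances):
    """Worst-case stack-up: feature tolerances add directly."""
    return sum(tolerances)

def allocate_feasible(assembly_limit_um, process_capabilities):
    """Choose, per feature, the loosest (cheapest) achievable tolerance such
    that the worst-case stack still meets the assembly limit."""
    chosen = [max(caps) for caps in process_capabilities]  # start loose
    while worst_case_stack(chosen) > assembly_limit_um:
        # tighten the feature with the largest current tolerance
        i = max(range(len(chosen)), key=lambda k: chosen[k])
        tighter = [t for t in process_capabilities[i] if t < chosen[i]]
        if not tighter:
            raise ValueError("no process combination meets the limit")
        chosen[i] = max(tighter)
    return chosen

# Three features; achievable tolerances per feature from available processes
caps = [[50, 100, 200], [50, 100], [20, 50, 100]]
print(allocate_feasible(250, caps))  # → [50, 100, 100]
```

A knowledge-based system such as KATA would draw the capability lists from its production database rather than from a hard-coded table.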
Cortical Learning of Recognition Categories: A Resolution of the Exemplar Vs. Prototype Debate
Do humans and animals learn exemplars or prototypes when they categorize objects and events in the world? How are different degrees of abstraction realized through learning by neurons in inferotemporal and prefrontal cortex? How do top-down expectations influence the course of learning? Thirty related human cognitive experiments (the 5-4 category structure) have been used to test competing views in the prototype-exemplar debate. In these experiments, during the test phase, subjects unlearn in a characteristic way items that they had learned to categorize perfectly in the training phase. Many cognitive models do not describe how an individual learns or forgets such categories through time. Adaptive Resonance Theory (ART) neural models provide such a description, and also clarify both psychological and neurobiological data. Matching of bottom-up signals with learned top-down expectations plays a key role in ART model learning. Here, an ART model is used to learn incrementally in response to 5-4 category structure stimuli. Simulation results agree with experimental data, achieving perfect categorization in training and a good match to the pattern of errors exhibited by human subjects in the testing phase. These results show how the model learns both prototypes and certain exemplars in the training phase. ART prototypes are, however, unlike the ones posited in the traditional prototype-exemplar debate. Rather, they are critical patterns of features to which a subject learns to pay attention based on past predictive success and the order in which exemplars are experienced. Perturbations of old memories by newly arriving test items generate a performance curve that closely matches the performance pattern of human subjects. 
The model also clarifies exemplar-based accounts of data concerning amnesia.

Funding: Defense Advanced Research Projects Agency SyNAPSE program (Hewlett-Packard Company, DARPA HR0011-09-3-0001; HRL Laboratories LLC #801881-BS under HR0011-09-C-0011); Science of Learning Centers program of the National Science Foundation (NSF SBE-0354378).
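The matching cycle described above, in which bottom-up input is compared against learned top-down expectations and accepted only when the match exceeds a vigilance threshold, can be sketched with generic fuzzy ART. This is a minimal illustration, not the specific model simulated in the paper, and the parameter values are assumptions.

```python
import numpy as np

# Minimal fuzzy ART step: choose the most active category, test the match
# against vigilance (rho), learn on resonance, or create a new category.

def fuzzy_art_step(x, prototypes, rho=0.75, beta=1.0, alpha=0.001):
    """One presentation: choose a category, test vigilance, learn or create."""
    for j in sorted(range(len(prototypes)),
                    key=lambda j: -np.minimum(x, prototypes[j]).sum()
                                   / (alpha + prototypes[j].sum())):
        w = prototypes[j]
        match = np.minimum(x, w).sum() / x.sum()   # top-down match score
        if match >= rho:                           # resonance: learn
            prototypes[j] = beta * np.minimum(x, w) + (1 - beta) * w
            return j
    prototypes.append(x.copy())                    # mismatch everywhere: new category
    return len(prototypes) - 1

protos = []
a = fuzzy_art_step(np.array([1.0, 0.0, 1.0, 0.0]), protos)
b = fuzzy_art_step(np.array([1.0, 0.0, 0.9, 0.1]), protos)
print(a, b, len(protos))  # → 0 0 1
```

The learned prototype after the second presentation is the feature-wise minimum of the two inputs, which is the sense in which ART prototypes are critical feature patterns rather than category averages.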
Overview of Some Intelligent Control Structures and Dedicated Algorithms
Automatic control refers to the use of a control device to make the controlled object run automatically, or hold its state unchanged, without human intervention. The guiding idea of intelligent control is to emulate the human way of thinking and problem-solving ability, in order to handle control tasks that currently require human intelligence. We already know that the complexity of the controlled object includes model uncertainty, high nonlinearity, distributed sensors/actuators, dynamic mutations, multiple time scales, complex information patterns, big-data processes, and strict performance indicators. In addition, the complexity of the environment manifests itself in uncertainty and variability. On this basis, various studies suggest that the main methods of intelligent control include expert control, fuzzy control, neural network control, hierarchical intelligent control, anthropomorphic intelligent control, integrated intelligent control, combined intelligent control, chaos control, wavelet theory, etc. However, it is difficult to cover all intelligent control methods in a single chapter, so this chapter focuses on intelligent control based on fuzzy logic, intelligent control based on neural networks, expert control and human-like intelligent control, and hierarchical intelligent control and learning control, and provides relevant example programs for readers to practise with.
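The fuzzy control approach mentioned above can be illustrated with a minimal sketch: triangular membership functions fuzzify the error signal, a small rule base maps linguistic terms to outputs, and centroid defuzzification produces a crisp control action. The membership parameters and rules below are illustrative assumptions, not taken from the chapter.

```python
# Minimal fuzzy-logic controller: fuzzify, apply rules, defuzzify.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_control(error):
    # Fuzzify the error into three linguistic terms
    neg  = tri(error, -2.0, -1.0, 0.0)
    zero = tri(error, -1.0,  0.0, 1.0)
    pos  = tri(error,  0.0,  1.0, 2.0)
    # Rules: IF error is neg THEN u = -1; IF zero THEN u = 0; IF pos THEN u = +1
    weights = [(neg, -1.0), (zero, 0.0), (pos, 1.0)]
    total = sum(w for w, _ in weights)
    if total == 0.0:
        return 0.0
    return sum(w * u for w, u in weights) / total  # centroid defuzzification

print(fuzzy_control(0.5))  # → 0.5, halfway between "zero" and "pos"
```

The smooth interpolation between rules is what lets a handful of linguistic rules approximate a nonlinear control surface.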
Mathematical Expression Recognition based on Probabilistic Grammars
[EN] Mathematical notation is well-known and used all over the
world. Humankind has evolved from simple methods representing
countings to current well-defined math notation able to account for
complex problems. Furthermore, mathematical expressions constitute a
universal language in scientific fields, and many information
resources containing mathematics have been created during the last
decades. However, in order to efficiently access all that information,
scientific documents have to be digitized or produced directly in
electronic formats.
Although most people are able to understand and produce mathematical
information, introducing math expressions into electronic devices
requires learning specific notations or using editors. Automatic
recognition of mathematical expressions aims at filling this gap
between the knowledge of a person and the input accepted by
computers. This way, printed documents containing math expressions
could be automatically digitized, and handwriting could be used for
direct input of math notation into electronic devices.
This thesis is devoted to developing an approach for mathematical
expression recognition. In this document we propose an approach for
recognizing any type of mathematical expression (printed or
handwritten) based on probabilistic grammars. To that end, we
develop a formal statistical framework that derives several
probability distributions. Throughout the document, we deal with the
definition and estimation of all these probabilistic sources of
information. Finally, we define the parsing algorithm that globally
computes the most probable mathematical expression for a given input
according to the statistical framework.
An important point in this study is to provide objective performance
evaluation and report results using public data and standard
metrics. We inspected the problems of automatic evaluation in this
field and looked for the best solutions. We also report several
experiments using public databases and we participated in several
international competitions. Furthermore, we have released most of the
software developed in this thesis as open source.
We also explore some of the applications of mathematical expression
recognition. In addition to the direct applications of transcription
and digitization, we report two important proposals. First, we
developed mucaptcha, a method to tell humans and computers apart by
means of math handwriting input, which represents a novel application
of math expression recognition. Second, we tackled the problem of
layout analysis of structured documents using the statistical
framework developed in this thesis, because both are two-dimensional
problems that can be modeled with probabilistic grammars.
The approach developed in this thesis for mathematical expression
recognition has obtained good results at different levels. It has
produced several scientific publications in international conferences
and journals, and has been awarded in international competitions.

Álvaro Muñoz, F. (2015). Mathematical Expression Recognition based on Probabilistic Grammars [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/51665
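The core statistical machinery, a grammar with probabilistic rules and a parser that globally computes the most probable derivation, can be sketched for the one-dimensional case (the thesis extends such parsing to two-dimensional math layout). The toy grammar below is an illustrative assumption.

```python
# Probabilistic CYK parsing over a token sequence, for a grammar in
# Chomsky normal form. Returns the probability of the best parse.

from collections import defaultdict

def pcyk(tokens, lexical, binary, start="S"):
    """lexical: {(A, word): p} for A -> word; binary: {(A, B, C): p} for A -> B C."""
    n = len(tokens)
    best = defaultdict(float)                     # (i, j, A) -> best probability
    for i, w in enumerate(tokens):                # fill length-1 spans
        for (A, word), p in lexical.items():
            if word == w:
                best[(i, i + 1, A)] = max(best[(i, i + 1, A)], p)
    for span in range(2, n + 1):                  # combine shorter spans
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for (A, B, C), p in binary.items():
                    q = p * best[(i, k, B)] * best[(k, j, C)]
                    if q > best[(i, j, A)]:
                        best[(i, j, A)] = q
    return best[(0, n, start)]

# Toy grammar for expressions like "1 + 1"
lexical = {("Num", "1"): 0.5, ("Op", "+"): 1.0}
binary = {("S", "Num", "Rest"): 1.0, ("Rest", "Op", "Num"): 1.0}
print(pcyk(["1", "+", "1"], lexical, binary))  # → 0.25
```

In the 2-D setting the spans become spatial regions and the rule probabilities are combined with symbol-recognition and layout scores, but the dynamic-programming structure is the same.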
A new knowledge sourcing framework to support knowledge-based engineering development
New trends in Knowledge-Based Engineering (KBE) highlight the need for decoupling the automation aspect from the knowledge management side of KBE. In this direction, some authors argue that KBE is capable of effectively capturing, retaining and reusing engineering knowledge. However, there are some limitations associated with some aspects of KBE that present a barrier to deliver the knowledge sourcing process requested by the industry. To overcome some of these limitations this research proposes a new methodology for efficient knowledge capture and effective management of the complete knowledge life cycle.
Current knowledge capture procedures represent one of the main constraints limiting the wide use of KBE in industry, owing to the extraction of knowledge from experts in high-cost knowledge capture sessions. To reduce the time required from experts to extract relevant knowledge, this research uses Artificial Intelligence (AI) techniques capable of generating new knowledge from company assets. Moreover, the research reported here proposes the integration of AI methods and expert input, increasing as a result the accuracy of the predictions and the reliability of advanced reasoning tools. The proposed knowledge sourcing framework integrates two features: (i) use of advanced data mining tools and expert knowledge to create new knowledge from raw data; (ii) adoption of a well-established and reliable methodology to systematically capture, transfer and reuse engineering knowledge.
The methodology proposed in this research is validated through the development and implementation of two case studies aiming at the optimisation of wing design concepts. The results obtained in both use cases demonstrated the extended KBE capability for fast and effective knowledge sourcing. This evidence was provided by the experts working on each of the case studies through structured quantitative and qualitative analyses.
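The data-mining step of such a framework, creating candidate knowledge from raw data for an expert to validate, can be sketched with a simple threshold-rule induction. The wing-design records and the rule form below are illustrative assumptions, not project data.

```python
# Mine a candidate design rule of the form "feature <= t => pass" from raw
# records; an expert would validate it before it is stored as reusable
# engineering knowledge. Records and feature names are hypothetical.

def best_threshold_rule(records, feature, label):
    """Find the threshold on `feature` that best separates pass/fail outcomes,
    returning (threshold, accuracy) for the rule 'feature <= t => pass'."""
    values = sorted({r[feature] for r in records})
    best_t, best_acc = None, 0.0
    for t in values:
        correct = sum(1 for r in records
                      if (r[feature] <= t) == (r[label] == "pass"))
        acc = correct / len(records)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

records = [
    {"aspect_ratio": 6.0, "outcome": "pass"},
    {"aspect_ratio": 7.5, "outcome": "pass"},
    {"aspect_ratio": 9.0, "outcome": "fail"},
    {"aspect_ratio": 10.5, "outcome": "fail"},
]
t, acc = best_threshold_rule(records, "aspect_ratio", "outcome")
print(f"rule: aspect_ratio <= {t} => pass  (accuracy {acc:.0%})")
```

Industrial frameworks would use richer data-mining tools, but the division of labour is the same: the algorithm proposes, the expert disposes.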
How sketches work: a cognitive theory for improved system design
Evidence is presented that in the early stages of design or composition the
mental processes used by artists for visual invention require a different type of
support from those used for visualising a nearly complete object. Most research
into machine visualisation has as its goal the production of realistic images which
simulate the light pattern presented to the retina by real objects. In contrast, sketch
attributes preserve the results of cognitive processing which can be used
interactively to amplify visual thought. The traditional attributes of sketches
include many types of indeterminacy which may reflect the artist's need to be
"vague".
Drawing on contemporary theories of visual cognition and neuroscience, this
study discusses in detail the evidence for the following functions which are better
served by rough sketches than by the very realistic imagery favoured in machine
visualising systems.
1. Sketches are intermediate representational types which facilitate the
mental translation between descriptive and depictive modes of representing visual
thought.
2. Sketch attributes exploit automatic processes of perceptual retrieval and
object recognition to improve the availability of tacit knowledge for visual
invention.
3. Sketches are percept-image hybrids. The incomplete physical attributes
of sketches elicit and stabilise a stream of super-imposed mental images which
amplify inventive thought.
4. By segregating and isolating meaningful components of visual
experience, sketches may assist the user to attend selectively to a limited part of a
visual task, freeing otherwise over-loaded cognitive resources for visual thought.
5. Sequences of sketches and sketching acts support the short term episodic
memory for cognitive actions. This assists creativity, providing voluntary control
over highly practised mental processes which can otherwise become stereotyped.
An attempt is made to unite the five hypothetical functions. Drawing on the
Baddeley and Hitch model of working memory, it is speculated that the five
functions may be related to a limited capacity monitoring mechanism which makes
tacit visual knowledge explicitly available for conscious control and manipulation.
It is suggested that the resources available to the human brain for imagining nonexistent
objects are a cultural adaptation of visual mechanisms which evolved in
early hominids for responding to confusing or incomplete stimuli from immediately
present objects and events. Sketches are cultural inventions which artificially
mimic aspects of such stimuli in order to capture these shared resources for the
different purpose of imagining objects which do not yet exist.
Finally, the implications of the theory for the design of improved machine
systems are discussed. The untidy attributes of traditional sketches are revealed to
include cultural inventions which serve subtle cognitive functions. However,
traditional media have many shortcomings which it should be possible to correct
with new technology. Existing machine systems for sketching tend to imitate
non-selectively the media-bound properties of sketches without regard to the
functions they serve. This may prove to be a mistake. It is concluded that new
system designs are needed in which meaningfully structured data and specialised
imagery amplify, without interference or replacement, the impressive but limited
creative resources of the visual brain.
Fine Art Pattern Extraction and Recognition
This is a reprint of articles from the Special Issue published online in the open access journal Journal of Imaging (ISSN 2313-433X) (available at: https://www.mdpi.com/journal/jimaging/special_issues/faper2020)
An intelligent knowledge based cost modelling system for innovative product development
This research work aims to develop an intelligent knowledge-based system for product
cost modelling and design for automation at an early design stage of the product
development cycle, enabling designers and manufacturing planners to make more
accurate estimates of the product cost and, consequently, to respond more quickly
to customers' expectations. The main objectives of the research are to: (1) develop a prototype system
that assists an inexperienced designer to estimate the manufacturing cost of the product,
(2) advise designers on how to eliminate design and manufacturing related conflicts that
may arise during the product development process, (3) recommend the most economic
assembly technique for the product in order to consider this technique during the design
process and provide design improvement suggestions to simplify the assembly
operations (i.e. to provide an opportunity for designers to design for assembly (DFA)),
(4) apply a fuzzy logic approach to certain cases, and (5) evaluate the developed
prototype system through five case studies.
The developed system for cost modelling comprises a CAD solid modelling system,
a material selection module, knowledge-based system (KBS), process optimisation
module, design for assembly module, cost estimation technique module, and a user
interface. In addition, the system encompasses two types of databases, permanent
(static) and temporary (dynamic). These are categorised into five separate
groups: the Feature, Material, Machinability, Machine, and Mould databases.
The system development process has passed through four major steps: first,
constructing the knowledge-based and process optimisation system; second,
developing a design for assembly module; third, integrating the KBS with both
the material selection database and a CAD system; and finally, developing and
implementing a fuzzy logic approach to generate reliable cost estimates and to
handle the uncertainty in the cost estimation model that cannot be addressed by
traditional analytical methods.
The developed system has, besides estimating the total cost of a product, the capability
to: (1) select a material as well as the machining processes, their sequence and
machining parameters based on a set of design and production parameters that the user
provides to the system, and (2) recommend the most economic assembly technique for a
product and provide design improvement suggestion, in the early stages of the design
process, based on a design feasibility technique. It provides recommendations when a
design cannot be manufactured with the available manufacturing resources and
capabilities. In addition, a feature-by-feature cost estimation report was generated using
the system to highlight the features of high manufacturing cost. The system can be
applied without the need for detailed design information, so it can be implemented
at an early design stage and, consequently, costly redesign and longer lead times
can be avoided. One of the tangible advantages of this system is that it warns
users of features that are costly and difficult to manufacture. In addition, the
system is developed in such a way that users can modify the product design at any
stage of the design process.
This research dealt with cost modelling of both machined components and injection
moulded components.
The developed cost effective design environment was evaluated on real products,
including a scientific calculator, a telephone handset, and two machined components.
Conclusions drawn from the evaluation indicated that the developed prototype
system could help companies reduce product cost and lead time by estimating the
total product cost throughout the entire product development cycle, including
assembly cost. Case studies demonstrated that designing a product using the
developed system is more cost effective than using traditional systems. The cost
estimated for a number of products used in the case studies was approximately 10
to 15% less than the cost estimated by the traditional approach, since the latter
takes into consideration neither process optimisation, nor design alternatives,
nor design-for-assembly issues.
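One way a fuzzy-logic approach can carry uncertainty through a cost estimate, as opposed to the crisp figures of traditional analytical methods, is to model each cost as a triangular fuzzy number. This is a minimal sketch; the per-feature figures are illustrative assumptions, not outputs of the system described above.

```python
# Cost estimation with triangular fuzzy numbers (low, likely, high):
# uncertain per-feature costs are added component-wise, then defuzzified
# into a single crisp estimate. All figures are illustrative assumptions.

def tfn_add(a, b):
    """Add two triangular fuzzy numbers component-wise."""
    return tuple(x + y for x, y in zip(a, b))

def defuzzify(a):
    """Centroid of a triangular fuzzy number."""
    return sum(a) / 3.0

# Per-feature machining cost estimates (low, likely, high) in currency units
feature_costs = [(2.0, 2.5, 3.5), (1.0, 1.2, 1.8), (4.0, 5.0, 7.0)]
assembly = (0.5, 0.8, 1.2)

total = assembly
for c in feature_costs:
    total = tfn_add(total, c)

print("total cost (low, likely, high):", total)
print("crisp estimate:", round(defuzzify(total), 2))
```

The spread between the low and high components also flags which features dominate the cost uncertainty, which is the kind of feature-by-feature warning the report generation described above provides.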
Automated visual inspection for the quality control of pad printing
Pad printing is used to decorate consumer goods largely because of its unique ability to apply graphics to doubly curved surfaces. The Intelpadrint project was conceived to develop a better understanding of the process and new printing pads, inks and printers. This thesis deals primarily with research into a printer control system incorporating machine vision. At present, printing is manually controlled; operator knowledge was therefore gathered for use by an expert system to control the process. A novel local corner-matching algorithm was conceived to effect image segmentation, and neuro-fuzzy techniques were used to recognise patterns in printing errors. Non-linear finite element analysis of the rubber printing pad led to a method for pre-distorting artwork so that it prints undistorted on a curved product. A flexible, more automated printer was developed that achieves a higher printing rate, and ultraviolet-cured inks with improved printability were developed. The image normalisation/error-signalling stage in inspection was proven in isolation, as was the pattern recognition system.
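Local corner matching for segmentation can be sketched in miniature. The actual Intelpadrint algorithm is not reproduced here; this 3x3 template scan over a binary image is an illustrative stand-in for the idea of locating shape corners by matching small local patterns.

```python
import numpy as np

# Scan a binary image with 3x3 right-angle corner templates and report the
# window centres that match; a toy stand-in for local corner matching.

def find_corners(img):
    """Return (row, col) centres whose 3x3 neighbourhood matches any
    90-degree corner template in a binary image."""
    base = np.array([[0, 0, 0],
                     [1, 1, 0],
                     [1, 1, 0]])                  # fill in the bottom-left
    templates = [np.rot90(base, k) for k in range(4)]
    hits = []
    for r in range(img.shape[0] - 2):
        for c in range(img.shape[1] - 2):
            patch = img[r:r + 3, c:c + 3]
            if any((patch == t).all() for t in templates):
                hits.append((r + 1, c + 1))       # centre of the 3x3 window
    return hits

# A 5x5 image containing one filled 2x2 square touching the left edge
img = np.zeros((5, 5), dtype=int)
img[2:4, 0:2] = 1
print(find_corners(img))
```

Matched corners give anchor points from which region boundaries can be traced, which is the role corner matching plays in segmenting the printed image for inspection.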