Usage Bibliometrics
Scholarly usage data provides unique opportunities to address the known
shortcomings of citation analysis. However, the collection, processing, and
analysis of usage data remain an area of active research. This article
provides a review of the state of the art in usage-based informetrics, i.e. the
use of usage data to study the scholarly process.
Comment: Publisher's PDF (by permission). Publisher web site:
books.infotoday.com/asist/arist44.shtm
The KUKA youBot controller
Nowadays, many motherboards and PCBs exhibit faults due to the end of their service
life or to errors in the production process. In many of these cases, they are sorted
through manual operations, potentially exposing workers to toxic elements. For this
reason, this project aims to detect such faults by processing images taken with a
Siemens MV440 camera in GNU Octave, and to sort the boards by programming a
KUKA youBot robot arm within the Linux environment known as Robot Operating System,
or ROS. Prior to programming the robot, its behaviour is simulated in Gazebo.
Vilnius Gediminas Technical University. Faculty of Electronics. Department of Electrical Engineering. Degree in Industrial Electronics and Automation Engineering.
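The project performs its image processing in GNU Octave; as a rough illustration of the underlying idea (comparing a captured board image against a known-good reference to flag faults), here is a hypothetical NumPy sketch. The function name, threshold, and the tiny synthetic images are all invented for illustration:

```python
import numpy as np

def detect_defects(reference, board, threshold=0.2, min_region=4):
    """Flag a board image as defective by comparing it pixel-by-pixel against
    a known-good reference: pixels whose absolute intensity difference exceeds
    `threshold` are counted, and the board fails if at least `min_region`
    such pixels accumulate."""
    diff = np.abs(reference.astype(float) - board.astype(float))
    defect_pixels = int((diff > threshold).sum())
    return defect_pixels >= min_region, defect_pixels

# A 4x4 grayscale "reference" with a bright component region, and a "board"
# on which that component is missing entirely.
reference = np.zeros((4, 4))
reference[1:3, 1:3] = 1.0
board = np.zeros((4, 4))

is_defective, n = detect_defects(reference, board)
```

In practice the thesis's pipeline would also need registration (aligning camera images before differencing) and lighting normalization, which this sketch omits.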
Framework for data quality in knowledge discovery tasks
The creation and consumption of data continue to grow by leaps and bounds. Due
to advances in Information and Communication Technologies (ICT), the data
explosion in the digital universe is now a prevailing trend. Knowledge Discovery
in Databases (KDD) has gained importance due to the abundance of data. For a
successful knowledge discovery process, the data must first be prepared. Experts
affirm that the preprocessing phase takes 50% to 70% of the total time of a
knowledge discovery process.
Software tools based on knowledge discovery methodologies offer algorithms
for data preprocessing. According to the Gartner 2018 Magic Quadrant for
Data Science and Machine Learning Platforms, KNIME, RapidMiner, SAS, Alteryx
and H2O.ai are the leading tools for knowledge discovery. These tools
provide different techniques that facilitate the evaluation of datasets;
however, they lack any kind of guidance as to which techniques can or should
be used in which contexts. Consequently, selecting suitable data cleaning
techniques is a challenge for inexperienced users, who have no clear idea
which methods can be used with confidence and often resort to trial and error.
This thesis presents three contributions to address the aforementioned problems:
(i) a conceptual framework that guides the user in addressing data quality
issues in knowledge discovery tasks, (ii) a case-based reasoning system that
recommends suitable algorithms for data cleaning, and (iii) an ontology that
represents the knowledge of data quality issues and data cleaning methods.
In addition, this ontology supports the case-based reasoning system in the
case representation and reuse phases.
Official Doctoral Programme in Computer Science and Technology. Chair: Fernando Fernández Rebollo. Secretary: Gustavo Adolfo Ramírez. Committee member: Juan Pedro Caraça-Valente Hernández.
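The recommendation idea in contribution (ii) can be sketched as the retrieve step of a case-based reasoning cycle: describe a dataset by a data-quality profile, find the most similar past case, and reuse its cleaning algorithm. The feature profiles, distance choice, and algorithm names below are invented for illustration; the thesis's actual case representation is ontology-based:

```python
import math

# Each stored case pairs a data-quality profile (here: missing-value ratio,
# duplicate ratio, outlier ratio) with the cleaning algorithm that worked.
case_base = [
    ((0.30, 0.05, 0.02), "mean-imputation"),
    ((0.02, 0.40, 0.01), "deduplication"),
    ((0.03, 0.02, 0.25), "winsorization"),
]

def recommend(profile, cases):
    """Retrieve phase of a CBR cycle: return the cleaning algorithm of the
    nearest stored case under Euclidean distance."""
    _, best_algorithm = min(cases, key=lambda c: math.dist(c[0], profile))
    return best_algorithm

# A new dataset dominated by missing values resembles the first case,
# so its cleaning algorithm is reused.
suggestion = recommend((0.28, 0.04, 0.01), case_base)
```

A full CBR system would also revise the reused solution and retain the new case, growing the case base over time; this sketch covers retrieval only.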
Evolvable Smartphone-Based Point-of-Care Systems For In-Vitro Diagnostics
Recent developments in the life-science -omics disciplines, together with advances in micro- and nanoscale technologies, offer unprecedented opportunities to tackle some of the major healthcare challenges of our time. Lab-on-Chip technologies coupled with smart devices, in particular, constitute key enablers for the decentralization of many in-vitro medical diagnostics applications to the point of care, supporting the advent of preventive and personalized medicine.
Although the technical feasibility and the potential of Lab-on-Chip/smart-device systems have been repeatedly demonstrated, direct-to-consumer applications remain scarce. This thesis addresses this limitation. System evolvability is a key enabler to the adoption and long-lasting success of next-generation point-of-care systems: it favors the integration of new technologies, streamlines the reengineering efforts for system upgrades and limits the risk of premature system obsolescence. Among possible implementation strategies, platform-based design stands as a particularly suitable entry point. One necessary condition is for change-absorbing and change-enabling mechanisms to be incorporated in the platform architecture at initial design time. Important considerations arise as to where in Lab-on-Chip/smart-device platforms these mechanisms can be integrated, and how to implement them.
Our investigation revolves around the silicon-nanowire biological field-effect transistor, a promising biosensing technology for the detection of biological analytes at ultra-low concentrations. We discuss extensively the sensitivity and instrumentation requirements set by the technology before we present the design and implementation of an evolvable smartphone-based platform capable of interfacing lab-on-chips embedding such sensors. We elaborate on the implementation of various architectural patterns throughout the platform and present how these facilitated the evolution of the system towards one accommodating electrochemical sensing. Model-based development was undertaken throughout the engineering process. A formal SysML system model fed our evolvability assessment process. We introduce, in particular, a model-based methodology enabling the evaluation of modular scalability: the ability of a system to scale the current value of one of its specifications by successively reengineering targeted system modules.
The research work presented in this thesis provides a roadmap for the development of evolvable point-of-care systems, including those targeting direct-to-consumer applications. It extends from the early identification of anticipated change to the assessment of the ability of a system to accommodate these changes. Our research should thus interest industrial players eager not only to disrupt, but also to last in a shifting socio-technical paradigm.
Workshop proceedings: Information Systems for Space Astrophysics in the 21st Century, volume 1
The Astrophysical Information Systems Workshop was one of the three Integrated Technology Planning workshops. Its objectives were to develop an understanding of future mission requirements for information systems, the potential role of technology in meeting these requirements, and the areas in which NASA investment might have the greatest impact. Workshop participants were briefed on the astrophysical mission set, with an emphasis on those missions that drive information systems technology, the existing NASA space-science operations infrastructure, and the ongoing and planned NASA information systems technology programs. Program plans and recommendations were prepared in five technical areas: Mission Planning and Operations; Space-Borne Data Processing; Space-to-Earth Communications; Science Data Systems; and Data Analysis, Integration, and Visualization.
Proposal for an Organic Web, The missing link between the Web and the Semantic Web, Part 1
A huge amount of information is produced in digital form. The Semantic Web
stems from the realisation that dealing efficiently with this production
requires getting better at interlinking digital informational resources.
Its focus is on linking data. But linking data isn't enough. We need to
provide infrastructural support for linking all sorts of informational
resources, including resources whose understanding and fine interlinking
require domain-specific human expertise. At a time when many problems scale to
planetary dimensions, it is essential to scale the coordination of information
processing and information production without giving up on expertise and depth
of analysis, and without forcing onto thinkers, decision-makers and innovators
languages and formalisms that suit only some forms of intelligence. This
article makes a proposal in this direction, in line with the idea of
interlinking championed by the Semantic Web.
Comment: Supplementary material by Guillaume Bouzige and Mathilde Noua
Risks of Robotic Process Automation: A multivocal literature review
In recent years, many companies from different sectors have chosen to support digital
transformation in process automation using Robotic Process Automation (RPA). Indeed,
these automations have been revolutionising and benefiting the human workforce by
minimising repetitive, error-prone tasks and maximising the technical and operational
efficiency of companies. However, RPA is not without risks, since it relies on robots
devoid of any critical thinking. Thus, the present research focuses on a case study
of RPA risks, in which an in-depth analysis was conducted through a multivocal
literature review (MLR) of 107 documents gathered from across the community,
including books, scientific articles, technical reports, and conference papers,
among others. This research contributes a list of 88 risks organized, mapped and
grouped into 9 categories. In this sense, this study will assist future researchers to identify
RPA risks in order to define actions that avoid negative impacts.