55 research outputs found
Firefly algorithm for polynomial Bézier surface parameterization
A classical issue in many applied fields is to obtain a surface approximating a given set of data points. This problem arises in Computer-Aided Design and Manufacturing (CAD/CAM), virtual reality, medical imaging, computer graphics, computer animation, and many other areas. Very often, the preferred approximating surface is polynomial, usually described in parametric form. This leads to the problem of determining suitable parametric values for the data points, the so-called surface parameterization. In real-world settings, data points are generally irregularly sampled and subject to measurement noise, leading to a very difficult nonlinear continuous optimization problem that cannot be solved with standard optimization techniques. This paper solves the parameterization problem for polynomial Bézier surfaces by applying the firefly algorithm, a powerful nature-inspired metaheuristic introduced recently to address difficult optimization problems. The method has been successfully applied to several illustrative examples of open and closed surfaces, including shapes with singularities. Our results show that the method performs very well, yielding the best approximating surface with a high degree of accuracy.
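The abstract does not spell out the algorithm itself, so the following is a minimal generic sketch of the firefly metaheuristic applied to a toy objective, not the paper's Bézier parameterization setup; all names, parameter values, and the sphere objective are illustrative assumptions.

```python
import math
import random

def firefly_minimize(objective, dim, bounds, n_fireflies=15, n_iters=100,
                     alpha=0.2, beta0=1.0, gamma=0.01, seed=0):
    """Minimize `objective` over a box using a basic firefly algorithm.

    Brighter (lower-cost) fireflies attract dimmer ones; attractiveness
    decays with distance as beta0 * exp(-gamma * r^2), plus a small
    random walk of size `alpha`.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_fireflies)]
    cost = [objective(x) for x in pop]
    for _ in range(n_iters):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if cost[j] < cost[i]:        # j is brighter: move i toward j
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    pop[i] = [min(hi, max(lo, a + beta * (b - a)
                                          + alpha * (rng.random() - 0.5)))
                              for a, b in zip(pop[i], pop[j])]
                    cost[i] = objective(pop[i])
    best = min(range(n_fireflies), key=lambda k: cost[k])
    return pop[best], cost[best]

# toy usage: recover the minimum of a sphere function
x_best, f_best = firefly_minimize(lambda x: sum(v * v for v in x),
                                  dim=3, bounds=(-5.0, 5.0))
```

In the paper's setting the decision variables would be the data-point parameters of the Bézier surface and the objective the least-squares fitting error; here a sphere function stands in for that objective.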
Towards Combining Individual and Collaborative Work Spaces under a Unified E-Portfolio
Proceedings of: 11th International Conference on Computational Science and Applications (ICCSA 2011). Santander, Spain, June 20-23, 2011. E-portfolios in learning environments have been attributed numerous benefits and their presence has been steadily increasing, as has the variety of environments in which a student participates. Collaborative learning requires communication and resource sharing among team members. Students may participate in multiple teams over a long period of time, sometimes even simultaneously. Conventional e-portfolios are oriented toward showcasing individual achievements, but they also need to reflect collaborative achievements equally. The approach described in this paper aims to offer students an e-portfolio as a local folder on their personal computer containing a combined view of their individual and collaborative work spaces. The content of this folder can be synchronized with a remote server, thus achieving resource sharing and the publication of a clearly identified set of resources. Work partially funded by the Learn3 project, "Plan Nacional de I+D+I TIN2008-05163/TSI", the Consejo Social - Universidad Carlos III de Madrid, the Acción Integrada Ref. DE2009-0051, and the "Emadrid: Investigación y desarrollo de tecnologías para el e-learning en la Comunidad de Madrid" project (S2009/TIC-1650).
Stability of the weighted splitting finite-difference scheme for a two-dimensional parabolic equation with two nonlocal integral conditions
Abstract: Nonlocal conditions arise in mathematical models of various physical, chemical, and biological processes. Interest in computational techniques for the numerical solution of partial differential equations (PDEs) with various types of nonlocal conditions has therefore been growing fast. We construct and analyse a weighted splitting finite-difference scheme for a two-dimensional parabolic equation with nonlocal integral conditions, paying particular attention to the stability of the method. We apply a stability analysis technique based on investigating the spectral structure of the transition matrix of the finite-difference scheme. We demonstrate that, depending on the parameters of the finite-difference scheme and the nonlocal conditions, the proposed method can be stable or unstable. The results of numerical experiments with several test problems are also presented and validate the theoretical results.
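The spectral stability idea can be illustrated on a much simpler case than the paper's 2D nonlocal problem: for the classical weighted (theta) scheme for the 1D heat equation with Dirichlet conditions, the eigenvalues of the transition operator are known in closed form, so stability reduces to checking that no amplification factor exceeds 1 in modulus. This sketch uses that textbook analogue; it is not the authors' scheme.

```python
import math

def max_amplification(theta, mu, n=50):
    """Largest |g_k| over the discrete Fourier modes of the weighted
    (theta) scheme for u_t = u_xx on a uniform grid, with mu = tau/h^2.

    The amplification factors are
        g_k = (1 - 4*(1-theta)*mu*s) / (1 + 4*theta*mu*s),
        s   = sin^2(k*pi/(2*n)),  k = 1, ..., n-1.
    The scheme is stable when max_k |g_k| <= 1.
    """
    worst = 0.0
    for k in range(1, n):
        s = math.sin(k * math.pi / (2 * n)) ** 2
        g = (1 - 4 * (1 - theta) * mu * s) / (1 + 4 * theta * mu * s)
        worst = max(worst, abs(g))
    return worst

# explicit scheme (theta = 0): stable only for mu <= 1/2
# Crank-Nicolson (theta = 1/2): stable for any mu > 0
```

For the nonlocal integral conditions studied in the paper, the transition matrix is no longer symmetric and its spectrum must be investigated directly, which is why stability there depends on the weights and the nonlocal parameters.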
Semantic model for mining e-learning usage with ontology and meaningful learning characteristics
The use of e-learning in higher education institutions is a necessity in the learning process. E-learning accumulates a vast amount of usage data that could yield new knowledge useful to educators. Gaining knowledge from e-learning usage data requires a correct mechanism to extract exact information. Current models for mining e-learning usage have focused on activity usage but ignored action usage. In addition, these models lack the ability to incorporate learning pedagogy, leading to a semantic gap when annotating mined data for the education domain. A further issue is the absence of usage recommendations derived from the results of the data mining task. This research proposes a semantic model for mining e-learning usage with an ontology and meaningful learning characteristics. The model starts by preparing the data, including activity and action hits. The next step is to calculate meaningful hits, which are categorized into five types: active, cooperative, constructive, authentic, and intentional. The process continues by applying K-means clustering analysis to group the usage data into three clusters. Lastly, the usage data is mapped into the ontology, and the ontology manager generates the meaningful usage clusters and usage recommendations. The model was tested with three datasets from distinct courses and evaluated by mapping the results against the student learning outcomes of the courses. The results showed a positive relationship between meaningful hits and learning outcomes, and a positive relationship between meaningful usage clusters and learning outcomes. It can be concluded that the proposed semantic model is valid at a 95% confidence level. The model is capable of mining and gaining insight into e-learning usage data and of providing usage recommendations.
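The clustering step described above can be sketched as plain K-means over per-student vectors of the five meaningful-hit counts. The data, the deterministic initialization, and the helper name below are illustrative assumptions, not the authors' implementation.

```python
def kmeans(points, k=3, n_iters=50):
    """Basic K-means over feature vectors (tuples of numbers).

    Initialization: the first k points are used as starting centers
    (real implementations would use random or k-means++ seeding).
    """
    centers = [tuple(p) for p in points[:k]]
    assign = [0] * len(points)
    for _ in range(n_iters):
        # assignment step: nearest center by squared Euclidean distance
        assign = [
            min(range(k),
                key=lambda c: sum((a - b) ** 2
                                  for a, b in zip(x, centers[c])))
            for x in points
        ]
        # update step: move each center to the mean of its members
        for c in range(k):
            members = [x for x, lab in zip(points, assign) if lab == c]
            if members:
                centers[c] = tuple(sum(col) / len(members)
                                   for col in zip(*members))
    return centers, assign

# hypothetical per-student meaningful-hit vectors:
# (active, cooperative, constructive, authentic, intentional)
hits = [(90, 40, 70, 10, 30), (85, 35, 65, 12, 28),   # heavy users
        (20, 5, 10, 2, 4), (25, 8, 12, 1, 6),         # light users
        (55, 20, 40, 6, 15), (60, 22, 38, 7, 14)]     # moderate users
centers, labels = kmeans(hits, k=3)
```

In the proposed model the resulting three clusters would then be mapped into the ontology so that the ontology manager can attach a usage recommendation to each group.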
Configuration Analysis for Large Scale Feature Models: Towards Speculative-Based Solutions
High-variability systems are software systems in which variability management is a central activity. Current examples of high-variability systems include the Drupal web content management system, the Linux kernel, and the Debian Linux distributions.
Configuration in high-variability systems is the selection of configuration options according to their configuration constraints and the user requirements. Feature models are a de facto standard for modelling the common and variable functionalities of high-variability systems. However, the large number of components and configurations that a feature model may contain makes the manual analysis of these models a very costly and error-prone task. This gave rise to the automated analysis of feature models, with computer-assisted mechanisms and tools for extracting information from these models. Traditional solutions for the automated analysis of feature models follow a sequential-computing approach, using a single central processing unit and memory. These solutions are adequate for small-scale systems, but they incur high computational costs when working with large-scale, high-variability systems. Although computing resources exist to improve the performance of such solutions, every solution with a sequential-computing approach needs to be adapted to use these resources efficiently and to optimize its computational performance. Examples of such resources are multi-core technology for parallel computing and network technology for distributed computing.
This thesis explores the adaptation and scalability of solutions for the automated analysis of large-scale feature models. First, we present the use of speculative programming to parallelize solutions. In addition, we approach a configuration problem from another perspective, solving it by adapting and applying a non-traditional solution. Finally, we validate the scalability and computational performance improvements of these solutions for the automated analysis of large-scale feature models.
Concretely, the main contributions of this thesis are:
• Speculative programming for the detection of a minimal preferred conflict. Minimal conflict detection algorithms determine the minimal set of conflicting constraints responsible for the faulty behaviour of the model under analysis. We propose a solution that uses speculative programming to execute in parallel, and thereby reduce the running time of, the computationally expensive operations that determine the control flow in the detection of a minimal preferred conflict in large-scale feature models.
• Speculative programming for a minimal preferred diagnosis. Minimal diagnosis algorithms determine a minimal set of constraints that, by a suitable adaptation of their state, yield a consistent, conflict-free model. This work presents a solution for the minimal preferred diagnosis of large-scale feature models based on the speculative, parallel execution of the computationally expensive operations that determine the control flow, thereby reducing the running time of the solution.
• Minimal preferred completion of a model configuration by diagnosis. Existing solutions for completing a partial configuration determine a set of options, not necessarily minimal or preferred, that yields a complete configuration. This thesis solves the minimal preferred completion of a model configuration using techniques previously employed in the context of feature model diagnosis.
This thesis verifies that all our solutions preserve the expected output values while also delivering performance improvements in the automated analysis of large-scale feature models for the operations described.
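The core speculative idea in the contributions above is to launch the expensive consistency checks for both possible outcomes of a branch before the branch decision is known, discarding the result that turns out to be unneeded. The sketch below illustrates only that pattern with a toy consistency check; the function names and toy constraint semantics are assumptions, not the thesis implementation (which targets real feature-model solvers).

```python
from concurrent.futures import ThreadPoolExecutor

def is_consistent(constraints):
    """Stand-in for an expensive solver call. Toy semantics: a list of
    (option, value) pairs is inconsistent if some option is assigned
    both True and False."""
    seen = {}
    for name, value in constraints:
        if seen.get(name, value) != value:
            return False
        seen[name] = value
    return True

def speculative_branch(executor, candidate_a, candidate_b):
    """Submit both consistency checks before knowing which branch the
    conflict-detection algorithm will take; the caller then uses the
    result it needs and simply discards the other."""
    fut_a = executor.submit(is_consistent, candidate_a)
    fut_b = executor.submit(is_consistent, candidate_b)
    return fut_a.result(), fut_b.result()

with ThreadPoolExecutor(max_workers=2) as pool:
    ok_a, ok_b = speculative_branch(
        pool,
        [("X", True), ("Y", False)],   # consistent candidate subset
        [("X", True), ("X", False)],   # contradictory candidate subset
    )
```

When the checks dominate the running time, overlapping the two calls hides the latency of the branch not taken, which is the speed-up the thesis exploits at large scale.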
Orchestration of e-learning services for automatic evaluation of programming exercises
Managing programming exercises requires several heterogeneous systems, such as evaluation engines, learning object repositories, and exercise resolution environments. Coordinating networks of such disparate systems is rather complex. These tools would be too specific to incorporate into an e-Learning platform, and even if they could be provided as pluggable components, the burden of maintaining them would be prohibitive for institutions with few courses in those domains. This work presents a standards-based approach for coordinating a network of e-Learning systems participating in the automatic evaluation of programming exercises. The proposed approach uses a pivot component to orchestrate the interaction among all the systems using communication standards. The approach was validated through its effective use in the classroom, and we present some preliminary results.
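The pivot pattern described above can be sketched as a central component that routes messages between registered services. Everything below is a hypothetical illustration: the class name, the roles, and the callable "services" are assumptions standing in for the real repository and evaluation-engine endpoints that the paper's system reaches through communication standards.

```python
import json

class PivotOrchestrator:
    """Hypothetical sketch of a pivot component: it knows every service
    by role and routes messages between them in a shared format."""

    def __init__(self):
        self.services = {}

    def register(self, role, handler):
        # role: e.g. "repository" or "evaluator"; handler: a callable
        # standing in for a remote service endpoint
        self.services[role] = handler

    def evaluate_submission(self, exercise_id, program):
        # fetch the exercise from the repository, then forward the
        # student's program to the evaluation engine
        exercise = self.services["repository"](exercise_id)
        report = self.services["evaluator"](
            {"exercise": exercise, "program": program})
        return json.dumps(report)

# toy services standing in for real repository/engine endpoints
pivot = PivotOrchestrator()
pivot.register("repository", lambda eid: {"id": eid, "expected": "42"})
pivot.register("evaluator",
               lambda msg: {"passed":
                            msg["program"]() == msg["exercise"]["expected"]})
result = pivot.evaluate_submission("ex-1", lambda: "42")
```

The benefit of the pivot is that each service only has to speak the shared format to one component, instead of every system integrating with every other.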
Computational intelligence approaches to robotics, automation, and control [Volume guest editors]
No abstract available