23 research outputs found
Parallel model of online sequential extreme learning machines for classification problems with large-scale databases
Nowadays, databases in real-world applications reach terabyte or petabyte scale. Training neural networks on them in reasonable time is therefore challenging and requires costly computational architectures. The online sequential extreme learning machine (OS-ELM) is a variant of the ELM proposed for real-world applications.
This algorithm incorporates new data using previously computed results, without revisiting earlier samples. In this work, we present a parallel model of OS-ELM for classification problems on large-scale databases. The model trains several OS-ELMs using multithreaded programming: the training dataset is distributed according to the number of working threads, the test dataset is then classified by every pre-trained OS-ELM, and the final labels are decided by a frequency (majority-vote) criterion. Preliminary results show that increasing the number of threads decreases the training time without significantly affecting the test accuracy of each OS-ELM.
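The scheme above can be sketched as follows. The OS-ELM update used here is the standard recursive least-squares formulation; the network sizes, chunk size, seeds, and the two-blob demo data are all hypothetical, chosen only to make the sketch self-contained.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

class OSELM:
    """Minimal OS-ELM for classification (sketch, not the paper's code)."""
    def __init__(self, n_in, n_hidden, n_classes, seed):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((n_in, n_hidden))  # random input weights
        self.b = rng.standard_normal(n_hidden)          # random biases
        self.n_classes = n_classes

    def _h(self, X):
        return np.tanh(X @ self.W + self.b)             # hidden-layer output

    def fit_initial(self, X, y):
        H = self._h(X)
        T = np.eye(self.n_classes)[y]                   # one-hot targets
        # Small ridge term keeps H^T H invertible on the initial block.
        self.P = np.linalg.inv(H.T @ H + 1e-3 * np.eye(H.shape[1]))
        self.beta = self.P @ H.T @ T

    def partial_fit(self, X, y):
        # Recursive least-squares update: old data is never revisited.
        H = self._h(X)
        T = np.eye(self.n_classes)[y]
        K = np.linalg.inv(np.eye(len(X)) + H @ self.P @ H.T)
        self.P = self.P - self.P @ H.T @ K @ H @ self.P
        self.beta = self.beta + self.P @ H.T @ (T - H @ self.beta)

    def predict(self, X):
        return np.argmax(self._h(X) @ self.beta, axis=1)

def train_on_partition(seed, X, y, chunk=50):
    model = OSELM(X.shape[1], 40, 2, seed)
    model.fit_initial(X[:chunk], y[:chunk])
    for i in range(chunk, len(X), chunk):
        model.partial_fit(X[i:i + chunk], y[i:i + chunk])
    return model

# Hypothetical demo data: two well-separated Gaussian blobs.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)
idx = rng.permutation(400)
X, y = X[idx], y[idx]
Xtr, ytr, Xte, yte = X[:300], y[:300], X[300:], y[300:]

# Distribute the training set across worker threads, one OS-ELM each.
parts = np.array_split(np.arange(300), 4)
with ThreadPoolExecutor(max_workers=4) as ex:
    models = list(ex.map(
        lambda a: train_on_partition(a[0], Xtr[a[1]], ytr[a[1]]),
        enumerate(parts)))

# Frequency (majority-vote) criterion over the per-thread predictions.
votes = np.stack([m.predict(Xte) for m in models])
pred = np.array([np.bincount(col).argmax() for col in votes.T])
accuracy = (pred == yte).mean()
```

Each thread owns an independent OS-ELM, so no locking is needed during training; only the final vote aggregates across threads.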
Social interaction as a process of encounter or misencounter in the academic learning of adolescents
This results article addresses academic learning in adolescence as a fundamental process for determining the factors that undermine performance through interaction with parents, teachers, and classmates. Communication breakdowns with parents, the group's academic disinterest, and the mediation of learning by homework tutors stand out. The research was grounded in theoretical analyses of social learning drawing on Vygotsky, Bandura, and ecological theories, which allow exploring the group's interest in optimizing individual learning. Methodologically, the work followed a qualitative, phenomenological approach with a descriptive scope. The sample consisted of 30 students, their parents, and five eighth-grade teachers from the Colegio Sagrado Corazón de Jesús in Cúcuta. Data were collected through semi-structured interviews and direct participant observation recorded in a field diary, while the information was analyzed through triangulation across participants. This phenomenological study made it possible to understand social learning as a fundamental element for improving academic performance.
Automatic segmentation of a meningioma using a computational technique in magnetic resonance imaging
Through this work we propose a computational technique for the segmentation of a brain tumor, identified as a meningioma (MGT), present in magnetic resonance imaging (MRI). The technique consists of three stages developed in the three-dimensional domain: pre-processing, segmentation, and post-processing. The percent relative error (PrE) is used to compare the manual segmentations of the MGT, generated by a neuro-oncologist, with the dilated segmentations of the MGT obtained automatically. The parameter combination yielding the lowest PrE provides the optimal parameters of each computational algorithm that makes up the proposed technique. The results report a PrE of 1.44%, showing excellent agreement between the manual segmentations and those produced by the developed computational technique.
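The PrE metric can be illustrated on binary 3-D masks. The abstract does not state the exact formula, so this sketch assumes the usual definition, |V_manual − V_auto| / V_manual × 100, with voxel counts standing in for volumes; the toy masks are hypothetical.

```python
import numpy as np

def percent_relative_error(manual_mask, auto_mask):
    """PrE between a manual and an automatic 3-D segmentation,
    using voxel counts as a volume proxy (assumed definition)."""
    v_manual = np.count_nonzero(manual_mask)
    v_auto = np.count_nonzero(auto_mask)
    return abs(v_manual - v_auto) / v_manual * 100.0

# Toy masks: a 10x10x10 cube vs. a cube one slice thinner.
manual = np.zeros((20, 20, 20), dtype=bool)
manual[5:15, 5:15, 5:15] = True          # 1000 voxels
auto = np.zeros_like(manual)
auto[5:15, 5:15, 5:14] = True            # 900 voxels
pre = percent_relative_error(manual, auto)   # 10.0
```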
Segmentación automática de un meningioma usando una técnica computacional en imágenes de resonancia magnética
A Parallel Computing Method for the Computation of the Moore–Penrose Generalized Inverse for Shared-Memory Architectures
The computation of the Moore–Penrose generalized inverse is a common operation in various fields, such as the training of neural networks based on random weights, so computing this inverse quickly is important for problems where such networks provide a solution. However, owing to the growth of databases, the matrices involved have large dimensions, requiring significant processing and execution time. In this paper, we propose a parallel computing method for the computation of the Moore–Penrose generalized inverse of large full-rank rectangular matrices. The proposed method employs the Strassen algorithm to compute the inverse of a nonsingular matrix and is implemented on a shared-memory architecture. The results show a significant reduction in computation time, especially for high-rank matrices. Furthermore, in a sequential scenario (a single execution thread), our method achieves a reduced computation time compared with other previously reported algorithms. Consequently, our approach provides a promising solution for the efficient computation of the Moore–Penrose generalized inverse of the large matrices employed in practical scenarios.
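For a full-rank rectangular matrix, the pseudoinverse reduces to inverting one small square matrix, which is the step the paper accelerates with a parallel Strassen scheme. The sketch below uses `np.linalg.inv` as a stand-in for that inner inversion; it is not the paper's implementation.

```python
import numpy as np

def pinv_full_rank(A):
    """Moore-Penrose inverse of a full-rank rectangular matrix.
    m >= n (full column rank):  A+ = (A^T A)^-1 A^T
    m <  n (full row rank):     A+ = A^T (A A^T)^-1
    The inner inversion is where a Strassen-based parallel
    inverse would be plugged in."""
    m, n = A.shape
    if m >= n:
        return np.linalg.inv(A.T @ A) @ A.T
    return A.T @ np.linalg.inv(A @ A.T)

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 3))      # full column rank w.h.p.
Ap = pinv_full_rank(A)
# Verify the defining Penrose condition A A+ A = A.
ok = np.allclose(A @ Ap @ A, A)
```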
Mathematical argumentation in the classroom
The article presents some interpretive findings on "mathematical argumentation in the classroom", analyzed through two fundamental categories in the development of an oral mathematical argumentation process for the conviction, contradiction, and validation of a written mathematical argumentation process. The research addressed two central categories of argumentation as a discursive form: the epistemic position and the discursive position that students reveal when mathematically arguing the solution to a problem situation. The study was developed under the interpretative paradigm through a case-study design, using the focus-group theory and technique for data collection. The findings evidenced difficulties in moving from the semantic to the theoretical within the epistemic position; regarding the discursive position, three discursive forms were revealed: description, explanation, and argumentation, the last being the least used by the students.
Volumetry of subdural hematomas in computed tomography images: ABC methods versus an intelligent computational technique
This work evaluates the performance of several methods for estimating the volume of four subdural hematomas (SDH) present in multi-slice computed tomography images. First, a reference volume is specified: the volume obtained by a neurosurgeon using the manual planimetric method (MPM), which generates manual segmentations of space-occupying lesions; here, these volumes correspond to the SDHs. In parallel, the volumetry of the four SDHs is obtained using both the original ABC/2 method and two of its variants, identified in this paper as the ABC/3 and 2ABC/3 methods. The ABC methods compute the hematoma volume under the assumption that the SDH has an ellipsoidal shape. Third, the SDHs are studied through an intelligent automatic technique (SAT) that generates the three-dimensional segmentation of each SDH. Finally, the percent relative error is calculated as a metric to evaluate the methodologies considered. The results show that the SAT method exhibits the best performance, with an average percentage error below 5%.
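The ABC family of estimates is a single ellipsoid-approximation formula with different divisors. A minimal sketch, with hypothetical diameters; the paper's measurement protocol for A, B, and C is not reproduced here.

```python
def abc_volumes(A, B, C):
    """Ellipsoid-approximation volume estimates for a hematoma,
    given its three largest orthogonal diameters.
    ABC/2 is the classic bedside formula; ABC/3 and 2ABC/3 are
    the variants compared in the paper."""
    return {
        "ABC/2": A * B * C / 2.0,
        "ABC/3": A * B * C / 3.0,
        "2ABC/3": 2.0 * A * B * C / 3.0,
    }

v = abc_volumes(8.0, 3.0, 5.0)   # hypothetical diameters in cm
# v["ABC/2"] -> 60.0, v["ABC/3"] -> 40.0, v["2ABC/3"] -> 80.0
```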
Volumetry of epidural hematomas in computed tomography images: Comparative study between linear and volumetric methods
This work evaluates the performance of several methods employed for assessing the volume of seven epidural hematomas (EDH) present in multi-slice computed tomography images. First, the reference volume is taken to be that obtained by a neurosurgeon using the manual planimetric method (MPM). Second, the volume of the seven EDHs is obtained using both the original ABC/2 method and two of its variants, identified in this paper as the ABC/3 and 2ABC/3 methods. The ABC methods compute the hematoma volume under the assumption that the EDH has an ellipsoidal shape. Third, an intelligent automatic technique (SAT) is implemented that generates the three-dimensional segmentation of each EDH, from which the hematoma volume is calculated. The SAT consists of pre-processing, segmentation, and post-processing stages. To judge the performance of the SAT, the Dice coefficient (Dc) is used to compare the dilated EDH segmentations with the manually generated ones. Finally, the percent relative error is calculated as a metric to evaluate the methodologies considered. The results show that the SAT method exhibits the best performance, with an average percentage error below 2%.
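The Dice coefficient used to validate the SAT segmentations is the standard overlap measure Dc = 2|A ∩ B| / (|A| + |B|). A minimal sketch on toy 2-D masks (the paper applies it to 3-D volumes, but the formula is identical):

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks."""
    a = a.astype(bool)
    b = b.astype(bool)
    inter = np.count_nonzero(a & b)
    return 2.0 * inter / (np.count_nonzero(a) + np.count_nonzero(b))

# Toy masks: b covers two thirds of a.
a = np.zeros((10, 10), dtype=bool); a[2:8, 2:8] = True   # 36 px
b = np.zeros((10, 10), dtype=bool); b[4:8, 2:8] = True   # 24 px
d = dice(a, b)   # 2*24 / (36+24) = 0.8
```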
Estimación del tamaño de hematomas epidurales en imágenes de tomografía computarizada: estudio comparativo entre métodos lineales y volumétricos
Volumetría de hematomas subdurales en imágenes de tomografía computarizada: métodos abc versus una técnica computacional inteligente