
    Artificial Catalytic Reactions in 2D for Combinatorial Optimization

    Presented in this paper is a derivation of a 2D catalytic reaction-based model for solving combinatorial optimization problems (COPs). The simulated catalytic reactions, a computational metaphor, occur in an artificial chemical reactor that finds near-optimal solutions to COPs. The artificial environment is governed by catalytic reactions that can alter the structure of artificial molecular elements; altering the molecular structure means finding new solutions to the COP. The molecular mass of the elements was used as a measure of the goodness of fit of the solutions. Several data structures and matrices were used to record the directions and locations of the molecules, providing the model with its 2D topology. The Traveling Salesperson Problem (TSP) was used as a working example. The performance of the model in finding a solution for the TSP was compared to that of a topology-less model. Experimental results show that the 2D model performs better than the topology-less one. Comment: 8 pages, 2 figures, in H.N. Adorna (ed.) Proceedings of the 3rd Symposium on Mathematical Aspects of Computer Science (SMACS 2006), Adventist University of the Philippines, Silang, Cavite, Philippines, 19-20 October 2006 (Published by the Computing Society of the Philippines)
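    The molecule-as-solution idea can be sketched minimally: a "molecule" is a candidate TSP tour, its "molecular mass" is the tour length, and a "catalytic reaction" is a structure-altering operation. The function names and the segment-reversal reaction below are illustrative assumptions, not the paper's actual model.

```python
import random

def molecular_mass(tour, dist):
    """Total length of a closed tour under distance matrix `dist`
    (the paper's goodness-of-fit measure: lighter is better)."""
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def catalytic_reaction(tour, rng=random):
    """Alter the molecular structure: reverse a random segment
    (a 2-opt-style move), yielding a new candidate solution."""
    i, j = sorted(rng.sample(range(len(tour)), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
```

    A reactor loop would then repeatedly apply reactions and keep molecules whose mass decreases.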

    A System for Sensing Human Sentiments to Augment a Model for Predicting Rare Lake Events

    Fish kill events (FKE) in the caldera lake of Taal occur rarely (only 0.5\% in the last 10 years), but each event has a long-term effect on the environmental health of the lake ecosystem, as well as a devastating effect on the finances and emotions of the residents whose livelihood relies on aquaculture farming. Predicting with high accuracy when (within seven days) and where on the vast expanse of the lake FKEs will strike would be a very important early warning tool for the lake's aquaculture industry. Mathematical models developed in several past studies to predict the occurrence of FKEs use as predictors the physico-chemical characteristics of the lake water, as well as the meteorological parameters above it. Some of these models, however, did not provide acceptable predictive accuracy and enough early warning because they were developed with an unbalanced binary data set, i.e., one characterized by dense negative examples (no FKE) and highly sparse positive examples (with FKE). Other models require setting up an expensive sensor network to measure the water parameters not only at the surface but also at several depths. Presented in this paper is a system for capturing, measuring, and visualizing the contextual sentiment polarity (CSP) of dated and geolocated social media microposts of residents within a 10-km radius of the Taal Volcano crater (14^\circ N, 121^\circ E). High-frequency negative CSP co-occurred with FKE on two occasions, making human expressions viable non-physical sensors of impending FKE that can augment existing mathematical models. Comment: 20 pages, 7 figures, appeared in Proceedings of the Joint 12th International Agricultural Engineering Conference and Exhibition, 65th PSAE National Convention, and 26th Philippine Agricultural Engineering Week (PSAE 2015), KCC Convention and Events Center, General Santos City, Philippines, 19-25 April 201
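    Filtering microposts to the 10-km study radius can be done with a standard great-circle (haversine) distance check against the crater coordinates. This is a generic sketch; the function names and the post dictionary layout are assumptions, not the paper's system.

```python
from math import radians, sin, cos, asin, sqrt

CRATER = (14.0, 121.0)  # 14°N, 121°E, per the abstract

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points,
    using a mean Earth radius of 6371 km."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def within_radius(post, centre=CRATER, radius_km=10.0):
    """Keep only geolocated posts inside the study area."""
    return haversine_km(post["lat"], post["lon"], centre[0], centre[1]) <= radius_km
```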

    A Framework for a Multiagent-based Scheduling of Parallel Jobs

    This paper presents a multiagent approach as a paradigm for scheduling parallel jobs in a parallel system. Scheduling parallel jobs is performed to balance the load of a system in order to improve the performance of a parallel application. Parallel job scheduling is presented as a mapping between two graphs: one represents the dependency of jobs and the other represents the interconnection among processors. The usual implementation of parallel job scheduling algorithms is via the master-slave paradigm, which has an inherent communication bottleneck that reduces the performance of the system when more processors are needed to process the jobs. The multiagent approach attempts to distribute the communication latency among the processors, which improves the performance of the system as the number of participating processors increases. Presented in this paper is a framework for the behavior of an autonomous agent that cooperates with other agents to achieve the community goal of minimizing the processing time. Achieving this goal means an agent must truthfully share information with other agents via {\em normalization}, {\em task sharing}, and {\em result sharing} procedures. The agents treat a parallel scientific application as a finite-horizon game in which truthful information sharing results in performance improvement for the parallel application. The performance of the multiagent-based algorithm is compared to that of an existing one via a simulation of wavepacket dynamics using the quantum trajectory method (QTM) as a test application. The average parallel cost of running the QTM using the multiagent-based system is lower at higher numbers of processors. Comment: 8 pages, 8 figures, in R.P. Salda\~na (ed.) Proceedings of the 6th Philippine Computing Science Congress (PCSC 2006), Ateneo De Manila University, Loyola Heights, Quezon City, Philippines, 28-29 March 2006, pp. 81-88 (CDROM ISSN 1908-1146)
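    The job-to-processor mapping can be illustrated with a generic greedy list scheduler that always assigns the next job to the least-loaded processor. This is only a baseline sketch of the mapping problem; it is not the paper's multiagent algorithm, and the function names are assumptions.

```python
import heapq

def schedule(jobs, num_procs):
    """Assign (job, cost) pairs to the least-loaded processor.
    Returns (makespan, per-processor assignment)."""
    heap = [(0.0, p) for p in range(num_procs)]   # (current load, processor id)
    heapq.heapify(heap)
    assignment = {p: [] for p in range(num_procs)}
    for job, cost in jobs:
        load, p = heapq.heappop(heap)             # least-loaded processor
        assignment[p].append(job)
        heapq.heappush(heap, (load + cost, p))
    return max(load for load, _ in heap), assignment
```

    A multiagent variant would replace the central greedy choice with agents exchanging load information via task-sharing and result-sharing messages.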

    Synchronization of ad hoc Clock Networks

    We introduce a graph-theoretic approach to synchronizing clocks in an {\em ad hoc} network of N~timepieces. Clocks naturally drift away from being synchronized because of many physical factors. The manual way of clock synchronization suffers from an inherent propagation of the so-called "clock drift" due to the "word-of-mouth effect." The current standard way of automated clock synchronization is either via radio-band transmission of the global clock or via the software-based Network Time Protocol (NTP). Synchronization via radio-band transmission suffers from the wave transmission delay, while the client-server-based NTP scales neither to an increased number of clients nor to unforeseen server overload conditions (e.g., flash crowd and time-of-day effects). Further, the trivial running time of NTP for synchronizing an N-node network, where each node is a clock and the NTP server follows a single-port communication model, is~\bigO(N). We introduce in this paper a \bigO(\log N)-time protocol for synchronizing the clocks in exchange for an increase of \bigO(N) in space complexity, though through creative "tweaking," we later reduced the space requirement to~\bigO(1). Our graph-theoretic protocol assumes that the network is \K_N, while the subset of clocks lies in an embedded circulant graph \C_N with q~jumps, and clock information is communicated through circular shifts within the \C_N. All N~nodes communicate via a single-port duplex channel model. Theoretically, this synchronization protocol allows for N(\log N)^{-1} - 1 more synchronizations than the client-server-based one. Empirically, through statistically replicated multi-agent-based microsimulation runs, our protocol synchronizes up to 80\% of the clocks, compared to only up to 30\% for the current protocol, after some steady-state time. Comment: 11 pages, 9 figures, appeared in H.N. Adorna and A.A. Sioson (eds.) Proceedings of the 7th National Symposium on Mathematical Aspects of Computer Science (SMACS 2014), Ateneo de Naga University, Naga City, Philippines, 24-28 November 2014, pp. 33-43. Paper submitted to Philippine Computing Journal (ISSN 1908-1995)
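    The \bigO(\log N) behaviour of circular-shift dissemination can be illustrated with a round-based doubling sketch: in round k every node reads the node 2^k positions ahead, so the information window doubles each round and covers all N nodes after ceil(log2 N) rounds. This simplified max-gossip illustration is an assumption for exposition, not the paper's protocol.

```python
import math

def synchronize(clocks):
    """Round-based circular-shift gossip. In round k, node i adopts the
    larger of its own reading and that of node (i + 2**k) mod N; after
    ceil(log2 N) rounds every node holds the network-wide maximum."""
    n = len(clocks)
    state = list(clocks)
    rounds = max(1, math.ceil(math.log2(n)))
    for k in range(rounds):
        shift = 2 ** k
        state = [max(state[i], state[(i + shift) % n]) for i in range(n)]
    return state, rounds
```

    With N = 8 clocks, only 3 rounds are needed, versus the 8 sequential exchanges of a single-port client-server round.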

    Inferences in a Virtual Community: Demography, User Preferences, and Network Topology

    This paper presents a computational procedure for extracting demography data, mining patterns of human preferences, and measuring the topology of a virtual network. The network was created from the personal and relationship data of an online Internet-based community, where persons are considered nodes in the network and relationships between persons are considered edges. A community of Friendster users whose listed hometown is Los Ba\~nos, Laguna was used as a test bed for the methodology. The method was able to provide the following demographic, preferential, and topological results about the test bed: (1) There are more female users (52.34\%) than male (47.66\%); (2) Homophily (i.e., the birds-of-a-feather adage) is observed in the preferences of users with respect to age levels, such that they are strongly biased towards being friends with users of a similar age; (3) There is heterophily in gender preference, such that friendship among users of the opposite gender occurs more often; (4) The network exhibits a small-world characteristic with an average path length of 4.5 (maximum = 12) among connected users, shorter than the well-known {\em six degrees of separation}~\cite{travers69}; and (5) The network exhibits a scale-free characteristic with a heavy-tailed power-law distribution (with power \lambda = -1.02 and R^2 = 0.84), suggesting the presence of many users acting as network hubs. The methodology was successful in providing important data from a virtual community which can be used by researchers in the fields of statistics, mathematics, physics, social sciences, and computer science. Comment: 12 pages, 8 figures, appeared in Proceedings (CDROM) of the 6th National Conference on IT Education (NCITE 2008), University of the Philippines Los Ba\~nos, College, Laguna, Philippines, 23-24 October 200
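    A power-law exponent such as the reported \lambda = -1.02 is commonly estimated as the least-squares slope of the degree distribution on log-log axes. The sketch below shows that standard estimator on made-up data; it is an assumption that the paper used this particular fitting method.

```python
import math

def loglog_slope(degrees, counts):
    """Least-squares slope of log(count) vs. log(degree): the power-law
    exponent when counts follow degree**slope."""
    xs = [math.log(d) for d in degrees]
    ys = [math.log(c) for c in counts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```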

    On Gobbledygook and Mood of the Philippine Senate: An Exploratory Study on the Readability and Sentiment of Selected Philippine Senators' Microposts

    This paper presents the findings of a readability assessment and sentiment analysis of six selected Philippine senators' microposts on the popular Twitter microblog. Using the Simple Measure of Gobbledygook (SMOG), tweets of Senators Cayetano, Defensor-Santiago, Pangilinan, Marcos, Guingona, and Escudero were assessed. A sentiment analysis was also done to determine the polarity of the senators' respective microposts. Results showed that, on average, the six senators are tweeting at an eight to ten SMOG level. This means that at least a sixth grader will be able to understand the senators' tweets. Moreover, their tweets are mostly neutral, and their sentiments vary in unison during some periods of time. This could mean that a senator's tweet sentiment is affected by specific Philippine-based events. Comment: 13 pages, 6 figures, submitted to the Asia Pacific Journal on Education, Arts, and Science
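    The SMOG grade has a published closed form (McLaughlin, 1969): grade = 1.0430 * sqrt(polysyllables * 30 / sentences) + 3.1291, where polysyllables are words of three or more syllables. The sketch below implements that formula; how the paper counted syllables in tweets is not specified here, so that step is omitted.

```python
import math

def smog_grade(polysyllable_count, sentence_count):
    """SMOG readability grade from McLaughlin's formula:
    1.0430 * sqrt(polysyllables * (30 / sentences)) + 3.1291."""
    return 1.0430 * math.sqrt(polysyllable_count * 30.0 / sentence_count) + 3.1291
```

    For example, 30 polysyllabic words over 30 sentences yields a grade in the eight-to-ten band reported in the abstract.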

    Information Spread Over an Internet-mediated Social Network: Phases, Speed, Width, and Effects of Promotion

    In this study, we looked at the effect of promotion on the speed and width of the spread of information on the Internet by tracking the diffusion of news articles over a social network. Speed of spread means the number of readers that the news has reached in a given time, while width of spread means how far the story has travelled from the news originator within the social network. After analyzing six stories in a 30-hour time span, we found that the lifetime of a story's popularity among the members of the social network has three phases: Expansion, Front-page, and Saturation. The Expansion phase starts when a story is published and the article spreads from a source node to nodes within a connected component of the social network. The Front-page phase happens when a news aggregator promotes the story on its front page, resulting in a faster rate of spread among the connected nodes while at the same time spreading the article to nodes outside the original connected component of the social network. The Saturation phase is when the story ages and its rate of spread within the social network slows down, suggesting popularity saturation among the nodes. Within these three phases, we observed minimal changes in the width of information spread, as suggested by the relatively low increase of the diameter of the spread within the social network. This paper provides various stakeholders with first-hand empirical data for modeling, designing, and improving current web-based services, specifically IT educators designing and improving academic curricula and improving the current web-enabled deployment of knowledge and online evaluation of skills. Comment: 11 pages, 9 figures, initially appeared in Proceedings (CDROM) of the 8th National Conference on Information Technology Education, La Carmela de Boracay Convention Center, Boracay Island, Malay, Aklan, Philippines, 20-23 October 201

    The Interactive Effects of Operators and Parameters to GA Performance Under Different Problem Sizes

    The complex effect of a genetic algorithm's (GA) operators and parameters on its performance has been studied extensively in the past, but no study has examined their interactive effects while the GA is under different problem sizes. In this paper, we present the use of an experimental model (1)~to investigate whether the genetic operators and their parameters interact to affect the offline performance of the GA, (2)~to find what combination of genetic operators and parameter settings provides the optimum performance for the GA, and (3)~to investigate whether this operator-parameter combination is dependent on the problem size. We designed a GA to optimize a family of traveling salesman problems (TSP) whose optimal solutions are known, for convenient benchmarking. Our GA was set to use different algorithms in simulating selection (\Omega_s), different algorithms (\Omega_c) and parameters (p_c) in simulating crossover, and different parameters (p_m) in simulating mutation. We used several n-city TSPs (n=\{5, 7, 10, 100, 1000\}) to represent the different problem sizes (i.e., the size of the resulting search space as represented by GA schemata). Using analysis of variance of 3-factor factorial experiments, we found that GA performance is affected by \Omega_s at small problem size (5-city TSP), where the Partially Matched Crossover algorithm significantly outperforms Cycle Crossover at the 95\% confidence level. Comment: 19 page
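    Partially Matched Crossover (PMX), one of the crossover operators compared above, can be sketched in textbook form: copy a matching section from one parent, then place the displaced genes of the other parent by following the position mapping. This is a standard PMX sketch with fixed cut points for reproducibility, not the paper's implementation.

```python
def pmx(parent1, parent2, i, j):
    """Partially Matched Crossover on two permutations, with fixed
    cut points i..j (exclusive of j)."""
    size = len(parent1)
    child = [None] * size
    child[i:j] = parent1[i:j]                 # copy the matching section
    for k in range(i, j):                     # place displaced genes of parent2
        gene = parent2[k]
        if gene in child[i:j]:
            continue                          # already present in the section
        pos = k
        while i <= pos < j:                   # follow the mapping chain
            pos = parent2.index(parent1[pos])
        child[pos] = gene
    for k in range(size):                     # fill remaining slots from parent2
        if child[k] is None:
            child[k] = parent2[k]
    return child
```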

    Towards Input Device Satisfaction Through Hand Anthropometry

    We collected the hand anthropometric data of 91 respondents to come up with a Filipino-based measurement for determining the suitability of an input device for digital equipment, specifically the standard PC keyboard. For correlation purposes, we also collected other relevant information such as age, height, province of origin, and gender, among others. We computed the percentiles for each finger to classify various finger dimensions and identify length-specific anthropometric cut-points. We compared the percentiles of each finger dimension against the actual length of the longest key combinations when correct finger placement is used for typing, to determine whether the standard PC keyboard is fit for use by our sampled population. Our analysis shows that the members of the population with hand dimensions at extended position below the 75th percentile and at the 99th percentile are the ones who would most likely not reach the longest key combination for the left and the right hands, respectively. Using machine vision and image processing techniques, we automated the anthropometric process; comparing its measurements with those of the manual process, we found a very minimal absolute difference between the two. The data collected from this study could be used in other studies, such as determining a good design for mobile and other handheld devices, or for input devices other than the keyboard. The automated method that we developed could be used to easily measure hand dimensions given a digital image of the hand and could be extended to measuring the entire human body for various other applications. Comment: 20 pages, 12 figures, appeared in A.L. Sioson (ed.) Proceedings (CDROM) of the 10th National Conference on Information Technology Education (NCITE 2012), Laoag City, Ilocos Norte, Philippines, 18-20 October 201
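    The percentile cut-point step can be sketched with the standard linear-interpolation percentile and a simple reach check against the longest key-combination span. The function names and the sample values in the usage are illustrative assumptions, not the study's data.

```python
def percentile(sorted_values, p):
    """Linear-interpolation percentile (0 <= p <= 100) of an
    ascending list of measurements."""
    k = (len(sorted_values) - 1) * p / 100.0
    f = int(k)
    c = min(f + 1, len(sorted_values) - 1)
    return sorted_values[f] + (sorted_values[c] - sorted_values[f]) * (k - f)

def reaches(finger_length_mm, required_span_mm):
    """Whether a finger of this length covers the longest key combination."""
    return finger_length_mm >= required_span_mm
```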

    Improved Sampling Techniques for Learning an Imbalanced Data Set

    This paper presents the performance of a classifier built using the StackingC algorithm on nine different data sets. Each data set is generated by applying a sampling technique to the original imbalanced data set. Five new sampling techniques are proposed in this paper (i.e., SMOTERandRep, Lax Random Oversampling, Lax Random Undersampling, Combined-Lax Random Oversampling Undersampling, and Combined-Lax Random Undersampling Oversampling), based on the three sampling techniques (i.e., Random Undersampling, Random Oversampling, and Synthetic Minority Oversampling Technique) usually used as solutions in imbalanced learning. The metrics used to evaluate the classifier's performance were F-measure and G-mean. F-measure determines the performance of the classifier for every class, while G-mean measures the overall performance of the classifier. The results using F-measure showed that for the data without a sampling technique, the classifier's performance is good only for the majority class. They also showed that among the eight sampling techniques, RU and LRU have the worst performance, while the other techniques (i.e., RO, C-LRUO and C-LROU) performed well only on some classes. The best-performing techniques across all data sets were SMOTE, SMOTERandRep, and LRO, whose lowest F-measure values were between 0.5 and 0.65. The results using G-mean showed that the oversampling technique that attained the highest G-mean value is LRO (0.86), next is C-LROU (0.85), then SMOTE (0.84), and finally SMOTERandRep (0.83). Combining the results of the two metrics (F-measure and G-mean), only three sampling techniques are considered good performing (i.e., LRO, SMOTE, and SMOTERandRep). Comment: 7 pages, 10 figures, 16th Philippine Computing Science Congress (PCSC 2016)
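    The two evaluation metrics have standard definitions that can be computed from a binary confusion matrix. The sketch below uses those textbook formulas; the function names and example counts are illustrative, not the paper's code.

```python
import math

def f_measure(tp, fp, fn):
    """Harmonic mean of precision and recall for one class."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def g_mean(tp, fp, fn, tn):
    """Geometric mean of sensitivity and specificity: high only when
    the classifier does well on both classes, which is why it suits
    imbalanced data."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return math.sqrt(sensitivity * specificity)
```

    On an imbalanced set, a classifier that labels everything as the majority class gets a G-mean of zero, while accuracy alone would look high.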