24 research outputs found
Multi-population-based differential evolution algorithm for optimization problems
A differential evolution (DE) algorithm is an evolutionary algorithm for optimization problems over a continuous domain. To solve high-dimensional global optimization problems, this work investigates the performance of differential evolution algorithms under a multi-population strategy. The original DE algorithm generates an initial set of candidate solutions. The multi-population strategy divides this set into several subsets, which evolve independently and exchange information with each other according to the DE algorithm; this helps preserve the diversity of the initial set. Furthermore, combinations of different mutation strategies are compared across several optimization algorithms to verify their performance. Finally, computational results on arbitrarily generated test instances reveal an interesting relationship between the number of subpopulations and the performance of the DE algorithm.
Centralized charging of electric vehicles (EVs) based on battery swapping is a promising strategy for their large-scale utilization in power systems. For this problem, the above algorithm is designed to minimize the total charging cost as well as to reduce the power loss and voltage deviation of power networks. The resulting algorithm and several others are executed on an IEEE 30-bus test system, and the results suggest that the proposed algorithm is an effective and promising method for optimal EV centralized charging.
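The multi-population scheme described above can be sketched as follows. This is a minimal illustrative implementation, assuming a standard DE/rand/1/bin mutation-crossover and a simple ring-migration rule for connecting subpopulations; the paper's exact connection strategy and parameter settings are not specified here.

```python
# Sketch of multi-population differential evolution (DE):
# several subpopulations evolve independently and periodically
# exchange their best members in a ring, preserving diversity.
import random

def de_multipop(f, dim, bounds, n_subpops=4, subpop_size=10,
                generations=200, F=0.5, CR=0.9, migrate_every=20, seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    # initial set of candidate solutions, split into independent subsets
    pops = [[[rng.uniform(lo, hi) for _ in range(dim)]
             for _ in range(subpop_size)] for _ in range(n_subpops)]

    def evolve(pop):
        new_pop = []
        for i, x in enumerate(pop):
            # DE/rand/1: three distinct individuals other than x
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            j_rand = rng.randrange(dim)  # guarantees at least one mutated gene
            trial = [a[k] + F * (b[k] - c[k])
                     if (rng.random() < CR or k == j_rand) else x[k]
                     for k in range(dim)]
            trial = [min(max(v, lo), hi) for v in trial]  # clamp to bounds
            new_pop.append(trial if f(trial) <= f(x) else x)  # greedy selection
        return new_pop

    for g in range(generations):
        pops = [evolve(p) for p in pops]
        if (g + 1) % migrate_every == 0:
            # ring migration: best of each subpopulation replaces a random
            # individual in the next subpopulation
            bests = [min(p, key=f) for p in pops]
            for s in range(n_subpops):
                pops[(s + 1) % n_subpops][rng.randrange(subpop_size)] = bests[s]

    return min((min(p, key=f) for p in pops), key=f)
```

On a smooth test function such as the sphere function, this setup typically converges to near the optimum; varying `n_subpops` while keeping the total population fixed is one way to probe the subpopulation-count/performance relationship the abstract mentions.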
Recent Advances in miRNA Delivery Systems
MicroRNAs (miRNAs) represent a family of short non-coding regulatory RNA molecules that are produced in a tissue- and time-specific manner to orchestrate gene expression post-transcription. MiRNAs hybridize to target mRNA(s) to induce translation repression or mRNA degradation. Functional studies have demonstrated that miRNAs are engaged in virtually every physiological process and, consequently, miRNA dysregulations have been linked to multiple human pathologies. Thus, miRNA mimics and anti-miRNAs that restore miRNA expression or downregulate aberrantly expressed miRNAs, respectively, are highly sought-after therapeutic strategies for effective manipulation of miRNA levels. In this regard, carrier vehicles that facilitate proficient and safe delivery of miRNA-based therapeutics are fundamental to the clinical success of these pharmaceuticals. Here, we highlight the strengths and weaknesses of current state-of-the-art viral and non-viral miRNA delivery systems and provide perspective on how these tools can be exploited to improve the outcomes of miRNA-based therapeutics.
Robust estimation of Ackerman angles for front-axle steering vehicles
The multiple benefits of automating steering in agricultural vehicles have resulted in various autoguidance systems becoming commercially available, most of them relying on satellite-based positioning. However, the fact that farm equipment is typically oversized, heavy, and highly powered poses serious challenges to automation in terms of safety and reliability. The objective of this research is to improve the reliability of front-wheel feedback signals as a preliminary stage in the development of stable steering control systems. To do so, the angle turned by each front wheel of a conventional tractor was independently measured by an optical encoder and fused to generate the Ackerman feedback angle. The proposed fusion algorithm analyzes the consistency of each signal over time and checks the coherence between the left and right front wheels according to the vehicle steering mechanism. Field experiments demonstrated the benefits of using redundant sensors coupled through logic algorithms for estimating Ackerman angles, as the harsh conditions of off-road environments often resulted in the unreliable performance of electronic devices.
Sáiz Rubio, V.; Rovira Más, F.; Chatterjee, I.; Molina Hidalgo, JM. (2013). Robust estimation of Ackerman angles for front-axle steering vehicles. Artificial Intelligence Research. 2(2):18-28. doi:10.5430/air.v2n2p18
Statistics-Based Outlier Detection and Correction Method for Amazon Customer Reviews
People nowadays use the internet to project their assessments, impressions, ideas, and observations about various subjects or products on numerous social networking sites. These sites serve as a great source for gathering data for data analytics, sentiment analysis, natural language processing, etc. Conventionally, the true sentiment of a customer review matches its corresponding star rating. There are exceptions when the star rating of a review is opposite to its true nature; such reviews are labeled as outliers in this work. The state-of-the-art methods for anomaly detection involve manual searching, predefined rules, or traditional machine learning techniques to detect such instances. This paper conducts a sentiment analysis and outlier detection case study on Amazon customer reviews, and it proposes a statistics-based outlier detection and correction method (SODCM), which helps identify such reviews and rectify their star ratings to enhance the performance of a sentiment analysis algorithm without any data loss. The paper focuses on applying SODCM to datasets containing customer reviews of various products, which are (a) scraped from Amazon.com and (b) publicly available. The paper also studies the datasets and assesses the effect of SODCM on the performance of a sentiment analysis algorithm. The results exhibit that SODCM achieves higher accuracy and recall percentages than other state-of-the-art anomaly detection algorithms.
Statistics-based anomaly detection and correction method for amazon customer reviews
People nowadays use the Internet to project their assessments, impressions, ideas, and observations about various subjects or products on numerous social networking sites. These sites serve as a great source for gathering data for data analytics, sentiment analysis, natural language processing, etc. The most critical challenge is interpreting this data and capturing the sentiment behind these expressions. Sentiment analysis is the process of analyzing and processing subjective texts to infer the views they express. Companies use sentiment analysis to understand public opinion, perform market research, analyze brand reputation, recognize customer experiences, and study social media influence. According to the required aspect granularity, sentiment analysis can be divided into document-level, sentence-level, and aspect-based analysis.
Conventionally, the true sentiment of a customer review matches its corresponding star rating. There are exceptions when the star rating of a review is opposite to its true nature; such reviews are labeled as outliers in this work. The state-of-the-art methods for anomaly detection involve manual search, predefined rules, or machine learning techniques to detect such instances. This dissertation proposes a statistics-based anomaly detection and correction method (SADCM), which helps identify such reviews and rectify their star ratings to enhance the performance of a sentiment analysis algorithm without any data loss. The data analysis pipeline preserves these outliers and corrects them, preventing any information loss.
This research focuses on applying SADCM to datasets containing customer reviews of various products, which are a) scraped from Amazon.com and b) publicly available. The scraped dataset includes 35,000 Amazon customer reviews, while the publicly available dataset includes 100,000 Amazon customer reviews for multiple products reviewed this year. The work also analyzes these datasets and assesses the effect of SADCM on the performance of several sentiment analysis algorithms. The results exhibit that SADCM outperforms other state-of-the-art anomaly detection algorithms, with higher accuracy and recall percentages on all the datasets. The proposed method should thus help businesses that rely on public reviews to make better decisions.
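The core idea of the detection-and-correction pipeline described above can be illustrated with a small sketch. Everything here is an assumption for the example: the `toy_polarity` lexicon scorer, the mapping of stars onto a polarity scale, and the z-score flagging rule stand in for the papers' actual sentiment model and statistical criterion, which the abstracts do not specify.

```python
# Illustrative sketch of statistics-based outlier detection and correction
# for star ratings: flag reviews whose rating disagrees with the text's
# sentiment far more than is typical for the dataset, then rewrite the
# rating from the sentiment instead of dropping the row (no data loss).
import statistics

POS = {"great", "excellent", "love", "good", "amazing"}
NEG = {"bad", "terrible", "hate", "poor", "awful"}

def toy_polarity(text):
    """Crude lexicon score in [-1, 1] (stand-in for a real sentiment model)."""
    words = text.lower().split()
    hits = sum(w in POS for w in words) - sum(w in NEG for w in words)
    return max(-1.0, min(1.0, hits / max(len(words), 1) * 5))

def detect_and_correct(reviews, z_thresh=2.0):
    """reviews: list of (text, star) with star in 1..5."""
    # map stars 1..5 onto [-1, 1] so they are comparable with polarity
    residuals = [toy_polarity(t) - (s - 3) / 2 for t, s in reviews]
    mu = statistics.mean(residuals)
    sd = statistics.pstdev(residuals) or 1.0
    corrected = []
    for (text, star), r in zip(reviews, residuals):
        if abs((r - mu) / sd) > z_thresh:          # statistical outlier
            star = round(toy_polarity(text) * 2 + 3)  # rating implied by text
        corrected.append((text, star))
    return corrected
```

The design point the abstracts emphasize is the final branch: an outlier is corrected rather than deleted, so the downstream sentiment classifier trains on the full dataset with rectified labels.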
Urban street vending practices: an investigation of ethnic food safety knowledge, attitudes, and risks among untrained Chinese vendors in Chinatown, Kolkata
Background: The main objective of this study is to inspect the food safety and hygiene practices of Chinese street vendors of Kolkata, where the food is prepared at home following authentic Chinese recipes and served with congenial affability. This study also suggests that the right to earn a livelihood should be protected for Chinese street vendors. Methods: In the present study, we apply the scales developed by Sekar [54], Chukuezi [22], Privitera and Nesci [51], Ismail et al. [30], and Cortese et al. [25]. The research was carried out using a four-section questionnaire adapted from previous scholars' works. The final questionnaire comprises questions about demographic characteristics, socio-economic factors, food safety and hygiene practices, and customer experiences. Data collection was performed in three different ways: 1) direct participant observation, 2) in-depth interviews, and 3) checklist items to observe and evaluate food safety and hygiene. Results: Indian Chinese pavement hawkers contribute a substantial proportion of the informal economy and play an important economic role in the city as a wellspring of income. They provided some vague information about ethnic food safety, such as contamination, cooking methods, food contact applicators, handling procedures, washing of instruments, and hygiene practices. Conclusion: This research provides data necessary for the improvement of policies/regulations and safety standards that will sustain the quality of Chinese street foods and nurture gastro-tourism in Chinatown.
Keywords: Food safety and hygiene, Informal economy, Street food, Street vendor
Fast Bounded Suboptimal Probabilistic Planning with Clear Preferences on Missing Information
In the real world, robots must often plan despite the environment being only partially known. This frequently necessitates planning under uncertainty over missing information about the environment. Unfortunately, the computational expense of such planning often precludes its scalability to real-world problems. The Probabilistic Planning with Clear Preferences (PPCP) framework focuses on a specific subset of such planning problems wherein there exist clear preferences over the actual values of the missing information (Likhachev and Stentz 2009). PPCP exploits the existence and knowledge of these preferences to perform provably optimal planning via a series of deterministic A*-like searches over particular instantiations of the environment. This decomposition leads to much better scalability with respect to both the size of a problem and the amount of missing information in it. The run-time of PPCP, however, is a function of the number of searches it has to run until convergence. In this paper, we make a key observation: the number of searches PPCP has to run can be dramatically decreased if each search computes a plan that minimizes the amount of missing information it relies upon. To that end, we introduce Fast-PPCP, a novel planning algorithm that computes a provably bounded-suboptimal policy using significantly fewer searches than required to find an optimal policy. We present Fast-PPCP with its theoretical analysis, compare it with common alternative approaches to planning under uncertainty over missing information, and experimentally show that Fast-PPCP provides a substantial gain in runtime over other approaches while incurring little loss in solution quality.
Search Reduction through Conservative Abstract-Space Based Heuristic
The efficiency of heuristic search depends dramatically on the quality of the heuristic function. For an optimal heuristic search, heuristics that estimate the cost-to-goal better typically lead to faster searches. For a suboptimal heuristic search such as weighted A*, the search speed depends more on the correlation between the heuristic and the true cost-to-goal. In this extended abstract, we discuss our preliminary work on computing heuristic functions that exploit this fact. In particular, we introduce a many-to-one mapping from an original search space to a conservative abstract space. Edges in the abstract space capture reachability among all corresponding nodes in the original space. We compute a heuristic in the conservative abstract space which, when used by the search in the original space, reduces the number of searched nodes. Our preliminary results on 3D navigation show that in more complex scenarios the speedup can be dramatic.
Speeding Up Search-Based Motion Planning using Expansion Delay Heuristics
Suboptimal search algorithms are a popular way to find solutions to planning problems faster by trading off solution optimality for search time. This is often achieved with the help of inadmissible heuristics. Prior work has explored ways to learn such inadmissible heuristics; however, it has focused on learning the heuristic value as an estimate of the cost to reach a goal. In this paper, we present a different approach that computes inadmissible heuristics by learning the Expansion Delay for transitions in the state space. Expansion Delay is defined as the number of states expanded during the search between the expansions of two consecutive states. It can be used as a measure of the depth of local-minima regions, i.e., regions where the heuristic(s) are weakly correlated with the true cost-to-goal (Vats, Narayanan, and Likhachev 2017). Our key idea is to learn this measure in order to guide the search so that it reduces the total Expansion Delay for reaching the goal and hence avoids local-minima regions in the state space. We analyze our method on 3D (x, y, theta) planning and humanoid footstep planning. We find that the heuristics computed using our technique result in finding feasible plans faster.
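The Expansion Delay quantity defined above can be measured directly from a search trace. The following toy sketch runs plain A* on a small grid, records the order in which states are expanded, and reports, for each pair of consecutive states on the solution path, how many expansions happened in between; the grid, unit edge costs, and Manhattan heuristic are assumptions for the example, and the paper's contribution (learning this measure to build a guiding heuristic) is not reproduced here.

```python
# Measure per-edge Expansion Delay along an A* solution path:
# delay(a -> b) = expansion index of b minus expansion index of a.
import heapq

def astar_expansion_delays(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    open_list = [(h(start), 0, start)]
    g, parent, order = {start: 0}, {start: None}, {}
    while open_list:
        _, gc, cur = heapq.heappop(open_list)
        if cur in order:            # already expanded (closed)
            continue
        order[cur] = len(order)     # record expansion index
        if cur == goal:
            break
        r, c = cur
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nb[0] < rows and 0 <= nb[1] < cols and grid[nb[0]][nb[1]] != '#':
                ng = gc + 1
                if ng < g.get(nb, float('inf')):
                    g[nb], parent[nb] = ng, cur
                    heapq.heappush(open_list, (ng + h(nb), ng, nb))
    # reconstruct path and compute per-edge Expansion Delay
    path, p = [], goal
    while p is not None:
        path.append(p)
        p = parent[p]
    path.reverse()
    delays = [order[b] - order[a] for a, b in zip(path, path[1:])]
    return path, delays
```

In an obstacle-free region every delay is small; large delays along a path segment indicate the search churned through many states in between, which is exactly the local-minimum signal the paper proposes to learn.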
Speeding Up Search-Based Motion Planning via Conservative Heuristics
Weighted A* search (wA*) is a popular tool for robot motion planning. Its efficiency, however, depends on the quality of the heuristic function used. In fact, it has been shown that the correlation between the heuristic function and the true cost-to-goal significantly affects the efficiency of the search when a large weight is placed on the heuristic. Motivated by this observation, we investigate the problem of computing heuristics that explicitly aim to minimize the search effort in finding a feasible plan. The key observation we exploit is that while a heuristic tries to guide the search along what looks like an optimal path towards the goal, there are other paths that are clearly suboptimal yet much easier to compute. For example, in motion planning domains like footstep planning for humanoids, a heuristic that guides the search along a path away from obstacles is less likely to encounter local minima than a heuristic that guides the search along an optimal but close-to-obstacles path. We utilize this observation to define the concept of conservative heuristics and propose a simple algorithm for computing such a heuristic function. Experimental analysis on (1) humanoid footstep planning (simulation), (2) path planning for a UAV (simulation), and (3) a real-world footstep-planning experiment with a NAO robot shows the utility of the approach.
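For reference, the wA* search that both of the abstracts above build on evaluates states by f = g + w·h, so a larger weight w makes the search greedier and the heuristic's correlation with the true cost-to-goal dominates performance. A minimal sketch on a 4-connected grid follows; the grid, unit step costs, and Manhattan heuristic are assumptions for the example, not the papers' planning domains.

```python
# Minimal weighted A* (wA*) on a 4-connected grid: f = g + w * h.
# Solution cost is bounded by w times the optimal cost when h is admissible.
import heapq

def weighted_astar(grid, start, goal, w=5.0):
    """grid: list of strings, '#' = obstacle. Returns (path_cost, expansions)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    open_list = [(w * h(start), 0, start)]
    g = {start: 0}
    closed = set()
    expansions = 0
    while open_list:
        _, gc, cur = heapq.heappop(open_list)
        if cur in closed:
            continue
        closed.add(cur)
        expansions += 1
        if cur == goal:
            return gc, expansions
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != '#':
                ng = gc + 1
                if ng < g.get((nr, nc), float('inf')):
                    g[(nr, nc)] = ng
                    heapq.heappush(open_list, (ng + w * h((nr, nc)), ng, (nr, nc)))
    return None, expansions  # goal unreachable
```

With a large w the expansion count stays small whenever h correlates well with the true cost-to-goal, which is precisely the property the conservative-heuristic work targets: a heuristic following an obstacle-hugging "optimal-looking" route can stall in local minima that a more conservative one avoids.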