    A social vulnerability-based genetic algorithm to locate-allocate transit bus stops for disaster evacuation in New Orleans, Louisiana

    In the face of severe disasters, some or all of the endangered residents must be evacuated to a safe place. A portion of people, for various reasons (e.g., no available vehicle, being too old to drive), will need to take public transit buses to evacuate. To optimize operational efficiency, decision-makers must carefully consider where to locate these transit pick-up stops and how to allocate the available buses among them. When there are many alternative bus stops, an exhaustive (brute-force) search is often impractical because enumerating and comparing the effectiveness of the enormous number of alternative combinations would take too much model running time. A genetic algorithm (GA) is an efficient and robust method for solving this kind of location-allocation problem. This thesis uses a GA to accurately and efficiently find the optimal combination of transit bus stop locations for a regional evacuation of the New Orleans metropolitan area, Louisiana. When considering people's demand for transit buses during a disaster evacuation, this research assumes that residents with high social vulnerability should be evacuated with high priority and those with low social vulnerability can be assigned low priority. Factor analysis, specifically principal components analysis, was used to derive social vulnerability from multiple input variables over the study area. Social vulnerability was measured at the census block group level, and the overall social vulnerability index was used to weight the travel time between the centroid of each census block group and the nearest transit pick-up location. The simulation results revealed that the pick-up locations obtained in this study can greatly improve efficiency over those currently used by the New Orleans government. The new solution achieved a fitness value of 26,397.6 (total weighted travel time for the entire system, measured in hours), much better than the 62,736.3 rendered by the currently used evacuation solution.
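    The vulnerability-weighted fitness and the GA search described above can be sketched in a toy form. Everything below — the candidate stop coordinates, block-group centroids, vulnerability weights, Manhattan-distance travel times, and GA parameters — is a hypothetical illustration, not the thesis's actual data or operators:

```python
import random

# Hypothetical toy data: candidate stop coordinates and census block group
# centroids paired with social-vulnerability weights (illustrative values).
CANDIDATE_STOPS = [(0, 0), (4, 0), (0, 4), (4, 4), (2, 2), (1, 3)]
BLOCK_GROUPS = [((1, 1), 0.9), ((3, 1), 0.4), ((1, 3), 0.8), ((3, 3), 0.2)]
NUM_STOPS = 2  # number of pick-up locations to open

def travel_time(a, b):
    # Manhattan distance as a stand-in for network travel time.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def fitness(stop_indices):
    # Total vulnerability-weighted travel time from each block group's
    # centroid to its nearest open stop (lower is better, matching the
    # thesis's minimization objective).
    total = 0.0
    for centroid, weight in BLOCK_GROUPS:
        total += weight * min(
            travel_time(centroid, CANDIDATE_STOPS[i]) for i in stop_indices
        )
    return total

def genetic_search(generations=50, pop_size=20, seed=0):
    # Minimal GA: each individual is a set of NUM_STOPS candidate indices.
    rng = random.Random(seed)
    pop = [tuple(sorted(rng.sample(range(len(CANDIDATE_STOPS)), NUM_STOPS)))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]  # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            genes = set(a) | set(b)            # crossover: pool parents' stops
            if rng.random() < 0.2:             # mutation: add a random candidate
                genes.add(rng.randrange(len(CANDIDATE_STOPS)))
            children.append(tuple(sorted(rng.sample(sorted(genes), NUM_STOPS))))
        pop = survivors + children
    return min(pop, key=fitness)
```

Because the objective is a weighted sum of minimum travel times, lowering the fitness value directly corresponds to moving high-vulnerability block groups closer to their nearest pick-up stop.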

    Product Redesign and Innovation Based on Online Reviews: A Multistage Combined Search Method

    Online reviews published on e-commerce platforms provide a new source of information for designers developing new products. Past research on new product development (NPD) using user-generated textual data commonly focused solely on extracting and identifying product features to be improved. However, competitive analysis of product features and more specific improvement strategies have not been explored deeply. This study fully exploits the rich semantic attributes of online review texts and proposes a novel online review-driven modeling framework. The new approach can extract fine-grained product features; calculate their importance, performance, and competitiveness; and build a competitiveness network for each feature. As a result, decision making is assisted, and specific product improvement strategies are developed for NPD beyond existing modeling approaches in this domain. Specifically, online reviews are first classified into redesign- and innovation-related themes using a multiple embedding model, and the redesign and innovation product features are extracted accordingly using a mutual information multilevel feature extraction method. Moreover, the importance and performance of features are calculated, and the competitiveness and competitiveness network of features are obtained through a personalized unidirectional bipartite graph algorithm. Finally, the importance-performance-competitiveness analysis plot is constructed, and the product improvement strategy is developed via a multistage combined search algorithm. Case studies and comparative experiments show the effectiveness of the proposed method and provide novel business insights for stakeholders, such as product providers, managers, and designers.
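    The importance and performance calculations can be illustrated with a minimal sketch. The review tuples, sentiment scores, mention-share importance measure, and priority rule below are assumptions for illustration only; the paper's actual pipeline (multiple embedding model, mutual-information extraction, bipartite-graph competitiveness) is far richer:

```python
from collections import defaultdict

# Hypothetical toy input: (feature, sentiment) pairs assumed to have been
# extracted from reviews by an upstream step.
REVIEWS = [
    ("battery", 0.9), ("battery", 0.7), ("battery", -0.2),
    ("screen", 0.8), ("screen", 0.6),
    ("camera", -0.5), ("camera", -0.3), ("camera", 0.1), ("camera", -0.4),
]

def importance_performance(reviews):
    """Importance = share of mentions; performance = mean sentiment."""
    buckets = defaultdict(list)
    for feature, sentiment in reviews:
        buckets[feature].append(sentiment)
    total = len(reviews)
    return {
        f: {"importance": len(s) / total, "performance": sum(s) / len(s)}
        for f, s in buckets.items()
    }

def improvement_priority(stats):
    # Features that are important yet perform poorly come first, echoing
    # the quadrant logic of importance-performance analysis.
    return sorted(stats,
                  key=lambda f: stats[f]["performance"] - stats[f]["importance"])
```

A frequently mentioned feature with negative sentiment (here, "camera") surfaces first as a redesign priority, while well-performing features drop to the bottom of the list.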

    Semantic-Enhanced Image Clustering

    Image clustering is an important and challenging open task in computer vision. Although many methods have been proposed for image clustering, they explore only the images themselves and uncover clusters from image features alone, and are thus unable to distinguish visually similar but semantically different images. In this paper, we propose to investigate image clustering with the help of a visual-language pre-training model. Unlike the zero-shot setting, in which the class names are known, in our setting only the number of clusters is known. Therefore, how to map images to a proper semantic space and how to cluster images from both the image and semantic spaces are two key problems. To solve them, we propose a novel image clustering method guided by the visual-language pre-training model CLIP, named Semantic-Enhanced Image Clustering (SIC). In this new method, we first propose a way to map the given images to a proper semantic space, together with efficient methods to generate pseudo-labels according to the relationships between images and semantics. Finally, we propose performing clustering with consistency learning in both the image space and the semantic space, in a self-supervised learning fashion. The theoretical convergence analysis shows that our proposed method converges at a sublinear speed. Theoretical analysis of the expected risk also shows that we can reduce it by improving neighborhood consistency, increasing prediction confidence, or reducing neighborhood imbalance. Experimental results on five benchmark datasets clearly show the superiority of our new method.
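    The pseudo-label step — relating images to semantic anchors — can be sketched with cosine similarity. The toy embeddings, the nearest-anchor rule, and the confidence threshold below are illustrative assumptions, not SIC's actual procedure:

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def pseudo_labels(image_embs, semantic_embs, threshold=0.5):
    """Assign each image its most similar semantic anchor.

    Low-confidence assignments (similarity below the threshold) are
    returned as None, mirroring the common practice of training only on
    confident pseudo-labels.
    """
    labels = []
    for img in image_embs:
        sims = [cosine(img, sem) for sem in semantic_embs]
        best = max(range(len(sims)), key=lambda i: sims[i])
        labels.append(best if sims[best] >= threshold else None)
    return labels
```

In this caricature, an image embedding lying between two semantic anchors is left unlabeled rather than forced into a cluster, which is one way confidence filtering can improve pseudo-label quality.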

    Alternative Telescopic Displacement: An Efficient Multimodal Alignment Method

    Feature alignment is the primary means of fusing multimodal data. We propose a feature alignment method that fully fuses multimodal information by alternately shifting and expanding feature information from different modalities until they share a consistent representation in a common feature space. The proposed method can robustly capture high-level interactions between features of different modalities, thus significantly improving the performance of multimodal learning. We also show that the proposed method outperforms other popular multimodal schemes on multiple tasks. Experimental evaluation on the ETT and MIT-BIH-Arrhythmia datasets shows that the proposed method achieves state-of-the-art performance.
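    The alternating idea — repeatedly nudging each modality's representation toward the other until they agree in a shared space — can be caricatured as follows. The linear interpolation update and its step size are purely illustrative assumptions; the paper's actual telescopic shift/expand operators are not reproduced here:

```python
def align_step(x, y, lr=0.5):
    # One alternating round: shift x toward y, then y toward the updated x.
    x_new = [a + lr * (b - a) for a, b in zip(x, y)]
    y_new = [b + lr * (a - b) for a, b in zip(x_new, y)]
    return x_new, y_new

def align(x, y, steps=10):
    # Alternate until the two modality vectors approximately coincide.
    for _ in range(steps):
        x, y = align_step(x, y)
    return x, y

def distance(x, y):
    # Euclidean distance, used to measure residual misalignment.
    return sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5
```

With the step size above, each alternating round shrinks the gap between the two representations by a constant factor, so the pair converges to a single shared point — a toy analogue of driving two modalities to a consistent representation.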