16 research outputs found

    Neuromarketing via the web: commercial performance in the face of ethical requirements

    Get PDF
    In recent years, the world has seen heavy use of social networks in fields such as politics, the economy, and sport. This use has extended to marketing, whose purpose is to make products visible. For their part, consumers participate in building the company's commercial image on the basis of their experience with the products they consume. Marketing tools are many, and each company adopts a digital strategy according to its means. The objective of this contribution is to clarify some dimensions and consequences of neuromarketing for consumer behavior, drawing on research conducted by marketing and neuroscience experts, while offering suggestions for dealing with opportunistic corporate behavior. The integration of neuroscience into marketing has been the subject of fierce debate among marketing theorists. In previous work, researchers have tried to mobilize neurological tools to understand and influence consumer behavior. As a discipline, neuromarketing has been criticized by public opinion because of its ability to exploit psychological and nervous mechanisms to manipulate consumption habits, hence the need to regulate neuromarketing practices and fill the legal void while taking the human dimension into account in the product marketing process.

    Contribution to the analysis of the determinants of CSR information disclosure on websites in Morocco: the case of financial institutions listed on the Casablanca Stock Exchange

    Get PDF
    As part of this contribution, we seek to assess the quality of CSR communication by financial companies listed on the Casablanca Stock Exchange (6 banks, 5 insurance companies, 4 finance companies), and to test whether size, auditor quality, age, leverage and profitability influence that quality. To evaluate the quality of CSR communication, we used the grid of Branco and Rodrigues (2006, 2008), an index made up of 4 sections representing the foundations of CSR communication, with a total of 22 items. The results clearly show that banks publish more CSR information on their websites than insurance companies and finance companies. The Poisson regression results confirm that neither auditor quality nor age is correlated with the quality of CSR communication, while size and profitability positively influence it. On the other hand, the Poisson test confirms a negative correlation between leverage and the disclosure of CSR information by Moroccan financial institutions.
    JEL Classification: M14. Paper type: Empirical research.
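
    As an illustration of the estimation step above, the sketch below fits a Poisson regression of a 0-22 disclosure score on the five determinants. It is a minimal sketch, not the authors' exact specification; the file name and column names are assumptions.

        # Minimal sketch of the Poisson regression described above (hypothetical file and column names).
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        # Each row: one listed financial firm; 'csr_score' counts the disclosed items (0-22).
        df = pd.read_csv("csr_disclosure.csv")  # hypothetical dataset

        model = smf.glm(
            "csr_score ~ size + auditor_quality + age + leverage + profitability",
            data=df,
            family=sm.families.Poisson(),
        ).fit()
        print(model.summary())  # coefficient signs and p-values indicate which determinants matter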

    Activity-Based Costing and performance: empirical study in the context of Moroccan companies

    Get PDF
    Following the evolution of companies' economic and organizational environment, the ability of traditional cost-calculation systems to provide relevant information for decision-making has been questioned by the majority of researchers in management accounting. The work carried out in this context led to the proposal of a new costing method, namely Activity-Based Costing. Since its appearance, many studies have examined its theoretical foundations, the determinants of its adoption, the success factors of its implementation, and its impact on companies' performance. Through this work, our objective is to contribute to research on the consequences of activity-based costing adoption for performance, in particular among Moroccan companies. The results of our analysis, based on a sample of 73 Moroccan companies, indicate a positive and statistically significant association between activity-based costing and the achievement of organizational objectives in terms of cost reduction, product and service quality improvement, reduction of production and delivery times, guidance of employee behavior, and productivity growth.
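
    For readers who want to reproduce this kind of association test, the sketch below compares a performance score between ABC adopters and non-adopters with a Welch t-test. This is only an illustration on assumed survey columns, not the statistical procedure used in the study.

        # Illustrative sketch (not the study's exact test): association between ABC adoption
        # and a perceived-performance score, using hypothetical survey data.
        import pandas as pd
        from scipy import stats

        df = pd.read_csv("abc_survey.csv")  # hypothetical: 'abc_adopted' (0/1), 'performance_score'

        adopters = df.loc[df["abc_adopted"] == 1, "performance_score"]
        non_adopters = df.loc[df["abc_adopted"] == 0, "performance_score"]

        t_stat, p_value = stats.ttest_ind(adopters, non_adopters, equal_var=False)
        print(f"Welch t = {t_stat:.2f}, p = {p_value:.4f}")  # a small p-value suggests an association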

    Toward a Deep Learning Approach for Automatic Semantic Segmentation of 3D Lidar Point Clouds in Urban Areas

    Full text link
    Peer reviewed
    Semantic segmentation of Lidar data using Deep Learning (DL) is a fundamental step for a deep and rigorous understanding of large-scale urban areas. Indeed, the increasing development of Lidar technology in terms of accuracy and spatial resolution offers a good opportunity for delivering reliable semantic segmentation in large-scale urban environments. Significant progress has been reported in this direction. However, the literature lacks a deep comparison of the existing methods and algorithms in terms of strengths and weaknesses. The aim of the present paper is therefore to propose an objective review of these methods, highlighting their strengths and limitations. We then propose a new approach that combines Lidar data with other sources and a Deep Learning technique in order to automatically extract semantic information from airborne Lidar point clouds, improving both accuracy and semantic precision compared with existing methods. We finally present the first results of our approach.

    The contribution of deep learning to the semantic segmentation of 3D point-clouds in urban areas

    Full text link
    Peer reviewed
    Semantic segmentation in a large-scale urban environment is crucial for a deep and rigorous understanding of urban environments. The development of Lidar tools in terms of resolution and precision offers a good opportunity to satisfy the need for 3D city models. In this context, deep learning has revolutionized the field of computer vision and demonstrates good performance in semantic segmentation. To achieve this objective, we propose to design a scientific methodology based on deep learning that integrates several data sources (Lidar data, aerial images, etc.) to recognize objects semantically and automatically. We aim to automatically extract the maximum amount of semantic information from an urban environment with high accuracy and performance.

    A Prior Level Fusion Approach for the Semantic Segmentation of 3D Point Clouds Using Deep Learning

    Full text link
    Peer reviewed
    Three-dimensional digital models play a pivotal role in city planning, monitoring, and sustainable management of smart and Digital Twin Cities (DTCs). In this context, semantic segmentation of airborne 3D point clouds is crucial for modeling, simulating, and understanding large-scale urban environments. Previous research has demonstrated that the performance of 3D semantic segmentation can be improved by fusing 3D point clouds with other data sources. In this paper, a new prior-level fusion approach is proposed for semantic segmentation of large-scale urban areas using optical images and point clouds. The proposed approach uses the image classification obtained by the Maximum Likelihood Classifier as prior knowledge for 3D semantic segmentation. The raster values from the classified images are then assigned to the Lidar point cloud at the data preparation step. Finally, an advanced Deep Learning model (RandLaNet) is adopted to perform the 3D semantic segmentation. The results show that the proposed approach performs well in terms of both evaluation metrics and visual examination, with a higher Intersection over Union (96%) on the created dataset, compared with 92% for the non-fusion approach.
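
    The prior-level fusion step can be illustrated as follows: each Lidar point receives the class value of the classified-image pixel it falls in, and the fused per-point features are then passed to the 3D network. This is a minimal sketch under assumed file names, band layout, and feature set, not the paper's exact implementation.

        # Sketch of prior-level fusion: attach the image-derived class to each Lidar point
        # before training the 3D segmentation model. File names and band layout are assumptions.
        import numpy as np
        import rasterio
        import laspy

        las = laspy.read("tile.las")                          # hypothetical input tile
        points_xy = np.column_stack((las.x, las.y))

        with rasterio.open("ml_classification.tif") as src:   # Maximum Likelihood class raster
            # sample() yields one array of band values per point; band 1 holds the class code
            prior_class = np.array([v[0] for v in src.sample(points_xy)], dtype=np.uint8)

        # Stack coordinates and the image-derived prior as the per-point feature vector
        features = np.column_stack((las.x, las.y, las.z, prior_class))
        np.save("fused_features.npy", features)               # input to the 3D semantic segmentation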

    3D change detection using point clouds: a review

    Full text link
    Peer reviewed
    Change detection is an important step for characterizing object dynamics at the earth's surface. In multi-temporal point clouds, the main challenge is to detect true changes at different granularities in a scene subject to significant noise and occlusion. To better understand new research perspectives in this field, a deep review of recent advances in 3D change detection methods is needed. To this end, we present a comprehensive review of the state of the art of 3D change detection approaches, mainly those using 3D point clouds. We review standard methods and recent advances in the use of machine and deep learning for change detection. In addition, the paper presents a summary of 3D point cloud benchmark datasets from different sensors (aerial, mobile, and static), together with associated information. We also investigate representative evaluation metrics for this task. Finally, we present open questions and research perspectives. By reviewing the relevant papers in the field, we highlight the potential of bi- and multi-temporal point clouds for better monitoring analysis in various applications.

    Multi-context point cloud dataset and machine learning for railway semantic segmentation

    Full text link
    Peer reviewed
    Railway scene understanding is crucial for various applications, including autonomous trains, digital twinning, and infrastructure change monitoring. However, progress in these applications is constrained by the lack of annotated datasets and the limitations of existing algorithms. To address this challenge, we present Rail3D, the first comprehensive dataset for semantic segmentation in railway environments with a comparative analysis. Rail3D encompasses three distinct railway contexts from Hungary, France, and Belgium, capturing a wide range of railway assets and conditions. With over 288 million annotated points, Rail3D surpasses existing datasets in size and diversity, enabling the training of generalizable machine learning models. We conducted a generic classification with nine universal classes (Ground, Vegetation, Rail, Poles, Wires, Signals, Fence, Installation, and Building) and evaluated the performance of three state-of-the-art models: KPConv (Kernel Point Convolution), LightGBM, and Random Forest. The best-performing model, a fine-tuned KPConv, achieved a mean Intersection over Union (mIoU) of 86%, while the LightGBM-based method achieved a mIoU of 71%, outperforming Random Forest. This study will benefit infrastructure experts and railway researchers by providing a comprehensive dataset and benchmarks for 3D semantic segmentation. The data and code are publicly available for France and Hungary, with continuous updates based on user feedback.
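
    The mean Intersection over Union used to rank these models is computed per class and then averaged, as in the short sketch below. The nine-class setting matches the generic classification above; the labels in the usage example are random and purely illustrative.

        # Sketch of the per-class IoU and mIoU computation (integer class ids 0-8 assumed).
        import numpy as np

        def mean_iou(y_true: np.ndarray, y_pred: np.ndarray, num_classes: int = 9) -> float:
            ious = []
            for c in range(num_classes):
                intersection = np.sum((y_true == c) & (y_pred == c))
                union = np.sum((y_true == c) | (y_pred == c))
                if union > 0:          # skip classes absent from both ground truth and prediction
                    ious.append(intersection / union)
            return float(np.mean(ious))

        # Usage example with random labels, for shape and call signature only
        rng = np.random.default_rng(0)
        gt = rng.integers(0, 9, size=1_000)
        pred = rng.integers(0, 9, size=1_000)
        print(f"mIoU = {mean_iou(gt, pred):.2%}")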

    Towards a Digital Twin of Liege: The Core 3D Model based on Semantic Segmentation and Automated Modeling of LiDAR Point Clouds

    Full text link
    Peer reviewed
    The emergence of Digital Twins in city planning and management marks a contemporary trend, elevating 3D modeling and simulation for cities. In this context, the use of semantic point clouds to generate 3D city models for Digital Twins proves instrumental in addressing this evolving need. This article introduces a processing pipeline for the automatic modeling of buildings, roads, and vegetation based on the semantic segmentation results of 3D LiDAR point clouds. It employs a semantic segmentation approach that integrates multiple training datasets to achieve precise extraction of the target objects. Open-source reconstruction tools were adapted for building and road modeling, while a Python code building on an existing code base was optimized for tree modeling. The case study was conducted in the city of Liège, Belgium. The results obtained were satisfactory, and the schemas and geometry of the developed models were validated. An evaluation of the adopted reconstruction methods was conducted, along with a comparison with other methods from the literature.
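
    The hand-off from semantic segmentation to the modeling tools amounts to splitting the labelled point cloud into per-theme subsets, as sketched below. The class codes (ASPRS-style) and file names are assumptions for illustration, not the mapping used in the article.

        # Sketch: split a semantically segmented point cloud into per-theme files that feed
        # the building, road, and tree reconstruction steps. Codes and paths are hypothetical.
        import laspy

        CLASS_CODES = {"vegetation": 5, "building": 6, "road": 11}  # assumed ASPRS-style codes

        las = laspy.read("liege_segmented.las")  # hypothetical segmented tile
        for theme, code in CLASS_CODES.items():
            out = laspy.create(point_format=las.header.point_format,
                               file_version=las.header.version)
            out.points = las.points[las.classification == code]
            out.write(f"{theme}_points.las")     # input to the reconstruction tool for this theme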