OpenAI’s recent introduction of the memory feature in ChatGPT marks a significant enhancement, enabling the model to automatically extract and store user information from conversations in order to deliver more personalized responses. Despite its potential benefits, however, this feature raises critical concerns regarding user cybersecurity and privacy. To investigate these issues, this study examines user awareness of ChatGPT's memory functionality, attitudes toward its privacy implications, and behavioral changes prompted by perceived risks. Drawing on an assessment framework used in the healthcare and cybersecurity fields, a questionnaire was developed and distributed primarily among college students. An analysis of the responses revealed that, while some users have a basic understanding of the feature, many remain unaware of or uncertain about its operation, particularly regarding data extraction, storage, and management practices. These findings highlight the importance of enhancing transparency and providing users with greater control over memory features in ChatGPT and similar large language models, emphasizing the need to address the privacy and security challenges associated with such advancements.