
    An Intelligent Context Aware Recommender System for Real-Estate

    Finding products and items in a large online space that meet user needs is difficult. Time spent searching before finding a relevant item can be a significant time sink for users. As in other economic sectors, growing Internet usage has also changed user behavior in the real-estate market. Advances in virtual reality offer virtual tours and interactive maps and floor plans, which make online rental websites very popular among users. With this abundance of information, recommender systems are more important than ever for giving the user relevant property suggestions and reducing search time. A sophisticated recommender in this domain can help reduce the need for a real-estate agent. Session-based user behavior and the lack of user profiles limit the use of traditional recommendation methods. In this research, we propose an approach for real-estate recommendation based on the Gated Orthogonal Recurrent Unit (GORU) and weighted cosine similarity. GORU captures the user's search context, and weighted cosine similarity improves the rank of pertinent properties. We used data from a public online real-estate web portal (AARZ.PK); the data represents the original behavior of users on the portal. We used recall, user coverage, and Mean Reciprocal Rank (MRR) metrics to evaluate our system against other state-of-the-art techniques. The proposed solution outperforms various baselines and state-of-the-art RNN-based solutions.
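
    The abstract gives no implementation details, but the ranking step it describes (scoring candidate properties against a session context with a weighted cosine similarity) can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the feature weights, the toy vectors, and the idea that the session context comes from a GORU-style encoder.

```python
import numpy as np

def weighted_cosine_similarity(u, v, w):
    """Cosine similarity between u and v with per-dimension weights w."""
    num = np.sum(w * u * v)
    den = np.sqrt(np.sum(w * u * u)) * np.sqrt(np.sum(w * v * v))
    return num / den if den > 0 else 0.0

def rank_properties(session_context, property_vectors, weights):
    """Return candidate indices ordered by weighted cosine similarity, best first."""
    scores = [weighted_cosine_similarity(session_context, p, weights)
              for p in property_vectors]
    return np.argsort(scores)[::-1]

# Toy usage: 4 candidate properties, 3 features each (all values hypothetical).
session_context = np.array([0.8, 0.1, 0.5])   # stand-in for a GORU session encoding
properties = np.array([[0.9, 0.2, 0.4],
                       [0.1, 0.9, 0.1],
                       [0.7, 0.0, 0.6],
                       [0.2, 0.5, 0.3]])
weights = np.array([2.0, 1.0, 1.5])           # hypothetical per-feature importance
print(rank_properties(session_context, properties, weights))
```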

    DeepRank: Adapting Neural Tensor Networks for Ranking the Recommendations

    Online real-estate property portals are gaining great traction among the masses due to the ease of finding properties for rent, sale, or purchase. With a few clicks, a real-estate portal can display relevant information to a user by ranking the searched items according to the user's specifications. It is highly significant that the ranking results display the most relevant search results to the user. Therefore, an efficient ranking algorithm that takes the user's context into account is crucial for enhancing the user experience of finding real-estate properties online. This paper proposes an expressive Neural Tensor Network to rank searched properties based on the similarity between two property entities. Previous similarity techniques do not take into account the numerous complex features used to define a property. We show that performance can be enhanced if the property entities are represented as an average of their constituent features before finding the similarity between them. The proposed method takes each feature into account dynamically and ranks properties according to similarity with an accuracy of 86.6%.
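
    For readers unfamiliar with Neural Tensor Networks, the sketch below shows the standard NTN scoring form (a bilinear tensor term plus a linear term passed through a nonlinearity) applied to two property entities represented as averages of their feature embeddings, as the abstract describes. The parameters are random and untrained; shapes and names are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def entity_embedding(feature_vectors):
    """Represent a property entity as the average of its constituent feature embeddings."""
    return np.mean(feature_vectors, axis=0)

def ntn_score(e1, e2, W, V, b, u):
    """Neural Tensor Network similarity score between two entity embeddings.

    W: (k, d, d) bilinear tensor, V: (k, 2d) linear layer,
    b: (k,) bias, u: (k,) output weights; k slices, d-dimensional embeddings.
    """
    bilinear = np.einsum('i,kij,j->k', e1, W, e2)   # e1^T W[slice] e2 for each slice
    linear = V @ np.concatenate([e1, e2]) + b
    return float(u @ np.tanh(bilinear + linear))

# Toy usage with random (untrained) parameters.
rng = np.random.default_rng(0)
d, k = 4, 3
W = rng.normal(size=(k, d, d)); V = rng.normal(size=(k, 2 * d))
b = rng.normal(size=k); u = rng.normal(size=k)
prop_a = entity_embedding(rng.normal(size=(5, d)))  # 5 hypothetical feature embeddings
prop_b = entity_embedding(rng.normal(size=(6, d)))
print(ntn_score(prop_a, prop_b, W, V, b, u))
```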

    INQUIRIES IN INTELLIGENT INFORMATION SYSTEMS: NEW TRAJECTORIES AND PARADIGMS

    Rapid digital transformation drives organizations to continually revitalize their business models so that they can excel in aggressive global competition. Intelligent Information Systems (IIS) have enabled organizations to achieve many strategic and market leverages. Despite the increasing intelligence competencies offered by IIS, they are still limited in many cognitive functions. Elevating the cognitive competencies offered by IIS would impact organizations' strategic positions. With the advent of Deep Learning (DL), IoT, and Edge Computing, IIS have witnessed a leap in their intelligence competencies. DL has been applied to many business areas and industries, such as real estate and manufacturing. Moreover, despite the complexity of DL models, much research has been dedicated to applying DL on computationally limited devices, such as IoT devices. Applying deep learning to IoT devices will turn everyday devices into intelligent interactive assistants. IIS suffer from many challenges that affect their service quality, process quality, and information quality. These challenges affect, in turn, user acceptance in terms of satisfaction, use, and trust. Moreover, Information Systems (IS) research has devoted very little attention to IIS development and to the foreseeable contribution of new paradigms to addressing IIS challenges. Therefore, this research aims to investigate how the employment of new AI paradigms would enhance the overall quality, and consequently user acceptance, of IIS. This research employs different AI paradigms to develop two different IIS. The first system uses deep learning, edge computing, and IoT to develop scene-aware ridesharing monitoring; it enhances the efficiency, privacy, and responsiveness of current ridesharing monitoring solutions. The second system aims to enhance the real-estate search process by formulating the search problem as a multi-criteria decision. The system also allows users to filter properties based on their degree of damage, where a deep learning network locates damage in each real-estate image. The system enhances real-estate website service quality by improving flexibility, relevancy, and efficiency. The research contributes to Information Systems research by developing two Design Science artifacts. Both artifacts add to the IS knowledge base by integrating different components, measurements, and techniques coherently and logically to effectively address important issues in IIS. The research also adds to the IS environment by addressing important business requirements that current methodologies and paradigms do not fulfill. The research also highlights that most IIS overlook important design guidelines due to the lack of relevant evaluation metrics for different business problems.
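
    The second artifact's formulation of property search as a multi-criteria decision can be illustrated with a simple weighted-sum sketch: each listing is scored over normalized criteria, one of which is a damage degree assumed here to come from a separate deep learning model. The criteria, weights, and values are hypothetical placeholders, not the dissertation's actual method.

```python
import numpy as np

def normalize(column, benefit=True):
    """Min-max normalize a criterion; invert cost criteria (where lower is better)."""
    lo, hi = column.min(), column.max()
    if hi == lo:
        return np.ones_like(column)
    scaled = (column - lo) / (hi - lo)
    return scaled if benefit else 1.0 - scaled

def rank_listings(criteria, weights, benefit_flags):
    """criteria: (n_listings, n_criteria) matrix; returns listing indices, best first."""
    norm = np.column_stack([normalize(criteria[:, j], benefit_flags[j])
                            for j in range(criteria.shape[1])])
    scores = norm @ weights
    return np.argsort(scores)[::-1]

# Columns: price, area (sqm), predicted damage degree (0..1, assumed DL model output).
listings = np.array([[250_000, 120, 0.05],
                     [180_000,  90, 0.40],
                     [300_000, 150, 0.10]])
weights = np.array([0.4, 0.35, 0.25])   # user-chosen criterion importance (hypothetical)
benefit = [False, True, False]          # price and damage are costs, area is a benefit
print(rank_listings(listings, weights, benefit))
```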

    Focussed palmtop information access combining starfield displays and profile-based recommendations

    This paper presents two palmtop applications: Taeneb CityGuide and Taeneb ConferenceGuide. Both applications are centred around Starfield displays on palmtop computers, which provide fast, dynamic access to information on a small platform. The paper describes the applications, focussing on this novel palmtop information-access method and on the user-profiling aspect of the CityGuide, where restaurants are recommended to users based on both the match of restaurant type to the user's observed previous interactions and the ratings given by reviewers with similar observed preferences.
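
    The CityGuide's recommendation rule, as described, blends two signals: how well a restaurant's type matches the user's observed interactions, and ratings from reviewers with similar preferences. A minimal hybrid-scoring sketch of that idea follows; the profile structure, rating scale, and blending weight are assumptions for illustration, not the Taeneb implementation.

```python
import numpy as np

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def score_restaurants(type_profile, restaurant_types, user_ratings, reviewer_ratings, alpha=0.5):
    """Blend a content score (restaurant type vs. observed interaction counts)
    with ratings from reviewers whose rating vectors resemble the user's."""
    content = np.array([type_profile.get(t, 0) for t in restaurant_types], dtype=float)
    if content.max() > 0:
        content /= content.max()
    sims = np.array([cosine(user_ratings, r) for r in reviewer_ratings])
    collab = sims @ reviewer_ratings / (np.abs(sims).sum() + 1e-9)
    return alpha * content + (1 - alpha) * collab

# Toy usage: 3 restaurants, ratings on a 0..1 scale (all values hypothetical).
type_profile = {"italian": 5, "thai": 1}          # observed interactions per restaurant type
restaurant_types = ["italian", "thai", "french"]
user_ratings = np.array([0.9, 0.2, 0.0])          # the user's own sparse ratings
reviewer_ratings = np.array([[0.8, 0.3, 0.1],
                             [0.1, 0.9, 0.7]])
print(score_restaurants(type_profile, restaurant_types, user_ratings, reviewer_ratings))
```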

    Into the Black Box: Designing for Transparency in Artificial Intelligence

    Indiana University-Purdue University Indianapolis (IUPUI)
    The rapid infusion of artificial intelligence into everyday technologies means that, in the very near future, consumers are likely to interact daily with intelligent systems that provide suggestions and recommendations. While these technologies promise much, current issues with low transparency create a high potential to confuse end-users, limiting the market viability of these technologies. While efforts are underway to make machine learning models more transparent, HCI currently lacks an understanding of how model-generated explanations should best translate into the practicalities of system design. To address this gap, my research took a pragmatic approach to improving system transparency for end-users. Through a series of three studies, I investigated the need for and value of transparency to end-users and explored methods to improve system designs to achieve greater transparency in intelligent systems offering recommendations. My research resulted in a summarized taxonomy that outlines a variety of motivations for why users ask questions of intelligent systems, useful for considering the type and category of information users might appreciate when interacting with AI-based recommendations. I also developed a categorization of explanation types, known as explanation vectors, organized into groups that correspond to user knowledge goals. Explanation vectors give system designers options for delivering explanations of system processes beyond basic explainability. I developed a detailed user typology, a four-factor categorization of the predominant attitudes and opinion schemes of everyday users interacting with AI-based recommendations, useful for understanding the range of user sentiment towards AI-based recommender features and possibly for tailoring interface design by user type. Lastly, I developed and tested an evaluation method known as the System Transparency Evaluation Method (STEv), which allows real-world systems and prototypes to be evaluated and improved through a low-cost query method. Results from this dissertation offer concrete direction to interaction designers as to how these findings might manifest in the design of interfaces that are more transparent to end users. These studies provide a framework and methodology complementary to existing HCI evaluation methods and lay the groundwork upon which other research into improving system transparency might build.

    Web Data Extraction, Applications and Techniques: A Survey

    Web Data Extraction is an important problem that has been studied by means of different scientific tools and in a broad range of applications. Many approaches to extracting data from the Web have been designed to solve specific problems and operate in ad-hoc domains. Other approaches, instead, heavily reuse techniques and algorithms developed in the field of Information Extraction. This survey aims to provide a structured and comprehensive overview of the literature in the field of Web Data Extraction. We provide a simple classification framework in which existing Web Data Extraction applications are grouped into two main classes, namely applications at the Enterprise level and at the Social Web level. At the Enterprise level, Web Data Extraction techniques emerge as a key tool for performing data analysis in Business and Competitive Intelligence systems as well as for business process re-engineering. At the Social Web level, Web Data Extraction techniques allow the gathering of large amounts of structured data continuously generated and disseminated by Web 2.0, Social Media, and Online Social Network users, which offers unprecedented opportunities to analyze human behavior at a very large scale. We also discuss the potential of cross-fertilization, i.e., the possibility of re-using Web Data Extraction techniques originally designed to work in a given domain in other domains.