A recommender system or a recommendation system (sometimes replacing "system" with a synonym such as platform or engine) is a subclass of information filtering system that seeks to predict the "rating" or "preference" that a user would give to an item.
Recommender systems have become increasingly popular in recent years, and are utilized in a variety of areas including movies, music, news, books, research articles, search queries, social tags, and products in general. There are also recommender systems for experts, collaborators, jokes, restaurants, garments, financial services, life insurance, romantic partners (online dating), and Twitter pages.
Overview
Recommender systems typically produce a list of recommendations in one of two ways: through collaborative filtering or through content-based filtering (also known as the personality-based approach). Collaborative filtering approaches build a model from a user's past behavior (items previously purchased or selected and/or numerical ratings given to those items) as well as similar decisions made by other users. This model is then used to predict items (or ratings for items) in which the user may have an interest. Content-based filtering approaches utilize a series of discrete characteristics of an item in order to recommend additional items with similar properties. These approaches are often combined (see Hybrid recommender systems).
The differences between collaborative and content-based filtering can be demonstrated by comparing two popular music recommender systems: Last.fm and Pandora Radio.
- Last.fm creates a "station" of recommended songs by observing what bands and individual tracks the user has listened to on a regular basis and comparing those against the listening behavior of other users. Last.fm will play tracks that do not appear in the user's library, but are often played by other users with similar interests. As this approach leverages the behavior of users, it is an example of a collaborative filtering technique.
- Pandora uses the properties of a song or artist (a subset of the 400 attributes provided by the Music Genome Project) to seed a "station" that plays music with similar properties. User feedback is used to refine the station's results, deemphasizing certain attributes when a user "dislikes" a particular song and emphasizing other attributes when a user "likes" a song. This is an example of a content-based approach.
Each type of system has its strengths and weaknesses. In the above example, Last.fm requires a large amount of information about a user in order to make accurate recommendations; this is an example of the cold start problem, common in collaborative filtering systems. While Pandora needs very little information to start, it is far more limited in scope (for example, it can only make recommendations that are similar to the original seed).
Recommender systems are a useful alternative to search algorithms since they help users discover items they might not have found otherwise. Of note, recommender systems are often implemented using search engines indexing non-traditional data.
Recommender systems were first mentioned in a technical report as a "digital bookshelf" in 1990 by Jussi Karlgren at Columbia University, and were implemented at scale and worked through in technical reports and publications from 1994 onwards by Jussi Karlgren, then at SICS, and by research groups led by Pattie Maes at MIT, Will Hill at Bellcore, and Paul Resnick, also at MIT, whose work with GroupLens was awarded the 2010 ACM Software Systems Award.
Montaner provided the first overview of recommender systems from an intelligent agent perspective. Adomavicius provided a new, alternate overview of recommender systems. Herlocker provides an additional overview of evaluation techniques for recommender systems, and Beel et al. discussed the problems of offline evaluations. Beel et al. have also provided literature surveys on available research paper recommender systems and existing challenges.
Recommender systems have been the focus of several granted patents.
Approaches
Collaborative filtering
One approach to the design of recommender systems that has wide use is collaborative filtering. Collaborative filtering methods are based on collecting and analyzing a large amount of information on users' behaviors, activities or preferences and predicting what users will like based on their similarity to other users. A key advantage of the collaborative filtering approach is that it does not rely on machine-analyzable content, and is therefore capable of accurately recommending complex items such as movies without requiring an "understanding" of the item itself. Many algorithms have been used in measuring user similarity or item similarity in recommender systems; examples include the k-nearest neighbor (k-NN) approach and the Pearson correlation, as first implemented by Allen.
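As an illustrative sketch (not tied to any particular system; the toy ratings below are invented), user similarity via the Pearson correlation and a simple k-NN rating prediction might look like:

```python
from math import sqrt

# Toy ratings: user -> {item: rating}. Invented data for illustration only.
ratings = {
    "alice": {"a": 5, "b": 3, "c": 4},
    "bob":   {"a": 4, "b": 2, "c": 5, "d": 4},
    "carol": {"a": 1, "b": 5, "c": 2, "d": 1},
}

def pearson(u, v):
    """Pearson correlation over the items both users have rated."""
    common = set(ratings[u]) & set(ratings[v])
    if len(common) < 2:
        return 0.0
    ru = [ratings[u][i] for i in common]
    rv = [ratings[v][i] for i in common]
    mu, mv = sum(ru) / len(ru), sum(rv) / len(rv)
    num = sum((x - mu) * (y - mv) for x, y in zip(ru, rv))
    den = sqrt(sum((x - mu) ** 2 for x in ru)) * sqrt(sum((y - mv) ** 2 for y in rv))
    return num / den if den else 0.0

def predict(user, item, k=2):
    """k-NN prediction: similarity-weighted, mean-centered average of neighbors' ratings."""
    neighbors = sorted(
        ((pearson(user, v), v) for v in ratings if v != user and item in ratings[v]),
        reverse=True)[:k]
    mu = sum(ratings[user].values()) / len(ratings[user])
    num = sum(s * (ratings[v][item] - sum(ratings[v].values()) / len(ratings[v]))
              for s, v in neighbors)
    den = sum(abs(s) for s, v in neighbors)
    return mu + num / den if den else mu
```

Here `predict("alice", "d")` leans on bob (similar taste, positive weight) and carol (opposite taste, negative weight) to estimate a rating for an item alice has never rated.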
Collaborative filtering is based on the assumption that people who agreed in the past will agree in the future, and that they will like similar kinds of items as they liked in the past.
When building a model from a user's behavior, a distinction is often made between explicit and implicit forms of data collection.
Examples of explicit data collection include the following:
- Asking a user to rate an item on a sliding scale.
- Asking a user to search.
- Asking a user to rank a collection of items from favorite to least favorite.
- Presenting two items to a user and asking him/her to choose the better of the two.
- Asking a user to create a list of items that he/she likes.
Examples of implicit data collection include the following:
- Observing the items that a user views in an online store.
- Analyzing item/user viewing times.
- Keeping a record of the items that a user purchases online.
- Obtaining a list of items that a user has listened to or watched on his/her computer.
- Analyzing the user's social network and discovering similar likes and dislikes.
The recommender system compares the collected data to similar and dissimilar data collected from others and calculates a list of recommended items for the user. Several commercial and non-commercial examples are listed in the article on collaborative filtering systems.
One of the most famous examples of collaborative filtering is item-to-item collaborative filtering (people who buy x also buy y), an algorithm popularized by Amazon.com's recommender system. Other examples include:
- As previously detailed, Last.fm recommends music based on a comparison of the listening habits of similar users, while Readgeek compares books ratings for recommendations.
- Facebook, MySpace, LinkedIn, and other social networks use collaborative filtering to recommend new friends, groups, and other social connections (by examining the network of connections between a user and their friends). Twitter uses many signals and in-memory computations for recommending to its users whom they should "follow."
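The item-to-item ("people who buy x also buy y") idea can be sketched as cosine similarity between items' purchase vectors; the purchase data below is invented for illustration:

```python
from math import sqrt

# Toy purchase history: user -> set of purchased items (invented data).
purchases = {
    "u1": {"x", "y"},
    "u2": {"x", "y", "z"},
    "u3": {"x", "z"},
    "u4": {"y"},
}

def item_cosine(i, j):
    """Cosine similarity between two items' binary purchase vectors."""
    buyers_i = {u for u, items in purchases.items() if i in items}
    buyers_j = {u for u, items in purchases.items() if j in items}
    if not buyers_i or not buyers_j:
        return 0.0
    return len(buyers_i & buyers_j) / sqrt(len(buyers_i) * len(buyers_j))

def also_bought(item, n=2):
    """Rank the other items by similarity to `item`."""
    others = {j for items in purchases.values() for j in items} - {item}
    return sorted(others, key=lambda j: item_cosine(item, j), reverse=True)[:n]
```

Because similarities are computed between items rather than users, the item-item table can be precomputed offline, which is one reason this variant scales well to large catalogs.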
Collaborative filtering approaches often suffer from three problems: cold start, scalability, and sparsity.
- Cold start: These systems often require a large amount of existing data on a user in order to make accurate recommendations.
- Scalability: In many of the environments in which these systems make recommendations, there are millions of users and products. Thus, a large amount of computation power is often necessary to calculate recommendations.
- Sparsity: The number of items sold on major e-commerce sites is extremely large. The most active users will only have rated a small subset of the overall database. Thus, even the most popular items have very few ratings.
A particular type of collaborative filtering algorithm uses matrix factorization, a low-rank matrix approximation technique.
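A minimal sketch of such a matrix factorization, fit by stochastic gradient descent on the observed entries of an invented rating matrix (hyperparameters are arbitrary):

```python
import random

random.seed(0)

# Toy rating matrix; 0 marks an unobserved entry (invented data).
R = [
    [5, 3, 0],
    [4, 0, 4],
    [0, 2, 5],
]

def factorize(R, k=2, steps=2000, lr=0.01, reg=0.02):
    """Fit a low-rank approximation R ~ P @ Q^T by SGD on observed entries."""
    n, m = len(R), len(R[0])
    P = [[random.uniform(0.1, 1) for _ in range(k)] for _ in range(n)]
    Q = [[random.uniform(0.1, 1) for _ in range(k)] for _ in range(m)]
    for _ in range(steps):
        for u in range(n):
            for i in range(m):
                if R[u][i] == 0:
                    continue  # skip unobserved entries
                err = R[u][i] - sum(P[u][f] * Q[i][f] for f in range(k))
                for f in range(k):
                    pu, qi = P[u][f], Q[i][f]
                    P[u][f] += lr * (err * qi - reg * pu)  # regularized update
                    Q[i][f] += lr * (err * pu - reg * qi)
    return P, Q

P, Q = factorize(R)
pred = lambda u, i: sum(P[u][f] * Q[i][f] for f in range(2))
```

After fitting, `pred(u, i)` reproduces the observed ratings closely and also fills in the missing entries, which is exactly what the recommender uses.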
Collaborative filtering methods are classified as memory-based and model-based collaborative filtering. A well-known example of a memory-based approach is the user-based algorithm, while that of a model-based approach is the Kernel-Mapping Recommender.
Content-based filtering
Another common approach when designing recommender systems is content-based filtering. Content-based filtering methods are based on a description of the item and a profile of the user's preferences. In a content-based recommender system, keywords are used to describe the items and a user profile is built to indicate the type of item this user likes. In other words, these algorithms try to recommend items that are similar to those that a user liked in the past (or is examining in the present). In particular, various candidate items are compared with items previously rated by the user and the best-matching items are recommended. This approach has its roots in information retrieval and information filtering research.
To abstract the features of the items in the system, an item presentation algorithm is applied. A widely used algorithm is the tf-idf representation (also called vector space representation).
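A minimal sketch of the tf-idf representation over invented item descriptions:

```python
import math

# Toy item descriptions (invented data).
docs = {
    "item1": "indie rock guitar",
    "item2": "indie folk guitar banjo",
    "item3": "electronic dance synth",
}

def tf_idf(docs):
    """Map each item to a {term: tf-idf weight} vector."""
    n = len(docs)
    tokenized = {d: text.split() for d, text in docs.items()}
    df = {}  # document frequency of each term
    for words in tokenized.values():
        for w in set(words):
            df[w] = df.get(w, 0) + 1
    vectors = {}
    for d, words in tokenized.items():
        vec = {}
        for w in set(words):
            tf = words.count(w) / len(words)    # term frequency in this item
            idf = math.log(n / df[w])           # rarity across all items
            vec[w] = tf * idf
        vectors[d] = vec
    return vectors

vectors = tf_idf(docs)
```

Terms occurring in many items (here "guitar") receive lower weights than rare, distinguishing terms (here "synth"), so vector comparisons emphasize what makes an item distinctive.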
To create a user profile, the system mostly focuses on two types of information:
- A model of the user's preferences.
- A history of the user's interaction with the recommender system.
Basically, these methods use an item profile (i.e., a set of discrete attributes and features) characterizing the item within the system. The system creates a content-based profile of users based on a weighted vector of item features. The weights denote the importance of each feature to the user and can be computed from individually rated content vectors using a variety of techniques. Simple approaches use the average values of the rated item vector while other sophisticated methods use machine learning techniques such as Bayesian Classifiers, cluster analysis, decision trees, and artificial neural networks in order to estimate the probability that the user is going to like the item.
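A minimal sketch of such a profile, using the simple averaging approach mentioned above: the weighted vector is built from item feature vectors weighted by mean-centered ratings (all feature names and ratings are invented):

```python
# Toy item feature vectors and one user's ratings (invented data).
items = {
    "m1": {"action": 1.0, "comedy": 0.0, "drama": 0.2},
    "m2": {"action": 0.8, "comedy": 0.1, "drama": 0.0},
    "m3": {"action": 0.0, "comedy": 1.0, "drama": 0.5},
}
user_ratings = {"m1": 5, "m2": 4, "m3": 1}  # 1-5 scale

def build_profile(items, ratings):
    """Sum item feature vectors, weighted by mean-centered ratings."""
    mu = sum(ratings.values()) / len(ratings)
    profile = {}
    for item, r in ratings.items():
        for feat, val in items[item].items():
            profile[feat] = profile.get(feat, 0.0) + (r - mu) * val
    return profile

def score(profile, item_vec):
    """Dot product between the user profile and a candidate item's features."""
    return sum(profile.get(f, 0.0) * v for f, v in item_vec.items())

profile = build_profile(items, user_ratings)
```

Features of highly rated items get positive weight and features of disliked items get negative weight, so a new action-heavy candidate scores above a comedy for this toy user.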
Direct feedback from a user, usually in the form of a like or dislike button, can be used to assign higher or lower weights on the importance of certain attributes (using Rocchio classification or other similar techniques).
A key issue with content-based filtering is whether the system is able to learn user preferences from users' actions regarding one content source and use them across other content types. When the system is limited to recommending content of the same type as the user is already using, the value from the recommendation system is significantly less than when other content types from other services can be recommended. For example, recommending news articles based on browsing of news is useful, but would be much more useful when music, videos, products, discussions etc. from different services can be recommended based on news browsing.
As previously detailed, Pandora Radio is a popular example of a content-based recommender system that plays music with similar characteristics to that of a song provided by the user as an initial seed. There are also a large number of content-based recommender systems aimed at providing movie recommendations, a few such examples include Rotten Tomatoes, Internet Movie Database, Jinni, Rovi Corporation, and Jaman. Document related recommender systems aim at providing document recommendations to knowledge workers. Public health professionals have been studying recommender systems to personalize health education and preventative strategies.
Hybrid recommender systems
Recent research has demonstrated that a hybrid approach, combining collaborative filtering and content-based filtering, can be more effective in some cases. Hybrid approaches can be implemented in several ways: by making content-based and collaborative-based predictions separately and then combining them; by adding content-based capabilities to a collaborative-based approach (and vice versa); or by unifying the approaches into one model. Several studies empirically compare the performance of hybrid methods with pure collaborative and content-based methods and demonstrate that hybrid methods can provide more accurate recommendations than pure approaches. These methods can also be used to overcome some of the common problems in recommender systems, such as cold start and the sparsity problem.
Netflix is a good example of the use of hybrid recommender systems. The website makes recommendations by comparing the watching and searching habits of similar users (i.e., collaborative filtering) as well as by offering movies that share characteristics with films that a user has rated highly (content-based filtering).
A variety of techniques have been proposed as the basis for recommender systems: collaborative, content-based, knowledge-based, and demographic techniques. Each of these techniques has known shortcomings, such as the well known cold-start problem for collaborative and content-based systems (what to do with new users with few ratings) and the knowledge engineering bottleneck in knowledge-based approaches. A hybrid recommender system is one that combines multiple techniques together to achieve some synergy between them.
- Collaborative: The system generates recommendations using only information about rating profiles for different users or items. Collaborative systems locate peer users / items with a rating history similar to the current user or item and generate recommendations using this neighborhood. The user based and the item based nearest neighbor algorithms can be combined to deal with the cold start problem and improve recommendation results.
- Content-based: The system generates recommendations from two sources: the features associated with products and the ratings that a user has given them. Content-based recommenders treat recommendation as a user-specific classification problem and learn a classifier for the user's likes and dislikes based on product features.
- Demographic: A demographic recommender provides recommendations based on a demographic profile of the user. Recommended products can be produced for different demographic niches, by combining the ratings of users in those niches.
- Knowledge-based: A knowledge-based recommender suggests products based on inferences about a user's needs and preferences. This knowledge will sometimes contain explicit functional knowledge about how certain product features meet user needs.
The term hybrid recommender system is used here to describe any recommender system that combines multiple recommendation techniques together to produce its output. There is no reason why several different techniques of the same type could not be hybridized; for example, two different content-based recommenders could work together. A number of projects have investigated this type of hybrid: NewsDude, which uses both naive Bayes and kNN classifiers in its news recommendations, is just one example.
Seven hybridization techniques:
- Weighted: The scores of different recommendation components are combined numerically.
- Switching: The system chooses among recommendation components and applies the selected one.
- Mixed: Recommendations from different recommenders are presented together.
- Feature Combination: Features derived from different knowledge sources are combined together and given to a single recommendation algorithm.
- Feature Augmentation: One recommendation technique is used to compute a feature or set of features, which is then part of the input to the next technique.
- Cascade: Recommenders are given strict priority, with the lower priority ones breaking ties in the scoring of the higher ones.
- Meta-level: One recommendation technique is applied and produces some sort of model, which is then the input used by the next technique.
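The first of these, weighted hybridization, can be sketched as a numeric combination of per-item scores from two component recommenders (all scores invented):

```python
def weighted_hybrid(score_lists, weights):
    """Combine per-item scores from several recommenders by a weighted sum."""
    combined = {}
    for scores, w in zip(score_lists, weights):
        for item, s in scores.items():
            combined[item] = combined.get(item, 0.0) + w * s
    # return items ranked by combined score, best first
    return sorted(combined, key=combined.get, reverse=True)

collab  = {"a": 0.9, "b": 0.4, "c": 0.1}   # collaborative-filtering scores
content = {"a": 0.2, "b": 0.8, "c": 0.3}   # content-based scores
ranking = weighted_hybrid([collab, content], [0.5, 0.5])
```

Changing the weights shifts the final ranking between the two components; with equal weights item "b" wins here, while weighting the collaborative scores heavily would rank "a" first.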
Beyond accuracy
Typically, research on recommender systems is concerned with finding the most accurate recommendation algorithms. However, there are a number of other factors that are also important.
- Diversity - Users tend to be more satisfied with recommendations when there is a higher intra-list diversity, e.g. items from different artists.
- Recommender persistence - In some situations, it is more effective to re-show recommendations, or let users re-rate items, than showing new items. There are several reasons for this. Users may ignore items when they are shown for the first time, for instance, because they had no time to inspect the recommendations carefully.
- Privacy - Recommender systems usually have to deal with privacy concerns because users have to reveal sensitive information. Building user profiles using collaborative filtering can be problematic from a privacy point of view. Many European countries have a strong culture of data privacy, and every attempt to introduce any level of user profiling can result in a negative customer response. A number of privacy issues arose around the dataset offered by Netflix for the Netflix Prize competition. Although the data sets were anonymized in order to preserve customer privacy, in 2007 two researchers from the University of Texas were able to identify individual users by matching the data sets with film ratings on the Internet Movie Database. As a result, in December 2009, an anonymous Netflix user sued Netflix in Doe v. Netflix, alleging that Netflix had violated United States fair trade laws and the Video Privacy Protection Act by releasing the datasets. This led in part to the cancellation of a second Netflix Prize competition in 2010. Much research has been conducted on ongoing privacy issues in this space. Ramakrishnan et al. have conducted an extensive overview of the trade-offs between personalization and privacy and found that the combination of weak ties (an unexpected connection that provides serendipitous recommendations) and other data sources can be used to uncover identities of users in an anonymized dataset.
- User demographics - Beel et al. found that user demographics may influence how satisfied users are with recommendations. In their paper they show that elderly users tend to be more interested in recommendations than younger users.
- Robustness - When users can participate in the recommender system, the issue of fraud must be addressed.
- Serendipity - Serendipity is a measure of "how surprising the recommendations are". For instance, a recommender system that recommends milk to a customer in a grocery store might be perfectly accurate, but it is not a good recommendation because it is an obvious item for the customer to buy. However, high scores of serendipity may have a negative impact on accuracy.
- Trust - A recommender system is of little value for a user if the user does not trust the system. Trust can be built by a recommender system by explaining how it generates recommendations, and why it recommends an item.
- Labelling - User satisfaction with recommendations may be influenced by the labeling of the recommendations. For instance, in the cited study click-through rate (CTR) for recommendations labeled as "Sponsored" were lower (CTR=5.93%) than CTR for identical recommendations labeled as "Organic" (CTR=8.86%). Interestingly, recommendations with no label performed best (CTR=9.87%) in that study.
Mobile recommender systems
One growing area of research in the area of recommender systems is mobile recommender systems. With the increasing ubiquity of internet-accessing smart phones, it is now possible to offer personalized, context-sensitive recommendations. This is a particularly difficult area of research as mobile data is more complex than data that recommender systems often have to deal with (it is heterogeneous, noisy, requires spatial and temporal auto-correlation, and has validation and generality problems). Additionally, mobile recommender systems suffer from a transplantation problem - recommendations may not apply in all regions (for instance, it would be unwise to recommend a recipe in an area where all of the ingredients may not be available).
One example of a mobile recommender system is one that offers potentially profitable driving routes for taxi drivers in a city. This system takes input data in the form of GPS traces of the routes that taxi drivers took while working, which include location (latitude and longitude), time stamps, and operational status (with or without passengers). It uses this data to recommend a list of pickup points along a route, with the goal of optimizing occupancy times and profits. This type of system is obviously location-dependent, and since it must operate on a handheld or embedded device, the computation and energy requirements must remain low.
Another example of a mobile recommender system is the one Bouneffouf et al. (2012) developed for professional users. Using the user's GPS traces and agenda, it suggests suitable information depending on the user's situation and interests. The system uses machine learning techniques and reasoning processes in order to dynamically adapt the mobile recommender system to the evolution of the user's interests. The author called this algorithm hybrid-ε-greedy.
Mobile recommendation systems have also been successfully built using the "Web of Data" as a source for structured information. A good example of such a system is SMARTMUSEUM. The system uses semantic modelling, information retrieval, and machine learning techniques in order to recommend content matching user interests, even when presented with sparse or minimal user data.
Risk-aware recommender systems
The majority of existing approaches to recommender systems focus on recommending the most relevant content to users using contextual information, and do not take into account the risk of disturbing the user in specific situations. However, in many applications, such as recommending personalized content, it is also important to consider the risk of upsetting the user by pushing recommendations in certain circumstances, for instance, during a professional meeting, early in the morning, or late at night. Therefore, the performance of the recommender system depends in part on the degree to which it has incorporated this risk into the recommendation process.
Risk definition
"The risk in recommender systems is the possibility to disturb or to upset the user which leads to a bad answer of the user".
In response to these challenges, a dynamic risk-sensitive recommendation system called DRARS (Dynamic Risk-Aware Recommender System) was developed, which models context-aware recommendation as a bandit problem. The system combines a content-based technique with a contextual bandit algorithm. DRARS was shown to improve on the Upper Confidence Bound (UCB) policy, the best algorithm available at the time, by computing an optimal exploration value that maintains a trade-off between exploration and exploitation based on the risk level of the current user's situation. Experiments conducted in an industrial context with real data and real users showed that taking the risk level of users' situations into account significantly increased the performance of the recommender system.
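DRARS itself is not reproduced here, but the UCB policy it builds on can be sketched as follows; the two "arms" stand for candidate recommendation strategies with invented reward probabilities:

```python
import math
import random

random.seed(42)

def ucb1(reward_fns, rounds=2000):
    """UCB1: repeatedly pick the arm maximizing mean reward + exploration bonus."""
    n_arms = len(reward_fns)
    counts = [0] * n_arms      # how often each arm was played
    sums = [0.0] * n_arms      # total reward per arm
    for t in range(1, rounds + 1):
        if t <= n_arms:        # play each arm once to initialize
            arm = t - 1
        else:
            arm = max(range(n_arms),
                      key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        counts[arm] += 1
        sums[arm] += reward_fns[arm]()
    return counts

# Two invented recommendation "arms": the second pays off more often.
counts = ucb1([lambda: 1.0 if random.random() < 0.3 else 0.0,
               lambda: 1.0 if random.random() < 0.7 else 0.0])
```

The exploration bonus shrinks as an arm is played more, so the policy converges on the better-paying arm while still occasionally sampling the other; a risk-aware variant would adjust that bonus according to the user's current situation.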
The Netflix Prize
One of the key events that energized research in recommender systems was the Netflix Prize. From 2006 to 2009, Netflix sponsored a competition, offering a grand prize of $1,000,000 to the team that could take an offered dataset of over 100 million movie ratings and return recommendations that were 10% more accurate than those offered by the company's existing recommender system. This competition spurred the search for new and more accurate algorithms. On 21 September 2009, the grand prize was awarded to the BellKor's Pragmatic Chaos team under tiebreaking rules.
The most accurate algorithm in 2007 used an ensemble method of 107 different algorithmic approaches, blended into a single prediction:
Predictive accuracy is substantially improved when blending multiple predictors. Our experience is that most efforts should be concentrated in deriving substantially different approaches, rather than refining a single technique. Consequently, our solution is an ensemble of many methods.
The Netflix project produced benefits well beyond the competition itself. Some teams took their technology and applied it to other markets. Some members of the team that finished in second place founded Gravity R&D, a recommendation engine that is active in the RecSys community. 4-Tell, Inc. created a Netflix-project-derived solution for e-commerce websites.
A second contest was planned, but was ultimately canceled in response to an ongoing lawsuit and concerns from the Federal Trade Commission.
Performance measures
Evaluation is important in assessing the effectiveness of recommendation algorithms. Commonly used metrics are the mean squared error and root mean squared error, the latter having been used in the Netflix Prize. Information retrieval metrics such as precision and recall or DCG are useful for assessing the quality of a recommendation method. More recently, diversity, novelty, and coverage have also come to be considered important aspects of evaluation. However, many of the classic evaluation measures are highly criticized: often, results of so-called offline evaluations do not correlate with actually assessed user satisfaction. The authors of one such study conclude: "we would suggest treating results of offline evaluations [i.e. classic performance measures] with skepticism".
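Two of these metrics, RMSE and precision/recall over the top-k recommendations, can be sketched as follows (all values invented):

```python
from math import sqrt

def rmse(predicted, actual):
    """Root mean squared error between predicted and actual ratings."""
    return sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

def precision_recall_at_k(recommended, relevant, k):
    """Precision and recall over the top-k recommended items."""
    top_k = recommended[:k]
    hits = len(set(top_k) & set(relevant))
    return hits / k, hits / len(relevant)

# Invented illustrative values.
error = rmse([3.5, 4.0, 2.0], [4.0, 4.0, 1.0])
p, r = precision_recall_at_k(["a", "b", "c", "d"], {"a", "c", "e"}, k=3)
```

RMSE evaluates predicted rating values directly, while precision/recall at k evaluates the ranked list a user actually sees; a system can do well on one and poorly on the other.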
Multi-criteria recommender systems
Multi-criteria recommender systems (MCRS) can be defined as recommender systems that incorporate preference information on multiple criteria. Instead of developing recommendation techniques based on a single criterion value (the overall preference of user u for item i), these systems try to predict a rating for unexplored items of u by exploiting preference information on the multiple criteria that affect this overall preference value. Several researchers approach MCRS as a multi-criteria decision making (MCDM) problem, and apply MCDM methods and techniques to implement MCRS systems.
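A minimal sketch of the multi-criteria idea: per-criterion ratings are aggregated into one overall preference score using criterion weights (criteria and weights are invented; real MCRS approaches learn far richer aggregation functions):

```python
# Toy per-criterion ratings a user gave one item (invented data).
criteria_ratings = {"story": 5, "acting": 4, "visuals": 2}

def aggregate(ratings, weights):
    """Weighted average of per-criterion ratings into one overall score."""
    total_w = sum(weights[c] for c in ratings)
    return sum(weights[c] * r for c, r in ratings.items()) / total_w

overall = aggregate(criteria_ratings, {"story": 0.5, "acting": 0.3, "visuals": 0.2})
```

The per-criterion breakdown explains *why* the overall preference comes out as it does: here the weak "visuals" rating drags the score below a plain average of the other two criteria.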
Opinion-based recommender systems
In some cases, users are allowed to leave text reviews or feedback on items. Such user-generated text is implicit data for the recommender system, because it is a potentially rich source of both the features/aspects of an item and the user's evaluation of, or sentiment toward, that item. Features extracted from user-generated reviews serve as improved metadata for items: like conventional metadata they reflect aspects of the item, but they are the aspects users actually attend to. Sentiments extracted from the reviews can be seen as the user's rating scores on the corresponding features. Popular approaches to opinion-based recommender systems utilize various techniques including text mining, information retrieval, and sentiment analysis.
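A toy sketch of extracting aspect-level sentiment from a review, using small invented lexicons (real systems use learned models and far richer resources):

```python
# Tiny invented lexicons: aspect terms and signed opinion words.
aspects = {"battery", "screen", "camera"}
sentiment = {"great": 1, "good": 1, "poor": -1, "terrible": -1}

def extract_opinions(review):
    """Pair each aspect word with the sentiment of adjacent opinion words."""
    words = review.lower().split()
    scores = {}
    for i, w in enumerate(words):
        if w in aspects:
            window = words[max(0, i - 1): i + 2]   # look one word either side
            s = sum(sentiment.get(v, 0) for v in window)
            scores[w] = scores.get(w, 0) + s
    return scores

scores = extract_opinions("great battery but terrible screen")
```

The resulting per-aspect scores (positive for "battery", negative for "screen") play the role of fine-grained ratings that a recommender can consume alongside explicit star ratings.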
Recommender system evaluation
To measure the effectiveness of recommender systems and compare different approaches, three types of evaluations are available: user studies, online evaluations (A/B tests), and offline evaluations. User studies are rather small scale: a few dozen or a few hundred users are presented with recommendations created by different recommendation approaches, and the users then judge which recommendations are best. In A/B tests, recommendations are shown to typically thousands of users of a real product, and the recommender system randomly picks at least two different recommendation approaches to generate recommendations. Effectiveness is measured with implicit measures such as conversion rate or click-through rate. Offline evaluations are based on historic data, e.g. a dataset that contains information about how users previously rated movies. The effectiveness of recommendation approaches is then measured by how well an approach can predict the users' ratings in the dataset. While a rating is an explicit expression of whether a user liked a movie, such information is not available in all domains. For instance, in the domain of citation recommender systems, users typically do not rate a citation or recommended article. In such cases, offline evaluations may use implicit measures of effectiveness; for instance, a recommender system may be considered effective if it recommends as many as possible of the articles contained in a research article's reference list. However, this kind of offline evaluation is viewed critically by many researchers: for instance, it has been shown that results of offline evaluations have low correlation with results from user studies or A/B tests.
Reproducibility in recommender system research
In recent years there has been a growing understanding in the community that much previous research had little impact on the practical application of recommender systems. Ekstrand, Konstan, et al. criticize that "it is currently difficult to reproduce and extend recommender systems research results," and that evaluations are "not handled consistently". Konstan and Adomavicius conclude that "the Recommender Systems research community is facing a crisis where a significant number of papers present results that contribute little to collective knowledge [...] often because the research lacks the [...] evaluation to be properly judged and, hence, to provide meaningful contributions." As a consequence, much research about recommender systems can be considered not reproducible. Hence, operators of recommender systems find little guidance in the current research on the question of which recommendation approaches to use in a recommender system. Said & Bellogín conducted a study of recent papers published in the field, as well as benchmarking some of the most popular frameworks for recommendation, and found large inconsistencies in results, even when the same algorithms and data sets were used. Some researchers demonstrated that minor variations in the recommendation algorithms or scenarios led to strong changes in the effectiveness of a recommender system. They conclude that seven actions are necessary to improve the current situation: "(1) survey other research fields and learn from them, (2) find a common understanding of reproducibility, (3) identify and understand the determinants that affect reproducibility, (4) conduct more comprehensive experiments (5) modernize publication practices, (6) foster the development and use of recommendation frameworks, and (7) establish best-practice guidelines for recommender-systems research."
While reproducibility was long neglected in the recommender-system community, the topic has received much more attention recently, with several workshops and conferences focusing on reproducibility in recommender system research.
Anticipatory design
Anticipatory design is the combination of the Internet of Things, user experience design, and machine learning. Anticipatory design differs from conventional design in that its goal is to simplify the process and minimize difficulty by making decisions on behalf of the users. Examples of services that use various levels of anticipatory design include Amazon's and Netflix's product recommendations, which suggest items based on previous behavior; the mobile application Peapod, which used a recommendation engine to let users fill their shopping basket based on previous orders; and the Nest thermostat, which predicts the preferred room temperature based on user input and the time of day.
See also
- Rating site
- Cold start
- Collaborative filtering
- Collective intelligence
- Content discovery platform
- Enterprise bookmarking
- Filter bubble
- Personalized marketing
- Preference elicitation
- Product finder
- Configurator
- Pattern recognition
References
Further reading
- Books
- Bharat Bhasker; K. Srikumar (2010). Recommender Systems in E-Commerce. CUP. ISBN 978-0-07-068067-8. Archived from the original on 2010-09-01.
- Francesco Ricci; Lior Rokach; Bracha Shapira; Paul B. Kantor, eds. (2011). Recommender Systems Handbook. ISBN 978-0-387-85819-7.
- Bracha Shapira; Lior Rokach (June 2012). Building Effective Recommender Systems. ISBN 978-1-4419-0047-0.
- Dietmar Jannach; Markus Zanker; Alexander Felfernig; Gerhard Friedrich (2010). Recommender Systems: An Introduction. CUP. ISBN 978-0-521-49336-9.
- Scientific articles
- Prem Melville, Raymond J. Mooney, and Ramadass Nagarajan. (2002) Content-Boosted Collaborative Filtering for Improved Recommendations. Proceedings of the Eighteenth National Conference on Artificial Intelligence (AAAI-2002), pp. 187-192, Edmonton, Canada, July 2002.
- Frank Meyer. (2012), Recommender systems in industrial contexts. ArXiv e-prints.
- Bouneffouf, Djallel (2012), "Following the User's Interests in Mobile Context-Aware Recommender Systems: The Hybrid-ε-greedy Algorithm", Proceedings of the 2012 26th International Conference on Advanced Information Networking and Applications Workshops (PDF), Lecture Notes in Computer Science, IEEE Computer Society, pp. 657-662, ISBN 978-0-7695-4652-0, archived from the original (PDF) on 2014-05-14.
- Bouneffouf, Djallel (2013), DRARS, A Dynamic Risk-Aware Recommender System (Ph.D.), Institut National des Télécommunications .
External links
- Recommender Systems Wiki
- Robert M. Bell; Jim Bennett; Yehuda Koren & Chris Volinsky (May 2009). "The Million Dollar Programming Prize". IEEE Spectrum.
- Hangartner, Rick, "What is the Recommender Industry?", MSearchGroove, December 17, 2007.
- ACM Conference on Recommender Systems
- Recsys group at Politecnico di Milano
- Data Science: Data to Insights from MIT (recommendation systems)
Source of the article: Wikipedia