1 User-Based Collaborative Filtering:
User-based collaborative filtering is the oldest algorithm in recommender systems; its appearance arguably marks the birth of the field. The algorithm was proposed in 1992 and applied to an email filtering system, and in 1994 GroupLens used it for news filtering.
In an online personalized recommender system, when a user A needs personalized recommendations, we can first find other users whose interests are similar to A's, and then recommend to A the items those users like that A has not yet encountered. This approach is called user-based collaborative filtering.
Given users u and v, let N(u) denote the set of items on which user u has had positive feedback. The similarity between two users can be measured with the cosine similarity:

w_uv = |N(u) ∩ N(v)| / sqrt(|N(u)| · |N(v)|)

Many pairs of users have never acted on any common item, i.e. |N(u) ∩ N(v)| = 0, so it pays to compute only the user pairs (u, v) with |N(u) ∩ N(v)| ≠ 0. To do this, first build an item-to-user inverted table that stores, for each item, the list of users who have acted on it. Let C[u][v] = |N(u) ∩ N(v)| be a sparse matrix; if users u and v both appear in the user lists of K items in the inverted table, then C[u][v] = K (the number of items on which both u and v gave positive feedback). We can therefore scan the user list of each item in the inverted table, add 1 to C[u][v] for every pair of users in that list, and finally obtain all the nonzero C[u][v] values, which are exactly the numerators of the cosine similarities.
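The inverted-table counting described above can be sketched on a hypothetical toy data set (the users A–D and items a–e below are made up for illustration):

```python
from collections import defaultdict
from itertools import combinations
import math

# Hypothetical positive-feedback data: user -> N(u), the set of liked items.
N = {
    'A': {'a', 'b', 'd'},
    'B': {'a', 'c'},
    'C': {'b', 'e'},
    'D': {'c', 'd', 'e'},
}

# Step 1: build the item-to-user inverted table.
item_users = defaultdict(set)
for u, items in N.items():
    for i in items:
        item_users[i].add(u)

# Step 2: scan each item's user list; every user pair sharing the item
# gets C[u][v] incremented, so only co-occurring pairs are ever touched.
C = defaultdict(lambda: defaultdict(int))
for i, users in item_users.items():
    for u, v in combinations(users, 2):
        C[u][v] += 1
        C[v][u] += 1

# Step 3: normalize into the cosine similarity w_uv.
W = {u: {v: cuv / math.sqrt(len(N[u]) * len(N[v]))
         for v, cuv in row.items()}
     for u, row in C.items()}
```

Note that the pair (B, C) never appears in W at all: they share no item, so the scan never visits them, which is the whole point of the inverted table.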
Once the interest similarities between users have been obtained, user-based collaborative filtering (UserCF) recommends to a user the items liked by the K users most similar to him. The following formula measures user u's interest in item i under UserCF:

p(u, i) = Σ_{v ∈ S(u, K) ∩ N(i)} w_uv · r_vi

where S(u, K) is the set of the K users whose interests are closest to user u's, N(i) is the set of users who have acted on item i, w_uv is the interest similarity between users u and v, and r_vi is user v's interest in item i. Since we use single-behavior implicit feedback data, r_vi = 1.
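A minimal sketch of this scoring step, using hypothetical precomputed similarities w_Av for a user 'A' (the numbers and item sets are invented for illustration):

```python
# Hypothetical similarities of user 'A' to other users, and each
# user's liked items (implicit feedback, so r_vi = 1).
sims = {'B': 0.41, 'C': 0.41, 'D': 0.33}
liked = {'A': {'a', 'b', 'd'}, 'B': {'a', 'c'},
         'C': {'b', 'e'}, 'D': {'c', 'd', 'e'}}

K = 2
rank = {}
# Visit the K most similar users and accumulate scores on their items.
for v, wuv in sorted(sims.items(), key=lambda x: x[1], reverse=True)[:K]:
    for i in liked[v]:
        if i in liked['A']:               # skip items A has already acted on
            continue
        rank[i] = rank.get(i, 0) + wuv    # r_vi = 1, so just add w_uv
```

Here the two nearest neighbors B and C contribute their unseen items c and e, each scored by the corresponding similarity.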
Following this idea, a Python implementation of the UserCF algorithm is given below:
```python
import random
import math


class UserBasedCF:
    def __init__(self, datafile=None):
        self.datafile = datafile
        self.readData()
        self.splitData(3, 47)

    def readData(self, datafile=None):
        """Read the data set from the data file."""
        self.datafile = datafile or self.datafile
        self.data = []
        for line in open(self.datafile):
            userid, itemid, record, _ = line.split()
            self.data.append((userid, itemid, int(record)))

    def splitData(self, k, seed, data=None, M=8):
        """Split the data into a training set and a test set.

        Each record falls into the test set with probability 1/M,
        so test : train is roughly 1 : M-1.
        """
        self.testdata = {}
        self.traindata = {}
        data = data or self.data
        random.seed(seed)
        for user, item, record in data:
            if random.randint(0, M - 1) == k:
                self.testdata.setdefault(user, {})
                self.testdata[user][item] = record
            else:
                self.traindata.setdefault(user, {})
                self.traindata[user][item] = record

    def userSimilarityBest(self, train=None):
        """Compute user similarities via an item-to-user inverted table
        (the improved method on page 46), visiting only user pairs that
        share at least one item. This experiment uses this method.
        """
        train = train or self.traindata
        self.userSimBest = dict()
        # Build the item-to-user inverted table.
        item_users = dict()
        for u, items in train.items():
            for i in items.keys():
                item_users.setdefault(i, set())
                item_users[i].add(u)
        # Count |N(u)| and the co-occurrences C[u][v] = |N(u) ∩ N(v)|.
        user_item_count = dict()
        count = dict()
        for item, users in item_users.items():
            for u in users:
                user_item_count.setdefault(u, 0)
                user_item_count[u] += 1
                for v in users:
                    if u == v:
                        continue
                    count.setdefault(u, {})
                    count[u].setdefault(v, 0)
                    count[u][v] += 1
        # Cosine similarity: w_uv = C[u][v] / sqrt(|N(u)| * |N(v)|).
        for u, related_users in count.items():
            self.userSimBest.setdefault(u, dict())
            for v, cuv in related_users.items():
                self.userSimBest[u][v] = cuv / math.sqrt(user_item_count[u] * user_item_count[v] * 1.0)

    def recommend(self, user, train=None, k=8, nitem=40):
        """Recommend nitem items from the k most similar users' likes."""
        train = train or self.traindata
        rank = dict()
        interacted_items = train.get(user, {})
        for v, wuv in sorted(self.userSimBest[user].items(), key=lambda x: x[1], reverse=True)[0:k]:
            for i, rvi in train[v].items():
                if i in interacted_items:
                    continue  # skip items the user has already acted on
                rank.setdefault(i, 0)
                rank[i] += wuv  # implicit feedback: r_vi = 1
        return dict(sorted(rank.items(), key=lambda x: x[1], reverse=True)[0:nitem])

    def recallAndPrecision(self, train=None, test=None, k=8, nitem=10):
        """Compute recall and precision (the metrics on page 43)."""
        train = train or self.traindata
        test = test or self.testdata
        hit = 0
        recall = 0
        precision = 0
        for user in train.keys():
            tu = test.get(user, {})
            rank = self.recommend(user, train=train, k=k, nitem=nitem)
            for item, _ in rank.items():
                if item in tu:
                    hit += 1
            recall += len(tu)
            precision += nitem
        return (hit / (recall * 1.0), hit / (precision * 1.0))

    def coverage(self, train=None, test=None, k=8, nitem=10):
        """Fraction of all items that ever appear in a recommendation list."""
        train = train or self.traindata
        test = test or self.testdata
        recommend_items = set()
        all_items = set()
        for user in train.keys():
            for item in train[user].keys():
                all_items.add(item)
            rank = self.recommend(user, train, k=k, nitem=nitem)
            for item, _ in rank.items():
                recommend_items.add(item)
        return len(recommend_items) / (len(all_items) * 1.0)

    def popularity(self, train=None, test=None, k=8, nitem=10):
        """Average log-popularity of recommended items (the algorithm on page 44)."""
        train = train or self.traindata
        test = test or self.testdata
        item_popularity = dict()
        for user, items in train.items():
            for item in items.keys():
                item_popularity.setdefault(item, 0)
                item_popularity[item] += 1
        ret = 0
        n = 0
        for user in train.keys():
            rank = self.recommend(user, train, k=k, nitem=nitem)
            for item, _ in rank.items():
                ret += math.log(1 + item_popularity[item])
                n += 1
        return ret / (n * 1.0)


def testUserBasedCF():
    cf = UserBasedCF('u.data')
    cf.userSimilarityBest()
    print("%3s%20s%20s%20s%20s" % ('K', "recall", 'precision', 'coverage', 'popularity'))
    for k in [5, 10, 20, 40, 80, 160]:
        recall, precision = cf.recallAndPrecision(k=k)
        coverage = cf.coverage(k=k)
        popularity = cf.popularity(k=k)
        print("%3d%19.3f%%%19.3f%%%19.3f%%%20.3f" % (k, recall * 100, precision * 100, coverage * 100, popularity))


if __name__ == "__main__":
    testUserBasedCF()
```
2 Item-Based Collaborative Filtering:
Item-based collaborative filtering (ItemCF) is currently the most widely used algorithm in industry; Amazon, Netflix, Hulu, and YouTube all use it as a foundational recommendation algorithm.
User-based collaborative filtering has some drawbacks: as a site's user base grows, computing the user interest similarity matrix becomes increasingly difficult, because its time and space complexity grow roughly quadratically with the number of users. Moreover, UserCF has a hard time explaining its recommendations. Amazon therefore proposed item-based collaborative filtering.
ItemCF recommends to users items similar to the items they liked before. However, ItemCF does not compute item similarity from content attributes; it derives similarity mainly by analyzing users' behavior records: item A and item B are highly similar because most users who like A also like B (this is also the main difference between item-based collaborative filtering and content-based recommendation). In addition, ItemCF can use a user's history to attach explanations to its recommendations, where the explaining items are ones the user previously liked or purchased.
Starting from "Customers Who Bought This Item Also Bought" (the heading Amazon shows over related-item recommendations), item similarity can be defined by the following formula:

w_ij = |N(i) ∩ N(j)| / sqrt(|N(i)| · |N(j)|)

where N(i) is the set of users who like item i, so the numerator is the number of users who like both item i and item j. The denominator penalizes the weight of item j, reducing the likelihood that a popular item comes out similar to many items (otherwise its w_ij values would be large, close to 1). The formula says that two items acquire similarity because they are liked by many of the same users; in other words, each user "contributes" similarity between items through his historical interest list.
As in UserCF, when computing item similarities ItemCF can first build a user-to-item inverted table, i.e. a list of liked items for each user. Then, for each user, add 1 to the co-occurrence matrix entry C[i][j] for every pair of items in his list; in the end we obtain all the nonzero C[i][j] values, which are the numerators of the formula above.
After obtaining the item similarities, ItemCF computes user u's interest in item i with the following formula:

p(u, i) = Σ_{j ∈ N(u) ∩ S(i, K)} w_ij · r_uj

where N(u) is the set of items the user likes, S(i, K) is the set of the K items most similar to item i, w_ij is the similarity between items j and i, and r_uj is user u's interest in item j. For an implicit feedback data set, r_uj = 1 whenever user u has acted on item j.
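A minimal sketch of this scoring step, with hypothetical item-item similarities and a made-up liked-item set N(u):

```python
# Hypothetical item-item similarities w_ij and the items user u likes.
item_sim = {
    'a': {'b': 0.5, 'c': 0.2},
    'd': {'c': 0.4, 'e': 0.1},
}
Nu = {'a', 'd'}  # N(u); implicit feedback, so r_uj = 1

K = 2
rank = {}
for j in Nu:  # each item the user has liked
    # Look at the K items most similar to j and accumulate their scores.
    for i, wji in sorted(item_sim[j].items(), key=lambda x: x[1], reverse=True)[:K]:
        if i in Nu:
            continue  # never recommend items the user already has
        rank[i] = rank.get(i, 0) + wji
```

Item c is reachable from both liked items, so it accumulates similarity from each (0.2 + 0.4), which is how ItemCF favors items related to many of a user's past choices.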
Following this idea, a Python implementation of the ItemCF algorithm is given below:
```python
import math
import random


class ItemBasedCF:
    def __init__(self, datafile=None):
        self.datafile = datafile
        self.readData()
        self.splitData(3, 47)

    def readData(self, datafile=None):
        """Read at most the first 100,000 records of the data file."""
        self.datafile = datafile or self.datafile
        self.data = []
        with open(self.datafile, 'r') as f:
            for line in f.readlines()[0:100 * 1000]:
                userid, itemid, record, _ = line.split()
                self.data.append((userid, itemid, int(record)))

    def splitData(self, k, seed, data=None, M=8):
        """Split the data into a training set and a test set (test : train = 1 : M-1)."""
        self.testdata = {}
        self.traindata = {}
        data = data or self.data
        random.seed(seed)
        for user, item, record in data:
            if random.randint(0, M - 1) == k:
                self.testdata.setdefault(user, {})
                self.testdata[user][item] = record
            else:
                self.traindata.setdefault(user, {})
                self.traindata[user][item] = record

    def ItemSimilarity(self, train=None):
        train = train or self.traindata
        self.itemSim = dict()
        item_user_count = dict()  # item_user_count{item: number of users who like it}
        count = dict()            # count{i: {j: number of users who like both i and j}}
        for user, items in train.items():
            for i in items.keys():
                item_user_count.setdefault(i, 0)
                item_user_count[i] += 1
                for j in items.keys():
                    if i == j:
                        continue
                    count.setdefault(i, {})
                    count[i].setdefault(j, 0)
                    count[i][j] += 1
        # w_ij = C[i][j] / sqrt(|N(i)| * |N(j)|)
        for i, related_items in count.items():
            self.itemSim.setdefault(i, dict())
            for j, cij in related_items.items():
                self.itemSim[i][j] = cij / math.sqrt(item_user_count[i] * item_user_count[j] * 1.0)

    def recommend(self, user, train=None, k=8, nitem=40):
        """Recommend nitem items via the k items most similar to each liked item."""
        train = train or self.traindata
        rank = dict()
        ru = train.get(user, {})  # items the user has already liked
        for i, pi in ru.items():
            for j, wj in sorted(self.itemSim[i].items(), key=lambda x: x[1], reverse=True)[0:k]:
                if j in ru:
                    continue  # skip items the user already knows
                rank.setdefault(j, 0)
                rank[j] += wj  # implicit feedback: r_uj = 1
        return dict(sorted(rank.items(), key=lambda x: x[1], reverse=True)[0:nitem])

    def recallAndPrecision(self, train=None, test=None, k=8, nitem=10):
        train = train or self.traindata
        test = test or self.testdata
        hit = 0
        recall = 0
        precision = 0
        for user in train.keys():
            tu = test.get(user, {})
            rank = self.recommend(user, train=train, k=k, nitem=nitem)
            for item, _ in rank.items():
                if item in tu:
                    hit += 1
            recall += len(tu)
            precision += nitem
        return (hit / (recall * 1.0), hit / (precision * 1.0))

    def coverage(self, train=None, test=None, k=8, nitem=10):
        train = train or self.traindata
        test = test or self.testdata
        recommend_items = set()
        all_items = set()
        for user in train.keys():
            for item in train[user].keys():
                all_items.add(item)
            rank = self.recommend(user, train, k=k, nitem=nitem)
            for item, _ in rank.items():
                recommend_items.add(item)
        return len(recommend_items) / (len(all_items) * 1.0)

    def popularity(self, train=None, test=None, k=8, nitem=10):
        """Average log-popularity of recommended items (the algorithm on page 44)."""
        train = train or self.traindata
        test = test or self.testdata
        item_popularity = dict()
        for user, items in train.items():
            for item in items.keys():
                item_popularity.setdefault(item, 0)
                item_popularity[item] += 1
        ret = 0
        n = 0
        for user in train.keys():
            rank = self.recommend(user, train, k=k, nitem=nitem)
            for item, _ in rank.items():
                ret += math.log(1 + item_popularity[item])
                n += 1
        return ret / (n * 1.0)


def testRecommend():
    ubcf = ItemBasedCF('u.data')
    ubcf.splitData(4, 100)
    ubcf.ItemSimilarity()
    user = "345"
    rank = ubcf.recommend(user, k=3)
    for i, rvi in rank.items():
        items = ubcf.testdata.get(user, {})
        record = items.get(i, 0)
        print("%5s: %.4f--%.4f" % (i, rvi, record))


def testItemBasedCF():
    cf = ItemBasedCF('u.data')
    cf.ItemSimilarity()
    print("%3s%20s%20s%20s%20s" % ('K', "recall", 'precision', 'coverage', 'popularity'))
    for k in [5, 10, 20, 40, 80, 160]:
        recall, precision = cf.recallAndPrecision(k=k)
        coverage = cf.coverage(k=k)
        popularity = cf.popularity(k=k)
        print("%3d%19.3f%%%19.3f%%%19.3f%%%20.3f" % (k, recall * 100, precision * 100, coverage * 100, popularity))


if __name__ == "__main__":
    testItemBasedCF()
```
3 Comparing UserCF and ItemCF:
UserCF recommends the items liked by users who share a user's interests, while ItemCF recommends items similar to the ones the user liked before. From this principle, UserCF's recommendations reflect the hot items within a small group of like-minded users, whereas ItemCF's recommendations focus on maintaining the user's own historical interests. UserCF is more social, reflecting how popular items are within the user's small interest group; ItemCF is more personalized, reflecting the continuity of the user's own tastes. Technically, UserCF must maintain a user similarity matrix and ItemCF an item similarity matrix. In terms of storage, when there are many users the user similarity matrix takes a great deal of space; likewise, when there are many items the item similarity matrix is costly to maintain.
We test UserCF and ItemCF on the data set from http://www.grouplens.org/node/73, evaluating the results with precision/recall, coverage, and popularity.
Recommend N items R(u) to user u, and let T(u) be the set of items user u likes in the test set. Then:

Recall = Σ_u |R(u) ∩ T(u)| / Σ_u |T(u)|

Precision = Σ_u |R(u) ∩ T(u)| / Σ_u |R(u)|
Recall describes what fraction of the user-item rating records (in the test set) appear in the final recommendation lists, while precision describes what fraction of the final recommendation lists consists of actual user-item rating records.
Coverage measures what fraction of all items appears in the final recommendation lists; if every item is recommended to at least one user, coverage is 100%.
Finally, we evaluate the novelty of the recommendations, measured here by the average popularity of the items in the recommendation lists: if the recommended items are all very popular, novelty is low; otherwise the results are relatively novel.
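These four metrics can be sketched on hypothetical recommendation results (all lists and popularity counts below are invented for illustration):

```python
import math

# Hypothetical recommendation lists R(u), test-set likes T(u),
# the full item catalog, and per-item training popularity.
R = {'u1': ['a', 'b', 'c'], 'u2': ['b', 'd', 'e']}
T = {'u1': {'a', 'x'}, 'u2': {'d'}}
all_items = {'a', 'b', 'c', 'd', 'e', 'x', 'y'}
item_pop = {'a': 9, 'b': 4, 'c': 1, 'd': 4, 'e': 1}

hit = sum(len(set(R[u]) & T[u]) for u in R)         # recommended AND in test set
recall = hit / float(sum(len(T[u]) for u in R))     # hits / total test likes
precision = hit / float(sum(len(R[u]) for u in R))  # hits / total recommended
recommended = {i for items in R.values() for i in items}
coverage = len(recommended) / float(len(all_items))
# Novelty proxy: mean log-popularity of recommended items (lower = more novel).
n_rec = sum(len(items) for items in R.values())
popularity = sum(math.log(1 + item_pop[i])
                 for items in R.values() for i in items) / float(n_rec)
```

This mirrors the bookkeeping done inside recallAndPrecision, coverage, and popularity in the classes above, just on data small enough to check by hand.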
Figure 1: UserCF experimental results
Figure 2: ItemCF experimental results
From the UserCF and ItemCF results above, we can see that the precision metrics (precision and recall) are not linear in the parameter K. Nor is precision particularly sensitive to K: as long as K lies within a reasonable range, decent precision can be obtained.
As for coverage, the larger K is, the lower the coverage of the recommendations. The drop in coverage accompanies a rise in popularity: as popularity increases, the algorithm tends more and more toward recommending hot items. This is because K determines how many similar users (or items) the algorithm consults when making a recommendation; the larger K is, the more users or items are consulted, and the closer the results drift toward the globally popular items.