From aac34df11791d25417f7d756dc277b6f95996b47 Mon Sep 17 00:00:00 2001 From: Christoph Hellwig Date: Mon, 9 Sep 2013 07:16:41 -0700 Subject: fs: remove vfs_follow_link For a long time no filesystem has been using vfs_follow_link, and as seen by recent filesystem submissions any new use is accidental as well. Remove vfs_follow_link, document the replacement in Documentation/filesystems/porting and also rename __vfs_follow_link to match its only caller better. Signed-off-by: Christoph Hellwig Signed-off-by: Al Viro --- include/linux/fs.h | 1 - 1 file changed, 1 deletion(-) (limited to 'include') diff --git a/include/linux/fs.h b/include/linux/fs.h index 529d8711baba..49e71b0f0e9f 100644 --- a/include/linux/fs.h +++ b/include/linux/fs.h @@ -2494,7 +2494,6 @@ extern const struct file_operations generic_ro_fops; #define special_file(m) (S_ISCHR(m)||S_ISBLK(m)||S_ISFIFO(m)||S_ISSOCK(m)) extern int vfs_readlink(struct dentry *, char __user *, int, const char *); -extern int vfs_follow_link(struct nameidata *, const char *); extern int page_readlink(struct dentry *, char __user *, int); extern void *page_follow_link_light(struct dentry *, struct nameidata *); extern void page_put_link(struct dentry *, struct nameidata *, void *); -- cgit v1.2.3 From 3942c07ccf98e66b8893f396dca98f5b076f905f Mon Sep 17 00:00:00 2001 From: Glauber Costa Date: Wed, 28 Aug 2013 10:17:53 +1000 Subject: fs: bump inode and dentry counters to long MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit This series reworks our current object cache shrinking infrastructure in two main ways: * Noticing that a lot of users copy and paste their own version of LRU lists for objects, we put some effort in providing a generic version. It is modeled after the filesystem users: dentries, inodes, and xfs (for various tasks), but we expect that other users could benefit in the near future with little or no modification. Let us know if you have any issues. * The underlying list_lru being proposed automatically and transparently keeps the elements in per-node lists, and is able to manipulate the node lists individually. Given this infrastructure, we are able to modify the up-to-now hammer called shrink_slab to proceed with node-reclaim instead of always searching memory from all over like it has been doing. Per-node lru lists are also expected to lead to less contention in the lru locks on multi-node scans, since we are now no longer fighting for a global lock. The locks usually disappear from the profilers with this change. Although we have no official benchmarks for this version - be our guest to independently evaluate this - earlier versions of this series were performance tested (details at http://permalink.gmane.org/gmane.linux.kernel.mm/100537) yielding no visible performance regressions while yielding a better qualitative behavior in NUMA machines. With this infrastructure in place, we can use the list_lru entry point to provide memcg isolation and per-memcg targeted reclaim. Historically, those two pieces of work have been posted together. This version presents only the infrastructure work, deferring the memcg work for a later time, so we can focus on getting this part tested. 
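[Aside for readers unfamiliar with the two pieces of infrastructure described above: the following is a minimal, hypothetical sketch (not part of the series) of a cache that keeps its objects on a list_lru and exposes them through the new count_objects/scan_objects shrinker callbacks. The my_* names are invented for illustration and the objects are assumed to be kmalloc'd; list_lru_*, enum lru_status, SHRINK_STOP and the count/scan callbacks are as introduced by the patches below, while register_shrinker() and DEFAULT_SEEKS are the existing kernel API.]

/*
 * Illustrative sketch only: a cache backed by a list_lru, shrunk via the
 * new count/scan shrinker API.  Names prefixed my_ are hypothetical.
 */
#include <linux/list.h>
#include <linux/list_lru.h>
#include <linux/shrinker.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct my_obj {
	struct list_head lru;		/* linkage used by the list_lru */
	/* ... object payload ... */
};

static struct list_lru my_lru;

/* count_objects: report how many freeable objects the cache holds */
static unsigned long my_cache_count(struct shrinker *s,
				    struct shrink_control *sc)
{
	return list_lru_count(&my_lru);
}

/* isolate callback: called for each item with the lru lock held */
static enum lru_status my_isolate(struct list_head *item,
				  spinlock_t *lock, void *arg)
{
	struct list_head *dispose = arg;
	struct my_obj *obj = container_of(item, struct my_obj, lru);

	/*
	 * Move the object to a private dispose list; the list_lru code
	 * adjusts its item count when we return LRU_REMOVED.
	 */
	list_move(&obj->lru, dispose);
	return LRU_REMOVED;
}

/* scan_objects: walk at most nr_to_scan entries and free what we can */
static unsigned long my_cache_scan(struct shrinker *s,
				   struct shrink_control *sc)
{
	LIST_HEAD(dispose);
	struct my_obj *obj, *next;
	unsigned long freed;

	if (!(sc->gfp_mask & __GFP_FS))
		return SHRINK_STOP;	/* cannot make progress safely here */

	freed = list_lru_walk(&my_lru, my_isolate, &dispose,
			      sc->nr_to_scan);

	list_for_each_entry_safe(obj, next, &dispose, lru)
		kfree(obj);

	return freed;
}

static struct shrinker my_shrinker = {
	.count_objects	= my_cache_count,
	.scan_objects	= my_cache_scan,
	.seeks		= DEFAULT_SEEKS,
};

/* at init time: list_lru_init(&my_lru); register_shrinker(&my_shrinker); */

Because the count and scan paths are separate, the VM can cheaply poll cache sizes and only enter the scan path when it has decided how much pressure to apply, which is what makes the node-aware reclaim described above possible.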
You can see more about the history of such work at http://lwn.net/Articles/552769/ Dave Chinner (18): dcache: convert dentry_stat.nr_unused to per-cpu counters dentry: move to per-sb LRU locks dcache: remove dentries from LRU before putting on dispose list mm: new shrinker API shrinker: convert superblock shrinkers to new API list: add a new LRU list type inode: convert inode lru list to generic lru list code. dcache: convert to use new lru list infrastructure list_lru: per-node list infrastructure shrinker: add node awareness fs: convert inode and dentry shrinking to be node aware xfs: convert buftarg LRU to generic code xfs: rework buffer dispose list tracking xfs: convert dquot cache lru to list_lru fs: convert fs shrinkers to new scan/count API drivers: convert shrinkers to new count/scan API shrinker: convert remaining shrinkers to count/scan API shrinker: Kill old ->shrink API. Glauber Costa (7): fs: bump inode and dentry counters to long super: fix calculation of shrinkable objects for small numbers list_lru: per-node API vmscan: per-node deferred work i915: bail out earlier when shrinker cannot acquire mutex hugepage: convert huge zero page shrinker to new shrinker API list_lru: dynamically adjust node arrays This patch: There are situations in very large machines in which we can have a large quantity of dirty inodes, unused dentries, etc. This is particularly true when umounting a filesystem, where eventually since every live object will eventually be discarded. Dave Chinner reported a problem with this while experimenting with the shrinker revamp patchset. So we believe it is time for a change. This patch just moves int to longs. Machines where it matters should have a big long anyway. Signed-off-by: Glauber Costa Cc: Dave Chinner Cc: "Theodore Ts'o" Cc: Adrian Hunter Cc: Al Viro Cc: Artem Bityutskiy Cc: Arve Hjønnevåg Cc: Carlos Maiolino Cc: Christoph Hellwig Cc: Chuck Lever Cc: Daniel Vetter Cc: Dave Chinner Cc: David Rientjes Cc: Gleb Natapov Cc: Greg Thelen Cc: J. Bruce Fields Cc: Jan Kara Cc: Jerome Glisse Cc: John Stultz Cc: KAMEZAWA Hiroyuki Cc: Kent Overstreet Cc: Kirill A. Shutemov Cc: Marcelo Tosatti Cc: Mel Gorman Cc: Steven Whitehouse Cc: Thomas Hellstrom Cc: Trond Myklebust Signed-off-by: Andrew Morton Signed-off-by: Al Viro --- fs/dcache.c | 8 ++++---- fs/inode.c | 18 +++++++++--------- fs/internal.h | 2 +- include/linux/dcache.h | 10 +++++----- include/linux/fs.h | 4 ++-- include/uapi/linux/fs.h | 6 +++--- kernel/sysctl.c | 6 +++--- 7 files changed, 27 insertions(+), 27 deletions(-) (limited to 'include') diff --git a/fs/dcache.c b/fs/dcache.c index 4d9df3c940e6..6ef1c2e1bbc4 100644 --- a/fs/dcache.c +++ b/fs/dcache.c @@ -146,13 +146,13 @@ struct dentry_stat_t dentry_stat = { .age_limit = 45, }; -static DEFINE_PER_CPU(unsigned int, nr_dentry); +static DEFINE_PER_CPU(long, nr_dentry); #if defined(CONFIG_SYSCTL) && defined(CONFIG_PROC_FS) -static int get_nr_dentry(void) +static long get_nr_dentry(void) { int i; - int sum = 0; + long sum = 0; for_each_possible_cpu(i) sum += per_cpu(nr_dentry, i); return sum < 0 ? 
0 : sum; @@ -162,7 +162,7 @@ int proc_nr_dentry(ctl_table *table, int write, void __user *buffer, size_t *lenp, loff_t *ppos) { dentry_stat.nr_dentry = get_nr_dentry(); - return proc_dointvec(table, write, buffer, lenp, ppos); + return proc_doulongvec_minmax(table, write, buffer, lenp, ppos); } #endif diff --git a/fs/inode.c b/fs/inode.c index 93a0625b46e4..2a3c37ea823d 100644 --- a/fs/inode.c +++ b/fs/inode.c @@ -70,33 +70,33 @@ EXPORT_SYMBOL(empty_aops); */ struct inodes_stat_t inodes_stat; -static DEFINE_PER_CPU(unsigned int, nr_inodes); -static DEFINE_PER_CPU(unsigned int, nr_unused); +static DEFINE_PER_CPU(unsigned long, nr_inodes); +static DEFINE_PER_CPU(unsigned long, nr_unused); static struct kmem_cache *inode_cachep __read_mostly; -static int get_nr_inodes(void) +static long get_nr_inodes(void) { int i; - int sum = 0; + long sum = 0; for_each_possible_cpu(i) sum += per_cpu(nr_inodes, i); return sum < 0 ? 0 : sum; } -static inline int get_nr_inodes_unused(void) +static inline long get_nr_inodes_unused(void) { int i; - int sum = 0; + long sum = 0; for_each_possible_cpu(i) sum += per_cpu(nr_unused, i); return sum < 0 ? 0 : sum; } -int get_nr_dirty_inodes(void) +long get_nr_dirty_inodes(void) { /* not actually dirty inodes, but a wild approximation */ - int nr_dirty = get_nr_inodes() - get_nr_inodes_unused(); + long nr_dirty = get_nr_inodes() - get_nr_inodes_unused(); return nr_dirty > 0 ? nr_dirty : 0; } @@ -109,7 +109,7 @@ int proc_nr_inodes(ctl_table *table, int write, { inodes_stat.nr_inodes = get_nr_inodes(); inodes_stat.nr_unused = get_nr_inodes_unused(); - return proc_dointvec(table, write, buffer, lenp, ppos); + return proc_doulongvec_minmax(table, write, buffer, lenp, ppos); } #endif diff --git a/fs/internal.h b/fs/internal.h index 2be46ea5dd0b..b6495659d6e8 100644 --- a/fs/internal.h +++ b/fs/internal.h @@ -121,7 +121,7 @@ extern void inode_add_lru(struct inode *inode); */ extern void inode_wb_list_del(struct inode *inode); -extern int get_nr_dirty_inodes(void); +extern long get_nr_dirty_inodes(void); extern void evict_inodes(struct super_block *); extern int invalidate_inodes(struct super_block *, bool); diff --git a/include/linux/dcache.h b/include/linux/dcache.h index feaa8d88eef7..844a1ef387e4 100644 --- a/include/linux/dcache.h +++ b/include/linux/dcache.h @@ -55,11 +55,11 @@ struct qstr { #define hashlen_len(hashlen) ((u32)((hashlen) >> 32)) struct dentry_stat_t { - int nr_dentry; - int nr_unused; - int age_limit; /* age in seconds */ - int want_pages; /* pages requested by system */ - int dummy[2]; + long nr_dentry; + long nr_unused; + long age_limit; /* age in seconds */ + long want_pages; /* pages requested by system */ + long dummy[2]; }; extern struct dentry_stat_t dentry_stat; diff --git a/include/linux/fs.h b/include/linux/fs.h index 49e71b0f0e9f..3b3edac75df2 100644 --- a/include/linux/fs.h +++ b/include/linux/fs.h @@ -1271,12 +1271,12 @@ struct super_block { struct list_head s_mounts; /* list of mounts; _not_ for fs use */ /* s_dentry_lru, s_nr_dentry_unused protected by dcache.c lru locks */ struct list_head s_dentry_lru; /* unused dentry lru */ - int s_nr_dentry_unused; /* # of dentry on lru */ + long s_nr_dentry_unused; /* # of dentry on lru */ /* s_inode_lru_lock protects s_inode_lru and s_nr_inodes_unused */ spinlock_t s_inode_lru_lock ____cacheline_aligned_in_smp; struct list_head s_inode_lru; /* unused inode lru */ - int s_nr_inodes_unused; /* # of inodes on lru */ + long s_nr_inodes_unused; /* # of inodes on lru */ struct block_device *s_bdev; struct 
backing_dev_info *s_bdi; diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h index a4ed56cf0eac..6c28b61bb690 100644 --- a/include/uapi/linux/fs.h +++ b/include/uapi/linux/fs.h @@ -49,9 +49,9 @@ struct files_stat_struct { }; struct inodes_stat_t { - int nr_inodes; - int nr_unused; - int dummy[5]; /* padding for sysctl ABI compatibility */ + long nr_inodes; + long nr_unused; + long dummy[5]; /* padding for sysctl ABI compatibility */ }; diff --git a/kernel/sysctl.c b/kernel/sysctl.c index 07f6fc468e17..7822cd88a95c 100644 --- a/kernel/sysctl.c +++ b/kernel/sysctl.c @@ -1471,14 +1471,14 @@ static struct ctl_table fs_table[] = { { .procname = "inode-nr", .data = &inodes_stat, - .maxlen = 2*sizeof(int), + .maxlen = 2*sizeof(long), .mode = 0444, .proc_handler = proc_nr_inodes, }, { .procname = "inode-state", .data = &inodes_stat, - .maxlen = 7*sizeof(int), + .maxlen = 7*sizeof(long), .mode = 0444, .proc_handler = proc_nr_inodes, }, @@ -1508,7 +1508,7 @@ static struct ctl_table fs_table[] = { { .procname = "dentry-state", .data = &dentry_stat, - .maxlen = 6*sizeof(int), + .maxlen = 6*sizeof(long), .mode = 0444, .proc_handler = proc_nr_dentry, }, -- cgit v1.2.3 From 55f841ce9395a72c6285fbcc4c403c0c786e1c74 Mon Sep 17 00:00:00 2001 From: Glauber Costa Date: Wed, 28 Aug 2013 10:17:53 +1000 Subject: super: fix calculation of shrinkable objects for small numbers MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit The sysctl knob sysctl_vfs_cache_pressure is used to determine which percentage of the shrinkable objects in our cache we should actively try to shrink. It works great in situations in which we have many objects (at least more than 100), because the aproximation errors will be negligible. But if this is not the case, specially when total_objects < 100, we may end up concluding that we have no objects at all (total / 100 = 0, if total < 100). This is certainly not the biggest killer in the world, but may matter in very low kernel memory situations. Signed-off-by: Glauber Costa Reviewed-by: Carlos Maiolino Acked-by: KAMEZAWA Hiroyuki Acked-by: Mel Gorman Cc: Dave Chinner Cc: Al Viro Cc: "Theodore Ts'o" Cc: Adrian Hunter Cc: Al Viro Cc: Artem Bityutskiy Cc: Arve Hjønnevåg Cc: Carlos Maiolino Cc: Christoph Hellwig Cc: Chuck Lever Cc: Daniel Vetter Cc: David Rientjes Cc: Gleb Natapov Cc: Greg Thelen Cc: J. Bruce Fields Cc: Jan Kara Cc: Jerome Glisse Cc: John Stultz Cc: KAMEZAWA Hiroyuki Cc: Kent Overstreet Cc: Kirill A. 
Shutemov Cc: Marcelo Tosatti Cc: Mel Gorman Cc: Steven Whitehouse Cc: Thomas Hellstrom Cc: Trond Myklebust Signed-off-by: Andrew Morton Signed-off-by: Al Viro --- fs/gfs2/glock.c | 2 +- fs/gfs2/quota.c | 2 +- fs/mbcache.c | 2 +- fs/nfs/dir.c | 2 +- fs/quota/dquot.c | 5 ++--- fs/super.c | 14 +++++++------- fs/xfs/xfs_qm.c | 2 +- include/linux/dcache.h | 4 ++++ 8 files changed, 18 insertions(+), 15 deletions(-) (limited to 'include') diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c index 722329cac98f..b782bb56085d 100644 --- a/fs/gfs2/glock.c +++ b/fs/gfs2/glock.c @@ -1462,7 +1462,7 @@ static int gfs2_shrink_glock_memory(struct shrinker *shrink, gfs2_scan_glock_lru(sc->nr_to_scan); } - return (atomic_read(&lru_count) / 100) * sysctl_vfs_cache_pressure; + return vfs_pressure_ratio(atomic_read(&lru_count)); } static struct shrinker glock_shrinker = { diff --git a/fs/gfs2/quota.c b/fs/gfs2/quota.c index 3768c2f40e43..d550a5d6a05f 100644 --- a/fs/gfs2/quota.c +++ b/fs/gfs2/quota.c @@ -114,7 +114,7 @@ int gfs2_shrink_qd_memory(struct shrinker *shrink, struct shrink_control *sc) spin_unlock(&qd_lru_lock); out: - return (atomic_read(&qd_lru_count) * sysctl_vfs_cache_pressure) / 100; + return vfs_pressure_ratio(atomic_read(&qd_lru_count)); } static u64 qd2index(struct gfs2_quota_data *qd) diff --git a/fs/mbcache.c b/fs/mbcache.c index 8c32ef3ba88e..5eb04767cb29 100644 --- a/fs/mbcache.c +++ b/fs/mbcache.c @@ -189,7 +189,7 @@ mb_cache_shrink_fn(struct shrinker *shrink, struct shrink_control *sc) list_for_each_entry_safe(entry, tmp, &free_list, e_lru_list) { __mb_cache_entry_forget(entry, gfp_mask); } - return (count / 100) * sysctl_vfs_cache_pressure; + return vfs_pressure_ratio(count); } diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c index e79bc6ce828e..813ef2571545 100644 --- a/fs/nfs/dir.c +++ b/fs/nfs/dir.c @@ -2046,7 +2046,7 @@ remove_lru_entry: } spin_unlock(&nfs_access_lru_lock); nfs_access_free_list(&head); - return (atomic_long_read(&nfs_access_nr_entries) / 100) * sysctl_vfs_cache_pressure; + return vfs_pressure_ratio(atomic_long_read(&nfs_access_nr_entries)); } static void __nfs_access_zap_cache(struct nfs_inode *nfsi, struct list_head *head) diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c index 9a702e193538..13eee847605c 100644 --- a/fs/quota/dquot.c +++ b/fs/quota/dquot.c @@ -719,9 +719,8 @@ static int shrink_dqcache_memory(struct shrinker *shrink, prune_dqcache(nr); spin_unlock(&dq_list_lock); } - return ((unsigned) - percpu_counter_read_positive(&dqstats.counter[DQST_FREE_DQUOTS]) - /100) * sysctl_vfs_cache_pressure; + return vfs_pressure_ratio( + percpu_counter_read_positive(&dqstats.counter[DQST_FREE_DQUOTS])); } static struct shrinker dqcache_shrinker = { diff --git a/fs/super.c b/fs/super.c index f6961ea84c56..63b6863bac7b 100644 --- a/fs/super.c +++ b/fs/super.c @@ -82,13 +82,13 @@ static int prune_super(struct shrinker *shrink, struct shrink_control *sc) int inodes; /* proportion the scan between the caches */ - dentries = (sc->nr_to_scan * sb->s_nr_dentry_unused) / - total_objects; - inodes = (sc->nr_to_scan * sb->s_nr_inodes_unused) / - total_objects; + dentries = mult_frac(sc->nr_to_scan, sb->s_nr_dentry_unused, + total_objects); + inodes = mult_frac(sc->nr_to_scan, sb->s_nr_inodes_unused, + total_objects); if (fs_objects) - fs_objects = (sc->nr_to_scan * fs_objects) / - total_objects; + fs_objects = mult_frac(sc->nr_to_scan, fs_objects, + total_objects); /* * prune the dcache first as the icache is pinned by it, then * prune the icache, followed by the filesystem specific caches @@ 
-104,7 +104,7 @@ static int prune_super(struct shrinker *shrink, struct shrink_control *sc) sb->s_nr_inodes_unused + fs_objects; } - total_objects = (total_objects / 100) * sysctl_vfs_cache_pressure; + total_objects = vfs_pressure_ratio(total_objects); drop_super(sb); return total_objects; } diff --git a/fs/xfs/xfs_qm.c b/fs/xfs/xfs_qm.c index 6218a0aeeeea..956da2e1c7af 100644 --- a/fs/xfs/xfs_qm.c +++ b/fs/xfs/xfs_qm.c @@ -1722,7 +1722,7 @@ xfs_qm_shake( } out: - return (qi->qi_lru_count / 100) * sysctl_vfs_cache_pressure; + return vfs_pressure_ratio(qi->qi_lru_count); } /* diff --git a/include/linux/dcache.h b/include/linux/dcache.h index 844a1ef387e4..59066e0b4ff1 100644 --- a/include/linux/dcache.h +++ b/include/linux/dcache.h @@ -395,4 +395,8 @@ static inline bool d_mountpoint(const struct dentry *dentry) extern int sysctl_vfs_cache_pressure; +static inline unsigned long vfs_pressure_ratio(unsigned long val) +{ + return mult_frac(val, sysctl_vfs_cache_pressure, 100); +} #endif /* __LINUX_DCACHE_H */ -- cgit v1.2.3 From 19156840e33a23eeb1a749c0f991dab6588b077d Mon Sep 17 00:00:00 2001 From: Dave Chinner Date: Wed, 28 Aug 2013 10:17:55 +1000 Subject: dentry: move to per-sb LRU locks MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit With the dentry LRUs being per-sb structures, there is no real need for a global dentry_lru_lock. The locking can be made more fine-grained by moving to a per-sb LRU lock, isolating the LRU operations of different filesytsems completely from each other. The need for this is independent of any performance consideration that may arise: in the interest of abstracting the lru operations away, it is mandatory that each lru works around its own lock instead of a global lock for all of them. [glommer@openvz.org: updated changelog ] Signed-off-by: Dave Chinner Signed-off-by: Glauber Costa Reviewed-by: Christoph Hellwig Acked-by: Mel Gorman Cc: "Theodore Ts'o" Cc: Adrian Hunter Cc: Al Viro Cc: Artem Bityutskiy Cc: Arve Hjønnevåg Cc: Carlos Maiolino Cc: Christoph Hellwig Cc: Chuck Lever Cc: Daniel Vetter Cc: David Rientjes Cc: Gleb Natapov Cc: Greg Thelen Cc: J. Bruce Fields Cc: Jan Kara Cc: Jerome Glisse Cc: John Stultz Cc: KAMEZAWA Hiroyuki Cc: Kent Overstreet Cc: Kirill A. 
Shutemov Cc: Marcelo Tosatti Cc: Mel Gorman Cc: Steven Whitehouse Cc: Thomas Hellstrom Cc: Trond Myklebust Signed-off-by: Andrew Morton Signed-off-by: Al Viro --- fs/dcache.c | 33 ++++++++++++++++----------------- fs/super.c | 1 + include/linux/fs.h | 4 +++- 3 files changed, 20 insertions(+), 18 deletions(-) (limited to 'include') diff --git a/fs/dcache.c b/fs/dcache.c index 03161240e744..e989ecb44a65 100644 --- a/fs/dcache.c +++ b/fs/dcache.c @@ -48,7 +48,7 @@ * - the dcache hash table * s_anon bl list spinlock protects: * - the s_anon list (see __d_drop) - * dcache_lru_lock protects: + * dentry->d_sb->s_dentry_lru_lock protects: * - the dcache lru lists and counters * d_lock protects: * - d_flags @@ -63,7 +63,7 @@ * Ordering: * dentry->d_inode->i_lock * dentry->d_lock - * dcache_lru_lock + * dentry->d_sb->s_dentry_lru_lock * dcache_hash_bucket lock * s_anon lock * @@ -81,7 +81,6 @@ int sysctl_vfs_cache_pressure __read_mostly = 100; EXPORT_SYMBOL_GPL(sysctl_vfs_cache_pressure); -static __cacheline_aligned_in_smp DEFINE_SPINLOCK(dcache_lru_lock); __cacheline_aligned_in_smp DEFINE_SEQLOCK(rename_lock); EXPORT_SYMBOL(rename_lock); @@ -362,12 +361,12 @@ static void dentry_unlink_inode(struct dentry * dentry) static void dentry_lru_add(struct dentry *dentry) { if (unlikely(!(dentry->d_flags & DCACHE_LRU_LIST))) { - spin_lock(&dcache_lru_lock); + spin_lock(&dentry->d_sb->s_dentry_lru_lock); dentry->d_flags |= DCACHE_LRU_LIST; list_add(&dentry->d_lru, &dentry->d_sb->s_dentry_lru); dentry->d_sb->s_nr_dentry_unused++; this_cpu_inc(nr_dentry_unused); - spin_unlock(&dcache_lru_lock); + spin_unlock(&dentry->d_sb->s_dentry_lru_lock); } } @@ -385,15 +384,15 @@ static void __dentry_lru_del(struct dentry *dentry) static void dentry_lru_del(struct dentry *dentry) { if (!list_empty(&dentry->d_lru)) { - spin_lock(&dcache_lru_lock); + spin_lock(&dentry->d_sb->s_dentry_lru_lock); __dentry_lru_del(dentry); - spin_unlock(&dcache_lru_lock); + spin_unlock(&dentry->d_sb->s_dentry_lru_lock); } } static void dentry_lru_move_list(struct dentry *dentry, struct list_head *list) { - spin_lock(&dcache_lru_lock); + spin_lock(&dentry->d_sb->s_dentry_lru_lock); if (list_empty(&dentry->d_lru)) { dentry->d_flags |= DCACHE_LRU_LIST; list_add_tail(&dentry->d_lru, list); @@ -402,7 +401,7 @@ static void dentry_lru_move_list(struct dentry *dentry, struct list_head *list) } else { list_move_tail(&dentry->d_lru, list); } - spin_unlock(&dcache_lru_lock); + spin_unlock(&dentry->d_sb->s_dentry_lru_lock); } /** @@ -895,14 +894,14 @@ void prune_dcache_sb(struct super_block *sb, int count) LIST_HEAD(tmp); relock: - spin_lock(&dcache_lru_lock); + spin_lock(&sb->s_dentry_lru_lock); while (!list_empty(&sb->s_dentry_lru)) { dentry = list_entry(sb->s_dentry_lru.prev, struct dentry, d_lru); BUG_ON(dentry->d_sb != sb); if (!spin_trylock(&dentry->d_lock)) { - spin_unlock(&dcache_lru_lock); + spin_unlock(&sb->s_dentry_lru_lock); cpu_relax(); goto relock; } @@ -918,11 +917,11 @@ relock: if (!--count) break; } - cond_resched_lock(&dcache_lru_lock); + cond_resched_lock(&sb->s_dentry_lru_lock); } if (!list_empty(&referenced)) list_splice(&referenced, &sb->s_dentry_lru); - spin_unlock(&dcache_lru_lock); + spin_unlock(&sb->s_dentry_lru_lock); shrink_dentry_list(&tmp); } @@ -938,14 +937,14 @@ void shrink_dcache_sb(struct super_block *sb) { LIST_HEAD(tmp); - spin_lock(&dcache_lru_lock); + spin_lock(&sb->s_dentry_lru_lock); while (!list_empty(&sb->s_dentry_lru)) { list_splice_init(&sb->s_dentry_lru, &tmp); - spin_unlock(&dcache_lru_lock); + 
spin_unlock(&sb->s_dentry_lru_lock); shrink_dentry_list(&tmp); - spin_lock(&dcache_lru_lock); + spin_lock(&sb->s_dentry_lru_lock); } - spin_unlock(&dcache_lru_lock); + spin_unlock(&sb->s_dentry_lru_lock); } EXPORT_SYMBOL(shrink_dcache_sb); diff --git a/fs/super.c b/fs/super.c index 63b6863bac7b..3c5318694ccd 100644 --- a/fs/super.c +++ b/fs/super.c @@ -176,6 +176,7 @@ static struct super_block *alloc_super(struct file_system_type *type, int flags) INIT_HLIST_BL_HEAD(&s->s_anon); INIT_LIST_HEAD(&s->s_inodes); INIT_LIST_HEAD(&s->s_dentry_lru); + spin_lock_init(&s->s_dentry_lru_lock); INIT_LIST_HEAD(&s->s_inode_lru); spin_lock_init(&s->s_inode_lru_lock); INIT_LIST_HEAD(&s->s_mounts); diff --git a/include/linux/fs.h b/include/linux/fs.h index 3b3edac75df2..14a90f6886fa 100644 --- a/include/linux/fs.h +++ b/include/linux/fs.h @@ -1269,7 +1269,9 @@ struct super_block { struct list_head s_files; #endif struct list_head s_mounts; /* list of mounts; _not_ for fs use */ - /* s_dentry_lru, s_nr_dentry_unused protected by dcache.c lru locks */ + + /* s_dentry_lru_lock protects s_dentry_lru and s_nr_dentry_unused */ + spinlock_t s_dentry_lru_lock ____cacheline_aligned_in_smp; struct list_head s_dentry_lru; /* unused dentry lru */ long s_nr_dentry_unused; /* # of dentry on lru */ -- cgit v1.2.3 From 24f7c6b981fb70084757382da464ea85d72af300 Mon Sep 17 00:00:00 2001 From: Dave Chinner Date: Wed, 28 Aug 2013 10:17:56 +1000 Subject: mm: new shrinker API MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit The current shrinker callout API uses an a single shrinker call for multiple functions. To determine the function, a special magical value is passed in a parameter to change the behaviour. This complicates the implementation and return value specification for the different behaviours. Separate the two different behaviours into separate operations, one to return a count of freeable objects in the cache, and another to scan a certain number of objects in the cache for freeing. In defining these new operations, ensure the return values and resultant behaviours are clearly defined and documented. Modify shrink_slab() to use the new API and implement the callouts for all the existing shrinkers. Signed-off-by: Dave Chinner Signed-off-by: Glauber Costa Acked-by: Mel Gorman Cc: "Theodore Ts'o" Cc: Adrian Hunter Cc: Al Viro Cc: Artem Bityutskiy Cc: Arve Hjønnevåg Cc: Carlos Maiolino Cc: Christoph Hellwig Cc: Chuck Lever Cc: Daniel Vetter Cc: David Rientjes Cc: Gleb Natapov Cc: Greg Thelen Cc: J. Bruce Fields Cc: Jan Kara Cc: Jerome Glisse Cc: John Stultz Cc: KAMEZAWA Hiroyuki Cc: Kent Overstreet Cc: Kirill A. Shutemov Cc: Marcelo Tosatti Cc: Mel Gorman Cc: Steven Whitehouse Cc: Thomas Hellstrom Cc: Trond Myklebust Signed-off-by: Andrew Morton Signed-off-by: Al Viro --- include/linux/shrinker.h | 38 ++++++++++++++++++++++-------- mm/vmscan.c | 60 ++++++++++++++++++++++++++++++++---------------- 2 files changed, 69 insertions(+), 29 deletions(-) (limited to 'include') diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h index ac6b8ee07825..884e76222e1b 100644 --- a/include/linux/shrinker.h +++ b/include/linux/shrinker.h @@ -4,6 +4,12 @@ /* * This struct is used to pass information from page reclaim to the shrinkers. * We consolidate the values for easier extention later. + * + * The 'gfpmask' refers to the allocation we are currently trying to + * fulfil. 
+ * + * Note that 'shrink' will be passed nr_to_scan == 0 when the VM is + * querying the cache size, so a fastpath for that case is appropriate. */ struct shrink_control { gfp_t gfp_mask; @@ -12,23 +18,37 @@ struct shrink_control { unsigned long nr_to_scan; }; +#define SHRINK_STOP (~0UL) /* * A callback you can register to apply pressure to ageable caches. * - * 'sc' is passed shrink_control which includes a count 'nr_to_scan' - * and a 'gfpmask'. It should look through the least-recently-used - * 'nr_to_scan' entries and attempt to free them up. It should return - * the number of objects which remain in the cache. If it returns -1, it means - * it cannot do any scanning at this time (eg. there is a risk of deadlock). + * @shrink() should look through the least-recently-used 'nr_to_scan' entries + * and attempt to free them up. It should return the number of objects which + * remain in the cache. If it returns -1, it means it cannot do any scanning at + * this time (eg. there is a risk of deadlock). * - * The 'gfpmask' refers to the allocation we are currently trying to - * fulfil. + * @count_objects should return the number of freeable items in the cache. If + * there are no objects to free or the number of freeable items cannot be + * determined, it should return 0. No deadlock checks should be done during the + * count callback - the shrinker relies on aggregating scan counts that couldn't + * be executed due to potential deadlocks to be run at a later call when the + * deadlock condition is no longer pending. * - * Note that 'shrink' will be passed nr_to_scan == 0 when the VM is - * querying the cache size, so a fastpath for that case is appropriate. + * @scan_objects will only be called if @count_objects returned a non-zero + * value for the number of freeable objects. The callout should scan the cache + * and attempt to free items from the cache. It should then return the number + * of objects freed during the scan, or SHRINK_STOP if progress cannot be made + * due to potential deadlocks. If SHRINK_STOP is returned, then no further + * attempts to call the @scan_objects will be made from the current reclaim + * context. */ struct shrinker { int (*shrink)(struct shrinker *, struct shrink_control *sc); + unsigned long (*count_objects)(struct shrinker *, + struct shrink_control *sc); + unsigned long (*scan_objects)(struct shrinker *, + struct shrink_control *sc); + int seeks; /* seeks to recreate an obj */ long batch; /* reclaim batch size, 0 = default */ diff --git a/mm/vmscan.c b/mm/vmscan.c index 2cff0d491c6d..4d4e859b4b9c 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -205,19 +205,24 @@ static inline int do_shrinker_shrink(struct shrinker *shrinker, * * Returns the number of slab objects which we shrunk. */ -unsigned long shrink_slab(struct shrink_control *shrink, +unsigned long shrink_slab(struct shrink_control *shrinkctl, unsigned long nr_pages_scanned, unsigned long lru_pages) { struct shrinker *shrinker; - unsigned long ret = 0; + unsigned long freed = 0; if (nr_pages_scanned == 0) nr_pages_scanned = SWAP_CLUSTER_MAX; if (!down_read_trylock(&shrinker_rwsem)) { - /* Assume we'll be able to shrink next time */ - ret = 1; + /* + * If we would return 0, our callers would understand that we + * have nothing else to shrink and give up trying. By returning + * 1 we keep it going and assume we'll be able to shrink next + * time. 
+ */ + freed = 1; goto out; } @@ -225,14 +230,16 @@ unsigned long shrink_slab(struct shrink_control *shrink, unsigned long long delta; long total_scan; long max_pass; - int shrink_ret = 0; long nr; long new_nr; long batch_size = shrinker->batch ? shrinker->batch : SHRINK_BATCH; - max_pass = do_shrinker_shrink(shrinker, shrink, 0); - if (max_pass <= 0) + if (shrinker->count_objects) + max_pass = shrinker->count_objects(shrinker, shrinkctl); + else + max_pass = do_shrinker_shrink(shrinker, shrinkctl, 0); + if (max_pass == 0) continue; /* @@ -248,8 +255,8 @@ unsigned long shrink_slab(struct shrink_control *shrink, do_div(delta, lru_pages + 1); total_scan += delta; if (total_scan < 0) { - printk(KERN_ERR "shrink_slab: %pF negative objects to " - "delete nr=%ld\n", + printk(KERN_ERR + "shrink_slab: %pF negative objects to delete nr=%ld\n", shrinker->shrink, total_scan); total_scan = max_pass; } @@ -277,20 +284,33 @@ unsigned long shrink_slab(struct shrink_control *shrink, if (total_scan > max_pass * 2) total_scan = max_pass * 2; - trace_mm_shrink_slab_start(shrinker, shrink, nr, + trace_mm_shrink_slab_start(shrinker, shrinkctl, nr, nr_pages_scanned, lru_pages, max_pass, delta, total_scan); while (total_scan >= batch_size) { - int nr_before; - nr_before = do_shrinker_shrink(shrinker, shrink, 0); - shrink_ret = do_shrinker_shrink(shrinker, shrink, - batch_size); - if (shrink_ret == -1) - break; - if (shrink_ret < nr_before) - ret += nr_before - shrink_ret; + if (shrinker->scan_objects) { + unsigned long ret; + shrinkctl->nr_to_scan = batch_size; + ret = shrinker->scan_objects(shrinker, shrinkctl); + + if (ret == SHRINK_STOP) + break; + freed += ret; + } else { + int nr_before; + long ret; + + nr_before = do_shrinker_shrink(shrinker, shrinkctl, 0); + ret = do_shrinker_shrink(shrinker, shrinkctl, + batch_size); + if (ret == -1) + break; + if (ret < nr_before) + freed += nr_before - ret; + } + count_vm_events(SLABS_SCANNED, batch_size); total_scan -= batch_size; @@ -308,12 +328,12 @@ unsigned long shrink_slab(struct shrink_control *shrink, else new_nr = atomic_long_read(&shrinker->nr_in_batch); - trace_mm_shrink_slab_end(shrinker, shrink_ret, nr, new_nr); + trace_mm_shrink_slab_end(shrinker, freed, nr, new_nr); } up_read(&shrinker_rwsem); out: cond_resched(); - return ret; + return freed; } static inline int is_page_cache_freeable(struct page *page) -- cgit v1.2.3 From 0a234c6dcb79a270803f5c9773ed650b78730962 Mon Sep 17 00:00:00 2001 From: Dave Chinner Date: Wed, 28 Aug 2013 10:17:57 +1000 Subject: shrinker: convert superblock shrinkers to new API MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Convert superblock shrinker to use the new count/scan API, and propagate the API changes through to the filesystem callouts. The filesystem callouts already use a count/scan API, so it's just changing counters to longs to match the VM API. This requires the dentry and inode shrinker callouts to be converted to the count/scan API. This is mainly a mechanical change. [glommer@openvz.org: use mult_frac for fractional proportions, build fixes] Signed-off-by: Dave Chinner Signed-off-by: Glauber Costa Acked-by: Mel Gorman Cc: "Theodore Ts'o" Cc: Adrian Hunter Cc: Al Viro Cc: Artem Bityutskiy Cc: Arve Hjønnevåg Cc: Carlos Maiolino Cc: Christoph Hellwig Cc: Chuck Lever Cc: Daniel Vetter Cc: David Rientjes Cc: Gleb Natapov Cc: Greg Thelen Cc: J. Bruce Fields Cc: Jan Kara Cc: Jerome Glisse Cc: John Stultz Cc: KAMEZAWA Hiroyuki Cc: Kent Overstreet Cc: Kirill A. 
Shutemov Cc: Marcelo Tosatti Cc: Mel Gorman Cc: Steven Whitehouse Cc: Thomas Hellstrom Cc: Trond Myklebust Signed-off-by: Andrew Morton Signed-off-by: Al Viro --- fs/dcache.c | 7 +++-- fs/inode.c | 7 +++-- fs/internal.h | 2 ++ fs/super.c | 80 ++++++++++++++++++++++++++++++++--------------------- fs/xfs/xfs_icache.c | 4 +-- fs/xfs/xfs_icache.h | 2 +- fs/xfs/xfs_super.c | 8 +++--- include/linux/fs.h | 8 ++---- 8 files changed, 70 insertions(+), 48 deletions(-) (limited to 'include') diff --git a/fs/dcache.c b/fs/dcache.c index 509b49410943..77d466b13fef 100644 --- a/fs/dcache.c +++ b/fs/dcache.c @@ -913,11 +913,12 @@ static void shrink_dentry_list(struct list_head *list) * This function may fail to free any resources if all the dentries are in * use. */ -void prune_dcache_sb(struct super_block *sb, int count) +long prune_dcache_sb(struct super_block *sb, unsigned long nr_to_scan) { struct dentry *dentry; LIST_HEAD(referenced); LIST_HEAD(tmp); + long freed = 0; relock: spin_lock(&sb->s_dentry_lru_lock); @@ -942,7 +943,8 @@ relock: this_cpu_dec(nr_dentry_unused); sb->s_nr_dentry_unused--; spin_unlock(&dentry->d_lock); - if (!--count) + freed++; + if (!--nr_to_scan) break; } cond_resched_lock(&sb->s_dentry_lru_lock); @@ -952,6 +954,7 @@ relock: spin_unlock(&sb->s_dentry_lru_lock); shrink_dentry_list(&tmp); + return freed; } /* diff --git a/fs/inode.c b/fs/inode.c index 2a3c37ea823d..021d64768a55 100644 --- a/fs/inode.c +++ b/fs/inode.c @@ -706,10 +706,11 @@ static int can_unuse(struct inode *inode) * LRU does not have strict ordering. Hence we don't want to reclaim inodes * with this flag set because they are the inodes that are out of order. */ -void prune_icache_sb(struct super_block *sb, int nr_to_scan) +long prune_icache_sb(struct super_block *sb, unsigned long nr_to_scan) { LIST_HEAD(freeable); - int nr_scanned; + long nr_scanned; + long freed = 0; unsigned long reap = 0; spin_lock(&sb->s_inode_lru_lock); @@ -779,6 +780,7 @@ void prune_icache_sb(struct super_block *sb, int nr_to_scan) list_move(&inode->i_lru, &freeable); sb->s_nr_inodes_unused--; this_cpu_dec(nr_unused); + freed++; } if (current_is_kswapd()) __count_vm_events(KSWAPD_INODESTEAL, reap); @@ -789,6 +791,7 @@ void prune_icache_sb(struct super_block *sb, int nr_to_scan) current->reclaim_state->reclaimed_slab += reap; dispose_list(&freeable); + return freed; } static void __wait_on_freeing_inode(struct inode *inode); diff --git a/fs/internal.h b/fs/internal.h index b6495659d6e8..cb83a3417a68 100644 --- a/fs/internal.h +++ b/fs/internal.h @@ -114,6 +114,7 @@ extern int open_check_o_direct(struct file *f); * inode.c */ extern spinlock_t inode_sb_list_lock; +extern long prune_icache_sb(struct super_block *sb, unsigned long nr_to_scan); extern void inode_add_lru(struct inode *inode); /* @@ -130,6 +131,7 @@ extern int invalidate_inodes(struct super_block *, bool); */ extern struct dentry *__d_alloc(struct super_block *, const struct qstr *); extern int d_set_mounted(struct dentry *dentry); +extern long prune_dcache_sb(struct super_block *sb, unsigned long nr_to_scan); /* * read_write.c diff --git a/fs/super.c b/fs/super.c index 3c5318694ccd..8aa2660642b9 100644 --- a/fs/super.c +++ b/fs/super.c @@ -53,11 +53,15 @@ static char *sb_writers_name[SB_FREEZE_LEVELS] = { * shrinker path and that leads to deadlock on the shrinker_rwsem. Hence we * take a passive reference to the superblock to avoid this from occurring. 
*/ -static int prune_super(struct shrinker *shrink, struct shrink_control *sc) +static unsigned long super_cache_scan(struct shrinker *shrink, + struct shrink_control *sc) { struct super_block *sb; - int fs_objects = 0; - int total_objects; + long fs_objects = 0; + long total_objects; + long freed = 0; + long dentries; + long inodes; sb = container_of(shrink, struct super_block, s_shrink); @@ -65,11 +69,11 @@ static int prune_super(struct shrinker *shrink, struct shrink_control *sc) * Deadlock avoidance. We may hold various FS locks, and we don't want * to recurse into the FS that called us in clear_inode() and friends.. */ - if (sc->nr_to_scan && !(sc->gfp_mask & __GFP_FS)) - return -1; + if (!(sc->gfp_mask & __GFP_FS)) + return SHRINK_STOP; if (!grab_super_passive(sb)) - return -1; + return SHRINK_STOP; if (sb->s_op->nr_cached_objects) fs_objects = sb->s_op->nr_cached_objects(sb); @@ -77,33 +81,46 @@ static int prune_super(struct shrinker *shrink, struct shrink_control *sc) total_objects = sb->s_nr_dentry_unused + sb->s_nr_inodes_unused + fs_objects + 1; - if (sc->nr_to_scan) { - int dentries; - int inodes; - - /* proportion the scan between the caches */ - dentries = mult_frac(sc->nr_to_scan, sb->s_nr_dentry_unused, - total_objects); - inodes = mult_frac(sc->nr_to_scan, sb->s_nr_inodes_unused, - total_objects); - if (fs_objects) - fs_objects = mult_frac(sc->nr_to_scan, fs_objects, - total_objects); - /* - * prune the dcache first as the icache is pinned by it, then - * prune the icache, followed by the filesystem specific caches - */ - prune_dcache_sb(sb, dentries); - prune_icache_sb(sb, inodes); + /* proportion the scan between the caches */ + dentries = mult_frac(sc->nr_to_scan, sb->s_nr_dentry_unused, + total_objects); + inodes = mult_frac(sc->nr_to_scan, sb->s_nr_inodes_unused, + total_objects); - if (fs_objects && sb->s_op->free_cached_objects) { - sb->s_op->free_cached_objects(sb, fs_objects); - fs_objects = sb->s_op->nr_cached_objects(sb); - } - total_objects = sb->s_nr_dentry_unused + - sb->s_nr_inodes_unused + fs_objects; + /* + * prune the dcache first as the icache is pinned by it, then + * prune the icache, followed by the filesystem specific caches + */ + freed = prune_dcache_sb(sb, dentries); + freed += prune_icache_sb(sb, inodes); + + if (fs_objects) { + fs_objects = mult_frac(sc->nr_to_scan, fs_objects, + total_objects); + freed += sb->s_op->free_cached_objects(sb, fs_objects); } + drop_super(sb); + return freed; +} + +static unsigned long super_cache_count(struct shrinker *shrink, + struct shrink_control *sc) +{ + struct super_block *sb; + long total_objects = 0; + + sb = container_of(shrink, struct super_block, s_shrink); + + if (!grab_super_passive(sb)) + return 0; + + if (sb->s_op && sb->s_op->nr_cached_objects) + total_objects = sb->s_op->nr_cached_objects(sb); + + total_objects += sb->s_nr_dentry_unused; + total_objects += sb->s_nr_inodes_unused; + total_objects = vfs_pressure_ratio(total_objects); drop_super(sb); return total_objects; @@ -211,7 +228,8 @@ static struct super_block *alloc_super(struct file_system_type *type, int flags) s->cleancache_poolid = -1; s->s_shrink.seeks = DEFAULT_SEEKS; - s->s_shrink.shrink = prune_super; + s->s_shrink.scan_objects = super_cache_scan; + s->s_shrink.count_objects = super_cache_count; s->s_shrink.batch = 1024; } out: diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c index 16219b9c6790..73b62a24ceac 100644 --- a/fs/xfs/xfs_icache.c +++ b/fs/xfs/xfs_icache.c @@ -1167,7 +1167,7 @@ xfs_reclaim_inodes( * them to be cleaned, 
which we hope will not be very long due to the * background walker having already kicked the IO off on those dirty inodes. */ -void +long xfs_reclaim_inodes_nr( struct xfs_mount *mp, int nr_to_scan) @@ -1176,7 +1176,7 @@ xfs_reclaim_inodes_nr( xfs_reclaim_work_queue(mp); xfs_ail_push_all(mp->m_ail); - xfs_reclaim_inodes_ag(mp, SYNC_TRYLOCK | SYNC_WAIT, &nr_to_scan); + return xfs_reclaim_inodes_ag(mp, SYNC_TRYLOCK | SYNC_WAIT, &nr_to_scan); } /* diff --git a/fs/xfs/xfs_icache.h b/fs/xfs/xfs_icache.h index 8a89f7d791bd..456f0144e1b6 100644 --- a/fs/xfs/xfs_icache.h +++ b/fs/xfs/xfs_icache.h @@ -46,7 +46,7 @@ void xfs_reclaim_worker(struct work_struct *work); int xfs_reclaim_inodes(struct xfs_mount *mp, int mode); int xfs_reclaim_inodes_count(struct xfs_mount *mp); -void xfs_reclaim_inodes_nr(struct xfs_mount *mp, int nr_to_scan); +long xfs_reclaim_inodes_nr(struct xfs_mount *mp, int nr_to_scan); void xfs_inode_set_reclaim_tag(struct xfs_inode *ip); diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c index 979a77d4b87d..71d7aaebb912 100644 --- a/fs/xfs/xfs_super.c +++ b/fs/xfs/xfs_super.c @@ -1535,19 +1535,19 @@ xfs_fs_mount( return mount_bdev(fs_type, flags, dev_name, data, xfs_fs_fill_super); } -static int +static long xfs_fs_nr_cached_objects( struct super_block *sb) { return xfs_reclaim_inodes_count(XFS_M(sb)); } -static void +static long xfs_fs_free_cached_objects( struct super_block *sb, - int nr_to_scan) + long nr_to_scan) { - xfs_reclaim_inodes_nr(XFS_M(sb), nr_to_scan); + return xfs_reclaim_inodes_nr(XFS_M(sb), nr_to_scan); } static const struct super_operations xfs_super_operations = { diff --git a/include/linux/fs.h b/include/linux/fs.h index 14a90f6886fa..0ae0bc3c1fde 100644 --- a/include/linux/fs.h +++ b/include/linux/fs.h @@ -1335,10 +1335,6 @@ struct super_block { struct workqueue_struct *s_dio_done_wq; }; -/* superblock cache pruning functions */ -extern void prune_icache_sb(struct super_block *sb, int nr_to_scan); -extern void prune_dcache_sb(struct super_block *sb, int nr_to_scan); - extern struct timespec current_fs_time(struct super_block *sb); /* @@ -1631,8 +1627,8 @@ struct super_operations { ssize_t (*quota_write)(struct super_block *, int, const char *, size_t, loff_t); #endif int (*bdev_try_to_free_page)(struct super_block*, struct page*, gfp_t); - int (*nr_cached_objects)(struct super_block *); - void (*free_cached_objects)(struct super_block *, int); + long (*nr_cached_objects)(struct super_block *); + long (*free_cached_objects)(struct super_block *, long); }; /* -- cgit v1.2.3 From a38e40824844a5ec85f3ea95632be953477d2afa Mon Sep 17 00:00:00 2001 From: Dave Chinner Date: Wed, 28 Aug 2013 10:17:58 +1000 Subject: list: add a new LRU list type MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Several subsystems use the same construct for LRU lists - a list head, a spin lock and and item count. They also use exactly the same code for adding and removing items from the LRU. Create a generic type for these LRU lists. This is the beginning of generic, node aware LRUs for shrinkers to work with. [glommer@openvz.org: enum defined constants for lru. Suggested by gthelen, don't relock over retry] Signed-off-by: Dave Chinner Signed-off-by: Glauber Costa Reviewed-by: Greg Thelen Acked-by: Mel Gorman Cc: "Theodore Ts'o" Cc: Adrian Hunter Cc: Al Viro Cc: Artem Bityutskiy Cc: Arve Hjønnevåg Cc: Carlos Maiolino Cc: Christoph Hellwig Cc: Chuck Lever Cc: Daniel Vetter Cc: David Rientjes Cc: Gleb Natapov Cc: Greg Thelen Cc: J. 
Bruce Fields Cc: Jan Kara Cc: Jerome Glisse Cc: John Stultz Cc: KAMEZAWA Hiroyuki Cc: Kent Overstreet Cc: Kirill A. Shutemov Cc: Marcelo Tosatti Cc: Mel Gorman Cc: Steven Whitehouse Cc: Thomas Hellstrom Cc: Trond Myklebust Signed-off-by: Andrew Morton Signed-off-by: Al Viro --- include/linux/list_lru.h | 115 ++++++++++++++++++++++++++++++++++++++++++++++ mm/Makefile | 2 +- mm/list_lru.c | 117 +++++++++++++++++++++++++++++++++++++++++++++++ 3 files changed, 233 insertions(+), 1 deletion(-) create mode 100644 include/linux/list_lru.h create mode 100644 mm/list_lru.c (limited to 'include') diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h new file mode 100644 index 000000000000..1a548b0b7578 --- /dev/null +++ b/include/linux/list_lru.h @@ -0,0 +1,115 @@ +/* + * Copyright (c) 2013 Red Hat, Inc. and Parallels Inc. All rights reserved. + * Authors: David Chinner and Glauber Costa + * + * Generic LRU infrastructure + */ +#ifndef _LRU_LIST_H +#define _LRU_LIST_H + +#include + +/* list_lru_walk_cb has to always return one of those */ +enum lru_status { + LRU_REMOVED, /* item removed from list */ + LRU_ROTATE, /* item referenced, give another pass */ + LRU_SKIP, /* item cannot be locked, skip */ + LRU_RETRY, /* item not freeable. May drop the lock + internally, but has to return locked. */ +}; + +struct list_lru { + spinlock_t lock; + struct list_head list; + /* kept as signed so we can catch imbalance bugs */ + long nr_items; +}; + +int list_lru_init(struct list_lru *lru); + +/** + * list_lru_add: add an element to the lru list's tail + * @list_lru: the lru pointer + * @item: the item to be added. + * + * If the element is already part of a list, this function returns doing + * nothing. Therefore the caller does not need to keep state about whether or + * not the element already belongs in the list and is allowed to lazy update + * it. Note however that this is valid for *a* list, not *this* list. If + * the caller organize itself in a way that elements can be in more than + * one type of list, it is up to the caller to fully remove the item from + * the previous list (with list_lru_del() for instance) before moving it + * to @list_lru + * + * Return value: true if the list was updated, false otherwise + */ +bool list_lru_add(struct list_lru *lru, struct list_head *item); + +/** + * list_lru_del: delete an element to the lru list + * @list_lru: the lru pointer + * @item: the item to be deleted. + * + * This function works analogously as list_lru_add in terms of list + * manipulation. The comments about an element already pertaining to + * a list are also valid for list_lru_del. + * + * Return value: true if the list was updated, false otherwise + */ +bool list_lru_del(struct list_lru *lru, struct list_head *item); + +/** + * list_lru_count: return the number of objects currently held by @lru + * @lru: the lru pointer. + * + * Always return a non-negative number, 0 for empty lists. There is no + * guarantee that the list is not updated while the count is being computed. + * Callers that want such a guarantee need to provide an outer lock. + */ +static inline unsigned long list_lru_count(struct list_lru *lru) +{ + return lru->nr_items; +} + +typedef enum lru_status +(*list_lru_walk_cb)(struct list_head *item, spinlock_t *lock, void *cb_arg); +/** + * list_lru_walk: walk a list_lru, isolating and disposing freeable items. + * @lru: the lru pointer. 
+ * @isolate: callback function that is resposible for deciding what to do with + * the item currently being scanned + * @cb_arg: opaque type that will be passed to @isolate + * @nr_to_walk: how many items to scan. + * + * This function will scan all elements in a particular list_lru, calling the + * @isolate callback for each of those items, along with the current list + * spinlock and a caller-provided opaque. The @isolate callback can choose to + * drop the lock internally, but *must* return with the lock held. The callback + * will return an enum lru_status telling the list_lru infrastructure what to + * do with the object being scanned. + * + * Please note that nr_to_walk does not mean how many objects will be freed, + * just how many objects will be scanned. + * + * Return value: the number of objects effectively removed from the LRU. + */ +unsigned long list_lru_walk(struct list_lru *lru, list_lru_walk_cb isolate, + void *cb_arg, unsigned long nr_to_walk); + +typedef void (*list_lru_dispose_cb)(struct list_head *dispose_list); +/** + * list_lru_dispose_all: forceably flush all elements in an @lru + * @lru: the lru pointer + * @dispose: callback function to be called for each lru list. + * + * This function will forceably isolate all elements into the dispose list, and + * call the @dispose callback to flush the list. Please note that the callback + * should expect items in any state, clean or dirty, and be able to flush all of + * them. + * + * Return value: how many objects were freed. It should be equal to all objects + * in the list_lru. + */ +unsigned long +list_lru_dispose_all(struct list_lru *lru, list_lru_dispose_cb dispose); +#endif /* _LRU_LIST_H */ diff --git a/mm/Makefile b/mm/Makefile index f00803386a67..305d10acd081 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -17,7 +17,7 @@ obj-y := filemap.o mempool.o oom_kill.o fadvise.o \ util.o mmzone.o vmstat.o backing-dev.o \ mm_init.o mmu_context.o percpu.o slab_common.o \ compaction.o balloon_compaction.o \ - interval_tree.o $(mmu-y) + interval_tree.o list_lru.o $(mmu-y) obj-y += init-mm.o diff --git a/mm/list_lru.c b/mm/list_lru.c new file mode 100644 index 000000000000..dd74c5434cd8 --- /dev/null +++ b/mm/list_lru.c @@ -0,0 +1,117 @@ +/* + * Copyright (c) 2013 Red Hat, Inc. and Parallels Inc. All rights reserved. + * Authors: David Chinner and Glauber Costa + * + * Generic LRU infrastructure + */ +#include +#include +#include + +bool list_lru_add(struct list_lru *lru, struct list_head *item) +{ + spin_lock(&lru->lock); + if (list_empty(item)) { + list_add_tail(item, &lru->list); + lru->nr_items++; + spin_unlock(&lru->lock); + return true; + } + spin_unlock(&lru->lock); + return false; +} +EXPORT_SYMBOL_GPL(list_lru_add); + +bool list_lru_del(struct list_lru *lru, struct list_head *item) +{ + spin_lock(&lru->lock); + if (!list_empty(item)) { + list_del_init(item); + lru->nr_items--; + spin_unlock(&lru->lock); + return true; + } + spin_unlock(&lru->lock); + return false; +} +EXPORT_SYMBOL_GPL(list_lru_del); + +unsigned long list_lru_walk(struct list_lru *lru, list_lru_walk_cb isolate, + void *cb_arg, unsigned long nr_to_walk) +{ + struct list_head *item, *n; + unsigned long removed = 0; + /* + * If we don't keep state of at which pass we are, we can loop at + * LRU_RETRY, since we have no guarantees that the caller will be able + * to do something other than retry on the next pass. We handle this by + * allowing at most one retry per object. This should not be altered + * by any condition other than LRU_RETRY. 
+ */ + bool first_pass = true; + + spin_lock(&lru->lock); +restart: + list_for_each_safe(item, n, &lru->list) { + enum lru_status ret; + ret = isolate(item, &lru->lock, cb_arg); + switch (ret) { + case LRU_REMOVED: + lru->nr_items--; + removed++; + break; + case LRU_ROTATE: + list_move_tail(item, &lru->list); + break; + case LRU_SKIP: + break; + case LRU_RETRY: + if (!first_pass) { + first_pass = true; + break; + } + first_pass = false; + goto restart; + default: + BUG(); + } + + if (nr_to_walk-- == 0) + break; + + } + spin_unlock(&lru->lock); + return removed; +} +EXPORT_SYMBOL_GPL(list_lru_walk); + +unsigned long list_lru_dispose_all(struct list_lru *lru, + list_lru_dispose_cb dispose) +{ + unsigned long disposed = 0; + LIST_HEAD(dispose_list); + + spin_lock(&lru->lock); + while (!list_empty(&lru->list)) { + list_splice_init(&lru->list, &dispose_list); + disposed += lru->nr_items; + lru->nr_items = 0; + spin_unlock(&lru->lock); + + dispose(&dispose_list); + + spin_lock(&lru->lock); + } + spin_unlock(&lru->lock); + return disposed; +} + +int list_lru_init(struct list_lru *lru) +{ + spin_lock_init(&lru->lock); + INIT_LIST_HEAD(&lru->list); + lru->nr_items = 0; + + return 0; +} +EXPORT_SYMBOL_GPL(list_lru_init); -- cgit v1.2.3 From bc3b14cb2d505dda969dbe3a31038dbb24aca945 Mon Sep 17 00:00:00 2001 From: Dave Chinner Date: Wed, 28 Aug 2013 10:17:58 +1000 Subject: inode: convert inode lru list to generic lru list code. MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [glommer@openvz.org: adapted for new LRU return codes] Signed-off-by: Dave Chinner Signed-off-by: Glauber Costa Cc: "Theodore Ts'o" Cc: Adrian Hunter Cc: Al Viro Cc: Artem Bityutskiy Cc: Arve Hjønnevåg Cc: Carlos Maiolino Cc: Christoph Hellwig Cc: Chuck Lever Cc: Daniel Vetter Cc: David Rientjes Cc: Gleb Natapov Cc: Greg Thelen Cc: J. Bruce Fields Cc: Jan Kara Cc: Jerome Glisse Cc: John Stultz Cc: KAMEZAWA Hiroyuki Cc: Kent Overstreet Cc: Kirill A. 
Shutemov Cc: Marcelo Tosatti Cc: Mel Gorman Cc: Steven Whitehouse Cc: Thomas Hellstrom Cc: Trond Myklebust Signed-off-by: Andrew Morton Signed-off-by: Al Viro --- fs/inode.c | 175 +++++++++++++++++++++-------------------------------- fs/super.c | 12 ++-- include/linux/fs.h | 6 +- 3 files changed, 77 insertions(+), 116 deletions(-) (limited to 'include') diff --git a/fs/inode.c b/fs/inode.c index 021d64768a55..a973d268c157 100644 --- a/fs/inode.c +++ b/fs/inode.c @@ -17,6 +17,7 @@ #include #include /* for inode_has_buffers */ #include +#include #include "internal.h" /* @@ -24,7 +25,7 @@ * * inode->i_lock protects: * inode->i_state, inode->i_hash, __iget() - * inode->i_sb->s_inode_lru_lock protects: + * Inode LRU list locks protect: * inode->i_sb->s_inode_lru, inode->i_lru * inode_sb_list_lock protects: * sb->s_inodes, inode->i_sb_list @@ -37,7 +38,7 @@ * * inode_sb_list_lock * inode->i_lock - * inode->i_sb->s_inode_lru_lock + * Inode LRU list locks * * bdi->wb.list_lock * inode->i_lock @@ -401,13 +402,8 @@ EXPORT_SYMBOL(ihold); static void inode_lru_list_add(struct inode *inode) { - spin_lock(&inode->i_sb->s_inode_lru_lock); - if (list_empty(&inode->i_lru)) { - list_add(&inode->i_lru, &inode->i_sb->s_inode_lru); - inode->i_sb->s_nr_inodes_unused++; + if (list_lru_add(&inode->i_sb->s_inode_lru, &inode->i_lru)) this_cpu_inc(nr_unused); - } - spin_unlock(&inode->i_sb->s_inode_lru_lock); } /* @@ -425,13 +421,9 @@ void inode_add_lru(struct inode *inode) static void inode_lru_list_del(struct inode *inode) { - spin_lock(&inode->i_sb->s_inode_lru_lock); - if (!list_empty(&inode->i_lru)) { - list_del_init(&inode->i_lru); - inode->i_sb->s_nr_inodes_unused--; + + if (list_lru_del(&inode->i_sb->s_inode_lru, &inode->i_lru)) this_cpu_dec(nr_unused); - } - spin_unlock(&inode->i_sb->s_inode_lru_lock); } /** @@ -675,24 +667,8 @@ int invalidate_inodes(struct super_block *sb, bool kill_dirty) return busy; } -static int can_unuse(struct inode *inode) -{ - if (inode->i_state & ~I_REFERENCED) - return 0; - if (inode_has_buffers(inode)) - return 0; - if (atomic_read(&inode->i_count)) - return 0; - if (inode->i_data.nrpages) - return 0; - return 1; -} - /* - * Walk the superblock inode LRU for freeable inodes and attempt to free them. - * This is called from the superblock shrinker function with a number of inodes - * to trim from the LRU. Inodes to be freed are moved to a temporary list and - * then are freed outside inode_lock by dispose_list(). + * Isolate the inode from the LRU in preparation for freeing it. * * Any inodes which are pinned purely because of attached pagecache have their * pagecache removed. If the inode has metadata buffers attached to @@ -706,90 +682,79 @@ static int can_unuse(struct inode *inode) * LRU does not have strict ordering. Hence we don't want to reclaim inodes * with this flag set because they are the inodes that are out of order. */ -long prune_icache_sb(struct super_block *sb, unsigned long nr_to_scan) +static enum lru_status +inode_lru_isolate(struct list_head *item, spinlock_t *lru_lock, void *arg) { - LIST_HEAD(freeable); - long nr_scanned; - long freed = 0; - unsigned long reap = 0; + struct list_head *freeable = arg; + struct inode *inode = container_of(item, struct inode, i_lru); - spin_lock(&sb->s_inode_lru_lock); - for (nr_scanned = nr_to_scan; nr_scanned >= 0; nr_scanned--) { - struct inode *inode; + /* + * we are inverting the lru lock/inode->i_lock here, so use a trylock. + * If we fail to get the lock, just skip it. 
+ */ + if (!spin_trylock(&inode->i_lock)) + return LRU_SKIP; - if (list_empty(&sb->s_inode_lru)) - break; + /* + * Referenced or dirty inodes are still in use. Give them another pass + * through the LRU as we canot reclaim them now. + */ + if (atomic_read(&inode->i_count) || + (inode->i_state & ~I_REFERENCED)) { + list_del_init(&inode->i_lru); + spin_unlock(&inode->i_lock); + this_cpu_dec(nr_unused); + return LRU_REMOVED; + } - inode = list_entry(sb->s_inode_lru.prev, struct inode, i_lru); + /* recently referenced inodes get one more pass */ + if (inode->i_state & I_REFERENCED) { + inode->i_state &= ~I_REFERENCED; + spin_unlock(&inode->i_lock); + return LRU_ROTATE; + } - /* - * we are inverting the sb->s_inode_lru_lock/inode->i_lock here, - * so use a trylock. If we fail to get the lock, just move the - * inode to the back of the list so we don't spin on it. - */ - if (!spin_trylock(&inode->i_lock)) { - list_move(&inode->i_lru, &sb->s_inode_lru); - continue; + if (inode_has_buffers(inode) || inode->i_data.nrpages) { + __iget(inode); + spin_unlock(&inode->i_lock); + spin_unlock(lru_lock); + if (remove_inode_buffers(inode)) { + unsigned long reap; + reap = invalidate_mapping_pages(&inode->i_data, 0, -1); + if (current_is_kswapd()) + __count_vm_events(KSWAPD_INODESTEAL, reap); + else + __count_vm_events(PGINODESTEAL, reap); + if (current->reclaim_state) + current->reclaim_state->reclaimed_slab += reap; } + iput(inode); + spin_lock(lru_lock); + return LRU_RETRY; + } - /* - * Referenced or dirty inodes are still in use. Give them - * another pass through the LRU as we canot reclaim them now. - */ - if (atomic_read(&inode->i_count) || - (inode->i_state & ~I_REFERENCED)) { - list_del_init(&inode->i_lru); - spin_unlock(&inode->i_lock); - sb->s_nr_inodes_unused--; - this_cpu_dec(nr_unused); - continue; - } + WARN_ON(inode->i_state & I_NEW); + inode->i_state |= I_FREEING; + spin_unlock(&inode->i_lock); - /* recently referenced inodes get one more pass */ - if (inode->i_state & I_REFERENCED) { - inode->i_state &= ~I_REFERENCED; - list_move(&inode->i_lru, &sb->s_inode_lru); - spin_unlock(&inode->i_lock); - continue; - } - if (inode_has_buffers(inode) || inode->i_data.nrpages) { - __iget(inode); - spin_unlock(&inode->i_lock); - spin_unlock(&sb->s_inode_lru_lock); - if (remove_inode_buffers(inode)) - reap += invalidate_mapping_pages(&inode->i_data, - 0, -1); - iput(inode); - spin_lock(&sb->s_inode_lru_lock); - - if (inode != list_entry(sb->s_inode_lru.next, - struct inode, i_lru)) - continue; /* wrong inode or list_empty */ - /* avoid lock inversions with trylock */ - if (!spin_trylock(&inode->i_lock)) - continue; - if (!can_unuse(inode)) { - spin_unlock(&inode->i_lock); - continue; - } - } - WARN_ON(inode->i_state & I_NEW); - inode->i_state |= I_FREEING; - spin_unlock(&inode->i_lock); + list_move(&inode->i_lru, freeable); + this_cpu_dec(nr_unused); + return LRU_REMOVED; +} - list_move(&inode->i_lru, &freeable); - sb->s_nr_inodes_unused--; - this_cpu_dec(nr_unused); - freed++; - } - if (current_is_kswapd()) - __count_vm_events(KSWAPD_INODESTEAL, reap); - else - __count_vm_events(PGINODESTEAL, reap); - spin_unlock(&sb->s_inode_lru_lock); - if (current->reclaim_state) - current->reclaim_state->reclaimed_slab += reap; +/* + * Walk the superblock inode LRU for freeable inodes and attempt to free them. + * This is called from the superblock shrinker function with a number of inodes + * to trim from the LRU. 
Inodes to be freed are moved to a temporary list and + * then are freed outside inode_lock by dispose_list(). + */ +long prune_icache_sb(struct super_block *sb, unsigned long nr_to_scan) +{ + LIST_HEAD(freeable); + long freed; + freed = list_lru_walk(&sb->s_inode_lru, inode_lru_isolate, + &freeable, nr_to_scan); dispose_list(&freeable); return freed; } diff --git a/fs/super.c b/fs/super.c index 8aa2660642b9..aa7995d73bcc 100644 --- a/fs/super.c +++ b/fs/super.c @@ -78,14 +78,13 @@ static unsigned long super_cache_scan(struct shrinker *shrink, if (sb->s_op->nr_cached_objects) fs_objects = sb->s_op->nr_cached_objects(sb); - total_objects = sb->s_nr_dentry_unused + - sb->s_nr_inodes_unused + fs_objects + 1; + inodes = list_lru_count(&sb->s_inode_lru); + total_objects = sb->s_nr_dentry_unused + inodes + fs_objects + 1; /* proportion the scan between the caches */ dentries = mult_frac(sc->nr_to_scan, sb->s_nr_dentry_unused, total_objects); - inodes = mult_frac(sc->nr_to_scan, sb->s_nr_inodes_unused, - total_objects); + inodes = mult_frac(sc->nr_to_scan, inodes, total_objects); /* * prune the dcache first as the icache is pinned by it, then @@ -119,7 +118,7 @@ static unsigned long super_cache_count(struct shrinker *shrink, total_objects = sb->s_op->nr_cached_objects(sb); total_objects += sb->s_nr_dentry_unused; - total_objects += sb->s_nr_inodes_unused; + total_objects += list_lru_count(&sb->s_inode_lru); total_objects = vfs_pressure_ratio(total_objects); drop_super(sb); @@ -194,8 +193,7 @@ static struct super_block *alloc_super(struct file_system_type *type, int flags) INIT_LIST_HEAD(&s->s_inodes); INIT_LIST_HEAD(&s->s_dentry_lru); spin_lock_init(&s->s_dentry_lru_lock); - INIT_LIST_HEAD(&s->s_inode_lru); - spin_lock_init(&s->s_inode_lru_lock); + list_lru_init(&s->s_inode_lru); INIT_LIST_HEAD(&s->s_mounts); init_rwsem(&s->s_umount); lockdep_set_class(&s->s_umount, &type->s_umount_key); diff --git a/include/linux/fs.h b/include/linux/fs.h index 0ae0bc3c1fde..e04786569c28 100644 --- a/include/linux/fs.h +++ b/include/linux/fs.h @@ -10,6 +10,7 @@ #include #include #include +#include #include #include #include @@ -1275,10 +1276,7 @@ struct super_block { struct list_head s_dentry_lru; /* unused dentry lru */ long s_nr_dentry_unused; /* # of dentry on lru */ - /* s_inode_lru_lock protects s_inode_lru and s_nr_inodes_unused */ - spinlock_t s_inode_lru_lock ____cacheline_aligned_in_smp; - struct list_head s_inode_lru; /* unused inode lru */ - long s_nr_inodes_unused; /* # of inodes on lru */ + struct list_lru s_inode_lru ____cacheline_aligned_in_smp; struct block_device *s_bdev; struct backing_dev_info *s_bdi; -- cgit v1.2.3 From f604156751db77e08afe47ce29fe8f3d51ad9b04 Mon Sep 17 00:00:00 2001 From: Dave Chinner Date: Wed, 28 Aug 2013 10:18:00 +1000 Subject: dcache: convert to use new lru list infrastructure MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [glommer@openvz.org: don't reintroduce double decrement of nr_unused_dentries, adapted for new LRU return codes] Signed-off-by: Dave Chinner Signed-off-by: Glauber Costa Cc: "Theodore Ts'o" Cc: Adrian Hunter Cc: Al Viro Cc: Artem Bityutskiy Cc: Arve Hjønnevåg Cc: Carlos Maiolino Cc: Christoph Hellwig Cc: Chuck Lever Cc: Daniel Vetter Cc: David Rientjes Cc: Gleb Natapov Cc: Greg Thelen Cc: J. Bruce Fields Cc: Jan Kara Cc: Jerome Glisse Cc: John Stultz Cc: KAMEZAWA Hiroyuki Cc: Kent Overstreet Cc: Kirill A. 
Shutemov Cc: Marcelo Tosatti Cc: Mel Gorman Cc: Steven Whitehouse Cc: Thomas Hellstrom Cc: Trond Myklebust Signed-off-by: Andrew Morton Signed-off-by: Al Viro --- fs/dcache.c | 170 ++++++++++++++++++++++++----------------------------- fs/super.c | 11 ++-- include/linux/fs.h | 15 +++-- 3 files changed, 90 insertions(+), 106 deletions(-) (limited to 'include') diff --git a/fs/dcache.c b/fs/dcache.c index 77d466b13fef..38a4a03499a2 100644 --- a/fs/dcache.c +++ b/fs/dcache.c @@ -37,6 +37,7 @@ #include #include #include +#include #include "internal.h" #include "mount.h" @@ -356,28 +357,17 @@ static void dentry_unlink_inode(struct dentry * dentry) } /* - * dentry_lru_(add|del|move_list) must be called with d_lock held. + * dentry_lru_(add|del)_list) must be called with d_lock held. */ static void dentry_lru_add(struct dentry *dentry) { if (unlikely(!(dentry->d_flags & DCACHE_LRU_LIST))) { - spin_lock(&dentry->d_sb->s_dentry_lru_lock); + if (list_lru_add(&dentry->d_sb->s_dentry_lru, &dentry->d_lru)) + this_cpu_inc(nr_dentry_unused); dentry->d_flags |= DCACHE_LRU_LIST; - list_add(&dentry->d_lru, &dentry->d_sb->s_dentry_lru); - dentry->d_sb->s_nr_dentry_unused++; - this_cpu_inc(nr_dentry_unused); - spin_unlock(&dentry->d_sb->s_dentry_lru_lock); } } -static void __dentry_lru_del(struct dentry *dentry) -{ - list_del_init(&dentry->d_lru); - dentry->d_flags &= ~DCACHE_LRU_LIST; - dentry->d_sb->s_nr_dentry_unused--; - this_cpu_dec(nr_dentry_unused); -} - /* * Remove a dentry with references from the LRU. * @@ -393,27 +383,9 @@ static void dentry_lru_del(struct dentry *dentry) return; } - if (!list_empty(&dentry->d_lru)) { - spin_lock(&dentry->d_sb->s_dentry_lru_lock); - __dentry_lru_del(dentry); - spin_unlock(&dentry->d_sb->s_dentry_lru_lock); - } -} - -static void dentry_lru_move_list(struct dentry *dentry, struct list_head *list) -{ - BUG_ON(dentry->d_flags & DCACHE_SHRINK_LIST); - - spin_lock(&dentry->d_sb->s_dentry_lru_lock); - if (list_empty(&dentry->d_lru)) { - dentry->d_flags |= DCACHE_LRU_LIST; - list_add_tail(&dentry->d_lru, list); - } else { - list_move_tail(&dentry->d_lru, list); - dentry->d_sb->s_nr_dentry_unused--; + if (list_lru_del(&dentry->d_sb->s_dentry_lru, &dentry->d_lru)) this_cpu_dec(nr_dentry_unused); - } - spin_unlock(&dentry->d_sb->s_dentry_lru_lock); + dentry->d_flags &= ~DCACHE_LRU_LIST; } /** @@ -901,12 +873,72 @@ static void shrink_dentry_list(struct list_head *list) rcu_read_unlock(); } +static enum lru_status +dentry_lru_isolate(struct list_head *item, spinlock_t *lru_lock, void *arg) +{ + struct list_head *freeable = arg; + struct dentry *dentry = container_of(item, struct dentry, d_lru); + + + /* + * we are inverting the lru lock/dentry->d_lock here, + * so use a trylock. If we fail to get the lock, just skip + * it + */ + if (!spin_trylock(&dentry->d_lock)) + return LRU_SKIP; + + /* + * Referenced dentries are still in use. If they have active + * counts, just remove them from the LRU. Otherwise give them + * another pass through the LRU. + */ + if (dentry->d_lockref.count) { + list_del_init(&dentry->d_lru); + spin_unlock(&dentry->d_lock); + return LRU_REMOVED; + } + + if (dentry->d_flags & DCACHE_REFERENCED) { + dentry->d_flags &= ~DCACHE_REFERENCED; + spin_unlock(&dentry->d_lock); + + /* + * The list move itself will be made by the common LRU code. At + * this point, we've dropped the dentry->d_lock but keep the + * lru lock. This is safe to do, since every list movement is + * protected by the lru lock even if both locks are held. 
+ * + * This is guaranteed by the fact that all LRU management + * functions are intermediated by the LRU API calls like + * list_lru_add and list_lru_del. List movement in this file + * only ever occur through this functions or through callbacks + * like this one, that are called from the LRU API. + * + * The only exceptions to this are functions like + * shrink_dentry_list, and code that first checks for the + * DCACHE_SHRINK_LIST flag. Those are guaranteed to be + * operating only with stack provided lists after they are + * properly isolated from the main list. It is thus, always a + * local access. + */ + return LRU_ROTATE; + } + + dentry->d_flags |= DCACHE_SHRINK_LIST; + list_move_tail(&dentry->d_lru, freeable); + this_cpu_dec(nr_dentry_unused); + spin_unlock(&dentry->d_lock); + + return LRU_REMOVED; +} + /** * prune_dcache_sb - shrink the dcache * @sb: superblock - * @count: number of entries to try to free + * @nr_to_scan : number of entries to try to free * - * Attempt to shrink the superblock dcache LRU by @count entries. This is + * Attempt to shrink the superblock dcache LRU by @nr_to_scan entries. This is * done when we need more memory an called from the superblock shrinker * function. * @@ -915,45 +947,12 @@ static void shrink_dentry_list(struct list_head *list) */ long prune_dcache_sb(struct super_block *sb, unsigned long nr_to_scan) { - struct dentry *dentry; - LIST_HEAD(referenced); - LIST_HEAD(tmp); - long freed = 0; + LIST_HEAD(dispose); + long freed; -relock: - spin_lock(&sb->s_dentry_lru_lock); - while (!list_empty(&sb->s_dentry_lru)) { - dentry = list_entry(sb->s_dentry_lru.prev, - struct dentry, d_lru); - BUG_ON(dentry->d_sb != sb); - - if (!spin_trylock(&dentry->d_lock)) { - spin_unlock(&sb->s_dentry_lru_lock); - cpu_relax(); - goto relock; - } - - if (dentry->d_flags & DCACHE_REFERENCED) { - dentry->d_flags &= ~DCACHE_REFERENCED; - list_move(&dentry->d_lru, &referenced); - spin_unlock(&dentry->d_lock); - } else { - list_move(&dentry->d_lru, &tmp); - dentry->d_flags |= DCACHE_SHRINK_LIST; - this_cpu_dec(nr_dentry_unused); - sb->s_nr_dentry_unused--; - spin_unlock(&dentry->d_lock); - freed++; - if (!--nr_to_scan) - break; - } - cond_resched_lock(&sb->s_dentry_lru_lock); - } - if (!list_empty(&referenced)) - list_splice(&referenced, &sb->s_dentry_lru); - spin_unlock(&sb->s_dentry_lru_lock); - - shrink_dentry_list(&tmp); + freed = list_lru_walk(&sb->s_dentry_lru, dentry_lru_isolate, + &dispose, nr_to_scan); + shrink_dentry_list(&dispose); return freed; } @@ -987,24 +986,10 @@ shrink_dcache_list( */ void shrink_dcache_sb(struct super_block *sb) { - LIST_HEAD(tmp); - - spin_lock(&sb->s_dentry_lru_lock); - while (!list_empty(&sb->s_dentry_lru)) { - /* - * account for removal here so we don't need to handle it later - * even though the dentry is no longer on the lru list. 
- */ - list_splice_init(&sb->s_dentry_lru, &tmp); - this_cpu_sub(nr_dentry_unused, sb->s_nr_dentry_unused); - sb->s_nr_dentry_unused = 0; - spin_unlock(&sb->s_dentry_lru_lock); + long disposed; - shrink_dcache_list(&tmp); - - spin_lock(&sb->s_dentry_lru_lock); - } - spin_unlock(&sb->s_dentry_lru_lock); + disposed = list_lru_dispose_all(&sb->s_dentry_lru, shrink_dcache_list); + this_cpu_sub(nr_dentry_unused, disposed); } EXPORT_SYMBOL(shrink_dcache_sb); @@ -1366,7 +1351,8 @@ static enum d_walk_ret select_collect(void *_data, struct dentry *dentry) if (dentry->d_lockref.count) { dentry_lru_del(dentry); } else if (!(dentry->d_flags & DCACHE_SHRINK_LIST)) { - dentry_lru_move_list(dentry, &data->dispose); + dentry_lru_del(dentry); + list_add_tail(&dentry->d_lru, &data->dispose); dentry->d_flags |= DCACHE_SHRINK_LIST; data->found++; ret = D_WALK_NORETRY; diff --git a/fs/super.c b/fs/super.c index aa7995d73bcc..cd3c2cd9144d 100644 --- a/fs/super.c +++ b/fs/super.c @@ -79,11 +79,11 @@ static unsigned long super_cache_scan(struct shrinker *shrink, fs_objects = sb->s_op->nr_cached_objects(sb); inodes = list_lru_count(&sb->s_inode_lru); - total_objects = sb->s_nr_dentry_unused + inodes + fs_objects + 1; + dentries = list_lru_count(&sb->s_dentry_lru); + total_objects = dentries + inodes + fs_objects + 1; /* proportion the scan between the caches */ - dentries = mult_frac(sc->nr_to_scan, sb->s_nr_dentry_unused, - total_objects); + dentries = mult_frac(sc->nr_to_scan, dentries, total_objects); inodes = mult_frac(sc->nr_to_scan, inodes, total_objects); /* @@ -117,7 +117,7 @@ static unsigned long super_cache_count(struct shrinker *shrink, if (sb->s_op && sb->s_op->nr_cached_objects) total_objects = sb->s_op->nr_cached_objects(sb); - total_objects += sb->s_nr_dentry_unused; + total_objects += list_lru_count(&sb->s_dentry_lru); total_objects += list_lru_count(&sb->s_inode_lru); total_objects = vfs_pressure_ratio(total_objects); @@ -191,8 +191,7 @@ static struct super_block *alloc_super(struct file_system_type *type, int flags) INIT_HLIST_NODE(&s->s_instances); INIT_HLIST_BL_HEAD(&s->s_anon); INIT_LIST_HEAD(&s->s_inodes); - INIT_LIST_HEAD(&s->s_dentry_lru); - spin_lock_init(&s->s_dentry_lru_lock); + list_lru_init(&s->s_dentry_lru); list_lru_init(&s->s_inode_lru); INIT_LIST_HEAD(&s->s_mounts); init_rwsem(&s->s_umount); diff --git a/include/linux/fs.h b/include/linux/fs.h index e04786569c28..36e45df87f6e 100644 --- a/include/linux/fs.h +++ b/include/linux/fs.h @@ -1270,14 +1270,6 @@ struct super_block { struct list_head s_files; #endif struct list_head s_mounts; /* list of mounts; _not_ for fs use */ - - /* s_dentry_lru_lock protects s_dentry_lru and s_nr_dentry_unused */ - spinlock_t s_dentry_lru_lock ____cacheline_aligned_in_smp; - struct list_head s_dentry_lru; /* unused dentry lru */ - long s_nr_dentry_unused; /* # of dentry on lru */ - - struct list_lru s_inode_lru ____cacheline_aligned_in_smp; - struct block_device *s_bdev; struct backing_dev_info *s_bdi; struct mtd_info *s_mtd; @@ -1331,6 +1323,13 @@ struct super_block { /* AIO completions deferred from interrupt context */ struct workqueue_struct *s_dio_done_wq; + + /* + * Keep the lru lists last in the structure so they always sit on their + * own individual cachelines. 
+ */ + struct list_lru s_dentry_lru ____cacheline_aligned_in_smp; + struct list_lru s_inode_lru ____cacheline_aligned_in_smp; }; extern struct timespec current_fs_time(struct super_block *sb); -- cgit v1.2.3 From 3b1d58a4c96799eb4c92039e1b851b86f853548a Mon Sep 17 00:00:00 2001 From: Dave Chinner Date: Wed, 28 Aug 2013 10:18:00 +1000 Subject: list_lru: per-node list infrastructure MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Now that we have an LRU list API, we can start to enhance the implementation. This splits the single LRU list into per-node lists and locks to enhance scalability. Items are placed on lists according to the node the memory belongs to. To make scanning the lists efficient, also track whether the per-node lists have entries in them in a active nodemask. Note: We use a fixed-size array for the node LRU, this struct can be very big if MAX_NUMNODES is big. If this becomes a problem this is fixable by turning this into a pointer and dynamically allocating this to nr_node_ids. This quantity is firwmare-provided, and still would provide room for all nodes at the cost of a pointer lookup and an extra allocation. Because that allocation will most likely come from a may very well fail. [glommer@openvz.org: fix warnings, added note about node lru] Signed-off-by: Dave Chinner Signed-off-by: Glauber Costa Reviewed-by: Greg Thelen Acked-by: Mel Gorman Cc: "Theodore Ts'o" Cc: Adrian Hunter Cc: Al Viro Cc: Artem Bityutskiy Cc: Arve Hjønnevåg Cc: Carlos Maiolino Cc: Christoph Hellwig Cc: Chuck Lever Cc: Daniel Vetter Cc: David Rientjes Cc: Gleb Natapov Cc: Greg Thelen Cc: J. Bruce Fields Cc: Jan Kara Cc: Jerome Glisse Cc: John Stultz Cc: KAMEZAWA Hiroyuki Cc: Kent Overstreet Cc: Kirill A. Shutemov Cc: Marcelo Tosatti Cc: Mel Gorman Cc: Steven Whitehouse Cc: Thomas Hellstrom Cc: Trond Myklebust Signed-off-by: Andrew Morton Signed-off-by: Al Viro --- include/linux/list_lru.h | 23 ++++++-- mm/list_lru.c | 146 +++++++++++++++++++++++++++++++++++------------ 2 files changed, 129 insertions(+), 40 deletions(-) (limited to 'include') diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h index 1a548b0b7578..f4d4cb608c02 100644 --- a/include/linux/list_lru.h +++ b/include/linux/list_lru.h @@ -8,6 +8,7 @@ #define _LRU_LIST_H #include +#include /* list_lru_walk_cb has to always return one of those */ enum lru_status { @@ -18,11 +19,26 @@ enum lru_status { internally, but has to return locked. */ }; -struct list_lru { +struct list_lru_node { spinlock_t lock; struct list_head list; /* kept as signed so we can catch imbalance bugs */ long nr_items; +} ____cacheline_aligned_in_smp; + +struct list_lru { + /* + * Because we use a fixed-size array, this struct can be very big if + * MAX_NUMNODES is big. If this becomes a problem this is fixable by + * turning this into a pointer and dynamically allocating this to + * nr_node_ids. This quantity is firwmare-provided, and still would + * provide room for all nodes at the cost of a pointer lookup and an + * extra allocation. Because that allocation will most likely come from + * a different slab cache than the main structure holding this + * structure, we may very well fail. + */ + struct list_lru_node node[MAX_NUMNODES]; + nodemask_t active_nodes; }; int list_lru_init(struct list_lru *lru); @@ -66,10 +82,7 @@ bool list_lru_del(struct list_lru *lru, struct list_head *item); * guarantee that the list is not updated while the count is being computed. 
* Callers that want such a guarantee need to provide an outer lock. */ -static inline unsigned long list_lru_count(struct list_lru *lru) -{ - return lru->nr_items; -} +unsigned long list_lru_count(struct list_lru *lru); typedef enum lru_status (*list_lru_walk_cb)(struct list_head *item, spinlock_t *lock, void *cb_arg); diff --git a/mm/list_lru.c b/mm/list_lru.c index dd74c5434cd8..1efe4ecc02b1 100644 --- a/mm/list_lru.c +++ b/mm/list_lru.c @@ -6,41 +6,73 @@ */ #include #include +#include #include bool list_lru_add(struct list_lru *lru, struct list_head *item) { - spin_lock(&lru->lock); + int nid = page_to_nid(virt_to_page(item)); + struct list_lru_node *nlru = &lru->node[nid]; + + spin_lock(&nlru->lock); + WARN_ON_ONCE(nlru->nr_items < 0); if (list_empty(item)) { - list_add_tail(item, &lru->list); - lru->nr_items++; - spin_unlock(&lru->lock); + list_add_tail(item, &nlru->list); + if (nlru->nr_items++ == 0) + node_set(nid, lru->active_nodes); + spin_unlock(&nlru->lock); return true; } - spin_unlock(&lru->lock); + spin_unlock(&nlru->lock); return false; } EXPORT_SYMBOL_GPL(list_lru_add); bool list_lru_del(struct list_lru *lru, struct list_head *item) { - spin_lock(&lru->lock); + int nid = page_to_nid(virt_to_page(item)); + struct list_lru_node *nlru = &lru->node[nid]; + + spin_lock(&nlru->lock); if (!list_empty(item)) { list_del_init(item); - lru->nr_items--; - spin_unlock(&lru->lock); + if (--nlru->nr_items == 0) + node_clear(nid, lru->active_nodes); + WARN_ON_ONCE(nlru->nr_items < 0); + spin_unlock(&nlru->lock); return true; } - spin_unlock(&lru->lock); + spin_unlock(&nlru->lock); return false; } EXPORT_SYMBOL_GPL(list_lru_del); -unsigned long list_lru_walk(struct list_lru *lru, list_lru_walk_cb isolate, - void *cb_arg, unsigned long nr_to_walk) +unsigned long list_lru_count(struct list_lru *lru) { + unsigned long count = 0; + int nid; + + for_each_node_mask(nid, lru->active_nodes) { + struct list_lru_node *nlru = &lru->node[nid]; + + spin_lock(&nlru->lock); + WARN_ON_ONCE(nlru->nr_items < 0); + count += nlru->nr_items; + spin_unlock(&nlru->lock); + } + + return count; +} +EXPORT_SYMBOL_GPL(list_lru_count); + +static unsigned long +list_lru_walk_node(struct list_lru *lru, int nid, list_lru_walk_cb isolate, + void *cb_arg, unsigned long *nr_to_walk) +{ + + struct list_lru_node *nlru = &lru->node[nid]; struct list_head *item, *n; - unsigned long removed = 0; + unsigned long isolated = 0; /* * If we don't keep state of at which pass we are, we can loop at * LRU_RETRY, since we have no guarantees that the caller will be able @@ -50,18 +82,20 @@ unsigned long list_lru_walk(struct list_lru *lru, list_lru_walk_cb isolate, */ bool first_pass = true; - spin_lock(&lru->lock); + spin_lock(&nlru->lock); restart: - list_for_each_safe(item, n, &lru->list) { + list_for_each_safe(item, n, &nlru->list) { enum lru_status ret; - ret = isolate(item, &lru->lock, cb_arg); + ret = isolate(item, &nlru->lock, cb_arg); switch (ret) { case LRU_REMOVED: - lru->nr_items--; - removed++; + if (--nlru->nr_items == 0) + node_clear(nid, lru->active_nodes); + WARN_ON_ONCE(nlru->nr_items < 0); + isolated++; break; case LRU_ROTATE: - list_move_tail(item, &lru->list); + list_move_tail(item, &nlru->list); break; case LRU_SKIP: break; @@ -76,42 +110,84 @@ restart: BUG(); } - if (nr_to_walk-- == 0) + if ((*nr_to_walk)-- == 0) break; } - spin_unlock(&lru->lock); - return removed; + + spin_unlock(&nlru->lock); + return isolated; +} +EXPORT_SYMBOL_GPL(list_lru_walk_node); + +unsigned long list_lru_walk(struct list_lru *lru, 
list_lru_walk_cb isolate, + void *cb_arg, unsigned long nr_to_walk) +{ + unsigned long isolated = 0; + int nid; + + for_each_node_mask(nid, lru->active_nodes) { + isolated += list_lru_walk_node(lru, nid, isolate, + cb_arg, &nr_to_walk); + if (nr_to_walk <= 0) + break; + } + return isolated; } EXPORT_SYMBOL_GPL(list_lru_walk); -unsigned long list_lru_dispose_all(struct list_lru *lru, - list_lru_dispose_cb dispose) +static unsigned long list_lru_dispose_all_node(struct list_lru *lru, int nid, + list_lru_dispose_cb dispose) { - unsigned long disposed = 0; + struct list_lru_node *nlru = &lru->node[nid]; LIST_HEAD(dispose_list); + unsigned long disposed = 0; - spin_lock(&lru->lock); - while (!list_empty(&lru->list)) { - list_splice_init(&lru->list, &dispose_list); - disposed += lru->nr_items; - lru->nr_items = 0; - spin_unlock(&lru->lock); + spin_lock(&nlru->lock); + while (!list_empty(&nlru->list)) { + list_splice_init(&nlru->list, &dispose_list); + disposed += nlru->nr_items; + nlru->nr_items = 0; + node_clear(nid, lru->active_nodes); + spin_unlock(&nlru->lock); dispose(&dispose_list); - spin_lock(&lru->lock); + spin_lock(&nlru->lock); } - spin_unlock(&lru->lock); + spin_unlock(&nlru->lock); return disposed; } +unsigned long list_lru_dispose_all(struct list_lru *lru, + list_lru_dispose_cb dispose) +{ + unsigned long disposed; + unsigned long total = 0; + int nid; + + do { + disposed = 0; + for_each_node_mask(nid, lru->active_nodes) { + disposed += list_lru_dispose_all_node(lru, nid, + dispose); + } + total += disposed; + } while (disposed != 0); + + return total; +} + int list_lru_init(struct list_lru *lru) { - spin_lock_init(&lru->lock); - INIT_LIST_HEAD(&lru->list); - lru->nr_items = 0; + int i; + nodes_clear(lru->active_nodes); + for (i = 0; i < MAX_NUMNODES; i++) { + spin_lock_init(&lru->node[i].lock); + INIT_LIST_HEAD(&lru->node[i].list); + lru->node[i].nr_items = 0; + } return 0; } EXPORT_SYMBOL_GPL(list_lru_init); -- cgit v1.2.3 From 6a4f496fd2fc74fa036732ae52c184952d6e3e37 Mon Sep 17 00:00:00 2001 From: Glauber Costa Date: Wed, 28 Aug 2013 10:18:02 +1000 Subject: list_lru: per-node API MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit This patch adapts the list_lru API to accept an optional node argument, to be used by NUMA aware shrinking functions. Code that does not care about the NUMA placement of objects can still call into the very same functions as before. They will simply iterate over all nodes. Signed-off-by: Glauber Costa Cc: Dave Chinner Cc: Mel Gorman Cc: "Theodore Ts'o" Cc: Adrian Hunter Cc: Al Viro Cc: Artem Bityutskiy Cc: Arve Hjønnevåg Cc: Carlos Maiolino Cc: Christoph Hellwig Cc: Chuck Lever Cc: Daniel Vetter Cc: David Rientjes Cc: Gleb Natapov Cc: Greg Thelen Cc: J. Bruce Fields Cc: Jan Kara Cc: Jerome Glisse Cc: John Stultz Cc: KAMEZAWA Hiroyuki Cc: Kent Overstreet Cc: Kirill A. 
Shutemov Cc: Marcelo Tosatti Cc: Mel Gorman Cc: Steven Whitehouse Cc: Thomas Hellstrom Cc: Trond Myklebust Signed-off-by: Andrew Morton Signed-off-by: Al Viro --- include/linux/list_lru.h | 39 ++++++++++++++++++++++++++++++++++----- mm/list_lru.c | 37 +++++++++---------------------------- 2 files changed, 43 insertions(+), 33 deletions(-) (limited to 'include') diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h index f4d4cb608c02..2fe13e1a809a 100644 --- a/include/linux/list_lru.h +++ b/include/linux/list_lru.h @@ -75,20 +75,32 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item); bool list_lru_del(struct list_lru *lru, struct list_head *item); /** - * list_lru_count: return the number of objects currently held by @lru + * list_lru_count_node: return the number of objects currently held by @lru * @lru: the lru pointer. + * @nid: the node id to count from. * * Always return a non-negative number, 0 for empty lists. There is no * guarantee that the list is not updated while the count is being computed. * Callers that want such a guarantee need to provide an outer lock. */ -unsigned long list_lru_count(struct list_lru *lru); +unsigned long list_lru_count_node(struct list_lru *lru, int nid); +static inline unsigned long list_lru_count(struct list_lru *lru) +{ + long count = 0; + int nid; + + for_each_node_mask(nid, lru->active_nodes) + count += list_lru_count_node(lru, nid); + + return count; +} typedef enum lru_status (*list_lru_walk_cb)(struct list_head *item, spinlock_t *lock, void *cb_arg); /** - * list_lru_walk: walk a list_lru, isolating and disposing freeable items. + * list_lru_walk_node: walk a list_lru, isolating and disposing freeable items. * @lru: the lru pointer. + * @nid: the node id to scan from. * @isolate: callback function that is resposible for deciding what to do with * the item currently being scanned * @cb_arg: opaque type that will be passed to @isolate @@ -106,8 +118,25 @@ typedef enum lru_status * * Return value: the number of objects effectively removed from the LRU. 
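[Editor's note: a minimal sketch of how a node-aware caller is expected to use these per-node entry points, assuming the node id is handed in by the shrinker core as in the superblock/inode conversion later in this series; freeable, inode_lru_isolate and dispose_list are the fs/inode.c names introduced earlier in the series:]

	LIST_HEAD(freeable);
	unsigned long nr = nr_to_scan;	/* the walk consumes the budget through the pointer */
	long freed;

	freed = list_lru_walk_node(&sb->s_inode_lru, nid, inode_lru_isolate,
				   &freeable, &nr);
	dispose_list(&freeable);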
*/ -unsigned long list_lru_walk(struct list_lru *lru, list_lru_walk_cb isolate, - void *cb_arg, unsigned long nr_to_walk); +unsigned long list_lru_walk_node(struct list_lru *lru, int nid, + list_lru_walk_cb isolate, void *cb_arg, + unsigned long *nr_to_walk); + +static inline unsigned long +list_lru_walk(struct list_lru *lru, list_lru_walk_cb isolate, + void *cb_arg, unsigned long nr_to_walk) +{ + long isolated = 0; + int nid; + + for_each_node_mask(nid, lru->active_nodes) { + isolated += list_lru_walk_node(lru, nid, isolate, + cb_arg, &nr_to_walk); + if (nr_to_walk <= 0) + break; + } + return isolated; +} typedef void (*list_lru_dispose_cb)(struct list_head *dispose_list); /** diff --git a/mm/list_lru.c b/mm/list_lru.c index e77c29f4c243..86cb55464f71 100644 --- a/mm/list_lru.c +++ b/mm/list_lru.c @@ -47,25 +47,22 @@ bool list_lru_del(struct list_lru *lru, struct list_head *item) } EXPORT_SYMBOL_GPL(list_lru_del); -unsigned long list_lru_count(struct list_lru *lru) +unsigned long +list_lru_count_node(struct list_lru *lru, int nid) { unsigned long count = 0; - int nid; - - for_each_node_mask(nid, lru->active_nodes) { - struct list_lru_node *nlru = &lru->node[nid]; + struct list_lru_node *nlru = &lru->node[nid]; - spin_lock(&nlru->lock); - WARN_ON_ONCE(nlru->nr_items < 0); - count += nlru->nr_items; - spin_unlock(&nlru->lock); - } + spin_lock(&nlru->lock); + WARN_ON_ONCE(nlru->nr_items < 0); + count += nlru->nr_items; + spin_unlock(&nlru->lock); return count; } -EXPORT_SYMBOL_GPL(list_lru_count); +EXPORT_SYMBOL_GPL(list_lru_count_node); -static unsigned long +unsigned long list_lru_walk_node(struct list_lru *lru, int nid, list_lru_walk_cb isolate, void *cb_arg, unsigned long *nr_to_walk) { @@ -115,22 +112,6 @@ restart: } EXPORT_SYMBOL_GPL(list_lru_walk_node); -unsigned long list_lru_walk(struct list_lru *lru, list_lru_walk_cb isolate, - void *cb_arg, unsigned long nr_to_walk) -{ - unsigned long isolated = 0; - int nid; - - for_each_node_mask(nid, lru->active_nodes) { - isolated += list_lru_walk_node(lru, nid, isolate, - cb_arg, &nr_to_walk); - if (nr_to_walk <= 0) - break; - } - return isolated; -} -EXPORT_SYMBOL_GPL(list_lru_walk); - static unsigned long list_lru_dispose_all_node(struct list_lru *lru, int nid, list_lru_dispose_cb dispose) { -- cgit v1.2.3 From 4e717f5c1083995c334ced639cc77a75e9972567 Mon Sep 17 00:00:00 2001 From: Glauber Costa Date: Wed, 28 Aug 2013 10:18:03 +1000 Subject: list_lru: remove special case function list_lru_dispose_all. The list_lru implementation has one function, list_lru_dispose_all, with only one user (the dentry code). At first, such function appears to make sense because we are really not interested in the result of isolating each dentry separately - all of them are going away anyway. However, it's implementation is buggy in the following way: When we call list_lru_dispose_all in fs/dcache.c, we scan all dentries marking them with DCACHE_SHRINK_LIST. However, this is done without the nlru->lock taken. The imediate result of that is that someone else may add or remove the dentry from the LRU at the same time. When list_lru_del happens in that scenario we will see an element that is not yet marked with DCACHE_SHRINK_LIST (even though it will be in the future) and obviously remove it from an lru where the element no longer is. Since list_lru_dispose_all will in effect count down nlru's nr_items and list_lru_del will do the same, this will lead to an imbalance. 
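[Editor's note: a rough reconstruction of the interleaving described above, as an illustrative timeline rather than verbatim kernel code:]

	1. list_lru_dispose_all() splices nlru->list onto its private dispose list,
	   sets nlru->nr_items = 0 and drops nlru->lock.
	2. shrink_dcache_list() starts marking the spliced dentries with
	   DCACHE_SHRINK_LIST, without nlru->lock held.
	3. A concurrent dentry_lru_del() sees a dentry that is not yet marked, so it
	   falls through to list_lru_del(); the entry is still linked (it sits on the
	   dispose list), so it gets unlinked and nlru->nr_items is decremented again.
	4. The same element has now been accounted for twice, and nr_items can drift
	   negative.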
The solution for this would not be so simple: we can obviously just keep the lru_lock taken, but then we have no guarantees that we will be able to acquire the dentry lock (dentry->d_lock). To properly solve this, we need a communication mechanism between the lru and dentry code, so they can coordinate this with each other. Such mechanism already exists in the form of the list_lru_walk_cb callback. So it is possible to construct a dcache-side prune function that does the right thing only by calling list_lru_walk in a loop until no more dentries are available. With only one user, plus the fact that a sane solution for the problem would involve boucing between dcache and list_lru anyway, I see little justification to keep the special case list_lru_dispose_all in tree. Signed-off-by: Glauber Costa Cc: Michal Hocko Acked-by: Dave Chinner Signed-off-by: Andrew Morton Signed-off-by: Al Viro --- fs/dcache.c | 49 ++++++++++++++++++++++++++++-------------------- include/linux/list_lru.h | 17 ----------------- mm/list_lru.c | 42 ----------------------------------------- 3 files changed, 29 insertions(+), 79 deletions(-) (limited to 'include') diff --git a/fs/dcache.c b/fs/dcache.c index 38a4a03499a2..d74b5bdff7f9 100644 --- a/fs/dcache.c +++ b/fs/dcache.c @@ -956,27 +956,29 @@ long prune_dcache_sb(struct super_block *sb, unsigned long nr_to_scan) return freed; } -/* - * Mark all the dentries as on being the dispose list so we don't think they are - * still on the LRU if we try to kill them from ascending the parent chain in - * try_prune_one_dentry() rather than directly from the dispose list. - */ -static void -shrink_dcache_list( - struct list_head *dispose) +static enum lru_status dentry_lru_isolate_shrink(struct list_head *item, + spinlock_t *lru_lock, void *arg) { - struct dentry *dentry; + struct list_head *freeable = arg; + struct dentry *dentry = container_of(item, struct dentry, d_lru); - rcu_read_lock(); - list_for_each_entry_rcu(dentry, dispose, d_lru) { - spin_lock(&dentry->d_lock); - dentry->d_flags |= DCACHE_SHRINK_LIST; - spin_unlock(&dentry->d_lock); - } - rcu_read_unlock(); - shrink_dentry_list(dispose); + /* + * we are inverting the lru lock/dentry->d_lock here, + * so use a trylock. 
If we fail to get the lock, just skip + * it + */ + if (!spin_trylock(&dentry->d_lock)) + return LRU_SKIP; + + dentry->d_flags |= DCACHE_SHRINK_LIST; + list_move_tail(&dentry->d_lru, freeable); + this_cpu_dec(nr_dentry_unused); + spin_unlock(&dentry->d_lock); + + return LRU_REMOVED; } + /** * shrink_dcache_sb - shrink dcache for a superblock * @sb: superblock @@ -986,10 +988,17 @@ shrink_dcache_list( */ void shrink_dcache_sb(struct super_block *sb) { - long disposed; + long freed; + + do { + LIST_HEAD(dispose); + + freed = list_lru_walk(&sb->s_dentry_lru, + dentry_lru_isolate_shrink, &dispose, UINT_MAX); - disposed = list_lru_dispose_all(&sb->s_dentry_lru, shrink_dcache_list); - this_cpu_sub(nr_dentry_unused, disposed); + this_cpu_sub(nr_dentry_unused, freed); + shrink_dentry_list(&dispose); + } while (freed > 0); } EXPORT_SYMBOL(shrink_dcache_sb); diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h index 2fe13e1a809a..4d02ad3badab 100644 --- a/include/linux/list_lru.h +++ b/include/linux/list_lru.h @@ -137,21 +137,4 @@ list_lru_walk(struct list_lru *lru, list_lru_walk_cb isolate, } return isolated; } - -typedef void (*list_lru_dispose_cb)(struct list_head *dispose_list); -/** - * list_lru_dispose_all: forceably flush all elements in an @lru - * @lru: the lru pointer - * @dispose: callback function to be called for each lru list. - * - * This function will forceably isolate all elements into the dispose list, and - * call the @dispose callback to flush the list. Please note that the callback - * should expect items in any state, clean or dirty, and be able to flush all of - * them. - * - * Return value: how many objects were freed. It should be equal to all objects - * in the list_lru. - */ -unsigned long -list_lru_dispose_all(struct list_lru *lru, list_lru_dispose_cb dispose); #endif /* _LRU_LIST_H */ diff --git a/mm/list_lru.c b/mm/list_lru.c index 86cb55464f71..f91c24188573 100644 --- a/mm/list_lru.c +++ b/mm/list_lru.c @@ -112,48 +112,6 @@ restart: } EXPORT_SYMBOL_GPL(list_lru_walk_node); -static unsigned long list_lru_dispose_all_node(struct list_lru *lru, int nid, - list_lru_dispose_cb dispose) -{ - struct list_lru_node *nlru = &lru->node[nid]; - LIST_HEAD(dispose_list); - unsigned long disposed = 0; - - spin_lock(&nlru->lock); - while (!list_empty(&nlru->list)) { - list_splice_init(&nlru->list, &dispose_list); - disposed += nlru->nr_items; - nlru->nr_items = 0; - node_clear(nid, lru->active_nodes); - spin_unlock(&nlru->lock); - - dispose(&dispose_list); - - spin_lock(&nlru->lock); - } - spin_unlock(&nlru->lock); - return disposed; -} - -unsigned long list_lru_dispose_all(struct list_lru *lru, - list_lru_dispose_cb dispose) -{ - unsigned long disposed; - unsigned long total = 0; - int nid; - - do { - disposed = 0; - for_each_node_mask(nid, lru->active_nodes) { - disposed += list_lru_dispose_all_node(lru, nid, - dispose); - } - total += disposed; - } while (disposed != 0); - - return total; -} - int list_lru_init(struct list_lru *lru) { int i; -- cgit v1.2.3 From 0ce3d74450815500e31f16a0b65f6bab687985c3 Mon Sep 17 00:00:00 2001 From: Dave Chinner Date: Wed, 28 Aug 2013 10:18:03 +1000 Subject: shrinker: add node awareness MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Pass the node of the current zone being reclaimed to shrink_slab(), allowing the shrinker control nodemask to be set appropriately for node aware shrinkers. 
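[Editor's note: a minimal sketch of how a reclaim path seeds the new nodemask before calling shrink_slab(), mirroring the mm/vmscan.c hunks below; zone, sc and lru_pages are assumed to come from the surrounding reclaim context:]

	struct shrink_control shrink = {
		.gfp_mask = GFP_KERNEL,
	};

	/* zone-driven reclaim: only ask shrinkers to scan this zone's node */
	nodes_clear(shrink.nodes_to_scan);
	node_set(zone_to_nid(zone), shrink.nodes_to_scan);
	shrink_slab(&shrink, sc->nr_scanned, lru_pages);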
Signed-off-by: Dave Chinner Signed-off-by: Glauber Costa Acked-by: Mel Gorman Cc: "Theodore Ts'o" Cc: Adrian Hunter Cc: Al Viro Cc: Artem Bityutskiy Cc: Arve Hjønnevåg Cc: Carlos Maiolino Cc: Christoph Hellwig Cc: Chuck Lever Cc: Daniel Vetter Cc: David Rientjes Cc: Gleb Natapov Cc: Greg Thelen Cc: J. Bruce Fields Cc: Jan Kara Cc: Jerome Glisse Cc: John Stultz Cc: KAMEZAWA Hiroyuki Cc: Kent Overstreet Cc: Kirill A. Shutemov Cc: Marcelo Tosatti Cc: Mel Gorman Cc: Steven Whitehouse Cc: Thomas Hellstrom Cc: Trond Myklebust Signed-off-by: Andrew Morton Signed-off-by: Al Viro --- drivers/staging/android/ashmem.c | 3 +++ fs/drop_caches.c | 1 + include/linux/shrinker.h | 3 +++ mm/memory-failure.c | 2 ++ mm/vmscan.c | 11 ++++++++--- 5 files changed, 17 insertions(+), 3 deletions(-) (limited to 'include') diff --git a/drivers/staging/android/ashmem.c b/drivers/staging/android/ashmem.c index 21a3f7250531..65f36d728714 100644 --- a/drivers/staging/android/ashmem.c +++ b/drivers/staging/android/ashmem.c @@ -692,6 +692,9 @@ static long ashmem_ioctl(struct file *file, unsigned int cmd, unsigned long arg) .gfp_mask = GFP_KERNEL, .nr_to_scan = 0, }; + + nodes_setall(sc.nodes_to_scan); + ret = ashmem_shrink(&ashmem_shrinker, &sc); sc.nr_to_scan = ret; ashmem_shrink(&ashmem_shrinker, &sc); diff --git a/fs/drop_caches.c b/fs/drop_caches.c index c00e055b6282..9fd702f5bfb2 100644 --- a/fs/drop_caches.c +++ b/fs/drop_caches.c @@ -44,6 +44,7 @@ static void drop_slab(void) .gfp_mask = GFP_KERNEL, }; + nodes_setall(shrink.nodes_to_scan); do { nr_objects = shrink_slab(&shrink, 1000, 1000); } while (nr_objects > 10); diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h index 884e76222e1b..76f520c4c394 100644 --- a/include/linux/shrinker.h +++ b/include/linux/shrinker.h @@ -16,6 +16,9 @@ struct shrink_control { /* How many slab objects shrinker() should scan and try to reclaim */ unsigned long nr_to_scan; + + /* shrink from these nodes */ + nodemask_t nodes_to_scan; }; #define SHRINK_STOP (~0UL) diff --git a/mm/memory-failure.c b/mm/memory-failure.c index d84c5e5331bb..baa4e0a45dec 100644 --- a/mm/memory-failure.c +++ b/mm/memory-failure.c @@ -248,10 +248,12 @@ void shake_page(struct page *p, int access) */ if (access) { int nr; + int nid = page_to_nid(p); do { struct shrink_control shrink = { .gfp_mask = GFP_KERNEL, }; + node_set(nid, shrink.nodes_to_scan); nr = shrink_slab(&shrink, 1000, 1000); if (page_count(p) == 1) diff --git a/mm/vmscan.c b/mm/vmscan.c index 4d4e859b4b9c..fe0d5c458440 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -2374,12 +2374,16 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist, */ if (global_reclaim(sc)) { unsigned long lru_pages = 0; + + nodes_clear(shrink->nodes_to_scan); for_each_zone_zonelist(zone, z, zonelist, gfp_zone(sc->gfp_mask)) { if (!cpuset_zone_allowed_hardwall(zone, GFP_KERNEL)) continue; lru_pages += zone_reclaimable_pages(zone); + node_set(zone_to_nid(zone), + shrink->nodes_to_scan); } shrink_slab(shrink, sc->nr_scanned, lru_pages); @@ -2836,6 +2840,8 @@ static bool kswapd_shrink_zone(struct zone *zone, return true; shrink_zone(zone, sc); + nodes_clear(shrink.nodes_to_scan); + node_set(zone_to_nid(zone), shrink.nodes_to_scan); reclaim_state->reclaimed_slab = 0; nr_slab = shrink_slab(&shrink, sc->nr_scanned, lru_pages); @@ -3544,10 +3550,9 @@ static int __zone_reclaim(struct zone *zone, gfp_t gfp_mask, unsigned int order) * number of slab pages and shake the slab until it is reduced * by the same nr_pages that we used for reclaiming unmapped * 
pages. - * - * Note that shrink_slab will free memory on all zones and may - * take a long time. */ + nodes_clear(shrink.nodes_to_scan); + node_set(zone_to_nid(zone), shrink.nodes_to_scan); for (;;) { unsigned long lru_pages = zone_reclaimable_pages(zone); -- cgit v1.2.3 From 1d3d4437eae1bb2963faab427f65f90663c64aa1 Mon Sep 17 00:00:00 2001 From: Glauber Costa Date: Wed, 28 Aug 2013 10:18:04 +1000 Subject: vmscan: per-node deferred work MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit The list_lru infrastructure already keeps per-node LRU lists in its node-specific list_lru_node arrays and provide us with a per-node API, and the shrinkers are properly equiped with node information. This means that we can now focus our shrinking effort in a single node, but the work that is deferred from one run to another is kept global at nr_in_batch. Work can be deferred, for instance, during direct reclaim under a GFP_NOFS allocation, where situation, all the filesystem shrinkers will be prevented from running and accumulate in nr_in_batch the amount of work they should have done, but could not. This creates an impedance problem, where upon node pressure, work deferred will accumulate and end up being flushed in other nodes. The problem we describe is particularly harmful in big machines, where many nodes can accumulate at the same time, all adding to the global counter nr_in_batch. As we accumulate more and more, we start to ask for the caches to flush even bigger numbers. The result is that the caches are depleted and do not stabilize. To achieve stable steady state behavior, we need to tackle it differently. In this patch we keep the deferred count per-node, in the new array nr_deferred[] (the name is also a bit more descriptive) and will never accumulate that to other nodes. Signed-off-by: Glauber Costa Cc: Dave Chinner Cc: Mel Gorman Cc: "Theodore Ts'o" Cc: Adrian Hunter Cc: Al Viro Cc: Artem Bityutskiy Cc: Arve Hjønnevåg Cc: Carlos Maiolino Cc: Christoph Hellwig Cc: Chuck Lever Cc: Daniel Vetter Cc: David Rientjes Cc: Gleb Natapov Cc: Greg Thelen Cc: J. Bruce Fields Cc: Jan Kara Cc: Jerome Glisse Cc: John Stultz Cc: KAMEZAWA Hiroyuki Cc: Kent Overstreet Cc: Kirill A. Shutemov Cc: Marcelo Tosatti Cc: Mel Gorman Cc: Steven Whitehouse Cc: Thomas Hellstrom Cc: Trond Myklebust Signed-off-by: Andrew Morton Signed-off-by: Al Viro --- include/linux/shrinker.h | 14 ++- mm/vmscan.c | 241 +++++++++++++++++++++++++++-------------------- 2 files changed, 152 insertions(+), 103 deletions(-) (limited to 'include') diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h index 76f520c4c394..8f80f243fed9 100644 --- a/include/linux/shrinker.h +++ b/include/linux/shrinker.h @@ -19,6 +19,8 @@ struct shrink_control { /* shrink from these nodes */ nodemask_t nodes_to_scan; + /* current node being shrunk (for NUMA aware shrinkers) */ + int nid; }; #define SHRINK_STOP (~0UL) @@ -44,6 +46,8 @@ struct shrink_control { * due to potential deadlocks. If SHRINK_STOP is returned, then no further * attempts to call the @scan_objects will be made from the current reclaim * context. 
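[Editor's note: for orientation, a NUMA-aware user is expected to fill in the new fields and check the now-fallible registration roughly as the superblock shrinker does later in this series; my_count and my_scan are placeholder callbacks, not real kernel symbols:]

	static struct shrinker my_shrinker = {
		.count_objects	= my_count,	/* hypothetical callbacks */
		.scan_objects	= my_scan,
		.seeks		= DEFAULT_SEEKS,
		.batch		= 1024,
		.flags		= SHRINKER_NUMA_AWARE,
	};

	if (register_shrinker(&my_shrinker))	/* may now fail with -ENOMEM */
		return -ENOMEM;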
+ * + * @flags determine the shrinker abilities, like numa awareness */ struct shrinker { int (*shrink)(struct shrinker *, struct shrink_control *sc); @@ -54,12 +58,18 @@ struct shrinker { int seeks; /* seeks to recreate an obj */ long batch; /* reclaim batch size, 0 = default */ + unsigned long flags; /* These are for internal use */ struct list_head list; - atomic_long_t nr_in_batch; /* objs pending delete */ + /* objs pending delete, per node */ + atomic_long_t *nr_deferred; }; #define DEFAULT_SEEKS 2 /* A good number if you don't know better. */ -extern void register_shrinker(struct shrinker *); + +/* Flags */ +#define SHRINKER_NUMA_AWARE (1 << 0) + +extern int register_shrinker(struct shrinker *); extern void unregister_shrinker(struct shrinker *); #endif diff --git a/mm/vmscan.c b/mm/vmscan.c index fe0d5c458440..799ebceeb4f7 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -155,14 +155,31 @@ static unsigned long get_lru_size(struct lruvec *lruvec, enum lru_list lru) } /* - * Add a shrinker callback to be called from the vm + * Add a shrinker callback to be called from the vm. */ -void register_shrinker(struct shrinker *shrinker) +int register_shrinker(struct shrinker *shrinker) { - atomic_long_set(&shrinker->nr_in_batch, 0); + size_t size = sizeof(*shrinker->nr_deferred); + + /* + * If we only have one possible node in the system anyway, save + * ourselves the trouble and disable NUMA aware behavior. This way we + * will save memory and some small loop time later. + */ + if (nr_node_ids == 1) + shrinker->flags &= ~SHRINKER_NUMA_AWARE; + + if (shrinker->flags & SHRINKER_NUMA_AWARE) + size *= nr_node_ids; + + shrinker->nr_deferred = kzalloc(size, GFP_KERNEL); + if (!shrinker->nr_deferred) + return -ENOMEM; + down_write(&shrinker_rwsem); list_add_tail(&shrinker->list, &shrinker_list); up_write(&shrinker_rwsem); + return 0; } EXPORT_SYMBOL(register_shrinker); @@ -186,6 +203,118 @@ static inline int do_shrinker_shrink(struct shrinker *shrinker, } #define SHRINK_BATCH 128 + +static unsigned long +shrink_slab_node(struct shrink_control *shrinkctl, struct shrinker *shrinker, + unsigned long nr_pages_scanned, unsigned long lru_pages) +{ + unsigned long freed = 0; + unsigned long long delta; + long total_scan; + long max_pass; + long nr; + long new_nr; + int nid = shrinkctl->nid; + long batch_size = shrinker->batch ? shrinker->batch + : SHRINK_BATCH; + + if (shrinker->count_objects) + max_pass = shrinker->count_objects(shrinker, shrinkctl); + else + max_pass = do_shrinker_shrink(shrinker, shrinkctl, 0); + if (max_pass == 0) + return 0; + + /* + * copy the current shrinker scan count into a local variable + * and zero it so that other concurrent shrinker invocations + * don't also do this scanning work. + */ + nr = atomic_long_xchg(&shrinker->nr_deferred[nid], 0); + + total_scan = nr; + delta = (4 * nr_pages_scanned) / shrinker->seeks; + delta *= max_pass; + do_div(delta, lru_pages + 1); + total_scan += delta; + if (total_scan < 0) { + printk(KERN_ERR + "shrink_slab: %pF negative objects to delete nr=%ld\n", + shrinker->shrink, total_scan); + total_scan = max_pass; + } + + /* + * We need to avoid excessive windup on filesystem shrinkers + * due to large numbers of GFP_NOFS allocations causing the + * shrinkers to return -1 all the time. This results in a large + * nr being built up so when a shrink that can do some work + * comes along it empties the entire cache due to nr >>> + * max_pass. This is bad for sustaining a working set in + * memory. 
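[Editor's note: to put the proportioning a few lines above in concrete terms, with invented numbers: for nr_pages_scanned = 1000, seeks = 2, max_pass = 50000 freeable objects and lru_pages = 100000,

	delta = (4 * 1000 / 2) * 50000 / (100000 + 1) ≈ 1000

so page reclaim scanning 1000 pages asks this shrinker to scan roughly 2% of its cache, with any work deferred from GFP_NOFS contexts added on top via nr_deferred.]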
+ * + * Hence only allow the shrinker to scan the entire cache when + * a large delta change is calculated directly. + */ + if (delta < max_pass / 4) + total_scan = min(total_scan, max_pass / 2); + + /* + * Avoid risking looping forever due to too large nr value: + * never try to free more than twice the estimate number of + * freeable entries. + */ + if (total_scan > max_pass * 2) + total_scan = max_pass * 2; + + trace_mm_shrink_slab_start(shrinker, shrinkctl, nr, + nr_pages_scanned, lru_pages, + max_pass, delta, total_scan); + + while (total_scan >= batch_size) { + + if (shrinker->scan_objects) { + unsigned long ret; + shrinkctl->nr_to_scan = batch_size; + ret = shrinker->scan_objects(shrinker, shrinkctl); + + if (ret == SHRINK_STOP) + break; + freed += ret; + } else { + int nr_before; + long ret; + + nr_before = do_shrinker_shrink(shrinker, shrinkctl, 0); + ret = do_shrinker_shrink(shrinker, shrinkctl, + batch_size); + if (ret == -1) + break; + if (ret < nr_before) + freed += nr_before - ret; + } + + count_vm_events(SLABS_SCANNED, batch_size); + total_scan -= batch_size; + + cond_resched(); + } + + /* + * move the unused scan count back into the shrinker in a + * manner that handles concurrent updates. If we exhausted the + * scan, there is no need to do an update. + */ + if (total_scan > 0) + new_nr = atomic_long_add_return(total_scan, + &shrinker->nr_deferred[nid]); + else + new_nr = atomic_long_read(&shrinker->nr_deferred[nid]); + + trace_mm_shrink_slab_end(shrinker, freed, nr, new_nr); + return freed; +} + /* * Call the shrink functions to age shrinkable caches * @@ -227,108 +356,18 @@ unsigned long shrink_slab(struct shrink_control *shrinkctl, } list_for_each_entry(shrinker, &shrinker_list, list) { - unsigned long long delta; - long total_scan; - long max_pass; - long nr; - long new_nr; - long batch_size = shrinker->batch ? shrinker->batch - : SHRINK_BATCH; - - if (shrinker->count_objects) - max_pass = shrinker->count_objects(shrinker, shrinkctl); - else - max_pass = do_shrinker_shrink(shrinker, shrinkctl, 0); - if (max_pass == 0) - continue; - - /* - * copy the current shrinker scan count into a local variable - * and zero it so that other concurrent shrinker invocations - * don't also do this scanning work. - */ - nr = atomic_long_xchg(&shrinker->nr_in_batch, 0); - - total_scan = nr; - delta = (4 * nr_pages_scanned) / shrinker->seeks; - delta *= max_pass; - do_div(delta, lru_pages + 1); - total_scan += delta; - if (total_scan < 0) { - printk(KERN_ERR - "shrink_slab: %pF negative objects to delete nr=%ld\n", - shrinker->shrink, total_scan); - total_scan = max_pass; - } - - /* - * We need to avoid excessive windup on filesystem shrinkers - * due to large numbers of GFP_NOFS allocations causing the - * shrinkers to return -1 all the time. This results in a large - * nr being built up so when a shrink that can do some work - * comes along it empties the entire cache due to nr >>> - * max_pass. This is bad for sustaining a working set in - * memory. - * - * Hence only allow the shrinker to scan the entire cache when - * a large delta change is calculated directly. - */ - if (delta < max_pass / 4) - total_scan = min(total_scan, max_pass / 2); - - /* - * Avoid risking looping forever due to too large nr value: - * never try to free more than twice the estimate number of - * freeable entries. 
- */ - if (total_scan > max_pass * 2) - total_scan = max_pass * 2; - - trace_mm_shrink_slab_start(shrinker, shrinkctl, nr, - nr_pages_scanned, lru_pages, - max_pass, delta, total_scan); - - while (total_scan >= batch_size) { - - if (shrinker->scan_objects) { - unsigned long ret; - shrinkctl->nr_to_scan = batch_size; - ret = shrinker->scan_objects(shrinker, shrinkctl); + for_each_node_mask(shrinkctl->nid, shrinkctl->nodes_to_scan) { + if (!node_online(shrinkctl->nid)) + continue; - if (ret == SHRINK_STOP) - break; - freed += ret; - } else { - int nr_before; - long ret; - - nr_before = do_shrinker_shrink(shrinker, shrinkctl, 0); - ret = do_shrinker_shrink(shrinker, shrinkctl, - batch_size); - if (ret == -1) - break; - if (ret < nr_before) - freed += nr_before - ret; - } + if (!(shrinker->flags & SHRINKER_NUMA_AWARE) && + (shrinkctl->nid != 0)) + break; - count_vm_events(SLABS_SCANNED, batch_size); - total_scan -= batch_size; + freed += shrink_slab_node(shrinkctl, shrinker, + nr_pages_scanned, lru_pages); - cond_resched(); } - - /* - * move the unused scan count back into the shrinker in a - * manner that handles concurrent updates. If we exhausted the - * scan, there is no need to do an update. - */ - if (total_scan > 0) - new_nr = atomic_long_add_return(total_scan, - &shrinker->nr_in_batch); - else - new_nr = atomic_long_read(&shrinker->nr_in_batch); - - trace_mm_shrink_slab_end(shrinker, freed, nr, new_nr); } up_read(&shrinker_rwsem); out: -- cgit v1.2.3 From 9b17c62382dd2e7507984b9890bf44e070cdd8bb Mon Sep 17 00:00:00 2001 From: Dave Chinner Date: Wed, 28 Aug 2013 10:18:05 +1000 Subject: fs: convert inode and dentry shrinking to be node aware MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Now that the shrinker is passing a node in the scan control structure, we can pass this to the the generic LRU list code to isolate reclaim to the lists on matching nodes. Signed-off-by: Dave Chinner Signed-off-by: Glauber Costa Acked-by: Mel Gorman Cc: "Theodore Ts'o" Cc: Adrian Hunter Cc: Al Viro Cc: Artem Bityutskiy Cc: Arve Hjønnevåg Cc: Carlos Maiolino Cc: Christoph Hellwig Cc: Chuck Lever Cc: Daniel Vetter Cc: David Rientjes Cc: Gleb Natapov Cc: Greg Thelen Cc: J. Bruce Fields Cc: Jan Kara Cc: Jerome Glisse Cc: John Stultz Cc: KAMEZAWA Hiroyuki Cc: Kent Overstreet Cc: Kirill A. Shutemov Cc: Marcelo Tosatti Cc: Mel Gorman Cc: Steven Whitehouse Cc: Thomas Hellstrom Cc: Trond Myklebust Signed-off-by: Andrew Morton Signed-off-by: Al Viro --- fs/dcache.c | 8 +++++--- fs/inode.c | 7 ++++--- fs/internal.h | 6 ++++-- fs/super.c | 23 ++++++++++++++--------- fs/xfs/xfs_super.c | 6 ++++-- include/linux/fs.h | 4 ++-- 6 files changed, 33 insertions(+), 21 deletions(-) (limited to 'include') diff --git a/fs/dcache.c b/fs/dcache.c index d74b5bdff7f9..c932ed32c77b 100644 --- a/fs/dcache.c +++ b/fs/dcache.c @@ -937,6 +937,7 @@ dentry_lru_isolate(struct list_head *item, spinlock_t *lru_lock, void *arg) * prune_dcache_sb - shrink the dcache * @sb: superblock * @nr_to_scan : number of entries to try to free + * @nid: which node to scan for freeable entities * * Attempt to shrink the superblock dcache LRU by @nr_to_scan entries. This is * done when we need more memory an called from the superblock shrinker @@ -945,13 +946,14 @@ dentry_lru_isolate(struct list_head *item, spinlock_t *lru_lock, void *arg) * This function may fail to free any resources if all the dentries are in * use. 
*/ -long prune_dcache_sb(struct super_block *sb, unsigned long nr_to_scan) +long prune_dcache_sb(struct super_block *sb, unsigned long nr_to_scan, + int nid) { LIST_HEAD(dispose); long freed; - freed = list_lru_walk(&sb->s_dentry_lru, dentry_lru_isolate, - &dispose, nr_to_scan); + freed = list_lru_walk_node(&sb->s_dentry_lru, nid, dentry_lru_isolate, + &dispose, &nr_to_scan); shrink_dentry_list(&dispose); return freed; } diff --git a/fs/inode.c b/fs/inode.c index 5b1ec47c5d39..b33ba8e021cc 100644 --- a/fs/inode.c +++ b/fs/inode.c @@ -748,13 +748,14 @@ inode_lru_isolate(struct list_head *item, spinlock_t *lru_lock, void *arg) * to trim from the LRU. Inodes to be freed are moved to a temporary list and * then are freed outside inode_lock by dispose_list(). */ -long prune_icache_sb(struct super_block *sb, unsigned long nr_to_scan) +long prune_icache_sb(struct super_block *sb, unsigned long nr_to_scan, + int nid) { LIST_HEAD(freeable); long freed; - freed = list_lru_walk(&sb->s_inode_lru, inode_lru_isolate, - &freeable, nr_to_scan); + freed = list_lru_walk_node(&sb->s_inode_lru, nid, inode_lru_isolate, + &freeable, &nr_to_scan); dispose_list(&freeable); return freed; } diff --git a/fs/internal.h b/fs/internal.h index cb83a3417a68..513e0d859a6c 100644 --- a/fs/internal.h +++ b/fs/internal.h @@ -114,7 +114,8 @@ extern int open_check_o_direct(struct file *f); * inode.c */ extern spinlock_t inode_sb_list_lock; -extern long prune_icache_sb(struct super_block *sb, unsigned long nr_to_scan); +extern long prune_icache_sb(struct super_block *sb, unsigned long nr_to_scan, + int nid); extern void inode_add_lru(struct inode *inode); /* @@ -131,7 +132,8 @@ extern int invalidate_inodes(struct super_block *, bool); */ extern struct dentry *__d_alloc(struct super_block *, const struct qstr *); extern int d_set_mounted(struct dentry *dentry); -extern long prune_dcache_sb(struct super_block *sb, unsigned long nr_to_scan); +extern long prune_dcache_sb(struct super_block *sb, unsigned long nr_to_scan, + int nid); /* * read_write.c diff --git a/fs/super.c b/fs/super.c index cd3c2cd9144d..181d42e2abff 100644 --- a/fs/super.c +++ b/fs/super.c @@ -76,10 +76,10 @@ static unsigned long super_cache_scan(struct shrinker *shrink, return SHRINK_STOP; if (sb->s_op->nr_cached_objects) - fs_objects = sb->s_op->nr_cached_objects(sb); + fs_objects = sb->s_op->nr_cached_objects(sb, sc->nid); - inodes = list_lru_count(&sb->s_inode_lru); - dentries = list_lru_count(&sb->s_dentry_lru); + inodes = list_lru_count_node(&sb->s_inode_lru, sc->nid); + dentries = list_lru_count_node(&sb->s_dentry_lru, sc->nid); total_objects = dentries + inodes + fs_objects + 1; /* proportion the scan between the caches */ @@ -90,13 +90,14 @@ static unsigned long super_cache_scan(struct shrinker *shrink, * prune the dcache first as the icache is pinned by it, then * prune the icache, followed by the filesystem specific caches */ - freed = prune_dcache_sb(sb, dentries); - freed += prune_icache_sb(sb, inodes); + freed = prune_dcache_sb(sb, dentries, sc->nid); + freed += prune_icache_sb(sb, inodes, sc->nid); if (fs_objects) { fs_objects = mult_frac(sc->nr_to_scan, fs_objects, total_objects); - freed += sb->s_op->free_cached_objects(sb, fs_objects); + freed += sb->s_op->free_cached_objects(sb, fs_objects, + sc->nid); } drop_super(sb); @@ -115,10 +116,13 @@ static unsigned long super_cache_count(struct shrinker *shrink, return 0; if (sb->s_op && sb->s_op->nr_cached_objects) - total_objects = sb->s_op->nr_cached_objects(sb); + total_objects = 
sb->s_op->nr_cached_objects(sb, + sc->nid); - total_objects += list_lru_count(&sb->s_dentry_lru); - total_objects += list_lru_count(&sb->s_inode_lru); + total_objects += list_lru_count_node(&sb->s_dentry_lru, + sc->nid); + total_objects += list_lru_count_node(&sb->s_inode_lru, + sc->nid); total_objects = vfs_pressure_ratio(total_objects); drop_super(sb); @@ -228,6 +232,7 @@ static struct super_block *alloc_super(struct file_system_type *type, int flags) s->s_shrink.scan_objects = super_cache_scan; s->s_shrink.count_objects = super_cache_count; s->s_shrink.batch = 1024; + s->s_shrink.flags = SHRINKER_NUMA_AWARE; } out: return s; diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c index 71d7aaebb912..15188cc99449 100644 --- a/fs/xfs/xfs_super.c +++ b/fs/xfs/xfs_super.c @@ -1537,7 +1537,8 @@ xfs_fs_mount( static long xfs_fs_nr_cached_objects( - struct super_block *sb) + struct super_block *sb, + int nid) { return xfs_reclaim_inodes_count(XFS_M(sb)); } @@ -1545,7 +1546,8 @@ xfs_fs_nr_cached_objects( static long xfs_fs_free_cached_objects( struct super_block *sb, - long nr_to_scan) + long nr_to_scan, + int nid) { return xfs_reclaim_inodes_nr(XFS_M(sb), nr_to_scan); } diff --git a/include/linux/fs.h b/include/linux/fs.h index 36e45df87f6e..a4acd3c61190 100644 --- a/include/linux/fs.h +++ b/include/linux/fs.h @@ -1624,8 +1624,8 @@ struct super_operations { ssize_t (*quota_write)(struct super_block *, int, const char *, size_t, loff_t); #endif int (*bdev_try_to_free_page)(struct super_block*, struct page*, gfp_t); - long (*nr_cached_objects)(struct super_block *); - long (*free_cached_objects)(struct super_block *, long); + long (*nr_cached_objects)(struct super_block *, int); + long (*free_cached_objects)(struct super_block *, long, int); }; /* -- cgit v1.2.3 From a0b02131c5fcd8545b867db72224b3659e813f10 Mon Sep 17 00:00:00 2001 From: Dave Chinner Date: Wed, 28 Aug 2013 10:18:16 +1000 Subject: shrinker: Kill old ->shrink API. MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit There are no more users of this API, so kill it dead, dead, dead and quietly bury the corpse in a shallow, unmarked grave in a dark forest deep in the hills... [glommer@openvz.org: added flowers to the grave] Signed-off-by: Dave Chinner Signed-off-by: Glauber Costa Reviewed-by: Greg Thelen Acked-by: Mel Gorman Cc: "Theodore Ts'o" Cc: Adrian Hunter Cc: Al Viro Cc: Artem Bityutskiy Cc: Arve Hjønnevåg Cc: Carlos Maiolino Cc: Christoph Hellwig Cc: Chuck Lever Cc: Daniel Vetter Cc: David Rientjes Cc: Gleb Natapov Cc: Greg Thelen Cc: J. Bruce Fields Cc: Jan Kara Cc: Jerome Glisse Cc: John Stultz Cc: KAMEZAWA Hiroyuki Cc: Kent Overstreet Cc: Kirill A. Shutemov Cc: Marcelo Tosatti Cc: Mel Gorman Cc: Steven Whitehouse Cc: Thomas Hellstrom Cc: Trond Myklebust Signed-off-by: Andrew Morton Signed-off-by: Al Viro --- include/linux/shrinker.h | 15 +++++---------- include/trace/events/vmscan.h | 4 ++-- mm/vmscan.c | 41 ++++++++--------------------------------- 3 files changed, 15 insertions(+), 45 deletions(-) (limited to 'include') diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h index 8f80f243fed9..68c097077ef0 100644 --- a/include/linux/shrinker.h +++ b/include/linux/shrinker.h @@ -7,14 +7,15 @@ * * The 'gfpmask' refers to the allocation we are currently trying to * fulfil. - * - * Note that 'shrink' will be passed nr_to_scan == 0 when the VM is - * querying the cache size, so a fastpath for that case is appropriate. 
*/ struct shrink_control { gfp_t gfp_mask; - /* How many slab objects shrinker() should scan and try to reclaim */ + /* + * How many objects scan_objects should scan and try to reclaim. + * This is reset before every call, so it is safe for callees + * to modify. + */ unsigned long nr_to_scan; /* shrink from these nodes */ @@ -27,11 +28,6 @@ struct shrink_control { /* * A callback you can register to apply pressure to ageable caches. * - * @shrink() should look through the least-recently-used 'nr_to_scan' entries - * and attempt to free them up. It should return the number of objects which - * remain in the cache. If it returns -1, it means it cannot do any scanning at - * this time (eg. there is a risk of deadlock). - * * @count_objects should return the number of freeable items in the cache. If * there are no objects to free or the number of freeable items cannot be * determined, it should return 0. No deadlock checks should be done during the @@ -50,7 +46,6 @@ struct shrink_control { * @flags determine the shrinker abilities, like numa awareness */ struct shrinker { - int (*shrink)(struct shrinker *, struct shrink_control *sc); unsigned long (*count_objects)(struct shrinker *, struct shrink_control *sc); unsigned long (*scan_objects)(struct shrinker *, diff --git a/include/trace/events/vmscan.h b/include/trace/events/vmscan.h index 63cfcccaebb3..132a985aba8b 100644 --- a/include/trace/events/vmscan.h +++ b/include/trace/events/vmscan.h @@ -202,7 +202,7 @@ TRACE_EVENT(mm_shrink_slab_start, TP_fast_assign( __entry->shr = shr; - __entry->shrink = shr->shrink; + __entry->shrink = shr->scan_objects; __entry->nr_objects_to_shrink = nr_objects_to_shrink; __entry->gfp_flags = sc->gfp_mask; __entry->pgs_scanned = pgs_scanned; @@ -241,7 +241,7 @@ TRACE_EVENT(mm_shrink_slab_end, TP_fast_assign( __entry->shr = shr; - __entry->shrink = shr->shrink; + __entry->shrink = shr->scan_objects; __entry->unused_scan = unused_scan_cnt; __entry->new_scan = new_scan_cnt; __entry->retval = shrinker_retval; diff --git a/mm/vmscan.c b/mm/vmscan.c index 799ebceeb4f7..e36454220614 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -194,14 +194,6 @@ void unregister_shrinker(struct shrinker *shrinker) } EXPORT_SYMBOL(unregister_shrinker); -static inline int do_shrinker_shrink(struct shrinker *shrinker, - struct shrink_control *sc, - unsigned long nr_to_scan) -{ - sc->nr_to_scan = nr_to_scan; - return (*shrinker->shrink)(shrinker, sc); -} - #define SHRINK_BATCH 128 static unsigned long @@ -218,10 +210,7 @@ shrink_slab_node(struct shrink_control *shrinkctl, struct shrinker *shrinker, long batch_size = shrinker->batch ? 
shrinker->batch : SHRINK_BATCH; - if (shrinker->count_objects) - max_pass = shrinker->count_objects(shrinker, shrinkctl); - else - max_pass = do_shrinker_shrink(shrinker, shrinkctl, 0); + max_pass = shrinker->count_objects(shrinker, shrinkctl); if (max_pass == 0) return 0; @@ -240,7 +229,7 @@ shrink_slab_node(struct shrink_control *shrinkctl, struct shrinker *shrinker, if (total_scan < 0) { printk(KERN_ERR "shrink_slab: %pF negative objects to delete nr=%ld\n", - shrinker->shrink, total_scan); + shrinker->scan_objects, total_scan); total_scan = max_pass; } @@ -272,27 +261,13 @@ shrink_slab_node(struct shrink_control *shrinkctl, struct shrinker *shrinker, max_pass, delta, total_scan); while (total_scan >= batch_size) { + unsigned long ret; - if (shrinker->scan_objects) { - unsigned long ret; - shrinkctl->nr_to_scan = batch_size; - ret = shrinker->scan_objects(shrinker, shrinkctl); - - if (ret == SHRINK_STOP) - break; - freed += ret; - } else { - int nr_before; - long ret; - - nr_before = do_shrinker_shrink(shrinker, shrinkctl, 0); - ret = do_shrinker_shrink(shrinker, shrinkctl, - batch_size); - if (ret == -1) - break; - if (ret < nr_before) - freed += nr_before - ret; - } + shrinkctl->nr_to_scan = batch_size; + ret = shrinker->scan_objects(shrinker, shrinkctl); + if (ret == SHRINK_STOP) + break; + freed += ret; count_vm_events(SLABS_SCANNED, batch_size); total_scan -= batch_size; -- cgit v1.2.3 From 5ca302c8e502ca53b7d75f12127ec0289904003a Mon Sep 17 00:00:00 2001 From: Glauber Costa Date: Wed, 28 Aug 2013 10:18:18 +1000 Subject: list_lru: dynamically adjust node arrays MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit We currently use a compile-time constant to size the node array for the list_lru structure. Due to this, we don't need to allocate any memory at initialization time. But as a consequence, the structures that contain embedded list_lru lists can become way too big (the superblock for instance contains two of them). This patch aims at ameliorating this situation by dynamically allocating the node arrays with the firmware provided nr_node_ids. Signed-off-by: Glauber Costa Cc: Dave Chinner Cc: Mel Gorman Cc: "Theodore Ts'o" Cc: Adrian Hunter Cc: Al Viro Cc: Artem Bityutskiy Cc: Arve Hjønnevåg Cc: Carlos Maiolino Cc: Christoph Hellwig Cc: Chuck Lever Cc: Daniel Vetter Cc: David Rientjes Cc: Gleb Natapov Cc: Greg Thelen Cc: J. Bruce Fields Cc: Jan Kara Cc: Jerome Glisse Cc: John Stultz Cc: KAMEZAWA Hiroyuki Cc: Kent Overstreet Cc: Kirill A. 
Shutemov Cc: Marcelo Tosatti Cc: Mel Gorman Cc: Steven Whitehouse Cc: Thomas Hellstrom Cc: Trond Myklebust Signed-off-by: Andrew Morton Signed-off-by: Al Viro --- fs/super.c | 11 +++++++++-- fs/xfs/xfs_buf.c | 6 +++++- fs/xfs/xfs_qm.c | 10 ++++++++-- include/linux/list_lru.h | 13 ++----------- mm/list_lru.c | 14 +++++++++++++- 5 files changed, 37 insertions(+), 17 deletions(-) (limited to 'include') diff --git a/fs/super.c b/fs/super.c index 181d42e2abff..269d96857caa 100644 --- a/fs/super.c +++ b/fs/super.c @@ -195,8 +195,12 @@ static struct super_block *alloc_super(struct file_system_type *type, int flags) INIT_HLIST_NODE(&s->s_instances); INIT_HLIST_BL_HEAD(&s->s_anon); INIT_LIST_HEAD(&s->s_inodes); - list_lru_init(&s->s_dentry_lru); - list_lru_init(&s->s_inode_lru); + + if (list_lru_init(&s->s_dentry_lru)) + goto err_out; + if (list_lru_init(&s->s_inode_lru)) + goto err_out_dentry_lru; + INIT_LIST_HEAD(&s->s_mounts); init_rwsem(&s->s_umount); lockdep_set_class(&s->s_umount, &type->s_umount_key); @@ -236,6 +240,9 @@ static struct super_block *alloc_super(struct file_system_type *type, int flags) } out: return s; + +err_out_dentry_lru: + list_lru_destroy(&s->s_dentry_lru); err_out: security_sb_free(s); #ifdef CONFIG_SMP diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c index d46f6a3dc1de..49fdb7bed481 100644 --- a/fs/xfs/xfs_buf.c +++ b/fs/xfs/xfs_buf.c @@ -1592,6 +1592,7 @@ xfs_free_buftarg( struct xfs_mount *mp, struct xfs_buftarg *btp) { + list_lru_destroy(&btp->bt_lru); unregister_shrinker(&btp->bt_shrinker); if (mp->m_flags & XFS_MOUNT_BARRIER) @@ -1666,9 +1667,12 @@ xfs_alloc_buftarg( if (!btp->bt_bdi) goto error; - list_lru_init(&btp->bt_lru); if (xfs_setsize_buftarg_early(btp, bdev)) goto error; + + if (list_lru_init(&btp->bt_lru)) + goto error; + btp->bt_shrinker.count_objects = xfs_buftarg_shrink_count; btp->bt_shrinker.scan_objects = xfs_buftarg_shrink_scan; btp->bt_shrinker.seeks = DEFAULT_SEEKS; diff --git a/fs/xfs/xfs_qm.c b/fs/xfs/xfs_qm.c index a29169b062e3..7f4138629a80 100644 --- a/fs/xfs/xfs_qm.c +++ b/fs/xfs/xfs_qm.c @@ -831,11 +831,18 @@ xfs_qm_init_quotainfo( qinf = mp->m_quotainfo = kmem_zalloc(sizeof(xfs_quotainfo_t), KM_SLEEP); + if ((error = list_lru_init(&qinf->qi_lru))) { + kmem_free(qinf); + mp->m_quotainfo = NULL; + return error; + } + /* * See if quotainodes are setup, and if not, allocate them, * and change the superblock accordingly. */ if ((error = xfs_qm_init_quotainos(mp))) { + list_lru_destroy(&qinf->qi_lru); kmem_free(qinf); mp->m_quotainfo = NULL; return error; @@ -846,8 +853,6 @@ xfs_qm_init_quotainfo( INIT_RADIX_TREE(&qinf->qi_pquota_tree, GFP_NOFS); mutex_init(&qinf->qi_tree_lock); - list_lru_init(&qinf->qi_lru); - /* mutex used to serialize quotaoffs */ mutex_init(&qinf->qi_quotaofflock); @@ -935,6 +940,7 @@ xfs_qm_destroy_quotainfo( qi = mp->m_quotainfo; ASSERT(qi != NULL); + list_lru_destroy(&qi->qi_lru); unregister_shrinker(&qi->qi_shrinker); if (qi->qi_uquotaip) { diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h index 4d02ad3badab..3ce541753c88 100644 --- a/include/linux/list_lru.h +++ b/include/linux/list_lru.h @@ -27,20 +27,11 @@ struct list_lru_node { } ____cacheline_aligned_in_smp; struct list_lru { - /* - * Because we use a fixed-size array, this struct can be very big if - * MAX_NUMNODES is big. If this becomes a problem this is fixable by - * turning this into a pointer and dynamically allocating this to - * nr_node_ids. 
This quantity is firwmare-provided, and still would - * provide room for all nodes at the cost of a pointer lookup and an - * extra allocation. Because that allocation will most likely come from - * a different slab cache than the main structure holding this - * structure, we may very well fail. - */ - struct list_lru_node node[MAX_NUMNODES]; + struct list_lru_node *node; nodemask_t active_nodes; }; +void list_lru_destroy(struct list_lru *lru); int list_lru_init(struct list_lru *lru); /** diff --git a/mm/list_lru.c b/mm/list_lru.c index f91c24188573..72467914b856 100644 --- a/mm/list_lru.c +++ b/mm/list_lru.c @@ -8,6 +8,7 @@ #include #include #include +#include bool list_lru_add(struct list_lru *lru, struct list_head *item) { @@ -115,9 +116,14 @@ EXPORT_SYMBOL_GPL(list_lru_walk_node); int list_lru_init(struct list_lru *lru) { int i; + size_t size = sizeof(*lru->node) * nr_node_ids; + + lru->node = kzalloc(size, GFP_KERNEL); + if (!lru->node) + return -ENOMEM; nodes_clear(lru->active_nodes); - for (i = 0; i < MAX_NUMNODES; i++) { + for (i = 0; i < nr_node_ids; i++) { spin_lock_init(&lru->node[i].lock); INIT_LIST_HEAD(&lru->node[i].list); lru->node[i].nr_items = 0; @@ -125,3 +131,9 @@ int list_lru_init(struct list_lru *lru) return 0; } EXPORT_SYMBOL_GPL(list_lru_init); + +void list_lru_destroy(struct list_lru *lru) +{ + kfree(lru->node); +} +EXPORT_SYMBOL_GPL(list_lru_destroy); -- cgit v1.2.3
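
Taken together, the patches above define a single usage pattern for cache owners: report per-node object counts through ->count_objects, free through ->scan_objects, and let list_lru keep the per-node lists. The sketch below is not part of the series; it is a hypothetical cache (all my_* identifiers are invented for illustration) wired up against the interfaces introduced here, roughly the way fs/super.c looks after these changes.

/*
 * Hypothetical cache built on the interfaces added above.  All my_*
 * identifiers are invented for illustration; only the list_lru and
 * shrinker calls are the ones introduced by this series.
 */
#include <linux/list.h>
#include <linux/list_lru.h>
#include <linux/shrinker.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct my_object {
	struct list_head lru;		/* linkage onto the per-node LRU */
	/* ... payload ... */
};

struct my_cache {
	struct list_lru lru;		/* node array comes from list_lru_init() */
	struct shrinker shrinker;
};

/* Called with the per-node lru lock held; move the victim to the dispose list. */
static enum lru_status my_cache_isolate(struct list_head *item,
					spinlock_t *lru_lock, void *cb_arg)
{
	struct list_head *dispose = cb_arg;

	list_move(item, dispose);
	return LRU_REMOVED;
}

static unsigned long my_cache_count(struct shrinker *shrink,
				    struct shrink_control *sc)
{
	struct my_cache *cache = container_of(shrink, struct my_cache, shrinker);

	/* Report only the node being asked about. */
	return list_lru_count_node(&cache->lru, sc->nid);
}

static unsigned long my_cache_scan(struct shrinker *shrink,
				   struct shrink_control *sc)
{
	struct my_cache *cache = container_of(shrink, struct my_cache, shrinker);
	unsigned long nr = sc->nr_to_scan;
	unsigned long freed;
	struct my_object *obj, *next;
	LIST_HEAD(dispose);

	if (!(sc->gfp_mask & __GFP_FS))
		return SHRINK_STOP;	/* cannot make progress in this context */

	/* Isolate up to nr objects from this node's LRU, then free them. */
	freed = list_lru_walk_node(&cache->lru, sc->nid, my_cache_isolate,
				   &dispose, &nr);

	list_for_each_entry_safe(obj, next, &dispose, lru)
		kfree(obj);

	return freed;
}

static int my_cache_create(struct my_cache *cache)
{
	int err;

	err = list_lru_init(&cache->lru);	/* can now fail with -ENOMEM */
	if (err)
		return err;

	cache->shrinker.count_objects = my_cache_count;
	cache->shrinker.scan_objects = my_cache_scan;
	cache->shrinker.seeks = DEFAULT_SEEKS;
	cache->shrinker.batch = 1024;		/* same batch the superblock shrinker uses */
	cache->shrinker.flags = SHRINKER_NUMA_AWARE;
	register_shrinker(&cache->shrinker);	/* error handling elided */
	return 0;
}

static void my_cache_destroy(struct my_cache *cache)
{
	unregister_shrinker(&cache->shrinker);
	list_lru_destroy(&cache->lru);		/* frees the nr_node_ids-sized array */
}

Returning SHRINK_STOP (instead of the old -1 from ->shrink) and counting per node via sc->nid is what lets shrink_slab() apply proportional pressure to one node at a time rather than scanning every node on every call.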
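
Because list_lru_init() now allocates the node array, any structure embedding more than one list_lru also needs the failure/unwind ordering that alloc_super() gains above. A minimal sketch of that init/destroy pairing, again with invented my_* names:

struct my_ctx {
	struct list_lru objs;
	struct list_lru frags;
};

static int my_ctx_init(struct my_ctx *ctx)
{
	int err;

	err = list_lru_init(&ctx->objs);
	if (err)
		return err;

	err = list_lru_init(&ctx->frags);
	if (err) {
		list_lru_destroy(&ctx->objs);	/* unwind the first lru */
		return err;
	}
	return 0;
}

static void my_ctx_exit(struct my_ctx *ctx)
{
	/* destroy in reverse order of initialisation */
	list_lru_destroy(&ctx->frags);
	list_lru_destroy(&ctx->objs);
}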